## How to detect duplicate documents in billions of urls

You have a billion urls, where each points to a huge page. How do you detect the duplicate documents?

My initial thoughts:
Without comparing full page contents, we can only use heuristics to do the detection, for example:

1. If the two documents have exactly the same links inside the page
2. They have the same title
3. Their creation times are the same

Solution:
Observations:

1. Pages are huge, so bringing all of them in memory is a costly affair. We need a shorter representation of pages in memory. A hash is an obvious choice for this.
2. With a billion urls, we don’t want to compare every page with every other page (that would be $O(n^2)$ comparisons).

Based on the above two observations we can derive an algorithm which is as follows:

1. Iterate through the pages and compute the hash of each one.
2. Check if the hash value is in the hash table. If it is, throw out the url as a duplicate. If it is not, keep the url and insert the hash into the hash table.
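These two steps can be sketched in Python. The `(url, body)` input shape is an assumption, and the sketch uses a full SHA-256 digest rather than a four-byte value, since a four-byte hash would collide frequently at a billion pages:

```python
import hashlib

def unique_urls(pages):
    """Return the urls of distinct documents.

    `pages` yields (url, body) pairs -- a hypothetical input shape.
    """
    seen = set()      # the in-memory hash table of page hashes
    kept = []
    for url, body in pages:
        # Step 1: compute a short, fixed-size hash of the page contents.
        h = hashlib.sha256(body.encode("utf-8")).digest()
        # Step 2: a duplicate document hashes to a value already seen.
        if h in seen:
            continue  # throw out the url as a duplicate
        seen.add(h)
        kept.append(url)
    return kept
```

Only the hashes and urls are kept in memory; the huge page bodies are read once and discarded.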

This algorithm will provide us a list of unique urls. But wait, can this fit on one computer?

• How much space does each page take up in the hash table?
• Each page hashes to a four-byte value.
• Each url is an average of 30 characters, so that’s another 30 bytes at least.
• Each url takes up roughly 34 bytes.
• 34 bytes * 1 billion ≈ 31.7 gigabytes. We’re going to have trouble holding that all in memory!
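As a quick sanity check on that estimate (interpreting "gigabytes" as GiB, i.e. $2^{30}$ bytes):

```python
# 34 bytes per entry times one billion urls, expressed in GiB.
total_bytes = 34 * 10**9
gib = total_bytes / 2**30
print(round(gib, 1))  # roughly 31.7 GiB
```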

What do we do? We can split the hash table across $n$ machines. For each page, compute its hash value $v$; then:

• $v \% n$ tells us which machine this document’s hash table entry can be found on.
• $v / n$ (integer division) is the key in the hash table located on that machine.
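The split can be sketched as a small routing function, assuming $n$ machines and integer hash values $v$ (the machine count and example hash are illustrative; a real system would add an RPC layer on top):

```python
def locate(v, n):
    """Map a document hash v to (machine index, key on that machine).

    v % n picks the machine; v // n is the key in that machine's
    local hash table, so each machine only stores n-th of the keys.
    """
    return v % n, v // n

# Example: with 64 machines, the document with hash 1000000007
# lives on machine 7, under key 15625000 in that machine's table.
machine, key = locate(1000000007, 64)
```

Because every machine holds a disjoint slice of the table, each one needs only about 1/n of the ~32 GB estimate above, and lookups still touch exactly one machine.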