Design a database with terabytes of data that supports efficient range queries

You have to design a database that can store terabytes of data. It should support efficient range queries. How would you do it?

My initial thoughts:
A B+ tree. Internal nodes hold keys for navigation, while the leaves hold the record references in sorted order, linked so a range can be scanned sequentially. It supports dynamic modifications while staying balanced.

Solution:
Construct an index for each field that requires range queries. Use a B+ tree to implement the index. A B+ tree organizes sorted data for efficient insertion, retrieval and removal of records. Each record is identified by a key (for this problem, it is the field value). Since it is a dynamic, multilevel index, finding the beginning of the range depends only on the height of the tree, which is usually quite small. Record references are stored in the leaves, sorted by the key. Additional records can be found by following a next block reference. Records will be sequentially available until the key value reaches the maximum value specified in the query. Thus, runtimes will be dominated by the number of elements in a range.
Avoid using trees that store data at interior nodes, as traversing the tree will be expensive since it won’t be resident in memory.
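
To make the leaf-level scan concrete, here is a minimal sketch in Java. The class and field names are illustrative assumptions, and the root-to-leaf descent of a real B+ tree is replaced here by a simple walk along the leaf chain:

import java.util.ArrayList;
import java.util.List;

class Leaf {
	long[] keys;        // sorted field values stored in this leaf
	long[] recordRefs;  // references to the records with those keys
	Leaf next;          // sibling link to the next leaf in key order

	Leaf(long[] keys, long[] recordRefs, Leaf next) {
		this.keys = keys;
		this.recordRefs = recordRefs;
		this.next = next;
	}
}

public class RangeIndexSketch {
	private final Leaf head;  // leftmost leaf of the index

	RangeIndexSketch(Leaf head) { this.head = head; }

	// Stand-in for the O(height) root-to-leaf descent of a real B+ tree.
	private Leaf findLeaf(long low) {
		Leaf leaf = head;
		while (leaf != null && leaf.keys[leaf.keys.length - 1] < low) {
			leaf = leaf.next;
		}
		return leaf;
	}

	// Collect all record references whose key lies in [low, high].
	public List<Long> rangeQuery(long low, long high) {
		List<Long> result = new ArrayList<Long>();
		for (Leaf leaf = findLeaf(low); leaf != null; leaf = leaf.next) {
			for (int i = 0; i < leaf.keys.length; i++) {
				if (leaf.keys[i] > high) return result;  // past the end of the range
				if (leaf.keys[i] >= low) result.add(leaf.recordRefs[i]);
			}
		}
		return result;
	}

	public static void main(String[] args) {
		Leaf second = new Leaf(new long[]{40, 50, 60}, new long[]{204, 205, 206}, null);
		Leaf first = new Leaf(new long[]{10, 20, 30}, new long[]{201, 202, 203}, second);
		RangeIndexSketch index = new RangeIndexSketch(first);
		System.out.println(index.rangeQuery(20, 50));  // prints [202, 203, 204, 205]
	}
}

Note how, once findLeaf returns, the work is proportional only to the number of keys that fall inside [low, high].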

How to detect duplicate documents in billions of urls

You have a billion urls, where each url points to a huge page. How do you detect the duplicate documents?

My initial thoughts:
We can only use some heuristics to do the detection:

  1. If the two documents have exactly the same links inside the page
  2. They have the same title
  3. Their creation times are the same

Solution:
Observations:

  1. Pages are huge, so bringing all of them in memory is a costly affair. We need a shorter representation of pages in memory. A hash is an obvious choice for this.
  2. A billion urls exist, so we don’t want to compare every page with every other page (that would be O(n^2)).

Based on the above two observations we can derive an algorithm which is as follows:

  1. Iterate through the pages and compute a hash of each one.
  2. Check if the hash value is in the hash table. If it is, throw out the url as a duplicate. If it is not, then keep the url and insert it into the hash table.
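
A minimal single-machine sketch of this loop, assuming each page has already been fetched and reduced to a long hash (hashPage() below is a placeholder, not a real hashing API):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SimpleDedup {
	// Keeps a url only if its page hash has not been seen before.
	public static List<String> uniqueUrls(List<String> urls) {
		Set<Long> seen = new HashSet<Long>();
		List<String> unique = new ArrayList<String>();
		for (String url : urls) {
			long hash = hashPage(url);
			if (seen.add(hash)) {   // add() returns false for duplicates
				unique.add(url);
			}
		}
		return unique;
	}

	// Placeholder: a real implementation would fetch the page and hash its contents.
	private static long hashPage(String url) {
		return url.hashCode();
	}
}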

This algorithm will provide us a list of unique urls. But wait, can this fit on one computer?

  • How much space does each page take up in the hash table?
    • Each page hashes to a four byte value.
    • Each url is an average of 30 characters, so that’s another 30 bytes at least.
    • Each url takes up roughly 34 bytes.
  • 34 bytes * 1 billion = 31.6 gigabytes. We’re going to have trouble holding that all in memory!

What do we do?

  • We could split this up into files. We’ll have to deal with the file loading / unloading—ugh.
  • We could hash to disk. Size wouldn’t be a problem, but access time might. A hash table on disk would require a random access read for each check and a write to store each newly seen url. This could take milliseconds waiting for seek and rotational latencies. Elevator algorithms could reduce the random bouncing from track to track.
  • Or, we could split this up across machines, and deal with network latency. Let’s go with this solution, and assume we have n machines.
    • First, we hash the document to get a hash value v
    • v % n tells us which machine this document’s hash table can be found on.
    • v / n is the key we store in the hash table on that machine.
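
A minimal sketch of that split, simulating the n machines in memory; in a real system, each per-machine table would live on its own machine and the lookup would be a network call:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ShardedDedup {
	private final List<Set<Long>> tables = new ArrayList<Set<Long>>();  // one hash table per machine (simulated)
	private final int n;

	ShardedDedup(int n) {
		this.n = n;
		for (int i = 0; i < n; i++) {
			tables.add(new HashSet<Long>());
		}
	}

	// Returns true if the document hash v was new, false if it is a duplicate.
	public boolean addIfNew(long v) {
		int machine = (int) Math.floorMod(v, (long) n);  // v % n picks the machine
		long localKey = Math.floorDiv(v, (long) n);      // v / n is the key stored on that machine
		return tables.get(machine).add(localKey);        // in real life, an RPC to that machine
	}
}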

How to avoid getting into infinite loops when designing a web crawler

If you were designing a web crawler, how would you avoid getting into infinite loops?

My initial thoughts:

  1. Keep a list of visited webpages. If a webpage has already been visited, do not follow that link.
  2. Treat a webpage with fewer than a threshold number of links as a base page, and stop searching at base pages.

Solution:
First, how does the crawler get into a loop? The answer is very simple: when we re-parse an already parsed page. This would mean that we revisit all the links found in that page, and this would continue in a circular fashion.
Be careful about what the interviewer considers the “same” page. Is it URL or content? One could easily get redirected to a previously crawled page.
So how do we stop visiting an already visited page? The web is a graph-based structure, and we commonly use DFS (depth first search) and BFS (breadth first search) for traversing graphs. We can mark already visited pages the same way that we would in a BFS/DFS.
We can easily prove that this algorithm terminates. Each step of the algorithm parses only new pages, never already visited ones. So, if we assume that we have N unvisited pages, then every step reduces the number of unvisited pages by one (from N to N-1). This proves that our algorithm will finish after at most N steps.
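
A minimal sketch of that idea in Java, with fetchLinks() as a stand-in for downloading a page and extracting its outgoing links (an assumption, not a real API):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Queue;
import java.util.Set;

public class CrawlerSketch {
	public void crawl(String seedUrl) {
		Set<String> visited = new HashSet<String>();
		Queue<String> frontier = new ArrayDeque<String>();
		frontier.add(seedUrl);
		visited.add(seedUrl);

		while (!frontier.isEmpty()) {
			String url = frontier.poll();
			for (String link : fetchLinks(url)) {
				// Only enqueue pages we have never seen; this is what breaks the loop.
				if (visited.add(link)) {
					frontier.add(link);
				}
			}
		}
	}

	// Stand-in: download the page at 'url' and return the links it contains.
	private List<String> fetchLinks(String url) {
		return new ArrayList<String>();
	}
}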

Design the data structure for large social network

How would you design the data structures for a very large social network (Facebook, LinkedIn, etc)? Describe how you would design an algorithm to show the connection, or path, between two people (e.g., Me -> Bob -> Susan -> Jason -> You).

My initial thoughts:
This is clearly a graph structure, with nodes as user profiles and edges as connections. Showing the connection between two people is just finding a path between two nodes in the graph. We can use BFS to do the search; in an unweighted graph, BFS already finds the shortest path, so Dijkstra’s algorithm would only be needed if the connections were weighted.

Solution:
Approach:
Forget that we’re dealing with millions of users at first. Design this for the simple case.
We can construct a graph by assuming every person is a node and if there is an edge between two nodes, then the two people are friends with each other.

class Person {
	Person[] friends;
	// Other info
}

If I want to find the connection between two people, I would start with one person and do a simple breadth first search.
But… oh no! Millions of users!
When we deal with a service the size of Orkut or Facebook, we cannot possibly keep all of our data on one machine. That means that our simple Person data structure from above doesn’t quite work—our friends may not live on the same machine as us. Instead, we can replace our list of friends with a list of their IDs, and traverse as follows:

  1. For each friend ID:
    int machine_index = lookupMachineForUserID(id);
    
  2. Go to machine machine_index
  3. Person friend = lookupFriend(machine_index, id);
    

There are more optimizations and follow up questions here than we could possibly discuss, but here are just a few thoughts.

  • Optimization: Reduce Machine Jumps
    Jumping from one machine to another is expensive. Instead of randomly jumping from machine to machine with each friend, try to batch these jumps—e.g., if 5 of my friends live on one machine, I should look them up all at once (a sketch of this batching appears after this list).
  • Optimization: Smart Division of People and Machines
    People are much more likely to be friends with people who live in the same country as them. Rather than randomly dividing people up across machines, try to divvy them up by country, city, state, etc. This will reduce the number of jumps.
  • Question: Breadth First Search usually requires “marking” a node as visited. How do you do that in this case?
    Usually, in BFS, we mark a node as visited by setting a visited flag in its node class. Here, we don’t want to do that (there could be multiple searches going on at the same time, so it’s bad to just edit our data). Instead, we could mimic the marking of nodes with a hash table that maps a node id to whether or not it has been visited.
  • Other Follow-Up Questions:
    • In the real world, servers fail. How does this affect you?
    • How could you take advantage of caching?
    • Do you search until the end of the graph (infinite)? How do you decide when to give up?
    • In real life, some people have more friends of friends than others, and are therefore more likely to make a path between you and someone else. How could you use this data to pick where you start traversing?
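
Before the data-structure code below, here is a small sketch of the batching optimization mentioned above. It assumes the Person class from earlier; lookupMachineForUserID() and batchLookup() are illustrative stand-ins, not a fixed API:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BatchedLookupSketch {
	public List<Person> lookupFriends(List<Integer> friendIds) {
		// Bucket friend IDs by the machine that stores them.
		Map<Integer, List<Integer>> idsByMachine = new HashMap<Integer, List<Integer>>();
		for (int id : friendIds) {
			int machineIndex = lookupMachineForUserID(id);
			if (!idsByMachine.containsKey(machineIndex)) {
				idsByMachine.put(machineIndex, new ArrayList<Integer>());
			}
			idsByMachine.get(machineIndex).add(id);
		}

		// One round trip per machine instead of one per friend.
		List<Person> friends = new ArrayList<Person>();
		for (Map.Entry<Integer, List<Integer>> entry : idsByMachine.entrySet()) {
			friends.addAll(batchLookup(entry.getKey(), entry.getValue()));
		}
		return friends;
	}

	// Stand-ins for the real distributed lookups.
	private int lookupMachineForUserID(int id) { return id % 16; }  // placeholder mapping
	private List<Person> batchLookup(int machineIndex, List<Integer> ids) { return new ArrayList<Person>(); }
}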

The following code demonstrates these data structures and lookups:

	public class Server {
		ArrayList<Machine> machines = new ArrayList<Machine>();
	}

	public class Machine {
		public ArrayList<Person> persons = new ArrayList<Person>();
		public int machineID;
	}

	public class Person {
		private ArrayList<Integer> friends;
		private final int ID;
		private final int machineID;
		private String info;
		private final Server server;  // shared Server, passed in so lookups see the real machines

		public String getInfo() { return info; }
		public void setInfo(String info) {
			this.info = info;
		}

		public int[] getFriends() {
			int[] temp = new int[friends.size()];
			for (int i = 0; i < temp.length; i++) {
				temp[i] = friends.get(i);
			}
			return temp;
		}
		public int getID() { return ID; }
		public int getMachineID() { return machineID; }
		public void addFriend(int id) { friends.add(id); }

		// Look up a person given their ID and Machine ID
		public Person lookUpFriend(int machineID, int ID) {
			for (Machine m : server.machines) {
				if (m.machineID == machineID) {
					for (Person p : m.persons) {
						if (p.ID == ID){
							return p;
						}
					}
				}
			}
			return null;
		}

		// Look up a machine given the machine ID
		public Machine lookUpMachine(int machineID) {
			for (Machine m:server.machines) {
				if (m.machineID == machineID)
					return m;
			}
			return null;
		}

		public Person(int iD, int machineID, Server server) {
			ID = iD;
			this.machineID = machineID;
			this.server = server;
		}
	}
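
The classes above cover the storage layout; the search itself has only been described in prose. Here is a minimal sketch of the BFS over user IDs, keeping the visited marks in a separate hash set as discussed above. getFriends() is a stand-in for the machine lookup followed by the person lookup, not a fixed API:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class PathFinderSketch {
	// Returns the chain of user IDs from 'source' to 'target', or null if there is no connection.
	public List<Integer> findPath(int source, int target) {
		Map<Integer, Integer> parent = new HashMap<Integer, Integer>();  // child ID -> parent ID
		Set<Integer> visited = new HashSet<Integer>();
		Queue<Integer> queue = new ArrayDeque<Integer>();
		queue.add(source);
		visited.add(source);

		while (!queue.isEmpty()) {
			int current = queue.poll();
			if (current == target) {
				return buildPath(parent, source, target);
			}
			for (int friend : getFriends(current)) {
				if (visited.add(friend)) {
					parent.put(friend, current);
					queue.add(friend);
				}
			}
		}
		return null;  // no connection found
	}

	// Walk the parent links back from target to source.
	private List<Integer> buildPath(Map<Integer, Integer> parent, int source, int target) {
		List<Integer> path = new ArrayList<Integer>();
		for (Integer node = target; node != null; node = parent.get(node)) {
			path.add(0, node);
			if (node == source) break;
		}
		return path;
	}

	// Stand-in: find the user's machine, fetch the user, and return their friend IDs.
	private List<Integer> getFriends(int id) {
		return new ArrayList<Integer>();
	}
}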

Design a feed system of stock information

If you were integrating a feed of end of day stock price information (open, high, low, and closing price) for 5,000 companies, how would you do it? You are responsible for the development, rollout and ongoing monitoring and maintenance of the feed. Describe the different methods you considered and why you would recommend your approach. The feed is delivered once per trading day in a comma-separated format via an FTP site. The feed will be used by 1000 daily users in a web application.

My initial thoughts:
A .csv file containing 5000 lines. Each line stores the information for a company. It can be easily stored and parsed.

Solution:
Let’s assume we have some scripts which are scheduled to get the data via FTP at the end of the day. Where do we store the data? How do we store the data in such a way that we can do various analyses of it?

  • Proposal #1
    Keep the data in text files. This would be very difficult to manage and update, as well as very hard to query. Keeping unorganized text files would lead to a very inefficient data model.
  • Proposal #2
    We could use a database. This provides the following benefits:

    • Logical storage of data.
    • Facilitates an easy way of doing query processing over the data.

    Example: return all stocks having open > N AND closing price < M.
    Advantages:

    • Makes the maintenance easy once installed properly.
    • Roll back, backing up data, and security could be provided using standard database features. We don’t have to “reinvent the wheel.”
  • Proposal #3
    If requirements are not that broad and we just want to do a simple analysis and distribute the data, then XML could be another good option.
    Our data has fixed format and fixed size: company_name, open, high, low, closing price. The XML could look like this:

    <root>
    <date value="2008-10-12">
    	<company name="foo">
    		<open>126.23</open>
    		<high>130.27</high>
    		<low>122.83</low>
    		<closingPrice>127.30</closingPrice>
    	</company>
    	<company name="bar">
    		<open>52.73</open>
    		<high>60.27</high>
    		<low>50.29</low>
    		<closingPrice>54.91</closingPrice>
    	</company>
    </date>
    <date value="2008-10-11"> . . . </date>
    </root>
    

    Benefits:

    • Very easy to distribute. This is one reason that XML is a standard data model for sharing and distributing data.
    • Efficient parsers are available to parse the data and extract only the desired fields.
    • We can add new data to the XML file by carefully appending data. We would not have to re-query the database.

    However, querying the data could be difficult.
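
For instance, pulling just the closing prices out of the feed above with the standard Java DOM parser might look like this (the file name feed.xml is an assumption):

import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class FeedParserSketch {
	public static void main(String[] args) throws Exception {
		DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
		Document doc = builder.parse(new File("feed.xml"));

		// Walk every <company> element and print its name and closing price.
		NodeList companies = doc.getElementsByTagName("company");
		for (int i = 0; i < companies.getLength(); i++) {
			Element company = (Element) companies.item(i);
			String name = company.getAttribute("name");
			String close = company.getElementsByTagName("closingPrice").item(0).getTextContent();
			System.out.println(name + " closed at " + close);
		}
	}
}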