I was reading a bunch of papers and articles on Distributed Systems for my work recently when I stumbled into the concept of consistent hashing. I had some ideas about distributed hash tables (DHT) before, but I had never delved deeper into the subject. Out of curiosity I studied up some more on consistent hashing and was amazed to find such a simple algorithm with such extraordinary utility in distributed systems. The implementation is also pretty straightforward when you use an efficient container like the binary search tree (BST).

**The idea**

Consistent hashing is a hashing technique that maps keys to nodes under the assumption that nodes may join and leave the system at any time. The defining feature is that when a new node joins or an existing node leaves the system, only a small set of key-to-node assignments needs to change. If we have M keys and N nodes in our system, the expected number of keys re-mapped is M/N when a node is added or removed. This is a significant improvement over regular hashing (e.g. `hash(key) % N`), which would re-map almost all the keys whenever a node leaves or joins.

In traditional hashing we have a fixed number of bins or buckets in which we place our keys. Consistent hashing removes this limitation of a fixed number of bins. Instead, we hash the keys to a virtually unlimited integer space and place our bins at random positions in the same space. The bin closest to a hashed key in the clockwise direction is the target bin for that key!

**Algorithm summary**

A brief description of how consistent hashing works:

- All keys and nodes are mapped to the same integer space (typically a large range such as 0 to 2^64 - 1).
- If we have N nodes, they are assigned IDs which are essentially the hash numbers of their names.
- If we have M keys, they are hashed into the same integer space where the nodes are already placed.
- If a key's hash matches a node's ID exactly, the key trivially maps to that node. Otherwise, we find the smallest node ID greater than the key's hash value. If no such node ID exists before the end of the range, we wrap around and return the node with the smallest ID. The nodes thus effectively form a ring.
- When a new node is added to the system, it is placed in the hash ring according to its ID (the hash value of its name). All keys between its ID and the preceding node's ID are re-mapped to this node. So the only node affected is the one immediately after the new node in the ring: some of the keys that previously pointed to that node now point to the newly added one. The expected number of keys moved is M/N.
- Similarly, when a node leaves the system, all keys between the leaving node and the node immediately preceding it are re-mapped to the node immediately following it in the ring. Again, the expected number of keys moved is M/N.
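The steps above can be sketched in Python using a sorted list and the standard `bisect` module (a minimal illustration of the ring logic, not the BST-based implementation from the repo; the `HashRing` name and the choice of MD5 as the hash are mine):

```python
import bisect
import hashlib

def _hash(value: str) -> int:
    """Map a string to the integer space [0, 2**64) via MD5 (an arbitrary choice)."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16) % 2**64

class HashRing:
    def __init__(self, nodes=()):
        self._ring = []    # sorted list of node IDs (hash values)
        self._nodes = {}   # node ID -> node name
        for node in nodes:
            self.add_node(node)

    def add_node(self, name: str) -> None:
        node_id = _hash(name)
        bisect.insort(self._ring, node_id)
        self._nodes[node_id] = name

    def remove_node(self, name: str) -> None:
        node_id = _hash(name)
        self._ring.remove(node_id)
        del self._nodes[node_id]

    def get_node(self, key: str) -> str:
        # Smallest node ID >= the key's hash; wrap to the first node if none.
        i = bisect.bisect_left(self._ring, _hash(key))
        if i == len(self._ring):
            i = 0
        return self._nodes[self._ring[i]]
```

Removing a node then only re-maps the keys that were assigned to it; everything else stays put, which is the whole point of the technique.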

Image source: https://www.toptal.com/big-data/consistent-hashing

**Implementation**

I am also currently working with Python for some of our deep learning projects at work (I usually code in Java), so it made sense to choose Python as the language for writing up consistent hashing!

The key ingredient in implementing consistent hashing is an efficient data structure for quickly looking up the smallest node ID that is equal to or greater than a key's hash value. One such data structure is the binary search tree (BST). The BST stores the IDs of all existing nodes. When a key needs to be mapped to a node, we simply hash the key and look up the nearest node ID at or above that hash.

The binary search tree needed a small modification from the standard implementation to make it behave like a sorted ring of values: when a lookup reaches the end of the BST without finding a target node, it wraps around and returns the first (smallest) node.
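That wrap-around lookup might look something like the following sketch (the `RingBST` and `ceiling` names are illustrative, not the actual API in the repo):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

class RingBST:
    """BST over node IDs whose ceiling lookup wraps around like a ring."""

    def __init__(self):
        self.root = None

    def insert(self, value):
        def _insert(node, value):
            if node is None:
                return Node(value)
            if value < node.value:
                node.left = _insert(node.left, value)
            elif value > node.value:
                node.right = _insert(node.right, value)
            return node
        self.root = _insert(self.root, value)

    def ceiling(self, value):
        """Smallest stored value >= `value`; wraps to the minimum if none exists."""
        best = None
        node = self.root
        while node:
            if node.value == value:
                return node.value
            if node.value > value:
                best = node.value   # candidate ceiling; try smaller on the left
                node = node.left
            else:
                node = node.right
        if best is not None:
            return best
        # Fell off the large end of the tree: wrap around to the smallest value.
        node = self.root
        while node and node.left:
            node = node.left
        return node.value if node else None
```

The lookup is O(log n) for a reasonably balanced tree, the same as a plain BST search; only the final wrap-around branch is new.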

The Python code implementing consistent hashing with the binary search tree (BST) from scratch can be found on my GitHub account here. The GitHub repo also contains test code to test the implementation.

**Improved load balancing**

The basic implementation can be improved further to make the key-to-node mapping more balanced by creating virtual nodes for each node and placing them randomly throughout the ring. In this setup, each node has K replicas placed around the ring. With few nodes, the "gaps" between them are wide and uneven, which may lead to some nodes receiving far more keys than others. Replicas effectively shrink the gap sizes, so keys hit the nodes in a more uniform manner.
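A minimal sketch of the virtual-node idea, again using a sorted list for brevity (the replica naming scheme `"name#i"` and the `VirtualHashRing` class are my illustration, not necessarily what the repo does):

```python
import bisect
import hashlib

def _hash(value: str) -> int:
    """Map a string to [0, 2**64) via MD5 (an arbitrary choice of hash)."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16) % 2**64

class VirtualHashRing:
    """Ring where each physical node appears as K virtual points."""

    def __init__(self, nodes=(), replicas=100):
        self.replicas = replicas
        self._ring = []    # sorted hashes of all virtual points
        self._owner = {}   # virtual point hash -> physical node name
        for node in nodes:
            self.add_node(node)

    def add_node(self, name: str) -> None:
        for i in range(self.replicas):
            h = _hash(f"{name}#{i}")   # replica i of this physical node
            bisect.insort(self._ring, h)
            self._owner[h] = name

    def get_node(self, key: str) -> str:
        i = bisect.bisect_left(self._ring, _hash(key))
        if i == len(self._ring):
            i = 0
        return self._owner[self._ring[i]]
```

With K replicas per node, each node's share of the ring is the sum of K independent arc lengths, so its relative deviation from the ideal 1/N share shrinks roughly like 1/sqrt(K).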