
How does a hash function work anyway?

A while ago I wrote about how hash maps work, but something’s been bugging me. How does the hash function do its thing? I know hash functions turn variable length data into fixed length data, but how do they do that? To be clear, I’m interested in the kind of hash you would use for a hash map; you would definitely want a more secure hash to keep your passwords safe.

Thanks to the magic of the internets, it’s really easy to find the function Java uses to calculate a String’s hashcode.

/* Returns a hash code for this string. The hash code for a String 
object is computed as s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]
using int arithmetic, where s[i] is the ith character of the string, 
n is the length of the string, and ^ indicates exponentiation. 
(The hash value of the empty string is zero.)
Returns: a hash code value for this object. */

public int hashCode() {
  int h = hash;
  if (h == 0) {
    int off = offset;
    char val[] = value;
    int len = count;

    for (int i = 0; i < len; i++) {
      h = 31*h + val[off++];
    }
    hash = h;
  }
  return h;
}

Okay great, that’s totally clear, right? ;)

Yeah, I have no idea what it’s actually doing either. But I can find out!

First of all, where are the values of hash, offset, and count coming from? They must be instance variables because they weren’t passed in as parameters. I poked around in the String code a little more and it turns out hash is defaulted to 0 when it’s declared, offset is set to 0 in the constructor, and count is set to the size of the string when it’s created.


The first thing hashCode actually does is check if hash is 0. If it’s not, then we know we already computed the hash and we can just return it and go on with our day. Makes sense, why do the same calculation over and over again when we can just do it once and store the result? I think that’s the same reason count is stored separately instead of just checking value.length when you need it. We know the length will never change because Strings are immutable, so why not save ourselves a lookup?

The next weird thing is how the method is adding a number to a char. Chars are characters, not numbers, aren’t they? Well, yes and no. According to the docs, a char is “a single 16-bit Unicode character. It has a minimum value of '\u0000' (or 0) and a maximum value of '\uffff' (or 65,535 inclusive).” That 0 to 65,535 part seems suspiciously like a number :) You can also test that out yourself in the Java REPL. It turns out Java will happily treat a char like an int if you ask it to.
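
Here’s a quick sketch you can paste into a main method to see that promotion in action – nothing from the String internals here, just plain char arithmetic:

public class CharIsKindOfAnInt {
  public static void main(String[] args) {
    char c = 'a';
    int asInt = c;                        // implicit widening: 'a' is 97
    System.out.println(asInt);            // 97
    System.out.println(31 * 0 + c);       // chars get promoted to int in arithmetic: 97
    System.out.println((char) (c + 1));   // and back again: 'b'
  }
}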

The rest of it is pretty simple: we just loop through every character in the string, each time replacing the current hashcode with (31 * current hashcode) + current character.
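
If you strip away the caching and the instance variables, the same idea fits in a few lines. This is just a sketch of the formula from the javadoc above, not the actual JDK code:

public class SimpleStringHash {
  // Same formula as String.hashCode: h = 31*h + next character
  static int simpleHash(String s) {
    int h = 0;
    for (int i = 0; i < s.length(); i++) {
      h = 31 * h + s.charAt(i);
    }
    return h;
  }

  public static void main(String[] args) {
    System.out.println(simpleHash("hello"));  // 99162322
    System.out.println("hello".hashCode());   // also 99162322
  }
}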

Okay, but how does that map a string of any length to a hash code of fixed length? Shouldn’t a longer String always have a larger hash code? Not if your hashcode is an int! Those just overflow and roll over into negative numbers if the math produces too large a number. And because 2 and -821785444 are both ints, they take up the same amount of memory (32 bits), which means that no matter what size String you start with, the hashcode is always the same size.
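
You can watch the rollover happen with a couple of lines of throwaway code (again just a sketch, nothing to do with the String internals):

public class Overflow {
  public static void main(String[] args) {
    System.out.println(Integer.MAX_VALUE);     // 2147483647
    System.out.println(Integer.MAX_VALUE + 1); // -2147483648, rolled right over
    // No matter how long the String is, its hashcode is still a plain 32-bit int
    System.out.println("this is a fairly long string but its hashcode is still just an int".hashCode());
  }
}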

Another interesting little detail of how hashmaps actually use those hashcodes is that they rehash your hashes. If everyone used random Strings for keys then they wouldn’t need to, but because keys are usually Strings with some kind of meaning, the hashes for those keys won’t be evenly distributed. That is, a hashcode doesn’t have an equal chance of being any number from -2^31 to 2^31-1; you’re going to get clumps of hashes around some numbers because you’re more likely to use some Strings than others.

Great, but why does that matter? Performance! The more collisions you have (different Strings that happen to work out to the same hashcode), the more elements you need to look at to find the one you wanted and the worse your performance is. To get around that, Java does some bitwise operations on the hashcode to spread the bits around and reduce the number of collisions.
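
For example, the HashMap that ships with Java 8 mixes the high bits of the hashcode into the low bits before using it, roughly like this (treat this as a sketch – the exact details have changed between Java versions):

static int spread(Object key) {
  int h;
  // XOR the top 16 bits into the bottom 16 bits so keys that only differ
  // in their high bits don't all land in the same bucket
  return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}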

Now we all have some idea what actually happens when you use a HashMap :)

How does quicksort work, anyway?


Why yes, I am going to keep mining that article about stuff you should know for programming interviews for blog post ideas :) While I don’t think that a lot of the common interview concepts from that article are actually worthwhile to ask about in an interview, I do think they’re interesting bits of nerd trivia and going in depth into how stuff works shows that nothing the computer does is magic.

Sort algorithms in particular are a weird interview question because you should basically never implement one at work. There are always edge cases, but in general if you actually write a sort function you have done something bad and you should feel bad. The correct way to implement a sort function is to import a library and go on with your day.
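
For the record, the boring correct version looks something like this – the standard library does the actual work:

import java.util.Arrays;

public class JustUseTheLibrary {
    public static void main(String[] args) {
        int[] numbers = {5, 3, 8, 1, 9, 2};
        Arrays.sort(numbers); // the library's carefully tuned sort, no hand-rolling required
        System.out.println(Arrays.toString(numbers)); // [1, 2, 3, 5, 8, 9]
    }
}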

That said, sort algorithms are interesting in their own right. They’re one of those things that seem incredibly simple and boring until you start thinking about how you would tell a computer how to sort things. There are also way more sort algorithms than you might think, all with their own pros and cons.

Quick sort uses a divide and conquer strategy – instead of sorting the entire array you give it, it picks a pivot point (different implementations do this in different ways, one of the simplest methods is just to take the middle element of your array) and rearranges the elements of your array so that everything less than the pivot is on the left and everything greater is on the right. Then you break the array into halves around the pivot and recursively sort each one until everything in the array is in order. There’s a really helpful gif at the top of the wikipedia article about quicksort that explains it better.
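
Here’s a bare-bones sketch of that idea in Java, using the middle element as the pivot. A real library sort is far more carefully tuned than this, so treat it as an illustration rather than something to ship:

public class QuickSortSketch {
    static void quicksort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[lo + (hi - lo) / 2]; // middle element as the pivot
        int i = lo, j = hi;
        while (i <= j) {
            while (a[i] < pivot) i++;      // find something on the left that belongs on the right
            while (a[j] > pivot) j--;      // and something on the right that belongs on the left
            if (i <= j) {
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp; // swap them
                i++; j--;
            }
        }
        quicksort(a, lo, j); // recursively sort the left part
        quicksort(a, i, hi); // and the right part
    }

    public static void main(String[] args) {
        int[] numbers = {9, 4, 7, 1, 8, 2};
        quicksort(numbers, 0, numbers.length - 1);
        System.out.println(java.util.Arrays.toString(numbers)); // [1, 2, 4, 7, 8, 9]
    }
}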

Because quick sort rearranges the array elements by swapping them in place, it requires very little extra memory, which was a big deal when it was invented by Tony Hoare in 1959. To this day it’s one of the fastest sorting algorithms, provided you do a good job of picking your pivot point. If you do a bad job of that, things go off the rails, particularly if your array is mostly sorted already. A naive pivot choice (like always taking the first element) on an already sorted or mostly sorted array means each partition step only peels off one element, and quick sort ends up being surprisingly slow.

Another efficient (in this case it’s a technical term for sort algorithms that are efficient enough to actually use) sort algorithm is merge sort. Merge sort is even older than quick sort; it was invented in 1945 by John von Neumann. Like quick sort, it uses a divide and conquer strategy. The difference is that merge sort divides the array into the smallest pieces it can, then merges those pieces into two element arrays, then merges those into four element arrays and so on until it produces a completely sorted array. As usual, wikipedia has a gif that explains it visually.
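
A simple top-down version of that in Java might look like the sketch below. Note that it allocates new arrays at every level, which is exactly the extra memory cost mentioned in the next paragraph; real implementations are smarter about reusing buffers:

import java.util.Arrays;

public class MergeSortSketch {
    static int[] mergeSort(int[] a) {
        if (a.length <= 1) return a; // an array of 0 or 1 elements is already sorted
        int mid = a.length / 2;
        int[] left = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        // merge the two sorted halves back together
        int[] merged = new int[a.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            merged[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) merged[k++] = left[i++];
        while (j < right.length) merged[k++] = right[j++];
        return merged;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSort(new int[]{9, 4, 7, 1, 8, 2}))); // [1, 2, 4, 7, 8, 9]
    }
}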

Merge sort requires much more memory than quick sort does because of the way it creates new arrays while it’s sorting. This can be an issue if you’re sorting especially large arrays, although I’m sure more advanced algorithms based on merge sort can do some sort of trickery to mitigate that :) On the upside, it’s a stable sort – if you have two objects in the array with the same sort order, they’ll stay in that order – unlike quick sort. It’s also good at handling slow sequential media like tape drives and handling linked lists, which quick sort is slow at and heap sort can’t handle at all.

Heap sort, the last sort algorithm I want to talk about today, is an interesting one. Unlike quick sort and merge sort, heap sort puts all the elements of the array into a heap first, then uses that to sort the array.

Quick digression from sorting: a heap is a partially ordered tree structure. In a heap, the child nodes are always greater than or equal to the parent node (in a min heap; in a max heap they’re always less than or equal), but siblings aren’t in any particular order relative to each other. The root node is always the smallest or largest element in the heap, and if you remove it the heap rebalances itself so the next smallest or largest element becomes the new root.

Back to heap sort: once you have a heap the rest is very simple. You just take the root, add it to your array, let the heap rebalance itself, take the new root, and so on until your heap is empty.
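
Here’s that “take the root over and over” idea sketched with Java’s built-in PriorityQueue standing in for the heap. The classic heap sort actually builds the heap inside the array itself so it doesn’t need extra memory, so this is just to show the shape of it:

import java.util.Arrays;
import java.util.PriorityQueue;

public class HeapSortSketch {
    static int[] heapSort(int[] a) {
        PriorityQueue<Integer> heap = new PriorityQueue<>(); // a min heap: the root is the smallest element
        for (int x : a) heap.add(x);
        int[] sorted = new int[a.length];
        for (int i = 0; i < sorted.length; i++) {
            sorted[i] = heap.poll(); // take the root, let the heap rebalance, repeat
        }
        return sorted;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(heapSort(new int[]{9, 4, 7, 1, 8, 2}))); // [1, 2, 4, 7, 8, 9]
    }
}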

In comparison with other sorts, heap sort is a little slower than quick sort on average but has better worst case performance. Merge sort has similar time bounds (average, best case, and worst case time it takes to sort an array), but takes up more memory because a heap sort can be done in place. On the other hand, merge sort is stable, parallelizes well, and works on datasets too large to fit into memory at once, which neither quick sort nor heap sort can do.

One last piece of trivia: the Timsort algorithm, implemented in 2002 by Tim Peters, is based on merge sort and insertion sort (a very simple sort algorithm) and is the standard sort in Python and in Java (where it’s used for sorting objects).

There’s a huge amount of detail I skipped over, I recommend poking around wikipedia if you’re interested in more detail about the many, many, many ways you can sort a list. Just don’t ask about them in interviews, because all you’ll learn by doing that is whether your interviewee looked them up beforehand :)

How does a breadth-first search work, anyway?

In a recent post I mentioned having read an article about passing programming interviews that said it was important to be able to write a breadth-first search and to understand how hash maps work. I covered hash maps last time, so this time let’s talk about breadth-first searches.

The first question is what on earth is a breadth-first search? It’s a way of searching a tree structure. In a breadth-first search, you look at all the nodes at a particular ‘level’ of the tree before looking at anything in the next level. Another way you can do it is depth-first, where you follow each node’s children down and down until you run out of children, then go back up to the next child node you haven’t already visited and follow its children down until you run out again, and so on until you’ve visited all the nodes in the tree.

This is definitely a case where a picture is worth 1000 words. Here’s the order you visit nodes in a breadth-first search:

Order nodes are visited in a breadth-first search. By Alexander Drichel - Own work, CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=3786735

and here’s the order in a depth first search:

Order nodes are visited in a depth-first search. By Alexander Drichel - Own work, CC BY-SA 3.0

Great, what’s a breadth-first search for? According to wikipedia it’s good for a bunch of problems in graph theory that I totally don’t understand, and some more understandable stuff like finding the shortest path between two nodes in a tree and serializing a binary tree in such a way that you can easily deserialize it.

So how do you do a breadth-first search anyway?

bijulsoni has graciously provided an example in their article Introduction to Graph with Breadth First Search(BFS) and Depth First Search(DFS) Traversal Implemented in JAVA on Code Project. If you’re interested, that code is provided under The Code Project Open License (CPOL) 1.02, which basically states that you can do whatever you like with the code but don’t come crying to them if it doesn’t work.

Here’s a breadth-first search:

 
public void breadthFirstSearch() {
    // BFS uses a Queue data structure
    Queue<Node> q = new LinkedList<Node>();
    q.add(this.rootNode);
    printNode(this.rootNode);
    rootNode.visited = true;
    while (!q.isEmpty()) {
        Node n = q.remove();
        Node child = null;
        while ((child = getUnvisitedChildNode(n)) != null) {
            child.visited = true;
            printNode(child);
            q.add(child);
        }
    }
    // Clear visited property of nodes
    clearNodes();
}

and to compare, here’s how a depth-first search works:

public void depthFirstSearch() {
    // DFS uses a Stack data structure
    Stack<Node> s = new Stack<Node>();
    s.push(this.rootNode);
    rootNode.visited = true;
    printNode(rootNode);
    while (!s.isEmpty()) {
        Node n = s.peek();
        Node child = getUnvisitedChildNode(n);
        if (child != null) {
            child.visited = true;
            printNode(child);
            s.push(child);
        } else {
            s.pop();
        }
    }
    // Clear visited property of nodes
    clearNodes();
}

The complete, runnable code can be downloaded from the article linked above if you’d like to run it yourself. getUnvisitedChildNode() does what you would expect, so I left it out to save space. What I find really interesting about the breadth-first and depth-first algorithms is that they’re practically identical except for the different data structures used to hold the nodes we’re working on. The simple change from a queue (where you add items to the end and remove items from the head) to a stack (where you both add and remove items from the end) is all it takes to change a breadth-first search into a depth-first search.

Now we all know how a breadth-first search (and a depth-first search as a bonus) works. You can safely forget the details, secure in the knowledge you can look it up if you need to :)