
Why is javascript called javascript?

Unrelated image from pexels.com to make this post look nicer in social media shares.

Or more specifically, why is javascript called javascript when it has basically nothing in common with java?

First, let’s have a little context. Javascript originally ran only in the browser and made webpages interactive. Java was primarily a server-side language but could run in the browser in the form of applets (they’re very rarely used now, don’t worry if you’ve never heard of them). Javascript is also dynamically typed (you can put anything you want in a variable and the interpreter figures it out (or not) at runtime), uses prototypal inheritance (you create new objects using an existing object as a prototype), and has had closures (first-class functions that ‘remember’ the state of the variables they had access to when they were declared) from the beginning. Java, on the other hand, is statically typed (thou shalt not put a double in a variable that you declared as an int), uses class-based inheritance (you create objects using a blueprint called a class and only classes can extend other classes), and only just got closures (called lambdas in java) in 2014 when java 8 came out.
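To make the closures point a little more concrete, here's a tiny Java 8 lambda capturing a local variable (LambdaExample and the variable names are just made up for illustration):

import java.util.function.Supplier;

public class LambdaExample {
    public static void main(String[] args) {
        String greeting = "hello";  // a local variable the lambda can "remember"
        Supplier<String> closure = () -> greeting + ", world";  // captures greeting
        System.out.println(closure.get());  // prints "hello, world"
    }
}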

About the only thing java and javascript have in common is that they both use c-style syntax (curly brackets and periods, basically). So given all that background information about how little java and javascript have in common, why on earth is javascript named javascript?

Back in the 90s when javascript came out, java was the shiny new thing everyone was excited about, and the theory I’ve always heard about the name was that it was a marketing ploy intended to make people think javascript actually did have something to do with java and was therefore cool. According to this interview with Brendan Eich, the inventor of javascript, javascript actually was intended to sorta, kinda, have a little bit of a relationship with java: “the idea was to make it a complementary scripting language to go with Java, with the compiled language.” And according to this press release, javascript’s design was “complementary to and integrated with Java.” It turns out javascript actually can interact with applets, too.

These days javascript has nothing to do with java, and honestly it may never have had much to do with it, but just because you’ve heard the “purely a marketing ploy” theory over and over doesn’t make it the whole truth. Now, who can tell me how the moral of the story applies to programming? ;)

How does binary search work, anyway?

Unrelated photo by Matthew Weibe to make this post look nice in social media shares.

Or, yet another blog post idea I stole from that article about programming interview concepts. You can find the rest of those posts under the how does it work? tag.

Binary search is an extremely simple idea that’s useful for much more than finding an element in a sorted array. The way binary search works is you compare the item you want to find with the item in the middle of the array, then, depending on which “side” of that midpoint your item falls on, you compare it to the midpoint of that half, and keep halving like that until you find the item you want (or run out of places to look).
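Here’s roughly what that looks like in Java, as a bare-bones sketch for a sorted int array (in real code you’d just call the standard library’s Arrays.binarySearch, but it works the same way):

// Returns the index of target in the sorted array, or -1 if it isn't there
static int binarySearch(int[] sorted, int target) {
    int low = 0;
    int high = sorted.length - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  // the middle of the current "half"
        if (sorted[mid] == target) {
            return mid;
        } else if (sorted[mid] < target) {
            low = mid + 1;   // target must be in the right half
        } else {
            high = mid - 1;  // target must be in the left half
        }
    }
    return -1;  // ran out of halves without finding it
}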

According to wikipedia a binary search makes at worst O(log₂ n) comparisons, which is pretty great when you have a large array. Because the search halves the search space each iteration, the maximum number of comparisons you’ll need is the number of times you can divide your array in half. For an array of 100 items you should only need 7 comparisons, for an array of 200 items you only need 8, and for 1,000 items only 10. See how slowly the number of comparisons grows as the array gets a lot larger? That’s just cool!

Also cool: you can use that concept for more than just finding stuff in arrays. Back in college they taught us to narrow down where a bug in your code is using a binary search. Basically, comment out half of your code and see if the bug still happens. If it doesn’t, uncomment half of the commented half. If it does, comment out another half of the uncommented half. It always felt weird to do that, like I should’ve just been able to see the problem by reading the code, but it worked. If you’re new or just totally stumped, give it a try.

Of course, binary search isn’t the only way to find things. Hash maps can be even faster, but all they can tell you is whether your target item exists or not. If you want to return the next largest or smallest item in the event that you don’t find an exact match, hash maps are no help at all. There are binary search trees too (which I’ll go into more detail on in a future post), but ironically, even though they’re named binary search trees, binary searching an array is usually faster. The problem with binary search trees is that it’s hard to keep them perfectly balanced, so you might have more “halves” on one side than the other, which messes with your search efficiency. What binary trees are good for is quick updates – it can be a real hassle to add or remove an item from an array; trees are much easier to work with. There’s also plain old linear search, where you start at one end of your array and look at every item until you find the one you want. If you’re not going to search your array enough times for it to be worth sorting it, linear search is good enough.

That kind of tradeoff can actually make algorithm questions interesting. I still don’t care even a little bit whether you looked up how to reverse a binary tree before the interview, but I care a lot if you think to ask if you’re going to search that array enough times to recoup the cost of sorting it. Programmers, myself included, can be terrible about overengineering to solve problems that don’t actually exist. But more on that in another post!

How does quicksort work, anyway?

Unrelated image from pexels.com to make this post look nicer in social media shares.

Why yes, I am going to keep mining that article about stuff you should know for programming interviews for blog post ideas :) While I don’t think that a lot of the common interview concepts from that article are actually worthwhile to ask about in an interview, I do think they’re interesting bits of nerd trivia and going in depth into how stuff works shows that nothing the computer does is magic.

Sort algorithms in particular are a weird interview question because you should basically never implement one at work. There are always edge cases, but in general if you actually write a sort function you have done something bad and you should feel bad. The correct way to implement a sort function is to import a library and go on with your day.

That said, sort algorithms are interesting in their own right. They’re one of those things that seem incredibly simple and boring until you start thinking about how you would tell a computer how to sort things. There are also way more sort algorithms than you might think, all with their own pros and cons.

Quick sort uses a divide and conquer strategy – instead of sorting the entire array you give it, it picks a pivot point (different implementations do this in different ways; one of the simplest methods is just to take the middle element of your array), then rearranges the elements of your array so that everything less than the pivot is on the left and everything greater is on the right. Then you break the array into halves and recursively sort each one until everything in the array is in order. There’s a really helpful gif at the top of the wikipedia article about quicksort that explains it better.
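Here’s a bare-bones sketch of that in Java, using the middle element as the pivot (real implementations are much fussier about pivot choice and small subarrays, so this is just to show the idea):

// Sorts a[low..high] in place. Call it like quickSort(myArray, 0, myArray.length - 1)
static void quickSort(int[] a, int low, int high) {
    if (low >= high) return;                 // zero or one element, already sorted
    int pivot = a[low + (high - low) / 2];   // simplest option: the middle element
    int i = low, j = high;
    while (i <= j) {
        while (a[i] < pivot) i++;            // find something on the left that belongs on the right
        while (a[j] > pivot) j--;            // and something on the right that belongs on the left
        if (i <= j) {
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;  // swap them
            i++;
            j--;
        }
    }
    quickSort(a, low, j);                    // recursively sort each "half"
    quickSort(a, i, high);
}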

Because quick sort rearranges the array elements by swapping them, it requires very little memory, which was a big deal when it was invented by Tony Hoare in 1959. To this day it’s one of the fastest sorting algorithms, provided you do a good job of picking your pivot point. If you do a bad job of that, things go off the rails, particularly if your array is mostly sorted already: with a poorly chosen pivot (and no check for a sorted or mostly sorted array), every partition ends up lopsided and quick sort degrades to its worst case, which is surprisingly slow for an array that was nearly in order to begin with.

Another efficient (in this case it’s a technical term for sort algorithms that are efficient enough to actually use) sort algorithm is merge sort. Merge sort is even older than quick sort; it was invented in 1945 by John von Neumann. Like quick sort, it uses a divide and conquer strategy; the difference is that merge sort divides the array into the smallest pieces it can, then merges those pieces into two-element arrays, then merges those into four-element arrays and so on until it produces a completely sorted array. As usual, wikipedia has a gif that explains it visually.
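And here’s a minimal recursive sketch in Java (this is the top-down version, which splits all the way down first and merges on the way back up, but the merge step is the same idea; Arrays.copyOfRange is from java.util):

static int[] mergeSort(int[] a) {
    if (a.length <= 1) return a;  // the smallest possible piece is already sorted
    int mid = a.length / 2;
    int[] left = mergeSort(java.util.Arrays.copyOfRange(a, 0, mid));
    int[] right = mergeSort(java.util.Arrays.copyOfRange(a, mid, a.length));
    return merge(left, right);
}

static int[] merge(int[] left, int[] right) {
    int[] result = new int[left.length + right.length];  // the extra memory merge sort is known for
    int i = 0, j = 0, k = 0;
    while (i < left.length && j < right.length) {
        // <= keeps equal elements in their original order, which is what makes the sort stable
        result[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
    }
    while (i < left.length) result[k++] = left[i++];
    while (j < right.length) result[k++] = right[j++];
    return result;
}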

Merge sort requires much more memory than quick sort does because of the way it creates new arrays while it’s sorting. This can be an issue if you’re sorting especially large arrays, although I’m sure more advanced algorithms based on merge sort can do some sort of trickery to mitigate that :) On the upside, it’s a stable sort – if you have two objects in the array with the same sort order, they’ll stay in that order – unlike quick sort. It’s also good at handling slow sequential media like tape drives and at sorting linked lists, which quick sort is slow at and heap sort can’t handle at all.

Heap sort, the last sort algorithm I want to talk about today, is an interesting one. Unlike quick sort and merge sort, heap sort puts all the elements of the array into a heap first, then uses that to sort the array.

Quick digression from sorting: a heap is a partially ordered tree structure. In a heap, the child nodes are always greater than or equal to the parent node (in a min heap, or always less than or equal to it in a max heap), but siblings aren’t in any particular order relative to each other. The root node is always the smallest or largest element in the heap, and if you remove it the heap rebalances itself so the next smallest or largest element becomes the new root.

Back to heap sort: once you have a heap it’s very simple, you just take the root, add it to your array, let the heap rebalance itself, take the new root, and so on until your heap is empty.
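Java’s built-in java.util.PriorityQueue happens to be a min heap, so you can sketch the whole process with it. A real heap sort builds the heap inside the array itself so it doesn’t need the extra copy, but the idea is the same:

// Not a true in-place heap sort, just an illustration of "pull the root until the heap is empty"
static int[] heapSortSketch(int[] a) {
    java.util.PriorityQueue<Integer> heap = new java.util.PriorityQueue<>();
    for (int x : a) {
        heap.add(x);              // put every element into the (min) heap
    }
    int[] sorted = new int[a.length];
    for (int i = 0; i < sorted.length; i++) {
        sorted[i] = heap.poll();  // take the root, the heap rebalances itself
    }
    return sorted;
}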

In comparison with other sorts, heap sort is a little slower than quick sort on average but has better worst case performance. Merge sort has similar time bounds (average, best case, and worst case time it takes to sort an array), but takes up more memory because a heap sort can be done in place. On the other hand, merge sort is stable, parallelizes well, and works on datasets too large to fit into memory at once, which neither quick sort nor heap sort can do.

One last piece of trivia: the Timsort algorithm, implemented in 2002 by Tim Peters, is based on merge sort and insertion sort (a very simple sort algorithm) and is the standard sort function in Python and in Java (for sorting objects, at least).

There’s a huge amount of detail I skipped over; I recommend poking around wikipedia if you’re interested in more detail about the many, many, many ways you can sort a list. Just don’t ask about them in interviews, because all you’ll learn by doing that is whether your interviewee looked them up beforehand :)

How does a breadth-first search work, anyway?

In a recent post I mentioned having read an article about passing programming interviews that said it was important to be able to write a breadth-first search and to understand how hash maps work. I covered hash maps last time, so this time let’s talk about breadth-first searches.

The first question is what on earth is a breadth-first search? It’s a way of searching a tree structure. In a breadth-first search, you look at all the nodes at a particular ‘level’ of the tree before looking at anything in the next level. Another way you can do it is depth-first, where you follow each node’s children down and down until you run out of children, then go back up to the next child node you haven’t already visited and follow its children down until you run out again, and so on until you’ve visited all the nodes in the tree.

This is definitely a case where a picture is worth 1000 words. Here’s the order you visit nodes in a breadth-first search:

Order nodes are visited in a breadth-first search. By Alexander Drichel - Own work, CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=3786735

and here’s the order in a depth first search:

Order nodes are visited in a depth-first search. By Alexander Drichel - Own work, CC BY-SA 3.0

Great, what’s a breadth-first search for? According to wikipedia it’s good for a bunch of problems in graph theory that I totally don’t understand, and some more understandable stuff like finding the shortest path between two nodes in a tree and serializing a binary tree in such a way that you can easily deserialize it.

So how do you do a breadth-first search anyway?

bijulsoni has graciously provided an example in their article Introduction to Graph with Breadth First Search(BFS) and Depth First Search(DFS) Traversal Implemented in JAVA on Code Project. If you’re interested, that code is provided under The Code Project Open License (CPOL) 1.02, which basically states that you can do whatever you like with the code but don’t come crying to them if it doesn’t work.

Here’s a breadth-first search:

 
public void breadthFirstSearch() {
    // BFS uses a Queue data structure to hold the nodes waiting to be visited
    Queue<Node> q = new LinkedList<>();
    q.add(this.rootNode);
    printNode(this.rootNode);
    rootNode.visited = true;
    while (!q.isEmpty()) {
        Node n = q.remove();
        Node child;
        while ((child = getUnvisitedChildNode(n)) != null) {
            child.visited = true;
            printNode(child);
            q.add(child);
        }
    }
    // Clear visited property of nodes
    clearNodes();
}

and to compare, here’s how a depth-first search works:

public void depthFirstSearch() {
    // DFS uses a Stack data structure to hold the nodes waiting to be visited
    Stack<Node> s = new Stack<>();
    s.push(this.rootNode);
    rootNode.visited = true;
    printNode(rootNode);
    while (!s.isEmpty()) {
        Node n = s.peek();
        Node child = getUnvisitedChildNode(n);
        if (child != null) {
            child.visited = true;
            printNode(child);
            s.push(child);
        } else {
            s.pop();
        }
    }
    // Clear visited property of nodes
    clearNodes();
}

The complete, runnable code can be downloaded from the article linked above if you’d like to run it yourself. getUnvisitedChildNode() does what you would expect, so I left it out to save space. What I find really interesting about the breadth-first and depth-first algorithms is that they’re practically identical except for the different data structures used to hold the nodes we’re working on. The simple change from a queue (where you add items to the end and remove items from the head) to a stack (where you both add and remove items from the end) is all it takes to change a breadth-first search to a depth-first search.

Now we all know how a breadth-first search (and a depth-first search as a bonus) works. You can safely forget the details, secure in the knowledge you can look it up if you need to :)