What if I don’t know?

Unrelated image to make this post look nicer in social media shares.

I was listening to the Developer Tea podcast the other day and caught a re-air of an episode about how to handle questions that you don’t have good answers to. The very short version of their answer (which I totally agree with) is don’t say “I don’t know” and stop there, follow it up with “but I can find out.” And then, you know, actually find out and follow up with the person who asked the question :)

It’s been such a long time since I worried about that that I forgot it was still an issue for people. I’m not saying I feel great about it when I get asked a question I can’t immediately answer, but my perspective is that my job as a developer is much more about knowing how to figure stuff out than it is about knowing the answers off the top of my head. I’m not Google, and it’s not reasonable to expect me to be.

My theory (and readers, if you think I’m out to lunch here, I’d appreciate you letting me know in the comments) is that newer devs are more likely to freak out about not knowing the right answer because they’re used to having to produce an answer right away or lose marks on tests in school. One of many, many ways being a professional developer is different from school is that there’s no time limit and everything is open book.

Seriously, no reasonable person is going to react badly if you tell them you’re going to find the answer and get back to them. And if they do react badly, they’re a jerk and if that jerk is your boss, you should seriously consider finding a new job. Production applications are just too complicated for any one person to remember every detail of the back end, the front end, the database, the logging, the monitoring, the API, the deployment process, or whatever other pieces you deal with. It’s just not possible for a person to memorize every detail of a complex system and if you think it is you have no business managing developers.

Speaking of complexity, a completely reasonable answer to a question might be “Amy built that feature, she would know better than I would.” Just telling someone where to look or who to ask is really helpful, it’ll make it easier for them the next time they need an answer. Sometimes, say if your boss asks you a question, they might need you to do the legwork and go ask Amy for them, but that doesn’t mean it’s not helpful for the next time they have a question about that feature.

Unlike school, very few parts of professional development involve a time limit. Sure, if production is down you’re going to want to figure out why as quickly as possible, but even that is more about problem solving skills than it is about having stuff memorized. If my boss asks me a question, it’s far more useful to them for me to take five minutes to go look up the answer and make sure it’s right than to spit one out right then and there.

Basically none of professional development is closed book, either. That’s one of the reasons for my very limited interest in general knowledge questions in technical interviews (the other one is that a stressful, high stakes situation like an interview is going to give you false negatives because people go blank when they actually know the subject perfectly well). Not remembering the difference between a StringBuilder and a StringBuffer is never going to matter in my career; that takes about 30 seconds to Google.

“That’s great,” you say, “but what if I do all my research and I still don’t know what the best option is?” That’s totally fine! Write down what you learned and share that with your boss/whoever asked you the question. I’ve had plenty of conversations with my boss where I showed them what I found and straight up told them “I don’t know what the best thing to do is, but here’s what I found.” As long as you’ve done something to help find the answer you’ve at least saved your boss the time it took to research it, that’s still helpful.

Not knowing the answer is just not an issue as a developer. The issue is if you don’t make an effort to find the answer, and making an effort is something anyone can do.

The dining philosophers

Unrelated image to make this post look nicer in social media shares.

I read this article by Uncle Bob called The Lurn (it’s a companion piece to an earlier article of his called The Churn), where he mentioned a lot of topics that he thinks are more important for developers to learn about than just scrambling to keep up with the latest languages and frameworks (not that you shouldn’t learn a few of those, but you’re going to hit diminishing returns pretty quickly because not that many things are actually a meaningful leap forward and eventually you’re going to start seeing enough patterns that learning yet another slightly different framework just isn’t a good use of your time).

One of the topics he mentioned was the Dining Philosophers problem, which I’d never heard of so I went and looked it up. The problem (quoting wikipedia, of course) is:

Five silent philosophers sit at a round table with bowls of spaghetti. Forks are placed between each pair of adjacent philosophers.

Each philosopher must alternately think and eat. However, a philosopher can only eat spaghetti when they have both left and right forks. Each fork can be held by only one philosopher and so a philosopher can use the fork only if it is not being used by another philosopher. After they finish eating, they need to put down both forks so they become available to others. A philosopher can take the fork on their right or the one on their left as they become available, but cannot start eating before getting both of them.

Eating is not limited by the remaining amounts of spaghetti or stomach space; an infinite supply and an infinite demand are assumed.

The problem is how to design a discipline of behavior (a concurrent algorithm) such that no philosopher will starve; i.e., each can forever continue to alternate between eating and thinking, assuming that no philosopher can know when others may want to eat or think.

Yep, it’s a concurrency problem. Specifically, a deadlock problem. Just because it sounds like it should be simple to get all of our philosophers fed doesn’t mean it actually is. The simplest solution would be that each philosopher picks up the fork on their left and waits for the fork on their right to become available – which never happens because the philosopher to their right has already taken it and is waiting for the philosopher on their right to put down their fork.

Okay, so let’s give the philosophers a time limit. If you grab one fork but don’t get another one within five minutes, you put your fork down and wait five minutes, then try again. No dice :( This is something called a livelock (that link explains it particularly well), where all the philosophers grab one fork, wait five minutes, put their forks down, wait five more minutes, and try to get two forks again. Forever. It’s almost the same thing as a deadlock except that the processes can still try to do things, they just can’t actually accomplish anything.

That only happens if every philosopher has the same wait time, though. What if we assign each philosopher a different wait time? That sort of works, but the philosopher with the longest wait time is likely to get less spaghetti because they try to pick up a fork fewer times, and if they’re particularly unlucky may get no spaghetti at all. We could try random wait times too, but only if we wanted to tear our hair out trying to figure out why one process is super slow one day and fine the next. Random numbers are out to get you – if you generate enough of them, eventually you’re going to get a long streak of [insert least helpful result here] and the people trying to use the feature that never manages to get two forks will hate you.

Another solution is to number all the forks and tell the philosophers that they must pick up the lower numbered fork first when they need one. The philosophers each grab the lower numbered fork until we get to philosopher number five, who needs to take fork one before fork five, but fork one is already taken so they don’t take any forks. That leaves fork five available to philosopher four, who sets down forks four and five when they finish eating, which lets philosopher three take forks three and four, and so on around the table.
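To make the ordering idea concrete, here’s a minimal Java sketch (all the class and method names are mine, not from any library) where each philosopher always locks the lower-numbered fork first. Because every thread acquires its locks in ascending order, no circular wait can form and the dinner always finishes:

```java
import java.util.concurrent.locks.ReentrantLock;

public class OrderedForks {
    // Run the dinner: each of the 5 philosophers eats `mealsEach` times,
    // always locking the lower-numbered fork first. Returns meal counts.
    static int[] runDinner(int mealsEach) {
        final int n = 5;
        final ReentrantLock[] forks = new ReentrantLock[n];
        for (int i = 0; i < n; i++) forks[i] = new ReentrantLock();
        final int[] meals = new int[n];
        Thread[] threads = new Thread[n];
        for (int i = 0; i < n; i++) {
            final int id = i;
            // Philosopher id needs forks id and (id + 1) % n, but always
            // grabs the lower-numbered one first.
            final int first = Math.min(id, (id + 1) % n);
            final int second = Math.max(id, (id + 1) % n);
            threads[i] = new Thread(() -> {
                for (int m = 0; m < mealsEach; m++) {
                    forks[first].lock();   // acquire in ascending order,
                    forks[second].lock();  // so no circular wait can form
                    try {
                        meals[id]++;       // eat one helping of spaghetti
                    } finally {
                        forks[second].unlock();
                        forks[first].unlock();
                    }
                }
            });
            threads[i].start();
        }
        try {
            for (Thread t : threads) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return meals;
    }

    public static void main(String[] args) {
        // No deadlock: prints 100 five times, one line per philosopher.
        for (int count : runDinner(100)) System.out.println(count);
    }
}
```

Note that philosopher four ends up as the odd one out who grabs their “right” fork first; that asymmetry is exactly what breaks the cycle.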

That actually works pretty well if you only ever need two forks, but if you take forks three and four, then discover you need fork one as well, you have to set down the forks you already have and start over again from fork one.

Yet another solution is to introduce a waiter and make the philosophers ask them for permission to take forks. The waiter only gives permission to one philosopher at a time to pick up forks, which is great in terms of preventing deadlock but kinda sucks if one philosopher needs forks two and three and another needs forks four and five: they can’t get permission to pick up forks at the same time even though they don’t want the same forks.
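Here’s a hedged sketch of the waiter idea, using a Java Semaphore with one permit as the waiter (the names are again illustrative). A philosopher only blocks on a fork while holding the waiter’s permission, and whoever holds that fork is eating and will put it down, so there’s no deadlock:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

public class WaiterDinner {
    // Philosophers must get the waiter's permission before picking up
    // forks; only one philosopher at a time may be picking forks up.
    static int[] runDinner(int mealsEach) {
        final int n = 5;
        final ReentrantLock[] forks = new ReentrantLock[n];
        for (int i = 0; i < n; i++) forks[i] = new ReentrantLock();
        final Semaphore waiter = new Semaphore(1);
        final int[] meals = new int[n];
        Thread[] threads = new Thread[n];
        for (int i = 0; i < n; i++) {
            final int id = i;
            threads[i] = new Thread(() -> {
                for (int m = 0; m < mealsEach; m++) {
                    try {
                        waiter.acquire();       // ask permission to pick up forks
                    } catch (InterruptedException e) {
                        return;
                    }
                    forks[id].lock();           // left fork
                    forks[(id + 1) % n].lock(); // right fork
                    waiter.release();           // forks in hand, free the waiter
                    meals[id]++;                // eat
                    forks[(id + 1) % n].unlock();
                    forks[id].unlock();
                }
            });
            threads[i].start();
        }
        try {
            for (Thread t : threads) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return meals;
    }

    public static void main(String[] args) {
        // No deadlock: prints 100 five times, one line per philosopher.
        for (int count : runDinner(100)) System.out.println(count);
    }
}
```

The inefficiency the post mentions shows up here too: two philosophers who want completely different forks still queue up on the single waiter permit.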

If you relax the constraint about philosophers talking to each other, there’s another solution (the Chandy/Misra solution): all the forks are either clean or dirty (and start out dirty, with each fork held by the lower numbered philosopher next to it). Any philosopher can ask for a fork, and the philosopher who has it must clean it and hand it over if it’s dirty, but keeps it if it’s clean. When a philosopher finishes eating, their forks become dirty again, so the next hungry neighbour who asks for one gets it. This prevents deadlocks and fixes the problem of philosophers who haven’t eaten never getting a fork, but (there’s always a but) introduces more overhead in the form of making it possible for philosophers to talk to each other and keep track of who has which fork.

Those are by no means all the possible solutions, but the individual solutions – and there are plenty of them – are less important than the general point of the exercise, which is that concurrency is hard :) Specifically, it’s prone to weird edge cases and is hard to think about because there are so many moving parts.

And now we both know what the dining philosophers problem is all about.

Simple stuff that will make your new team think you’re amazing

Unrelated image to make this post look nicer in social media shares and because Pallas Cats are the best.

Even if you just graduated, there are some simple things you can do to make your new team think you’re amazing, and they don’t even involve any code. Ironically, a huge amount of the stuff that makes someone a really good programmer has nothing to do with sitting down with an editor and writing code. Being an exceptional programmer is as much about making your team more effective as it is about just writing good code yourself.

A big thing that can help your team get more done is to write absolutely everything down when you set up your development environment for the first time at a new job. Everybody means to do this but hardly anyone actually does. No judgement, my current job is the first one where I’ve done a good job of documenting environment setup and it’s my fourth job since graduating from college. It’s even better if you can write a simple script to set up stuff like environment variables. In some offices everyone gets to choose their own OS and scripts are only so useful, but even a script that only works for one OS at least gives other people something to work with.

Why is this important? Because you are far from the last developer your company is ever going to hire, and the less time the people who come after you spend setting up their environments the more time they can spend learning the codebase and actually making themselves useful. It seems minor, but the hours it takes to set up an environment really add up over multiple hires. It’s also incredibly helpful if you’re allowed to work from home and need to set up your environment again on your personal computer, or if you switch OSs or need to do a full reinstall, or your computer dies and you need to set up a new one. Seriously, just documenting environment setup will make your team think you’re amazing and make everyone who comes after you more productive.

But if you want to do even more, it’s also awesome if you can document parts of the codebase as you learn them. No matter how much sense that code makes right now, you will forget it, and again, this is super helpful to everyone who comes after you.

If your team doesn’t have a place to store docs, see if you can start one. At my work we use google apps already, so it was really simple to create a google sites wiki that can only be viewed by people from our organization. If you don’t use google apps, pretty much any tech company is going to have a spare box you could spin up a wiki on. Whatever you do, just make sure there’s some sort of security – if you’re not going to put concrete details about your code in your docs, there’s not much point having them – and that you’re not using a free service that’s going to up and disappear out from under you one day. If nothing else, you can always put some text files on a shared drive and call it good :)

While you’re at it, it never hurts to document processes like how to deploy to staging and prod, how to fix common errors, where all of the log files for different services are, and where those different services themselves are. Basically, any time you have a question as a new developer, write the answer down where other people can find it. If you need to know, so will the next dev.

The less time your team spends re-learning code or hunting down a log file, the more everyone gets done. That’s something you can help with no matter how new you are.

How does variable scope work?

A Black woman types on a laptop. Photo from CreateHER Stock.

Scope is where a variable exists and can be accessed in your code, and it’s surprisingly complicated.

The short version is that a variable that was declared inside a block only exists inside that block. Great, what’s a block? In Java, it’s anything between a set of curly brackets { }. Blocks can be nested, too. A method inside of a class can see the class’s variables, and a block inside of a method (like an if or a loop) can see the method’s variables. Nesting only works inwards, outer blocks can’t see inside inner blocks. Once a block finishes executing its variables cease to exist, so there’s really nothing for the outer block to see.
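A quick made-up example of block scope in Java (the class and variable names are just for illustration):

```java
public class ScopeDemo {
    static int classCounter = 0; // class scope: visible to every method in the class

    static String describe(int value) {
        String result; // method scope: visible anywhere inside describe()
        if (value > 0) {
            String sign = "positive"; // block scope: only exists inside this if block
            result = sign;
        } else {
            result = "non-positive";
            // sign doesn't exist here; referencing it would be a compile error
        }
        // sign is gone now, but result (declared in the outer block) survives
        classCounter++; // inner blocks can see outward to the class's variables
        return result;
    }

    public static void main(String[] args) {
        System.out.println(describe(5));  // positive
        System.out.println(describe(-1)); // non-positive
    }
}
```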

Java also makes things more complicated with access modifiers: classes, methods, and fields can be public, private, or protected (and if you don’t write any modifier at all, you get a fourth option, package-private). Those modifiers only apply to class members, they don’t make sense for local variables that are just going to vanish after the method executes. Public things can be accessed from outside the class, private things can be accessed only from inside the class, and protected things can be accessed from inside the same package and by subclasses of the class where they were declared. Private and protected classes are sort of a weird special case, they only make sense if you have a class nested inside another one.

Just like each time a method is run it gets a fresh copy of all of its variables, each time a class is instantiated it gets a fresh copy of all of its variables too. Unless any of them were declared static, in case you didn’t already have enough to keep track of :) The difference between classes and the objects created by instantiating those classes was a hard concept for me when I started programming, so don’t feel bad if you’re confused. Classes aren’t things you can interact with, they’re just blueprints for things. Once you create an actual thing by instantiating a class, then you can call methods on it. Unless you made that method or variable static, in which case you can use it without actually having an instance of the class. Static variables are special: normally every instantiation of the class gets its own variables, but static variables are shared between all instances of the class.
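Here’s a tiny made-up example showing the difference between instance variables and static variables:

```java
public class Counter {
    static int totalCreated = 0; // static: shared by every Counter instance
    int count = 0;               // instance: each Counter gets its own copy

    Counter() {
        totalCreated++; // every constructor call bumps the shared counter
    }

    void increment() {
        count++; // only touches this instance's copy
    }

    public static void main(String[] args) {
        Counter a = new Counter();
        Counter b = new Counter();
        a.increment();
        a.increment();
        b.increment();
        System.out.println(a.count);              // 2, a's own copy
        System.out.println(b.count);              // 1, b's own copy
        System.out.println(Counter.totalCreated); // 2, shared across all instances
    }
}
```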

Why would you want to have just one instance of a variable for all instances of a class you’ll ever have? If you have any constants in the class, they should be declared static (also final, which means, surprisingly enough, that you can never change it) to save memory. If the constant is only ever going to have one value, there’s no reason to waste memory by giving every instance of the class its own copy. You might also want to make something static if it’s a shared resource like a logger. You really only need one logger per class, so again you can save memory by sharing one logger instead of giving every instance its own copy.

So that’s all well and good, but why does scope work the way it does? Partially I think it’s for the convenience of the programmer :) How much would it suck if you could never reuse a variable name because all variables were visible anywhere? And how much of a hassle would it be to keep track of which variable you were using versus which one you meant if all of them were visible?

Aside from convenience, there is an actual reason for variable scope to work the way it does. It has to do with the way the computer keeps track of exactly which code is executing at any given time. To oversimplify a bit, when a program is executed it gets two chunks of memory, the stack and the heap. The heap is where objects live, and the stack is where the computer keeps track of where it is in the program and what values all the variables have. When method a calls method b, the computer stores the state of method a on the stack, then creates a new state for method b with all of its variables and stores it on the stack too. When method b finishes, the computer keeps its return value (if it returned anything) to update method a’s state with, then it throws away the state for method b. That’s what I was talking about when I said that variables cease to exist when a block finishes executing – the computer throws out its only record of what values those variables had. You can read more about memory and execution in this article, which has some pictures to help visualize what’s going on.
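To make the “fresh copy of variables per call” idea concrete, here’s a small recursive example (names are mine, purely for illustration). Every call to countdown gets its own stack frame with its own n, so the recursive calls can’t trample each other’s locals:

```java
public class FreshLocals {
    // Each call gets its own copy of `n` on the stack; the recursive
    // calls below this one never touch our copy.
    static int countdown(int n) {
        if (n == 0) return 0;
        int below = countdown(n - 1); // a new frame with a fresh `n`
        return n + below;             // our own `n` is unchanged
    }

    public static void main(String[] args) {
        System.out.println(countdown(4)); // 10 = 4 + 3 + 2 + 1
    }
}
```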

What about static variables, if they live in the stack why don’t they disappear? My understanding is that they actually live in the heap (where stuff stays until you deliberately get rid of it), and what goes on the stack is just the address of where the static variable lives in the heap.

Scope might seem really simple if you’ve been programming for a little while, but let me throw a wrench into that idea: multi-threading. With multiple threads running the same code, it can be really difficult to figure out why on earth your variable suddenly has a value that makes no sense. Not that I’ve ever spent multiple days swearing at my computer for making no sense ;)

Things they don’t tell you in school about production code

Unrelated image to make this post look nicer in social media shares.

One of many things school can’t really prepare you for is what it’s actually like to write production code. That’s not a knock on my education or anyone else’s, it’s just not possible to get the experience of writing production code without, you know, writing production code. That said, I’m going to try to explain it anyway :)

Like I said in When is it done?, when I was in college I thought “done” meant “compiles and seems to give the right answer for a couple of happy-path tests.” Actually “done” in a meaningful sense is much more than that, and so is writing code that’s really for really real ready for production.

Having code that compiles and probably even works is all well and good, but how will you know how it’s performing or whether it’s working right in production? This is the kind of thing you don’t think about for school assignments, you just hand them in and then you’re done with them forever. Logging suddenly becomes a really big deal when you need to know whether your code is working right and if not, what you need to fix.

All the details of how to log, when to log, and what you should log have filled many books, so I’ll just say that it takes practice to figure out how much logging is enough but not too much and that you should feel free to lean on your senior devs to help you with that. At the very least, everything you log should include some information about the user who was logged in at the time and the account/project they’re part of if that applies. Knowing that something happened isn’t terribly helpful if you don’t know anything about the context it happened in. If in doubt, err on the side of more information. You can always filter it out or just ignore it if you need to.
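As a sketch of what “log with context” might look like, here’s a hypothetical helper; the field names (userId, accountId) are made up for illustration and aren’t from any particular logging framework:

```java
public class ContextLogger {
    // Build a log line that always carries the user and account context,
    // so "something happened" is never a mystery later.
    static String logLine(String level, String userId, String accountId, String message) {
        return String.format("[%s] user=%s account=%s %s", level, userId, accountId, message);
    }

    public static void main(String[] args) {
        System.out.println(logLine("WARN", "u-123", "acct-9", "payment retry scheduled"));
        // [WARN] user=u-123 account=acct-9 payment retry scheduled
    }
}
```

A real codebase would more likely use a logging framework’s context mechanism than hand-rolled string formatting, but the principle is the same: attach the who and the where to every line.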

In addition to logs, monitoring is also really important. Most production servers have a health check, a way to figure out if the server is up and can access things like the database and other external services. Why external services? Because a server that can’t talk to the cache/social network/payment provider/etc isn’t good for very much. More things they don’t tell you in school :) Like logging, health checks take practice too. Comprehensive checks are great, but you may not want your server to say it’s down when an external service is responding slowly either.

Metrics are important too. Whether you use a third party analytics service or roll your own with Graphite and StatsD, it’s really useful to be able to see at a glance whether your app is behaving normally. At a minimum you probably want to know how many requests you’re getting per minute and how many errors, plus anything domain specific like how many level starts or level ends per minute, how many purchases, how many new signups, etc.

Yet another thing you probably didn’t do in school is code reviews. In school, it generally doesn’t matter if anyone else understands your code. At work, it’s important that someone else is able to fix your code if anything goes wrong while you’re on vacation or home sick or away at a conference or if you change jobs. Having a single point of failure is always a bad idea, whether you’re talking about servers or about the one person who really knows x.

For very similar reasons, documentation is important too. Aside from people getting sick or going on vacation at inconvenient times, documentation is really useful when you come back to a piece of code months later or when you’re working on something that somebody else wrote. It’s also great for helping new hires get up to speed. Just because it took months for you to learn the codebase doesn’t mean it has to be that hard for the next new hire.

Unit tests also become a lot more important when you’re working on production code. They’re not just for getting your teacher off your back, they’re a great safety net when you have to change things and want to make sure you didn’t break anything that used to work. In school you hardly ever return to previous assignments, but at work you change things over and over again and it becomes really helpful to have a way to make sure you didn’t break stuff that doesn’t involve manually checking everything. Also, the more you can automate tests for, the less you have to test manually.

I’m sure I’ve just scratched the surface of things that you didn’t learn in school about production code. Readers, what most surprised you about production code?

How do binary trees work, anyway?

Programming interview concepts are back! You can find the rest of those posts under the how does it work? tag.

Before we talk about how a binary tree works, we should probably talk about what it is. A binary tree is just a tree data structure where each node has at most two children. Thank you wikipedia :) There’s nothing preventing you from making a tree where each node has more than two children, it just wouldn’t be a binary tree. A tree, binary or not, isn’t necessarily sorted either.

A public domain work found on the wikimedia commons. By Derrick Coetzee

A tree works a lot like a linked list, each node has references to its children, allowing you to walk down the tree. You can also implement a binary tree using a plain old array (see the picture to the right), but that can waste a lot of space if your tree isn’t both balanced and complete. Balanced, when we’re talking about trees, means both sides have the same number of nodes (or at least close to the same number), and complete means that on each ‘level’ all the nodes are filled in. In the example binary tree below, it’s not balanced because one side has 5 nodes and the other only has 3, and it’s not complete because there’s one node missing on the third level and two or three nodes missing on the fourth, depending on whether your definition of ‘complete’ allows for any leaf nodes on the right-most end of the last level  of the tree to be missing. That doesn’t actually have much to do with the rest of this post, I just thought it was nifty :)
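The array trick works because the children of the node at index i live at fixed offsets, so no child references are needed (though every missing node still wastes a slot). A quick sketch of the index math, 0-based and with my own names:

```java
public class ArrayTree {
    // For a node stored at index i (0-based), its children and parent
    // live at fixed, computable indices.
    static int leftChild(int i)  { return 2 * i + 1; }
    static int rightChild(int i) { return 2 * i + 2; }
    static int parent(int i)     { return (i - 1) / 2; }

    public static void main(String[] args) {
        // The complete tree     1
        //                     /   \
        //                    2     3
        //                   / \   /
        //                  4   5 6
        int[] tree = {1, 2, 3, 4, 5, 6};
        System.out.println(tree[leftChild(0)]);  // 2
        System.out.println(tree[rightChild(1)]); // 5
        System.out.println(tree[parent(5)]);     // 3
    }
}
```

You can see why an unbalanced, incomplete tree hurts here: the array has to reserve a slot for every position down to the deepest level, filled or not.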

A binary search tree is a special case of tree where each node has 0-2 children and the nodes are sorted so that you can perform a binary search. In my post about how a binary search works, I mentioned that binary trees aren’t actually the fastest data structure to use for a binary search because it’s hard to balance a binary tree.

A public domain work found on the wikimedia commons. By Derrick Coetzee

How do you balance a binary tree, anyway? Well, if you sorted all of your items before you added them to your tree, then you could start with the item in the middle, then add the middles of the two halves, then add the middles of those halves, and so on until you’ve added everything. That method only works if you already have all of the items you’re going to put in the tree and can be bothered to sort them, though. What do you do if you need to add more items later?
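That “add the middle, then the middles of the halves” idea turns into a short recursive method. This is just a sketch with my own minimal Node class, not any standard library API:

```java
public class BalancedBuilder {
    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    // Pick the middle element as the root, then recurse on each half.
    // Requires the input array to already be sorted.
    static Node build(int[] sorted, int lo, int hi) {
        if (lo > hi) return null;
        int mid = (lo + hi) / 2;
        Node root = new Node(sorted[mid]);
        root.left = build(sorted, lo, mid - 1);
        root.right = build(sorted, mid + 1, hi);
        return root;
    }

    static int height(Node n) {
        return n == null ? 0 : 1 + Math.max(height(n.left), height(n.right));
    }

    public static void main(String[] args) {
        int[] sorted = {1, 2, 3, 4, 5, 6, 7};
        Node root = build(sorted, 0, sorted.length - 1);
        System.out.println(root.value);   // 4, the middle element
        System.out.println(height(root)); // 3, as shallow as 7 nodes can get
    }
}
```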

Basically you need to re-arrange your tree until it’s balanced (or at least close enough) again. Some of the ways you can do this are with self-balancing trees like red-black trees or AVL trees. Both of those trees add some extra data to each node to help the tree figure out when it’s out of balance and how to get back into balance.

In a red-black tree, the extra data is the “colour” of the node. Because there are only two colours this only takes one extra bit to store. The colours, by the way, are totally arbitrary so don’t knock yourself out trying to understand the deeper meaning behind them :) According to one of the inventors of the red-black tree, red and black were the colours that looked the best on the laser printer they had available, which they were eager to use since they worked at Xerox PARC where the laser printer was invented.

A red-black tree uses the following rules to keep itself from getting badly unbalanced:

  1. A node is either red or black.
  2. The root is black. This rule is sometimes omitted. Since the root can always be changed from red to black, but not necessarily vice versa, this rule has little effect on analysis.
  3. All leaves (NIL) are black.
  4. If a node is red, then both its children are black.
  5. Every path from a given node to any of its descendant NIL nodes contains the same number of black nodes. Some definitions: the number of black nodes from the root to a node is the node’s black depth; the uniform number of black nodes in all paths from root to the leaves is called the black-height of the red–black tree.

Red-black trees do some funny business with their nodes – what you would think of as a leaf node actually has two leaves that are always black and don’t contain any information. If you’re wondering “well if they’re always black and don’t contain any information, can’t I just pretend they exist and not waste memory on them?” the answer is yes, you can totally do that.

The thing with the pretend leaves is that you need them for the third rule about leaves always being black. When you add a node to a red-black tree, you don’t add it as a real leaf, you add it to the closest node that has a value and then pretend it has black leaves. For the first couple of nodes after the root, this is super simple – the root is black, the new nodes are red, their pretend leaves are black, and everything is good. If you have more than a couple nodes in your tree, things get complicated. That’s where you break out rotations. Because this post is already pretty long I’m going to refer you back to the wikipedia article on red-black trees and this youtube video by OnlineTeacher. Normally I kind of loathe videos, but the pictures in that one are actually really helpful. Tree rotations are one of those things that are really simple when you can see them and really, really confusing when you have to describe them in words. The short version is that because of the way binary search trees are arranged, you can rotate nodes back and forth around the root of your subtree, which is going to make precisely no sense unless you already know what I’m talking about :)

My understanding of AVL trees is that they work on largely similar principles to red-black trees but because they’re more rigidly balanced they’re faster on retrieval but slower on updates. Everything is a tradeoff.

And finally, because I keep hearing about it as an interview question, how do you reverse/invert a binary tree?

First, let’s define what reversing a binary tree actually means. Before I looked this up I thought it had something to do with swapping the root and the leaves, which makes no sense because tree structures normally have only one root node. It turns out the question actually means swapping the left and the right children of each node.

From my quick bout of googling, it sounds like a fairly simple recursive algorithm to walk the tree and swap each node’s right child for its left child. Now you know.
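Here’s a sketch of that recursive swap, using my own minimal Node class rather than any particular library’s:

```java
public class TreeInverter {
    static class Node {
        int value;
        Node left, right;
        Node(int value, Node left, Node right) {
            this.value = value;
            this.left = left;
            this.right = right;
        }
    }

    // Swap each node's children, recursing into both subtrees.
    static Node invert(Node node) {
        if (node == null) return null;
        Node tmp = node.left;
        node.left = invert(node.right);
        node.right = invert(tmp);
        return node;
    }

    public static void main(String[] args) {
        //     1               1
        //    / \   becomes   / \
        //   2   3           3   2
        Node root = new Node(1, new Node(2, null, null), new Node(3, null, null));
        invert(root);
        System.out.println(root.left.value);  // 3
        System.out.println(root.right.value); // 2
    }
}
```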

As you might have noticed, my research here centered pretty heavily on wikipedia so if I messed anything up, tell me about it in the comments.

IDE of the day

If you work with javascript, you need to try JetBrains WebStorm. It has a bunch of really great features I don’t use (I hear there’s support for node and angular and typescript) and sweet, sweet auto complete :) I still wish it was possible to have strongly-typed Eclipsey levels of auto complete with javascript, but something is much better than nothing. It doesn’t always work perfectly, but WebStorm is pretty good at taking you to the definition of a function or object too.

Full disclosure: JetBrains changed their licensing scheme not so long ago and it’s really confusing now. You do not have to keep paying forever! Once you’ve paid for 12 months you get a perpetual fallback licence that gives you only security updates but your product still works.

Either way, you can try it out for free, so why not give it a shot?

ps If any readers know of a better JS IDE I would absolutely love to hear about it.

Passion isn’t everything

Unrelated image to make this post look nicer in social media shares.

This post is a bit of a counterpoint to my previous post about why I love programming even though it’s frustrating as hell sometimes. While I feel very lucky that I get to make a living doing something I love, I really don’t like my industry’s obsession with passion. It’s great if you feel passionate about programming, but it’s simply not necessary.

“But Mel,” you say “do you really want to work with some checked-out code monkey who half-asses everything, doesn’t give a shit about technical debt, and counts the minutes until it’s time to go home?” That’s a false dichotomy right there. There are many, many more choices than “passionate programmer” and “checked-out code monkey.” No, of course I don’t want to work with someone who doesn’t care about doing a good job. Fortunately, there are only about a zillion other points on the spectrum of “total passion” to “no passion at all.” Not being the most passionate programmer who ever lived in no way means you don’t care about doing a good job or want to improve. It just means you have other things going on in your life. Honestly, I think that’s healthier than being obsessed with just one thing.

Not to mention that having other interests actually makes you a better coder. Seriously, go read that article; it’s really good. Having just one obsession makes it much more likely that you approach problems from just one direction, while having other interests lets you come at problems from different perspectives. Take Adam Tornhill, for example: he took ideas from forensic psychology and applied them to code analysis, with some really interesting results.

Obviously you can be passionate about more than one thing, but that still doesn’t mean passion is necessary to be a good programmer. You can take pride in your work even if you aren’t in love with what you’re working on. Using myself as an example, I stocked shelves at Wal-Mart before I moved to Victoria to go to Camosun. Was I passionate about taking things out of boxes and putting them on shelves? Of course not, but I still took pride in doing a good job. I went home every day knowing I made the department managers’ lives easier, not harder. Pro-tip: doing nothing is more helpful than leaving a mess someone else has to clean up.

Or to use a more relevant example, I don’t like doing front-end layout. The endless fiddling and wondering whether that element would look better where it is or 5 pixels to the right isn’t satisfying for me; it’s just annoying. But if you give me a screen mockup, the end result will match it no matter how much swearing it takes. I don’t enjoy the process, but I do enjoy knowing I did a good job.

What’s really important is caring about the quality of your work; we just use “passion” as a proxy for that because we don’t know how to measure quality directly. Unfortunately, we don’t really know how to measure passion, either. Sure, somebody who has side projects or contributes to open source is probably passionate about programming, but that doesn’t mean someone with no public git repos doesn’t give a shit. Simply having the time to work on side projects is an enormous privilege. People who have kids, or sick relatives they need to take care of, or who need to freelance to bring in extra cash, or who have disabilities or are neurodivergent, or would rather spend time with friends and family than do more work outside of work, or just have time-consuming hobbies may simply not have the time or energy to perform passion by working on publicly shareable side projects.

None of those things mean you don’t care deeply about programming; they just mean that you have other responsibilities or interests. Hint for employers: people who have responsibilities are more focused when they’re at work because they know they can’t put in a few extra hours later, and they really hate changing jobs because it’s even more of a pain in the ass. That 20-something rockstar (ugh) dev who has no serious ties to the area might decide to move to San Francisco tomorrow. Your 30-something dev who has a mortgage and a kid is a lot less likely to randomly sell their house and uproot their family. Not that you shouldn’t trust single 20-somethings, but can we please stop pretending they’re the only worthwhile devs?

If you must look for passion, at least look for actual passion and not “free time and nothing better to do”. Ask candidates what they love about programming. Ask them if they have opinions about tools and languages and programming styles. Ask them what they would learn if their job gave them some free time and a resource budget.

But if you’re realistic about what you need, I think you’ll agree that passion is a red herring and what actually matters is caring about doing a good job. Fortunately, that’s a lot easier to find.

Does it actually need to be optimized?

Unrelated image to make this post look nicer in social media shares.

Learning to focus on one tiny part of your problem and ignore everything else is a really useful skill as a dev, but ironically it can also get you into trouble. It’s just as important to keep the bigger picture in mind as it is to break your problem down into little pieces and do them one at a time. Why yes, this is one of those posts that is as much for me as it is for you :)

Just because a process is slow doesn’t mean it needs to be optimized. If it hardly ever gets called, who cares if it’s slow? I know, I know, it feels wrong to see something that’s slow and leave it that way, but it’s not worth the dev time unless the process gets called often enough. Slow alone isn’t necessarily bad; it’s slow and called a lot that’s a problem. If you’re lucky you have metrics to look at and know for certain what gets called often; otherwise you’ll be making an educated guess. This is where understanding your application and thinking about how all the different pieces fit together comes in handy.
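If you have no metrics at all, even a crude homemade counter beats pure guessing. Here’s a minimal JavaScript sketch of that idea (the `instrument` helper and all its names are mine, not from any real metrics library):

```javascript
// Wrap a function so we can see how often it runs and how long it takes.
// A rough stand-in for real production metrics, for illustration only.
function instrument(fn, stats, name) {
  stats[name] = { calls: 0, totalMs: 0 };
  return function (...args) {
    const start = Date.now();
    try {
      return fn.apply(this, args);
    } finally {
      stats[name].calls += 1;
      stats[name].totalMs += Date.now() - start;
    }
  };
}

const stats = {};
const double = instrument((x) => x * 2, stats, 'double');

double(21);
console.log(stats.double.calls); // 1
```

After a day of real traffic, a table like `stats` tells you whether that slow function is called twice a week or twice a second, which is the whole question.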

For example, anything you can do asynchronously is not going to be your first priority for optimization. If you can hide the processing time from the user, you may never need to optimize it. Initialization, while it is the user’s first impression of your app, also happens only once a session. First impressions are certainly important, but other actions in your app will happen much more often which makes them better targets for optimization. Assuming your load time is reasonable, of course :)
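To make the async point concrete, here’s a toy Node sketch (the function names are made up) where the user gets their response before the slow, non-critical work ever runs:

```javascript
// Record the order things happen in so we can see the effect.
const order = [];

// 'respond' is whatever sends the reply to the user; 'logAnalytics'
// is slow work nobody is waiting on. Both are illustrative.
function handleRequest(respond, logAnalytics) {
  respond('ok'); // the user gets their answer right away
  setImmediate(() => logAnalytics()); // the slow part runs later
}

handleRequest(
  () => order.push('respond'),
  () => order.push('analytics')
);

console.log(order); // only 'respond' so far; the deferred work hasn't run yet
```

If the user never notices the analytics call, its speed barely matters, so it drops way down the optimization list.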

Of course, learning experiences are important too, so don’t worry about this too much if you’re a junior developer. The way you learn what isn’t worth spending time on is by messing up; it’s an unavoidable part of the process. If in doubt, talk it over with your team lead/dev lead/someone with more experience, and learn as much as you can from other people’s mistakes. You’ll also learn more about programming by optimizing, so even if the end result isn’t exactly critical to your application, the practice you got means it wasn’t a total waste of time.

One of the concepts I’m still mastering as a programmer is that nothing exists in a vacuum. Context is much more important than any individual piece of code: it’s not “speed this thing up or let it suck,” it’s “speed this thing up or do one of a dozen other things that could be more useful.” Remember, your time has value. Speeding up one piece of code, as personally satisfying as that can be, may mean much less to your users and your bottom line than a bugfix or a new feature. Developer time is expensive; it just makes sense to spend it on the things with the greatest returns.

The next time you’re about to optimize something, ask yourself how often that code is going to be called. It might sound too simple to be useful, but trust me, it’s a very easy question to forget.

.NET JWT library tip of the day

If you use JWTs (JSON Web Tokens) and need to generate or consume them in .NET, you might get the idea that the Microsoft library listed on is the way to go. It’s by Microsoft; that means it’s official and trustworthy, right?

Don’t be fooled! I mean, it is official and trustworthy, but I had a horrible time trying to use it. Save yourself the trouble and use jose-jwt if you need to handle JWTs in .NET. The readme alone is a thing of beauty; it has a shockingly comprehensive set of examples for pretty much everything you would ever want to do with a JWT. The library really is as easy to use as the examples make it look. I was able to generate a JWT with it in just a few minutes, and as you might have guessed from my posts about switching to Linux, my .NET experience is extremely out of date :)

Learn from my mistakes, just use jose-jwt and pretend you never heard of the Microsoft library.