Mel Reams

Nerrrrd

Talk of the day

So it turns out there are a lot of keynotes I like, and one of them is from Keep Ruby Weird 2015 by Sandi Metz. Fair warning, parts of that talk are hard to listen to – she plays recordings from an old psychological experiment that would absolutely never pass ethical review today. However, even if you skip that part (she warns you when the hard bits are coming up), there are some really excellent points in that talk about community and how to get good ideas rather than conformist ideas out of your team. Seriously, it's really worth watching.

To comment or not to comment?

Unrelated image from pexels.com to make this post look nicer in social media shares.

One of many things nerds like to spend far too much time debating is whether or not to comment their code. On one hand, comments make code a lot easier to understand, but on the other, many people claim that good code is self-documenting.

If you haven’t been writing code professionally for long, it probably seems totally obvious that comments help you understand code you’re new to or haven’t looked at in a while. For the most part, they do help. The problems are that bad comments really don’t help and that comments only help if they’re correct.

What makes a comment a bad comment? If you're going to add comments, they should explain why the code does what it does, not what it's doing. Take the examples in this reply to a reddit thread – it's totally unhelpful to add comments like "//Reduce's player counter by 1" to a line of code like "mageCounter--;" That comment doesn't add any information that isn't in the code. Of course the counter is being reduced by 1, that's obvious. What's useful is knowing why the counter is being reduced.
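To make that concrete, here's a little Java sketch. The mageCounter name comes from that thread, but the class and method names are made up for illustration:

```java
public class Arena {
    private int mageCounter = 5;

    // Unhelpful comment: it just restates the code.
    public void badExample() {
        // Reduce mage counter by 1
        mageCounter--;
    }

    // Helpful comment: it explains why the counter changes.
    public void onMageDefeated() {
        // A defeated mage can't be matched against anyone in the next
        // round, so stop counting them as available.
        mageCounter--;
    }
}
```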

Not only is that comment not helpful, it wastes space. The more comments like that you have, the less code you can see on the screen at once. Yes, that sounds really trivial, but seriously, it’s incredibly annoying to have to scroll up and down to see the whole method.

How can a comment be incorrect? If the code changed and the dev who changed it forgot to update the comment. Out of date comments can waste huge amounts of your time when you need to change a piece of code because everyone naturally assumes the comment is correct. If it wasn’t, why would it be there? Sadly, developers are only human and sometimes we get so excited about the code finally working that we forget to update the comment.

That’s part of why some people say that you should not have comments, that they’re a sign you didn’t name your variables and methods and classes well enough. It is true that if the names in your code are their own comments, then they won’t fall out of date the way actual comments can. On the other hand, there are limits to what variable names can tell you. I think the most important comments are the ones that explain weird code that works around a bug in an API or something strange you did for performance or just a workaround for a design choice that didn’t pan out long-term.

Those are the kind of things that self-documenting code just can't document. There's no method name that can explain why some of your tests use one dummy mailserver and other tests use a different one because there's no single dummy mailserver that handles all of the test cases you need. At least, not without that method name being terrible code on its own, and at that point you might as well use a comment no matter how much you normally oppose them.

My view of comments is somewhere in the middle – I think they’re a great tool for keeping track of what you’re doing while you code, but once you’ve written the code you should only leave in the comments that are important to help understand it. As much as possible, you really should rely on good naming, if only because good names are always a good idea :)

And in general, I think absolutism is a waste of time. Even otherwise excessive “here’s what the code is doing” style comments could be really helpful if you’re working in a low level language or doing something clever with pointers where it’s just not immediately clear what the code is doing, let alone why. It’s more about context than it is about hard and fast rules. Surely you didn’t go into programming because you ever wanted to be certain about anything ;)

Happy thanksgiving!

Hey look, a vaguely related image from pexels.com! It's still only here to make this post look nicer in social media shares, though.

Happy thanksgiving, readers!

In the spirit of thanksgiving (and not making you read too much when you’re just going to end up in a turkey coma), here are some things I’m grateful for:

Being lucky enough to enjoy doing a job I can make a good living at. The tech industry certainly has its flaws, but it beats the hell out of the jobs I had between high school and college where the only reason my employers didn’t pay me even less was because they legally couldn’t.

Incredible free resources like Khan Academy and the many, many MOOCs out there. It's amazing how much stuff you can learn at no cost but your time.

Conferences that record their talks and put them online. Most of them have excellent sound and video quality too. Look at how many Sandi Metz talks you can watch without ever buying a conference ticket or even putting on pants!

Beginner friendly communities like Code Newbie and Java Ranch/Code Ranch. I love that there are places online where it’s okay to not know everything already.

Learn to code/learn to code better resources like Free Code Camp, Exercism, Programming Praxis, Code Kata, and so many more. People put incredible amounts of work into these tools just to help other people become better programmers and that’s awesome.

JSON – working with XML is generally terrible and I’m delighted JSON has become the new standard. It sounds petty but seriously, any day I don’t have to fight with XML is a good day :)

What about you, readers? What nerdy stuff are you grateful for?

CSS tip of the day

If for any reason you ever need to center a circle inside of another circle using CSS, here's how. That delightful person even created a jsFiddle so you can test it out yourself. And in the spirit of almost-Canadian-thanksgiving, I give thanks for Stack Overflow :)

What if I don’t know?

Unrelated image from pexels.com to make this post look nicer in social media shares.

I was listening to the Developer Tea podcast the other day and caught a re-air of an episode about how to handle questions that you don’t have good answers to. The very short version of their answer (which I totally agree with) is don’t say “I don’t know” and stop there, follow it up with “but I can find out.” And then, you know, actually find out and follow up with the person who asked the question :)

It's been such a long time since I worried about that that I forgot it was still an issue for people. I'm not saying I feel great about it when I get asked a question I can't immediately answer, but my perspective is that my job as a developer is much more about knowing how to figure stuff out than it is about knowing the answers off the top of my head. I'm not Google, and it's not reasonable to expect me to be.

My theory (and readers, if you think I’m out to lunch here, I’d appreciate you letting me know in the comments) is that newer devs are more likely to freak out about not knowing the right answer because they’re used to having to produce an answer right away or lose marks on tests in school. One of many, many ways being a professional developer is different from school is that there’s no time limit and everything is open book.

Seriously, no reasonable person is going to react badly if you tell them you're going to find the answer and get back to them. And if they do react badly, they're a jerk, and if that jerk is your boss, you should seriously consider finding a new job. Production applications are just too complicated for any one person to remember every detail of the back end, the front end, the database, the logging, the monitoring, the API, the deployment process, or whatever other pieces you deal with. It's just not possible for a person to memorize every detail of a complex system, and if you think it is, you have no business managing developers.

Speaking of complexity, a completely reasonable answer to a question might be “Amy built that feature, she would know better than I would.” Just telling someone where to look or who to ask is really helpful, it’ll make it easier for them the next time they need an answer. Sometimes, say if your boss asks you a question, they might need you to do the legwork and go ask Amy for them, but that doesn’t mean it’s not helpful for the next time they have a question about that feature.

Unlike school, very few parts of professional development involve a time limit. Sure, if production is down you're going to want to figure out why as quickly as possible, but even that is more about problem solving skills than it is about having stuff memorized. If my boss asks me a question, it's far more useful to them if I take five minutes to go look it up and make sure it's right than if I spit out an answer right then and there.

Basically none of professional development is closed book, either. That's one of the reasons for my very limited interest in general knowledge questions in technical interviews (the other one is that a stressful, high stakes situation like an interview is going to give you false negatives because people go blank when they actually know the subject perfectly well). Not remembering the difference between a StringBuilder and a StringBuffer is never going to matter in my career when that takes about 30 seconds to Google.

“That’s great,” you say, “but what if I do all my research and I still don’t know what the best option is?” That’s totally fine! Write down what you learned and share that with your boss/whoever asked you the question. I’ve had plenty of conversations with my boss where I showed them what I found and straight up told them “I don’t know what the best thing to do is, but here’s what I found.” As long as you’ve done something to help find the answer you’ve at least saved your boss the time it took to research it, that’s still helpful.

Not knowing the answer is just not an issue as a developer. The issue is if you don’t make an effort to find the answer, and making an effort is something anyone can do.

The dining philosophers

Unrelated image from pexels.com to make this post look nicer in social media shares.

I read this article by Uncle Bob called The Lurn (it's a companion piece to an earlier article of his called The Churn), where he mentioned a lot of topics that he thinks are more important for developers to learn about than just scrambling to keep up with the latest languages and frameworks. Not that you shouldn't learn a few of those, but you hit diminishing returns pretty quickly: not many of them are actually a meaningful leap forward, and eventually you start seeing enough patterns that learning yet another slightly different framework just isn't a good use of your time.

One of the topics he mentioned was the Dining Philosophers problem, which I'd never heard of so I went and looked it up. The problem (quoting wikipedia, of course) is:

Five silent philosophers sit at a round table with bowls of spaghetti. Forks are placed between each pair of adjacent philosophers.

Each philosopher must alternately think and eat. However, a philosopher can only eat spaghetti when they have both left and right forks. Each fork can be held by only one philosopher and so a philosopher can use the fork only if it is not being used by another philosopher. After they finish eating, they need to put down both forks so they become available to others. A philosopher can take the fork on their right or the one on their left as they become available, but cannot start eating before getting both of them.

Eating is not limited by the remaining amounts of spaghetti or stomach space; an infinite supply and an infinite demand are assumed.

The problem is how to design a discipline of behavior (a concurrent algorithm) such that no philosopher will starve; i.e., each can forever continue to alternate between eating and thinking, assuming that no philosopher can know when others may want to eat or think.

Yep, it's a concurrency problem. Specifically, a deadlock problem. Just because it sounds like it should be simple to get all of our philosophers fed doesn't mean it actually is. The simplest solution would be that each philosopher picks up the fork on their left and waits for the fork on their right to become available – which never happens because the philosopher to their right has already taken it and is waiting for the philosopher on their right to put down their fork.
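Here's a rough Java sketch of that naive approach, with all the names made up by me. This is exactly the code you don't want, because if every thread grabs its left fork at the same time, they all wait on each other forever:

```java
// Deliberately deadlock-prone: every philosopher grabs their left fork
// first, so if they all sit down at once, each one holds one fork and
// waits forever for the other.
public class DeadlockingPhilosophers {
    public static void main(String[] args) {
        int n = 5;
        Object[] forks = new Object[n];
        for (int i = 0; i < n; i++) {
            forks[i] = new Object();
        }

        for (int i = 0; i < n; i++) {
            Object left = forks[i];
            Object right = forks[(i + 1) % n];
            new Thread(() -> {
                while (true) {
                    synchronized (left) {         // pick up the left fork...
                        synchronized (right) {    // ...then wait for the right one
                            eat();
                        }
                    }
                }
            }, "philosopher-" + i).start();
        }
    }

    private static void eat() {
        try {
            Thread.sleep(10);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```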

Okay, so let’s give the philosophers a time limit. If you grab one fork but don’t get another one within five minutes, you put your fork down and wait five minutes, then try again. No dice :( This is something called a livelock (that link explains it particularly well), where all the philosophers grab one fork, wait five minutes, put their forks down, wait five more minutes, and try to get two forks again. Forever. It’s almost the same thing as a deadlock except that the processes can still try to do things, they just can’t actually accomplish anything.

That only happens if every philosopher has the same wait time, though. What if we assign each philosopher a different wait time? That sort of works, but the philosopher with the longest wait time is likely to get less spaghetti because they try to pick up a fork fewer times, and if they’re particularly unlucky may get no spaghetti at all. We could try random wait times too, but only if we wanted to tear our hair out trying to figure out why one process is super slow one day and fine the next. Random numbers are out to get you – if you generate enough of them, eventually you’re going to get a long streak of [insert least helpful result here] and the people trying to use the feature that never manages to get two forks will hate you.

Another solution is to number all the forks and tell the philosophers that they must pick up the lower numbered fork first when they need one. The philosophers each grab the lower numbered fork until we get to philosopher number five, who needs to take fork one before fork five, but fork one is already taken so they don’t take any forks. That leaves fork five available to philosopher four, who sets down forks four and five when they finish eating, which lets philosopher three take forks three and four, and so on around the table.

That actually works pretty well if you only ever need two forks, but if you take forks three and four, then discover you need fork one as well, you have to set down the forks you already have and start over again from fork one.
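Here's a rough Java sketch of the numbered forks idea (the method and variable names are mine, not from any particular implementation):

```java
// Resource hierarchy sketch: forks are numbered by their array index and
// every philosopher always locks the lower-numbered fork first, so the
// circular wait that causes the deadlock can never form.
static void dine(Object[] forks, int seat) {
    int left = seat;
    int right = (seat + 1) % forks.length;
    Object lower = forks[Math.min(left, right)];
    Object higher = forks[Math.max(left, right)];

    synchronized (lower) {        // always take the lower-numbered fork first
        synchronized (higher) {   // then the higher-numbered one
            // eat with both forks held
        }
    }
}
```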

Yet another solution is to introduce a waiter and make the philosophers ask them for permission to take forks. The waiter only gives permission to one philosopher at a time to pick up forks, which is great in terms of preventing deadlock but kinda sucks if one philosopher needs forks two and three, and another one needs forks four and five and they can’t get permission to pick up forks at the same time even though they don’t want the same forks.
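Here's a rough Java sketch of the waiter idea. The names are mine, and I'm using a one-permit Semaphore to play the waiter:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

// Waiter sketch: a philosopher has to ask the waiter before picking up
// any forks, and only one philosopher at a time gets permission. That
// prevents the circular wait, but philosophers who want completely
// different forks still end up waiting on each other for permission.
class Waiter {
    private final Semaphore permission = new Semaphore(1);

    void dine(ReentrantLock leftFork, ReentrantLock rightFork) throws InterruptedException {
        permission.acquire();     // ask the waiter for permission to pick up forks
        leftFork.lock();
        rightFork.lock();
        permission.release();     // forks are in hand, the waiter can serve someone else
        try {
            // eat with both forks held
        } finally {
            rightFork.unlock();
            leftFork.unlock();
        }
    }
}
```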

If you relax the constraint about philosophers talking to each other, there’s another solution: all the forks are either clean or dirty (and start out dirty), and all the philosophers are numbered. When two philosophers want the same fork, the lower numbered one gets it. Any philosopher can ask for a fork, and the philosopher who has it must hand it over if it’s dirty but keeps it if it’s clean. When a philosopher finishes eating, they clean their forks and set them down. This prevents deadlocks and fixes the problem of philosophers who haven’t eaten never getting a fork, but (there’s always a but) introduces more overhead in the form of making it possible for philosophers to talk to each other and keep track of who has which fork.

Those are by no means all the possible solutions, but the individual solutions – and there are plenty of them – are less important than the general point of the exercise, which is that concurrency is hard :) Specifically, it’s prone to weird edge cases and is hard to think about because there are so many moving parts.

And now we both know what the dining philosophers problem is all about.

Simple stuff that will make your new team think you’re amazing

Unrelated image from pexels.com to make this post look nicer in social media shares and because Pallas Cats are the best.
Unrelated image from pexels.com to make this post look nicer in social media shares and because Pallas Cats are the best.

Even if you just graduated, there are some simple things you can do to make your new team think you’re amazing, and they don’t even involve any code. Ironically, a huge amount of the stuff that makes someone a really good programmer has nothing to do with sitting down with an editor and writing code. Being an exceptional programmer is as much about making your team more effective as it is about just writing good code yourself.

A big thing that can help your team get more done is to write absolutely everything down when you set up your development environment for the first time at a new job. Everybody means to do this but hardly anyone actually does. No judgement, my current job is the first one where I've done a good job of documenting environment setup and it's my fourth job since graduating from college. If you can, it's even better to write a simple script to set up stuff like environment variables. In some offices everyone gets to choose their own OS and scripts are only so useful, but even a script that only works for one OS at least gives other people something to work with.

Why is this important? Because you are far from the last developer your company is ever going to hire, and the less time the people who come after you spend setting up their environments the more time they can spend learning the codebase and actually making themselves useful. It seems minor, but the hours it takes to set up an environment really add up over multiple hires. It's also incredibly helpful if you're allowed to work from home and need to set up your environment again on your personal computer, or if you switch OSs or need to do a full reinstall or your computer dies and you need to set up a new one. Seriously, just documenting environment setup will make your team think you're amazing and make everyone who comes after you more productive.

But if you want to do even more, it’s also awesome if you can document parts of the codebase as you learn them. No matter how much sense that code makes right now, you will forget it, and again, this is super helpful to everyone who comes after you.

If your team doesn’t have a place to store docs, see if you can start one. At my work we use google apps already, so it was really simple to create a google sites wiki that can only be viewed by people from our organization. If you don’t use google apps, pretty much any tech company is going to have a spare box you could spin up a wiki on. Whatever you do, just make sure there’s some sort of security – if you’re not going to put concrete details about your code in your docs, there’s not much point having them – and that you’re not using a free service that’s going to up and disappear out from under you one day. If nothing else, you can always put some text files on a shared drive and call it good :)

While you're at it, it never hurts to document processes like how to deploy to staging and prod, how to fix common errors, where all of the log files for different services are, and where those different services themselves are. Basically, any time you have a question as a new developer, write the answer down where other people can find it. If you need to know, so will the next dev.

The less time your team spends re-learning code or hunting down a log file, the more everyone gets done. That's something you can help with no matter how new you are.

How does variable scope work?

Photo from create her stock

Scope is where a variable exists and can be accessed in your code, and it's surprisingly complicated.

The short version is that a variable that was declared inside a block only exists inside that block. Great, what’s a block? In Java, it’s anything between a set of curly brackets { }. Blocks can be nested, too. A method inside of a class can see the class’s variables, and a block inside of a method (like an if or a loop) can see the method’s variables. Nesting only works inwards, outer blocks can’t see inside inner blocks. Once a block finishes executing its variables cease to exist, so there’s really nothing for the outer block to see.
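Here's a quick Java sketch of how that nesting works (the class and variable names are just for illustration):

```java
public class ScopeDemo {
    private int classLevel = 1;               // visible to every method in the class

    public void method() {
        int methodLevel = 2;                  // visible anywhere inside this method

        if (classLevel < methodLevel) {
            int blockLevel = 3;               // only exists inside this if block
            System.out.println(classLevel + methodLevel + blockLevel);
        }

        // System.out.println(blockLevel);    // won't compile: blockLevel is out of scope here
    }
}
```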

Java also makes things more complicated with access modifiers: classes, methods, and variables can be public, private, or protected (or have no modifier at all, which means package-private). Those modifiers only apply outside of methods, they don't make sense for variables that are just going to vanish after the method executes. Public things (classes, methods, or fields/class variables) can be accessed from outside the class, private things can only be accessed from inside the class, protected things can be accessed by subclasses of the class where they were declared (and, in Java, by other classes in the same package), and package-private things can be accessed by anything in the same package. Private and protected classes are sort of a weird special case, they only make sense if you have a nested class inside another one.
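Here's a little Java sketch of those modifiers in action (Account and SavingsAccount are made-up names):

```java
public class Account {
    public int publicBalance;        // any class anywhere can use this
    private int privateBalance;      // only code inside Account can use this
    protected int protectedBalance;  // subclasses (and the same package) can use this
    int packageBalance;              // no modifier: anything in the same package can use it
}

class SavingsAccount extends Account {
    void addInterest() {
        protectedBalance += 10;      // fine, SavingsAccount is a subclass of Account
        // privateBalance += 10;     // won't compile: private to Account
    }
}
```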

Just like each time a method is run it gets a fresh copy of all of its variables, each time a class is instantiated it gets a fresh copy of all of its variables too. Unless any of them were declared static, in case you didn't already have enough to keep track of :) The difference between classes and objects created by instantiating those classes was a hard concept for me when I started programming, so don't feel bad if you're confused. Classes aren't things you can interact with, they're just blueprints for things. Once you create an actual thing by instantiating a class, then you can call methods on it. Unless you made that method or variable static, in which case you can use it without actually having an instance of the class. Static variables are special: normally every instantiation of the class gets its own variables, but static variables are shared between all instances of the class.

Why would you want to have just one instance of a variable for all instances of a class you’ll ever have? If you have any constants in the class, they should be declared static (also final, which means, surprisingly enough, that you can never change it) to save memory. If the constant is only ever going to have one value, there’s no reason to waste memory by giving every instance of the class its own copy. You might also want to make something static if it’s a shared resource like a logger. You really only need one logger per class, so again you can save memory by sharing one logger instead of giving every instance its own copy.
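Here's a rough Java sketch of both of those uses of static (PaymentProcessor and the retry counter are made up for illustration):

```java
import java.util.logging.Logger;

public class PaymentProcessor {
    // One shared copy for the whole class, and final so it can never change.
    private static final int MAX_RETRIES = 3;

    // One shared logger for the class instead of one per instance.
    private static final Logger LOGGER = Logger.getLogger(PaymentProcessor.class.getName());

    // Every instance gets its own copy of this one.
    private int retriesSoFar = 0;

    public void process() {
        if (retriesSoFar >= MAX_RETRIES) {
            LOGGER.warning("Giving up after " + MAX_RETRIES + " retries");
            return;
        }
        retriesSoFar++;
        // actually try to process the payment here
    }
}
```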

So that’s all well and good, but why does scope work the way it does? Partially I think it’s for the convenience of the programmer :) How much would it suck if you could never reuse a variable name because all variables were visible anywhere? And how much of a hassle would it be to keep track of which variable you were using versus which one you meant if all of them were visible?

Aside from convenience, there is an actual reason for variable scope to work the way it does. It has to do with the way the computer keeps track of exactly which code is executing at any given time. To oversimplify a bit, when a program is executed it gets two chunks of memory, the stack and the heap. The heap is where objects live, and the stack is where the computer keeps track of where it is in the program and what values all the variables have. When method a calls method b, the computer stores the state of method a on the stack, then creates a new state for method b with all of its variables and stores it on the stack too. When method b finishes, the computer keeps its return value (if it returned anything) to update method a's state with, then throws away the state for method b. That's what I was talking about when I said that variables cease to exist when a block finishes executing – the computer throws out its only record of what values those variables had. You can read more about memory and execution in this article, which has some pictures to help visualize what's going on.
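Here's a tiny Java example of that stack behaviour. The method names a and b match the explanation above, everything else is made up:

```java
public class StackDemo {
    public static void main(String[] args) {
        int total = a();          // main's frame waits on the stack while a() runs
        System.out.println(total);
    }

    static int a() {
        int x = 2;                // lives in a()'s stack frame
        int y = b(x);             // a()'s frame is paused while b() gets its own frame
        return x + y;             // by now b()'s frame has already been thrown away
    }

    static int b(int value) {
        int doubled = value * 2;  // lives in b()'s frame, gone as soon as b() returns
        return doubled;
    }
}
```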

What about static variables, if they live in the stack why don’t they disappear? My understanding is that they actually live in the heap (where stuff stays until you deliberately get rid of it), and what goes on the stack is just the address of where the static variable lives in the heap.

Scope might seem really simple if you’ve been programming for a little while, but let me throw a wrench into that idea: multi-threading. With multiple threads running the same code, it can be really difficult to figure out why on earth your variable suddenly has a value that makes no sense. Not that I’ve ever spent multiple days swearing at my computer for making no sense ;)
