Why I Unit Test

If you’ve done any software development in the last fifteen years, you’ve heard people harping on the importance of unit testing. Your manager might have come to you and said “Unit tests are great! They document the code, reduce the risk of adding bugs, and reduce the cost and risk of making changes so we don’t slow down over time! With good unit tests, we can increase overall delivery velocity!” Those are all great reasons to unit test, but they are all fundamentally management reasons. I agree with them, but they don’t go to the core of why I, as a developer, unit test.

The reason I unit test is simple: Unit testing is both an opportunity and a strong incentive to improve new and existing designs, and to improve my skills as a designer of software. The trick is to write as few unit tests as possible and ensure that each test is very simple.

How does that work? It works because writing simple unit tests is intrinsically boring, and the worse your code is, the more difficult and boring it will be to test. The only way to get any traction with unit testing is to drastically improve your implementation to the point where it can be covered with hardly any unit tests at all, and then write those.

Avoiding unit tests by improving your implementation

Here are some approaches for writing fewer unit tests:

  • Refactor out repeated code. Each block of code that you are able to abstract out is one less unit test to write.
  • Delete dead code. You don’t have to write unit tests for code that you can delete instead. If you think this is obvious, then you haven’t seen many large legacy code bases.
  • Externalize framework boilerplate as configuration or annotation. That way, you only have to write unit tests for product logic rather than scaffolding.
  • Every branch of code needs at least one unit test, so every if statement or loop you can remove is one less test to write. Depending on your implementation language, if statements and loops can be removed by subtype polymorphism, code motion, pluggable strategies, aspects, decorators, higher order combinators or a dozen other techniques. Each branch point in your code is both a weakness and a requirement for additional testing. Remove them if at all possible.
  • Identify deeper data-flow patterns and abstract them. Often pieces of code that don’t look similar can be made similar by pulling out some incidental computations. Once you’ve done that, then underlying structures can be merged. That way, more and more of your code becomes trivially testable branch-free computations. In the limit, you end up with a bunch of simple semantic routines (often predicates or simple data transformations) strung together with a double handful of reusable control patterns.
  • Separate out your business logic, persistence, and inter-process communications as much as possible, and you can avoid a bunch of tedious mucking with mock objects. Mock objects are code smells, and overuse of them may indicate that your code has become overly coupled.
  • Figure out how to generalize your logic so that your edge cases are covered by your main flow, and single tests can cover diverse and complex inputs. Too often we write single-purpose code for special cases, when we could instead search for more general solutions that cover those cases without special handling. Note however, that discovering the simpler, more general solutions is often much more difficult than creating a bunch of special cases. You may not have enough time to write small amounts of simple code, and instead have to write large amounts of complex code.
  • Recognize and replace logic that is already implemented as methods in existing libraries, and you can push the trouble of unit testing off onto the library’s author.
  • If you can simplify your data objects so much that they are immutable and their operations follow simple algebraic laws, you can utilize property-based testing, where your unit tests literally write themselves.
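To make the branch-removal bullet concrete, here is a small sketch under an assumed example: a hypothetical shipping-cost calculator (the tier names and rates are invented for illustration, not taken from any real system). The branchy version needs a test per `if` arm; the table-driven version reduces the control logic to a single lookup, leaving only trivially testable one-line strategies.

```java
import java.util.Map;
import java.util.function.DoubleUnaryOperator;

public class ShippingCost {
    // Branchy version: every arm is a separate path needing its own test.
    static double branchy(String tier, double weight) {
        if (tier.equals("express")) {
            return 10.0 + 2.5 * weight;
        } else if (tier.equals("standard")) {
            return 5.0 + 1.0 * weight;
        } else {
            return 0.5 * weight;
        }
    }

    // Branch-free version: a pluggable strategy per tier. Each strategy is a
    // one-line function, and the lookup itself needs only a single test.
    static final Map<String, DoubleUnaryOperator> STRATEGIES = Map.of(
        "express",  w -> 10.0 + 2.5 * w,
        "standard", w -> 5.0 + 1.0 * w,
        "economy",  w -> 0.5 * w);

    static double strategic(String tier, double weight) {
        // Unknown tiers fall back to the economy rate.
        return STRATEGIES.getOrDefault(tier, w -> 0.5 * w)
                         .applyAsDouble(weight);
    }
}
```

Adding a new tier to the branchy version means another branch and another batch of tests; adding one to the map means one new entry and one new one-line test.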

But yammering is cheap, let’s see some code!

Finding deep patterns and abstracting out repeated code

A common pattern in data-science code is to find the element of some collection for which some scoring function is maximized. The simplest Java code for this might resemble the following:

  double bestValue = Double.MIN_VALUE;
  Job bestJob = null;
  for (Job job : jobs) {
    if (score(job) > bestValue) {
      bestValue = score(job);
      bestJob = job;
    }
  }
  return bestJob;

This is quick enough to code that you might write it without even thinking about it. Just a loop and an if! What can go wrong? That’s fine the first few times you write it, but you’re building up technical debt every time. Writing unit tests is where the repetition and risk start to really show up. Every block of code like this will need tests not just for correctness in the common case, but also for a bunch of edge cases: what happens if we pass in an empty collection? A single-element collection? null? Even the simple code above has some bugs that unit tests can find, but you have to write a lot of them every time you wish to do an optimization, and I don’t know about you, but frankly I’ve got more useful things to do with my time.
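As an aside, one of those bugs is hiding in the very first line: `Double.MIN_VALUE` is the smallest *positive* double, not the most negative one, so the loop silently returns null whenever every score is negative. A quick sketch, using a plain list of doubles in place of the hypothetical `Job` type so the score is just the value itself:

```java
import java.util.List;

public class ArgMaxBug {
    // Same shape as the loop above, with the score inlined as the value.
    static Double best(List<Double> scores) {
        double bestValue = Double.MIN_VALUE;  // bug: smallest POSITIVE double
        Double best = null;
        for (Double s : scores) {
            if (s > bestValue) {
                bestValue = s;
                best = s;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // All-negative inputs: no value ever exceeds Double.MIN_VALUE,
        // so no element is ever selected.
        System.out.println(best(List.of(-3.0, -1.0, -2.0)));  // prints null
    }
}
```

Exactly the kind of edge case a unit test would catch, and exactly the kind of test you would have to re-write for every copy of this loop.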

A better solution is to realize that even this small amount of code repetition can and should be abstracted out, coded and tested only once. It also gives us a chance to genericize the code and fix some edge cases.

    public static <J> J argMax(Iterable<J> collection,
                               Function<J, Double> score) {
      double bestValue = Double.MIN_VALUE;
      J bestElement = null;
      if (collection != null) {
        for (J element : collection) {
          if (score.apply(element) > bestValue) {
            bestValue = score.apply(element);
            bestElement = element;
          }
        }
      }
      return bestElement;
    }

This code needs to be unit tested only once. For an even better solution, we can replace all of this logic with a library call (in this case from Google’s Guava library):

  public static <J> J argMax(Iterable<J> collection,
                             Function<J, Double> score) {
    return Ordering.natural().onResultOf(score).max(collection);
  }

After that, you only need unit tests for each different scoring function you use. Everything else has already been handled.
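If Guava isn’t available, the same one-liner has existed in the JDK itself since Java 8. A sketch of a stream-based equivalent (note that, unlike the null-returning loop, it reports the empty-collection case as `Optional.empty()`, removing yet another edge case from your test burden):

```java
import java.util.Collection;
import java.util.Comparator;
import java.util.Optional;
import java.util.function.ToDoubleFunction;

public class ArgMax {
    // Stream-based equivalent of the Guava call: the comparator is built
    // from the scoring function, and max() does all the iteration.
    static <J> Optional<J> argMax(Collection<J> collection,
                                  ToDoubleFunction<J> score) {
        return collection.stream()
                         .max(Comparator.comparingDouble(score));
    }
}
```

Usage is a one-liner as well: `argMax(jobs, job -> score(job))`, or with any method reference that yields a numeric score.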

Avoiding unit tests: a path to understanding great software design

The thing about all of these unit-test avoidance techniques is that they are essential to the process of creating robust and supple designs even if you weren’t going to do any unit testing at all! Too often, in our rush to simply get something working, we don’t follow these techniques, but continual unit testing gives us a time and a reason to do it right. In this way, you can leverage aggressive laziness in implementing unit tests to drive continuous improvement of your project design and implementation.

At least, it can if you let it. If you spend your unit testing time writing unit tests for your code without improving its underlying design, you’ll most likely never learn anything, and you’ll have little reason to create code with quality better than “it mostly works.” If you spend your unit testing time looking to minimize the total amount of testing code that you write (by improving your product code), you’ll quickly learn just what it means for software to be well-designed. I don’t know about you, but that’s why I love programming in the first place.

Dave Griffith is a software engineer at Indeed and has been building software systems for over 20 years.
