
How much automated testing is enough automated testing?

I had been a proponent of automated testing for a long time before we really adopted it at my last company. It was one of those things people wanted to do but never felt they had the time for. We only started writing tests in earnest at the start of a new project, a rewrite of an existing product. The fact that from day one we could create and maintain 100% test coverage was a strong driver in writing more tests. I knew that code coverage was an imperfect metric, but it's so easy to track that it's natural to use it. This, I think, is the key problem with automated testing that leads people to believe 100% is what they should be aiming for: when measuring coverage is so easy, and 100% coverage of your first piece of code is achievable, it's natural to try to maintain that.

I think a lot of people who haven't really considered which tests they should write also assume 100% coverage is what they should aim for. That seems to be the default opinion people bring to the question, if they even realise there is a question to be asked.

Why 100% test coverage is actually a bad goal

I believe you should definitely not try to write "as much automation as possible" or fall into the common trap of thinking that 100% automation is an ideal to be aimed for.

Writing automated tests takes time, running them takes time and maintaining them takes time. Imagine, for the sake of argument, that only 50% of your tests are useful. You have wasted the time spent writing the useless half, you have to wait for that half to finish executing before you get the results of the useful tests, and you have to keep maintaining it. Most of the work in software development is in maintenance, not in originally writing the code. Less code is a laudable aim, and that applies to your tests too.

So, you may well ask, how can we decide whether we've written enough tests for a piece of code? Which is lucky, because that's exactly what I'm going to tell you next :).

How to know if you've written enough tests

I propose splitting your tests into four groups:

  1. Core functionality of your app. If your app is a website monitoring tool then this would probably be any alerts which need to fire if a site has problems. If these go wrong your app isn't serving its core purpose so having automated regression tests to run on every code change makes sense.
  2. The basics (often called a smoke test). For example, you could use Selenium to click through all the links in a web app and make sure the server responds with a 200 for each page (there's a sketch of this after the list). Note, I don't mean checking the content of the pages, just that you can navigate the app and it's not obviously broken. These tests ensure any manual testers aren't having their time (and morale) wasted on obviously broken builds.
  3. Anything fragile which is likely to break. This is hard to define for new code, but if you're working on some legacy code which you know to be bug-prone, then building tests around any changes you make is a good way to work towards being able to refactor more safely.
  4. Things the developers wanted to automate because it was easy to do. Sometimes it makes sense to add tests while you're building something; that shouldn't be discouraged.
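
To make the smoke test in point 2 concrete, here is a minimal sketch in Python. It assumes Selenium, the requests library and a Chrome driver are installed; the BASE_URL and the use of requests to check status codes (Selenium itself doesn't expose them) are illustrative choices rather than a prescription.

```python
# Minimal smoke-test sketch: collect the links on the landing page with
# Selenium, then check each one returns HTTP 200 via requests.
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "http://localhost:8000"  # assumption: the app is running locally


def test_smoke_all_links_respond():
    driver = webdriver.Chrome()
    try:
        driver.get(BASE_URL)
        # Gather every internal link on the page.
        hrefs = [a.get_attribute("href")
                 for a in driver.find_elements(By.TAG_NAME, "a")]
        hrefs = [h for h in hrefs if h and h.startswith(BASE_URL)]
        # Selenium can't see HTTP status codes, so re-request each URL directly.
        for href in hrefs:
            response = requests.get(href)
            assert response.status_code == 200, f"{href} returned {response.status_code}"
    finally:
        driver.quit()
```

A real smoke test would usually crawl beyond the landing page, but even this much is enough to catch an obviously broken build before a manual tester sees it.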

By structuring your test cases along the above lines, when one fails people know why it was created. If it's the test that is wrong, and not the code under test, they can see why it's worth fixing rather than deleting or disabling it; or, if it genuinely isn't worth maintaining, they can delete it with a clear conscience. Note, I would suggest explicitly laying out your test code along these four categories so anyone reading it knows which group a test belongs to.
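
If you're using pytest, one lightweight way to make the grouping explicit is to register a marker per category, so every test declares which group it belongs to. This is a sketch of the idea rather than a prescription; the marker names (core, smoke, fragile, convenience) are just illustrative.

```python
# conftest.py — register one marker per test category so the grouping is
# visible in the code (and typos are rejected if you run with --strict-markers).
def pytest_configure(config):
    for name, description in [
        ("core", "core functionality of the app"),
        ("smoke", "basic navigation / smoke tests"),
        ("fragile", "tests guarding known bug-prone code"),
        ("convenience", "tests added because they were easy to write"),
    ]:
        config.addinivalue_line("markers", f"{name}: {description}")
```

```python
# test_alerts.py — an individual test then states its category up front.
import pytest


@pytest.mark.core
def test_alert_fires_when_site_is_down():
    ...
```

You can then run just one group (for example `pytest -m core`), and a reviewer or future maintainer can see at a glance why each test exists.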

Code review of the tests then becomes easier, as a reviewer can look at the functionality under test and form their own opinion about which tests should appear in each category. This is where a code coverage report is useful: a scan through it as part of a code review may show up something you think should be tested.
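
If you don't already have a coverage report wired into your pipeline, here is a minimal sketch of producing one with the coverage.py API so it can be skimmed during review. It assumes the coverage and pytest packages are installed; the "myapp" package name and "tests/" path are placeholders.

```python
# Sketch: run the test suite under coverage measurement and emit reports.
import coverage
import pytest

cov = coverage.Coverage(source=["myapp"])  # placeholder package name
cov.start()
pytest.main(["tests/"])                    # run the suite while measuring
cov.stop()
cov.save()
cov.report(show_missing=True)              # terminal summary with missed lines
cov.html_report(directory="htmlcov")       # browsable report to skim in review
```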

You can obviously adapt the above to your own needs, but I think the categories are general enough to suit most teams.
