Reporting Automated Test Results Effectively

Summary:
The modern iterative software development lifecycle has developers checking in code to version control systems frequently, with continuous integration handling building and running automated tests at an almost equally fast rate. This can generate an enormous amount of test data. Here’s how you can ensure you are reporting results effectively across your team and realizing all the benefits of that information.

The modern iterative software development lifecycle has developers checking in code to version control systems frequently, with continuous integration handling building and running automated tests at an almost equally fast rate. Depending on the size of the development team, this can generate an enormous amount of test results data. What should you do with it?

I have been responsible for automated test infrastructure, including reporting of results, across large teams in the video game industry. I have found that when reported effectively, test results data generated from automation can help teams ship faster; find, isolate, and fix bugs faster; prevent regressions; and even track performance improvements.

Here’s what you can do to ensure you are reporting results effectively across your team.

Storage Is Key

Results data should not be ephemeral. It’s common for results data to be thrown away immediately after it’s generated, but that is a mistake. This is the first thing to get right: Store all of your test results for at least a couple of weeks, or preferably a month.
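As a minimal sketch of what this can look like, the snippet below persists results to a small SQLite database and keeps roughly a month of history. The table and column names are illustrative, not a prescribed schema.

import sqlite3

# Illustrative schema: one row per test result, keyed to the CI run and build that produced it.
conn = sqlite3.connect("test_results.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS test_results (
        run_id       TEXT,      -- identifier of the CI run
        build_number INTEGER,   -- build or revision the run tested
        test_name    TEXT,
        status       TEXT,      -- 'pass', 'fail', 'skip'
        duration_s   REAL,
        recorded_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")

def record_result(run_id, build_number, test_name, status, duration_s):
    """Store a single result instead of throwing it away."""
    conn.execute(
        "INSERT INTO test_results (run_id, build_number, test_name, status, duration_s) "
        "VALUES (?, ?, ?, ?, ?)",
        (run_id, build_number, test_name, status, duration_s),
    )
    conn.commit()

def prune_old_results(days=30):
    """Keep roughly a month of history, as suggested above."""
    conn.execute(
        "DELETE FROM test_results WHERE recorded_at < datetime('now', ?)",
        ("-%d days" % days,),
    )
    conn.commit()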

Looking at the results of a single test run in isolation is better than nothing, but keeping a history of all your test runs lets you make comparisons: you can track progress, identify regressions, and even find flaky tests, which can uncover issues that are not always reproducible, such as memory allocation problems or race conditions.
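For example, a test that has both passed and failed within the same recent window is a good flakiness candidate. Here is a hedged sketch that builds on the illustrative test_results table above:

def find_flaky_tests(conn, days=14):
    """Tests that recorded both passes and failures in the recent window
    are flakiness candidates worth investigating for races or allocation issues."""
    rows = conn.execute(
        """
        SELECT test_name
        FROM test_results
        WHERE recorded_at >= datetime('now', ?)
        GROUP BY test_name
        HAVING SUM(status = 'pass') > 0 AND SUM(status = 'fail') > 0
        """,
        ("-%d days" % days,),
    ).fetchall()
    return [name for (name,) in rows]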

Most importantly, storing results data allows you to run analytics on it. By storing the revision number or build number with each test run, a developer can narrow down exactly which changes introduced a failure; I have seen developers save hours of their time this way. You can also quickly find out which change fixed an issue and separate new failures from existing failures.
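As an illustration of how a stored build number enables this, the query below (again against the illustrative table) brackets the suspect range between the last build where a test passed and the first build where it failed:

def suspect_build_range(conn, test_name):
    """Return (last_passing_build, first_failing_build) for a test.
    The changes that introduced the failure landed somewhere in between."""
    last_pass = conn.execute(
        "SELECT MAX(build_number) FROM test_results "
        "WHERE test_name = ? AND status = 'pass'",
        (test_name,),
    ).fetchone()[0]
    first_fail = conn.execute(
        "SELECT MIN(build_number) FROM test_results "
        "WHERE test_name = ? AND status = 'fail' AND build_number > ?",
        (test_name, last_pass if last_pass is not None else 0),
    ).fetchone()[0]
    return last_pass, first_fail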

Historical results trends can even help improve process. One team leader decided to change a process after looking at the results trends for his team, which showed recurring dips in results after check-ins made late at certain times of the week.

More Than Pass or Fail Results

Files and artifacts generated by automated tests, such as logs, screen captures, and performance data files, should also be stored with the results data. All of this data should be easily accessible to everyone on your team, whether it comes from the last test run or from this time three days ago. Too often the only way to find it is to have a build engineer remote into a server somewhere. Do not make your team rely on only one or two people to get at this information; it should be available to everyone.
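One way to do this, sketched here, is to copy artifacts to a shared location everyone can reach and record where they went alongside the results; the shared path below is hypothetical.

import shutil
from pathlib import Path

SHARED_ARTIFACT_ROOT = Path("/mnt/test-artifacts")  # hypothetical team-accessible share

def store_artifacts(conn, run_id, test_name, artifact_paths):
    """Copy logs, screen captures, and performance files to shared storage
    and record their locations next to the results, so no one has to
    remote into a build server to find them."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS test_artifacts "
        "(run_id TEXT, test_name TEXT, artifact_path TEXT)"
    )
    dest_dir = SHARED_ARTIFACT_ROOT / run_id / test_name
    dest_dir.mkdir(parents=True, exist_ok=True)
    for source in artifact_paths:
        dest = dest_dir / Path(source).name
        shutil.copy2(source, dest)
        conn.execute(
            "INSERT INTO test_artifacts (run_id, test_name, artifact_path) VALUES (?, ?, ?)",
            (run_id, test_name, str(dest)),
        )
    conn.commit()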

Logs should be detailed enough to give developers precise information to fix issues faster. Make sure call stack information and verbose, step-by-step execution details are included in the report. This way, even system-level and integration tests can point to the exact source of a problem, which is critical if your team has few unit tests (or none at all).
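Here is a minimal sketch of that kind of step-by-step logging, assuming a Python-based harness; the game_client calls in the usage comments are hypothetical.

import logging
import traceback

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("integration_test")

def run_step(description, action):
    """Log every step as it executes, and capture the full call stack on failure
    so the report points at the exact source of the problem."""
    log.info("STEP: %s", description)
    try:
        return action()
    except Exception:
        log.error("STEP FAILED: %s\n%s", description, traceback.format_exc())
        raise

# Usage: each phase of a system-level test becomes a named, logged step.
# run_step("load level 3", lambda: game_client.load_level(3))       # hypothetical API
# run_step("spawn 50 enemies", lambda: game_client.spawn(count=50)) # hypothetical API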

Often, teams use catch-all integration or end-to-end tests to get more coverage for less test-authoring time. Solid reporting of exactly where these tests failed is critical to making the failures actionable; without it, a developer has to go through a long process just to reproduce the issue.

If your tests store performance data, you can plot trends to see how things are progressing over time. This is one of the most useful bonus outcomes of automated tests and storage of results data.
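As a sketch, assuming a metric such as test duration is stored per build in the illustrative table above, a trend plot can be as simple as this:

import matplotlib.pyplot as plt

def plot_duration_trend(conn, test_name):
    """Plot a stored metric (here, test duration) against build number
    to see how it is trending over time."""
    rows = conn.execute(
        "SELECT build_number, duration_s FROM test_results "
        "WHERE test_name = ? AND duration_s IS NOT NULL "
        "ORDER BY build_number",
        (test_name,),
    ).fetchall()
    builds = [b for b, _ in rows]
    durations = [d for _, d in rows]
    plt.plot(builds, durations, marker="o")
    plt.xlabel("Build number")
    plt.ylabel("Duration (seconds)")
    plt.title("Trend for %s" % test_name)
    plt.savefig("%s_trend.png" % test_name)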

The Hidden Benefit from Visible Data

Making automated test results more visibly prominent has an interesting hidden benefit, one you will not notice until you actually do it: Reporting automated test results effectively sets off a virtuous cycle, driving the writing of more and better tests, increasing coverage, and rapidly improving quality.

I did not see this coming when I was implementing reporting for my team a few years ago. I just needed a way to report results, so I put something together to do it. Suddenly, for the first time, there was massive interest in the automated test results throughout the development team, especially among leads.

These tests had existed for a while; I’d just made them more visible, so now people could see how often tests were running, what they were testing, and what they were not. There was an uptick in the number of new tests being written, existing tests were improved to make them more reliable, and management and leads started to get more invested in automation and asked for more of it. Acting on failures became a priority. Getting to a 100 percent pass rate across all branches and build flavors, and maintaining it, became something of a sport in our team.

I have reflected on why this sudden change in culture occurred, when there had been a general apathy toward automated tests and continuous integration across the wider team before. I think it came down to three reasons:

  • Visibility increases mindshare
  • There is greater clarity around return on investment
  • People like to be rewarded for their hard work

We started putting our results on an office TV screen, and they were also made accessible to everyone via their browsers. While the same tests had existed before, their results had been written to a console log, with files left in a temp directory on a build server somewhere that no one other than the build engineer could access. They were out of sight and out of mind, so no one understood the importance and value of the automation.

The return on investment was also unclear without visibility. Now we could see how many bugs we were finding, and we discovered that with better reporting, we could quickly identify and narrow down the causes and fixes. This helped improve the reliability of the tests, too: with the results right in front of our eyes, constantly seeing unreliable tests swing the pass rate was no longer tenable; it made a mockery of our testing.

Lastly, feature code work is obviously rewarding; you ship a product that people use, and that is a highly visible reward. Writing automated tests, however, did not have a visible reward before. No one really knew about the tests or how important they were. Solid reporting changed that: writing tests became rewarding, because now everyone could see them.

Here’s another tip: Try making the test author’s name visible next to the test, along with the name of whoever fixed the test after it broke. Then people can see who wrote the tests, who fixed them, and when those tests caught bugs that would otherwise have let us ship in a deteriorated state. This gives visible credit to both the test author and the fixer.
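One lightweight way to do this, sketched below, is to tag each test with its author and have the reporting layer read that attribute; the decorator, the names, and the test itself are purely illustrative.

def owner(author_name):
    """Attach an author name to a test function so the report can display it."""
    def decorate(test_func):
        test_func.owner = author_name
        return test_func
    return decorate

@owner("jane.doe")              # hypothetical team member
def test_matchmaking_queue():   # hypothetical test
    ...

# The reporting layer reads test_matchmaking_queue.owner and shows the name
# next to the result, crediting the author (and whoever last fixed the test).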

Transform your team’s automated test workflow today with effective reporting. You will find you ship higher-quality releases faster.
