Testing is Essential to Agile SCM

Summary:
Rather than being an afterthought for SCM, an appropriate testing strategy is what enables SCM in an agile environment. To be more agile, you need to avoid the silo-based perspective of development, SCM, and testing as three separate disciplines. Instead, think about how the processes in one part of your development ecosystem affect what you can do in the others.

Occasionally we hear the question, "What does testing have to do with software configuration management, anyway?" Testing is essential to agile SCM environments, and agile SCM environments are built on testing. To explain this idea, we need to talk about what SCM is and how testing helps.

A software configuration management system is in place to manage change. In a 1990 Software Engineering Institute Technical Report, Susan Dart said, “The goals of using CM are to ensure the integrity of a product and to make its evolution more manageable.”

The traditional approaches to software change management focus on the software lifecycle, identifying and controlling which changes happen in which context. Identifying the facts of changes is important, but what matters most to many stakeholders is less the fact of a change than its impact on them. People care that their system works as expected and that functionality was added as desired, not removed accidentally. We want the value of our configurations to increase over time rather than decrease! To verify these things, you need to test the working application. In this sense, you cannot fulfill the goals of SCM without testing.

As a system evolves, you want to know at what point risks increase. The more frequent and automated the testing, the easier it is to identify the point in time when a system is in a configuration that does not work. This is where something like continuous integration comes in. In his recent book Continuous Integration: Improving Software Quality and Reducing Risk, Paul Duvall discusses how a CI system is a central part of the software quality lifecycle. It allows you to test not only functionality but also various metrics.
We will now describe the trade-offs between an identification-based approach to SCM and a verification-based approach.



Stability and Progress



CM environments balance:

  • Stability:  How certain you can be that the code works at any given point
  • Progress:  How quickly a codeline evolves to encompass new features or fixes

How to set the balance varies with the needs of the project. Stability is always important since, without stability, it is difficult, if not impossible, to make progress. Stable in this sense means that little changes, but by that definition a stable codeline can become stale rather quickly. A more useful definition of stable is that the quality of the code stays as good as it was. With this sense of stability, positive change can happen.
One way to ensure stability is to add controls and processes to ensure quality, but it is easy for the controls to interfere with the business of the organization: delivering new functionality. The classic example is requiring that changes go through extensive (manual) review before being worked on and then through extensive (and time-intensive) testing before being accepted. The idea behind this is well meaning: software is complicated and we don't know the effect of a change, so let's be very cautious in making changes. Such rules will improve stability, at least on the surface, but at great risk to your rate of progress.

In our previous article, The Illusion of Control, we discuss why it is often better to have more visibility into the effects of a change than to attempt to guarantee that a change will be "safe." For more on how to emphasize transparency over traceability, see our September 2007 article, Lean-Agile Traceability: Strategies and Solutions.

Another approach is to accept that it is extremely difficult to understand the effect of a change and, instead, to change the criteria for approving a change: did it do what it was expected to do, and did it break any existing functionality? You can do this by having a codeline policy that requires that every change:

  • Include new unit tests
  • Pass a workspace build, including unit tests
  • Pass an integration build, including all unit and integration tests
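
An automated gate for such a policy can be quite small. Below is a minimal sketch in Python; the "make build" and "pytest" commands and the tests/ directory layout are assumptions, so substitute whatever build and test tooling your project actually uses.

    #!/usr/bin/env python3
    """Sketch of an automated codeline-policy gate.

    Assumptions (replace with your project's real commands):
      - the workspace builds with "make build"
      - unit tests live under tests/unit, integration tests under
        tests/integration, and both run with pytest
    """
    import subprocess
    import sys

    # Each policy rule maps to a command that must succeed before the
    # change is allowed onto the codeline.
    POLICY_CHECKS = [
        ("workspace build", ["make", "build"]),
        ("unit tests", ["pytest", "tests/unit"]),
        ("integration tests", ["pytest", "tests/integration"]),
    ]

    def main() -> int:
        for name, command in POLICY_CHECKS:
            print(f"Running policy check: {name}")
            if subprocess.run(command).returncode != 0:
                print(f"Policy check '{name}' failed; rejecting the change.")
                return 1
        print("All policy checks passed; the change can be submitted.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

A script like this can run as a pre-submit hook in the developer's workspace or as the first step of the integration build; what matters is that the same criteria are applied mechanically every time.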

Each of these rules can be validated through an automated process like the one sketched above. Test coverage tools allow you to check whether new code has unit tests. Testing can thus check not only functional compliance (the code still works based on the tests) but process compliance as well (the metrics that we consider important are also met).
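
As a concrete example of process compliance, here is a minimal sketch that enforces a coverage floor as part of the build. It assumes coverage.py has produced a Cobertura-style coverage.xml (for example, via "coverage xml" or pytest-cov); the 80 percent threshold is a placeholder for whatever level your team agrees on.

    # Fail the build if overall line coverage drops below the agreed floor.
    # Assumes a Cobertura-style coverage.xml produced by coverage.py.
    import sys
    import xml.etree.ElementTree as ET

    THRESHOLD = 0.80  # placeholder policy: at least 80% line coverage

    root = ET.parse("coverage.xml").getroot()
    line_rate = float(root.get("line-rate"))  # overall coverage, 0.0 to 1.0

    print(f"Line coverage: {line_rate:.1%} (floor: {THRESHOLD:.0%})")
    if line_rate < THRESHOLD:
        print("Coverage is below the agreed floor; failing the build.")
        sys.exit(1)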

Continuous Integration: Improving Software Quality and Reducing Risk has excellent practical advice on how to use your build process to measure various quality and policy metrics.



Another difference between this point of view and traditional SCM is that traditional SCM is often event-based, focused on baselines, individual changes, and the like. Lean/agile SCM focuses on managing the flow of change across the value stream. In an event-based model, the fact that a developer made a change is of primary interest, and the infrastructure focuses on tracking (and perhaps preventing) developers' changes.

In the model that we are discussing here, the item of interest is the impact that the developer's change had on the system. We can use criteria like code quality to initiate action, roll back changes, and so on. The difference is one of priorities and focus. We still care about tracking the various events, as they allow us to recover when things start going in the wrong direction, but we want to report on the impacts, not simply the events.



Tests


Much has been written about the different kinds of tests (functional, integration, unit) and about those who write them (developers, QA engineers, etc.), so we won't cover that here. It is important to understand, though, that there are many points in the software development timeline where testing happens, and each has an impact on SCM. (A sketch of how these testing tiers can be kept separate follows the list.)

  • During coding: Software developers write unit tests for any changes or additions they are making and frequently run the unit tests for other parts of the code to ensure that their change did not break anything. This is also a good time to extend the functional test suite. When a developer feels that her work is ready to share, she updates her workspace, does a final build, and runs the unit test suite.
  • Once code is submitted, an automated integration build runs. This build might run all the unit tests as well as any integration tests.
  • Periodically, nightly or more frequently if possible, longer-running automated regression or functional tests are run.
  • As new features appear, manual testing can cover those features that do not yet have automated tests.
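
One lightweight way to keep these tiers distinct is to tag tests by tier and let each stage of the pipeline select what it runs. The sketch below uses pytest markers; the marker names "integration" and "nightly" are our own invention and would need to be registered in pytest.ini.

    import pytest

    def parse_amount(text: str) -> int:
        """Toy function under test: converts "12.50" to 1250 cents."""
        dollars, cents = text.split(".")
        return int(dollars) * 100 + int(cents)

    def test_parse_amount():
        # Unmarked unit test: fast, runs in every workspace build.
        assert parse_amount("12.50") == 1250

    @pytest.mark.integration
    def test_parse_amount_post_submit():
        # Stand-in for a slower test that touches external systems;
        # the integration build selects it with: pytest -m integration
        assert parse_amount("0.01") == 1

    @pytest.mark.nightly
    def test_parse_amount_regression_sweep():
        # Long-running regression sweep, run nightly: pytest -m nightly
        assert all(parse_amount(f"{d}.00") == d * 100 for d in range(10_000))

The workspace build might run pytest -m "not integration and not nightly", the post-submit build would add the integration tier, and the nightly job would run everything.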

Roles

The "SCM is testing" position also frustrates people because it seems to be placing an QA function (testing) in the hands of another team (release engineering). To be able to successfully respond to change you need to forget about those boundaries, and think of SCM as being an element (perhaps a central element) of the software development environment

As mentioned above, there is a sense in some circles that testing is not relevant to the SCM community because testing is not part of build management or release engineering. The problem with this idea is that if SCM is about ensuring the integrity of the product, what other mechanisms do we have to do this? While we often speak in favor of cross-functional teams, where people do not have traditional roles and everyone has a broad skill set, many organizations still maintain some sense of boundaries. Within those boundaries, each role contributes to testing:

  • The software developer is responsible for writing unit tests for any code she touches and for running the available tests to verify the stability of the codeline. She might also work with the QA engineer to write functional tests, and she might help maintain build scripts for the modules she understands architecturally.
  • The QA engineer can help define functional and integration tests and can look at incorporating test coverage reporting into the automated suite, both to ensure that code is appropriately exercised by those tests and to maintain the desired level of test coverage for new functionality.
  • The release engineer can enable the development team to do local builds and can help define criteria for acceptance at various levels.

Process and Workflow Testing


Many companies use workflow tools to help automate their process, e.g., having defects reported and taken through a certain lifecycle with different sign-off points and different levels of authority. Agile teams tend to keep these processes as simple as possible, allowing people to make decisions flexibly based on their own judgment. What happens, though, if you are in a situation where this is not deemed sufficient? Don't forget that changing a process requires testing to ensure that the changed process does what you intend. It is very common to find organizations making process changes on the live system in a rather uncontrolled manner.

So, treat your workflow process as something that needs:

  • Change control and configuration management:  Can you reliably report on what was changed and when? Can you roll back changes if they didn't work?
  • Testing, as for code:  Can you automate tests that make sure the workflow still works and that a minor change hasn't broken something else?
  • Release:  Do you have a test environment so that changes can be made offline, tested and then applied to the live environment?
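
To make "testing, as for code" concrete, here is a minimal sketch that treats the defect workflow itself as something with unit tests. The states and transitions are hypothetical; they should mirror whatever lifecycle your tracking tool actually enforces.

    # A toy model of a defect lifecycle, with tests that pin down the
    # rules we care about. If someone edits the workflow, these tests
    # tell us whether the change broke an agreed-upon rule.
    ALLOWED_TRANSITIONS = {
        "open":        {"in_progress", "rejected"},
        "in_progress": {"resolved", "open"},
        "resolved":    {"closed", "open"},   # "open" here means reopened by QA
        "rejected":    set(),
        "closed":      set(),
    }

    def can_transition(src: str, dst: str) -> bool:
        return dst in ALLOWED_TRANSITIONS.get(src, set())

    def test_defect_cannot_skip_verification():
        # A defect must be resolved (and verified) before it is closed.
        assert not can_transition("in_progress", "closed")

    def test_rejected_is_terminal():
        assert not any(can_transition("rejected", s) for s in ALLOWED_TRANSITIONS)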

These things need to be taken into account when choosing the tool you will use for your process: What reporting capabilities do you need? Are there licensing implications for test environments?



How to Get There


While not part of the traditional definition of SCM, testing is an enabler for an agile SCM environment. Rather than controlling risk by slowing change, you control risk by monitoring the state of your codeline after each change.

Testing and test-driven development are not things that happen naturally, alas, and the reasons for this could be the subject of another article. Here are some steps that you can take:

  • Make sure that running tests is part of your build (even if tests don't exist yet)
  • Make passing tests part of the criteria for a good build
  • Include code coverage metrics in your build reporting, and consider failing builds if test coverage goes down, which implies that code was added without tests (a sketch of such a coverage "ratchet" follows this list)
  • Start writing tests, deciding deliberately where to start and with what strategy
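
The "fail if coverage goes down" rule can be implemented as a small ratchet. This sketch assumes a coverage.xml produced by coverage.py and a baseline file kept under version control; both file names are placeholders.

    # Coverage "ratchet": fail the build when coverage drops below the
    # last recorded baseline, so coverage can only stay level or improve.
    import sys
    import xml.etree.ElementTree as ET
    from pathlib import Path

    BASELINE_FILE = Path("coverage-baseline.txt")  # kept under version control

    current = float(ET.parse("coverage.xml").getroot().get("line-rate"))
    previous = float(BASELINE_FILE.read_text()) if BASELINE_FILE.exists() else 0.0

    if current < previous:
        print(f"Coverage fell from {previous:.1%} to {current:.1%}; failing the build.")
        sys.exit(1)

    BASELINE_FILE.write_text(f"{current}\n")  # ratchet the floor upward
    print(f"Coverage {current:.1%} (previous baseline {previous:.1%}) -- OK")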

Writing tests where there are none is challenging, but necessary.
