The Difference between Structured and Unstructured Exploratory Testing

[article]
Summary:
There are a lot of misunderstandings about exploratory testing. In some organizations exploratory testing is done unprofessionally and in an unstructured way—there's no preparation, no test strategy, and no test design or coverage techniques. This leads to blind spots in the testing, as well as regression issues. Here's how one company made its exploratory testing more structured.

More and more organizations are applying exploratory testing in their testing efforts. It fits well with test automation, and it works flexibly with agile and DevOps. But there are a lot of misunderstandings about exploratory testing. In some organizations we see that exploratory testing is done unprofessionally and in an unstructured way.

In this article we will discuss a case in which an organization grew from unstructured exploratory testing to structured exploratory testing.

The Challenges of Unstructured Exploratory Testing

The organization builds tailor-made software. In one of their projects they were building an app for consumers to operate home-based devices connected to the internet of things. The team built a hybrid app: a website wrapped in a native shell so it can be used as an app. The app was developed for the newest versions of Android and iOS; Windows Phone was not supported.

Although quality was important, the team did not have a tester. The developers who built the software tested their own work with unit tests, and then the code was reviewed by another developer. After the unit test and the code review, the new software was installed on a separate test environment, and another developer did a functional test. This test focused mainly on the happy path; error conditions and unexpected situations were not tested, and there wasn't much structure to it either.

In some ways this could be called exploratory testing, as the developer just started testing and, depending on the outcome, decided what the next test case would be. However, there was no preparation: the developer did not make a test strategy and did not use test design techniques or coverage techniques. This led to blind spots in the testing, and to regression issues.

After the functional test, the team did an integration test to see whether the system worked well with systems delivered by other suppliers. This test was also not very well structured and only covered the happy path. After the integration test, an acceptance test was done in a separate environment by so-called field testers from the client, who could be real users or employees. These sessions were a kind of bug hunt, but again with little preparation or structure, so it was unclear what had actually been tested.

Test automation was not used at that time. The regression test was done by hand using a standard checklist. The developers preferred to spend their time on new features instead of regression testing, so the regression test was often not done properly, or sometimes not done at all. This introduced the risk of regression issues, which was serious because the app was used on many different devices and environments.

The trigger to improve testing was an upgrade to a higher operating system version, which caused the app to crash every time the user tried to open it. The bug, as well as its root cause, could have been found in an earlier test stage if testing had been done in a structured way, but this particular bug was on the unhappy path, which the team didn't consider. Things had to change.

Moving to Structured Exploratory Testing

First, the team improved the requirements. Then the client made a list of features and situations that had to be checked before a new version was released. Finally, a tester was hired to improve the coverage of testing.

The tester introduced structured exploratory testing. He did a product risk assessment together with other stakeholders so that some of the blind spots were made explicit. He also made lists of things that should be tested for every feature, both along the happy path and the unhappy path. He asked the team what the user should experience if a certain service was unavailable or if some components could not be reached, and for some very complex parts of the app, he made detailed scripts. The tester also used test design techniques and coverage techniques in a practical way.
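To make the unhappy path concrete, the sketch below shows what such a check might look like at unit level. It is only an illustration under assumed names: DeviceService, HomeScreenPresenter, and the message text are hypothetical and not taken from the project.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Minimal, self-contained sketch; all names here (DeviceService,
// HomeScreenPresenter, the message text) are hypothetical, not from the project.
public class UnhappyPathTest {

    interface DeviceService {
        String fetchStatus(String deviceId) throws Exception;
    }

    // Tiny presenter: turns a service failure into a user-facing message
    // instead of letting the exception crash the app.
    static class HomeScreenPresenter {
        private final DeviceService service;

        HomeScreenPresenter(DeviceService service) {
            this.service = service;
        }

        String loadDevice(String deviceId) {
            try {
                return service.fetchStatus(deviceId);
            } catch (Exception e) {
                return "Device is currently unreachable";
            }
        }
    }

    @Test
    public void showsOfflineMessageWhenServiceIsDown() {
        // Stub the backend as unreachable to exercise the unhappy path.
        DeviceService unreachable = id -> { throw new Exception("backend down"); };
        HomeScreenPresenter presenter = new HomeScreenPresenter(unreachable);

        assertEquals("Device is currently unreachable",
                presenter.loadDevice("thermostat-1"));
    }
}
```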

He documented the test ideas in mind maps, which save time, are easier to maintain, and give a better overview and more flexibility than a spreadsheet or document.

An example mind map with test ideas

Most of the actual testing was done by the tester, although other team members participated as well.

An important aspect of the testing was that in addition to the product risk assessment, the outcome of previous tests was also used to decide what to test next. When a certain bug is found, it can be an indication that the same kind of bug exists in another part of the software; conversely, when a specific part of the software works well, that can be an indication that similar parts can be tested less thoroughly.

So there was no fixed test strategy or overall plan for what to test when; tests were determined based on the risks and the outcome of other tests. We call this a continuous test strategy. Neither the test strategy nor the test plan was a once-in-a-project activity; both were continuous activities, which made testing much more flexible.

Another measure was the introduction of mobile test automation. There are different mobile test automation solutions available, both open source and commercial. In this project we decided to use Cucumber to make it easy for the users to add or adjust test scenarios written in plain text. The web driver layer combined two tools, Selenium and Appium, to make cross-platform testing possible, because both iOS and Android were supported. The framework was set up in such a way that it was generic and easy to reuse or expand if needed.
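A minimal sketch of how such a cross-platform setup might look with the Appium Java client is shown below. This is an illustration, not the project's actual framework; the server URL, device names, and app path are placeholders, and the exact API differs between Appium client versions.

```java
import java.net.URL;

import io.appium.java_client.AppiumDriver;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.ios.IOSDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

// Hypothetical driver factory: one entry point for both platforms so the same
// Cucumber step definitions can run against Android and iOS. The device names,
// app path, and Appium server URL are placeholders; details such as the
// "/wd/hub" suffix depend on the Appium server and client versions used.
public class MobileDriverFactory {

    public static AppiumDriver create(String platform) throws Exception {
        URL appiumServer = new URL("http://127.0.0.1:4723/wd/hub");

        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("app", "/path/to/app-under-test");

        if ("android".equalsIgnoreCase(platform)) {
            caps.setCapability("platformName", "Android");
            caps.setCapability("deviceName", "Android Emulator");
            return new AndroidDriver(appiumServer, caps);
        } else {
            caps.setCapability("platformName", "iOS");
            caps.setCapability("deviceName", "iPhone Simulator");
            return new IOSDriver(appiumServer, caps);
        }
    }
}
```

With a factory like this, the same step definitions can be run against either platform by passing in the platform under test, which is one way a framework of this kind can stay generic and reusable.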

The automated regression test cases came from two sources. The first source was a subset of tests that had been done in the past. Automating all of them would make the regression test set too big, which would increase the time and budget needed to build, maintain, and run it, so the trick was to balance coverage against the size of the regression test set. A small, automated regression test set with high coverage is preferable. The second source was the list of features and situations that had to be checked. The advantage of automating this list was that the team could guarantee the client that all items on the list were tested. After a demo of the automated regression test, the client had more confidence in deploying new releases.
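One common way to keep the automated set a deliberate subset (not necessarily how this team organized it) is to tag the chosen scenarios and run only those. A minimal sketch with the Cucumber JUnit runner follows; the feature path and glue package are placeholders.

```java
import org.junit.runner.RunWith;

import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

// Hypothetical JUnit 4 runner: only scenarios tagged @regression are executed,
// so the automated regression set stays a chosen subset instead of every test
// ever written. The feature path and glue package below are placeholders, and
// the tag expression syntax shown matches recent Cucumber-JVM versions.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",
        glue = "com.example.steps",
        tags = "@regression"
)
public class RegressionSuiteRunner {
}
```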

Because the team worked in two-week releases, the automated regression test was done every two weeks. The team also found that because of the automated regression test, they had more time to build features. Before the regression test set was automated, the team had spent a lot of time doing an incomplete regression test manually.

Both the client and the team were satisfied with the results. After the structured exploratory testing process and the automated regression test were in place, there were no longer any major defects in the app or the website.

Lessons Learned

We learned four things from this case.

First, there is a difference between structured and unstructured exploratory testing. In structured exploratory testing, you make a test strategy and do test planning, although both the strategy and the planning should be flexible—it is not wise to make one big test strategy up front and stick to it. Throughout the process, you should ask what to test next and how to test it.

Another aspect of structured exploratory testing is the use of test design techniques and coverage techniques. These techniques add value to testing, but they should be applied flexibly rather than mechanically. Not having knowledge of test strategy, test planning, test design techniques, and coverage techniques is a risk.

The second thing we learned is that we can spend less time on documentation by not making detailed test scripts if they are not needed. Using tools like mind maps also helps make lightweight documentation.

The third thing we learned is that test automation is useful but not an answer to all our problems. It is still important to think carefully about what to test and how to test it. And the tools available today can’t do all the testing.

The fourth thing we learned is that testing still is a craft. Good testers know their methods, tools, and techniques, and these skills should be present in a team. The times of big, separate test teams are over—at least, in most organizations—but testing skills are still necessary.

When quality is considered important, test teams should employ structured exploratory testing, and this requires professional testing skills.
