How Machines Will Impact the Way You Test Software: An Interview with Geoff Meyer

Summary:

In this interview, Geoff Meyer, a test architect in the Dell EMC infrastructure solutions group, discusses whether or not testers should be nervous about artificial intelligence, what testers can do right now to keep up with the times, and when AI is most useful for software teams.

Josiah Renaudin: Welcome back to another TechWell interview. Today I’m joined by Geoff Meyer, a test architect in the Dell EMC infrastructure solutions group and keynote speaker at this year’s STAREAST. Geoff, thank you so much for joining us.

Before we dig into your keynote, could you tell us a bit about your experience in the industry?

Geoff Meyer: Thanks, Josiah. I’ve been in this industry more than thirty years, since graduating with a CS degree from San Diego State in 1985. I started as a server SW developer with NCR (they used to be known as National Cash Register), then moved up through the SW dev ranks and into SW management. Twenty years ago, I moved from San Diego to Austin, Texas, to help build the server business of Dell Technologies, which is now twenty-one years old.

About eight years ago, the Dell Server organization started its agile transformation, so I moved into test to help navigate the evolution into agile and test automation culture. It was after reading the Capgemini World Quality Report 2016-17 that I first considered experimenting with analytics in the testing domain, so I’m by no means an AI expert and certainly not a data scientist.

Josiah Renaudin: Could you briefly describe what you see as the Fourth Industrial Revolution and what role artificial intelligence plays in that?

Geoff Meyer: In the course of my research, I read several books, including Race Against the Machine and What to Do When Machines Do Everything, whose authors used the lens of societal experience from past Industrial Revolutions to view the implications of AI. Each of the past IRs was spawned by what they called “general-purpose technologies,” or technologies that were usable by and beneficial to virtually every business sector.

The two earlier IRs were fueled by the loom, steam power, the combustion engine, electricity, and the light bulb. Most famous in the first IR were the Luddites, a group of textile workers and weavers in the early 1800s who destroyed weaving machinery in protest against the mechanized loom. The third IR was fueled by the silicon chip and the computer. The foundation of the fourth IR is the internet and AI. AI represents the next general-purpose technology, and its use is rapidly cutting across industry after industry, some much faster than others.

Analytics and machine learning have been working behind the scenes managing our retirement accounts and other business processes. More recently, AI has emerged in the form of personal assistants: Siri, Alexa, and Nest. AI is the underpinning for the autonomous vehicle, which is on the verge of replacing vast numbers of drivers and upending the global transportation sector.

Josiah Renaudin: Just being honest, how nervous should testers be that AI will go beyond just enhancing testing and eventually replace the things manual testers do?

Geoff Meyer: The title of my presentation was intentionally provocative ["What’s Our Job When the Machines Do Testing?"], but I don’t see analytics and AI replacing testers anytime soon. We really need to view the onset of AI as a partnership with the machines, rather than the machines taking our jobs. Let the machines do the tasks in our jobs that they're better equipped for, and let us focus on the human tasks that we're better suited to.

There are a few points I’d like to make here to allay concerns about the demise of testing jobs. First, analytics and AI require data to even be useful. A lot of organizations still aren’t capturing the type of data they need, and even if they are, it’s not in a form that’s readily useful. Second, we need to have a really good idea of the most impactful problems in our job in which to apply analytics or machine learning. Getting this right requires domain experts, and those experts are the testers themselves. Lastly, as a testing community, we’ve recently been through a similar cycle where we feared a massive loss of jobs due to test automation. Most of us survived it, mainly because we found out that we really only automated one task, albeit a time-consuming one, out of our job. And when we did, we realized there was a whole lot more we could now get to, i.e., more exploratory testing, more automation for coverage, etc.

Josiah Renaudin: It feels like plenty of teams understand the value AI will eventually present but just aren’t sure if it’s worth investing in right now. From what you’ve seen, should software teams be researching and making use of AI right now? If not, will they start to fall behind?

Geoff Meyer: I don’t think it’s business-critical that SW teams go out and start hiring data scientists yet, but what I do believe is essential is that they start getting a handle on their data and start experimenting with analytics and machine learning. If they are not already, they need to be tracking and capturing all their data, whether it be source control, requirements, defects, test cases, test configuration, failure history, logs, customer-reported issues, etc. Because analytics and AI are both so heavily dependent on data, I don’t make much distinction between the two. In fact, I see analytics as a good stepping stone to an AI capability.

My recommendation for organizations is to start down this path by building a foundation in analytics through proofs of concept:

  • Step one: Start capturing data.
  • Step two: Pinpoint pain points (domain experts required).
  • Step three: Run an analytics POC and build an initial model.
  • Step four: Clean up the data and operationalize.
  • Step five: Run a machine learning POC.
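To make step three concrete, here is a minimal, purely illustrative sketch of what a first analytics POC could look like: aggregating historical test results to see where failures cluster. The column names, sample data, and pandas-based approach are assumptions for illustration, not Dell EMC's actual pipeline.

```python
# Illustrative analytics POC: aggregate historical test results to find
# where failures cluster. Column names and data are hypothetical.
import pandas as pd

# In practice this would come from your test management or CI system,
# e.g. pd.read_csv("test_results.csv").
results = pd.DataFrame({
    "subsystem": ["memory", "memory", "nic", "nic", "cpu", "storage"],
    "config_id": ["A1", "A2", "B1", "B1", "C3", "D2"],
    "outcome":   ["fail", "pass", "fail", "fail", "pass", "pass"],
})

# Failure rate per subsystem: a first, crude signal of where testing
# effort and defect risk are concentrated.
failure_rate = (
    results.assign(failed=results["outcome"].eq("fail"))
           .groupby("subsystem")["failed"]
           .mean()
           .sort_values(ascending=False)
)
print(failure_rate)
```

Even a simple aggregation like this forces the data-capture conversation (step one) and surfaces the pain points (step two) that a later machine learning POC would target.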

Josiah Renaudin: What are some exciting ways that you’ve seen AI optimize both test and IT operations? And what sort of returns are teams seeing?

Geoff Meyer: I’ve seen several interesting applications. One is analytics-based; the other two are machine learning-based.

The first case, the analytics example, involved our POCs from last year that we are now in the process of operationalizing. The pain point we were trying to alleviate had to do with selecting the best available configurations. When we’re testing a server, there are roughly eighteen major subsystems that go into it, i.e., CPU, memory, network interface card, etc., and each of those has multiple vendor options. According to our data scientists, for a typical server, this means there are over 465 million possible and unique configurations for us to test.

We cross-checked this against the actual number of unique configurations that we sold in the prior generation, and fortunately, that was only 500,000 configurations. The thing that keeps our lead test engineers up at night is that, due to development schedules and prototype costs, we realistically have the ability to test around 500 configurations… so our testers spend weeks of planning to make sure we’ve identified the right configurations to test, the ones that expose the greatest number of the most critical defects. With our configuration planning model, we expect this to drop from weeks to hours, because the model will provide us with the list of high-value test configurations at any given point in time.
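Dell EMC's configuration planning model itself isn't described here, but a rough, hypothetical sketch of one standard way to attack this kind of problem is greedy selection: pick a small budget of configurations that covers as many subsystem-option pairs as possible. The subsystems, options, and budget below are illustrative assumptions, not the real model or data.

```python
# Illustrative sketch: greedily select a small set of configurations that
# covers as many subsystem-option pairs as possible. Subsystems, options,
# and the budget are hypothetical, not Dell EMC's real planning model.
from itertools import combinations, product

options = {
    "cpu":     ["vendorA", "vendorB"],
    "memory":  ["16GB", "64GB", "256GB"],
    "nic":     ["1GbE", "10GbE"],
    "storage": ["SATA", "NVMe"],
}

subsystems = list(options)
candidates = [dict(zip(subsystems, combo)) for combo in product(*options.values())]

def pairs(config):
    """All (subsystem, option) pairs exercised by a single configuration."""
    items = sorted(config.items())
    return {frozenset(p) for p in combinations(items, 2)}

budget, chosen, covered = 5, [], set()
for _ in range(budget):
    # Pick the candidate that adds the most not-yet-covered pairs.
    best = max(candidates, key=lambda c: len(pairs(c) - covered))
    chosen.append(best)
    covered |= pairs(best)
    candidates.remove(best)

print(f"{len(chosen)} configs cover {len(covered)} unique option pairs")
```

A real model would also weigh sales volumes, failure history, and prototype availability, but the core idea is the same: let an algorithm rank the candidate configurations instead of spending weeks doing it by hand.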

In the second case, we have a team at Dell EMC XtremIO who found themselves caught in the test automation trap. They had automated their test suite so well that they were unable to complete the test case failure triage from the preceding night’s test run before the next testable build was released. Their solution was to implement a machine-learning algorithm that established a “fingerprint” of test case failures from system and debug logs, such that their algorithm could predict which test failures were “duplicates.” The genius behind this solution is that they would attack the “new” pile of defects and triage them, and if there was no new build, they would attack the “duplicate” pile. I came across a term the other day from Todd Pierce called “precision testing,” and I think this is a fantastic example of it.
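The XtremIO team's actual algorithm isn't detailed here, but the "fingerprint" idea can be sketched with a common off-the-shelf approach: vectorize the failure logs and flag new failures whose logs closely resemble already-triaged ones. The sample logs, similarity threshold, and scikit-learn-based technique below are assumptions for illustration only.

```python
# Illustrative "failure fingerprint" sketch: flag incoming test failures
# whose logs closely resemble already-triaged ones, so humans can focus
# on the genuinely new pile first. Logs and threshold are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

triaged_logs = [
    "timeout waiting for volume attach on node 3",
    "checksum mismatch during rebuild of raid group 7",
]
new_logs = [
    "timeout waiting for volume attach on node 5",   # likely a duplicate
    "kernel panic in io scheduler during failover",  # likely new
]

vectorizer = TfidfVectorizer().fit(triaged_logs + new_logs)
similarity = cosine_similarity(
    vectorizer.transform(new_logs), vectorizer.transform(triaged_logs)
)

THRESHOLD = 0.6  # would be tuned against labeled triage history
for log, scores in zip(new_logs, similarity):
    label = "duplicate?" if scores.max() >= THRESHOLD else "new"
    print(f"[{label}] {log}")
```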

Both of the examples above use analytics and ML to attack process. The third example is one of the best uses I’ve seen of ML-enabled bots embedded as a tool. Two companies I’ve talked to, Jason Arbon of Test.AI and Dan Belcher of MABL, have trained their bots to navigate more than 10,000 sites, categorize the pages within an app, and use that information to crawl, compare, and track deltas in behavior and performance. The goal of these tools is to maximize test coverage when you only have a limited amount of time for testing. Humans give the bots a Gherkin-like instruction, and the AI figures out how to do it. The test isn't brittle anymore: even if the page changes, the test itself is still valid. This is an interesting way to get into AI without having your own data scientist or data.

Josiah Renaudin: Have you found that AI is more useful for menial testing tasks, like dynamically generated regression test suites?

Geoff Meyer: This is a perfect example of the use of analytics and AI. As Leslie Willcocks, professor at the London School of Economics, suggests, cognitive automation allows us to "take the robot out of the human" and frees up cycles for us to do more important things. The term I'm using internally is “smart assistant.” For us, identifying the high-value test suite was the second of our proof-of-concept projects, and we are now in the process of operationalizing it. Some of the other questions on our shortlist to develop "smart assistants" for include:

  • What test cases/scripts should be retired rather than re-factored?
  • What is the release risk for this build given the testing that's been completed?
  • What tests will give us the best coverage given the specific changes in the current build? (A rough sketch of this one follows below.)
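As a rough illustration of that last question, one simple heuristic is to rank tests by how often they have historically failed when a given file changed. Everything in this sketch (the file-to-test history, the changed files, the scoring) is a hypothetical placeholder, not Dell EMC's actual smart assistant.

```python
# Illustrative sketch: rank tests by how often they have historically
# failed when a given file changed. The mapping and changed files are
# hypothetical placeholders.
from collections import Counter

# Historical association: changed file -> tests that subsequently failed.
# In practice this would be mined from source control plus test results.
history = {
    "drivers/nic.c":     ["test_nic_throughput", "test_failover"],
    "fw/bios_boot.c":    ["test_boot_matrix", "test_failover"],
    "mgmt/idrac_api.py": ["test_idrac_auth"],
}

changed_in_build = ["drivers/nic.c", "fw/bios_boot.c"]

scores = Counter()
for path in changed_in_build:
    scores.update(history.get(path, []))

# Highest-scoring tests are the best first candidates for this build.
for test, score in scores.most_common():
    print(test, score)
```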

Josiah Renaudin: Two-part question here: How can testers both improve their skills and ready themselves mentally for the future of testing that will rely so heavily on AI? What can they do now to stay ready?

Geoff Meyer: Glad that you split them into two; I'm going to take the last part first. Getting ready for the machines mentally is important, and I spend the last part of my talk on the topic of becoming a better human. As Paul Merrill says, "The machines are learning, are you?" Well, in order to be better humans, we always need to be learning, too. With the advent of agile development practices, developing software is all about developing software… socially. That's a good thing, and we've been getting better at working well with others. Unfortunately, just as the social dynamics of software testing are skyrocketing, technology, in the form of our smartphones, social networking, and texting, is distracting us and detracting from our opportunities to continually improve our social skills. Simon Sinek has a great interview on "Millennials in the workplace" that highlights this. And that's just the social learning side. On the technical side, those of us in the engineering community realized years ago that the technologies and practices in our industry change so fast that continuous learning is an essential part of our job.

For the second part, I don't think that I've achieved sufficient knowledge in this space to have formed an opinion on whether we as testers need to develop a skillset in data science or not. I'm trending towards “yes,” based on our recent experiences in adopting test automation. During that transition, we found that if we didn't learn “developer” skills, we started falling behind our peers in terms of value to the organization.

Josiah Renaudin: More than anything, what central message do you want to leave with your keynote audience?

Geoff Meyer: Embrace the machine, don't fear it. We survived the test automation cycle, and we'll survive this one, too. View the machine as your smart assistant, and really think through which of your tasks are menial or tedious, require a lot of "think" time, have a structured process around them, and have data. Target those tasks for what's being termed cognitive automation, and free yourself up to focus on the more important tasks in your job that aren't getting done.

About the author

A test architect in the Dell EMC infrastructure solutions group, Geoff Meyer has more than thirty years of industry experience as a software developer, manager, program manager, and director. Geoff oversees the test strategy and architecture for more than 400 software and hardware testers across India, Taiwan, and the United States. He leads initiatives in continuous testing, predictive analytics, and infrastructure as a service (IaaS). Outside of work, Geoff is a member of the Agile Austin community, a contributor to the Agile and STAR conferences, and an active mentor to veterans participating in the Vets4Quality.org program, which provides an on-ramp to a career in software quality assurance.
