Like many people, I used to calculate automation ROI by measuring the ‘hours saved’ when a machine runs checks instead of a person. Perhaps that’s how the market trend evolved, and it gave vendors a convenient way to sell their products and services. After years of working in the industry and listening to other practitioners sharing in the community, I believe the ‘cost cutting’ might exist, but not in the way most of us think about it, and that should change the way we think about automation.
To make that a bit more obvious: what would you say is the ROI of a piano for an average user? It’s not easy to quantify the return on investment of a ‘tool’, but that does not make the tool any less valuable in the right circumstances.
The cost saving silver bullet
For years, automation tools and services have been sold as a way to reduce cost. In theory that sounds logical, yet after working in the industry for years, I don’t know of anyone, myself included, who has really ‘seen’ these cost savings. Let’s dissect the cost-reduction calculation in detail and try to pinpoint the discrepancies.
The story goes something like this:
“Savings per test cycle = Checks automated × Execution effort (man-hours) per check”
Then we’d calculate the break-even point as the moment these savings equal the initial investment in building the automation suite, plus any recurring costs. To an accountant this makes perfect sense, except that the “effort per check” cost does not actually exist! Let me explain.
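To see how the traditional calculation plays out, here is a minimal sketch of the accountant’s version. All of the numbers are hypothetical, chosen only to show the shape of the arithmetic:

```python
# Hypothetical inputs for the traditional (flawed) automation ROI calculation.
checks_automated = 200        # number of checks in the automation suite
hours_per_check = 0.5         # assumed manual effort per check (man-hours)
initial_investment = 800      # man-hours spent building the suite
maintenance_per_cycle = 20    # man-hours of upkeep per test cycle

# "Savings per test cycle = checks automated x effort per check"
savings_per_cycle = checks_automated * hours_per_check
net_per_cycle = savings_per_cycle - maintenance_per_cycle

# Cycles until the suite supposedly "pays for itself".
break_even_cycles = initial_investment / net_per_cycle
print(break_even_cycles)  # 10.0
```

On paper the suite breaks even after ten cycles; the rest of this article argues that the `hours_per_check` input is fiction, so the whole projection is too.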
Automated checking vs. testing
The first problem is equating the execution time of automated checks to a tester’s man-hours. The way a machine runs a script is not the same as the way a person tests a feature. There is a lot of background to this concept in methodologies like Rapid Software Testing; for those unfamiliar, let me summarize the essential idea quickly.
The verb “testing” describes an act of “thinking” and “communicating” about how to test a specific feature. Once the tester decides what to test, he or she executes the scenarios. A machine is incapable of “testing”, since it can neither “think” nor “communicate” like a human. It can only “execute” what it is instructed to check.
(Thanks to the RST community, James Bach, Michael Bolton and folks for articulating this clarity)
The missing effort
Let’s take as an example an application that would hypothetically require around 1,000 man-hours to test completely (many products fit this description). How many testers would be needed to regress the whole application within 2 weeks? At roughly 80 working hours per tester per fortnight, around 13 full-time testers. Do you think a team would actually have 13 testers? Mostly not; they have fewer people than needed and make do with whatever time they get.
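The headcount figure above is simple arithmetic, sketched here with the hypothetical numbers from the example (8-hour days over a 10-working-day fortnight):

```python
import math

total_test_hours = 1000        # hypothetical effort to test the whole app
hours_per_tester = 8 * 10      # one tester's hours in a 2-week window

# 1000 / 80 = 12.5, so you'd need 13 full-time testers to finish in time.
testers_needed = math.ceil(total_test_hours / hours_per_tester)
print(testers_needed)  # 13
```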
Now, half the effort of “testing” is the thinking part, which a machine cannot do (some, including me, would argue it is a lot more than half). The other half is supposed to be spent on “execution”, yet only a small percentage of that actually happens, because the team is almost always smaller than needed.
That is why any ‘savings’ in man-hours exist mostly on paper: in practice there are next to none, because most teams are not operating under the assumptions used when calculating the ROI.
Then why Automate?
Increased test coverage
From our example, we were not able to test the complete application. In my practical experience, many products are tested far less than they should be, and adding a dozen more testers is rarely practical either.
To cover more ground, testers can program a machine to do the basic ‘execution’ they would otherwise do reluctantly (it is boring doing the same thing again) every time a release goes out. This frees up their time for intelligent work while the repetitive checks are done by a machine.
Testers focus on important areas
This might seem like a repetition of the point above, but there is a slight and important difference. Testers don’t just free up their time; they can now focus purely on the thinking part and delegate as much of the ‘execution’ as possible to the machine. A high percentage might not be achievable, but if automation is leveraged properly, test quality can improve significantly, since most time is spent on ‘thinking’ rather than on repetitive work. More on test scenarios that are ideal candidates for automation here.
Quick feedback – Find problems earlier
How many times has it happened that, after a bug fix, an important feature stops working altogether, and this comes to light at the 11th hour when there isn’t enough time to regress the fix properly?
There is a lot of value in getting feedback quickly. Different checks running at different stages of the development process provide the feedback needed at that point. For example, a possible plan could be: run unit tests and high-level checks during development, a complete regression in the QA stage, and user acceptance tests on production, or whatever process suits your product and team.
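The staged plan above can be sketched as a simple mapping from pipeline stage to the checks run there. The stage names and suites below are hypothetical placeholders, not a prescription:

```python
# Hypothetical mapping of pipeline stages to the checks run at each one.
PIPELINE = {
    "development": ["unit tests", "high-level smoke checks"],
    "qa":          ["complete regression suite"],
    "production":  ["user acceptance tests"],
}

def checks_for(stage: str) -> list:
    """Return the checks a given stage should run (empty list if unknown)."""
    return PIPELINE.get(stage, [])

print(checks_for("qa"))  # ['complete regression suite']
```

Most CI systems express the same idea in their own configuration format; the point is simply that each stage runs the cheapest checks that can catch its likely failures.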
Quick feedback – A big step towards CD
The most successful companies are the ones that take an idea from the drawing board into the consumer’s hands most quickly. This is where continuous delivery comes in. Minimizing ‘time to market’ can be a huge factor, giving a first-mover advantage. This video will give more detail on how automation facilitates that.
Increased confidence in the product
An inherent problem with exploratory testing is that “it’s done by humans”, which is also a good thing, though people tend to forget that. A tester might not test the same function in exactly the same way every time, or might forget to test it altogether. With automated checks, we can be certain of which features were checked and whether those checks passed.
This makes the decision to ship a release much easier and provides some quantitative measures to base decisions on. This alone is not enough to make the call, but coupled with decent exploratory testing, it makes a difference.
It’s not just the team: customers of the product can also take satisfaction in knowing that certain checks are automated, ensuring the functionality has most probably gone through a checking process.
Commitment to quality
From our example and my experience, most teams do not have enough testers to completely regress the application every time a change is made; some would argue that is impractical anyway. Having automation in place shows a commitment to ensuring that as much of the application as possible gets tested or checked before shipping to customers.
This is where the phrase ‘quality is a mindset’ comes in. When we hold ourselves and our product to a high standard, some form of automation becomes a necessity, because most modern applications cannot be tested adequately by a cost-effectively sized team.
There are savings, but not the ones we calculate
Equating man-hours to machine-hours of execution is not the correct formula for the return on investment of an automation project. The returns do not come as testers’ man-hours saved; they come in different forms which are by no means less important, just less obvious.
The real value comes from increased test coverage; from letting testers focus on what really matters while delegating grunt work to a machine; from quicker feedback on fundamental problems; from a major milestone towards reducing time to market; and from increased confidence, for both customers and the team, in the product’s quality, a visible commitment that reaches the end consumer.
Feel free to share what other benefits you feel automation brings to the table.
ROI in automation isn’t really a monetary factor; it’s an efficiency one. It can have monetary impacts, but not the hard-dollar kind people expect. Automation gives you an “illusion of speed”: once you can run tests in a distributed, parallel (and unattended) manner, you appear to “compress” the time it takes to run them all. The overall machine time to execute the workload stack is the same, but now multiple threads of tests run instead of a single one, so your wall-clock time is bounded by the largest or slowest set (chunk) of tests.
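Alan’s point can be shown in a few lines. With hypothetical per-chunk run times, total machine time is unchanged by parallelism, while wall-clock time collapses to the slowest chunk:

```python
# Hypothetical run times (minutes) for four test chunks run in parallel
# on distributed machines.
chunks = {"suite_a": 12, "suite_b": 30, "suite_c": 18, "suite_d": 25}

# Total machine time is the same whether serial or parallel.
total_machine_time = sum(chunks.values())   # 85 minutes of execution

# Wall-clock time in parallel is bounded by the slowest chunk.
wall_clock_time = max(chunks.values())      # 30 minutes

print(total_machine_time, wall_clock_time)  # 85 30
```

This is why balancing chunk sizes matters: shaving time off the slowest suite is the only thing that makes the parallel run finish sooner.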
Thank you for sharing. Alan makes a great point.