Like many folks, I used to calculate automation ROI by measuring 'hours saved': the time a person would have spent performing these checks instead of a machine. Perhaps that's how the market trend evolved, and it gave vendors a convenient way to sell their products and services. After years of working in the industry, and listening to thought leaders and others sharing in the community, I feel the 'cost cutting' might be there, but not in the way most of us think about it, and that should change the way we think about automation.
To make that a bit more obvious: what would you say is the ROI of a piano for an average user? It's not easy to quantify the return on investment for a 'tool', but that does not make the tool any less important in the right circumstances.
To this day, automation tools and services have been sold as a way of reducing cost. In theory that sounds logical, yet after years in the industry I don't know of anyone, myself included, who has actually 'seen' these cost savings. Let's dissect the cost-reduction calculation in detail and try to pinpoint the discrepancies.
The story goes something like this:
"Savings per test cycle = Checks automated × Execution effort (man-hours) per check"
We would then calculate the break-even point: the cycle at which accumulated savings equal the initial investment in preparing the automation suite, plus maintenance and other costs. To an accountant this makes perfect sense, except that the "effort per check" cost does not really exist! Let me explain.
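To see the arithmetic vendors typically pitch, here is a minimal sketch of that break-even calculation. All the numbers are hypothetical, purely to illustrate the formula above (and its dubious "effort per check" term):

```python
# Vendor-style ROI math, with made-up illustrative numbers.
checks_automated = 200     # checks in the automation suite
hours_per_check = 0.25     # assumed manual effort per check (the disputed term)
initial_investment = 400   # hours spent building and stabilising the suite

savings_per_cycle = checks_automated * hours_per_check   # 50 "saved" hours per cycle
break_even_cycles = initial_investment / savings_per_cycle
print(break_even_cycles)   # 8.0 cycles before the suite "pays for itself"
```

On paper the suite pays for itself after eight regression cycles; the rest of this article is about why that rarely happens in practice.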
The first problem is equating automated check execution time with a tester's man-hours. The way a machine runs a script is not the same as the way a person would test that feature. There is a lot of background to this idea in methodologies like Rapid Software Testing; for those unfamiliar with it, let me summarize the relevant concept quickly.
The verb "testing" is an act of thinking and communicating about how to test a specific feature. Once the tester decides what to test, he or she executes the scenarios. A machine is incapable of "testing", since it can neither think nor communicate like a human; it can only "execute" the checks it has been instructed to run.
(Thanks to the RST community, James Bach, Michael Bolton and folks for articulating this so clearly.)
Let's take a candidate application that would hypothetically require around 1,000 man-hours to test completely (by the way, many products would fit this description). How many testers would be needed to regress the whole application within two weeks? About 13 full-time testers. Do you think the team would actually have 13 testers? Mostly not; they would have fewer people than needed and make do with whatever time they get.
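The headcount arithmetic above can be sketched as follows; the 40-hour work week is my assumption for illustration:

```python
# Rough headcount math for the hypothetical 1,000-hour regression.
import math

total_test_hours = 1000
weeks_available = 2
hours_per_tester = 40 * weeks_available   # 80 hours per tester in two weeks (assumed)

testers_needed = math.ceil(total_test_hours / hours_per_tester)
print(testers_needed)   # 13
```

The exact figure depends on the working week you assume, but the point stands: the required headcount is far larger than most teams ever get.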
Now, half the effort of "testing" is the thinking part, which a machine cannot do (some would argue, and I agree, that it is much more than half). The other half is supposed to be spent on "execution", yet only a small percentage of that time is actually spent, because the team size is ALWAYS smaller than needed.
So there 'might' have been 'some' savings in terms of man-hours on paper, but in practice there are next to none, because most teams are not operating under the assumptions baked into the ROI calculation.
In our example, we were not able to test the complete application. And from my practical experience, many products are 'way' less tested than they should be. Adding a dozen more testers does not seem practical either.
To cover more ground, testers can program a dumb machine to do the basic 'execution' they would otherwise have to do, unwillingly (since it's boring doing the same thing again), every time a release goes out. This frees up their time for intelligent work while the repetitive checks are handled by a machine.
This might seem like a repetition of the point above, but there is a slight and important difference. Testers don't just free up their time; they can now also leverage the dumb grunt work of the machine by focusing on the thinking part and delegating as much of the 'execution' as possible. A high percentage might not be achievable, but if automation is leveraged properly, test quality can improve significantly, since most time is spent on 'thinking' rather than on repetitive work. More on test scenarios that are ideal candidates for automation here.

Quick feedback – Find problems earlier

How many times has it happened that, after a bug fix, an important feature stops working altogether, and this comes to light at the eleventh hour when there isn't enough time to regress the fix properly either?

There is a lot of value in getting feedback quickly. Different checks running at different stages of the development process can provide the needed feedback at each point. As an example, a possible plan could be to run unit tests and high-level checks during development, a complete regression in the QA stage, and user acceptance tests on production, or any process that suits your product and team.
Quick feedback – A big step towards CD