Legacy products usually share some common themes: typically older technology stacks, along with 'not so efficient' processes. Among the many legacy teams I've known, I see this pattern clearly, and it has an immense effect on automation initiatives.
Front end testability
Most applications built with older technologies do not have a sound front-end architecture: a central location from which the application's different layers can be controlled. With a framework like Angular 2, there would be a central control flow and defined modules. With classical applications, since all web traffic is asynchronous, there might be multiple frames running independently, creating severe timing issues for automation. This obviously cannot be generalized, but many legacy applications have similar problems.
Apart from the dynamic delay challenge, with UI automation you might get lucky or find yourself in serious trouble. Either all objects have unique IDs, since they are independently created and sent from the back end, or the application uses some really outdated framework that hardly any UI tool understands, in which case it might be best to let go of UI automation.
Mostly it's the former case, because pages are typically designed by the developer at the back end. Modern frameworks like Angular facilitate development by doing the heavy lifting: they require just a skeleton design at the front end and send the data to display from the back end. That makes rendering the page easier, but the resulting DOM is not that 'UI automation' friendly, so to speak.
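The timing issue above is usually tackled by polling for a condition rather than sprinkling fixed sleeps through the tests. Here is a minimal sketch of such a wait helper in plain Python; the names and the `frame_loaded` condition are illustrative, not from any particular tool, though most UI automation libraries ship an equivalent 'explicit wait':

```python
import time

def wait_until(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    Returns the truthy value, or raises TimeoutError. This mirrors the
    'explicit wait' idea for pages where frames load independently and
    fixed sleeps are unreliable.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")

# Example: a condition that only becomes true after a few polls,
# standing in for a frame that finishes loading asynchronously.
state = {"calls": 0}
def frame_loaded():
    state["calls"] += 1
    return state["calls"] >= 3

wait_until(frame_loaded, timeout=2.0, interval=0.01)
```

The same pattern works whether the condition checks a DOM element, a frame handle, or a back-end flag; the point is that the test waits exactly as long as needed and no longer.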
Back end testability
Back ends, or the business layer, can be even worse. Exposed APIs that could be used to create API-level tests might be very few (again a generalization, but certainly fewer than in a containerized application). And since the APIs were designed for no one but the developers to consume, it is sometimes really hard to make sense of what's going on. In many cases the documentation is not strong either, making the web services even more difficult to decipher.
This has a greater impact than most would imagine. Tests at the API level are efficient and easy to maintain, which goes a long way in automation. Trying to test everything from the UI level is not possible and takes a lot of 'unnecessary' effort.
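Even when only a handful of endpoints are exposed and poorly documented, a thin contract check over their responses captures a lot of value. A sketch, assuming a hypothetical order payload; the endpoint, field names, and types below are made up for illustration (in practice the payload would come from an HTTP client call):

```python
# Hypothetical contract for one record from a legacy API.
REQUIRED_FIELDS = {"id": int, "status": str, "total": float}

def check_order_payload(payload):
    """Return a list of contract violations for one order record."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}, "
                          f"got {type(payload[field]).__name__}")
    return errors

# A well-formed record passes; a malformed one reports each problem.
good = {"id": 42, "status": "shipped", "total": 99.5}
bad = {"id": "42", "total": 99.5}
```

Checks like this double as living documentation: the contract dictionary records what the team has learned about an otherwise undocumented service.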
The same might happen at the database layer. Except for the UI, nothing might have been prepared with anyone other than the developers in mind, and the database schema can pose similar architectural challenges for testers. Finding out what tables are being updated and when, and, when an architectural change does happen in the schema, noting its effect on tests, can be another challenge.
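Finding out which tables an action touches can itself be scripted: snapshot row counts before and after the action and diff them. A minimal sketch using Python's built-in sqlite3 with an in-memory database; the table names are illustrative stand-ins for a real legacy schema:

```python
import sqlite3

def table_row_counts(conn):
    """Map each user table to its current row count."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    return {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
            for t in tables}

def changed_tables(before, after):
    """Tables whose row count differs between two snapshots."""
    return sorted(t for t in after if after[t] != before.get(t))

# Demo schema (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER)")
conn.execute("CREATE TABLE audit_log (id INTEGER)")

before = table_row_counts(conn)
conn.execute("INSERT INTO orders VALUES (1)")     # the 'action' under test
conn.execute("INSERT INTO audit_log VALUES (1)")  # a side effect to discover
after = table_row_counts(conn)
```

Row counts miss in-place updates, so a real version might also compare checksums or timestamps, but even this crude diff quickly reveals side-effect tables nobody remembered to mention.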
Release and patching
Like the server-side code, the release process might not be designed for anyone other than the person doing the patching to interact with it. To get the most out of automation, some level of continuous integration goes a long way. This piece varies a lot from team to team, but at times it can also be a big challenge. Getting hooks into the source control manager might not be easy, and the processes being followed might not be compatible with the true essence of code versioning, which might necessitate changing the release process first.
The second problem is rendering test environments at different stages of the SDLC. Traditional testing might not have required that before automation, but automation needs separate environments at every stage where we run tests. Rendering isolated environments on demand, which can patch the latest code base and pull the latest automation tests, will not be easy to begin with.
It’s more than just the technology
The technology is not the only important piece; the culture in legacy teams is the bigger challenge. These teams are mostly formed by a few colleagues who have worked together for a very long time. In the early years of the product, the processes they used might have worked perfectly for them. But as technology changes and senior team members eventually start to leave and new folks come in, the legacy processes start to fall apart.
The culture part is a far more significant player, which we will examine in the next post.
It’s still worth it
If you do have a legacy product and are looking to automate it, it might still be worthwhile. In the process, the complete product team will have to evolve to some extent as well; the truth is, it's better to evolve than to be extinct. All the challenges we discussed, and more that we will shed light on in future posts, do have solutions, I believe. The point is that it's not going to be a cakewalk, and setting realistic expectations for this endeavor is the whole point of this post.
Managing an automation project is overlooked at times, perhaps owing to its technicality, and undermining some essential fundamentals is a mistake all too common. @Katrina shared a beautiful presentation titled Automation and Management at the #AutomationGuild conference 2017. There was a whole lot of content, more than my reflections on the topic can fit in one post; follow-up posts will cover the complete session. In this post, I summarize my learnings from her talk and my thoughts around deciding what to automate.
Drifting along without concrete decision-making criteria on what to automate will not render optimal utilization of automation efforts. Automating everything is not a good idea either, so how do we decide what to automate in a way that fits the bill?
Understand your product’s strategy
How is the product being pitched in the market, and what is its competitive advantage? Which features are important to customers and the business? Answering these gives an idea of what's important in the overall strategy.
Is the product in its growth stage? If so, lots of changes are in store; for products in the maturity phase of the life cycle, drastic changes might not be expected. Which areas are planned to change, and what does the release plan look like?
The best phrase I learned here was "Testing is reputation assurance, to protect corporate image". Automation can provide a big boost in building that credibility. To do that, find out which application features can be marked as high risk. She recommended a great article by @James Bach on the topic, Heuristic Risk-Based Testing, to identify risk areas and make them a priority in automation.
Goals of your automation
The usual goals are to increase test coverage and reduce time to market. However, each product and company has its own unique challenges; take those into consideration as well.
At a few places where I kicked off automation, there were no formal test scenarios. Having formal test scenarios would be of great benefit to them, so we added that as a goal in our automation effort.
Further, goals should be quantifiable. The analogy given was 'you should be able to add it to a checklist and check it off once done'. Everyone mentions SMART goals; creating one is not easy, but not impossible either.
One of the best practices in automation has always been to make code reusable. Take that to a higher layer of abstraction: the requirements layer. Serenity, a tool I heard of for the first time at the conference, can help with that. Behavior Driven Development also allows reuse to some extent, though it is not designed for that purpose.
This can be done without a tool or framework too, as in one of the automation projects I worked on: functional points were automated as independent units, making them reusable for constructing larger scenarios. The thought process has to be there; you can then mold whatever tool or framework you use to achieve it.
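The idea of functional points as independent, composable units can be sketched in a few lines of plain Python. The step names (login, add_to_cart, checkout) and the session dict are hypothetical, standing in for whatever your framework drives:

```python
# Each functional point is a small function that takes and returns a
# session; larger scenarios are just compositions of these units.
def login(session, user):
    session["user"] = user
    return session

def add_to_cart(session, item):
    session.setdefault("cart", []).append(item)
    return session

def checkout(session):
    session["order"] = list(session.get("cart", []))
    session["cart"] = []
    return session

def run_scenario(*steps):
    """Thread a fresh session through a sequence of functional points."""
    session = {}
    for step in steps:
        session = step(session)
    return session

# Two scenarios reusing the same units.
purchase = run_scenario(
    lambda s: login(s, "alice"),
    lambda s: add_to_cart(s, "book"),
    checkout,
)
browse_only = run_scenario(lambda s: login(s, "bob"))
```

Because each unit is self-contained, new end-to-end scenarios become a matter of listing steps rather than duplicating automation code, which is exactly the reuse-at-requirements-level idea described above.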
When should a test be automated? I had not thought about this before. Katrina quoted a great piece of research by Brian Marick, "When Should a Test Be Automated?", published in 1998. Some ideas from back then are now mainstream, but his answer on what to automate is neat. One of his points: "The cost of automating a test is best measured by the number of manual tests it prevents you from running and the bugs it will therefore cause you to miss". Most veterans of the industry might have thought of this before, but the words make it all clearer. If a certain feature is hardly used, for instance if manual testing for it is scheduled for just a few hours per year, it does not make sense to automate it (that's an extreme case, I know; I'm using an outlier example to make a point!).
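That cost argument can be made concrete with rough break-even arithmetic. The sketch below is my own illustration, not Marick's formula, and the hour figures are made up:

```python
import math

def break_even_runs(automation_cost_hours, manual_run_hours,
                    maintenance_hours_per_run=0.0):
    """Number of runs after which automating beats manual testing.

    Pure cost arithmetic: automation pays off once the manual effort it
    saves exceeds the effort to build and maintain it. Returns None when
    maintenance eats the entire saving, i.e. it never pays back.
    """
    saved_per_run = manual_run_hours - maintenance_hours_per_run
    if saved_per_run <= 0:
        return None
    return math.ceil(automation_cost_hours / saved_per_run)

# A test costing 8 hours to automate, replacing a 0.5-hour manual check:
runs = break_even_runs(8, 0.5)
# The same test with 0.5 hours of upkeep per run never pays back:
rare = break_even_runs(8, 0.5, 0.5)
```

For a feature exercised a few times a year, the break-even run count may exceed the runs the feature will ever see, which is exactly the 'hardly used feature' case above.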
It is also a good idea to define how much time to spend on automating (which is 99% of the time not going to be accurate). Having a ballpark figure helps with prioritizing the activities that matter more. Further, a limit on execution time is a great requirement to set at the outset; it will prove to be of considerable help when moving towards continuous integration.
The only constant is change. Have an idea of how the factors above might change, since that affects your decision-making criteria. This is certainly hard to extrapolate, but if broken down, one has a good chance of figuring it out with reasonable probability.
Companies go through change due to their external and internal environments. The external environment is further categorized by the amount of influence from different entities, ranging from direct customers and suppliers to government laws and countries' trade agreements. Companies operating in disrupted industries, or ones with a tricky market segment, are more prone to external environmental change; organizations in stable markets with comfortable market segments tend to be less affected by external factors. The internal environment, I feel, is relatively simple, with a few ingredients: higher management's mix, organizational structure, the team's skill set, and its current direction.
I have to admit, though, that C-level management is all about finding what's changing and navigating the ship accordingly, and it's not an easy job. At the level of detail needed for automation, however, it can be managed.
When you crunch the numbers, include folks from all teams in deriving conclusions to get the right answer. Everyone has a different insight, and no one team has the whole picture. To get the best value for the time spent, get opinions from across the board.
A great tip here was "Allow yourself to be influenced by others". Until others feel you can be influenced by their opinion, you might not get their best suggestions.
Team’s level of expertise
Although this should not be considered at the outset, it is something to consider towards the end. I have seen this point come much higher up the list; my rationale for bringing it down here is, as quoted from my favorite book How Google Works, "the internet has rendered resources and computation power almost limitless". If you see a clear benefit in setting the scope as, for instance, UI, integration, and unit tests with 80% code coverage, while the current QA team cannot handle that, it's not hard to find resources to get it done.
When current expertise is weighed at the beginning, decisions will unwillingly carry that bias. However, if circumstances do not permit augmenting the required expertise, then the current skill level is a good factor to consider.
Anything else you would consider while deciding what to automate?