
What is an Automation Framework?

February 9th, 2018 | Framework design

In my early months of learning automation, I realized that having some sort of custom-built code running on top of an automation tool is important. Over the years of learning the art of architecting the perfect automation framework design, I noticed that the term "automation framework" does not mean the same thing to everyone; hence this endeavor to clarify my interpretation of the term.


What does "framework" mean?

The coolest definition I found is from Wikipedia. The first section is worth reading; I'll quote just a few sentences to make a point. A framework is:

  • ‘an abstraction providing generic functionality’,
  • ‘Software frameworks may include
    • support programs, compilers, code libraries, tool sets, and APIs
    • that bring together all the different components
    • to enable development of a project or system’.


It seems a framework is not just one tool, program or library; rather, it is a group of them joined together to create software. Therefore, for each software product / project, a framework would include everything from the libraries used to develop the code to peripheral tools such as compilers, source control and so on.


Automation framework used loosely

I’ve seen folks refer to a specific tool, library or dependency as a framework. In the category of automation tooling, there is a long list of ‘types’:

  • Vendor based
    • UI automation ‘tools’ (UFT, TestComplete)
    • Supporting tools like Soap UI / PostMan (API)
  • Open source libraries / dependencies like Selenium, TestNG
  • Open source (GUI) tools like Jenkins
  • Code based tools like Maven, NPM


All these are often jumbled together under words like tools, libraries, dependencies and frameworks, used interchangeably. While the intent here is not to classify which jargon suits which type, using ‘framework’ for any one of them is not accurate, IMHO. Of all the tools / libraries mentioned, none can individually be called an automation framework.


An Automation Framework, a customized architecture

Most often, an off-the-shelf / generic library cannot be used without tweaks or without augmenting its functionality. To support automation of any AUT, we need to add more functionality around the tool’s functions (watch this video in case you are not doing that right now) or use multiple tools / libraries to achieve a more complex task, performing lots of steps with one call.

We create an abstract, high-level architecture which utilizes many tools / libraries at the same time to facilitate the creation and execution of automated checks. This ‘framework’ is managed by the automation engineer and has a library of its own, utilizing other libraries’ / tools’ methods where needed. Our tests use these framework functions instead of the tools’ functions directly. For instance, the framework might recognize an object using one library, write a value using another and report the findings using a third tool.
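A minimal sketch of what such a framework layer could look like. The class and method names here are hypothetical, not from any real library; each dependency stands in for a different tool (object recognition, input, reporting):

```python
# Hypothetical framework facade: tests call these methods instead of
# calling the underlying tools / libraries directly.

class AutomationFramework:
    """One entry point fronting several tools; swap a tool, and only
    this layer changes, not the tests."""

    def __init__(self, locator_lib, input_lib, reporter):
        self.locator_lib = locator_lib  # e.g. an object-recognition library
        self.input_lib = input_lib      # e.g. an input / driver library
        self.reporter = reporter        # e.g. a reporting tool

    def set_value(self, locator, value):
        element = self.locator_lib.find(locator)       # recognize with one library
        self.input_lib.write(element, value)           # write with another
        self.reporter.log(f"Set {locator} = {value}")  # report with a third
```

A test would then call `fw.set_value("user", "alice")` with no knowledge of which three tools did the work.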

More from Wikipedia on this: ‘A framework’s control flow is dictated by the framework itself, not the caller’. The same goes for the frameworks we design for our products: they usually dictate how the checks / tests are written and how the results are generated.


Are there any off-the-shelf frameworks?

While a single tool cannot be called a framework, there are off-the-shelf frameworks used in software development. The well-known MFC and .Net ‘application frameworks’ are an example. These are more than just a library: along with providing libraries to use, they have design patterns of their own and enforce certain programming practices when you use the framework.

The topic does get debatable at this point, but I like to distinguish automation frameworks from application frameworks based on their intent and usage. Consider comparing .Net with Selenium / Jasmine / Jenkins or any of the other famous tools / libraries used for automation. Is the usage similar in both cases? .Net is a world of its own, with its own recommended design patterns, and it integrates up and down the stack with other technologies. It is not just one library; it supports a whole technology stack summed up into one ‘framework’. The automation tools we just mentioned cater to specific needs only and are driven by the user, instead of the tool / library driving the control flow.

Another way to look at it: frameworks are a higher level of abstraction, integrating many different dependencies / libraries and providing most or all of the needs of a stable automation framework within one package. There are efforts under way to accomplish this in off-the-shelf products, but most are not widely known at this point.



An automation framework can be characterized as an architecture utilizing multiple tools / libraries to perform the various tasks needed to develop, run, report on and maintain automation scripts. At the current stage of evolution in automation, I feel this will mostly be a custom-made architecture designed to meet the automation needs of a specific product.


For further reading, the article below explains the ‘pillars’ of automation framework design: principles to consider while designing a framework.

Pillars of Automation Framework Design

December 5th, 2017 | Framework design

As soon as you mention an automation framework, people start talking about Selenium, TestNG and so on. I classify the automation project’s architecture, as designed by the automation team, as the framework; the underlying tools could be any unit test framework or automation tool.

Most automation efforts get stuck in the maintenance phase, where the upkeep of the scripts starts to outrun the utility we get from them. The architecture was not designed with best practices in mind to cope with upkeep and the future needs of the product.

While working on safety-critical products I picked up a few design principles heavily used in that industry. Here are the design principles, learned over the years, that I call ‘the pillars of framework design’.


Maintainability

The only constant is change. If the code we write today is going to be hard to maintain in the future, the rework and upkeep cost is going to skyrocket. Nicely written code, in my book, does not just do the job today; it keeps up with the changes to come and stays easy to maintain. I recommend every team define what maintainable code means for their needs and follow that definition.

There are many factors that determine whether a piece of code is maintainable. I will allude to a few here; a detailed discussion will have to wait for another time.

  • Naming conventions should be defined and followed.
  • Code commenting standards should be outlined, ideally so that code documentation can be generated from the comments.
  • Code complexity should be kept to a minimum. Automation projects get complex anyway; without a check on this, they can get out of hand pretty quickly.
  • Logging of test results. Report logs should not just show what passed and failed; they should carry debug information (we know we’ll have false negatives, right?) while remaining an easy read.


Reusability

The way automation generates great benefits is by being reusable, and so should the individual modules in your code be. Reusability should be embedded in everything you do in an automation project.

A major concept here is creating wrappers. Again, from my old embedded lessons (the buzzword today being IoT): for areas where we expect fundamental changes to happen, it’s best not to use them directly. Instead, create a wrapper on top of them, even if it contains just one call. This makes enhancements easy and adds lots of portability.
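As a sketch of the idea, here is a one-call wrapper in Python. The `driver` object and its `find_element` method stand in for whatever automation library sits underneath (hypothetical names, not a specific tool's API):

```python
# A thin wrapper: even a one-line pass-through isolates the tool from the tests.

def click(driver, locator):
    """All tests call this instead of the tool's own click.
    If the underlying tool or its API changes, only this
    wrapper needs updating, not every test."""
    element = driver.find_element(locator)  # tool-specific call, isolated here
    element.click()
```

Tests written against `click(...)` keep working even if the recognition library underneath is swapped out.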

Here are a few places to keep reusability in mind:

  • Selection of scenarios to automate.
  • Creating architecture of the project, have separate layers calling / utilizing each other.
  • Methods within each layer could be used by one another.
  • Reusable test data.
  • Smaller tests which could be combined to create larger / use case checking scripts.


Scalability

Often when teams start with automation, they have only the immediate goal in mind and the long-term picture is not taken into account. In most automation projects there will always be a need to expand, adding more tests and functionality. The architecture should therefore be built with future scalability in mind. Time and again I have seen projects unable to scale up, forcing a massive rework.

When starting an automation project, I estimate how many test cases we can expect to run; on one project, for instance, the target was 5000 scripts. Once the initial framework design was completed, we ran a sample project executing 5000 scripts (a few tests executed again and again) to see if everything would work at scale. This is not exactly like the real workload, but it gives a good estimate.


Robustness

The most important pillar for an automation project. The environment in which these scripts run is ever changing: the AUT (Application Under Test) is changing, browsers and services are changing, even the automation tools and languages are changing. With all this change going on, it’s no surprise that automation projects succumb to robustness issues.

This is a vast subject with many variables to consider; I’ll touch on just a few here.

The root cause of flakiness (the automation term for a lack of robustness) is the actual state of the AUT not matching what the automation code expects. Therefore, the main guideline is to identify the possible states the overall system can get into and handle them in the code.

I break this exercise down into two areas: proactive and reactive handlers. Proactive handlers are for cases where we know there is a higher chance of something going wrong, and we prevent it before it can happen. A great example is delays: we know that for a modern web application, delays are going to be a problem, so use dynamic delays as a standard rule before every action. Test data is another one. If you are sure something will be a problem (and most of the time it is), handle it before it can turn into one (refer to this article in TEST Magazine to read more).
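A dynamic delay is essentially a polling loop. The generic helper below sketches what tools like Selenium's WebDriverWait do under the hood; the function name and parameters are my own, not any tool's API:

```python
import time

# A generic dynamic delay: poll a condition instead of sleeping a fixed time.
def wait_until(condition, timeout=10.0, interval=0.5):
    """Call `condition` repeatedly until it returns a truthy value,
    or raise TimeoutError once `timeout` seconds have elapsed."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result          # proceed as soon as the AUT is ready
        time.sleep(interval)       # wait a little, then check again
    raise TimeoutError(f"condition not met within {timeout:.1f}s")
```

Used before every action (e.g. `wait_until(lambda: page.is_loaded())`), this replaces brittle fixed sleeps: the script moves on the moment the state is right, and fails loudly only when it truly never arrives.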

Reactive handlers come into play when the state change has already happened and we want to get back on track; say an unexpected popup appeared. Under this heading the most important tool is event handlers. Apart from the basic Try{} Catch{} blocks, have event handlers to take care of unexpected events like unwanted popups, a page not loading and so on.
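One minimal way to sketch this pattern: a table mapping each unexpected event (here modeled as an exception type) to a recovery routine, with a single retry once recovery runs. This is an illustration of the idea, not any tool's built-in event-handler mechanism:

```python
# Reactive handling sketch: recovery routines registered per event type.

def run_step(step, handlers):
    """`step` is a callable performing one test action; `handlers` maps
    exception types to recovery callables (e.g. dismiss a popup)."""
    try:
        return step()
    except Exception as exc:
        recover = handlers.get(type(exc))
        if recover is None:
            raise                  # no known recovery: a genuine failure
        recover()                  # e.g. close the unexpected popup
        return step()              # back on track: retry the step once
```

In a real framework the handler table would cover popups, page-load failures and similar events, so each script does not need its own try/catch for every known hiccup.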

While this post is not a complete guideline, it offers food for thought on important aspects of framework design.

Logging automation results – Best practices learned over time

June 29th, 2017 | Framework design

In developing an automation framework, there are many important aspects to research and design carefully to get the right results. Logging script execution results is usually taken for granted: the default reporting mechanism provided by the tool / framework is used without much modification. Yet the test report is the one artifact that everyone, from the manual test team up to execs across the product, sees on a regular basis. The usual audience is the automation team, manual testers, and managers from QA and product development; each group views the report for a different purpose and should be catered to.


The objective here should be to give the reader enough detail to judge how good or bad the results are within 30 seconds. Apart from common details (execution time, number of scripts run and pass / fail ratio), one can add results on modules failed, checkpoints failed, time spent on failed steps and so on. If the batch runs two different sets of scripts, such as acceptance-criteria scripts and a complete regression, you may want separate summaries for each.

Report structure

The report structure should allow for lots of detail with minimal cognitive effort from the reader. The best way to do this is to indent the report on multiple levels. Typically I format my scripts in the following indentation sequence:

Project->Module->Screen->Test Case->Manual test case steps -> Debug information for the step.

The structure gives a logical grouping to scripts. Again, my criterion is that the reader can reach the desired manual step or debug-information level within 30 seconds.
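The nesting above can be sketched as a tiny report logger that tracks the current depth; the class and method names are mine, purely for illustration:

```python
# Sketch of a nested report log mirroring the
# Project -> Module -> Screen -> Test Case -> Step hierarchy.

class ReportLog:
    def __init__(self):
        self.lines = []
        self.depth = 0

    def open(self, title):
        """Enter a grouping level (project, module, screen, test case)."""
        self.lines.append("  " * self.depth + title)
        self.depth += 1

    def close(self):
        """Leave the current grouping level."""
        self.depth -= 1

    def step(self, text):
        """Record a manual step or debug detail at the current level."""
        self.lines.append("  " * self.depth + text)
```

Opening "Project", then "Login Module", then logging a step produces progressively indented lines, so a reader can scan down to the level of detail they need.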

If the scripts are written from manual test cases (which is a good practice), I strongly recommend keeping the same steps in the automation result as in the manual test case. If there were no written manual test cases to begin with, the high-level test steps should still read the way a manual test case would. This makes it much easier for the manual tester reviewing the results and for the automation engineer diagnosing and fixing the scripts.

Debug information

The most important aspect of the report, for me, is the debug information. This area is usually left to the default feature set provided by the tool. Without this information, the poor soul maintaining the scripts has to jump through lots of hoops to find the root cause of a failure.

I think of debug information as evidence left at a crime scene. You want as many clues in the report as possible to reach the right conclusion, instead of having to send lots of stuff to the forensics lab!

The most crucial piece is verifying that each object was found correctly. To keep track of this, print the complete XPath / unique ID of every object before performing any operation on it. That covers half of your problems, which stem from getting hold of the wrong object.
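A wrapper is a natural place to leave that evidence. In this sketch, `driver.find_element` and the function name are hypothetical stand-ins for whatever tool is underneath; the point is logging the locator both before and after the action:

```python
# Sketch: log the full locator before (and after) acting on an element,
# so a wrong-object failure is obvious straight from the report.

def click_logged(driver, xpath, log):
    log.append(f"Locating: {xpath}")   # evidence: which object we asked for
    element = driver.find_element(xpath)
    element.click()
    log.append(f"Clicked: {xpath}")    # evidence: the action completed
```

If the script fails one step later, the report already shows exactly which object was located last.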

The control flow of the project is another very important area. Within each manual test step there are many smaller steps performed by the automation framework. It’s a good idea to leave breadcrumbs showing where your program’s control flow has been: whenever the control flow enters a function, add the function name to the log. In case of a failure, one can pinpoint exactly which function the script failed in.
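In Python, a decorator makes this breadcrumb trail nearly free to add. The two decorated functions below are hypothetical framework functions, purely for illustration:

```python
import functools

trail = []  # breadcrumb log of every function the control flow entered

def breadcrumb(func):
    """Decorator that records the function name on each call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        trail.append(func.__name__)    # leave the breadcrumb
        return func(*args, **kwargs)
    return wrapper

@breadcrumb
def open_login_page():      # hypothetical framework function
    pass

@breadcrumb
def enter_credentials():    # hypothetical framework function
    pass
```

After a failing run, `trail` reads like a stack of footprints: the last name in the list is the function the script died in.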

When writing text into fields, it’s good to log that as well. Many tools do this for you; if yours does not, it’s a good idea to add it. The same goes for screenshots: in case of a failure you will want a screenshot from the time of failure, and this can be done on every operation as well. Depending on the number of steps performed, customize how often images are taken; otherwise handling a huge test report is not a fun task.

The suggestions given here about adding debug information into the scripts will clutter your results. This can be overcome by 1) nesting the results, so all the debug information for each step sits within an indented folder, and 2) having a flag to turn the debug log information on and off. This way, once certain sub-projects are stable we can disable their debug logging and keep it for areas where we still expect undue failures.
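The flag idea can be as simple as a parameter (or config setting) gating every debug write. A minimal sketch with names of my own choosing:

```python
# Sketch: debug detail is appended to the report only when the flag is on.
# In practice the flag would come from a per-sub-project config file.

def log_debug(report, message, debug=True):
    """Append a debug line only when debug logging is enabled."""
    if debug:
        report.append("DEBUG: " + message)
```

Stable sub-projects run with `debug=False` and produce lean reports; flaky areas keep the full evidence trail.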

Issue reporting

Once a failure is diagnosed as a bug and reported in the project’s issue-tracking tool, it’s a good idea to add a placeholder in the script at the point where the failure occurred. That way, the next time it fails due to the bug, we don’t have to investigate all over again only to find it has already been reported.
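One lightweight way to sketch such a placeholder: a lookup of step names to tracker IDs consulted when reporting a failure. `BUG-1234` and the function names are hypothetical, for illustration only:

```python
# Sketch of a known-issue placeholder: failures at a spot with a reported
# bug are labeled with the ticket ID instead of prompting re-investigation.

KNOWN_ISSUES = {"login_submit": "BUG-1234"}   # step name -> tracker ID (hypothetical)

def report_failure(step_name, log):
    ticket = KNOWN_ISSUES.get(step_name)
    if ticket:
        log.append(f"FAIL {step_name}: known issue {ticket}, skip re-triage")
    else:
        log.append(f"FAIL {step_name}: new failure, investigate")
```

When the bug is fixed, removing the entry from the table puts that step back under normal triage.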


Most tools / frameworks provide the basic details mentioned here. In my experience, TestComplete and QTP do a good job in some of these areas. One of the things I like about TestComplete is its reporting: it provides lots of detail by default, is configurable, and has many options to customize the report to a very deep level. Still, many of the suggestions given here will not be readily available in any tool; I recommend building a list of common routines that incorporate them into the test report, and using it throughout the project.


The effort of building many of these features into the framework might seem steep while reading, but the pay-off is worth it many times over. Even though object recognition is the single most important criterion for me when selecting a tool, going through its options for customizing the test report, and what it offers by default, is a great idea as well.

Don’t forget to share any other features or tips you use in your automation test report!