In developing an automation framework, there are many important aspects to research and design carefully to get the right results. Logging script execution results is usually taken for granted, and the default reporting mechanism provided by the tool or framework is used without much modification. However, the test report is the only part of the framework that most end users, from the manual test team up to executives across the product team, will see on a regular basis. The usual audience includes the automation team, manual testers, and managers from QA and product development. Each group views the report for a different purpose, and the report should cater to all of them.

Summary

The objective here should be to give the reader enough detail to judge within 30 seconds how good or bad the results are. Apart from the common details (execution time, number of scripts run, and pass / fail ratio), one can add results such as modules failed, checkpoints failed, and time spent on failed steps. If the batch runs two different sets of scripts, such as acceptance criteria scripts and a complete regression, you may want separate summaries for both.
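As a minimal sketch of such a summary, assuming each script's outcome has been collected as a small record (the field names and record shape here are hypothetical, not taken from any particular tool):

    from collections import Counter

    def build_summary(results):
        # Each result is assumed to look like:
        # {"module": "Billing", "script": "TC-104", "status": "pass", "duration_s": 12.4}
        if not results:
            return "No scripts were run."
        failed = [r for r in results if r["status"] == "fail"]
        failures_by_module = Counter(r["module"] for r in failed)
        total_time = sum(r["duration_s"] for r in results)
        return "\n".join([
            f"Scripts run: {len(results)}",
            f"Passed: {len(results) - len(failed)}, failed: {len(failed)} "
            f"({len(failed) / len(results):.0%} failure rate)",
            f"Execution time: {total_time / 60:.1f} minutes",
            "Modules with failures: "
            + (", ".join(f"{m} ({n})" for m, n in failures_by_module.most_common())
               or "none"),
        ])

Running the acceptance criteria batch and the full regression batch through build_summary separately gives the two summaries mentioned above.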

Report structure

The report structure should allow for lots of detail while demanding very little cognitive effort from the reader. The best way to do this is to indent the report on multiple levels. Typically I format my scripts in the following indentation sequence:

Project -> Module -> Screen -> Test Case -> Manual test case steps -> Debug information for the step.

This structure gives a logical grouping to the scripts. Again, my criterion is that the reader should reach the desired manual step or debug information level within 30 seconds.
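A minimal sketch of that nesting, assuming a plain-text report (the class and its method names are my own invention, not a tool API):

    class IndentedReport:
        # Writes report lines indented by nesting depth:
        # Project -> Module -> Screen -> Test Case -> Step -> Debug.
        def __init__(self):
            self.depth = 0
            self.lines = []

        def open(self, label):
            self.lines.append("    " * self.depth + label)
            self.depth += 1

        def close(self):
            self.depth -= 1

        def write(self, message):
            self.lines.append("    " * self.depth + message)

    report = IndentedReport()
    report.open("Project: Billing System")
    report.open("Module: Invoicing")
    report.open("Screen: New Invoice")
    report.open("Test Case: TC-104 Create an invoice")
    report.write("Step 1: Enter the customer name")

Each open call is matched by a report.close() once that grouping ends, so a reader can skim whole branches and drill down only where needed.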

If the scripts are written from manual test cases (which is a good practice), I strongly recommend using the same steps in the automation results as in the manual test case. If there were no written manual test cases to begin with, the high-level test steps should still read the way a manual test case would. This makes life much easier for the manual tester reviewing the results and for the automation engineer diagnosing and fixing the scripts.
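One way to keep the two in sync, sketched here as a hypothetical helper built on the IndentedReport above, is to have the script declare each manual step's text verbatim and let the report inherit it:

    from contextlib import contextmanager

    @contextmanager
    def manual_step(report, number, text):
        # Log the manual test case step word for word, then its outcome.
        report.open(f"Step {number}: {text}")
        try:
            yield
            report.write("PASSED")
        except Exception as exc:
            report.write(f"FAILED ({exc})")
            raise
        finally:
            report.close()

    # Usage mirrors the written manual test case:
    # with manual_step(report, 2, "Enter a valid username and password"):
    #     login_page.enter_credentials("jdoe", "secret")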

Debug information

The most important aspect of the report, for me, is the debug information. This area is usually left to the default feature set provided by the tool. Without this information, the poor soul maintaining the scripts has to jump through hoops to find the root cause of a failure.

I think of debug information as evidence left at a crime scene. You want as many clues in the report as possible to reach the right conclusion, instead of having to send everything off to the forensics lab!

The most crucial piece is verifying that each object was found correctly. To keep track of this, log the complete XPath / unique ID of every object before performing any operation on it. That alone covers half of your problems, which stem from acting on the wrong object.
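A sketch of that idea with Selenium WebDriver (find_element is standard Selenium; the wrapper and the report object are assumptions carried over from the sketches above):

    from selenium.webdriver.common.by import By

    def find_logged(driver, report, xpath):
        # Log the exact locator before touching the object, so a wrong-object
        # failure can be traced from the report alone.
        report.write(f"Locating object by XPath: {xpath}")
        element = driver.find_element(By.XPATH, xpath)
        report.write(f"Found <{element.tag_name}> with id '{element.get_attribute('id')}'")
        return element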

The control flow of the project is another very important area. Within each manual test step, the automation framework performs many smaller steps. It's a good idea to leave breadcrumbs showing where your program's control flow has been. We do this by adding the function name to the log whenever control enters a function. In case of a failure, one can then pinpoint exactly which function the script failed in.
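In Python, this breadcrumb can be dropped automatically with a decorator (the names here are illustrative):

    import functools

    def breadcrumb(report):
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                # Record every function the control flow enters.
                report.write(f"Entering {func.__name__}")
                return func(*args, **kwargs)
            return wrapper
        return decorator

    # @breadcrumb(report)
    # def fill_invoice_header(customer, date):
    #     ...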

When typing text into fields, it's good to log that as well. Many tools do this for you; where yours does not, it is worth adding yourself. The same goes for screenshots: in case of a failure, you will want a screenshot taken at the moment of failure. Screenshots can also be taken on every operation, but depending on the number of steps performed, you may want to tune how often images are captured; otherwise, handling a huge test report is not a fun task.
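A sketch of both ideas, again assuming Selenium (send_keys and save_screenshot are standard Selenium calls; the counter and the report object are my own scaffolding):

    import time

    SCREENSHOT_EVERY = 10  # take a periodic screenshot every Nth operation
    _ops = 0

    def type_text(driver, report, element, text):
        global _ops
        report.write(f"Typing '{text}' into field")
        try:
            element.send_keys(text)
        except Exception:
            # Always capture the screen at the exact moment of failure.
            driver.save_screenshot(f"failure_{int(time.time())}.png")
            raise
        _ops += 1
        if _ops % SCREENSHOT_EVERY == 0:  # periodic capture keeps the report size sane
            driver.save_screenshot(f"step_{_ops}.png")

Raising SCREENSHOT_EVERY is the knob for keeping long regression reports manageable.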

The suggestions given here about adding debug information to the scripts will clutter your results. This can be overcome by 1) nesting the results, so that all the debug information for each step sits inside an indented folder, and 2) adding a flag to turn the debug logging on and off. That way, once certain sub-projects are stable, we can disable their debug logging and keep it only for the areas where we expect failures.
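The flag can even be kept per area, as in this hypothetical configuration:

    # Per-area debug switches; flip an area to False once it is stable.
    DEBUG_AREAS = {
        "invoicing": True,   # new code, failures expected: keep debug on
        "login": False,      # stable for months: keep the report lean
    }

    def debug(report, area, message):
        if DEBUG_AREAS.get(area, True):
            report.write(f"[debug:{area}] {message}")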

Issue reporting

Once a failure is diagnosed as a bug and reported in the project's issue tracking tool, it's a good idea to add a placeholder in the script where the failure occurred. That way, the next time the script fails because of that bug, we do not have to investigate it again only to find it has already been reported.
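One lightweight form of that placeholder is a lookup table consulted by the failure handler (the step identifiers and ticket numbers are invented for illustration):

    KNOWN_BUGS = {
        # step identifier -> ticket in the issue tracker
        "TC-104/step-3": "BUG-1287",
    }

    def check_known_bug(report, step_id):
        # Called on failure: if this step already has a reported bug,
        # flag it in the report instead of triggering a fresh investigation.
        ticket = KNOWN_BUGS.get(step_id)
        if ticket:
            report.write(f"KNOWN ISSUE: already reported as {ticket}")
        return ticket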

Implementation

Most tools / frameworks provide the basic details mentioned here. In my experience, TestComplete and QTP do a good job in several of these areas. One of the things I like about TestComplete is its reporting: it provides a lot of configurable detail by default and offers options to customize the report to a very deep level. Many of the suggestions given here, though, will not be readily available in any tool. I recommend building a library of common routines that incorporate them into the test report, and using it throughout the project.

Conclusion

The effort of building many of these features into the framework might seem a little steep while reading this, but the pay-off is worth it many times over. Even though object recognition is the single most important criterion for me when selecting a tool, going through the tool's options for customizing the test report, and what it offers by default, is a great idea as well.

Don’t forget to share any other features or tips you use in your automation test report!