The product had been tested for years by the testing team, doing exploratory tests and writing test cases for the important areas. The application grew day by day, and the suite eventually swelled to more than a thousand test cases. That is when the testing team thought of delegating the ‘checking’ part of regression to automated scripts, to free up some time for real testing.

Many product teams moving towards automation have reached this stage and are looking for a way to shift written manual tests to automated scripts. In many cases the tests include rich scenarios which the team wants to leverage, and the goal is to script an exact copy. Naturally this comes with inherent challenges; I am about to share how we managed some of them for one particular product.

Before moving on: some tools claim to automate manual tests straight from a Word document and the like. That is not what is being discussed here (plus I have yet to see that work!).

 

Test case to script mapping

Ideally, all manual tests should become part of the automation suite as they are. However, differences are bound to creep in. To maintain traceability between tests and automated scripts, create a mapping document: essentially, map every manual test to an automated test. Where the scenarios diverge, record the reasons with appropriate tags (for ease of filtering).

As the application evolves, changes come into the manual tests and the scripts need to be updated. Having this document helps in two ways:

  • Updating a script becomes far easier when any prior discrepancy, along with its reasoning, is readily available to the person making the change.
  • During regression, it is immediately clear which areas automation is not looking at and which the manual tests might want to cover.
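
As a minimal sketch, such a mapping could live in a simple CSV. The column names and tags below are illustrative assumptions, not a standard:

```python
import csv

# Illustrative columns; adapt the names and tags to your project.
FIELDS = ["manual_test_id", "automated_script", "discrepancy_tag", "discrepancy_reason"]

rows = [
    {"manual_test_id": "TC-0142", "automated_script": "tests/test_valid_login.py",
     "discrepancy_tag": "", "discrepancy_reason": ""},
    {"manual_test_id": "TC-0198", "automated_script": "tests/test_bulk_discount.py",
     "discrepancy_tag": "DATA-DIFF", "discrepancy_reason": "uses seeded discount codes, not UI-created ones"},
    {"manual_test_id": "TC-0211", "automated_script": "",
     "discrepancy_tag": "NOT-AUTOMATED", "discrepancy_reason": "needs a physical barcode scan"},
]

with open("test_mapping.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)

# Tags make filtering trivial, e.g. listing areas automation is not looking at:
print([r["manual_test_id"] for r in rows if r["discrepancy_tag"] == "NOT-AUTOMATED"])
```

A plain spreadsheet works just as well; the point is that every manual test resolves to either a script or a tagged reason why not.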

 

Script incapability vs sentient beings

There are always some steps in manual testing which the testing tool is not able to perform: a physical activity outside the product, a portion of the application that is not automatable, or a scenario that would need a very complex set of scripts to improvise across different application states. Instead of leaving such a test out altogether, I usually recommend one of the following:

  • Alter the scenario to suit the script: salvage whatever you can, and forego what cannot be done.
  • Break the test in two. For the second test, start from pre-populated data / a prepared application state so that the non-automatable area is bypassed (see the sketch below).

The mapping document comes in very handy here.
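
To make the second option concrete, here is a minimal pytest sketch. The application functions and the seeded state are hypothetical stand-ins for the real product:

```python
import pytest

# Hypothetical stand-ins for the real product's API.
def create_order(items):
    """Everything up to the step the tool cannot perform."""
    return {"items": items, "state": "submitted"}

def ship_order(order):
    """Behaviour after the manual-only approval step."""
    if order["state"] != "approved":
        raise ValueError("order must be approved before shipping")
    order["state"] = "shipped"
    return order

# Test 1: salvage the scenario up to the non-automatable approval screen.
def test_order_submission():
    order = create_order(["widget"])
    assert order["state"] == "submitted"

# Test 2: resume from pre-populated data that bypasses the gap.
@pytest.fixture
def approved_order():
    # Seeded directly in the "approved" state instead of driving the approval UI.
    return {"items": ["widget"], "state": "approved"}

def test_shipping_after_approval(approved_order):
    assert ship_order(approved_order)["state"] == "shipped"
```

The seeded fixture replaces the step the tool cannot drive, so the second half of the scenario stays automated.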

 

Manual test steps in report

Test reports generated from automated scripts should be readable, first and foremost, by the manual testing team. Too often I see teams whose test reports show all the automation mumbo-jumbo right off the bat, which creates lots of confusion for anyone not involved in automation.

I strongly advise including the test steps verbatim from the manual test case in the automation test report. Under each step should be the read / write details the tool is performing. Non-automation folk can then make sense of the report, and it also makes it much easier for the automation team to fix issues.
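
As one way to get that structure, here is a minimal sketch of a step helper. It is not any particular framework's API, just an illustration of nesting tool actions under verbatim manual steps:

```python
from contextlib import contextmanager

@contextmanager
def step(description):
    """Print a manual test step verbatim, nesting tool actions under it."""
    print(f"STEP: {description}")
    try:
        yield
        print("  -> PASSED")
    except Exception as exc:
        print(f"  -> FAILED: {exc}")
        raise

def log_action(detail):
    """Record a low-level read / write the tool performs."""
    print(f"    [tool] {detail}")

# The report then reads exactly like the manual test case:
with step("Login with a valid username and password"):
    log_action("type 'jane' into #username")
    log_action("type '****' into #password")
    log_action("click #login-button")

with step("Verify the dashboard greets the user by name"):
    log_action("read text of .welcome-banner")
```

When a step fails, the report points straight at the manual step in question, which is the language both teams already share.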

 

Dual purpose

Apart from mapping differences from the manual tests, we used this document to keep an overview of the complete automation suite’s health. Scripts we knew were faulty and needed updates, scripts needing in-depth investigation, scripts failing due to a reported issue: all these status updates were appended to the document.

Even if you don’t have manual tests to map to, every automation project should still keep one spreadsheet with at least these fields: the script, what it covers, its current status, and a note on why it is failing or pending. It is a huge time saver when managing batch runs / daily runs.
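
As a rough sketch of such a sheet, and the one-glance health summary it enables (the statuses and column names are my assumptions, adapt to taste):

```python
from collections import Counter

# Minimum fields: script, what it maps to, current status, and a note.
suite = [
    {"script": "test_valid_login.py",   "manual_test": "TC-0142", "status": "PASSING",      "note": ""},
    {"script": "test_bulk_discount.py", "manual_test": "TC-0198", "status": "NEEDS-UPDATE", "note": "UI changed"},
    {"script": "test_export_pdf.py",    "manual_test": "TC-0305", "status": "KNOWN-ISSUE",  "note": "blocked by a reported bug"},
    {"script": "test_checkout.py",      "manual_test": "TC-0090", "status": "INVESTIGATE",  "note": "intermittent timeout"},
]

# One-glance suite health before triaging a batch / daily run.
print(Counter(row["status"] for row in suite))
```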

Care to share what you did to map manual tests?

Till next time, Happy automating!