The ‘not so small’ code changes

November 29th, 2017 | Management

I recently made the good old ‘one-liner change’ in our automation framework, which has a 22-hour batch run with only one permutation. A week before regression we discovered my ‘small change’ had caused a straight 25% false-failure rate. When my awesome team suggested reverting the change, I realized I had not recognized the demon at the time. This demon goes by many names: ‘one-liner change’, ‘small change’, ‘localized change’ and so on (and will be referred to as the demon from here on, in letter and spirit).


Cause of the trap

Have you ever sat with a friend driving a car and felt a bit unsafe, as if he / she did not have full control over the car? And how is the driver feeling in that moment? Pretty confident, for the most part. I feel something similar happens when a person on the team makes a ‘small change’ in code. In the moment of making the change, it feels like we are just putting a bandage on a little bunny’s scratch. Later it turns out that was a lion we just misfired a pump action at, and now we are running for our lives.


The demon effect

The demon is lethal mainly because it does not look like it will create much of a problem. Teams often end up making such changes close to regression or release, leaving little time to react. Eventually the decision of releasing with bugs or delaying the release has to be taken, which no one likes. I would equate this with throwing good money after bad: adding more time to salvage the situation hardly ever works, except for gifting you more frustration (my published thesis on the subject here).


Small and ‘not so small’ change

While small or localized changes do exist, sometimes the demon is mistaken for one. What defines a demon, then? The rule I like to use: a change is localized only if it is encapsulated within a small area of the application, such as a low-level module or a single class, and affects the functionality of just that module or class. The mistake I have seen is confusing a localized change with functionality that is not encapsulated within a confined class or module, but that we feel is used in only one or a few specific areas. Unless there is no way the change can be accessed by, or directly affect, anything outside its scope, it is not a localized change.


Quarantine the demon

The change must be made, no doubt, but it can be made in a time of peace, not at the eleventh hour a day before regression. Implement it in the early sprints of a release, so there is time to react and adjust before the deadline hits. Firstly, while implementing it you hopefully will not be in a rush and can think it through. Secondly, there will be time to test around just that change, making it easier to identify the cause of any bugs we see.


A general best practice is to make changes in increments rather than doing the whole big change at once. If you have automation running at the UI or API level, that makes things easier: as you push the big change in increments, you learn right away about any outright effects it might have. And if you don’t have automation, I would say first think of building that; otherwise, focus manual testing as much as possible toward testing the ripple effects.


Big decisions take time

As in life, big decisions take time, not just to research but to think about and reflect upon. Similarly, architectural changes need time to ‘sink in’ and be internalized. Doubling the resources working on a change will not get it done in half the time. For big changes, I usually start thinking about them ahead of time with my team, so when we do start the work we have things in perspective and can make better judgements.


Care to share what else you would do to make an architectural change transition smoothly?

Manual Tests & Automated Scripts Traceability

June 29th, 2017 | Management

The product had been tested for years by the testing team, doing exploratory tests and writing test cases for important areas. The application grew day by day, eventually reaching more than a thousand test cases. That is when the testing team thought of delegating the ‘checking’ part of regression to automated scripts, to free some time for real testing.

Many product teams moving toward automation have reached this stage and are looking for a way to shift written manual tests to automated scripts. In many cases the tests include rich scenarios the team wants to leverage, so they look to script an exact copy. Naturally this comes with inherent challenges, some of which I am about to share, along with how we managed them for one particular product.

Before moving on: some tools claim to automate manual tests straight from a Word document and the like. That is not what is being discussed here (plus I have yet to see that work!).


Test case to script mapping

Ideally all manual tests should be part of the automation suite as they are. However, differences are bound to creep in. To maintain traceability between tests and automated scripts, creating a mapping document is a good idea. Essentially, map every manual test to an automated test. For any discrepancy in test scenarios, mention the reasons with appropriate tags (for ease of filtering).

As the application evolves, changes in manual tests come in and scripts need to be updated. Having this document helps in two ways:

  • Updating the scripts becomes much easier when any prior discrepancy is written down, with its reasoning readily available.
  • During regression it is very clear which areas automation is not looking at and which the manual tests might want to cover.
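As a sketch of what such a mapping document could look like in practice, here is a minimal example; the column names, tags and test IDs below are hypothetical, assuming the mapping is kept as a simple CSV alongside the suite:

```python
import csv
from io import StringIO

# Hypothetical mapping document: one row per manual test case.
# An empty discrepancy_tag means the script mirrors the manual test exactly.
MAPPING_CSV = """\
manual_test_id,automated_script_id,discrepancy_tag,reason
TC-101,login_smoke,,
TC-102,checkout_flow,DATA_DIFF,script uses a pre-populated cart
TC-103,,NOT_AUTOMATABLE,requires a physical card swipe
"""

def load_mapping(text):
    """Parse the mapping document into a list of row dicts."""
    return list(csv.DictReader(StringIO(text)))

def filter_by_tag(rows, tag):
    """Filter rows by discrepancy tag, e.g. to review before regression."""
    return [r for r in rows if r["discrepancy_tag"] == tag]

rows = load_mapping(MAPPING_CSV)
# Areas automation is not covering, which manual testers must pick up.
uncovered = filter_by_tag(rows, "NOT_AUTOMATABLE")
print([r["manual_test_id"] for r in uncovered])
```

A spreadsheet works just as well; the point is that the tags make discrepancies filterable rather than buried in prose.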


Script incapability vs. sentient beings

There are always some steps in manual testing which the testing tool cannot perform: a physical activity outside the product, a portion of the application that is not automatable, or a state that would need a very complex bunch of scripts to improvise around. Instead of leaving the test out altogether, I usually recommend one of the following:

  • Alter the scenario to suit the script, salvage whatever you can, and forego what cannot be done.
  • Break the test in two. For the second test use pre-populated data / test scenario to avoid the area not automatable.

The mapping document comes in very handy here.


Manual test steps in report

Test reports generated from automated scripts should be readable primarily by the manual testing team. I usually see teams with test reports showing all the automation mumbo-jumbo right off the bat, creating lots of confusion for anyone not involved in automation.

I strongly advise including the test steps, as they are, from the manual test case in the automation test report. Under each step should be the read / write details of what the tool is performing. Non-automation folk can then make sense of it, and it also makes it much easier for the automation team to fix issues.
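To illustrate, here is a minimal sketch of a report renderer that keeps the manual test steps verbatim and nests the tool's low-level actions underneath each one; the step wording and action strings are invented examples, not output from any particular tool:

```python
# Each entry pairs a manual test step (verbatim from the test case)
# with the automation tool's low-level actions for that step.
steps = [
    ("Open the login page", ["navigate to /login", "wait for #username"]),
    ("Enter valid credentials", ["type into #username", "type into #password"]),
    ("Verify dashboard is shown", ["assert page title is 'Dashboard'"]),
]

def render_report(steps):
    """Render a report readable by manual testers first, automation folk second."""
    lines = []
    for i, (manual_step, tool_actions) in enumerate(steps, 1):
        lines.append(f"Step {i}: {manual_step}")        # the manual test step, as-is
        for action in tool_actions:
            lines.append(f"    [automation] {action}")  # detail for the automation team
    return "\n".join(lines)

report = render_report(steps)
print(report)
```

The manual tester reads the `Step n:` lines and recognizes their own test case; the automation engineer drills into the indented lines when a step fails.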


Dual purpose

Apart from mapping differences from manual tests, we used this document to keep an overview of the complete automation suite’s health. Scripts we knew were faulty and needed updates, scripts needing in-depth investigation, scripts failing due to a reported issue: all these status updates were appended to this document.

Even if you don’t have manual tests to map to, every automation project should still have a spreadsheet with at least these status fields. It is a huge time saver when managing batch runs / daily runs.
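As an illustration of what tracking those statuses might look like, here is a small sketch; the column names, status values and script names are my assumptions, mirroring the categories described above, not any standard:

```python
from collections import Counter

# Hypothetical suite-health rows, one per automated script.
SUITE_HEALTH = [
    {"script": "login_smoke",   "status": "PASSING",              "note": ""},
    {"script": "checkout_flow", "status": "FAULTY_NEEDS_UPDATE",  "note": "locator changed"},
    {"script": "report_export", "status": "NEEDS_INVESTIGATION",  "note": "flaky in batch runs"},
    {"script": "user_profile",  "status": "FAILING_KNOWN_ISSUE",  "note": "BUG-4521"},
]

def health_summary(rows):
    """Count scripts per status, e.g. before triaging a daily run."""
    return Counter(r["status"] for r in rows)

summary = health_summary(SUITE_HEALTH)
print(dict(summary))
```

With this in hand, a failing script in a batch run can be checked against its known status before anyone spends time re-investigating it.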

Care to share what you did to map manual tests?

Till next time, Happy automating!

How to Hire an Automation Engineer

June 29th, 2017 | Management

Recently I had the opportunity to go through the cycle of hiring an automation engineer to augment our team. It had been a while since the days, a few years ago, when I did this frequently. This time I stumbled upon a few ‘new’ realizations, especially about hiring automation engineers, which had never occurred to me before. Since I see many folks struggling to find suitable candidates for this title, I thought of jotting down some lessons learned through the process. The best part: it’s really one step (to begin with).

Cut the Crap

Candidates undervalue their skill set (especially the ones you want to hire). Furthermore, a huge job description stuffed with every buzzword the hiring manager could find on the internet scares potential candidates away from applying. There are usually a few key skills for any given position which are vital for the team. With a lot of ‘name dropping’, candidates get confused and hesitate to apply over lines like ‘Expert with 3+ years of experience in SQL Server’, when in many cases there is maybe a 5% probability the new hire will ever write complex queries.

Our job ad was not performing well. The candidates we got were not even a match on paper. HR suggested having another look at the job description, and she was right. Hesitantly I reviewed it and found the job ad was scary, to say the least. No doubt the position had considerable requirements, but the essentials were few. While cutting down the content, the few skills I felt were essential for an automation engineer / SDET (NOT a lead position) emerged; they are presented here.

Programming aptitude

This does not mean ‘10+ years of experience in Java’ and the ability to code a talking parrot. If your project is in Java, don’t necessarily look for a Java guru. If a person has the aptitude for constructing algorithms and can demonstrate good programming skills in any language, he / she can learn a new language or framework. When the technology changes (which it will), such people will be the most comfortable adapting, more willing to learn new tools / languages and to leave their comfort zone.

Testing acumen

A term I use for a tester’s mindset: someone who is able to craft test scenarios covering different aspects of the AUT, has the process-related knowledge and the skill to extract requirements, and can tell a great story when writing up an issue. I have always believed an automation project is only as good as the scenarios being automated. If the scripts are checking trivial functionality, they are not going to create the difference we want.

The counter-argument I’ve heard is that since test cases are handed over, automation folks don’t need in-depth testing experience or knowledge. Well, a person with technical insight and a tester’s eye will be better placed to suggest what areas are not being tested and how to test them. Even if a team has unit tests, integration tests and UI tests, someone still needs to create scenarios for the system-level tests verifying the business logic.


The right attitude

The one thing an automation team might never get rid of is ‘technical problems’. Each day is going to be a new day. Either you will make mistakes and learn (for the most part), or you will be wise and spend considerable time learning how to avoid the mistake. Through all these endeavors, the only thing that keeps you going is the right attitude.

In any job, having the right attitude is the most fundamental aspect; this is especially true for automation folks. They never run out of problems, and they can never get tired of solving them!

Smart creative

Among non-technical skills there are many qualities hiring managers aspire to; this is one I found lacking in some cases and most relevant to this type of position. ‘Smart creative’ is a term coined at Google, an enhanced version of Peter Drucker’s knowledge worker. I understood it as ‘a person with the fire to learn new things, who is technically and business savvy and has a creative personality’. Imagine what a smart creative technical tester would do: put development on DEFCON 1! Actually no; instead I feel he / she should bridge the gap between both camps and get the best of both worlds.

Automation exposure

I had some candidates who were not very savvy with the traditional automation frameworks, but who had great programming and learning skills, could develop algorithms and had picked up other programming languages. Those candidates were equally good in my book, and I would definitely hire such a person.

If your project is in Selenium with Cucumber, it may be unwise to hire someone already working on exactly that technology, because there is no new learning for them, and the increased salary will keep them content only for so long. Look for people who have the level of exposure you need, not necessarily in the same tool. For elementary positions, a basic understanding of how UI automation generally works, the common problems faced in automation and how the DOM works would suffice.

The predicament

This position is fundamentally hard to fill. QA folks with the ‘testing acumen’ have traditionally kept their distance from the technical aspects of developing software. Development folks have the technical exposure but lack the testing acumen. Where to find this mixed breed!

Hence, focus on the fundamentals. I feel that by digging too deep for testing knowledge or a specific technical skill set, the job gets even harder. This is not to say hire someone weak in either of the two; instead, hire someone with the ‘aptitude’, not necessarily x+ years against a huge checklist.

The job description should revolve around the fundamentals, no more than a handful of bullets. As the founding smart creatives put it, “Widen the aperture”. Employers tend to narrow the field to candidates with very specific backgrounds only. We had an interesting candidate once we widened the aperture: he had an accounting background and demonstrated great skill in several development technologies too. A lot of hidden gems are left out when resume filters are too tight.

Care to share what else you would consider while hiring an automation engineer?