The Automation Guild 2018 conference was a great event, with people from around the world coming together to learn and improve their automation efforts.

I had the opportunity to be part of the expert round table panel along with Angie Jones and Oren Rubin, which was a great experience. There were a lot of great questions, which showed how awesome the community is. I'm sharing some of those questions here, along with teasers from the answers.

 

How to catch bugs earlier

If you want to catch issues earlier in the cycle, then essentially you are talking about shifting left: executing tests earlier in the cycle to get early feedback. Divide the tests across the different stages of the development life cycle: some can be executed during development, some during patch testing, some during regression testing, basic checks after deployment to production, and so on. Automated checks go a long way towards achieving this, but ultimately it's going to be testers who really catch the issues.
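To make "dividing the tests across stages" a little more concrete, here is a minimal sketch of one way to do it (my own illustration, not something from the panel), assuming a Mocha-style runner where a stage tag is embedded in the test title and each pipeline stage filters on its tag:

```javascript
// Assumed convention: a "@smoke" or "@regression" tag in the title marks which
// stage a test belongs to. Each stage then runs only its subset, e.g.
//   mocha --grep @smoke        (basic checks right after a production deployment)
//   mocha --grep @regression   (the fuller nightly regression run)
describe('login @smoke', () => {
  it('signs in with valid credentials', async () => {
    // drive the login page and assert the dashboard loads ...
  });
});

describe('discount codes @regression', () => {
  it('rejects an expired discount code', async () => {
    // exercise the less critical edge case in the slower, broader run ...
  });
});
```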

The purpose of automation is not to catch bugs, but rather to take over the mundane tasks which keep testers from spending their time 'checking' functionality. Automation itself is not going to report a lot of bugs; rather, it enables testers to catch more of them, a great point from Angie.

 

Which coding language is recommended

In my opinion, more important than selecting a language is developing an aptitude for designing algorithms. Learn the fundamentals of programming that are common across all languages. Once you can develop an algorithm to solve a problem, language selection becomes less important. Plus, languages keep changing, and you can never stick to one language forever. If you start by learning a particular language, for instance Java, you might get tangled up in syntax issues and debugging problems rather than actually learning to code.

A great point from Oren was about JavaScript: it's a very easy language to begin with and is very popular (in fact, the most popular) these days. More importantly, I think it makes a lot of sense for automation engineers, since UI automation is usually a big part of the job and JavaScript gives you synergies with client-side scripts and the browser's native functions. Oren also mentioned promises, a very powerful concept which can certainly help with dynamic delays.
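To show what promises and dynamic delays can look like in practice, here is a minimal polling helper (my own sketch, not code from the panel); the `page.$` call in the usage comment assumes a Puppeteer/Playwright-style page object:

```javascript
// Resolve once `condition` returns a truthy value, or reject after `timeoutMs`.
// Polling a condition replaces fixed sleeps, which are a common source of flakiness.
function waitFor(condition, { timeoutMs = 10000, intervalMs = 250 } = {}) {
  const deadline = Date.now() + timeoutMs;
  return new Promise((resolve, reject) => {
    const poll = async () => {
      try {
        if (await condition()) return resolve();
      } catch (err) {
        // Ignore transient errors (e.g. the element is not in the DOM yet) and keep polling.
      }
      if (Date.now() >= deadline) {
        return reject(new Error(`Condition not met within ${timeoutMs} ms`));
      }
      setTimeout(poll, intervalMs);
    };
    poll();
  });
}

// Usage inside an async test step (page object assumed from a Puppeteer/Playwright-style driver):
// await waitFor(async () => (await page.$('#checkout-button')) !== null);
```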

 

Retry failing tests

Rerunning the flaky test is sometimes suggested as a solution to flaky tests: if it does not fail consistently over multiple tries, the reasoning goes, there is no bug here, just a script issue.

All of us on the panel thought retrying flaky tests is not a great idea. It adds a lot of execution time and undermines the goal of avoiding flakiness in the first place. Instead, avoid flakiness at all costs. There are a lot of ways to do that; use what works best for you. Apart from tips like using dynamic delays and the AUT's client-side functions, try breaking the test into smaller tests. Longer scripts have a greater tendency to fail and cause problems.
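As a rough sketch of the "smaller tests" tip (my own illustration, not from the panel), a long end-to-end script can be broken into independent tests that each prepare their own state; `createUserSession` and `seedCart` below are hypothetical API-level helpers, and `page` assumes a Puppeteer/Playwright-style object:

```javascript
// Hypothetical helpers that set up state through the application's API
// instead of repeating long UI flows in every test.
const { createUserSession, seedCart } = require('./test-helpers');

describe('checkout', () => {
  it('adds an item to the cart', async () => {
    const { page } = await createUserSession();            // logged-in state via API, not the UI
    await page.goto('https://shop.example.com/products/42');
    await page.click('#add-to-cart');
    // assert the cart badge shows one item ...
  });

  it('completes payment for a prepared cart', async () => {
    const { page, session } = await createUserSession();
    await seedCart(session, [{ productId: 42, qty: 1 }]);   // skip the UI steps covered above
    await page.goto('https://shop.example.com/checkout');
    // assert the order confirmation appears ...
  });
});
```

Each test now fails (or flakes) on its own behaviour, which keeps runs shorter and diagnosis easier.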

The ultimate solution for flakiness is a solid automation framework, built to support the execution and to handle unexpected states of the application.

 

How to elevate automation’s success to management

The goal of automation is to identify risks and give warning of potential issues before they reach the field. Unfortunately, many times we end up working on things which do not directly translate into value towards this goal. It's also hard to get metrics which reflect this, but we should not stop trying. The 'metric black hole', a term from the book 'Deep Work', touches on this concept as well: metrics on productivity tend to be subjective and very hard to measure.

One of the values at Quality Spectrum is generating 'business value'. This point alludes to that goal: getting tangible results proportionate to the effort put in. It's a long and complex road for which we certainly don't have all the answers yet, but it is an important enough problem to solve. For starters, I feel measuring how much test coverage we have is a better metric. Not in the sense of the number of tests executed, but how much of the code base we have tested. Tools from companies like SeaLights can show you which areas of the code were executed while you were running your tests and which areas remain to be tested. Another angle is coverage from the user's perspective. This topic deserves a separate post, so I'll just pause the discussion here.

A great point from Oren was the value of seeing tests pass. The assurance of certain tests passing does add confidence in the build and allows testers to focus on other areas. Seeing a green mark on your automation results has value and should not be dismissed.

Management, for the most part, does not have a clear idea of how to measure value from automation and is often looking at the wrong metric. This comment from Angie was so true; I too cannot recall management anywhere having a clear vision of how to assess automation's success, unless the person at the top has done automation themselves.

 

If you did not have the opportunity to join the guild this year, I'd recommend doing so. It gives a broad idea of what's going on in the community and an introduction to lots of new concepts and tools which you might otherwise not have known were important, or where to start with them.

 

A lot of questions went unanswered at the first day's round table I was on, and I am in the process of creating video replies for them. The answers are being uploaded to this playlist.

 

If you have any questions you’d like me to talk about, send them to questions@quality-spectrum.com.