Reducing UI Automation Problems to Debug

March 14th, 2018 | UI Automation

How much time would you guess the average programmer spends debugging code? The few studies I've looked at estimate around 50%. That insight points to two ways of improving efficiency. First, putting extra thought in while writing the feature or script the first time can reduce the time spent debugging many times over. Second, getting better at debugging itself can also have a large impact on a programmer's productivity.

In this article, with reference to UI automation, I'll briefly go over how to avoid having to debug problems in the first place, and then share some tips from my experience on making fixes faster and more effective.

Before we talk about how to debug, let's take a step back and discuss how to reduce the number of problems we have to debug in the first place. For UI automation, that largely means reducing flakiness, which keeps maintenance time manageable and leaves fewer problems to debug. Beyond flakiness, there are a few other common causes of issues, discussed further on.

Avoiding flakiness

It is impossible to write code that will never have a problem, so debugging is inevitable. However, if a program is written with the deliberate intention of fixing issues during the 'imminent' debugging exercise, you and your team are in for a long and bumpy road. It's far better to spend extra time thinking through the algorithm and the architectural impact than to jump straight into implementation.

Here are a few general tips on reducing ‘inherent’ flakiness and coding problems down the road.

Code complexity

Luckily, I started my career on safety-critical devices. One of the practices I picked up there was measuring code complexity, a concept I don't see in many software development teams these days.

"Cyclomatic complexity" is a software metric developed by Thomas J. McCabe, Sr. in 1976 to measure the complexity of a piece of code [1]. Using this (and other metrics), complexity is calculated with the intent of keeping the code below a specific benchmark value (for firmware written for embedded / IoT devices, the benchmark is 30).

The premise of the concept is that once code goes above a certain threshold, the software is more prone to defects [1] and can get into unknown states, so we should limit the amount of complexity we add to our code. Going into the details would require a separate article; to summarize, as the number of decisions within a method increases, break it into multiple methods, classes, or modules to reduce the complexity.
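To make this concrete, here is a minimal sketch of that refactoring idea. Every `if` in a method adds a decision point, so the first version carries all the complexity in one place; the second spreads the same checks across small, independently testable functions. The `validate_order` names and rules are hypothetical, purely for illustration.

```python
# One branchy method: every if adds a decision point, so cyclomatic
# complexity grows with each new rule added here.
def validate_order_complex(order):
    if not order.get("id"):
        return "missing id"
    if order.get("quantity", 0) <= 0:
        return "invalid quantity"
    if order.get("price", 0) < 0:
        return "invalid price"
    if not order.get("customer"):
        return "missing customer"
    return "ok"

# Refactored: each check lives in its own small function, keeping the
# complexity of every unit low and each rule testable in isolation.
def has_id(order):
    return bool(order.get("id"))

def has_valid_quantity(order):
    return order.get("quantity", 0) > 0

def has_valid_price(order):
    return order.get("price", 0) >= 0

def has_customer(order):
    return bool(order.get("customer"))

def validate_order(order):
    checks = [
        (has_id, "missing id"),
        (has_valid_quantity, "invalid quantity"),
        (has_valid_price, "invalid price"),
        (has_customer, "missing customer"),
    ]
    for check, error in checks:
        if not check(order):
            return error
    return "ok"
```

Adding a fifth rule to the refactored version means one new two-line function and one list entry, rather than another branch in an already tangled method.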

Framework design principles

Quality is built into the design; it cannot be painted onto a product once it's developed. The same goes for automation code: without a carefully thought-out framework, there is no way to have a stable automation run. Adding bandages over deeply infected wounds does not solve the problem; the root cause must be fixed.

Pillars of framework design

In my experience, there are four "pillars of automation framework design" to be followed, specifically for UI automation projects: maintainability, reusability, scalability and robustness. This is an extensive and important subject; to get started, you can read a brief description of all four pillars in this article. Following best practices while architecting reduces the number of issues we would otherwise see and have to debug.

Most flakiness in automated scripts is a direct result of poor framework design. You can either face a problem, learn from the mistake, and then correct it, or learn from others' experience and avoid introducing the problem at all. It's best to learn from others' mistakes.
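As one small illustration of the maintainability and reusability pillars, here is a sketch of the widely used page-object pattern: locators live in one place, so when the UI changes you fix one class instead of dozens of scripts. The `FakeDriver` below stands in for a real WebDriver so the sketch is self-contained; all names are hypothetical.

```python
class FakeDriver:
    """Stub that records interactions instead of driving a real browser."""
    def __init__(self):
        self.actions = []

    def find_element(self, locator):
        self.actions.append(("find", locator))
        return self

    def send_keys(self, text):
        self.actions.append(("type", text))

    def click(self):
        self.actions.append(("click",))

class LoginPage:
    # Locators centralized: if the UI changes, update them here only.
    USERNAME = "id=username"
    PASSWORD = "id=password"
    SUBMIT = "id=login-btn"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        # One reusable flow called by every test that needs a login,
        # instead of each test scripting the clicks itself.
        self.driver.find_element(self.USERNAME).send_keys(user)
        self.driver.find_element(self.PASSWORD).send_keys(password)
        self.driver.find_element(self.SUBMIT).click()

driver = FakeDriver()
LoginPage(driver).login("alice", "s3cret")
```

With a real driver, the tests themselves never mention element IDs, which is exactly the maintainability win when the UI inevitably changes.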

Environment stability

An automation environment can include a lot of things. While all of its components are important, here are the few with the most impact on flakiness.

Test data

Pre-requisite test data is the most common cause of flakiness. Whether test data belongs under the test environment might be debatable, but a wrong test data management process (or the absence of one) can wreak havoc on your results. You can read more on the subject in this article written for TEST Magazine.
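One common way to tame pre-requisite data is to have each test create exactly the data it needs and clean it up afterwards, rather than depending on shared records that another run may have changed. Below is a minimal sketch of that idea; the in-memory `db` dict stands in for a real database, and all names are hypothetical.

```python
import uuid
from contextlib import contextmanager

db = {}  # fake data store standing in for a real database

@contextmanager
def fresh_user(role="customer"):
    # A unique id per run avoids collisions with leftovers from
    # earlier or parallel runs.
    user_id = str(uuid.uuid4())
    db[user_id] = {"role": role}
    try:
        yield user_id
    finally:
        # Teardown runs even if the test body raises, so no stale
        # data survives to poison the next run.
        db.pop(user_id, None)

def test_user_role():
    with fresh_user(role="admin") as uid:
        assert db[uid]["role"] == "admin"
    assert uid not in db  # data is gone once the test finishes

test_user_role()
```

The same shape maps directly onto pytest fixtures or setup/teardown hooks in whatever runner you use; the key property is that no test depends on data it didn't create.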

Environment Setup

Setting up the environment for the AUT and the automation (especially in cases where you might be orchestrating one on the fly) can save a lot of wasted debugging time. Examples include configuring browsers: unblocking popups, clearing the cache, restore settings, autocomplete features and more. OS-level settings such as opening required ports, firewall rules, and any installations needed may apply as well.
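The reliable way to get such settings is to codify them, so every run starts from the same known state instead of whatever the machine happens to have. Here is a sketch that builds the browser configuration as plain data; the flag names are Chrome-style examples and should be treated as illustrative, not an exact or complete list.

```python
def build_browser_config(download_dir="/tmp/automation-downloads"):
    """Return the browser settings every automation run should start with."""
    return {
        "arguments": [
            "--incognito",               # fresh profile: no cache, no autofill
            "--disable-popup-blocking",  # popups the AUT opens stay testable
            "--disable-notifications",   # no permission prompts mid-test
        ],
        "preferences": {
            "download.default_directory": download_dir,
            "autofill.profile_enabled": False,
        },
    }

config = build_browser_config()
```

In a real suite this dictionary would feed something like Selenium's browser options at session start; the point is that the settings live in version control, not in a manually prepared machine.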

Patching

The patching and deployment process of your AUT might not strictly fall under the automation team's 'dominion', but it can have a great impact on automation test runs. Errors in patching happen, especially where the process is not seasoned enough.

Tools / Libraries

Tools and dependencies used for automation also need to be vetted. I've seen bugs in tools and libraries cause problems, for instance tools crashing or giving unexpected errors. It's best to test for such issues during the selection process, and to test the scalability of the solutions to be used.

Selection of scenarios to automate

Some tests are inherently going to cause problems during execution and be hard to maintain. Such tests are best not automated. The selection of scenarios to automate is again a separate subject; here we'll mention only a few important considerations.

Features expected to go through major change can be put off until the feature is reasonably agreed upon and implemented. Rapid changes, specifically around UI elements, can cost more in maintenance than testing manually for the first few iterations.

We talked about code complexity; AUT features can be complex as well, creating problems similar to those of complex code. A very long test, many different features tested in one flow, or an inherently complex single feature can all have this effect. In many cases such scenarios can be broken down into multiple smaller cases, reducing the complexity. Where that's not possible, it may be best not to automate them, or to have a strong strategy for doing so.
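The breaking-down idea can be sketched as follows: instead of one long test walking search, cart, and checkout in sequence (where any early failure hides everything after it), each step becomes an independent scenario that seeds its own starting state. The flow and all function names here are hypothetical stand-ins for real application calls.

```python
def test_search():
    # Scenario 1: search works on its own; the list is a stand-in
    # for a real search call against the AUT.
    results = ["widget-a", "widget-b"]
    assert "widget-a" in results

def test_add_to_cart():
    # Scenario 2: seeds its own product, so it does not depend on
    # the search test having passed first.
    cart = []
    cart.append("widget-a")
    assert cart == ["widget-a"]

def test_checkout():
    # Scenario 3: starts from a pre-filled cart rather than the
    # output of the previous two scenarios.
    cart = ["widget-a"]
    order = {"items": cart, "status": "placed"}
    assert order["status"] == "placed"

for test in (test_search, test_add_to_cart, test_checkout):
    test()
```

Each smaller test now fails (and gets debugged) independently, which is precisely the complexity reduction the section above describes.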

Debugging will live on

Taking all these precautions (reducing code complexity, automating less complex features, following design principles, stabilizing the environment, and choosing the right scenarios to automate) will reduce the number of problems you have to debug, but you will never reach a point where no maintenance or debugging is needed. In the next post, I'll go over tips on how to debug problems efficiently, without it growing into a constant worry about fixing broken code.