Over the past few months, our automation team had the feeling that our UI tests were not covering enough ground compared to the effort being invested in them. That’s when we got serious about API testing. This post and the ones that follow build the case for investing in API automation.

User Interface automation (aka Test Automation)

The term ‘test automation’ here is misleading. It is often used to mean User Interface (UI) automation, which is really just one of the many types of automation available. Back in the day this was the most prevalent type of automation software testers did, but the community has evolved and we now have several more to choose from. In short, automation is no longer synonymous with UI automation.

To date, UI tests remain the most common type of automation test engineers write; the first names that come to mind when thinking of automation in software testing are Selenium and the like. Before going into why you should invest in automation at the API level, let’s look at some drawbacks of UI automation, despite its widespread use.

Time-intensive development

A UI test mimics the end user, so these scripts perform all the tasks a person would do on the front end. That typically involves loading screens and pages, writing and reading a bunch of fields and elements, and lots of waits sprinkled in between. Scripting all of these actions, even with a highly reusable framework, takes considerable time. Yet the end result of the whole exercise is usually just a handful of data exchanges between the client and the server. If we simply constructed the data packet the client sends to the server (for example, a POST request), it would take comparatively few lines of code and be far more uniform too. At the unit level, things get even easier.
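To make the contrast concrete, here is a minimal sketch in Python using Selenium and the requests library. The URL, element locators and payload are hypothetical placeholders, not taken from any real application; it only illustrates the same “create order” action exercised once through the browser and once as the single POST the form ends up sending.

```python
# Minimal sketch: the same hypothetical "create order" action exercised
# through the UI versus directly against the API. The URL, locators and
# payload below are illustrative assumptions, not a real application.
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


def create_order_via_ui():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/orders/new")  # load the page
        wait = WebDriverWait(driver, 10)
        wait.until(EC.presence_of_element_located((By.ID, "customer")))
        driver.find_element(By.ID, "customer").send_keys("ACME Ltd")
        driver.find_element(By.ID, "quantity").send_keys("5")
        # ... dozens more fields, dropdowns and waits in a real form ...
        driver.find_element(By.ID, "submit").click()
        wait.until(EC.visibility_of_element_located((By.CLASS_NAME, "confirmation")))
    finally:
        driver.quit()


def create_order_via_api():
    # The same data exchange, expressed as the single POST the form produces.
    response = requests.post(
        "https://example.test/api/orders",
        json={"customer": "ACME Ltd", "quantity": 5},
        timeout=10,
    )
    assert response.status_code == 201
```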

That is just the development part; maintaining UI tests is even more cumbersome. Since the code base is many times bigger than that of API level tests, it is also far more expensive to maintain. More lines of code and more complexity mean a higher maintenance cost.

Fragility

Anyone who has been in automation longer than a few months has tasted the fragility of UI test scripts. By following good development practices the robustness of these scripts can be raised to a reasonable level, but UI tests are fragile by nature: UI elements can change quite rapidly, a browser version update can alter behavior, different browsers respond differently, and the list goes on.
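To make “fragile” concrete, here is a small sketch; the page, element id and CSS class are hypothetical, chosen only to show how a perfectly correct step stays coupled to the current markup and timing.

```python
# Illustrative only: a hypothetical Selenium step coupled to today's markup.
# If a developer renames the button id, or a browser update slows rendering
# beyond the wait, this step fails even though the feature still works.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


def submit_order_form(driver):
    wait = WebDriverWait(driver, 10)
    # Tied to a specific element id and to how fast this browser renders the page.
    button = wait.until(EC.element_to_be_clickable((By.ID, "btn-submit")))
    button.click()
    # Tied to a CSS class that exists today purely for styling.
    wait.until(EC.visibility_of_element_located((By.CLASS_NAME, "toast-success")))
```

The equivalent API check depends only on the request and response contract, which tends to change far less often than page markup.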

Back in the days of WinRunner, UI automation was mostly the only kind of automation people used, so there was no benchmark to compare UI tests against; the comparison was between good frameworks and tools and less efficient ones. Now, compared with other automation types on robustness, UI automation ranks at the bottom.

Slow execution

Big, complex UI automation batch runs take a ridiculously long time to execute. Compared to a person performing the steps manually they are fast, no doubt, but compared to other automation types they take far longer. To get a one-to-one comparison, we automated the same test in both our UI suite and our API suite: execution takes roughly 7 minutes at the UI level versus just 12 seconds at the API level (about 35 times faster). At that ratio, a UI batch run that takes 24 hours could complete in roughly 40 minutes at the API level.

The reason for this slowness is inherent in the way UI tests are designed. For instance, after filling in 20–50 fields with many delays in between, the end result is usually a REST call to the server and a response coming back. If we construct that request and validate the response directly, it takes less than 2% of the time spent filling in the form and then validating it through the UI.
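Picking up the hypothetical order example from earlier (endpoint, payload and response fields are still illustrative assumptions), the entire client-server exchange behind the long form reduces to a few lines and a single round trip:

```python
# Minimal sketch of "construct the request, validate the response".
# The endpoint, payload and response fields are assumptions for illustration.
import requests


def test_create_order_via_api():
    payload = {"customer": "ACME Ltd", "items": [{"sku": "A-100", "quantity": 5}]}
    response = requests.post(
        "https://example.test/api/orders", json=payload, timeout=10
    )

    assert response.status_code == 201
    body = response.json()
    assert body["status"] == "created"
    assert body["items"][0]["quantity"] == 5
```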

Reduced test coverage

A lot of business logic is hard to exercise from the front end. Some product features need very specific and detailed environment settings to trigger a particular line of code in the business layer, and UI automation is very inefficient for that. In many cases it is simply impractical, especially where other hardware is involved, for instance a car refueling device sending information to a web server once the refuel is done. Instead, you might want to simulate that message on the API / integration layer.
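As a hedged sketch of that idea (the endpoint, authentication-free call and message fields are assumptions, not a real device protocol), simulating the refuel event on the integration layer could look like this:

```python
# Hypothetical sketch: simulate the message a refueling device would send to
# the web server once a refuel completes, instead of driving real hardware.
# The endpoint and field names are assumptions made for illustration.
import requests
from datetime import datetime, timezone


def simulate_refuel_event(pump_id: str, litres: float) -> None:
    event = {
        "pump_id": pump_id,
        "litres": litres,
        "completed_at": datetime.now(timezone.utc).isoformat(),
    }
    response = requests.post(
        "https://example.test/api/refuel-events", json=event, timeout=10
    )
    response.raise_for_status()


# The test can then assert on the business outcome (e.g. the recorded
# transaction) without any hardware in the loop.
simulate_refuel_event("PUMP-07", 42.5)
```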

Even small tasks take a lot of time on the UI compared to API level or unit level tests, which naturally means covering a lot less ground. For instance, a 500 man-hour effort might cover 50 scenarios at the UI level, whereas at the API level the same effort might yield 200–250 tests, depending on the application and the respective automation frameworks.

Lastly, you can NEVER get 100% test coverage from the UI level alone. To get there you need API level and unit level tests to make sure every line of code is exercised.


In the next post, we’ll briefly cover what API level tests are and the advantages of using them.