
Why Test Automation Fails (...And What To Do About It)

When done right, test automation can deliver significant efficiency and cost improvements. Unfortunately, many organisations fail to realise the full potential of their automation programs because of a handful of common challenges. Here's how to overcome them...

In June 2009, an Air France passenger flight from Rio de Janeiro to Paris crashed in the middle of the Atlantic Ocean, leaving no survivors.

Investigators found that the plane's speed indicators malfunctioned and the autopilot disengaged when the aircraft hit turbulence. When the pilots took over the controls, they erroneously tried to gain altitude instead of doing the opposite, putting the aircraft into an aerodynamic stall mid-flight.

This tragedy shows once again that no automated system is perfect. When it comes to complex tasks like flying a passenger jet or running tests on an enterprise software system, automation without human involvement will almost always end badly.

This is why an overwhelming majority of testing professionals say that test automation by itself doesn't save money or time, or guarantee quality software.

But there's no stopping automated testing, mainly because of a sea change in the business environment.

The attractions of automated testing

Across all industries, enterprises are wrestling with the challenges of digital transformation.

While business used to drive IT, the trend has reversed. Nowadays IT plays a major role in how products and services are delivered.

Enterprises can no longer get away with clunky software. The consumerisation of enterprise software and the rising expectations of customers (both internal and external) mean that the mandate of testing and QA departments isn't just about reporting bugs.

It has expanded to business outcomes like revenue growth, customer satisfaction, and uninterrupted business operations.

Couple that with the rise of Agile and DevOps and it's easy to see why automated testing makes business sense. Manual testing alone can no longer keep up with rapid development and release cycles.

And then there’s the ROI on automated testing.

While you shouldn't run automated tests if your only goal is to save money, ROI considerations will come up, and it's good to understand the maths behind automated testing. While your mileage may vary, this post lays out a roadmap for calculating your ROI.

“If a tester on average costs $50 an hour and if a senior tester who creates automated tests costs $75 an hour, that would cost about $400 and $600 respectively per day per tester.

Now, consider a team of 10 testers, five senior-level and five entry-level, with a monthly loaded cost of $105,000 (for 168 hours per month). We'd get a total of 1,350 hours costing $78.00/hour (this is assuming each tester realistically works 135 hours per month due to breaks, training days, vacations, etc.). If we automate testing, the cost of labor would remain the same, but for the effort of 3 test automation engineers, we'd achieve 16 hours a day of testing and run 5x more tests per hour.

This results in the equivalent of 5,040 hours per month of manual testing created by the three test automation engineers. Then, consider the rest of the team doing manual testing (7 people x 135 hours/month). That amounts to 945 hours more, ending with a combined total of 5,985 hours of testing at $17.54/hour ($105,000 divided by 5,985 hours)."
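To sanity-check the arithmetic in that quote, here is a minimal sketch of the same calculation in Python. All figures come from the quote itself; the only assumption added is 21 working days per month, which the 5,040-hour figure implies.

```python
# Sketch of the ROI arithmetic from the quote above.
# Assumption: 21 working days per month (implied by the 5,040-hour figure).

MONTHLY_LOADED_COST = 105_000   # 5 senior + 5 entry-level testers
PRODUCTIVE_HOURS = 135          # realistic hours per tester per month
TEAM_SIZE = 10

# Manual-only baseline
manual_hours = TEAM_SIZE * PRODUCTIVE_HOURS           # 1,350 hours
manual_rate = MONTHLY_LOADED_COST / manual_hours      # ~$77.78/hour

# With 3 test automation engineers: their suites run 16 hours a day
# and execute 5x more tests per hour than a manual tester would.
WORKING_DAYS = 21
automated_equiv = 3 * 16 * WORKING_DAYS * 5           # 5,040 manual-equivalent hours
remaining_manual = 7 * PRODUCTIVE_HOURS               # 945 hours
total_hours = automated_equiv + remaining_manual      # 5,985 hours
blended_rate = MONTHLY_LOADED_COST / total_hours      # ~$17.54/hour

print(f"Manual only: {manual_hours} h at ${manual_rate:.2f}/h")
print(f"With automation: {total_hours} h at ${blended_rate:.2f}/h")
```

The headline result is the blended rate: the same monthly spend buys roughly 4.4x more testing hours.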

Another benefit of automated testing is the speedy discovery of bugs. The sooner bugs are discovered, the cheaper they are to fix (bugs found post-release cost 5x more to fix than bugs found before release).

But people are still intimidated by automated testing.

The challenges behind automated testing

Running centralised tests in centres of testing excellence doesn't cut the mustard in a decentralised, fast-moving agile development environment.

While the need of the hour is agile and, in many environments, automated testing, there are still a number of hurdles to overcome.

These are the top challenges of applying testing to agile development as reported by Capgemini in their 2017 World Quality Report:

  • Early involvement of testing team in inception phase or sprint planning (44%)
  • Difficulty in identifying the right areas on which testing should focus (44%)
  • Lack of appropriate test environment and data (43%)
  • Lack of a good testing approach that fits with the agile development method (43%)
  • Lack of professional test expertise in agile teams (43%)
  • Inability to apply test automation at appropriate levels (41%)
  • Difficulty re-using and repeating tests across sprints/iterations (40%)
  • No real difficulties with testing in agile (1%)

Apart from these, here are some of the reasons why automated testing fails:

1. Not treating a testing project like a software development project

Automated testing projects are like software projects, and testers are increasingly expected to be able to code, or at the very least be familiar with software development methodologies.

Because of the complexity of setting up automated testing environments and the specialised knowledge needed to write scripts and plan tests, a small team of testers with coding experience will beat a large but poorly skilled team any day.
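For a sense of the coding skills involved, here is a minimal sketch of the kind of automated browser test such a team writes, assuming Python with pytest and Selenium. The URL, element IDs, and credentials are hypothetical placeholders.

```python
# Minimal sketch of an automated UI test (pytest + Selenium).
# The URL, element IDs, and credentials are hypothetical placeholders.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def browser():
    driver = webdriver.Chrome()   # assumes a Chrome driver is available
    yield driver
    driver.quit()


def test_login_shows_dashboard(browser):
    browser.get("https://example.com/login")                      # hypothetical URL
    browser.find_element(By.ID, "username").send_keys("test-user")
    browser.find_element(By.ID, "password").send_keys("secret")
    browser.find_element(By.ID, "submit").click()
    assert "Dashboard" in browser.title                           # hypothetical assertion
```

Even a script this small exercises the skills larger suites are built from: fixtures for setup and teardown, element locators, and explicit assertions.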

2. Not running enough manual testing first

It may sound paradoxical, but if you want to automate software testing you first have to know what to test manually. Knowing a system inside out lets you optimise testing for higher impact, and you can only gain that insight after you have gotten your hands dirty with the system.

You also have to keep in mind that automation is a means to an end, not the end itself. Ideally, running automated tests frees up your team for higher-value tasks like exploratory testing, or destructive testing, where testers try to break the product or make it do things it was not designed to do.

3. Not focusing on the philosophy behind automated testing

Ad hoc, unstructured automation is a recipe for failure. Automation should always be approached in an ongoing, systematic way.

Rather than expecting automation processes to be perfect from day one, organisations should start small and adopt a culture of continuous improvement where teams are rewarded for increasing test coverage over time.
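One concrete way to reward rising coverage is a "coverage ratchet": the build fails whenever coverage drops below the best figure achieved so far. A minimal sketch, assuming tests run under coverage.py (e.g. `coverage run -m pytest`) and using a hypothetical threshold file as the record:

```python
# Sketch of a "coverage ratchet" CI step. Assumes coverage.py has
# already produced a .coverage data file (e.g. `coverage run -m pytest`).
# The threshold file is a hypothetical convention, not a library feature.
from pathlib import Path

import coverage

THRESHOLD_FILE = Path("coverage_threshold.txt")  # hypothetical file

cov = coverage.Coverage()
cov.load()
total = cov.report()  # prints a report and returns total coverage (%)

best = float(THRESHOLD_FILE.read_text()) if THRESHOLD_FILE.exists() else 0.0
if total < best:
    raise SystemExit(f"Coverage fell: {total:.1f}% < previous best {best:.1f}%")

# Ratchet upwards: record the new high-water mark.
THRESHOLD_FILE.write_text(f"{total:.1f}")
print(f"Coverage OK: {total:.1f}% (previous best {best:.1f}%)")
```

The ratchet never demands perfection on day one; it only asks that coverage never goes backwards.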

Similarly, teams should be aware of the limitations and blind spots that may exist, even with robust automation processes, and ensure they complement automation with appropriate manual testing.

4. Not considering the complete costs of automation

Automating tests isn't simply about the cost of the tool. There are also the costs of implementing the test environment into your IT stack, the costs of creating the tests, and the cost of maintenance.

In fact, test environment management was one of the biggest challenges respondents faced, according to the World Quality Report 2016-17:

  • 48% had to maintain multiple versions of the test environment.
  • 46% didn’t have the facilities to book and manage their environments.
  • 46% faced issues with the right mix of tools.

Another point to consider is corporate culture. If there isn't a serious mandate from the top in favour of automated testing, resistance to change will kick in and that perfectly set-up automated testing rig will sit unattended, wasting time and money.

5. Not identifying a proper testing strategy

Most organisations run end-to-end tests because they focus on real user scenarios.

Developers love it because it offloads most of the testing to others.

Decision makers favour it because these tests reflect what the user will experience, and testers get the satisfaction of reporting bugs that might come up in an actual user environment.

But automating end-to-end tests is inefficient because the user is ultimately concerned not with the number of bugs found, but with whether they get fixed. End-to-end tests can also become slow and unreliable if there's no way to isolate the code that causes failures.

Automating this inverted pyramid anti-pattern will frustrate your testing efforts, and will essentially result in a Garbage In Garbage Out scenario.

Inverted Test Automation Pyramid

Instead, do the opposite. Invert your strategy so that you run many unit tests for every end-to-end test. This will uncover bugs faster and produce more reliable results.

Once you have the process down pat, follow point #2 above (get your manual testing right first) and automate it.

Google follows the pyramid approach to testing and recommends a 70/20/10 split across unit, integration, and end-to-end tests as a starting point.
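The base of that pyramid is made of small, fast unit tests. Here is a minimal sketch in Python (pytest); `calculate_discount` is a hypothetical function standing in for your own code under test.

```python
# Base of the pyramid: many small, fast unit tests like these.
# `calculate_discount` is a hypothetical function under test.
import pytest


def calculate_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_ten_percent_discount():
    assert calculate_discount(200.0, 10) == 180.0


def test_invalid_percent_raises():
    with pytest.raises(ValueError):
        calculate_discount(200.0, 150)
```

Tests like these run in milliseconds and fail at the exact line that broke, which is what makes the 70% base of the pyramid affordable.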

Ideal Testing Automation Pyramid

Can automated testing and UAT co-exist?

Test automation isn't a replacement for UAT. Automation helps you find problems you already know to look for, far more efficiently.

But UAT is about uncovering bugs you had no idea existed. Bugwolf UAT teams routinely uncover hundreds of bugs while working with enterprises that run extremely sophisticated automated testing systems.

Once you have run user acceptance tests, you can use the insights gathered from that sweep to tweak your automated test cases and improve overall code quality.
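For example, a bug uncovered during UAT can be locked in as an automated regression test so it never resurfaces. A minimal sketch in Python (pytest), where the scenario and the `parse_postcode` function are hypothetical stand-ins for the real code:

```python
# Sketch: turning a UAT finding into an automated regression test.
# Hypothetical scenario: UAT found that postcodes entered with a
# leading space broke checkout. `parse_postcode` is a placeholder.
import pytest


def parse_postcode(raw: str) -> str:
    """Normalise a user-entered postcode."""
    cleaned = raw.strip().upper()
    if not cleaned:
        raise ValueError("postcode is empty")
    return cleaned


def test_leading_whitespace_regression():
    # Regression test capturing the UAT-discovered bug.
    assert parse_postcode(" 3000") == "3000"


def test_empty_postcode_rejected():
    with pytest.raises(ValueError):
        parse_postcode("   ")
```

Each UAT sweep then makes the automated suite a little smarter than it was before.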

This process will eventually let you ship code quickly without compromising on quality and reliability.

So, what’s preventing your automated testing campaigns from showing the desired results?

Bugwolf helps digital and delivery teams release software faster with more confidence by unblocking the software testing bottleneck and increasing testing coverage.
Learn More
