Users of enterprise software, for instance, have long complained about the poor user experience, the inflexibility, and the lack of usability of existing tools. A survey by Forrester found that:
75% of users couldn’t easily access information from existing enterprise systems.
69% of enterprise employees want an engaging, mobile-first experience, yet only 55% of enterprises have implemented three or fewer mobile apps.
Because of poor design, 62% of employees delay tasks that require them to log into multiple systems, hurting overall efficiency and outcomes.
The usability issues with existing enterprise tools have contributed to the shadow IT phenomenon, where enterprise users increasingly turn to user-friendly third-party tools like Dropbox or Slack instead of sticking to officially approved software, sometimes with serious security and data-governance repercussions.
However, user-centred design shouldn't be viewed as a "nice-to-have". Software design and the product itself are increasingly inseparable. A confusingly designed internet banking application will make customers jump ship even if the bank offers attractive interest rates or better perks compared to the competition.
Poorly designed software has real world implications beyond a user spending twice as much time trying to understand how a system works.
For instance, last year alone, Tricentis reported that software failures within the transport industry led to the recall of 21,228,066 cars and the grounding of 8,831 planes, affecting 22,712,987 people.
The government sector has seen the most failures (for example, the Australian Census blunder or the US Healthcare.gov debacle), with far-reaching impact for the millions of people who depend on public sector services. The global cost of government software failure was estimated at $5,703,579,938 in 2016.
On the whole, the cost of software failure has risen from 2015 to 2016 in terms of people affected (up 2.3% to 4.4 billion), assets impacted (up 260% to 1.1 trillion), and companies afflicted (up by 52% to 363).
The seeds of software failure are sown early in a project, when business requirements are not managed properly (CIO magazine found that as many as 71% of projects fail for this reason) or when the end user doesn’t have a say in the design and execution.
Prioritising business requirements
So if you want to set your project up for success, you will have to focus on getting your requirements right. In the context of UAT, the sponsor is in charge of setting the business requirements, which are then turned into test cases.
The usability tests will cover both functional and non-functional (stress, reliability, performance, speed, etc.) requirements.
One way to prioritise business requirements and user stories is to use the MoSCoW method, which Wikipedia defines as:
“A prioritization technique used in management, business analysis, project management, and software development to reach a common understanding with stakeholders on the importance they place on the delivery of each requirement.”
The MoSCoW acronym breaks down as:
Mo: Must have this test done.
S: Should run this test, if possible.
Co: Could run this test if other issues are fixed.
W: Would run this test if possible in the future.
This arrangement makes it easy for sponsors to eliminate any kind of confusion while drawing up business requirements.
This prioritisation ensures that the most important tests are conducted first, and more importantly, tests which don’t really matter in the larger scheme of things are deferred for a later date. Given the expense and time requirement of the UAT process, this formula guarantees the highest impact and keeps UAT cycles short and manageable.
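The MoSCoW ordering above can be sketched in a few lines of code. This is a minimal illustration, not Bugwolf's implementation; the test names and priority labels are hypothetical:

```python
from enum import IntEnum

# MoSCoW priorities, ordered so that sorting runs "Must" tests first.
class Priority(IntEnum):
    MUST = 0    # Must have this test done
    SHOULD = 1  # Should run this test, if possible
    COULD = 2   # Could run this test if other issues are fixed
    WOULD = 3   # Would run this test if possible in the future

# Hypothetical UAT test cases tagged with their MoSCoW priority.
tests = [
    ("Password reset email is delivered", Priority.COULD),
    ("User can log in with valid credentials", Priority.MUST),
    ("Dashboard loads in under two seconds", Priority.SHOULD),
    ("Profile page supports dark mode", Priority.WOULD),
]

# Run highest-priority tests first; defer "Would" items to a later cycle.
ordered = sorted(tests, key=lambda t: t[1])
to_run = [name for name, priority in ordered if priority != Priority.WOULD]
```

Sorting on an `IntEnum` keeps the schedule aligned with business priority, so a shortened UAT cycle always drops the lowest-value tests first.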
UAT acceptance criteria
The UAT acceptance criteria (UAC) are a series of simple statements that distill the business requirements and give stakeholders an idea of the time and costs involved in the entire project.
When you get your UAC right, you will be laser-focused in your testing processes rather than embarking on a wild goose chase. [Figure: UAC decision tree]
The scope of UAT
Because UAT deals with user experience, it should ideally cover:
Operational Requirements: Are the requirements around data capture, data processing, data distribution, and data archiving met?
Functional Requirements: Are all business functions met as per expectations?
Interface Requirements: Is data passing through the system as per business requirements?
UAT isn’t about testing whether the radio buttons on a particular form function properly. Such checks belong to the earlier testing phases whose completion forms part of the entry criteria for UAT, which include:
Completion of unit testing, integration testing and systems testing
Absence of show-stopper, high-, or medium-severity defects during integration testing
Fixing of all major errors, except for cosmetic errors
Defect free completion of regression testing
Complete Requirements Traceability Matrix (RTM)
Communication from the systems testing team certifying that the system is ready for UAT.
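The entry criteria above amount to a go/no-go gate before UAT begins. A minimal sketch of such a gate, with illustrative (hypothetical) criterion names:

```python
# Hypothetical entry-criteria checklist evaluated before starting a UAT cycle.
entry_criteria = {
    "unit_testing_complete": True,
    "integration_testing_complete": True,
    "systems_testing_complete": True,
    "no_open_high_or_medium_defects": True,
    "major_errors_fixed": True,           # cosmetic errors excepted
    "regression_testing_defect_free": True,
    "rtm_complete": True,                 # Requirements Traceability Matrix
    "systems_test_team_signoff": False,   # still awaiting certification
}

# Any unmet criterion blocks the UAT cycle from starting.
blockers = [name for name, met in entry_criteria.items() if not met]
ready_for_uat = not blockers
```

Tracking the gate as explicit named checks makes it easy to report exactly which criterion is holding up UAT, rather than a vague "not ready" status.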
Diligently following this process will provide much greater clarity and certainty around the purpose of UAT within your organisation. This will not only help fast-track and streamline the process as you move forward, but also help win support from stakeholders in other areas of the organisation.
If you are new to Bugwolf and would like to learn more about how we help with user acceptance testing, the quickest and easiest way to find out more is to Request A Demo.