Bug Blog

Latest News In Software Testing, Design, Development, AI And ML.

Software performance - the test lab vs. the field

Increasing performance has been an important aspect of digital technology ever since the first truly digital machines appeared in the late 1940s. These days software has become as important as, if not more important than, hardware, and ensuring software performance in the field has become a critical part of software testing as more and more of society's functions come under the control of software applications.

When we say performance, we generally mean factors such as memory usage, disk space, disk operations and CPU cycles, as well as how smoothly a new application interacts with established applications and with the operating system it runs on.
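As a rough illustration of what measuring those factors can look like in practice, the short Python sketch below samples CPU, memory and disk figures for a running process. It assumes the third-party psutil package is installed, and the process ID shown is purely hypothetical.

import psutil

# Hypothetical process ID of the application under test.
pid = 1234
proc = psutil.Process(pid)

cpu_pct = proc.cpu_percent(interval=1.0)        # CPU usage over a one-second sample
rss_mb = proc.memory_info().rss / (1024 ** 2)   # resident memory, in megabytes
io = proc.io_counters()                         # cumulative disk reads/writes (not available on every platform)
disk_free_gb = psutil.disk_usage("/").free / (1024 ** 3)

print(f"CPU {cpu_pct:.1f}%  memory {rss_mb:.1f} MB  "
      f"disk reads {io.read_count}  writes {io.write_count}  "
      f"free disk {disk_free_gb:.1f} GB")

Sampling figures like these over the life of a test run, rather than at a single point, is what tends to reveal slow leaks and creeping resource usage.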

Much of performance is established through functional testing, whose purpose is to verify that the product works correctly. This means developing test scenarios that reflect, as closely as possible, the environment the application will operate in, while also pushing the envelope by confronting it with scenarios that are less likely but could cause serious problems if they arose in the field.

The problem is that it's impossible to cover every contingency. The best anyone can do is test against the probable, the possible and the unlikely-but-still-possible. The component or system must meet its guidelines: it must respond correctly to input and perform acceptably in terms of time and usability, as defined by stakeholder requirements. But software must also be robust. It must perform well under adverse conditions and deal gracefully with incorrect inputs.
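To make that concrete, here is a minimal, purely illustrative test sketch in the pytest style. The parse_order function and InvalidOrderError exception are hypothetical names, not taken from any real product; the point is simply that a suite should cover the probable, the possible and the malformed, and expect a controlled failure rather than a crash.

import pytest

from orders import parse_order, InvalidOrderError  # hypothetical module under test

@pytest.mark.parametrize("payload", [
    '{"id": 1, "qty": 3}',    # the probable: a normal, well-formed order
    '{"id": 1, "qty": 0}',    # the possible: a boundary value
])
def test_valid_orders_are_accepted(payload):
    assert parse_order(payload) is not None

@pytest.mark.parametrize("payload", [
    "",                            # empty input
    "not JSON at all",             # garbage input
    '{"id": "abc", "qty": -5}',    # wrong types and a negative quantity
])
def test_malformed_orders_fail_cleanly(payload):
    # Adverse input should produce a controlled error, never a crash.
    with pytest.raises(InvalidOrderError):
        parse_order(payload)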

There is a big difference between the test lab and the field, and it comes down to predictability. The operational environment contains human beings, who don't always behave predictably, and situations that can't always be anticipated. This is why it is important to retest software when conditions change.

The Patriot missile batteries of the First Gulf War are a case in point. They were credited with shooting down a number of Scud missiles, and they worked well enough, except when they didn't work at all. The Patriot's computer required a very accurate onboard clock to calculate the trajectory of an incoming Scud. Unfortunately, a software glitch that never showed up in testing caused that clock to drift further from real time the longer the system remained switched on. The problem wasn't spotted because the system was never run for long stretches during tests, and no one realized that soldiers in a combat zone would simply turn the system on and leave it on. The drift caused the Patriot battery at Dhahran to fail to intercept an incoming Scud, which struck a barracks and cost the lives of twenty-eight soldiers.
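The arithmetic behind that drift, as documented in the public GAO report on the incident (GAO/IMTEC-92-26), is worth spelling out: the clock counted tenths of a second and converted them to seconds using a 24-bit fixed-point approximation of 1/10, losing roughly a tenth of a microsecond per tick. The rough calculation below, purely illustrative, shows how that tiny error becomes a fatal one after about 100 hours of continuous operation.

# Back-of-the-envelope reconstruction of the Patriot clock drift, using the
# figures in the public GAO report (GAO/IMTEC-92-26). Illustrative only.

error_per_tick = 0.000000095       # seconds lost each time a tenth of a second is counted
uptime_hours = 100                 # the Dhahran battery had been running for about 100 hours
ticks = uptime_hours * 3600 * 10   # number of 0.1-second ticks in that uptime

clock_error = ticks * error_per_tick           # roughly 0.34 seconds of drift
scud_speed_mps = 1676                          # approximate Scud velocity in meters per second
tracking_error = clock_error * scud_speed_mps  # several hundred meters off target

print(f"clock drift: {clock_error:.2f} s, tracking error: about {tracking_error:.0f} m")

A soak test that simply left the system running for days on end would have surfaced the drift long before it mattered in the field.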

While most software glitches do not result in fatalities, they can result in lost revenue, major rework and public relations problems. That makes it important to test as thoroughly as possible and to revisit deployed software from time to time, to make certain that changing circumstances haven't undermined its viability.

Bugwolf helps digital and delivery teams release software faster with more confidence by unblocking the software testing bottleneck and increasing testing coverage.
Learn More
