Three steps for an effective software testing strategy that ensures your product won't become a liability
Originally published September 30th, 2021 in Entrepreneur
Even the most well-funded and innovative development teams run into software challenges. Just look at the recent unforeseen software glitch that delayed a long-awaited test flight and temporarily kept the Mars helicopter Ingenuity on the ground.
If it can happen with tens of millions of dollars invested and the eyes of the world watching, you bet it can happen to startups and small businesses. Although they might not be in the business of exploring alien worlds, their software, whether a product itself or part of a supporting ecosystem, can be just as mission-critical. But if NASA can't even prevent a failure to launch, how can small businesses expect to?
The answer is a thorough and detailed quality assurance testing strategy. It's the only way to be sure the software you put your (and your investors') time, money and reputation into can take flight successfully.
Many companies bypass adequate QA testing processes for the sake of getting innovative products to market quickly. Startups especially tend to leave software testing on the back burner, and for understandable reasons: First, most new companies want to disrupt. Their game is to move fast, break things and make waves. They're developing and iterating quickly on their software. And a thorough QA testing process could slow that momentum down.
What's more, software testing can feel like just one more thing to add to a growing checklist. Developers are busy and regularly bogged down with mile-high lists of responsibilities and deadlines. When companies assume busy developers or product teams will handle testing, they create just another checkbox on those already massive lists. Most developers and product teams won't have the capacity to check every detail and run through QA testing as thoroughly as they should.
This isn't to suggest that startup founders and software developers are indifferent to buggy software. They've experienced software issues before and know they don't want to burn their reputation by creating poor user experiences. At the same time, they're not accustomed to setting up a QA department or baking software testing strategies into their everyday rhythms. Without those measures, they put their company, and in some cases even their users, at serious risk.
From a financial perspective, skipping a dedicated QA team is a straightforward misallocation of resources. Having developers run a QA strategy is wildly expensive, because developers tend to be paid handsomely. Expecting them to spend their time conducting QA tests when they could be developing makes little fiscal sense. (They also, broadly speaking, resent the work.)
Another risk involves brand and quality. Companies that release subpar products often get bitten by bad press, bad reviews and a bad reputation. That makes it much harder to win future investment dollars or confidently bring more products to market. If the buggy product leads to a major user problem, a lack of software testing could also turn into a legal liability.
Take the well-known computer ethics case of Therac-25. The machine was designed to administer radiation treatment to cancer patients with the help of an onboard computer. Whereas earlier models had relied on hardware interlocks for safety, Therac-25 relied on software. The product was released in 1982, but within five years it was recalled after patients reported being "burned" by the machine. Therac-25 had exposed six patients to massive overdoses of radiation; at least three died as a result, and the others suffered serious injuries. Later reviews by regulatory agencies identified inadequate software testing as part of the problem.
Of course, as one of the most disastrous software bugs in history, Therac-25 is an extreme example of what can go wrong. But it does highlight that even small bugs in software can cause massive problems. Still, many companies don't know where to begin pulling together an effective software testing strategy. The following steps can help:
1. Bake QA into the process from the start
Overlaying QA tests onto an existing software development process presents significant challenges. It's far easier to add QA in the beginning stages, even if the end game isn't to hire a dedicated QA person, team or outsourced agency. Having a laid-out, consistent process is more than half the battle when it comes to QA testing.
At my software development company, we start baking testing into the development process with user stories. Basically, these are just high-level descriptions of specific requirements. We build user stories into every new project to keep everyone clear on expectations. These user stories include objective acceptance criteria we must meet before considering the story complete. Without user stories and acceptance criteria, product requirements become ill-defined and problematic. Having the user stories in place avoids miscommunication around when a product's actually completed and whether it's doing what it's supposed to do.
Consider a product manager who writes a user story that describes the product's function but doesn't identify any acceptance criteria. The engineer could misinterpret the story and build something that doesn't match the product manager's intended vision. With no way to confirm that the work satisfies the original requirements, the product keeps getting passed along with no assurance that it's actually on track. Acceptance criteria provide a clear set of expectations to test against.
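To make this concrete, here's a minimal sketch, in TypeScript, of a user story captured as data, with acceptance criteria that must all pass before the story counts as complete. The types, the example story and the helper names are illustrative, not our actual tooling:

```typescript
// A user story captured as plain data, with acceptance criteria expressed as
// checks that must all pass before the story counts as complete. The types and
// the example story are illustrative, not a formal specification.
interface AcceptanceCriterion {
  description: string;
  passes: () => Promise<boolean>; // an automated check, or a recorded manual test result
}

interface UserStory {
  title: string;
  asA: string;
  iWant: string;
  soThat: string;
  acceptanceCriteria: AcceptanceCriterion[];
}

const passwordResetStory: UserStory = {
  title: "Password reset",
  asA: "registered user",
  iWant: "to reset my password by email",
  soThat: "I can regain access to my account",
  acceptanceCriteria: [
    {
      description: "A reset email is sent within 60 seconds of the request",
      passes: async () => true, // placeholder: wire up to a real automated test
    },
    {
      description: "An expired reset link shows an error message, not a blank page",
      passes: async () => true, // placeholder
    },
  ],
};

// The story is only "done" when every acceptance criterion passes.
async function isStoryComplete(story: UserStory): Promise<boolean> {
  const results = await Promise.all(story.acceptanceCriteria.map((c) => c.passes()));
  return results.every(Boolean);
}
```

The point isn't the particular format; it's that the criteria are written down, objective and checkable before anyone calls the story done.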
2. Create pre-deploy and post-deploy checklists
In the very early stages of my company, we relied on our developers (or worse, our clients) to test our products. This was before we had a QA team or any kind of software testing procedure. One night, a client called to say that his site was a blank, white page. After investigating, we discovered the root of the problem: the build we had deployed only worked for people who were already logged into the site. Anyone who was logged out experienced immediate failure. Our developer had only briefly checked the site while logged in.
Not a good look for our company at the time, but the experience did serve as a great incentive to institute QA testing checklists. After all, had we gone through a testing process, we never would have deployed a site that would break for nearly every user.
Product managers need to put both pre-deploy and post-deploy checklists in place to keep everything operating as expected. If you're not familiar with the practice of creating effective checklists, take a look at Atul Gawande's book The Checklist Manifesto. Gawande outlines how major industries, including medicine, use checklists to stave off complacency and enforce quality. He also gives great advice on keeping a QA software test checklist from drowning in minute detail and becoming untenable.
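To show how a single checklist item can be automated, here's a small sketch of a pre-deploy smoke test that would have caught our blank-page bug by loading the site with no session, i.e. logged out. The URL and the content-length threshold are placeholders, and it assumes a runtime with a built-in fetch, such as Node 18+:

```typescript
// Pre-deploy smoke test: load the site with no session (a fresh request from a
// script carries no login cookie) and fail the deploy if the page errors out or
// comes back essentially empty. The URL and the length threshold are placeholders.
const SITE_URL = "https://staging.example.com/"; // placeholder

async function loggedOutSmokeTest(): Promise<void> {
  const response = await fetch(SITE_URL);
  if (!response.ok) {
    throw new Error(`Logged-out request failed with status ${response.status}`);
  }
  const body = await response.text();
  if (body.trim().length < 500) {
    // A nearly empty response is how our "blank white page" bug would have surfaced.
    throw new Error("Logged-out page rendered almost no content");
  }
  console.log("Logged-out smoke test passed");
}

loggedOutSmokeTest().catch((error) => {
  console.error(error);
  process.exit(1); // block the deploy
});
```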
3. Set up a framework for reporting and tracking issues
No QA testing process is complete without a solid framework for reporting and tracking issues and fixes. Developers need a convenient way to see tester- and customer-reported bugs and track progress toward fixing them.
An issue tracker such as Jira, GitLab or GitHub can keep everyone in the know about reported issues and responses. GitHub also offers a code scanning feature that alerts developers to problems in their code. These tools maintain a full history of the discussion around each user story or bug being worked on, so you always retain institutional knowledge and the full context of how the product was developed.
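As one illustration, a tester-reported bug can be filed into the tracker programmatically so nothing gets lost in email or chat. This sketch uses GitHub's REST endpoint for creating issues; the repository name, token variable and labels are placeholders, and Jira and GitLab expose similar APIs:

```typescript
// File a tester-reported bug as a GitHub issue so developers can see and track it.
// The repository name, token variable and labels are placeholders; this uses
// GitHub's public "create an issue" REST endpoint.
const REPO = "your-org/your-product";         // placeholder
const TOKEN = process.env.GITHUB_TOKEN ?? ""; // personal access token with repo access

async function fileBug(title: string, details: string): Promise<void> {
  const response = await fetch(`https://api.github.com/repos/${REPO}/issues`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      Accept: "application/vnd.github+json",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      title,
      body: details,
      labels: ["bug", "reported-by-qa"],
    }),
  });
  if (!response.ok) {
    throw new Error(`Failed to create issue: ${response.status}`);
  }
}

// Example: the blank-page incident from earlier, filed with reproduction steps.
fileBug(
  "Site renders a blank page for logged-out users",
  "Steps: open the homepage in a private window with no session. Expected: landing page. Actual: blank white page.",
).catch(console.error);
```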
No matter how excellent your development team may be, there's simply no such thing as bug-free code. But a thorough and detailed QA testing strategy that's baked into the process from the very beginning can help protect your reputation and make sure your investments pay off.