
When Should a Startup Test Software Manually?

There are a lot of advantages to software test automation, but sometimes it is more prudent for startups to stick with manual testing. Here's a look at the pros and cons of both types of testing for startups.


No company today can afford to ship broken software-based products. Even if you're not in a highly regulated industry, bad quality assurance (QA) can decimate sales and lead to job losses, reputational damage, customer churn, and even loss of life.

QA sounds like a given, but software testing is growing more complicated as products expand from desktop applications alone to web, mobile, and embedded ones. Startups ordinarily test most of their code manually, but after enough growth, they usually can't keep up with the volume. This is where software test automation does wonders, and conventional wisdom suggests you might as well always use it. And why not, right?

But as impressive as test automation is, does a startup always need automated testing? Are there, in fact, times when manual testing is more prudent? And if you decide to switch to automation as a startup, how do you do it? When is the right time?

What's Your Company's Maturity?

Ask yourself: Are you a startup whose product is still in flux? You may not be ready for automated testing if underlying frameworks and target platforms are changing by the week.

Startups move extremely quickly when starting from zero, often pushing out as much functionality as possible while they attract investment and introducing a colossal amount of new code weekly. That pace inevitably brings foundational changes to features, the UI, and more.


Automated testing, in this case, doesn't necessarily help, because building automated test scripts takes time. If you must rebuild them from scratch every time your app undergoes significant change, you're wasting developer resources.

Conversely, automation pays off once your app's development stabilizes and you gain enough scale. For example, suppose one of Europe's most prominent industrial players has had a product on the market for over a decade. The codebase's sheer size probably means nobody wants to change much under the hood for fear of breaking it. Automated testing makes lots of sense in that case because it reduces the risk of human error.

Manual and Automated Software Testing Require Different Skill Sets

The other thing to understand is that automated and manual testing are not necessarily mutually exclusive. Each has its merits, but they also require vastly different skill sets.

Suppose a startup company is launching a food delivery app. The QA process would look something like this:

  • Click on buttons or links to see if they work correctly

  • Check if the users can enter data into text fields

  • See if search bars, drop-down menus, and navigation are working

In a scenario like this, manual testing would typically fall to non-scripters: individuals without programming backgrounds. And that's a good thing. You don't want those testers making the assumptions a programmer would make about how things run. You want them to look at a GUI, not overthink it, and use it like a "normal" human consumer. This approach reveals entirely different problems.

Automated testing, on the other hand, uses scripted sequences to execute test case scenarios and hunt for bugs and defects. The goal is to distill manual labor into scripts that can be executed repeatedly, anytime.
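To make the idea concrete, here's a minimal sketch of what a scripted test looks like. The `calculate_order_total` function and its prices are hypothetical stand-ins for logic in the food delivery app described above:

```python
# A minimal sketch of an automated test script. calculate_order_total
# is a hypothetical function from an imaginary food delivery app.

def calculate_order_total(items, delivery_fee=2.99):
    """Sum item prices and add the delivery fee."""
    return round(sum(price for _, price in items) + delivery_fee, 2)

def test_order_total():
    cart = [("burger", 8.50), ("fries", 3.00)]
    assert calculate_order_total(cart) == 14.49

def test_empty_cart_still_charges_delivery():
    assert calculate_order_total([]) == 2.99

# Scripted cases like these run identically every time,
# which is exactly what makes them automatable.
test_order_total()
test_empty_cart_still_charges_delivery()
```

In practice, a test runner such as pytest would discover and execute functions like these on every build, rather than calling them by hand.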

What Should You Test Manually vs. Automatically?

Any startup dipping into automated testing must first figure out which test cases to automate.

As a rule of thumb, if you want to maximize test automation, you should generally automate these:

  • Tests vulnerable to human error

  • Tests that are repetitive and monotonous

  • Extensive tests necessitating many data sets

  • Time-consuming tests

  • Business-critical or high-risk tests that must be completed consistently

  • Tests that must be performed on multiple hardware or software platforms

More specifically, these are better left automated to save time and effort:

  • Regression tests: These are extensive, recurring, and dependent on the same input variables.

  • Data-driven tests: Many functional tests require testing using multiple divergent data sets to evaluate a range of positive and negative cases.

  • Performance testing suites: Automated testing can be used to speed up system performance testing under varying conditions.

  • API tests: Automated tests find flaws faster because they can be rerun to catch regressions whenever the API is modified.

That said, some test cases are more effectively executed manually:

  • Exploratory testing: Real users explore programs differently from standardized routines. Exploratory testing can't be automated since it requires human cognition.

  • UX testing: Automated tools can't always capture intangibles like how people feel about a product, how likely they are to utilize it, and how aesthetically pleasing it is.

  • Testing accessibility: This is best accomplished through manual testing, which examines the user's interaction with the process or application. You can augment this manual testing with application analytics tools that provide insights into how people use your application or device.

How Would You Transition from Manual to Automated Testing?

What about when your startup's software reaches a scale that makes manual testing impractical? Well, the transition isn't an on/off switch. You have to investigate tools early (and believe me, some organizations start this process far too late).

The good news is you don't have to wait to get started on testing approaches that complement automation, like static code analysis, which reviews code without executing it. Many free alternatives are a great way to dip into automation until you adopt more advanced tools.
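As a toy illustration of what "reviewing code without executing it" means, the sketch below uses Python's standard `ast` module to flag functions missing docstrings. Real static analyzers (pylint, flake8, SonarQube) apply hundreds of such checks; this single-check example is only illustrative:

```python
# A toy static analysis check: inspect source code without running it.
import ast

SOURCE = '''
def add(a, b):
    return a + b

def documented(x):
    """Return x unchanged."""
    return x
'''

def find_undocumented_functions(source):
    tree = ast.parse(source)  # parse the code; nothing is executed
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None
    ]

print(find_undocumented_functions(SOURCE))  # ['add']
```

Because the code is only parsed, never run, checks like this can safely scan an entire repository on every commit.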

You should generally consider these factors when switching to test automation:

Design the right framework for your team

Your test suite's effectiveness relies on your framework. The right framework depends on your programmers' skill set, your software development procedures, and the nature of the software.

For example, when Specsavers built out an extensive portfolio of in-house Java/web applications on Linux and Windows, they needed a GUI testing tool that required no scripting experience from staff. The company chose one that let testers specify the tests to perform simply by entering data into spreadsheets, which could then drive the GUI tests.

Choose the right tools

Think about future objectives when your startup scales up, not just current goals. Will the tool still exist in five years? Are new releases issued frequently? What does support look like?

It might be tempting to pick low-code/no-code toolkits, aka "the easy tools." Many companies initially gravitate toward these tools because they're easy to learn, but some revert within a year because the tools lack the flexibility and robustness to achieve sufficient test coverage.

And while lots of free, open source test automation tools exist, be cautious. While valuable, they don't necessarily come with troubleshooting support, and if you need to comply with certification standards, these tools won't be compliant with regulations out of the box. Getting this wrong could mean a costly and lengthy time to market for your product.

Set manageable goals and a quick learning curve

Moving from manual to automation isn't easy, so begin with digestible objectives: automate the most-used regression test suite, or the longest and most tedious one. Identify and eliminate recurring bugs, then gradually expand automation to cover more testing areas.

Measure the automated testing tool's performance

When you start using an automated testing tool, measure its performance! You don't want to use a tool that doesn't meet your testing goals. You can begin by determining:

  • How fast the turnaround time is for executing tests compared to before using the tool.

  • Whether time spent developing test suites has shortened.
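Tracking the two measures above can be as simple as comparing before-and-after figures. In this sketch, all the numbers are illustrative placeholders, not real benchmarks:

```python
# A minimal sketch of tracking tool performance. The timing
# figures below are made-up placeholders for illustration.

def improvement(before, after):
    """Percentage reduction from before to after."""
    return round((before - after) / before * 100, 1)

manual_run_minutes = 480     # full regression pass by hand
automated_run_minutes = 35   # same suite under the new tool
print(f"Turnaround improved {improvement(manual_run_minutes, automated_run_minutes)}%")

authoring_hours_before = 40  # writing test cases in the old workflow
authoring_hours_after = 25   # writing test cases with the tool
print(f"Suite development time down {improvement(authoring_hours_before, authoring_hours_after)}%")
```

If the tool isn't moving numbers like these in the right direction after a few release cycles, that's a signal to reevaluate it.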

What Quality Assurance Certifications Do I Need?

Whether you're in a regulated industry (safety-critical sectors such as automotive, manufacturing, and healthcare) or you simply want to demonstrate to customers that you meet industry standards and quality practices, there are several certifications you can earn. They're also a useful yardstick for evaluating software vendors.

  • ISO 9000: International standards for establishing and maintaining an adequate quality assurance (QA) system for businesses. To be ISO 9000 certified, an organization must be audited on its functions, products, services, and processes.

  • Capability Maturity Model Integration (CMMI) Level: Used to analyze the maturity of an organization's processes and provide recommendations for process improvement.

  • TMMI: A five-level model that provides a framework to help companies assess the maturity of their testing processes and optimize them.

Software testing can be daunting for a startup — after all, it's a critical factor in the market success of a product and all future products a company might release. But a good software quality assurance management strategy that follows best practices, with a strong focus on testing, can alleviate a lot of the stress and ensure on-time, repeated releases that meet and exceed your users' expectations.

Jan Aarsaether is QA Tools Advisor at Qt Group.
