The IT industry has changed a lot in the last decade, and the QA profession has evolved with it. Today, test automation and CI/CD pipelines are a standard part of quality assurance. The purpose of this article is to explore where manual testing fits in an automation-dominated ecosystem. When talking to customers, we noticed a pattern: automation is often perceived as the standard approach, while manual testing is perceived as outdated. This oversimplification contains some truth but misses nuances that are essential for successful software development.
The 80-20 rule
As engineers our job is to solve problems. More importantly, our job is to solve problems efficiently. Here, we will detail the strategy we find the most cost-efficient, which we call the 80-20 Rule.
In a project, testing is conducted both manually and automatically, with resources split 20% for manual testing and 80% for automation. Please note that this split excludes non-technical work like reading emails and writing cool blog posts like this one; we are referring to core activities. While it may be convenient if the same people handle both the manual and the automated efforts, it is not mandatory.
Exploratory testing
For every new feature, the QA explores the full breadth of interactions that the software allows. The QA is expected to be creative and not limit themselves to the interactions described in their written task. The purpose here is not just to cover requirements, but to also find gaps in them.
As an example: we develop an online shop with existing functionality where some items may be discounted, or discounted only under certain conditions. We add a new feature, a shopping cart, which allows the user to purchase multiple items in a single transaction. Common sense and user experience suggest that discounted items should be calculated with their discounted price, that adding or removing multiple discounted items should apply the discount rules correctly, and that multiple browser tabs should interact with the same cart and show the same total price. However, what happens to the cart if the user logs out or changes the locale and currency?
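The expectations above can be turned into concrete checks once the feature exists. Here is a minimal sketch in Python; the `Item` and `Cart` classes are invented for illustration and do not correspond to any real shop codebase:

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    price: float           # regular price in the shop currency
    discount: float = 0.0  # fraction, e.g. 0.2 for 20% off

class Cart:
    """Hypothetical shopping cart that applies per-item discounts."""

    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def remove(self, item):
        self.items.remove(item)

    def total(self):
        # Discounted items are counted with their discounted price.
        return sum(i.price * (1 - i.discount) for i in self.items)

shoes = Item("shoes", 100.0, discount=0.2)
hat = Item("hat", 50.0)

cart = Cart()
cart.add(shoes)
cart.add(hat)
assert cart.total() == 130.0  # 80 (discounted) + 50

# Removing a discounted item must re-apply the rules correctly.
cart.remove(shoes)
assert cart.total() == 50.0
```

Note that the open questions from the paragraph above (logout, locale change, multiple tabs) are exactly the cases such a sketch does not cover; those are what exploration is for.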
When in doubt, we should ask the product owner of the new cart functionality for clarification, but exploratory testing does not expect the product owner to explicitly list all possible combinations of rules and interactions in the task description in advance. This is why we call it exploration. Such exploration of software behavior often yields results that matter to our customers but are hard to detect by simply reading task descriptions. The simple truth is: you cannot fully perceive the software unless you put yourself in the user's position.
This manual effort has another benefit. Users are human, and humans are emotional. Some behaviors of your software will make your users furious. Processing user feedback works, but it takes time. Manual testers, on the other hand, use our software daily, and their feedback is a reliable forecast of user perception.
Manual reproduction of errors in automation
Automated tests fail. This is literally their job. Unfortunately, they do not catch bugs, they catch symptoms. As QAs, it is our job to investigate those symptoms and present carefully curated and described tasks to developers. The first step in such an investigation is always manually reproducing the issue. If the issue is reproducible, we can show it to developers outside the context of automation, which eases communication tremendously.
If the issue only happens in automation, our task is to figure out why. There are multiple possible reasons. Automation is code, so there might be a bug in our automation framework. Alternatively, the product has a race condition with only a 5% chance of occurring, which is why only automation catches it. Sometimes, failures only happen in a specific environment, so the root cause might be in our infrastructure. In any of those cases, manually executing the failing test gives us valuable direction for further troubleshooting.
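A first, cheap step in such an investigation is often to re-run the failing step many times and measure how often it fails. The sketch below simulates this with a stand-in function; the ~5% failure rate is invented to mirror the race-condition example above, and in a real investigation `flaky_checkout` would be the actual failing test step:

```python
import random

def flaky_checkout():
    """Stand-in for a product action with a hidden race condition.

    Here we simulate a ~5% failure rate; in a real investigation this
    would call the actual failing test step.
    """
    return random.random() >= 0.05

def estimate_failure_rate(test, runs=1000):
    """Repeat a test many times and report the observed failure rate."""
    failures = sum(1 for _ in range(runs) if not test())
    return failures / runs

random.seed(42)  # fixed seed so the simulation is repeatable
rate = estimate_failure_rate(flaky_checkout)
print(f"observed failure rate: {rate:.1%}")
```

If the rate is stable across environments, the product is the prime suspect; if it varies wildly between machines, the infrastructure or the framework deserves a closer look.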
What are the benefits of automated testing?
Modern software is massive. As a project matures, the number of possible test cases tends to explode. Modern software is also dynamic: it changes with every commit, like a living being, and every change may break something that worked in the past. So you not only have a massive number of test cases, you also need to repeat them constantly.
The volume of testing
For example, imagine a well-developed online shop with all the features a customer may expect, like shop filters, search, address management, payment management and, of course, multi-language support. A project like that may seem standard, but it is entirely possible to have more than a thousand test cases. Even a standard chatbot menu may have dozens of possible interaction paths. The sheer volume of this work is so immense that as a project grows, automation becomes the only approach to achieve the desired minimum level of quality across the online shop. This is a scalability problem.
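The explosion is easy to underestimate, because feature dimensions multiply rather than add. A back-of-the-envelope sketch with invented numbers for the shop example:

```python
from math import prod

# Hypothetical feature dimensions for the online-shop example.
# The counts are illustrative, not taken from a real project.
dimensions = {
    "language": 5,        # supported locales
    "payment_method": 4,  # card, invoice, PayPal-style, voucher
    "filter_state": 8,    # combinations of active shop filters
    "logged_in": 2,       # guest vs. account
}

combinations = prod(dimensions.values())
print(combinations)  # 5 * 4 * 8 * 2 = 320 exhaustive combinations
```

Four modest dimensions already yield 320 combinations; add a few more and exhaustive manual coverage stops being realistic, which is exactly the scalability problem described above.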
The testing speed
Automation is code and requires programming. The technical complexity of a test is small, but if we need to do automation, chances are we need to do a lot of it. While individual tests tend to be simple, as their volume grows, maintaining a mature automation framework becomes a serious challenge. There is no right way to do it, but there are many wrong ways. How do we make thousands of tests run in an hour and not seven hours? How do we ensure tests don’t break each other with subtle side effects? UI tests are notoriously unstable, what can we do about that? How do we make tests fail in ways that give us insight rather than confuse us and waste our time? These questions demonstrate the challenges we need to solve when we manage not 20 tests, but 200, and later – 2000. At such high volumes, it is very difficult to measure our test coverage. Another challenge is to be clear about the responsibility of a given test and what is covered elsewhere.
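One lever behind the "one hour, not seven" question is running independent tests in parallel. The sketch below simulates this with a stand-in test function and invented test names; it is an illustration of the idea, not a recommendation for a specific framework:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(name):
    """Stand-in for one independent test case (names are illustrative)."""
    time.sleep(0.01)  # simulate test work, e.g. an HTTP round trip
    return name, "passed"

tests = [f"test_{i}" for i in range(200)]

start = time.time()
# Running independent tests in parallel instead of one after another
# keeps the suite's wall-clock time down as the test count grows.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(run_test, tests))
elapsed = time.time() - start

passed = sum(1 for _, status in results if status == "passed")
print(f"{passed}/{len(tests)} passed in {elapsed:.2f}s")
```

Note the hidden cost: parallelism only works if tests do not share state, which is precisely the "subtle side effects" problem mentioned above.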
The point here is that automation is hard. You should consider it as an extension of your core programming effort.
When is manual testing enough?
So, if automation is so hard, do we really need it? In many situations, manual testing is perfectly sufficient. The early stages of startups or the MVP stages of individual projects work fine with manual testing only. Any project that does not need to scale significantly can be fine without automation. Still, only a small share of software projects can rely on manual testing alone. This brings us to our next point.
To sum up, we can break the topic down into two points:
Automation is complex - this means higher costs, especially at the start of the project
Automation is essential if the test volume is high and there are regular repetitions
The psychology of the humans behind the QA title
QAs are human, and we bring our own personalities to the table. Some of us prefer manual work, others automation. This section explores why that happens.
With the rise of automation, a new and extremely complex career path for QAs has evolved. QAs who focus their careers on automation often must devote their entire capacity to it to reach a professional level. As we emphasized earlier, manual testing is still critical to the quality of our software, so it is not in the interest of the business to neglect it. How do we address this trend?
Traditional manual testing includes many activities that are not directly about testing, such as excessive work documentation. With agile manual testing approaches, we cut these unnecessary efforts and deliver efficient manual testing alongside test automation.
Sometimes we see the opposite trend. Many QA colleagues do not have a developer's technical background, which creates barriers to starting test automation. Automation is programming, and programming can be intimidating. We could say programming is just hard, but that does not paint an accurate picture. A more accurate diagnosis is that our industry has a knowledge-sharing problem. In my experience, most companies internally share only a minimal volume of the critical technical knowledge needed to move work forward. We often see our internal systems as unpredictable black boxes with strange and unexpected behaviors and side effects, and programming in an uncertain environment is a scary job. The real solution is improving internal technical knowledge sharing, but this is extremely difficult.
To summarize: QAs have their own subjective preferences, but we should not lose sight of the full range of necessary QA activities.
AI – What the future holds
AI has the potential to be immensely useful, and it is expected to automate a lot of our work. However, I notice a pattern that I would like to address here. Current AI tools excel at writing new code but struggle with troubleshooting and modifying existing, malfunctioning code. You may notice that the hard challenges for automation listed earlier in this article are almost exclusively related to the second problem: working with an existing codebase.
A common misconception is that the main activity of test automation is writing tests. This is not true. Writing tests is generally simple. Activities related to investigating test failures and other problems, and then addressing them, are the most resource-intensive tasks. This is our main job, and it is largely uncertain how AI will affect it. We will take a closer look at this topic in another article.
Would you like more insights into the development of a diva-e project? Please get in touch with us!