In the vibrant realm of tech discourse, the "humans vs. AI" battle cry echoes through conversations, casting the two as rivals in an unfolding saga of innovation. Observers tend to assume either that AI tools won't replace humans in knowledge work or that AI will eventually grow so powerful that software systems become autonomous and render humans obsolete.
The truth is that both viewpoints reflect simplistic thinking about the relationship between humans and AI. Rather than viewing them as competitors, we should recognize that they complement each other. AI will always need humans to correct its errors and supply domain-specific knowledge and insights. Humans will always benefit from AI, not just to save time but also to unlock insights they'd otherwise miss.
That, at least, is how I've come to view AI in the context of software testing – a domain I know well because I've been engaged in both AI and software testing independently for over 30 years. In this article, I'd like to unpack how the relationship between human software testers and AI is playing out, and what businesses can do to ensure that the two complement each other, rather than compete, to speed up software testing.
The use of AI in software testing
Let me explain how teams typically use AI in software testing today.
There are two basic ways to test software, a process that developers and Quality Assurance (QA) engineers typically perform every time their organizations produce new code, to ensure it is free of bugs before delivery to end-users. One way is to test manually, which is very time-consuming. The other is to automate tests using tools that evaluate how applications behave.
Automated testing is much faster than manual testing. But to perform automated tests, you need to write the code that powers the tests, which can be time-consuming. As a result, the typical team today automates only around a quarter of its tests. However, most teams say they'd like to automate at least half of their tests, according to a recent survey of QA professionals by Kobiton, my employer.
You also have to maintain automated tests by updating them as the applications you test evolve. For instance, when you add a new feature or change the user interface, you have to update your tests accordingly. That's also a lot of work, so QA teams often wait for a product to stabilize before automating its tests.
AI can help in both of these regards. It can automatically generate tests, reducing a task that might take hours or days to mere minutes. Sophisticated AI tools can also automatically update tests and fix errors within tests. In these ways, AI helps teams take more advantage of test automation, which means they can test more aspects of their application in less time.
Ultimately, these benefits lead to faster software delivery cycles, meaning end-users get feature enhancements and bug fixes more quickly.
Humans and AI: A mutually beneficial relationship
But just because you can use AI to generate and update automated test scripts doesn't mean you can remove humans from the picture. On the contrary, even for teams that use AI extensively in software testing, humans have two key roles to play:
● Refining and informing AI: Even the best AI tools never get things 100% correct. Humans must review AI-generated test code to ensure that it tests the right things, and they must address any false failures the generated tests produce. Humans are also responsible for feeding the AI system domain-specific knowledge and insights, which are crucial for improving its understanding and performance.
● Managing test strategy: AI can write virtually any test you ask it to. However, it cannot judge which tests are most critical or identify gaps in a team's testing strategy. Humans are indispensable for these tasks because they can assess overall testing operations and ensure that the AI's efforts align with the team's objectives.
The interaction between humans and AI in software testing goes beyond simply dividing tasks. It's not just about humans handling what AI cannot. It's about a collaborative relationship where AI also uniquely enhances human capabilities. For instance, AI doesn't only expedite processes through automated test generation. More importantly, it provides insights and learning opportunities that might otherwise be unattainable for humans.
Take, for example, one of Kobiton's AI models, trained on the top 50 apps in the app store. This model has developed an ability to identify vital visual traits contributing to effective UI design. It distinguishes between good and poor UI designs, offering insights that might be too subtle or complex for a human tester to spot. When this AI model analyzes a visual interface, it doesn't just perform tests. It educates the testers by highlighting best practices in UI design gleaned from its extensive training. Testers can learn and apply these best practices in real-time, enhancing their skills and knowledge.
AI capabilities like these don't mean that AI is superior to humans. Instead, they exemplify AI's ability to complement human skills. AI excels in processing vast amounts of data and identifying patterns that might elude the human eye. In contrast, humans bring contextual understanding and strategic thinking to the table. Together, they create a more robust and efficient testing process where learning and improvement are continuous.
How humans and AI can work together in software testing
How do humans and AI collaborate effectively to enhance software testing? The answer starts with designing AI tools to signal when human review is needed. For example, AI tools built for software testing can often attach a confidence score to the code they write for each test step. When a tool is not sufficiently confident about the accuracy of a test step, it can flag the step for human attention.
Humans can then review the low-confidence code, fixing any issues while also benefiting from suggestions the AI produced that they might not have thought of on their own. In this way, humans and AI work together in a structured, systematic fashion, enhancing the overall testing process.
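As a rough sketch of what such a loop might look like in practice (the `GeneratedStep` structure and the 0.85 threshold are my own illustrative assumptions, not any vendor's actual implementation):

```python
# Hypothetical human-in-the-loop triage for AI-generated test steps.
from dataclasses import dataclass

@dataclass
class GeneratedStep:
    code: str          # AI-generated test code for one step
    confidence: float  # the model's self-reported confidence, 0 to 1

CONFIDENCE_THRESHOLD = 0.85  # below this, route the step to a human

def triage(steps):
    """Split generated steps into auto-accepted and human-review queues."""
    accepted, needs_review = [], []
    for step in steps:
        if step.confidence >= CONFIDENCE_THRESHOLD:
            accepted.append(step)
        else:
            needs_review.append(step)
    return accepted, needs_review

steps = [
    GeneratedStep("click('checkout')", 0.97),
    GeneratedStep("assert_text('Order total: $42')", 0.61),
]
auto, review = triage(steps)
print(len(auto), len(review))  # prints: 1 1
```

The point of the sketch is the routing, not the numbers: the AI takes the lead on high-confidence work, and only the uncertain remainder consumes human attention.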
Conclusion: A fusion between humans and AI
In software testing, humans and AI need each other. Humans need AI to accelerate workflows and surface insights that the typical human might miss. Meanwhile, AI needs humans to refine automatically generated material. Organizations unlock the greatest value from AI-based solutions by achieving a fusion between humans and AI.
But to do this well, you can't simply hand software testers AI tools and expect them to get maximum value with no structure. You need an approach that automatically loops humans into AI-based workflows when necessary, while letting AI take the lead where it makes sense.
Frank Moyer, chief technology officer of Kobiton, wrote this article.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends. Image credit: iStockphoto/Arsenii Palivoda