New research shows that 60% of organisations are now using artificial intelligence in their software testing processes, but 80% still lack the in-house expertise required to manage AI testing effectively.
Applause has published its annual report, The State of Digital Quality in Functional Testing 2025, drawing on responses from more than 2,100 software development and testing professionals worldwide. The report indicates that AI adoption in software testing has doubled year on year, rising from 30% in 2024 to 60% today. Despite this rise, 92% of organisations still find it challenging to keep pace with rapidly changing requirements.
AI adoption and challenges
According to the report, AI is most commonly used to develop test cases, with 70% of respondents using it for this purpose. Automating test scripts (55%) and applying AI-driven analysis to recommend improvements (48%) are also notable applications. Other reported uses include test case prioritisation, autonomous execution and adaptation of test cases, identification of coverage gaps, and self-healing test automation.
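To make the last of these concrete, the sketch below shows the fallback idea behind self-healing UI tests, written against Selenium's Python bindings. The locators and URL are illustrative assumptions, not taken from the report; commercial self-healing tools typically learn fallback locators from the DOM rather than taking a hand-written list.

```python
# Minimal sketch of the fallback idea behind self-healing locators.
# All locator values and the URL below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_healing(driver, locators):
    """Try each (strategy, value) pair in turn; return the first match.

    Logs when a fallback 'heals' a broken primary locator, so the
    primary can be updated later.
    """
    primary, *fallbacks = locators
    try:
        return driver.find_element(*primary)
    except NoSuchElementException:
        for locator in fallbacks:
            try:
                element = driver.find_element(*locator)
                print(f"healed: {primary} -> {locator}")
                return element
            except NoSuchElementException:
                continue
    raise NoSuchElementException(f"no locator matched: {locators}")

# Usage: primary ID first, then progressively looser fallbacks.
driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder URL
submit = find_with_healing(driver, [
    (By.ID, "submit-btn"),
    (By.CSS_SELECTOR, "form button[type='submit']"),
    (By.XPATH, "//button[normalize-space()='Log in']"),
])
submit.click()
```

The point of the pattern is that a renamed element ID no longer fails the run outright: the test degrades to a slower lookup and reports what changed, which is the behaviour AI-assisted tools automate at scale.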
The lack of in-house AI testing expertise remains a hurdle for many organisations. Four out of five survey participants said their teams lacked sufficient expertise, and nearly a third rely on external partners to address AI testing challenges. Inconsistent or unstable environments (87%) and insufficient time allocated for testing (85%) also emerged as prominent obstacles, despite the improvements AI adoption has brought.
To mitigate the risks posed by agentic AI, a third of organisations use crowdtesting. This approach adds human-in-the-loop (HITL) coverage, bridging gaps left by automated testing and strengthening overall quality assurance.
Value of human oversight
The report highlights the importance of maintaining human involvement as AI becomes more embedded in QA processes, particularly for agentic AI, which can operate autonomously across large systems.
"Software quality assurance has always been a moving target," said Rob Mason, Chief Technology Officer, Applause. "And, as our report reveals, development organisations are leaning more on generative and agentic AI solutions to drive QA efforts. To meet increasing user expectations while managing AI risks, it's critical to assess and evaluate the tools, processes and capabilities we're using for QA on an ongoing basis – before even thinking about testing the apps and websites themselves."
Mason also noted the risks involved with agentic AI and underlined the need for human intervention:
"Agentic AI requires human intervention to avoid quality issues that have the potential to do serious harm, given the speed and scale at which agents operate. The trick is to embed human influence and safeguards early and throughout development without slowing down the process, and we know this is achievable given the results of our survey and our own experiences working with global enterprises that have been at the forefront of AI integration."
Shift-left approach gains ground
The study reports that more than half of organisations now integrate QA during the planning, design, and maintenance phases of the software development lifecycle (SDLC). Last year, 42% limited testing to a single stage of the SDLC; this year, only 15% do, marking a clear shift towards earlier and more continuous QA involvement.
The adoption of multiple functional testing types is widespread, with 91% of organisations conducting a range of tests including performance, user experience, accessibility, and payment testing. User experience (UX) testing is the most popular, with 68% of organisations identifying it as a priority, followed by usability testing (59%) and user acceptance testing (54%).
Quality metrics and customer focus
Organisations reported that customer satisfaction and customer sentiment or feedback are the top metrics used to assess software quality. Reflecting this emphasis on diverse, customer-focused strategies, nine out of ten respondents said their teams conduct several different types of testing to maintain and improve digital quality standards.
The report underscores the need for a combined approach, one that brings together AI-powered tools, human oversight, external partners, and crowdtesting resources to meet the increasing complexity of, and demand for, digital quality assurance.