The risks of using AI in the software development pipeline
The unveiling of a new technology is often accompanied by much fanfare about the significant positive impact it will have on society.
Think back to events such as the creation of the internet, the mobile phone, cloud computing, and now artificial intelligence. Each was lauded as a big step forward for daily life.
However, the disruption caused by such advances doesn't always come down to the technology itself but rather how it is utilised by the end user. Unfortunately, a positive outcome isn't always guaranteed.
A recent Stack Overflow survey[1] revealed that approximately 76% of developers are using (or are planning to use) AI tooling in the software development process. This represents a rapid, seismic shift in how software is created, especially at the enterprise level.
In just three years, many development teams appear to have abandoned gradual, incremental change in the software development life cycle (SDLC), opting instead for enormous productivity gains and instant output.
However, these gains come at a price that business leaders should not be willing to pay. The plentiful security bugs plaguing every major artificial intelligence and large language model (AI/LLM) coding assistant represent a code-level security risk for any organisation. Indeed, even the best-performing tools are still only accurate around half the time.
These tools, in the hands of a developer with low security awareness, simply accelerate the rate at which vulnerabilities enter the codebase, adding to the ever-growing mountain of code under which security professionals are buried.
AI coding assistants are not going away, and the gains in code velocity cannot be ignored. However, security leaders must act now to manage their use safely.
The growing appeal of AI-assisted coding
Today, software developers are expected to perform a wide range of tasks, and that list is growing in scope and complexity. It stands to reason that, when an opportunity for assistance presents itself, your average overworked developer will welcome it with open arms.
The issue, however, is that developers will choose whichever AI model does the job fastest and cheapest, and that may not be in the best interests of their organisation.
Take DeepSeek as an example. By all accounts, it's an easy-to-use, highly functional tool that is, above all, free. However, despite the initial hype, it would appear the tool has significant security issues[2], including insecure code output, backdoors that leak sensitive data, and guardrails around creating malware that are far too easy to bypass.
The challenge of insecure code development
Attention has recently been focused on so-called 'vibe coding'. The term refers to coding undertaken exclusively with agentic AI programming tools like Cursor AI: rather than writing code themselves, developers rely on prompt engineering, continuing to prompt an LLM until the desired result is achieved.
Naturally, this process places complete trust in the LLM to deliver functioning code, and many of these tools are designed to present their answers with unwavering confidence, regardless of their accuracy.
Independent benchmarking from BaxBench[3] reveals that many popular AI/LLM tools capable of acting as coding assistants produce insecure code. This has led BaxBench to the conclusion that none of the current flagship LLMs are ready for code automation from a security perspective.
With 86% of developers indicating they struggle to practice secure coding[4], this should be of deep concern to enterprise security leaders. While it is absolutely true that a security-skilled developer paired with a competent AI tool will see gains in productivity, this does not represent the skill state of the general developer population.
Developers with low security awareness will simply supercharge the delivery of poor-quality, insecure code into enterprise code repositories, exacerbating the problems the AppSec team is already ill-equipped to address.
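As a purely illustrative sketch (not the output of any particular tool or benchmark), the snippet below shows the kind of flaw routinely flagged in AI-generated code: a database lookup that concatenates user input straight into the query string, opening the door to SQL injection, alongside the parameterised version a security-aware developer would insist on.

```python
import sqlite3

# Illustrative only: a 'quick' lookup of the shape AI assistants are often
# observed to suggest, where user input is concatenated into the SQL string.
def find_user_insecure(conn: sqlite3.Connection, username: str):
    # An input such as "' OR '1'='1" turns this into a query that returns
    # every row in the table instead of a single user.
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

# The secure counterpart: a parameterised query keeps data and SQL separate,
# the fix a security-aware developer (or a well-prompted tool) would apply.
def find_user_secure(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

The difference is trivial to a trained eye, but it is exactly the kind of detail a developer with low security awareness will wave through when the tool presents its answer with confidence.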
Skilling the next generation of software developers
Vibe coding, agentic AI coding, and whatever the next iteration of AI-powered software development turns out to be are not going away. Indeed, they have already changed the way developers approach their jobs.
The solution is not to ban the tools outright and possibly create a monster in the form of unchecked, 'shadow AI' within development teams. Rather, the next generation of developers must be shown how to leverage AI effectively and safely.
It must be made clear why and how AI/LLM tools introduce risk, and what level of that risk is acceptable, with hands-on, practical learning pathways delivering the knowledge required to manage and mitigate that risk as it presents itself.
Organisations that don't follow this path risk opening themselves up to security holes that could cause widespread disruption and loss.