How AI’s impact will shape cybersecurity strategies in 2024
Having survived a year of rapid technological advances and high-profile cyber breaches, security professionals can now polish the crystal ball and predict what lies ahead in 2024.
If the past 12 months are any guide, the hottest topic in town will continue to be artificial intelligence (AI). As well as reshaping many aspects of work, the technology will continue to pose new challenges for cybersecurity.
With that in mind, here are four key predictions for 2024:
1. Government regulation of AI will transform the cybersecurity industry
As tends to happen with transformative technology that has a low barrier to entry, AI adoption has outpaced any official regulation or mandate at the government level.
With significant movement in general cybersecurity guidelines and benchmarks around the world, including CISA's Secure-by-Design and -Default principles in the United States and a discussion paper from the Australian government, it is a virtual certainty that regulations around AI use will be announced sooner rather than later.
While much of the debate surrounding AI tools and Large Language Models (LLMs) has focused on copyright issues with training data, there has also been discussion about how they can best be used in cybersecurity.
When it comes to coding, many AI tools struggle to display contextual security awareness, which is deeply concerning as more developers adopt AI coding assistants to build software. This trend has not gone unnoticed, and at a time of increased scrutiny of software vendors' security practices, government-level intervention should come as no surprise.
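To make the concern concrete, consider a hypothetical example in Python (not output from any particular tool): the first function is the kind of suggestion an assistant trained on insecure examples might produce, while the second is the contextually aware alternative.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Plausible assistant output: it works, but interpolating input into
    # the query string allows SQL injection. For example, passing
    # username = "x' OR '1'='1" returns every row in the table.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # The security-aware version: a parameterised query lets the driver
    # handle escaping, so attacker-controlled input stays data.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both functions return the same rows for well-behaved input, which is exactly why the insecure version is so easy to accept without scrutiny.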
2. Demand for AI coding tools will create the need for more developers, not fewer
Throughout 2023, there was much media coverage of the potential impact that AI could have on human jobs. Experts speculated the tools could reshape everything from copywriting and reviewing to the legal sector and coding.
However, a careful review of the sector reveals no evidence that software development jobs are at collective risk. There is little doubt that AI/ML coding tools represent a new era of powerful assistive technology for developers, but they are trained on human-created input and data, which leaves their results far from perfect.
According to a Stanford University study of developers using AI tooling, the real risk is that unskilled developers wielding the technology become dangerous. The study found that participants with access to an AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure.
This poses a significant issue: developers without security skills will simply be enabled to introduce vulnerabilities faster. If anything, this only increases the need for security-skilled developers with the knowledge and expertise to code securely and to use AI technology safely.
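The study's finding is easy to picture with a hypothetical prompt such as "hash a password before storing it". Both answers below appear to work, which is precisely why the insecure one tends to be rated as safe (an illustrative sketch, not code taken from the study):

```python
import hashlib
import os

def hash_password_insecure(password: str) -> str:
    # Looks plausible because it 'uses hashing', so a developer may well
    # rate it as secure. MD5 is fast and unsalted, meaning these hashes
    # fall quickly to rainbow tables and brute force.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password_secure(password: str) -> bytes:
    # A safer baseline from the standard library: a salted, deliberately
    # slow key derivation function (PBKDF2) raises the cost of offline
    # cracking by orders of magnitude.
    salt = os.urandom(16)
    return salt + hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
```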
3. There will be significant consequences for software vendors that ship insecure code
To ensure sufficient attention is given to software security, vendors will increasingly be unable to 'pass the buck' and make security largely the responsibility of the consumer. A shakeup of this magnitude will help to move the needle towards code-level security being taken seriously.
Colonial Pipeline, SolarWinds, and, more recently, the MOVEit data breach were all large-scale cyberattacks that at some point affected US government systems. With guidelines such as CISA's Secure-by-Design principles in play, there is a real possibility that future highly visible breaches will receive greater scrutiny and reprimand.
4. A reactive security approach will no longer be appropriate
While the importance of a comprehensive and robust security strategy will continue to grow, organisations that rely on reaction and incident response as the only core tenets of their plan will find themselves unacceptably exposed.
Security professionals must still act swiftly in the face of adversity and outright attack, but modern times require modern solutions, and organisations cannot afford a less-than-holistic approach.
During 2024, 'shift left' needs to be more than a rapidly ageing buzzword. Code-level security needs to be prioritised, alongside upskilling and verifying the competence of the developers working on software and critical digital infrastructure.
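What shifting left looks like in practice will vary by organisation, but as a minimal sketch, a team might gate commits on a lightweight scan for known-risky patterns. The deny-list below is purely hypothetical; a real programme would pair a gate like this with a dedicated SAST tool and developer upskilling.

```python
#!/usr/bin/env python3
"""Minimal pre-commit gate: flag a handful of risky Python calls before
code review even begins. A sketch of 'shift left', not a full scanner."""
import re
import subprocess
import sys

# Hypothetical deny-list; extend it to match your organisation's policy.
RISKY_PATTERNS = {
    r"\beval\(": "eval() executes arbitrary code",
    r"\bpickle\.loads?\(": "unpickling untrusted data enables code execution",
    r"verify\s*=\s*False": "disables TLS certificate verification",
}

def staged_python_files() -> list[str]:
    # Ask git for files staged for commit (added, copied, or modified).
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]

def main() -> int:
    failures = 0
    for path in staged_python_files():
        with open(path, encoding="utf-8") as fh:
            for lineno, line in enumerate(fh, start=1):
                for pattern, reason in RISKY_PATTERNS.items():
                    if re.search(pattern, line):
                        print(f"{path}:{lineno}: {reason}")
                        failures += 1
    return 1 if failures else 0  # a non-zero exit blocks the commit

if __name__ == "__main__":
    sys.exit(main())
```

Installed as a pre-commit hook, a check like this turns code-level security from an after-the-fact review item into an everyday part of the development workflow.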
Now, more than ever, governments and enterprises alike must commit to putting in place a preventative, high-awareness security programme in which every member of staff is empowered to share responsibility.
Together, these predictions will shape the priorities of security teams across all industry sectors. By understanding the security implications of AI, security teams will be well-placed to avoid the negative side effects that the technology could deliver.