
Snowflake's 2024 cybersecurity outlook: What lies ahead?

Fri, 22nd Dec 2023

To establish a robust cybersecurity program, organizations must first understand how the overall threat environment is evolving.

AI will be a huge boon to cybercriminals before it becomes a help to security teams.

Cybercriminals and bad actors will benefit from the widespread deployment of advanced AI tools before their targets can put AI to work in their own defence. Many businesses are cautious about adopting new technologies: there are costs, regulatory requirements, and reputational risks at stake if it's done poorly. Bad actors won't wait. Phishing, for example, is still a big deal, even though most phishing emails are clumsy and easy to spot. Generative AI will make this already effective attack vector even more successful: attackers will have the full firepower of large language models while defenders play catch-up. Eventually the playing field will even out, but I expect a lot of pain in the meantime.

Cyberattackers will continue to shift left. 

The whole "shift left" aspect of DevOps and DevSecOps is enabled by automation, and automating functionality in the production environment means there's less human error for attackers to exploit. As a result, attackers are now looking for ways through developer environments because that's where human mistakes can still be discovered and exploited, and we'll, unfortunately, see this escalate as suspicious actors become increasingly mature in the coming year. It's harder for security teams to defend against such attacks, and it's even more challenging to create baselines for acceptable development activity than for an automated, well-managed production environment. Development is naturally chaotic and experimental, so understanding what's normal and abnormal in a development environment is very difficult. However, it's imperative that CISOs and security teams figure it out. This is where you throw everything — humans, machine learning, and AI — at understanding what suspicious behaviors look like to mitigate them.

The AI data supply chain will be a target of attack. 

AI developments are fast-moving and startling in their breadth and capability, which makes them especially challenging for security teams. When it comes to the potential vulnerability of the data itself, it's important to assess the risk realistically. We're talking about an adversary playing a relatively long game by injecting false or biased data into foundational large language models (LLMs). Picture a propaganda operation in which a political actor plants content that clouds the truth about a nation-state conflict, election integrity, or a political candidate. It's not far-fetched to imagine an operation specifically designed to influence a foundational model trained on the open internet, and the jump from political shenanigans to business attacks is not an enormous leap. Plant some stories, misinform a foundational model, and down the line your LLM could give you inaccurate or subtly biased advice about a particular company or business strategy.

That said, the vast majority of cyberattacks are about money, and this sort of attack has no immediate financial payoff. Furthermore, much of the solution is work any good security organization is already doing. People can talk about finding new security approaches for new AI tools, but the best defence is a longstanding best practice: make sure your partners and suppliers have earned your trust. Most existing security controls and practices apply to generative AI as well, so vetting your vendors' practices and controls remains a sound and effective way to improve your security posture, regardless of these new attack vectors.
 
