IT Brief Australia - Technology news for CIOs & IT decision-makers
Story image

Lasso launches automated Red Teaming for GenAI security

Fri, 28th Mar 2025

Lasso has announced the launch of its automated Red Teaming solution, designed to improve the security of Generative AI applications by simulating real-world cyber-attacks.

Lasso's new technology addresses a key challenge faced by organisations using Generative AI tools, which have become increasingly popular yet lack comprehensive security testing methods. The solution autonomously identifies vulnerabilities within Large Language Models (LLMs) both before their deployment and during their operation, allowing companies to shore up defences against potential exploits.

Ophir Dror, Chief Product Officer and Co-Founder of Lasso, explained the limitations of existing security measures. "Traditional LLM red teaming, which includes manual testing, is obsolete and no match for the scale and complexity of modern GenAI models," Dror said. "With GenAI adoption accelerating, enterprises simply cannot afford the risks that come with vulnerable LLM deployments. Lasso's Red Teaming enables organizations to continuously test and harden their GenAI applications before attackers find the gaps."

To highlight the role of its Red Teaming system, Lasso evaluated two LLMs—Llama 3.2 and DeepSeek R1. The assessment indicated strong protective measures within Llama 3.2 against the misuse of intellectual property and data leaks; however, it revealed deficiencies in protection against hallucinations, and potentially illegal, criminal, or defamatory content generation.

Conversely, the security approach of DeepSeek R1 was found to be skewed towards filtering political content, especially topics related to China, while lacking safeguards against other critical issues such as data leakage and misinformation, leaving many security dimensions inadequately protected.

The Red Teaming initiative integrates a comprehensive suite of capabilities, relying on a database of hundreds of thousands of known attacks to simulate breaches. By deploying autonomous agents, the system evolves independently of public databases, continuously updating its repository of potential threats, providing organisations with tailored and actionable insights.

Alongside attack simulations, Lasso's system generates detailed model cards, documenting each vulnerability discovered, and offers optimisation and remediation advice. This capability helps organisations implement robust security measures effectively, maintaining a secure environment across all applications and models.

The provision of system prompt analysis further enhances security by identifying weaknesses, suggesting improvements, and implementing automatic guardrails. This not only streamlines the security process but also significantly reduces the effort and time required for manual interventions.

Lasso's focus extends beyond Red Teaming, with expertise in content anomaly detection, privacy and data protection, and the broader security landscape of LLM applications. The company prioritises compliance, preventing data breaches, and guaranteeing consistent security of LLM-based operations through vigilant monitoring of model inputs and outputs.

Follow us on:
Follow us on LinkedIn Follow us on X
Share on:
Share on LinkedIn Share on X