Fortifai debuts to safeguard and audit AI technologies
In light of growing concerns about the safety and reliability of AI-driven outputs, Fortifai, an Australia-based tech firm, has officially announced its entry into the market. The company, focused on the rapidly expanding AI sector, aims to validate the safety, alignment, and accuracy of AI outputs.
The AI industry has grown rapidly in recent years, but growth brings risk. Fortifai recognises these pitfalls and has tailored its services to test AI outputs for safety, accuracy, and alignment. It also helps organisations understand the AI technology they plan to adopt and delivers comprehensive cyber security assessments of AI applications.
Fortifai's range of services is intended to offer security, transparency, and safety to corporate entities and government bodies developing and implementing AI solutions. Moreover, as an independent third party, its validation carries the weight of industry-trusted confirmation.
A report titled "Trust in Artificial Intelligence" published by the University of Queensland offers some revealing figures on the trust Australians have in AI. According to their findings, a mere 34% of Australians are inclined to trust AI systems. Notably, when it comes to placing faith in AI acting in the public's best interest, the majority lean towards universities and defence forces rather than commercial institutions. This underscores the pressing need for businesses to adopt transparent processes and perform rigorous due diligence to not only mitigate risks but also to foster trust amongst their consumers.
Fletcher Roberts, Director at Fortifai, shed light on the company's mission stating, "Businesses are eager to remain at the forefront of AI developments, and early adoption is a way forward. However, the landscape is riddled with ethical, security, and real-world operational risks. Activities like due diligence, simulated testing, and analysis of AI outputs aren't just optional procedures – they're imperative corporate governance practices that must be finalised before any deployment."
His sentiment was echoed by Jock Haslam, another Director at Fortifai, who added, "The adoption of AI isn't a novel concept, but the rate at which companies are embracing generative AI is unprecedented. Organisations shouldn't hesitate in seeking guidance when gauging the threats presented by AI adoption. We at Fortifai are poised to assist these organisations. Our role will be to facilitate understanding, analyse potential safety implications for clients and their data, and subsequently implement safeguards against the unintended repercussions of hasty AI adoption."
Lending further credibility to Fortifai is the pedigree of its founders, Roberts and Haslam, who are known for co-founding Hashlock, a leader in blockchain security and smart contract auditing. With a client list that includes The Verida Network and Redbelly Network, and accolades such as being featured on CoinMarketCap as a premier smart contract auditor, Hashlock has firmly established itself as an authority in the emerging-tech auditing domain.
Together, Fortifai and Hashlock form a complementary pair, working towards a unified objective: ensuring top-tier cyber security for emerging technologies.