
Preparing for a new AI-driven era in software development

Tue, 11th Jun 2024

After a blockbuster start to AI adoption last year, we can expect 2024 to be all about the further advancement of AI in Australia. With 68% of Australian businesses having already implemented AI technologies, and a further 23% planning to implement in the next 12 months, AI integration will become standard across products and services in every industry. To ensure the successful and ethical adoption of AI, organisations will also need to consider the role of DevSecOps in building AI functionality alongside the software.

In particular, Australian organisations should pay attention to four key trends as they rethink how to prepare for the AI revolution in DevSecOps. Aligning with these trends will position their businesses for success. Ignoring them could stifle innovation or, worse, derail business strategies.

The four trends are:

1. Organisations will embrace AI across the board
Harnessing AI to drive innovation and deliver enhanced customer value will be critical to staying competitive in the AI-driven marketplace.

To prepare, Australian organisations must invest in revising software development governance and emphasising continuous learning and adaptation in AI technologies. This will require a cultural and strategic shift: rethinking business processes, product development, and customer engagement strategies. It also requires training, which DevSecOps teams say they want and need. GitLab’s Global DevSecOps Report reveals that 81% of respondents would like more training on how to use AI effectively.

As AI becomes more sophisticated and integral to business operations, companies will also need to navigate the ethical implications and societal impacts of their AI-driven solutions, ensuring that they contribute positively to their customers and communities.

2. Increased use of AI in code testing
The evolution of AI in DevSecOps is already transforming code testing, and the trend is expected to accelerate. GitLab’s research found that while only 41% of DevSecOps teams currently use AI for automated test generation as part of software development, that number is expected to reach 80% by the end of 2024 and approach 100% within two years.

As AI tools are integrated into their workflows, organisations are grappling with the challenges of aligning their current processes with the efficiency and scalability gains that AI can provide. This shift promises a radical increase in productivity and accuracy, but it also demands significant adjustments to traditional testing roles and practices. Adapting to AI-powered workflows requires training DevSecOps teams to oversee and fine-tune AI systems so that their integration into code testing enhances software products’ overall quality and reliability.
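
To make the human-oversight point concrete, here is a minimal Python sketch of what an AI-assisted test-generation step with a review gate might look like. The `suggest_tests` function is a stand-in for whatever model API a team actually uses; it, the example `slugify` function, and the drafted tests are illustrative assumptions, not a real GitLab feature or vendor interface.

```python
# Sketch: AI-drafted unit tests routed through human review, not auto-merged.
import inspect
import textwrap

def slugify(text: str) -> str:
    """Example function under test."""
    return "-".join(text.lower().split())

def suggest_tests(source: str) -> str:
    """Placeholder for an AI call that drafts unit tests from source code."""
    # A real implementation would send `source` to a model and return its reply.
    return textwrap.dedent("""\
        def test_slugify_basic():
            assert slugify("Hello World") == "hello-world"

        def test_slugify_collapses_spaces():
            assert slugify("a  b") == "a-b"
    """)

if __name__ == "__main__":
    draft = suggest_tests(inspect.getsource(slugify))
    # Human oversight: the generated tests are printed for a developer to
    # inspect and approve before they enter the test suite or CI pipeline.
    print("Proposed tests for review:\n")
    print(draft)
```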

Additionally, this trend will redefine the role of quality assurance professionals, requiring them to evolve their skills to oversee and enhance AI-based testing systems. It’s impossible to overstate the importance of human oversight, as AI systems will require continuous monitoring and guidance to be highly effective.

3. Ongoing threats to IP ownership and privacy
The growing adoption of AI-powered code creation increases the risk of AI-introduced vulnerabilities and the chance of widespread IP leakage and data privacy breaches affecting software security, corporate confidentiality, and customer data protection.

To mitigate those risks, organisations must prioritise robust IP and privacy protections in their AI adoption strategies and ensure that AI is implemented with full transparency about how it’s being used. Implementing stringent data governance policies and employing advanced detection systems will be crucial to identifying and addressing AI-related risks. Fostering heightened awareness of these issues through employee training and encouraging a proactive risk management culture is vital to safeguarding IP and data privacy.

The ongoing review of Australia’s Privacy Act proposes a number of changes that have implications for AI users, particularly in the areas of data analytics and the use of biometric data. Businesses seeking to leverage the advantages of AI must prioritise understanding the impact of these changes.

The security challenges of AI also underscore the ongoing need to implement DevSecOps practices throughout the software development life cycle, where security and privacy are not afterthoughts but are integral parts of the development process from the outset. In short, businesses must keep security at the forefront when adopting AI — similar to the shift left concept within DevSecOps — to ensure that innovations leveraging AI do not come at the cost of security and privacy.
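
As one illustration of shifting security left, the following Python sketch shows the kind of lightweight check a team might run as a pre-commit hook or early CI job to catch likely hard-coded secrets before code is merged. The patterns and exit-code convention are illustrative assumptions, not a production-grade scanner.

```python
# Sketch: a shift-left secret scan run before code reaches the main branch.
import re
import sys
from pathlib import Path

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]{8,}"),
]

def scan(paths: list[Path]) -> int:
    findings = 0
    for path in paths:
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for pattern in SECRET_PATTERNS:
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible secret ({pattern.pattern})")
                    findings += 1
    return findings

if __name__ == "__main__":
    files = [Path(p) for p in sys.argv[1:] if Path(p).is_file()]
    sys.exit(1 if scan(files) else 0)  # non-zero exit fails the pipeline
```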

4. A rise in AI bias
While 2023 was AI’s breakout year, its rise also put a spotlight on bias in algorithms. AI tools that rely on internet data for training inherit the full range of biases expressed across online content. This development poses a dual challenge: exacerbating existing biases and creating new ones that impact the fairness and impartiality of AI in DevSecOps.

To counteract pervasive bias, developers must focus on diversifying their training datasets, incorporating fairness metrics, and deploying bias-detection tools in AI models, and explore AI models designed for specific use cases. One promising avenue is using AI feedback to evaluate AI models against a clear set of principles, or a “constitution,” that establishes firm guidelines about what AI will and won’t do. Establishing ethical guidelines and training interventions is crucial to ensuring unbiased AI outputs.
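
For a sense of what a fairness metric looks like in practice, here is a minimal Python sketch of one common measure, the demographic parity difference: the gap in positive-outcome rates between two groups in a model’s predictions. The toy data is invented for illustration; a real evaluation would use held-out predictions and protected attributes.

```python
# Sketch: demographic parity difference between two groups' positive rates.
def positive_rate(predictions: list[int], groups: list[str], group: str) -> float:
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_difference(predictions, groups, group_a, group_b) -> float:
    return abs(positive_rate(predictions, groups, group_a)
               - positive_rate(predictions, groups, group_b))

if __name__ == "__main__":
    preds  = [1, 0, 1, 1, 0, 1, 0, 0]  # a model's binary decisions
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap = demographic_parity_difference(preds, groups, "a", "b")
    print(f"demographic parity difference: {gap:.2f}")  # 0.00 would be parity
    # A team might fail a model review if the gap exceeds an agreed threshold.
```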

As Australian organisations ramp up their shift toward AI-centric business models, they must first establish robust data governance frameworks to ensure the quality and reliability of the data in their AI systems. AI systems are only as good as the data they process, and bad data can lead to inaccurate outputs and poor decisions.
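
The sketch below gives a Python flavour of the kind of data-quality gate such a governance framework might enforce before records reach an AI system: rejecting rows with missing fields, out-of-range values, or duplicates. The field names and rules are illustrative assumptions.

```python
# Sketch: a simple data-quality gate applied before data feeds an AI system.
from typing import Iterable

REQUIRED_FIELDS = {"customer_id", "age", "consent"}

def validate(record: dict) -> list[str]:
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    if "age" in record and not 0 <= record["age"] <= 130:
        errors.append(f"age out of range: {record['age']}")
    if record.get("consent") is False:
        errors.append("no consent to process record")
    return errors

def clean(records: Iterable[dict]) -> list[dict]:
    seen, accepted = set(), []
    for record in records:
        key = record.get("customer_id")
        if key in seen:
            continue                      # drop duplicates
        if not validate(record):          # keep only rule-passing rows
            accepted.append(record)
            seen.add(key)
    return accepted

if __name__ == "__main__":
    raw = [
        {"customer_id": 1, "age": 34, "consent": True},
        {"customer_id": 1, "age": 34, "consent": True},   # duplicate
        {"customer_id": 2, "age": 220, "consent": True},  # bad age
        {"customer_id": 3, "consent": False},             # missing field, no consent
    ]
    print(f"{len(clean(raw))} of {len(raw)} records passed the quality gate")
```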

Developers and the broader tech community should demand and facilitate the development of unbiased AI, whether through constitutional AI or reinforcement learning from human feedback aimed at reducing bias. This requires a concerted effort across AI providers and users to ensure responsible AI development that prioritises fairness and transparency.

To fully harness the potential for AI transformation across Australia's technology landscape and beyond, business leaders and DevSecOps teams will need to confront the anticipated challenges amplified by using AI, whether they be threats to privacy, trust in what AI produces, or issues of cultural resistance. Navigating the new era in software development and security requires a comprehensive approach encompassing ethical AI development and use, vigilant security and governance measures, and a commitment to preserving privacy.
