Rethinking mainframe modernisation: why AI changes the equation
Legacy mainframe systems remain the silent workhorses behind some of Australia's most business-critical workloads, from bank transactions and insurance claims to government records and public sector administration. Their strength lies in handling complex, mission-critical applications that have reliably served industries for decades.
But the accelerating pace of technological change, coupled with hardware lifecycles and a shifting talent pool, demands a proactive approach to their eventual end-of-life. The question is no longer whether to modernise, but how to navigate this complexity without destabilising the systems that keep the economy running. Past efforts have often been slow, expensive, and risky, which explains why so many mainframes remain. Yet the risks of ageing technology, dwindling expertise, and stalled innovation are mounting - and demand new approaches.
Cost, complexity and technical debt
One of the biggest barriers to mainframe modernisation is its sheer cost and complexity. This is not a routine tech refresh but the unpicking of decades of accumulated technical debt, much of it hidden in undocumented code and intricate interdependencies. In Australia, where government agencies and financial institutions run some of the largest estates, compliance obligations only amplify the challenge.
The talent shortage compounds the issue. Many of the engineers with deep mainframe expertise are nearing retirement, creating a potential knowledge vacuum that makes future change even harder.
In our experience, phased migrations have been the traditional way forward - starting with non-critical workloads, codifying tribal knowledge before it disappears, and investing in cross-skilling programs. While these strategies have value, they are slow, resource-intensive, and often leave core systems untouched.
Data migration
Data migration remains one of the highest-risk stages of modernisation. Petabytes of critical business information - often poorly documented - must be moved without disruption. Even minor errors risk data loss or corruption, with severe consequences in Australia's highly regulated sectors like financial services and healthcare.
Techniques such as API façades, incremental replication via event streaming, and anti-corruption layers can help decouple legacy systems from modern platforms. But success hinges on treating data migration not just as a technical exercise, but as a business transformation tied to measurable outcomes like resilience, compliance, and customer experience.
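The anti-corruption layer idea can be sketched in miniature: a thin translation boundary that converts legacy data formats into a clean domain model, so the modern platform never depends on mainframe conventions directly. The record layout and field names below are purely hypothetical, standing in for a real COBOL copybook definition.

```python
from dataclasses import dataclass
from decimal import Decimal

# Hypothetical fixed-width layout, as a legacy copybook might define it:
# chars 0-9 account id, 10-21 balance in cents (zero-padded), 22 status flag.
LEGACY_RECORD = "0012345678000000123456A"

@dataclass(frozen=True)
class Account:
    """Modern domain model, free of legacy layout details."""
    account_id: str
    balance: Decimal
    active: bool

def translate(record: str) -> Account:
    """Anti-corruption layer: the only place that knows the legacy format."""
    return Account(
        account_id=record[0:10],
        balance=Decimal(record[10:22]) / 100,  # cents -> dollars
        active=record[22] == "A",
    )

account = translate(LEGACY_RECORD)
print(account.balance)  # prints 1234.56
```

Because every consumer goes through `translate`, the legacy format can later be retired by changing one function rather than every downstream system - which is the point of the pattern.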
Culture of change
Mainframe modernisation succeeds or fails as much on people as on technology. Without preparation, resistance sets in quickly. Legacy systems are also deeply embedded in broader IT ecosystems, making integration and observability gaps common pitfalls.
Fostering a culture of readiness is essential. This means equipping teams with the skills to own the new environment, embedding observability early by baselining current workloads, and ensuring transparent communication about long-term benefits. Prioritising API-first integration and automated testing reduces friction, helping new and legacy systems work together while building confidence across the organisation.
The advent of AI
What has changed recently is the arrival of generative AI, which offers new ways to analyse legacy codebases, accelerate transformation, and reduce reliance on scarce experts. This shift towards AI-accelerated, behaviour-first modernisation reframes the problem. Instead of slowly unravelling decades of code, AI can make sense of it in weeks, and instead of risky rewrites, workloads can be regenerated with confidence.
At Thoughtworks, we pair rapid AI-driven reverse engineering of legacy code with a second stage we call forward engineering - a new way of thinking about how software is written for the modern era. Traditional "code conversion" approaches often struggled to preserve business logic or left organisations locked into brittle translations. Forward engineering takes a different path: using AI to generate demonstrably equivalent workloads in modern languages while applying automated equivalence testing to prove correctness. This shifts the emphasis from merely migrating old code to deliberately rewriting and reimagining systems for long-term adaptability.
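In miniature, automated equivalence testing means running the legacy rule and its regenerated counterpart over the same inputs and asserting identical outputs. Everything below is a hypothetical stand-in: the two interest functions represent an extracted legacy calculation and its modern regeneration, where a real harness would drive the actual systems.

```python
import random

def legacy_interest(balance_cents: int, rate_bps: int) -> int:
    # Integer arithmetic with truncation, typical of COBOL computations.
    return balance_cents * rate_bps // 10_000

def modern_interest(balance_cents: int, rate_bps: int) -> int:
    # Regenerated implementation; must reproduce legacy behaviour exactly,
    # including truncation, not just "roughly the same number".
    return (balance_cents * rate_bps) // 10_000

def check_equivalence(trials: int = 10_000) -> bool:
    """Compare both implementations over many randomised inputs."""
    rng = random.Random(42)  # fixed seed so failures are reproducible
    for _ in range(trials):
        balance = rng.randrange(0, 10**9)
        rate = rng.randrange(0, 2_000)
        if legacy_interest(balance, rate) != modern_interest(balance, rate):
            return False
    return True

print(check_equivalence())  # prints True
```

A production harness would replay recorded production transactions rather than random inputs, but the principle is the same: equivalence is demonstrated empirically, not asserted.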
The distinction matters. Reverse engineering allows organisations to rapidly understand what legacy systems are doing; forward engineering ensures what replaces them is not just a copy, but a living, maintainable foundation. Together, they offer a model that dramatically reduces risk while speeding up delivery.
This approach changes the economics of modernisation. For example, reverse-engineering a 10,000-line module that once took six weeks can now be done in two, potentially saving tens of thousands of days of effort across a program. Defect resolution accelerates, onboarding times for new developers shrink from months to days, and comprehension lags for business queries fall from days to minutes.
A strategic imperative
AI does not eliminate the need for careful planning, cultural readiness, or phased transitions. But it does give organisations more control, visibility, and confidence than traditional methods ever allowed. It turns modernisation from a leap of faith into a measurable, navigable path.
From our work with Australian enterprises, it's clear that mainframe end-of-life is not an event to be feared, but a strategic imperative to be approached with diligence and foresight. With forward engineering and AI-enabled methods now proven in practice, organisations have the chance to modernise at the speed and scale required. At Thoughtworks, we see this as a turning point: mainframe modernisation is no longer a barrier, but a strategic opportunity to build resilience, agility, and long-term competitiveness.