IT Brief Australia - Technology news for CIOs & IT decision-makers
Keir Garrett

International Women's Day: Embedding ethics, inclusion and skills at the heart of AI transformation

Thu, 5th Mar 2026

As we celebrate International Women's Day, one truth stands out: the future of technology will be shaped by the diversity of the people behind it. Diverse teams drive better outcomes; limited voices create limited futures. When a broad range of perspectives guides how AI is built and governed, progress accelerates; when they're absent, the risks grow. After all, the AI systems we create today will help shape how we work, learn, and collaborate tomorrow, so we can't afford to build that future on narrow viewpoints.

Yet women face a double exposure risk in the AI economy: underrepresentation in high-growth AI roles, and overrepresentation in functions most vulnerable to automation. In fact, IDC predicts that by 2027, half of enterprises will rely on AI agents to redefine how humans and machines collaborate. With women making up only around 30% of the AI workforce, we can't assume these systems will reflect the full diversity of perspectives they're intended to serve.

For me, the issue isn't just representation. Closing the diversity gap is important, but real progress requires stepping beyond the numbers. It means creating pathways into AI, empowering diverse voices in decisions, and designing systems that help people learn and thrive as technology evolves - all with inclusion built in from the outset, not as an afterthought.

Embracing ethics and inclusion in AI design: Beyond a cookie-cutter approach

When it comes to artificial intelligence, we consistently see one-size-fits-all approaches fall short. Biased data in, biased data out. And when development teams skew toward a single demographic, bias doesn't only show up in datasets; it surfaces in which problems are prioritised, how success is defined, and what risks are accepted.

In the agentic era, autonomy raises the stakes: small weaknesses in data, design, or oversight can be amplified once decisions are made and replicated at scale. Practical ways to counteract this bias include auditing datasets for representation gaps, testing models for unequal outcomes, stress-testing edge cases, and involving a diverse panel of human reviewers and developers throughout the AI lifecycle.
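To make the first two of those practices concrete, here is a minimal sketch of what a dataset representation check and an unequal-outcomes check might look like. The group labels, predictions, and the use of a lowest-to-highest selection-rate ratio as the disparity measure are illustrative assumptions, not a prescribed methodology.

```python
# Minimal bias-audit sketch: representation gaps in a dataset and
# unequal positive-prediction rates across groups. All data and
# thresholds here are hypothetical examples.
from collections import Counter

def representation_gaps(groups):
    """Share of records per group -- reveals under-represented groups."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def selection_rates(groups, predictions):
    """Positive-prediction (e.g. 'approved') rate per group."""
    totals, positives = Counter(), Counter()
    for g, p in zip(groups, predictions):
        totals[g] += 1
        positives[g] += int(p)
    return {g: positives[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Lowest selection rate divided by highest (1.0 = parity)."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Illustrative records: a protected attribute and a model decision each.
groups = ["a", "a", "a", "b", "b", "b", "b", "b"]
predictions = [1, 1, 0, 1, 1, 1, 1, 0]

print(representation_gaps(groups))
rates = selection_rates(groups, predictions)
print(rates, disparity_ratio(rates))
```

A real audit would run checks like these per protected attribute and per intersection of attributes, and flag any disparity ratio below an agreed threshold for human review rather than treating the number as a verdict.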

Inclusive design is both a safeguard and a smart approach to delivery. By considering different perspectives early, organisations reduce bias, anticipate challenges, and deliver AI that performs better for people and the business alike.

Designing AI with people at the centre, not just the technology

In my experience, neglecting workforce readiness deepens existing inequalities, and women are disproportionately affected. Trying to fix these gaps after the fact is costly - and avoidable. Ethical AI and economical AI are inextricably linked. That's why HR needs to move from a supporting role to a strategic one, ensuring reskilling, job transitions, and inclusion plans are embedded from the start.

Ethical AI can't be outsourced to a model. It requires human judgment and accountability at every stage. This means putting in place a framework to stress-test edge cases, audit datasets for representation, check for unequal outcomes, and involve diverse reviewers throughout development and deployment.

Building a fairer future from the ground up

With widespread organisational restructuring across Australia and the globe, businesses are rethinking their operating models in favour of automation. My challenge to leaders is this: if the business model is already on the operating table, use the moment to redesign it deliberately - in ways that not only lift efficiency, but also support meaningful upskilling and embed diversity and inclusion before new systems have the chance to cement old patterns.