IT Brief Australia - Technology news for CIOs & IT decision-makers
Accountability top concern in AI agent workplace use

Mon, 11th May 2026
Sofiah Nichole Salivio, News Editor

Salesforce has published survey findings showing that accountability is the leading concern among knowledge workers in Australia and New Zealand using AI agents at work. The research also points to widespread use of the tools in day-to-day tasks.

The survey of 2,132 knowledge workers found that 76 per cent had already interacted with AI agents in the workplace, and 37 per cent were independently delegating specific tasks to them. At the same time, 38 per cent said the lack of accountability for mistakes made by AI was their top concern, ahead of trust in outputs and other operational issues.

The figures add to the broader debate over how companies are introducing AI into routine work while formal oversight structures remain unsettled. Workers also pointed to data privacy as a major issue, with 46 per cent citing it as a concern, while 40 per cent said trust in and reliability of outputs remained a problem.

Another finding suggests employee confidence is often developing outside formal workplace training. More than seven in 10 respondents said personal use of AI had increased their confidence at work, while 71 per cent said it had improved their trust in the technology.

Confidence gap

The research showed a clear divide between managers and other staff. Among non-managers, 59 per cent cited lack of accountability as a concern, compared with 29 per cent of managers.

A similar gap emerged on trust and reliability: 60 per cent of non-managers identified it as a concern, versus 34 per cent of managers.

Workers also identified what would make them more comfortable with AI agents acting on their behalf. Nearly half (47 per cent) said greater transparency and control over what the systems were doing would improve confidence, while 45 per cent said access to expert technical support would help.

Justin Tauber, General Manager, Agentic Technology, Trust & Adoption, Salesforce, said organisations were moving into a new phase of AI use. "We are moving past the novelty phase of AI into a period of high-volume output. The risk we face isn't just about efficiency, it's about delegation without direction. If we treat AI as a 'set and forget' tool, we risk creating a backlog of work that lacks human oversight. Accountability is the one thing AI cannot automate, and it's time we treat it as the most critical component of the agentic enterprise."

That assessment reflects a shift in the discussion around workplace AI. The issue is no longer simply whether staff will use these systems, but how responsibility is allocated when software makes decisions or completes tasks with limited supervision.

Workplace change

The findings point to a workforce willing to use AI agents but waiting for clearer rules on oversight and responsibility. Respondents were full-time workers across Australia and New Zealand, in roles up to middle management, spanning finance, marketing, IT, law, education, research, healthcare administration and consulting.

Salesforce also included a customer example from hipages, an online marketplace linking tradespeople with homeowners. The company introduced AI agents by first asking staff which tasks were repetitive and which still required human judgment. The approach was presented as a way to build trust through consultation rather than imposing AI use from the top down.

Jeremy Burton, Chief Technology Officer, hipages Group, said employees closest to day-to-day operations should help shape how the technology is used. "The ideas don't always need to come from the executives or the leadership. They should come from the ground where everyone knows the product, knows what they do day to day and understands how it could be better. We're seeing our reps take on more high value tasks, where nuance is required and human touch is more important."

Tauber said the wider impact of AI agents would involve a change in how knowledge work is defined. "Scaling AI effectively depends on more than access to the right technology. The old mental model of the knowledge worker, who simply generates reports and emails, is dead. In the agentic enterprise, our human teams move from being double-checkers to standard-setters."

He added: "We are moving from a skills mindset - reflecting what we're trained to do - to a responsibility mindset, to focus on what we ought to be doing. Our survey shows workers are ready to lead, but they need the rules of the arena."