Deloitte to refund part of $440,000 fee after Australian govt report’s AI-generated errors



IBNS-CMEDIA: Canberra: Global consulting firm Deloitte has agreed to refund part of its A$440,000 (about US$290,000) fee to the Australian government after admitting that generative AI tools were used in preparing a report reviewing the country's welfare compliance framework.

The Department of Employment and Workplace Relations had commissioned the firm in 2024 to review the compliance framework and IT system that automatically penalises job seekers who fail to meet mutual obligation requirements, The Guardian reported.

However, the final report—released in July—contained serious inaccuracies, including citations of academic works that do not exist and a fabricated quote from a Federal Court ruling, according to the Australian Financial Review.

The department published an updated version of the report on its website on Friday, removing more than a dozen fake references and footnotes, correcting typographical errors, and revising the reference list.

Australian welfare academic Dr Christopher Rudge, who first identified the discrepancies, said the report exhibited AI “hallucinations”—where AI systems generate false or misleading information by filling gaps or misinterpreting data.

“Rather than simply replacing a single fake reference with a real one, they’ve removed the hallucinated citations and, in the updated version, added five, six, even seven or eight new ones in their place. So what that suggests is that the original claim made in the body of the report wasn’t based on any one particular evidentiary source,” he said.

Deloitte’s response

The firm admitted to using AI but said it was only employed in the early drafting stages, with the final document reviewed and refined by human experts.

Deloitte maintained that AI usage did not affect the “substantive content, findings or recommendations” of the report.

While acknowledging that generative AI tools were used, Deloitte did not directly link the errors to artificial intelligence.

In the revised version, the company disclosed that its research methodology had involved a large language model—specifically, Azure OpenAI GPT-4o.

A Deloitte spokesperson confirmed that “the matter has been resolved directly with the client.” The department said the refund process is underway and that future consultancy contracts could include stricter rules regarding AI-generated material.

Ethical concerns

The episode has triggered broader discussion about the ethical and financial accountability of using artificial intelligence in consultancy work, particularly in government-funded projects.

As consulting firms increasingly rely on AI for efficiency, questions are being raised about the extent of human oversight and whether clients receive genuine value.

Notably, Deloitte recently entered a partnership with Anthropic to give nearly 500,000 employees worldwide access to the Claude chatbot, underlining the growing integration of AI into professional services.

The case represents one of the first significant instances in Australia where a private firm has faced repercussions for undisclosed AI use in a government project.