
Deloitte to refund part of A$440,000 fee after AI-generated errors found in Australian govt report

@indiablooms | Oct 07, 2025, at 04:59 pm

Canberra: Global consulting firm Deloitte has agreed to refund part of its A$440,000 (about US$290,000) fee to the Australian government after admitting that generative AI tools were used in preparing a report reviewing the country's welfare compliance system.

The Department of Employment and Workplace Relations had commissioned the firm in 2024 to review the compliance framework and IT system that automatically penalises job seekers who fail to meet mutual obligation requirements, The Guardian reported.

However, the final report, released in July, contained serious inaccuracies, including citations to academic works that do not exist and a fabricated quote from a Federal Court ruling, according to the Australian Financial Review.

The department published an updated version of the report on its website on Friday, removing more than a dozen fake references and footnotes, correcting typographical errors, and revising the reference list.

Australian welfare academic Dr Christopher Rudge, who first identified the discrepancies, said the report exhibited AI “hallucinations”—where AI systems generate false or misleading information by filling gaps or misinterpreting data.

“Rather than simply replacing a single fake reference with a real one, they've removed the hallucinated citations and, in the updated version, added five, six, even seven or eight new ones in their place. So what that suggests is that the original claim made in the body of the report wasn't based on any one particular evidentiary source,” he said.

Deloitte’s response

The firm admitted to using AI but said the technology was employed only in the early drafting stages, with the final document reviewed and refined by human experts.

Deloitte maintained that AI usage did not affect the “substantive content, findings or recommendations” of the report.

While acknowledging that generative AI tools were used, Deloitte did not directly link the errors to artificial intelligence.

In the revised version, the company disclosed that its research methodology had involved a large language model—specifically, Azure OpenAI GPT-4o.

A Deloitte spokesperson confirmed that “the matter has been resolved directly with the client.” The department said the refund process is underway and that future consultancy contracts could include stricter rules regarding AI-generated material.

Ethical concerns

The episode has triggered broader discussion about the ethical and financial accountability of using artificial intelligence in consultancy work, particularly in government-funded projects.

As consulting firms increasingly rely on AI for efficiency, questions are being raised about the extent of human oversight and whether clients receive genuine value.

Notably, Deloitte recently entered a partnership with Anthropic to provide nearly 500,000 employees worldwide access to the Claude chatbot—underlining the growing integration of AI into professional services.

The case represents one of the first significant instances in Australia where a private firm has faced repercussions for undisclosed AI use in a government project.
