Deloitte to Repay Australian Government After AI Errors Discovered in Official Report
In a striking development that highlights the growing scrutiny of artificial intelligence in professional services, global consulting giant Deloitte has agreed to repay part of its AU$440,000 fee to the Australian government. The decision follows the discovery that an official report Deloitte produced contained AI-generated inaccuracies and fabricated references.
The report in question, commissioned in 2024 by the Department of Employment and Workplace Relations (DEWR), was an independent assurance review of the Targeted Compliance Framework, the department’s automated welfare compliance system. Deloitte’s brief was to assess the performance of the IT system responsible for issuing penalties to job seekers who failed to meet specific mutual obligation requirements.
However, when the final report was released in July 2025, it quickly came under scrutiny after independent experts noticed irregularities that pointed to the misuse of generative AI during its drafting process.
Discovery of AI ‘Hallucinations’
The first to raise the alarm was Dr. Christopher Rudge, a welfare law academic, who found that the report contained several fabricated citations and quotes. According to the Australian Financial Review, some references cited academics and court judgments that did not exist, and the report even included a fake quote attributed to a Federal Court decision.
Dr. Rudge described these inaccuracies as clear examples of AI “hallucinations” — a term used when large language models generate false or misleading information that appears credible but lacks factual basis.
“Rather than simply replacing one fake reference with a real one, they’ve removed the hallucinated citations and, in many cases, added several new references,” Dr. Rudge explained. “This indicates that the claims made in the original report were not supported by any authentic evidence.”
Government Steps In
After the issue gained media attention, the Department of Employment and Workplace Relations reviewed the document and confirmed multiple inconsistencies. On Friday, the department uploaded a revised version of the report to its official website.
The updated document removed more than a dozen fictitious references, corrected numerous typographical errors, and disclosed that generative AI had been used as part of the research methodology: the revised version formally acknowledged that a large language model (Azure OpenAI GPT-4o) was involved during the early drafting stage.
In response, the department announced that Deloitte had agreed to refund a portion of its consultancy fee and that stricter guidelines would be introduced for the use of AI in future government contracts.
A department spokesperson confirmed that “the refund process is underway” and that measures would be taken to prevent similar incidents in future engagements.
Deloitte’s Response
Deloitte, one of the “Big Four” accounting and consulting firms, admitted to using AI tools in the preparation of the report but insisted that they were used only during preliminary research and drafting.
In a public statement, Deloitte emphasized that human experts had reviewed and refined the final report, ensuring that the “substantive content, findings, and recommendations” remained valid.
While the firm did not directly blame artificial intelligence for the inaccuracies, it conceded that AI-generated material had been included in parts of the report without proper verification.
A Deloitte spokesperson stated that the firm had resolved the issue directly with the client and that internal review processes had been strengthened to prevent similar errors. “The matter has been resolved directly with the Department. We take data integrity and transparency seriously, and our teams are committed to maintaining the highest professional standards,” the spokesperson added.
Ethical and Financial Implications
This controversy has ignited a broader debate about the ethical use of AI in high-value consulting and public policy work. As governments and corporations increasingly adopt AI tools to enhance productivity, questions are being raised about accountability, transparency, and human oversight.
Critics argue that while AI can streamline workflows and generate insights quickly, overreliance without proper validation risks undermining public trust—particularly in government-funded projects where accuracy is paramount.
Industry analysts note that this case may set a precedent for AI governance in consultancy work. It underscores the need for clear disclosure when AI systems contribute to research or analysis, especially when those outputs influence policy or public spending.
A Growing Trend of AI Integration
Ironically, Deloitte has been one of the leading voices advocating for AI integration in business operations. Earlier this year, the firm signed a partnership deal with Anthropic, granting nearly 500,000 of its employees worldwide access to the Claude AI chatbot, a tool designed to assist with data analysis, report writing, and client engagement.
This deal was intended to enhance productivity and demonstrate Deloitte’s commitment to innovation in professional services. However, the recent incident has highlighted the risks of blending automation with advisory services without adequate human supervision.
Lessons for the Consulting Industry
The Deloitte case sends a clear warning across the consulting and technology sectors: AI cannot be a substitute for human judgment in critical analysis and policy recommendations. While generative AI tools like ChatGPT, Claude, and Azure OpenAI models are increasingly capable of producing complex text, they remain prone to fabrication and factual inaccuracies if not carefully monitored.
Experts suggest that firms should implement transparent AI disclosure policies, ensure rigorous fact-checking procedures, and provide training to consultants on ethical AI usage. Moreover, clients—especially government agencies—must demand full transparency regarding whether and how AI tools are used in the preparation of official reports.
Broader Impact on AI Regulation
The Australian government’s handling of this issue may influence how other countries approach AI governance in professional environments. In recent months, both the European Union and the United States have been drafting regulations that would require companies to label AI-generated content, particularly in official or high-stakes contexts.
This case may also inspire policymakers to introduce disclosure clauses in consulting contracts, ensuring that clients know when generative AI has been used in research or documentation.
As AI continues to reshape industries, balancing innovation with accountability will remain a central challenge. Deloitte’s refund and acknowledgment may be seen not only as a corrective action but also as an early step toward ethical maturity in the age of AI-assisted work.

