
Deloitte Delivered Report to Australian Government Filled with AI Errors

Oct 1, 2025

Report from ABC News

In Brief – Consulting firm Deloitte Australia will refund part of the AU$440,000 ($290,000) it was paid by the Australian Government’s Department of Employment and Workplace Relations for a report assessing the agency’s use of automated welfare penalties, after the document was found to include at least 20 fabrications and inaccuracies that appear to have been generated by an AI system. The initial 237-page report, posted on the agency’s website in July, included numerous false citations, references to non-existent academic papers, and a fabricated quote attributed to a federal judge. The errors were caught by a Sydney University researcher, a professor of public and constitutional law, who was tipped off when the fabricated quote was sourced to a book he had not written. Deloitte confirmed that “some footnotes and references were incorrect” and that the “matter has been resolved directly with the client.” The company initially did not respond when asked whether the errors were generated by AI, but later confirmed they were. The revised report, posted in October, disclosed that Microsoft’s Azure OpenAI system had been used in its preparation. The department said the core findings and recommendations remained unchanged.

Context – In July, a US district judge in New Jersey withdrew his decision in a biopharma securities case after lawyers pointed out that his opinion contained numerous errors, including made-up quotes and misstated case outcomes, that were very likely the product of AI. Chatbot “hallucinations” in legal briefs first made news in mid-2023, less than a year after ChatGPT’s release, when two lawyers were sanctioned in US District Court. Chief Justice John Roberts led off his 2023 Annual Report on the Federal Judiciary by discussing AI technology in the legal system, warning about hallucinations and urging responsible use. The Judicial Office of the UK Ministry of Justice released similar AI guidance. Yet legal-world hallucinations keep popping up. Of course, everyone knows they need to check all AI work, just as everyone knows they should not drink and drive.


Platform Economy Insights produces a short email four times a week that reviews two top stories with concise analysis. It is the best way to keep on top of the news you should know. Sign up for this free email here.