Deloitte to Refund Australian Government Over Faulty AI Report
Accounting and consulting firm Deloitte has agreed to issue a partial refund to the Australian government for a report riddled with incorrect citations and references to nonexistent research. The incident highlights the risks professional services firms face when they deploy AI tools without adequate oversight.
The report, titled “Targeted Compliance Framework Assurance Review,” was published in August by the Department of Employment and Workplace Relations (DEWR) and cost Australian taxpayers around AUD 440,000 (approximately USD 290,000). Deloitte has acknowledged that it used Azure OpenAI’s GPT-4o in producing the report.
Shortly after its release, Chris Rudge, a deputy director at the University of Sydney, spotted several citations to papers that did not actually exist, including multiple references to fake reports supposedly authored by Lisa Burton Crawford, a law professor at the University of Sydney. Crawford voiced her concerns over the misuse of her name and requested clarification from Deloitte regarding the misleading citations.
In light of these issues, Deloitte and DEWR published an updated version of the report last Friday, which included “a few revisions to references and footnotes.” The new version acknowledged the use of a generative AI large language model (Azure OpenAI GPT-4o) in assessing the alignment of system code with business requirements and compliance needs.
The revised report eliminated 14 of the 141 references listed in the original, including the fictitious works attributed to Crawford and others. Among these were erroneous citations involving Federal Court Justice Jennifer Davies, whose name had been rendered as “Davis” in the flawed report.
Deloitte Australia has stated it will repay the final instalment of its contract with the government; however, it remains unclear what portion of the total contract this represents. A spokesperson for DEWR confirmed that the core content and recommendations of the independent review had not changed.
Despite the updates, Rudge criticized the report’s foundation, calling its recommendations flawed and unreliable. He raised concerns about Deloitte’s use of AI for critical analysis tasks without proper disclosure, casting doubt on the report’s overall trustworthiness.
This incident is not unique; professional firms have frequently stumbled due to over-reliance on AI for accurate data. One of the most troubling areas is the legal field, where lawyers have mistakenly cited fabricated case law generated by AI chatbots.
In a previous report, Breitbart News highlighted a major law firm’s acknowledgment of the risks associated with this practice, describing it as “nauseating and horrifying.”
In an internal memo presented in court documents, Morgan & Morgan’s chief transformation officer cautioned over 1,000 lawyers at the firm that citing fake cases could lead to severe consequences, including termination. The warning followed an incident in which one of the firm’s lead attorneys cited eight fictitious cases in a lawsuit against Walmart, later found to have been generated by the AI chatbot ChatGPT.
This raised alarms about the increasing reliance on AI tools in legal contexts and the associated risks of using them without thorough verification. Walmart’s legal team urged the court to consider sanctions against Morgan & Morgan, asserting that the cases cited “do not exist outside the realm of artificial intelligence.”
Following this, the lead attorney was swiftly removed from the case and replaced by his supervisor. The supervisor expressed “great embarrassment” about the situation and agreed to cover all fees related to Walmart’s response to the false filings, highlighting that this case should serve as a cautionary tale for both the firm and the broader legal community.