Professional services and consulting firm Deloitte has announced a major enterprise AI deal with Anthropic. The announcement came on the same day it emerged that the firm would issue a refund for a government-contracted report that contained inaccurate, AI-produced information. The juxtaposition captures Deloitte's aggressive push into AI even as it confronts the technology's practical shortcomings, and Deloitte is far from alone in facing this kind of challenge.
The timing of the announcement is notable. On the same day Deloitte promoted its expanded use of AI, Australia's Department of Employment and Workplace Relations said the consulting firm would have to issue a refund for a report it had produced, which the Financial Times reported contained AI hallucinations. The department had commissioned the 439,000-dollar independent assurance review from Deloitte, and it was published earlier this year. The Australian Financial Review reported in August that the review contained a number of errors, including multiple citations to non-existent academic reports. A corrected version was uploaded to the department's website last week, and Deloitte will repay the final installment of its government contract.
Deloitte announced on Monday that it plans to roll out Anthropic's chatbot Claude to its nearly 500,000 employees worldwide. Deloitte and Anthropic, which formed a partnership last year, plan to build compliance products and features for regulated industries, including financial services, healthcare, and public services. According to CNBC, Deloitte also plans to create AI agent personas representing different departments within the company, such as accountants and software developers. A Deloitte leader said the company is making this significant investment because its approach to responsible AI aligns with Anthropic's, and that together the two can reshape how enterprises operate.
The financial terms of the deal, which Anthropic referred to as an alliance, were not disclosed. This deal is not only Anthropic’s largest enterprise deployment yet, but it also illustrates how AI is embedding itself in every aspect of modern life, from professional tools to casual home use.
Deloitte is not the only organization recently caught using inaccurate AI-produced information. In May, the Chicago Sun-Times newspaper admitted it had run an AI-generated list of books in its annual summer reading guide after readers discovered that some of the titles were fabricated, even though the authors were real. An internal document viewed by Business Insider showed that Amazon's AI productivity tool, Q Business, struggled with accuracy in its first year. Anthropic itself has been caught out by hallucinated information from its own chatbot, Claude: the AI lab's lawyer was forced to apologize earlier this year after the company used an AI-generated citation in a legal dispute with music publishers.

