A senior partner at KPMG Australia has been penalised A$10,000 (around US$7,000) after being caught using generative AI tools to complete an internal training assessment on the responsible and ethical application of artificial intelligence. The incident, first reported by the Financial Times, involved the partner uploading training materials to an external AI platform to generate correct answers for a course designed to teach proper AI usage. The breach was identified internally, after which the individual was required to retake the exam.
KPMG Australia revealed that more than two dozen staff members have been caught using AI assistance to complete internal training tests during the current financial year.
Partners found using AI in professional training
Andrew Yates, CEO of KPMG Australia, acknowledged the difficulty that companies face in regulating AI tools amid their rapid mainstream adoption. He stated, “Like most organisations, we have been grappling with the role and use of AI as it relates to internal training and testing. It’s a very hard thing to get on top of given how quickly society has embraced it.”
Yates added that while some employees breach policy out of everyday familiarity with generative AI tools, the firm treats violations seriously and is actively exploring ways to strengthen detection and enforcement within its current self-reporting framework. What makes the case particularly ironic is that the mandatory training on ethical AI use proved vulnerable to the very technology it seeks to govern.
Disciplinary proceedings against KPMG partners
The KPMG incident surfaced during a recent Australian Senate inquiry into industry governance. Greens senator Barbara Pocock called the episode “extremely disappointing” and criticised the existing oversight system as “toothless,” allowing misconduct to persist with limited consequences.
The Australian Securities and Investments Commission (ASIC) confirmed it had been notified but will not take further action until the relevant professional accounting body launches disciplinary proceedings against the partner. ASIC noted that audit firms are not obligated to report such internal breaches, leaving responsibility primarily with individuals to self-disclose to their professional bodies.
This is not an isolated issue in the accounting sector. The Association of Chartered Certified Accountants (ACCA) discontinued remote examinations late last year, citing its inability to effectively counter increasingly sophisticated AI-powered cheating methods. The Big Four firms (KPMG, Deloitte, PwC, EY) have faced similar cheating scandals and subsequent fines in multiple jurisdictions in recent years.
With generative AI becoming a core part of many industries, the KPMG partner incident underscores the need for firms to update their workplace policies and invest in better detection technologies. It also points to stricter controls for preserving integrity in training, exams and client deliverables.
