
The Department of Justice (DOJ) yesterday released revised guidance for corporate compliance programs that incorporate artificial intelligence (AI), intended to help federal prosecutors assess these systems more effectively. The updated framework — the DOJ’s Evaluation of Corporate Compliance Programs — stems from a directive by Deputy Attorney General Lisa Monaco, who in early 2024 emphasized the importance of stricter penalties for criminals exploiting generative AI for misconduct. Part of a broader DOJ initiative to curb AI misuse, the guidance sets higher accountability standards, requiring companies to ensure their AI systems are ethical, effective, and capable of mitigating risks. More than a routine regulatory update, it ushers in a new era of corporate governance in which AI must be monitored, tested, and continuously improved to avoid harm.

The guidance revolves around three fundamental questions: Is the company’s AI-driven compliance program well-designed? Is it earnestly implemented? And, most critically, does it work in practice? Prosecutors are instructed to assess whether companies’ AI systems are equipped to detect and prevent misconduct and whether they are regularly updated to address emerging risks. AI can offer significant advantages for compliance, such as automated risk detection and real-time monitoring, but the DOJ’s position appears to be that companies cannot rely on these systems without proper oversight. AI must be trustworthy and aligned with both the law and internal governance policies. Companies are therefore expected to ensure that their AI systems function transparently and that decisions influenced by AI are subject to human review where necessary.

Full report: AI, Compliance, And Corporate Accountability: DOJ’s New Guidance.