Even beyond its scope, the EU AI Act offers an opportunity for general counsel to focus on how to regulate AI.
By Laura Cohn
Evolving or implementing AI governance is one of the top five priorities for legal leaders in 2025. But with a patchwork of rules emerging across the U.S. and the EU AI Act beginning to take effect, general counsel (GC) are grappling with how to prepare their organizations for new requirements.
Though the EU AI Act goes beyond what U.S. state and local efforts entail, it offers an opportunity for GC to focus on the commonalities and the differences. History shows that once leading jurisdictions set the agenda, others follow suit.
Proactively implementing EU AI Act compliance strategies now could help prevent fines and reputational damage down the line.
Colorado, Illinois, Utah and New York City have all implemented AI laws that businesses must adhere to, and new legislation could pass in California soon. Look for commonalities between the laws — and the EU AI Act — across three principles: transparency, risk management and fairness.
To help meet the obligation to notify consumers, direct your legal and compliance teams to:
Work with IT or other relevant stakeholders to update notices on automated chatbots to disclose to users that they are interacting with AI — and include an option to speak with a human
Collaborate with IT and any other function using AI to ensure the organization has a process for labeling AI-generated content as such
Because the EU AI Act builds on the principles of the General Data Protection Regulation (GDPR), risk assessment processes required for compliance with the measures often overlap. Work with legal and compliance staff to:
Incorporate questions into existing risk assessments and intake processes to surface high-risk AI use cases
Integrate questions used in the EU AI Act–mandated Fundamental Rights Impact Assessment (FRIA) into existing Data Protection Impact Assessments (DPIAs) for high-risk AI projects
The EU AI Act and laws in New York City and Colorado include measures to uphold workplace integrity when using AI in employment processes. Ask HR partners whether the organization uses HR technology vendors with AI functionality and, if it does, ask:
What data are they using?
What assumptions go into their algorithms to create a “match”?
How will they comply with current and future regulations?
What steps do they take to mitigate the harm caused by bias?
The EU AI Act, which became law in 2024, outlines a set of rules for organizations operating in the EU. The measure takes a risk-based approach and levies significant fines for noncompliance. EU AI Act compliance includes such requirements as mandating impact assessments on fundamental individual rights, adopting processes to minimize bias in AI outputs and disclosing AI use to customers and regulators.
Gartner recommends that organizations create AI policies based on the commonalities of emerging AI laws across the EU and U.S. states and localities to ensure compliance wherever the business operates.