ECIIA: The AI Act – Road to Compliance
The content is a guide for internal auditors on achieving compliance with the EU AI Act, which uses a risk-based framework to categorize AI systems and imposes varying obligations. It outlines roles and responsibilities within the AI value chain, details a phased implementation timeline, and emphasizes the need for organizations to prepare by inventorying and assessing their AI systems. A survey of over 40 companies indicates widespread AI adoption but a lack of deep understanding of the Act among internal auditors, highlighting the need for enhanced AI risk auditing skills and training.
Summary
The report, "The AI Act: Road to Compliance", serves as a practical guide for internal auditors navigating the European Union's Artificial Intelligence Act, which entered into force in August 2024. It outlines the key aspects of the AI Act, including its risk-based approach, which categorizes AI systems and imposes varying obligations based on risk level, and the different roles of entities within the AI value chain, such as providers and deployers.
The guide details the implementation timeline of the Act and the corresponding obligations and requirements for organizations. Furthermore, it presents survey results from over 40 companies regarding their AI adoption, compliance preparations, and the internal audit function's understanding and auditing of AI. Ultimately, the document emphasizes the crucial role of internal auditors in ensuring their organizations achieve compliance and responsibly manage AI risks.
- The EU AI Act is now in force (August 1, 2024) and employs a risk-based approach to regulating AI systems, categorizing them into unacceptable, high, limited, and minimal risk levels, with obligations that increase with risk. There is also a specific category for General Purpose AI (GPAI) models, with additional requirements for those deemed to pose systemic risk.
- Organizations involved with AI systems occupy different roles (provider, deployer, importer, distributor, authorised representative), each with distinct responsibilities and compliance requirements under the AI Act. Provider and deployer are the primary roles, with providers facing the more extensive obligations.
- Compliance follows a phased implementation timeline, with key dates running from February 2025 (prohibited AI systems) through August 2027 (high-risk AI components in products). Organizations should start preparing now by creating AI inventories, classifying systems by risk, and establishing appropriate policies.
- Internal auditors play a vital role in helping organizations achieve compliance with the AI Act by assessing AI risks, auditing AI processes and governance, and making recommendations. They need to verify that AI Act requirements are implemented within their organizations.
- A recent survey of over 40 companies revealed widespread AI adoption but a relatively low level of understanding of the AI Act within internal audit departments. Most internal audit departments are not yet leveraging AI themselves; those that do use it mainly for risk assessment. Ensuring adequate AI auditing skills through training is highlighted as a pressing need.
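The inventory-and-classify preparation step above can be sketched in code. This is a minimal, illustrative model only: the four risk tiers come from the Act, but the data structure, field names, and the simplified obligation mapping are assumptions made for illustration, not legal guidance.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Risk tiers defined by the EU AI Act (labels illustrative)."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # e.g. Annex III use cases
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

@dataclass
class AISystem:
    name: str
    role: str        # "provider" or "deployer" (simplified; the Act lists more roles)
    use_case: str
    risk: RiskTier

def compliance_actions(system: AISystem) -> list[str]:
    """Map a system's risk tier to headline obligations (heavily simplified)."""
    if system.risk is RiskTier.UNACCEPTABLE:
        return ["decommission before the February 2025 prohibition applies"]
    if system.risk is RiskTier.HIGH:
        actions = ["risk management system", "technical documentation", "human oversight"]
        if system.role == "provider":
            # Providers of high-risk systems carry the most extensive obligations.
            actions.append("conformity assessment")
        return actions
    if system.risk is RiskTier.LIMITED:
        return ["transparency notice to users"]
    return ["voluntary codes of conduct"]

# Hypothetical inventory entries for illustration.
inventory = [
    AISystem("CV screening tool", "deployer", "employment", RiskTier.HIGH),
    AISystem("support chatbot", "provider", "customer service", RiskTier.LIMITED),
]
for s in inventory:
    print(f"{s.name}: {compliance_actions(s)}")
```

In practice an inventory like this would feed the internal audit plan: auditors can filter for high-risk entries and check each listed obligation against evidence.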