EU Regulation 2024/1689

EU AI Act.
Is your AI compliant?

The EU AI Act is the world's first comprehensive AI regulation. It classifies AI systems by risk level — from banned practices to minimal obligations. Most requirements apply from August 2, 2026.

---
Countdown to August 2, 2026

AI systems classified by risk

The EU AI Act uses a risk-based approach. All providers and deployers of AI systems in the EU market must classify their systems into one of four risk tiers.

Social Scoring: Banned
Biometric Surveillance: Banned
Credit Scoring: High Risk
Recruitment AI: High Risk
Medical AI: High Risk
Chatbots: Limited Risk
Recommender Systems: Limited Risk
Spam Filters: Minimal Risk
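As an illustration only (not legal advice), the tier assignments above can be sketched as a simple lookup; the system names and tiers are taken directly from the examples on this page:

```python
# Illustrative only: maps the example AI systems above to their AI Act risk tiers.
RISK_TIERS = {
    "social scoring": "banned",
    "biometric surveillance": "banned",
    "credit scoring": "high",
    "recruitment ai": "high",
    "medical ai": "high",
    "chatbots": "limited",
    "recommender systems": "limited",
    "spam filters": "minimal",
}

def classify(system_name: str) -> str:
    """Return the risk tier for a known example system (case-insensitive)."""
    return RISK_TIERS.get(system_name.strip().lower(), "unclassified")
```

In practice, classification depends on the system's intended purpose and the Act's annexes, not on a name lookup; this sketch only mirrors the examples shown here.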

The cost of non-compliance

Banned AI practices: €35M or 7% of global annual turnover, whichever is higher
High-risk violations: €15M or 3% of global annual turnover, whichever is higher
Supplying incorrect information: €7.5M or 1% of global annual turnover, whichever is higher
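The "whichever is higher" rule is a simple maximum of the fixed cap and the turnover-based cap. A rough sketch, illustrative only and ignoring SME carve-outs and supervisory discretion:

```python
# Illustrative only: maximum possible fine per penalty tier under Art. 99.
# Fixed caps in euros; percentage caps as fractions of global annual turnover.
PENALTY_TIERS = {
    "banned_practice": (35_000_000, 0.07),
    "high_risk_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    fixed_cap, turnover_rate = PENALTY_TIERS[violation]
    return max(fixed_cap, turnover_rate * global_annual_turnover_eur)
```

For example, a company with €1 billion in global annual turnover faces up to €70M for a banned practice, since 7% of turnover exceeds the €35M fixed cap.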

What the AI Act requires — and what SiteGuardian monitors

The EU AI Act defines obligations across the AI lifecycle. SiteGuardian monitors the requirements that map to technical and compliance posture checks.

Art. 9

Risk Management

Monitored

SiteGuardian tracks AI-related compliance through questionnaire assessment, maps AI system risks to regulatory requirements, and monitors your overall risk posture across all applicable frameworks.

Art. 13

Transparency

AI system documentation and user notification requirements. Providers must ensure high-risk AI systems are designed to be sufficiently transparent for deployers to interpret output. This is an organisational measure.

Art. 14

Human Oversight

High-risk AI systems must be designed to allow effective human oversight, including human-in-the-loop, human-on-the-loop, or human-in-command approaches. This is an organisational measure.

Art. 15

Accuracy & Robustness

High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity, and must meet technical testing and validation requirements. This is an organisational measure.

Art. 26

Deployer Obligations

Monitored

SiteGuardian monitors AI system performance and compliance posture, generates audit evidence for deployer obligations, and tracks whether your organisation meets its duties under the AI Act.

Art. 50

Transparency for Limited Risk

Chatbot and deepfake disclosure obligations. Users must be informed when interacting with AI systems. SiteGuardian covers this through questionnaire-based assessment of your AI system inventory.

Art. 52

Algorithmic Transparency

Monitored

SiteGuardian's DSA checks cover recommender system transparency requirements (cross-reference to Art. 27 DSA), ensuring platforms disclose their algorithmic recommendation parameters.

Art. 73

Reporting Obligations

Monitored

Providers and deployers must report serious AI incidents. SiteGuardian supports incident detection and reporting workflows, tracks notification deadlines, and generates pre-filled reports for authorities.

Key deadlines

The AI Act entered into force on 1 August 2024, but its obligations apply in phases. Mark these dates.

1 August 2024

Entry into force

Regulation 2024/1689, published in the Official Journal on 12 July 2024, enters into force.

2 February 2025

Banned AI practices

Prohibitions on unacceptable-risk AI systems apply: social scoring, real-time biometric surveillance, manipulative AI, emotion recognition in workplaces/schools.

2 August 2025

GPAI rules

Rules for general-purpose AI models (including foundation models) and governance structures apply.

2 August 2026

Most obligations apply

High-risk AI system requirements, transparency obligations, deployer duties, conformity assessments, and penalty enforcement take effect.

2 August 2027

Annex I high-risk AI

Requirements for high-risk AI systems embedded in regulated products (medical devices, machinery, toys, etc.) under existing EU product safety legislation.

Start preparing today

Scan your website to see where you stand. SiteGuardian maps findings to AI Act articles — so you know exactly what to address.

Free forever for 1 monitor. No credit card required.

Frequently asked questions

What is the EU AI Act?
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive regulation for artificial intelligence. It uses a risk-based approach to classify AI systems into four tiers: unacceptable risk (banned), high risk (strict requirements), limited risk (transparency obligations), and minimal risk (no specific obligations).
When does the EU AI Act take effect?
The AI Act has phased deadlines: February 2025 for banned AI practices, August 2025 for general-purpose AI (GPAI) rules, August 2, 2026 for most obligations including high-risk AI system requirements and deployer duties, and August 2027 for high-risk AI embedded in regulated products (Annex I).
Who must comply with the EU AI Act?
All providers (developers) and deployers (users) of AI systems placed on or used in the EU market must comply, regardless of where they are established. This includes EU-based companies and non-EU companies whose AI systems affect people in the EU.
How is the AI Act different from the GDPR?
The GDPR regulates personal data processing, while the AI Act specifically regulates AI systems based on risk. They are complementary — high-risk AI systems that process personal data must comply with both. The AI Act adds requirements for transparency, human oversight, accuracy, and robustness that go beyond data protection.
How does SiteGuardian help with AI Act compliance?
SiteGuardian maps AI compliance requirements through questionnaire-based risk assessment, monitors algorithmic transparency via DSA cross-references (Art. 27 DSA for recommender systems), tracks deployer obligations and compliance posture, generates audit evidence, and supports incident detection and reporting workflows for serious AI incidents.