EU AI Act.
Is your AI compliant?
The EU AI Act is the world's first comprehensive AI regulation. It classifies AI systems by risk level — from banned practices to minimal obligations. Most requirements apply from August 2, 2026.
AI systems classified by risk
The EU AI Act uses a risk-based approach. All providers and deployers of AI systems in the EU market must classify their systems into one of four risk tiers.
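The four-tier structure can be sketched as a simple lookup. A minimal sketch, using the example systems listed below; tier assignments are illustrative, not legal classifications:

```python
# Illustrative mapping of AI systems to EU AI Act risk tiers.
# These assignments are simplified examples, not legal advice.
RISK_TIERS = {
    "unacceptable": ["social scoring", "real-time biometric surveillance"],
    "high": ["credit scoring", "recruitment ai", "medical ai"],
    "limited": ["chatbots", "recommender systems"],
    "minimal": ["spam filters"],
}

def classify(system: str) -> str:
    """Return the risk tier for a known example system."""
    for tier, examples in RISK_TIERS.items():
        if system.lower() in examples:
            return tier
    raise ValueError(f"unknown system {system!r}: classify it manually")
```

In practice classification depends on the system's intended purpose and context of use, so a static lookup like this is only a starting point for an inventory.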
Social Scoring: Banned
Biometric Surveillance: Banned
Credit Scoring: High Risk
Recruitment AI: High Risk
Medical AI: High Risk
Chatbots: Limited Risk
Recommender Systems: Limited Risk
Spam Filters: Minimal Risk

The cost of non-compliance
Banned AI practices: €35M or 7% of global annual turnover, whichever is higher
High-risk violations: €15M or 3% of global annual turnover, whichever is higher
Incorrect information supplied to authorities: €7.5M or 1.5% of global annual turnover, whichever is higher
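Each penalty is the greater of a fixed cap and a share of worldwide annual turnover. A minimal sketch of that "whichever is higher" calculation, using the figures above (turnover in euros):

```python
# EU AI Act administrative fines: the higher of a fixed amount
# or a percentage of global annual turnover.
PENALTIES = {
    "banned_practice": (35_000_000, 0.07),
    "high_risk_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Upper bound of the fine: whichever of the two figures is higher."""
    cap, pct = PENALTIES[violation]
    return max(cap, pct * global_turnover_eur)
```

For a company with €1bn in turnover, 7% (€70M) exceeds the €35M cap, so the turnover-based figure governs; below €500M in turnover, the fixed cap does.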
What the AI Act requires — and what SiteGuardian monitors
The EU AI Act defines obligations across the AI lifecycle. SiteGuardian monitors the requirements that map to technical and compliance posture checks.
Risk Management
Monitored: SiteGuardian tracks AI-related compliance through questionnaire assessment, maps AI system risks to regulatory requirements, and monitors your overall risk posture across all applicable frameworks.
Transparency
AI system documentation and user notification requirements. Providers must ensure high-risk AI systems are designed to be sufficiently transparent for deployers to interpret output. This is an organisational measure.
Human Oversight
High-risk AI systems must be designed to allow effective human oversight, including human-in-the-loop, human-on-the-loop, or human-in-command approaches. This is an organisational measure.
Accuracy & Robustness
High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity, supported by technical testing and validation. This is an organisational measure.
Deployer Obligations
Monitored: SiteGuardian monitors AI system performance and compliance posture, generates audit evidence for deployer obligations, and tracks whether your organisation meets its duties under the AI Act.
Transparency for Limited Risk
Chatbot and deepfake disclosure obligations. Users must be informed when interacting with AI systems. SiteGuardian covers this through questionnaire-based assessment of your AI system inventory.
Algorithmic Transparency
Monitored: SiteGuardian's DSA checks cover recommender system transparency requirements (cross-reference to Art. 27 DSA), ensuring platforms disclose their algorithmic recommendation parameters.
Reporting Obligations
Monitored: Providers and deployers must report serious AI incidents. SiteGuardian supports incident detection and reporting workflows, tracks notification deadlines, and generates pre-filled reports for authorities.
Key deadlines
The AI Act applies in phases. Mark these dates.
1 August 2024
Entry into force
Regulation (EU) 2024/1689, published in the Official Journal on 12 July 2024, enters into force.
2 February 2025
Banned AI practices
Prohibitions on unacceptable-risk AI systems apply: social scoring, real-time biometric surveillance, manipulative AI, emotion recognition in workplaces/schools.
2 August 2025
GPAI rules
Rules for general-purpose AI models (including foundation models) and governance structures apply.
2 August 2026
Most obligations apply
High-risk AI system requirements, transparency obligations, deployer duties, conformity assessments, and penalty enforcement take effect.
2 August 2027
Annex I high-risk AI
Requirements take effect for high-risk AI systems embedded in regulated products (medical devices, machinery, toys, etc.) under existing EU product safety legislation.
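The phased dates above can be encoded as a simple applicability check. A sketch using the timeline's dates, with obligations heavily simplified:

```python
from datetime import date

# Phased application dates of the EU AI Act (Regulation 2024/1689),
# taken from the timeline above; labels are simplified summaries.
PHASES = [
    (date(2024, 8, 1), "entry into force"),
    (date(2025, 2, 2), "prohibitions on banned practices"),
    (date(2025, 8, 2), "GPAI model rules"),
    (date(2026, 8, 2), "most obligations, incl. high-risk requirements"),
    (date(2027, 8, 2), "high-risk AI embedded in Annex I products"),
]

def applicable(on: date) -> list[str]:
    """Return every phase already applicable on the given date."""
    return [label for start, label in PHASES if on >= start]
```

For example, a compliance check run in September 2025 would flag the prohibitions and the GPAI rules as live, with the bulk of the obligations still ahead.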
Start preparing today
Scan your website to see where you stand. SiteGuardian maps findings to AI Act articles — so you know exactly what to address.
Free forever for 1 monitor. No credit card required.