
EU AI Act: How to Prepare Your Business Before August 2026

On August 2, 2026, the core provisions of the EU AI Act become enforceable. For businesses operating in or serving the EU market, this means concrete requirements around risk assessment, documentation, governance and transparency for AI systems. Yet many organizations still lack a compliance plan. This article provides a practical overview of what the regulation requires, how the risk classification works, and how to prepare in time.

The EU AI Act is the world's first comprehensive legislation for artificial intelligence. It regulates AI systems based on risk — the higher the risk a system poses to people's rights, health and safety, the stricter the requirements. The regulation applies regardless of company size. It is the AI system's risk classification that determines which rules apply, not your revenue or headcount.

The regulation has extra-territorial reach, similar to GDPR. Any organization — regardless of where it is based — must comply if its AI systems are used within the EU or produce outputs that affect EU residents. For Swedish businesses, this aligns with the national AI strategy launched in February 2026, which aims to position Sweden among the top ten AI nations while ensuring responsible use.

If you are already working on an AI strategy and roadmap, compliance with the AI Act should be integrated into that process. It is not a separate compliance project — it is part of how you plan and execute AI initiatives responsibly.

Timeline: What applies now and what is coming

The AI Act is being phased in gradually. Several key milestones have already passed, and the most significant phase starts in August 2026.

  1. February 2025: Prohibitions on unacceptable-risk AI systems took effect. This includes social scoring, manipulative AI and certain forms of biometric surveillance.
  2. August 2025: Rules for general-purpose AI models (GPAI) such as GPT and Claude became applicable. Providers of these models must meet transparency and documentation requirements.
  3. August 2026: The core provisions become enforceable. Requirements for high-risk AI systems, risk assessments, quality management and transparency take effect.
  4. August 2027: Obligations extend to high-risk AI systems that are safety components of products regulated under EU product safety legislation, such as medical devices and machinery.

This means businesses have roughly three months to prepare for the broadest phase of requirements. Organizations that have not started should act now.

The AI Act's risk categories — where do your systems fall?

The AI Act uses a risk-based framework with four tiers. The classification determines which requirements apply to each AI system you use or provide.

Unacceptable risk (prohibited)

AI systems deemed to pose an unacceptable risk are banned outright. Examples include social scoring, real-time biometric identification in public spaces (with limited exceptions), and AI that exploits vulnerabilities to manipulate behavior. These prohibitions have been in effect since February 2025.

High risk

High-risk AI systems are the core focus of the regulation and carry the most requirements. This category includes AI used in recruitment and HR, credit scoring, education, access to public services, law enforcement and critical infrastructure. Organizations using or providing high-risk AI must meet requirements for risk management, data quality, documentation, human oversight, cybersecurity and transparency.

Limited risk

AI systems with limited risk primarily face transparency obligations. This includes chatbots, AI-generated content and systems that interact directly with users. Users must be informed when they are interacting with AI and when content is AI-generated.

Minimal risk

AI systems with minimal risk — such as spam filters and AI in games — are not subject to specific requirements. The majority of AI systems fall into this category.

The first step for any organization is to map all AI systems in use and classify them according to the risk framework. Without that mapping, you cannot assess which requirements apply.
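In practice, that mapping can start as a simple structured register that records each system's use case and risk tier. A minimal sketch in Python (the example systems and their classifications are illustrative assumptions, not an official taxonomy):

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    use_case: str
    tier: RiskTier

# Hypothetical inventory entries. Note that classification follows
# the use case, not the underlying technology.
inventory = [
    AISystem("cv-screener", "recruitment", RiskTier.HIGH),
    AISystem("support-bot", "customer chat", RiskTier.LIMITED),
    AISystem("spam-filter", "email filtering", RiskTier.MINIMAL),
]

# Systems that trigger the August 2026 high-risk requirements:
high_risk = [s.name for s in inventory if s.tier is RiskTier.HIGH]
print(high_risk)  # ['cv-screener']
```

Even a register this simple makes the next steps concrete: every system on the high-risk list needs the full set of requirements described below.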

Key requirements for high-risk AI systems

If your organization develops, deploys or uses AI systems classified as high-risk, comprehensive requirements take effect from August 2026. Here are the most important ones:

  • Risk management system: A documented system for identifying, assessing and mitigating risks throughout the AI system's lifecycle.
  • Data quality: Training, validation and testing data must meet quality standards for relevance, representativeness and accuracy.
  • Technical documentation: Detailed documentation of the system's purpose, functionality, limitations and performance.
  • Logging and traceability: Automatic logging of system activity to enable compliance monitoring.
  • Human oversight: Systems must be designed so that humans can monitor, intervene and, if necessary, override or shut down operations.
  • Cybersecurity: Robustness and resilience against manipulation, adversarial attacks and technical failures.
  • Fundamental Rights Impact Assessment (FRIA): An assessment of the AI system's impact on fundamental rights.

Note that the AI Act and GDPR overlap significantly. If your high-risk AI system processes personal data, you will need to conduct both a FRIA under the AI Act and a DPIA under GDPR Article 35. Coordinating the two assessments avoids duplicated work and keeps their conclusions consistent.

Five steps to prepare your business

Meeting the AI Act's requirements does not have to be overwhelming — but it does require structure and ownership. Here are five concrete steps to get started.

1. Inventory your AI systems

Start with a comprehensive inventory of all AI systems used or being developed in your organization. Include third-party solutions, embedded AI in existing tools and internal prototypes. Document their purpose, data usage and affected stakeholders. An AI readiness assessment can be a useful starting point for understanding the full scope of your AI usage.

2. Classify according to the risk framework

Assess each identified AI system against the regulation's risk categories. Focus on the system's use case and consequences — not the technology itself. Systems used for recruitment, credit scoring or employee management often fall into the high-risk category.

3. Conduct a gap analysis

Compare your current state against the requirements for your classified systems. What documentation already exists? What risk management and oversight processes are in place? Where are the gaps? Prioritize the largest gaps given the timeline.
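Conceptually, the gap analysis is a comparison between the controls a system's classification requires and the controls you already have. A toy sketch (the control names are invented for illustration):

```python
# Controls required for a hypothetical high-risk system
required = {
    "risk management", "technical documentation",
    "logging", "human oversight", "cybersecurity",
}

# Controls the organization already has in place (assumed)
existing = {"technical documentation", "logging"}

# The set difference is the remediation backlog
gaps = sorted(required - existing)
print(gaps)  # ['cybersecurity', 'human oversight', 'risk management']
```

A real gap analysis is more nuanced (each control has a maturity level, not just present/absent), but the output is the same: a prioritized backlog to work through before August 2026.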

4. Establish AI governance

Assign clear roles and responsibilities for AI governance. There should be defined ownership for compliance, risk management and quality. Embed governance into existing processes rather than creating a separate AI function. Many organizations integrate this into their broader AI change management efforts.

5. Document and implement

Create the technical documentation, risk management systems and oversight processes required. Test them in practice before August 2026. Plan for ongoing maintenance — compliance is not a one-time effort but a continuous process.

Penalties and enforcement

The AI Act carries significant penalties. Fines can reach up to 35 million euros or 7 percent of global annual turnover for the most serious violations. Lesser violations can still result in fines of up to 15 million euros or 3 percent of turnover.

Beyond the financial consequences, non-compliance creates reputational risk and can affect customer relationships and business opportunities. Companies that proactively invest in AI governance and transparency strengthen their position with customers, partners and regulators alike.

Summary

The EU AI Act places concrete requirements on businesses that use or provide AI systems. The risk-based model means requirements vary based on a system's impact, but every organization needs to at minimum map its AI usage and classify its systems.

With three months until the core provisions take effect, the time to act is now. Organizations that integrate compliance into their AI strategy — rather than treating it as a separate project — gain better governance, lower risk and a stronger foundation for scaling AI responsibly.

Need support with AI governance and preparing for the AI Act? Contact us to discuss how we can help you build a sustainable foundation for responsible AI in your organization.

Alaa Hijazi

AI advisor, Strative
