AI Compliance for SMEs

Even if the EU AI Act doesn't apply to you directly, it is already influencing client expectations and international best practice, making alignment a smart move for non-EU organisations.


The AI Act is the first comprehensive legal framework on AI, addressing the risks AI poses and positioning Europe to play a leading role globally.

The Act's obligations are being phased in through 2025–26, placing new responsibilities on organisations that develop or deploy AI systems in the EU.

Its main goal is to ensure that AI systems operating within the EU are safe, transparent, traceable, fair and environmentally responsible, and that they remain under human oversight to prevent harmful outcomes.

For UK businesses, it may be tempting to dismiss this as a continental concern. However, the Act is already shaping international norms, procurement language, and client expectations far beyond the EU’s borders. Understanding it, and adopting its underlying discipline, is now a signal of credibility, not just compliance.

A summary of the EU AI Act

The Act classifies AI systems by risk level:

  • Prohibited - AI systems that pose an unacceptable risk to safety, livelihoods or rights, such as social scoring or manipulative biometric surveillance. These are not allowed under any circumstances.

  • High risk - systems that can affect health, safety or fundamental rights, including credit scoring, recruitment, education access or product safety. These must meet strict requirements for risk management, data governance, transparency, and human oversight.

  • Limited risk - AI that interacts with people or generates content, such as chatbots or AI-generated content. Transparency obligations apply: users must be informed they're engaging with AI.

  • Minimal risk - all other uses with negligible risk, such as analytics, recommendation tools or spam filters. No additional legal obligations under the Act.

If you market or operate an AI system inside the EU, these rules will apply to you. If you work with EU clients or partners, they will likely expect your systems to be aligned.

ISO 42001 – the voluntary standard behind the regulation

To meet the Act’s requirements, many organisations are turning to ISO/IEC 42001, the new international standard for an AI Management System (AIMS). It provides a structured way to plan, operate, and continually improve AI governance.

An AIMS helps a company:

  • Define clear roles and accountability for AI use.

  • Maintain an inventory of models, data sources, and applications.

  • Manage risk and bias systematically.

  • Establish human oversight and intervention points.

  • Log, review, and improve performance over time.

For SMEs, this doesn't have to be heavyweight. A few concise policies, a model register, and regular reviews can form a credible, lightweight AIMS that scales as the business grows.
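To make this concrete, here is a minimal sketch of what a model register could look like in code. The ModelRecord class and its fields are illustrative assumptions, not a prescribed format, and a well-maintained spreadsheet would serve the same purpose.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    """One entry in a lightweight model register (illustrative fields only)."""
    name: str                # e.g. "support-chatbot"
    purpose: str             # the business problem the model addresses
    owner: str               # the person accountable for this model
    data_sources: list[str]  # where training and input data come from
    risk_level: str          # EU AI Act category: "minimal", "limited" or "high"
    last_reviewed: date      # date of the most recent governance review

# The register itself is just a list of records, reviewed on a regular cadence.
register = [
    ModelRecord(
        name="support-chatbot",
        purpose="Answer routine customer queries",
        owner="Head of Customer Support",
        data_sources=["public product documentation", "anonymised support tickets"],
        risk_level="limited",  # interacts with users, so disclosure obligations apply
        last_reviewed=date(2025, 1, 15),
    ),
]
```

Even this much structure answers the questions clients and auditors most often ask: which models are in use, who owns them, what data they rely on, and when they were last reviewed.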

What SMEs should start tracking now

Even if you're based in the UK, where the Act doesn't apply directly, adopting these practices will prepare you for clients, audits, and future regulation:

  • Model and data inventory - keep a record of every model in use, its purpose, data origin, and owner.

  • Data provenance - record where data comes from, how it’s transformed, moved, and used, ensuring quality and lawful handling throughout its lifecycle.

  • Human oversight - ensure someone is responsible for reviewing significant AI outputs or decisions.

  • Performance and bias testing - store validation results and note any limitations or fairness concerns.

  • Logging and traceability - maintain basic logs of model versions, parameters, and outcomes.

  • Risk register - capture identified risks (e.g. inaccuracy, bias, misuse) with mitigation actions.

  • Transparency and communication - make sure users know when they’re interacting with AI and can request human review.

These points form the backbone of trustworthy AI practice and will be invaluable if a client, partner, or regulator ever asks how your system works.
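As an illustration of the logging, traceability and oversight points above, the sketch below records each significant AI outcome alongside the model version and parameters that produced it. The record_decision function and its fields are hypothetical, shown only to indicate how little code a basic audit trail needs.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_decisions")

def record_decision(model_name: str, model_version: str, parameters: dict,
                    inputs: dict, output: str, needs_human_review: bool) -> None:
    """Append one traceability record: which model, version and parameters produced
    which outcome, and whether a person should review it before it is acted on."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,   # ties the outcome to a specific model version
        "parameters": parameters,   # settings used for this run
        "inputs": inputs,           # or a reference/hash if the data is sensitive
        "output": output,
        "needs_human_review": needs_human_review,
    }
    log.info(json.dumps(entry))

# Example: a significant output is flagged for human review before any action is taken.
record_decision(
    model_name="credit-risk-scorer",
    model_version="1.3.0",
    parameters={"decision_threshold": 0.7},
    inputs={"application_id": "A-1042"},
    output="refer",
    needs_human_review=True,
)
```

Writing records like this to a file or database gives you the model versions, parameters and outcomes described above, plus a clear human intervention point, without committing to any particular tooling.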

Why this matters for UK businesses

The UK's current stance is "pro-innovation", with sector regulators interpreting high-level guidance rather than enforcing a single AI law. Yet the government's own guidance points in the same direction:

"Outside of the UK context, supporting cross-border trade in AI will also require a well-developed ecosystem of AI assurance approaches, tools, systems, and technical standards which ensure international interoperability between differing regulatory regimes. UK firms need to demonstrate risk management and compliance in ways that are understood by trading partners and consumers in other jurisdictions." - UK Department for Science, Innovation and Technology, Introduction to AI Assurance

That means a business that already tracks its AI assets, manages risk, and documents oversight will fit naturally into both UK and EU regimes. Demonstrating this maturity reassures clients, investors, and auditors that your AI work is well-governed, not experimental.

Compliance as a confidence signal

For SMEs, compliance should not be seen as bureaucracy but as evidence of professionalism.

When you can point to an AI policy, an inventory, and a repeatable review process, you’re showing that your company treats AI with the same care as finance or security.

Clients increasingly want partners who combine technical capability with governance awareness. By embracing ISO 42001 principles now, you can meet that demand long before it becomes mandatory.

Conclusion

The EU AI Act may not yet apply directly in the UK, but its influence is unavoidable. Building even a lightweight AI Management System based on ISO 42001 will help SMEs stay ahead of regulation, strengthen client trust, and make AI projects easier to scale safely.

Responsible AI is quickly becoming the new standard of business competence. Those who start documenting, measuring, and improving today will find tomorrow’s compliance not a burden, but a natural extension of good engineering and good governance.
