AI & ethics

The EU AI Act and mental health: what does 'high-risk' mean for ordinary people?

By Thomas Silkjær · 4 min read

The EU AI Act classifies certain AI systems in health as "high-risk," triggering requirements for transparency, documentation, and human oversight. For you as a user, this means the right to know that AI is involved – and in certain cases the right to an explanation of the decision.

Somewhere in Brussels, someone has written a document. It spans hundreds of pages. Among other things, it covers AI systems in health and mental health – and some of them are classified as "high-risk." It sounds technical. It sounds remote. But it may affect apps and tools you already use.

What "high-risk" actually means

The EU AI Act is Europe's attempt to regulate artificial intelligence. Article 6 sets out the classification rules, and the specific list of high-risk areas is in Annex III. Health is among them – particularly AI systems that form part of medical devices or are used for decisions with significant impact on people's health.

"High-risk" doesn't mean banned. It means additional requirements:

  • Transparency. For certain types of AI systems, Article 50 requires transparency – you must know that you're interacting with an AI.
  • Documentation. High-risk systems must be able to account for what they do and how they reach their conclusions (Article 11).
  • Human oversight. Article 14 requires that high-risk systems have human oversight – not as a formality, but as a genuine check.
  • Risk management. The developer must actively identify and reduce the risk of harm throughout the system's life cycle (Article 9).

This isn't bureaucracy for its own sake. It's an acknowledgement that when AI touches how people understand themselves, the stakes are higher.

Why mental health is special

Think about the difference: an AI that recommends a film can get it wrong. You waste two hours. An AI that tells you your family dynamic is dysfunctional can change how you see the people closest to you. It can change how you talk to your children. Or whether you do at all.

The potential for harm is real. Not because AI is malicious – but because people take AI-generated descriptions seriously. Especially when they're about something you're already uncertain about.

A wrong film recommendation costs two hours of your Sunday. A wrong description of your family's dynamic can cost trust.

That's why the EU has chosen to impose additional requirements on AI systems that can significantly affect people's health and safety.

What it means for you as a user

The EU AI Act gives you rights – depending on the type of AI system you interact with:

  • In certain situations, you have the right to know that AI is involved – Article 50 covers, among other things, direct interaction with AI systems and emotion recognition.
  • You may have the right to an explanation. Article 86 provides the right to an explanation when a high-risk AI decision affects your health, safety, or fundamental rights.
  • High-risk systems must have human oversight. The system must be designed so a human can intervene if something is wrong (Article 14).

These rights aren't universal for every AI app – they depend on how the system is classified. But the principle is clear: the greater the impact, the stricter the requirements.

What "human oversight" looks like in practice

It's easy to say "human oversight." But what does it mean in practice?

It means the AI doesn't work alone. That the prompts generating output are carefully crafted. That the patterns the system describes are based on established psychological concepts. And that there are people who continuously review what the system produces.

SAMRUM is not therapy and doesn't fall into the high-risk category. But we've chosen to build the system with the same principles – because it's about families. In practice, that means the AI works within locked frameworks. It only describes axes based on established psychological concepts. It doesn't invent new categories. It uses language that has been carefully crafted and predetermined. The AI model only receives aggregated scores in the prompt – never the raw responses. And we continuously monitor the quality of what the system produces. (Read more about our methodology.)

Human oversight isn't one person looking over the shoulder. It's a system built so the AI can't run unchecked.

The EU AI Act isn't perfect. No regulation is. But it poses a question worth carrying with you: Do I trust that this system was built with care? And if I don't know – do I have the right to ask?

The answer is yes. And for high-risk systems, it's now law.
