AI & ethics

Data ethics dilemmas in welfare: when efficiency affects the relationship

By Thomas Silkjær · 5 min read

When AI sorts people in the welfare system, the question isn't whether it's efficient – but whether it replaces or supplements the human gaze. An algorithm sees fields in a form; a health visitor sees the unopened letters on the kitchen table.

A mother is sitting in the waiting room with a cold cup of coffee. Her case has been assessed by an algorithm. The system says she's "low risk." She gets fewer visits from the health visitor. Three instead of five. The algorithm has crunched her answers on a form: age, income, marital status, number of children.

But the algorithm doesn't know that her partner just moved out. It doesn't know she hasn't slept in three weeks. It doesn't know she's sitting in the waiting room unable to remember the last time she talked to another adult.

What we're talking about when we say "AI in welfare"

AI in welfare isn't one thing. It ranges from triage systems that prioritise cases, through decision support for caseworkers, to algorithms that assess risk. This article is about the systems that affect who gets help – and how much. Not chatbots or record searches, but the algorithms that sort people.

The promise of efficiency

AI can process more cases faster. That's the promise. And it's real. In a world with limited resources – fewer health visitors, longer waiting lists, tighter budgets – it's tempting to let an algorithm sort. Who needs help the most? Who can manage on their own?

The problem isn't efficiency itself. The problem is when "faster" gets confused with "better." When the algorithm's assessment replaces the human encounter instead of supplementing it.

In its work on AI in health, the Danish Data Ethics Council has pointed to the risk: when AI systems are used in welfare without sufficient transparency and explainability, the citizen loses the ability to understand – and challenge – the assessment that affects their life.

What the algorithm doesn't catch

A health visitor sitting in the living room senses something a form can't measure. She sees the unopened letters on the kitchen table. She hears the hesitation in the voice. She notices that the child looks at its mother before daring to take a biscuit.

The algorithm sees fields in a form. It sees marital status, not loneliness. Postcode, not pressure. Number of visits, not the quality of them.
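
To make that concrete, here is a minimal sketch of what such a sorting step can look like. Everything in it – the field names, the weights, the cut-offs – is invented for illustration and mirrors no specific municipal system:

```python
# A hypothetical sketch of what a triage algorithm "sees".
# Field names and weights are invented for illustration only.

form = {
    "age": 29,
    "income": 24_000,
    "marital_status": "married",   # the form still says "married"
    "number_of_children": 2,
}

# What no field captures:
# - the partner who just moved out
# - three weeks without sleep
# - the unopened letters on the kitchen table

def risk_score(form: dict) -> float:
    """Toy linear score: each answer nudges the score up or down."""
    score = 0.0
    score += 1.0 if form["age"] < 21 else 0.0
    score += 1.0 if form["income"] < 20_000 else 0.0
    score += 0.5 if form["marital_status"] == "single" else 0.0
    score += 0.5 * max(0, form["number_of_children"] - 2)
    return score

print(risk_score(form))  # 0.0 -> "low risk" -> three visits instead of five
```

The mother from the waiting room scores zero: every field she filled in looks fine on paper. That is the whole point – the inputs are the form, not the life.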

That doesn't mean data is useless – quite the opposite. Data can identify trends, prioritise resources, and catch patterns that would otherwise remain invisible. The difference lies in how data is used. A system that sorts people into boxes based on records is something different from a system that gives families language for what they're experiencing themselves. The former replaces the gaze. The latter sharpens it.

Who is responsible when the algorithm gets it wrong?

Here we hit one of the hardest questions. When a health visitor judges that a family is doing fine, and it turns out she was wrong, the responsibility is clear. It's her expertise, her assessment, her responsibility.

But when an algorithm makes the same judgement – who carries the responsibility? The developer who built the model? The municipality that purchased the system? The manager who decided to use it? Or the employee who trusted it?

The question of accountability is unresolved. And that should concern us. Not because AI is dangerous – but because unclear responsibility is always dangerous when it involves vulnerable people.

What values are coded in?

Every AI system is built on choices. Which data carries the most weight? What counts as "risk"? What counts as "doing fine"?

Those choices aren't neutral. They reflect values. And the question is which ones: Efficiency? Equality? Savings? Prevention?
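
One way to see how values get coded in: in a model like the toy sketch above, the number that separates "low risk" from "high risk" is a line someone drew. Move it, and different families get help. A hypothetical illustration, with invented scores:

```python
# The cut-off that turns a score into a decision is a value choice, not a fact.

scores = [0.0, 0.4, 0.6, 1.1, 2.3]  # invented risk scores for five families

THRESHOLD_SAVINGS = 1.0     # optimises the budget: extra visits for 2 of 5
THRESHOLD_PREVENTION = 0.5  # optimises early help: extra visits for 3 of 5

flagged_savings = [s for s in scores if s >= THRESHOLD_SAVINGS]
flagged_prevention = [s for s in scores if s >= THRESHOLD_PREVENTION]

print(len(flagged_savings), len(flagged_prevention))  # 2 3
```

Neither threshold is more "correct" than the other. They answer different questions about what the system is for – and that answer is rarely visible to the family it affects.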

The Danish Data Ethics Council points out that values in AI systems are often invisible to those affected by them. A mother who gets fewer visits doesn't know which model decided it. She just knows the health visitor doesn't come anymore.

Transparency isn't just about publishing algorithms. It's about making the underlying choices visible and understandable for the people they affect.

From system to kitchen table

It's easy to discuss data ethics as something abstract. Something about systems and legislation and regulations. But in the end, it lands at a kitchen table. With a mother. A father. A child.

The question isn't whether AI should be used in welfare. It already is. The question is how it feels to be assessed by a machine. And whether that assessment supplements or replaces the human gaze.

AI shouldn't stand alone when decisions significantly affect people's lives. It can help sort, prioritise, identify. But the crucial encounter – the one where someone actually sees you – can't be automated. That takes a person who has time, and a system that gives them that time.

Sources