AI ethics in practice: privacy, consent, and 'who owns the family narrative?'
When AI describes your family, three ethical questions arise: what should the AI know, who gives consent, and who owns the narrative about the relationship? The answers involve data minimisation, real control over sharing – and the right to disagree with an algorithm's description.
You take a family test. The AI generates a report about the relationship between you and your daughter. It describes patterns you recognise. It puts words to something you've felt but never articulated. You nod.
But then you think: Can my daughter see this? And what if she disagrees?
What should an AI actually know?
Data minimisation is a principle that sounds technical but is deeply human: an AI should only know what it needs to. Nothing more.
When you answer questions about how you experience your family, you reveal something. That's vulnerable. And it creates an obligation for whoever receives those answers. The question isn't just "is my data safe?" – it's "what is it used for, and what will it never be used for?"
In SAMRUM, the AI model only receives aggregated scores in the prompt – never the raw responses. It knows you score high on conflict tolerance – but it doesn't know what you answered on the individual questions. That difference is intentional. Because the more an AI knows, the more power it has over the narrative.
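What that boundary could look like in code, as a minimal sketch. The names here (RawAnswer, aggregateScores, buildPrompt) are illustrative assumptions, not SAMRUM's actual implementation:

```typescript
// Hypothetical types and names, not SAMRUM's real schema.
type RawAnswer = { questionId: string; value: number };  // stays on the server
type AggregatedScores = { conflictTolerance: number };   // the only thing the model sees

// Aggregation happens before any prompt exists. Raw answers never
// cross this boundary.
function aggregateScores(answers: RawAnswer[]): AggregatedScores {
  const items = answers.filter(a => a.questionId.startsWith("conflict-"));
  const mean = items.reduce((sum, a) => sum + a.value, 0) / items.length;
  return { conflictTolerance: mean };
}

// The prompt is built from scores alone: there is no parameter through
// which an individual answer could leak into the model.
function buildPrompt(scores: AggregatedScores): string {
  return `Person A scores ${scores.conflictTolerance.toFixed(1)} of 5 on conflict tolerance. ` +
    `Describe the relational pattern this suggests, without judgement.`;
}
```

The design choice matters more than the code: if the prompt builder only accepts aggregated scores, data minimisation is enforced by the type system rather than by policy.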
Consent – especially for teenagers
You're 14. Your mum says you're going to take a family test. You want to, because it sounds interesting. But do you really understand what it means for an AI to analyse your personality traits and describe your relationship with your mum?
Consent from teenagers is complicated. They can say yes. But informed consent requires understanding the consequences – and that's hard, even for adults.
That doesn't mean teenagers shouldn't participate. But it means there's a particular responsibility to explain: What happens to your answers? Who can see what? And what can you say no to?
A good system gives the teenager control. Not just participation, but real influence over what gets shared. In practice, a consent model for families should clarify: Who can invite whom to the test? What happens if one family member doesn't want to participate? And can you withdraw your participation?
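One way to make those three questions concrete is to treat consent as an explicit, revocable state rather than a one-time checkbox. A sketch with hypothetical names; any real consent flow will differ in detail:

```typescript
// Hypothetical consent states: an illustration, not a product spec.
type ConsentState =
  | "invited"    // someone asked you to take the test; nothing is shared yet
  | "accepted"   // you took the test; aggregated results may be shared
  | "declined"   // you said no; no report involving you is generated
  | "withdrawn"; // you changed your mind; previously shared reports are revoked

interface Participant {
  id: string;
  age: number;
  state: ConsentState;
}

// A relationship report exists only while BOTH parties consent.
// Either side withdrawing removes it: consent is ongoing, not a one-off.
function canGenerateReport(a: Participant, b: Participant): boolean {
  return a.state === "accepted" && b.state === "accepted";
}

// Withdrawal is always available, to teenagers as much as to adults.
function withdraw(p: Participant): Participant {
  return { ...p, state: "withdrawn" };
}
```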
Who owns the narrative?
This is where it gets truly difficult. A report describes a relationship – and a relationship always has at least two sides. If the AI says your dynamic is characterised by one person being more conflict-avoidant than the other, who owns that description?
Both of you. And that's the whole point.
A narrative about a relationship should never belong to just one party. If your daughter doesn't recognise herself in the description, it's not because she's wrong. It's because relationships look different depending on where you stand.
That's why reports in SAMRUM are symmetrical. Friction is always described as a loop – A affects B, B affects A – never as an accusation. And both parties can see the report about their relationship. Not the other person's individual answers.
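That symmetry can be enforced structurally rather than editorially. A hypothetical sketch of a loop-shaped friction description as a data type; the field names are assumptions, not SAMRUM's schema:

```typescript
// Hypothetical shape for a friction description. By construction it has
// two directions, and there is no field for "who is at fault".
interface FrictionLoop {
  theme: string; // e.g. "withdrawal and pursuit"
  aToB: string;  // how A's pattern lands on B
  bToA: string;  // how B's pattern lands on A
}

const loop: FrictionLoop = {
  theme: "withdrawal and pursuit",
  aToB: "When A goes quiet, B reads it as distance and pushes for contact.",
  bToA: "When B pushes, A reads it as pressure and goes quieter still.",
};
```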
How visibility works in SAMRUM
- Raw answers are always private. No other family members can see what you answered. Not even parents.
- Sharing requires participation. A relationship report requires both parties to have taken the test.
- Teenagers' answers are private. A teenager's individual answers remain their own – and they decide who can see their profile.
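Rules like these are easiest to trust when they live in one place rather than being scattered through an interface. A hypothetical sketch of the three rules above as a single access check; the types and names are assumptions, not SAMRUM's code:

```typescript
// Hypothetical resource model: the visibility rules as one access check.
type Resource =
  | { kind: "rawAnswers"; ownerId: string }
  | { kind: "relationshipReport"; partyIds: [string, string] }
  | { kind: "profile"; ownerId: string; sharedWith: string[] };

function canView(
  viewerId: string,
  resource: Resource,
  hasTakenTest: (id: string) => boolean,
): boolean {
  switch (resource.kind) {
    case "rawAnswers":
      // Rule 1: raw answers are visible to their owner only. Not even parents.
      return viewerId === resource.ownerId;
    case "relationshipReport": {
      // Rule 2: the report exists only if both parties took the test,
      // and only those two parties can open it.
      const [a, b] = resource.partyIds;
      return hasTakenTest(a) && hasTakenTest(b) && (viewerId === a || viewerId === b);
    }
    case "profile":
      // Rule 3: a teenager's profile is visible only to people they chose.
      return viewerId === resource.ownerId || resource.sharedWith.includes(viewerId);
  }
}
```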
Privacy between family members
There's an important difference between transparency and surveillance. Parents should be able to see how the relationship works. They should not be able to see what their teenager answered on question 17.
Raw answers are private. Always. Even in a family. Especially in a family.
It's tempting to think: "We share everything." But you don't. And you shouldn't. Privacy is not a sign of distance – it's a sign of respect. A teenager who knows her answers are her own will answer honestly. A teenager who suspects mum is reading along will answer strategically. And the strategic version is worthless.
The right to disagree
The final ethical dimension might be the most important: the right to say "that's not right."
AI is not infallible. It describes probabilities, not truths. And when it describes a relationship, both parties should have the right to say: "That's not how I experience it."
That right shouldn't just exist in principle. It should be visible. Easy to access. And taken seriously.
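One hypothetical way to make that right visible in software: the disagreement becomes part of the report itself, stored and shown next to the description it contests. A sketch, not a description of any actual product:

```typescript
// Hypothetical: a disagreement attached to the report itself, so anyone
// who sees the description also sees the objection, in the person's own words.
interface Disagreement {
  authorId: string;
  passage: string; // which part of the description is disputed
  comment: string; // shown verbatim, never paraphrased by the model
}

interface RelationshipReport {
  description: string;
  disagreements: Disagreement[]; // rendered alongside the description, not buried
}

function disagree(report: RelationshipReport, d: Disagreement): RelationshipReport {
  return { ...report, disagreements: [...report.disagreements, d] };
}
```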
Because ethics in AI isn't just about data security. It's about who has power over the story. And in a family, that power should never rest with an algorithm – but with the people it's about.