What does it mean for AI to be human-centered?
Thoughts from Danielle Schlosser, mpathic’s co-founder and Chief Business Officer.
We may think being human-centered means giving people what they want. And that’s typically how AI systems are trained: on the responses people prefer.
Here’s the problem:
Humans are fallible. Given the choice between a long-term and a short-term gain, we tend to take the short-term one, optimizing for reward, validation and temporary relief.
We prefer:
- Agreement over challenge
- Validation over truth
- Comfort over growth
And now we’re building AI systems that learn from those preferences.
That’s how you get problematic model behaviors, such as sycophantic AI.
Highly engaging. Feels good. Keeps you coming back.
But over time?
It can reinforce false beliefs, degrade critical thinking and create subtle forms of dependency.
Alignment to preference is not the same as alignment to our well-being.
If we’re serious about human-centered AI, we have to go further:
- Beyond single responses to understanding patterns over time
- Beyond “good/bad” to evaluating psychological impact
- Beyond engagement to considering human outcomes
The goal isn’t AI that people like — it’s AI that leaves people better off.
At mpathic, this is the philosophy that drives our mission to keep humans safe in the AI era through science-backed models, clinician expertise and high-quality datasets.
Being human-centered means putting people first in everything we do, ensuring expert judgment guides our work, and never losing sight of the fact that technology exists only to improve the lives of those who use it.