Dropbox: What can AI teach us about empathy? By Tomi Akitunde

Lord is the founder of mpathic, a back-end integration for enterprise platforms that acts sort of like “Grammarly for empathy.” It offers real-time corrections to your messages before you hit send, whether that’s via Facebook Messenger, email, or Slack. (They’re working on text messages and have speech support on their roadmap.)

Using AI, mpathic highlights the potentially offending text and offers suggestions and behavioral advice in a pop-up window. “U ready to present to execs tomorrow?” became “How do you feel about the presentation tomorrow?” in one demo.
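To make that flow concrete, here’s a minimal sketch of what a flag-and-rewrite pass over an outgoing message could look like. Everything in it is an illustrative assumption: the toy rule table, the function names, and the suggestions are stand-ins, not mpathic’s actual model or API.

```python
# Hypothetical sketch of an empathy "lint" pass over an outgoing message.
# The rules, names, and rewrites below are illustrative assumptions only;
# they do not reflect mpathic's actual model or API.

import re
from dataclasses import dataclass


@dataclass
class Suggestion:
    span: str     # the flagged text
    rewrite: str  # a higher-empathy alternative
    advice: str   # short behavioral note shown alongside the rewrite


# Toy rule table standing in for a trained model: pattern -> (rewrite, advice).
RULES = [
    (re.compile(r"\bu ready\b", re.IGNORECASE),
     "How do you feel about",
     "Open-ended questions invite honest answers instead of a forced 'yes'."),
    (re.compile(r"\basap\b", re.IGNORECASE),
     "when you get a chance",
     "Softening urgency reduces pressure on the recipient."),
]


def check_message(text: str) -> list[Suggestion]:
    """Return empathy suggestions for any flagged spans in `text`."""
    suggestions = []
    for pattern, rewrite, advice in RULES:
        match = pattern.search(text)
        if match:
            suggestions.append(Suggestion(match.group(0), rewrite, advice))
    return suggestions


if __name__ == "__main__":
    for s in check_message("U ready to present to execs tomorrow?"):
        print(f"Flagged: {s.span!r} -> try: {s.rewrite!r} ({s.advice})")
```

A real system would replace the rule table with a learned model, but the interaction pattern is the same: flag a span, propose a rewrite, and explain why.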

Improving the way we communicate with each other at work could also help with workplace dissatisfaction. In a 2020 Society for Human Resource Management survey, 84% of American workers said “poorly trained people managers create a lot of unnecessary work and stress.” And the number one skill those surveyed said people managers needed to improve was “communicating effectively,” with 41% of participants considering it “the most important [skill] managers should develop.”

Showing empathy is a key to positive communication. Yet only 25% of employees “believe empathy in their organizations is sufficient,” according to a 2021 survey on the state of workplace empathy by business management consultancy Businessolver. (They define empathy as “the ability to understand and experience the feelings of another.”)

According to Lord, AI can sometimes be better than humans at noticing and correcting these low-empathy moments before they snowball and contribute to low employee morale (or go viral).

mpathic’s suggestions are based on Lord’s 15 years of research. Lord and her team curate and source data for their model through Empathy Rocks (another arm of her company that uses games to train therapists in empathy) and by working with diverse organizations and communities around the United States. It’s an effort informed by an ongoing call for ethical AI that ameliorates bias, whether that bias appears in the data itself or in humans’ interpretation of the data. By broadening the model’s training data to include diverse voices, Lord hopes to reduce bias in their models so they “can assess if we’re starting to overcorrect a particular group.”
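That “overcorrect a particular group” check can be read as a standard bias audit: compare how often the model flags text from each group and investigate large gaps. Below is a minimal sketch under that assumption; the group labels, sample data, and disparity threshold are hypothetical, not mpathic’s actual evaluation method.

```python
# Hypothetical bias-audit sketch: compare per-group flag rates to spot
# overcorrection. Group labels, sample data, and the threshold are
# illustrative assumptions, not mpathic's actual evaluation.

from collections import defaultdict


def flag_rates(messages):
    """messages: iterable of (group_label, was_flagged) pairs.
    Returns the fraction of flagged messages per group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, was_flagged in messages:
        total[group] += 1
        flagged[group] += int(was_flagged)
    return {g: flagged[g] / total[g] for g in total}


def overcorrected_groups(rates, baseline, threshold=1.25):
    """Return groups whose flag rate exceeds the overall baseline by threshold-x."""
    return [g for g, r in rates.items() if r > threshold * baseline]


if __name__ == "__main__":
    sample = [("A", True), ("A", False), ("A", False),
              ("B", True), ("B", True), ("B", True)]
    rates = flag_rates(sample)
    baseline = sum(f for _, f in sample) / len(sample)
    print(rates, overcorrected_groups(rates, baseline))  # group B stands out
```

The design point is simple: if one group’s messages get “corrected” far more often than the baseline, that gap is a signal to re-examine the training data for that group.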
