As a toddler in Kuwait, Rana el Kaliouby was surrounded by technology. “My dad had one of the first video recorders,” el Kaliouby said. “I would stand on a blue plastic chair, at 3 or 4 years old, and ramble to the camera.”
When her dad gave el Kaliouby and her sisters an early video game console, she loved it — “but it wasn’t about the game,” she said. It was about bonding with those she played with.
When she grew up, el Kaliouby took this love of connecting through technology and ran with it. She is the author of a recent memoir, Girl Decoded: A Scientist’s Quest to Reclaim Our Humanity by Bringing Emotional Intelligence to Technology, and the CEO of Affectiva, a pioneer in the field of Emotion Artificial Intelligence. She has long sensed the enormous potential of programs that can read a user’s expressions.
“As it turns out,” she said, “only 10 percent of human communication is verbal. Ninety percent is nonverbal — expressions, gestures. I’m in the business of developing the 90 percent.”
El Kaliouby will give a virtual lecture on the potential of Emotion AI and her work at 10:45 a.m. EDT on Tuesday, July 21, on the CHQ Assembly Video Platform. Her lecture, titled “Humanizing Technology Within AI,” will cover her work to enhance the emotional intelligence of the devices we carry in our pockets every day. Following the lecture will be a live Q-and-A with Institution Vice President and Emily and Richard Smucker Chair for Education Matt Ewalt. Viewers can submit questions through questions.chq.org or on Twitter with #CHQ2020.
Artificial intelligence is becoming more common — in our cars, our diagnostic healthcare, our hiring practices — but el Kaliouby argues that we have built AI, and technology at large, without regard to making it a more human experience. On a small scale, we may find it difficult to read others’ emotions over texts. On a larger scale, as many schools transition to online learning, translating a positive teaching experience to Zoom or Skype can be hit-or-miss.
“In a classroom, a great teacher will re-engage you,” she said. “With Emotion AI, you can identify a level of confusion or frustration.”
There is also the potential to democratize education globally. “Great teachers aren’t always available,” she said. “With programs like Khan Academy or Coursera, (there is an opportunity for) personalization.”
Both problems, communication and learning, are critical to el Kaliouby.
“If it wasn’t (for) COVID, I’d be there (at Chautauqua),” she said. “I’d have a sense of the audience and be able to adapt in real-time. ‘Oh, are they confused?’ ‘Oh, did they like my joke?’… But in a Zoom webinar, I have no idea how people are engaging.”
To el Kaliouby, this means a future opportunity to use Emotion AI to graph audience emotions and reactions.
With opportunities, however, come risks. El Kaliouby outlined two principal avenues for abuse: unethical development and unethical deployment. The primary risk of unethical development is the accidental introduction of data and algorithmic bias; an AI trained only on the faces of white men, for example, may fail to accurately read the expressions of women or people of color. In terms of unethical deployment, el Kaliouby said, “privacy is paramount.” Governments and companies sometimes seek out Affectiva’s technology for shady or malevolent purposes.
“There are certain industries,” she said, “which we won’t entertain — surveillance, lie detection.”
It is hard for a company to turn down those profits, sometimes in the millions, but el Kaliouby and Affectiva are standing firm in this conviction. She noted that it is important to set those boundaries around how technology is used, and to present an alternative to the perceived inhumanity of the tech industry.
“As we continue to be catapulted into this virtual universe, how do you make sure you’re leading with emotional intelligence and empathy?” she asked.
Since tech and AI are transforming so many parts of our lives, she said, tech leaders need moral compasses, and need to embody clear social and ethical leadership at their companies.
“It’s not just about creating cool new tech,” she said. “It’s about fundamentally changing how we communicate with each other (in a responsible way).”
Developers and users of AI alike must own up to the responsibility that comes with its possibilities.
“We need to humanize technology,” she said, “before it dehumanizes us.”
This program is made possible by “The Lincoln Ethics Series” funded by the David and Joan Lincoln Family Fund for Applied Ethics and The Kevin and Joan Keogh Family Fund.