Algorithms are everywhere — from finances, to the workplace, to policing. And where there is an algorithm, David Danks said, there is likely to be unintended cultural bias.
“In my experience, almost nobody who’s building or using technology wants to be unethical. Nobody thinks, ‘Yeah, I’m gonna build something that’s racist.’ What people think is, ‘I’m gonna build this thing that’s going to help people,’” said Danks, L.L. Thurstone Professor of Philosophy and Psychology and head of the Department of Philosophy at Carnegie Mellon University. “The biggest ethical challenge we have is that many people who develop and deploy these systems have very narrow understandings of what it is to help other people.”
At 3:30 p.m. EDT Wednesday, July 22, on the CHQ Assembly Video Platform, Danks will present Week Four’s African American Heritage House lecture, “Fixing the Cultural Biases in Algorithms.” In this presentation, Danks will explain what algorithmic bias is and where it comes from.
Danks pointed to predictive policing as one example. In these systems, AI compiles recorded data to identify crime hot spots and potential offenders so that police resources can be allocated to prevent crime before it happens. However, when the historical data are based on racist and biased practices, law enforcement disproportionately polices communities populated by people of color.
Many have compared predictive policing programs to the practice of “broken windows” policing. Under this practice, often used in the 1980s, law enforcement heavily policed neighborhoods that appeared to have more crime, cracking down on lower-level offenses to deter more serious crime. As a result, people of color were excessively targeted, and the practice has since been seen as a failure.
“Algorithms in many ways reflect the data that they are provided. So, if the data are from historically systematically biased communities and practices, the algorithm will learn to reflect that,” Danks said.
Danks began noticing these issues about a decade ago. While people flocked to the development of cutting-edge AI, Danks started to notice ethical dilemmas.
“There were a lot of people who wanted to talk about the technology. They wanted to talk about the ones and zeros,” Danks said. “It seemed to me that we were losing sight of what really should be at the heart of these discussions — which is people. How are people helped? How are people harmed? How do we ensure that technology serves us rather than the other way around?”
At Carnegie Mellon, Danks’ work falls at the intersection of philosophy, cognitive science, and machine learning. His research is always conducted with the people affected by technology in mind.
“The big-picture goal is (to have) people produce and be able to use technology to advance the values that are important to them. By that, I mean all people — not just the privileged technological ‘haves,’” Danks said. “The more immediate goal is to have people who produce technology be aware and deliberate about the choices they make in terms of the ethical and societal impacts that they have.”
Before his 3:30 p.m. EDT presentation, Danks will participate in a panel on ethics and technology with two other Carnegie Mellon University-affiliated academics, Illah Nourbakhsh and Jennifer Keating, at 10:45 a.m. EDT the same day on CHQ Assembly.