Michael Sandel, Harvard political philosopher, mediates a Socratic discussion via Zoom with Chautauquans on the week’s theme of ethics in technology


All humans are susceptible to changes in mood, taste and ideology. Grandparents may listen to country one day and rock ‘n’ roll the next. Children always seem to want a new adventure. Unfortunately, these small shifts in the mind can decide people’s fates, such as with the life-altering sentencing decisions made in courtrooms every day.

Harvard political philosopher Michael Sandel said judges’ decisions tend to change depending on the time of day, specifically before and after they eat lunch. He also said that artificial intelligence is not only able to process more information than people, but AI is dispassionate, as well. 

“The algorithm doesn’t eat lunch, doesn’t get hungry, doesn’t get low blood sugar,” Sandel said.

Sandel, a Harvard political philosopher and bestselling author, held a conversation on “Digital Responsibility in the Tech World” at 10:45 a.m. EDT Friday, July 24, on the CHQ Assembly Video Platform as the final part of Week Four’s theme of “The Ethics of Tech: Scientific, Corporate and Personal Responsibility.” This Socratic conversation, meaning one driven by asking questions, included 18 members of the Chautauqua community in a Zoom call, with Sandel asking them their opinions on certain uses of artificial intelligence, such as COVID-19 tracing and police surveillance.

Sandel asked the participants their opinions on using cell phone data to track cases of COVID-19; half said they would support this, and the other half said no. The latter said they were reluctant to share their personal information with someone, or something, they had no confidence in; that the information could be used for other purposes; and that their data could be leaked.

“We all have seen the unfortunate track record of even the best-intentioned protectors of data, have been leaked,” said an opponent of using cell phone data. “I don’t have a great deal of hope that this will be any different, at some future day.”

One of the people who supported using cell phone data was a physician, and they said everything in medicine is a risk, and this specific case would be for the greater good. Another supporter said that people commonly give up certain freedoms for safety, and it is important to have as much information as possible to combat the pandemic.

“Though it’s not a terrorist attack, we’re trying to keep (the pandemic) at bay,” they said. “I think it’s really important to make sure that we have as much data as possible.”

Sandel then asked supporters of using cell phone data for contact tracing what they thought about using this data to enforce stay-at-home orders for people who tested positive for the virus.

One said they would support that use under the current climate if a person was exposed, but in the long term it would be problematic. 

An opponent said that using this technology to enforce COVID-19 quarantines could lead to this cell phone data being applied to crimes, such as missing a child-support payment. 

Sandel then asked whether, assuming the crime rate is reduced in places that use predictive policing measures, the participants would support using artificial intelligence to predict where a crime will likely occur. Eight said yes and 10 said no.

An opponent said that these AI systems are frequently biased because they are trained on historical information.

“We’ve seen a bias against minority communities,” they said. “And so using that kind of algorithm (to predict crime) really concerns me tremendously.”

A supporter said that there are many practical issues with AI, but they support the idea in theory that the technology would find more efficient methods of policing amidst limited resources. Another said that society needs to look at police departments that are already using predictive methods, and how the community responds, in order to get the best idea of how to effectively use AI.

Sandel then asked the supporters of using AI for predictive policing strategies if they would be in favor of using AI to predict whether a prisoner would commit another crime and whether they should be released on parole.

A few said that they would support using AI in determining whether an inmate receives parole, if it was used in conjunction with other evidence.

“I think the common theme in all of the scenarios we’ve discussed has been essentially a conservative approach to implementing any of these technologies or applications,” a participant said. “I believe that governance is key in all of them, but the tricky part about governance is who’s governing what, who owns the data, who is designing the algorithms that help the public, or how accessible is that information.”

A conversation participant said that the information that the AI would have to take into account would be too complex, including how well trained the officers are, what kinds of programs the jail offers, and if the inmate has a family.

“I don’t know that it would ever be completely neutral, just because I think that’s hard to do,” they said. “I think that it all involves some sort of collaboration between the data and the people who are going to deliberate this.”

The lecture then shifted to a Q-and-A session with Chautauqua Institution President Michael E. Hill, who asked Sandel if it is possible, or even necessary, to teach ethical inquiry now, compared to other moments in history.

Sandel said it is necessary and possible, and he has been striving to teach ethical inquiry to his students for a long time. He starts with where people are coming from, and then invites those who have similar or contrasting opinions into the conversation.

“We sometimes, in engaging in this kind of reflection, change our minds — either about the principles we thought we believed, or about our judgment in a particular case,” Sandel said. “Sometimes we don’t change our minds, but still learn something, learn a deeper appreciation of those with whom we disagree.”

Hill then asked what Sandel explores in his book that is coming out in the fall, The Tyranny of Merit: What’s Become of the Common Good?

Sandel said that the United States’ rise in wealth inequality and political divisiveness is partly due to meritocracy, the idea that the wealthy and the poor earned their places in society. 

“That’s the tyranny of merit. That’s what’s driven us apart. That’s what’s brought such polarized politics. So I’m trying to diagnose how we got here,” Sandel said, “then ask how we could emerge from it, however we could rein in a meritocratic hubris, and find our way to a politics of the common good.”

Tags: Harvard, Michael Sandel, morning lecture recap, The Ethics of Tech: Scientific, Corporate and Personal Responsibility

The author: Nick Danlag

This is Nick Danlag’s second season at the Daily reporting the morning lecture recap. He worked remotely last year but loved waking up each day in Las Vegas to learn more about Chautauqua through his reporting. From Mount Laurel, New Jersey, Nick earned a creative writing degree from Eckerd College in St. Petersburg, Florida. As editor-in-chief of his student newspaper, The Current, he loved helping the staff develop their voices.