
Harvard’s Jonathan Zittrain to speak on AI technology, governance

Jonathan Zittrain

Liz DeLillo
Staff Writer

Jonathan Zittrain has an eerily fitting metaphor for AI: asbestos.

“Only semi-jokingly, I think of AI like asbestos: extremely useful, embedded everywhere — if invisibly — and potentially very dangerous in ways that will prove difficult to remediate after a frenzy of building it out — and in,” Zittrain said. 

Continuing Week Nine’s Chautauqua Lecture Series theme “Past Informs Present: How to Harness History,” Zittrain will speak on AI technology, governance and ethics at 10:45 a.m. today in the Amphitheater. 

Zittrain is co-founder and director of Harvard’s Berkman Klein Center for Internet & Society, Vice Dean for Library and Information Resources at Harvard Law School, George Bemis Professor of International Law and a professor of computer science as well as public policy. He is on the board of directors of the Electronic Frontier Foundation and a member of the American Academy of Arts and Sciences.

Today, Zittrain plans to “highlight just how unusual the predominant AI technology — ‘machine learning’ — is, and what makes the large language models like ChatGPT that are based on it so eccentric.” 

“That, in turn, highlights some of the choices we face both individually and as a society in figuring out if and how to embrace it,” he said, “and how it might be designed differently than what we’ve seen so far.”

Beyond the operational decisions surrounding large language models (LLMs), the technology poses particular ethical questions, as well.

“Among many implications, we truly need to figure out when flexible, general AI systems should be working for us — required and designed to be loyal to us as individuals, the way lawyers must zealously represent their clients, and doctors look after their patients — and when they should be looking out for society, such as when someone asks for help in, say, building a bomb,” Zittrain said. “It’s a profound question when AI is encapsulating technology’s broader promise to grant much more power to humanity. How should that power be distributed?”

This question of when people should and should not use AI is far from settled.

“I see three communities: accelerationists, who tend to view AI as a revolutionary force for human progress and want to hasten its development; safetyists, who warn of its potentially catastrophic risks; and skeptics, who see AI as an incremental, over-hyped technology that yet carries dangerous, if more prosaic, near-term, practical consequences,” Zittrain said.

Ethical disagreements often divide people, but Zittrain suggests these divergences should not preclude collaborative solutions.

“Each of these perspectives has something important to offer to understand AI’s rapid evolution,” Zittrain said, “and their isolation from one another hinders meaningful public and expert dialogue, collaborative progress on identifying and resolving issues, and the pursuit of meaningful policy development.”

Beyond more effectively tackling such concerns with AI, shared discourse can help pave a new path moving forward.

“At stake is a safe and equitable future where progress and empowerment are available to all, especially those sidelined by previous transformational technologies,” Zittrain said. “I’m hoping to start reconciling the various points of view.”

Though there is much going on in the ever-evolving sphere of AI, today’s morning lecture offers Chautauquans a fuller picture.

“I hope they’ll see a role for themselves in shepherding our new technologies, understanding better both the nuts and bolts of how they work — with AI, to the extent anyone can, which is not so much — and where the experts themselves currently don’t agree on what could and should happen next,” he said.
