Joanna Stern opens CLS Week 2 with glimpse of new ‘digital species’

Joanna Stern speaks during the morning lecture at 10:45 a.m. Monday in the Amphitheater.
Sean Smith / staff photographer
A virtual projection of Joanna Stern on the Amphitheater screens started her lecture.

“Today, we will explore how generative artificial intelligence is pushing the boundaries of creativity and innovation, and redefining what it means to be human,” said Stern from the Amp screens. 

Except the real Stern wasn’t saying that. The real Stern, still backstage, had typed the words on a keyboard for her “digital twin,” an AI replica that accurately mimicked her face, voice and movements as it repeated the words.

A two-time Gerald Loeb award winner, Stern is the senior personal technology columnist at The Wall Street Journal and a contributing writer for NBC News and CNBC. Her digital twin opened the Chautauqua Lecture Series Week Two theme, “The AI Revolution,” Monday in the Amp with a question: What makes us human?

The real Stern, who stepped out onto the Amp stage herself following the opening from her digital twin, had asked the same question to Sam Altman, the CEO of OpenAI, the company behind ChatGPT, not too long ago. 

As Stern explained, ChatGPT is software that uses generative artificial intelligence, meaning that when given a text prompt, it responds with text, image, audio or video content that it creates. The lecture’s introduction, given by AI Stern, was made entirely with this technology, everything from the words to the background to Stern herself.

“Generative AI, which really went mainstream with ChatGPT, opened our eyes to how human-like AI could be,” she said.

The content created by this AI is based on the training data it is fed; that data teaches the system everything it repeats when prompted. Stern demonstrated this by showing an AI-generated image of the Athenaeum Hotel lit by fireworks during the Fourth of July, an image she achieved by feeding the system various photos of Chautauqua. By absorbing those images and registering her prompt, the AI created an entirely new, life-like image.

“We are getting to a point where it will be hard, if not impossible, for AI and humans to determine what is made by AI,” she said.

She then asked AI to summarize the TikTok Divestment Bill as if explaining it to a 10-year-old, then asked for a song about Chautauqua as though sung by the Beach Boys. Finally, she asked for a video of a bull in a china shop in the animation style of Pixar. All three requests were fulfilled. However, that isn’t always the case. According to Stern, the quality of AI answers depends on the quality of human questions.

“They only return the best output when you give it the best input,” she said. “In fact, prompt engineering has become a whole new type of job.”

That is not the only glitch in publicly available AI systems. A big issue with these large language models has been that they can hallucinate, or make up facts, she explained. This makes the information they provide untrustworthy and, at times, in need of fact-checking.

The technology is also vulnerable to misuse. Already, AI is being used to make scam phone calls and mimic voices. Stern underscored this by describing her experiment with AI for The Wall Street Journal, in which she fed it a 20-second clip of her voice, and the AI-generated voice was able to fool her bank’s voice-verification system and access her account information.

She used this as an example to highlight the need for more regulations on AI.

“The guardrails are here in many of these tools,” she said. “There are restrictions on creating images of notable people, politics, violence, nudity, and ahead of the elections there are even more restrictions around what you can create (about) the politics and the candidates.”

While these restrictions are a start, they are not without workarounds. Just like with social media, the idea of what constitutes a public figure or violence or politics is determined by individual companies. As it exists currently, AI is only as good as the people who make it and the data it learns from — as such, it contains biases.

When Stern asked text-to-video tools Runway and OpenAI’s Sora to generate videos based on the prompt, “two professional women with brown hair in their 30s, sitting down for a news interview,” they created videos that were different from one another in quality and content. 

However, both systems produced videos of women who were white and thin, reflecting a bias within each of the tools and the limitations of the data they were drawing upon. 

These biases and limitations could be addressed, or exacerbated, with artificial general intelligence, which, according to Stern, is on the horizon and goes beyond the capabilities of generative AI. AGI could possess the ability to learn, understand and use logic.

“The idea is that AGI would be capable of reasoning and problem-solving, and applying intelligence across unfamiliar situations with true cognitive abilities,” she said. “This is not some smart chatbot. This is the stuff of science-fiction movies.”

AGI could solve issues and ills humans have not been able to, such as cancer or inequality. However, it could also be used harmfully, like in the creation of more sophisticated weaponry. But the technology isn’t there yet.

“We are at a moment with (this) technology where it isn’t as powerful as everyone thinks it is,” she said, giving examples of noticeable faults in the AI-generated content from earlier in the lecture. The Beach Boys song did not sound much like the band, she said, and the videos or images always had a tell. Yet, there is constant progress. She likened the current technology to children in their growing years. 

“(My) 7-year-old’s teacher told me the other week he’s really developing his writing voice,” Stern said. “I’m not in the labs of these tech companies, but from what I hear … the development of these tools and AI systems are moving just as fast.”

There is an ongoing race between the world’s biggest tech companies to achieve a reliable system of artificial general intelligence. Stern recalled listening to Mustafa Suleyman, the CEO of Microsoft AI, talking about this subject.

“He said we need to stop thinking about AI as a tool right now,” she said. “It’s not a tool like the automobile or the lightbulb. Those were subject to human control. He said AI should be best understood as something like a new digital species.”

With her digital twin on the screen behind her once more, Stern closed out the lecture by answering the question she opened with: What makes us human?

“Creativity,” she said.

Tags: AI, artificial intelligence, Generative AI, Gerald Loeb award winner, Joanna Stern, morning lecture, OpenAI, The AI Revolution, The Wall Street Journal

The author Ruchi Ghare

Ruchi Ghare is a recent graduate from Pratt Institute, where she majored in graphic design with film as her minor. Originally from Mumbai, India, Ruchi has had a lifelong passion for storytelling through art, as well as a keen interest in learning more through experience. This summer, she is excited to join The Chautauquan Daily for a pleasant change of scenery and immerse herself in its culture and community. When she isn’t designing, Ruchi can be found listening to music or doodling on any surface she can find.
