
Walking the plank into virtual reality

Guest Column by Jeremy Bailenson

Mark Zuckerberg is about to walk the plank.

It’s March 2014, and we’re standing in the multisensory room of the Virtual Human Interaction Lab at Stanford University. I’m making last-minute adjustments to his head-mounted display, the bulky, expensive, helmet-like device that is about to take him into another world. Zuckerberg, at the moment plunged into darkness, is asking me questions about the technical specs of the VR hardware in my lab — what’s the resolution of the displays in front of his eyes? How quickly do the images on those displays update? Unsurprisingly, he is a curious and knowledgeable subject, and he’s clearly done his homework. He’s come today because he wants to experience state-of-the-art virtual reality, and I’m eager to talk to him because I have opinions on the ways in which virtual reality can be used on a social networking site like Facebook.


At Stanford, we are encouraged to face outward and to share our work, not just with academics but with decision makers of all types. I often do this kind of outreach, sharing my lab’s unique capabilities with business executives, foreign dignitaries, journalists, celebrities and others who are curious about the experience of virtual reality. On this day, I am eager to show Zuckerberg — someone who has demonstrated great philanthropic investment in areas of education, environment and empathy research — the ways in which our work on VR has direct applications for those same issues. But first, I have to show him what my lab can do. I usually start with “the plank” — it’s one of the most effective ways to evoke the powerful sensation of presence that good VR produces. And our VR is very good — one of the best in the world. The floor shakes; we have “haptic” devices that give feedback to the hands, 24 speakers that deliver spatialized sound and a high-resolution headset with LEDs on the side that allow cameras mounted around the room to track head and body movement as the user walks around the room. All of this information is assimilated to render interactive, digitally created places that allow the user to experience almost anything we can think of — flying through a city, swimming with sharks, inhabiting nonhuman bodies or standing on the surface of Mars. Anything we decide to program can be rendered in a virtual environment.

The display comes on and Zuckerberg sees the multisensory room again, except that I and my lab assistants have disappeared. The room he is looking at is discernibly lower-res — a bit like television used to look before high definition — but the carpeted floor, the doors, the wall in front of him are all there, creating an effective simulacrum of the space he was just standing in. Zuckerberg moves his head around to take it all in, everything smoothly scrolling into his vision as it would in real life. He steps forward and backward, and the illusion projected a few inches in front of his eyes corresponds with the movement of his body. “Trippy,” he says. I lead him to a spot on the floor (I will be constantly at his side “spotting him” during this demonstration, as it’s very easy to bump into real-world things when you’re navigating a virtual space) and instruct my assistant in the control room to start the program. “Let’s do the pit.”

Zuckerberg hears an industrial whine, the floor shudders, and the small virtual platform on which he stands shoots away from the ground. From his perspective, which I can see via a projection screen on the wall, he’s now standing on a small shelf about 30 feet in the air, connected by a narrow plank to another platform about 15 feet away. Zuckerberg’s legs buckle a bit, and his hand involuntarily goes to his heart. “OK, that’s pretty scary.” If we were measuring his stress signs we’d see that his heart rate was speeding up, and his hands were beginning to sweat. He knows he is standing on the floor of a campus lab, but his dominant sense is telling him that he’s precariously balanced at a deadly distance above the ground. He’s getting a taste of “presence,” that peculiar sense of “being there” unique to virtual reality.

Over the nearly two decades that I’ve been doing VR experiments and demonstrations, I’ve witnessed this scene — when a person is first enveloped by a virtual environment — thousands of times, and I’ve seen a lot of reactions. Some gasp. Some laugh with delight. Depending on what’s being rendered in the program, I’ve also seen people cry out in fear or throw their hands up to protect themselves as they hurtle toward a wall. An elderly federal judge once dove horizontally into a real table in order to catch an imaginary ledge after he “fell” off the virtual platform. At a demonstration at the Tribeca Film Festival, the rapper Q-Tip crawled across the plank on his hands and knees. Often my subjects just stand slack-jawed with wonder, gazing down, up and around — amazed to see themselves suddenly surrounded by a digitally rendered world that nevertheless feels, in crucial ways, real.

Consumer VR is coming like a freight train. It may take two years, it may take 10, but mass adoption of affordable and powerful VR technology, combined with vigorous investment in content, is going to unleash a torrent of applications that will touch every aspect of our lives. The powerful effects that researchers, doctors, industrial designers, pilots, and many others have known about for decades are about to become tools for artists, game designers, filmmakers, journalists, and eventually regular users, empowered by software to design and create their own custom experiences. At the moment, however, VR is unregulated and poorly understood. Consequently, the most psychologically powerful medium in history is getting an alpha test on-the-fly, not in an academic lab but in living rooms across the globe.

We each have a role in defining how this technology is shaped and developed. In Experience on Demand I want to encourage readers to take a broader look at VR’s applications — to look past the immediate offerings of games and movies and consider the wide array of life-altering things it can do. I will help readers understand it as a medium, and will describe some of the powerful effects I’ve observed in my nearly two decades of studying VR. This is so that, as we move past these toddler stages, we use it responsibly, making and choosing the best possible experiences for ourselves — experiences that can change us and the world for the better. And the best way to start using VR responsibly is to understand what we’re dealing with.

This is a unique moment in our media history, as the potent and relatively young technology of VR migrates from industrial and research laboratories to living rooms across the world. Even as we are amazed by the incredible things VR will allow us to do, the inevitable widespread adoption of VR poses unique opportunities and dangers. What do we need to understand about this new technology? What are the best ways to use it? What are its psychological effects? What ethical considerations should guide its use — and what practical ones, for that matter? How will VR change the way we learn, the way we play, or the way we communicate with other people? How will VR change how we think about ourselves?

What, when given a limitless choice, do we actually want to experience?

Jeremy Bailenson is founding director of Stanford University’s Virtual Human Interaction Lab, Thomas More Storke Professor in the Department of Communication, Professor (by courtesy) of Education, Professor (by courtesy) in the Program in Symbolic Systems, a Senior Fellow at the Woods Institute for the Environment, and a Faculty Leader at Stanford’s Center for Longevity.

This column is adapted from the introduction of Experience on Demand: What Virtual Reality Is, How It Works, and What It Can Do, by Jeremy Bailenson, used with permission of the publisher, W.W. Norton & Company, Inc.

Tags: Experience on Demand, Jeremy Bailenson, virtual reality, VR, Week Six 2018, Zuckerberg