
ASU’s Braden Allenby Discusses Transhumanism for Lincoln Applied Ethics Series

At 4 p.m. Monday in the Hall of Philosophy, popular lecturer Braden Allenby returns to Chautauqua Institution to give a talk on “The Human as Design Space: Toward Human Version 2.0,” the latest in this summer’s Lincoln Applied Ethics Series.

Allenby, President’s Professor and Lincoln Professor of Engineering and Ethics at Arizona State University, is an expert in many areas, including transhumanism. Prior to today’s lecture, he participated in an interview with the Daily. What follows is a condensed account.

Your talk is titled “The Human as Design Space: Toward Human Version 2.0.” What does that entail?

We’re in a period of substantial uncertainty where whatever we’re going to become is unpredictable.

Part of that is as basic as definitions. When we talk about the human (you, me, others), we have a pretty good idea of what we mean. But that idea is probably already obsolete.

We almost certainly will not be what we are now. The question is: How are we even going to think about what human means in the world that we’re evolving [into]? Are we going to think of the human as the sort of Cartesian, protoplasmic wetware that [we] are now? That’s not going to be where cognition occurs, because cognition is going to start occurring, and already is occurring, in techno-human networks that operate above the level of the individual human.

Does this not become a crisis of identity? Does technology begin to take precedence over humans? Will there be a moment of role reversal?

There’s the possibility. But I think a stronger possibility might be that what you see is the evolution of integrated systems.

The model that we’ve carried for a long time in Western culture, of humans versus technology, is Frankenstein. That’s a very powerful archetype.

What we do is we always think that there are going to be these serious conflicts between the human and technology. And then what we tend to do is integrate. And the difference now is we’re integrating at levels that, heretofore, we would not have thought possible. We’re integrating at the molecular level. We’re integrating at the psychological level. We’re integrating at the level of cognition.

What we’re seeing is a fairly profound difference where our existing definitions fail. That’s part of the problem. We don’t have a good way to think about it. It’s an issue of identity.

If you look around the world today, what you see is fundamentalism increasing in virtually every belief system. You can see expressions similar to that in the Brexit vote, in [Donald] Trump. And what’s happening is that technology is now changing rapidly enough that it’s challenging human identity, what we’ve always thought of ourselves as, in much more profound ways around the world, for everybody.

People fall back on faith narratives to protect themselves psychologically from a rate of change that they’re unable to process.

Is there an ideal perspective from which to wade into this conversation? Openly? Hopefully? Pragmatically? Fearfully?

The thing about the fearful approach, the dystopian approach, is that it’s very emotionally appealing. More than anything else, people don’t want to lose their identity, and a dystopian world gives them an identity. It may be an unpleasant one. But it gives them an identity.

The worst thing is not dystopian. The worst thing is meaninglessness.

People ideologically tend to fall back very easily into dystopian perspectives. If it’s technology and technological change that’s challenging you, you’re hardly going to go the utopian route. [You] go the dystopian route.

In 2002, then-U.S. Secretary of Defense Donald Rumsfeld said, “There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.” Does this subject fall under that final category: the “unknown unknowns”?

Look at the way people talk about A.I. They talk about artificial intelligence as if somehow human intelligence were the sine qua non of how rationality in the universe works. If the internet actually were to become intelligent, what would that even look like? Why should we assume it has any of our characteristics at all?

It may be that cognition is far stranger than anything we’ve thought about because we’re limited by our human cognitive constraints. We’re limited in the way we can conceptualize artificial intelligence, and what that means is that we really have no idea what it’s going to look like. And when people get scared, what they do is they fall back on fundamentalist narratives.

Fear is a very powerful driver of human behavior, and periods of rapid change generate a lot of fear because they generate a lot of instability in existing institutions.

Have you or any of your colleagues been able to determine, essentially, a Rubicon? That point of no return?

This tends to get fairly ideological. If you’re a fan of [Ray] Kurzweil, for example, you’d say it’s the singularity. I think probably the existing and accelerating rate of change that we’re in today, and the way that it’s beginning to affect cultures all around the world, is an indication that we may be starting to hit that exponential slope.

What does it mean? I have no idea. I think at that point, prediction is a mug’s game. But it’s at least an arguable hypothesis that we’ve begun to hit that slope.

Where things go really screwy: We’ve already got, for example, computer systems that design computer chips without human intervention. We tell them what we want the chip to do, and they design a chip. That kind of complexity, beyond human capability to comprehend, is going to become more and more common. And as it does, you’re going to hit a point where the computer systems begin to build things into themselves at the rate of computer systems, not at the rate of biological evolution. And when that happens, I think you’ve really crossed a significant Rubicon. Probably not the only one, but a significant one.

In Ovid’s Metamorphoses, there’s this transformation between forms. Like a cocooned caterpillar that then becomes a butterfly, do you see this transformation between man and machine as an almost natural movement?

The butterfly metaphor is a cheerful one. And a hopeful one. But I think the moment humans really lost their innocence, as a species, was in 1945, when we tested the first atomic bomb. [J. Robert] Oppenheimer thought to himself, “Now I am become Death, the destroyer of worlds.” At that point, humans really developed the explicit ability to destroy the world as we know it.

That’s a fundamental Rubicon, because it means you really are a god. The power to destroy worlds in one fell swoop was, before nuclear technology, clearly in the realm of God.

Climate change is a minor issue compared to what would happen if Russia and the United States had a major nuclear exchange. And that’s all human. That’s us. That’s us being God. And if you look at a lot of what we’re talking about in the human as design space, it replicates that meme because you have humans, for example, talking about radical life extension. That is, you have humans talking about building immortality. Nobody knows what that means. Whether or not humans are even capable of maintaining an ego in the face of immortality, nobody knows.

But it does mean that a lot of what we’re talking about developing with our technologies comes from the realm of religion and the powers of God. And to that extent, there’s no going back. Creating the types of intelligence that we’ve created in things like Siri and Google Maps would, 20 or 30 years ago, have been the realm of God. And now, it’s human.

What is the defining status of being a god? Does it lean more toward the ability to create, or the ability to destroy? Or is it an alchemy?

It’s a very complex question.

If you ask most people today, they would say that the printing press was a wonderful creation. If you go back and look at what it actually did, you couldn’t have had the [Protestant] Reformation without the printing press, because you could never have gotten the words of the Bible to people. You could never have created the literacy to create a Protestant revolution. But the Reformation generated 150 years of terrible religious warfare that killed hundreds of thousands of people. Was that good or bad? Particularly when you’re beginning to deal with really fundamental technologies, the question becomes very difficult to answer.

Railroads were part of what generated the industrial scale and destructiveness of modern warfare. Railroads also created the United States. Without the Transcontinental Railroad, you would’ve had a hard time building a continental power. Was rail good or bad?

Part of what every generation tries to do is stifle new technology. One of the first cases Abraham Lincoln had involved steamship interests trying to stifle rail technology, because they knew that rail would put them out of business. They did their best to strangle it in its crib. They failed. Rail helped create the United States. It’s helped create enormous economic growth. But they were trying to kill it. And today, many people are trying to kill different kinds of technologies.

When people are fearful of something, whether it’s technology or anything else, is that fear not simply a self-induced anticipation of the unknown?

There are two important points.

You really don’t know what any technology of any power at all is going to do.

But if you pay more attention to technologies as they’re evolving in real time, you give yourself the opportunity to address issues as they emerge.

For example, it may well be that the substantial improvements in A.I. technology that we’ve seen are going to significantly change employment patterns in the United States. If that begins to happen, you can say, “OK, it’s generating a lot of money, but the money is going to certain very specific areas. We need to take some of that and redistribute it so that we can create an equitable society where all people benefit from the technology.”

While you can’t predict technology, that doesn’t mean that you’re hopeless or helpless in the face of it. And whether a technology improves things for a lot of people or whether it’s damaging often depends on human choices, not on the technology.

This interview has been edited for clarity and length.

Joshua Gurian

Joshua Gurian is a 2015 graduate of Binghamton University, with a Bachelor of Arts in English literature. He lives in Chicago, where he works as a freelance writer.