
3 Body Problem’s most mind-bending question isn’t about aliens

Would you swear a loyalty oath to humanity — or cheer on its extinction?

Auggie enters the virtual reality game in 3 Body Problem.
Courtesy of Netflix
Sigal Samuel is a senior reporter for Vox’s Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at the Atlantic.

Stars that wink at you. Protons with 11 dimensions. Computers made of rows of human soldiers. Aliens that give virtual reality a whole new meaning.

All of these visual pyrotechnics are very cool. But none of them are at the core of what makes 3 Body Problem, the new Netflix hit based on Cixin Liu’s sci-fi novel of the same name, so compelling. The real beating heart of the show is a philosophical question: Would you swear a loyalty oath to humanity — or cheer on its extinction?

There’s more division over this question than you might think. The show, which is about a face-off between humans and aliens, captures two opposing intellectual trends that have been swirling around in the zeitgeist in recent years.

One goes like this: “Humans may be the only intelligent life in the universe — we are incredibly precious. We must protect our species from existential threats at all costs!”

The other goes like this: “Humans are destroying the planet — causing climate change, making species go extinct. The world will be better off if we go extinct!”

The first, pro-human perspective is more familiar. It’s natural to want your own species to survive. And there’s lots in the media these days about perceived existential threats, from climate change to rogue AI that one day could wipe out humanity.

But anti-humanism has been gaining steam, too, especially among a vocal minority of environmental activists who seem to welcome the end of destructive Homo sapiens. There’s even a Voluntary Human Extinction Movement, which advocates for us to stop having kids so that humanity will fade out and nature will triumph.

And then there’s transhumanism, the Frankensteinish love child of pro-humanism and anti-humanism. This is the idea that we should use tech to evolve our species into Homo sapiens 2.0. Transhumanists — who run the gamut from Silicon Valley tech bros to academic philosophers — do want to keep some version of humanity going, but definitely not the current hardware. They imagine us with chips in our brains, or with AI telling us how to make moral decisions more objectively, or with digitally uploaded minds that live forever in the cloud.

Analyzing these trends in his book Revolt Against Humanity, the literary critic Adam Kirsch writes, “The anti-humanist future and the transhumanist future are opposites in most ways, except the most fundamental: They are worlds from which we have disappeared, and rightfully so.”

If you’ve watched 3 Body Problem, this is probably already ringing some bells for you. The show actually tackles the question of human extinction with admirable nuance, so let’s get into that nuance a bit — with some mild spoilers ahead.

What does 3 Body Problem have to say about human extinction?

It would give too much away to say who in the show ends up repping anti-humanism. So suffice it to say that there’s an anti-humanist group in play — people who are actually trying to help the aliens invade Earth.

It’s not a monolithic group, though. One faction, led by a hardcore environmentalist named Mike Evans, believes that humans are too selfish to solve problems like biodiversity loss or climate change, so we basically deserve to be destroyed. Another, milder perspective says that humans are indeed selfish but may be redeemable — and the hope is that the aliens are wiser beings who will save us from ourselves. They refer to the extraterrestrials as literally “Our Lord.”

Meanwhile, one of the main characters, a brilliant physicist named Jin, is a walking embodiment of the pro-human position. When it becomes clear that aliens are planning to take over Earth, she develops a bold reconnaissance mission that involves sending her brainy friend, Will, into space to spy on the extraterrestrials.

Jin is willing to do whatever it takes to save humanity from the aliens, even though they’re traveling from a distant planet and their spaceships won’t reach Earth for another 400 years. She’s willing to sacrifice Will — who, by the way, is madly in love with her — for later generations of humans who don’t even exist yet.

Will and Jin, star-crossed lovers (literally) in 3 Body Problem.
Courtesy of Netflix

Jin’s best friend is Auggie, a nanotechnology pioneer. When she’s asked to join the fight against the aliens, Auggie hesitates, because it would require killing hundreds of humans who are trying to help the aliens invade. Yet she eventually gives in to Jin’s appeals — and lots of people predictably wind up dead, thanks to a lethal weapon created from her nanotechnology.

As Auggie walks around surveying the carnage from the attack, she sees a child’s severed foot. It’s a classic “do the ends justify the means?” moment. For Auggie, the answer is no. She abandons the mission and starts using her nanotech to help people — not hypothetical people 400 years in the future, but disadvantaged people living in the here and now.

So, like Jin, Auggie is also a perfect emblem of the pro-human position — and yet she lives out that position in a totally different way. She is not content to sacrifice people today for the mere chance at helping people tomorrow.

But the most interesting character is Will, a humble science teacher who is given the chance to go into space and do humanity a major solid by gathering intel on the aliens. When the man in charge of the mission vets Will for the gig, he asks Will to sign a loyalty oath to humanity — to swear that he’ll never renege and side with the aliens.

Will refuses. “They might end up being better than us,” he says. “Why would I swear loyalty to us if they could end up being better?”

It’s a radical open-mindedness to the possibility that we humans might really suck — and that maybe we don’t deserve to be the protagonists of the universe’s story. If another species is better, kinder, more moral, should our allegiance be to furthering those values, or to the species we happen to be part of?

The pro-humanist vision

As we’ve seen, there are different ways to live out pro-humanism. In philosophy circles, there are names for these different approaches. While Auggie is a “neartermist,” focused on solving problems that affect people today, Jin is a classic “longtermist.”

At its core, longtermism is the idea that we should care far more than we currently do about positively influencing the long-term future of humanity — hundreds, thousands, or even millions of years from now. The idea emerged out of effective altruism (EA), a broader social movement dedicated to wielding reason and evidence to do the most good possible for the most people.

Longtermists often talk about existential risks. They care a lot about making sure, for example, that runaway AI doesn’t render Homo sapiens extinct. For the most part, Western society doesn’t assign much value to future generations, something we see in our struggles to deal with long-term threats like climate change. But because longtermists assign future people as much moral value as present people, and because far more people will live in the future than are alive now, they are especially focused on staving off risks that could erase the chance for those future people to exist.

The poster boy for longtermism, Oxford philosopher and founding EA figure Will MacAskill, published a book on the worldview called What We Owe the Future. To him, avoiding extinction is almost a sacrosanct duty. He writes:

With great rarity comes great responsibility. For thirteen billion years, the known universe was devoid of consciousness ... Now and in the coming centuries, we face threats that could kill us all. And if we mess this up, we mess it up forever. The universe’s self-understanding might be permanently lost ... the brief and slender flame of consciousness that flickered for a while would be extinguished forever.

There are a few eyebrow-raising anthropocentric ideas here. How confident are we that the universe was or would be barren of highly intelligent life without humanity? “Highly intelligent” by whose lights — humanity’s? And are we so sure that the universe would be meaningless without human minds to experience it?

But this way of thinking is popular among tech billionaires like Elon Musk, who talks about the need to colonize Mars as “life insurance” for the human species because we have “a duty to maintain the light of consciousness” rather than go extinct.

Musk describes MacAskill’s book as “a close match for my philosophy.”

The transhumanist vision

A close match — but not a perfect match.

Musk has a lot in common with the pro-human camp, including his view that we should make lots of babies in order to stave off civilizational collapse. But he’s arguably a bit closer to that strange combo of pro-humanism and anti-humanism that we know as “transhumanism.”

Hence Musk’s company Neuralink, which recently implanted a brain chip in its first human subject. The ultimate goal, in Musk’s own words, is “to achieve a symbiosis with artificial intelligence.” He wants to develop a technology that helps humans “merg[e] with AI” so that we won’t be “left behind” as AI becomes more sophisticated.

In 3 Body Problem, the closest parallel for this approach is the anti-humanist faction that wants to help the aliens, not out of a belief that humans are so terrible they should be totally destroyed, but out of a hope that humans just might be redeemable with an infusion of the right knowledge or technology.

On the show, that technology comes via aliens; in our world, it’s perceived to be coming via AI. But regardless of the specifics, this is an approach that says: Let the overlords come. Don’t try to beat ’em — join ’em.

It should come as no surprise that the anti-humanists in 3 Body Problem refer to the aliens as “Our Lord.” That makes total sense, given that they’re viewing the aliens as a supremely powerful force that exists outside themselves and can propel them to a higher form of consciousness. If that’s not God, what is?

In fact, transhumanist thinking has a very long religious pedigree. In the first half of the 20th century, the French Jesuit priest and paleontologist Pierre Teilhard de Chardin argued that we could use tech to nudge along human evolution and thereby bring about the kingdom of God; melding humans and machines would lead to “a state of super-consciousness” where we become a new enlightened species.

Teilhard influenced his pal Julian Huxley, the evolutionary biologist (and brother of Brave New World author Aldous Huxley) who popularized the term “transhumanism.” Huxley’s ideas in turn influenced the futurist Ray Kurzweil, who went on to shape the thinking of Musk and many other Silicon Valley tech heavyweights.

Some people today have even formed explicitly religious movements around worshiping AI or using AI to move humanity toward godliness, from Martine Rothblatt’s Terasem movement to the Mormon Transhumanist Association to Anthony Levandowski’s short-lived Way of the Future church. “Our Lord,” indeed.

The anti-humanist vision

Hardcore anti-humanists go much further than the transhumanists. In their view, there’s no reason to keep humanity alive.

The philosopher Eric Dietrich, for example, argues that we should build “the better robots of our nature” — machines that can outperform us morally — and then hand over the world to what he calls “Homo sapiens 2.0.” Here is his modest proposal:

Let’s build a race of robots that implement only what is beautiful about humanity, that do not feel any evolutionary tug to commit certain evils, and then let us — the humans — exit stage left, leaving behind a planet populated with robots that, while not perfect angels, will nevertheless be a vast improvement over us.

Another philosopher, David Benatar, argued in his 2006 book Better Never to Have Been that the universe would not be any less meaningful or valuable if humanity were to vanish. “The concern that humans will not exist at some future time is either a symptom of human arrogance … or is some misplaced sentimentalism,” he wrote.

Whether or not you think we’re the only intelligent life in the universe is key here. If there are lots of civilizations out there, the stakes of humanity going extinct are much lower from a cosmic perspective.

In 3 Body Problem, the characters know for a fact that there’s other intelligent life out there. This makes it harder for the pro-humanists to justify their position: On what grounds, other than basic survival instinct, can they really argue that it’s important for humanity to continue existing?

Will might be the character with the most compelling response to this central question. When he refuses to sign the loyalty oath to humanity, he shows that he is neither dogmatically pro-humanist nor dogmatically anti-humanist. His loyalty is to certain values, like kindness.

In the absence of certainty about who enacts those values best — humans or aliens — he remains species-agnostic.
