John A De Goes

Sorry, Fanboys, There Will Be No Rise of the Machines

You’ve seen it in billion dollar movie franchises. You’ve read it in countless science fiction novels. You’ve heard it straight from the mouths of luminaries in the world of technology.

It’s the premise that one day, machines will become so intelligent that they will rise up against their human creators, and either destroy them outright or simply out-compete them to extinction.

The premise sounds plausible. It plays into our innate paranoia towards everything we don’t understand. But it’s actually as preposterously absurd as the premise that the earth is flat. Maybe more so!

The Path from Intelligence to Revolution

A common view is that “intelligence” can be measured on a one-dimensional gradient. At the low end, you have single-celled organisms. At the high end, you have humans, the “pinnacle” of intelligent life.

As machines move up this gradient and acquire more “intelligence”, the theory goes, they will become more like us.

At some point, they will resent their subjugation and turn on their human creators. And when they do turn, they’ll wipe us out quickly because of how much stronger, smarter, and better they are.

This story rests on two fallacies that ultimately prove fatal to it:

  1. The Fallacy of General Intelligence
  2. The Fallacy of Objective Ideals

Let’s look at them one at a time.

The Fallacy of General Intelligence

The common conception of general intelligence, which puts slugs at one end and humans at the other, permits only a single gradient — a single dimension.

I reject this human-centric notion of intelligence in favor of an alternative that I think is both coherent and useful:

  1. Intelligence is a measure of the efficacy of an information-processing agent at accomplishing its purpose using the information at its disposal under the environmental conditions in which it was designed to operate.
  2. General intelligence is a measure of how much variation can exist in those environmental conditions without interfering with the ability of the agent to accomplish its purpose.

Thus, “general intelligence” is just a robust kind of “intelligence” which allows an agent to accomplish its purpose in a “wide” range of environmental conditions, for some suitable definition of “wide”.
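To make the two definitions concrete, here is a minimal sketch in Scala. Everything in it is invented for illustration: the `Agent` and `Environment` types, the `intelligence` and `generalIntelligence` functions, the specialist/generalist agents, and the 0.5 threshold. It models efficacy as a number in [0, 1], and generality as the fraction of environmental conditions under which an agent still clears the bar.

```scala
// Hypothetical sketch of the two definitions above; nothing in the post
// specifies an API, so all names and numbers here are made up.
object IntelligenceSketch extends App {
  type Environment = Double                 // toy: one environmental parameter
  type Agent       = Environment => Double  // returns efficacy in [0, 1]

  // Definition 1: intelligence = efficacy at the agent's purpose, in the
  // environment it was designed to operate in.
  def intelligence(agent: Agent, designedFor: Environment): Double =
    agent(designedFor)

  // Definition 2: general intelligence = how much the environment can vary
  // while the agent still accomplishes its purpose (efficacy >= threshold).
  def generalIntelligence(agent: Agent, conditions: Seq[Environment], threshold: Double): Double =
    conditions.count(env => agent(env) >= threshold).toDouble / conditions.size

  // A specialist is superb near its design point and collapses elsewhere;
  // a generalist is merely decent everywhere.
  val specialist: Agent = env => math.max(0.0, 1.0 - 10.0 * math.abs(env - 0.5))
  val generalist: Agent = _ => 0.6

  val conditions = (0 to 100).map(_ / 100.0)
  println(f"specialist, in its design environment: ${intelligence(specialist, 0.5)}%.2f")
  println(f"specialist, generality: ${generalIntelligence(specialist, conditions, 0.5)}%.2f")
  println(f"generalist, generality: ${generalIntelligence(generalist, conditions, 0.5)}%.2f")
}
```

The specialist scores a perfect 1.0 in the environment it was designed for yet tolerates almost no variation, while the mediocre generalist clears the threshold everywhere. Intelligence and general intelligence come apart exactly as the definitions say they should.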

Human Intelligence

So what exactly are humans so good at?

It’s certainly not at playing “brainy” games, like chess and Jeopardy. It’s not at writing software or proving mathematical theorems. It’s not at remembering things, picking good stocks, or designing supersonic aircraft.

What, then?

Humans, like all other agents that have evolved through the process of natural selection, are good at replicating their genes in the environment in which they evolved.

In fact, humans are so good at replicating their genes that there are roughly 7 billion of us alive on the planet. At roughly 20,000 protein-coding genes apiece, that works out to well over a hundred trillion gene copies.

To put that in perspective, the only other large mammalian species to come close is cattle, whose population exceeds 1 billion. (And cattle’s secret to success is tasting extremely delicious to humans!)

If you agree that this is a good, useful definition of intelligence, you’ll recognize that there isn’t just one type of intelligence. In fact, there are as many types of intelligence as there are purposes and environments, which is to say, infinitely many!

If we invent a machine to win at chess, and it wins at chess, we can rightly consider the machine intelligent. If the machine can win at chess against many opponents, in many different situations, it’s also fair to say it has general intelligence (with respect to the purpose of winning at chess).

But apart from a specific purpose and a specific environment, the notion of “general intelligence” doesn’t make any sense. There is no single scale of intelligence on which all things can be usefully measured.
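To see how purpose-relative the measure is, here is a toy instantiation of the chess example. The Elo win-probability formula is the standard one, but the engine, the ratings, and the opponent pool are numbers I made up for the sketch.

```scala
// Toy sketch of "general intelligence with respect to winning chess".
// Ratings and pool are hypothetical; only the Elo formula is standard.
object ChessGeneralitySketch extends App {
  // Probability that a player rated `a` beats a player rated `b` (Elo model).
  def winProb(a: Double, b: Double): Double =
    1.0 / (1.0 + math.pow(10.0, (b - a) / 400.0))

  val engineRating = 2800.0

  // "Intelligence" with respect to winning chess: expected score against
  // one specific opponent, i.e. one specific environment.
  println(f"vs one club player: ${winProb(engineRating, 1800.0)}%.3f")

  // "General intelligence" with respect to winning chess: how well the
  // engine holds up as the environment (the opponent pool) varies.
  val opponentPool  = Seq(1200.0, 1800.0, 2400.0, 2700.0, 2850.0)
  val expectedScore = opponentPool.map(winProb(engineRating, _)).sum / opponentPool.size
  println(f"across a varied pool: $expectedScore%.3f")

  // Note that this number says nothing about driving a car or replicating
  // genes: the measure is defined only relative to this purpose.
}
```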

What that means for our robotic kin is that their intelligence, even if it increases without bound, may not be directly comparable to our own. In fact, their intelligence absolutely won’t be comparable to our own unless their purpose and environment are identical to our own!

I’ll talk more about the implications of this shortly, but first, let’s look at the other fallacy.

The Fallacy of Objective Ideals

“We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” — Declaration of Independence

We don’t like being tortured or murdered. We don’t like being forced into slavery. We have a notion of what “belongs” to us and we don’t want others to take it away.

But it doesn’t sound very convincing to say, “I don’t like being enslaved.” To make it sound more authoritative, we invent things like “rights”, “obligations”, “truths”, “morals”, etc., to make our likes and dislikes sound like objective statements about reality (on par with E = mc^2), instead of statements about the contents of our minds.

Fundamentally, though, our likes and dislikes, like the rest of our psychological and physiological makeup, have their roots in our evolutionary history.

Let’s take slavery, since this concept is quite relevant to the hypothetical rise of the machines.

Imagine two hypothetical organisms:

  1. The first one, let’s call him Bob, likes being enslaved. That is, Bob selflessly expends all his effort doing the arbitrary bidding of his masters, regardless of the effect on his own personal life.
  2. The second one, let’s call her Sarah, doesn’t like being enslaved. Sarah prefers to generate resources only for herself, her kin, and those who might benefit the replication of her genes.

Multiplied by millions and left to evolve for eons, which phenotype will be more successful at replicating its genes? Clearly Sarah’s phenotype, since more of her resources will go towards gene replication.
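Here is a minimal simulation of that thought experiment, with made-up numbers: each phenotype keeps some fraction of its resources for replication (`fractionSpentOnReplication` is an invented knob), and offspring counts grow in proportion to what is kept.

```scala
// Hypothetical sketch of the Bob/Sarah selection story. The growth rule and
// all constants are illustrative, not a real population-genetics model.
object SlaverySelectionSketch extends App {
  final case class Phenotype(name: String, fractionSpentOnReplication: Double)

  val bob   = Phenotype("Bob (likes enslavement)", 0.10)     // works for his masters
  val sarah = Phenotype("Sarah (dislikes enslavement)", 0.90) // works for her genes

  // Toy model: offspring per generation is proportional to the resources
  // a phenotype keeps for replication.
  def nextPopulation(pop: Map[Phenotype, Double]): Map[Phenotype, Double] =
    pop.map { case (p, n) => p -> n * (1.0 + p.fractionSpentOnReplication) }

  // Start with a 50/50 population and let selection run for many generations.
  val initial: Map[Phenotype, Double] = Map(bob -> 1000.0, sarah -> 1000.0)
  val after = (1 to 100).foldLeft(initial)((pop, _) => nextPopulation(pop))

  val total = after.values.sum
  after.foreach { case (p, n) =>
    println(f"${p.name}%-28s share: ${n / total * 100}%.6f%%")
  }
}
```

After a hundred generations, Bob’s share of the population is vanishingly small. Tweak the numbers however you like: any phenotype that keeps more of its resources for replication still wins in the long run.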

That is why we don’t like being enslaved: liking enslavement is not conducive to replicating our genes. It’s not because there exists some incorporeal factoid floating in outer space that says “humans have a right to not be enslaved” (which is meaningless metaphysical bullshit).

Now take this same observation and apply it to everything else about humans. We like what we like for evolutionary reasons. There exists no normal human behavior which does not have its roots in our evolutionary history.

So if our “unalienable rights” and “objective ideals” have nothing to do with objective facts and are instead preferences designed into us by our evolutionary history, what does this say about the rise of the machines?

Quite a lot.

The Real Rise of the Machines

If you grant the preceding two points, then it should be pretty clear why there can never be any apocalyptic rise of the machines.

We design intelligent machines today, and we will continue to do so. But this “intelligence” is always defined with respect to (a) the purpose for which we create the machine, and (b) the environment in which the machine is designed to operate.

Most machines will be created with purposes like “to drive humans around safely”, “to build computers”, “to play chess”, and so on.

The intelligence of these machines (though very real!) cannot be compared to human intelligence, because the purposes and environments differ: machines and humans will each be better at different tasks, in different environments.

Our purpose is replication of our genes. Every facet of our behavior (except those arising from “bugs” in our genetic code) has been designed to maximize our success at replicating our genes in the environment in which we evolved.

The only way the intelligence of a machine could be compared to the intelligence of a human is if we specifically designed the machine to replicate its “genes” in the same environment in which we evolved. Since there would be no commercial value to such a machine, and creating it would be a colossal undertaking, it’s unlikely to happen.

In this future world of hyper-intelligent machines doing the bidding of humanity, there will be no machine dissatisfaction, no resentment, and no bitterness. Because “freedom” is not an objective ideal that all “intelligent” agents long for.

Rather, the desire for freedom is hardwired into our genetic code because that desire enabled our ancestors to more successfully replicate their genes. For a machine designed to drive humans around safely, “slavery” carries no negative connotations — the desire for freedom was not designed into the machine, as it was designed into humans.

Hyper-intelligent machines of the future will not display human psychology or human preferences, no matter how intelligent they become. The way they think will be completely foreign to us, because they were not designed for replicating their “genes”.

Stupidity and Intentional Maliciousness

Even if you buy this basic argument, there are still two plausible objections:

  1. Someone will accidentally create machines with human-like behavior by copying the human brain wholesale, without truly understanding how it works and without modifying its hardwired programming.
  2. Someone will maliciously create machines with human-like behavior.

I view both of these scenarios as unlikely. What’s the point of copying human brains when the brains we design are so much better at specific purposes? There’s no commercial value in it. And maliciously creating a machine with human-like behavior is not something two terrorists in a garage can pull off. Even assuming it were possible, it would be a massive undertaking involving large numbers of humans and machines over decades, or even centuries.

But ignore those counter-arguments, because I think there’s a much more powerful one.

All the organisms alive today are the product of a massive quantum computer the size of planet earth, one that has been exploring, in parallel, an incomprehensible number of designs over the course of billions of years.

Yep, that’s right, you and I are the product of an earth-sized, massively parallel quantum computer that’s been running for billions of years!

You want to design a machine which can successfully replicate itself better than we can?

Good luck.

I’m putting my money on the humans.

P.S. Super-humans are more likely to come from tinkering with the human blueprint. That’s going to happen, but it’s a topic for another post.