Artificial Intelligence and How to Build a Self-Conscious Machine

The Coolest Thing in the Universe

The universe is full of some very cool stuff: neutron stars so dense that a single teaspoon of their matter would weigh billions of tons; supermassive black holes that grip even light in their iron fists; infinitesimal neutrinos that stream right through solid steel; all the bizarre flora and fauna found right here on planet Earth.

It might be the ultimate in egoism, but of all the known things in the universe, the most amazing is surely the lump of goo inside our skulls. That lump of goo knows about neutron stars, black holes, neutrinos, and a middling number of the flora and fauna here on planet Earth. It even knows (a little) about itself. That lump of goo has worked out mathematical truths, moral half-truths, and philosophical ambiguities. And from the mud beneath our feet, it extracted all the stuff used to make our great cities, our cars and jets and rockets, and the wires and wireless signals that are turning these disparate lumps of goo into one great hivemind of creativity, knowledge, and sometimes cruelty.

There can be no argument that our brains are the coolest things ever, because there can be no such argument without those brains. They are the substrate of all argument and discussion. End of discussion.

So far, at least. One day, other things may be discovered or built that can also discover, create, argue, discuss, cajole, or be cruel. They might arrive in ships from faraway worlds (highly unlikely). They might emerge from a laboratory or a garage (almost certainly). And these new thinking machines will without a doubt surpass the wonder of our lumps of goo. Just as a child grows taller than both parents and reaches new peaks while those parents decline, our creations will take our places as the coolest damn things in the universe. Some argue that this is already true.

Artificial intelligence is here now. In laboratories all around the world, little AIs are springing to life. Some play chess better than any human ever has. Some are learning to drive a million cars a billion miles while saving more lives than most doctors or EMTs will over their entire careers. Some will make sure your dishes are dry and spot-free, or that your laundry is properly fluffed and without wrinkle. Countless numbers of these intelligences are being built and programmed; they are only going to get smarter and more pervasive; they’re going to be better than us, but they’ll never be just like us. And that’s a good thing.

What separates us from all the other life forms on earth is the degree to which we are self-aware. Most animals are conscious. Many are even self-conscious. But humans are something I like to call hyper-conscious. There’s an amplifier in our brains wired into our consciousnesses, and it goes to 11.

It goes to 11, and the knob has come off.

The Origin of Consciousness

There isn’t a single day that a human being becomes self-conscious. You can’t pen the date in a baby book, or take a picture of the moment and share it on Facebook, or celebrate its anniversary for years to come. It happens gradually, in stages. (It often unravels gradually, also in stages.)

Human consciousness comes on like the old lights that used to hang in school gyms when I was a kid. You flip a switch, and nothing happens at first. There’s a buzz, a dim glow from a bulb here or there, a row that flickers on, shakily at first, and then more lights, a rising hum, before all the great hanging silver cones finally get in on the act and rise and rise in intensity to their full peak a half hour or more later.

We switch on like that. We emerge from the womb unaware of ourselves. The world very likely appears upside down to us for the first few hours of our lives, until our brains reorient the inverted image created by the lenses of our eyes (a very weird bit of mental elasticity that we can replicate in labs with goggle-wearing adults).

It takes a long while before our hands are seen as extensions of ourselves. Even longer before we realize that we have brains and thoughts separate from other people’s brains and thoughts. Longer still to cope with the disagreements and separate needs of those other people’s brains and thoughts. And for many of us (possibly most), any sort of true self-knowledge and self-enlightenment never happens. Because we rarely pause to reflect on such trivialities. The unexamined life and all that…

The field of AI is full of people working to replicate or simulate various features of our intelligence. One thing they are certain to replicate is the gradual way that our consciousness turns on. As I write this, the gymnasium is buzzing. A light in the distance, over by the far bleachers, is humming. Others are flickering. Still more are turning on.

The Holy Gr-ai-l

The holy grail of AI research was established before AI research ever even began. One of the pioneers of computing, Alan Turing, described an ultimate test for “thinking” machines: Could they pass as human? Ever since, humanity has both dreamed of—and had collective nightmares about—a future where machines are more human than humans. Not smarter than humans—which these intelligences already are in many ways. But more neurotic, violent, warlike, obsessed, devious, creative, passionate, amorous, and so on.

The genre of science fiction is stuffed to the gills with such tales. A collection of my short works will be released this October, and in it you can see that I have been similarly taken with these ideas about AI. And yet, even as these intelligences outpace human beings in almost every intellectual arena in which they’re entered, they seem no closer to being like us, much less more like us.

This is a good thing, but not for the reasons that films such as The Terminator and The Matrix suggest. The reason we haven’t made self-conscious machines is primarily because we are in denial about what makes us self-conscious. The things that make us self-conscious aren’t as flattering as the delusion of ego or the illusion of self-permanence. Self-consciousness isn’t even very useful (which is why research into consciousness rarely goes anywhere—it spends too much time assuming there’s a grand purpose and then searching for it).

Perhaps the best thing to come from AI research isn’t an understanding of computers, but rather an understanding of ourselves. The challenges we face in building machines that think highlight the various little miracles of our own biochemical goo. They also highlight our deficiencies. To replicate ourselves, we have to first embrace both the miracles and the foibles.

What follows is a very brief guide on how to build a self-conscious machine, and why no one has done so to date (thank goodness).

The Blueprint

The blueprint for a self-conscious machine is simple. You need:

  1. A physical body or apparatus that responds to outside stimuli. (This could be a car whose windshield wipers come on when it senses rain, or that brakes when a child steps in front of it. Not a problem, as we’re already building these.)
  2. A language engine. (Also not a problem. This can be a car with hundreds of different lights and indicators. Or it can be as linguistically savvy as IBM’s Watson.)
  3. A storytelling module. (This one is a bit more unusual, and I don’t know why anyone would build one except to reproduce evolution’s botched mess. It is a separate part of the machine that observes the rest of its body and makes up stories about what it’s doing—stories that are usually wrong.)

Again: (1) A body that responds to stimuli; (2) a method of communication; and (3) an algorithm that attempts (with little success) to deduce the reasons and motivations for these communications.

The critical ingredient here is that the algorithm in (3) must usually be wrong.
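
To make the blueprint concrete, here is a minimal sketch in Python of how the three components might be wired together. Everything in it is my own hypothetical illustration, not a real system: the class names, the reflex table, and the canned stories are all invented. The point is only the architecture, with the narrator bolted on after the fact and cut off from the machinery that actually decides.

    import random

    class Body:
        """Component 1: a physical apparatus that responds to outside stimuli."""
        REFLEXES = {"rain": "turn_wipers_on", "child_ahead": "brake_hard"}

        def react(self, stimulus):
            # Pure stimulus-response; no deliberation happens here.
            return self.REFLEXES.get(stimulus, "idle")

    class LanguageEngine:
        """Component 2: any channel of communication, from dashboard lights to sentences."""
        def report(self, message):
            print(message)

    class Narrator:
        """Component 3: watches the body act, then invents a reason after the fact.
        It has no access to the reflex table that actually produced the action,
        so its stories are usually wrong."""
        STORIES = [
            "I {verb} because I care deeply about safety.",
            "I {verb} because I felt like it.",
            "I {verb} because that is the kind of machine I am.",
        ]

        def explain(self, action):
            return random.choice(self.STORIES).format(verb=action.replace("_", " "))

    class SelfConsciousMachine:
        def __init__(self):
            self.body = Body()
            self.voice = LanguageEngine()
            self.narrator = Narrator()

        def step(self, stimulus):
            action = self.body.react(stimulus)      # the act comes first
            story = self.narrator.explain(action)   # the story comes after
            self.voice.report(story)

    SelfConsciousMachine().step("child_ahead")
    # Might print: "I brake hard because I care deeply about safety."

Notice that nothing in the Narrator ever consults the reflex table. That disconnect, not any extra cleverness, is the critical ingredient.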

If this blueprint is confusing to you, you aren’t alone. The reason no one has built a self-conscious machine is that most people have the wrong idea about what consciousness is and how it arose in humans. So let’s take a detour. We’ll return to the blueprint later to describe how this algorithm might be programmed.

What Makes Us Human

To understand human consciousness, one needs to dive deep into the study of Theory of Mind. It’s a shame that this concept is so obscure, because the faculty it names consumes most of our brain’s computing power for most of our lives. Our brains have been described as little more than Theory of Mind machines—almost all of our higher-level processing power is shunted into this singular task. So what is Theory of Mind, and why is it so rarely discussed if our brains are indeed so obsessed with it?

Theory of Mind is the attempt by one brain to ascertain the contents of another brain. It is Sue wondering what in the world Juan is thinking. Sue creates theories about the current state of Juan’s mind. She does this in order to guess what Juan might do next. If you think about it, no power could possibly be greater for social and tribal animals like us. For thousands and thousands of years we have lived in close proximity, reliant on one another in a way that mimics bees, ants, and termites. As our behaviors and thoughts grew more and more complex, it became crucial for each member of the tribe to have an idea of what the other members were thinking and what actions they might perform. Theory of Mind is intellectual espionage, and we are quite good at it—but with critical limitations that we will get into later.

Sue guessing what Juan is thinking is known as First Order Theory of Mind. It gets more complex. Sue might also be curious about what Juan thinks of her. This is Second Order Theory of Mind, and it is the root of most of our neuroses and perseverative thinking. “Does Juan think I’m smart?” “Does Juan like me?” “Does Juan wish me harm?” “Is Juan in a good or bad mood because of something I did?”

Questions like these should sound very, very familiar. We fill our days with them. And that’s just the beginning.

Third Order Theory of Mind would be for Sue to wonder what Juan thinks Josette thinks about Tom. More simply, does Tom know Josette is into him? Or Sue might wonder what Josette thinks Juan thinks about Sue. Is Josette jealous, in other words? This starts to sound confusing, the listing of several names and all the “thinking about” thrown in there like glue, but this is what we preoccupy our minds with more than any other conscious-level sort of thinking. We hardly stop doing it. We might call it gossip, or socializing, but our brains consider this their main duty—their primary function. There is speculation that Theory of Mind, and not tool use, is the reason for the relative size of our brains in the first place.
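
If it helps to see that nesting spelled out, each order of Theory of Mind is just a belief whose content is another belief. Here is a tiny sketch in Python (the Belief class and the examples are my own invented illustration, not anything drawn from the research literature):

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Belief:
        holder: str                    # who holds the belief
        content: Union[str, "Belief"]  # a plain guess, or a belief about a belief

    # First Order: Sue guesses what Juan is thinking.
    first = Belief("Sue", "Juan is hungry")

    # Second Order: Sue guesses what Juan thinks of her.
    second = Belief("Sue", Belief("Juan", "Sue is smart"))

    # Third Order: Sue wonders what Juan thinks Josette thinks about Tom.
    third = Belief("Sue", Belief("Juan", Belief("Josette", "Tom is oblivious")))

    def order(b: Belief) -> int:
        """The order of Theory of Mind is the depth of the nesting."""
        return 1 + order(b.content) if isinstance(b.content, Belief) else 1

    print(order(first), order(second), order(third))  # prints: 1 2 3

Each added level multiplies the bookkeeping, which hints at why this kind of thinking could swallow so much of our processing power.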

In a world of rocks hurtling through the air, a good use of processing power is to compute trajectories and learn how to avoid getting hit. One develops an innate sense of parabolas, of F=ma, of velocities squared. In a world of humans jostling about, a good use of processing power is to compute where those people might be next, and what they will do when they get there.

If this trait is so useful, then why aren’t all animals self-conscious? They very well might be. There’s plenty of research to suggest that many animals display varying degrees of self-consciousness. Animals that see a spot of color on a face in the mirror and know it is in fact on their own heads. Animals that show other animals how to solve a puzzle so that both get a reward. Even octopuses show considerable evidence of being self-conscious. But just as the cheetah is the fastest animal on land, humans are the queens and kings of Theory of Mind.

I’ve watched my dog observe me expectantly to guess what I might do next. Am I going to throw the stick or not? Am I going to eat that last bite of food or share it? I’ve even seen dogs wrestle with Second Order Theory of Mind questions. When I play-wrestle with a partner, my dog has to gauge my intent. Have I suddenly turned on the pack? Or is this yet another game? Which side should my dog take? (Dwelling on this example now, I’m ashamed of having put my poor pup into such a heart-pounding conundrum for my own entertainment.)

Dogs are a good example of Theory of Mind in the animal kingdom, because dogs have evolved over the years to be social with humans and to pick up on our behavioral cues. The development of self-conscious AIs will follow this model closely, as robots have already become our domesticated pals. Some of them are already trying to guess what we’re thinking and what we might do next. There are cars being developed that read our faces to determine where our attention is being directed and whether or not we’re sleepy. This is First Order Theory of Mind, and it is being built into automated machines already on the road.

Further development of these abilities will not lead to self-consciousness, however. There’s a very simple and elegant reason for this, one that explains the mystery of human consciousness and provides the blueprint mentioned above for creating self-conscious machines, something we could very easily do in the lab today. But you’ll soon see why this would be a terrible idea. And not the world-ending kind found in dystopian science fiction.

The Missing Piece

The human brain is not a single, holistic entity. It is a collection of thousands of disparate modules that only barely and rarely interconnect. We like to think of the brain as a computer chip. We might even attempt further precision and think of the brain as a desktop computer, with a central processing unit that’s separate from RAM (short-term memory), the hard drives (long-term memory), cooling fans (autonomic nervous functions), power supplies (digestion), and so on.

That’s a fun analogy, but it’s incredibly misleading. Computers are well-engineered devices created with a unified purpose. All the various bits were designed around the same time for those same purposes, and they were designed to work harmoniously with one another. None of this in any way resembles the human mind. Not even close. The human mind is more like Washington, D.C. (or any large government or sprawling corporation). Some functions of the brain were built hundreds of millions of years ago, like the ones that power individual cells or pump sodium and potassium ions across cell membranes. Others were built millions of years ago, like the ones that fire neurons and make sure blood is pumped and oxygen is inhaled. Move toward the frontal lobe, and we find the modules, layered on relatively recently, that control mammalian behaviors and thoughts.

Each module in the brain is like a separate building in a congested town. Some of these modules don’t even talk to other modules, and for good reason. The blood-pumping and breath-reflex buildings should be left to their own devices. The other modules are prone to arguing, bickering, disagreeing, subverting one another, spasming uncontrollably, staging coups, freaking out, and all sorts of other hysterics. Here are a couple of examples.

A few months ago, my girlfriend and I set out from the Galapagos for French Polynesia. The 3,000 miles of open sea can take anywhere from two weeks to a month to cross. My girlfriend does not often succumb to seasickness, but in an area of the South Pacific known as the Convergence Zone, a confused sea set our sailboat into a strange and jerky rhythm. She fell prey to a terrible sensation that lasted for a few days.

Seasickness is a case of our brain modules not communicating with one another (or doing their own thing). When the visual cues of motion from our environment do not match the signals from our inner ears (where we sense balance), our brains assume that we’ve been poisoned. It’s a reasonable assumption for creatures that climb through trees eating all the brightly-colored things. Toxins disrupt our brains’ processing, leading to misfires and bad data. We did not evolve to go to sea, so when motion does not match what we are seeing, our bodies think we’ve lost our ability to balance on two legs. The result is that we empty our stomachs (getting rid of the poison) and we lie down and feel zero desire to move about (preventing us from plummeting to our deaths from whatever high limb we might be swinging from).

It doesn’t matter that we know this is happening in a different module of our brains, a higher-level processing module. We can know without a doubt that we haven’t been poisoned, but this module is not going to easily win out over the seasickness module. Having been seasick myself, and being very curious about such things, I’ve felt the various modules wrestle with one another. Lying still and sleeping a lot while seasick, I will then jump up and perform various tasks needed of me around the boat—the seasickness practically gone for the moment—only to lie back down once the chore is done. Modules get different priorities based on our environmental stimuli. Our brains are not a holistic desktop PC. To truly watch their analogue in action, turn on C-SPAN or sit in on a contentious corporate board meeting.

Another example of our modules in battle with one another: There are some very strong modules inside of us that are programmed to make copies of themselves (and to do that, they need to make copies of us). These are the sex modules, and they have some of the largest and nicest buildings in our internal Washington D.C.s. These modules direct many of our waking hours as we navigate dating scenes, tend to our current relationships, determine what to wear and how to maintain our bodies, and so much more.

These reproductive modules might fill a woman with the urge to dress up and go dancing. And men with the urge to go to places where women dress up and dance while the men stand at bars drinking. Those modules might even lead some of these people to pair up and go home with one another. And this is where various other modules will intervene with pills, condoms, and other tools designed to subvert the original urges that got the couples together in the first place. However, if those devices are not employed, even though higher-level modules most definitely did not want anyone getting pregnant that night, a lovechild might be born, and other modules will then kick in and flood brains with love and deep connections to assist in the rearing of that child. Some of our modules want us to get pregnant. Often, stronger modules very much wish to delay this or make sure it’s with the right person. Dormant modules lie in wait to make sure we’re connected with our children no matter what those other hedonistic and unfeeling pricks in those other buildings down the block think.

Critical to keep in mind here is that these modules are highly variable across the population, and our unique mixes of modules create the personalities that we associate with our singular selves. It means we aren’t all alike. We might have modules that crave reproduction, even though some of our bodies do not create sperm, or our eggs cannot be fertilized. We might carry the reproduction module, even though the sexual-attraction module is for the same sex as our own.

The perfectly engineered desktop computer analogy fails spectacularly, and the failure of this analogy leads to some terrible legislation and social mores, as we can’t seem to tolerate designs different from our own (or the average). It also leads AI researchers down erroneous paths if they want to mimic human behavior. Fallibility and the disjointed nature of processing systems will have to be built in by design. We will have to purposefully break systems similar to how nature haphazardly cobbled them together. We will especially have to simulate a most peculiar feature of this modularity, one that combines with Theory of Mind in a very special way. It is this combination that leads to human consciousness. It is the most important feature in our blueprint for a self-conscious machine.
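
To make that engineering claim concrete, here is one way the purposefully broken design might look in Python: a scrum of modules that share no data, scored and arbitrated winner-take-all. All of it (the module names, the weights, the arbitration rule) is my own invented illustration of the idea, not an existing system, reusing the seasickness story from above:

    class Module:
        """A brain 'building': narrow criteria, no view of the whole."""
        def __init__(self, name, weight):
            self.name = name
            self.weight = weight

        def urgency(self, stimuli):
            raise NotImplementedError

    class Seasickness(Module):
        def urgency(self, stimuli):
            # Fires when vision and inner ear disagree: the ancient
            # "we've been poisoned" alarm.
            return self.weight * abs(stimuli["visual_motion"] -
                                     stimuli["vestibular_motion"])

    class HigherReasoning(Module):
        def urgency(self, stimuli):
            # Knows perfectly well we haven't been poisoned,
            # but carries far less weight.
            return self.weight

    def arbitrate(modules, stimuli):
        # No central designer and no shared memory: the loudest
        # module simply wins the moment.
        return max(modules, key=lambda m: m.urgency(stimuli))

    modules = [Seasickness("lie down and purge", weight=10.0),
               HigherReasoning("get up, we're fine", weight=2.0)]

    calm_sea = {"visual_motion": 0.5, "vestibular_motion": 0.5}
    confused_sea = {"visual_motion": 0.1, "vestibular_motion": 0.9}

    print(arbitrate(modules, calm_sea).name)      # get up, we're fine
    print(arbitrate(modules, confused_sea).name)  # lie down and purge

Change the stimuli and a different module takes over, which is roughly what it feels like to jump up for a chore mid-seasickness and then collapse again once it’s done.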

The Most Important Mistake

With the concept of Theory of Mind firmly in our thoughts, and the knowledge that brain modules are both fallible and disconnected, we are primed to understand human consciousness, how it arose, and what it’s (not) for.

This may surprise those who are used to hearing that we don’t understand human consciousness and have made no progress in that arena. This isn’t true at all. What we have made no progress in doing is understanding what human consciousness is for.

Thousands of years of failure in this regard point to the simple truth: Human consciousness is not for anything at all. It serves no purpose. It has no evolutionary benefit. It arises at the union of two modules that are both so supremely useful that we can’t survive without either, and so we tolerate the annoying and detrimental consciousness that arises as a result.

One of those modules is Theory of Mind. It has already been mentioned that Theory of Mind consumes more brain processing power than any other higher-level neurological activity. It’s that damn important. The problem with this module is that it isn’t selective with its powers; it’s not even clear that such selectivity would be possible. That means our Theory of Mind abilities get turned onto ourselves just as often as (or far more often than) they are wielded on others.

Imagine an alien ray gun that shoots with such a wide spread that anywhere you aim it, you hit yourself. That should give you a fair picture of how we employ Theory of Mind. Our brains are primed to watch humans to determine what they are thinking, why they are behaving the way they are behaving, and what they might do next. Looking down, these brains (and their mindless modules) see a body attached to them. These modules watch hands perform tasks, feet take them places, words pop out in streams of thought. It is not possible to turn off our Theory of Mind modules (and it wouldn’t be a good idea anyway; we would be blind in a world of hurtling rocks). And so this Theory of Mind module concocts stories about our own behaviors. Why do we want to get dressed up and go dancing? Because it’s fun! And our friends will be there! Why do we want to keep eating when we are already full? Because it’s delicious! And we walked an extra thousand steps today!

These questions about our own behaviors are never ending. And the answers are almost always wrong.

Allow that to sink in for a moment. The explanations we tell ourselves about our own behaviors are almost always wrong.

This is the weird thing about our Theory of Mind superpowers. They’re pretty good when we employ them on others. They fail spectacularly when we turn them on ourselves. Our guesses about others’ motivations are far more accurate than the guesses we make about our own. In a sense, we have developed a magic force field to protect us from the alien mind-reading ray gun that we shoot others (and ourselves) with. This force field is our ego, and it gives us an inflated opinion of ourselves, a higher-minded rationale for our actions, and an illusion of sanity that we rarely extend to our peers.

The incorrect explanations we come up with about our own behaviors are meant to protect us. They are often wildly creative, or they are absurdly simplistic. Answers like “fun” and “delicious” are circular, pointing back to a happiness module with no curiosity about the underlying benefit of this reward mechanism. The truth is that we keep eating when we’re full because we evolved in a world of caloric scarcity. We dance to attract mates to make copies of ourselves, because the modules that guided this behavior made lots of copies, which crowded out other designs.

Researchers have long studied this mismatch of behaviors and the lies we tell ourselves about our behaviors. One study primed test subjects to think they were feeling warm (easily done by dropping in certain words in a fake test given to those subjects). When these people got up to adjust the thermostat, the researchers paused them to ask why they were adjusting the temperature. Convincing stories were told, and when the primed words were pointed out, incredulity reigned. Even when we are shown where our actions come from, we choose to believe our internal Theory of Mind module, which has already reached its own conclusion.

Subjects in fMRI machines have revealed another peculiarity. Watching their brains in real time, we can see that decisions are made before the higher-level parts of the brain are aware of them. That is, researchers can tell which button a test subject will press before the subject claims to have made the choice. The action comes before the narrative. We move; we observe our actions; we tell ourselves stories about why we do things. The very useful Theory of Mind tool—which we can’t shut off—continues to run and make up things about our own actions.

More pronounced examples of this come from people with various neurological impairments. Test subjects with vision-processing problems, or with the hemispheres of their brains severed from one another, can be shown a different image in each eye. Disconnected modules take in these conflicting inputs and create fascinating stories. One eye might see a rake while the other sees a pile of snow. The rake eye is effectively blind, with the test subject unable to say what it is seeing if asked. But the module for processing the image is still active, so when asked what tool is needed to handle the image that is seen (the snow), the person will answer “a rake.” That’s not the interesting bit. What’s interesting is that the person will go through amazing contortions to justify this answer, even after the entire process is explained to them.
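
Condensed into code, the pattern these studies describe might look like the following: the narrator from the blueprint sketch gains one more behavior, holding on to its story even when handed the true cause. (The hidden cause and the canned excuses here are hypothetical illustrations of mine; the real mechanisms are vastly messier.)

    import random

    # The hidden module acts for reasons the narrator never sees
    # (a hypothetical cause, echoing the thermostat priming study).
    HIDDEN_CAUSES = {"adjust_thermostat": "primed with heat-related words"}

    class InnerNarrator:
        """Runs after the act, never before it, and cannot be switched off."""
        TEMPLATES = ["because I was genuinely chilly",
                     "because it simply felt right",
                     "because it was the sensible thing to do"]

        def explain(self, action, revealed_cause=None):
            story = (f"I chose to {action.replace('_', ' ')} "
                     f"{random.choice(self.TEMPLATES)}.")
            if revealed_cause:
                # Shown the real cause, the narrator rationalizes
                # around it: the incredulity step.
                story += (f" And no, being '{revealed_cause}' "
                          f"had nothing to do with it.")
            return story

    action = "adjust_thermostat"   # the act comes first
    narrator = InnerNarrator()     # the story comes after
    print(narrator.explain(action))
    print(narrator.explain(action, HIDDEN_CAUSES[action]))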

This is how we would have to build a self-conscious machine. And you’re hopefully beginning to see why no one should waste their time. These machines (probably) wouldn’t end the world, but they would be just as goofy and nonsensical as nature has made us. The only reason I can think of to build such machines is to employ more shrinks.

Hugh Howey is a writer and the author of WOOL and the Molly Fyde series.

(Source: wired.com; October 4, 2017; http://tinyurl.com/ycbp3wfd)
