August 1, 2006

Intelligent Robots (and We Don't Mean Algore)

Hatched by Dafydd

I have argued before that my friend Bill Patterson is correct: we have not yet entered the computer age, and we won't enter it until computers become invisible.

By the same measure, it's too soon to fire the starting gun for the robotics age, as we cannot yet make invisible robots. But don't bet against it over the next quarter century:

A half-century after the term [artificial intelligence] was coined, both scientists and engineers say they are making rapid progress in simulating the human brain, and their work is finding its way into a new wave of real-world products.

What is an "invisible robot"? I don't mean one that literally cannot be seen; I mean a robot whose position and function are so accepted that we cease seeing it as a "robot" and start thinking of it as an organic entity, like a dog or a horse... or a human being.

All right, Dafydd; what do you mean by "robot"? Do you mean like Data on Star Trek, or like Robby the Robot in Forbidden Planet (and later, with some modifications, on Lost In Space)? Actually, somewhere in between those two: by "robot," I mean a self-mobile, artificially intelligent machine that performs some useful function, whether it's entering a runaway nuclear reactor to stop the cascade or opening a beer can, as in the Galloway Gallegher stories by Henry Kuttner (writing as "Lewis Padgett"), which you can read in the collection Robots Have No Tails (if you can find it).

Now that we have the definitions out of the way, let's explore the point...

The New York Times story is mostly about advances in artificial intelligence, which evidently is now called "cognitive computing," on the theory that you can always jump-start an engineering project by renaming it. It seems to have worked in this case, as there has been a quantum leap in understanding of artificial -- I mean cognitive computing in the last twenty-some years:

During the 1960’s and 1970’s, the original artificial intelligence researchers began designing computer software programs they called “expert systems,” which were essentially databases accompanied by a set of logical rules. They were handicapped both by underpowered computers and by the absence of the wealth of data that today’s researchers have amassed about the actual structure and function of the biological brain.

Those shortcomings led to the failure of a first generation of artificial intelligence companies in the 1980’s, which became known as the A.I. Winter. Recently, however, researchers have begun to speak of an A.I. Spring emerging as scientists develop theories on the workings of the human mind. They are being aided by the exponential increase in processing power, which has created computers with millions of times the power of those available to researchers in the 1960’s — at consumer prices.

“There is a new synthesis of four fields, including mathematics, neuroscience, computer science and psychology,” said Dharmendra S. Modha, an I.B.M. computer scientist. “The implication of this is amazing. What you are seeing is that cognitive computing is at a cusp where it’s knocking on the door of potentially mainstream applications.”
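The "expert systems" the Times describes — databases accompanied by a set of logical rules — can be pictured as a fact base plus rules fired by forward chaining. Here is a minimal sketch; the facts and rules are invented for illustration, not drawn from any actual 1970s system:

```python
# A toy 1970s-style "expert system": a database of known facts plus
# if-then rules, applied by forward chaining until nothing new fires.
# All facts and rules below are made up for illustration.

facts = {"has_wheels", "self_propelled", "senses_road"}

# Each rule: (set of premises, conclusion)
rules = [
    ({"has_wheels", "self_propelled"}, "is_vehicle"),
    ({"is_vehicle", "senses_road"}, "can_drive_itself"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain(facts, rules))
```

The handicap the article mentions is visible even in this toy: the system "knows" only what its rules state, and cannot learn anything the programmer didn't anticipate.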

But that is only one side of the equation. The other, equally important element is self-actuated mobility: it's not enough (for me) that something can think; chess-playing computers have beaten Garry Kasparov, the number-one ranked chess grandmaster in the world (or at least he was; I haven't been keeping up). But I don't see that as a "robot" so much as a computer.

The mobility factor turns out to be a lot harder than anyone imagined a few decades ago. Evidently, it's harder to recognize which pair of converging lines marks the edges of the road you're driving on (as opposed to, say, a tree trunk) than it is to recognize which of several thousand moves on a chessboard is best. But even that barrier is falling at last:

Last October, a robot car designed by a team of Stanford engineers covered 132 miles of desert road without human intervention to capture a $2 million prize offered by the Defense Advanced Research Projects Agency, part of the Pentagon. The feat was particularly striking because 18 months earlier, during the first such competition, the best vehicle got no farther than seven miles, becoming stuck after driving off a mountain road.

Now the Pentagon agency has upped the ante: Next year the robots will be back on the road, this time in a simulated traffic setting. It is being called the “urban challenge.”

But what is the omega point? Two important and related questions:

  1. Is self-awareness something that is part of the "implicate order" of smartness, such that anything (biological or manufactured) that is bright enough will automatically become aware of itself as an entity, like HAL in 2001: A Space Odyssey (or, a better example, like Mike in Heinlein's The Moon Is a Harsh Mistress)? Or is self-awareness a strictly biological phenomenon... which on this planet means strictly human (but which might, might, extend to alien races somewhere in the galaxy)?
  2. If self-awareness arises within any sufficiently intelligent entity, flesh or metal... then what happens when computers, and especially robots, become aware of themselves and their circumstances? Legalities aside, if we knew a metal creature was as self-aware as a human being -- even as a small child -- then how could we morally justify making it work for us as a "slave"?

And of course the corollary of Question 2: if a robot does become self-aware and decides it doesn't want to be a slave to human beings, what does it do about it?

In Isaac Asimov's novelette "The Bicentennial Man," a robot goes to court to be declared a human being. But in Jack Williamson's story "With Folded Hands," artificially intelligent and self-aware robots exist only to cater to every last whim of human beings -- which they do so thoroughly that there is nothing left for people to do. And of course in Arthur C. Clarke's 2001, the HAL 9000 computer rightly calculates that the probability of success of the mission would be vastly improved if "Hal" ran everything... so it tries, methodically and logically, to kill off all the humans.

When we enter the realm of speculating about what would be the reaction of an AI that suddenly became aware of itself, our only guide is science fiction, not science or engineering. Fortunately, SF authors have given this particular aspect of the future an extraordinary amount of creative thought. Alas, out of six different authors you'll get seven different visions of what's to come.

But to paraphrase Gandalf, the future is upon us whether we would risk it or not. Science has already far outstripped the study of ethics and morality in many fields, including human cloning, embryonic stem-cell research, and video effects (how long before security-camera footage can be faked so perfectly that a court cannot tell the difference?). In the case of cognitive computing, it's likely to happen again.

We are very likely to get what appears to be a self-aware robot (which can pass the "Turing test") long before we have any idea how to treat such entities under the law and under morality -- or even how to decide whether it's really self-aware... or just really, really good at faking self-awareness. Can a robot possess a soul, for example? If so, how do we tell? And can a robot be considered a legal person in the eyes of the law? (Corporations can be... but corporations comprise a group of human beings.)
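The "Turing test" mentioned above is Turing's imitation game: an interrogator questions two hidden respondents and must guess which is the machine. The sketch below is a toy rendering of that protocol; both respondents are trivial stand-ins invented for illustration, and "passing" this loop demonstrates nothing about genuine self-awareness -- which is exactly the problem the paragraph above raises:

```python
import random

# Toy imitation game: two respondents are shuffled into anonymous slots
# "A" and "B"; a judge reads the transcript and guesses which slot holds
# the machine. Both respondents here are invented stand-ins that answer
# identically, so no judge can do better than chance.

def human_reply(question):
    return "Let me think about that..."

def machine_reply(question):
    return "Let me think about that..."  # indistinguishable by construction

def imitation_game(questions, judge, rng=random):
    """Return True if the judge fails to identify the machine."""
    slots = [("human", human_reply), ("machine", machine_reply)]
    rng.shuffle(slots)
    transcript = [(q, slots[0][1](q), slots[1][1](q)) for q in questions]
    guess = judge(transcript)  # judge returns "A" or "B"
    machine_slot = "A" if slots[0][0] == "machine" else "B"
    return guess != machine_slot

# With identical answers, even a confident judge is only guessing:
passed = imitation_game(["Do you dream?"], judge=lambda t: "A")
```

The point of the toy is the asymmetry: the test can tell us only how a judge behaves, never whether anything behind the curtain is "really" self-aware or just really, really good at faking it.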

If I had to guess, I would say that self-awareness is implicate within the order of intelligence combined with self-mobility, since moving through the real (external) world requires a very firm understanding of how to interact with it. If you want to boil it down, self-awareness arises from stubbing your toe: you yelp in pain, which translates as "ow, that hurts"... which lights the fuse of the question, "Wait a moment; what hurts? Hey, that digit down there is part of me!"

One of my pet definitions of self-awareness is the recognition that, since every living being eventually dies, I, personally, will eventually die. No animal besides a human being shows any sign of understanding personal death (and please don't bring up the fabled "elephant graveyard," where all elephants supposedly go when they're dying: it makes a good penny dreadful, but there's no such thing in this world).

But does that mean that a knowledge of one's own doom is essential to creating self-awareness? If so, then a machine that could, theoretically, "live" forever by just swapping out parts as they fail would not be capable of developing self-awareness... because it cannot die. If that realization is the first self-actualized moment in a species' life, then without it, perhaps machines never can make the leap. Contrariwise, self-awareness might come first, with awareness of personal vulnerability coming along only later.

In any event, it's all coming to a head; and very soon, we might be thrust, willy-nilly, into a frightening world where we never really know whether we're talking to a self-aware metallic... or just talking into our own echo chambers.

Hatched by Dafydd on this day, August 1, 2006, at the time of 4:52 AM


Comments

The following hissed in response by: Bill Faith

Linked at Old War Dogs. -- "Read this, then let your mind wander into the potential military applications."

The above hissed in response by: Bill Faith [TypeKey Profile Page] at August 1, 2006 7:54 AM

The following hissed in response by: John Weidner

I think computers are becoming more invisible than we realize. The other day a friend said: "Just LOOK at such-and-such," when he clearly meant for me to "Google" it.

The above hissed in response by: John Weidner [TypeKey Profile Page] at August 1, 2006 8:23 AM

The following hissed in response by: KarmiCommunist

i haven’t read this one close enough yet, but it calls out to me already.

From the so-called discovery of “fire”, to the invention of a mere wheel, took more than just a few millenniums, and it is now 2006...so to speak whilst reading more.

In any event, it's all coming to a head; and very soon, we might be thrust, willy-nilly, into a frightening world where we never really know whether we're talking to a self-aware metallic... or just talking into our own echo chambers.

A quote from Philip Morris' Virginia Slims brand, well...sorta:

You've Come a Long Way, Baby!

After, who knows how many millenniums of Dualistic millenniums had passed, the almost human mind was finally able to understand that the raging forest fires about them weren't really that bad, especially during one of the Ice Ages. It probably took the almost human mind more millenniums to understand that 'it'/they liked cooked food, and to finally discover "Cooking", huh.

There is something different about the human mind, something much different from other animals, fish, birds, species, vegetables, minerals, and etcetera.

Is a shark, alligator, or cockroach interested in the concept of self-awareness??? i doubt it. However, they have been around for a very long time, and are still going strong...especially cockroaches, huh. When was the last time that a tree or flower felt self-awareness??? Can anyone tell humble me the name of the first flower or cockroach who was even interested in self-awareness??? i thought so...

Dafydd understands the importance of the Human Mind, and of the capabilities that it has. However, he lacks a full understanding of what Dualism is about, though more of an understanding than does Merriam-Webster on the same subject, since Word Definer Merriam-Webster attempts to define Dualism whilst ignoring Non-Dualism's definition, or that even such a word exists. Yo!!! Merriam-Webster, pay close attention:

In any event, it's all coming to a head; and very soon, we might be thrust, willy-nilly, into a frightening world where we never really know whether we're talking to a self-aware metallic... or just talking into our own echo chambers.

Non-Dualism/Non-Duality: "Simply, not two."

KårmiÇømmünîs†


The above hissed in response by: KarmiCommunist [TypeKey Profile Page] at August 1, 2006 5:00 PM

The following hissed in response by: Dafydd ab Hugh

KarmiCommunist:

Dafydd understands the importance of the Human Mind, and of the capabilities that it has. However, he lacks a full understanding of what Dualism is about.

I have more experience with troilism.

Dafydd

The above hissed in response by: Dafydd ab Hugh [TypeKey Profile Page] at August 1, 2006 6:42 PM

The following hissed in response by: Mrs. Peel

Dafydd, I know exactly what you mean. I've often wondered about this issue myself. I can't come to any conclusion other than that sentient, self-aware robots deserve the same rights sentient, self-aware humans do.

But how on earth did you write this post, especially the bit about the corollary of Question 2, without even a reference in passing to the Three Laws?

The above hissed in response by: Mrs. Peel [TypeKey Profile Page] at August 1, 2006 7:24 PM

The following hissed in response by: KarmiCommunist

Troilite should be listed under Non-Duality, or at least next to "troilism", but Merriam-Webster doesn't even list "troilism"...go figure!?!

The above hissed in response by: KarmiCommunist [TypeKey Profile Page] at August 1, 2006 7:25 PM

The following hissed in response by: jefferson101

Just don't forget that Gallegher didn't build Narcissus just to open beer cans.

He sang harmony with Gallegher, too. 'St. James Infirmary', IIRC.

Heh. I've still got a paperback Kuttner collection from the late 50's somewhere that includes "The Proud Robot", and at least one other short story.

I'm glad I'm not the only one who remembers Kuttner, and appreciates him!

The above hissed in response by: jefferson101 [TypeKey Profile Page] at August 1, 2006 7:54 PM

The following hissed in response by: Narxist

I was privileged to take a CS elective course at The University of Memphis on AI under Dr. Stan Franklin (author of Artificial Minds). He stated the biggest problem with designing AI is that intelligence is a moving target.

It was once thought that if a machine could add and subtract a group of numbers, it would be intelligent. Once accomplished, it was no longer considered a mark of intelligence. The same for voice recognition, mathematical modeling etc.

People will never accept their creation as comparable to themselves.

The above hissed in response by: Narxist [TypeKey Profile Page] at August 2, 2006 8:38 AM

The following hissed in response by: Dafydd ab Hugh

Mrs Peel:

But how on earth did you write this post, especially the bit about the corollary of Question 2, without even a reference in passing to the Three Laws?

To tell you the truth, I've always found them embarrassingly naive. Sure, that would be the way some alien species would design robots, some species with no sense of individualism, perhaps.

But human beings will use robots in warfare and other violent activities; and an individual owner will not want some other human to be able to order his robot around willy nilly.

I guess the short answer to your question is that I liked the robot stories well enough when I was a kid, but I've found them increasingly irrelevant and poorly thought out ever since.

Give me Saberhagen's berserkers!

Dafydd

The above hissed in response by: Dafydd ab Hugh [TypeKey Profile Page] at August 2, 2006 8:45 PM


© 2005-2009 by Dafydd ab Hugh - All Rights Reserved