February 3, 2012

Computers Versus Concepts: Can Computers Think?

Traffic computers control our signal lights. Microprocessors direct our car engines. Automated controllers run our factories.

And in an insult of sorts, Watson, a successor in spirit to Deep Blue, trounced our human compatriots on Jeopardy.

Computers have permeated our work and our leisure and our lives. This has mainly been for the good, enhancing human society, enabling our progress. But we wonder. Do we feel comfortable with so many functions performed by non-thinking machines? Do we risk something in passing off control to efficient, but nonetheless essentially mindless, entities?

Or maybe we feel the opposite: we wouldn't want our computers to think, lest we humans lose control.

So can computers "think"? Would it pose a danger, or provide a benefit?

I will explore those questions, and do so, as I often do with questions of this kind, with a thought experiment.

Poker Chips

Imagine round, plastic poker chips, like you might find at a casino. Rather than being imprinted with dollar figures, we stamp each chip with a different number. The numbers run from one to twenty-five thousand. We need so many because each chip stands for a word, though for this exercise we don't know which one.

Well, we will allow some exceptions. We will have a subset of chips with actual words, not numbers. These words will be mainly prepositions, articles, linking verbs, and the like, such as "is," "to," "can," and "from." This allows us to form relations between the numbered chips. For example, using the words and chips, we might have:

"Two" can be "Seventeen" from "Sixty-four."

That might stand for something such as a chair (two) can be assembled (seventeen) from wood (sixty-four). We proceed to construct thousands, even hundreds of thousands, of such relations.

We could now be asked questions, such as what can a "two" be "seventeen" from. We would search through the array of chip expressions, find our example expression, and answer "sixty-four." We would have found the correct answer. But we did so not by understanding anything, but rather by hunting through a collection of meaningless chip relationships. We had no idea what we were talking about. We didn't understand.
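To make the mechanics concrete, here is a minimal sketch of that lookup in Python. The specific chip numbers, the tuple layout, and the query function are my own illustrative assumptions, not details from the thought experiment itself.

    # Each stored expression is a tuple mixing numbered chips (integers)
    # with the literal function-word chips.
    relations = [
        (2, "can be", 17, "from", 64),   # "Two" can be "Seventeen" from "Sixty-four."
        (9, "is", 112, "from", 3),       # another arbitrary, meaningless relation
    ]

    def what_from(subject, verb_chip):
        # Answer: what can `subject` be `verb_chip` from? We scan the stored
        # expressions for a pattern match, exactly as the person with the
        # chips would: by shape alone, with no grasp of meaning.
        return [obj for (s, _w1, v, _w2, obj) in relations
                if s == subject and v == verb_chip]

    print(what_from(2, 17))  # -> [64]: the "correct" answer, found without understanding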

From Symbols to Meaning

What would it take to add understanding to the numbers on the chips?

We could translate the chips to words. But that is not actually an answer, since words are still symbols. If we translated the chip numbers into Latin, few of us would actually gain any understanding. The Latin words, and in fact most words in any language, are as arbitrary a label as the number on the chip.

Pictures, however, would help. If a dozen or so pictures of a chair were associated with the chip numbered two, we would begin to understand. "Two" would start to have meaning.

We can envision continuing the process over hundreds, then thousands, of the chips, associating each with pictures, or a movie, or a sound, or a smell, or even a touch sensation (hot, cold, sharp, soft, etc.). Our understanding would expand.

At some point, understanding the concepts associated with each chip would require more than pictures. "Push" could be a movie of a person with his or her shoulder to a dresser, moving the dresser. That may or may not be interpreted correctly. But by this point, we would have built an understanding of a good number of the chips, so the movie could be supplemented by the sentence "to push is to move an object. This can be done by walking while keeping your body against the object."

We could continue to build concepts upon concepts in the same manner. Once we reached a sufficient base, maybe when we got through the first ten thousand chips, we could step up to tackle the chips that represented words like "justice" and "truth."

So eventually, we could teach ourselves the "meaning" of all twenty-five thousand chips. We would understand.

But could we teach a computer so that it would "understand?"

The Role of Experience

Yes, and no.

Yes, because like our human above, a computer can readily associate pictures, movies, sounds, smells, and touches with a symbol. Admittedly, the computer would need many unique components, including specialized sensors, optimized processors, large memory stores, and custom software. But we shouldn't picture this as outlandish. We can picture a humanoid robot, with suitable sensors in the locations of a human's ears, eyes, nose, fingertips, and so on, connected wirelessly to the computer complex needed to process all that data.

As sophisticated as the Watson of Jeopardy fame is, such a robot would be a generation, maybe two, beyond Watson. Watson works at the level of word association, essentially linking our numbered chips. Watson has assimilated billions of associations between those chips, but nowhere does it appear that Watson associates a chip/word with anything other than another numbered chip, or an occasional picture or sound.

Our robot goes beyond that. It doesn't just associate "chairs" with "four legs". Our robot learns by sitting on actual chairs; in fact, we have it sit on dozens of chairs of all different types: metal ones, wood ones, plastic ones, soft ones, hard ones, squeaky ones, springy ones. And as this happens, the robot's sensors capture sounds, sights, textures, and smells at ranges and precisions well beyond human ability. All the while, the robot and its computers are building associations upon associations.

And we repeat the process with tables, then with beds, then dressers and the whole range of furniture. We then move to desk items (paper, books, pens, erasers), then to kitchen items, bathroom items, work bench items, then move outside, and on and on.
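As a rough illustration of what such training might deposit in memory, each sitting episode could add a record like the one below. The dataclass layout and field names are hypothetical, chosen only to suggest how multi-sensory associations accumulate around a single chip.

    from dataclasses import dataclass, field

    @dataclass
    class Episode:
        concept: int                                  # chip number, e.g. 2 for "chair"
        images: list = field(default_factory=list)    # digitized camera frames
        sounds: list = field(default_factory=list)    # e.g. a squeak on sitting
        touch: dict = field(default_factory=dict)     # pressure, texture, deflection
        smells: list = field(default_factory=list)    # varnish, plastic, upholstery

    # One of dozens of episodes per chair type, then tables, beds, dressers...
    chair_episode = Episode(concept=2, touch={"cushion_deflection_in": 0.375})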

When it has sufficient knowledge of the poker chips, we teach it to use the internet. The number of associations explodes.

We then add in a crucial element: evaluative software. This software allows for judgments, and comparisons, and balancing of alternate answers, and so on. We have evaluation modules for many aspects of the world: for engineering, for ethics, for aesthetics, for social dynamics.
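One plausible organization for that evaluative layer, sketched in Python: independent modules that each render a verdict on a proposed action, with a dispatcher that gathers them all. The module set, the signatures, and the hard-coded verdicts are assumptions for illustration only.

    def ethics(action):
        return ("permissible", "no person is harmed")

    def economics(action):
        return ("wasteful", "no economic case for the action")

    def engineering(action):
        return ("infeasible", "loads exceed material limits")

    MODULES = {"ethics": ethics, "economics": economics, "engineering": engineering}

    def evaluate(action):
        # Run every module; the caller weighs the combined verdicts.
        return {name: judge(action) for name, judge in MODULES.items()}

    print(evaluate("drive a freight train on a highway"))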

With all this, we then send our robot/computer out into the world, to shop, to travel, to attend college and to work, all to build additional and deeper associations and to tune the evaluation modules.

Let's say the training progresses for a decade. Would our robot now understand?

Yes, and no.

Yes, in that the computer would have an association map as rich and complex as a human's, and an ability to make judgments with those associations. For example, let's ask the robot/computer: "Would you drive a freight train on a highway, and why?"

If we asked Watson, I suspect it might stumble. Watson would find many associations between highways and freight handling, and associations of trains as vehicles and of vehicles (trucks, cars) riding on highways. It would find many citations that trucks ride on trains, and that train cargo rides on trucks.

In contrast, Watson would see only a few mentions of the fact that the wheels of a train would damage the highway, and that the wheels could not gain sufficient traction on the road surface to travel under control.

So Watson would be confronted with, at best, conflicting associations between freight trains and highways, and at worst, indications that trains and highways are compatible.

Watson would then likely falter with the words "would you" and "why." Those don't call for a fact, but rather a judgment, and Watson cannot actually evaluate; it can only associate.

In contrast, our robot would likely catch the intent of the question. We gave our robot the ability to evaluate, and the word "would" would explicitly trigger the evaluation modules. Our robot would run through them all, inspecting, for example, ethics, efficiency, and economics, but would finally reach a technical evaluation based on engineering.

In fairly short order (a few seconds), or maybe longer order (a few minutes), our robot/computer would compute the load stresses of the train wheels on the asphalt and concrete, and the lateral friction between the steel and the road. The robot would see that the concentrated load from the train wheels would exceed the carrying capacity of the road material, and also that friction between the wheels and the road surface would be insufficient to provide traction and lateral control.

Our robot would thus answer that it would not drive a train on a highway, since the train would fail in essential mechanical respects.

Could our robot actually do such engineering calculations? Computers do them routinely now. But today humans frame the question for the computer, so could our robot convert our question into the necessary mechanical setup? Yes. Converting a physical object or situation into an abstract force diagram may be daunting, but it is not mysterious or magical. The process of creating force diagrams can be converted into an algorithm, or set of algorithms, and algorithms can be programmed into a computer.
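As a hint of how little computation the final verdict actually requires, here is a back-of-the-envelope version of the wheel-load check in Python. Every number is a rough, assumed value: a typical loaded freight car, a guessed steel-on-pavement contact patch, and a pavement limit on the order of a truck tire's design pressure.

    wheel_load_lbf = 286_000 / 8       # loaded freight car spread over 8 wheels (assumed)
    contact_area_in2 = 1.0             # rigid steel wheel on pavement: tiny patch (assumed)
    contact_pressure_psi = wheel_load_lbf / contact_area_in2

    pavement_limit_psi = 100           # order of a design tire pressure on asphalt (assumed)

    print(f"contact pressure ~ {contact_pressure_psi:,.0f} psi "
          f"vs pavement limit ~ {pavement_limit_psi} psi")
    # ~36,000 psi against ~100 psi: the road fails long before the question
    # of traction even arises.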

So our robot thinks? Yes. But does it "understand"?

No.

The robot lacks consciousness. For all the robot's ability to associate and evaluate, it isn't conscious. Why do I say that? Long story short (and a discussion of computer consciousness could be long), our robot of the near future will have microchips of traditional architecture. These microchips may be very fast, may be very sophisticated, and may be made of exotic semiconductors, but they will be extensions of today's architectures nonetheless. In my view, such chips, even thousands put together, do not have the right configuration to create consciousness.

So, agree or not, let's posit that our robot is not conscious. And consciousness is likely the key to going beyond thinking to meaning. We know a chair not because we have digitally stored a sensor measurement of a 3/8-inch deflection in a cushion. We know a chair because we sense it, as a holistic experience, not as a set of mechanical sensor readings. Our robot has thousands of memory registers associating digitized pictures with a chair, but no such holistic experience.

Thinking Computers

So, our robot can think, but it doesn't understand. It has intelligence, but it does not have a sense of meaning. And this is because it lacks consciousness.

So now to the other part of our question: do we want our computers to think?

Numerous movies - Eagle Eye (2008), I, Robot (2004), The Terminator series (1984 and later) - have computers that think. In typical Hollywood fashion, the "thinking" of these computers, though well-intentioned, causes them to veer down unintended paths, to start to think they are smarter than humans, to the detriment of humans. We certainly don't want those types of thinking computers.

Isaac Asimov, in his extended fictional writing on robots, was not nearly so pessimistic. His three laws of robotics kept the robots on a more positive and controlled path.

Data, on Star Trek, stands as an even more positive view of a robot, altruistic to a fault. But he was offset by the Borg, a cyber-organism relentlessly determined to assimilate every civilization. The Borg could think, no doubt, but they were thoughtless in their destructiveness.

Which one of these images from fiction will be our future?

I lean towards none of them. Watson, and then a second generation of Watson like the robot described here, will likely impact human society in a more insidious manner: economically. Will that economic impact vault us forward or backward? Will we have a Star Trek-like Camelot, with computers freeing us for leisure and human advancement, or will thinking computers displace our vast ranks of knowledge workers, consigning the formerly well-employed to low-paying jobs? Utopia or Matrix-like enslavement: which might thinking computers bring?

Will the future tell? Maybe not my future. But likely the future of our children. May God help them.

