Cyborgs, robots, & A. I.


Robots, robotics, and artificial intelligence: the quest for uncertainty.

Asimov's three rules | Michio Kaku | Alan Turing | criteria for artificial intelligence (AI)


Much has been written on the subject of life mimicry in machinery.

This is but the briefest look at the 400-year development of self-moving devices.

Watchmakers in the 16th and 17th centuries first made self-moving mechanisms they called automata. These adorned clock faces and could move in ways that seemed to indicate, like wind-up toys, an internally controlled movement. A craze for automata followed when they were detached from clocks and used to amuse the wealthy.

Such clock-related devices used by watchmakers eventually proved significant in the timing devices used in manufacturing during the industrial revolution. Without control devices such as governors, sensors, and electrical circuit-switching instruments, telephone exchanges and dispatching centers for electrical machinery would have been impossible.

In 1920 the Czech writer Karel Čapek wrote a play, R.U.R.: Rossum's Universal Robots, drawing on the Slavic-derived word robota, or forced labor, to name the machines made to do servile work for humans.

Robot entered the Anglo-American vocabulary as the surrogate mechanical creation, modeled on the human form, made to do the drudgery and tedious work that was precise but repetitive to the point of demoralizing human operatives in factories.

So what is intelligently acting, automated machinery?

In the quest to create an intelligent machine, the mathematician and codebreaker Alan Turing proposed a test, in 1950, to determine whether machines, like the electromechanical code-breaking machines he helped design in World War Two to crack the German Wehrmacht's ciphers, could think.

Any machine, Turing argued, whose responses a human interrogator could not distinguish from those of a human being was, by that standard (prima facie), intelligent. This became known as the Turing test of machine intelligence.
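Turing's "imitation game" can be sketched as a blind-judging loop: if the machine's replies are indistinguishable from a human's, the judge can do no better than a coin flip. The respondents, the question, and the canned replies below are invented placeholders for illustration, not details from Turing's paper.

```python
import random

# Minimal sketch of the imitation game: a judge sees two unlabeled
# replies and must guess which one came from the machine.

def human_reply(question):
    return "I'd have to think about that."

def machine_reply(question):
    # A machine that perfectly mimics the human's answer.
    return "I'd have to think about that."

def run_round(judge):
    """One round: returns True if the judge correctly spots the machine."""
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)
    replies = [fn("What is a joke?") for _, fn in respondents]
    guess_index = judge(replies)  # judge picks which reply is the machine's
    return respondents[guess_index][0] == "machine"

def naive_judge(replies):
    # Identical replies give the judge nothing to go on: a coin flip.
    return random.randrange(len(replies))

# Over many rounds the judge is right only about half the time --
# on this standard, the machine passes the test.
correct = sum(run_round(naive_judge) for _ in range(10_000))
print(correct / 10_000)  # hovers near 0.5
```

The point of the sketch is that "passing" is defined behaviorally: nothing inside the machine is inspected, only whether its replies can be told apart from a person's.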

Bugbots at MIT:

"Unlike traditional mobile robots, which must be fed huge computer programs before they can move, Attila (MIT's AI laboratory robot built by Rodney Brooks) learns everything from scratch. It even has to learn how to walk....A simple feedback mechanism is all that is necessary for Attila (an insectoid: six-legged machine) to crawl all over the AI lab."

Kaku, Visions, pp. 71-72.

Alan Turing, computer code genius, while at university.
Since Turing, in the quest for artificial intelligence, or AI, writers have suggested the following criteria for what is necessary for the creation of a thinking machine:

  • binary code compilation (machine logic), based on Boolean algebra
  • a set of instructions, or program
  • speech formulation and recognition
  • pattern recognition
  • creation of hierarchies within the recognized patterns
  • an understanding of context
  • sufficient background knowledge of the current situation
  • the capacity to question programmed assumptions
  • decision-making trees (if/then formulations)
  • the capacity to read physiological responses correctly
  • a sense of humor?

 

Isaac Asimov, in his story collection I, Robot (1950), posited three laws of robotics in order to show the consequences of an inherent paradox, or contradiction, in human language and logic that can escape our attention. This is especially true of the attention of robot designers, builders, and programmers who rely solely on rational discourse and a replication of reason in some form of machine language to instruct their creation.

These three rules of Asimov, programmed into robots, are:

  1. No robot may ever harm a human being.
  2. A robot must obey instructions given to it by humans.
  3. A robot must protect itself from harm (in ways that do not violate rules one and two).

So, when a human orders a robot to harm a human being, what is the machine to do?

This is the context of Asimov's stories. He raises the problem of precisely how self-activating we want machines with artificial intelligence to be.
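Read as a priority ordering, the three rules behave like a chain of vetoes: rule one silently overrides a human order, and an order overrides self-preservation. A minimal sketch, in which the `Action` type and `permitted` helper are invented for illustration and are not from Asimov:

```python
from dataclasses import dataclass

# Sketch of the three rules as an ordered veto chain.
# The Action fields and permitted() helper are illustrative inventions.

@dataclass
class Action:
    harms_human: bool
    ordered_by_human: bool
    harms_self: bool

def permitted(action: Action) -> bool:
    # Rule 1: never harm a human -- it vetoes everything below.
    if action.harms_human:
        return False
    # Rule 2: obey human orders (any order reaching this line passed Rule 1).
    if action.ordered_by_human:
        return True
    # Rule 3: avoid self-harm unless a higher rule already decided.
    return not action.harms_self

# A human orders the robot to harm someone: Rule 1 vetoes Rule 2.
print(permitted(Action(harms_human=True, ordered_by_human=True, harms_self=False)))   # False
# A human orders a self-damaging but harmless task: Rule 2 overrides Rule 3.
print(permitted(Action(harms_human=False, ordered_by_human=True, harms_self=True)))   # True
```

The sketch also shows where the paradox bites: everything hinges on the machine correctly labeling an action as "harms a human," which is exactly the kind of contextual judgment the criteria list above says a thinking machine would need.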

 

Should a machine be capable of recognizing its own mortality, that is, its capacity for being replaced by a better robot or cyborg?


Lesson: We may, after all, want thinking machines to be aware of uncertainty and to distinguish the knowable aspects of their instructions from the unknown, or unaccounted-for, side effects of their behavior.

I further suggest they need a sense of humor, or comic relief. Humor may actually be a demanding measure of intellect, because clever humor requires holding two different contexts in mind simultaneously and shifting between opposite perspectives to catch the joke's meaning.

Today's Robots:

Kismet

Leonardo -- first to become an adept social companion & learning machine.

Sexual congress with robots?

Kaku on robots