VIRGINIA MONTECINO
Artificial Intelligence and Robotics
© Copyright 1999
DEFINITION OF ARTIFICIAL INTELLIGENCE: an attempt to model
aspects of human thought on computers. It is also sometimes defined as
trying to solve by computer any problem that a human can solve faster.
See "The Machine that Changed the World: Part I Giant Brains," available
at the Johnson Center Library and/or visit the companion
Web site.
``Machine intelligence will so fundamentally change existence that
we won't be able to conceive of going back to the way things
are now.''
``It will transform life the way language did the Stone Age,
mechanical inventions did the Industrial Age, electricity did the 20th
century.''
(Doug Lenat, creator of the Cyc project)
How do researchers develop intelligent computers?
- create models of human intelligence
- identify the fundamental components of human intelligence
- omit nonessential details
symbol processing - an inference engine directs the computer
to manipulate facts and rules in a knowledge base.
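Below is a minimal sketch of this idea: a tiny inference engine that
repeatedly matches facts against if-then rules and adds new conclusions
to the knowledge base. The facts and rules are invented for illustration.

    # Hypothetical facts and rules, invented for this example.
    facts = {"has_feathers", "lays_eggs"}
    rules = [
        ({"has_feathers", "lays_eggs"}, "is_bird"),  # IF conditions THEN conclusion
        ({"is_bird"}, "builds_nests"),
    ]

    changed = True
    while changed:                      # keep firing rules until nothing new is inferred
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # the THEN part adds a new fact
                changed = True

    print(facts)  # now includes 'is_bird' and 'builds_nests'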
Some prominent figures in AI who would argue that computers can think:
- Alan Turing (1912-1954). (See Lecture 11.) Turing is perhaps best
remembered for the concepts of the Turing Test for Artificial
Intelligence, the "acid test" of true artificial intelligence, and the
Turing Machine, an abstract model for modeling computer operations. He
said "a machine has artificial intelligence when there is no discernible
difference between the conversation generated by the machine and
that of an intelligent person." See an explanation of the Turing Test
by The PT-Project, Illinois State University. See the Java applet
Turing test.
- Marvin Minsky - has made many contributions to AI, cognitive
psychology, mathematics, computational linguistics, robotics, and
optics. In 1951 he built the SNARC, the first neural network simulator.
His other inventions include mechanical hands and other robotic devices.
His recent work aims to develop machines with the capacity for
commonsense reasoning.
Some prominent figures in AI who would argue that computers can't think:
- John Searle - Chinese Room theory - Searle, a philosopher, proposed a
thought experiment outlining why computers can't think. Suppose that a
person were given a set of purely formal rules for manipulating Chinese
symbols. The rules are a complete set of instructions that might be
implemented on a computer designed to engage in grammatically correct
conversations in Chinese. The person in the room, however, does not
understand Chinese, yet can produce the correct symbols to give the
correct response. See the explanation of the Chinese Room theory by
The PT-Project, Illinois State University.
- Hubert Dreyfus - Dreyfus, a philosopher, says that computer game
programs do nothing more than calculate which moves are best. Wrote
What Computers Still Can't Do: A Critique of Artificial Reason.
Question to ponder - But if the computer can learn from the moves
of the human chess player, for example, is it then capable of learning?
And is learning a sign of true intelligence?
Games
- Deep Blue chess match - ACM-sponsored match between World Chess
Champion Garry Kasparov and the "Deep Blue" chess program. Deep Blue
won the first game.
- Samuel's Checkers Player - Arthur Samuel's checkers-playing
experiments (1959 and 1967) were the earliest success stories in machine
learning. His machine evaluated board positions in the game of checkers.
- TD-Gammon - Gerald Tesauro was able to play his
programs in a significant number of games against world-class human players.
His TD-Gammon 3.0 appears to be at, or very near, the playing strength
of the best human players in the world. TD-Gammon learned to play certain
opening positions differently than was the convention among the best human
players. Based on TD-Gammon's success and further analysis, the best human
players now play these positions as TD-Gammon did.
Natural Language Processing - Language is what sets us apart from
the other members of the animal kingdom. Challenges of getting computers
to understand human language:
- Vocabulary, rules of grammar, syntax - complex and changeable; words
can be combined in many ways.
- Meaning - nuance; grammatically correct vs. meaningful.
- Ambiguity - metaphor, sarcasm, irony, cultural context.
Natural Language Processing Machines:
ELIZA - natural language processing machine. Joseph Weizenbaum
invented ELIZA more or less as an intellectual exercise to show that natural
language processing could be done. ELIZA is an automated psychoanalysis
program based on the psychoanalytic principle of repeating what the patient
says and drawing introspection out of the patient without adding content
from the analyst. Weizenbaum believed a computer program shouldn't
be used as a substitute for human interpersonal respect, understanding,
and love. He rejected its use on ethical grounds. See the views on ELIZA
in The Machine that Changed the World.
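A rough sketch of ELIZA's reflecting style is shown below. The patterns
are invented for illustration; the real ELIZA used a much richer keyword
script.

    import re

    # Invented patterns; each echoes part of the patient's sentence back.
    rules = [
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"i am (.*)", "How long have you been {0}?"),
        (r"my (.*)", "Tell me more about your {0}."),
    ]

    def respond(sentence):
        s = sentence.lower().strip(".!?")
        for pattern, template in rules:
            match = re.match(pattern, s)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # default reply draws the patient out

    print(respond("I feel lonely"))  # Why do you feel lonely?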
SHRDLU - pioneering natural language processing system. Could
manipulate blocks based on a set of instructions and was programmed to
ask questions for clarification of commands.
Common Sense - Lenat's "Cyc" project, Austin, Texas
- Cyc is a very large, multi-contextual knowledge base and inference engine
developed by Cycorp, Inc., in Austin, Texas. The goal of the Cyc project
is to construct a foundation of basic "common sense" knowledge - terms,
rules, and relations - that will enable a variety of knowledge-intensive
products and services. Cyc is intended to provide a "deep" layer of understanding
that can be used by other programs to make them more flexible. Cyc has
provided the foundation for ground-breaking pilot applications in database
browsing and integration, captioned image retrieval, and natural language
processing. Demonstration of Cyc's "intelligence":
Cyc ... demonstrated it couldn't be fooled into blaming a terrorist
act on a suspect who, it had been previously informed, had died.
``He couldn't have done it,'' Cyc responded. ``Dead people can't commit
terrorist acts.''
Computer News Daily, from Tod Ackerman's article, c. 1997, Houston
Chronicle. Retrieved 4/15/99 (http://computernewsdaily.com/132_051297_100008_32235.html)
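The exchange above amounts to a simple consistency check. A toy sketch
of the idea follows; the representation is invented and vastly simpler
than Cyc's.

    # Hypothetical mini knowledge base: years as integers.
    kb = {"suspect_death_year": 1995, "attack_year": 1997}

    def could_have_done_it(kb):
        # Common-sense rule: dead people can't commit terrorist acts.
        died = kb.get("suspect_death_year")
        return died is None or died >= kb["attack_year"]

    print(could_have_done_it(kb))  # False - "He couldn't have done it."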
Case-Based Reasoning (CBR) - views reasoning as a process
of remembering one or a small set of concrete instances or cases and basing
decisions on comparisons between the new and old situation. The problem
can then be solved by using the knowledge based on the earlier situation
and adapting it.
The steps of CBR generally involve (a sketch of these steps appears below):
1. Retrieve the most similar case.
2. Reuse the information in the retrieved case.
3. Revise or adapt the case to solve the current problem.
4. Retain the solved problem as another case (to be used to help solve
another problem).
A case may not be entirely suitable for a new problem and must be adapted.
Examples of CBR systems - CYRUS, HYPO, CASEY.
See Case Based Reasoning on the Web.
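The sketch below walks through the four steps with invented
medical-style cases; it only illustrates the retrieve-reuse-revise-retain
cycle, not any of the systems named above.

    # Invented case base: each case pairs a problem with its solution.
    case_base = [
        {"symptoms": {"fever", "cough"}, "diagnosis": "flu"},
        {"symptoms": {"sneezing", "itchy_eyes"}, "diagnosis": "allergy"},
    ]

    def solve(new_symptoms):
        # 1. Retrieve the most similar stored case (by feature overlap).
        best = max(case_base, key=lambda c: len(c["symptoms"] & new_symptoms))
        # 2. Reuse its solution. (3. Revise/adapt is omitted in this toy.)
        solution = best["diagnosis"]
        # 4. Retain the newly solved problem as another case.
        case_base.append({"symptoms": set(new_symptoms), "diagnosis": solution})
        return solution

    print(solve({"fever", "headache"}))  # flu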
Pattern Recognition -
Computer Vision - pattern recognition using cameras for eyes,
microphones for ears:
Optical Character Recognition (OCR)
Some examples of computer-vision applications:
- Satellite photo interpretation
- Facial characteristics detection
- Digital searching of videos, based on content
- Obstacle detection systems for aircraft navigation
- Automatic analysis of multi-dimensional radiological images
- Machine vision grading of quality of produce (apples, etc.)
- Shape recognition and analysis of machined parts
See:
GMU Computer Vision and Neural Network Lab
Carnegie Mellon U. Vision and Autonomous Systems Center
Computer vision online demos
Carnegie-Mellon Computer Science Computer Vision Home Page
Computer Vision Handbook, by Dr. Margaret Fleck
Virtual Reality (VR) - immerses viewers in virtual worlds even
though they are physically present in the real world. Each viewer moves
independently and freely throughout this world, allowing the participants
to see events from his or her own perspective. Participants enter a 3-D
graphical environment and control graphical objects in the environment
with body movements. A glove was the first input device. Computer
vision and robotics technologies can be used to support practical, useful
virtual environments. Applications: flight simulators, virtual surgery,
virtual museums. NASA used virtual reality to design the Pathfinder
mission to Mars.
Knowledge Engineering/Expert Systems
What are Expert Systems?
Conventional programming languages, such as FORTRAN and C, are designed
to manipulate data, such as numbers. Humans, on the other hand, can solve
complex problems using very abstract, symbolic approaches that are
not well suited to conventional programming languages. Abstract information
can be modeled in conventional programming languages, but significant effort
is needed to transform the information to a usable format which deals with
high levels of abstraction, more closely resembling human logic. The programs
which emulate human logic are called expert systems.
The expert system tool provides a mechanism, called the
inference
engine, which automatically matches facts against patterns and determines
which rules are applicable. The if portion of a rule applies to
the situation (if "such and such" happens or changes). The then portion
of a rule is the set of actions to be executed when the rule is applicable.
The inference engine then selects another rule and executes its actions.
This process continues until no applicable rules remain.
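A bare-bones sketch of this match-select-execute cycle appears below,
using a hypothetical thermostat rule; real expert system shells are far
more elaborate.

    # Working memory of facts.
    state = {"temp": 40, "heater": "off"}

    def too_cold(s):                    # the IF portion of a rule
        return s["temp"] < 65 and s["heater"] == "off"

    def turn_heater_on(s):              # the THEN portion: actions to execute
        s["heater"] = "on"

    rules = [(too_cold, turn_heater_on)]

    # The inference engine: match facts against patterns, fire an
    # applicable rule, and repeat until no applicable rules remain.
    while True:
        applicable = [(cond, act) for cond, act in rules if cond(state)]
        if not applicable:
            break
        applicable[0][1](state)         # execute the selected rule's actions

    print(state)  # {'temp': 40, 'heater': 'on'}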
Fuzzy Logic - Fuzzy logic is a superset of conventional
(Boolean) logic that has been extended to handle the concept of partial
truth - truth values between "completely true" and "completely false".
Dr. Lotfi Zadeh, Professor Emeritus at Berkeley and father of "Fuzzy
Logic," introduced the theory in the 1960s. See the interview with Zadeh.
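A minimal sketch of partial truth follows. The min/max/complement
operators are the standard fuzzy connectives; the "tall" membership
function is invented.

    def fuzzy_and(a, b): return min(a, b)
    def fuzzy_or(a, b):  return max(a, b)
    def fuzzy_not(a):    return 1.0 - a

    def tall(height_cm):
        # Invented membership function: 0 below 150 cm, 1 above 190 cm.
        return max(0.0, min(1.0, (height_cm - 150) / 40))

    print(tall(170))                  # 0.5 - a partial truth value
    print(fuzzy_and(tall(170), fuzzy_not(tall(185))))  # 0.125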
Genetic Algorithms - use the principles of Charles Darwin's
natural selection:
Natural selection - Some traits in a species make a member of
that species better suited to its environment than other traits.
The members of the species with the characteristics that give them the
strongest chance to survive pass those traits on to their offspring.
The members with the stronger characteristics mate and pass the traits
on in the process called natural selection.
Crossover is the term for natural selection in genetic algorithms.
In crossover, natural selection is accomplished when the genetic
algorithm (a short sketch follows the list):
1. Selects the set of best possible solutions to a problem.
2. Selects the best candidates among the set of best possible solutions.
3. Selects pairs of solutions and combines the best parts of each
solution to create a new solution - called crossover.
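Below is a minimal sketch of those three steps. The "count the 1s"
fitness function is a toy stand-in for a real problem.

    import random

    def fitness(solution):
        return sum(solution)                 # toy fitness: more 1s is better

    def crossover(a, b):
        cut = random.randrange(1, len(a))    # combine parts of two parents
        return a[:cut] + b[cut:]

    # A population of random candidate bit-string solutions.
    population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]

    for generation in range(30):
        population.sort(key=fitness, reverse=True)
        best = population[:10]               # steps 1-2: select the best candidates
        children = [crossover(random.choice(best), random.choice(best))
                    for _ in range(10)]      # step 3: crossover
        population = best + children

    print(max(fitness(s) for s in population))  # approaches 10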
Artificial Life - computer organisms that reproduce and adapt
to their environment, mimicking the natural selection process which occurs
with biological organisms.
Neural Networks - Artificial Intelligence systems that attempt
to duplicate the physical functioning of the human brain by using a biological
model of intelligence.
Three (3) parts of a neural network:
- input layer corresponding to the 5 human senses: sight, hearing,
touch, smell, taste
- processing layer (hidden) corresponding to neurons in the
brain
- output layer corresponding to parts of the body that act on
signals from the brain (muscles, etc.)
Input layer: cameras, microphones, data-gathering equipment
Processing (hidden) layer: computers plus programs and functions
Output layer: printers, screens, robot arms, chemical dispensers
NNs "learn" from examples and exhibit some capability for generalization
beyond the specific example. Knowledge is acquired by the network through
a learning process.
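The toy sketch below shows a single processing neuron learning the
logical OR function from examples; real networks use many neurons and
hidden layers, but the idea of adjusting weights from examples is the
same.

    # Training examples: inputs and the desired output (logical OR).
    examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    weights, bias, rate = [0.0, 0.0], 0.0, 0.1

    def output(inputs):
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if total > 0 else 0

    for epoch in range(20):                  # the learning process
        for inputs, target in examples:
            error = target - output(inputs)  # how wrong was the network?
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error             # nudge weights toward the answer

    print([output(i) for i, _ in examples])  # [0, 1, 1, 1]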
Where can neural network systems help?
- where we can't formulate an algorithmic solution.
- where we can get lots of examples of the
behavior we require.
- where we need to pick out the structure
from existing data.
Real human brains, however, are orders of magnitude more complex than
any artificial neural network so far developed.
Existing computer logic is not good at interacting with "noisy" data,
and adapting to unexpected or unusual circumstances.
See: Genetic Algorithms Archives
Hitchhiker's Guide to Evolutionary Computation
Herbert A. Simon, Allen Newell & J.C. Shaw: In 1957 devised
a logic theory machine (the first proof by machine) and the General
Problem Solver (GPS). The method for testing the theory involved
developing a computer simulation and then comparing the results of the
simulation with human behavior in a given task.
Simon believes that brain activities, as well as computer processing
activities, can be explained in terms of information processing. Creativity
can be automated, he believes, by having the computer do selective
searches, then recognize cues that index knowledge in given situations.
For example, he says, his chess-playing computer can separate the important
moves from the unimportant ones, for a given chess configuration, and even
know when the opposing player makes an error.
Herbert A. Simon and AI
Simon, H.A. Interview. (1994, June). Omni Magazine, 16(9), 70-89.
Robotics - Intelligent robots are in use today in entertainment,
commerce, industry, and advanced research - everything from interactive
toys to robots that go down oil wells to animated simulations of humans
in museum displays.
- The term robot comes from RUR, a play written by the Czech novelist
and playwright K. Capek.
- Leonardo da Vinci designed a "robot" in the late 15th century.
- The first "arm" that could be programmed to perform tasks was
developed by George Devol in 1954.
- Stationary robots (manufacturing)
- Mobile robots (surveillance)
- Applications too dangerous for humans: industrial activities,
planetary rovers, locating sunken ships, exploring active volcanoes,
and more.
- Edward Tufte - Visual Explanations: Images and Quantities, Evidence
and Narrative - the book explores how visual evidence influences
computer interfaces and design strategies, and how information is
transferred and represented, including in the arts and sciences.
- Nanotechnology - As we discussed in the last lecture, nanotechnology
is an emerging new field
which is attempting to break the barriers between engineered and living
systems. K. Eric Drexler, the founding father of nanotechnology,
envisioned the idea of using individual atoms and molecules to build
living and mechanical "things" in miniature factories. His vision: if
scientists can engineer DNA on a molecular level, why can't we build
machines out of atoms and program them to build more machines? The
requirement for low cost creates an interest in the "self-replicating
manufacturing systems" studied by von Neumann in the 1940s. These
"nanorobots," programmed by miniature computers smaller than the human
cell, could go through the bloodstream curing disease, performing
surgery, etc. If this technology comes about, the barriers between
engineered and living
systems may be broken. Researchers at various institutions and organizations,
like NASA and Xerox, are working on this technology.
See: information on Nanotechnology
nanoManipulator
J. von Neumann (1903-1957). A child prodigy in mathematics, he authored
the landmark paper explaining how programs could be stored as data.
(Unlike ENIAC, which had to be re-wired to be re-programmed.) Virtually
all computers today, from toys to supercomputers costing millions of
dollars, are variations on the computer architecture that John von
Neumann created on the foundation of Alan Turing's work in the 1940s.
It included three components used by most computers today: a CPU; a
slow-to-access storage area, like a hard drive; and fast-access memory
(RAM). The machine stored instructions as binary values (creating the
stored-program concept) and executed instructions sequentially: the
processor fetches an instruction, analyzes it, processes the data, then
fetches the next instruction, and so on.
Today "von Neumann architecture" often refers to the sequential nature
of computers based on this model. Nanotechnology creates new interest
in self replicating manufacturing systems studied by von Neumann in the
1940s.
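A toy sketch of the stored-program, fetch-and-execute idea follows; the
three-instruction machine is invented for illustration.

    # Instructions and data share one memory (the stored-program concept).
    memory = [
        ("LOAD", 7),     # address 0: copy memory[7] into the accumulator
        ("ADD", 8),      # address 1: add memory[8] to the accumulator
        ("HALT", None),  # address 2: stop
        None, None, None, None,
        5,               # address 7: data
        3,               # address 8: data
    ]

    pc, acc = 0, 0                   # program counter and accumulator
    while True:
        op, arg = memory[pc]         # fetch the next instruction...
        pc += 1
        if op == "LOAD":             # ...then analyze and execute it
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "HALT":
            break

    print(acc)  # 8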