Frequently Asked Questions
This page contains answers to some of the more frequent questions asked about Corby, Artificial Intelligence and related matters. Many of the subjects included here are discussed in greater detail in these articles:
Here is a set of definitions of intelligence, which range from the serious to the nonsensical, that someone compiled recently on Usenet:
Intelligence:
- Ability to reason.
- Adapt to environment.
- Adapt the environment to yourself.
- Put someone on Triton and see if they survive.
- Things IQ tests measure.
- A great painter.
- Common sense.
- Logic + memory.
- Metallic asteroids with superconducting strands.
- Bill Clinton and "triangulation."
- Star Trek's Data or R2D2.
- Jimmy Carter and differential equations.
- Sum of all talents (or skills).
- Produce radio signals that can be detected outside your solar system.
- Build a Dyson Sphere.
- What humans do.
- What ants do.
- Longevity of a species.
- What dolphins and humans do.
- What computers do.
- The design of the universe.
- Existence.
- What God endows on humans.
- Raëlian UFOs' DNA experiments.
- What's in panspermia organisms.
- Higher levels of consciousness.
- "Mind" as opposed to brain.
- Holy spirit (i.e. link to the "soul" data bank).
- And finally, the biggie: ability to imply that other posters are stupider than you.
Corby is based on the following definition of intelligence:
Intelligence is the ability to discover the rules that govern the relationships between elements of the environment.
Artificial intelligence is defined as intelligence exhibited by anything manufactured (i.e. artificial) by humans. The term also covers machine intelligence, which refers to the field of scientific investigation into the plausibility of, and approaches to, creating intelligent systems using general-purpose computers.
Behaviour is the interaction of the organism with the environment, characterized by movement of the organism, or of its parts, in space and through time, and having at least one measurable effect on the environment.
Learning is the process by which an intelligent organism acquires the appropriate responses to changes in its environment. There are many learning mechanisms: One of them is by direct experience. If you touch a hot object, you get hurt and then learn that hot objects are not to be touched. Another major learning mechanism is by imitation: If you see other people fleeing a lion, you learn that lions are to be avoided. Another learning mechanism involves language and therefore it is mainly restricted to humans: You are told explicitly how to respond to some change in the environment.
If you consider a species as a whole, evolution can also be considered a learning mechanism. It provides a way for the species to better adapt to its environment.
Information has a very precise definition in information theory. This theory, due primarily to Shannon, models in precise mathematical terms some aspects of a communications channel, established between a transmitter and a receiver of messages. According to this theory, the amount of information that a message conveys is the base-2 logarithm of the inverse of the message’s probability.
In everyday use, in the field of Artificial Intelligence, information is just any change in the environment that is captured by the organism’s sensors.
The two definitions may come together when you consider that the information captured by the organism’s sensors from the environment (the transmitter) is used by the organism to update its world model (the receiver).
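Shannon's measure can be illustrated with a minimal sketch in Python (the probabilities below are invented for illustration):

```python
import math

def information_bits(probability: float) -> float:
    """Information conveyed by a message, in bits:
    the base-2 logarithm of the inverse of its probability."""
    return math.log2(1.0 / probability)

# A fair coin toss: each outcome has probability 1/2, so one bit.
print(information_bits(0.5))    # 1.0
# A rare message (probability 1/128) conveys far more information.
print(information_bits(1/128))  # 7.0
```

Note that the rarer (less probable) the message, the more information it carries, which matches the everyday intuition that surprising news is more informative.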
Knowledge is the set of all the relevant information collected by an intelligent organism, which enables it to properly respond to changes in the environment.
A fundamental part of the knowledge that an organism possesses constitutes its world model – see 1.8 below.
A concept is a compact representation of a family of similar ideas. Concepts are very important to intelligent organisms because they provide mechanisms for data compression, inference and creativity. For more details see the Learning Page, which is part of the Operation Manual.
A world model is the part of the intelligent organism's knowledge that reflects the state of the real world, as perceived by the individual. The world model is essential for the organism to respond to demands from the environment when the relevant part of the real world is not directly observable at the moment.
Inference is the ability to draw conclusions based exclusively on the knowledge that one possesses. There are basically two forms of inference: Inductive and deductive.
In deductive inference one derives conclusions that follow necessarily from the premises. In inductive inference, one tries to establish general principles from a limited number of observations.
Both types of inference are very important to intelligent systems because that is what enables them to acquire new responses based exclusively on what they already know.
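The two forms of inference can be sketched as follows; this is a toy illustration only, and the rules and observations are invented:

```python
# Deduction: apply an already-known general rule to a particular case.
rules = {"lion": "flee"}  # hypothetical knowledge: lions are to be avoided

def deduce(stimulus: str):
    """Return the response dictated by the known rules, if any."""
    return rules.get(stimulus)

# Induction: from a limited number of observations, propose a general rule.
observations = [("lion", "flee"), ("lion", "flee"), ("lion", "flee")]

def induce(observations):
    """Naive generalisation: if every observed response to a stimulus
    agrees, adopt it as a general rule."""
    seen = {}
    for stimulus, response in observations:
        seen.setdefault(stimulus, set()).add(response)
    return {s: r.pop() for s, r in seen.items() if len(r) == 1}

print(deduce("lion"))        # flee
print(induce(observations))  # {'lion': 'flee'}
```

Deduction is certain but can only restate what the rules already contain; induction produces genuinely new rules, at the cost of possibly being wrong.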
An intelligent device is some artefact that behaves, in some particular aspect, like an intelligent living organism. As the ability to learn is a characteristic that is common to all living organisms, an intelligent device should also have it.
In his 1950 paper “Computing Machinery And Intelligence”, Alan Turing proposed the following thought experiment, which he called “The Imitation Game”: Imagine a locked room with a computer inside. Questions can be fed into the room, and its hidden inhabitant must reply. If, based on such a dialogue, we cannot determine whether the inhabitant is human or machine, then the machine can think.
For more information on this take a look at The Turing Test article.
It is doubtful. No well-established law of physics deems it impossible, so, at least in theory, it is possible. The problem is that for a machine to achieve that state would be very costly, especially compared to the benefits it would bring. For a more detailed discussion of this question take a look at The Turing Test article.
In 1980, John Searle proposed the Chinese Room thought experiment that goes like this: A person who understands no Chinese sits in a room into which written Chinese characters are passed. In the room there is also a book containing a complex set of rules (established ahead of time) to manipulate these characters, and pass other characters out of the room. This would be done on a rote basis, e.g. "When you see character X, write character Y". The idea is that a Chinese-speaking interviewer would pass questions written in Chinese into the room, and the corresponding answers would come out of the room appearing from the outside as if there were a native Chinese speaker in the room. This whole set-up depicts a computer executing instructions (program) to manipulate abstract symbols.
It is Searle's belief that such a system would indeed pass the Turing Test, yet the person who manipulated the symbols would obviously not understand Chinese any better than he did before entering the room. Searle proceeds to try to refute the claims of strong AI: that if a machine were to pass a Turing test, then it could be regarded as "thinking" in the same sense as human thought; or, put another way, that the human mind is some kind of computer running a program.
For more information on this take a look at The Chinese Room article.
Strong AI refers to the possibility of creating a Human-level AI, in which the computer program thinks and reasons much like a human mind.
As there is no well-established law of physics that prevents it from happening, Strong AI is, at least in theory, possible. Many people tried, over the years, to demonstrate by logic reasoning that Strong AI is impossible. The most famous of all is John Searle’s Chinese Room thought experiment – See 2.4 above.
Some people refer to Strong AI as the possibility of creating a Human-like AI. This is highly improbable – See 2.3 above.
Consciousness is a quality of the mind generally regarded to comprise things such as self-awareness, sentience, sapience, and the ability to perceive the relationship between itself and the environment. At the most basic level, consciousness denotes being awake and responsive to the environment; this contrasts with being asleep or being in a coma. Consciousness can be seen as a continuum that starts with inattention, continues through sleep and ends in coma and death. Some people also require that the individual have some sense of its history to demonstrate consciousness. In sum, the conscious individual must see itself as a separate entity, located in its environment and having a history.
There is little doubt about our ability to build machines that are “awake and responsive to the environment” and have a history. No aspect of consciousness is, a priori, out of our reach.
Corby is an intelligent conversation robot that simulates human verbal behaviour. Its most distinctive features are its ability to learn and its language independence. Corby is based on a stimulus-response model. The stimulus consists of a statement provided by the user, which causes Corby to produce an appropriate response.
Besides its ability to learn and its language independence, Corby provides innovative solutions to the usual problems in Artificial Intelligence: Learning, abstraction, inference, conceptualisation, knowledge representation and world models. But its most important contribution to the field is in the area of semantics, one of the thorniest problems in Artificial Intelligence. If you want to learn how Corby is able to understand what people say, take a look at the Learning Page, which is part of the Operation Manual.
Yes, according to the definition of intelligence given in 1.1.
Corby learns from the normal interaction with its users. The basic learning model uses a pair of paragraphs where one of them constitutes the stimulus and the other is the appropriate response to that stimulus. This can be done automatically during normal system use; in this case Corby will consider any statement input by the user as the appropriate response to its previous production. You can also submit text or HTML files for Corby to learn from, in an autonomous way.
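The stimulus-response learning loop described above can be sketched in a few lines of Python. This is only an illustration of the general idea, not Corby's actual implementation; the class and method names are invented:

```python
class StimulusResponseBot:
    """Toy sketch of a stimulus-response learner: each user statement is
    treated as the appropriate response to the bot's previous production."""

    def __init__(self):
        self.knowledge = {}      # maps stimulus -> appropriate response
        self.last_output = None  # the bot's previous production

    def teach(self, stimulus: str, response: str) -> None:
        """Explicitly learn a stimulus-response pair."""
        self.knowledge[stimulus] = response

    def respond(self, statement: str) -> str:
        # During normal use, the user's statement is learned as the
        # appropriate response to whatever the bot said last.
        if self.last_output is not None:
            self.teach(self.last_output, statement)
        reply = self.knowledge.get(statement, "I do not know how to respond yet.")
        self.last_output = reply
        return reply

bot = StimulusResponseBot()
bot.teach("Hello", "Hi there!")
print(bot.respond("Hello"))  # Hi there!
```

A real system would of course match stimuli approximately rather than by exact lookup, which is where inference (see 1.9) comes in.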
No. Corby’s responses are a deterministic function of the contents of the Knowledge base. This, in turn, is the result of learning. Corby will behave, in principle, exactly in the way it was told to behave. When Corby is asked a question for which it does not have a response, it will use inference but that is also based on what it learned before. That being said, it is not easy to predict what Corby will say in a particular instance. A response can depend on many variables and it is impossible to determine which ones just by looking from the outside.
Yes, and if you do
By arts and mime
In Corby’s own time
It will use them too
Comments and suggestions about this page are welcome and should be sent to firstname.lastname@example.org
Rev 1.0 - This page was last modified 2005-07-17 - Copyright © 2004-2005 A.C.Esteves