Corby Home page

"The universe is full of magical things patiently waiting for our wits to grow sharper."

Eden Phillpotts



Abstract

As is the case with any other endeavour, the first thing that someone pursuing an investigation into the field of Artificial Intelligence needs to do is to define precisely the objective of this area of study. After all, if one does not know exactly what he is looking for, chances are he will not find it. This is particularly acute in the case of Artificial Intelligence as there is not a broad consensus among its practitioners as to what its objective is, when stated in precise, concrete terms.

Just about everyone agrees on what the term Artificial means: some man-made device that performs a specific function. When it comes to defining Intelligence, the agreement stops, and the answers to the question that is the title of this article would provide a good example of the cacophony that prevailed in the biblical Tower of Babel. Here is a set of definitions of intelligence, ranging from the serious to the nonsensical, that someone compiled recently on Usenet:

Intelligence: Ability to reason. Adapt to environment. Adapt the environment to yourself. Put someone on Triton and see if they survive. Things IQ tests measure. A great painter. Common sense. Logic + memory. Metallic asteroids with superconducting strands. Bill Clinton and "triangulation." Star Trek's Data or R2D2. Jimmy Carter and differential equations. Sum of all talents (or skills). Produce radio signals that can be detected outside your solar system. Build a Dyson Sphere. What humans do. What ants do. Longevity of a species. What dolphins and humans do. What computers do. The design of the universe. Existence. What God endows on humans. Raelian UFOs' DNA experiments. What's in panspermia organisms. Higher levels of consciousness. "Mind" as opposed to brain. Holy spirit (i.e. link to the "soul" data bank), and finally the biggie: Ability to imply that other posters are stupider than you.

Some people are in a position to propose a definition of intelligence that reflects and promotes their moral, philosophical or religious beliefs. For instance, some people believe that intelligence is the exclusive province of human beings and that therefore we should not consider lower animals intelligent.

Artificial Intelligence practitioners, however, do not have that luxury; they are bound to a definition of intelligence that leads to concrete implementations in devices that people would call intelligent.

In this article I will provide a definition of intelligence compatible with the construction of Artificial Intelligence devices and discuss some other aspects directly or indirectly associated with intelligent entities (natural or artificial). The ideas expressed herein describe the underlying theoretical assumptions that presided over the design and implementation of the Corby system.

This is not a theoretical paper; it concentrates on the practical aspects of achieving human-level intelligence in artificial devices. It is thus mainly directed at people who concern themselves with the practical aspects of Artificial Intelligence.

If you want to know my opinions about the Turing Test read my article The Turing Test. If you want to know what I think about Searle’s Chinese Room thought experiment, read my article The Chinese Room.

Index

The old regime

The case of the ever-moving goal posts

The sad story of the sea squirt

Behave yourself

Intelligent models

The road map

Future concerns



The old regime

Artificial Intelligence began as an experimental field in the 1950s with such pioneers as Allen Newell and Herbert Simon, who founded the first artificial intelligence laboratory at Carnegie Mellon University, and John McCarthy and Marvin Minsky, who founded the MIT AI Lab in 1959. They all attended the famous Dartmouth College summer AI conference in 1956, which was organized by McCarthy, Minsky, Nathaniel Rochester of IBM and Claude Shannon.

People have long dreamed about the possibility of constructing intelligent machines, but it was the advent of the digital computer that made credible attempts to fulfil those dreams possible. The term "Artificial Intelligence" was coined by John McCarthy at the aforementioned Dartmouth College workshop, held over two months in the summer of 1956.

One popular and early definition of Artificial Intelligence research, put forth by John McCarthy at the Dartmouth Conference, is "making a machine behave in ways that would be called intelligent if a human were so behaving". This is clearly not a very good definition and is, to some extent, responsible for much of the mischief and discredit that plagues the field of Artificial Intelligence research these days.

For starters, it does not address animal intelligence. This objection is not so easy to dismiss, because many people, frustrated with the slow pace of advancement in this area, say: “Well, I understand that simulating human behaviour must be a very complicated thing to do, but why can’t you make a machine as intelligent as a dog, or a bee, or some other simple animal?”. And the truth is that we do not know how to do that.

The second objection to the classic definition of intelligence is that it lends itself to abuse. It is bad enough that some people would try to apply it, for instance, to a knitting machine, because it clearly does something that would be called intelligent if done by a human. But some people, in all seriousness, claim that a thermostat is in fact intelligent, based on that definition.

Finally, the classic definition is not very helpful in practical terms. If we go to that definition with a machine and ask “Is this machine intelligent?”, the definition throws the problem back at us saying, “Well, if you consider that it behaves intelligently, then it is in fact intelligent”. This is, of course, not terribly helpful.

We need a better definition of intelligence, one that can help us in our quest for human-level intelligence devices, and one that clearly separates what is intelligent from what is not, by capturing the essential characteristics of intelligent organisms.

Over the years, we have managed to implement several artificial systems that have this property in common: they perform one or more functions that otherwise require an intelligent being. Take for instance the lowly calculator; it performs a function that otherwise requires people; furthermore, it performs that function faster and more accurately than human beings do. In spite of all that, most people would not consider the calculator an intelligent device.

This looks like a contradiction: the calculator performs a non-trivial intelligent function and yet it is not universally considered an intelligent device. The only explanation for this apparent contradiction is that the ability to emulate an intelligent function is not enough to grant the “intelligent” attribute to a device. We reserve that attribute to organisms that have some other property.

Let us look further into this comparison between a human and a calculator. One difference between man and machine is obvious: People are not born with the ability to calculate; it must be learned. Besides learning to do arithmetic calculations, people learn a lot of other things. So, learning arithmetic, in human beings, determines their ability to express the external behaviour characteristic of a calculator. In other words, learning, in intelligent organisms, affects their external behaviour.

There is another argument that highlights the importance of learning. In many instances, the behaviour displayed by an intelligent agent must be modified on the spot, due to some change that the agent perceived earlier in its environment. If you look out the window and see the weather, the answer to my question “How’s the weather today?” will depend on what you previously perceived when looking out the window. It is reasonable to expect that this behaviour modification be effected by the same mechanism that allows us to learn arithmetic.

However, this aspect of learning has been overlooked in many Artificial Intelligence projects. Instead, people have concentrated on trying to produce some kind of glorified calculators. The rationale for this might have been that if we manage to duplicate enough intelligent functions, one at a time, eventually we will manage to duplicate the whole thing. The fallacy here is that what we are trying to build is a machine that can produce not only a pre-defined set of behaviours, but also any other intelligent behaviour that it can learn later on.

When we, as Artificial Intelligence practitioners, intend to build machines that emulate some aspect of human behaviour, it seems nonsensical to ignore the main characteristic of human behaviour: it is, for the most part, learned. Ignoring this results in systems that are brittle, do not scale well to real-world problems and cannot, by design, extend their functionality beyond the narrow original specification.

Another major problem that afflicts many Artificial Intelligence projects is that they rely entirely on the programmer’s ability to understand the relevant aspects of the world. For instance, most of the projects that deal with language rely on the ability of the programmer to understand the language. Then they implement what is best described as “conversation by proxy” between the programmer and the user. Such systems can hardly be called intelligent because the intelligence is in fact in the programmer’s head, not in the system. This is the problem of semantics, a thorny issue that so far has not been addressed properly.

The case of the CYC project, started in 1984 by Doug Lenat, is paradigmatic. It is an Artificial Intelligence project that attempts to assemble a comprehensive ontology and database of everyday common-sense knowledge, with the goal of enabling AI applications to perform human-like reasoning.

Much of the current work on the Cyc project consists of feeding the system by hand with knowledge representing facts about the world. Typical pieces of knowledge represented in the database are "Every tree is a plant" and "Plants die eventually". When asked whether trees die, an inference engine can draw the obvious conclusion and answer the question correctly. The Knowledge Base (KB) contains over a million human-defined assertions, rules or common-sense ideas.
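To make the inference step concrete, here is a minimal sketch of how a couple of hand-entered assertions could let a naive forward-chaining engine answer “Do trees die?”. The tuple representation and the single rule are assumptions made for this illustration; they are not Cyc's actual formalism or engine.

```python
# Illustrative only: a toy knowledge base and naive forward chaining,
# not Cyc's actual representation or inference machinery.

facts = {("is_a", "tree", "plant")}            # "Every tree is a plant"
rules = [(("is_a", "plant"), ("dies",))]       # "Plants die eventually"

def infer(facts, rules):
    """Apply the rules repeatedly until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (pred, obj), (new_pred,) in rules:
            for fact in list(derived):
                if len(fact) == 3 and fact[0] == pred and fact[2] == obj:
                    new_fact = (new_pred, fact[1])      # e.g. ("dies", "tree")
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(("dies", "tree") in infer(facts, rules))  # True: yes, trees die
```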



The case of the ever-moving goal posts

Artificial intelligence research was very heavily funded in the 1980s by the Defense Advanced Research Projects Agency in the United States and by the fifth generation computer systems project in Japan. The failure of the work funded at the time to produce immediate results, despite the grandiose promises of some AI practitioners, led to correspondingly large cutbacks in funding by government agencies in the late 1980s, leading to a general downturn in activity in the field, known as AI winter. Over the following decade, many AI researchers moved into related areas with more modest goals such as machine learning, robotics, and computer vision, though research in pure AI continued at reduced levels.

With the development of practical techniques derived from Artificial Intelligence research, some of its advocates have argued that opponents of Artificial Intelligence have repeatedly changed their position on tasks such as computer chess or speech recognition that were previously regarded as "intelligent" in order to deny the accomplishments of Artificial Intelligence. They point out that this moving of the goalposts effectively defines "intelligence" as "whatever humans can do that machines cannot". Of course this is not a question of names and people should not resort to such practices to disguise their failures.

However, we should not belittle the work produced so far by Artificial Intelligence research. Some of it, like expert systems, has proved very useful in real-world applications; other results, like the chess-playing Deep Blue, represent great technological achievements. Most probably, when we one day manage to produce a machine that mimics the human thought process, we will come to the conclusion that it is as slow and error-prone as we humans are. It is therefore reasonable to assume that some of the results of Artificial Intelligence research, although they cannot be considered intelligent, will stay with us for some time to come.

But the fact remains that Artificial Intelligence has so far failed to capture the essential aspects of intelligence. Some critics even talk about “abject failure” because, they claim, after so many years of promises and so much money invested, we are no closer to the final objective than we were at the beginning. Many Artificial Intelligence researchers argue that animals, which are simpler than humans, ought to be considerably easier to mimic. Yet satisfactory computational models of animal intelligence are not available today.



The sad story of the sea squirt

Every living individual dwells in its environment, which provides all that it needs for its survival, like food and shelter. The environment also contains some perils, like predators, that the individual must avoid if it is to survive. What the individual does in order to reap the benefits and avoid the perils of the environment is what constitutes its behaviour. Changes in the environment elicit changes in the behaviour of the individual to counterbalance their adverse effects.

The individual does not know a priori what the best response to a change is; it must determine that from the effects of its behaviour on the environment. The individual perceives changes in the environment through stimuli that reach its senses. It responds to the changes through its motor mechanisms, and detects the effect of its response. Stimulus, response and feedback are the basic elements of the activity of any intelligent being. Learning is then the process of acquiring new behaviour in response to the environment.

We should also consider the evolutionary aspect of intelligence. The brain is an expensive proposition in terms of energy, so why did nature come up with intelligent creatures? The only plausible answer is that intelligence provides some advantage to the individual in terms of survivability. What most affects an individual's chances of survival is its ability to adapt to its environment. This adaptation of the individual to changes in the environment can take two forms: a) The individual will try to change the environment to better suit its purposes; b) The individual will change its behaviour to better exploit the changes or to avoid their adverse effects. This latter aspect highlights again learning and the ability to change behaviour.

It is easy to imagine that intelligence appeared as a result of animals acquiring mobility, as a way to alleviate the pressure posed by a continuously changing environment. In the beginning, both plants and animals lived in a fairly stable environment and they could manage to respond to changes in that environment using just genetics. Then the organisms would specialise in some particular kind of environment and when the changes exceeded some threshold they would simply die.

We have a good example of this in plants: although there are plants in just about every place on earth, they have specialised to specific kinds of environment. When extreme changes occur, like too much heat or too little humidity, they die.

At some point in time, animals started to move outside their stable environment, either to look for food, to flee predators or just to escape some change in the environment that they could not cope with. Suddenly, they found themselves in a variety of strange environments that they did not know how to deal with. Intelligence is the response to that situation, because the animal can learn during its lifetime how to respond to several types of environment instead of just relying on evolution to come up with the genetic changes needed to adapt the individual to each specific environment.

But the most dramatic example of this link between intelligence and mobility is given by the sea squirt. The sea squirt belongs to a group of marine animals that spend most of their lives attached to docks, rocks or the undersides of boats. To most people they look like small, coloured blobs. It often comes as a surprise to learn that they are actually more closely related to vertebrates like ourselves than to most other invertebrate animals.

The sea squirt starts its life as a larva, or tadpole, that moves about freely. At some moment in its life, the tadpole decides that it is time to settle down, raise a family, you know, the works. Then it finds a place where it will spend the rest of its life, with its posterior firmly attached to some comfortable object.

At the same time its body undergoes a series of transformations that bring it more in accordance with its new sedentary life. One of the transformations consists of dissolving its brain, because it no longer has a use for it.

The paper where I got all this information did not say it explicitly but it is reasonable to infer that in their new way of life the sea squirts no longer go to the local pub and forget about old friends. Also the paper did not mention if they get to consume vast amounts of beer while watching sports events on cable-TV.



Behave yourself

The first thing to do in our quest for an intelligent machine is to devise some method by which we can evaluate each machine we make and therefore know our position in relation to the intended goal.

In the absence of a universally accepted definition of intelligence, the best way to proceed is to compare our machine with some living organism that we consider, a priori, to be intelligent, like for instance a human being. In most cases we cannot compare the two systems by inspecting their internal mechanisms, either because they are too different or just because of ethical considerations; we must resort to comparing the external manifestations of intelligence. In other words, we must compare the external behaviour of the two entities. This is appropriate because our ultimate goal is to simulate intelligent behaviour, not to duplicate the internal mechanisms of some intelligent organism.

Human beings are by definition intelligent, so it seems reasonable to say that if an artificial device had the ability to behave exactly the way that people do, we would not be able to distinguish between human and machine behaviour, and therefore be forced to conclude that the artificial device reached human-like intelligence.

This, by the way, is the gist of the argument given by the English mathematician Alan Turing. Wondering whether a machine can think, he described the following thought experiment: Imagine a locked room with a computer inside. Questions can be fed into the room, and its hidden inhabitant must reply. If, based on such a dialogue, we cannot determine whether the inhabitant is human or machine, then the machine can think. For details, see my article The Turing Test.

The second step on the road to intelligent devices is to investigate whether there are one or more characteristics that are common to all intelligent creatures and relevant to our purpose of simulating intelligent behaviour.

One such characteristic is the ability to learn. All intelligent creatures have this ability to change the way in which they respond to a given stimulus, during their lifetime. They are also able to acquire new behaviours that they are not born with. This characteristic must be relevant to artificial intelligent devices because it deals precisely with what we are trying to simulate – Intelligent behaviour.

The ability to learn is present in the whole range of intelligent living things. At the top end we have human beings, who must learn most of the behaviours they exhibit. At the other end we have the lowly Aplysia, or sea slug. The Aplysia is a very simple animal whose entire nervous system contains only about 20,000 neurons. Despite its simplicity, it exhibits a variety of behaviours and the capability to learn.

If our intent is to build a device that deals in one way or another with natural language, which is almost exclusively the province of human beings, there is a fundamental characteristic that we must consider: any human being can, at least in principle, learn any language on the face of the earth; many people have in fact learned several languages and use them on a regular basis. This characteristic is important not only because of the convenience of having a single machine deal with several languages but because it is an indication that language is being dealt with at the proper level of abstraction. This is related to the problem of semantics that we alluded to earlier.

Language independence, incidentally, provides us with a good tool to separate the wheat from the chaff as far as intelligent systems are concerned. When presented with a system that deals with language you should ask the question “Can it learn any language I like?”. If the answer is no, then the system is probably of the “conversation by proxy” type and its claim to intelligence can be dismissed.

It is amazing that these important aspects have been overlooked in so many Artificial Intelligence projects so far. One explanation may be that people deemed these aspects unimportant because, after all, what we want is a simulation of intelligent behaviour, not a duplication of the mechanisms used by living things to achieve it. But these issues seem so important that their dismissal deserved at least a sound justification.

In my article The Turing Test I provide another explanation for the fact that most people ignored the above issues: that they did in fact recognise their importance but did not know how to deal with them because they were too difficult, so they decided to fake the whole thing. The consequences of such a poor decision will be with us for some time to come.

So far we have learned some important aspects of our future intelligent machine: It must be able to learn, we must evaluate its performance by looking at its behaviour and, if it deals with language, it must be language independent.



Intelligent models

We now get to the crucial part of this article, where we get to define what intelligence is. But first we will play with models for a while.

Given the considerations presented in the previous sections, we can model a living organism at the most basic level with a simple stimulus-response mechanism. Then we set this model in some environment, natural or artificial, and get to define what form the stimuli and responses will take.

Let us imagine that we want to build an intelligent conversation robot, or chatbot, as they are commonly called. Then we can implement our basic model as a program that receives sentences from the user (the stimulus) and gives the appropriate answers (the response). We must not forget that this system must be language independent and must be able to learn.

How do intelligent organisms learn new behaviour? From the environment, in a variety of ways. One of the ways is through the feedback part of the mechanism that constitutes the basic unit of interaction between the individual and the environment. Every living organism possesses the instinct of self-preservation. A derivative of that is the inclination of living organisms to seek pleasure and avoid pain. This is what guides the individual when learning new behaviour: of the set of possible responses to a change in the environment, the organism will choose the one that maximizes pleasure and minimizes pain.
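A minimal sketch of this pleasure/pain-driven learning, with purely hypothetical names: each (stimulus, response) pair accumulates the feedback it has received, and the organism picks the best-scoring response it knows for a given stimulus.

```python
# Illustrative sketch of feedback-driven behaviour selection; the scoring
# scheme and all names are assumptions made for this example.
from collections import defaultdict

scores = defaultdict(float)      # (stimulus, response) -> accumulated feedback
known = defaultdict(set)         # stimulus -> responses tried so far

def learn(stimulus, response, feedback):
    known[stimulus].add(response)
    scores[(stimulus, response)] += feedback   # pleasure (+) reinforces, pain (-) suppresses

def choose(stimulus):
    candidates = known[stimulus]
    if not candidates:
        return None              # nothing learned yet; the organism must explore
    return max(candidates, key=lambda r: scores[(stimulus, r)])

learn("hot surface", "touch it", -1.0)
learn("hot surface", "pull back", +1.0)
print(choose("hot surface"))     # pull back
```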

Imitation is another behaviour-learning mechanism used by intelligent organisms. One organism can copy the behaviour of similar organisms that it perceives in its environment. This is immediately apparent in children: when they play mommy, daddy, nurse, doctor and teacher, they imitate as best they can the behaviour they perceive in adults. This is also one of the mechanisms they use to learn language. The parents say some word and the child tries to imitate it as best it can.

At a later stage of language learning, the parents use the question-answer model: They ask the child a question like “What colour was Napoleon’s white horse?” and expect the child to give the right response. If the child gives the wrong response they correct it by providing the right one: “White”.

Sometimes several learning mechanisms intermix in a complex way. Take for instance the case of two persons, A and B, engaging in a conversation. When A says something, this constitutes a stimulus to B, which responds accordingly. B’s response constitutes for A both feedback on his own action and an example of a possible response to it. B’s response is also the stimulus to A for the next round in the dialogue.

So, our model of an intelligent chatbot would learn primarily from pairs of sentences supplied to it somehow, one sentence representing the stimulus and the other the appropriate response to that stimulus. We can also devise a mechanism for our chatbot to learn from the feedback provided by the user, but that is not essential.

We are now in a position to propose a first approximation to the system architecture of our chatbot. We would use some sort of database where we would collect stimulus-response pairs and make the stimulus the search key. The stimulus would consist of phrases entered by the user and the database would provide the system’s response. We could implement a crude feedback mechanism where the user would grade the system’s responses; this would provide a decision mechanism in cases where we had more than one response to a given stimulus. The system would also consider the user input to be a correct response to its previous output and learn from that, and it would extract feedback from the user input to streamline its behaviour in future exchanges.
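As a concrete illustration, here is a minimal sketch of that first approximation: a store keyed on the stimulus, a crude grade attached to each stored response, and the user's next utterance recorded as a candidate response to the system's previous output. The class and method names are assumptions made for this sketch, not the Corby implementation.

```python
# Illustrative first-approximation chatbot: a keyed store of stimulus-response
# pairs plus a crude grading mechanism. Structure and names are assumptions.
from collections import defaultdict

class NaiveChatbot:
    def __init__(self):
        self.responses = defaultdict(dict)   # stimulus -> {response: grade}
        self.last_output = None

    def teach(self, stimulus, response, grade=0.0):
        self.responses[stimulus][response] = grade

    def grade(self, stimulus, response, score):
        # Feedback from the user adjusts the grade of a stored response.
        self.responses[stimulus][response] = score

    def reply(self, user_input):
        # Treat the user's input as a plausible response to our previous output.
        if self.last_output is not None:
            self.responses[self.last_output].setdefault(user_input, 0.0)
        candidates = self.responses.get(user_input)
        if candidates:
            answer = max(candidates, key=candidates.get)   # best-graded response
        else:
            answer = "I don't know how to answer that yet."
        self.last_output = answer
        return answer

bot = NaiveChatbot()
bot.teach("How are you?", "Fine, thank you.")
print(bot.reply("How are you?"))             # Fine, thank you.
```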

A system like this would respect the constraints that we imposed earlier for an intelligent system: the ability to learn and language independence. It would be able to provide the correct response to any question you care to teach it. In fact, a system like this would almost certainly be able to pass the Turing Test.

The question that we should ask now is: Is it possible to implement such a system? The answer to that question is no. For such a machine to be successful you have to contemplate every possible question that someone could ever ask the machine. Just consider the multiple ways in which an idea can be expressed, the little variations like punctuation and the use of synonyms. The number of possible combinations gets quickly out of hand. If the machine was able to get the idea behind the words, it would be easier, but that requires understanding and so far we have not addressed that problem.

But it gets worse. Now consider a normal conversation between two persons. What one person says depends not only on what the other person just said but also on all the previous sentences uttered by both persons during the whole conversation. Not only that, but a response may depend on some aspect of the world as perceived by either of the interlocutors. The number of responses for each question that you must contemplate beforehand then rises to impossible levels. This raises two very important issues: one is the amount of work needed to set up such a system; the other is the huge amount of storage space such a thing would require.

As if the above was not bad enough, the worst is yet to come. It is not a matter of if, but of when someone comes up with a question that you have not contemplated. In that event, a system like ours would be at a complete loss. Human beings can cope nicely with this situation: if everything else fails, they can always resort to asking for clarification. Our mechanical system cannot do that, for the simple reason that the question was not contemplated in the first place. But humans can do even better: they can infer the meaning of the question by establishing how close it is to other questions for which they know the response. Then they can use that answer with a degree of confidence proportional to the degree of similarity.

But the fundamental flaw with our system as proposed is that it relies entirely on the user’s ability to understand the language it deals with. It uses the “conversation by proxy” paradigm that we described earlier in this article and therefore it cannot even be considered an intelligent system.



The road map

In the preceding section we saw how a naïve implementation of our chatbot would work in principle, and why it would be impossible to implement. In this section we will discuss some modifications that we can introduce in the naïve model in order to make it practical. Finally, we will present a definition of intelligence based on the modified model.

One of the problems with the naïve model is the amount of storage needed for all the stimulus-response pairs that the system must deal with and the consequent work to fill that storage. So we can concentrate on that and devise ways of reducing the number of entries in our database.

One possible solution would be to use some abstraction mechanism that extracts the idea behind a sentence. That abstraction mechanism would be immune to small variations like an extra space, a punctuation mark, or even the use of a synonym, and would still be able to extract the underlying idea. This would be a huge help in reducing the number of entries needed in our database.

In the same vein, we could consider whole families of sentences that represent the same basic idea but applied to different objects. “I ate an apple”, “I ate a banana”, “I ate an orange” is an example of such a family. In this case we could extract the concept behind the family and use a compact representation of it in our system, further reducing the size of our database.
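A minimal sketch of both reductions, under simplifying assumptions (a tiny synonym table, sentences that differ in a single slot): first normalise away superficial variation, then collapse a family of sentences into one template. Real abstraction would of course have to go much further.

```python
# Illustrative normalisation and templating; the synonym table and the
# single-slot template are simplifying assumptions for this sketch.
import re

SYNONYMS = {"consumed": "ate", "an": "a"}

def normalise(sentence):
    # Drop punctuation and extra spaces, lowercase, map synonyms to one form.
    words = re.findall(r"[a-z']+", sentence.lower())
    return " ".join(SYNONYMS.get(w, w) for w in words)

def make_template(sentences):
    # Collapse sentences that differ in one position into a single template.
    # Assumes all sentences have the same length after normalisation.
    split = [normalise(s).split() for s in sentences]
    template = list(split[0])
    for i in range(len(template)):
        if len({words[i] for words in split}) > 1:
            template[i] = "<object>"
    return " ".join(template)

print(normalise("I consumed  an apple."))      # i ate a apple
print(make_template(["I ate an apple", "I ate a banana", "I ate an orange"]))
# i ate a <object>
```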

Both of the above mechanisms rely on the ability of the system to understand what it is being told. That is the semantic problem, a thorny issue that has so far received no satisfactory answer. In my article The Chinese Room I discuss the semantic problem and present some ideas about how we can approach it. One of those ideas is that understanding is the process of discovering the rules that determine the relationships between elements of the environment. Understanding is therefore, essentially, a prediction tool.

And this brings us to the point where we can define intelligence:

Intelligence is the ability to discover the rules that govern the relationships between elements of the environment.

Understanding is then the essential ingredient of intelligence and it is what will make our system “creative”, in the sense that it will be able to answer many more questions than the ones that we teach it explicitly. This reduces both the amount of storage needed and the work needed to fill that storage.

Abstraction and conceptualisation bring still another benefit: inference. In the same way that we humans are able to reason by analogy when presented with a question for which we have no answer, our chatbot will be able to reduce a sentence to an abstraction level where it is possible to make meaningful comparisons with other sentences for which it already has an answer.
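A minimal sketch of this inference by analogy, using simple word overlap as an assumed (and admittedly crude) measure of closeness: an unseen question is answered with the answer of the closest known question, together with a confidence proportional to the similarity.

```python
# Illustrative analogy by similarity: reuse the answer of the closest known
# question. Word overlap is an assumed, deliberately crude metric.

known = {
    "what colour is the sky": "Blue.",
    "what is the capital of france": "Paris.",
}

def similarity(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)          # Jaccard overlap of the words

def answer(question):
    best = max(known, key=lambda q: similarity(q, question))
    return known[best], similarity(best, question)

print(answer("what colour is the sky today"))   # ('Blue.', 0.83...)
```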

But the response to a given stimulus depends sometimes on previous knowledge that the individual possesses about the environment: If I tell you my name and later ask you what my name is, you will recall the knowledge previously recorded and give the correct answer. Knowledge can then be defined as the set of all the relevant information collected by an intelligent organism, which enables it to properly respond to changes in the environment.

So, the other aspect of learning is the acquisition of knowledge. If I tell you “It’s raining cats and dogs”, this constitutes knowledge about the world that you can use later to answer some question. You use such knowledge about the world in all situations where the real thing is not available. Therefore this type of knowledge constitutes what is commonly called a world model. People are able to update their world models either by direct perception or using language.

Our chatbot has no way of sensing the real world, so it must rely exclusively on language to build its world model. Whenever you say anything to it, besides giving an answer to your sentence, the system must also use the information you provided to update its world model.
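A minimal sketch of that dual use of every input, under assumed patterns covering just one kind of fact (the user's name): each sentence first updates the world model, and questions are answered by consulting it.

```python
# Illustrative world-model update from language alone; the two patterns are
# assumptions covering a single kind of fact, made only for this sketch.
import re

world_model = {}                                       # fact name -> value

def process(sentence):
    told = re.match(r"my name is (\w+)", sentence, re.IGNORECASE)
    if told:
        world_model["user_name"] = told.group(1)       # update the world model
        return "Nice to meet you, %s." % told.group(1)
    if re.match(r"what is my name", sentence, re.IGNORECASE):
        name = world_model.get("user_name")
        return "Your name is %s." % name if name else "You haven't told me yet."
    return "Noted."    # a real system would mine every sentence for facts

print(process("My name is Alice"))                     # Nice to meet you, Alice.
print(process("What is my name?"))                     # Your name is Alice.
```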

In mathematical terms, we can view an intelligent system as a discrete function R = f (S, K), where R is the response given by the system, S is the stimulus and K is the knowledge accumulated in the world model. Intelligence is the mechanism that allows an individual to extrapolate from a few points of the function to many more, in order to cover as much of the range of the S variable as possible.

We now have all the elements of our chatbot in place: it must be able to learn, it must be language independent, it must understand what people say and, finally, it must have a world model. Very much like we humans do, our system retains the capability to play back, parrot-like, the response corresponding to some stimulus. But it is the ability to extract the rules that determine the construction of responses that makes it, truly, an intelligent system.



Future concerns

There are lots of questions related to Artificial Intelligence that are the favourite theme of some philosophers and other idle thinkers. We can roughly distinguish three sorts of questions. To the first kind belong the predictions that this or that aspect of Artificial Intelligence will never be possible. Most of these we can safely ignore because they are just based on some sort of logical argument, usually flawed, and not backed up by solid physical evidence. However, we should study them carefully, because, as John von Neumann put it, “You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!".

The second kind of issue deals with questions that will be fully answered only when we finally manage to build a human-level intelligent machine. Usually these are just premature, idle questions that do not contribute one iota to the goal of Artificial Intelligence. No useful purpose is served by putting the cart before the horse in these situations.

Finally, the third kind of question addresses genuine concerns that people have about this new thing. Sometimes these questions highlight important issues that must be addressed if the dream of Artificial Intelligence is to come true some day. A good example of this kind of question is the one posed by Alan Turing back in 1950: "Can machines think?"

My favourite issue of this kind has to do with super-intelligence. There is no guarantee whatsoever that we will one day be able to build human-level intelligent machines. Given recent history, many people have started to despair of our ability to do so. But if we were able one day to capture the essence of the human thought process, there would be nothing to prevent us from making machines with supra-human intelligence except computing power, and we have already demonstrated that we can obtain vast amounts of that. This raises some important moral and ethical issues about how we will relate to such machines.

One of the most popular questions is: will we ever be able to build a machine that has consciousness? Consciousness is a quality of the mind generally regarded to comprise things such as self-awareness, sentience, sapience, and the ability to perceive the relationship between oneself and the environment. At the most basic level, consciousness denotes being awake and responsive to the environment; this contrasts with being asleep or being in a coma. Consciousness can be seen as a continuum that starts with inattention, continues through sleep and arrives at coma and death.

There is little doubt about our ability to build machines that are “awake and responsive to the environment”. But surely not any kind of responsiveness will do; we are looking for behaviour similar to human behaviour in some relevant aspects. Awareness of itself and its environment seems to be the defining feature of consciousness. This includes the ability of the individual to see itself as a separate entity, located in its environment and having a history. This seems to imply that in order to reach this level of consciousness a machine must have a body that moves about in the environment; in other words, it must be a mobile robot.

For some people, however, consciousness is something that accompanies some, or perhaps all, mental events. So when we perceive, we are conscious of what we perceive; when we introspect, we are conscious of our thoughts; when we remember, we are conscious of something that happened in the past, or of some piece of information that we learned. This aspect of consciousness is a bit more difficult to deal with, mainly because we do not know how to characterize it in terms of behaviour.

What about emotions? Will intelligent machines be able to feel emotions and act according to them? In humans, emotions have a big role in conditioning behaviour. There are a lot of emotions and they are strongly associated with basic instincts; they are one of the main drivers of behaviour because an intelligent organism learns how to react to changes in the environment based on the evaluation of the feedback of its actions provided precisely by those instincts.

Some emotions, like hunger, fear or sexual appetite, do not make much sense when applied to machines. This is the main obstacle in our quest to build intelligent machines that exactly duplicate human behaviour; we will not be able to do that unless we can find mechanical equivalents for all human emotions.

Another popular question related to Artificial Intelligence devices is about Free Will: if we can build an intelligent machine, will it have free will? And if the answer is “no”, what can we say about free will in human beings?

The idea of Free Will was fostered by religion as a way to highlight the moral aspect of human actions. The idea is that there is a god out there that judges each human action in terms of good and evil. God will reward good actions and punish evil ones. This implies that man has the freedom to choose, at every opportunity, how he will act. In other words, human beings are free to follow god’s teachings or ignore them.

In religious terms, Free Will would serve to distinguish between humans and other animals. Human beings are able to reason and their actions would be the result of that reasoning. By contrast, the acts of other animals are guided mainly by instinct and therefore they would not have Free Will.

The actions of living things are subject to their primitive instincts, like, for instance, the inclination to avoid pain and seek pleasure. Moral, religious or philosophical concerns sometimes demand that human beings act contrary to their instincts, either for the greater good of the community or for the sake of some ideal. When these two conflicting forces are in balance, Free Will is what tips the scale one way or the other.

Would an intelligent device, built according to the principles set out in this article, have the freedom to choose its actions according to some concept of Free Will? The answer is no: the machine would behave exactly as it has learned to behave. If that includes the concept of god or any other metaphysical concern, it will be taken into account, but in a deterministic manner: in exactly the same circumstances it will always do exactly the same thing. This does not mean that it is easy to predict what such a machine would do in a given circumstance. The basic mechanisms may be known, but the complexity due to the number of interacting elements would be enormous. It is like the weather: it can be predicted to a certain extent but not exactly, for the same reasons.

So the question now remains: if we could one day duplicate a relevant part of human behaviour using those principles, would we be forced to conclude that Free Will in human beings is only a product of our imagination?



Feedback

Comments and suggestions about this page are welcome and should be sent to fadevelop@clix.pt



Rev 1.0 - This page was last modified 2005-07-25 - Copyright © 2004-2005 A.C.Esteves
