The Philosophy of Mind
AI and neural computing
To appreciate how the brain might be thought to create the mind, one can
look at two different areas of computer science:
Some scientists claim that AI holds the key. The human brain,
they argue, comprises about 100 million memories and a few thousand functions.
All you need, in a sense, is raw computing power. Of course, it is never
going to be that easy, because the human brain, being the most complex thing
known in the universe, will take a great deal of computer power to
match. This view is countered by those who claim that
neural computing
holds the key to an understanding of the 'mind', since a neural computer
can take on characteristics that are normally regarded as 'human'. Human
brains are not programmed; they simply learn. That is the difference with
a neural computer: it does not have to be fed all its information, but
learns it for itself.
In other words, AI is the attempt to store and reproduce the workings of already developed brains (of those who program the computers); neural computing is the attempt to get a simple brain to 'grow' in intelligence.
This debate between AI and neural computing highlights a feature
of Ryle's argument in The Concept of Mind. Ryle made the distinction
between 'knowing that' and 'knowing how'. He argued that to do something
intelligently is not just a matter of knowing facts, but of applying them
- to use information and not just to store it. To do something skilfully
implies an operation over and above applying ready-digested rules. To give
one of Ryle's examples - a clock keeps time, and is set to do so, but it
is not seen as intelligent. He speaks of the 'intellectualist legend' which
is that a person who acts intelligently first has to think of the various
rules that apply to his or her action, and then think how to apply them (thus
making 'knowing how' just part of 'knowing that' - knowing the rules for
action as well as the facts upon which that action is based).
Ryle claims that when, for example, we make a joke,
we do not
actually know the rules by which something is said to be funny. We do
not perform two actions - first thinking of the rules, and then trying to
apply them. Rather, we simply say something that, in spite of not knowing
why, actually strikes us as funny.
The same could be said for a work of art, literature or musical composition.
The second-rate composer follows all the established rules and applies them
diligently. He or she produces art that follows fashion. The really creative
person does not follow rules, producing work that may be loathed or
controversial, but nevertheless may be said to be an intelligent production
- the attempt to express something which goes beyond all previous experience
or rules. This is a kind of 'knowing how': knowing how to perform creatively.
Now, if a computer is fed with sufficient information, it 'knows that'. It
can also follow the process that Ryle calls the 'intellectualist legend'
- it can sort out the rules first and then apply them to its new data. What
it cannot do - unless we claim that it suddenly 'takes on a life of its own'
- is to go beyond all the established (programmed) rules and do something
utterly original.
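Ryle's contrast can be made concrete in code. The sketch below is my own illustration, not from the text: a tiny program that 'knows that' in exactly the sense described, storing facts supplied by its programmer and mechanically applying explicit rules to derive new ones. The facts and rules chosen are hypothetical placeholders.

```python
# A toy illustration of 'knowing that': the program holds only the
# facts and rules its programmer supplied, and can never reach a
# conclusion those rules do not license.

facts = {"socrates is a man"}
rules = [
    # (condition, conclusion): if the condition is an established
    # fact, the conclusion may be added to what is 'known'.
    ("socrates is a man", "socrates is mortal"),
]

def apply_rules(facts, rules):
    """Repeatedly apply the rules until no new fact appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(apply_rules(facts, rules))
# The program sorts rules and applies them to its data - the
# 'intellectualist legend' made mechanical - but nothing here can
# step outside the rules and do something utterly original.
```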
By contrast with AI, neural networks offer the possibility that this might
one day be the case. A neural network responds to each new experience by
referring back to its memory of past experiences. In this way, it learns
in the immediate context of its experience, not by predetermined rules. It
is more like a human being. Everything is judged (by the neural network as
well as by the human being) in terms of past experiences and responses -
and so its understanding of the world is constantly changing, being modified
with each new experience. Its understanding is based on the relationships
between events, not on rules.
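The difference can be seen in a minimal sketch of learning from experience (my own illustration, assuming nothing beyond the idea described above). A single artificial neuron is never given the rule for the logical AND relationship; it merely adjusts its internal weights after each 'experience', judging each new case against its accumulated past responses.

```python
# A single artificial neuron (perceptron) learning AND from examples.
# It is never told the rule; it modifies itself after each experience.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(20):                      # repeated 'experiences'
    for (x1, x2), target in examples:
        output = 1 if weights[0]*x1 + weights[1]*x2 + bias > 0 else 0
        error = target - output          # compare with the outcome
        weights[0] += rate * error * x1  # modify the internal state
        weights[1] += rate * error * x2
        bias += rate * error

# After training, the neuron reproduces AND without ever having
# been programmed with its rule.
for (x1, x2), target in examples:
    print(x1, x2, 1 if weights[0]*x1 + weights[1]*x2 + bias > 0 else 0)
```

Its 'understanding' is encoded entirely in the changing weights, a pattern of relationships between inputs and past corrections, not in any stored rule.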
Some philosophers, while accepting that the brain is the origin of consciousness,
are suspicious of the computer-view of consciousness.
John Searle, of the University of California,
Berkeley, believes that consciousness is part of the physical world, even
though it is experienced as private and subjective. He suggests that the
brain causes consciousness in the same sense that the stomach causes digestion.
The mind does not stand outside the ordinary physical processes of the world.
By the same token, he does not accept that this issue will be solved by computer
programs, and has called the computer-view of consciousness a 'pre-scientific
superstition'.
There is something of a battlefield, therefore, between neuroscientists and
neurophilosophers. All seem to hold that the mind is in some way the product
of brain activity, but it is not at all clear what consciousness is. Some
hold that consciousness is really a higher-order mental activity (thinking
about thinking: being self-conscious); others claim that it is really a matter
of the sensations that the body receives, and which are recognised and related
to one another by the brain.
So, for example, Roger Penrose argued (in
The Emperor's
New Mind, 1989) that it would be impossible to create an intelligent
robot because consciousness requires self-awareness - and that is something
that a computer cannot simulate. He argues that consciousness is based on
a 'non-algorithmic' ingredient (in other words, an ingredient which does not
depend on an algorithm, a set of rules). Yet, although this applies to AI,
it does not necessarily apply to neural networks - for neural computing gets
away from the idea that intelligence requires pre-programming with rules.
A further step is the idea that computer programs could, as a result of
tiny random mutations, select the most appropriate options in order to survive
and develop. Dr Hillis of the Thinking Machines Corporation in Cambridge,
Massachusetts, is consciously using the process of evolution, as described
by Darwin, to create machines that can develop - using evolution to produce
machines of greater complexity than has been possible by conventional methods.
He allows computer programs to compete with one another, and then to breed
with one another, in order to become more and more adaptable and
successful.
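The principle can be sketched very simply (this is a toy illustration of Darwinian selection among programs, not Hillis's actual system): candidate bit-strings compete on a fitness test, the winners breed mutated copies of themselves, and over the generations the population becomes more 'successful'.

```python
import random

random.seed(0)   # fixed seed so the sketch is repeatable

TARGET = [1] * 12                        # stands in for ideal behaviour

def fitness(candidate):
    """Score a candidate by how many bits match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    """Breed a copy with occasional tiny random mutations."""
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

# A random starting population of twenty 'programs'.
population = [[random.randint(0, 1) for _ in range(12)] for _ in range(20)]
best = max(population, key=fitness)

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) > fitness(best):
        best = population[0]             # remember the fittest so far
    parents = population[:5]             # survivors of the competition
    population = [mutate(random.choice(parents)) for _ in range(20)]

print(fitness(best))                     # climbs towards 12 over the run
```

No individual program is designed; complexity emerges from competition, breeding and mutation alone.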
For consideration A robot would not need self-awareness in order to carry out actions intelligently - merely a set of goals to be achieved. So, for example, a chess program can play chess and beat its own creator - it only knows what constitutes winning at chess, not what it means to play a game.
Perhaps one feature of the mind/body debate that has been
highlighted in recent years by artificial intelligence and neural networking
is that it is no longer adequate to accept a simple 'ghost in the machine'
view of bodies and minds. Ryle's attack on this was based on language -
reflecting the fact that people did not speak as though a body had a separate
'ghost' controlling it. What we now find is that, as the mechanistic view
of the universe gives way to something rather more sophisticated, the nature
of 'intelligent' activity is probably a feature of complexity and of
relationships - that if something is complex enough, and if its operation
is based on a constantly changing pattern of relationships between its memory
components - then it appears to evolve in a personal and intelligent way;
it takes on character.
Chinese writing A most graphic way of exploring the difference between being able to handle and manipulate information (which a computer can do very efficiently) and actually understanding that information (which, it is claimed, a computer cannot) was given by the philosopher John Searle (in his 1984 Reith Lectures, Minds, Brains and Science). You are locked in a room, into which are passed various bits of Chinese writing - none of which you can read. You are then given instructions (in English) about how to relate one piece of Chinese to another. The result is that you are apparently able to respond in Chinese to instructions given in Chinese (in the sense that you can post out of the room the appropriate Chinese reply), but without actually understanding one word of it. Searle argues that this is the case with computers; they can sort out bits of information, based on the instructions that they are given, but they cannot actually understand that information.
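A few lines of code make the point vividly (a toy version of my own devising, not Searle's; the phrases in the rule book are arbitrary examples): the 'room' simply pairs each incoming message with a reply dictated by a rule book whose meaning it never touches.

```python
# Searle's Chinese Room as a lookup table: the program relates one
# piece of Chinese to another by rule, understanding none of it.

rule_book = {
    "你好吗？": "我很好。",        # "How are you?" -> "I am fine."
    "你叫什么名字？": "我叫王。",  # "What is your name?" -> "I am Wang."
}

def room(message):
    """Post out the reply the rule book dictates, or a stock fallback."""
    return rule_book.get(message, "请再说一遍。")  # "Please say it again."

print(room("你好吗？"))   # prints 我很好。
# The symbols are manipulated flawlessly, yet nothing in the function
# understands Chinese - which is Searle's point about formal programs.
```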
The philosopher Hubert Dreyfus has given a criticism of AI
based on an aspect of the philosophy of Heidegger (see p. 198). Heidegger
(and Dreyfus) argued that human activity is a matter of skilful coping with
situations, and this presupposes a 'background', which is all the facts about
society and life in general which lead us to do what we do. AI, according
to Dreyfus, attempts to reduce this 'background' to a set of facts ('know-how'
is reduced to 'know-that'). But this is an impossible task, because there
can be an ever-growing number of facts to be taken into account - for practical
purposes the number of background facts is infinite. Also, a person has to
select which of those facts are relevant to the decision in hand - and the
rules for deciding which are going to be relevant form another practically
infinite set of facts. This failure ever to provide enough rules and facts
to give the background for a human decision is, according to Dreyfus (see
Dreyfus, H. and Dreyfus, S., Mind over Machine, 1986), a problem which
will continue to be formidable for AI.
For reflection It seems to me that artificial intelligence is, basically, a modern computerised form of Frankenstein's experiment! It is assembling 'materials' (in this case, raw computer power and the data upon which it is to work) in the hope that it will find the key to make this become a thinking being, a self. So far, the AI limbs have started to twitch, but the monster has not taken on human form!
Background notes A person's view on the mind/body issue depends on his or her general view of the world and our knowledge of it. It is possible to trace the debate through the history of philosophy. For example:
Hume, the empiricist, analysing the phenomenon of life, finds
just the same thing when he analyses himself. He always sees some particular
thing. For Hume, the world is an assembly of bits and pieces, regulated by
general laws of nature; similarly, the self is an assembly of bits and pieces.
He cannot see a general 'self' because he cannot step outside the world of
phenomena.
There is also a tradition which puts the mind quite beyond what
can be known. So, for example, Wittgenstein, at the end of the
Tractatus, says: 'The subject does not belong to the world, but it
is a limit of the world.' In other words, from the standpoint of empirical
evidence, I do not exist. As I experience it, the world is everything that
is not me - although it includes my physical body.