Behind the Brain

with Geoff Watts

"Robin Dempsey seated himself at a console consisting of a typewriter keyboard and a visual display screen.Eliza was on line."

Geoff Watts : Robin Dempsey is a character in "Small World", the novel by David Lodge, but there's nothing fictional about Eliza, the program he's using; it really does allow the operator to hold a conversation with a computer.

"Hello",Robin typed on the keyboard.
"Hello,how are you feeling",replied Eliza,the printed words appearing instantly on the VDU.
"I'm feeling terrible",Robin replied.
"What do you mean by that precisely?"
"I'm feeling depressed"
"Do you have any friends?"
"Not really."
"Why is that?" Robin Dempsey paused for some considerable time before he typed in his reply.
"I don't know."
"Do you really not know or are you just avoiding the question?"

Geoff Watts : Using Eliza is a creepy experience. As the replies come back, reasonable, sensible, even sympathetic, it's difficult to stop yourself believing you're not chatting to another person. When I had a quick fling with Eliza, the man who released me from her tempting embrace was computer scientist Aaron Sloman. He can run the Eliza program on the machine in his room at Birmingham University.

Aaron Sloman : I'll try something which sometimes produces an interesting result. "I am demonstrating you"
"Perhaps in your fantasy we are demonstrating each other", says Eliza.
Erm (laughs), it's not really understanding anything. If I type "I", lots of gobbledegook, gobbledegook, "you", we get "Perhaps in your fantasy we", all that gobbledegook, "each other". In other words, it didn't even know that I wasn't talking English. The point is, you could add more and more rules to cover more and more cases, but it still isn't taking decisions. It isn't thinking. It's doing something else which produces the appearance of thinking.
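Sloman's demonstration works because Eliza is nothing but surface pattern matching and substitution. A minimal sketch of the idea (the rules and pronoun table below are illustrative, not Weizenbaum's original script) shows how the "demonstrating each other" reply and the gobbledegook reply both fall out of the same mechanical rule:

```python
import re

# Minimal Eliza-style responder: surface pattern matching plus pronoun
# reflection. These rules are illustrative, not Weizenbaum's originals.
REFLECTIONS = {"am": "are", "i": "you", "my": "your", "me": "you"}

RULES = [
    # "I <anything> you" -> "Perhaps in your fantasy we <anything> each other"
    (re.compile(r"i (.*) you\b", re.I), "Perhaps in your fantasy we {0} each other"),
    (re.compile(r"i'?m feeling (.*)", re.I), "What makes you feel {0}?"),
    (re.compile(r".*", re.S), "What do you mean by that precisely?"),
]

def reflect(fragment):
    """Swap first-person words for second-person ones in the matched text."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza(utterance):
    """Return the first matching rule's response; no understanding involved."""
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(eliza("I am demonstrating you"))
# -> Perhaps in your fantasy we are demonstrating each other
print(eliza("I wibble wobble you"))
# The same rule fires on gobbledegook: the program never notices.
```

The second call is exactly Sloman's point: the rule matches any words at all between "I" and "you", so the program cannot even tell English from nonsense.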

Geoff Watts : So it's not too difficult to expose Eliza as a fraud. But even a brief experience of using the program is enough to set you thinking, or worrying, about the possibility of creating a genuinely conscious machine, and what it might mean for human self-esteem if we did. Igor Aleksander [Ref: Iotm29], a computer engineer at London's Imperial College, is well aware that no one, biologist or psychologist, never mind an engineer, can hope to court universal popularity by framing scientific explanations for the richness of human thought.

Igor Aleksander : People used to think that the heart was the centre of emotions, and the centre of goodness and evil, and so on. We now know that the heart is a pump, but you could have been burnt at the stake for stating that! So, you know, people who now say, well, our consciousness is the function, a very complex but very interesting function, of the way in which the neurones in the head work, are still at the level of heresy.

Geoff Watts : Of course, yesterday's heretics have a habit of becoming tomorrow's visionaries. Right now, Kevin Warwick, the Professor of Cybernetics at the University of Reading, is still firmly in the heretics' camp. Not least on account of his decidedly bullish claims about the future of artificial intelligence, of machines that can think.

Kevin Warwick : It may never be possible to get exactly the same form of consciousness, because the machine would be something different. But that's not to say it would be worse, what the machine comes up with. In fact, far from it, I can easily see that machine consciousness, if we can call it that, could be far superior to that of humans. The brain makes a very, very simplified view of the world outside which is suitable for humans, two, three dimensions at most, that's what we think of the world in [Speak for yourself, I go anywhere from 4 to 11 -LB], spatially and in time. The world is far more complex than that, out there. Our senses are fairly limited, and the brain just deals with those limited senses. It may well be that it's not humans that understand the human brain; it may well be that it's machines that have a full grasp and understanding of what the human brain's all about, whilst we might not understand what they're actually thinking about.

Geoff Watts : Kevin Warwick's disconcerting habit of talking so matter-of-factly about the rosy prospects of machine consciousness stems from his conviction that artificial intelligence has few limits. His enthusiasm for demonstrating, indeed for living, the future led him to have a small electronic device implanted beneath the skin of his arm. Via a central computer, this tracks his movements around the building where he works, triggers various greetings, "Hello Professor Warwick", and, as he approaches doors, opens them. A simple technical trick, of course. The computer isn't communicating with Kevin Warwick's brain. But it's a step. "We're getting there", he claims.

Kevin Warwick : Already, in Atlanta, they have a guy whose brain is wired up, and he has been, well, moving cursors around on computers just by his thoughts [Ref: Tomorrow's World {disabled people use thoughts to communicate}]. So this is looking at merely an extension of that, just having particular thoughts, and learning to have certain thoughts, which will open doors, or get buildings to say hello to you, operate computers, communicate with other people even. This is actually taking signals, thought processes, conscious ideas, from one person to a computer, one person to another.

Geoff Watts : Perhaps in principle, would it get to the point where you could actually experience someone else's subjective view of the world, do you think?

Kevin Warwick : Exactly that. If someone thinks of the colour red, do they think of the same thing as you? Or if someone is excited by something, or is frightened of something, I think the possibility of you experiencing what they feel is very real; I can't see that technically there's any reason to stop that in the very near future. [Yes there is -LB Ref: R.Penrose "ENM" etc] So it will actually hit on the head a number of philosophical arguments, in that it will actually scientifically answer those arguments.

Geoff Watts : Philosophers have only speculated about what it's like to be "you" as opposed to "me". Kevin Warwick thinks that one day we'll be able to know. For the moment, though, the philosophers are safe. Even at the Media Lab at the Massachusetts Institute of Technology, a veritable shrine to the future, they're not yet wiring brains directly to machines. They are, though, forging indirect links between humans and computers. Less ambitious, certainly, but clever enough. The theme of Ros Picard's current project is "Artificial Caring".

Ros Picard : Right now it's certainly very common that you see people emoting at their machine, and, you know, expressing..... especially we see a lot of frustration and anger. There's the story of a man who shot his computer, a few times through the monitor and a couple of rounds through the hard drive. There's a lot of frustration out there at machines. Now you can say, "well yeah, people get frustrated at their cars and their pens and pencils, so what?" Well, the "so what?" here is that the machine could change its behaviour. For example, if some new little agent or something pops up on your machine, and it drives you crazy, and you frown at it or look disapprovingly at it, it could offer to turn itself off. [That would be even more frustrating, not helpful -LB] HAL9000: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen. [Ref: Green File:World6.wri;Inourtmt.wri]

Ros Picard : If you look at a piece of information that's brought to you with a furrowed brow, a look of confusion, which it could learn by watching you over a period of time and interacting with you, then it could offer you a different explanation, or ask you if you would like some help, or would you like to speak to, you know, so-and-so. In other words, it could play more of the role that an assistant does. HAL9000: I hope the two of you are not concerned about this. This sort of thing has cropped up before, and it has always been due to human error.

Geoff Watts : And the step beyond that would be, instead of a simple, straightforward, sort of, what shall I say, mechanical response from the computer, the computer would not only recognise your distress, but tell you, or exhibit to you, that it shared your distress, and that it had got an emotional state as a consequence of seeing yours?

Ros Picard : Erm, computers showing empathy, what we've jokingly called "Artificial Caring", is a research question right now, and we've been running some experiments testing how people respond when a machine shows empathy and sympathy, and practises the skills of active listening, in an appropriate way. HAL9000: I enjoy working with people. I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.

Geoff Watts : HAL, the computer from Kubrick's "2001: A Space Odyssey". As everyone who has seen the film will know, HAL's "fullest possible use" had dire consequences for its human operators. Back in the real world, we're still a long way from machine intelligence, never mind machine consciousness. Ros Picard's PCs are following a set of rules [I suppose they might be called PC PCs -LB], sophisticated, yes, but conscious, clearly not. How long that comfortable state of affairs will last is a matter of controversy. The clear functional difference between humans and machines, between us and them, begins to seem rather less clear when you're confronted by smart systems which can learn from experience. (Computer bleeps sound) Among the non-human inhabitants of Kevin Warwick's laboratory at Reading is an engaging little machine which runs around the floor, and which you simply can't help thinking of as a creature.

Kevin Warwick : This robot has a basic program, but it also has a goal in life: very simply, to move around, not to hit anything, but to keep moving around. But it has to learn what to do with its wheels in order to do that, by trial and error, so it's a very simple demonstration, and in this case, in about 4 or 5 minutes, the robot learns how to get about without hitting anything. But it does it in its way. It's not, if there is such a thing as a perfect way, it's not some objective way; it'll do it in a way that is sufficient to get it by. It's trying different things. It's...... oooop, there, you can see it actually bumping into something, and as you watch, it thinks "what shall I do to get out of this mess?", and it will try something, and if it's successful, it's more likely that it'll do that next time. If it fails, if it actually gets closer to what it's trying to avoid, then it will be less likely to do that. The interesting thing we've found is that if we reward it, by telling it it's done well, about 2/3 of the time, and punish it 1/3 of the time, then it seems to learn as fast as possible.
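The learning rule Warwick describes, strengthen actions that work and weaken actions that lead to a bump, can be sketched in a few lines. This is a toy stand-in, not the robot's actual controller: the action names and the 1.1/0.9 update factors are invented for illustration, though the noisy 2/3 reward schedule follows his description.

```python
import random

# Toy trial-and-error learner in the spirit of the robot described above.
# Action names and update factors are illustrative, not Warwick's actual code.
ACTIONS = ["forward", "left", "right", "reverse"]

def learn(is_good_move, steps=2000, reward_rate=2/3, seed=0):
    """is_good_move(action) -> True if the action avoided a bump."""
    rng = random.Random(seed)
    weights = {a: 1.0 for a in ACTIONS}      # all actions equally likely at first
    for _ in range(steps):
        # Choose an action with probability proportional to its learnt weight.
        action = rng.choices(ACTIONS, weights=[weights[a] for a in ACTIONS])[0]
        if is_good_move(action):
            # Noisy teacher: reward good moves only ~2/3 of the time and punish
            # them the other 1/3, the schedule Warwick says gave fastest learning.
            weights[action] *= 1.1 if rng.random() < reward_rate else 0.9
        else:
            weights[action] *= 0.9           # bumped: less likely next time
    return weights

# In a world where only going forward avoids obstacles, "forward" comes to dominate.
learnt = learn(lambda a: a == "forward")
```

Even with the teacher rewarding good moves only two-thirds of the time, the good action's weight grows on net while the bad ones shrink, which is why the noisy schedule still converges.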

Geoff Watts : I wonder if it's happy? (laughs)

Kevin Warwick : (laughs) Erm, well, I can answer that quickly. The original ones are called the seven dwarfs, so I think this could be Happy, but it might be Grumpy!

Geoff Watts : As robots go, Grumpy is pretty basic. But it does illustrate one of the principles of successful artificial intelligence: not telling a machine everything it needs to know, but giving it a few simple rules which it can use to elaborate more complex forms of behaviour. A bit like bringing up a child. At its most sophisticated, this approach can be impressive, as when a computer took on the world chess champion, and won. "The advantage was with Deep Blue, playing white for the second game in a series of six. Kasparov deliberately avoided open tactical positions, which the computer favours, with its ability to analyse up to 300 million moves per second. But the world champion resigned....."

Aaron Sloman : If you try to design a chess-playing computer by building in all the possible states that can occur and the best moves to make, I believe it's been shown that it would need a larger memory than could fit into the universe, because of the size of the game tree in chess. So instead, if you build a machine which can work out moves for itself, you give it the ability to contemplate possible moves, evaluate them, and select one. It may not actually perform quite as well as one that's got the complete game tree worked out in advance, but it may do very well indeed, and may even beat most of the human chess players in the world, and that's the kind of thing that we have at the moment. So what I'm suggesting is that at some point or other, evolution made that kind of discovery.
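The arithmetic behind Sloman's point is easy to check, and the alternative he describes, contemplating a few moves ahead and evaluating them, is the standard minimax idea. The sketch below uses the usual rough figures for chess (about 35 legal moves per position, games of about 80 half-moves) and a deliberately abstract toy game, since a real chess engine is far beyond a few lines.

```python
# Rough size of the full chess game tree: ~35 legal moves per position over
# ~80 half-moves. The result dwarfs the ~10**80 atoms in the observable universe.
BRANCHING, PLIES = 35, 80
FULL_TREE = BRANCHING ** PLIES              # about 10**123 positions

def minimax(state, depth, moves, play, evaluate, maximising=True):
    """Contemplate possible moves a few plies deep instead of storing the tree."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)              # heuristic stands in for the full tree
    values = [minimax(play(state, m), depth - 1, moves, play, evaluate,
                      not maximising) for m in options]
    return max(values) if maximising else min(values)

# Toy game: the "state" is a number, each move adds 1 or 2, the maximiser
# wants it high and the minimiser low; evaluation is just the number itself.
best = minimax(0, 3,
               moves=lambda n: [1, 2] if n < 6 else [],
               play=lambda n, m: n + m,
               evaluate=lambda n: n)
```

The evaluation function is the trade-off Sloman names: it may misjudge positions the full tree would settle exactly, but it lets a bounded machine play at all.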

Geoff Watts : Aaron Sloman, whom we heard earlier teasing Eliza, has spent much of the last 25 years studying the principles of artificial consciousness, and its make-up, or architecture. One of these principles concerns the role of emotional states. Any machine which is even to begin simulating a human brain must be able to do at least three things. The first, as with all living organisms, is to react to its surroundings.

Aaron Sloman : You're walking down a rather narrow, dark alley at night, and suddenly you see a shadow in the doorway, and suddenly, you know, there may be a wave of terror that makes you freeze in your steps, or in some cases you may scream, or jump, or be startled. In our architecture, and I think to some extent in other animals, though in less sophisticated forms, there's another kind of module, or collection of modules, which can not just react to what is currently happening, but contemplate possible future happenings. They have the ability to do "what if?" reasoning, a kind of virtual reality thinking about what might happen. I think humans have a third layer which has the ability not only to think about what might happen in the external world and so on, but also to pay attention to what's going on internally, to evaluate it as good or bad, and to try to change it, and not always to succeed; it's not a total control. I think this third layer sometimes fights the other layers and loses. For example, if you've been humiliated by something, you may find yourself dwelling on what you could have said, how you might have your revenge, what you should say the next time you meet the person who humiliated you. But it seems to me that a lot of human emotions, in fact most of the ones that we find interesting, and worth gossiping about, and worrying about in our friends and ourselves and others, involve that third layer.

Geoff Watts : So you could imagine, then, machines that formed friendships, formed antagonisms, even fell in love, although they might not call it that, but formed special bonds between them?

Aaron Sloman : Er, yes. What I'm saying is that if you build a robot with the right kind of capabilities, the right kind of software, then it may be able to have the same sorts of social relationships as human beings do. It may, incidentally, be too hard for us to design ourselves. We may have to evolve it and let it bootstrap itself by interacting in a social environment the way we do.

Geoff Watts : But suppose you did create a machine which could learn, think and have states of mind. Suppose you replicated the human brain in silicon, down to the last cell and connection. Would that machine, placed inside a robot, be conscious? Psychologist Steven Pinker [Ref: Iotm18], of the Massachusetts Institute of Technology, offers a simple thought experiment.

Steven Pinker : We've got C3P0, or Rosie the maid, or the Terminator, or whatever robot you want to fantasise, and it's made out of gears and chips and silicon and wires. Anything that you do to it, to tap consciousness, you will get an answer that, if it came from a human, we would attribute to that person's consciousness. "What are you thinking right now?" "Oh well, I'm thinking about what I'm going to say in response to your question, and I feel a little bit itchy and a little bit hungry", and the believer will say, "There, I told you, it's conscious", and the sceptic will say, "Nah, it's just programmed to say that, there's no one home who is actually feeling the itch". That debate can always happen. There is no way to answer that debate, and that's maybe the sense of consciousness that will always seem mysterious, and of course the question that I can ask about this robot, namely, "How would we ever know whether it's actually feeling something?", we can ask in equal measure about, you know, non-human animals, or for that matter one another.

Geoff Watts : Unless, as Kevin Warwick hopes, we are eventually able to link brains electronically and experience other people's subjectivity for ourselves. If one day we do build a machine with consciousness, Ros Picard thinks we'd be wise to try, at least, to understand its inner life.

Ros Picard : We could build machines whose internal state, and the way in which they operate, is hidden to an outside observer, or we, their maker, could make their insides known, in a way that, you know, a human might say our insides are known only to our maker. Erm, I sometimes refer to the movie 2001, where HAL, the most emotional character in the film, was able to keep his internal state hidden from the rest of the crew. They didn't know that HAL was fearful until the last scene, you know, where HAL says to Dave....

HAL9000 : I'm afraid Dave.


Ros Pickard : "I'm afraid Dave,I'm afraid." They couldn't read Hal's quote "feelings" unquote.Now we as designers could decide that machine's feelings could always be read on the outside by humans,but maybe not by other machines. So that say if we have two machines going off to negotiate for us,we might want our machine to be able to keep its feelings secret from the machine of our enemy with whom its negotiating,but ultimately we would want to know what the machine's feelings were.on the other hand it maybe that this become simply too complicated to have a simple read's quite possible that we could design machines with emotional states quite different from human emotional states,and that would be very hard for us to understand.

Geoff Watts : Speculations of this kind are greatly entertaining, but that's about all, according to engineer Igor Aleksander [Ref: Iotm29 & 76]. He believes there are fundamental differences between the experiences of biological organisms and those of artificially engineered systems, like MAGNUS [Ref: Video L5:Late Show Science Week], the machine he's been working on. That said, though, there are hints to be gleaned here about the human brain. Think for a moment how you create a mental picture.

Igor Aleksander : If I say to you, think of a blue banana with red spots, I guess you'd have no difficulty in doing that. Now, it's my words that cause that activity in your awareness area, even though you've never seen a blue banana with red spots.

Geoff Watts : Just now I called MAGNUS a machine. In fact, it's really a package of software, a set of computer instructions. No matter: when MAGNUS exercises what you might describe as its imagination, the process of tying together words and pictures, invisible in humans, can be observed. It's as if the machine's subjective experience were being opened up to objective scrutiny. Igor Aleksander has used MAGNUS to suggest how certain areas of our brains might work, including what happens when we dream.

Igor Aleksander : What happens in sleep is that much of our links to our limbs, and many of the links to our sensory inputs, become cut off, if you like. Then the inner bit is much freer to roam through what you might call its experience than when it's controlled by its worldly constraints. So at odd times it does enter these areas which have.... which feel like past memories, but they could be strange juxtapositions of these. Now, we see that very clearly in the sort of machines we build.

Geoff Watts : You deny them any particular extra input, and you just let them go and see what they do?

Igor Aleksander : Let them run,and we can see strange juxtapositions of what they've learnt during awake existence.

Geoff Watts : The machine is just shuffling the information around?

Igor Aleksander : Yes. For example, if I do this blue banana with red spots trick with one of our current machines, I can actually ask it to visualise such a device, and then if I start cutting off its sensory inputs, it has a nightmare about this thing.

Geoff Watts : When you say you can find out what's actually going on inside it, I mean, does it mean it's displaying something on the screen, or what?

Igor Aleksander : Yes, indeed. We could put MAGNUS to sleep and it would show us strange juxtapositions of images that normally it would use to understand its world.

Geoff Watts : Although intrigued by such dreamy inventions, Open University biologist Steven Rose [Ref: Iotm6; Video:N20:Quantum Leaps] remains cautious to the point of scepticism. He accepts that computers offer insights into the human mind, but isn't yet ready to trade the carbon of life for the silicon of computer chips.

Steven Rose : I'm not convinced that the parents of silicon chip structures are making brains which are conscious in the sense that I'm using the term. No one was really surprised when finally Deep Blue beat Kasparov at chess, because chess is essentially a cognitive game. If you make a computer which is powerful enough, can analyse all the rules and so on, it's going to beat the human brain. But think of the difference between that and playing poker with a computer. The point is that playing poker is essentially a collective activity, involving several people and what I would call, sort of, competitive psychology, in a rather crude sort of way! It's fun, because there are all these bluffs, because you are reading people in particular sorts of ways, which you know about them, and that is not, at the present moment, possible to embody within a computer. When the time comes when it's possible to play poker enjoyably with a computer round the table with us, then I shall surrender carbon for silicon.

Geoff Watts : When computers start beating us at poker as well as chess, then perhaps we should start worrying. Right now, remember, robot Grumpy has trouble negotiating its way around a room, never mind taking over the world. In fact, Aaron Sloman would argue, controversially, that the ethical boot may soon have to be on the other foot.

Aaron Sloman : There might one day have to be a Society for the Prevention of Cruelty to Robots. Human beings do awful things to each other, as bad as anything any machines could do to us, and people are worried about that, some of them. But it may well happen that, without realising the consequences of it, they, in future, design robots which have a kind of autonomy, which they need in order to perform the jobs we want them to do, and that means they can develop new goals of their own, new interests, new preferences, and they might want to go to university, they might want to study philosophy, they might not want to do the jobs that they're doing, they might want to lead a different kind of life, and in that kind of context, some of us might think they have the same rights as human beings do, and be inclined to feel that way. [That is not possible. Even if it were the case that such an Asimovian robot existed, it would be deemed to be a failure in terms of its propensity to do the job that it had been asked to do. Robots are needed to perform tasks that humans can't or don't wish to do, and if the robot decides it wishes to do otherwise, then it will be trashed in favour of one that does as it's told. If a robot were to be given rights, would that mean that all animals should be granted them, or would a conscious robot be seen as more like us, and so have rights in favour of animals? -LB]

Igor Aleksander : It has never occurred to me that any form of rights is necessary for this machine, except the sort of protection that you need for an artefact. Patents and what have you. But the object itself will not require the rights that one normally thinks of for human beings.

Geoff Watts : The machine might have a very different view though?

Igor Aleksander : Well, a machine would probably... have the most sensible view of the lot on this, and would come out and tell you, "Why the blazes are you worrying about my rights? I know exactly what happens to me if you switch me off. I don't mind, as long as you back up my memory, and if you don't, you're just going to lose a lot of effort.", and I think a lot of this discussion tends to get wasted, because we get so worried about the rights of machines, and we don't worry enough about the rights of human beings.

Geoff Watts : Suppose we do eventually come to understand consciousness, that we are able, in principle if not in practice, to build a machine that lives a mental life as varied as our own. How would that change the way we view ourselves? Would our self-esteem be somehow diminished? Neurologist V.S. Ramachandran [Ref: Iotm11] of the University of California at San Diego.

VS Ramachandran : Not necessarily, because if you have an enriched understanding of yourself, it still doesn't mean you know all the antecedent circumstances that are going to lead to your next act. In other words, you may know the general rules that govern your behaviour, that's the best we can hope to achieve, but you still don't know who you're going to run into round the corner tomorrow. Your behaviour is going to be just as capricious, just as enigmatic, and just as much fun.

Geoff Watts : Oxford University physiologist Larry Weiskrantz agrees.

Larry Weiskrantz : You see a lovely rainbow. We now understand rainbows. [Actually, that's not quite true; there are some aspects of rainbows that we don't understand. (Ref Video BB14:STM-The Rainbow) -LB] We know that there are receptors in the eye that are selectively responsive to different wavelengths; we know how colour is processed in the brain. So a lot of the mystery has gone. Now it depends on whether you like rainbows despite that, and I still do. I think rainbows are marvellous events. I can become poetical about rainbows, even though I think I understand the physical basis of them. Similarly, I think, with consciousness, if just getting an explanation means that it then becomes boring, well then, you know, more's the pity.

Geoff Watts : Another researcher into consciousness who sees no threat in understanding more about the brain is Gerald Edelman, Director of the Neurosciences Institute at La Jolla in California. [Ref: Iotm76]

Gerald Edelman : I think it lends to human dignity. It says, yeah, "Everything may be founded on a scientific description and our place in nature, but some things are not good scientific subjects". That you can know all the science in the world and you aren't going to be able to create a beautiful poem, and you can't take a poem per se, subject it completely, and say "Now I know how to do a poem". So I think we'll get a more balanced view of the relationship between science, which tells us where we are in the world, what our place is in the universe, and how we might be comfortable in it, which is, I think, the role of art. [Then how does a human being learn poetry, if not through a physical process of stimulus to the brain? It is a fallacy to think that there is science and art as two separate doctrines (see C.P. Snow, "The Two Cultures"). This is an imposed distinction upon nature, which is a unity. The abstract processes that go on in a human being when subjected to artistic stimuli still have allegories in the reactions in the brain, and whilst art appreciation cannot be REDUCED to brain function, it can be seen that it must be a function of what the brain is reacting to. -LB] Now, I don't mean that it's going to make you a better artist, or a better scientist, but since it bears enormously on us, it can't help but be ideologically one of the most important things in the world since Darwin.

Geoff Watts : Running in parallel with the misconception that a scientific understanding of the world makes it a poorer place to live in is the belief that, to scientists, only science matters. Untrue, says Steven Rose.

Steven Rose : When I was a youngster, I had this naive belief that, as it were, sort of, religions would all disappear, and we would be left, as it were, in a world of scientific rationality. I'm not sure, these days, that I want a world of scientific rationality, because the rationality that science produces also produces a sense of the commodification of humans. We can clone individuals in order to make spare parts. The National Institutes of Health in the US owns a patent on the genes in my own body [Which is absurd -LB], and somehow that reduces the worth and the value of what it is to be human. [That is because of the romantic view of a human as a unique collection of experiences which cannot be duplicated, and the sense of an individual as a free entity at liberty to do what it wishes, without being REDUCED to merely a set of rules. But this is a public mindset that sets how scientific analysis is to be received. If one had started with the idea that humans were just a bag of chemicals that happened to be animated, we would be amazed at just how capable that mixture was at doing some of the incredible things that it does, as science uncovered what they were. We have a cultural tradition of the romantics glaring in disbelief at the "absurd" things that science is prepared to do and say in the cause of knowledge, and this has fostered the notion of the lay person as something so amazing and unique and spiritual that to attempt to understand it or duplicate its action demeans the human state. But this is fallacy. More to the point, since worth and value have no objective basis, what Steven is saying can only be "as far as he is concerned"; it isn't a truism. -LB] I think a true understanding of the biology of humans, which locates them within their social and historical and evolutionary context, must insist on a respect for human integrity as well. If it doesn't do that, we're lost.

Geoff Watts : Some scientists go further, seeing the exploration of our own minds as locating us more firmly within a wider setting. Cosmologist Paul Davies [Ref: P.Davies "The Matter Myth" etc] has tried to weigh the value of our urge to understand consciousness.
"The physical species, Homo sapiens, may count for nothing, but the existence of mind in some organism, on some planet in the universe, is surely a fact of fundamental significance. Through conscious beings, the universe has generated self-awareness."






Source: BBC Radio4 File Info: Created 24/7/2000 Updated 20/3/2007 Page Address: