Testbeds

In a world of increasing automation, it's surprising that robot autonomy is still the stuff of science fiction. Alun Lewis finds out why, and looks at the latest research to try and make it more than a virtual reality.

(Robot noises & music plays)

Alun Lewis : A small robot doing what modern small expensive robots do so well: making whirring noises, peering about with its camera, a pair of tiny spotlights providing illumination in the darkness of a sewer. It's looking for defects and blockages that might require urgent remedial work, and we'd like the electronic sewer rat to go there and rootle around looking for problems, guided by its own inquisitiveness, without having a bored human operator sitting on the surface driving the machine by remote control. In fact, this little experimental pipe crawler can do that to a degree, but this machine, clever though its computer brain is, could not be described as "independent"; it isn't autonomous, and it needs all the human help it can get. Yet for the last 20 years, computer scientists have been predicting that robots would be able to operate safely and effectively without a constant watching eye. When robots visit other planets, they need to be self-determined in many tasks; it takes far too long for commands to travel back and forth. Well, robot autonomy has been predicted, but it clearly hasn't arrived, for a very simple reason, according to Hans Moravec of Carnegie Mellon University in Pittsburgh.

Hans Moravec : The illusion that computers were powerful was fostered because of the tasks that computers were first set to, for practical reasons: arithmetic. Even the first computers were thousands of times more powerful than human beings at arithmetic, but in hindsight we see that the problem is not that computers were so powerful, but that human beings were so pitifully inefficient at doing arithmetic: a hundred billion neurones straining, producing a few bits per second of actual end-result calculation. But from that we got the impression that the computers were very powerful, and in fact they're not. When we put tasks on them that human beings do very well, that animals do very well, like seeing, moving around in a complex world, we found that we couldn't duplicate the performance of a lizard.

Alun Lewis : And lizards are good at making decisions that affect their eating, mating and surviving, but they can't be trained to help us in any useful way in our technological society. But rather than wait for computing techniques to evolve from lizard level to the degree of a dog, researchers such as Graham Parker at Surrey University opt to combine the best of human reasoning and decision making with the bits that robots do best. He wants to make things happen now.

Graham Parker : Most of our work is in the area of augmented reality, in which we are trying to carry out tasks in which we are looking at remote scenes, sometimes hostile, and we wish to use, if you like, the interpretive intelligence of the human to actually control the overall system, but then to use the machine in an effective way as well. Now we believe, along with other researchers, that the use of overlays, graphical overlays, is a very useful way of augmenting this sort of task.

(Commentary plays) "Here's Beckham on the far side, back to Neville, Neville forward to Yorke, Yorke with Keane to his right, so too is Beckham, Beckham plays the cross in first time, it's a....."

Alun Lewis : Think of a football match on the television. We turn on after the game has started; we listen eagerly to the commentator's ramblings concerning two halves, when the game will be over, and men with two right feet, but when will the commentator mention the score? Well, hooray for augmented reality, for in the top left-hand corner there is the score line, which tells us also which team is at home, and for how long the match has been played. [It's somewhat ironic that Alun is mentioning this on the radio -LB] In rugby matches we're beginning to be bombarded with exciting data concerning the amount of time spent in each other's territory, and if you watch American football, well, that's reality overload. But this is augmented reality as it's used today. Graham Parker wants to use something a little more advanced.

Graham Parker : What we're interested in is going a little bit further than that, and that is actually using the graphical overlay in an active sort of way, so that tasks in the remote environment can be solved.

Alun Lewis : Let's just sort of get the idea of... we've got a robot somewhere... somewhere remote, powered, moving around, or capable of doing tasks. It's got a camera system on it; some distance away you've got a human operator with something like a control stick to drive it forward, left, right, and then to operate the various bits of it, and then there's a screen in front of the operator, with pictures coming back from the camera. And what you're talking about then is putting some graphics onto that picture from the remote robot, to help the operator make a better decision quicker.

Graham Parker : That's correct. What you've just described is one possible scenario for a man-machine interface, and that is probably the simpler type of way of operating it. Yes indeed, the graphical information can be used to actually augment the task. We've all heard, of course, of fighter pilots, for example, having head-up displays, again for the same sort of reason.
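
The key technical step behind such an overlay is registration: projecting points from a 3D model of the scene into the 2D camera image using the camera's pose and optics. The transcript doesn't describe the Surrey system's internals, so the following Python sketch is only a minimal illustration of the idea using an ideal pinhole camera; every name and number in it is an assumption for illustration.

    import numpy as np

    def project_points(points_3d, rotation, translation, focal_px, centre):
        """Project 3D world points (metres) to 2D pixel coordinates
        with an ideal pinhole camera model."""
        cam = (rotation @ points_3d.T).T + translation   # world -> camera frame
        cam = cam[cam[:, 2] > 0.1]                       # drop points behind the lens
        u = focal_px * cam[:, 0] / cam[:, 2] + centre[0]
        v = focal_px * cam[:, 1] / cam[:, 2] + centre[1]
        return np.stack([u, v], axis=1)

    # Illustrative wireframe: a ring of points on a 150 mm radius pipe wall,
    # one metre ahead of the camera.
    theta = np.linspace(0.0, 2.0 * np.pi, 16)
    ring = np.stack([0.15 * np.cos(theta),
                     0.15 * np.sin(theta),
                     np.ones_like(theta)], axis=1)

    pixels = project_points(ring, np.eye(3), np.zeros(3),
                            focal_px=500.0, centre=(320.0, 240.0))
    print(pixels.round(1))   # these points, joined up, are drawn over the video

Drawing lines between consecutive projected points on each video frame gives a wire model that appears locked to the real pipe; keeping it locked as the vehicle moves is exactly the hard pose-estimation problem Alison Wheeler describes later.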

Alun Lewis : And the type and style of the graphic overlays that will aid a robot's controller would depend on the type of application. The robot could be working in the unpleasant but mostly orderly world of a sewer, in the highly organised but totally dangerous world of the inside of a nuclear reactor, or even be struggling with the dangerous but very random experience of a battlefield. In the Surrey laboratories, Alison Wheeler and her colleagues are tackling the real and surprisingly commercially interesting world of the sewer pipe, surveyed with a robot, and the graphic that they want to produce is a representation of the interior of the conduit, creating as they go a three-dimensional model of the real sewer, as opposed to the plans.

Alison Wheeler : Right, I'm just going to...

Alun Lewis : Oh, here we go, we're...

Alison Wheeler : ...scroll through my menu and select that option.

Alun Lewis : I'm watching the 3D picture over here as it's moving, and in fact there's a pile of bricks in the way in front of us, a pair of bricks that have dropped down from the top, and we're approaching them quite rapidly, and the good thing is that your wire model, your graphical representation, is moving at the same speed.

Alison Wheeler : Yep, that's right.

Alun Lewis : That seems quite simple to do, was it?

Alison Wheeler : No, it's not at all simple. Trying to get accurate position information back from the vehicle in the sewer is very difficult, and it's an ongoing problem that we're trying to solve.

Alun Lewis : So just the simple idea of putting a graphical overlay over a real picture and making them move together, that turns out not to be a simple problem?

Alison Wheeler : No.

Alun Lewis : That in fact is a complicated engineering problem?

Alison Wheeler : It is very complicated.

Alun Lewis : Any computer-based decisions involving vision are hopelessly difficult, whether it's splashing along a pipe or searching snowy wastes for meteorites, doing a geologist's job. NOMAD is a four-wheeled vehicle, about the size of a small saloon car, that's been built to search for meteorites. It's been given a technique for tracking back and forth across the snow in a pattern that will cover all the ground in time. It's capable of handling the terrain of the Elephant Moraine in the Antarctic, the site where it works, and it can look at and handle the rocks from space which litter the polar caps. Its job is not to bring back interesting-looking samples, but to positively identify meteorites, measure them and plot them, and then leave them in place. To do this it's been given a specially designed computer brain, by Dimitri Apostolopoulos of Carnegie Mellon University.

Dimitri Apostolopoulos : NOMAD's brain is an architecture, a software architecture, that handles all of NOMAD's functions, from the highest level, which is "explore an area in a certain pattern", to the lowest level, which is "control the camera to look at this specific rock". So when NOMAD finds with its camera a very interesting rock, it informs the highest-level planner in its brain, which is called a "mission planner", that it has located a very interesting rock, and asks the mission planner to modify the search so that NOMAD can drive safely to an effective distance from the rock. Effective meaning that it would be within the reach of the manipulator arm which contains the science instruments of the robot.
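
The quote describes a layered architecture: a mission planner owning the search pattern at the top, and perception and camera control at the bottom, with finds passed upward as requests to amend the plan. A rough Python sketch of that control flow only (the class, waypoints and confidence threshold are invented for illustration, not NOMAD's actual code):

    import math

    class MissionPlanner:
        """Highest level: owns the search pattern and amends it on a find."""
        def __init__(self, waypoints):
            self.queue = list(waypoints)   # back-and-forth search legs

        def report_find(self, robot_xy, rock_xy, reach_m=1.5):
            """Insert a detour that stops one arm-reach short of the rock,
            then let the original pattern resume."""
            dx = rock_xy[0] - robot_xy[0]
            dy = rock_xy[1] - robot_xy[1]
            dist = math.hypot(dx, dy)
            if dist > reach_m:
                scale = (dist - reach_m) / dist
                stop = (robot_xy[0] + dx * scale, robot_xy[1] + dy * scale)
                self.queue.insert(0, stop)

    def rock_confidence(image_patch):
        """Lowest-level stand-in: the camera looks at a rock and scores it."""
        return 0.9   # placeholder confidence

    planner = MissionPlanner([(0, 0), (10, 0), (10, 2), (0, 2)])
    robot = (5.0, 0.0)
    if rock_confidence(None) > 0.5:              # interesting rock spotted
        planner.report_find(robot, rock_xy=(5.0, 4.0))
    print(planner.queue)   # the approach waypoint now precedes the pattern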

Alun Lewis : So really, when I called it an electronic geologist, it's much brighter than the human geologist, because we all know that scientists take one look and say "That's interesting" and go straight for it, and they probably fall down a crevasse, but here you've got a much cleverer device, which is saying "I'll approach with caution".

Dimitri Apostolopoulos : Well, that's correct, and we... NOMAD has to approach with caution, because there is a great deal of processing to do first, and a careful procedure to follow, for the robot to get to a location without really hurting itself or getting into a situation which would be very difficult to get out of.

Alun Lewis : Does it ever ring back to base and say "Hey guys, I'm stuck, I don't know what to do here"? Can it do that if it really doesn't know what to do?

Dimitri Apostolopoulos : Yes, there are cases where it gets into a no-way-out situation. The robot knows what to do in those situations: usually it will stop and back out, trying to follow the same path that led it into the situation. But there have been a couple of cases where it gets... and this is actually very interesting... on some occasions the robot would see its own shadow, which is projected very long in front of it because the sun is very low in the Antarctic, and the robot thinks those are obstacles, and then it tries to avoid them, and it stops continuously and tries to get out of them, and cannot avoid its own shadow, until it changes its orientation and the sun disappears behind the robot, and then it continues its path. But there are situations where the robot actually will halt, if it cannot do anything else, and then we'll have to intervene; but to the best of my knowledge, I don't think there was any situation like that at the (indistinct), Antarctica, last month.

Alun Lewis : This vision problem has dogged robotics researchers for many years. We use as much as two-thirds of our brain to make sense of what our eyes see; with far less computing power, computers struggle to interpret the world around them. One of Hans Moravec's early autonomous robots, 15 years ago, tried to navigate unknown territory using only a simple two-dimensional view of the world, in which both a path and another object were defined as a dark strip with converging edges.

Hans Moravec : The problem with earlier systems, for instance one of our robots which, trying to go down a road, in fact climbed a tree because the outline of the tree resembled the perspective outline of the road, was that it had a rather simple model of the space in front of it. It was a two-dimensional model; it really was basically looking at a flat picture. If in fact it had measured the distance to the portions of the tree, there would be no ambiguity at all. It would not look the least bit like a road: it's in the wrong place, it's going vertically. So three-dimensional mapping, which does however take about a thousand times as much computing as the two-dimensional kind, I think will solve these first-tier problems, and allow us to build a machine whose intelligence, on a scale I have between nervous systems and computers, puts it about at the level of a guppy! One of the smallest vertebrate nervous systems, but still it's enough to basically get around. [But it's still algorithmic, and logical, and perhaps we're not, and thus any computer is not capable of mimicking our talents -LB]
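
Moravec's argument reduces to a simple geometric test once range data is available: points on a road lie near the ground plane, while points on a tree trunk climb vertically, however similar their 2D outlines. A small illustrative Python sketch (the coordinate convention, camera height and tolerance are assumptions, not Moravec's mapping code):

    import numpy as np

    def looks_like_ground(points_xyz, camera_height_m=0.5, tol_m=0.1):
        """points_xyz: 3D points in the camera frame (metres), with y
        measured downward. Ground points sit one camera-height below
        the camera; a vertical trunk does not."""
        heights = points_xyz[:, 1]
        return bool(np.all(np.abs(heights - camera_height_m) < tol_m))

    # A strip of road: points at ground level, receding in depth.
    road = np.array([[0.0, 0.5, z] for z in np.arange(1.0, 5.0, 0.5)])
    # A tree trunk: a similar 2D outline, but the points climb vertically.
    tree = np.array([[0.0, 0.5 - h, 2.0] for h in np.arange(0.0, 2.0, 0.25)])

    print(looks_like_ground(road))   # True  -> drivable surface
    print(looks_like_ground(tree))   # False -> obstacle, not a road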

Alun Lewis : But not to make the sort of quick and useful decisions that we'd like robots to be capable of. In the laboratory at Surrey University, Lindsay Hitchen is working with the electronic sewer surveyor to give a human operator a more useful picture of the subterranean world.

Lindsay Hitchen : This is a virtual reality headset. It's got a special sensor on it that measures how much you've moved your head, to the left and the right, and up and down, and if I just rotate it here...

Alun Lewis : Oooooh!

Lindsay Hitchen : ...you can look on the monitor....

Alun Lewis : Yes.

Lindsay Hitchen : ...that you were using before,and you can see how the camera has moved in response to this.

Alun Lewis : So we don't have to use pointers and joysticks and everything else now to actually rotate the field of view, just by moving this headset. But of course the great thing about these headsets is they've got little television screens in front of each eye.

Lindsay Hitchen : Yes they do, so you get full stereo vision from it, and it's very intuitive to use: you move your head and the robot head in the sewer moves exactly the same amount in the same direction.
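
The control loop being described is a one-to-one pass-through of head orientation to the pan-tilt head carrying the stereo cameras. A minimal sketch of such a loop follows; the sensor and servo functions are invented placeholders, since the transcript doesn't name the real interfaces, and a real system would also have to deal with the lag Alun notices below.

    import time

    def read_headset_orientation():
        """Placeholder for the headset tracker: (yaw, pitch) in degrees
        relative to the operator's starting pose."""
        return 15.0, -5.0

    def command_pan_tilt(yaw_deg, pitch_deg):
        """Placeholder for the robot camera head's pan-tilt servos."""
        print(f"pan {yaw_deg:+.1f} deg, tilt {pitch_deg:+.1f} deg")

    def teleop_loop(rate_hz=30.0, cycles=3):
        """One-to-one mapping: the robot head copies the operator's head."""
        for _ in range(cycles):
            yaw, pitch = read_headset_orientation()
            command_pan_tilt(yaw, pitch)
            time.sleep(1.0 / rate_hz)

    teleop_loop()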

Alun Lewis : Right, prove it, put it on, it's no good saying "it will do this". (Music plays) I'm just putting on the virtual reality headset, these are getting fairly common now, I'm just lining that up on my eyes, that's perfect, tightening up the headband. Now then, I was going to look back at the monitor then! There's no need for me to do so. If I look to the left, I can see the left side of the pipe; I can see a hole to the left of me there, that's a joining pipe. Look down at the end again, there's my bricks, the pair of bricks that had fallen down, in very, very good 3D, and the great thing is, this wire model, this graphical overlay of the inside of the thing, is following my view around as well. There's a slight delay between the two, I shouldn't move my head too quickly, but now I can adopt real manoeuvres.

Lindsay Hitchen : And that's ultimately what augmented reality is. At the moment, with a professional system that would inspect a sewer, you're just looking at a small TV screen: there's no movement, there's no ability to control it. We're also looking at different user interfaces; wearing a head-mounted display gets a bit tiresome. It's got a few kilos of electronics on your head, and you feel nice and hot. So we can use the 3D display that you talked about, and use other input devices, like six-degree-of-freedom mice, ordinary computer mice, keyboards, eye trackers...

Alun Lewis : Eye trackers?

Lindsay Hitchen : Yes, we're doing some work using an eye tracker, so that as you look around the scene with your eye, it steers the robot head.

Alun Lewis : Kevin Warwick, Professor in the Cybernetics Department at Reading, claims that such headsets are only an interim step. He wants an even closer relationship with the robots, creating cyborgs, a symbiosis of man and machine.

Kevin Warwick : I think cyborgs for all... it depends how far we want to go. Having blind people with sonar information, or infrared information, extra-sensory information, may well be only two to three years away; I can't see it being any further than that, as long as the results that we achieve in the next 18 months are how we think they'll be. So limited cyborg-icity, if that's the word, could be there for some people next summer, 2001 [An auspicious date -LB].

Alun Lewis : But that is actually giving people robotic senses, feeding it into them to help them. If we think once again of what we tend to think of as a robot: a device powered separately, autonomous, it's out there doing things. Set it off, it does a task, and when it gets stuck, it calls you up in some way. All very clunky, isn't it?

Kevin Warwick : Oh, it's very clunky, and I think really that's a different vision of the future than I have. I see it in the near future as being much more a shared thing. There are some things that we simply cannot do, that the robot has to do, the machine has to do. So we see lots of applications now in production-line machinery. Even sorting out how people think in terms of how we buy products and so on, where the issues could be seen as much more complex than just "why do you buy this and that and that": a machine can potentially map out what you're thinking about and what you're likely to buy and so on, in a way that a human can't, because we're thinking too simply. So I think anything that is pretty complex, where we are dealing with it in two or three dimensions when in reality it's not, it's many, many dimensions, that's where there are big advantages for the machine, simply because we can't do it, but potentially we could link together.

Alun Lewis : Long before we get robots directed by thought, we'll be hoping they can carry out simple tasks, and out on the Elephant Moraine, polar geologist automaton mark 1, NOMAD from Carnegie Mellon, has been working well. But its performance on the glacier was the result of careful advance programming and good teaching, because today's best robots employ what's known as "adaptive software": it modifies its behaviour with experience, just like we do.

Dimitri Apostolopoulos : NOMAD's ability to classify in the field, in a specific location, heavily depends on how well the robot learns about its first finds in that location. Let me say a few more things about this. Before NOMAD explored the various areas of the Elephant Moraine last month, the robot had been trained to recognise certain features of rocks and meteorites, using samples that we placed in front of the robot. We needed to do that in order to build what we call the "database" of information that the robot has before it starts searching for the first time and classifying for the first time. However, the problem is that when you do this in a controlled environment, like a laboratory, the conditions are completely different from the ones the robot will be faced with in the natural environment. So when NOMAD first started searching for meteorites in Antarctica, it had to make classifications and tell whether rocks were meteorites or not based on experiences it had had in a very, very different environment, and so the first results that NOMAD gave of its confidence in whether rocks were meteorites or not were pretty low. Once we finished that specific phase of testing, we retrained NOMAD's classifier with all this new data that the robot had taken at the Elephant Moraine, and so when the robot continued its searches it was much better at identifying meteorites, because it had already absorbed and analysed some of the experiences that it had come across in the specific location.
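
The procedure described, folding verified field data back into the training set and refitting the classifier, is standard supervised-learning practice. The toy nearest-neighbour sketch below illustrates it in Python; the features, labels and numbers are invented, as the transcript doesn't describe NOMAD's actual classifier or sensors.

    import numpy as np

    class NearestNeighbourClassifier:
        """Toy stand-in for NOMAD's rock/meteorite classifier."""
        def __init__(self):
            self.X = np.empty((0, 2))
            self.y = np.empty(0, dtype=int)

        def fit(self, X, y):
            # "Retraining" here just folds new labelled samples into the set.
            self.X = np.vstack([self.X, X])
            self.y = np.concatenate([self.y, y])

        def predict(self, x):
            distances = np.linalg.norm(self.X - x, axis=1)
            return int(self.y[np.argmin(distances)])

    clf = NearestNeighbourClassifier()
    # Laboratory training: two invented features per sample,
    # e.g. (reflectance, iron signature); label 0 = rock, 1 = meteorite.
    clf.fit(np.array([[0.8, 0.1], [0.2, 0.9]]), np.array([0, 1]))

    # Field conditions shift the features; fold verified finds back in.
    clf.fit(np.array([[0.5, 0.7]]), np.array([1]))
    print(clf.predict(np.array([0.55, 0.65])))   # -> 1, meteorite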

Alun Lewis : Dimitri Apostolopoulos, being careful not to use the anthropomorphic term "learn". Throughout our lives we learn things in one environment and then have to apply them in the real world; it can be difficult for us [Especially when our brains form working models of reality that are not true, and we develop mythical views of how nature works -LB], and computers don't have our brain's intuitive reasoning power... yet. But will they ever? Philosopher Alexander Fiske-Harrison doubts that day will ever come, though computer control systems may seem to be thinking, by employing the sincerest form of flattery.

Alexander Fiske-Harrison : I think that they could probably mimic, with sufficient time and programming, most human things, but I think this will always be simulation, never reproduction of actual thought. Basically what you have... it's like "does a calculator calculate?" People always say that they do, but they don't. Calculators just follow the laws of physics [So do we! -LB]. You hit some things in and some numbers come out, and they agree with what a human would get if they calculated. The word "computer" as originally used by Turing [Ref:Maths2:Turing.htm] actually meant "person who computes".

Alun Lewis : But, you say "simulate"... will they do it in the way we do it? Which begs the question of actually "How do we make decisions? How do we make judgements?" [Ref:Hardie.htm]

Alexander Fiske-Harrison : Yes, well, no, I don't think they will do it in the way that we do it. One of the big problems is that computers run on algorithms. An algorithm is what some philosophers call a "moronic procedure": a moronic procedure can be described by an algorithm, and there are certain procedures which we do that cannot be described by an algorithm. [Ref: R. Penrose, "ENM"]

Alun Lewis : Why is that, actually? I mean, what is it that we do? When we are making a decision, when we are making a judgement, isn't it pure hard cold logic [See fuzzylog.html]? We like to think it is; isn't it a bit mechanical?

Alexander Fiske-Harrison : Oh, certainly not. The biggest problem with the computers I was involved in programming, which were mimicking human conversation, is that computers are innately logical, and so can appear completely inhuman. For instance, no computer would ever buy a lottery ticket [Why? Because the odds make it pointless; it's only the human desires of greed and aspiration that defeat their own object. Buying tickets is like pouring money down the drain, unless you win, and that's the point: a human can be fooled by a big enough carrot to walk in endless circles on a treadmill, like a donkey; a computer is unlikely to have its decisions warped into making a wrong choice by emotions of desire or greed. Whether something is "worth it" can be calculated, and the lottery isn't worth it -LB], but it's not just the illogical things that humans do that can't be algorithmically described. There are actually certain mathematical and logical matters which cannot be incorporated into an algorithm [This is why any digital computer is doomed not to achieve our intelligence; AI can only work if the "computer" exploits our system -LB (See qcomp1.html or qcomp2.html)], and there is a lot of debate at the moment. On one side you have Professor Roger Penrose [See bwave.html], the Professor of Mathematics at Oxford, saying "This is because of the nature of these logical matters", what they call "second-order logic": rather than "the table is red", when you start asking "what is redness?" [Or what is reality? -LB], "What is this property?", algorithms can't go near it. Other philosophers and mathematicians attempted to say, "Well, one day, if we get enough little algorithms running, then maybe we'll discover that we weren't doing this amazing thing we thought we were", that actually it's just lots of little competing algorithms, not one big algorithm describing it. But I certainly fall in with Penrose's idea that computers won't do it [So do I, though I don't see anything to outlaw the multi-algorithm idea, except perhaps Quantum Physics, which suggests that nature is inherently fuzzy, and thus not able to be described by an algorithm -LB], and I think that whole debate is a little bit silly, because the computer is acting according to the algorithm; it doesn't know that it's following an algorithm. [But if we were many competing algorithms, we wouldn't know either; we have "the illusion of freewill", as some contend. I say we have actual freewill via QP -LB] It's based on this misconception that there's such a thing as "information" without an informed entity, a mind [This is why the QP observer participance in some sense "creates reality". The observer is required to decide what "is" -LB], that you can create a mind by piling in information, but information is only information to us. To the computer it's a series of electrons moving through transistors and circuits. [Using Pirsig's analogy, we are privy to the story line of the novel held on the word processor; the computer only sees the 1s and 0s and never understands the novel -LB] You know, this word "data": data is data to humans, it isn't data to anything else. To everything else it's the writing in the sand formed by the sea. One would never say, "if you wrote enough in the book, eventually the book would be able to read itself", so why say, "if you pour enough data into this metal box, eventually the metal box is going to be a thinking entity"?

Alun Lewis : Hans Moravec, on the other hand, is on the side of machine intelligence. He's optimistic that the basics of computer intelligence can be created and nurtured.

Hans Moravec : I think we will see a line of development in machines that will parallel the development of vertebrate nervous systems, of our own brains. [They weren't created then, Hans? No, of course not; a Carnegie Mellon University AI researcher has much more intelligence than to believe such nonsense! -LB] Starting with quite small things, and we have to start small because at the moment the computers are not capable of doing the big job; even though for very narrow tasks they can outperform us, overall they are vastly too weak. But they should be able to do the job of an insect [See langton.html -LB], and it would be wonderful to know in advance exactly what we need to do, and in fact many of us have rough road maps for what will be necessary. I have a four-stage plan for robots, from where they are now to full human intelligence, that passes through forms that have intelligence comparable to a lizard, a mouse, a monkey and a human, but that is only a prognostication, and I don't have all the details. I expect those to emerge, however, as the evolution actually proceeds. The timing of the whole developmental process is in fact set by the amount of computer power that's needed for each stage, and that is progressing so steadily and so reliably that it is predictable [In fact it will soon run into a brick wall, as the sizes of the elements of the processors become comparable with atoms; at that level they experience quantum interference and cannot function reliably, and the only hope after that is a quantum computer (see Xoom) -LB]: you know when things will be big enough to host things of a certain intelligence, and the rate at which the software is evolving I think is compatible with the rate at which the hardware is evolving. [Watch at Xoom for "Genetic Algorithms" -LB]

Alun Lewis : Well, before then, Graham Parker's team at Surrey University will have their robots, and others like them, working at limited tasks, using limited capabilities, as well as employing senses that we cannot use, because we simply don't have them.

Graham Parker : Vision is by far the most important sense for a human being, and certainly we'd anticipate that most machines would rely predominantly on vision information. However, there are other important senses, including touch and feel, known formally as "haptic feedback", and indeed there's quite a lot of work going on in this area, and we have done some work in this area ourselves. It aids the realism of the application; for example, it's a particularly important one for surgical work, as one could imagine: performing remote surgery does require some extremely good sense of touch and feel. And of course one can add further realism through noise generation, for example, and indeed, where appropriate, temperature.

Alun Lewis : New and exotic senses, perhaps a degree of intelligence, though not on an animal scale, more experience and more subtle control systems: put these technologies together in a commercially available package and you've got a machine that will... well, what? Do or help with certain types of dangerous job; explore remote and inhospitable locations. And then there are the promises of the major white goods electronics manufacturers, who claim we'll have autonomous vacuum cleaners soon, perhaps even robots that can tidy up around the home [There is one that already cuts grass unaided, and farm machinery that can collect crops -LB]. Is this the vision of the potential power of robot autonomy? Expensive electronic slaves, condemned to performing largely unnecessary tasks?

Kevin Warwick : Oh no, I think that is humans thinking inside the box. I think really we've got to look outside the box. There's a hell of a lot we simply don't know about. We restrict our lives: "we can't travel faster than the speed of light", this is all boring crap to be honest (Alun giggles), based on our meagre understanding of the world as we know it. [I think you need some lessons in relativity theory, Kev -LB] Let's get out there, let's experience the world... there's a hell of a lot more. Let's look at the world in many dimensions, let's sense it in all sorts of different ways, and all this business of travelling faster or slower will completely change. Humans will take on a completely different form, that of being cyborgs, and the world will be a new oyster to us.

