Putting Intelligence back into AI

February 25, 2020 | AI, Digital Agents

We are constantly bombarded with breaking news about the latest and greatest improvement in Artificial Intelligence (AI). So much so that I felt almost silly adding the (AI) to that last sentence 🙂 But, oddly, no matter how many times we hear about the newest improvement, the AI never seems to be getting any smarter. Sure, by leveraging deep learning, convolutional neural nets, and customized massively parallel computing arrays we get improvements – the nets train faster, they recognize more classes of kittens – but they don’t really seem to be smarter.

We have all seen the reported advances in automating the boring, mundane task of driving. First it was warning systems – buzzers for the car in front of you braking too hard, or ‘lane departure’ alarms. Then we saw automated braking and self-parking cars – AI at its finest, we were told. This trend has culminated in fully self-driving cars: cars that offer speed control, braking, and lane changing, along with the ability to automatically detect and classify potential road hazards and then, based on that classification, predict what is likely to happen and take steps to avoid accidents. All the bells and whistles. This added capability is an amazing thing. Capability adds to the range of things a system (human or machine) can do. The marketers would have you believe that they are delivering truly intelligent cars – but are they?

What is intelligence anyway? Many AI companies would have you believe that “intelligence” is defined by the capabilities they add – really advanced capabilities that can find patterns in data, classify images (more than just cats – a recent demonstration showed that neural-net-powered image classification can outperform doctors at detecting cancers in x-ray images), and land booster rockets on floating barges. But capability is not intelligence. One can use a capability to do the right thing, or one can use that same capability to do something really stupid.

Intelligence has been defined as being able to make the right decision in dynamic, uncertain situations. But to do this, the intelligent system has to be able to evaluate its choices within the context of the current situation. A given option may be the intelligent thing to do in one situation, but absolutely stupid in another. And therein lies the rub.

Picture a nice afternoon: your cab driver is taking you to a business lunch. The restaurant is in a trendy area adjacent to a nice residential neighborhood. As the driver turns the corner, you look out and see a speed limit sign – one that some kids have tampered with, adding black tape so that it seems to read 85mph instead of 35mph. Of course, you know that they don’t put high speed limits in residential areas, and certainly not on narrow streets with on-street parking. It must be a joke. You start to laugh – and then you notice that the driver has floored the accelerator, intent on reaching the 85mph speed limit.

This is what happened during an ‘experiment’ with a self-driving car this week. The car used its amazing image processing capabilities to read the sign, and it matched the 85mph classification with a high degree of confidence. It used its advanced rule system to conclude that the optimal decision was to travel at the posted speed limit. Nowhere in this ‘intelligent’ machine was there a check for “Does it make sense that there would be an 85mph speed limit here?” So, very intelligently, it did the stupid thing: it hit the gas.
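The missing sanity check could be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual control code: the road types, the plausible speed ranges, and the function names are all assumptions made up for the example. The idea is simply that a classifier's reading should be rejected when it contradicts the context, falling back on the system's prior belief.

```python
# Hypothetical plausibility check on a sign classifier's output.
# Road types and speed ranges below are illustrative assumptions only.
PLAUSIBLE_LIMITS = {
    "residential": range(15, 41),   # 15-40 mph
    "arterial":    range(25, 56),   # 25-55 mph
    "freeway":     range(45, 86),   # 45-85 mph
}

def accept_speed_limit(detected_mph, road_type, current_limit_mph):
    """Return the speed limit the planner should use.

    Accepts the classifier's reading only when it is plausible for
    the current road context; otherwise keeps the prior belief.
    """
    plausible = PLAUSIBLE_LIMITS.get(road_type)
    if plausible is None or detected_mph not in plausible:
        # Reading contradicts the road context: keep what we already knew.
        return current_limit_mph
    return detected_mph

# The tampered sign: an "85" read on a residential street is rejected.
print(accept_speed_limit(85, "residential", 35))  # -> 35
print(accept_speed_limit(30, "residential", 35))  # -> 30
```

The point is not the table of numbers – it is that the decision layer cross-checks perception against context before acting, which is exactly the step the ‘experiment’ above showed to be missing.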

A number of years ago we presented a paper at the National Institute of Standards and Technology workshop on Performance Metrics for Intelligent Systems (PerMIS). The focus of the paper was that there is a significant difference between intelligence, capability, and autonomy. In essence, a system like a self-driving car can be

  • capable of controlling the vehicle – doing what it decides is appropriate,
  • autonomous – able to do what it decides to do, without getting permission, but
  • still not intelligent enough to decide what the appropriate thing to do is.

Using that approach, an ‘intelligent’ self-driving car that hits 85 on a residential street is not intelligent, no matter how many CPUs, deep learning neural nets, and advanced AI cores it has under the hood. However, with all that fast hardware it can be powerfully stupid.

If you would like a copy of our NIST paper, get in touch.