As I was doing some research last week for a previous article, I stumbled upon a concept I had never heard of before: the AI effect. To get straight to the point, the idea behind it is that every time AI research reaches an important milestone, the result is no longer considered AI.

Let’s take an example to understand what that means: chess. It took decades of research to build a machine that could beat top human players. But as soon as it happened, people started saying it wasn’t real artificial intelligence. Even though the computer was anticipating every possible move and following the path with the highest odds of winning, it was dismissed as mere brute force in the end, too mechanical to count as intelligence at all.

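To see why the “brute force” label stuck, here is a minimal sketch of the kind of game-tree search a chess engine performs: plain minimax with a fixed depth cutoff. This is not Deep Blue’s actual algorithm – the real engine added alpha-beta pruning, a handcrafted evaluation function, and dedicated hardware – and the GameState interface below is a hypothetical placeholder, not a real chess library.

```python
# A deliberately "brute force" game-tree search: examine every legal move
# down to a fixed depth and keep the line with the best guaranteed score.
# GameState and its methods (is_terminal, evaluate, legal_moves, apply)
# are hypothetical placeholders standing in for a real chess library.

def minimax(state, depth, maximizing):
    """Return (score, best_move) for `state`, searching `depth` plies ahead."""
    if depth == 0 or state.is_terminal():
        # At the search horizon, fall back to a static evaluation
        # (e.g. material count) instead of playing the game out.
        return state.evaluate(), None

    best_move = None
    if maximizing:
        best_score = float("-inf")
        for move in state.legal_moves():
            score, _ = minimax(state.apply(move), depth - 1, False)
            if score > best_score:
                best_score, best_move = score, move
    else:
        best_score = float("inf")
        for move in state.legal_moves():
            score, _ = minimax(state.apply(move), depth - 1, True)
            if score < best_score:
                best_score, best_move = score, move
    return best_score, best_move
```

The point is visible even in this toy version: the program wins by exhaustively scoring positions, which is exactly what led people to say it was “only brute force” rather than intelligence.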
Do we still consider face detection to be AI? We’re so used to having it in our smartphones and cameras every time we take a picture that we hardly notice it.

Do we still feel we’re using AI when we turn to a translation service such as Google Translate? Not really. The same goes for the translation option in our Facebook or Twitter feeds.

Do we still feel we’re using AI when we follow our GPS for directions? When it helps us avoid traffic jams?

Surprisingly, it seems that every time we feel we are getting close to AI, it disappears as soon as it’s here and usable. As Rodney Brooks put it: “Every time we figure out a piece of it, it stops being magical; we say, ‘Oh, that’s just a computation.’”

In an article on the AI effect, Katherine Bailey suggests an interesting approach. She explains that many challenges we once thought could only be handled by strong AI have turned out, step by step, to be solvable with weak AI. Instead of thinking that true AI is whatever hasn’t been solved yet – which also tends to suggest that weak AI is the way to strong AI – it’s better to think about AI in a different frame: things that can be solved with weak AI and things that can’t. And as weak AI becomes stronger and capable of solving more and more complex problems, we’ll simply move problems from the “can’t” category to the “can be solved with weak AI” category.

I tend to think there is another element that keeps us from considering AI as AI, one that isn’t discussed much: failure. People want artificial intelligence to be failure-proof before they will consider it as such. If it’s really intelligent, it should be right all the time.

Deep Blue didn’t win every game against Kasparov. The GPS is sometimes wrong. Not every face gets detected by the camera. And some translations are sooo bad! How can it be intelligent if it’s not right all the time? How can it be intelligent if I can see obvious moments of failure? Is Watson intelligent if it can’t detect 100% of cancers correctly? Probably not that much. We’re tough on AI. Probably because it makes us feel better to think so.

While AI is not perfect, there is a pragmatic way forward: balancing AI with human intelligence. I discovered another thing I had never heard of before: Advanced Chess. That’s the title of the Wikipedia page, but I prefer the other term, Centaur Chess. It basically pairs an AI with a human player. One of the highlighted benefits of such a pairing is worth looking at: it’s said to produce “blunder free games with the qualities and the beauty of both perfect tactical play and highly meaningful strategic plans”. Instead of opposing sides – humans and machines – they’re paired to produce things never seen before. It is also described as providing “an insight into the thought processes of strong human chess players and strong chess computers, and the combination thereof”. Probably the best way to evaluate the benefits and limitations AI has to offer.
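As a thought experiment, the division of labor in such a pairing might look like the minimal sketch below, where the machine filters out tactical blunders and the human applies strategic judgment among moves the machine already considers sound. Both helper functions (engine_top_moves, ask_human_to_choose) are hypothetical placeholders, not any real chess engine API.

```python
# Illustrative centaur-chess turn: the engine proposes its top-ranked
# candidate moves, and the human picks the one that best fits the
# long-term plan. Both function arguments are hypothetical stand-ins.

def centaur_move(state, engine_top_moves, ask_human_to_choose, n=3):
    # Engine side: tactical heavy lifting. Rank the n best moves by
    # evaluation, weeding out blunders before the human ever sees them.
    candidates = engine_top_moves(state, n)
    # Human side: strategic judgment. Choose among moves the engine
    # already considers sound.
    return ask_human_to_choose(state, candidates)
```

In this framing, the “blunder free games” quoted above come from the engine’s filter, while the “highly meaningful strategic plans” come from the human’s choice.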