What Artificial Intelligence Needs to be Intelligent
In 2020, Brian Bergstein wrote, “Artificial intelligence won’t be very smart if computers don’t grasp cause and effect. That’s something even humans have trouble with.” Now, in 2021, Melanie Mitchell adds, “Today’s state-of-the-art neural networks are … bad at taking what they’ve learned in one kind of situation and transferring it to another – the essence of analogy.”
Melanie Mitchell’s thoughts come from an interview with John Pavlus, reconstructed as a dialogue in an article published by Quanta Magazine on July 14, 2021, that can be found here – The Computer Scientist Training AI to Think with Analogies. Mitchell argues that machines need to be able to make good analogies before they can approach human-like intelligence, and this fascinating article surfaces that limit of AI in its search for another key to being truly intelligent. Mitchell has worked on digital minds for decades, and she says they will never truly be like ours until they can make analogies.
Melanie Mitchell is the author of the book Artificial Intelligence: A Guide for Thinking Humans, in which she traces computer science’s struggle to replicate the inner workings of the human brain and so achieve artificial general intelligence, its eventual goal. In the meantime, we may wonder whether AI will ever pick up on humor such as in-jokes or plays on words.
Mitchell offers an excellent example of how analogies work when we speak to one another about our own life experiences.
She maintains that analogy can go much deeper than exam-style pattern matching. “It’s understanding the essence of a situation by mapping it to another situation that is already understood. If you tell me a story and I say, ‘Oh, the same thing happened to me,’ literally the same thing did not happen to me that happened to you, but I can make a mapping that makes it seem very analogous. It’s something that we humans do all the time without even realizing we’re doing it. We’re swimming in this sea of analogies constantly.”
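The gap between this kind of mapping and mere pattern matching can be made concrete with a toy letter-string puzzle, a domain Mitchell herself has studied: if “abc” changes to “abd”, what does “ijk” change to? Below is a deliberately naive sketch (hypothetical helper names, not Mitchell’s actual method) that infers a literal positional rule and reapplies it:

```python
def infer_rule(src, dst):
    # Compare the two strings position by position and record which
    # positions changed, and by how many alphabet steps.
    assert len(src) == len(dst)
    return [(i, ord(b) - ord(a))
            for i, (a, b) in enumerate(zip(src, dst)) if a != b]

def apply_rule(rule, s):
    # Replay the recorded positional shifts on a new string.
    chars = list(s)
    for i, shift in rule:
        chars[i] = chr(ord(chars[i]) + shift)
    return "".join(chars)

rule = infer_rule("abc", "abd")    # the last letter advances by one
print(apply_rule(rule, "ijk"))     # prints "ijl"
```

The point of the sketch is its shallowness: the rule is a literal recipe, so on “xyz” it blindly steps past “z” into a non-letter, whereas a person grasps the essence of the change (“advance the rightmost letter”) and adapts it to the new situation. That adaptive mapping, not the recipe, is what Mitchell means by analogy.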
Mitchell has broadened her research beyond machine learning as she leads the Santa Fe Institute’s Foundations of Intelligence in Natural and Artificial Systems project, which examines how biological evolution, collective behavior (as in the social activity of ants), and the physical body all contribute to intelligence. Analogy, however, plays the biggest part in her work on AI driven by deep neural networks, which loosely mimic the layered neuronal activity of the human brain.
As Mitchell puts it: “Today’s state-of-the-art neural networks are very good at certain tasks, but they’re very bad at taking what they’ve learned in one kind of situation and transferring it to another – the essence of analogy. … It’s a fundamental mechanism of thought that will help AI get to where we want it to be. Some people say that being able to predict the future is what’s key for AI, or being able to have common sense, or the ability to retrieve memories that are useful in the current situation. But in each of these things, analogy is very central.”
Mitchell reminds us that when we humans face situations different from those we were trained for or used to, we figure out what to do by drawing analogies from previous experience. The challenge is getting machine-learning systems – self-driving vehicles, for example – to adjust to such differences analogically. This is the ability to abstract beyond the statistics of the training data to handle unforeseen situations, something AI hasn’t achieved yet.
Takeaway: Artificial intelligence is not yet analogous to human intelligence – if that could ever happen.
Bergstein, Brian (2020). MIT Technology Review, Feb. 19 – https://www.technologyreview.com/2020/02/19/868178/what-ai-still-cant-do/
Mitchell, Melanie (2021). Quanta Magazine, July 14 – https://www.quantamagazine.org/melanie-mitchell-trains-ai-to-think-with-analogies-20210714/
Mitchell, Melanie (2019). Artificial Intelligence: A Guide for Thinking Humans. Picador/Farrar, Straus and Giroux, New York.