AI Has Yet to Understand Causal Relations

Apr 19, 2022 | Machine Learning / Artificial Intelligence

In the February 19, 2020, issue of MIT Technology Review (Artificial Intelligence), Brian Bergstein[1] wrote an article, “What AI still can’t do,” that opens with this statement:

Artificial intelligence won’t be very smart if computers don’t grasp cause and effect. That’s something even humans have trouble with.

Bergstein recounts many of AI’s successes, such as “diagnosing diseases, translating language, and transcribing speech,” outplaying humans at strategy games, and generating photorealistic images.

However, under what he labels “glaring weaknesses,” he lists machine-learning systems being duped by inputs they have not seen before and self-driving cars flummoxed by simple scenarios. He also notes “catastrophic forgetting,” in which learning a new task overwrites what a system had learned for an earlier one. Bergstein concludes that

These shortcomings have something in common: they exist because AI systems don’t understand causation.  They see that some events are associated with other events, but they don’t ascertain which things directly make other things happen.

Bergstein quotes Elias Bareinboim: “AI systems are clueless when it comes to causation.” Bareinboim, director of the new Causal Artificial Intelligence Lab at Columbia University, is pursuing this question and studied under Judea Pearl, a Turing Award-winning scholar who has shaped the science of causal reasoning in AI. AI can recognize correlations thanks to the many patterns found in huge data sets, “but there’s a growing consensus that progress in AI will stall if computers don’t get better at wrestling with causation.”
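The gap between correlation and causation can be made concrete with a toy simulation (a hypothetical sketch, not from Bergstein’s article): when a hidden confounder drives two variables, they correlate strongly in observational data, yet intervening on one has no effect on the other. An association-only learner cannot tell these two situations apart from observational data alone.

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
n = 20000

# Observational world: a hidden confounder Z drives both X and Y.
z = [random.gauss(0, 1) for _ in range(n)]
x_obs = [zi + random.gauss(0, 1) for zi in z]
y_obs = [zi + random.gauss(0, 1) for zi in z]

# Interventional world: we set X ourselves (Pearl's do(X)),
# severing its link to Z. Y is generated exactly as before.
x_do = [random.gauss(0, 1) for _ in range(n)]
y_do = [zi + random.gauss(0, 1) for zi in z]

# X and Y correlate strongly in observation (theoretically 0.5),
# but the intervention reveals X has no causal effect on Y (~0.0).
print(f"observational corr(X, Y):  {pearson(x_obs, y_obs):.2f}")
print(f"interventional corr(X, Y): {pearson(x_do, y_do):.2f}")
```

A model that learned only the observational association would wrongly predict that manipulating X changes Y; the interventional run shows it does not. Distinguishing the two is exactly the kind of “what if” reasoning that Pearl’s work formalizes.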

 If machines could grasp that certain things lead to other things, they wouldn’t have to learn everything anew all the time – they could take what they have learned in one domain and apply it to another.  And if machines could use common sense, we’d be able to put more trust in them to take actions on their own, knowing that they aren’t likely to make dumb errors. Today’s AI has only a limited ability to infer what will result from a given action.

This brings us to the question of “what if.” Bareinboim “argues that anyone asking, ‘what if’ … should start not merely by gathering data, but … determine whether the available data could possibly answer a causal hypothesis.”

“‘What if’ questions are the building blocks of science, of moral attitudes, of free will, of consciousness,” says Pearl, who cannot be drawn into predicting how long it will take computers to be able to reason causally. He thinks the first move should be to develop machine-learning systems that combine data with available scientific knowledge: “We have a lot of knowledge that resides in the human skull which is not utilized.”

Bergstein makes the following hopeful statement that computers will eventually grasp causal relations:

 Over time, with enough meta-learning about variables that are consistent across data sets, a computer could gain causal knowledge that would be reusable in many domains.

This is a fascinating article about a possible innovation in computing, and it is worth one’s time and thought.

Take-away: As it stands, fully self-driving cars are still some distance off, and artificial general intelligence is not even foreseeable. But causal reasoning for computers seems within reach, given due diligence. Computer scientists are “on it.”

Sources:

[1] Bergstein, Brian (2020). “What AI still can’t do.” MIT Technology Review, Feb. 19, 2020 – https://www.technologyreview.com/2020/02/19/868178/what-ai-still-cant-do/