Taming AI before it becomes Dangerous

Jun 3, 2022 | Machine Learning / Artificial Intelligence

The Aug. 11, 2021, issue of Time Magazine published the article Uncontrolled AI Can Endanger People’s Lives. We Must Enforce Stronger Safeguards by Kate Crawford[1], warning the world of the misguided use of emotion recognition tools (ERTs), which impute emotional states from snapshots of people’s faces and routinely get them wrong. The article makes a critical statement:

This industry is predicted to be worth $56 billion by 2024, and yet there is considerable scientific doubt that these systems are accurately detecting emotional states at all. A landmark 2019 review of the available research found no reliable correlation between facial expression and genuine emotion.

Others have taken issue with ERTs as well, including the Brookings Institution and the European Union, which is described as “the first to attempt an omnibus proposal to regulate AI.” The article likewise cites scholars Michael Veale[2] and Frederick Borgesius[3], who criticize AI’s use of ERTs as illegal. The British Medical Journal reviewed 232 machine-learning algorithms developed for Covid patients and “found none of them fit for clinical use; they may have harmed patients.”

Crawford joined researcher Alex Campolo[4] to label this attitude toward AI “enchanted determinism”: the belief that AI systems are both magical and superhuman, beyond what we can understand and regulate, yet deterministic enough to be relied upon for predictions about life-changing decisions.

In both her Time article and her book Atlas of AI[5], Crawford insists that AI must be monitored and supervised. Amazon’s review of her book succinctly summarizes it: “This is an urgent account of what is at stake as technology companies use AI to reshape the world.”  Crawford continues her criticism:

The growth of AI might seem inevitable, but it is being driven by a small, homogeneous group of very wealthy people based in a handful of cities without any real accountability. … We urgently need stronger scientific safeguards and controls.

Most recently, The Algorithm, MIT Technology Review’s newsletter (Sept. 8, 2021), presented three feature stories on Facebook’s harmful algorithms: Frances Haugen, the Facebook whistleblower, who testified before Congress about how dangerous Facebook’s algorithms were; the story of how Facebook, under AI lead Joaquin Quiñonero Candela, became addicted to spreading misinformation; and Sophie Zhang’s exposure of the fake Facebook accounts being used to sway elections globally.

Whether in a particular industry such as social media or in any other organization, artificial intelligence is a dangerous tool of bias and misinformation unless it is carefully monitored. AI is just a tool that must be used intelligently by the engineers and computer scientists who wield it; it lacks the conscious intelligence to self-regulate.

Take-away: Since AI does not have consciousness, it is amoral and insensitive. Its users are the humans who bring their ethics and morality to the application of AI in human services.


[1] Crawford, Kate (2021); senior principal researcher at Microsoft Research, professor at USC Annenberg, and author of Atlas of AI. The Time article was reprinted in Refind Inc.’s Oct. 8, 2021, newsletter.

[2] Veale, Michael; technology policy academic working on IT and law; Associate Professor in the Faculty of Laws at University College London (UCL).

[3] Borgesius, Frederick; Professor of Law at iHub, the Interdisciplinary Hub on Security, Privacy, and Data Governance, and the Institute for Computing and Information Sciences, Radboud University, Netherlands.

[4] Campolo, Alex; researcher and co-author, with Kate Crawford, of “Enchanted Determinism: Power Without Responsibility in Artificial Intelligence” (Engaging Science, Technology, and Society, 2020).

[5] Crawford, Kate (2021); Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, New Haven, CT.