Nov 01, 2016

Why Big Data Can’t be Trusted (Part Two) – the Inscrutability of Deep Learning

Human beings are often credited with intuition, the capacity to know something without being able to explain where the knowledge comes from. Intuitions are inherently inscrutable, even to ourselves. Interestingly, as Artificial Intelligence researchers have tackled the challenge of interrogating big data with "deep learning" algorithms, they have found the algorithms remarkably good at scouring masses of data and making accurate predictions, yet the programmers cannot explain how the algorithms reach their conclusions. Like their human creators, the algorithms are becoming inscrutable.


Many psychologists attribute intuition in humans to the hidden workings of our subconscious minds and innate instincts, as well as to tacit knowledge – perception and cognition skills learned through experience and practice. Others might add conscience or mystical sources ("the still small voice within") as potential inputs to intuition, but I think we can leave that aside for now. Intuition probably serves a key purpose in facilitating efficient decision-making, a highly beneficial attribute that helps explain the evolutionary success of the human species. Human experience, sensation, memory and instinct provide a deep database for guiding human behaviors to positive results; intuition gives us immediate access to this database without wasting time and energy on complex and relatively slow conscious thinking. If every survival choice had required conscious thought, it is hard to see how humans would have survived.

While intuition is valuable, it can also lead us astray in a variety of ways. Sometimes we make very bad decisions on the basis of a “hunch.” It can be impossible to tell the difference between a valid intuitive realization and a post-hoc rationalization. Intuition readily serves as a mask for bias and prejudice. We often deceive ourselves.


—–

Recent research in Artificial Intelligence (AI) has focused on techniques known as "deep learning," in which layered artificial neural networks, loosely inspired by the structure of biological brains, learn from data through iterative training: the network's internal parameters are adjusted over many rounds, with the adjustments that improve performance retained and refined in succeeding rounds. Deep learning algorithms have been remarkably successful at producing predictive models that are far more accurate and efficient than traditional statistical modeling (e.g., multiple linear regression). They are now being used in speech and image recognition, robotics, automated customer management, and games (such as AlphaGo), as well as in medical diagnostics, drug and genetics research, stock trading, remote sensing and logistics.
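
To get a feel for the difference, here is a minimal sketch of my own (Python and scikit-learn on synthetic data; it illustrates the general point, not any of the research discussed here). A regression model exposes one readable coefficient per input feature, while even a small neural network spreads what it has learned across thousands of interdependent weights with no individual meaning.

# Illustrative sketch: why a trained network is harder to read than a regression.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for "big data" with 20 measured features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Regression: one coefficient per input feature -- easy to read off.
reg = LogisticRegression(max_iter=1000).fit(X, y)
print("regression parameters:", reg.coef_.size)  # 20

# Small neural network: often more accurate, but its parameters are
# thousands of tangled weights with no stand-alone interpretation.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X, y)
n_params = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print("network parameters:", n_params)  # several thousand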

Researchers have identified two related issues with such algorithms. The first is that the trained models are very complex and combine data and analytical features in ways the researchers cannot explain. In essence, the algorithms have achieved a complexity that cannot simply be reverse engineered to figure out what they do. Much like human intuitive thinking, the algorithm has learned to get a good result, but no one can explain why. The algorithm, like a human, is inscrutable. The second problem is that the algorithms cannot, on their own, distinguish correlation from causation. In diagnostic medicine, for example, understanding causal mechanisms is critically important: imagine a doctor prescribing a treatment that an algorithm says will work, though no one knows why.
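
The second problem can be shown with a toy sketch (again my own, on made-up data, not any real medical study): a hidden cause drives both a measurable marker and a disease, so a model predicts the disease accurately from the marker, yet intervening on the marker would accomplish nothing.

# Illustrative sketch: high predictive accuracy without causation.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
hidden_cause = rng.normal(size=n)                        # the unobserved real cause
marker = hidden_cause + rng.normal(scale=0.3, size=n)    # a correlated symptom
disease = (hidden_cause + rng.normal(scale=0.3, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    marker.reshape(-1, 1), disease, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                      random_state=0).fit(X_train, y_train)

# High accuracy -- yet "treating" the marker would leave the disease
# untouched, because the model has learned a correlation, not a cause.
print("accuracy:", model.score(X_test, y_test))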

—–

When we deal with other human beings, we know that they, like us, are intuitive and therefore, to an extent, inscrutable. In addition, we know that other humans can seek to deceive and manipulate. Our instincts and intuitions have developed in a context of constant interaction with other human beings. We have innate pattern-recognition capabilities that can glean information from the subtle cues of clothing, eye movements, posture, verbalizations – even chemical signals. In this very rich information environment, we learn the art of judging other people, and of knowing whom to trust. As social creatures, we also learn how we fit into the social order and how to manage the subtleties of power relationships. We are highly sensitive to power asymmetry between humans, and well aware from history of how badly such asymmetries can turn out.

The thought that we will rely increasingly on complex self-learning algorithms that give us results we cannot interpret may not seem particularly chilling. Yet any such human-algorithm relationship will be subject to one particular asymmetry that we should be concerned about: the asymmetry of information. To the extent this asymmetry leads humans to cede authority to inscrutable algorithms (as Cathy O'Neil discusses in Weapons of Math Destruction), a power dynamic will come into play and the risks of catastrophe will rise dramatically. Just consider the incentives for hackers, programmers, corporate interests or governments to exploit such inscrutability for their own ends, not to mention the risk of inadvertent or undiscoverable errors leading to wildly inappropriate or destructive algorithmic behaviors.

—–

It took millions of years for humans to develop some capacity for determining whom to trust among their peers. For some thousands of years, humanity has been trying to learn to navigate power relationships in increasingly large and complex institutions by deciding which can be trusted. In a few years, we may have to invent procedures and protocols to help us decide whether and when to trust the machines and programs that we are now creating.

There is a reason why many regard AI as raising potential existential risks (see http://swedenborgcenterconcord.org/chapter-iii-the-technology-race-to-super-intelligence/).

—–

Recent sources about big data and deep learning:

"Is Artificial Intelligence Permanently Inscrutable?" by Aaron Bornstein. Nautilus, September 1, 2016. http://nautil.us/issue/40/learning/is-artificial-intelligence-permanently-inscrutable

Russ Roberts's interview of Cathy O'Neil on EconTalk, October 3, 2016. http://www.econtalk.org/archives/2016/10/cathy_oneil_on_1.html

Russ Roberts's interview of Susan Athey on EconTalk, September 12, 2016. http://www.econtalk.org/archives/2016/09/susan_athey_on.html

One Response to “Why Big Data Can’t be Trusted (Part Two) – the Inscrutability of Deep Learning”

  1. admin says:

    Lest anyone think that deep learning is an esoteric AI concept only being explored in a handful of academic or research environments, this news (which came across my desk after the above was posted) makes it clear that Microsoft and others are making deep learning tools available to virtually anyone. http://venturebeat.com/2016/10/25/microsoft-launches-cognitive-toolkit-2-0-beta-with-python-support/
