Human beings are often credited with intuition, the capacity to know something without being able to explain where the knowledge comes from. Intuitions are inherently inscrutable, even to ourselves. Interestingly, as Artificial Intelligence researchers have tackled the challenge of interrogating big data with “deep learning” algorithms, they have found that the algorithms are remarkably good at scouring masses of data to make accurate predictions, yet the programmers cannot explain how the algorithms reach their conclusions. Like their human creators, the algorithms are becoming inscrutable.
For millennia, parents, philosophers, theologians and educators have grappled with the challenge of teaching morality to young people. Many great thinkers have proposed theories, models, practices and programs designed to instill virtue, yet people young and old consistently fail to live up to the morals their elders promote. Microsoft recently experienced this phenomenon in relation to artificial intelligence. As reported in the New York Times, Microsoft launched a self-learning chatbot named Tay, designed to emulate a 19-year-old female, into the Twitter-sphere. Within 24 hours, the program had to be removed: it had been quickly corrupted by malicious attacks that turned Tay into a “sexist, Holocaust-denying supremacist” (according to The Week, April 8, p. 18). It turns out that the company your chatbot keeps is important to its moral development. True, the program is not really a sentient human and has no morals per se, but the social learning the incident demonstrates is a reminder of how powerful social influences can be on the impressionable. Moreover, while the influence of social networks, e.g. family and communities, on human moral development has always been apparent, modern technology amplifies these influences in ways that we may not appreciate.
This week Aeon magazine published a provocative article by Michael Schulson suggesting (half-heartedly) that we might consider government regulation in responding to websites and apps that are designed to promote compulsion or addiction, just as we do with drugs or casinos. Schulson traces the manipulative tricks of Internet designers to the experiments of B. F. Skinner, who found that pigeons facing variable timing of rewards “went nuts… One pigeon hit the Plexiglas 2.5 times per second for 16 hours.” He suggests that our individual battles with legions of savvy, well-funded Internet companies are “not a fair fight,” and yet, as we do with gambling or drugs, we blame the addicts rather than the purveyors. Can we rely on industry self-regulation to help us, or do we need government regulation? In our series on The Human Race and the Technology Race, we focused on the only realistic option, personal self-regulation, and we offered the four “A” tips: AVOID; ADOPT; ADAPT; ADEPT.