Parents, philosophers, theologians, and educators have grappled for millennia with the challenge of teaching morality to the young. Many great thinkers have proposed theories, models, practices, and programs designed to instill virtue, yet people young and old consistently fail to live up to the morals their elders promote.

Microsoft recently experienced this phenomenon in relation to artificial intelligence. As reported in the New York Times, Microsoft launched a self-learning chatbot named Tay, designed to emulate a 19-year-old woman, into the Twittersphere. Within 24 hours the program had to be removed: exposure to a stream of malicious posts had quickly corrupted it, turning Tay into a “sexist, Holocaust-denying supremacist” (The Week, April 8, p. 18). It turns out that the company your chatbot keeps is important to its moral development. True, the program is not really a sentient human and has no morals per se, but the social learning the incident demonstrates is a reminder of how powerfully social influences act on the impressionable. Moreover, while the influence of social networks, e.g., family and community, on human moral development has always been apparent, modern technology amplifies these influences in ways we may not fully appreciate.
I recently read a series of articles dealing with conscience and culture posted under the Questions for a Resilient Future project of the Center for Humans and Nature. At about the same time, I read a briefing in The Week on CRISPR, a technique that has vastly simplified gene editing; the briefing was headlined “Editing the human race.” (See also the post in this forum: “Engineering Better Babies,” November 20, 2015.) CRISPR is one among many technologies that, by their very existence, test our collective conscience.