Nov 07, 2013

Busted: Three 21st Century Technology-Driven Myths

As the digital age has matured, new technologies have emerged that change the way we do things. Some of these changes have been hyped as vastly more powerful and more efficient than the old-fashioned ways. Three prominent examples – multitasking, reading digital books, and social networking – have been touted as superior, as the way of the future. In all three cases, I found my own capabilities limited, leading me to conclude that as a child of the pre-digital age I would inevitably be left behind in the great transformation. Recent scientific studies, however, have shown that the hype is off the mark and the myths are untrue. Human bodies, brains and social behaviors are incredibly complex and interdependent, and they do not always fit the technological infrastructure we are creating. Instead of blindly accepting these myths, we need to make informed choices about our use of technology – or we risk losing the very best of what makes us human.

Multitasking:

As the digital age arrived and matured, our society moved from Future Shock (Alvin Toffler, 1970) toward Present Shock (Douglas Rushkoff, 2013), and I found my capacity for multitasking increasingly challenged. When I entered the professional work environment in 1975, I had never used a computer, there were no cell phones in use, and the Internet was a nascent concept limited to a few academic and research institutions. Over the next several decades, these and other technologies exploded, and the productivity of the workplace and the pace of life accelerated dramatically. One consequence was the popularization of the idea that humans were increasing their capacities by “multitasking”. The term originated in the computer industry, where it denoted designs that facilitated (real or apparent) parallel rather than sequential processing of discrete computing tasks.
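To make the computing sense of the term concrete, here is a minimal sketch in Python (my own illustration, with made-up task names, not drawn from any of the sources cited) of how a single processor can create the appearance of parallelism by rotating rapidly among tasks, one small step at a time:

    # A minimal sketch of cooperative "multitasking" on a single worker:
    # the tasks never truly run at the same time; the scheduler simply
    # switches among them quickly enough to look parallel.

    def task(name, steps):
        """A task that yields control back to the scheduler after each step."""
        for i in range(1, steps + 1):
            print(f"{name}: step {i} of {steps}")
            yield  # hand control back so another task can run

    def round_robin(tasks):
        """Advance each task one step at a time, in rotation, until all finish."""
        queue = list(tasks)
        while queue:
            current = queue.pop(0)
            try:
                next(current)          # advance this task by one step
                queue.append(current)  # send it to the back of the line
            except StopIteration:
                pass                   # task finished; drop it from the rotation

    # Hypothetical tasks: check email, write a report, answer the phone.
    round_robin([task("email", 3), task("report", 3), task("phone", 2)])

The tasks appear to run “in parallel”, but at any instant the worker is doing exactly one thing – which, as it turns out, is precisely the point the cognitive research makes about human attention.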

I learned early that I was not very good at multitasking. I tend to forget the details of what I was doing when I switch to a new task, so when I get back to the first task I have to start over again. As a coping strategy, I developed a reliance on lists and notes, jotting down or crossing off an item before jumping to the next. At the same time, I often felt pressured, as if I were under a constant acceleration pushing me back in my chair. Surviving that rush could provide a wonderful sense of exhilaration – but at times it felt overwhelming and exhausting. Significantly, I learned that my judgment and my ability to make good decisions deteriorated in high-stress “multitasking” environments. As a result, I felt like a “plodder” who would never be as productive or efficient as those who could glide effortlessly through the chaos.

It wasn’t long before evidence began to confirm that multitasking is actually an illusion. We are not able to truly focus on multiple tasks at the same time, and the effort involved in trying to do so results in a deterioration of performance. Terms such as “cognitive load” and “switching costs” were coined to explain the downsides of multitasking – fundamentally, the human brain is more efficient when it works on one problem at a time. As Harold Pashler put it in 1994 (“Dual-Task Interference in Simple Tasks”), “people have surprisingly severe limitations on their ability to carry out simultaneously certain cognitive processes that seem fairly trivial from a computational standpoint.”
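To see why “switching costs” matter, here is a back-of-the-envelope model in Python. The numbers are purely illustrative assumptions of mine (thirty minutes of focused work per task, a two-minute re-orientation penalty per switch), not figures from Pashler or any other study:

    # Toy model: total time for sequential vs. interleaved work.
    # Assumes each task needs 30 minutes of focused work and every
    # switch between tasks costs 2 minutes of re-orientation.
    # These numbers are illustrative, not taken from the literature.

    WORK_PER_TASK = 30  # minutes of focused work per task
    SWITCH_COST = 2     # minutes lost re-orienting after each switch
    SLICE = 5           # minutes worked before switching, when interleaving

    def sequential(num_tasks):
        """Finish each task completely before starting the next."""
        switches = num_tasks - 1  # one switch between consecutive tasks
        return num_tasks * WORK_PER_TASK + switches * SWITCH_COST

    def interleaved(num_tasks):
        """Rotate among tasks in small slices, paying a cost per switch."""
        total_slices = num_tasks * (WORK_PER_TASK // SLICE)
        switches = total_slices - 1  # a switch after every slice but the last
        return num_tasks * WORK_PER_TASK + switches * SWITCH_COST

    for n in (2, 3, 4):
        print(f"{n} tasks: sequential {sequential(n)} min, "
              f"interleaved {interleaved(n)} min")

Under these assumptions, two interleaved tasks take 82 minutes versus 62 minutes done sequentially – and the gap widens as the number of tasks and switches grows.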

The literature is deep and the results are consistent – human beings are not good at functional multitasking (although they may improve performance by employing strategies for prioritizing and sequencing tasks that make it look like they are multitasking). At the same time, however, we increasingly find ourselves in an intensely multitasking environment, as computers, tablets, smart phones, GPS units and other digital devices demand our time and attention – 24/7. Indeed, we would not need mandatory bans on texting while driving if we were competent multitaskers.

Many publications have probed these issues, including two recent books, What Technology Wants by Kevin Kelly (2010) and The Shallows: What the Internet Is Doing to Our Brains by Nicholas Carr (2010). Kelly coined the term “the technium” to describe technology as an active agent in its own right, and he points out that our digital devices now control us rather than the other way around. The sound of a beeping smart phone compels our attention, and we are drawn to respond. According to studies cited by Carr, the brain rewards us with a rush of pleasure-inducing dopamine when we answer the call! But the consequences of our 24/7 online presence include a deterioration in cognitive performance.

In a 2010 study on the perceived academic effects of instant messaging use, Junco and Cotten concluded that “students who engaged in more multitasking reported more problems with their academic work.” In a presentation at The Economist’s conference in 2011, Carr described a study in which students were separated into two rooms. In one room they were required to close their laptops; in the other they were not. After the lecture, they were given a test, and the no-laptop group performed significantly better. When the laptop group was sorted into those who had been engaged in tasks relevant to the course material and those who had not, the “relevant-task” group actually performed worse.

We all recognize that this problem is getting worse. Our interactive digital devices are becoming more engaging, more insistent and more ubiquitous, and our ability to resist their siren call seems remarkably limited. We have all been known to scan our smart phones in meetings, during conversations, while walking or driving. The practice may be making us faster, but it is not making us smarter.

Digital Books:

Last year, on my iPad, I read Kevin Kelly’s book about the impacts of technology (What Technology Wants, cited above). It’s a fabulous book and I devoured it, but shortly afterward I found I had a hard time recalling any details of what the book actually said. After a number of similarly disappointing digital book experiences, I concluded that my 62-year-old brain was just wired the old-fashioned way – for paper. When reading longer pieces of text, I tend to rely on the cues that paper provides to help cement the ideas in memory: location on the page, the place in the book (e.g. front, middle, back), and the way a paragraph looks on the page. The tactile and visual interaction with the physical book helped me process and retain what I was reading. These cues were absent from my experience of digital books. Sadly, I concluded that I would never be able to make the digital book transition, and I have retreated to my old habit of reading books on paper.

I’ve talked with other people about this, and some reported similar experiences. Many people like to highlight passages or put marks and notes in the margins (not recommended for library books!), and engaging physically and mentally with text in this way significantly increases their recall. Highlighting or making notes in eBooks, while possible, seems to require an inordinate amount of mental focus, distracting from rather than reinforcing the content.

My suspicion had been that these deficits in the eBook reading experience stemmed from the way we learned to read: pathways were laid down in the brain many years ago, and older individuals simply lack the bandwidth to build all the new pathways needed to read effectively on digital devices.

However, this does not seem to be the case. Scientific studies on the reading of text are remarkably few in number, but they are finding that text on paper has significant advantages over text on screens in terms of cognition and recall, for old and young alike. One recent paper (Mangen et al., “Reading Linear Texts on Paper versus Computer Screen”) found that “reading linear narrative and expository texts on a computer screen leads to poorer reading comprehension than reading the same text on paper.” The authors speculated that the “lack of spatiotemporal markers of the digital texts” and the “additional cognitive costs” of reading in a digital medium contributed to the decline.

Nicholas Carr provided an interesting summary of the history of paper, and of the tradeoffs between print and computer text, in his recent Nautilus article “Paper Versus Pixel”. Carr points out that “college students continue to prefer printed textbooks to electronic ones by wide margins” and concludes, “we were probably mistaken to think of words on screens as substitutes for words on paper. They seem to be different things, suited to different kinds of reading and providing different sorts of aesthetic and intellectual experiences.”

My iPad has now been relegated to functioning as a photo frame.  It does a wonderful job.

Social Networking:

Ten years ago, Harvard student Mark Zuckerberg helped launch the social networking movement with a prank website. The rest is history. In less than ten years, Facebook became a $120B powerhouse with more than a billion users. The idea caught on because it was cool – you could have a lot of friends, let them know what you were up to, and share photos and other fun stuff. And it was free! While the concept has had a few skeptics, and the corporate giant has come under criticism for its boundary-blurring privacy standards, intensive data mining and aggressive moves into advertising (not so free anymore), the public has voted with its eyeballs – billions of them. By implication, if you are not on Facebook, you must be a loser.

I started as a reluctant Facebook user a couple of years ago. Over time I have added friends (101), viewed a lot of photos, liked some posts, played a few games and posted items of interest – mostly my own blog posts from this website. Overall, my experience has been mixed, and I have often felt like a “lurker”, passively peering into the intimate lives of other people. While most of what gets posted (including all of the ads – “recommended posts”) is of little interest, scrolling through snippets of photos, links and updates from all those people can feel quite satisfying – until I realize how much time has passed. My overall impression is that Facebook is an extraordinary time sink. While I occasionally find items of interest, I usually end a session feeling like a voyeur, both bored and uncomfortable.

In the face of the massive hype about social networking, and the constant reinforcement from individuals and the media – including the ubiquitous Facebook “Like” buttons – I have felt like an anachronism. Perhaps I have antisocial tendencies, or such a limited attention span that I simply cannot “catch the wave” and embrace this new and marvelous way to build relationships and networks of friends and contacts. I just don’t have the bandwidth.

This piece of research recently caught my attention: in a paper published in August 2013 in the online journal PLOS ONE, titled “Facebook Use Predicts Declines in Subjective Well-Being in Young Adults”, researchers noted that despite the explosive growth in Facebook use, there were no studies “that examined how using this technology influences subjective well-being over time.” They conducted such a longitudinal study and found that the frequency of subjects’ Facebook use over a two-week period correlated with a decline in self-reported well-being and life satisfaction. Remarkably, the authors concluded: “On the surface, Facebook provides an invaluable resource for fulfilling the basic human need for social connection. Rather than enhancing well-being, however, these findings suggest that Facebook may undermine it.”

The general literature and various cross-sectional studies seem to bear this out. For example, in a June 13, 2013 post in his “Wired for Success” column at Psychology Today online, Ray B. Williams noted that “Excessive use of some social media may be narcissistic.” A variety of researchers cited in the article have found that:

  • people who used Facebook the most tended to have narcissistic or insecure personalities;
  • high levels of Facebook use by couples were correlated with negative relationship outcomes;
  • teens who spend too much time on Facebook are more likely to show narcissistic tendencies and display signs of other behavioral problems;
  • Facebook may be worsening the tendency to think everyone else is enjoying themselves more than you are.

These findings come as quite a shock, and as a reminder that embracing a technology unthinkingly can carry a significant cost for individuals and for society. There is, however, another level of concern that some have expressed about the way this technology is used as a marketing and advertising medium. There is a purpose to everything on Facebook, and the purpose is commercial, not social. As Nicholas Carr put it in a 2007 post: “Marketing is conversational, says Facebook’s Mark Zuckerberg, and advertising is social. There is no intimacy that is not a branding opportunity, no friendship that can’t be monetized, no kiss that doesn’t carry an exchange of value.”

Conclusion:

Kevin Kelly is both optimistic and critical about the impacts of adopting new technology. Indeed, I have to agree that the benefits of digital technology are profound and transformative in significant ways. While our lives are faster and more hectic, the vastly expanded opportunities for education, access to information, higher productivity, economic growth, social change and personal satisfaction are wonderful. At the same time, as noted above, the side effects and unanticipated consequences may be correspondingly disruptive, even terrifying.

Kelly devotes a significant chapter of his book to describing the practice the Amish use in choosing whether to allow a new technology into their communities. While Amish communities differ in their attitudes and decisions, on the whole the Amish are not Luddites and do not reject technology simply because it is new. Rather, they ask whether a technology will support or undermine the community’s shared values. They are also experimental, and may allow a pilot program in order to see what the impacts will be. Based on that experience, the community then decides to accept or reject the technology. Significantly, Kelly himself recognizes the importance of simplicity – trying to keep one’s immediate daily life free from distractions and disruptions while simultaneously enjoying the benefits of the global adoption of disruptive (and distracting) technology.

We do not all live in small, closed communities like the Amish, so their practices may have limited applicability to the broader society, but the thought process is instructive: what are the values we hold, and how does the use of a new technology support or undermine those values?

In the case of digital books, the evaluation of values and consequences is relatively straightforward. If cognitive understanding and recall of the text are important, then paper is likely the right choice. But if entertainment is the goal, or if interactive engagement with the text matters more, then by all means go digital. I should note that in writing this article I have relied exclusively on a computer screen – reviewing articles and posts, searching, following hyperlinks, cutting and pasting, and word-processing. I can’t imagine doing this without digital tools – indeed, the article will be posted on a website and promoted on Facebook!

The use of social media presents a more complex mix of costs and benefits. While social media can provide a useful and fun vehicle for the exchange of information, it should not become a substitute for the effort to build and maintain intimate human relationships with family and friends. Moreover, digital media are a poor communication channel compared with physical conversation, since most (two-thirds or more) of the information we communicate in person is non-verbal! Yet the appeal of social media can be mind-numbingly compulsive. Can we resist the temptation to overindulge and instead achieve an optimal balance of face-to-face and online communication? And does it matter to us that our information is being collected and will ultimately be used to manipulate our choices in ways we may not want?

The question of multitasking – or, more broadly, of how we engage with our digital devices – raises perhaps the most difficult challenges. How do we avoid becoming addicted to the pings and vibrations of our personal devices if our brains are hardwired to reward stimulus-response with an immediate sense of pleasure? How do we retain our appreciation for the moments of joy, creativity, love and affection that give our lives meaning if we are in a perpetual state of readiness to respond to those devices? How do we retain depth of experience in our social lives and communities as our world fills with technologies that reward breadth and shallowness?

There is no obvious answer to these questions. But to paraphrase my last post, on Knowing: “We will never know with certainty what is, in fact, THE RIGHT ANSWER – but we are obliged to make an effort to learn what we can in order to make the most of what we have and who we are.” The issues are serious, and we need to do our best to consider the implications and make choices that reflect and reinforce the values we hold most important in our lives.

2 Responses to “Busted: Three 21st Century Technology-Driven Myths”

  1. […] year I raised some concerns about the impact of technology on humans (see: Busted). These concerns remain, but in the interest of equal time, it is worth highlighting some of the […]

  2. George Gantz says:

Speaking of valuable digital apps: I just read, in a Nautilus article by Greg Beato (http://nautil.us/issue/7/waste/how-to-waste-time-properly), that research suggests our choice of “distractions” can make a huge difference in whether our cognitive performance declines or improves – particularly when we want to let our unconscious mind chew on things for a while. And guess what? There is an app for that! UpJoy, by SelfEcho, will provide video “distractions” designed to enhance mood and provide a non-demanding cognitive break that the research suggests will benefit creativity.
