Jonathan Swift Revisited
Even beyond this is the fact that a certain percentage of American preschoolers—and I was not able to determine what that figure currently is—are on anti-depressant drugs. I find the idea of a three-year-old on Zoloft absolutely chilling, in a Brave New World kind of way. This has got to be a terrible mistake; it’s got to be a way of destroying an infant’s self, so that dependency and psychological disorientation become the “normal” way of being in the world, for these poor kids. Research has suggested this in the case of adults: that the use of anti-psychotic drugs is associated with atrophy of the prefrontal cortex, and that after only a few weeks of drug use the brain begins to function in a different way. How much more powerful and long-lasting must these effects be in the case of toddlers?
The real motivation for getting very young children hooked on these meds is, of course, money: the use of such drugs from a very early age pretty much guarantees Big Pharma an endless supply of customers. It is not, à la Jonathan Swift (“A Modest Proposal”), that there is some kind of plot out there to destroy our children, wreck their intellectual and emotional functioning from age two or even earlier. But if that is the result, does it matter? If the percentage of the under-four age group on anti-depressants continues to grow, then it might be said that deliberately or not, we are eating our children alive. The jury is still out on all this, but the indications are certainly not encouraging.
When it comes to screens, however, so dramatically represented in American society by things such as Facebook and Twitter, there doesn’t seem to be much doubt: these are killers. As Sherry Turkle shows in her most recent book, Alone Together, the much-touted idea of “virtual community” proved to be a fraud. What we really have is increased alienation and depression. All of these social media and accompanying devices peddle a phony intimacy, because if you are at home alone with a screen, that’s where you actually are. Let’s take a look at some of the evidence.
In 1998 a research team at Carnegie Mellon University published an empirical study entitled “Internet Paradox,” demonstrating that within the first year or two online, people were experiencing less social engagement and poorer psychological well-being. The researchers also found that a greater use of the Internet was associated with less family communication, a reduction in local social circles, and an increase in loneliness, as well as higher rates of depression. The authors of the study concluded by suggesting that by using the Net, people were “substituting poorer quality social relationships for better relationships, that is, substituting weak ties for strong ones,” with consequent negative effects. One thinks of Mark Zuckerberg, poor rich asshole, destroying the one real friendship he had (with Facebook cofounder Eduardo Saverin), so that he could acquire a million meaningless ones.
A more recent study, conducted at the University of Michigan for the period 1979-2009, revealed a 48% decrease in empathy among college students during this time, and a 34% decrease in the ability to see things from another person’s perspective. Most of these declines, it turns out, occurred over the past decade, and the general interpretation is that this is related to the isolation involved in the use of personal technology and popular social networking sites that have become so much a part of student life. The study suggested that this was not surprising “in a world filled with rampant technology revolving around personal needs and self-expression.” But it is also the nature of the technology that is at issue, because the Internet and other electronic media are based on speed and distraction, on rapidly shifting attention. It turns out that the higher emotions, such as empathy and compassion, emerge from neural processes that are inherently slow. Various studies have shown that the more distracted we become, the less able we are to experience such emotions, or see things from the perspective of others. In a word, these technologies may be undermining our moral sense. At the very least, it becomes hard to argue that they are promoting community.
It also seems to be the case that the use of screens is creating a different type of human being, partly as a result of the neural rewiring of the brain that these devices engender. Much of the evidence for this argument has been collected and expanded upon by Nicholas Carr in The Shallows: What the Internet Is Doing to Our Brains. Marshall McLuhan had argued that the brain takes on the characteristics of the technology it uses, and we now see this in the cultural shift from print media to screens. For the Internet’s emphasis (and of course, that of Facebook and Twitter) is on searching and skimming, not on genuine reading or contemplation. As a result, given what we now know about the relative plasticity of the brain, the ability to reflect or to grasp the nuance of a situation is pushed to the margins. The Net, Carr says, is literally rerouting the pathways in our brains, making our thought processes increasingly shallow. It breaks up the content of a text into searchable chunks, and surrounds it with other content. This is why a page online is very different from a page of print. Concentration and attention are high for the latter, low for the former. Then there are the various links, which encourage us not to devote our attention to any single thing but rather to jump from item to item. Our attachment to any single item is thus provisional and fragmented. The Net and its related technologies thus constitute an “ecosystem of interruption technologies.”
Print, on the other hand, has (or should I say had?) a quality of calm attentiveness. “The quiet was part of the meaning,” as the poet Wallace Stevens once put it. When a printed text is transferred to an electronic device, says Carr, it turns into something like a website; the calm attentiveness disappears. Instead, the Net & Co. deliver repetitive, intense, and addictive stimuli, promoting very superficial understanding. Basically, you don’t really read on a screen; it’s a different kind of activity: browsing, scanning, keyword spotting, and so on. And the better you get at this, the less able you are to think deeply or creatively. We are, Carr concludes (quoting the playwright Richard Foreman), turning into “pancake people”—spread wide and thin. Facebook and Twitter are turning out such folks by the IHOP-load.
The lack of interest in printed material, and the corresponding upswing in interest in screens, is, of course, especially pronounced among the young. In 2009 the average American teenager was sending or receiving 2,272 text messages a month(!). Meanwhile, the amount of time the average American between twenty-five and thirty-four years of age devoted to reading print in 2008 was forty-nine minutes a week. As Maryanne Wolf of Tufts University cogently puts it, “the digital world may be the greatest threat yet to the endangered reading brain as it has developed over the past five thousand years.” Collectively, adds author Christine Rosen, this is the endpoint of the tragedy we are now witnessing:
“Literacy, the most empowering achievement of our civilization, is to be replaced by a vague and ill-defined screen savvy. The paper book, the tool that built modernity, is to be phased out in favor of fractured, unfixed information. All in the name of progress.”
There is little room in this world, Carr points out, for “the pensive stillness of deep reading or the fuzzy indirection of contemplation.” In such a world, he goes on to say, “Ambiguity is not an opening for insight but a bug to be fixed.” The cultural impact follows upon the individual one, then: what we are witnessing is the replacement of a complex inner diversity with a new kind of self, one devoid of any sense of cultural inheritance. Screens are generating the emptiest people in the history of the world, and as in The Matrix, there is no way for these folks to get outside themselves and perceive this. This is the “frenzy” of technological society famously referred to by Martin Heidegger. In the pathological climate of “techno-social Darwinism,” as Rosen calls it, there is no time for stillness. All of these brave new people lack the ability to be alone with their thoughts, or to appreciate the importance of silence. I have found that even the brightest people don’t get it, have no idea what George Steiner meant when he called modernity “the systematic suppression of silence.” Silence, after all, is the source of all self-knowledge, and of much creativity as well. But it is hardly valued by societies that confuse creativity with productivity, and incessant noise with aliveness. As a result, we don’t notice that fundamental aspects of being human are disappearing. During his time at Yale, William Deresiewicz asked his students what place solitude had in their lives. In response, they seemed to be puzzled that anyone would want to be alone. “Young people today,” he concluded, “seem to have no desire for solitude, have never heard of it, [and] can’t imagine why it would be worth having. In fact, their use of technology…seems to involve a constant effort to stave off the possibility of solitude.” The world of creativity, of imagination, of depth of the self, is closing down.
The similarity of all this to toddlers on anti-depressants is thrown into stark relief when you realize that the corporate goal is to hook children as early as possible. Last month, Rullingnet Corp. (based in Canada) launched Vinci, a 7” touch-screen tablet for the under-four age group. It is the first tablet designed for babies as young as one week old—the product of a technological mindset that one can only call “creepy,” in my opinion, although the company’s tag line is, ironically enough, “Inspire the genius.” “We are just leveraging their curiosity,” says the inventor of the device. (Notice how a word from corporate finance gets imported into the world of child-rearing. It was leveraging that brought on the crash of 2008.) In fact, a recent study conducted by Parenting magazine and BlogHer found that 29% of Generation-X moms say their children were using laptops by age two, and the figure rises to 34% for moms of Generation-Y. In the first month of its release, Rullingnet sold 600 Vincis.
In chapter 3 of Why America Failed I argue that technology has always functioned as America’s hidden religion, and that if you deprive Americans of their gadgets, they become depressed or enraged. What can one say when many users of Apple’s iPhone refer to it as “the Jesus phone”? This is not an accident. Technology in America has been associated with unlimited progress and therefore with utopia, with redemption, and when we are now giving touch-screens to one-week-old babies we are imprinting them in the same way that, say, a baptism might. But the reality of Facebook, Twitter, Vinci and the like is a story of false redemption. As the sociologist Zygmunt Bauman writes, what is omitted from public discussion today is the fact that almost every technological “advance” in recent years has deepened the “continuing decomposition and crumbling of social bonds and communal cohesion.” It goes way beyond the dumbing down of the culture, in other words (horrific as that is); it also involves increasing human disconnectedness, social atomization, rudeness, incivility. One effect of spending most of your time in a virtual world is that of “absent presence”: you treat the world as a mere backdrop, and devalue those around you. These are the hallmarks of a superficial, narcissistic society, one which possesses no inherent meaning, and whose Twittered citizens possess none either. With techno-imprinting now beginning at age one week, things can only get worse. For there is no getting around it: eating our children alive means we are eating our society alive as well.
©Morris Berman, 2011