Dear Friends,
Check out a Harrison Ford film entitled "Crossing Over". It's a pretty good portrait of what we have finally become, and of the violence that I believe now lies at the center of the American soul--not exactly a happy place these days. The movie is running in Mexico under the title of "Persecución inminente," which comes closer to the mark. When the film ended and the lights went up, I noticed that I was the only gringo in the theater, and felt more than a little embarrassed about it.
Of course, the film got panned by most US newspapers. Gee, what a shock.
Anyway, "enjoy"!
mb
September 28, 2009
September 18, 2009
Oh fer...So now he's a poet?
Dear Friends,
As many of you know, I relocated to Mexico a little over three years ago. There were many reasons for this, both personal and political, which were often intertwined. In terms of daily life, I found the United States oppressive: the workaholism, the running on a treadmill and never getting anywhere, the feeling that one never has a moment to oneself to think, relax, just be. All of that changed when I crossed the border; the sensation was "libre por fin!" (free at last). And it proved to be true. Mexico is not in a rush; the rhythm of life is much slower down here, and a lot healthier, imo. You can believe what you want about crime and swine flu; the fact is, unlike Americans, these folks have their priorities straight. Slowly, I began to unwind. Slowly, I became much more in touch with my body, and with the sensuality of the environment. Slowly, I began to feel happy (what a thought).
All of this had a very large, and unexpected, impact on my writing. One year after my immigration, I suddenly wrote a novel. The bulk of it just poured out in a few days, and then it took another six months to complete the rest of it and start the editing process. I'm currently looking for a publisher for it (well, an agent to start with), and hope I can get it into print sometime next year. The title is Destiny, and it explores the question of whether we can change our lives as an act of will.
I was also asked to be a regular columnist for two upscale literary journals here in Mexico, and began contributing articles on a variety of topics, some of which have been posted on this blog. My Mexican publisher intends to bring these columns out in book form (in Spanish), as a collection of essays, sometime next year, and I'm hoping I can find a US publisher for the English edition as well. These articles reveal a somewhat different form of nonfiction writing, for me, I think: more direct, more visceral, occasionally lyrical. Title: A Question of Values.
Finally: nearly two years ago I suddenly wrote a volume of poetry. This literally erupted while I was sitting in a small cafe; I had no idea I had all this in me, and it moved through my body like a tidal wave. I wrote the bulk of the collection on napkins over a period of three days, and then spent a few months editing it with the generous help of Paul Christensen, who teaches creative writing at Texas A&M, and who contributed a Foreword to the volume as well. And then a small, fine literary press in the Boston area responded positively to a query letter, and the book is going to be published next year by the Cervena Barva Press. The title: Counting Blessings. You get the idea.
Well, I guess good news travels fast. I was subsequently asked to read from this collection by the Poetry Center at San Francisco State University, and will be doing a week of presentations for them this November. The schedule is as follows:
-Nov. 16, 7:30 pm: Poetry Reading at Moe's Books, 2476 Telegraph Ave, Berkeley
-Nov. 18, 7:30 pm: Lecture at the Unitarian Church, 1187 Franklin St, SF (I'll be reading 2 or 3 essays from the collection I referred to above)
-Nov. 19, 3:30 pm: Interview at SF State U, Hum 512 (1600 Holloway Ave)
The theme of the Poetry Center's work this year is "The Poetics of Healing". It fits well with what I've been going through, I think. For those of you living in or near the Bay Area, I'd be delighted if you could join us.
And finally, let me thank all of you who have contributed to this blog over the past few years. In some indefinable but very real way, I think the intelligent back-and-forth that we've engaged in had something to do with my growth as a writer. Gracias and
L'Chaim!
-mb
September 10, 2009
The Parable of the Frogs
One who knows “enough is enough” always has enough.
–Tao Te Ching
What does it take to produce large-scale social change? Most historians, if you catch them in an honest moment, will admit that the popular levers of social change, such as education or legislation, are bogus; they don’t really amount to very much. What does make a difference–and then only potentially–is massive systemic breakdown, such as occurred in the United States in the fall of 2008. It was the greatest market crash since 1929, leading to widespread unemployment (something like 20% of the population, in real–as opposed to official–statistics) and the loss of billions of dollars in retirement savings. It had been many decades since the middle class found itself in soup kitchens, and yet there they were. In the face of all this, however, very little seems to have changed. Americans are still committed to the dream of unlimited abundance as a “reasonable” goal, when in reality it is (and always has been) the dream of an addict. President Obama’s $12 trillion bailout and stimulus plan is funneling money into the very banking establishment that gave us the disaster; it rescues the wealthy, not those who really need the money. And while he could have appointed economic advisers such as Paul Krugman and Joseph Stiglitz (both Nobel laureates), who would have attempted to put the nation on a different economic path, he chose instead two traditional neoliberal ideologues, Timothy Geithner and Lawrence Summers, who believe in the very policies that led to the crash. “Change we can believe in” has never sounded more hollow.
The metaphor of addiction is extremely relevant to situations such as these, because addicts always seek to maximize their intake (or behavior) rather than optimize it, even though the former leads to self-destruction. In the face of what seems to be biologically driven activity, reason doesn’t have much of a chance. An experiment with frogs some years ago demonstrated this quite clearly. They were wired up with electrodes in the pleasure center of the brain, and could stimulate that center–i.e., create a “rush”–by pressing a metal bar. Not only did the frogs keep pressing the bar over and over again, but they didn’t stop even when their legs were cut off with a pair of shears. And if you are going to object that human beings are not frogs, then you obviously haven’t been reading the daily newspapers, or observing the behavior of the people around you.
There are, of course, a few intelligent frogs around, ones who struggle to point out the difference between optima and maxima. They don’t have much of an audience, as you might expect, but inasmuch as this column has no pretensions to making a difference in the real world, let’s put the matter of popularity aside and concentrate on the ideas instead.
The first intelligent frog who comes to mind is the anthropologist Gregory Bateson, perhaps most famous for having been married to Margaret Mead. For Bateson, the issue was an ethical one. As he himself put it, “the ethics of optima and the ethics of maxima are totally different ethical systems.” The ethics of maxima knows only one rule: more. More is better, in this scheme of things; words such as “limits” or “enough” are either foolish or meaningless. Clearly, the “American Way of Life” is a system of maxima, of indefinite expansion.
But what if the reality of all social systems is that they are homeostatic, which is to say, designed to stay in balance? In that case, said Bateson, the attempt to maximize any single variable (for example, wealth) will eventually push the system into runaway, such that it will destroy itself. To take a physiological analogy, we recognize that the human body needs only so much calcium per day. We do not say, “The more calcium I ingest, the better off I’ll be,” because we recognize that past a certain point any chemical element becomes toxic to an organism. Yet we seem to be unable to extend this insight to the social or economic realm. We do not say, for example, “That company is making too much profit,” or “That individual (Bill Gates, Carlos Slim) has too much money for one person,” or “The Gross Domestic Product is spinning out of control.” Rather than being interested in balance, in stability, we are fascinated by asymptotes–frogs at the bar of pleasure, even while our legs are being cut off. We don’t get it, that if you fight the ecology of a system, you lose, especially when you “win”.
Maximizing a single variable, wrote Bateson, can seem like an ingenious adaptation, but over time it typically turns into pathology. The saber teeth of a tiger may have had short-range survival value, but this development weakened its flexibility in other situations that proved to be crucial. The “favored” species became so “favored” that it destroyed its own ecological niche, and disappeared. A gain at one level became a calamity at another.
In recent months, two American scholars of the intelligent frog variety have followed this line of reasoning and concluded from it that Adam Smith, with his theory of the “invisible hand”, was wrong. An early version of Gordon Gekko, the character who eulogizes greed in Oliver Stone’s 1987 film Wall Street, Smith argued that the collective result of individual self-interest was the prosperity of the whole. But the economist Robert Frank, writing in the New York Times (12 July 2009), argues that “traits that help individuals are harmful to larger groups. For instance,” he goes on,
“a mutation for larger antlers served the reproductive interests of an individual male elk, because it helped him prevail in battles with other males for access to mates. But as this mutation spread, it started an arms race that made life more hazardous for male elk over all. The antlers of male elk can now span five feet or more. And despite their utility in battle, they often become a fatal handicap when predators pursue males into dense woods.”
In the case of the market, says Frank, individual reward structures undermine the invisible hand. “To make their funds more attractive to investors,” he writes, “money managers create complex securities that impose serious, if often well-camouflaged, risks on society. But when all managers take such steps, they are mutually offsetting. No one benefits, yet the risk of financial crises rises sharply.”
Similarly, U.S. Appeals Court Judge Richard Posner, in A Failure of Capitalism, points out that the crash of 2008 was brought about by individual actions that were actually quite rational: bankers and investors pursuing their own interests. Reckless behavior was quite consistent, he says, with being well informed about the risks involved in the context of an economic bubble, and so a great many money managers took those risks. The problem is that what was rational on the individual level was irrational on the collective level, thus leading to a systemic collapse.
We are thus led, quite naturally, from a consideration of optima vs. maxima to the question of individual vs. collective behavior. Which brings me to one of the twentieth century’s most intelligent frogs, the biologist Garrett Hardin, who posed the dilemma in a famous essay entitled “The Tragedy of the Commons” (1968). Consider, said Hardin, the example of a pasture shared by local herders. They all understand that the commons belongs to no one in particular, but supports the well being of all and is the responsibility of all. One day, however, one of the herders puts an additional animal out to graze, with the result that he increases his yield. The pasture, however, is slightly degraded by this. Meanwhile, other herders come to the same conclusion, and as each makes the rational decision to take advantage of the situation for personal gain, the net result is the overgrazing, and ultimately the destruction, of the common area. In a word, the system favors selfish individuals over those with greater foresight and restraint. Or as Hardin put it, “Freedom in a commons brings ruin to all.” Frogs, in a word, are not to be trusted.
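For readers who like to see the logic laid bare, here is a minimal sketch of Hardin's dilemma in Python. It is my own illustration, not anything from Hardin's essay, and every number in it (herders, pasture capacity, regrowth rate) is invented; the only point is to show how a string of individually rational choices exhausts the shared resource.

# A toy model of the commons (all parameters invented for illustration).
def simulate_commons(herders=10, rounds=20, capacity=100, regrowth=5):
    """Every round each herder adds one animal (the individually rational
    move), while the shared pasture regrows only a little each round."""
    herd = [1] * herders            # every herder starts with one animal
    pasture = capacity              # current condition of the shared pasture
    for rnd in range(1, rounds + 1):
        for i in range(herders):
            herd[i] += 1            # private gain: one more animal, more yield
        load = sum(herd)            # collective cost: total grazing pressure
        pasture = min(capacity, pasture + regrowth) - load
        if pasture <= 0:            # the commons is ruined
            return rnd, load
    return rounds, sum(herd)

ruined_after, animals = simulate_commons()
print(f"Commons exhausted after {ruined_after} rounds, with {animals} animals grazing.")

Run with these made-up numbers, the pasture collapses within a handful of rounds, even though no single herder ever does anything "irrational"; that is the whole of Hardin's point.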
How, then, can excess be curbed in a free democratic system? For we can be sure that the intelligent frogs, who are really quite exceptional, are not going to be listened to, and certainly have no power to enforce their insights. True, there are certain countries–the Scandinavian nations come to mind–where for some reason the concentration of intelligent frogs is unusually high, resulting in decisions designed to protect the commons. But on a world scale, this is not very typical. More typical, and (sad to say) a model for most of Latin America, is the United States, where proposed “changes” are in fact cosmetic, and where the reality is business as usual. In the context of 306 million highly addicted frogs, the voices of the smart ones–Bateson, Frank, Posner, Hardin, et al.–aren’t going to have much impact or, truth be told, even get heard.
Of course, authoritarian systems don’t have these problems, which is a good indicator of how things will probably develop. Under the name of “harmony”, for example, China regulates its citizens for what it perceives to be the common good. Hence the famous one-child policy, introduced in 1979, which supposedly prevented more than 300 million births over the next 29 years in a country threatened by its own population density. In the case of the United States, the imposition of rules and limits on individual behavior to protect the commons is not, at present, a realistic prospect; the population is simply not having it, end of story. But how much longer before this freedom of choice is regarded as an impossible luxury? In fact, no crystal ball is required to predict the future here. The tragedy of the commons–what Hardin called “the remorseless working of things”–is that a society like the U.S. won’t undertake serious changes even when it is sitting on the edge of an abyss. It has to actually be in the abyss before it will entertain such changes; i.e., it has to be faced with no choice at all. It seems unlikely now, but things are probably moving faster than we realize. In terms of population, food, resources, water, social inequality, public health, and environmental degradation, a crunch of the type I am referring to may be only twenty years away.
In Shakespeare’s Two Gentlemen of Verona, the character Valentine is confronted by an outlaw, who asks him if he is content “To make a virtue of necessity/And live, as we do, in this wilderness?” That may prove to be the only “choice” we have. As Thomas Hobbes put it, a few decades after Shakespeare, “Hell is truth seen too late.”
©Morris Berman, 2009
June 30, 2009
Not Coming To Your Local Cineplex
Dear Friends:
This 2006 film is a bit dated (they actually believe the Democratic Party is going to turn things around), but worth a view nevertheless. One hopes Liberty News TV will do a follow-up on the emptiness and spinelessness of Barack Obama--his pandering to the Pentagon, bailout of corporations, and continuation of neoliberal economic policy--but for some odd reason I'm not holding my breath. Enjoy!
http://www.archive.org/details/LibertyNewsTVEpisode14
June 22, 2009
Tribal Consciousness and the Enlightenment Tradition
At one point in his work, Proust advances a theory of development that goes back to Goethe, and ultimately to Plato. It was Goethe who coined the term “morphology,” by which he meant the science of form, and the crucial idea was that the adult manifestation of an organism was already encoded in its earliest structural arrangement. Thus the entire oak tree, for example, was already present in the acorn; growth and development were basically a process of “unfolding” from an original archetype. It is a teleological theory, a theory of predestination; and Proust comments that if you see a pretty girl next to her mother, you can already discern in the daughter the pattern of ageing, the adult features, “in the wings,” as it were. Extending the theory from the biological to the social realm, Proust argues that we should hardly be surprised, for example, to learn that some Jewish person we might know (this around 1900, say) is heatedly on the side of Alfred Dreyfus.* For this is pre-ordained, he says; it’s in the blood. Our mistake is to believe that we are making rational decisions, when the truth is that “our minds possess in advance...the characteristic that we imagine ourselves to be selecting.” He goes on:
“For we grasp only the secondary ideas, without detecting the primary cause (Jewish blood, French birth or whatever it may be) that inevitably produced them....We take from our family, as [adult plants] take the form of their seed, as well the ideas by which we live as the malady from which we shall die.”
The theory, then, is one of genetic memory, and for Proust it applies to the biological development of human beings as well as plants. It also, Proust is saying, applies to the mental and supposedly intellectual function of human beings, in the form of what we might call “tribal consciousness.” Of course, Dreyfus was innocent and his enemies were a bunch of liars and antisemites, but for Proust that is not the point. The claim here is that we would expect Jews to be on the side of Dreyfus without worrying too much about the evidence pro or con, in the same way that it is not too much of a shock to learn that 96% of the black American population voted for Barack Obama. These are not really freely chosen rational decisions, in short, and we are kidding ourselves if we think they are.
This matter of tribal consciousness is enormously significant, it seems to me, and Jewish identity is as good an illustration of it as any. Suppose, at the height of the Dreyfus Affair, God had waved a magic wand and all of the Jews in France suddenly became Christian, and all the Christians, Jews. I can’t prove it, of course, but I’m guessing that a large percentage of the new Christians would suddenly regard Dreyfus as guilty, and a large percentage of the new Jews would now find him innocent. It is depressing to think that evidence gets marshaled in the service of emotions, but hard to avoid that conclusion. What happened in the aftermath of the Israeli attack on Gaza during December 2008-January 2009, for example, which was nothing less than the wholesale massacre of Palestinian civilians, was quite Orwellian: one heard Israeli spokesmen and apologists claiming that Israel (the occupying power) was somehow the victim in all of this–and they actually believed it. But again, if a magic wand suddenly rendered the Israelis Palestinians and vice versa, wouldn’t the former Israelis now be on the Palestinian side, and the former Palestinians now be convinced that yes, Israel was indeed the victim in this tragedy? That blood, rather than evidence, is the issue constitutes the essence of tribal consciousness. We need to examine this more closely.
I remember, some years ago, pondering this question of how tribal allegiance colonizes the brain when I ran across an intriguing work of science fiction by the American author Neal Stephenson, entitled Snow Crash. The core of the book is what might be called the “viral theory of religion,” in which the brain is taken over or possessed by a certain set of religious ideas. The virus replicates itself inside the individual mind, and it also jumps from one person to the next. Stephenson spends a lot of time applying this theory of infection to ancient Sumer, the thought process of which can be regarded as a kind of trance phenomenon. (Egypt would fall into the same category, it seems to me.) There were, he says, various attempts to break out of the trance, Judaism being the most notable. Thus the Torah was also a virus, says Stephenson, but a benign one; a counter-virus to the ancient mythological world, which was stuck in a rut. Scribes copied it; people came to the synagogue to read it. Judaism was basically the first rational religion, then, but eventually it hardened into legalism, whereupon it was challenged by Christ...whose ideas got taken over by viral influence almost immediately, becoming a new theocracy. The Reformation, fifteen centuries later, was then the counter-virus to this. Etc. The idea is that we become “hosts” for self-replicating information, and as further examples Stephenson points to mass hysteria, jokes, catchy tunes, and ideologies.
As it turns out, Snow Crash is the fictionalized version of the theory of memes, first put forward by the British biologist Richard Dawkins in 1976. The dictionary defines “meme” as “an idea, behavior, style, or usage that spreads from person to person within a culture.” It’s basically an information virus. Dawkins regarded it as a “unit” of cultural ideas that moves by way of imitation, and saw things such as catch phrases, fashion in clothing, and the technology of building arches (to take three unrelated examples) as falling into this category. Memes are essentially replicators, and their mode of transmission can be likened to contagion. As in the case of Stephenson, the virus/meme invades the “host,” takes it over; and this is not, said Dawkins, necessarily positive: in terms of replication, a successful meme can actually be detrimental to the host body. (Just think of what neoliberalism and the Milton Friedman-virus–the “shock doctrine,” in Naomi Klein’s memorable phrase–have done to North and South America, for example.)
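The "contagion" analogy is easy to see in a few lines of code. The sketch below is mine, not Dawkins's: a toy imitation model in Python in which one "host" carries an idea and exposes random contacts, who adopt it with some probability. The population size, contact rate, and adoption probability are all invented for illustration.

import random

random.seed(1)   # fixed seed so the illustration is repeatable

def spread_meme(population=1000, contacts=4, p_adopt=0.3, steps=12):
    """Replication by imitation: each step, every carrier exposes a few
    random members of the population, who adopt the meme with probability
    p_adopt. Returns the number of carriers after each step."""
    carriers = {0}                    # one person starts with the idea
    history = [len(carriers)]
    for _ in range(steps):
        newly_converted = set()
        for host in carriers:
            for _ in range(contacts):
                contact = random.randrange(population)
                if contact not in carriers and random.random() < p_adopt:
                    newly_converted.add(contact)
        carriers |= newly_converted
        history.append(len(carriers))
    return history

print(spread_meme())   # slow start, then explosive growth, then saturation

Notice that nothing in the sketch asks whether the idea is true or good for its hosts; replication success and merit are independent, which is exactly why a successful meme can be detrimental to the host body.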
Now quite frankly, there is a lot to be said against the theory, most notably that it sets up a kind of pseudoscience that ultimately doesn’t explain very much. There was, for example, a period in the history of science in which the concept of “instinct” was extended from biology to sociology and psychology. It was a total explanation: there was a death instinct, a love instinct, an artistic instinct, a criminal instinct, a nesting instinct, an instinct for sailing the high seas, and on and on. It took a while for social scientists to realize that these “explanations” were completely circular. As one philosopher observed, it was like labeling a bird that went around in circles a “rotopedist,” and then when asked why the bird went around in circles, “explaining” that it did so because it was a rotopedist! Obviously, if everything is an instinct, or a meme, then nothing is.
Second, the meme theory itself can be seen as a meme, moving through society like a virus. But this takes us into a classic situation known as “Mannheim’s paradox,” because then the scientific status of the theory is called into question (it too is a fad, in other words). Karl Mannheim, the German sociologist, developed a mode of investigation known as the Sociology of Knowledge, whereby one studies how ideas get accepted in an intellectual community. Foreshadowing Thomas Kuhn, Mannheim argued that this acceptance did not occur on a rational basis, but rather on an ideological one. However, we then have to ask if this applies to the Sociology of Knowledge as well. After all, why should it alone get a free pass? If it does apply (and Mannheim unsuccessfully tried to argue that it didn’t), the rug is pulled out from under the theory. It begins to look like the ancient “Liar’s paradox”: A Cretan said, “All Cretans are liars.” Was he telling the truth?
Finally, and related to this, is the phenomenon whereby the counter-virus becomes, in short order, the new virus. Judaism becomes Pharisaism, Christ becomes St. Paul becomes the Vatican, the Reformation becomes Protestant rigidity, and New Age spirituality becomes Oprah and Chopra. The old mimetic system gets cracked open, and then the opener becomes The Opener. This means that in effect, with the exception of the briefest of moments, there is no such thing as a non-meme world. As I argued in an earlier essay (“The Hula Hoop Theory of History”), we seem to be caught up in one form of “hula-hoop” or another; we never seem to get a handle on any kind of objective reality. But can that really be the case? I mean, we know that Galileo was right about falling bodies and Aristotle wrong; we know that severe population pressure leads to hierarchical social systems; we know that syphilis is caused by a particular bacterium and that, if left untreated, it will result in insanity and death; and we know that Alfred Dreyfus was innocent and that the French army was corrupt. Objectively speaking, we know things–a lot of things. And yet, there is no getting around the fact that tribalism–mimetic thinking–is the rule rather than the exception. Thus while there are a number of soldiers in the Israeli army who refuse to serve in the occupied territories, and Israeli peace organizations such as Yesh Gvul (“There is a limit”) that support them, the majority of the population does indeed see itself as the victim, and votes for a prime minister who can be guaranteed to continue the dead-end policies of oppression and occupation–until the demographics of the situation finally render Israeli rule untenable, and things change not by reason, but by force. One tribe, in short, will defeat another. What a triumph!
What our discussion comes down to is this: Leaving aside, for now, the first two (philosophical) objections to the meme-virus theory, and granting the fact that tribal consciousness really is the norm for the human race, what are the chances that mimetic behavior could be seriously disrupted, once and for all? This was, after all, the goal of the Scientific Revolution and the Enlightenment tradition; but as one political scientist once pointed out, “It’s not that the Enlightenment failed; rather, it’s that it has never been tried.” This is, of course, not entirely true; but when you have an “advanced” industrial nation with 59% of its adult population sitting around and waiting for the “Rapture” and the Second Coming, 29% thinking that the sun revolves around the earth or not knowing which revolves around which, and 45% believing that extra-terrestrials have visited the planet, you realize that this commentator has a point.
It all comes down to reflexivity: Can we break the hold of the meme-trance, and look at things from the “outside”? After all, intuitively speaking, heavy bodies should hit the earth faster than light ones when dropped from the same height, and we can plainly see the sun “rise” in the East and “set” in the West. Getting outside of the (medieval) meme here means that we look at evidence that is counter-intuitive; that we recognize that there is an objective truth to the situation that doesn’t give a damn about our personal or tribal belief system; that one can stand outside a situation and evaluate it, and extend this analytical mode to our own beliefs, and to who we are. “O would some power the gift to give us/To see ourselves as others see us,” wrote the Scottish poet Robert Burns in the eighteenth century. This external evaluation–what I have referred to elsewhere as “nonparticipating consciousness”–was, as Neal Stephenson correctly notes, the stellar contribution of the ancient Hebrews; and it was also characteristic of the ancient Greeks (their ties to the Mystery religions notwithstanding). After all, when you have Heraclitus talking about the problem of subjective judgment, and Democritus asserting that it is only by convention that we can talk about sweet, bitter, hot, and cold, “but in reality there are only atoms and the void,” you know you’re in a different kind of world than that of blind mimetic belief.
I am not, I should add, claiming that nonparticipating consciousness is without its problems; indeed, that was the entire point of my book The Reenchantment of the World. But it is also the case that there is too much that simply cannot be solved from within a strictly mimetic framework, and this is why we need to ask if the Enlightenment tradition can ever be made to “stick.” Reading its late twentieth-century representatives–I am thinking of philosophers such as Peter Singer and John Rawls–I am often frustrated at how naïve they are, because they are clearly talking about how people “ought” to behave (i.e., rationally) and not how they actually behave (i.e., tribally). What planet are you guys on? is the annoyed reaction I frequently have. And yet, this is the crucial point: Controlling the excesses of tribal consciousness really does mean taking the Enlightenment tradition seriously, breaking the “trance,” and standing outside the particular meme we are caught up in (whatever it is) and evaluating it rationally and empirically. Singer and Rawls don’t have any clear ideas on how to get to such a place, and frankly, neither do I. My guess is that force, not reason, will be the deciding factor in a whole host of areas as the twenty-first century wears on. But it’s challenging to think about what a non-mimetic path might consist of.
Here is a single example, something I can’t really do myself, but at least aspire to. A very long time ago, when I first got interested in Karl Marx, I ran across a biography of the man by Isaiah Berlin. At the time I had no idea who Isaiah Berlin was, but as I was keen to learn more about Marx, I read the book from cover to cover. It was a very sympathetic portrait of the great German philosopher; the author managed to get inside his head, enabling you to see the world through Marx’s eyes. I came away impressed with Marx as a thinker; really, as a heroic figure. And then I subsequently learned that Communism was complete anathema to Berlin, who was a Russian (actually, Latvian) émigré; and that if there was one single political ideology he hated, it was that. I still retain a great admiration for Marx, of course, and confess I have some reservations about the work of Isaiah Berlin in general. But that is neither here nor there. Given his own mimetic background, it is hard not to regard his portrait of Marx as a type of heroism all its own.
©Morris Berman, 2009
*Captain Alfred Dreyfus was a French Jewish artillery officer falsely convicted of treason in 1894, and sent to the Devil’s Island penal colony in French Guiana, where he spent two years in solitary confinement. The real culprit, Ferdinand Esterhazy, was tried and acquitted in 1896 in what amounted to an Army cover-up (including the falsification of documents). In 1898, the famous writer Émile Zola led the public protest against the government, as the “Dreyfus Affair” tore the nation apart. Eventually, all the charges against Dreyfus were dropped, and he was finally exonerated in 1906. All in all, not exactly France’s finest hour.
“For we grasp only the secondary ideas, without detecting the primary
cause (Jewish blood, French birth or whatever it may be) that
inevitably produced them....We take from our family, as [adult
plants] take the form of their seed, as well the ideas by which we
live as the malady from which we shall die.”
The theory, then, is one of genetic memory, and for Proust it applies to the biological development of human beings as well as plants. It also, Proust is saying, applies to the mental and supposedly intellectual function of human beings, in the form of what we might call “tribal consciousness.” Of course, Dreyfus was innocent and his enemies were a bunch of liars and antisemites, but for Proust that is not the point. The claim here is that we would expect Jews to be on the side of Dreyfus without worrying too much about the evidence pro or con, in the same way that it is not too much of a shock to learn that 96% of the black American population voted for Barack Obama. These are not really freely chosen rational decisions, in short, and we are kidding ourselves if we think they are.
This matter of tribal consciousness is enormously significant, it seems to me, and Jewish identity is as good an illustration of it as any. Suppose, at the height of the Dreyfus Affair, God had waved a magic wand and all of the Jews in France suddenly became Christian, and all the Christians, Jews. I can’t prove it, of course, but I’m guessing that a large percentage of the new Christians would suddenly regard Dreyfus as guilty, and a large percentage of the new Jews would now find him innocent. It is depressing to think that evidence gets marshaled in the service of emotions, but hard to avoid that conclusion. What happened in the aftermath of the Israeli attack on Gaza during December 2008-January 2009, for example, which was nothing less than the wholesale massacre of Palestinian civilians, was quite Orwellian: one heard Israeli spokesmen and apologists claiming that Israel (the occupying power) was somehow the victim in all of this–and they actually believed it. But again, if a magic wand suddenly rendered the Israelis Palestinians and vice versa, wouldn’t the former Israelis now be on the Palestinian side, and the former Palestinians now be convinced that yes, Israel was indeed the victim in this tragedy? That blood, rather than evidence, is the issue constitutes the essence of tribal consciousness. We need to examine this more closely.
I remember, some years ago, pondering this question of how tribal allegiance colonizes the brain when I ran across an intriguing work of science fiction by the American author Neal Stephenson, entitled Snow Crash. The core of the book is what might be called the “viral theory of religion,” in which the brain is taken over or possessed by a certain set of religious ideas. The virus replicates itself inside the individual mind, and it also jumps from one person to the next. Stephenson spends a lot of time applying this theory of infection to ancient Sumer, the thought process of which can be regarded as a kind of trance phenomenon. (Egypt would fall into the same category, it seems to me.) There were, he says, various attempts to break out of the trance, Judaism being the most notable. Thus the Torah was also a virus, says Stephenson, but a benign one; a counter-virus to the ancient mythological world, which was stuck in a rut. Scribes copied it; people came to the synagogue to read it. Judaism was basically the first rational religion, then, but eventually it hardened into legalism, whereupon it was challenged by Christ...whose ideas got taken over by viral influence almost immediately, becoming a new theocracy. The Reformation, fifteen centuries later, was then the counter-virus to this. Etc. The idea is that we become “hosts” for self-replicating information, and as further examples Stephenson points to mass hysteria, jokes, catchy tunes, and ideologies.
As it turns out, Snow Crash is the fictionalized version of the theory of memes, first put forward by the British biologist Richard Dawkins in 1976. The dictionary defines “meme” as “an idea, behavior, style, or usage that spreads from person to person within a culture.” It’s basically an information virus. Dawkins regarded it as a “unit” of cultural ideas that moves by way of imitation, and saw things such as catch phrases, fashion in clothing, and the technology of building arches (to take three unrelated examples) as falling into this category. Memes are essentially replicators, and their mode of transmission can be likened to contagion. As in the case of Stephenson, the virus/meme invades the “host,” takes it over; and this is not, said Dawkins, necessarily positive: in terms of replication, a successful meme can actually be detrimental to the host body. (Just think of what neoliberalism and the Milton Friedman-virus–the “shock doctrine,” in Naomi Klein’s memorable phrase–have done to North and South America, for example.)
Now quite frankly, there is a lot to be said against the theory, most notably that it sets up a kind of pseudoscience that ultimately doesn’t explain very much. There was, for example, a period in the history of science in which the concept of “instinct” was extended from biology to sociology and psychology. It was a total explanation: there was a death instinct, a love instinct, an artistic instinct, a criminal instinct, a nesting instinct, an instinct for sailing the high seas, and on and on. It took a while for social scientists to realize that these “explanations” were completely circular. As one philosopher observed, it was like labeling a bird that went around in circles a “rotopedist,” and then when asked why the bird went around in circles, “explaining” that it did so because it was a rotopedist! Obviously, if everything is an instinct, or a meme, then nothing is.
Second, the meme theory itself can be seen as a meme, moving through society like a virus. But this takes us into a classic situation known as “Mannheim’s paradox,” because then the scientific status of the theory is called into question (it too is a fad, in other words). Karl Mannheim, the German sociologist, developed a mode of investigation known as the Sociology of Knowledge, whereby one studies how ideas get accepted in an intellectual community. Foreshadowing T.S. Kuhn, Mannheim argued that this acceptance did not occur on a rational basis, but rather on an ideological one. However, we then have to ask if this applies to the Sociology of Knowledge as well. After all, why should it alone get a free pass? If it does apply (and Mannheim unsuccessfully tried to argue that it didn’t), the rug is pulled out from under the theory. It begins to look like the ancient “Liar’s paradox”: A Cretan said, “All Cretans are liars.” Was he telling the truth?
Finally, and related to this, is the phenomenon whereby the counter-virus becomes, in short order, the new virus. Judaism becomes Pharasaism, Christ becomes St. Paul becomes the Vatican, the Reformation becomes Protestant rigidity, and New Age spirituality becomes Oprah and Chopra. The old mimetic system gets cracked open, and then the opener becomes The Opener. This means that in effect, with the exception of the briefest of moments, there is no such thing as a non-meme world. As I argued in an earlier essay (“The Hula Hoop Theory of History”), we seem to be caught up in one form of “hula-hoop” or another; we never seem to get a handle on any kind of objective reality. But can that really be the case? I mean, we know that Galileo was right about falling bodies and Aristotle wrong; we know that severe population pressure leads to hierarchical social systems; we know that syphilis is caused by a particular bacterium and that if left untreated, will result in insanity and death; and we know that Alfred Dreyfus was innocent and that the French army was corrupt. Objectively speaking, we know things–a lot of things. And yet, there is no getting around the fact that tribalism–mimetic thinking–is the rule rather than the exception. Thus while there are a number of soldiers in the Israeli army who refuse to serve in the occupied territories, and Israeli peace organizations such as Yesh Gvul (“There is a limit ”) who support them, the majority of the population does indeed see itself as victims, and votes for a prime minister who can be guaranteed to continue the dead-end policies of oppression and occupation–until the demographics of the situation will finally render Israeli rule untenable, and things will change not by reason, but by force. One tribe, in short, will defeat another. What a triumph!
What our discussion comes down to is this: Leaving aside, for now, the first two (philosophical) objections to the meme-virus theory, and granting the fact that tribal consciousness really is the norm for the human race, what are the chances that mimetic behavior could be seriously disrupted, once and for all? This was, after all, the goal of the Scientific Revolution and the Enlightenment tradition; but as one political scientist once pointed out, “It’s not that the Enlightenment failed; rather, it’s that it has never been tried.” This is, of course, not entirely true; but when you have an “advanced” industrial nation with 59% of its adult population sitting around and waiting for the “Rapture” and the Second Coming, 29% thinking that the sun revolves around the earth or not knowing which revolves around which, and 45% believing that extra-terrestrials have visited the planet, you realize that this commentator has a point.
It all comes down to reflexivity: Can we break the hold of the meme-trance, and look at things from the “outside”? After all, intuitively speaking, heavy bodies should hit the earth faster than light ones when dropped from the same height, and we can plainly see the sun “rise” in the East and “set” in the West. Getting outside of the (medieval) meme here means that we look at evidence that is counter-intuitive; that we recognize that there is an objective truth to the situation that doesn’t give a damn about our personal or tribal belief system; that one can stand outside a situation and evaluate it, and extend this analytical mode to our own beliefs, and to who we are. “O would some power the gift to give us/To see ourselves as others see us,” wrote the Scottish poet Robert Burns in the eighteenth century. This external evaluation–what I have referred to elsewhere as “nonparticipating consciousness”–was, as Neal Stephenson correctly notes, the stellar contribution of the ancient Hebrews; and it was also characteristic of the ancient Greeks (their ties to the Mystery religions notwithstanding). After all, when you have Heraclitus talking about the problem of subjective judgment, and Democritus asserting that it is only by convention that we can talk about sweet, bitter, hot, and cold, “but in reality there are only atoms and the void,” you know you’re in a different kind of world than that of blind mimetic belief.
I am not, I should add, claiming that nonparticipating consciousness is without its problems; indeed, that was the entire point of my book The Reenchantment of the World. But it is also the case that there is too much that simply cannot be solved from within a strictly mimetic framework, and this is why we need to ask if the Enlightenment tradition can ever be made to “stick.” Reading its late twentieth-century representatives–I am thinking of philosophers such as Peter Singer and John Rawls–I am often frustrated at how naïve they are, because they are clearly talking about how people “ought” to behave (i.e., rationally) and not how they actually behave (i.e., tribally). What planet are you guys on? is the annoyed reaction I frequently have. And yet, this is the crucial point: Controlling the excesses of tribal consciousness really does mean taking the Enlightenment tradition seriously, breaking the “trance,” and standing outside the particular meme we are caught up in (whatever it is) and evaluating it rationally and empirically. Singer and Rawls don’t have any clear ideas on how to get to such a place, and frankly, neither do I. My guess is that force, not reason, will be the deciding factor in a whole host of areas as the twenty-first century wears on. But it’s challenging to think about what a non-mimetic path might consist of.
Here is a single example, something I can’t really do myself, but at least aspire to. A very long time ago, when I first got interested in Karl Marx, I ran across a biography of the man by Isaiah Berlin. At the time I had no idea who Isaiah Berlin was, but as I was keen to learn more about Marx, I read the book from cover to cover. It was a very sympathetic portrait of the great German philosopher; the author managed to get inside his head, enabling you to see the world through Marx’s eyes. I came away impressed with Marx as a thinker; really, as a heroic figure. And then I subsequently learned that Communism was complete anathema to Berlin, who was a Russian (actually, Latvian) émigré, and that if there was one single political ideology he hated, it was that. I still retain a great admiration for Marx, of course, and confess I have some reservations about the work of Isaiah Berlin in general. But that is neither here nor there. Given his own mimetic background, it is hard not to regard his portrait of Marx as a type of heroism all its own.
©Morris Berman, 2009
*Captain Alfred Dreyfus was a French Jewish artillery officer falsely convicted of treason in 1894, and sent to the Devil’s Island penal colony in French Guiana, where he spent more than four years in solitary confinement. The real culprit, Ferdinand Esterhazy, was tried and acquitted in 1898 in what amounted to an Army cover-up (including the falsification of documents). That same year, the famous writer Émile Zola led the public protest against the government, as the “Dreyfus Affair” tore the nation apart. Eventually, all the charges against Dreyfus were dropped, and he was finally exonerated in 1906. All in all, not exactly France’s finest hour.
April 29, 2009
How Chic Was My Progress
When it was hip to be hep, I was hep.
–From “I’m Hip,” by Dave Frishberg and Bob Dorough
At one point in The Labyrinth of Solitude, Octavio Paz quotes the German philosopher Max Scheler, who asked, “What is progress?” It’s a crucial question, and in the United States there is basically only one answer, involving the visible expression of technological innovation and economic expansion. Paz was not impressed with this notion of progress in 1950, when he wrote his famous essay, and it is a safe bet that he was increasingly disenchanted with the American model as the years wore on. Although he saw the flaws of his own culture quite clearly, he never felt that the American Way of Life was any kind of solution for Mexico or indeed, the rest of the world. Paz was prescient: at a time when everyone was celebrating America as an unrivaled success, he correctly pegged it as a wounded civilization, one that saw the future strictly in terms of novelty and never questioned what it was doing.
This extremely limited notion of the good life, combined with almost total unconsciousness, presents itself as daily reality in the U.S. I recall a friend of mine telling me, a few years ago, about a train trip she took up the California coast, during which she decided to walk through the cars very slowly, from back to front, almost pretending to be an invalid, so that she could eavesdrop on conversations. Every last one of these, she said, was about some gadget, some aspect of consumer technology–software, computer attachments, iPods, cell phone variations, etc. This is where, she concluded, Americans put their attention; it is what really excites them, makes them feel alive. Nor is this limited to Americans, of course. In the mid-eighties, when I was teaching at a Canadian university, my colleagues were literally ecstatic over the introduction of personal computers, firmly believing that these machines would write papers and books for them, perhaps help them get tenure or upgrade their entire careers (promises that failed to materialize, needless to say). As for south of the border, I was recently riding around Mexico City with a colleague of mine when we saw a huge billboard ad for some cell phone, with the caption, in three-foot high block capitals (in English, for some strange reason), KILL SILENCE. “Well,” I remarked to my colleague, “at least they are being honest about it.” “Oh,” he quipped, “you are fixated on cell phones.”
It’s hard to know how to reply to a dismissive remark of this kind, since even the brightest people don’t get it, and usually have no idea what George Steiner meant when he called modernity “the systematic suppression of silence.” Silence, after all, is the source of all self-knowledge, and of much creativity as well. But it is hardly valued by societies that confuse creativity with productivity. What I am fixated on, in fact, is not technology but the fixation on technology, the obsession with it. Unfortunately, it is hard to persuade those caught up in the American model of progress that it is they who are living in an upside-down world, not Octavio Paz.
For it doesn’t have to be this way. Notions of progress might conceivably revolve around how we treat each other in social situations, for example, not around the latest electronic toy. Some years ago I taught in the sociology department of a major American university, and marveled at my colleagues, who were constantly interrupting their conversations with each other to take cell phone calls–as if a conversation with someone who was not physically present were more important than one with someone who was. They had no idea of how rude they were on a daily basis, and regarded my own views on technology as “quaint.” Considering the damage this behavior was doing to human social interaction, and the fact that these folks were sociologists, I was impressed by the irony of it all. It was like being at a convention of nutritionists, each of whom weighed more than 300 pounds. After all, if obesity is the new health, what is there left to say?
This brings to mind the famous phrase coined by the late Senator Daniel Patrick Moynihan, “defining deviancy down.” Moynihan pointed out that there was a process in American culture by which behavior traditionally regarded as selfish or vulgar–e.g., abruptly breaking off a conversation with one person to initiate one with someone else–rapidly becomes acceptable if enough people start doing it. Deviancy, in short, goes down to the lowest common denominator, finally becoming the norm. Indeed, the vulgarization and “narcissization” of American society had become so widespread by the mid-1990s that books were being written on incivility, and conferences held on the subject as well. But none of this made any difference for actual behavior, as even the most casual observation of contemporary American society reveals.
I remember, some years ago, then Secretary of State Condoleezza Rice talking about American (non)relations with Cuba, and stating that “we don’t want that model to be able to replicate itself”–the old contagion theory of communism, as it were. Well, I’m not big on dictatorships myself, but what about the danger of the American model replicating itself? When you go to New Zealand and see the Maori people talking on cell phones and watching American sitcoms, you know that Moynihan’s prediction about the world turning into trash is not very far off.
China, which is all set to replace the U.S. as the next hegemonic power, is of course replicating the American model with a vengeance. “To get rich is glorious,” declared Deng Xiaoping, and the 1990s witnessed the stripping away of time-worn (non-Maoist) Chinese models of good citizenship and moral participation in collective goals. The race was on to crank out as many cell phones, DVD players, televisions, shopping malls, and highways as possible. Monthly car production went from 20,000 in 1993 to 250,000 in 2004, and Wal-Mart and McDonald’s have spread through the country like wildfire. In China Pop, Jianying Zha gives us a vivid (read: garish and appalling) portrait of a country wallowing in mass consumerism, from soap operas to pornography and beyond. China is now dotted with privileged consumption zones, theme parks, and beauty pageants. Cosmetic surgery clinics abound, promising to give young women more rounded, Western eyes. In fact, the beauty industry grosses more than $24 billion a year. “Consumerism became a religion,” writes Rachel DeWoskin in Foreign Babes in Beijing, as “street kiosks made way for sleek boutiques and cafés, where Chinese and foreigners lounged together, drinking lattes and Italian sodas.” Companies arrived like missionaries, she recalls, seducing the average Chinese with products they never knew they needed. In the progressive China of today, everyone, according to the British geographer David Harvey, “speculates on the desires of others in the Darwinian struggle for position.”
This is why we have more to fear from the American model of progress, and its replication on a world scale, than from some aged caudillo in Cuba. For what does it consist of, finally, when “freedom” means little more than “free enterprise”? As Harvey tells us in his remarkable study, A Brief History of Neoliberalism,
“that culture, however spectacular, glamorous, and beguiling, perpetually plays with desires without ever conferring satisfactions beyond the limited identity of the shopping mall and the anxieties of status by way of good looks (in the case of women) or of material possessions. ‘I shop therefore I am’ and possessive individualism together construct a world of pseudo-satisfactions that is superficially exciting but hollow at its core.”
This beguiling quality–the notion of culture as chic–is an enormous shell game, as Harvey demonstrates in his summary of what happened to New York City during the 1970s. A fiscal crisis arose, the product of rapid suburbanization that was destroying the tax base of the city. Financial institutions were prepared to bridge the gap between income and expenditure in the city budget, and expansion of public employment via federal funding was also being considered. But in 1975 a powerful group of investment bankers, led by Citibank, refused to roll over the debt and left the city technically bankrupt. Union activity was curtailed; cutbacks took place in education, public health, and transportation; and wealth got redistributed upward, to the rich and super rich. It was, says Harvey, “a coup by the financial institutions against the democratically elected government of New York City.” Both the social and the physical infrastructure of the city deteriorated, and the city government, the municipal labor movement, and working-class New Yorkers were stripped of their power.
That wasn’t the end of it, however. The next step on the part of the business community was to turn New York into a “good investment opportunity.” “Corporate welfare,” writes Harvey, “substituted for people welfare.” The idea was to sell New York as a tourist destination, and “I [Heart] New York” swept through the town as the new logo. As Harvey notes:
“The narcissistic exploration of self, sexuality, and identity became the leitmotif of bourgeois urban culture. Artistic freedom and artistic licence, promoted by the city’s powerful cultural institutions, led, in effect, to the neoliberalization of culture. ‘Delirious New York’...erased the collective memory of democratic New York....New York became the epicentre of postmodern cultural and intellectual experimentation. Meanwhile the investment bankers reconstructed the city economy around financial activities...and diversified consumerism (gentrification and neighbourhood ‘restoration’ playing a prominent and profitable role). City government was more and more construed as an entrepreneurial rather than a social democratic or even managerial entity.”
Progress (so-called) has to be chic, in other words, and this integrates well with the neoliberal equation of freedom with lifestyle choice, which effectively kills democracy, or renders it irrelevant. Again, it’s a question of how you define it. Home visits by doctors, for example (the norm when I was a child), have vanished almost completely, and Americans would hardly regard the return of this practice as progress. It may well be a life saver, but it’s not particularly hip. SUVs that destroy the environment are chic; mass transit is not. Dog-eat-dog competition is chic; a social safety net, or a health system that actually works, is not. Best sellers praising globalization are chic; community and friendship, rather passé. And so on. Children get excited by toys, bright colors, and the latest gimmick; adults, by the prospect of a truly healthy society. As deviancy is defined downward across the planet, whether in New York or Beijing, it leaves very few adults in its wake.
As far as technology goes, the irony is that it seems to be failing in its own terms. The social and psychological damage of “life on the screen” has by now been documented by numerous studies; but when the technology is actually delivering the opposite of what was originally promised, one has to ask what it is all for. The literature on this is fairly large, so all I can do at this point is touch on some of the highlights.*
In Tyranny of the Moment, Norwegian anthropologist Thomas Hylland Eriksen argues that while the period from 1980 onward saw a rapid expansion in so-called time-saving technologies, the truth is that we have never had so little free time as we do now. The Internet has made possible a huge expansion of available information, and yet all the data show an increasingly ignorant population. Changes that were touted as boosting creativity and efficiency have actually had the opposite effect. Air travel is now so heavily congested that by 2000, fifty percent of the flights connecting major European cities were delayed. In the U.S., road traffic tripled between 1970 and 2000, and the average speed involved in getting around decreased every year. In fact, the average speed of a car in New York City in 2000 was about seven miles per hour, and we can guess that it is even less today. Etc.
One activity heavily promoted as “progressive” was multitasking, made easy by the use of a variety of compact technologies. Yet a study conducted by the University of London in 2005, according to the journalist Christine Rosen, revealed that workers who are distracted by e-mail and cell phone calls suffer a fall in I.Q. of more than twice that experienced by pot smokers. In 2007, she notes, a major U.S. business analyst (Jonathan Spira, at a research firm called Basex) estimated that multitasking was costing the American economy $650 billion a year in lost productivity, and a University of Michigan study revealed that it causes short-term memory loss. In general, writes Walter Kirn, “Neuroscience is confirming what we all suspect: Multitasking is dumbing us down and driving us crazy.” Specifically, it interferes with areas of the brain related to memory and learning; it actually slows our thinking. The problem seems to be that when you move from one task to another, you have to keep “revving up” to get back to doing what you were doing before. Hence, the quality of work gets compromised due to loss of focus and loss of time. In general, the Net lowers the brain’s capacity for concentration and contemplation; “reading on the Net” is almost a contradiction in terms. “We inevitably begin to take on the quality of those technologies,” writes Nicholas Carr; “our own intelligence...flattens into artificial intelligence.”
All in all, it now appears that endless technological innovation and economic expansion, which have only themselves as their goal, finally undermine social relations, redefine common sense, and interfere with our ability to think. Harvey hits the nail on the head when he argues for the existence of an inner connection between “technological dynamism, instability, dissolution of social solidarities, environmental degradation, deindustrialization, rapid shifts in time-space relations, speculative bubbles, and the general tendency towards crisis formation within capitalism.” We are caught in a contradiction, he says, between “a seductive but alienating possessive individualism on the one hand and the desire for a meaningful collective life on the other.”
Personally, I don’t think there is much doubt as to which of these two options is going to win out. By 2050, the planet is expected to have a population of 10 to 11 billion people. Competition for food and water will be fierce; resources in general will be scarce. The majority of this population will probably be living on less than two dollars a day, and “iron” governments will arise to manage politically unstable situations. And yet, there may be an odd silver lining to this, as Blade Runner descends on us in earnest: clutched in the hand of every man, woman, and child will be a state-of-the-art cell phone, and in front of each individual the hippest of personal computers. Granted, we may be collectively dying, but at least we’ll be chic.
©Morris Berman, 2009
*To mention a few key sources: Thomas Hylland Eriksen, Tyranny of the Moment (London: Pluto Press, 2001); Nicole Aubert, Le culte de l’urgence (Paris: Flammarion, 2003); Christine Rosen, “The Myth of Multitasking,” The New Atlantis, No. 20 (Spring 2008), pp. 105-10; Walter Kirn, “The Autumn of the Multitaskers,” Atlantic Monthly, November 2007; Nicholas Carr, “Is Google Making Us Stupid?” Atlantic Monthly, July/August 2008.