28 October 2023
On the responsibility of intellectuals
As the conscience of society, writer-thinkers should not be swayed by prevailing political opinion in the Israel-Hamas conflict.
By George Scialabba
I and the public know
What all schoolchildren learn,
Those to whom evil is done
Do evil in return.
- WH Auden, "September 1, 1939"
"The 7 October attack by Hamas was morally barbarous and strategically futile. Nothing justifies the killing of innocents, not even the denial of a people's nationhood for 75 years, the displacement of hundreds of thousands of them to make way for colonial settlers, or the killing of thousands of their own innocents in scandalously disproportionate 'reprisals'. And as for strategy, for the weak (and not only for them), nothing is less efficacious than such violence, which makes trust - the only reliable basis of lasting security - impossible. Better a people should suffer another 75 years of dispossession than that another such crime be committed in its name. Of course, those who would allow this people to go without justice for another 75 years, and who allowed it to go without justice for the last 75 years, share the murderers' guilt, and with far less excuse."
No one asked me for a public statement after the Hamas raid. If anyone had, this is roughly what I would have said, and I've used it as a kind of template in reacting to the innumerable public statements, solicited and unsolicited, that I've encountered since the event.
The loudest class of reactions - the most numerous, most anguished, most indignant - has been to the least consequential of statements: those of university students. Several dozen student organisations, probably representing several hundred individuals, issued a statement after the raid that began by holding Israel "entirely responsible" for "all the unfolding violence". Academic luminaries such as Lawrence Summers and, more consequentially, billionaire donors such as Ken Griffin, Marc Rowan and Jon Huntsman demanded that the universities in question (Harvard and the University of Pennsylvania - though Penn was guilty only of hosting a Palestinian literature festival several weeks before the attack) officially disavow the students' statements. There was, of course, little debate about the substance of the letter beyond hand-wringing, and it has now been deleted, with the (desired?) result that there will apparently be little more. Is this how such matters should be handled in a healthy democratic society, or, for that matter, a self-respecting educational institution? Couldn't Summers or some other Harvard eminence responsible for the instruction of the young have descended from Parnassus and shown the deluded students the error of their ways in face-to-face debate?
What did the students mean by their first sentence holding Israel "entirely responsible" for the attack? They could not have meant what the sentence appears to mean: that Israel rather than Hamas carried out the attack. They must have been making a statement about moral responsibility for the attack. To absolve Hamas of responsibility for murder is plainly wrong; therefore "entirely responsible" is indefensible. But what if the students had written "largely responsible"? Suppose that during the Vietnam War the National Liberation Front (NLF), or Viet Cong, had committed some atrocity comparable to Hamas's? I don't know how students then would have reacted, but surely millions of Americans would have agreed that the United States, as the aggressor, was "largely responsible for the unfolding violence", even if NLF atrocities were also morally wrong. Most of the world - though not Americans, by and large - believes that Israel is, in effect, the aggressor in the Israeli-Palestinian conflict: for preventing the return of 750,000 Palestinian refugees to their homes after the 1948 war and ever since; for continually extending its illegal settlements on Palestinian land in the West Bank; for devastating southern Lebanon in 1978 and 1982 in an attempt to destroy the Palestine Liberation Organisation (PLO); for refusing to accept the results of the 2006 Palestinian election, in which Hamas was chosen as the Palestinians' political representative; and for imposing an inhumane blockade on the two million inhabitants of Gaza, and carrying out vastly disproportionate reprisals, mostly affecting civilians, after previous Hamas attacks. I'm pretty sure the rest of the world, having supported countless UN resolutions demanding that Israel give back the West Bank, would have ignored the students' statement or rebuked them for rhetorical ineptitude but not seen it as an existential threat to Israel or to Jews.
Student-bashing is a species of left-bashing. If war is politics by other means, so are polemics about foreign policy. The right and the centre have shown themselves determined to locate and publicise "irresponsible" formulations by the left. That would be welcome if they also deigned to take notice of the non-foolish things leftists have to say - often in the same piece - about centrists' and rightists' cherished illusions and guilty silences.
In "Notes on Nationalism" (1945), George Orwell observed: "The nationalist not only does not disapprove of atrocities committed by his own side, but he has a remarkable capacity for not even hearing about them." The opinion and commentary I have read so far - in the New York Times, the Washington Post, Foreign Affairs and online sites such as Unherd, Quillette, Compact and Persuasion - has been almost wholly devoid of any mention of Israel's many crimes against the Palestinians, as though that would be to minimise the horror of Hamas's attack or deny Israel's right to lawful self-defence. On the contrary, the usual judgement about comparative criminality is implied, for example, in this entirely typical article from New York Magazine:
"The Israel Defense Forces do not, as a matter of policy, aim to kill Palestinian civilians, though it is debatable how sorry they really are when they inevitably do. This differentiates them from Hamas, which glorifies the killing of innocent Israelis (because again, in their worldview, no Israeli is innocent)."
In the 15 years before the Hamas attack, Palestinians suffered 6,407 fatalities and 152,560 injuries in comparison with Israel's 308 and 6,307, respectively.
Obviously, every event has both immediate and ultimate causes. In the present case, one should ask both who is responsible for the massacre and who is responsible for its context, the conflict that has generated so many past and (probably) future massacres. This is the left-wing reflex, which infuriates left-bashers, who insist that talk of root causes is merely an excuse for "revolutionary" violence. That is an evergreen fallacy: that to explain is to justify. It is doubtless, in some cases, an honest confusion; in others, an ideologically motivated dodge. In the latter case, its purpose is to deny that, beyond simply denouncing terrorism by the designated enemy, anything morally relevant remains to be said.
But some things do remain to be said. First, that by the ordinary definition of terrorism - deliberate violence against civilians for political purposes - both Israel and the United States have also been guilty of terrorism: Israel during its 1978 and 1982 invasions of Lebanon, as well as many of its bombing raids in that country at other times, and in its blockade and bombing of Gaza; the US far more extensively, through its support, training and arms sales to many brutal regimes and insurgencies, its large-scale bombing of cities in the Second World War, the Korean War and the Indochina War, and the Iraqi sanctions, which killed tens of thousands of civilians. Second, that the definition of terrorism should perhaps be broadened to include reprisals that can hardly fail to produce civilian casualties, like the bombing, strafing and bulldozing of inhabited areas where terrorists may be hiding; or that cause a grave deterioration in the life of an entire society, like large-scale jailings, house detonations, curfews, roadblocks, checkpoints, school closings, border closings, import restrictions, destruction of cultural, administrative and agricultural resources, and more. Third, that those responsible for a huge, flagrant, persistent injustice, which they could remedy without grave detriment to their own society's security, and which terrorists claim to be protesting, deserve some blame for the terrorists' crimes (an allocation that does not diminish the terrorists' responsibility). The left's critics deplore its lack of moral complexity, but their own understanding of terrorism is a virtual flight from complexity.
Another simplicity to which Western (particularly American) intellectuals are prone is "rejectionism". According to the conventional wisdom, Israel has made many generous peace offers over the years, which Palestinians have refused, demonstrating their - and other Arabs' - fundamental unwillingness to live peacefully alongside Israel and absolving Israel of its prima facie obligations to somehow make whole the refugees of 1948 and relinquish the Palestinian lands occupied since 1967. In the New York Times under its executive editor AM Rosenthal and in the New Republic under Martin Peretz, probably the two most influential American vehicles of political opinion in the late 20th century, this view was unquestioned.
It was, nonetheless, false. The Egyptian Peace Plan of 1971, the PLO Peace Plan of 1988, and the Arab Peace Plan of 2002 all envisaged full diplomatic recognition of Israel. Israel rejected or ignored all of them. The reason, as with the Madrid, Oslo and Camp David negotiations, is that Israel has never been willing to withdraw from all the occupied territories and allow a Palestinian state there. The history of the Byzantine manoeuvres with which Israeli negotiators managed to portray various schemes for partial withdrawal but continued control as generous peace offers is told in two books by Israeli writers: Israel and Palestine: Reappraisals, Revisions, Refutations (2009) by the Oxford-based historian Avi Shlaim and Israel/Palestine (2002) by the academic Tanya Reinhart, as well as in Noam Chomsky's monumental and indispensable Fateful Triangle (1983).
Many have called the Hamas massacre Israel's 9/11. If so, we must not repeat that event's sequel. The response of American intellectuals to 9/11 was shameful. Only one explanation was allowed: the terrorists hated American values - democracy, progress, science, freedom. The notion that they had grievances, legitimate or fanciful, about American foreign policy was derided as "apologetics for terrorism" or "reflexive anti-Americanism", even though the George W Bush White House's chief counterterrorism expert, Richard Clarke, said the same thing, citing publications by al-Qaeda. Eventually, after a period of national mobilisation aided by these left-bashing intellectuals - Christopher Hitchens, Charles Krauthammer, the drum-beating Project for the New American Century, the New Republic's mean-spirited "Idiocy Watch", which jeered at reservations about the war on terror - America marched off to two ruinous wars, one criminal and one of tenuous legality. Let us hope Israel is wiser and more law-abiding.
Apart from a few student revolutionaries, no one has actually welcomed the Hamas attack and called for more of the same. What, then, should Western intellectuals say to Israelis and Palestinians? We should remind the Palestinians of their own professed belief: "And the retribution for an evil act is an evil one like it, but whoever pardons and makes reconciliation - his reward is [due] from Allah. Indeed, He does not like wrongdoers" (Koran 42:40). I am pretty sure "wrongdoers" includes "terrorists". Islam, Judaism and Christianity all teach that it is better to suffer an evil than to commit one. Beyond that, we should advise them to appeal to the conscience of Israelis - and Americans, who have steadfastly enabled Israeli policy since 1967. Whether or not that advice turns out to be a cruel joke depends at least in part on us intellectuals, whose vocation it is to inform the conscience of our societies.
We should tell the Israelis that they must refuse to pretend they are blameless, whatever their politicians and their foreign cheerleaders tell them; that having suffered even the greatest of evils does not license doing evil in return, much less to those who had not done them evil in the first place; and that they have some substantial injustices to redress, and though redressing them will probably not gravely threaten their security, they must do so whether or not it does - though of course as prudently as possible.
Finally, because power entails responsibility and preponderant power entails preponderant responsibility, Western intellectuals should not fail to address America's leaders and citizens. For the sake of a reliable and powerful ally in the region containing "one of the greatest material prizes in world history", as an American statesman described Middle Eastern oil in the 1940s, and because of a ferocious domestic lobby, the US has granted Israel virtual carte blanche in its dealings with the Palestinians. The policy has been a success in its own terms: no serious threat to American dominance in the region has arisen in many decades. But it is realpolitik at its ugliest. The US cannot dictate peace, of course, but its influence is immense: Israel has no other source of military and diplomatic support.
Unfortunately, no one in American politics now has the moral or intellectual stature to propose a just settlement. Israel's current political leadership is the most fanatical and bloody-minded in that country's history. And Palestinian politics have never recovered from the Israeli-American overturning of their election in 2006. In Israel/Palestine, it is midnight in the century.
George Scialabba's Only a Voice: Essays has just been published by Verso.
The Cold War has a lot to answer for: trillions of dollars of wasteful military spending; a couple of nuclear close shaves, either of which might easily have led to a nuclear exchange, with unimaginable consequences; and during the late Forties and early Fifties, thousands of American lives ruined or marred by Congressional witch hunts and an unchecked FBI. It also, according to Samuel Moyn's original and penetrating new book, swallowed up our political imagination.
In the last few decades, liberalism has undergone a bewildering succession of attacks, vindications, anatomies, autopsies, and resurrections. It has been denounced as the upholder of the unholy capitalist political order and as the destroyer of the sacred Christian moral order. Evidently liberalism is for Americans - to adapt Madeleine Albright's unfortunate phrase - the indispensable ideology.
"Liberalism" is a protean word, and any historical account of it will be open to one or another objection. But Moyn is something of an expert at intellectual genealogy. Not Enough: Human Rights in an Unequal World (2018) followed that (also protean) idea throughout the 20th century, as it developed in a subtle counterpoint with demands for economic equality, their proponents often competing within international institutions, foundations, and academia for funding, endorsements, and staff. Humane: How the United States Abandoned Peace and Reinvented War (2021) had a similarly dialectical structure, tracing over a century and a half how the ideal of humanitarian war gradually displaced the movement - once very strong - to outlaw all wars. The US in particular, Moyn showed, has in recent decades gone all in on humanitarian war while firmly disavowing any constraint on its sovereign right to wage war whenever and wherever it sees fit, leaving us engaged in "deterritorialized and endless wars," carried on with ultra-precise weapons.
Liberalism Against Itself describes liberalism's decline from the expansive and optimistic creed birthed by the Enlightenment and Romanticism to a cramped and defensive mindset that saw dangers and enemies everywhere. The Cold War liberals were traumatized by Nazism and Stalinism, to the point that most of the vocabulary of political hope - "liberation," "revolution," "emancipation," "utopia" - became unavailable. And they went even further: highly accomplished scholars (those that Moyn discusses, at any rate), they hunted down the ideas they believed to be the roots of the 20th-century catastrophe.
The troop Moyn has chosen to study is an illustrious one. They are six in all: three intellectual celebrities - Hannah Arendt, Isaiah Berlin, and Karl Popper; two others only slightly less well-known: Gertrude Himmelfarb, a prolific historian, and Judith Shklar, a political theorist; and a surprising and adventurous choice, Lionel Trilling. (He leaves out popular but all-too-familiar figures like Reinhold Niebuhr and Walter Lippmann.) There were differences among them, but they all agreed that the rationalism of the 18th-century Enlightenment bred a fateful overconfidence with terrible consequences, first in the French Revolution and then in the Communist movements of the 19th and 20th centuries. The Cold War liberals also reinterpreted Romanticism as an episode in the history of political thought, not as an inspiration for creative agency and the higher life, as earlier liberals like Tocqueville and Benjamin Constant had conceived it, but as a scapegoat and a caution. Prometheanism, they believed, was an ever-present temptation.
Rousseau and Hegel figured prominently in what Moyn calls the Cold War liberals' "anti-canon." Rousseau, by speaking of different wills and different liberties, had, according to Isaiah Berlin, introduced into the world the "grotesque paradox whereby a man is told that to be deprived of his liberty is to be given a higher, nobler liberty" - a familiar Communist gambit. Rousseau also exalted feeling and imagination to parity with reason - in effect, according to the Cold War liberals, preaching irrationalism. Hegel's many offenses included believing that history was a progress, arguing that the state was an important agency of human betterment, and not being a commonsensical English empiricist. Rousseau and Hegel, together with the 18th-century philosophes and Marx, epitomized the pernicious doctrines that had led the modern world astray: progressivism and perfectionism.
The concept of progress was undoubtedly the largest source of the Cold War liberals' grievance. Of course even conservatives acknowledge progress: in science, technology, medicine. It is claims about moral or political progress that make them see red, and especially claims about inevitable progress, which allow rulers to rationalize oppression and deprivation. But such claims are not so common as the emphatic warnings from Cold War liberals might suggest. Condorcet thought that universal brotherhood would eventually reign. Hegel thought that Reason would eventually reign. Marx thought that workers would eventually reign. Each of them had a theory about how his predictions might come true. But they were not dogmatic inevitabilists like, say, Saint-Simon, Comte, and Herbert Spencer. Condorcet, Hegel, and Marx, though condemned as proto-totalitarians by the Cold War liberals, were not saying much more than Martin Luther King's "the arc of the moral universe is long, but it bends toward justice." But as with many ideas and terms appropriated by the Bolsheviks, "progress" became deeply and enduringly suspect.
"Perfectionism" grew out of Romanticism, with its emphasis on the uniqueness and creative power of individuals. Liberalism before the World Wars, Moyn writes, was "the project of securing the conditions - including the economic conditions - for the enjoyment of creative freedom" - by everyone. After the First World War, this emancipatory project, like most projects, seemed beside the point. After the Second, it seemed positively dangerous - a program waiting for a demagogue to take it up. The Cold War liberals preached the opposite doctrine - Original Sin - with a vengeance. A little like Dostoevsky's Grand Inquisitor, they thought that ordinary people could not bear much reality or happiness and needed firm, vigilant political guidance. They also recommended religion to the public, though none of them was religious.
Moyn's graduate school mentor Judith Shklar is the book's muse. It was she who held out longest against the Cold War liberals' cardinal mistake: the wholesale rejection of the Enlightenment. She also criticized one of the most unfortunate aspects of Cold War liberalism: its near-exclusive preference for negative liberty.
Shklar's friend Isaiah Berlin was, like her, friendlier to the Enlightenment and Romanticism than most Cold War liberals. But his very influential foray into political theory, Two Concepts of Liberty, enshrined a distinction between negative liberty - freedom from - and positive liberty - freedom to. In Shklar's view, this maimed traditional liberalism, which had cared as much about "moral and intellectual self-fulfillment" as about "absence of restraint."
Gertrude Himmelfarb found in English intellectual history, and particularly in the rediscovery of one of its great figures, Lord Acton, an antidote for the French and German toxins that had infected modern political theory. Acton was a historian of freedom, a champion of absolute, unconditional moral law, and a bulwark against progressivism ("progress is the religion of those who have none," he pronounced) and relativism.
Hannah Arendt was, Moyn writes, a "fellow traveler" of the Cold War liberals. From different philosophical premises, she arrived at similarly pessimistic conclusions. The Roman Republic was Arendt's ideal, and nothing in modern times equaled its balance of order and freedom except the American colonial period. She despised Rousseau and Hegel, the French and Russian Revolutions, as cordially as the Cold War liberals did. She also shared their distaste for the postcolonial states, many of which appropriated Western rhetoric about revolution and liberation to cover up tyranny and corruption. Arendt's and the Cold Warriors' caustic skepticism about the possibility of non-Western freedom was partly hard-eyed realism and partly, Moyn suggests, racism.
Lionel Trilling is an unexpected presence in this book, and an illuminating one. Trilling was essential in introducing American Cold War liberals to Freud - not the critic of sexual repression but the stern moralist, preaching renunciation and self-control. "The Cold War liberals," Moyn writes, "canonized Freud for the self-oppression he recommended: strict self-control for the sake of avoiding misdirected enthusiasm and monitoring disorderly passion." This self-control - he often called it "responsibility" - was Trilling's lifelong teaching. For him as for the Cold War liberals, enthusiasm and passion were guilty until proven innocent.
**********
If there was one word that Cold War liberals almost invariably used to characterize their outlook, it was "tragic." That Reason, the purported agent of human liberation, had instead spawned concentration camps; that the United States, with the noblest of intentions, wound up again and again allied with unsavory right-wing dictatorships; that this peace-loving nation should find itself forced to drop so many and such lethal bombs on so many noncombatants throughout the 20th century - bombing them into "a nobler, higher liberty," as Isaiah Berlin might have said (but didn't): these regrettable facts fell into the category of "tragic irony." Those who could not perceive the tragedy - or misidentified the victims - were derided as naïve or doctrinaire, too impatient to appreciate the subtle ironies that would preclude an unequivocal condemnation of American foreign policy.
"Tragic irony" was not entirely a dodge. It was meant to counter a style of thinking that one might call Jacobin: a tendency to think about politics schematically, with too much reliance on abstractions and too much readiness to assign people to categories (like class), which thereafter determine their treatment, and too little allowance for contingency and individuality. There was also a tendency in this style of thought to appeal to historically inevitability, usually to justify sacrifices by the masses. The Bolsheviks had done these things to a fault - a monstrous fault - and the Cold War liberals' revulsion was justified. Not justified, however, was their implacable condemnation of any and all radical criticism or action.
************
Moyn's critique of the Cold War liberals is acute and judicious. But they seem to me liable to another, even more damaging critique: the Cold War liberals simply had no idea what the Cold War was about. They believed it was a mortal combat between democracy and totalitarianism, between freedom and slavery, and most elementally, between good and evil. But it was nothing of the sort.
The actual Cold War, rather than the grand clash of metaphysical systems imagined by the Cold War liberals, was a tacit, mutually advantageous arrangement between the superpowers to represent each other as a supreme threat, a tireless aggressor, an evil empire, in order to induce their own populations to bear the moral and material costs of imposing their different forms of hegemony in their respective domains. US support for numerous repressive and violent regimes engaged in crushing restive populations could hardly be justified to the American public honestly, i.e., as support for a favorable investment climate. So Americans were instructed that the international Communist crusade was threatening yet another helpless country vital to the defense of the Free World - no matter what that country's population might want. The Soviets, likewise, portrayed their interventions in Eastern Europe as the defense of socialism against cunning and unscrupulous agents of capitalist counterrevolution, when what was really at stake was, in the first place, preventing the Russian population from becoming infected with democratic ideas from the satellite countries, and in the second place, securing a buffer against invasion from the West, which had nearly destroyed Russia three times in a century and a half. (Then as now, NATO made the Russians extremely nervous.)
The Cold War liberals were useful idiots. "Useful idiot" is a term of art. Of course the Cold War liberals were not idiots, any more than Brecht, Sartre, and Lukács were idiots. But just as Brecht, Sartre, and Lukács managed not to notice the tyranny and mendacity of the Soviet Union in order to remain useful to the international proletarian revolution, the Cold War liberals managed not to notice the United States' consistent support of brutal and undemocratic but business-friendly Third World regimes. The roster of those regimes is very long - Argentina, Brazil, Chile, Congo, Cuba, the Dominican Republic, Greece, Guatemala, Honduras, Indonesia, Iran, Iraq, Nicaragua, the Philippines, Uruguay - and the toll of suffering on the subject populations very great. History will judge both groups of useful idiots harshly.
These Are the Plunderers: How Private Equity Runs - and Wrecks - America by Gretchen Morgenson and Joshua Rosner. Simon & Schuster, 381 pages, $30.
Our Lives in Their Portfolios: Why Asset Managers Own the World by Brett Christophers. Verso, 310 pages, $29.95.
A specter is haunting capitalism: the specter of financialization. Industrial capitalism - the capitalism of "dark Satanic mills" - was bad enough, but it had certain redeeming features: in a word (well, two words), people and place. Factory work may have been grueling and dangerous, but workers sometimes acquired genuine skills, and being under one roof made it easier for them to organize and strike. Factories were often tied, by custom and tradition as well as logistics, to one place, making it harder to simply pack up and move in the face of worker dissatisfaction or government regulation.
To put the contrast at its simplest and starkest: industrial capitalism made money by making things; financial capitalism makes money by fiddling with figures. Sometimes, at least, old-fashioned capitalism produced - along with pollution, workplace injuries, and grinding exploitation - useful things: food, clothing, housing, transportation, books, and other necessities of life. Financial capitalism merely siphons money upward, from the suckers to the sharps.
Marxism predicted that because of competition and technological development, it would eventually prove more and more difficult to make a profit through the relatively straightforward activity of industrial capitalism. It looked for a while - from the mid-1940s to the mid-1970s - as though capitalism had proven Marxism wrong. Under the benign guidance of the Bretton Woods Agreement, which used capital controls and fixed exchange rates to promote international economic stability and discourage rapid capital movements and currency speculation, the US and Europe enjoyed an almost idyllic prosperity in those three decades. But then American companies began to feel the effects of European and Japanese competition. They didn't like it, so they pressured the Nixon administration to scrap the accords. Wall Street, which the Bretton Woods rules had kept on a leash, sensed its opportunity and also lobbied hard -and successfully.
The result was a tsunami of speculation over the next few decades, enabled by wave after wave of financial deregulation. The latter was a joint product of fierce lobbying by financial institutions and the ascendancy of laissez-faire ideology - also called "neoliberalism" - embraced by Ronald Reagan and Margaret Thatcher and subsequently by Bill Clinton and Tony Blair. The idiocy was bipartisan: Clinton and Obama were as clueless as their Republican counterparts.
Among these "reforms" - each of them a dagger aimed at the heart of a sane and fair economy - were: allowing commercial banks, which handle the public's money, to take many of the same risks as investment banks, which handle investors' money; lowering banks' minimum reserve requirements, freeing them to use more of their funds for speculative purposes; allowing pension funds, insurance companies, and savings-and-loan associations (S&Ls) to make high-risk investments; facilitating corporate takeovers; approving new and risky financial instruments like credit default swaps, collateralized debt obligations, derivatives, and mortgage-based securities; and most important, removing all restrictions on the movement of speculative capital, while using the International Monetary Fund (IMF) to force unwilling countries to comply. Together these changes, as the noted economic journalist Robert Kuttner observed, forced governments "to run their economies less in service of steady growth, decent distribution, and full employment - and more to keep the trust of financial speculators, who [demanded] high interest rates, limited social outlays, low taxes on capital, and balanced budgets."
Keynes, the chief architect of the Bretton Woods Agreement, warned: "Speculators may do no harm as bubbles on a steady stream of enterprise. But the position is serious when enterprise becomes the bubble on a whirlpool of speculation." That was indeed the position roughly fifty years after Keynes's death, and the predictable consequences followed. S&Ls were invited to make more adventurous investments. They did, and in the 1980s a third of them failed. The cost of the bailout was $160 billion. In the 1990s, a hedge fund named Long-Term Capital Management claimed to have discovered an algorithm that would reduce investment risk to nearly zero. For five years it was wildly successful, attracting $125 billion from investors. In 1998 its luck ran out. Judging that its failure would crash the stock market and bring down dozens of banks, the government organized an emergency rescue. The 2007-8 crisis was an epic clusterfuck, involving nearly everyone in both the financial and political systems, though special blame should attach to supreme con man Alan Greenspan, who persuaded everyone in government to repose unlimited confidence in the wisdom of the financial markets. Through it all, the Justice Department was asleep at the wheel. During the wild and woolly ten years before the 2008 crash, bank fraud referrals for criminal prosecution decreased by 95 percent.
The Washington Consensus, embodying the neoliberal dogma of market sovereignty, was forced on the rest of the world through the mechanism of "structural adjustments," a set of conditions tacked onto all loans by the IMF. Latin American countries were encouraged to borrow heavily from US banks after the 1973 oil shock. When interest rates increased later in the decade, those countries were badly squeezed; Washington and the IMF urged still more deregulation. The continent's economies were devastated; the 1980s are known in Latin America as the "Lost Decade." In 1997, in Thailand, Indonesia, the Philippines, and South Korea, the same causes - large and risky debts to US banks and subsequent interest-rate fluctuations - produced similar results: economic contraction, redoubled exhortations to accommodate foreign investors, and warnings not to try to regulate capital flows. By the 2000s Europe had caught the neoliberal contagion: in the wake of the 2008 crisis, the weaker, more heavily indebted economies - Greece, Italy, Portugal, and Spain - were forced to endure crushing austerity rather than default. Financialization was a global plague.
*********************
Around 1970, a brave new idea was born, which ushered in a brave new world. A young trader figured out how to buy things without money. More precisely, he realized that you could borrow the money to buy the thing while using the thing itself as collateral. Our young genius bought a company with borrowed money, using the company's assets as collateral for the loan. He then transferred the debt to the company, which in effect had to pay for its own hijacking, and eventually sold it for a tidy profit. The young trader had invented the leveraged buy-out (LBO).
The leveraged buy-out was the key to the magic kingdom of private equity. But LBOs are not good for everyone. To service its new debt, the acquired company often must cut costs drastically. This usually means firing workers and managers and overworking those that remain, selling off divisions, renegotiating contracts with suppliers, halting environmental mitigation, and eliminating philanthropy and community service. And even then, many companies failed: a far higher proportion of companies acquired in LBOs went bankrupt than of companies that weren't.
Fortunately, it was discovered around this time that workers, suppliers, and communities don't matter. In the wake of Milton Friedman's famous and influential 1970 pronouncement that corporations have no other obligations than to maximize profits, several business school professors further honed neoliberalism into an operational formula: the fiduciary duty of every employee is always and only to increase the firm's share price. This "shareholder value theory," which exalted the interests of investors over all others - indeed recognized no other interests at all - provided the intellectual and moral scaffolding of the private equity revolution.
*******************
Two excellent new books narrate, with complementary approaches, the alarming story of private equity's kudzu-like growth. These Are the Plunderers: How Private Equity Runs - and Wrecks - America by the Pulitzer Prize-winning reporter Gretchen Morgenson and her long-time writing partner Joshua Rosner provides blow-by-blow case histories, reconstructing tactics, analyzing legal conflicts, and affording the victims of PE depredations a face and a voice. (PE executives are generally faceless and rarely speak except to issue pro forma denials of everything through their extremely expensive lawyers.)
Morgenson and Rosner offer a précis of the PE playbook:
A high-performing operation is taken over and crippled with heavy debt taken on to pay for its acquisition. Real estate and other assets are stripped and sold, paying off the financiers who've taken charge. Pensions are slashed and employees fired or their jobs moved offshore. Decimated, the company flails and the financiers head to their next triumph.
An academic study found that around 20 percent of PE-acquired companies were bankrupt after 10 years, compared with 2 percent of all other companies. Another study looked at 10,000 companies acquired by PE firms over a 30-year period and found that employment declined between 13 and 16 percent. A 2019 study found that over the previous decade almost 600,000 people had lost their jobs as retail companies collapsed after being bought by private equity.
Doctors and nurses at PE-owned health-care facilities are routinely suspended or fired for protesting inadequate staffing or equipment - not unrelated, probably, to the 10 percent higher fatality rate in PE-owned nursing homes. Hospital and nursing home charges at PE-acquired facilities often increase sharply; so do rents for PE tenants, leading to frequent mass evictions. PE Medicare reimbursement fraud is rampant, but Justice Department investigations are rare and penalties are minimal.
Morgenson and Rosner recount numerous horror stories, with a wealth of grisly detail. The centerpiece of their account is the destruction of a giant insurance company, Executive Life of California. The company had bought billions of dollars of junk bonds - high-interest, high-risk bonds - from Drexel Burnham Lambert, a firm later found guilty of insider trading, fraud, and other financial crimes and forced out of business. One of Drexel's crimes was misrepresenting its junk bonds, so no one knew exactly how to value Executive Life's massive bond portfolio. Enter Leon Black, founder of the Apollo Fund and a former Drexel executive. He knew very well how much each of the company's bonds was worth, and if he could take over the company, that knowledge would be worth billions. Through a process "labyrinthine in its complexity" (indeed, this reviewer is still struggling to understand it), and aided by a bumbling state Insurance Commissioner, Apollo acquired Executive Life, keeping its bonds and shucking off its insurance portfolio to a small, spun-off corporation. Apollo made a $2 billion profit on the bonds; Executive Life's 300,000 policyholders lost, by Morgenson and Rosner's estimate, $3.9 billion in payouts.
************************
Brett Christophers is an academic rather than a journalist, and his approach in Our Lives in Their Portfolios is more analytical than that of These Are the Plunderers, though equally compelling. He proposes restricting the term "private equity" to the shares of companies that are not publicly traded, rather than applying it to the investment firms - Apollo, Blackstone, Carlyle, KKR, et al - which take companies private but also take over companies that remain publicly traded. All the funds organized by these buyout artists, whether their equity is public or private, are called "asset-manager funds." Using these definitions, only $5 trillion worldwide is managed as private equity. Of the remaining $98 trillion of assets under management, around 90 percent are financial assets - stocks and bonds. The other 10 percent is the world of "real assets," physical and social.
Asset manager funds buy highways, ports, water systems, energy (including wind and solar), farmlands, power generation facilities, waste management facilities, telecommunications, transportation systems, municipal parking meters and parking garages, single-family and multi-family housing, child-care centers, medical practices, nursing homes - anything with a cash flow. Adding financial to real assets, asset managers control more than 40 percent of the world's wealth, including more and more of the essential physical and social infrastructure of modern life. This is something new under the sun, Christophers claims: "a society in which the key physical systems supporting social life and its reproduction - so-called 'real assets' - are increasingly owned by institutional investors [pension funds, insurance companies, university endowments] specifically through the mediation of dedicated asset managers [the plunderers] and their investment funds." He calls it "asset manager society."
How did it come about? Two large developments paved the way. The first was the 2008 financial crisis, and in particular its aftermath: several years of rock-bottom interest rates. Pension funds and insurance companies, though traditionally conservative, needed some returns in order to meet their obligations, and neither bonds nor money market funds were offering much. These institutions did not have the financial expertise to make complicated (i.e., tax-avoidant) large-scale investments or the technical expertise to manage acquired companies or facilities. Asset-management investment funds were a perfect fit, and since their buyout financing relied heavily on debt, low interest rates were like high-octane fuel.
The second development, previously noted, was ideological. At first in the US and UK, then in Canada, Australia, and Europe, market fundamentalism - the belief that states could do nothing right and that the profit motive was invariably more efficient - led to the privatization of many traditionally public functions: building and maintaining highways and bridges; housing; elder care and nursing homes; water and sewer systems; energy production and distribution; and others. So bold were the privateers, so self-effacing were the public authorities, and so browbeaten were developing countries by the IMF that many investment fund managers demanded and received a guarantee against losses, a practice called "de-risking." Another investor-friendly arrangement was the Public-Private Partnership, which frequently guaranteed the fund a minimum revenue or pledged capital spending.
There are horror stories in Our Lives in Their Portfolios, too. Asset manager KKR paid the city of Bayonne, New Jersey, $150 million, plus a promise to undertake capital improvements, in exchange for revenues from its water system over a 40-year period. After five years, rates had risen 20 percent, no improvements had been made, and KKR sold its interest for a 36 percent profit. The Carlyle Group entered a similar agreement with Missoula, Montana. It too sold its interest after five years for a large profit, leaving the water system in such bad shape that the city repossessed it under eminent domain.
But even more worrisome than these misfires and machinations, Christophers argues, is the fundamental mismatch between the asset-manager model and the real assets they are increasingly taking over. Asset managers prefer to invest in parts of large, integrated assets rather than the whole, since it is easier to isolate risk and return profiles. The result is often a patchwork with, for example, different owners of transmission and distribution assets within power systems, or of the pipes and the water within a water system. This, together with the invariably short-term nature of asset-manager involvement, makes long-term planning all but impossible. It is not the way to grow a business, Christophers warns, and it is not the way to sustainably manage infrastructure.
****************************
Beyond exacerbating America's already grotesque economic inequality and upending (or ending) the lives of workers, tenants, hospital patients, and nursing home residents, private equity inflicts a more subtle, insidious harm on American society. As left-wing critics going all the way back to the 19th-century Populists have pointed out, a minimum of autonomy and control in the chief spheres of life is essential to achieving a balanced relationship to authority, neither belligerent nor servile, which in turn is a prerequisite to effective democratic citizenship. Industrialization, to which Populism was a response, was the first setback to America's 200-year tradition of popular economic independence. The post-World War II destruction of the family farm was another. The Reaganite/neoliberal destruction of labor unions was another. Private equity, by removing ownership and authority even further from the worker than the corporation did, paralyzes its subjects still more. Often the worker cannot even learn where his orders come from, much less confront their source. Distance and abstraction do not merely disempower; they infantilize. Such a workforce is no longer a fit democratic citizenry, the only force capable of taking on the financial juggernaut.
**********************
There is something perversely impressive about private equity. With an energy and ingenuity worthy of a less sordid purpose, the plunderers have crafted a marvelously efficient machine for enriching themselves and have persuaded (or bribed) the political class not to interfere. If threatened, they will undoubtedly warn that the economy will collapse without them - that is, after all, how the banks got away with it last time. And who, in any case, is going to threaten them? A fragmented, dispossessed workforce that doesn't even know where to find its bosses?
The epigraph of Our Lives in Their Portfolios is a line from the CEO of Brookfield Asset Management, one of the world's largest: "What we do is behind the scenes. Nobody knows we're there." By and large, nobody does. It's true of government too. I only learned from Morgenson and Rosner that Jay Powell, the current chairman of the Fed, who spent $750 billion to save the corporate bond market, where private equity feeds, is himself a former private equity executive.
It will probably take an entity of their own size to curb the financial industry's drive to own the world. Sweden, Denmark, South Korea and a few other states have fought off at least some of the plunderers' worst abuses. The chance that the pirates' stronghold and base of operations, the United States, will do the same seems pretty slight. There's a glimmer of hope that the greed of the vampires will finally repel their own customers. The huge fees they charge their institutional investors, together with the high percentage they take of the profits, have led some of those investors to commission studies comparing their returns from PE with what they would have earned simply putting the money into an index fund. It turns out to be a dead heat.
Maybe these investors will go on strike. Or maybe the US, UK, EU, Canada, and Australia will wriggle out from under the industry's thumb and rein in its egregious tax and other legal privileges. Or maybe the rest of us will see through the mystifications and navigate the political obstacle course they and their legislative allies will undoubtedly throw up to keep us from holding them accountable. If not, then we'll all be assets someday.
Regime Change: Toward a Postliberal Future by Patrick Deneen. Forum Books, 268 pages, £22.
The end of liberalism, like the end of history, capitalism, religion, and the novel, has often been foretold. Such prophecies will probably cease only with the end of reading and writing - which, come to think of it, may not be so very far off.
University of Notre Dame professor Patrick Deneen achieved an unlikely success with Why Liberalism Failed (2018), a work of political philosophy that deployed serial abstractions - "liberalism," "tradition," "individualism," "timelessness," "progress" - with an air of moral urgency and predictions of civilizational collapse. The book's evidentiary texture was thin, but thanks to three decades of identity politics and four decades of incessant right-wing political messaging, an unreflective dislike of liberalism is widespread in the United States, so a book entitled Why Liberalism Failed had a considerable reservoir of sympathy to draw on.
What is liberalism? The word has two roots: liber, meaning "free," as in "not under compulsion"; and liberalis, meaning "generous, open-handed," as in "free with one's time, money, advice, etc." These two facets of liberalism find expression in two broad areas of social policy: the protection of equal rights for racial and sexual minorities and other vulnerable groups; and generous public provision. It follows that a coherent critique of liberalism should, upon identifying some political or social problem, plausibly connect it to one or the other of liberalism's two main purposes.
This was not Deneen's method in Why Liberalism Failed. His liberalism was an ideal type, almost a Platonic Form. Deductively, without much in the way of history, sociology, or economics, he blamed a myriad of ills - alienation, isolation, consumerism, elitism, inequality - on liberalism's allegedly relentless undermining of traditional practices, customs, and restraints. He was sometimes right about effects; rarely, however, about causes.
Isaiah Berlin used to admonish polemical opponents whom he thought were ignoring important distinctions: "Everything is what it is and not another thing." Liberalism is liberalism, and not capitalism. Liberalism, to repeat, is the commitment to secure equal civil and political rights to all citizens; and to provide sufficient material help to allow needy citizens to live a dignified life. Capitalism is an economic system based on competition in a market, the realization of a profit, and the reinvestment of that profit back into the production process.
To confuse matters, however, there is (or was) also something called "economic liberalism" or laissez-faire, whose three classical tenets, according to Karl Polanyi's The Great Transformation, are: "a labor market, the gold standard, and free trade." That does not sound much like political liberalism - perhaps because it isn't - and nowadays no one (except The Economist) calls economic liberalism "liberalism," as Deneen does. Many Americans call it "neoliberalism," which emphasizes privatization, shrinking government, and subjecting public enterprises to market criteria. Their names notwithstanding, neoliberalism and economic liberalism are mainly championed today by conservatives (and centrist Democrats) and opposed by the few egalitarian liberals remaining in American public life.
Why, then, did liberalism fail? Because it succeeded, Deneen replied, and so revealed its fundamental incompatibility with human nature. Humans fall naturally into two classes, he informed us: the "few" and the "many," who used to know by instinct their station and its duties, though occasionally they needed the "guardrails" provided by narrow but deep "natural" institutions like family, community, and religion. In the modern era, when we are supposedly emancipated from traditional restraints by such unnatural delusions as egalitarianism, technology, abundance, self-expression, and - most sinister and seductive of all - progress, the many (now called "the masses") find themselves prey to the few (now called "the elites").
Well, that's Deneen's explanation. I prefer a less metaphysical one. Liberalism certainly succeeded - for a time. Between 1945 and 1975, virtually every social and economic indicator in America was positive compared with today: union membership was higher; economic inequality was lower; membership, both civic and religious, was higher; suicide, depression, and other mental health conditions were lower; self-reported happiness was higher; and so on. The French call these decades les Trente Glorieuses; they were America's Golden Age.
What happened? Even before the New Deal reforms that rescued American capitalism from the Great Depression, a new ideology grew up, devoted to protecting business from even minimal government interference. The new creed, market fundamentalism, was promoted through books, pamphlets, magazines, advertising campaigns, lectures, documentary films, radio and television, research institutes, academic programs and appointments, and of course, ceaseless Congressional lobbying. This campaign (exhaustively chronicled in Naomi Oreskes and Eric Conway's excellent new book, The Big Myth: How American Business Taught Us to Loathe Government and Love the Free Market) was largely financed by the National Association of Manufacturers, the Foundation for Economic Education, the American Liberty League, the National Electric Light Association, and many other industry groups, right-wing foundations, and wealthy individuals. Initially concerned with opposing child labor laws, workmen's compensation, and trade unions, these groups shifted into high gear to portray the New Deal as "socialism." All through the long liberal decades they never let up, and their efforts were crowned with success by the election in 1980 of the Great Oversimplifier, Ronald Reagan, at which point they began to sabotage the New Deal's legacy in earnest. Now in full control of one of our two national parties, most state governments, and the Supreme Court, this wrecking crew is still hard at work. Trying to understand the decline of liberalism without reference to this decades-long business-sponsored propaganda blitz would be like trying to understand the decline of literacy without reference to advertising or television. Liberalism did not fail; it was assassinated by a large and open conspiracy.
Deneen's unwillingness to discuss or even mention capitalism may have contributed to the popularity of Why Liberalism Failed among conservatives. But it also lent the book an air of unreality, as though it were a series of extended rhetorical exercises - a mere trial run for a more substantive critique.
************************
Regime Change promises "a positive and hopeful vision of a post-liberal future." It is certainly more ambitious and programmatic than its predecessor. Its critique of liberalism is more trenchant, dividing it into right-wing or classical liberalism, especially concerned with economic freedom, and left-wing or progressive liberalism, mainly concerned with racial and sexual equality. The two are nevertheless united in championing meritocracy, at once the chief engine of economic inequality and the primary mechanism of social control. Deneen makes good use of Michael Sandel's and Michael Lind's recent books on meritocracy and the "new class," driving home the point that those who succeed in a competitive, credentials-based system tend to believe that those who have not succeeded deserve their low status. This is a recipe for the kind of resentment that currently disfigures American politics.
If this critique of liberal elites is the leitmotif of Regime Change, the ground bass is: ordinary people are conservative. They want stability, order, and continuity, unlike flighty progressive or greedy classical liberals. They are indifferent to ideologies; custom and common sense are enough for them. Most people in the West were satisfied until the Enlightenment challenged ancient beliefs and the rise of commercial society disrupted ancient lifeways, sowing envy and new appetites, displacing traditional elites, and setting up the false idol of Progress.
To regain our former happy condition, Deneen prescribes "aristocratic populism." The key to stability is balance - in the case of society, between the few and the many. The ideal is a mixed constitution, with distinct orders or estates: great and small, noble and common. But the new elites will not be irresponsible, like all past and present elites. They will share the many's tradition-derived common sense, watching over the common life and protecting its customary institutions: family, community, religion. And the many will not be passively ruled. They will not be like Burke's rustics, contentedly chewing their cud in the shade of the great English oak; they will be a multi-ethnic, multi-national working class, able to constrain and discipline their elites when necessary. It is an appealing picture, though much too lightly sketched.
Deneen's emphasis on "custom" and "common sense" raises an obvious question. Didn't custom and common sense keep women, dark-skinned people, homosexuals, lower castes, and many other unfortunates - probably amounting to most of humanity - in their places for millennia, whether they were satisfied there or not? Don't custom and common sense need to be criticized? Democracy is, after all, as much about arguing as voting.
Deneen is a bit dodgy about democracy. He would replace the negative liberty and procedural equality of liberal democracy with a "common-good conservatism" based on "classical or Christian liberty": "a condition of self-governance ... requiring an extensive habituation in virtue, particularly self-command and self-discipline over base but insistent appetites." Only virtue, he instructs us, makes true liberty possible.
But does it? Saints have been tyrants, again and again: the Christian Roman emperors who persecuted unorthodox Christian theologians; the medieval Catholic prelates who exterminated the Cathars, or Albigensians; the Inquisitors who burned Jews, scholars, and hapless peasants; Geneva under Calvin; the ascetic Brahmins who for centuries have treated the lower castes with barbarous contempt and cruelty; the Iranian mullahs, obsessed with female modesty and homosexuality. Virtue may be a necessary condition of political liberty, but it is hardly a sufficient one.
Deneen appears untroubled by this appalling history. Again and again, he invokes the Puritan John Winthrop's "beautiful definition of freedom": "The proper end and object of authority ... is liberty for that only which is just and good." This is exactly the principle on which the above persecutions, and all other persecutions, were conducted: liberty only for what the state (community, party, church) defines as just and good. Deneen's "beautiful definition" flatly contradicts two of the most authoritative - and beautiful - statements of America's political philosophy: "Congress shall make no law respecting an establishment of religion" and Oliver Wendell Holmes Jr's dictum: "If there is any principle of the Constitution that more imperatively calls for attachment than any other it is the principle of free thought--not free thought for those who agree with us but freedom for the thought we hate." I expect most Americans would (if we can put our phones down long enough) fight in the last ditch to defend that philosophy against Deneen's regime of virtue.
Religion occupies a somewhat shadowy place in the argument of Regime Change. It is frequently invoked, declared foundational and indispensable, and even identified with Christianity - no milquetoast "religion of humanity" or "ethical culture." But troublingly, in discussing religion as an essential means of integrating society, he remarks:
Liberalism is a denial that there can be any good for humans that is not simply the aggregation of individual opinions. Liberalism claims that any justification based upon "the common good" is ultimately nothing more than a preference disguised as a universal ideal.
Liberalism claims no such thing. Like everyone else, liberals believe that some things would be good for all humans, even if many of them don't agree: reduced carbon emissions, nuclear disarmament, universal health care, and many more. But liberals believe that the only thing that would give them the right to effect those things using the power of the state is the "aggregation of individual opinions" known as voting. In humanity's unenlightened past, many religious (and secular) leaders have believed that their "objective" knowledge of the common good entitled them to skip the step of consulting the opinions of the ruled. I hope Deneen, in his next book, will warn the religious and political leaders of common-good conservatism against claiming any such authoritarian prerogative.
Modernity has its problems, and Deneen has identified many of them: overspecialization, spiritual desiccation, technology-worship, cultural vertigo, bigness. His proposed solutions will strike most people as far-fetched or eccentric, but they deserve to be debated - a prophet is not required to supply all the details and necessary qualifications. His writing is often formulaic and abstract, but occasionally it takes flight. Compared with the greatest writing about the dilemmas of modernity, from John Ruskin and William Morris to D. H. Lawrence and T. S. Eliot, Lewis Mumford and Ivan Illich, Christopher Lasch and Philip Rieff, Paul Goodman and Wendell Berry and Alasdair MacIntyre, Why Liberalism Failed and Regime Change, though sincere and impassioned, are decidedly minor. But they will find many enthusiastic readers among those who are - for good reasons or bad - hostile to liberalism. Undoubtedly John Stuart Mill, always eager to canvass arguments against his most cherished beliefs, would earnestly recommend both books to his fellow liberals.
[END]
George Scialabba's Only a Voice: Essays has just been published by Verso.
What Can We Hope For? Essays on Politics by Richard Rorty. Edited by W.P. Malecki and Chris Voparil. Princeton University Press, 227 pages, $24.95.
By George Scialabba
Fifty years ago, William F. Buckley Jr. vowed not to read another book about liberalism until his mother wrote one. Liberalism was riding high then, and Buckley was presumably annoyed by its champions' triumphalist tone. He would feel differently now. You can hardly walk around the block today without tripping over a critique of liberalism. There are critiques by wild-eyed Randians, free-market libertarians, neoclassical economists, neo-Burkean conservatives, and Catholic integralists; and from the left, by critical race theorists, postmodernists, and of course, Marxists.
Two books stand out from the anti-liberal flood. Christopher Lasch's The True and Only Heaven (1991) takes on the dark side of the Enlightenment legacy, searching out the philosophical roots of consumerism and technocracy and unearthing a counter-tradition in a great many well-known and lesser-known places. Whatever one thinks of Lasch's central argument, it is a remarkable synthesis. The other book is Alasdair MacIntyre's After Virtue (1981), undoubtedly the best (the only?) Aristotelian critique of liberalism. Though difficult, like all MacIntyre's works, it has been extraordinarily influential. Otherwise fractious conservatives are unanimous in their reverence, while even most liberals and leftists - at least those who pay any attention to philosophy - accord it a wary respect.
MacIntyre, now 91, has had an unusual trajectory. Born in Scotland to a working-class family, he grew up and attended university in northern England. In the late 1950s he joined E.P. Thompson's legendary journal The New Reasoner, which eventually became the New Left Review. In the 1960s, after a decade of intellectual engagement with Marxism, much of it gathered in his 1971 collection Against the Self-Images of the Age, he drifted away - not from Marxism but from the left. The contemporary left, he argued, saw only two possibilities: Stalinism or social democratic reformism. As a Trotskyist, MacIntyre could accept neither.
Marxism continued to fascinate him, as an ethics and a philosophy of history. One of the attractive features of his thought is his scorn for the superficial notion that Stalinism discredited Marxism. As he later put it: "The barbarous despotism of the collective Czardom which now reigns in Moscow is as irrelevant to the question of Marxism's moral substance as the life of a Borgia pope was to Christianity's moral substance." He eventually distanced himself from Marxism, both because of the "impotence of Marx's economic theory" (presumably value theory - a valid complaint) and because of the failure of Marx's predictions (not a valid complaint). But concluding his intriguing little volume, Marxism and Christianity (1968), he insisted that Marxism was by no means superannuated:
Both liberals and Christians are too apt to forget that Marxism is the only systematic doctrine in the modern world that has been able to translate to any important degree the hopes men once expressed, and could not but express in religious terms, into the secular project of understanding societies and expressions of human possibility and history as a means of liberating the present from the burdens of the past, and so constructing the future. Liberalism by contrast simply abandons the virtue of hope. For liberals the future has become the present enlarged.
In the long course of his subsequent development, MacIntyre would maintain a distant regard for Marxism, while directing his philosophical heavy artillery at liberalism.
Liberalism is notoriously hard to define. For MacIntyre the political radical, liberals are those who, while professing concern for the less advantaged, have no intention of allowing them significantly greater social power. Judging from scattered hints in his later works - he has written hardly anything about politics for the last forty years - he still feels that way. For MacIntyre the cultural conservative, liberalism embraces rationalism, secularism, individualism, and materialism, the latter as both a philosophical doctrine and a sociological phenomenon.
For MacIntyre the moral philosopher, the decline of the modern world is seen at its starkest and most ominous in the evolution of moral reasoning. The background and scaffolding of the classical tradition of Aristotle and Aquinas was a shared conception of cosmic or social order, derived from Aristotle's metaphysics and Aquinas' theology. The scientific revolution of the 16th and 17th centuries undermined Aristotle's metaphysics, and the Protestant Reformation introduced several competing theologies. In response, moral philosophers in the 18th and 19th centuries, including Hume, Smith, Diderot, Kant, Bentham, and Mill, tried to provide a rational but non-metaphysical justification for morality. All of them, MacIntyre argues, failed.
Instead, in the 20th century morality was severed from rationality. The dominant form of moral theory is now emotivism, which holds that evaluative statements are nothing more than expressions of preference. "X is good" simply means "I like X," but disguised as a factual statement in order to manipulate the hearer. There is a less pejorative way of understanding emotivism: "I like X" could mean "I like X. This is why, and why you might like X too." For MacIntyre, however, emotivism makes honest communication impossible. We can only inveigle one another.
The initial chapters of After Virtue, which set out the imagined consequences of this philosophical impasse, are probably what MacIntyre is best known for. The absence of a cosmic order, with its associated telos or purpose, condemns modern society to widespread anomie, superficiality, and narcissism (in the clinical, not the colloquial, sense). Modern culture has evolved several representative character types, notably the bureaucrat, the therapist, and the aesthete. The first two deploy fictitious expertise to achieve externally imposed goals; the third treats others as interesting sensations to be consumed. Modern moral life is a series of interminable quarrels and subtle conflicts of will which, for lack of a recognized moral authority, can never be resolved.
What then is this telos, which alone can redeem us? Telos, like "Being" and "dialectic," is one of the most important and mischievous terms in the history of philosophy. It means, roughly, "essential nature," "ultimate end," "purpose," "goal," "fulfillment." According to MacIntyre, moral philosophy is futile unless it starts from our telos. Only with a grasp of our true end can we judge what our duties are and what virtues will enable us to fulfill them. The human telos, MacIntyre asserts, following Aristotle, is rational happiness.
Well, rational happiness is a plausible contender, surely. But questions crowd in at this point. Who gets to decide what the human telos is? Why is reason more essential to humans than, say, love or beauty? More universal than suffering? Nobler than sympathy or courage? And what is an essential nature, anyway? Is it something every member of the species has? But every human has eyes and a stomach, arguably as important to her well-being as reason. It's true that if someone is deprived of eyes or stomach, we still count her as human, but the same is true if someone is temporarily or permanently deprived of reason.
There is an ambiguity in the meaning of telos that even MacIntyre, for all his philosophical acuity, does not resolve. It can mean "nature," that is, what we are; or it can mean "purpose," that is, what we are for. Anyone is entitled to an opinion about my nature - my own opinion is not privileged. But I alone am entitled to decide on my purpose - I am for whatever I decide to be for. To refer, then, as MacIntyre repeatedly does, to the end or goal or purpose of human life as something objectively discoverable or deducible is to ignore the fact that ends, goals, and purposes are chosen with the whole of a person's experience and imagination - perhaps making use of Aristotelian moral philosophy, and perhaps not.
Why is MacIntyre so alarmed by our apparent lack of moral consensus? After all, the supposedly interminable and irresolvable disagreements he laments may be described very differently, as conversations: age-long, society-wide conversations, sometimes (e.g., slavery), but by no means always, issuing in violence. Our national conversations about Jim Crow and interracial marriage ended in the 1960s. Our conversation about the full humanity of women seemed to have ended in the 1980s and 90s. Republicans and evangelicals are bent on re-opening it, but that too will end eventually, when (to adapt a venerable 18th-century phrase) the last originalist Justice is hanged in the entrails of the last fundamentalist preacher. Our conversation about homosexuality ended happily; our conversation about legalizing marijuana - maybe also psychedelic drugs - looks promising. Our conversation about economic inequality and reviving the New Deal is unfortunately going nowhere - but there was a New Deal, which is perhaps grounds for hope. Our conversation about global warming has, alas, barely begun. But despite the persistence of conflict, MacIntyre's insistence that modern pluralism makes moral and political progress impossible seems at odds with our history. And in secular, social-democratic Europe, where they're even more alienated from Aristotelian metaphysics and supernatural religion than we are in America, their moral/political conversations generally go even better.
MacIntyre is one of the most celebrated moral philosophers alive. But he seems to me just wrong about the nature of moral argument. What it ought to provide, he writes, and did provide in those blessed days now lost, are "genuine objective and impersonal standards" capable of "rational justification." At least, that's what premodern philosophers claimed. But for three hundred years, most philosophers have rejected that claim. Of course one should be "rational" and "objective" - that is, fair-minded and scrupulous in argument - whatever one's moral philosophy. But that doesn't mean one should - or can - appeal only to standards and values that will compel universal assent, or whatever "rational justification" is supposed to mean. The distinctions on which MacIntyre leans so heavily - "meaning" and "use"; "emotive" and "rational"; even "means" and "ends" - are practical and provisional, not absolute.
Two people arguing about whether something is good may offer factual reasons, in case one thinks the other is misinformed, or may suggest that the other's reasoning is faulty. If that doesn't produce agreement, they may canvass principles and values relevant to the dispute, and if they share one and can agree on how it applies to their disagreement, then they've reached agreement. In the most difficult case, however, facts and logic will not suffice: the disputants will have to reveal to each other the whole scaffolding of beliefs, experiences, and hopes underlying their position, each one trying to see the issue with new eyes - or more precisely, with an enlarged moral imagination. Moral judgment incorporates both reason and emotion - Hume formulated that then-novel truth provocatively, saying that reason is always the servant of emotion, in an attempt to dislodge from readers' minds the traditional strict separation between the two. It's also what pragmatists like James and Dewey meant by identifying the imagination as our key moral faculty; and it's why Richard Rorty wrote that we should expect moral progress chiefly from the work of novelists, journalists, ethnographers, and other purveyors of thick descriptions rather than from philosophy.
But who can get through all that mutual moral excavation? Even lovers and spouses rarely dredge all the way to moral bedrock. Can Democrats and Republicans, democratic socialists and Tea Partiers, be expected to? Not individually, perhaps; to that extent, MacIntyre's pessimism is justified. Other, more homogeneous and solidaristic societies rely on widely shared origins or a sense of mutual responsibility in order to understand one another sufficiently for civic debate. But in chronically bamboozled, heavily armed, endemically precarious America, there is very little solidarity or good will. The civic debate is a brawl. And the institutions that should help mediate it - the schools, the media, local government - only add fuel to the flames. MacIntyre's diagnosis notwithstanding, however, the reasons for America's civic cacophony are political and historical, not philosophical - plutocracy, not emotivism.
It is not only the dark side of modernity - the alleged manipulativeness, shallowness, aimlessness, and fragmentation - that MacIntyre deplores. Even modernity's loftiest achievements are hollow. Natural rights and human rights have no more reality than witches or unicorns, he scoffs, so the Bill of Rights, the Declaration of Independence, and the UN Universal Declaration of Human Rights are "fictions," with no objectively rational justification. In fact, those great documents are not philosophical arguments, nor do they depend on philosophical arguments. The Bill of Rights means: "Where this document's writ runs, no one shall be prevented from voting or running for office or starting a newspaper or any other political activity merely because he is not a nobleman." It does not mean: "There are wraith-like entities called rights subsisting in a shadowy metaphysical realm, from which we must deduce how best to organize our polity." The fact that some or all of the document's signers may have believed the latter, metaphysical proposition does not change the fact that what they actually did, and what they were then and ever since understood to have done, is best expressed by the former, non-metaphysical one.
And how do we today, who don't believe in those wraith-like entities, rationally justify our affirmation of these truths? Our justification is: "We trust ordinary people, governed only by persuasion, with ultimate political power." We could explain further, but it wouldn't satisfy MacIntyre. For him, an "objectively rational justification" is a rigorous deduction from the telos, against which there is no appeal. The absence of an ultimate, unaccountable authority is a decisive defect in liberalism, MacIntyre warns. Liberalism is purely negative, a matter of setting limits on authority. Liberal principles "set before us no ends to pursue, no ideal or vision to confer significance upon our political action. They never tell us what to do."
It is true, of course, that to grow up in a society or polis where the ends of life are authoritatively defined by the society's traditions, which prescribe the duties and virtues needed in each walk of life, does make it easier, in a way, to become a hero. Becoming a hero takes a great deal of single-mindedness, and not to have to form one's own conscience and values relieves a prospective hero of a great deal of trouble and distraction. But even so, our confused and distracted society does occasionally produce our own sort of hero: Dorothy Day, Daniel Berrigan, Aaron Swartz, for example, who seem to me not spiritually inferior to Saint Benedict, MacIntyre's ideal.
Alasdair MacIntyre, by the eminent French philosopher Émile Perreau-Saussine, is not so much an intellectual biography as an essay on MacIntyrean themes. It's engaging and accessible, and it does sometimes clarify MacIntyre's arguments, which can be knotty. It would have been interesting, though, to learn something more about MacIntyre's life: for example, about his break with the British New Left (there were rumors of hard feelings with Perry Anderson and Robin Blackburn) or his conversion to Roman Catholicism. A little gossip adds relish to a biography, even (especially?) of so austere and forbidding a figure as MacIntyre. It is also surprising that Perreau-Saussine barely mentions Simone Weil. With her intellectual passion and her anguished concern for the infirmities of the soul, she is perhaps the 20th century's closest approach to St. Augustine, who was a major influence on MacIntyre. I wish his biographer had asked him why he has never written about her.
In the book's Foreword, the even more eminent French philosopher Pierre Manent approvingly notes MacIntyre's fifty-year-long "steady core of antiliberal anger" but then wryly observes, in Tocquevillian accents, that "the alternatives to liberalism have lost all credibility. Never has a principle organizing human association been more criticized while triumphant, or more triumphant while discredited." MacIntyre would probably agree, even if not in the same jaunty tone of voice. He too thinks liberalism will be around a long time, though not because it's resilient or the least bad alternative. Rather, liberalism is a blight, a toxic fog that has settled permanently on our cultural/political landscape. "What matters at this stage is the construction of local forms of community within which civility and the intellectual and moral life can be sustained through the new dark ages which are already upon us." That's MacIntyre's political program.
Perreau-Saussine's book is a useful digest of MacIntyre's thought but rarely offers us anything original or profound. The exception is the book's final paragraph, which wisely tempers MacIntyre's radical antiliberalism and which contemporary American postliberals should take to heart:
MacIntyre remains faithful to his antiliberal premises even though he has no constitutional or political alternatives to counter liberal democracy. The tension between de facto political liberalism and philosophical, theological, or moral antiliberalism manifests a tension in the very substance of our lives. Liberalism presupposes a social order that it does not produce and that it even tends to destroy. By absolutizing individual consent, by reducing truth to mere opinion without granting importance to otherwise recognized authorities, liberalism nourishes a relativism that subverts the mores and habits it needs. The moral is that liberalism does not stand to win if its program is completely realized. Liberalism only lasts if we periodically counter it with our objections. Without this it collapses in on itself. The tension between liberalism and these criticisms, between freedom and truth, does not weaken the West. On the contrary, this tension constitutes one of the secrets of its vitality.
************
Richard Rorty (1931-2007) was MacIntyre's polar opposite in all ways except one: each liked and respected the other. Rorty was an anti-foundationalist, while MacIntyre grimly insists that philosophy without metaphysical foundations is the merest fiction. Rorty thought our paramount moral/political obligation was to reduce suffering and increase happiness. MacIntyre thinks it is to follow the path of virtue marked out by the traditions of our community, guided by that community's view of the telos or purpose of human life. Rorty thought the Enlightenment, and the spirit of criticism it bequeathed, inaugurated a new and fortunate period in history, an epoch in which personal and social liberation are at least possible. MacIntyre thinks we will be lucky to survive this liberation.
Rorty seems to have felt that his philosophical celebrity entailed an obligation to comment on contemporary political issues, while MacIntyre seems to feel that his entails an obligation not to - Rorty was pretty much the model of a public intellectual, while MacIntyre may as well have been writing from inside a monastery. What Can We Hope For? rounds up what is presumably a final selection of topical pieces from Rorty's archive, following last year's more strictly philosophical Pragmatism As Anti-Authoritarianism.
Rorty's parents were journalists, teachers, activists, and friends of John Dewey, from whom they learned that radical democratic egalitarianism was 100 percent Americanism. Their son was to offer many friendly reminders of this over the years to intemperate leftists who dismissed American history and culture as irredeemably benighted. This was the theme of Achieving Our Country: Leftist Thought in Twentieth-Century America (1998), which urged a rapprochement between the Old and the New Left. Rorty's other previous political book, Contingency, Irony and Solidarity (1989), argued with grace and subtlety that for political purposes, philosophy is unimportant and imaginative identification with others is all-important.
The eighteen pieces in What Can We Hope For? give amateurism a good name. Each one discusses an essay-sized topic - economic inequality ("Making the Rich Richer," "Back to Class Politics"), globalization ("Can American Egalitarianism Survive in a Global Economy?"), cultural politics ("Demonizing the Academy"), international affairs ("The Unpredictable American Empire," "Half a Million Blue Helmets?") - humanely, incisively, and elegantly. Perhaps the most memorable is "Looking Backwards from the Year 2096," a review of the preceding (i.e., 21st) century by a nameless speaker. Increasing misery and resentment gave rise to increasingly uncontrollable civil strife, he or she tells us, resulting in a military dictatorship in mid-century. Eventually the Democratic Vistas Party restored civilian rule, but everyone was much chastened and American exceptionalism, for better and worse, much weakened. "Compared with the Americans of a hundred years ago [i.e., 1996], we are citizens of an isolationist, unambitious, middle-grade nation." The speaker concludes that "everything depends on keeping our fragile sense of American fraternity intact." The piece's allusions to Whitman's Democratic Vistas and Bellamy's Looking Backward underline this moral.
Rorty was fond of drawing a distinction between Enlightenment rationalism and Enlightenment liberalism. He agreed with MacIntyre that Enlightenment rationalism - the attempt to ground morality in reason - had failed. But he thought Enlightenment liberalism - egalitarianism, free speech, universal suffrage, the separation of Church and State - had succeeded gloriously and was humanity's best hope. Doubtless he and MacIntyre will spend eternity debating this.
[END]
George Scialabba's selected essays will be published next year by Verso.
Unless we have reached the end point of humankind's moral development, it is pretty certain that the average educated human of the 23rd century will look back at the average educated human of the 21st century and ask incredulously about a considerable number of our most cherished moral and political axioms, "How could they have believed that?" We do it every time a movie like Twelve Years a Slave or a novel like The Handmaid's Tale or a play like Angels in America or a work of history like Bury My Heart at Wounded Knee or of journalism like Michael Harrington's The Other America prompts us to ask, "How could decent, intelligent people have believed they were entitled to treat other human beings like that?"
So let's interrogate some of our beliefs about political morality with the eyes of our descendants. Two four-letter words lie at the heart of contemporary America's public morality: "free" and "fair." "It's a free country" is every American's boast; "I only want a fair shake" is every American's plea. I doubt I need to remind many Commonweal readers of the more flagrant forms of unfairness in our national life - that one American child in five lives below or near the poverty line; that somewhere between eighty and ninety percent of our economy's productivity gains since 1980 have gone to the top ten percent of the income distribution; that the top 25 hedge-fund managers earn more than all the nation's kindergarten teachers combined; that 100,000 Americans will die for lack of health care over the next ten years in order to give a large tax cut to Americans with incomes above a half-million dollars; and so on and on, down the long and shameful catalogue. You all read the newspapers. Our 23rd-century descendants may ask - they will ask - how we could have tolerated such unfairness; but they won't ask how we could have believed such inequalities to be fair, because we don't, most of us, believe them to be fair. Let's instead consider a different question: whether our present-day ideals of fairness and freedom, even if we lived up to them, would satisfy our descendants.
The average CEO now earns around 300 times as much as his or her average employee. Many people are dismayed at the contrast with the good old days of the Eisenhower administration, when CEOs earned only 30 times as much as their average employees and paid a far higher tax rate, and yet the country didn't exactly seem to be going to the dogs. But let's put aside our reaction to this striking change and ask more generally whether and why some people ought to earn more than others.
The usual answer, I suppose, is that people deserve whatever they get through the operation of supply and demand. The competitive marketplace quantifies the value that one's efforts have for others. Some people (like doctors) employ vital skills; some people (like baseball players) give exceptional enjoyment; some people (like corporate executives) assume extra responsibilities; some people (like investors) forego luxury consumption. All such people are rewarded in proportion to the satisfaction they furnish others, as measured by others' willingness to pay, directly or indirectly, for those satisfactions. No payment, no service. As Adam Smith wrote: "It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest."
Of course it's not that simple. Consider those doctors, baseball players, and executives I used as examples of economic agents who exchange services for money. In fact, they - like you, like me - live with only one foot in a market economy and the other in a gift economy. Any doctor or scientist or athlete or nurse or teacher or carpenter worth her salt feels at least occasionally that she is making a gift of her best efforts; and as with all such gifts, the chief reward is internal: the pleasures of giving and of exercising one's faculties at their highest pitch.
Nowadays, the gift economy leads a precarious existence, appearing mostly in commencement-day addresses in which graduates are exhorted to follow their dreams, while most of the poor things are worrying frantically about how to pay their debts. The family is a gift economy, and so is culture, including both the arts and the sciences, as well as the shrinking public and non-profit spheres. Ever since that most fateful of innovations, industrial mass production, became virtually universal, the market economy has progressively squeezed out the gift economy. In a mature capitalist society, competition grows in both extent and intensity, that is, both between and within economic units. Creativity and generosity are not forbidden but they are no longer self-justifying; they are, on the contrary, subordinated, like all activity in the non-public sphere, to the goal of increasing shareholder value. In the private economy, you can do whatever you like - create beauty, pursue truth, help others - as long as what you like to do makes someone a profit.
I said earlier that people in a market economy are rewarded in proportion to others' willingness to pay. That willingness to pay is the measure of value in a market economy; and so, to say that a person deserves what she earns is to say that there is at least a rough correspondence between the value of what she produces and the value of what she receives. As Milton Friedman, the high priest of American capitalism, put it: "The ethical principle [underlying] the distribution of income in a free-market society is, 'To each according to what he and the instruments he owns produces.'"
This notion of desert rests on the assumption that two distinctions can be made rigorously: first, that one person's input - to any output or outcome at all - can be sharply distinguished from all other inputs; and second, that merit can be distinguished from luck: that is, that diligence, good judgment, and other productive qualities and character traits, as well as talent, are not fully attributable to biological endowment, early environment, education, and other contingent and therefore morally arbitrary sources. I don't believe those distinctions hold up.
Let's take that CEO, and let's assume we know somehow that she produces 30 or 300 times as much as her average employee. Causation is a transitive relation, and production is a kind of causation. If A is a cause of B, and B is a cause of C, then A is a cause of C. If A contributes to the production of B, and B contributes to the production of C, then A has contributed to the production of C. Now, who has contributed to the production of our CEO, and therefore to the production of whatever she produces? Clearly, her parents, spouse, teachers, fellow students, predecessors, colleagues, rivals, and friends, along with all their parents, spouses, teachers, fellow students, predecessors, colleagues, rivals, and friends, along with all those who created the physical, organizational, and cultural resources employed in the production of whatever our CEO produces, along with all their parents, spouses, teachers, fellow students, predecessors, colleagues, rivals, and friends, and, it goes without saying, all their parents, spouses, teachers, and so on through what is, if one wants to insist on the point, an infinite chain of causes.
I do want to insist on the point. Newton famously acknowledged that he had been standing on the shoulders of giants. So has our CEO. Exceptional contributions, whether to art, science, or the Gross National Product, are prepared for by the whole previous development of the field. People who make brilliant, courageous, and illuminating mistakes, which may be indispensable to the ultimate success of a rich and famous artist, scientist, or entrepreneur, are not, in a competitive market system, retrospectively and proportionately rewarded for their contributions, even though Friedman's definition of justice would seem to require it.
My point is that all production is social production. The productive assets of every age are the joint product of all preceding ages, and all those born into the present are legitimately joint heirs of those assets. And the same arguments for joint rather than individual inheritance of wealth created in the past apply to the distribution of income in the present. If this seems counter-intuitive, it is perhaps because there persists a deep and ancient distinction between luck and merit, according to which we deserve praise and reward for our good actions, though not for our good fortune. But what if our good actions are the results of our good fortune?
Philosophy assimilates scientific discoveries slowly. As a result, it is always riddled with archaic concepts and images, survivals from an earlier scientific epoch. One such survival, it seems to me, is the concept of merit. It has always been partly recognized (it is, indeed, implicit in the word "gifted") that talents and aptitudes come under the heading of luck rather than merit. But the inescapable implication of modern genetics, neurobiology, and psychiatry is that character, no less than talent, is inherited or else formed by very early experiences. Diligence, decisiveness, initiative, coolness under pressure - all these entrepreneurial virtues are, no less than intellectual or manual abilities, part of one's natural endowment. And from a strictly moral point of view, no one deserves a reward for being born luckier than someone else. I imagine the 23rd century will ask: "Why did you make talent and character the measure of an individual's desert rather than of her obligations? How could you have overlooked what is to us the obvious and elementary principle of fairness: from each according to her abilities, to each according to her need?"
I suggested earlier that causation is potentially an infinite regress. If that's true, does anyone deserve anything? Actually, potentially infinite regressions are perfectly commonplace and don't normally defeat us. We call a halt to them wherever seems appropriate. Every parent has to decide when a child is genuinely curious and when it keeps asking "Why?" just to put off going to sleep. Every conscientious judge has to decide when to stop applying the maxim "To understand all is to forgive all," even though it's undoubtedly true. The point about these decisions is that they are arbitrary and fallible - in making them we rely on prudence rather than principle. So that when we decide to ignore the infinite chain of causes that produced the output of the CEO and pay her the whole market value of it, our decision is not a matter of justice, as Milton Friedman claimed it was.
I said "our decision," but of course you and I don't have anything to say about the just distribution of income and wealth. Indeed, the purpose of definitions like Milton Friedman's is precisely to prevent such distributions from becoming a matter of public decision. In the 1940s, an influential senator, trying to stifle criticism of Harry Truman's Cold War policies, demanded that "politics should stop at the water's edge." It worked then, and the proponents of the economic class war have had a similar success in preaching that democracy should stop at the economy's edge. In principle, the state is governed according to the rule of one person, one vote. Economic enterprises such as corporations are not even democratic in principle: there the rule is, one dollar of shareholder value, one vote. In both areas, it hardly needs pointing out, principles count for very little. None but the largest investors have any influence with corporate management; while in politics, rich donors in effect have many votes, the rest of us none.
The case of politics is particularly egregious. Two political scientists, Martin Gilens of Princeton and Benjamin Page of Northwestern, recently summarized years of detailed statistical research into the relation between what voters want and what we get: "In the United States, our findings indicate, the majority does not rule--at least not in the causal sense of actually determining policy outcomes. When a majority of citizens disagree with economic elites and/or with organized interests, they generally lose. Moreover ... even when fairly large majorities of Americans favor policy change, they generally do not get it. ... For Americans below the top of the income distribution, any association between preferences and policy outcomes is likely to reflect the extent to which their preferences [happen to] coincide with those of the affluent. Although responsiveness to the preferences of the affluent is [not] perfect, responsiveness to less-well-off Americans is virtually nonexistent."
If democracy means one-person-one-vote, in what situations is it morally requisite? Here is an answer from Robert Dahl, perhaps the most eminent American political scientist of the twentieth century. According to Dahl, members of any association are entitled to insist that it be governed democratically when the following conditions hold: the group must reach some decisions that are binding on all members; discussion and collective decision-making are feasible; membership is stable, i.e., those who make the decisions will be subject to the consequences; and there is a rough equality of competence, i.e., members are capable of judging their own interests and also of judging which decisions they must delegate to experts.
Now, why don't these conditions hold for corporations as well as for political communities? One possible objection might be that, unlike laws, management decisions are not binding - employees can quit. The answer to this objection is that in the real world, unlike the world of smoothly clearing labor markets and other fantasies of neoclassical economics, the costs of renouncing employment are frequently as great as the costs of renouncing citizenship. Another possible objection is that management requires special skills, which workers may not possess. But surely workers are no less capable of hiring and supervising managers than shareholders are, and probably more so. Still another objection is based on the notorious "iron law of oligarchy," according to which any sizable association tends to be dominated by those with the most aptitude and ambition. But the same holds of political democracy, which no one proposes abandoning on that account. Finally, there is the moral objection: aren't shareholders entitled to control the firms they invest in? For the same reasons that entitlement theories fail to justify large inequalities in income - namely, that wealth is a social product and that differences in ability and character are morally arbitrary - they fail to justify large differences in the power to control our common economic destiny. And more: since one requirement of fair political competition is that all group members have equal access to relevant information about group decisions and equal opportunity to place items on the agenda for decision, it follows that in a society like ours, where economic resources translate into political resources, economic inequality must result in political inequality, a conclusion that is obvious to everyone except the conservative majority on the US Supreme Court. Political democracy requires economic democracy; indeed, the distinction between the political and the economic is altogether artificial. How, our 23rd-century descendants will ask us politely, but perhaps with a tinge of exasperation, did you manage to overlook that?
If you have the misfortune to be a left-wing social critic, the most galling part of each day is encountering the ubiquitous self-designation of apologists for capitalism as champions of freedom. One day a Tea Party Congressman introduces the Economic Freedom Act, which would free from the estate tax the 4,000 or so people who pay it and liberate the rest of us from Social Security and the minimum wage. The next day some foundation with "freedom" in its name gives an award to Charles Koch for his stalwart defense of Koch Industries' freedom to render sizable areas of West Virginia, Arkansas, and Louisiana uninhabitable. And every day the Congressional Freedom Caucus warns sternly that it will not rest until the tens of millions of Americans who cannot afford proper health care without assistance from the rest of us are finally free to go without it.
Where there is ideological smoke there is sometimes philosophical fire. The primitive intuitions about freedom to which defenders of laissez-faire capitalism appeal are widespread and at least superficially plausible. No one makes you shop at Wal-Mart, after all, or work there either. If you don't like it where you live, you're free to move. If you don't like what you're hearing, change the channel. If you don't like Fords, buy a Chevy. This model of life as a series of discrete purchases and of citizens as sovereign consumers seems to lie in the background of many Americans' conviction that, whatever its other virtues or defects, capitalism relies exclusively or primarily on free choice and that regulations or taxes or public provision, even if sometimes justified, necessarily diminish freedom.
This everyday, rough-and-ready understanding of freedom was more or less adequate once, back when America was, uniquely in its time, neither a feudal nor a capitalist society. For a couple of centuries, because the land was so rich and was empty of any inhabitants whose rights a white man was obliged to respect, economic autonomy - the ability to make a living without selling one's labor - was very widely, almost universally possible. Those two centuries formed the American imagination, which has not yet adjusted to the traumatic fact that the possibility of individual self-reliance, and therefore of economic autonomy in the sense presupposed by laissez-faire ideology, is gone forever. When the means of making a living were largely unowned and available to all, economic agents could confront one another as equals, capable of entering into genuinely voluntary agreements and morally binding contracts. Today, by contrast, employment contracts typically involve members of two groups that are radically unequal, since one group has control over something the other must have access to in order to survive, but not vice versa. That is just another way of saying that we live in a class society. Our individualistic political rhetoric, appropriate to the frontier period but now a century and a half out of date, serves only to conceal the one-sided class warfare that its victims stubbornly refuse to acknowledge.
Those victims have some excuse; they are daily bombarded by laissez-faire ideology. Intellectuals, on the other hand, really ought to know better. The structural unfreedom inherent in class relations was authoritatively described by an early critic of capitalism and champion of labor unions. I'm referring to Adam Smith, who wrote in Book 1 of The Wealth of Nations:
The masters, being fewer in number, can combine much more easily; and the law, besides, authorises, or at least does not prohibit their combinations, while it prohibits those of the workmen. We have no acts of parliament against combining to lower the price of work; but many against combining to raise it.
Smith, a true friend of the working-man, added this:
[Moreover, in general,] the employers can hold out much longer. [A master], even if he did not employ a single workman, could generally live a year or two on [his accumulated capital]. Many workmen could not subsist a week, few could subsist a month, and scarcely any a year without wages. In the long run the workman may be as necessary to his employer as his employer is to him; but the necessity is not so [pressing].
**********
If we could speak with our 19th-century counterparts we might ask questions like: "Why did you believe it legitimate for one person to own another? Why did women seem to you incapable of self-determination? Why did you consider that political authority could be inherited, for example by monarchs or aristocrats?" If our imaginary 19th-century interlocutors defended their morality against ours, we might learn a good deal by trying to rebut them and vindicate our own moral intuitions.
Similarly, we should try to imagine which of our current beliefs might seem benighted to our 23rd-century descendants. I suspect they will want to ask us questions like: "Why did you base desert on performance, which can't be measured and is in any case a function of one's endowments? After all, no one deserves her endowments. Why did you make that strangely artificial distinction between the political and the economic? It looks as though your only purpose was to prevent economic democracy. Why did you define freedom so narrowly, as the absence of constraints on one person's right to employ her capital but not on another person's right to realize her capacities? Why did you assume that contracts between parties with radically unequal resources could be free?"
[END]
George Scialabba's selected essays, Only a Voice, will be published in 2023 by Verso.
Dobbs leaves us with two fundamental questions: What is a person? and Who should decide? The answer to the first question seems to me straightforward. At no point in the first and second trimesters, nor in the third when the mother's life or health is at stake, does the fetus - sans thoughts, sans emotions, sans experiences, sans everything - have any rights that override those of the woman of whose body it is merely (during the time in which more than 90 percent of abortions currently take place) an infinitesimal part. Unlike its host, it is a potential person, not an actual person; a future person, not a present one. That millions of Americans think differently is a source of puzzlement and distress to me, as well as, I hope, humility. But with all the good will I can muster, I'm unable to find any plausibility in their view.
Suppose a state legislature outlawed sex-change operations, judging them unnatural and offensive to God. The Supreme Court would (probably) declare that law unconstitutional. Since the Constitution does not expressly mention sex-change operations, there can be no constitutional right to a sex-change operation; instead, the Court would, or should, rule that the legislature is not allowed to legislate its religious beliefs, even if they are the beliefs of a majority of the state's citizens.
Now what are the beliefs on the basis of which a legislature would likely outlaw abortion? Presumably that the fetus is a human person, entitled to the state's protection. What reason could they give for that conclusion? Crucially, they cannot say that a fetus has a soul. That is a religious belief. It is held almost exclusively by religious persons and defended almost exclusively with religious reasons. They could say that from conception the fetus is a fully human being, with a range of human attributes. But they would have to stipulate that belief and then refuse to hear expert witnesses, most of whom would rebut it. They could claim that the fetus contains a human genome, and that anything with a human genome is entitled to be considered human and protected by the state. But every cell in the human body - every hair, every fingernail, every bead of sweat - contains a human genome, which is just a complete set of human DNA. Of course, unlike those tissues, the embryo (its proper name during most of the first trimester) will, with a great deal of effort, pain, and sometimes danger on the part of its host, become viable. Virtually no one argues that it deserves no protection once it is viable - at 24-28 weeks. But by then, there is virtually nothing to protect it from. Fewer than one percent of abortions take place after viability, and most of those are necessary to save the life or health of the mother or because the fetus has been discovered to have a grave defect.
Is there really any doubt that an abortion ban would be a religious imposition? Among non-religiously-affiliated persons, there is very little support for restricting abortion. Majorities of every major religious group, excepting Mormons and evangelical Protestants, would allow abortion in some or all cases. As far as I know, Mormons haven't been active in opposing Roe. The only two groups who have been notably active are evangelical Protestants and the Catholic hierarchy. The bishops have maintained a steady opposition since 1973 but with little effect, so they deserve only a modest share of the blame for Dobbs.
That leaves evangelicals. I suggest they have made a devil's bargain. The evangelical movement has regularly provided the margin of victory for a radical party that has undermined democracy with gerrymandering and voter restriction, allowed the number of guns in the country to swell to lunatic proportions, voted its rich patrons a $1.5 trillion tax cut while 1 in 6 American children lives in poverty, and, with almost incomprehensible irresponsibility, has prevented serious government action to reduce carbon emissions, contemptuously disregarding Biblical (and papal) admonitions to stewardship of the Earth. Evangelicals voted overwhelmingly (85 percent) to place supreme political power in the hands of a sociopath and sexual predator, who took a wrecking ball to the Executive Branch and ended his term with a traitorous refusal to hand over power peacefully to his successor. Evangelicals inflicted all this misgovernment and disgrace on their fellow Americans solely in order to overturn Roe v. Wade. Quite possibly, their fellow Americans will not thank them for it.
Very few Americans - only 25 percent, the lowest level ever recorded - express confidence in the current Supreme Court. That is understandable. Five of the six conservative justices were appointed by presidents who had not won the popular vote. The egregious cheat by which Sen. McConnell stole a seat from the Democrats and, four years later, in identical circumstances, rammed through a Republican nominee, was a national scandal. The three most recent nominees were pressed for their opinion of Roe; each replied that they considered it "settled law" and then voted to overturn it at the first opportunity. Justice Thomas has so far refused to recuse himself from January 6-related cases, despite his wife's unflagging efforts to further the "Stop the Steal" canard. Perhaps Thomas will offer the same elegant reply to critics of this refusal as Justice Scalia did to critics of Bush v. Gore: "Get over it."
Governing without a majority has become a Republican specialty. Early in this century, Republicans forged a new strategy. They recognized that the politics of resentment (so well described in Thomas Frank's What's the Matter with Kansas?), for all its utility, could not guarantee them a lasting majority. They were, after all, and had always been, the party of the rich. So they conceived the idea of launching an electoral blitz at the state level in the 2010 elections, which, if successful, would allow them control of the decennial redistricting process. The Democratic Party was Washington-centric and uninterested in local politics, so the Republicans were wildly successful. After which they hired armies of computer consultants and gerrymandered every state they controlled to within an inch of its life. Of course, they did not invent gerrymandering, but they brought it to a level that was to previous Democratic efforts as World Cup play is to 10-year-olds on a back street.
As a result, a fair number of states have a majority of registered Democratic voters but a majority of Republican state and Congressional legislators, or else a considerable disproportion between the size of a state's Republican voting majority and the size of its legislature's Republican majority. In 2018, Democrats won 54 percent of the statewide vote in Pennsylvania but only 45 percent of seats in the legislature. In Michigan, Democrats won 53 percent of the vote and 47 percent of the seats. In North Carolina, Democrats won 51 percent of the vote but only 45 percent of the seats. In Wisconsin, Democrats won 54 percent of the vote but only 36 percent of the seats. In Texas, party affiliation is 40 percent Democratic, 39 percent Republican, but Republicans have an 18-13 majority in the state senate, an 82-67 majority in the state house, and a 25-11 majority in the state's Congressional delegation. There are many such examples. And Texas, which tried to turn its citizens into anti-abortion bounty hunters, offers a characteristic example of radical Republicans' exquisite deference to the voice of the people. In May 2022, 78 percent of Texans thought abortion should be legal in some circumstances (39 percent in all circumstances), 28 percent only in cases of rape or incest, and 15 percent thought it should never be legal - i.e., agreed with their elected representatives. Nonetheless, it is a foregone conclusion that after Dobbs, Texas will pass a maximally restrictive law against abortion.
Dobbs proclaims that since "the Constitution makes no express reference to the right to obtain an abortion ... the authority to regulate abortion is returned to the people and their elected representatives." It is not surprising that the Court took no notice of the crisis of democratic legitimacy produced by unscrupulous Republican partisanship, since the Court has largely enabled it. Shelby County v. Holder gutted the Voting Rights Act, ending federal preclearance of changes to state election laws, and Citizens United - the mother and father of all anti-democratic Supreme Court decisions - removed limits on corporate and union political spending. The Constitution makes no express reference to gerrymandering or to unlimited campaign contributions, but this did not prompt the Court to return the authority to regulate these things to the people and their elected representatives. Nor did the Court trust the democratic process to regulate gun mayhem (District of Columbia v. Heller, NYSRPA v. Bruen) or climate chaos (West Virginia v. EPA). Dobbs is undeniably a tainted victory.
How would a genuinely democratic polity address constitutional controversies? Undoubtedly, "the people and their elected representatives" are the ultimate court of appeal, notwithstanding the Founders' well-known misgivings about our wisdom and character. And even though the Constitution makes no express reference to "judicial review," the deliberations of nine wise and learned men and women do sometimes supply essential discriminations and clarifications and model moral and political reasoning for the rest of us.
The problem is that American government is not sufficiently accountable. Congress has generally been, and the Supreme Court has nearly always been, to the right of public opinion. The traditional remedy - voting the scoundrels out - does not work if only scoundrels have the resources to sponsor, lobby, and (after their term in office) employ politicians, initiate complex lawsuits, or saturate the media with their message. Campaigns for constitutional amendments and ballot referenda are expensive, often prohibitively so. Our political media are commercial enterprises, not civic ones. Our political system is more accurately described as a plutocracy than a democracy, and the understandable attitude of more and more citizens is a sullen passivity, occasionally erupting into unreasoning rage. And this Supreme Court's only response has been to double down, announcing after 200 years its discovery that the Founders considered money to be the equivalent of free speech.
Dobbs makes one cogent criticism of Roe and several not-so-cogent ones. Among the latter is its attack on Roe's solution: the trimester scheme. But that approach, which even Justice Blackmun conceded was "arbitrary," nonetheless made practical sense. It gradually shifted authority over the abortion decision from the woman and her doctor to the state, which had a gradually increasing rational - i.e., non-religious - basis for intervening as the pregnancy proceeded. Dobbs could result in a dozen different schemes, any or all of them equally arbitrary.
Not very cogently, Justice Alito writes: "Far from bringing about a national settlement of the abortion issue, Roe and Casey have inflamed debate and deepened division." But Roe and Casey did not force anyone to do anything, whereas Dobbs will force tens of thousands of women to go into debt or leave their state or risk their health or risk criminal prosecution or bear unwanted children. (Or, of course, forgo sex - and not all of it will be extra-marital sex.) For Dobbs, women are just one party to an "inflamed debate," with no more at stake than legislators or other citizens, which may explain the opinion's curiously tone-deaf remark: "The regulation of abortion is not a sex-based classification." There is undoubtedly a technical meaning of "sex-based classification" that makes this sentence technically true. But the evil of forced pregnancy is very great, and it is entirely sex-based.
Dobbs's cogent criticism - however hypocritical in view of this Court's own practice - is that the Court should normally defer to the people and their representatives, even if it thinks them mistaken. That is what we have - or wish we had - a democracy for.
George Scialabba's selected essays will be published by Verso next year.
Richard Rorty (1931-2007) was the philosopher's anti-philosopher. His professional credentials were impeccable: an influential anthology (The Linguistic Turn, 1967); a game-changing book (Philosophy and the Mirror of Nature, 1979); another, only slightly less original book (Consequences of Pragmatism, 1982); a best-selling (for a philosopher) collection of literary/philosophical/political essays (Contingency, Irony and Solidarity, 1989); four volumes of Collected Papers from venerable Cambridge University Press; President of the Eastern Division of the American Philosophical Association (1979). He seemed to be speaking at every humanities conference in the 1980s and 1990s, about postmodernism, critical theory, deconstruction, and the past, present, and future (if any) of philosophy.
All the same, it began to be whispered among his colleagues in mid-career that Rorty had become disillusioned with being a philosopher and turned into something else: a culture critic, an untethered public intellectual, a French fellow traveler. And the chief whisperer, it turned out, was Rorty himself. After leaving Princeton's philosophy department in 1981, he never held another appointment as a philosopher - by choice. He thought philosophy's days were numbered and spent the second half of his career (and much of the first) explaining why.
But how can philosophy end? Surely the quest for Truth is eternal? Surely the hunger for Wisdom is part of human nature? Surely questions about the Good will never cease to exercise us? Well, yes and no. Certainly Rorty was not proposing that we simply give up on all the big questions. We will always mull over "how things, in the broadest possible sense of the term, hang together, in the broadest possible sense of the term," a phrase he quoted often from one of his favorite philosophers, Wilfrid Sellars. But he thought that philosophy's perennial abstractions, distinctions, and problems - including Truth, human nature, and the Good - though they were once very much alive, had by now led Western thought into a dead end and should be retired.
Truth, for example, has meant many things since Plato: a knowledge of the Forms; a subsistent Essence, in virtue of which all true things are true; a correspondence between sentences and states of affairs. Likewise the Good: fulfillment of one's telos, or natural end; participation in the Divine Essence; the greatest happiness of the greatest number. Each of these definitions has its partisans, but to each of them most other philosophers are quite deaf. Schools wax and wane but, unlike scientific theories, none steadily gains adherents as it achieves generally recognized solutions to common problems, while its competitors fade away. Philosophy makes no progress.
Rorty was hardly the first to make this observation and draw the conclusion that something else was necessary and inevitable. Hume's mordant aphorism gives the gist of much later criticism: "If we take in our hand any volume, of divinity or school metaphysics, for instance, let us ask: Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion." John Stuart Mill dispensed with most of traditional philosophy, though he was the greatest political and moral philosopher of his day. William James did grapple with many of the traditional problems and gave the new orientation a name ("pragmatism") and some pithy formulations: "The true is the good in the way of belief." "A difference that makes no difference is no difference at all." And perhaps the best-known and most misunderstood: "Grant an idea or belief to be true ... what concrete difference will its being true make in anyone's actual life? ... What experiences will be different from those which would obtain if the belief were false? What, in short, is the truth's cash-value in experiential terms?" The face of 20th-century pragmatism and Rorty's main influence was John Dewey, a penetrating and prolific writer who unfortunately never spoke or wrote a memorable sentence.
An early statement of his pragmatist brief is also one of innumerable passages in which Rorty advocated the euthanasia of philosophy:
[T]he notion of knowledge as accurate representation, made possible by special mental processes, and intelligible through a general theory of representation, needs to be abandoned. [So do] the notions of "foundations of knowledge" and of philosophy as revolving around the Cartesian attempt to answer the epistemological skeptic [i.e., to generate certain knowledge]. [Likewise] the notion of "the mind" common to Descartes, Locke, and Kant - as a special object of study, located in inner space, containing elements or processes which make knowledge possible. This is not to say that [we need] alternative "theories of knowledge" or "philosophies of mind." [On the contrary, we should] set aside epistemology and metaphysics as possible disciplines ... [and try to] glimpse the possibility of a form of intellectual life in which the vocabulary of philosophical reflection inherited from the seventeenth century would seem as pointless as the thirteenth-century philosophical vocabulary had seemed to the Enlightenment.
(Philosophy and the Mirror of Nature)
"Setting aside epistemology and metaphysics" is as good a short definition of pragmatism's purpose, and of Rorty's aim in this book, as one may hope for.
Pragmatism as Anti-Authoritarianism consists of ten lectures, the prestigious Ferrater Mora Lectures delivered at the University of Girona in Catalonia in 1996. A few of them were subsequently published in English, but this is the first time they've appeared together. Rorty was normally the most fluid and graceful of stylists, but these lectures are earnest and businesslike rather than brooding and essayistic, no doubt because he is addressing his fellow professional philosophers rather than his fellow world-weary postmodern intellectuals. They show an impressive grasp of the literature of analytic philosophy (and of Continental philosophy as well), with little of the glancingly allusive, almost absent-minded manner of most of his other books; the writing here is workmanlike and focused. Perhaps he was just trying out this style and felt a little dissatisfied with the results, which may explain why he never got around to publishing them. Or maybe he was just too busy attending conferences and giving lectures.
The Preface, however, is vintage Rorty: intellectually provocative, rhetorically audacious, leaping nimbly across the millennia from Plato to Habermas and back. The "anti-authoritarianism" of the title is meant in a peculiar sense. It is the authority of "the unconditioned" - the Real, the Ideal, the infinite, the absolute, the transcendent, the sublime - that pragmatism rejects. Instead, it embraces the conditioned, the contingent, the finite.
This rejection alone does not constitute pragmatism. Not all anti-foundationalists are pragmatists - cf. Nietzsche. Rorty conceives of the twentieth century as "a struggle between secularists who follow Nietzsche in hankering for a kind of greatness which cannot be viewed as a means to a larger end, and secularists who are pragmatic and finitistic in the manner of Dewey." Of course Nietzsche had all the best lines in this argument (most famously, "the last man"). But Dewey was right, Rorty argues:
Nietzsche feared that human greatness would be impossible if we all became happy citizens of a democratic utopia. Dewey was not interested in greatness except as a means to the greater happiness of the greatest number. For him, great human beings ... were finite means to further finite ends. They helped make new, richer, more complex, and more joyful forms of human life available to the rest of us.
Rorty was the grandson of Walter Rauschenbusch, one of the founders of the Social Gospel movement in early 20th-century America, and he always felt free to take coloration from liberal Christianity. Pragmatism as Anti-Authoritarianism devotes a chapter to establishing the ethos of pragmatism, which he does by means of James's "The Will to Believe." All the pragmatists, even Mill, he writes, "took for granted ... the Christian ideal of universal human fraternity," which he identifies with the democratic utopia of pragmatists. Even Christian belief is unobjectionable, pace aggressive rationalists (represented in James's essay by W.K. Clifford and in our time by multitudes), as long as it is a matter of edification and aesthetic contemplation, "the work of strong poets" rather than philosophical grounding for legislation or jurisprudence. A little idiosyncratically, Rorty baptizes pragmatism as "Romantic Polytheism." "Romantic" refers to the belief of the Romantic poets and their champions such as Arnold and Mill that poetry should take over the functions once performed by religion. "Polytheism" is the belief that "there is no actual or possible object of knowledge which would permit you to commensurate and rank all human needs." It's a definition that takes in most modern secular thinkers, many of whom would be surprised to learn that they are pragmatists (let alone romantic polytheists).
From one point of view, the history of modern philosophy is the steady application of Occam's Razor. Descartes' clear and distinct ideas; Locke's primary and secondary qualities; Kant's categories; Hegel's World Spirit; the logical empiricism of Russell and the logical positivists: all these have bitten the dust, or at least fallen into desuetude. Pragmatists played their part: Peirce wrote a famous critique of Descartes. But James and Dewey were not given to hand-to-hand philosophical polemic. Neither was Rorty in his other books, but here he wades in, though his targets are generally not classical philosophers (except for Kant, the book's evil genius) but contemporaries. Jürgen Habermas and Hilary Putnam, for example, would at times declare themselves pragmatists and anti-representationalists but then backslide, making wistful noises about "context-free validity" (Habermas) and "convergence" (Putnam). Thomas Nagel and John Searle are metaphysical realists, who believe in all the distinctions and abstractions - mind, consciousness, qualia - that pragmatists reject. Some of the book's most interesting pages deal with Rawls's Theory of Justice and Political Liberalism: Rorty argues that Rawls did not, as many of his interpreters think, make use of a universalist conception of justice or obligation. (Rawls apparently agreed with Rorty.)
Traditional philosophers thought of their activity as getting at the truth, approaching the intrinsic nature of things, abstracting from the purposes and prejudices of the inquirer. Pragmatists, by contrast, define truth in relation to the purpose of the inquiry and are content to call it the stable consensus of the competent; they see objects as nodes in a network of relations, describable in indefinitely many ways, and without an intrinsic nature ("There is nothing to be known about an object except what sentences are true of it"); and they see philosophical inquiry as a conversation aiming at deep or shallow but not final agreement, since countless conversations are to follow. The great advantage of pragmatism, Rorty writes only half tongue-in-cheek, is that "adopting it makes it impossible to formulate a lot of the traditional philosophical problems, and harder to incite the sort of culture wars in which philosophers like to take part."
Pragmatism as Anti-Authoritarianism speaks to and for those who, living in a post-positivist, Wittgensteinian world, would never ask themselves questions like: "Do objects really have the properties they seem to, or are they only appearances? How can we know?" "What is the essence of a human being? Soul? Personality? Genetic code?" "Is mind material or immaterial?" "Are some actions intrinsically right or wrong, regardless of all consequences (or none) to all parties concerned, anywhere in the universe, in saecula saeculorum?" "Can an act be both caused and free?" "Can something be objectively good even if no one anywhere thinks it so?" "Can a proposition be true if no one exists?"
Rorty's response to such questions was a shrug. The point of pragmatism is that a philosophical problem or distinction is only real to the extent that it has consequences - and those consequences are, in fact, the meaning of the problem or distinction. Dissolving those questions, casting doubt on the existence of any such consequences, was a frequent move of James's and Dewey's and a favorite move of Rorty's.
And what if they're right? What are the moral and political consequences of pragmatism? As Rorty regularly explained, there are none. Pragmatism does not entail or enjoin; it is, he acknowledged, "neutral between Hitler and Jefferson." Its only consequences are philosophical, and those are purely negative. It helps us to see through abstract and absolutist justifications - the glory of God, the divine right of kings, even freedom and democracy - for war and authoritarianism.
Can pragmatism, conditional and provisional as it is, ground democracy, as Rawls, Dworkin, Habermas, and many other political philosophers have hoped? No, Rorty replies, philosophy cannot anchor politics. There is not a universal human faculty called "rationality" that, once it is awakened, gently (or firmly) nudges every person toward cooperative and tolerant behavior. Rationality is simply the ability to use language, and so to form beliefs and desires, which are the basis of community. "I do not much care," he writes, "whether democratic politics are an expression of something deep, or whether they express nothing better than some hopes which popped from nowhere into the brains of a few remarkable people (Socrates, Christ, Jefferson, etc.) and which, for unknown reasons, became popular."
Such offhand iconoclasm was a trademark of Rorty's. Perhaps his most outrageous pronouncement, at any rate in this book, has to do with an issue just as pressing now as twenty-five years ago. "The fundamentalist parents of our fundamentalist students think that the entire 'liberal Establishment' is engaged in a conspiracy." Any liberal professor could have written that sentence, then or now. But not many would have written Rorty's next sentence: "These parents have a point." And I doubt anyone else (except perhaps Stanley Fish) could have offered this clincher:
The racist or fundamentalist parents of our students say that in a truly democratic society the students should not be forced to read books by black people, Jewish people, homosexual people. They will protest that these books are being jammed down their children's throats. I cannot see how to reply to this charge without saying something like: "There are credentials for admission to our democratic society, credentials which we liberals have been making steadily more stringent by doing our best to excommunicate racists, male chauvinists, homophobes, and the like. You have to be educated to be a citizen of our society, a participant in our conversation, someone with whom we can envision merging our horizons. So we are going to go right on trying to discredit you in the eyes of your children, trying to strip your fundamentalist religious community of dignity, trying to make your views seem silly rather than discussable. We are not so inclusivist as to tolerate intolerance such as yours."
Of course, I suspect the university's Human Resources Department wouldn't let a present-day Rorty anywhere near a parent, fundamentalist or (probably) any other kind.
Pragmatism as Anti-Authoritarianism is probably not the first of Rorty's books you'll want to read. If you're hooked on philosophy, you should start with Philosophy and the Mirror of Nature and then go on to the Collected Papers. If you're a freelance intellectual, try Contingency, Irony and Solidarity and then Achieving Our Country (1998) (in which he famously foresaw and deplored wokeness). Consequences of Pragmatism and Philosophy and Social Hope (1999) are excellent miscellanies. Take Care of Freedom and Truth Will Take Care of Itself (2006), a book of interviews, is worth owning just for the title. All of them will give the reader some idea of why Rorty was so widely revered.
So will his conclusion to the Preface to these lectures:
We pragmatists must be content to offer suggestions about how to patch things up, how to adjust things to each other, how to rearrange them into slightly more useful patterns.
That is what I hope to have done in these lectures. I see myself as having shifted a few pieces around on the philosophical chessboard, rather than having answered any deep questions or produced any elevating thoughts.
Others may see it differently.
George Scialabba's selected essays will be published by Verso in 2023.
Anarchism is the black sheep of political theories. A glance at its main tenets will explain why: the absence of a state or of representative government; politics as face-to-face relations within small groups; decisions by consensus; no authority; no leadership; no coercion, even of the obstreperous; and a deep suspicion of expertise as somehow subversive of equality. (Worst of all, perhaps: drum circles.) Most Americans find these ideas bewildering. Most senior academics, secret authoritarians that they are, find them abhorrent, even ghoulish, especially as applied to their own department.
Which is why David Graeber, possibly the world's best-known anarchist before his sadly premature death last year, was the black sheep of academic anthropology. As a popular and prolific assistant professor at Yale, he was thought to be a sure bet for tenure. But the department turned him down, with almost no explanation. It was universally assumed that Graeber's anarchist principles, activist politics - especially his support for Yale graduate students trying to organize a union - and cheeky personality cost him the prize. (No doubt the department shuddered with relief at its near escape when he later became a leading interpreter and spokesman for Occupy Wall Street.) Offers from other departments - quite a few, eventually - trickled in; and the huge success of his Debt: The First 5,000 Years must also have assuaged the bitterness. But the lesson had been delivered: outspokenness was not costless. Outspokenness, however, was instinctive with Graeber, as was his extraordinary generosity to students and younger colleagues, who responded with extraordinary affection, even love.
The Dawn of Everything caps a large and variegated output. Debt (2011), controversial but enormously erudite and startlingly original, was his best-known work, though his two explicitly political volumes were also best-sellers: Bullshit Jobs (2018), an acerbic history and analysis of pointless drudgery (an important theme in The Dawn of Everything as well); and The Democracy Project (2013), a chronicle of Occupy Wall Street, followed by a scathing critique of American society and politics. The Utopia of Rules (2015) gathered several celebrated essays, including "The Utopia of Rules" and "Of Flying Cars and the Declining Rate of Profit." He was on quite a roll in the decade before he died. But the above was not all he was doing.
In a moving Foreword to this book, his co-author David Wengrow, an archaeology professor at University College London (Graeber ended up at the London School of Economics), described their ten-year collaboration: "[A]s the project gained momentum, it was not uncommon for us to talk two or three times a day. We would often lose track of who came up with what idea or which new set of facts and examples. ... We got to the end just as we'd started, in dialogue, with drafts passing constantly back and forth as we read, shared, and discussed the same sources, often into the small hours of the night." It sounds idyllic.
******
There is a Standard Version of deep history, those long ages before writing (roughly 40,000-12,000 BC), when humans left behind traces - suggestive but not definitive - of culture and technology. It is a species of technological determinism, in which forms of society correspond to modes of production. There have been four main social forms, according to this theory: bands, mobile groups of a few families; tribes, of perhaps a hundred members, moving a few times a year; chiefdoms, hundreds strong, centered in one place but with smaller groups occasionally moving away for various reasons; and states, with thousands of members, centered in cities, and with a central government more or (usually) less accountable to the populace. To each of these forms corresponded a mode of subsistence: respectively, hunting/gathering; gardening/foraging/herding; farming; and industry. Political forms followed a closely parallel evolution: egalitarianism, private property, kingship (often just ceremonial), and the bureaucratic state. Each of these stages was more productive and more civilized than the last, but also less equal and less free.
In addition to its pleasing symmetry, the Standard Version has a certain pathos that appeals to supposedly tough-minded scientists. Civilization is a stern fate, on this view: we can only attain modernity's deepest satisfactions by renouncing the mobility, spontaneity, and nonchalance of our free-spirited but immature ancestors. We moderns - and especially intellectuals, who grasp this painful dilemma most fully - become tragic heroes of a sort.
Graeber and Wengrow, however, are intent on blowing up the Standard Version. It was an understandable attempt to extrapolate from very limited data (and, in some cases, a less excusable attempt to retroactively justify Western colonialism). But in the last few decades, a mass of new evidence from archaeology and anthropology has appeared, leaving it all but unsalvageable. Again and again, among the Kwakiutl, Nambikwara, Inuit, Lakota, and innumerable others, from the Amazon to the Arctic Circle to Central Africa to the Great Plains, and in all periods from the Upper Paleolithic to the nineteenth century, archaeologists have discovered variety where the Standard Version predicted uniformity.
Until around 10,000 BC, according to the eminent primatologist Christopher Boehm, articulating the scholarly consensus, humans lived in "societies of equals, and outside the family there were no dominators." [86] In such societies, where supposedly no distinctions of power or rank were observed in life, it seems unlikely they would have been observed in death. They were, however, and regularly. Rich burials - in unusually large graves or with ornaments, tools, textiles, or weapons, sometimes in profusion - have been found on every continent, often dating to millennia before social distinctions of any sort were supposed to have arisen in human societies. The egalitarian bands of prehistory, never solidly based on evidence, may soon disappear into myth.
Monumental architecture is more evidence against the standard evolutionary scheme. In southern Turkey, for example, there is an ensemble of 20 stone temples, about as large as Stonehenge (5000 BC), with carved portraits of animals on the pillars. It dates from 11,000 BC. At Poverty Point, Louisiana, a network of enormous mounds and ridges stretches out across 500 acres or so. Constructed in 1600 BC (by moving a million cubic meters of earth), it may have been a trading center or a ritual center. Its builders seem to have been hunters, fishers, and foragers. Across north-central Europe is a line of "mammoth houses," enclosures up to forty feet in diameter made of mammoth hides stretched over poles, constructed between 25,000 and 12,000 years ago, obviously by at least part-time hunters. Every year more very old monuments constructed by non-farming, non-state people are discovered, making it harder to believe that such achievements are only possible, as the conventional wisdom has it, on the basis of agricultural surpluses and bureaucratic expertise.
Evidence of occupational variety at many sites calls for explanation: it seems unlikely that, at the same moment in a given area, one group were full-time agriculturalists, another full-time foragers, and another full-time pastoralists. It now appears that seasonality was very common, with groups changing not only their way of procuring food one or more times a year but their authority relations and other customs as well. A North American Plains tribe, for example, were foragers and herders for most of the year, with very lax discipline both at home and toward tribal leaders. During the great annual buffalo hunt, however, the tribe became quite hierarchical; in particular, there were "buffalo police" who enforced norms of cooperation and distribution very strictly and even had the power to impose capital punishment on the spot for sufficiently grave violations. Most indigenous Amazonian societies had different authority structures at different times of year. Perhaps the best-known example is from the Arctic, where Inuit fathers exercised strict patriarchal authority in summer, while winter, when life moved indoors, was something of a saturnalia, with spouse-swapping and children running free.
By and large, anthropologists have not made much of seasonality. (Interestingly, most of those who've done so have been anarchist-leaning: Marcel Mauss, Claude Lévi-Strauss, Robert Lowie, Pierre Clastres.) Graeber and Wengrow make a great deal of it.
[A]rchaeological evidence suggests that in the highly seasonal environments of the last Ice Age, our remote ancestors were behaving in broadly similar ways: shifting back and forth between alternative social arrangements, permitting the rise of authoritarian structures during certain times of year, on the proviso that they could not last; on the understanding that no particular social order was ever fixed or immutable. Within the same population, one could live sometimes in what looks, from a distance, like a band, sometimes a tribe, and sometimes a society with many of the features we now identify with states. With such institutional flexibility comes the capacity to step outside the boundaries of any given social structure and reflect; to both make and unmake the political worlds we live in. [https://www.eurozine.com/change-course-human-history/]
It is difficult for some - perhaps most - of us to attribute so advanced a political and philosophical consciousness to our remote ancestors. Perhaps, Graeber and Wengrow suggest, that is the problem: our unshakeable conviction that modernity spells progress and liberation prevents us from seeing that, in many times and places, premodern life was actually more rational and free.
Though combative, The Dawn of Everything is an upbeat book. Its debunking energies mainly go to refuting the conventional wisdom at its most discouraging. For example, anthropologists and archaeologists (like most everyone else) tend to assume there is an inverse relation between scale and equality; i.e., the greater the number of people who need to be organized to work or live or fight together, the more coercion will be necessary. Cities represent a scaling up of population, and therefore, naturally, of mechanisms of control. And where did cities come from? "The conventional story looks for the ultimate causes in technological factors: cities were a delayed, but inevitable, effect of the 'Agricultural Revolution,' which started populations on an upward trajectory, and set off a chain of other developments, for instance in transport and administration, which made it possible to support large populations living in one place. These large populations then required states to administer them." [285-6]
This conventional story is being undermined by new archaeological evidence, especially from the largest prehistoric cities, in Mesopotamia and Mesoamerica. Those "large populations living in one place" - peasantries - do not show up until later in the histories of most large cities. Initially, besides farmers drawn to a fertile floodplain, there were equal numbers of hunters, foragers, and fishers, and sometimes very large ceremonial or ritual centers. What there don't seem to have been, by and large, were ruling classes. The conventional assumption - amounting almost to a Weltanschauung - that civilization marches in lockstep with state authority seems to be tottering.
The Agricultural Revolution is another key element of the Standard Version: a swift and mostly complete transition from mobile, egalitarian, healthy foragers, relatively few in number, lacking the concept of private property, and living on wild resources, to farming populations, numerous, sedentary, class-stratified, disease-ridden, and producing a surplus of food. The consequence, as noted above, was cities, and the inevitable concomitant of cities was states. But this turns out to be far too neat. On recent evidence, many populations took up farming and then went back to foraging. Many foraging communities were far more authoritarian than farming communities. And in quite a few places, the transition from foraging to farming took thousands of years. It may be necessary to re-christen the Agricultural Revolution the Agricultural Slow Walk.
Prehistory, Graeber and Wengrow insist, is vastly more interesting than scholars knew until recently. And not just more interesting, but more inspiring as well: "[I]t is clear now that human societies before the advent of farming were not confined to small, egalitarian bands. On the contrary, the world of hunter-gatherers as it existed before the coming of agriculture was one of bold social experiments, resembling a carnival parade of political forms, far more than it does the drab abstractions of evolutionary theory." [4] "Carnival" reminds one of Occupy, which, along with this book, testifies to David Graeber's admirable energy, imagination, and love of freedom.
******
For all its historical and theoretical brilliance, The Dawn of Everything does not wholly vindicate the anarchist philosophical framework in which the argument is set. Graeber and Wengrow do not exactly preach anarchism, but the moral of their long and immensely rich study is clear: relations of authority are the most important and revealing things about any society, small or large, and no one should ever be subject to any authority she hasn't chosen to be subject to.
Who could disagree - as long as "chosen" is understood to mean "accepted citizenship in a democratic polity"? This is a window on a longstanding quarrel between anarchists and their less glamorous political cousins, socialists and social democrats. As one of the latter tribe, I confess that The Dawn of Everything did get a rise out of me now and then. For one thing, nearly everyone to the left of Genghis Khan has a sentimental fondness for the European Enlightenment - it's where the critical spirit found its voice. Graeber and Wengrow think it's vastly overrated. Enlightenment thinkers weren't particularly original, they write; their political ideas came mostly from China and from Native Americans. The proof is that Leibniz and Montesquieu praised the Chinese civil service and recommended it to European rulers; while Native Americans who visited Europe impressed the philosophes so much that many of them put the visitors into their philosophical dialogues.
Native American political thought is certainly impressive, and Graeber and Wengrow expound it superlatively well. Still, no one has claimed (as far as I know) that Europe got from Native Americans the ideas of habeas corpus, an independent judiciary, trial by jury, a free press, religious disestablishment, or a written constitution with enumerated rights, or that Adam Smith got from them the idea of labor unions, free education for workers, or income redistribution, all of which he argued for in The Wealth of Nations (though few conservatives have noticed). Perhaps the American left should take a break from trying to subvert the Enlightenment until the American right stops trying to roll it back.
Graeber and Wengrow's second foray into socialist- and social-democrat-baiting is more surprising. Equality, the cherished ideal of most leftists past and present, seems to them a theoretical and strategic dead end, a mere "technocratic" [https://www.eurozine.com/change-course-human-history/] reform. They dismiss, even mock, equality as a goal:
[T]o create a society of true equality today, you're going to have to figure out a way to go back to becoming tiny bands of foragers again with no significant personal property. Since foragers require a pretty extensive territory to forage in, this would mean having to reduce the world's population by something like 99.9 percent. Otherwise, the best we can hope for is to adjust the size of the boot that will forever be stomping on our faces; or, perhaps, to wrangle a bit more wiggle room in which some of us can temporarily duck out of its way. [7-8]
Equality is not only an unworthy goal; it is not even an intelligible one: "it remains entirely unclear what 'egalitarian' even means." [75] Does it? It seems clear enough to me: a society with a Gini coefficient below .2 (Graeber and Wengrow persistently and annoyingly disparage the Gini coefficient, our best quantitative measure of inequality); universal free health care; universal free preschool and public higher education; equal per-pupil expenditures in primary and secondary school; a Universal Basic Income (maybe); enforcement of labor law (the non-enforcement of which has destroyed American unionism); enforcement of tax law (the non-enforcement of which is a trillion-dollar annual gift to the wealthy); all adult citizens automatically registered to vote; exclusively public funding of elections; transparency mechanisms, including a vastly expanded Freedom of Information Act; and accountability mechanisms, including recall, at all levels. If that's not an egalitarian program, why not? And if Graeber and Wengrow wouldn't regard it as well worth fighting for, why not?
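For what it's worth, the disparaged measure is easy to state precisely. Here is a minimal sketch of the arithmetic - mine, not Graeber and Wengrow's, with invented income figures - in which the Gini coefficient is computed as the average gap between every pair of incomes, scaled by twice the mean:

    # Minimal sketch of the Gini coefficient (illustrative figures only).
    # 0 means perfect equality; 1 means one person holds everything.
    def gini(incomes):
        n = len(incomes)
        mean = sum(incomes) / n
        # Sum of absolute differences over all ordered pairs of incomes
        diff = sum(abs(x - y) for x in incomes for y in incomes)
        return diff / (2 * n * n * mean)

    print(gini([40, 40, 40, 40]))   # 0.0   - perfect equality
    print(gini([30, 40, 50, 60]))   # ~0.14 - below the .2 benchmark above
    print(gini([1, 1, 1, 197]))     # ~0.74 - extreme concentration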
I think I know why they wouldn't: because unlike grubby, soulless social democracy, true communism (for example, the indigenous societies of the Northeast Woodlands before the European invasion) "guaranteed one another the means to an autonomous life - or at least ensured no man or woman was subordinated to any other." [48] That is the anarchist ideal. Well, what is the purpose of the socialist/social democratic reforms I just proposed except to guarantee everyone "the means to an autonomous life" in an industrial society? "Industrial society" - there's the rub. Is anarchism feasible in a society of any considerable size or complexity, where coordination, authority, and expertise are essential? How much of mass production, technological innovation, cheap paperbacks and CDs, and the rest of our accursedly seductive late-capitalist way of life do we want to walk back? And how do we do that without starving or stranding or inciting to rebellion the hundreds of millions of hapless humans who have allowed themselves to be trapped into dependence on cars, air travel, supermarkets, and single-family houses? Few contemporary anarchist writers have addressed these questions squarely, and none satisfactorily.
Still, socialists and social democrats have a very large blind spot of our own: the ideology of progress. Believing that democracy and technology advance together, that representative institutions and scientific rationality will reliably and permanently vanquish ignorance and want, and that the historical record demonstrates all this, we can't account for historical regression (like contemporary right-wing populism in Europe and the United States) or precocity, i.e., outstanding political virtue or imagination among peoples with few material attainments. Anarchists, free of this intellectual baggage, need not tie themselves in knots to explain these "paradoxes" of progress.
Labels, clearly, are an aid to misunderstanding. Surely it is not necessary to choose between freedom and equality, much less to disparage those who make the opposite choice. If an anarchist believes in freedom, and a socialist believes in equality, what is someone who believes in freedom and equality? A wise person and a useful citizen.
George Scialabba's selected essays will be published by Verso in 2023.
"Never let a serious crisis go to waste," is a saying attributed to President Obama's chief of staff, Rahm Emanuel. His predecessors in the Bush administration followed this maxim instinctively after 9/11, sweeping up everything "related and unrelated" (Donald Rumsfeld) into a pseudo-justification for an illegal and ruinously costly war that they wanted on other grounds. The Obama administration, on the other hand, signally failed to take advantage of the financial crisis of 2008, instead making bank stockholders whole and leaving even the most irresponsible bankers and investment company executives unpunished while letting millions of bamboozled mortgage-holders twist in the wind.
Will the COVID-19 pandemic go to waste? Michael Lewis's new book, The Premonition, along with its predecessor, The Fifth Risk, offers much useful material for reflecting on COVID and other recent crises. They also, unfortunately, offer reason to doubt that much reflection of any sort goes on in the upper reaches of American government. Both books give a view from the middle and upper-middle levels of that government, in the words of career professionals trying, usually futilely, to get the political appointees above them to do the right thing, or merely to pay attention.
The Premonition's story begins in 1918, with the first and mightiest of the modern pandemics. An avian flu mutated and killed 50 million people worldwide, 675,000 in the United States. Since then, the specter of influenza has haunted the world's public health agencies. Much smaller pandemics in 1957, 1968, and 2009 kept the danger to the fore. But the first national pandemic preparedness plan was only created in 2005, after President Bush read John Barry's The Great Influenza, the classic study of the 1918 pandemic, and concluded that "if anything like the 1918 flu occurred, the basic functions of the society would come to a halt, and no one in the federal government seemed to have worried about it." [52] Rajeev Venkayya, the youthful head of the Biodefense Directorate of the Homeland Security Council, was tapped to draft a pandemic strategy embracing not just the production and stockpiling of vaccines but also immigration, commerce, tourism, and whatever else would affect the course of a pandemic. He put together a team of brilliant and dedicated oddballs whose accomplishments and frustrations are the story line of The Premonition.
Initially the group concentrated on modeling, led by Richard Hatchett, an oncologist (and a brilliant undergraduate poet who chose medical school because "writing is too hard"). But however elegant their models, the verdict of the Centers for Disease Control and other health experts was always: "Not enough data." Then Carter Mecher, an intensive-care physician from Veterans Affairs studying the pandemic of 1918, noticed that St. Louis had had half the death rate of Philadelphia, even though the two cities took similar preventive measures. Philadelphia, however, instituted those measures only several weeks after the first reported case in the city, while St. Louis acted almost immediately. The exponential spread of the disease exacted a heavy toll on Philadelphia; much less so on St. Louis. And there was a further lesson. Philadelphia was reluctant to act without federal guidance, which did not come until three weeks after the first reported case there. Fortunately for St. Louis, the U.S. Surgeon General spoke out recommending school closure and social distancing on the same day the first case was reported there, giving the city authorities political cover to take unpopular measures. Clearly, federal leadership would be crucial to any future pandemic response.
Though Mecher, Hatchett, and their colleagues would never be taken seriously by the Centers for Disease Control, the National Institutes of Health, and other prestigious public health organizations, it must have been some comfort to them that the paper in which they reported these results, "Public Health Interventions and Epidemic Intensity During the 1918 Influenza Pandemic," was phenomenally popular. As of October 2020, it ranked as the eighth most cited paper (out of 86,622) published in the Proceedings of the National Academy of Sciences.
The pandemic plan was completed and the Mecher-Hatchett team dispersed by the end of the Bush administration, but political problems had not been overcome. The new strategy called for closing schools as soon as the infection rate reached 0.1. That sounds small, but only because exponential growth is hard to get one's imagination around, especially when, as with politicians and school boards, one has angry constituents demanding to know how so few apparent infections could justify so much inconvenience. Of course, by the time the number of infections has grown, controlling the pandemic will be many times more difficult. The analogy with climate change is obvious: many voters fail to understand the need for sharply restricting energy use as long as life remains anything like normal, and will punish politicians who impose such measures, or even advocate them. By the time life no longer feels normal, however, feedback effects make warming far harder to slow.
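The arithmetic deserves a moment's pause. A toy calculation - mine, not Lewis's; the starting count and the one-week doubling time are assumptions chosen purely for illustration - shows how quickly "so few apparent infections" become a flood:

    # Toy model of unchecked exponential spread: cases double every week.
    # Starting count and doubling time are illustrative assumptions.
    cases = 100
    for week in range(1, 11):
        cases *= 2
        print(f"week {week:2}: {cases:,} cases")
    # 100 cases become 102,400 in ten weeks; the price of waiting for
    # "enough data" is paid at the steep end of the curve.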
A political test soon arrived in the form of a swine flu epidemic in the spring of 2009. The new President asked for advice. Mecher's White House group advised closing the schools; the CDC counseled against it. Obama declined to close the schools. The swine flu infected 40-80 million Americans, but luckily only 12,500 died. The right decision was made, it turned out, but for the wrong reasons - the numbers could easily have been very much worse. Did the failure to bite the bullet this time make it even harder to do so the next time, in 2020?
The Premonition is a bicoastal story. Its three hubs are Washington DC, Atlanta (where the CDC is headquartered), and California, where Dr. Charity Dean, a public health officer with a communicable disease background and a maverick temperament, clashed repeatedly with the state's medical bureaucracy. Lewis shows Dean as Santa Barbara County chief health officer, facing down outbreaks of TB and meningitis, shutting down a clinic with many wealthy patients but dangerously bad hygiene, and closing an extremely posh nursing home at risk of disappearing in a predicted mudslide. Her willingness to make hard decisions and accept the resulting flak got her promoted to deputy chief health officer of California. That was as far as it would get her, though.
Jerry Brown had appointed Dean; in 2019 a new governor, Gavin Newsom, passed her over for chief health officer in favor of a disastrously unqualified affirmative action hire. With the first signs of a possible pandemic in December 2019, Dean pushed for full-throttle preparedness. Her boss, in response, forbade her even to use the word "pandemic." An acquaintance of Dean's in Washington, the chief medical officer of Homeland Security, put her in touch with the reconstituted Mecher/Hatchett group, still working out of the White House but without formal sponsorship. As late as February 2020, they were, as far as they could tell, the only people in government with a sense of urgency about the virus. For example: one of Dean's contacts in California was a brilliant, unorthodox molecular biologist, Joe DeRisi, at UC San Francisco. Having imbibed Dean's fervor about the all-importance of a fast and multi-pronged response, and learning from her that even simple testing, much less genomic sequencing, was lagging badly everywhere, he offered his state-of-the-art lab. For a long time, there were no takers. The CDC had its own facilities, which were free but very slow, partly because the Center was preoccupied elsewhere, with a succession of embarrassingly unsuccessful attempts to devise a COVID-19 test. California hospitals could not be persuaded to switch from getting expensive results after a five-day wait from the giant companies Labcorp and Quest to getting free results the next day from DeRisi. (One hospital explained apologetically that its accounting software couldn't handle getting something free.) As a result, "nearly a year into the pandemic, in February 2021, the number of genomes being sequenced in the United States was trivial - less than a third of 1 percent of the virus in people who tested positive. The UK was by then sequencing 10 percent of its positives; Denmark had set a goal of sequencing all of them." [268]
******
The main theme of The Premonition - the major-key theme, so to speak - is the slightly offbeat heroism of Mecher, Dean, and company. Anyone who doubts the cynical maxim that it is better for your career to be wrong when everyone else is wrong than to be right when everyone else is wrong will find it amply confirmed here. But the protagonists do not become cynical, any more than the protagonists of The Fifth Risk, who had equally abundant justification for doing so.
The Fifth Risk and The Premonition are a departure for Lewis. His mega-bestsellers have generally featured larger-than-life protagonists (John Gutfreund in Liar's Poker, Steve Eisman in The Big Short, Billy Beane in Moneyball, Jim Clark in The New New Thing) doing glamorous but not particularly useful things (getting very rich in the bond market, getting very rich in the stock market, building winning baseball teams, founding billion-dollar software companies). The Fifth Risk is about ordinary, life-sized people with an extraordinary devotion to their useful but unglamorous jobs. One of them, from the Department of Energy, oversaw the design of safety devices, one of which, in 1961, prevented a 4-megaton bomb that fell off a B-52 from destroying half of North Carolina. (The Air Force, of course, insisted that the accident not be publicized, so he received no credit.) Another, the chief scientist of the Department of Agriculture, kept the poultry companies from increasing the rate of chickens killed per minute from 140 to 175, which would have made the job of health inspectors, already almost impossible, completely impossible. Another tries to keep the Agriculture Department's $220 billion Rural Development Bank from being handed over to Wall Street, which would, among other devastating consequences, triple or quadruple the price farmers pay for water. Still another tries to protect the National Weather Service from Trump's undersecretary of Commerce, who happens to be the chief executive of the country's largest private forecasting company and who would like the Service to stop making its forecasts public, since that cuts into his profits.
The book's title comes from a list by one of Lewis's interlocutors of present dangers to America's well-being. The "fifth risk" is mismanagement. Every other page of The Fifth Risk details some instance of egregious mismanagement - or worse, sabotage - by the Trump administration's wrecking crew. One hopes the quiet, competent types Lewis portrays in his two most recent books will outlast them.
The Premonition has a secondary, minor-key theme, and a surprising one: the uselessness of the CDC. The Centers for Disease Control is the most prestigious public health organization in the world, but none of Lewis's dedicated health-care professionals has a good word for it - several have some fairly colorful bad words for it, in fact. The problem with the CDC was precisely its prestige - it was determined to say or do nothing that might endanger its reputation. This meant never being wrong in public, which translated into never being ahead of the data. Time and again Mecher or Dean would urge some action on the CDC - testing returnees from Wuhan in January 2020; routing viral samples to DeRisi's lab for genomic sequencing; endorsing school closings - only to be rebuffed with "sorry, insufficient data." The problem with pathogens that spread exponentially, however, is that in the few weeks it may take to gather sufficient data, the infection may have spread out of control. It is admittedly difficult to know precisely when to go whole hog for containment. But if an organization's overriding priority is to produce excellent research papers a year hence and meanwhile to avoid being blamed for having unnecessarily inconvenienced the public if this virus should prove less infectious than feared (as happened in 2009), it will sometimes err on the side of inaction, at a heavy cost in lives.
The CDC's oft-reiterated public position through the end of February 2020 was "the risk to the American people is very low." That ought to have some reputational consequences. In September 2020 William Foege, a former head of the CDC, wrote to its current head, Robert Redfield: "This will go down as a colossal failure of the public health system of this country. The biggest challenge in a century and we let the country down. The public health texts of the future will use this as a lesson in how not to handle an infectious disease pandemic." He was talking about the CDC's acquiescence to the Trump administration, of which the agency's reluctance to champion protective measures was one expression.
The CDC's passivity may have had something to do with a traumatic episode in its not-too-distant past. In the spring of 1976 several hundred American soldiers came down with a new variety of swine flu. The flu was expected to be dormant in the summer, as usual with flu, and to return in strength in the fall. How severe it would be was unknowable. Should the government wait or vaccinate?
The CDC convened a public-health summit. The sentiment was near-unanimous: vaccinate. The CDC director, David Sencer, made the case for vaccination to the White House. A large-scale program was launched, and tens of millions were vaccinated. However, an outbreak of Guillain-Barré syndrome was linked to the vaccine. Vaccination was halted and "the pandemic never came. The new strain of swine flu simply vanished. No one knew why." [284]
Americans always hate compulsory anything, so the vaccinations had been unpopular. Now that they were known to have been unnecessary, someone's head had to roll. It was Sencer's. It didn't matter that the entire public health community had agreed with him. It didn't matter that the vaccinations were unnecessary only in the sense that, say, a backup generator and extra food and water laid in beforehand were unnecessary because the hurricane swerved at the last minute. Someone, anyone, outside the White House had to take the blame. Evidently the CDC determined from then on to master the bureaucratic arts of evading blame.
Years later, his career destroyed, Sencer returned to Washington for a conference on the events of 1976. Unfortunately, he seems to have manifested signs of Stockholm Syndrome. His main advice, Lewis reports, was that "to preserve the president's credibility, you needed to keep him as loosely linked to the public side of the decision-making as possible." After all, viruses mutate, which "might well necessitate big changes of strategy." The public might see these changes not as intelligent adaptations but as "signs of ineptitude," which must on no account be associated with the president. How sad: an honorable man, once punished for his candor, now cares above all about his bureaucratic superiors' "credibility," that futile and dishonorable substitute for candor and enemy of democratic accountability and trust.
******
The Premonition does not end happily. Unlike the protagonists of The Fifth Risk, who mostly stayed in government and looked back on their careers with a mixture of satisfaction and exasperation, many of the later book's leading figures found exasperation paramount. Charity Dean was offered the post of chief health officer of California but left instead to found a public health startup, in which Mecher, DeRisi, and other colleagues joined her. On her way out of the office for the last time, Lewis tells us, she asked herself: "Why doesn't the United States have the institutions it needs to save itself?" [279]
No one in the book answers that question, including the author. Lewis is a journalist, not a social critic. But he's a very good journalist, and he dramatizes that question in The Premonition with painful vividness.
George Scialabba is the author of five essay collections and a memoir, How To Be Depressed.
At the end of the Second World War, the United States, with 6 percent of the world's population, accounted for 50 percent of the world's production. Militarily, it was invulnerable: it controlled both oceans and had sole possession of the most fearsome weapon ever invented. Culturally, American movies and popular music were all-conquering. Perhaps most important was America's moral standing. Woodrow Wilson's (not entirely deserved) reputation as an apostle of self-determination; the United States' apparent lack of territorial ambition; and the awful fate from which the US (with a great deal of help from the USSR) had saved Europe - all aroused hopes for America's moral leadership. Those hopes seemed to find fulfillment in the UN Charter and the Universal Declaration of Human Rights, as wise and generous a scheme for international order as any yet devised. But the UN, along with America's moral prestige, fell victim to the superpower rivalry that extended from 1945 to 1991 and is known as the Cold War.
That is not the story Louis Menand aims to tell in his teeming and colorful history of "art and thought in the Cold War." Helpfully, Menand explains that this is "not a book about the 'cultural Cold War'" (the use of cultural diplomacy as an instrument of foreign policy). Nor is it a book about "'Cold War culture'" (art and ideas as reflections of Cold War ideology and conditions). (A good example of the former is Frances Stonor Saunders' The Cultural Cold War; of the latter, Duncan White's Cold Warriors.) The Free World is intellectual history, primarily a narrative of ideas talking to ideas and works of art talking to works of art, while also trying to take into account "the underlying social forces - economic, geopolitical, demographic, technological - that created the conditions for the possibility of certain kinds of art and ideas."
It is a tall order, but he has pulled it off brilliantly once before, in The Metaphysical Club: A Story of Ideas in America (2001). That book traced the personal and intellectual histories of Oliver Wendell Holmes Jr, Charles Peirce, William James, and John Dewey. The first three of these were friends (and members of a discussion group called the Metaphysical Club), and all four of them were central to American thought between the mid-nineteenth and mid-twentieth centuries. They were all broad-gauged thinkers, so following their intellectual odysseys meant touching on nearly every significant current of thought in their time. The book's narrative weave and astute interpretations set, in effect, an impossibly high standard for The Free World, which is twice as long and has a cast roughly ten times as large. The Free World pays the price of its ambition - it is not, like The Metaphysical Club, a masterpiece. But it is a splendid book.
Understandably but unfortunately, Menand begins with a portrait of George Kennan. Kennan was widely revered as the patron saint of American foreign policy during the Cold War. While on diplomatic service in Russia after World War II, he sent back a famous 8000-word dispatch, the "Long Telegram," that explained the Soviet Union to the State Department. Russians were, he cautioned, by nature devious, mistrustful, chronically suspicious: in a word, paranoid. No lasting understanding could be expected between such a regime and the high-minded, straight-shooting United States. Internationally, the sneaky Soviets would strive, he warned, "to stimulate all forms of disunity," by setting "the poor against the rich, black against white, young against old, newcomers against established residents, etc." (How dastardly of them!) All in all, the best we could hope for was to firmly, patiently contain Soviet attempts to spread their disruptive ideology, while waiting for a favorable evolution. The term "containment" stuck.
Kennan explained the Kremlin's "neurotic views of international affairs" as the result of a "traditional and distinctive Russian sense of insecurity ... the insecurity of a peaceful agricultural people trying to live on a vast exposed plain in the neighborhood of fierce nomadic peoples." This was compounded by contact with the economically advanced West, which induced "fear of more competent, more powerful, more highly organized societies." Russian fears of capitalist hostility were of course "sheerest nonsense": capitalist countries "showed no disposition to solve their differences by joining in a crusade against the USSR."
It is a pity Trotsky never had an opportunity to comment on the Long Telegram. I think he would have shredded it so thoroughly that even American foreign-policy intellectuals, normally rather sheep-like, would have felt obliged to sternly repudiate it. For it was not merely the existence of that "vast exposed plain," extending through much of Eastern and Central Europe, that made the Russians anxious; it was also the fact that three times in 150 years, invaders from the West had crossed those plains into Russia, on the most recent occasion very nearly annihilating the country, exterminating much of its population and enslaving the rest, while its Western allies dithered, one of them (Churchill) opining that it would be no bad thing if the Nazis and Bolsheviks killed as many of each other as possible before the US and UK fulfilled their long-delayed promise to open a second front. Of course Stalin had no right to occupy Eastern and Central Europe after the war. But to ascribe the Soviets' concern for their western borders to traditional Russian "paranoia" was preposterous.
The Soviets also had a more up-to-date reason to want to hold on to Eastern and Central Europe. In the late 1940s, they pushed for a unified, demilitarized Germany, with free elections, and offered to pull back the Red Army from Europe. It was German military power they were worried about, for obvious reasons. But the US insisted on partition, with West Germany to be part of a new military alliance, NATO. Not surprisingly, the Red Army stayed put.
Perhaps the most outrageous line in Kennan's dispatch was his ridiculing as "sheerest nonsense" the notion of capitalist countries "joining in a crusade against the USSR." In fact, the moment World War I ended, that's exactly what they did. The United States, Britain, Japan, and other capitalist countries sent 180,000 troops to support the rebellious anti-Bolshevik White Army. The resulting civil war lasted five years and made an already bad situation in Russia incomparably worse. In short, there were many reasons for the USSR to distrust the West at the outset of the Cold War besides what Kennan called "the disrespect of Russians for objective truth." But Kennan told the US - and the West generally - what we wanted to hear: we are rational and honorable; they are devious and paranoid. Naturally, he was hailed as a Wise Man and the Cold War deemed inevitable and "tragic" - the latter term usually (as in Vietnam and in Kissinger's "philosophical" blatherings about international affairs) a tacit acknowledgment that the policy or practice in question is indefensible.
There's not a great deal more about politics in The Free World, except for a chapter about James Burnham and George Orwell. Burnham's The Managerial Revolution (1941), which Orwell wrote about several times and drew on in Nineteen Eighty-Four, predicted that bureaucrats and technocrats would be the ruling class in advanced countries, that the state would extend its control indefinitely over economic and private life, and that states would form shifting blocs, always in conflict, often military. This was to be the fate of both capitalist and socialist countries, hence "the convergence theory of totalitarianism." It didn't turn out that way, in most respects at least. Still, it struck many people as one of the most powerful and persuasive social theories (in Orwell's case, visions) of the twentieth century.
There was a deep difference, though, between Burnham and Orwell, which Menand mentions but doesn't make enough of. They were both notably tough-minded; that is, they shared an intense dislike of cant and wishful thinking. But Burnham was a thoroughgoing nihilist: he thought that all ideals were sentimental rubbish, that lasting peace was a pipedream, and that power was the only reality in politics. Orwell, on the other hand, though in Nineteen Eighty-Four he portrayed nihilism more brilliantly than anyone else ever has or, probably, ever will, was nevertheless the most idealistic of men, with solidarity and generosity seemingly written into his source code. Burnham doubtless pitied the poor, deluded democratic socialist Orwell. But as Orwell observed dryly in "Second Thoughts on James Burnham" (1946): "So long as they were winning, Burnham seems to have seen nothing wrong with the methods of the Nazis." That is not something a decent person would want to have written about himself. Perhaps tough-mindedness is not the last word in politics.
One of The Free World's larger themes is the replacement of Paris by New York as "the capital of the modern." For nearly a century, Menand writes, "Paris was where advanced Western culture - especially painting, sculpture, literature, dance, film, and photography, but also fashion, cuisine, and sexual mores - was ... created, accredited, and transmitted." [p. 56] It was not the Nazi occupation that changed things; Paris got off lightly compared with other occupied capitals. (Though it would have been burned to the ground if the German commandant had not ignored Hitler's orders.) And immediately after the Liberation in 1944, there was a cultural efflorescence. Existentialism was in vogue everywhere; its three main exponents - Sartre, Camus, and Beauvoir - were international celebrities. But the compass needle was turning: as Sartre acknowledged, "the greatest literary development in France between 1929 and 1939 was the discovery of Faulkner, Dos Passos, Hemingway, Caldwell, Steinbeck." [pp. 77-78] American leadership in painting was even more pronounced in the 1940s and 50s. Paris would always retain its aura, particularly for Black American writers and musicians, though not only for them - Paris would play a large part in liberating Susan Sontag, for example. But American global primacy was so complete in the 50s and 60s that the cultural primacy of its capital city could not be gainsaid.
Menand gives a lengthy but concise and penetrating sketch, both biographical and intellectual, of Sartre and Beauvoir. That is his modus operandi. Though there's plenty of connective tissue, the book essentially consists of a very large number of profiles of such luminaries as Hannah Arendt, David Riesman, Jackson Pollock, Clement Greenberg, C. Wright Mills, Harold Rosenberg, Lionel Trilling, Diana Trilling, Allen Ginsberg, Claude Lévi-Strauss, John Cage, Robert Rauschenberg, Merce Cunningham, Jasper Johns, Elvis Presley, John Lennon, Isaiah Berlin, James Baldwin, I.A. Richards, Northrop Frye, Paul de Man, Jacques Derrida, Norman Mailer, Andy Warhol, Betty Friedan, Susan Sontag, Ralph Ellison, Pauline Kael, François Truffaut, Jean-Luc Godard, and Tom Hayden, as well as quite a few only slightly less luminous figures. In addition, there are many sketches, pretty full and mostly even-handed, of influential institutions, movements, and doctrines, such as the Bauhaus, Black Mountain College, the Family of Man exhibit, "Action" painting, structuralism, the Beats, the New Criticism, deconstruction, Industrial Art, the Cahiers du Cinéma, the Congress for Cultural Freedom, the rise of the rock-and-roll industry, the rise of the paperback book industry (with special attention to Grove Press and Olympia Press), the Leo Castelli Gallery, the Vietnam War, and Bonnie and Clyde. It is possible to quibble with some of his judgments (as I'll do in a moment). But it's not possible, I'd say, to read the book without learning a vast amount about 20th-century intellectual history.
Menand plays his cards pretty close to the vest, ideologically; but given his obvious intelligence and his affiliation with the New Yorker (he's a staff writer), I'd guess he skews left. He is disappointingly indulgent, though, toward eminent centrists. Kennan is one example; another is Isaiah Berlin. Berlin was one of the most influential intellectuals in the English-speaking world at mid-century, so Menand had to discuss him at some length. But he didn't have to discuss him respectfully. Berlin made a career of admonishing everyone to his left that perfect harmony and rationality in politics are unattainable, that not all desirable values can be fully and simultaneously realized, and that therefore anyone who talked of utopia, or even radical change, was a totalitarian in sheep's clothing. His political views, for all his suavity, really were no more sophisticated than that, though they did very often have the virtue of being tacked on to the rather elegant essays in intellectual history in which he specialized. His regrettably popular book on Marx assured liberals that the Moor was fundamentally unsound and largely responsible for the Bolshevik outrages. And his main contribution to political philosophy, the essay "Two Concepts of Liberty," insisted on a sharp distinction between "freedom from" coercion (from, e.g., taxation and gun laws) and "freedom to" live with dignity (e.g., to afford housing, education, health care). It thereby aimed a poisoned shaft at the Communist countries - which of course deserved it but were paying no attention to Berlin - a shaft that unfortunately landed instead in the English-speaking countries, where the eventual results were Thatcherism and the Republican Freedom Caucus. Berlin might have regretted this, but to forestall it he would have had to take a firm stand in favor of democratic socialism, or at least social democracy. That, however, might have been unpopular at his Oxford High Table and was therefore unthinkable.
Menand's interpretations are always plausible, but not always convincing. The Black Mountain and New York City avant-gardes figure largely in the book, and he nicely balances narratives of their individual careers and collaborations. But with the best will in the world (well, perhaps something less than that), I couldn't accept his claims for their significance. Three of this story's central figures were Robert Rauschenberg, Merce Cunningham, and John Cage. In 1952, each produced an important work. Rauschenberg's was White Paintings, several canvases painted white. Cunningham's was Theatre Piece No. 1, a 45-minute-long mixed-media piece which consisted of a movie turned on and off throughout the performance, a lecturer (Cage) alternately speaking and silent, piano music, records playing on a Victrola, and some intermittent and spontaneous dancing - all of these lasting for intervals determined by chance. From the rafters hung Rauschenberg's White Paintings, which supposedly inspired Cage's now-famous 4'33" - four minutes and thirty-three seconds of ambient noise, with the performer sitting at the piano holding a stopwatch.
Menand tells a plausible enough story of how each of those artists arrived at his chef d'oeuvre (or cul de sac). But to show that it was worth getting there is something else again. "These works," he writes,
- the White Paintings, Theatre Piece No. 1, and 4'33" - are at the center of the Rauschenberg-[Jasper] Johns-Cage-Cunningham aesthetic, and they are easily misread. They are not Dada or anti-art, and they do not embrace a philosophy of "anything goes." They are completely committed to a traditional view of art as a transformative experience, and they are highly disciplined. They rule out much more than they permit.
Rauschenberg made large claims for White Paintings: "they deserve a place with other outstanding art, and yet they are not Art because they take you to a place in painting that art has not yet been." Menand apparently agrees. "The discipline in the White Paintings is the uninflected surface. Rauschenberg was insistent about this. ... Keeping the artist out of the work required constant vigilance. ... The artist doesn't make the painting signify; the viewer does. ... Cage got what was going on in the White Paintings. He described them as 'airports for the light, shadows, and particles' in the space around them. They proved that 'a canvas is never empty.'" As Marcel Duchamp had shown, "the art object itself is empty, inert; it is 'made' by the spectator ... The art is happening because of the canvas, but not on the canvas." [pp. 254-55]
Theatre Piece No. 1 may have sounded to its audience like "a cacophony of simultaneous and unrelated independent actions," but there was order, Menand counters. The order consisted in the fact that, even though the performances had no relation to one another, the noises and silences were not determined by the whims of the performers. They were not determined by Cage either. They were determined by chance, which was Cage's usual method of composition at this stage of his career. 4'33", which was Cage's favorite work to date, was also full of order: a stopwatch, a score whose pages Cage would turn without playing anything, the wind and rain, and the audience, who "made all kinds of interesting sounds as they talked or walked out."
As that last phrase suggests, the audience for 4'33" (likewise Theatre Piece No. 1), who were other artists and not easily intimidated middlebrow concertgoers, were not impressed by Cage's and Cunningham's masterpieces. (Rauschenberg was already famous; it is interesting to imagine an unknown painter bringing a set of white-painted canvases to a midtown gallery and insisting that they "deserve a place with other outstanding art.")
I had high hopes for Menand's chapter on the "Rauschenberg-Johns-Cage-Cunningham aesthetic": I've struggled for decades between closing the book on American avant-garde painting, music, and dance, as I'm inclined to do, and giving an ear one more time to astute and knowledgeable interpreters like Menand, and perhaps finally getting it. Alas, la lutte continue.
I've never felt any such ambivalence about Andy Warhol. His career has always seemed to me a clever racket: he disclaimed any special knowledge of what his productions meant, even to him, and invited his viewers to invest the work with their own meanings, while refraining, sphinx-like, from responding, lest he limit its saleability. According to Menand, that is pretty much what Warhol was up to, and more power to him. "Pop Art was a market-driven phenomenon." [p. 525]
Warhol's [Campbell Soup can] painting is a painting about the nature of painting. It represents the idea that a soup can is a commodity, and so is a painting of a soup can. ... There is a marketplace for everything. This collapse of the fine art-commerce distinction seems banal today, but in 1962, it tied people in knots. [p. 535]
Actually, I hadn't heard that there was no longer any distinction between fine art and commerce. I'm not sure I like the idea.
Menand gets a bit wound up on the subject:
[Pop Art] was what [Clement] Greenberg said it should be, the next step in art's investigation of its own nature. And it brought that investigation to an end. Pop Art showed that the only difference between art, such as a sculpture that looks like a grocery carton, and reality, such as a grocery carton, is that the first is received as art and the second is not. At that moment, art could be anything it wanted. The illusion/reality barrier had been broken. [p. 538]
It sounds a little like a stock market pyramid scheme. The winner is the person who guesses exactly when other people are going to stop receiving sculptures of grocery cartons and paintings of soup cans as art, and sells at the top of the market. In the stock market, too, the "illusion/reality barrier" has been broken.
I'm afraid Menand didn't convince me that Warhol is even worth arguing about - I half-suspect he made those extravagant claims for Pop Art just to get a rise out of reviewers. Susan Sontag is another matter. "She had been educated at Berkeley, Chicago, Harvard, and Oxford ... She had a command of the Western literary, philosophical, and classical music canon; she was up to date on Continental thought; she was a dedicated cineaste who often saw two or three movies a day; she followed the avant-garde ... She also wrote experimental fiction." [pp. 572-73] Menand, who is strictly measured in his praise of nearly everyone else, is unmeasured in his praise of Sontag. "[T]here was no one like her." She was "in the forefront of American letters." "Every other American critic of the period looks provincial by comparison." [pp. 572-73]
How good was Sontag? Her experimental fiction, as Menand half-admits, was barbarously bad. Her conventional fiction was good but not outstanding. On Photography, Illness As Metaphor, and AIDS and Its Metaphors were intermittently interesting, but hardly the brilliant revelations they were initially hailed as. Her most interesting and enduring work is in her five essay collections. Many of the essays are fine: on Bresson, Camus, "The Pornographic Imagination," Riefenstahl, Walter Benjamin, Victor Serge, "On Style," others. There are quite a few political pronouncements, usually wise and eloquent. But she is generally considered a literary critic, and virtually none of her essays, I would say, is literary criticism. They are literary history, literary journalism, literary theory, literary reflections, sometimes, as I have said, very good. But almost nowhere does she grapple with a poem or novel or film or painting or piece of music and show us, from the inside, how it works: how, precisely, it achieves the effects it does. Menand suggests that that's an outdated idea of literary criticism, the so-called New Criticism, which Paul de Man allegedly rendered obsolete with his dazzling deconstructive approach. But De Man, too, was a literary theorist and historian rather than a critic; when he actually essayed interpreting a text, the results were not impressive. And what Menand calls the New Criticism is, I'd say, simply criticism: what F. R. Leavis did with Hard Times, Lionel Trilling with Little Dorrit and The Princess Casamassima, Irving Howe with Nostromo, Randall Jarrell with Marianne Moore, Helen Vendler with Wallace Stevens, and William Gass with Rilke.
Sontag was not interested in getting to grips with individual works like that. As Menand writes: "She just wanted to be on the cutting edge of cultural awareness." That will keep you busy. She did, on at least two occasions, become the cutting edge herself. "Notes on Camp," her most famous essay, was a jeu d'esprit, a catalogue of practically everything in cultural history that is serious and ridiculous at the same time. It was so popular that later in life she hated to be reminded of it. It remains fun, however.
The other large splash was "Against Interpretation," a protest against the tyranny of meaning, content, intellect. "Interpretation is the revenge of the intellect upon art" was only one of the flaming arrows Sontag shot into the traditionalist camp. Marxist and Freudian approaches are paradigms of critical aggression against art. But any determined search for meaning is a mistake. Something else altogether is needed. "Transparence is the highest, most liberating value in art - and in criticism - today. Transparence means experiencing the luminousness of the thing in itself, of things being what they are." It sounds a little like a mindfulness exercise.
Sontag is perhaps the key figure in The Free World, or at any rate the one Menand admires most. But he must sense that there's something a little unstable about her reputation, because his discussion is largely defensive: noting and answering criticisms of her. Here is one of his best formulations, explaining what (he thought) she was up to in "Against Interpretation":
Sontag was not a permissivist or a leveler. She was not saying that the Beatles are as good as Thomas Mann. She was saying that the fine arts can be approached with the same openness and lack of pretension that people bring to pop songs and Hollywood movies. [p. 592]
Well, yes. But then, there's an awful lot in Thomas Mann (and the fine arts generally) that you can only get through discipline and interpretive skill, while for the Beatles (and pop songs and Hollywood movies), openness and lack of pretension are all you need.
******
The Free World does not have a single, comprehensive argument; it contains many disparate arguments, only a few of which I've touched on here. It is based on a vast amount of research - biographical, historical, even economic - which shows, though never obtrudes; Menand is too graceful a writer for that. Though he has not overcome my skepticism about some of the artistic and cultural developments of the period, he has nevertheless taught me a good deal. It will be a long time, I imagine, before a better account of art and thought in mid-20th-century America appears.
George Scialabba's most recent books are Slouching Toward Utopia and How To Be Depressed.
We all know Nietzsche's parable of the last man. Certain that democracy, science, and secular humanism would definitively reshape civilization, Nietzsche - or more precisely, Zarathustra - asked what kind of human being would result. His answer, dripping with sarcasm and contempt, was that ordinary humans would become a kind of insect, "a race as ineradicable as the flea-beetle," a creature that would "make the earth itself small." Here is Zarathustra's lament:
"Alas, the time of the most despicable man is coming, he that is no longer able to despise himself. Behold, I show you the last man.
"'What is love? What is creation? What is longing? What is a star?' the last man asks, and he blinks.
"'We have invented happiness,' say the last men, and they blink. They have left the regions where it was hard to live, for one needs warmth. One loves one's neighbor and rubs against him, for one needs warmth.
"No shepherd and one herd! Everyone wants the same, everyone is the same; whoever feels different goes voluntarily into a madhouse.
"One has one's little pleasures for the day and one's little pleasures for the night, but one has a regard for health.
"'We have invented happiness,' say the last men, and they blink."
Plenty of others besides Nietzsche have expressed misgivings about the likely character structure of democratic citizens, and these critics have not all been opponents of democracy. (I'm using "democracy" here to mean the whole Enlightenment program: not just political equality but also feminism, pacifism, human rights, and the welfare state, along with a chastened belief in, and modest hopes for, moral and material progress.) Tocqueville's reservations are well-known: "The general character of past society was diversity," he wrote; "unity and uniformity were nowhere to be met with. In modern society, however, all things threaten to become so much alike that the peculiar characteristics of each individual will be entirely lost in the uniformity of the general aspect." Even John Stuart Mill fretted that "the general tendency of things throughout the world is to render mediocrity the ascendant power among mankind. ... At present individuals are lost in the crowd." Criticisms of mass society and mass man swelled to a roar in the 20th century: Durkheim, Spengler, Schmitt, Ortega, Walter Lippmann, Heidegger, the Frankfurt School, Foucault, Alasdair MacIntyre, Allan Bloom, and many, many others.
Most of these criticisms I reject, not for their often powerful diagnoses but for the illiberal prescriptions that usually accompany them. I agree with Richard Rorty's admirably forthright solution to the supposed dilemma of democratic mediocrity: to wit, "even if the typical character types of liberal democracies are bland, calculating, petty, and unheroic, the prevalence of such people may be a reasonable price to pay for political freedom." We can and should separate the private from the public, self-creation from tolerance, the pursuit of perfection from democratic politics. As Rorty famously elaborated:
From Plato through Kant down to [Habermas and Derrida], most philosophers have tried to fuse sublimity and decency, to fuse social hope with knowledge of something big... My own hunch is that we have to separate individual and social reassurance, to make sublimity [unlike tolerance] a private, optional matter. That means conceding to Nietzsche that democratic societies have no higher aim than what he called "the last men" -- the people who have "their little pleasures for the day and their little pleasures for the night." Maybe we should just make that concession, and also concede that democratic societies do not embody anything, and cannot be reassured by anything, larger than themselves (e.g., by "rationality"). Such societies should not aim at the creation of a new breed of human being, or at anything less banal than evening out people's chances of getting a little pleasure out of their lives. This means that citizens of those societies who have a taste for sublimity will have to pursue it on their own time, and within the limits set by On Liberty. But such opportunities might be quite enough.
That, broadly, is where I also stand - with the Enlightenment and its contemporary heirs, and against Straussians, religious conservatives, national greatness neoconservatives, Ayn Randian libertarians, and anyone else for whom tolerance, civic equality, international law, and a universal minimum standard of material welfare are less than fundamental commitments. But without, I hope, contradicting myself, I'd like to work the other side of the street for a while: to acknowledge the force of at least some criticisms of modernity and progress.
Perhaps the most important, though also the most fragile, success Enlightenment liberalism has had is the delegitimation, however partial, of war. The perception that the arbitrary power of absolute rulers facilitated needless and vastly destructive wars was a powerful impetus to popular sovereignty in the 19th and 20th centuries, culminating in the United Nations Charter. Though the Charter has been repeatedly violated by the great powers, and not only by them, it is not quite a dead letter, and a global culture of respect for international law may be the most urgent cause any activist could devote her life to.
Even so, biology has its rights. In 1910, the last year of his life and only a few years before the First World War put an end to the long European peace, William James wrote a pamphlet for the Association for International Conciliation, one of the many pacifist groups whose prominence in that period convinced many people that war between nations, being so obviously irrational, was therefore impossible. James's essay, entitled "The Moral Equivalent of War," is a work of supreme pathos and wisdom. James himself was a pacifist, a founding member of the Anti-Imperialist League, a group formed to protest America's military interventions in Cuba, Puerto Rico, and the Philippines, and one of the most humane and generous spirits America or any other nation has ever produced.
James understood perfectly the folly - the "monstrosity," as he called it - of war, even in those comparatively innocent, pre-nuclear days. But he also acknowledged the place of the martial virtues in a healthy character. "We inherit the warlike type," he pointed out, "and for most of the capacities of heroism that the human race is full of we have to thank [our bloody] history." "The martial virtues," he continued, "although originally gained by the race through war, are absolute and permanent human goods. ... Militarism is the supreme theater of strenuousness, the great preserver of our ideals of hardihood; and human life with no use for strenuousness and hardihood would be contemptible." "We pacifists," he wrote with characteristic intellectual generosity, "ought to enter more deeply into the aesthetic and ethical point of view of our opponents." To militarists, a world without war is "a sheep's paradise," flat and insipid. "No scorn, no hardness, no valor any more!" he imagines them saying indignantly. "Fie upon such a cattleyard of a planet!" This, remember, was the era of Teddy Roosevelt, preacher of the strenuous life and instigator of splendid little wars. James's pacifism may be common sense to you and me, but when he wrote, the common sense of Americans was mostly on Roosevelt's side.
How to nourish the martial virtues without war? James resolved this apparent dilemma with a suggestion many decades ahead of its time: universal national service, every youth to be conscripted for several years of hard and socially necessary physical work, with no exceptions and no class or educational discrimination. This army without weapons would be the moral equivalent of war, breeding, James argued, some of the virtues essential to democracy: "intrepidity, contempt of softness, surrender of private interest, obedience to command." I am sure James would have agreed that these are not the only virtues essential to democracy - he himself, with his anti-imperialist activism, exemplified an equally essential skepticism and resistance to authority. But I wonder if our contemporaries, who mostly need no convincing about the necessity of skepticism and resistance to authority, would also agree with James about the importance of valor, strenuousness, and self-sacrifice.
James wrote in America before World War I, a situation of almost idyllic innocence compared with that of the next writer I want to cite, D. H. Lawrence. The Great War, as contemporaries called it, was a soul-shattering experience for English writers. The complacent stupidity with which Europe's governing classes initiated, conducted, and concluded that war, the chauvinism and bloodlust with which ordinary people welcomed it, and above all, the mindless, mechanical grinding up of millions of lives by a war machine that seemed to go of itself - these things infuriated Lawrence almost to madness. Like many others, Lawrence saw the facelessness, the impersonality, the almost bureaucratic character of this mass violence as something new and horrifying in human history. But more than all others in the 20th century, Lawrence was the champion of the body and the instincts against the abstract, impersonal forces of modernity. Like Nietzsche, he marshalled torrents of impassioned prose against the apparently inexorable encroachments of progress. Here is a passage from "Education of the People," published posthumously in the two volumes of Phoenix.
We are all fighters. Let us fight. Has it come down to chasing a poor fox and kicking a leather ball? Heavens, what a spectacle we should be to the ancient Greek. Rouse the old male spirit again. The male is always a fighter. The human male is a superb and god-like fighter, unless he is contravened in his own nature. In fighting to the death, he has one great crisis of his being.
What is the fight? It is a primary, physical thing. It is not a horrible, obscene, abstract business, like our last war. It is not a ghastly and blasphemous translation of ideas into engines, and men into cannon-fodder. Away with such war. A million times away with such obscenity. Let the desire of it die out of mankind. ... Let us beat our plowshares into swords, if we will. But let us blow all guns and explosives and poison-gases sky-high. Let us shoot every man who makes one more grain of gunpowder, with his own powder.
And then let us be soldiers, hand-to-hand soldiers. Lord, but it is a bitter thing to be born at the end of a rotten, idea-ridden machine civilization. Think what we've missed: the glorious bright passion of anger and pride, reckless and dauntless.
In other words: fight when you must, when your blood boils over and your anger won't be gainsaid. But fight face to face, hand to hand, in your own quarrels and in your own skin, as a responsible human being and not a machine, or worse, a machine-operator. I think William James would have agreed with that. I'll go further: I think Mary Wollstonecraft, Margaret Fuller, Grace Paley, perhaps even Dorothy Day would have agreed. I believe that one can be - must be - both a feminist and an upholder of the martial virtues, just as James showed that one must be both a pacifist and an upholder of the martial virtues.
Modernity imperils another set of virtues, which are a little harder to characterize than the martial virtues, but are even more important. I don't mean the bourgeois virtues, though there's some overlap. I suppose I'd call them the yeoman virtues. I have in mind the qualities we associate with life in the early American republic - the positive qualities, of course, not the qualities that enabled slavery and genocide. In 1820, 80 percent of the American population was self-employed. Protestant Christianity, local self-government, and agrarian and artisanal producerism fostered a culture of self-control, self-reliance, integrity, diligence, and neighborliness - the American ethos that Tocqueville praised and that Lincoln argued was incompatible with large-scale slave-owning. Today that ethos survives only in political speeches and Hollywood movies. In a society based on precarious employment and feverish consumption, on debt, financial trickery, endless manipulation, and incessant distraction, such a sensibility seems archaic.
According to the late Christopher Lasch, the advent of mass production and the new relations of authority it introduced in every sphere of social life wrought a fateful change in the prevailing American character structure. Psychological maturation - as Lasch, relying on Freud, explicated it - depended crucially on face-to-face relations, on a rhythm and a scale that industrialism disrupted. The result was a weakened, malleable self, more easily regimented than its preindustrial forebear, less able to withstand conformist pressures and bureaucratic manipulation - the antithesis of the rugged individualism that had undergirded the republican virtues.
In an important recent book, The Age of Acquiescence, the historian Steve Fraser deploys a similar argument to explain why, in contrast with the first Gilded Age, when America was wracked by furious anti-capitalist resistance, popular response in our time to the depredations of capitalism has been so feeble. Here is Fraser's thesis:
During the first Gilded Age the work ethic constituted the nuclear core of American cultural belief and practice. That era's emphasis on capital accumulation presumed frugality, saving, and delayed gratification as well as disciplined, methodical labor. That ethos frowned on self-indulgence, was wary of debt, denounced wealth not transparently connected to useful, tangible outputs, and feared libidinal excess, whether that took the form of gambling, sumptuary displays, leisured indolence, or uninhibited sexuality.
How at odds that all is with the moral and psychic economy of our own second Gilded Age. An economy kept aloft by finance and mass consumption has for a long time rested on an ethos of immediate gratification, enjoyed a love affair with debt, speculation, and risk, erased the distinction between productive labor and pursuits once upon a time judged parasitic, and become endlessly inventive about ways to supercharge with libido even the homeliest of household wares.
Can these two diverging political economies - one resting on industry, the other on finance - and these two polarized sensibilities - one fearing God, the other living in an impromptu moment to moment - explain the Great Noise of the first Gilded Age and the Great Silence of the second? Is it possible that people still attached by custom and belief to ways of subsisting that had originated outside the orbit of capital accumulation were for that very reason both psychologically and politically more existentially desperate, more capable, and more audacious in envisioning a non-capitalist future than those who have come of age knowing nothing else?
If this argument is true - and I find it painfully plausible - where does that leave us? An individual's or a society's character structure cannot be willed into or out of existence. Lost virtues and solidarities cannot be regained overnight, or even, perhaps, in a generation. Even our ideologies of liberation may have to be rethought. A transvaluation of values may be in order: faster, easier, and more may have to give way to slower, harder, and less - not only for ecological reasons but also for reasons of mental and moral hygiene. And even if we bite that bullet and decide, as a society, to spit out the poisoned apple of consumerism and technological addiction, is there a path back - or forward, for that matter? If individual self-sufficiency and local self-government are prerequisites for human flourishing, then maybe it is too late.
I know of only one book that takes the full measure of the dilemmas I've been hinting at and goes on to show one way to a sane and stable future. It's a utopian novel by Ernest Callenbach, called Ecotopia. It was published in 1975 and had a brief vogue but seems to have disappeared along with the rest of the counterculture of that era. It deserves better: it's politically and psychologically astute, and ecologically far ahead of its - or our - time. But the utopian society it depicts, located in the Pacific Northwest, is made possible by the survival in that region of some of the very cultural traits and virtues whose obsolescence in the rest of the country I've been lamenting.
******
Do my apparently disparate worries have anything in common? Possibly this: they all result from one or another move on the part of the culture away from the immediate, the instinctual, the face-to-face. We are embodied beings, gradually adapted over millions of years to thrive on a certain scale, our metabolisms a delicate orchestration of innumerable biological and geophysical rhythms. The culture of modernity has thrust upon us, sometimes with traumatic abruptness, experiences, relationships, and powers for which we may not yet be ready - to which we may need more time to adapt.
But time is short. "All that is solid melts into air" - Marx meant the crust of tradition, dissolving in the acid bath of global capitalism. Now, however, the earth itself is melting. Marx's great metaphor has acquired a terrifying second meaning.
And so has Nietzsche's. If we cannot slow down and grow cautiously, evenly, gradually into our new technological and political possibilities and responsibilities - even the potentially liberating ones - the last recognizably individual men and women may give place, before too many more generations, to the simultaneously sub- and super-human civilization of the hive.
The United States is widely believed to be a democracy. Unfortunately, it is not. In a democracy, elected representatives are broadly accountable to and controllable by their constituents, not merely by means of a biennial or quadrennial voting ritual but through regular channels of information and discussion, which communication technology is making ever more feasible.
In the United States, however, the general population has very little influence over its government. Five years ago, two American political scientists, Martin Gilens and Benjamin Page, studied the correlation between policy preferences, policy outcomes, and income levels, concluding that "ordinary citizens have virtually no influence over what their government does in the United States," while the "ability to shape outcomes is restricted to people at the top of the income distribution and to organized groups that represent primarily - though not exclusively - business." In other words, the United States is a plutocracy.
This will hardly be news to anyone who has paid attention to American politics since Ronald Reagan arrived in Washington in 1980 and began energetically to undermine the New Deal, the foundation of American prosperity and fairness in the decades after World War II. Most Republican presidents since 1980, including Reagan, George W. Bush, and Donald Trump, have enacted giant tax cuts for the very rich while trying to cut or privatize Social Security and unemployment benefits, destroy the bargaining power of labor unions, and resist popular pressure for universal health care and free higher education.
Why, then, do a majority of Americans vote Republican? They don't. In the last decade, the majority of votes cast on both the state and national level - including the 2016 presidential election - were for Democrats. The United States has an archaic electoral system, which has awarded the presidency twice in the last twenty years to the candidate with fewer votes, both times Republicans (George W. Bush and Trump). This system assigns the same weight to one Wyoming voter as to 57 California voters. Because the American Constitution is absurdly difficult to amend, and because the small states, mostly Republican, have always refused to give up their unfair electoral advantage, unrepresentative government at the level of President and Senate is entrenched. The House of Representatives and the state legislatures are chosen by election district. Manipulating the composition of these districts to ensure disproportionate advantage - a practice called "gerrymandering" - has traditionally been indulged in by whichever party was in a position to do so. But in 2010, Republicans hired teams of computer consultants to take this skullduggery to an entirely new level. As a result, though Republicans are the minority party vote-wise, they have been the governing party in America for the last decade.
Many non-rich people do, of course, vote Republican - around 50 million of them - and their motivations vary widely. Perhaps the largest segment consists of evangelical Christians, who support Trump because he has promised to promote, through his judicial appointments, religious freedom (usually meaning a narrow interpretation of the First Amendment) and restrictions on abortion. Others, particularly in the South, have never forgiven the Democratic Party for the Civil Rights Act of 1964. Others, reasonably enough, dislike the neoliberalism of Obama and the Clintons: "free trade" agreements facilitating capital flows and the outsourcing of jobs; financial and other kinds of deregulation; and privatization - though Congressional Republicans supported these things too. Still others generically dislike "Washington" and the federal government, often managing to forget that this is where their Social Security checks and Medicare coverage come from.
The main strength of the Republican electorate is that a high percentage of them actually vote. There are many more Democrats, but a smaller percentage of them turn out. Partly this is because they are younger and more footloose; possibly also because they are less likely to get off work to vote. (In America, Election Day is a workday, a time-tested bipartisan means of suppressing the vote.) There is another reason: targeted Republican voter-suppression efforts. These include restrictive registration rules, closing polling stations in Democratic areas, purging voter rolls, not making available or not counting mail ballots, and more. This is expected to have a serious impact on elections across the board in 2020. Democratic counter-efforts are circumscribed both by a lack of money and a lack of zeal - the Democratic Party apparatus appears to be even more concerned with retaining control of the party than with winning elections. In addition, Republican-appointed judges, all the way up to the Supreme Court, have been notably unenthusiastic about protecting voters' (and workers' and consumers') rights.
Of course, all the malign consequences of Trump's and Mitch McConnell's fanatical devotion to further enriching the already rich pale into insignificance compared with the damage we have all conspired to wreak by emitting carbon. The proportion of carbon dioxide in the earth's atmosphere has rocketed into the danger zone. The floods in Kerala and Bangladesh, the wildfires in California and Australia, the droughts in Syria and Sudan, the melting of polar ice, the thawing of the Arctic permafrost, the bleaching of half the world's coral reefs - these are the first symptoms of our planet's death. It is far from certain that this death spiral can be arrested. Trump has done his best over the last four years to accelerate it, rolling back environmental protections, silencing or firing government scientists, aggressively promoting drilling for oil and gas. Another four years of these policies may be fatal, especially combined with the reckless destruction of the Amazon rain forest - the world's lungs - by Trump's admirer, Bolsonaro.
Joe Biden is a sensible and decent man, though, like Obama, not a courageous one. If he is elected with large majorities in the Senate and House, he will invest in renewable energy on a large scale, rejoin the Paris Accords, and enforce emission standards. And the Republicans will fight him every step of the way. Why? Because these measures may make the richest Americans - Charles Koch and the Exxon Corporation, in particular - a little less rich.
Like many other Americans, I can see the justice of raising living standards in the impoverished world, even if those of us in the rich world must tighten our belts, carbon-wise. Half a billion new refrigerators and air conditioners in India and China may add a fraction of a degree to the average global temperature. So be it. But the profits of the energy industry are another matter. That our beautiful and fragile world may be mortally wounded merely because a few thousand American plutocrats and corporations have bought themselves a wholly unscrupulous gang of operatives, who have established a choke-hold on American politics - this is almost too terrible to contemplate.
George Scialabba's books include What Are Intellectuals Good For? and Slouching Toward Utopia.
The Bad Side of Books: Selected Essays by D. H. Lawrence. Edited and with an introduction by Geoff Dyer. New York Review Classics, 490 pages, $19.95.
T. S. Eliot wrote of Henry James: "He had a mind so fine no idea could violate it." That was a compliment. Eliot thought ideas were mostly out of place in art, or at least notoriously hard to keep in their proper place. Surprisingly, Eliot also commended James for keeping ideas out of his critical writings. Eliot's critical writings teemed with ideas, indeed staggered under them.
Of course James had plenty of ideas, and so did his characters, but not the sort that Eliot deplored - not general ideas or, as we might say, ideologies. Those who do have ideologies - the feminists in The Bostonians or the revolutionaries in The Princess Casamassima, for example - are slightly ridiculous or slightly sinister. But their ideas don't detract from those novels; they're just materials, like the enigmatic adventurism of the Princess and the chivalrous absurdity of Basil Ransom. They are not the author's ideas, and the novels are not written to propagate them. That would have been a violation, in Eliot's sense.
Eliot thought D. H. Lawrence's mind was continually and ruinously violated by ideas. He called Lawrence an "arch-heretic" (this was not a compliment) and a "very sick man"; and he devoted a good deal of After Strange Gods to deploring Lawrence's regrettable effects on contemporary sensibilities. It was not a matter of individual depravity. Lawrence simply had the misfortune to be born without the indispensable mental resources of a traditional culture and the stabilizing moral bulwarks of a traditional religion. It was therefore perfectly natural that he could not think and that his characters had no conscience.
It has been an influential judgment - like all of Eliot's - and it finds a faint echo in Geoff Dyer's introduction to The Bad Side of Books, though he invokes not Eliot's authority but Kate Millett's in Sexual Politics, a book which could hardly be more different from After Strange Gods. "If Lawrence remains a great writer today," Dyer opines,
that is due in no small part because his enduring freshness and force is found in the travel books, in poems that were scarcely even poems, and in the scatter of his essays. For Lawrence the novel, "the one bright book of life," was the supreme test; that's what he staked his life on. But many of his gifts were best displayed elsewhere.
"Freshness and force" falls well short of wisdom and genius; travel books, poems (that are "scarcely even poems"), and a scatter of essays - do these modest achievements add up to a major writer? In Out of Sheer Rage, his witty and exasperating book about not writing a book about Lawrence, Dyer acknowledged that he no longer cares for Lawrence's novels. In his Introduction to The Bad Side of Books, explaining why the collection is made up almost entirely of occasional writing, Dyer muses that "although Lawrence undoubtedly had a philosophy which he was keen to share with the world (to put it mildly), the effort involved him writing against his strengths. ... Lawrence was often carried away by stuff about a metaphorical 'river of dissolution' but he noticed, with stunning clarity of vision, all the flora and fauna on the literal riverbank." In British English - ie, Dyer's - "stuff" sounds more disparaging than in American English: cf. the expression "Stuff and nonsense!" There is comparatively little, then, of Lawrence's philosophical (or "philosophical-ish," as he put it self-deprecatingly) prose here. Of course, even in writing on ordinary subjects, Lawrence is always setting off little philosophical fireworks. But the grand fireworks displays - "Education of the People," most of the "Study of Thomas Hardy," Reflections on the Death of a Porcupine (except for the title essay), Psychoanalysis and the Unconscious, and Fantasia of the Unconscious - are unrepresented. For better and worse - mostly better, since nearly all the philosophical writing is collected in the two volumes of Phoenix, the more complete edition of Lawrence's nonfiction, and in Psychoanalysis and Fantasia, which are available in a single paperback volume - The Bad Side of Books is a medley of Lawrence at his most observant, sensuous, and immediate.
Every writer is unique in some way; Lawrence was unique in most ways: in his prose style, with its frequently incandescent images and incantatory rhythms; in his personality - most of his friends testified that he was flame-like, more alive than anyone else they knew; and in his opinions, the usual reactions to which ran from amusement through incomprehension, incredulity, and ridicule to abhorrence. If any twentieth-century writer can be said to have lived with that hard, gemlike flame that Walter Pater recommended, it was Lawrence.
He was a vagabond. The Bad Side of Books is, among other things, a record of his wanderings. His lungs were weak, so he avoided Northern winters. But what drew him, even more than his health or the trickle of income that came from travel writing, was an ardent curiosity, a curiosity that (sometimes) trumped even his generally formidable preconceptions. He lived in Florence, Rome, Sicily, Germany, southern France, Ceylon, Tahiti, Australia, Mexico, and, most consequentially, New Mexico.
Sometimes he came away chiefly with vivid descriptive writing, like the opening of "Flowery Tuscany":
Each country has its own flowers, that shine out specially there. In England it is daisies and buttercups, hawthorn and cowslips. In America, it is goldenrod, stargrass, June daisies, Mayapple and asters, that we call Michaelmas daisies. In India, hibiscus and dattura and champa flowers, and in Australia mimosa, that they call wattle, and sharp-tongued strange heath-flowers. In Mexico it is cactus flowers, that they call roses of the desert, lovely and crystalline among many thorns; and also the dangling yard-long clusters of the cream bells of the yucca, like dropping froth. ...
But by the Mediterranean, now as in the days of the Argosy, and, we hope, for ever, it is narcissus and anemone, asphodel and myrtle ... crocus and grape-hyacinth.
But more often, the vivid writing was in the service of a vision. "New Mexico," he wrote, "was the greatest experience from the outside world that I have ever had."
The moment I saw the brilliant, proud morning shine high up over the deserts of Santa Fe, something stood still in my soul, and I started to attend. ... In the magnificent, fierce morning of New Mexico one sprang awake, a new part of the soul woke up suddenly, and the old world gave way to a new.
There are all kinds of beauty in the world, thank God. ... But for a greatness of beauty, I have never experienced anything like New Mexico. All those mornings when I went with a hoe along the ditch to the canyon, at [my] ranch, and stood, in the fierce, proud silence of the Rockies, on their foothills, to look far over the desert to the blue mountains away in Arizona, blue as chalcedony, with the sage-brush desert sweeping grey-blue in between, dotted with tiny cube-crystals of houses, the vast amphitheater of lofty, indomitable desert, sweeping round to the ponderous Sangre de Cristo mountains on the east, and coming flush at the pine-dotted foot-hills of the Rockies! What splendor!
Along with several essays, Lawrence wrote a substantial piece of fiction set partly in New Mexico: the novella St. Mawr, which throws some light on the meanings the region had for him.
America meant all sorts of things to Lawrence, many of them adumbrated in his Studies in Classic American Literature (1923). In The Bad Side of Books, there's an essay called "Pan in America" (1924), which starts from the cry that echoed around the Mediterranean as paganism faded: "The Great God Pan is dead!" What that meant, according to Lawrence, was that the possibility of life lived in spontaneous unison with nature dwindled as commerce, technology, and metaphysical religion advanced. Pan seemed still alive to Lawrence in the Indians of the Southwest, and he conjured a graphic account of the animist mind and imagination. But even there, Pan was "dying fast"; every Indian, Lawrence thought, "will kill Pan with his own hands for the sake of a motor car." Who, given the choice the essay poses - "to live among the living, or to run on wheels" - would choose what Lawrence called "life"? Pretty much no one, he thought, though he returned to this opposition again and again.
Idiosyncratic though Lawrence was, this is the same choice posed by the entire Romantic tradition: Blake and Wordsworth, Ruskin and Morris, Wendell Berry and Norman O. Brown. Embeddedness, mystery, everyday beauty on one side; separateness, control, functionality on the other; limits versus progress. Rationalists often claim that Romantics are mere mystics and wholly impractical. That's not true of most Romantics, and certainly not of the greatest ones, Ruskin and Morris. And even Lawrence was, for so loftily prophetic and gorgeously imaginative a writer, surprisingly practical. He was a miner's child, comfortable from childhood with tools, and always knocking up shelves or bookcases in whatever temporary lodging he and his wife, Frieda, found themselves. The Lawrences bought and ran a small ranch in New Mexico for a couple of years after Lawrence's revelation, and he was happy there.
"Nottingham and the Mining Countryside" (1929) is an account of a visit to the area he grew up in. By then he had seen the Tuscan hill towns, and so he wrung his hands and gnashed his teeth over the "great scrabble of ugly pettiness" the mining companies had created by way of villages for the miners in the hilly region. And he proceeded to offer an alternative building plan, along with an argument that "if they had done this" - eschewed ugliness and had a care for beauty - "there would never have been an industrial problem. The industrial problem arises from the base forcing of all human energy into a competition of mere acquisition." I don't know whether that's a left-wing opinion or a right-wing opinion, but it makes fundamental sense.
In "Return to Bestwood" (1926), another homecoming essay, Lawrence arrives during the General Strike that convulsed England that year. The people, he recognizes, are his people, though both he and they have changed almost out of recognition. They have lost even the few graces that lent dignity to the harsher, poorer form of life he knew as a child. "I feel I hardly know any more the people I come from," he acknowledges sorrowfully, though they are also "the only people who move me strongly, and with whom I feel myself connected in deeper destiny."
From these wistful reflections emerges a political credo.
A few things I know, with inner knowledge.
I know that what I am struggling for is life, more life ahead, for myself and the men who will come after me, struggling against fixations and corruptions. ...
I know that there is ahead the mortal struggle for property.
I know that the ownership of property has become, now, a problem, a religious problem. But it is one we can solve.
I know I want to own a few things, my personal things. But I also know I want to own no more than those. ... I don't want a fortune - not even an assured income.
At the same time, I don't want poverty and hardship. I know I need enough money to leave me free in my movements, and I want to be able to earn that money without humiliation. ...
I know that we could, if we would, establish little by little a true democracy in England: we could nationalize the land and industries and means of transport, and make the whole thing work infinitely better than at present, if we would. ...
I know we are on the brink of a class war.
I know we had all better hang ourselves at once, than enter on a struggle which shall be a fight for the ownership or non-ownership of property, pure and simple, and nothing beyond.
I know that the ownership of property is a problem that may have to be fought out. But beyond the fight must lie a new hope, a new beginning.
In the early 1920s, profoundly disillusioned by the initial popular enthusiasm on all sides for World War I and the continued popular acquiescence in the war even after its futility and insane destructiveness became clear, Lawrence flirted with a not-very-well-defined authoritarianism in several of his novels. As a result, several generations of English and American leftists have come to the same conclusion as T. S. Eliot: Lawrence simply could not think, at any rate about politics. I'd say the above passages, along with numerous others in The Bad Side of Books and the two volumes of Phoenix, suggest otherwise.
"Study of Thomas Hardy" (1914), one of his earliest nonfiction works, is, he admitted, "about anything but Thomas Hardy." Along with Hardy (occasionally), it is about "Being and Not-Being" and their peregrination through history from the Jews of the Old Testament onward through Western civilization. Being and Not-Being usually appear in the character of the male and female principles, around which Lawrence spins an elaborate metaphysical theory, which can easily be vulgarized into the ideology of male supremacy that Kate Millett found everywhere in his writing. Whether it can be fashioned into something more innocent, or even useful, may never be known - he died, at 44, very much a work in progress.
When I think of Lawrence sub specie aeternitatis, I think of a few lovely lines that appear in "The Bad Side of Books," the title essay of this collection. I have a feeling he would have liked to be remembered by them as much as by any other few lines he wrote. They are a memento mori but also, naturally - it being Lawrence - a memento vitae.
To some men still the trees stand up and look around at the daylight, having woven the two ends of darkness together into visible being and presence. And soon, they will let go the two ends of darkness again, and disappear. A flower laughs once, and having had his laugh, chuckles off into seed, and is gone. Whence? Whither? Who knows, who cares? That little laugh of achieved being is all.
[END]
George Scialabba's most recent book is How To Be Depressed.
The Cosmopolitan Tradition by Martha Nussbaum
Reviewed by George Scialabba
Martha Nussbaum begins The Cosmopolitan Tradition with a very famous (though possibly apocryphal) anecdote. Diogenes the Cynic, scorning convention, slept in a tub, wore rags, ate scraps, copulated and masturbated in public, and spoke his mind pungently and uninhibitedly. This ur-hippie behavior did not lack for admirers, even in high places. One day as he lounged in his tub, sunning himself, he was visited by Alexander of Macedon, then in the process of conquering the world. Looming over the philosopher, he said: "I am great Alexander. Ask anything of me." Without looking up, Diogenes replied: "Would you please stop blocking the sun?"
Alexander was reportedly amused, and many subsequent generations have been mightily impressed. But was it really such a clever thing to say - or better, since it was certainly clever, was it wise or generous? Why not "free your slaves" or "give land to the poor" or "humble the rich" or "leave the rest of the world alone"? With so many obviously better choices available, it begins to seem morally obtuse of Diogenes to have, in effect, flipped Alexander the bird. Maybe he was, like some subsequent countercultural radicals, a bit of a poseur.
It is, of course, easy to second-guess at a distance of twenty-four centuries. And as Nussbaum shows, there was a core of principle to Diogenes' answer. He thought all that mattered, or should matter, to human beings are our most important capacities: moral reasoning and free choice. These are what make us human, what confer on us that inner dignity that is the human essence. Only what diminishes those capacities - emotional attachments, ambitions, vanity, pleasure-seeking - is evil. Hunger, pain, imprisonment, even enslavement do not, or need not, rob us of our inner dignity; nor can wealth, power, or fame enhance it. Alexander the Great had nothing to offer someone who believed all this.
Diogenes stands at the beginning of the cosmopolitan tradition. But why "cosmopolitan"? Nowadays the word means "urbane, sophisticated, worldly, cultivated, at home everywhere."[1] Dorothy Parker was cosmopolitan; so was Hannah Arendt. Diogenes the Dog ("cynic" comes from the Greek word for "dog") was presumably not cosmopolitan in this sense. But when asked where he came from, he allegedly replied: "Kosmopolites": "I'm a citizen of the cosmos," or "My homeland is the universe." The Greeks were not at all cosmopolitan in this sense: they were fiercely attached to their city-states; and within most of those, an aristocracy was fiercely attached to its privileges. Diogenes was announcing a new basis of political allegiance: not geography or class but "the equal, and unconditional, worth of all human beings, grounded in moral choice-capacity."[2]
The Greek Cynic philosophers were numerous but mainly important for giving rise to Stoicism, which greatly influenced the (in turn) enormously influential Cicero, as well as Seneca and Marcus Aurelius. All took their inspiration from the Cynic/Stoic ideal of inner self-sufficiency, resting on virtuous behavior and self-command. This was a large step forward for moral philosophy. But Nussbaum identifies a serious problem with the cosmopolitan tradition, which she calls "the bifurcation" and to which she recurs throughout the book. The Cynics taught that if someone were fully in possession of her own soul, then no external deprivation or assault could harm her. Material welfare seemed of secondary rather than primary importance, and even liberty was not necessarily more favorable to self-command than bondage. But if the accidents of life - money, status, and power - cannot erode a person's inner dignity, why care about poverty, inequality, illiteracy, even slavery? For centuries philosophers in the cosmopolitan tradition accepted this inference (with occasional glimmers of humanity and common sense breaking in). The bifurcation was largely, if not quite completely, overcome much later, during the Scottish Enlightenment, but it is a large part of the story Nussbaum has to tell.
Nussbaum makes large claims for Cicero. The work she concentrates on, De Officiis (Of Duties), is "perhaps the most influential book in the Western tradition of political philosophy"[3] and "the foundation for much of modern international law, including both the law of war and human rights law."[4] For Cicero, justice is based on "an idea of respect for humanity, of treating a human being like an end rather than a means"[5] - the latter would mean violating that person's dignity. Such violations include, in Nussbaum's reckoning, physical assault, sexual assault, cruel punishments, torture, and takings of property. Duties of justice "are fully universal and impose strict, exceptionless obligations."[6] By contrast, duties of material aid are weak, full of exceptions and qualifications. Cicero assures us that we need not draw too deeply on our own material resources to aid others. Clear priority goes to family and close friends, who have some right to depend on us. The republic deserves our strong support, but strangers and foreigners are mostly out of luck. The reason is that the foreigner should be "great and lofty in soul, despising human things," and should "seek nothing but what is morally good and appropriate, nor should he yield to any human being or any disturbance of mind or [ill] fortune."[7] (Though why these admonitions should not apply equally to our family and friends Cicero does not explain.) This disjunction between the two kinds of duties is the legacy of Stoicism, the "bifurcation" at full strength.
The Stoic tradition lasted many centuries and still commands respect. But to a modern person, there is something very wrong with Cicero's conclusion. Nussbaum spells it out:
People have long held that there are certain things that are so bad, so deforming of humanity, that we must go to great lengths to prevent them. Thus, with Cicero and Seneca, they hold that torture is an insult to humanity; and we now go further, rejecting slavery [as well]. But to deny people material aid seems to such people not in the same category at all. They do not feel that people are torturing or raping others when they deny them the things that they need in order to live - presumably because they do not think these goods are in the same class. Humanity can shine out in a poor dwelling, and it can appear that human dignity has not been offended by the poverty itself. Poverty is just an external: it doesn't cut to the core of humanity.
But of course it does. First of all, certain living conditions are an offense to humanity whether the person is inwardly altered by them or not. And, second, there is a considerable likelihood that the person will be affected by them. The human being is not like a block or a rock, but a body of flesh and blood that is made each day by its living conditions. Hope, desire, expectation, will - all these things are shaped by material surroundings. People can wonderfully rise above their conditions, but that does not mean that the conditions themselves are not important, shaping what they are able to do and to be. [8]
This is entirely persuasive - we are made and unmade each day, as she eloquently says, by circumstances, very much including material ones. In recognizing this, surely we correct a philosophical error on the Stoics' part. Still, I cannot help adding a slightly skeptical observation. The bifurcation Nussbaum identifies is indeed important and recurs throughout history. But is it based on a philosophical mistake, as she believes? Or are the duties of justice universal and strongly binding because even the rich and powerful are sometimes in need of justice, if only against other rich and powerful people; while the duties of material aid are not universal and are only weakly binding because only the poor and powerless need material aid? So also with the law of nations: if "civil and political" rights are, as she says, universally acknowledged, while "social and economic" ones are not, is this not because powerful nations are sometimes victims of aggression but are unlikely to be candidates for humanitarian aid? It's a fairly general rule: what the powerful need is defined as "justice" or "the national interest," while what the powerless need - collective bargaining, minimum wage, free public education, universal health care - is only grudgingly written into law, if at all. Pace Socrates, "justice" is, all too often, the advantage of the stronger.
The cosmopolitan tradition - most notably in Cicero, Seneca, and Marcus Aurelius - bequeathed us another enduring problem, Nussbaum argues: the problem of moral priorities, or of how to apportion our beneficence. Since every person deserves to be treated with the respect due her "infinite and equal worth,"[9] how can we justify treating those near and dear to us with a special benevolence, as nearly all of us do? We can't justify it, of course; we simply can't do otherwise. "A few rare human beings may be able to have intense love and concern that is truly cosmopolitan (compatible with due respect for all human life and due attention to the just claims of all) and to live their lives with an awareness of the equal worth and the equal needs of all."[10] The rest of us are bound to find this austere and abstract moral landscape "a barren and frightening world."[11] Fortunately, later thinkers in the tradition - Adam Smith, especially - humanized it somewhat.
Another skeptical observation: the "infinite and equal worth of every human being" is one of those edifying and resonant phrases to which nearly everyone nods automatic assent. But do we - can we - really believe in it? The "equal" part is, as Nussbaum concedes, a practical and even a theoretical impossibility: it is no more possible for everyone to matter equally to us than it is for all objects in our visual field to appear at the same distance or (as long as our taste buds are intact) for all foods to taste the same. Our biological endowment sharply delimits our affections no less than our perceptions. And as for everyone's presumed "infinite" worth, Freud, who had some claim to know, famously wrote late in life to a close friend: "In the depths of my heart I can't help being convinced that my dear fellow-men, with a few exceptions, are worthless."[12] A little jaundiced, perhaps, but a useful counterpoint to unworldly Stoic metaphysics.
************
The Dutch jurist Hugo Grotius (1583-1645) brought the cosmopolitan tradition to the theory of international law and morality. Before the 17th century, relations between states were regulated only by a rudimentary ius gentium, or law of peoples. There were few nation-states, and relations - mainly war and trade - were of the simplest. Individual rights within states were just beginning to be theorized by Hobbes, Bodin, and others. Grotius introduced into the discussion of war and interstate commerce the central cosmopolitan concepts of respect for humanity and concern for sociability, or civilized fellowship. These dictate both individual rights, which we have come to call human rights, and social rights, the rights of association and affiliation that may be very important to our identity. That Grotius was the first theorist to define these rights and assert their validity even against the sovereign was a great achievement; his writings, Nussbaum observes, "may justly be said to mark the dawning of the Enlightenment."[13] (It is true that nearly every other page of his magnum opus, On the Law of War and Peace, contains an appeal to the authority of Holy Scripture, which was not exactly the Enlightenment's style; but then, most other pages contain an appeal to one or another eminent classical author, which very much was. And besides, Grotius was an Arminian, the philosophes' favorite kind of Christian.)
Grotius' humanism comes out in his doctrine of property ownership, which follows the Church Fathers in holding that "in cases of extreme need ... the poor person actually owns the [rich person's] property by right, and the holder does not."[14] Rich nations too are morally obliged to distribute their wealth to other nations in dire need. He also has very modern views about migration and exile. Many of the ideas in Law of War and Peace were new to European thought, which explains the extraordinary esteem in which Grotius was held by Leibniz, Hume, and Kant, among others.
His theory of humanitarian intervention was particularly influential. In antiquity there was not a well-developed or widely accepted conception of human rights, though the mythological figures Hercules and Theseus were celebrated for various heroic exploits on behalf of victims of notorious injustices. In the Middle Ages, worldly rulers were liable to be chastised, morally and militarily, by the Church in case of scandalous immorality or inhuman cruelty, though the Church rarely exercised this right, which was in any case abrogated after the Protestant Reformation. Grotius held that grave violations of the ius naturale by sovereigns (e.g., enslavement or deprivation of religious freedom) or by freelancers ("pirates, general robbers, and enemies of the human race"[15]) could trigger "punishment," by which he clearly meant armed intervention. It was a bold innovation, which Nussbaum credits with a large part in the creation of the modern human rights movement.
************
The chapter on Adam Smith is the book's longest and most rewarding. For one thing, it is a pleasure to read - one has to take on faith Nussbaum's praise of Cicero's and Grotius' prose, while Smith, whom she liberally quotes, is a splendid prose stylist. More important, she persuasively reverses the conventional assessments of The Wealth of Nations and Theory of Moral Sentiments. The former is often assumed to be an unsentimental exposition of the superiority of free markets, as though Smith were an eighteenth-century Milton Friedman; while the latter, because of its famous discussion of "sympathy" as one of the springs of moral action, has been taken to represent a kinder, gentler Smith. Matters are not so simple.[16]
As we've seen, there are two sometimes conflicting strains in the Stoic tradition, with which Smith strongly identifies. One of them emphasizes the equal, inalienable dignity of all persons, based on our capacities for reasoning and choice. Another strain prescribes self-command, a serene and austere ("stoical") acceptance of life's accidents, teaching that nothing external, but only our own weakness, can affect those capacities on which our dignity rests. Too much emphasis on the latter strain can lead to indifference toward the material deprivations that very often rule the lives of others - the "bifurcation" that is the chief failing of the cosmopolitan tradition.
The Wealth of Nations, Smith's encyclopedic treatise on economic and social life in 18th-century Europe, has been famous since its publication for Smith's advocacy of competition, free trade, the division of labor, and other fundamental features of capitalism. Less well known are his forceful defense of the rights of workingmen and his sharp criticism of the undue influence of employers over the legislature. He inveighs against mandatory apprenticeship, as well as parish registration, both of which restricted the freedom and mobility of young workers. He notes the hypocrisy involved in outlawing unions, as was common then and for a long while afterward: "The masters, being fewer in number, can combine much more easily; and the law, besides, authorizes or at least does not prohibit their combination, while it prohibits those of the workmen. We have no acts of parliament against combining to lower [wages], but many against combining to raise [them]."[17] And he ridicules the ignorance of legislators who give manufacturers a favorable trade policy on the latter's assurance that trade "enriches the country."
But what is most striking, according to Nussbaum, is Smith's insistence that fair wages, decent working conditions, collective bargaining, and an adequate system of public education are all owed to workers, as a matter of justice. Overwork, poverty, and hopelessness make a worker sick, shiftless, and stupid, no matter how virtuous he may be. He simply cannot achieve his human dignity under such conditions. This line of argument culminates in a long and famous passage:
The man whose life is spent in performing a few simple operations, of which the effects too are, perhaps, always the same, has no occasion to exert his understanding, or to exercise his invention in finding out expedients for removing difficulties which never occur. He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become. The torpor of his mind renders him, not only incapable of relishing or bearing a part in any rational conversation, but of conceiving any generous, noble, or tender sentiment, and consequently of forming any just judgment concerning many even of the ordinary duties of private life. Of the great and extensive interests of his country, he is altogether incapable of judging ... His dexterity at his own particular trade seems, in this manner, to be acquired at the expense of his intellectual, social, and martial virtues. But in every improved and civilized society, this is the state into which the laboring poor, that is, the great body of the people, must necessarily fall, unless government takes some pains to prevent it.[18]
Here the bifurcation is healed, and the cosmopolitan tradition finds its noblest expression.
Curiously, Smith's other great work, the Theory of Moral Sentiments, contains virtually no echoes of this great insight. It expounds a classical Stoic, even Ciceronian, ethics, including a sharp demarcation between duties of justice and duties of beneficence and an affirmation of apatheia, the belief that nothing external, whether injury or lack, can affect a genuinely wise person. This seems in flat contradiction to the original and important insights of Wealth of Nations about the ease with which harsh deprivations can extinguish a person's dignity. There is, Nussbaum concludes, an unresolved tension between Smith's "humanity," so prominent in the one book, and the "macho stoicism" that pervades the other. It is almost (she stops just short of conjecturing) as though Smith wrote Wealth of Nations with the feminine part of his mind and Theory of Moral Sentiments with the masculine.
************
Nussbaum is herself a theorist in the cosmopolitan tradition, and the book concludes with a review of contemporary problems that the tradition may have something helpful to say about: pluralism, international law, foreign aid, and immigration/asylum. She sees only a moral function for international law, promulgating norms that nations may adopt or not. That may indeed be the best one can do today, though it is perhaps too much to say, as she does, that early proponents of international law were "starry-eyed" about its potential efficacy. In fact, the UN Charter, binding on its signatories, was well designed for keeping the peace and would have saved countless lives if the great powers - and above all, the superpowers - had abandoned their geopolitical follies and lived up to their obligations. Why was it unrealistic to have expected nations that had just survived the most catastrophic war in history to behave with a minimum of rationality and decency in order to avoid another one?
On foreign aid, Nussbaum shares the skepticism of economists William Easterly and Angus Deaton, who found that autocracy, corruption, paternalism, and ignorance of local conditions have made most foreign aid almost totally ineffective. And where it is effective, the result, allegedly, is dependency and lack of political initiative. This is doubtless often true, but I wish Nussbaum had also mentioned the many strong defenses of aid by Jeffrey Sachs[19] and others, or had alerted us to the existence of passages like this:
[G]et the poorest people in the world such obvious goods as the vaccines, the antibiotics, the food supplements, the improved seeds, the fertilizer, the roads, the boreholes, the water pipes, the textbooks, and the nurses. This is not making the poor dependent on handouts; it is giving the poorest people the health, nutrition, education, and other inputs that raise the payoff to their own efforts to better their lives.
And this:
Health campaigns, known as "vertical health programs," have been effective in saving millions of lives. Other vertical initiatives include the successful campaign to eliminate smallpox throughout the world; the campaign against river blindness jointly mounted by the World Bank, the Carter Center, WHO, and Merck; and the ongoing - but as yet incomplete - attempt to eliminate polio.
The first passage is by William Easterly; the second by Angus Deaton.[20]
The final problem, perhaps the knottiest and the most urgent, is immigration and asylum. The cosmopolitan tradition is particularly well adapted to address this problem, since its "basic insight is that respect for humanity requires us to furnish the basic wherewithal of human life, somehow, to those in desperate need."[21] If this can be done through humanitarian aid, with a minimum of disruption to both countries, it should be. But those who must leave, because of want or persecution, should be welcomed - in principle.
This is where things get knotty. How many of them should be welcomed? Not too many: it is reasonable to limit numbers "in accordance with skills and job opportunities,"[22] for the sake of economic stability, and to require that candidates for permanent residence understand and accept our political culture, i.e., our constitution and laws. But not too few, either: we can't try to "preserve national homogeneity" or "defend dominant national ethnic or religious traditions from the pluralism and challenge that immigration typically brings."[23] This is a little too general. Nussbaum's argument might have been more persuasive if she had engaged with the defenders of homogeneity and national culture, who are not all xenophobes, or even acknowledged the existence of a controversy over whether immigrants lower wages, as the many businessmen who support unrestricted immigration seem to believe.
Here and elsewhere in The Cosmopolitan Tradition, Nussbaum is reluctant to descend from the philosophical plane to the empirical. In consequence, as a history of one strain of moral philosophy, the book is excellent; as a work of political theory or social criticism, less so. Perhaps these more local matters - historical, sociological, economic - seemed to her parochial rather than cosmopolitan.
[END]
George Scialabba (www.georgescialabba.net) is an essayist and book critic in Cambridge, Massachusetts. His most recent book is How To Be Depressed.
[1] J. I. Rodale, The Synonym Finder.
[2] The Cosmopolitan Tradition, p. 2.
[3] The Cosmopolitan Tradition, p. 19.
[4] The Cosmopolitan Tradition, p. 30.
[5] The Cosmopolitan Tradition, p. 27.
[6] The Cosmopolitan Tradition, p. 30. In this section Nussbaum makes a - tiny and insignificant - error. She says that, according to Cicero, "it is wrong to poison even the foulest of tyrants" (pp. 30-31). But in De Officiis, III, 32, Cicero writes about tyrants: "[I]t is not opposed to Nature to rob, if one can, a man whom it is morally right to kill; nay, all that pestilent and abominable race should be exterminated from human society." Well said.
[7] The Cosmopolitan Tradition, p. 35.
[8] The Cosmopolitan Tradition, p. 39.
[9] The Cosmopolitan Tradition, p. 91.
[10] The Cosmopolitan Tradition, p. 93.
[11] The Cosmopolitan Tradition, p. 94.
[12] Freud, Letters of Sigmund Freud, 1873-1939.
[13] The Cosmopolitan Tradition, p. 100.
[14] The Cosmopolitan Tradition, p. 130.
[15] Grotius, On the Law of War and Peace, Book II, chapter 20, section 40.
[16] A major recent treatment of Smith, discussed by Nussbaum, is Emma Rothschild, Economic Sentiments: Adam Smith, Condorcet, and the Enlightenment, Harvard University Press, 2002. See also "The Workingman's Friend" in George Scialabba, For the Republic, Pressed Wafer, 2013.
[17] The Cosmopolitan Tradition, pp. 154-5.
[18] The Wealth of Nations, Book V, ch. 1.
[19] E.g., Jeffrey Sachs, "The Case for Aid," Foreign Policy, January 21, 2014.
[20] Both quotes appear in The GiveWell Blog, Nov. 6, 2015: https://blog.givewell.org/2015/11/06/the-lack-of-controversy-over-well-targeted-aid/.
[21] The Cosmopolitan Tradition, p. 230.
[22] The Cosmopolitan Tradition, p. 231.
[23] The Cosmopolitan Tradition, p. 231.