This Academic Life
Around the horn, 20 February
Today on TAL we start a new feature: a quick round-up of a couple of newspaper editorials that pissed me off philosophically and conceptually. Not substantive criticism, but simply an ongoing catalog of places where public intellectuals and other commentators engage in analytically dubious reasoning. Not surprisingly, expect to hear a lot about methodology here…
First off comes this editorial about Social Security from The Washington Post. The substance of the article is a fairly interesting comparison of the terms and metaphors that FDR and GWB use to talk about the Social Security program, noting that where FDR characterized the program as an obligation, GWB talks about "ownership." Lots of examples ensue. So far, so good -- and then there's this bit of wisdom: "How we talk about policy says a lot about how we think about it." Which means what, exactly? We can take Bush's public statements as indicating something about his genuinely held personal beliefs about Social Security? The fact that there is controversy about the language deployed in the debate about Social Security indicates that we "think" different things about it now than we did in 1935? (And who's this "we" who was thinking things in 1935 and is still around to think things, collectively, in 2005?)
What we have here is the classic balk committed by liberal individualists who are trying to be social constructionists, in which public articulations are reduced to epiphenomenal indicators of the subjective beliefs which are supposed to really drive behavior. Although the author seems quite sensitive to the nuances of the public debate and to the different kinds of programs envisioned by the FDR framing and the GWB framing, what is missing is any real sense of the dynamics of framing itself, and hence any real explanation of why the debate now seems to be about ownership and individual retirements when once upon a time it was about the obligation of society towards its retirees. Implicitly, standing in place of any kind of defensible explanation, we have the vague notion of "thinking," suggesting that this shift is due to "us" changing "our" (collective?) mind. What about the mutations of discourse that make such a shift possible in the first place, and the concrete deployments of cultural resources that produce this shift and not others? In short, what happened to the politics of the debate about Social Security?
Second, this gem of an editorial (also from the Post) suggesting that we would have better intelligence estimates if analysts would assign a numerical value to two aspects of their analyses: their confidence in the quality of the evidence that they've used, and their confidence in the conclusions that they've drawn from that evidence. This is basic conditional probability reasoning, the same sort of thing that fuels expected utility forecasts (in which one separates probability of occurrence from payoff amount, and then multiplies down the probability tree to give expected values, repeating for each step of the process involved). And if this procedure generated reliable numbers, then yes, we could use Bayesian and other statistical techniques to determine which estimates were the best ones to use as a guide for policymaking.
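The "multiplying down the probability tree" bookkeeping can be made concrete in a few lines. This is a toy sketch only -- the two-stage tree, its probabilities, and its payoffs are invented for the example, not drawn from the editorial:

```python
def expected_value(tree):
    """Multiply down a probability tree.

    A tree is either a terminal payoff (a number) or a list of
    (probability, subtree) branches whose probabilities sum to 1.
    """
    if isinstance(tree, (int, float)):
        return tree
    return sum(p * expected_value(subtree) for p, subtree in tree)

# Invented two-stage forecast: 60% chance the event occurs at all;
# if it does, a 50/50 split between a high payoff and a low one.
forecast = [
    (0.6, [(0.5, 100.0), (0.5, 40.0)]),  # occurs: expected payoff 70
    (0.4, 0.0),                          # doesn't occur: nothing
]
print(expected_value(forecast))  # 0.6 * 70 + 0.4 * 0 = 42.0
```

Note that each node's probability multiplies everything beneath it, which is exactly why a garbage confidence number at any single node poisons the entire forecast.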
One small problem: the numbers involved in this procedure are meaningless. The author says that his modest proposal is based on the success of "data-driven analyses" like those used in sabermetric baseball management and on Wall Street. But a key difference is that in those relatively closed social environments, the numbers in question are generated by the process under observation, like the movement of a stock or a player's success at getting on base. What the author proposes is instead like asking scouts to estimate how confident they are in their projections of a player's future performance, which is precisely what traditional scouts have been doing for years -- albeit without trying to assign precise numbers to their level of confidence (which strikes me as more honest than pretending that one can assign a precise number to a subjective level of certainty). The author's other misleading parallel concerns the prominence of Bayesian techniques in modern spam filtering software. Bayesian techniques rely on a massive amount of data, including a massive number of judgments by individual end users about what constitutes "spam": filter components are weighted based on the relative frequency with which users designate particular kinds of e-mail as "spam," and do not rely on the user's determination that a particular piece of mail is 40% likely to be junk (which would be a meaningless determination in any event). It is that mass of data that makes updates of prior probability estimates meaningful, as opposed to simply representing the needless quantification of a hunch. Once again, this works in relatively closed social systems characterized by repeated acts, and is not likely to be revealing in other settings.
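To make the contrast concrete, here is a minimal sketch of the frequency-based updating a Bayesian spam filter actually performs. Everything here is invented for illustration (the tiny training corpora, the add-one smoothing, the 50/50 prior); a real filter does the same arithmetic over millions of repeated user judgments, which is what makes its posteriors meaningful:

```python
from collections import Counter

# Invented training data standing in for masses of binary "spam"/"not spam"
# judgments by users; each inner list is one tokenized message.
spam_docs = [["winner", "free", "deal"], ["free", "winner", "prize"]]
ham_docs = [["meeting", "notes", "deal"], ["lunch", "tomorrow", "notes"]]

def train(docs):
    """Return (per-word counts, total word count) for one class."""
    counts = Counter(word for doc in docs for word in doc)
    return counts, sum(counts.values())

def spam_posterior(message, spam_model, ham_model, prior_spam=0.5):
    """P(spam | words): naive Bayes with add-one smoothing."""
    (sc, st), (hc, ht) = spam_model, ham_model
    vocab_size = len(set(sc) | set(hc))
    p_spam, p_ham = prior_spam, 1.0 - prior_spam
    for word in message:
        p_spam *= (sc[word] + 1) / (st + vocab_size)
        p_ham *= (hc[word] + 1) / (ht + vocab_size)
    return p_spam / (p_spam + p_ham)  # renormalize over the two classes

spam_model, ham_model = train(spam_docs), train(ham_docs)
print(spam_posterior(["free", "winner"], spam_model, ham_model) > 0.5)    # True
print(spam_posterior(["meeting", "notes"], spam_model, ham_model) > 0.5)  # False
```

Notice that the user is never asked for a number like "this is 40% likely to be junk"; the probabilities fall out of relative frequencies across many repeated, yes-or-no judgments.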
All four of the regular readers of this blog will know that when it comes to baseball, as well as other relatively closed social systems (or social systems that can be safely presumed to be approximately closed for analytical purposes, of which I'd argue that there are considerably fewer in political and social life than we'd like to think), I'm a big partisan of quantitative/statistical/comparative techniques for knowledge-construction. I'd much rather rely on numerical data when managing a baseball team (even a fantasy baseball team) than on "hunches" and "feelings" about particular players (with the exception of Mike Mussina, whom I always rank higher up in the pre-draft order than his numbers merit; I've seen him do enough amazing things in his career that I generally think he's worth the risk. Let's not forget that he threw six perfect innings in the ALCS last year against the eventual champion Boston Red Sox, and has twice carried perfect games even deeper, once coming within one out…). Why do I do this? Because the numbers are meaningful in baseball, just as they are meaningful if one is trying to ease traffic congestion at a busy intersection. But numbers like the ones called for by the op-ed author are meaningless.
The fact that the Holy Writ Of Neopositivist Social Science -- a.k.a. King, Keohane, and Verba's Designing Social Inquiry -- also calls for such likelihood estimates on the part of researchers doesn't change the basic issue: such numbers can tell one a lot about the analyst, but I would be seriously skeptical that they can tell us diddly-squat about the analysis. And I am 1.0 confident in that conclusion.
So much silliness, so little time -- since I have papers to grade and papers to write. So these two will have to suffice for today. [Posted with ecto]
At a meeting of a subset of the faculty in my department on Wednesday, the issue of faculty participation in the annual recruiting of admitted students was brought up. Faculty complained that they didn't like being gathered into a room for the traditional "phone bank" with people from financial aid and student advising present to answer picky technical questions; they also complained vociferously about the fact that phone calls were problematic -- the student was often not home when the call came, etc.
So I brought up what seemed to me a perfectly obvious solution: IM. Faculty doing the recruiting would simply make their screenname and a block of time available; they'd then jump online from wherever in the world they happened to be, chat with interested students (who would initiate conversations themselves, thus guaranteeing that they were present and available), and have ready-to-hand a list of screennames and e-mail addresses to refer the student to for issues involving specific AP credits and the like. Plus, students these days grow up with several IM chat windows open on their screens at once, and it's a medium with which they are very, very comfortable. So our making ourselves available would signal a certain kind of approachability and tech-savviness as well -- an added bonus.
I might as well have grown a second head right then and there, or proposed offering all of the students drugs and alcohol in order to secure their enrollment. A series of typical skeptical/dismissive responses was in evidence: the condescending half-smiles, shakes of the head, wide-eyed incredulity, verbal replies that deliberately tripped on the technical terms in order to generate laughs ("so when I, what do you call it, 'sign on,' then the thingamigummy starts working?"). The general tone was that of the pat on the head that precedes the admonition for the kids to run along and play quietly.
There is a very high level of what we might call technical conservatism in academia; academics as a group are very resistant to new-fangled things like e-mail and blogging and IM. This is not to belittle the small minority of academics -- of which I am one -- who are relatively tech-savvy and responsive to these alterations in pedagogical practice. Full disclosure: I didn't use IM regularly until a couple of years ago, and only started blogging in earnest last Spring. And I caught both viruses from students, but in a way did so deliberately -- I want to reach students where they live, and that means not just being able to drop pop cultural references from time to time. It means participating in the form of life of students, mutatis mutandis (and figuring out precisely what has to be changed and what can be preserved intact is a nontrivial endeavor), seeing how they world and what can be done to introduce divergent elements (uncomfortable facts, logical rigor, an analytical sensibility) into their cosmos. So I consider my use of IM, my blogging, and the like to be ways of developing a pedagogy -- at least in part. [There's a geek-cool aspect here too, of course, and I think that my geek credentials are quite intact.]
It sometimes seems to me like my more technically conservative colleagues are implicitly picking up on this broader pedagogical and philosophical issue, and that their resistance to e-mail and IM and the like (even extending to visual slide displays generated by Keynote or PowerPoint -- I fail to see why those are any different from writing on the chalkboard, or using transparencies or even paper handouts) is in a non-trivial sense a resistance to students. If one is going to resist students, why in God's name does one take a job in a situation where students are omnipresent, and in fact constitute the raison d'être for the organization? Just go work in a think tank someplace, or produce policy briefs in an office in your basement. Sheesh. Our job, the thing we get paid to do, is to teach students. And resisting the form of life of one's students seems to be an abandonment of this charge in one of two ways:
- it could be an implicit or explicit judgment that the students' form of life is somehow wrong and needs to change (e.g. Allan Bloom's rant in The Closing of the American Mind about how portable music players make students incapable of appreciating the serious silence that is essential for real thinking, which of course bears more than a passing resemblance to Socrates' rant in The Republic about how the poets are corrupting the youth with their fancy language and tales of sex and violence…anti-hip-hop crusading, anyone?), which abandons teaching in favor of training and drill. And I am not a trainer; my job is not to force people to live in a certain way. Gurus and prophets do that; teachers should avoid such an abuse of their authority.
- on the other hand, it could be simply a way of turning students away -- "leave me alone except for these very few, minimal, constrained kinds of interactions, which take place exclusively on my terms." Again I wonder: if you hate students so much, why put yourself in their path all the time?
Maybe this problem will ultimately be solved by a generational shift, as older folks retire and are replaced by people of my generation who grew up with computers. But will this mean that in twenty years there will simply be another clash involving different technical apparatuses? Perhaps. But I would like to think of myself as the sort of teacher who would be able to remain in touch with whatever the form of life of my students is. Indeed, I can't really imagine exercising this vocation in any other way. [Posted with ecto]
This morning my wife and I refinanced our house again. Rates are good, the housing prices in our region are absolutely insane and show no signs of coming down to earth at any time in the near future, and let's be honest here: assistant professors don't make gobs of cash. At least not compared to what comparably-educated people make in the rest of the job market. Factor in the need for my wife to remain home to be a full-time case manager for our autistic son, and you have a "structural deficit" -- one that we were meeting, as many Americans do, with credit cards. And it was getting quite out of hand. Hence the refinance, and the new home equity loan, and a variety of other financial jiggerings to keep us solvent.
On paper, assistant professors are reasonably well paid at my university. Now, when we factor in the actual working hours, plus the need to do a lot of stuff on one's time "off" (like summers -- even though I have had to teach two summer courses a year since moving down here in order to pay the bills), it works out to a pretty raw deal, which is one of the things I always make clear to those of my students who think that they have a vocation for the academic life. One does not get rich doing this. One does this because one can't imagine being happy doing anything else long-term. It's that calling, that perverse inner drive, that keeps one slogging through.
I find it fascinating that we've built a financial system in which the restructuring of debt can save the borrower so much money in monthly payments; our refinance this morning actually increased the aggregate debt load, but shuffled things around such that we owe less per month. The motor of such a thing, of course, is the expectation of dependable future income and dependable high home prices, both of which are arguably traceable to the Protestant Work Ethic (tm) that sustains the level of marketization required to sustain such a system. [I am setting aside for the moment the issue of whether the Protestant Work Ethic was causally responsible for erecting the present system, or whether it was the most significant cause; the fact is that some such notion -- and arguably the Protestant Work Ethic is only one of the commonplaces tossed up by the same discursive formation that brought us the Inherent Justice Of The Free Market and the Blessings Of Deregulation Of Basically Everything -- provides the legitimating framework for the system as it is now, which is more than enough for my purposes.] So it makes sense to sign papers for a thirty-year commitment to regularly pay back a massive loan, and for there to be legal penalties if one fails to follow through; both of these institutions instantiate the same basic arrangement of commonplaces, and are sustained by the deployment of that arrangement in multiple fora.
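The arithmetic behind "more total debt, lower monthly payment" is just the standard fixed-rate amortization formula. The numbers below are invented for illustration (they are not our actual loan terms): a larger balance at a lower rate over a fresh thirty-year term can still cost less per month than a smaller balance at a higher rate with fewer years left on it.

```python
def monthly_payment(principal, annual_rate, years):
    """Fixed-rate amortizing loan payment: P * r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical before-and-after: a bigger refinanced balance, but at a
# lower rate and stretched over a new 30-year term.
old = monthly_payment(200_000, 0.07, 22)    # roughly $1,487/month
new = monthly_payment(230_000, 0.055, 30)   # roughly $1,306/month
print(round(old, 2), round(new, 2))
```

The monthly number falls even though total indebtedness rises; the cost resurfaces as more payments and more lifetime interest, which is precisely the bet on dependable future income.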
I also find it fascinating that the signature means so much to the process; we must have signed our names hundreds of times over the course of an hour and a half. My personal favorite is always the signature that affirms that this is my signature; coming in a close second is the form that says that we will work with the lender to adjudicate any typos and other ambiguities (which is itself, of course, ambiguous about what constitutes a "typo" and what constitutes an "ambiguity"). Both are aporia moments in the process, I think: places where the apparent solidity of the legal and financial arrangements falls apart and has to be smoothed over by an almost purely arbitrary political action. Language tripping on itself, so to speak. And as usual, when I make a crack about the irony of those forms, we paper the cracks over with a chuckle, a shake of the head, and a move on to the next form in the sequence. The ways in which social action produces the effect of solidity are endlessly fascinating to me…never more so than when I find myself in the thick of them.
Of course, analyzing them like this is perhaps a way of bringing some psychological balance to the fact that we just committed to assume even more debt, and distancing myself from those aspects of the situation -- like when I translated the Yankees' loss in the ALCS last year into an extended meditation on randomness versus contingency. But that explanation would get us into territory that I am not really all that comfortable entering; personal motives aren't my business. [Posted with ecto]
There are times when academia reminds me of nothing so much as a dysfunctional family. Elephants in the middle of the room are ignored in a determined fashion; people lash out at unusual provocations, then apologize afterwards as though that somehow justified their earlier aggression; and pettiness reigns in interpersonal griping and struggles, though it is never acknowledged as such and is more often masked (badly) by appeals to more respectable standards. Add in the enmeshment characteristic of any situation in which your standing, and in many cases your continued employment, is contingent on the opinions of your peers, and you have a recipe for disaster.
Two brief illustrations, shorn of names and other identifying markers.
1) I have a colleague at another institution -- a female colleague, which is important -- who was hired two years ago; her university has a reappointment process every two years of pre-tenure service, so she was up for her 3rd and 4th year reappointment. This colleague of mine is, frankly, not the best classroom teacher; her student evaluations generally reflect this. But she's taking steps to improve. Anyway, she has had run-ins with her department chair -- also female -- over the past two years; such run-ins generally take the form of snide comments about how my colleague dresses, passive-aggressively phrased as offers to take my colleague clothes shopping. No shit. Pressing a little further, we might note that the department chair is only an Associate Professor after almost a full university career, and seems to feel that my colleague threatens her somehow, since she's still young and about to publish a number of very interesting articles -- which are consistently belittled by the department chair in subtle ways (since a frontal assault won't work; my colleague's work is very very good within her subfield, as vouched for by senior people in the field).
The upshot? The department chair recommended against reappointment, spinning the bad teaching evaluations in the worst possible light and downplaying my colleague's efforts to improve her classroom technique. Ridiculous, right? No, wait, it gets better: the college committee to which the files are submitted next disagreed with the department chair and recommended reappointment. So now it goes to the dean of the college, who has two opposite recommendations and no clear guidelines for choosing between them. And I suspect that my colleague's fate at that university will depend on a whole series of personal ties between administrators and professors -- processes that play out without anything resembling oversight, and processes in which petty personal griping seems to play an inordinate role.
2) Speaking of petty personal griping: I am evil, at least according to one of my colleagues here, and my evilness manifests itself in my corruption of the youth (the PhD students) away from the One True Path Of Social Science, which involves an emphasis on "empirical" (read: statistical) work with "policy relevance" (read: problem-solving dissertations that seek to tell policy-makers precisely what to do about North Korean nukes or sub-Saharan African poverty or whatever) as opposed to my colleague's ultimate term of derision, "political theory." [Parenthetically, my work is not "political theory" in the sense of consisting largely of discussions of classical texts and the subtleties of their arguments; half of the book I'm finishing is archive-based historical reconstruction, which looks like "empirical work" to me. But I already ranted about this here.] I know that I am evil and reprehensible because said colleague called me on the phone yesterday to bitch at me for fifteen minutes about how I was using the "wrong" standards to evaluate PhD students for admission to our program -- and then apologized today in a backhanded sort of way, saying something to the effect of "yeah, I was angry, but I'm over it now." And this makes it okay to call me up and yell at me how?
Backstory: said person a) blames me for corrupting two of her PhD students; b) thinks I'm a bad influence because more of the PhD students are attracted to the kind of work that I do than to the kind of work that my colleague does; and c) disagrees with my vision for what our PhD program is supposed to be about. But do we actually have the argument about that issue? No. [Although we might finally be having that argument at the next committee meeting. After I've been pressing to have that discussion for five years.] Instead, there's petty personal shit, passive-aggressive tactics, an allergy to conflict that involves substantive disagreements, and veiled -- sometimes not so veiled -- reminders that I don't have tenure yet.
See? Dysfunctional. [Posted with ecto]
Revolt of the Primitives
Dean Acheson was the U.S. Secretary of State around the time when Joe McCarthy started whipping up his anti-Communist campaign. Acheson, along with President Truman and Acheson's former boss George Marshall (of The Marshall Plan (tm), only the most effective piece of anti-Communist foreign policy in history), was a prominent target of McCarthy and his cronies and allies, and was regularly vilified as being a closet Communist sympathizer, un-American, and so forth. In his memoirs, Acheson memorably refers to the McCarthy interlude as "the revolt of the primitives," a term I find altogether apt.
What was "primitive" about McCarthy? Contrary to what various right-wing wackos might claim, Acheson's objections to McCarthy do not involve any supposedly archaic character of values like patriotism and the need to defend the integrity of the American Way of Life (tm; I probably owe royalties to someone -- hasn't the RNC applied for a trademark on these words? -- for using the phrase, but unless and until they sue me I should be safe). Acheson was a patriot, a firm believer in the inherent rightness of American (and Western, and liberal) values; hence his accusation of "primitivism" can't apply to the content of the McCarthyite charge. Rather, I think that Acheson was objecting to the form of the campaign: its hysterical tone, its deliberate oversimplification of complex issues, its bald-faced nativism. In short, the narrowness of the accusations and the Manichaean character of the proposed solutions: yes or no, right or wrong, loyal American or committed enemy of all that is good.
Welcome back to the 1950s. The new primitives -- we might call them techno-primitives, since they work via the 'Net and the vast swirling morass of Talk Radio -- are apparently in full swing, targeting Ward Churchill for excommunication from the body politic. According to an article in The Washington Post [free subscription required], the Chancellor of the University of Colorado has launched an investigation into Churchill's "lectures and publications," which is "the first step…in the legal process required to fire a tenured professor." At issue are comments that Churchill made right after the terrorist attacks on the World Trade Center and the Pentagon in 2001, in which he said, among other things:
As to those in the World Trade Center…Well, really. Let's get a grip here, shall we? True enough, they were civilians of a sort. But innocent? Gimme a break. They formed a technocratic corps at the very heart of America's global financial empire – the "mighty engine of profit" to which the military dimension of U.S. policy has always been enslaved – and they did so both willingly and knowingly. Recourse to "ignorance" – a derivative, after all, of the word "ignore" – counts as less than an excuse among this relatively well-educated elite. To the extent that any of them were unaware of the costs and consequences to others of what they were involved in – and in many cases excelling at – it was because of their absolute refusal to see. More likely, it was because they were too busy braying, incessantly and self-importantly, into their cell phones, arranging power lunches and stock transactions, each of which translated, conveniently out of sight, mind and smelling distance, into the starved and rotting flesh of infants. If there was a better, more effective, or in fact any other way of visiting some penalty befitting their participation upon the little Eichmanns inhabiting the sterile sanctuary of the twin towers, I'd really be interested in hearing about it.
Harsh words. And offered as part of a harsh essay demanding that Americans examine the record of their own country in committing war crimes, murdering civilians, inflicting harm on others -- and that they do so before blithely condemning the terrorist attacks as uncivilized acts that justify any and all methods of reprisal, as though the United States had never perpetrated acts just as violent.
But it is important to note that Churchill's point is not "fuck America, go terrorists!" but instead "America brought this on itself by acting in similar ways at various times in history." His tone is acid, accusatory, deliberately provocative and designed to piss people off; his rhetoric is overblown, his analogies are slippery [Eichmann was a bureaucrat participating in an enterprise the declared intention of which was a Judenrein Europe; arguably, there's a significant difference between the culpability of such a person and the culpability of a person who accepts the unintended consequences of their day-to-day actions], and his overall moral calculus relating guilt and reprisal is questionable. And this is the point of the exercise, I think: to provoke complacent readers and listeners to re-examine their own firmly-held assumptions about, well, everything. [And is it really any more extreme than identifying a group of countries as constituting an "axis of evil"?]
Churchill's writings are designed to provoke debate, discussion, conflict. He is up-ending the apple cart, and trying to cast some doubt on the self-confident narrative of America in which we are the divinely blessed Good Guys who can do no wrong as long as we follow our Manifest Destiny to bring light to the nations, by force if necessary. In this way, he's a moral philosopher, fulfilling that essentially Socratic function of serving as a stinging horsefly on the ass of state. Don't like what he's saying? Argue back. Debate with him. Give good grounds for maintaining that the United States didn't deserve what it got, that terrorism should be an unacceptable mode of warfare, that the subsequent operations in Afghanistan and Iraq were justified. If you are so confident that you are correct, you should have no problem adducing such reasons.
By characterizing Churchill as a moral philosopher I am deliberately withholding the label of "social science" from his oeuvre. That is because I do not think that his work (and admittedly, I have read comparatively little of it, so I might be entirely wrong about this) seeks to apply an ideal-typical analytic to a set of empirical data in a rigorous manner so as to produce "objective" (in the Weberian sense) facts. Rather, he makes use of empirics to illustrate his broader practical-moral claims, which is what any good moral philosopher or cultural critic does. He's not a scientist -- I don't think that he wants to be -- and so it would be inappropriate to evaluate his work on such grounds. Rather, like a law professor asked to reflect on a Supreme Court decision or a business ethicist asked for a reasoned stance on the acceptability of certain accounting practices, Churchill is contributing to a public debate by forwarding a stance -- one among others -- that responsible citizens should confront.
Except that the 2005 primitives are seeking to fire him instead.
I find the idea of an investigation into Churchill's lectures and publications somewhat ludicrous. What are they planning to investigate? Last time I checked, a tenured professor can't be fired for making claims with which people (especially powerful people, or a majority faction of the people) disagree. I don't know precisely how scholarship is evaluated within American Indian Studies, but my sense is that the field deliberately tries to relax or erode the politik/wissenschaft boundary that is constitutive of the social sciences. So going after Churchill for being political won't wash (unless there is some hidden clause of his contract that prevents such activity; when I was an adjunct professor at a University In New York City some years ago, a condition of employment was the signing of a small white card that was basically a loyalty oath committing me not to speak against the Constitution of the United States or against the Constitution of the State of New York…and this was in 1997(!)), and going after him for not adhering to the standards of his field won't either. So all they could be investigating is whether he is saying things with which they agree, which is very scary shit indeed.
We too often forget that the point of university education is to promote thinking: critical, creative, independent. The precondition for doing so might be a damning critique of what is held sacred, so that the mind can widen a bit beyond the simple repetition of stock phrases from childhood. Churchill can be read as raising extremely "uncomfortable facts" with which Americans generally should have to grapple; in this way he contributes to the overall process of education. [Parenthetically, if his classes were premised on his students coming to agree with his point of view, that would in my opinion be a violation of his academic vocation; promoting thinking and creating disciples are different enterprises. What he does outside of the classroom, or outside of his professional interactions with students, is a different matter entirely…as long as we're willing to accept moral philosophy and cultural criticism as valid scholarly enterprises, which is a whole different issue. For now I'll just assert that the socially surplus character of higher education allows it to support such initiatives, and point out that if it didn't, radically dissenting voices would in all likelihood be completely silenced, and our public discourse would be poorer for it.] Shutting him up is a profoundly primitive response: see fire, fire hot, run away or smother quick. But under no circumstances should you try to learn from the thing that makes you profoundly uncomfortable; no, that requires entirely too much flexibility of mind, too much appreciation of subtlety and ambiguity, too much questioning of the comfortable narrative of moral superiority that legitimates your actions.
I am not saying that Churchill is correct. I am saying that Churchill should be debated with. The primitive response, although simpler and cleaner, could only be justified if its central claim to moral superiority were true -- and how would we know that if the debate was not allowed to take place? [Posted with ecto]
Beware of Universalism
Forthwith, the op-ed nobody wanted, apparently because I'm not a household name. I do not think that the State of the Union speech allayed any of my fears.
Beware of Universalism
Commentators poring over Bush’s second inaugural address for clues about specific policy initiatives are missing the forest for the trees. Major public speeches are better understood as efforts to affect the terms of political discourse, establishing the general framework within which subsequent debates will be carried out. Tone and vision are the critical issues here, not detailed proposals.
Seen in this light, the ominous thing about Bush’s address is its unabashed moral universalism. Bush proposes to confront “every ruler and every nation” with “the moral choice between oppression, which is always wrong, and freedom, which is eternally right,” and to devote America’s “considerable” influence to this cause. Absent from the speech is any set of criteria that would underpin a decision about where to commit American resources, or any declaration of principle that could counterbalance the open-ended universalism of advancing “freedom…the permanent hope of mankind.”
This universalism is by no means a novel development in the rhetoric surrounding American foreign policy. A half-century ago, President Truman used uncannily similar language in calling for a commitment of resources to the struggle against Communism: “The free peoples of the world look to us for support in maintaining their freedoms…we must assist free peoples to work out their own destinies in their own way.” Perceptive observers like Walter Lippmann promptly critiqued the open-ended character of this Truman Doctrine, pointing out that, taken to its logical conclusion, it implied American intervention in every corner of the globe to ensure that freedom was being adequately promoted and defended.
The problem with universal declarations of principle is that they brook no compromise and envision no deviations. Imported directly into politics, they produce what Max Weber called an “ethic of ultimate ends,” according to which the purity of one’s motive trumps any consideration of practical effect. Hence, moral universalism as a framework for policy leads to a situation in which the end justifies the means and any commitment of resources of any type is acceptable in pursuit of the ultimate, absolute goal. It also allows the enactors of such policies to evade responsibility for the consequences of their actions, as long as they can demonstrate that they were sincere in their efforts to advance the cause of the universal principle.
An ethic of ultimate ends also permits its adherents to avoid the real work of politics, which involves bringing moral principles into dialogue with the practical requirements of governance and administration. Such a dialogue necessitates a deviation from the purity of universalist declarations, tempering categorical pronouncements with a measure of prudence. A universal declaration of principle sits uneasily with such prudent tempering, however. Unless notions of prudence and judgment remain a part of the public framing of an overarching political goal, advocates of tempering universalist goals can be easily vilified as traitors to the cause. The seemingly inevitable consequence is an unchecked rush to commit resources to the campaign, culminating in overextension, exhaustion, and perhaps even the discrediting of the original goal as its advocates fail to deliver on their grandiose promises.
This fate can, however, be avoided. The Truman Administration wisely spent the weeks and months after Truman’s speech qualifying and rolling back the open-ended, universalist character of the commitment to defend freedom globally. Secretary of State George Marshall, among others, quickly reassured Congress that certain areas of the world were more vital than others, and that the United States was not about to bankrupt itself by committing resources to every local conflict everywhere around the world. Over time, administration representatives defined a set of core interests and laid out concrete proposals (such as the Marshall Plan) for acting on those interests. These prudent moves involved tempering the universalism of the Truman Doctrine with public commitments to fiscal responsibility, the efficacy of different forms of aid and intervention, and the necessity to consult and cooperate with allies—not simply to evaluate them in terms of their fidelity to the cause, but to collaborate to enact policy in a genuinely multilateral manner.
Let us hope that the advocates of the Bush Doctrine display similar wisdom in the weeks and months ahead.

21stCWeber is a relatively unknown Assistant Professor of International Relations; he blogs sporadically at thisacademiclife.blogspot.com.