Monday, November 17, 2008

Atheism by any other Name

A review of Mitchell Silver’s A Plausible God: Secular Reflections on Liberal Jewish Theology (NY: Fordham University Press, 2006).
Published in Philosophy Now no. 62 (38-9), July/August 2007.

New God or No God – that is the question

The story goes that, while being processed for imprisonment, Bertrand Russell was filling out a form that asked what his religion was. He wrote “atheist” (or perhaps “agnostic”), whereupon the gaoler remarked, “Isn’t it wonderful? We may belong to different religions, but we all believe in the same one God.” This gave the philosopher a chuckle, but there may be more truth to it than he supposed. More recently theologian Karen Armstrong has argued, in her book A History of God (1993), that atheism is always a response to a particular notion of God, and so since what is meant by God differs from era to era (not to mention from culture to culture, denomination to denomination, etc.), so does atheism turn out to be a historically conditioned concept. In its starkest manifestation, this means that today’s atheism could be tomorrow’s theism.

Along comes philosopher Mitchell Silver’s new book, A Plausible God, which tackles directly the question of whether “modern” atheism is compatible with the “new God” of some contemporary theologians. The book focuses on the work of liberal Jewish thinkers, but the author intends that accident of his own interests and commitments only to give the work scholarly validity; the arguments and conclusions are meant to be general. Still, Silver is upfront – literally in his opening chapter – about the problematic nature of his audience, for, Jewish specificity aside, the book speaks to people who are already convinced atheists or else “new theists” – in a word, “moderns.” Moderns are those who, like Silver, fancy themselves children of the Eighteenth-Century European Enlightenment, that is, devotees of rationality, science, and liberal or progressive politics. Silver’s sanguine hope is that that includes an increasing number of us, and his publisher, buttressed by Daniel Dennett’s enthusiastic endorsement, apparently agrees.

Until recently moderns would have been skeptical at best about matters religious, but Silver notes the novel trend of some moderns hankering after tradition. Silver had felt this tug on himself in his previous book, Respecting the Wicked Child: A Philosophy of Secular Jewish Identity and Education (University of Massachusetts Press, 1998), wherein his response was to champion a sense of ethnic identity tailored to Enlightenment values and hence shorn of supernaturalism. He admitted candidly that bringing children into the world was what had prompted his concern about community. Now, a decade later, Silver again reveals a personal motivation for his philosophical investigation, this time his genuine puzzlement about the alternative route chosen by some of his peers, who also seek to enjoy the best of both worlds, modern and traditional, but including God. A Plausible God is Silver’s effort to discover whether this makes any more sense than the gaoler’s happy reverie.

The inquiry is not just an intellectual exercise, for presumably what motivates new-God faith is some kind of felt need. Silver himself may have no such need, perhaps because he has the Zen-like or existentialist capacity to bite the bullet of raw, difficult albeit occasionally pleasurable, finite, ultimately pointless human existence, or simply because, as he himself offers, he is of a moderate temperament that has “less of a thirst for heavenly joy and feel[s] less threatened by psychological hell” (p. 111) … and he has never been in a foxhole? But the rest of us moderns might be missing something essential to our thriving if we truly turned our backs on all that the “God of our fathers” provided. Much of Silver’s book surveys what that might be, and then considers in detail whether the new God is a satisfactory substitute (for as always, God is in the details).

The reader must always keep in mind that Silver is not discussing whether God exists. It is a premise of the book that the old God does not. Lest anyone need reminding of why such a belief is untenable, Silver helpfully provides an appendix that reviews the main arguments for that God’s existence and their refutations. As for the new God, Silver grants that it is on equal epistemic footing with atheism, but that is only because its empirical claims are coincident with those of science. Does this leave enough wiggle room for a God who can “satisfy” the way the old God could? More precisely, can belief in such a God provide the sorts of solace and meaning and inspiration that belief in the old God did? For any benefits accorded by the actual existence of the old God have been ruled out of court ex hypothesi, and any benefits accruing from the actual existence of the new God would presumably also be available to the atheist except for those that depended specifically on believing in her.

The first thing Silver must do is limn this new God for us, but that is a tricky task. As a scholarly text, the book must be faithful to the theologians’ conception it seeks to assess. Since it is a given that no two theologians (or scholars) are likely to agree perfectly, Silver must abstract some essence that, ideally, will be acceptable to all of them. In fact Silver makes the much broader claim on behalf of the new God that it shares its essence with some very old gods indeed, including the Brahman of Hinduism and even the God of many Christian theologians, both heretical and mainstream. Another challenge is that the new God must avoid both the Scylla of violating Enlightenment sensibilities and the Charybdis of being so vapid as not to be worth the bother to believe in. The danger is that the conception Silver settles upon might be a straw God, but he certainly makes a good faith attempt to meet all the criteria. The “baseline new God,” Silver concludes, is “whatever there is in nature that makes good things possible” (p. 42).

In the end, I think, the reader must decide for him/herself whether the set of qualities Silver identifies as crucial are what would be minimally desired in a God. Then the issue is whether that God is compatible with the modernist reader’s beliefs and values. Silver is frankly skeptical that the new theologians have pulled this off. His central suspicion is that they trade on equivocation; that is, when articulating the qualities of their God, they explicitly toe the modernist line, but the very act of calling this object of concern “God” implicitly invokes the old God, with all of its comforting associations. Silver himself, it seems clear, would rather be a “dissatisfied infidel” than a “satisfied believer” (p. 100, echoing John Stuart Mill).

Thus, the Big Lie – adoption of a false belief in the old God because of its superior benefits – is not an option. And, more to the point, neither is the small self-deception of a “useful obfuscation” (p. 101), which is what Silver sometimes suspects the new God to be. To be “clear and honest” (p. 101) is Silver’s modus operandi. Indeed, the reader, no matter whether persuaded of Silver’s thesis, will delight in the incisiveness and wit of his arguments. But there may also be a lingering worry about whether the subject matter has been made to fit a procrustean methodology. Is there no Middle Way between rigid dogmatism and uncompromising skepticism? Silver also acknowledges that his “innocence of [mystical states] surely contributes to a secularist bias,” since “[m]ystics seem inclined to theism” (p. xv). Thus, his effective critique of mystical experience qua experience (on p. 76) could be beside the point if any meaning can be attached to the common mystical claim that samadhi is not a merely psychological state.

Silver does make one major effort to accommodate his theist co-moderns by suggesting in the end that it is all a matter of taste. In fact he becomes a regular Feyerabend of religion, who extols the prospect of a diversity of beliefs – “a vision of free men and women” (p. 120) – and here explicitly appropriates the unintended import of Russell’s joke: “Religious Truth, of which I take atheism to be a species, is plural” (p. 115, my emphasis). In doing so Silver also comes full circle to his Jewish roots, for, as I once heard a rabbi declare, a Jew who does not believe in God is still a Jew, since “Israel” means “one who contends with God,” and that is surely what an atheist does, “or something like it” (p. 120).

Thursday, January 03, 2008

Activism as Integrity

by Joel Marks

A review of Lee Hall’s Capers in the Churchyard: Animal Rights Advocacy in the Age of Terror (Darien, Connecticut: Nectar Bat Press, 2006)

Published in Philosophy Now, no. 67, pp. 44-5, May/June 2008.

Capers in the Churchyard is not a book of philosophy, but it ought to be. Ostensibly about tactics in the animal rights movement, the book is in fact a manifesto for thinking about nonhuman animals in a wholly different way from what we have become accustomed to. The author, Lee Hall, is legal director of the America-based Friends of Animals, an animal advocacy group whose approach to the issue of animal rights is novel even by animal advocacy standards.

The churchyard capers of the title refer to a particularly gross episode of animal activism that took place in England in 2004: somebody absconded with the remains of the mother-in-law of a farmer who bred guinea pigs for a testing lab. Letters were subsequently delivered to the family demanding that the breeding stop if they wanted the mother-in-law returned to her final resting place; and eventually the breeding did stop -- a supposed success of the extreme tactic. However, a number of protestors were later implicated and hauled into court on the charge of blackmail, where they faced long prison terms, and a new anti-terror bill was introduced by the government. Meanwhile, the local populace was outraged by the grave desecration, as was the whole country when informed by the media; and presumably the testing lab found an alternative supplier. So, on balance, was this a victory for animal liberation? Hall wants to impress upon the reader that the answer is most likely “No.” As she puts it: “if the actions of the militants appear to work on some level, it’s neither the level of changing minds nor laws. Indeed, on both counts, they’ve triggered a fierce backlash” (p. 121).

When I first heard about new so-called anti-terror legislation aimed at animal rights activists in both the U.S. and England, I could only roll my eyes in knowing cynicism that Bush and Blair’s new universal pretext was being exposed for the fraud it is. If you don’t like something, label its advocacy “terror.” For example, the Fur Commission USA’s Website applauded the prospect that “[this] major improvement over current law … could provide prosecutors with a substantially greater incentive to prosecute animal rights terrorists …” -- meaning those who oppose the fur industry? But Hall makes clear that even a dedicated activist such as herself can have more than qualms about certain tactics being used, endorsed, or tacitly accepted by some in the movement. For one thing, they play right into the hands of the terror-labelers.

Her objection runs far deeper than that, however; and this is where the book becomes not only tactical advice for activists but also an exploration of ethics. When Hall writes that “There is no victory in changing someone’s conduct because a grave has been desecrated” (p. 118), she does not mean only that the costs to the movement may in fact outweigh the benefits, but, more essentially, that coercion as a tactic is a betrayal of the proper end of an animal rights movement. That end is one that would encompass all animals, which is to say, humans included; and it is nothing less than the elimination of domination and hierarchy from the relations of humans to humans as well as of humans to other beings. This would be a regime of peace, Hall argues, because violence or the threat of violence is only a tool of domination, no matter how apparently benign the overt goal.

A big surprise of Hall’s treatment of activist terrorism is her linking it to the animal reform movement. It turns out that the rubric of “animal rights” in fact masks a deep schism in the movement, and in this case not over (only) tactics but also goals. On the one hand there are those who espouse incremental improvement of the lot of nonhuman animals, even including alliances with the major corporations that use animals for the production or provision of food, clothing, entertainment, health, etc., for human beings, but do not necessarily oppose the use of animals as such. On the other hand there are those, like Hall’s group, Friends of Animals, who oppose any use whatever of other animals for human purposes, and hence their treatment as property or commodities, and seek nothing less than their total liberation from human control or even oversight.

The former are called reformists, since they advocate bettering the conditions of animal use by humans but seem to accept that use in the main; while the latter are abolitionists because they would altogether sever the ties that bind other animals to human dominion and the denial of their natural freedom in a kind of enslavement. Thus, for example, reformists typically push for anti-cruelty legislation and bring pressure to bear on corporations to institute more “humane” treatment of animals on farms, in labs, and the like; while abolitionists call for the outright banning of animal use in circuses, experimental research, etc., and advocate a vegan diet.

The reason I said that Hall’s book ought to be a book of philosophy is that the above distinction cries out for more analysis and argument than she provides. The reader needs to understand what is really at issue; specifically, is the argument intended to move beyond a discussion of tactics, or does it remain one? For example, there are reformists who claim that their goal is the same as the abolitionists’; but they believe that the great mass of humanity can only be brought along stepwise, and in the meantime animal suffering can and should be alleviated. Then the abolitionists could be seen as arguing that their strategy will be more effective in the long run, since advocating only stepwise reforms is likely to have the unintended effect of lulling the public into self-satisfied acceptance of their main habits of animal use and consumption. Thus there is not necessarily any fundamental disagreement between the reformists and the abolitionists, but only one about how best to proceed.

Hall’s book does not portray it this way, however; or rather, the book is divided. For while Hall does, as with terror tactics, offer many examples of the simple ineffectiveness, even counterproductivity, of reformist methods, her brief in both types of cases goes further to question goals, and, indeed, motives. Some reformists are even characterized as selling out to well-heeled corporate sponsors, or at least as being their dupes; and Hall is surely right about the prominence in the advocacy field of strong appeals to potential donors’ sympathies for the plight of abused cuddly creatures, as well as of self-serving claims of victory with each new deal with a major animal user to use animals more humanely. These tactics no doubt bring in a lot of money to certain animal advocacy organizations; but do they hasten the eventual end of animal exploitation or only create another vested interest in their long-term subjugation?

Meanwhile, the connection drawn by Hall between reformism and terrorism is based in the first instance on cases of one and the same person or group espousing both methods in their animal advocacy. What sense is to be made of that? Hall discovers the root cause in “a steely utilitarian philosophy that supports ends-justified manipulation of others” (p. 118). In a word, Machiavellianism: do whatever works to achieve your end. But some of these means, as we have seen Hall argues, subvert the end by failing to appreciate the proper end of animal advocacy.

I find Hall’s vision compelling, but much theoretical work remains to be done. The issue here is as deep as ethics itself, for there is nothing so enduring as the tension between means and ends (except for the tension between self, or “us,” and others, which surely also plays out in the province of animal ethics). If what is truly at issue are means, then the question is an empirical one; and much more is needed than an ad hominem questioning of motives, or the adducing of what might appear to be outré and exceptional cases, such as the titular capers, to clinch the argument. The world is painfully aware of this very kind of issue being fought in Iraq (among many other places): what kind of tactics will win the day there and for whom? It is a very ancient question, and it is not even clear who decides the answer or when it can be decided. (Remember the banner on the aircraft carrier announcing “MISSION ACCOMPLISHED”?) Furthermore, it is equally open to the opponents of Hall’s kind of advocacy to level a charge of wishful thinking or even its own brand of self-serving or deluded motives.

If what is at issue are ends, then the vision of a world where humans leave (or enable) other animals to live “on their own terms” (a frequent refrain in this book) needs to be spelled out in much greater detail and defended as practicable and, indeed, desirable. As regards the latter, Hall does offer us intriguing glimpses of what her ideal would be like; but would humans like it, or even the other animals? For example, Hall argues that “risk is part of living in a vibrant ecology” (p. 113). Apparently she means that if the cost of properly respecting other animals is that the occasional human will be mauled by a wild animal in the neighborhood, so be it (p. 98). But it is also the other animals who will suffer, for “Coming to respect the interests of conscious beings in living on their terms does not mean seeking to erase all the suffering and risk that life involves” (p. 135). Thus, instead of coddled pets there will be animals fending for themselves in the wild. Hall surely believes that in some fundamental sense this will be better for the animals, in the sense of restoring their dignity if not their freedom from pain (p. 71).

It is a noble vision, to be sure, but one that needs to be argued. Otherwise one could honestly question it as yet another human imposition on the lives of other animals: a romantic conception in lieu of a utilitarian one. After all, do not ethologists tell us that hierarchies are also to be found in the natural world, which should come as no surprise if we have evolved from common ancestors? And who is to say that the average animal in the wild might not prefer “a dog’s life” if given the choice (which indeed the modern dog’s ancestor may have had)?

I pose these questions as devil’s advocate. What I would really love to see is another book by Lee Hall, this time focused not on questions of strategy but instead on the proper goal of animal advocacy. The reader needs to have Hall’s vision delineated in much greater detail. As she herself notes, “Advocates might know what they oppose, but they are less sure about a positive vision to replace it” (p. 96). Indeed, she asserts that the whole point of a social movement is “to cultivate an alternative viewpoint, one that takes hold, gains energy, and becomes plausible to enough people to effect a paradigm shift” (p. 73). Hall’s book is filled with succinct, striking, stirring statements, such as, “The likelihood of individuals or cultures asking fundamental ethical questions about vivisection is not strong where those same people routinely interact with other animals by eating them” (p. 60); but these cannot substitute for a sustained, i.e., booklength, development of the alternative vision.

For one thing, the vision needs to be made consistent. There is potentially a deep tension or at least an ambiguity in what Hall’s vision actually is. On the one hand what she says in this book suggests forging a wholly non-dominating relationship between humans and other animals; but on the other hand Hall appears to be advocating an utter separation of human lives from theirs. The latter is instanced when she writes, “It’s simply not plausible that humanity can renounce our privileged position over them, yet live in situations where we could exert our will” (p. 53); thus, she speaks of their “right to be left alone” (p. 52). Then Hall makes this comparison: “Feminists have observed the ways in which society’s extension of protection to women is a bargain that ends up with the women still under [men’s] control” (p. 74). Is an implication therefore that the “solution” for the domination of women by men would be the complete segregation of the sexes? By the way, it becomes clear in the course of this book that Hall also considers the “dominance” that is the root cause of animals’ plight to be a (human) male phenomenon (e.g., p. 90). Presumably Hall does not mean to ship the men off to Mars; but then, how are her various statements about the ideal (or morally necessary) relation between humans and nonhuman animals to be understood in concrete terms?

What is more, any extended treatment of Hall’s vision needs to be theoretical. As I have already indicated, Hall herself can seem unclear on this score. Interestingly, Hall lodges a similar complaint against the terrorists and reformists for failing to understand their own theory (e.g., p. 80). It is nonetheless crucial to resolve, as Hall herself suggests when she writes, “Social justice movements everywhere find guidance in the idea that another world is possible, and that once an idea can be conceived, it can be achieved. Theories can indeed be put into practice overnight …” (p. 137).

In particular: does Hall really offer an alternative to the utilitarianism of her antagonists? Perhaps in dismissing a “steely utilitarianism” Hall does not mean to dismiss utilitarianism outright. So much of this book, as we have seen, inveighs against the bad “effects” of terrorist and reformist tactics. But suppose it turned out that a reformist or even terrorist strategy would in fact be more effective in bringing about a world free from domination (including terrorism); and, Hall’s arguments and evidence notwithstanding, I see no intrinsic paradox in such a possibility. I strongly suspect even so that Hall would insist reformism and terrorism should be rejected. In other words, I don’t believe Hall wants to cast the fate of her vision to the contingencies of existence. The reason to reject terrorism and reformism is not that they might be ineffective, but that they are inconsistent with her vision of a world without coercion, manipulation, and domination.

It may therefore be necessary to transcend tactics and goals altogether. This manifests the deep distinction in ethical theory between consequentialism and nonconsequentialism, the latter being precisely the view that the ends do not justify the means. If one opts for the latter, one has thereby put aside questions of ends and means, and replaced them with a way of living. Note that this approach encompasses both questions of strategy in the animal rights movement and the more fundamental question of how human beings ought to relate to other animals. I gather that is what Hall intends with the phrase “activism as integrity” (p. 19).

One last problem then remains. In classic nonconsequentialist theory (in particular, Immanuel Kant’s categorical imperative), it is not use as such that is proscribed but only “mere” use, that is, abuse. For example, it is wrong to cheat someone but perfectly fine to obtain something from them in a fair exchange. But it seems pretty clear from her book that Hall does not think it is possible in the real world for human beings to use other animals without abusing them. So, once again, the implication seems to be that humans should live and let (other animals) live. But the explanation for this disparity between human-human and human-animal relations needs to be elaborated. Let the title of Hall’s next book therefore be: On Their Own Terms.

Wednesday, May 31, 2006

When I Heard the Learn'd Biologist

Published in the Connecticut Journal of Science Education 40:1 (1-4) Fall-Winter 2002

Recently I had a perfect Socratic moment. It occurred during a talk given by a visiting biologist on the campus of my career-oriented university. I had been looking forward to asking him a question about evolution and education. I thought this would be an ideal opportunity, since the speaker was a geneticist and would be addressing a large audience of students and faculty in our theatre auditorium. Specifically, I was curious to hear what a person whose lifework is presumably premised upon Darwinian principles would have to say about how to teach those students -- some of whom are quite intelligent -- who claim to reject evolution on religious grounds, or sometimes even on supposedly strictly rational/empirical grounds.

For example, is it a proper part of our educational mission to insist on the truth of evolution (as, I am used to thinking, we uphold the truth of the Earth's orbiting the Sun)? Or ought we instead to present it as a hypothesis or theory plus the evidence for it, but without expecting our students to demonstrate any intellectual allegiance to it? My concern has grown out of my own teaching experience, where I, like any other teacher, will commonly refer to or allude to or presume various items of general knowledge in order to make some particular point; but if students are "entitled" to believe that, say, the universe is only 5000 years old, I'm not sure that I can assume anything at all about what counts as knowledge.

As I listened to the speaker, however, my attention was drawn to a very different implication of his remarks. He began with some general observations about scientific revolutions. A revolution in a given field, he explained, is often induced by a specialist entering from another field; that is because an outsider will sometimes see things in a fundamentally different way. The speaker clearly considered evolutionary biology and genetics to be an example of a scientific revolution.

He proceeded to discuss that field in particular, but, in passing, made what seemed to be an extraordinary concession. Evidently referring to the controversies of a Darwinian sort in contemporary society, he said (I quote from memory): "I have opinions on these issues, but in ethics my opinion counts no more than anybody else's; so I will not talk about them." On the face of it this was an expression of deep humility, and may even have been felt to be so by the speaker himself. But I personally was outraged. To me it sounded like that most dreadful of modern dogmas: ethical relativism. Furthermore, it was relativism being employed in the service of a particular agenda.

I have observed this strategy, or stratagem, before, and I believe it is a way of evading ethical and professional responsibility. Let me stress, however, that I do not mean to be making a personal accusation against the speaker, whose motives and awareness I do not know. But even if his thinking is based simply on a conceptual confusion, its effects can be just as devastating as if it were a consciously perpetrated ploy.

What am I getting all hot and bothered about? Well, first I should explain that our speaker is the head of a well-known research laboratory, which specializes in the cultivation of mice for understanding and alleviating human diseases. His lab has provided millions of mice to universities, medical schools, and biomedical research laboratories around the world, and its annual operating budget is in the hundreds of millions of dollars.

I should also disclose that I am sympathetic to the animal rights movement, which holds that nonhuman animals have certain minimal rights that ought to be respected by humans.1 Part of the argument for that view is that humans are very special animals indeed, in no small part because we are apparently unique in our capacity to respect others' rights!

Hence, I readily put two and two together and realized that the speaker's remarks about ethics may have been designed to deflect animal advocates. Clearly his laboratory has a vital vested interest in the use of animals for research. So isn't it "convenient" to be able to brush aside any ethical scrutiny of that kind of research by "graciously" according equal respect to everybody's ethical "opinions"? "You live your life by your scruples, and I'll live mine by mine." How nice ... except for the mice! Such an attitude does not allow the millions of mice his lab exports to live their lives unmolested.

Even if I were not an animal advocate, I would cringe at the speaker's apparent assumptions about the nature of ethics. For, as I have already indicated, I am no friend of ethical relativism. Here I am truly in my own professional bailiwick, but I need not argue on the basis of "authority." The refutation of relativism is accessible to anyone, and it goes as follows. The standard argument for relativism is that different people have different ethical beliefs; therefore ethics is just a matter of personal belief -- it is merely subjective -- there is no ethical Truth. But that argument is simply invalid. From the undeniable fact that different people have different ethical beliefs, it does not follow that all of those beliefs are equally legitimate; no more than it follows that there is no truth or fact of the matter about the shape of the Earth, just because some people happen to believe it is round and others, flat.

Nor does it help to attempt to strengthen the relativist argument by pointing to the comparative unanimity of opinion on scientific matters, as opposed to ethical matters. Here there are two rebuttals. First, even unanimity in science does not guarantee truth: Witness the fates of Aristotelian and then of Newtonian physics. Second, one can question the truth of the premise that near unanimity is more characteristic of science than of ethics; as a former teacher of mine2 once put it, "Which is more certain: That it is wrong to torture a baby, or that quarks have charm?"

Meanwhile, one can argue directly for the falsehood of relativism by pointing out that proclaiming the truth of relativism seems to be a self-contradictory act. Or, less abstractly, one can argue that the primary virtue of the relativist is supposed to be tolerance; but then is not the supposed relativist espousing an absolute value?3

A final key fact accounting for my reaction to the speaker's statement about ethics, in addition to his being a relativist and heading a mouse lab and my being a non-relativist and an animal advocate, is that the linchpin of his laboratory's phenomenal success has been the modern genetics research finding that, to quote their brochure, "Mice are remarkably similar to humans and share 75% of our DNA. [Hence, t]oday, the mouse is recognized by the world scientific community as the most important model of human diseases and disorders."

So at the end of the biologist's talk I raised my hand and, when recognized, spoke as follows. "Despite your disclaimer regarding ethical opinions, Dr. ______, I would like to ask you about your opinion on a certain subject, since your experience and expertise would make it an especially informed one. You have noted that a scientific revolution in one area can be brought about by some influence from another area. So I am wondering if you think there might be a moral revolution in the offing, brought about by the findings of contemporary genetics regarding the extraordinary degree of closeness between other animals -- in particular, other mammals -- and ourselves. Might we come to accord an analogous sort of moral concern and respect to our fellow mammals to that we currently accord our fellow humans?"

I do not think I am exaggerating when I say this question elicited a definite response from the audience, and an atmosphere of tense expectancy ensued. The explicit charge of the speaker series4 of which this was a part is to bring to the university "persons of national stature and prominence in the fields of business and public service [in order to] broaden the horizons of undergraduate students and to enable them -- in an open and informal atmosphere -- to gain exposure to the ethics and dynamics of those areas of endeavor." But the talks themselves are often conducted as a kind of love fest for the distinguished visitor, whose favor is sometimes, understandably, sought by various faculty, students, and administrators (as potential employer, benefactor, and so on). Also, this was being broadcast on live radio, and at the very least one does not want to embarrass a guest.

I had tried, therefore, to phrase my question in the most respectful and open-ended way possible, but at the same time I felt a professional obligation as an educator to plant a seed in the students' minds. Both the audience and the speaker were also able to put two and two together, as my implication was that the very similarities that argue for the use of our fellow mammals as research subjects would seem to militate against that use on moral grounds.

The speaker was visibly hesitant about how to proceed. He began to talk in a desultory way about the closeness of the primates, but then abruptly shifted gears. (Here again I quote from memory.) "Look, I know what you're getting at. Let me tell you that my son suffers from Crohn's Syndrome. And if I have to kill a thousand mice in order to help my son, I will." Then he simply stopped, and his presentation was over.

After a moment there was a round of applause. One may imagine a mix of reactions. (I will not speculate on the relative percentages.) There were no doubt those who were expressing their approval of the speaker's "devastating" response to my question, with the reference to his son paramount in their sympathetic imagination. There were others, such as myself, who wished to extend a courtesy to a man who had perhaps revealed far more than he had intended ... the echo of the words "I will kill a thousand mice" lingering in the great hall.

It was afterwards I realized that this had been my true Socratic moment ... for better or worse! My question had not been designed to increase my popularity. I was the gadfly, attempting to "sting" my university to a better performance of its mission, which is to graduate genuine professionals. I take the latter to mean those who have attained not only technical competence in a given field but also an appreciation of, and a sense of responsibility for, the impact that that competency can have on the world at large.

Furthermore, in true Socratic fashion I had posed a question to a prominent citizen and professional expert that presented him with a contradiction contained in his own utterances. This, I believe, is actually a form of respect, for one is attending to the other person's own opinions, rather than ignoring them or merely trying to impose one's own. Unfortunately, many people view this as an outright attack on their (or somebody else's) God-given right to hold their own opinion on any subject. Hence the death of Socrates.

But I have yet to be martyred for expecting experts to back up their opinions with reasons. And I was indeed thankful for the opportunity to engage in this minimal dialogue; I felt the university was functioning quite as it should. After all, you can't expect everyone to love you when you suggest a potentially fatal criticism of someone's main accomplishment, and couch it in moral terms to boot.

But did the speaker really engage in a dialogue? He did reply to my question, but his reply struck me as mainly rhetorical in nature. Specifically, I heard him making an appeal to the audience's emotions, analogous to a prosecuting attorney's depicting the heinousness of a crime as "evidence" (or an "argument") for the defendant's guilt. Strictly speaking, the speaker did not even address my question, did he?

Or at best he did answer my question, thus: "No, there will be no moral revolution such as you suggest." And his implied argument was, "The scope of morality, at its broadest, is delimited by the needs and desires of one's own species." But that is not a self-evident truth to every listener, and I would be curious to hear him articulate any further reasons in its favor. Are we entitled to neglect the interests of other animals, when their interests conflict with ours, simply because we can get away with it (in the case of mice, because we are bigger than they are)? That kind of "justification" would not work with respect to our fellow humans; for example, I am not authorized to cannibalize your child to garner organs that may be needed by mine. So what exactly is the entitling difference? Again, consider that the central premise of the speaker's own research program is the similarity of other mammals to us.

Meanwhile, I found my own reaction to the speaker's answer to be interesting in its own right. When a colleague afterward asked me what I had thought about it, I said, "He answered like a human." "Well, isn't that exactly what you would want?" retorted my colleague. In other words, isn't that precisely what an ethical response would be? But I meant it in the sense, "Just as a tiger would 'answer' like a tiger if you asked her why she killed a human." Instead I was looking for an answer from this human in his professional capacity, which supposedly legitimates his sacrificing countless nonhuman mammals for the sole benefit of human mammals. Alternatively, I was asking him to exercise his uniquely human capacity of rational and moral reflection.

So I continue to wonder whether the speaker was just a con man, trying to divert attention from the issue I had raised, or whether, more typically, he was simply illiterate in professional ethics, that is, not conversant with the standard theoretical bases for making ethical judgments in his field. It is simply naive to presume that science deals only with objective facts, while ethics has to do only with subjective values. But it is disturbing when a person in a position of high professional power is either unwilling or unable to sustain a rational discussion about the ethical defensibility of his work.

Certainly I am not suggesting that there is an algorithm for coming up with the right answers to questions about right and wrong. Nor, despite my rejection of relativism and espousal of animal rights doctrine, do I maintain that the speaker is proved ignorant because he disagrees with me. All I seek is a professional's willingness and ability to discuss and debate -- and, in the first instance, to recognize and respect -- moral issues that pertain to his or her profession, rather than to be blissfully unaware of them, or to intentionally ignore, dismiss, sidestep, or obfuscate them.

It could well be that I have inadvertently caricatured the speaker's actual attitude and position. But this is just another reason to advocate dialogue, for without the opportunity to probe our respective views in detail, we are left with only stick-figure images of each other. The speaker may also imagine that I hold some radical position, such as totally banning the use of animals for research. But despite my leanings, I am approaching the matter as a genuine questioner, for I have had my mind changed on many occasions in quite unexpected ways by exposure to novel arguments. Somehow we must all strive to strike a balance between our convictions and open-mindedness, lest a general defensiveness push all of our positions towards the limits.

Why didn't the speaker attempt to meet me halfway by discussing the efforts his laboratory takes to treat its mice humanely? Surely there is a broad area for ethical discussion and implementation between the extremes of banning animal research outright and treating animals as mere objects having neither rights nor feelings. Alas, one is left to suspect that the speaker and his laboratory may indeed be guilty of ethical negligence. Thus it was disappointing, even as it vindicated some of my "a priori fears," to read in the very next issue of AV Magazine after hearing the lecture that mice are entirely excluded from the Animal Welfare Act.5 So even the minimal protection afforded to hamsters by this federal law does not apply to those very useful mice.

In conclusion, while I applaud our university's hosting speakers of this prominence, I would also have some of them serve as cautionary models to us educators, that we do not want to turn out graduates whose notions of career success and professional competence are restricted in the way this biologist's appears to be. By engaging them (both speakers and students) in vigorous dialogue we do our proper jobs. Some of us serve the special function of Socratic gadfly, questioning speakers on ethical issues in particular. The university may therefore be conceived as the part of our society which institutionalizes the gadfly -- the academic equivalent of the court jester, perhaps, who is given the mandate and accorded the special privilege of remarking on the emperor's clothes, new or otherwise ... and all, one would hope, for the ultimate good of the whole.

To assure the desired outcome, of course, much more needs to be institutionalized in the university besides the critical questioner at the occasional colloquium. Ultimately one wants a full-fledged professional ethics curriculum, involving critical examination of foundational assumptions, as a component of every career program. More broadly, formal education as a whole, beginning with grade school, ought to place due emphasis on dialogue as a form of learning. Thus, to invoke my original, unasked question for the speaker: If I had to choose between inculcating Darwinism and debating anti-Darwinists in the classroom, then, despite my own Darwinist convictions, I would choose the latter.

A final question for the speaker: Do you suppose science will ever come up with a way to genetically engineer gadflies so they won't be so annoying anymore?


1. The locus classicus is of course Peter Singer's Animal Liberation, 2nd ed. (New York Review/Random House, 1990).

2. John Troyer, Philosophy, University of Connecticut.

3. Fred Feldman makes this point in chapter 11 of his Introductory Ethics (Prentice-Hall, 1978).

4. The Bartels Fellowship at the University of New Haven, sponsored by Henry E. and Nancy H. Bartels.

5. "The Injustice of Excluding Laboratory Rats, Mice, and Birds from the Animal Welfare Act," by F. Barbara Orlans, in AV Magazine (a publication of the American Anti-Vivisection Society), Spring 2002, Vol. CX, No. 2, pp. 2-5 & 9.

Wednesday, May 24, 2006

Cheating 101: Ethics as a Lab Course

Published in Teaching Philosophy (26:2 June 2003 pp. 131-145)

What is the point of teaching about abortion, euthanasia, and capital punishment, if the students are cheating in the course?1 As much as eighty per cent of our students cheat.2 Cheating is the norm. Furthermore, ethics courses are not immune. What was at first perhaps considered only a joke about cheating in an ethics class3 turned out to be a reality.4 A decade ago, therefore, I decided to seize the bull by the horns and challenge my ethics students not to cheat.5 Herein I report on the results of this ten-year experiment.

My own education in this subject began in the late-1980s. One day I gave a "pop quiz" to my students in an ethics course, simply asking for a one-sentence summary of the assigned homework (to see how many had completed it on time). When I was grading the quizzes, I came upon two identical answers. Then I read another answer which sounded familiar, and, sure enough, I found the same sentence when I looked back at a quiz I had already reviewed. This kept happening. By the time I had finished with the whole set, I was able to bunch almost all of the answers into five groups of four or so each. Then, on a hunch, I checked my seating chart (which I used to help me learn my students' names): The groups corresponded exactly to the seating arrangement. My conclusion: The majority of my ethics students had copied from their neighbors!

Welcome to the Real World, as they say. Evidently I had been one of a minuscule minority (although maybe it was different "when I grew up"?) for whom cheating was not even a mental option -- no more so than stealing or killing was -- although of course I knew that some people did cheat (as some people did steal or kill). So it came as a shock to me that most of my students were cheating. When I next looked at them, I saw a roomful of people who were trying to deceive me. A prescription for paranoia? You bet. Nonetheless, a perception of reality.6 What was I to do now?

Pondering the problem, I soon realized that any of the "obvious" solutions could have disastrous consequences for learning. First, given its pervasiveness, my coming down hard on cheating would likely foster a me-against-them (and them-against-me) mentality. Teacher and student would become like cat and mouse; we would be adversaries in crime and detection (and punishment), not partners in learning. Furthermore, I already knew that there was no institutional support for doing this. On a previous occasion when I had caught three of the back-of-the-room boys in the act and, as per the student handbook, reported them to the dean for students, he had somewhat dismissively informed me that no other faculty member had ever done such a thing; and my own academic dean intervened when, as per the student handbook, I gave the three a grade of F. So students might simply avoid signing up for my courses if I gained a reputation for being out-of-step with the permissive environment.7

Another standard approach would be to try to prevent cheating in the first place, by the use of various techniques, such as limiting the writing of essays to in-class examinations (all the better for the rod-wielding don to monitor as he paces the aisles). But this inhibits truly reflective work, which a student can do better when composing papers outside of class. More sophisticated methods of deterring cheating, such as having students write multiple drafts, are simply not realistic for those of us who teach several sections of fully-enrolled general education courses (and are also expected to do research, etc.). At best, implementing such a strategy would mean that the number of distinct assignments, and hence again the opportunities for learning, would have to be drastically curtailed.

But there is an even more pertinent drawback of both punishment and preclusion for an ethics course in particular: What is the point of trying to impose honesty? The form of the course would be contradicting its content. Here I would be, attempting to foster a rational appreciation of right conduct among my students, while at the same time I was implicitly acknowledging my failure to do so, by policing or otherwise manipulating their behavior. It's ludicrous, and counter-productive. My students would see that my actions spoke louder than my (and Socrates') words.

That insight was the key to the method I did finally come up with: What better way to address the subject matter of an ethics course than to have the course itself serve as its own laboratory? Cheating 101 (my nickname for my sections of Introductory Ethics) was born.8

Here is how it works in a nutshell. Students are graded on how much time they spend on the course (distributed among specific assignments of reading and writing and class participation), and not on an assessment of the quality of the work they do (other than its having to meet a certain minimum standard). More precisely, students are graded on how much time they tell me they have spent on the course, even including keeping tabs on their own attendance. Thus, as the syllabus states: "There is nothing standing between you and an easy A except your own integrity." The responsibility for honesty has been placed squarely on the students' shoulders. I honestly don't know where else it could be placed and still be worthy of the name honesty!

The material covered in the course is standard for introductory ethics, except that I occasionally steer the classroom discussion and paper assignments to the topic of cheating. I take a theoretical approach to ethics, so I explicitly apply each theory in turn to the issue of cheating; for example, "Would an Epicurean cheat in this course?" I also spend the first two or three classes discussing the rationale of the system and the detailed syllabus, including administering a written "quiz" (reviewed, but not graded), which the student must also sign to indicate that he or she understands and agrees to the terms.

Such a grading system is sometimes called contract grading, although, unlike in my course, the "contract" need not rule out graded assessments by the teacher.9 Also, contract grading is typically based on a different measure of quantity from mine, such as number of projects completed or number of pages produced; I have chosen hours as the relevant unit because I want to remove all incentive to prattle rather than study. The student is faced with the following choice: "I have to study for n hours to achieve the grade I want honestly. I can therefore (a) not do the work at all and just lie about how much time I have spent, (b) fill the hours with half-hearted efforts, or (c) conscientiously work for the full time." My idea is that while a is the pure test of honesty, b is a test of good faith (since I make it abundantly clear that I expect my students to pay full attention to their classwork and homework); finally, c provides a double incentive of demonstrating total integrity and avoiding the boredom implicit in just "punching the clock" as in b.

All grading, of course, involves a contract, whether tacit or explicit,10 but I mention the terminology because the literature is relevant to the method I chose. Yet even the variety I employed is a standard item in the teacher's tool kit. Whenever a teacher docks a student's grade for poor attendance, or gives a student extra credit for doing an additional assignment, some part of the grade has been detached from a direct evaluation of the student's understanding of the course material. The justification for such practices is obvious: Grades are a powerful motivator, and many aspects of the academic environment other than tests and assessments, such as good attendance and extra work, are strongly correlated with learning; hence, it makes perfect academic sense to use grades to motivate such activities.

My grading system is more radical in that it dispenses with assessment altogether. But when you think about it, any employment of contract grading has the feature of divorcing the final course grade from assessment. I call this the part-whole argument. Suppose a teacher assigns a grade of B based on his or her judgment of a student's mastery of the course material, and then adds a 10-point bonus for actively participating in classroom discussions; the resultant grade would be A. But now the grade has ceased to reflect mastery of content, for the student's comprehension was, ex hypothesi, only at B level.

The beauty of contract grading is that a teacher can stick to the letter (and letters) of the standard system,11 but use grading for a purpose to which it is better and more appropriately suited, namely, to motivate learning, rather than to generate some possibly mythical mark of a student's competence. I speak disparagingly of the latter, "traditional" function of grading because grades on a transcript do not have a univocal meaning. Some teachers grade relative progress while others use an absolute standard; some are easy graders, others tough; some count English mechanics, others don't or don't assign writing at all; etc. ad inf. Furthermore, most college professors do not have a degree or even any training in education and have never formally studied the methods of grading in particular, which generate controversy even among the experts.

There are also deeper, "ideological" issues about grading, such as whether we teachers have any business judging the suitability of our students to fit into outside institutions, be they academic, business, governmental, etc.; whether there might not be some covert purpose in grading in the sense of ranking our students relative to one another; whether an educational institution is justified in putting more emphasis on threshing the smart from the less so than on providing maximum opportunity for all to learn; and whether we professors are perhaps perpetrating a power grab by propping up our presumed authority in our various specialized fields by this extraneous, and hence illegitimate, means.12 I myself believe that that last one has a lot to do, however subconsciously, with why most of us cling so tenaciously to the practice of grading. I have become quite aware of my own grading phenomenology; for example, it seems to me that when I assign a grade of D to a student's paper, I am only topping off my commentary with an insult -- as if I were saying, "And furthermore, you stink!"

Perhaps the most telling argument of all against evaluative grading is that it promotes anti-educational values, substituting various extrinsic motivations for a genuine desire to learn the subject matter.13 I mention all of these issues only in passing, however, because my premise in this essay is not that the entire system of grading rots, but rather that grading is a multifarious phenomenon, which ought to be flexible enough to serve different legitimate purposes and take various forms throughout the curriculum. The proper concerns of an engineering professor could at times be quite different from those of an ethics professor. The purposes that engage me the most when I am teaching my ethics course are to contribute to my students' general education, to enhance their appreciation of philosophy, and to promote their understanding of ethics. I am now convinced that contract grading is one way, and possibly the best way, to achieve all of these ends together.

Here is an example of how contract grading can foster general education. I take it that (1) as with any skill, people learn to write by doing it and (2) feedback is available from many sources. Hence, one recommended technique for harried teachers of writing, who have more students than they can reasonably be expected to critique both frequently and in detail, is to have students trade papers with one another for commentary, in lieu of handing them in to the teacher for a grade ("Trade Don't Grade," you might say). In this way the amount of writing assigned can be maximized. I adopted this technique, and it worked to good effect. Not only quantity but also quality increased,14 inspiring me to bring out a textbook composed of my students' essays.15

My students' reading has also benefited. For one thing, they do vastly more of it (that is, the students who are honestly reporting how much work they are doing), since how much time they spend reading directly affects their course grade. But an unexpected bonus is that students are given an opportunity to enjoy reading, since for my course -- without tests or other graded assessments -- they can read at their own pace and for its own sake; furthermore, re-reading counts.

A contribution of contract grading to philosophical education in particular is that it encourages dialogue. Over the years I have come to view dialogue as the heart of the process of both philosophy and the teaching/learning of it. Socrates has become my patron saint. And even though one advantage of contract grading is that more work can be piled onto the students without breaking the teacher's back, I have been delighted to discover that my ability to provide feedback has been enhanced rather than diminished (for I do still collect all papers and comment on them when I am so moved). I think the reason is that a certain resistance has been removed from our interactions.

Thus, when I make a critical observation on a student's paper, there is no time wasted because the student thinks she "deserves a better grade"; I need not contend against that tedious sort of defensiveness. Instead we can get right to the issue at hand, whether it be a substantive point of philosophy or a technical point of logic or a stylistic point about writing effectively. Meanwhile, my own reactions to a student's work need not be inhibited by considerations of "objective assessment" or even "self-censorship," since now there is no risk of the student's suspecting that she might receive a low grade due to my disagreeing with her about some issue under discussion. Similarly, the student feels freer to express herself about truly held beliefs, a sine qua non of genuine dialogue, according to the anti-sophist, Socrates.

As for contract grading's advancing the goals of an ethics course specifically, it works like this. The only way I have of knowing how much time my students have spent on the course is to have them fill out a log. This means, of course, that a student could be duping me; a student might even have spent no time at all on the course (other than, say, enlisting the services of an Internet paper provider or of a roommate) but put down enough hours in his or her log to receive an A. But then, you see, that is precisely the point. The responsibility of honesty has been shifted entirely to the student's conscience. In my course the choice is stark -- Learn or cheat; do the work or deceive. It is not a game of hide and seek with the teacher; it is instead a confrontation with oneself. Who are you?

But aren't both the students and the teacher getting away with murder? The students don't have to do any homework, and the teacher doesn't have to do any grading; it is a perfect marriage of convenience for the lazy and the dishonest. Well, yes, of course this could happen ... just as the "traditional" grading system also has its infinite loopholes. Any system of grading or education depends for its ultimate success upon its being employed conscientiously; this is equally true for any system of government, business, science, you-name-it -- and that is the case, no matter how many checks and balances have been instituted (need I mention the names "Enron" and "Arthur Andersen"?). It must also be competently employed; one must learn how to teach a course of this nature (as one would any other course).

What is vastly ironic about the present situation is that the system of grading I have been using, precisely because it attacks the problems head-on and challenges the students to justify not cheating, has served as a lightning rod for criticism of the ills that have been rotting away education in this country due to the traditional grading mentality. An example of what I mean comes from the opposite end of the spectrum from the complaint that my system permits dishonest students to avoid doing any work. It is the charge that the system obliges students who are both honest and intelligent to do too much work. For, remember, my system links the course grade to time spent doing course assignments, and not to performance on tests, etc. Under a traditional grading system, a smart student could earn a high grade with far less effort devoted to homework than my system requires. Hence, smart students (that is, those who are also honest) are being penalized; they must work more than they "should have to" in order to get a good grade.

I have actually heard this argument put forward by some of my (non-philosopher) colleagues, not just by teenagers. I find it astonishing and appalling; to me it seems I am being admonished by my fellow educators for trying to further my students' education. Somehow it is deemed "unfair" -- even the word "harmful" has been used! -- to insist that students spend a certain amount of time doing homework for a course, when they could, and hence should (it is argued), get a good grade for doing less. Some colleagues have expressed this sentiment particularly with regard to general education courses, as being less important than courses in the major.16

But what is this argument really saying? It seems to me it is placing more importance on rewarding a small subset of students for their innate intelligence than on encouraging all students to learn as much as they possibly can (within reasonable time constraints). And a point I wish to stress is that intelligent students are the first to suffer -- yes, to be harmed in a real sense -- because of this topsy-turvy view of education. For what I have seen repeatedly is that some of the brightest students are among the most ignorant. That is because they have been permitted to glide through their schooling with minimal work to "get the grade," while their peers have had to struggle (and/or have cheated) just to pass.17 Hence, it is the "smart" ones who are being cheated, by the system itself; by giving them As for their intelligence, we have really been failing them.

Perhaps such an insight explains why my university's undergraduate catalogue specifies a minimum amount of time for homework, thus: "All full-time and part-time students are expected to spend at least two hours of time on academic studies outside of and in addition to each hour of class time."18 But this statement is as widely ignored -- and I mean by the faculty as well as by the students -- as the one that says "Academic dishonesty isn't tolerated at the University." How do I know? Because my students have told me so, consistently and convincingly.

Recall that I employ a system where the burden of honesty about their work is entirely the students'. This has resulted in a degree of freedom of expression in (and about) my course that is perhaps unique. In the assigned papers on cheating, in a decade of classroom discussions, in 500 interviews on a program I hosted at our university's radio station, and in one thousand anonymous course evaluations, my students have made it clear: Just about nobody does even the minimum "expected" homework for any course, and just about everybody cheats. Behold: the fruits of the traditional grading system!

I certainly do not claim that I have discovered the magic formula to cure cheating and laziness and ignorance. Furthermore, there are probably several valid approaches that can be taken, and no one alone should have to bear the whole burden. But I do know that my method addresses these problems, rather than, on the one hand, sweeping them under the rug or, on the other, applying a "cure" that is worse than the disease (i.e., traditional methods that attempt to impose an artificial honesty and end up undercutting both educational possibilities and genuine character development).

I also believe that cheating is reduced in my courses (again, my claim is based on extensive feedback from my students19). Remember that the baseline for cheating, according to national surveys as well as my own teaching experience, is as high as 80%; so if that goes down even to 50%, significant progress has been achieved.

But more to the point: I believe that dishonesty is diminished. For consider: The true measure of honesty in the classroom is not how many students are cheating, but how many are cheats. So suppose 50% of my students actually cheat, whereas in somebody else's course, employing stringent controls, only 10% cheat; it could still be that more of my students are honest because, given the chance, maybe 70% of the students in the other course would cheat.

I claim, therefore, not only that fewer students cheat in my ethics course than would under the regular regime (sans stringent, education-stifling controls), but that fewer will go on to cheat in other courses. Their reason for not cheating will be not that they cannot get away with it, you see, but that it is wrong. The underlying rationale -- or call it my (Socratic?) faith, if you will -- is that when people reach an understanding of why cheating (or anything else) is wrong, then, all other things equal (i.e., with neither carrot nor stick in the offing), they will be less likely to do it.20 After all, if philosophers really thought that virtue cannot be taught, what are we doing teaching ethics courses?21

There is one final objection to my sort of system, however, that really does give me pause: Honest students, who work hard for their grade, sometimes feel, well, cheated by my grading system because of the cheats who end up with a better grade for doing less work. I do acknowledge that this is a flaw in the system. Yet, how could it be otherwise and still be the system that it is, having all of the advantages that it does? (In this best of all possible worlds, did not even God have to rely upon Judas to carry out His plan?) Also, nothing can be evaluated in a vacuum, so the real question is: Are the net disadvantages of alternative systems even greater?

I have tried once again to tackle the problem head-on, by pointing out to my students that one of the great "lessons" of ethics may be that doing the right thing does not necessarily leave one better off than those who do the wrong thing. Alternatively, on the "virtue is its own reward" view, akin to Socrates' claim that integrity is a matter of the soul's welfare, the honest student may be ipso facto better off. However, as much as I do myself subscribe to such a conviction, I also recognize that it most emphatically does not endorse turning a blind eye to injustice (all the better to enjoy the virtuous fruits of suffering it!).

So let me make the following points. First, as I have already noted, the amount of cheating in my course may actually be less than that in other courses; thus, all that may really be distinctive about my course is that, as intended, everybody is talking about the cheating that is going on in it. Second, while the talking can deteriorate into boasting by some of the cheats, this is both understandable and even a good thing in a way. As one student wrote in a course evaluation, "I would hear them complain about your teaching. They would mock you and say that you were being unreasonable thinking we shouldn't cheat although the opportunity was there. Personally, I think that is their way of justifying their wrong actions." I interpret that to indicate that the course has indeed "reached" even the cheats, who now feel that they must broadcast their behavior, to disguise with bravado a feeling of guilt in front of other students who, they now dimly realize, are victims of that cheating.

These cheats are having to confront, evidently for the first time in their academic lives, both the true moral consequences of their behavior, and the fact that it is something they must learn to control on their own, without the crutch of external sanctions. Indeed, sometimes they must actively resist powerful external forces. In a moving testimonial, one evaluation related how the student's own mother advised the student to cheat on a psychology exam (so as to have more time for homework in the student's major).22 Maybe the "payoff" of these dawning realizations will not come for another year, or ten years, but there is reason to believe that even the cheats will have been positively affected by this course. Will not the vast majority of opportunities to cheat in later life -- in personal relationships and professional dealings alike -- go undeterred except by one's conscience?

To conclude my argument for the value of contract grading: Grading is a means, not an end. What end or ends does grading serve, or ought it to? This will depend much on a particular context. But in general, academe seems to hold out two purposes: "private" learning and "public" assessment of learning. As is the case whenever there are two values, they can come into conflict. I submit that when these two do, public assessment should yield to private learning.

The virtues of contract grading are perhaps best summarized by a sample of testimonials by my own students in their anonymous course evaluations. "I liked the grading system because it made me feel like I was in control of my grade." "It is much easier to get involved in this class because the students never have to worry about being right or wrong about their answers." "[The teacher] didn't pressure me. He just made me think." "This system played a big role in being able to contradict the professor." "I really like the grading system because I think it opens the relationship between the teacher and student because the teacher is giving the students a sense of responsibility that they don't get from other teachers." "I enjoyed reading other students' papers, especially those who wouldn't speak in class. I eventually got to hear their ideas and thoughts, as well as a little bit about them." "I never enjoyed reading before this course. Now I find myself drifting into books for hours!! I was at first very turned off by the books given to us but as I came to class and re-read the assignments they were like different books the 2nd time around. This is a very valuable lesson I learned." "I felt myself wanting to learn the information rather than learning it because I had to. Isn't that what college is all about?" "If someone is in college to learn then this is an ideal course." "This course was the most difficult course I have taken at UNH although it was one that I have learned the most from and have enjoyed taking." "I know I could get around the required readings by lying, but this instructor was challenging me to do the readings. I think this is funny because I read more in this class than any other class at this university. I feel that one thing I can take from this class is that I understand myself better." "The only test in the course is to see how you get the grade you receive. 
I believe that this is the best way to show all of the students what ethics is all about." "The personal and professional growth available to those who put forth effort in this course are not duplicated or even approached in any of my other courses." "I should say this is the only class that was able to change my behavior." "[We] lived ethical situations throughout the course in the classroom and also away from it. When this type of classroom setting was brought to the class's attention I was skeptical about it, but as the semester moved along and now at the end I realize how much of an impact it has had on me." "I think [the] grading system promotes honesty and rewards the student for effort put forth." "This is one course I will never forget." All of these comments are typical.

Finally, however, I have modified my grading system once again, incorporating elements of the traditional system, such as in-class examinations, as an additional check on cheating, while still retaining the self-reporting mechanism for time spent on homework. I do this in the same spirit of experiment with which I embarked on the wholly contract grading system. I have done it because I share the emotional response of many of my honest students to the cheating that goes on, namely, disgust. But I am afraid that this is not a good reason, given all of the considerations above, and mainly represents my own weakness. I have retreated partway into the cave (or opted for the blue pill, to allude to the contemporary metaphor from the movie The Matrix), back to the comfort of the familiar shadows and an illusory world in which honesty and studiousness prevail over a few bad apples who cheat or skimp. Even though my system seemed to me to be working as well as could be hoped in meeting the objectives I had set for it, I have become so sensitized to the issue that the cheating which remained, and of which I was aware, became too much for me to bear. Another factor was the peculiar variety of in-your-face cheating that my system engendered.

I should also note that adopting a wholly nonevaluative system such as mine is likely to elicit antagonism from some faculty and administrators.23 The idea of grades as motivators of work and instigators of a course conversation, without at the same time being assessments of the quality of that work and conversation, strikes some as practically incoherent.24 Thus, one may find oneself striving not only with students -- both the dishonest ones who blatantly cheat and the honest ones who think the system is unfair -- but also with colleagues who reject one's reasons or suspect one's motives. Should such disagreements rise to the level of attempted interference, there are three key points to remember: (1) academic freedom applies not only to the content of what we teach but also to how we teach it, (2) even though the faculty as a collective are responsible for academic standards, academic freedom resides in the individual faculty member, and (3) as my university's faculty constitution puts it, "the individual instructor has [not only] the prerogative [but also] the responsibility of making use of such methods, techniques, books, and materials as he or she considers useful to fulfill his or her objectives as an educator, and the intent and purpose of the course" (Article II, Section 1; my emphasis). Still, the end result can be controversy that affects the teacher's relations with all constituencies at the institution. If, like Socrates, you relish the good fight, your career is made. But for most of us, the effect on the emotions can be draining.

Instead of continuing my solitary struggle against an institutional ill, should I seek an institutional solution? One tactic that has been tried with some success at some schools is an honor system or honor code. I can imagine rallying the troops at my university: challenging the student body in op-eds written for the student newspaper; bringing the idea before the faculty senate; broaching it with the administration. What holds me back is twofold. First, there is the practical consideration that what prompted my grading experiment in the first place was the dearth of administrative and faculty support for doing anything about cheating or even recognizing the existence of the problem (and various allusions in the preceding paragraph and elsewhere in this essay suggest that this attitude has only become more entrenched with time). Second, there is the theoretico-ethical consideration that honor systems tend to incorporate a significant enforcement component, which seems beside the point of what I have been trying to accomplish. Indeed, massive cheating has occurred at institutions that have an honor code.25 Nonetheless, I do not rule out this avenue, and the literature is certainly pertinent.26

Another sort of institutional approach I mention in passing, only because I consider it to be ridiculous. This was tried at the main (College Park) campus of the University of Maryland, where students who signed a pledge against cheating were offered discount cards at local merchants.27

So for now I am back to dealing with the problem on my own. But already I am observing that for the sake of what is in effect a cosmetic change (for cheating goes on no matter what), my partial reversion to the old way of grading has brought in its wake the host of pseudo-educational issues that are so loathsome about the traditional grading system. I see the intense focus on what will "count" on the exams rather than luxuriating in the expanse of learning, the hoarding of helpful ideas during class discussions lest the student give away his or her "competitive advantage" on subsequent graded assignments, and the hesitation, both by students and by teacher, to express genuinely held opinions about ethical issues for fear of grade repercussions or of appearing to be biased in grading, respectively. Thus, this story is not over.


I would like to acknowledge the encouragement of Michael Morris, editor of our university's in-house newsletter, Reflections, where an earlier draft of this essay appeared last spring; of Robert Rafalko, who wrote a companion piece about his use of the method over the past five years; of Mitchell Silver (University of Massachusetts at Boston), who has been using it ever since reading about it in my AAPT article (see Note 5); and of Lawrence DeNardis, president of the University of New Haven. I would also like to thank Michael Kaloyanides, Erik Rosenthal, Ralf Carriuolo, and Charles Vigue for their unwavering support of academic freedom.

1. James B. Gould makes the analogous or general point in his essay, "Better Hearts: Teaching Applied Virtue Ethics" (Teaching Philosophy, 25:1, March 2002, pp. 1-26). However, in his emphasis on character education in our ethics courses, Gould curiously overlooks the problem that is staring us right in the face in our own classrooms.

2. E.g., "Throughout the 1990s, nearly 80 percent of Who's Who [Among American High School Students] teens have consistently admitted to cheating in school," from "Students' Priorities Yield Sorry Results," a report by Educational Communications, 2002. There is no reason to think the numbers drop appreciably when these students enter college, and every reason not to (see seq.).

3. In a Sunday "Calvin and Hobbes" cartoon from 9/12/1993, the little boy Calvin relates to his imaginary companion Hobbes (the tiger) his internal dialogue earlier that day about whether to cheat on a test at school. "So what did you decide?" asks Hobbes near the end. "Nothing," replies Calvin; "I ran out of time and I had to turn in a blank paper." "Simply acknowledging the issue is a moral victory," comments Hobbes. "Well," concludes Calvin, "it just seemed wrong to cheat on an ethics test."

4. E.g., Brian Cornforth, an instructor at San Diego State University, caught 25 of his 75 undergraduate business-ethics students cribbing from a pirated test key ("Need Someone in Creative Accounting?" by Jamie Reno in Newsweek, 5/17/99, p. 51), and 31 engineering students at Carleton University in Ottawa were caught plagiarizing an essay on ethics ("Students cheat their way to ethics essays," Reuters, 3/28/02).

5. See my "Cheating: Two Responses" (American Association of Philosophy Teachers News 15:3, November, 1992, pp. 5-9).

6. Looking back I wonder whether a more innocent interpretation of this episode was possible. All I know is that at the time, when I confronted my class, no one suggested any. In any case, the Who's Who statistics (see Note 2) and my subsequent discussions with hundreds of my students make it clear that massive cheating does take place.

7. Of course my university is not alone. Presumably this is another trend begun in high school, where, according to Who's Who Among American High School Students, 95% of cheats avoid getting caught (and presumably few of the remainder are punished severely) ("Cheating and Succeeding: Record Numbers of Top High School Students Take Ethical Shortcuts," op. cit.).

8. By the way, lest I be perceived as a plagiarist (!), let me acknowledge that the title of this essay, "Cheating 101," was previously used for a handbook by Michael Moore, a then-student at Rutgers University. Published in 1992 and subtitled, "the benefits and fundamentals of earning an easy 'A'," his how-to guide achieved nationwide notoriety. Moore defended the work as a wake-up call to lax enforcers of academic "integrity" (which I put in scare quotes, for reasons to be explained in this article; and cf. Note 27 below); see his op-ed in the Hartford Courant (1/14/92), "Does the educational system encourage cheating?"

9. An excellent introduction to the virtues of this kind of grading is "Contract Grading: Encouraging Commitment to the Learning Process Through Voice in the Evaluation Process," by Tammy Bunn Hiller & Amy B. Hietapelto (Journal of Management Education 25:6, December, 2001, pp. 660-684). Cf. also the student testimonials near the end of the present article.

10. I thank my colleague David Morris for pointing this out.

11. Our university catalogue says, for example, that "B = Good." Some of my (non-philosophy) colleagues assume that such stipulations rule out my grading system. But, having been trained in conceptual analysis and logic, I ask the question, "What does 'Good' mean?" and point out the analogy of the logical operators, whose entire meaning is given by their truth table definition. Thus, even though we informally refer to the wedge as "or" (or, more precisely, "and/or"), its exact and only meaning is a certain set of Ts and Fs in a truth table. Just so, the exact meaning of "B" or "Good" in the university catalogue will be relative to a grading system. In my system, "B" or "Good" is precisely defined as "Puts in x amount of hours on the assigned homework, which must satisfy certain specified criteria." Hence, a student who does x amount of satisfactory work performs at the B level, i.e., does "Good" work, in my course.
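The analogy can be made concrete. A minimal sketch in which the wedge is given entirely by a lookup table and "B" by a stipulated threshold; the threshold x and the function name are my own illustration, not part of any actual catalogue:

```python
# The wedge ("or") means nothing over and above its truth table.
wedge = {
    (True, True): True,
    (True, False): True,
    (False, True): True,
    (False, False): False,
}

# Just so, within a given grading system "B" ("Good") is exhaustively
# defined by a stipulation.  x is a hypothetical hours threshold.
def earns_b(hours_worked, work_satisfactory, x=5):
    # "B" = satisfactory work of at least x hours, and nothing else.
    return work_satisfactory and hours_worked >= x

assert wedge[(False, False)] is False  # the single False row
assert earns_b(6, True)                # meets the stipulated definition
```

The design point is that in both cases the definition is relative to a system, so "B = Good" constrains nothing outside the grading scheme that defines it.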

12. Cf. "Institutional Obstacles to the Teaching of Philosophy" by Michael Goldman (Metaphilosophy 6:3-4, July-October 1975, pp. 338-346).

13. Cf. "A Proposal to Abolish Grading" by Paul Goodman in Compulsory Miseducation (New York: Horizon Press, 1964).

14. I suspect one explanation of the latter is that the average college student, being a teenager, is more likely to want to impress a classmate than a teacher, and also to take a classmate's criticism to heart.

15. From the Theoretical to the Personal: Essays for and by Students about Ethics, which I self-published for the use of my own students (with any profits earmarked for scholarships at our university).

16. Perhaps it is pertinent to point out that at my university there are only two philosophers among the more than 150 full-time faculty, and we both use the grading method in question.

17. The same Who's Who surveys that highlight cheating in our schools point to a deficiency of homework done by the best and the brightest. "In this decade [of the '90s], fewer and fewer high-achieving teens find school challenging. ... More than half (54%) of Who's Who teens surveyed in the past three years claim to have spent only an hour a day - or less - on homework. Who's Who teachers agreed" ("Students' Priorities Yield Sorry Results," op. cit.).

18. Playing with the numbers, I have discovered a rather interesting correlation (whether it be coincidence or the original intent, I do not know): The required hours make being a full-time student equivalent to having a full-time job. For consider: A 3-credit-hour course meets for 2.5 hours per week (as academics have 50-minute hours, just like psychiatrists), and twice that is 5 hours, so each course requires a (minimum) weekly total of 7.5 hours, which is like a typical 9-5 workday minus half an hour for lunch; and the typical workload is 5 courses. Ergo Q.E.D.
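The arithmetic in the note can be checked directly; this sketch simply reproduces the calculation, with all figures taken from the note itself:

```python
# A 3-credit course meets three 50-minute "hours" per week.
class_hours = 3 * 50 / 60          # 2.5 clock hours of class per week
homework_hours = 2 * class_hours   # twice that: 5.0 hours of homework
per_course = class_hours + homework_hours  # 7.5 hours per course per week

full_load = 5 * per_course         # five courses: 37.5 hours per week

# One course per week equals one 9-to-5 workday minus a half-hour
# lunch; five courses equal a five-day work week.
assert full_load == 5 * (8 - 0.5)
```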

19. For some comments from the students themselves about this and other results of using this grading system, see the end of the present article.

20. But is cheating wrong? This becomes a central question of my course. My own answer, naturally, is "Yes." I argue that the reason is purely Kantian, although I also recognize the tremendous social costs of knowledge not gained and trust lost. Cf. "Why Cheating Is Wrong" in my book, Moral Moments: Very Short Essays on Ethics (Lanham, MD: University Press of America, 2000).

21. I put a question mark after "Socratic" in this paragraph because Socrates apparently believed that virtue cannot be taught. But I take this to be a separate issue about whether all knowledge is remembered. I'm sure Socrates would agree that understanding is the key to right behavior.

22. Who's Who also reports that the parents are as blasé about cheating as are the students. "[A]pproximately two-thirds of both students and parents say that 'cheating is not a big deal'" (op. cit.), although there is some indication that, while the students are definitely condoning what they know to be pervasive, the parents are mostly just unaware of even the cheating by their own offspring.

23. One particular bugbear is so-called grade inflation. It is a fact that the normal curve has taken a vacation from my ethics course, and the final grades are heavily skewed to the high end. Two-thirds of my ethics students, during the decade I have been using the contract grading system, have received a course grade in the A range; almost half of these have been A+. I make two observations. (1) Let us look at the glass as more-than-half full; is it not worthy of note that more than two-thirds of the students did not receive A+? Is it not also remarkable that scores of students, when they fully realized the work load and the expectation of honesty, dropped out of the course altogether rather than cheat or fudge their way to a grade they desired? (2) “[I]t should be our goal that every student get an A in every course,” writes psychologist Steven D. Falkenberg in "Grade Inflation." Although Falkenberg is discussing the grading of content mastery, his essay alludes to a Skinnerian notion which is relevant to my concerns, to wit: “When the student does poorly it is because the instructor failed to motivate the student, failed to captivate her/his interest, and failed to provide appropriate learning activities and instruction ...”

24. I put aside the question of whether this objection is simply an ignoratio elenchi to begin with. For I do provide plenty of assessment of the quality of my students' work. It's just that I don't allow it to influence the grade I submit to the registrar for inclusion on their permanent transcript.

25. E.g., a major incident of cheating involving at least seventy-one midshipmen occurred in 1992 at the U.S. Naval Academy in Annapolis, where there has been an honor code in place since the Korean War (Newsweek 9/27/93, p. 44; Associated Press 1/25/94 & 4/1/94); and in 2001, 122 students were under investigation for plagiarism in an introductory physics class at the University of Virginia, whose much vaunted honor code was established in 1842 (Associated Press, 5/10/01).

26. See, e.g., the excellent issue of Perspectives on the Professions (a periodical of the Center for the Study of Ethics in the Professions at Illinois Institute of Technology; 14:2, January 1995) devoted to the subject. A resource is the Center for Academic Integrity, highlighting the work of Donald L. McCabe of Rutgers (appropriately enough; cf. Note 8).

27. “College students offered honesty incentive,” Associated Press (2/28/97). I discuss the absurdity of this in “Rightness and Rewards” (Philosophy Now, no. 37, August/September 2002, p. 47).

Friday, February 17, 2006

"There's No Room in the Worksheet" and Other Fallacies about Professional Ethics in the Curriculum

Published in Teaching Ethics: The Journal of the Society for Ethics Across the Curriculum (v. 4, n. 2, Spring 2004, pp. 77-88)

The majority of my students profess a total amorality when it comes to professional ethics.1 Taking their words at face value, which I have elicited in countless papers and class discussions over the decades, their attitude is: Whatever you need to do to "succeed," to get ahead, or even simply to remain employed ... do it. I am tempted to conclude that Enron Corporation and whatever other notorious examples one cares to mention -- from business, politics, criminal justice, law, scientific research, health care, engineering, sports, entertainment, and, yes, education -- are not anomalies, but the norm.

Of course my experience is not uncommon. Yet, despite growing recognition that the ethical education of professionals is sorely lacking, attempts to establish a professional ethics curriculum continue to encounter resistance at many colleges and universities. This is true even at those -- perhaps especially at those -- that emphasize professional programs and majors. The main stumbling block seems to be a purely practical one: How do you fit a course on professional ethics into academic worksheets2 that are already over-crowded with essential technical courses in every professional discipline?

I maintain, to the contrary, that the real problem is one of attitude and will, and these in turn rest upon a set of mistaken notions about the nature of professional ethics. In this essay I shall highlight what I take to be a number of fallacies about professional ethics (and ethics tout court) and suggest more appropriate ways to think about these things.

Ten Fallacies

Fallacy No. 1: Everybody knows the difference between right and wrong, so there is really nothing special that needs to be learned or taught in the realm of so-called professional ethics.

Response: It may be true (although I'm not really sure, since this is, at least in part, an empirical question) that we are all endowed with a conscience, which, furthermore, is a veridical one. But this is no more than to say that we have a certain capacity, just as we may all be supposed to have the capacity for language. That elementary fact, however, does not preclude the recognized need or desirability of refining our innate abilities through informal and, to the point, formal education. For example, we all walk into the first grade classroom with some facility in our native tongue; yet, as every college freshman laments, even twelve years of instruction in English (or whatever language) have still not sufficed to make us fluent. The analogous point may be made about conscience.

Furthermore, the fallacy presumes that an understanding of ethics tout court will equip one to deal with ethical issues on the job at the level of professional proficiency. But again, by analogy to other areas of knowledge, one is not necessarily ready to practice, say, engineering simply because one has a good grounding in basic science, nor ready to give a flute recital because one has a solid understanding of music theory, etc. Just so with professional ethics, which is both a specialized and an applied discipline.

A "corollary" of Fallacy No. 1 is the not infrequent claim by instructors in the various professional fields that they have all the competence required to teach their students about professional ethics -- again, both because they (the instructors) are endowed with a basic ethical sense or conscience and because they are familiar with their own professional practice (cf. Fallacy No. 3 below). In fact, it is often perceived as insulting by these instructors should someone suggest that they hand over their "charges" to an ethicist for a course on same. But, then, why do they lack a similar "sensitivity" about having, say, a professional in English teach their students about writing, even technical writing? Is it because they don't mind being considered illiterate in their own professional practice? Clearly not. So somehow there is the perception that, unlike English, ethics and professional ethics are not real disciplines, just matters of personal opinion. I submit this evidences a profound lack of understanding of a field that most professionals, including teachers of the various professions, are simply unfamiliar with, not having had a thorough exposure to it in their own professional education.

Fallacy No. 2: If somebody doesn't already know the difference between right and wrong, or, knowing it, has failed to internalize it sufficiently into their motivational psyche by the time they reach college, it is too late to do anything about it ... or, at least, no mere classroom instruction is likely to have a significant ameliorative effect.

Response: Psychologists tell us that by the time we are three years old, our personalities are pretty much set in stone. Presumably something analogous is true about our moral outlook on the world. A hoodlum does not typically become a saint by taking a course on sainthood. Nevertheless, if we did not believe in the power of formal education to transform people, and not simply add to their "storehouse of knowledge," then I think much would be lost from the point and purpose of our profession. (For education is our profession, is it not? While we may also be engineers, fire fighters, and philosophers, our salaries at institutions of higher learning are first and foremost for assisting others to learn ... yes?)

As a matter of fact, some psychologists have championed the idea that our moral personalities do or can undergo significant change throughout a lifetime. Lawrence Kohlberg, after Piaget, was a pioneering figure in this regard.3 He particularly emphasized that life experiences, including educational interventions, can be crucial to moving from one “stage” to the next. While his work can be criticized for begging some central conceptual issues, such as whether later stages are equivalent to higher stages and whether principled reasoning is necessarily superior to socially sensitive decision-making, it does provide heartening scientific support for the common-sense view that people are capable of moral maturation and benefiting from moral instruction.

I cannot say exactly how to "measure" the efficacy of a professional ethics course in particular, but I would not merely dismiss the prospect either. The testimony of graduates on some long-term assessment instrument might well reveal the perceived value of such a component of our students' professional education. My university, for example, is implementing a simple questionnaire, to be sent to alumni at five-year intervals, that asks how they now regard various components of their university experience. For what it is worth, I know that were I to be called upon to comment on the impact of formal ethics study on my own professional behavior, I would want to write reams (as I have done). And I don't just mean about how I have been motivated to try to persuade others to study ethics! I would discuss day-to-day decisions about teaching, interacting with colleagues, voting on academic policy issues, and so forth.

Fallacy No. 3: Professional ethics is a matter of obeying the law and observing standard practices, so there is no need for anything other than instruction in the laws that pertain to one's profession and instruction by practitioners in one's field, followed by or in tandem with one's own experience on the job.

Response: Laws have loopholes, and laws can themselves be immoral. These are commonplaces. There must be a well-developed conscience to fill these gaps and show the right path.4 But even when the laws are an adequate guide, one must be motivated to comply with them. To depend on fear (of punishment) as the great social motivator is to ask for a police state (with all its attendant criminal ills, perpetrated mainly by the legal authorities!), or else to leave ample room for criminal behavior to thrive. An obvious alternative, or supplement, is to strive to promote norms of behavior by inculcating personal understanding of the value of avoiding wrong and of being and doing good, and also social attitudes of approbation and disapproval that reinforce this understanding -- in a word, ethics and morality.5

Meanwhile, standard practices are another fallible guide. Just because all of your co-workers swipe paper clips from the supply room doesn't make it a good idea or something you should emulate. Ethics doesn't derive its notions of right and wrong simply from fitting in with the prevailing milieu.

Fallacy No. 4: Professional ethics is a matter of practice and not of theory; i.e., it is an "applied" discipline.

Response: This assertion presumes a dichotomy between theory and application that is fallacious because it is excessive. While there is surely a distinction to be drawn, application cannot be independent from theory, for precisely what is to be applied is ... a theory! Thus, while professional ethics is definitely an applied discipline, that does not imply that theory can be dispensed with. On the contrary, ethical theory must be firmly understood so that it can be applied intelligently and effectively.

Without theory, one is left with “intuitions” or even mere words. How often, for example, I hear my students assert that something is “wrong” (that is, when they’re not in their relativist mode), but when pressed they are completely inarticulate about what makes it wrong or what it means for something to be wrong (in the relevant sense). They literally do not know what they are talking about. Hence, they are in no position to defend their original assertion, other than to reiterate it ever more emphatically.

Upon further questioning – as every philosophy teacher knows – the students will often be found to be intuiting on the basis of some implicit theory – commonly egoism. I see one main purpose of an ethics course as helping the students to become aware of their own theoretical preconceptions and exposing them to alternatives so that they can weigh the relative merits in the light of explicit reflection. The practical ramifications of such an exercise can be profound indeed, since any alteration of theoretical outlook potentially affects every decision a person ever makes. Even without fundamental alteration, a better understanding of one’s assumptions can bring clarity of purpose and more consistent and finely tuned actions.

Fallacy No. 5: Professional ethics is best conveyed by integrating it into and throughout the entire professional curriculum rather than isolating it in a separate course that can then be easily ignored in the rest of one's education (and subsequent career).

Response: Analogous to the preceding fallacy about theory and application, this assertion presumes a false dichotomy between separate instruction and integrated instruction. And as earlier, I need only point to other basic fields, such as English. Does the obvious desirability of writing throughout the curriculum preclude the need for dedicated courses in English? Of course not. Much more likely, the reason for having separate instruction and practice in English is that the teacher of business or engineering or criminal justice or sound recording or biology does not want to keep interrupting the course to teach and inculcate the rules of grammar, etc. But assigning writing in all of those non-English courses will help to instill and develop what has been learned in the English courses. Separate and integrated are therefore complementary components of a complete education.

Fallacy No. 6: Professional ethics is a special kind of ethics, which is mainly about divergences from "ordinary morality."

Response: This is an especially dangerous idea, which nonetheless seems to be borne out by such obvious examples as the defense attorney whose professional responsibility is primarily to her client and not necessarily to truth, public safety, or justice, and the soldier whose professional responsibility is to protect her country by lethal means if necessary. But most ethicists would probably argue that, underlying many apparent differences, there are fundamental principles of ethics, or even a single Supreme Principle. Hypotheses regarding such principles, or such a Principle, constitute the core of most ethics courses; they are the theoretical elements of the discipline. Thus, what to the "untutored" might appear to be exceptions to morality -- one of those special privileges society accords to professionals -- will more likely turn out to be instances of a general rule that is being applied under special circumstances.

The danger of mistaking the latter for the former is that the idea of exception or exemption or special privilege could be (mis)taken to grant a certain license and prerogative to the professional in the realm of morality, such that the professional might profess to become a moral law unto him- or herself. Under such dispensation, even an entire profession might arrogate unto itself the power to exempt its practitioners from the common morality, as would be the case, for example, were intelligence professionals to designate themselves completely free agents when interrogating terror suspects.

(Analogous comments could be made in response to the widespread notions that ethics, and hence professional ethics as a special case, is "subjective" or "relative" or just a "cultural" phenomenon.)

Fallacy No. 7: The supreme end of professional ethics is the promotion of one's profession; the highest ideal is, therefore, "loyalty" to the profession and one's colleagues.

Response: This too is a potentially noxious idea, but, alas, a truism in the eyes of many professional organizations. Analogously to the preceding response, the mistake is to focus on a derived application, to the neglect of its theoretical justification. Precisely what accords significance to a profession is, presumably, its value to society. Therefore, if the welfare of the profession and of its practitioners comes to be seen as the summum bonum, the cart is being put before the horse, and very likely to the detriment of the profession's ability to carry out its warranting mission.

Fallacy No. 8: Professional ethics comes into play only when there is some crisis in a particular profession.

Response: It is certainly the case that public and even professional consciousness of professional ethical issues correlates with media attention to egregious wrongdoing in a particular profession or industry. It does not follow that there are not also pervasive problems, which are less likely to receive publicity. Here I think of the analogy of medicine. Until recently in this country, and to this day in most other countries, the time to see the doctor is when something goes seriously wrong. Now in the United States we recognize the importance of preventive care: The best time to see the doctor is before a crisis arises, to make sure that healthy strategies of life are being followed. Just so, I would argue that professional ethics is an area of expertise and reflection that is best employed preventively, or as a vaccination if you will.

But the fallacy runs deeper still: I have heard a CEO, who was introducing a panel discussion of recent corporate scandals, express regret, possibly even shame, not only about those scandals but also about the need to discuss them. In other words, his displeasure had become displaced from its cause to its possible "cure" -- as people will sometimes talk about going to the doctor in the same hushed tones they use to refer to an embarrassing ailment that sends them there (if they will mention it at all). But this is a huge impediment to the necessary, and forever ongoing, vigorous, and open discussion that would help to make such crises less likely to occur in the first place.

Fallacy No. 9: Professional ethics is not something one needs to emphasize in formal education, because the school of hard knocks all by itself steers professional practice toward ethical behavior, which turns out to be the kind of behavior that works best.

Response: This is a popular view among some business professionals, who argue that the best possible reason to be ethical is precisely that it’s good for business. I call this “The Other Hand Defense,” as it appears to postulate a complement to the “invisible hand” of Adam Smith, according to whom “[Every individual] neither intends to promote the public interest, nor knows how much he is promoting it. . . . [H]e intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intentions."6 The present argument postulates the converse: By focusing on ethically right behavior, a businessperson will be led by an invisible hand to maximize his or her profits.7

As nice as it would be to believe in some such pre-established harmony or mechanism of nature and society, no unprejudiced observer of the world can plausibly maintain that it always and of necessity works that way. Without going to the other extreme of insisting that “no good deed goes unpunished,” I submit that it is reasonable to suppose the relation between “doing well and doing good” is contingent and hence won’t always be a positive correlation (some SRI8 proponents and afterlife aficionados to the contrary).

Perhaps it is enough to hold that it is generally true nonetheless. Plausible explanations of why it should be true are available, such as that a certain degree of trust is essential to the successful conduct of business (as with most human affairs), and trust is more likely to be maintained in the long run if it is based on fact, i.e., on trustworthiness.

But our students often have a hardened sense of how things get done in “the real world,” which makes them quite skeptical about such theoretical assurances. For one thing, many are blissfully unaware of the recurrent scandals in the various professions. Even when they do follow the news, the lesson they are most likely to learn is: Don’t get caught. What other “moral” could be drawn if there is no other way to succeed than to “bend the rules”? The students may also be simply correct that the cases they hear about in the media are a skewed sample in terms of likelihood of being found out and punished, but a representative sample in terms of how business is normally conducted.9

But the most profound reason for labeling the Other Hand defense of professional ethics a fallacy, I maintain, is that it puts success ahead of ethics. The bottom line remains financial profit. Thus, if one were ever to find oneself in a situation where ethical behavior did not seem to promise the best outcome for one’s personal or business interests, what would this stance advise? It seems to me to invite the wrong kind of thinking: “What’s best for my business?” rather than “What is the right thing to do?” The danger is that it will invite exceptions to the rule of ethics.10

Fallacy No. 10: Ethical beliefs are part of one’s religion, or else a private, personal matter between oneself and one’s conscience; they are therefore to be respected by others and hence not to be directly taken issue with, or even broached or mentioned, by others -- and possibly not even by oneself (except through religious channels).11

Response: This is a response I commonly hear from my students in any ethics course. I find it distressing because it seems to me to indicate a form of mind-control by some churches and cults. It is a recognized practice of brainwashing to exclude all opposing voices, ideas, and arguments. This is the antithesis of the kind of education we seek to promote in the liberal arts.

A Final Fallacy?

Finally let me mention a notion I think is only partially fallacious, to wit: Professional ethics is an essentially critical and "negative" discipline, which specializes in finding fault with prevailing practices and even some professional ideals. According to this view, the exemplar of professional ethics is the employee who refuses to do something ("unethical") or is a whistleblower. Ultimately the discipline of professional ethics might seem to be incompatible with the work world. Perhaps it should position itself in the periphery or even on the "outside," in the traditional role of gadfly; otherwise the critic risks being compromised or co-opted. It is a clear conflict of interest for the critic to be in the employ of the criticized; thus, it is nonsensical to suppose that professional ethics can be integrated into professional practice, and perhaps even into professional education.

I have to admit that the above characterization often gives me pause. I know that Socrates himself would disapprove of my being a professional critic of the professions. (At least, then, I am appropriately criticizing my own profession as a case in point of a profession being criticized!) And this state of affairs certainly creates practical problems for the project I am promoting. Why, after all, would a potential employer want to hire someone trained in ethics -- wouldn’t that be like letting the fox into the chicken coop? So it makes problematical any claim a professional ethics program might make about enhancing the employability of the graduates of its institution's professional programs, and hence the ethics program's suitability to the career-preparation mission many of our institutions have espoused.

I think the truth of the matter is, however, that the main object of criticism by professional ethics is not the professions (nor professional education) as such, but precisely their characteristic resistance to professional ethics. Professional ethics proper is critical only in the benign sense that my colleague David Brubaker has articulated for art criticism in the syllabus of his aesthetics course:

"To judge, discern or evaluate carefully is to act as a critic. For example, people who review movies and musical recordings are called film and music critics. Criticism consists of an attitude of careful judging and discernment that leads to an opinion (a positive or negative judgment). A good critic can be carefully aware, thoughtful and still like the movie or CD -- still think carefully and arrive at the opinion that the film is good. Or the critic may be of the opinion that the film is bad. My point is this: to be critical, in a philosophical way, is NOT the same as to judge negatively. To be critical is to have the right kind of attitude to guarantee an accurate opinion."

I am not sure that even this notion of criticism in the basic, "neutral" sense captures the flavor of professional ethics. I now tend to think of the discipline as concerned not so much with "judging" as with problem identification and problem solving. A paradigmatic example would be: How can a defense attorney maintain her professional "loyalty" to her client (or a police officer to her colleagues, or a businessperson to the bottom line, etc. ad inf.) while at the same time respecting and caring about other persons, beings, entities, and values? The reasoned effort to answer such questions is completely in keeping with, and furthers, a profession's own goals for the most part, it seems to me. It is also compatible with the "can do" attitude that the business world so admires but often thinks would be impeded by ethical scrutiny; it's just that, on the ethical view, "get the job done" ought not be taken to imply "by any means necessary."


I conclude that those of my colleagues in the professional and business disciplines who are forever bemoaning the lack of room in their worksheets for a professional ethics course have quite missed the boat. Professional ethics is not an "add-on" to a professional education, not even as a part of the "broadening" that a liberal arts-based professional education is supposed to provide. More essentially, professional ethics is part and parcel of being a professional. Not to "have room" for it in a professional's education is therefore a contradiction.

This is presumably one of the reasons we would expect people who seek to be professionals to receive higher education: so that they will have the opportunity to reflect on the implications of their means of making a living. This is precisely one of the advanced skills that separate professionals from mere toilers in the field. Professional ethics is an area of expertise, and so should be taught by its own specialists – typically in philosophy but also in allied fields. Give it its due in the curriculum, in the required core and not just the optional periphery of every professional program of study.12 To fail to do so could itself be construed as a lapse of professional ethics in our own field of education.13


1 I teach mainly undergraduates who are taking a general education ethics course in a career-oriented university. None of them are philosophy majors. I have formed the same impression of students taking upper-level and graduate business courses where I have been invited to guest-lecture; their teachers bemoan the same phenomenon (hence the invitation).

2 “Worksheet” is the term my university uses for a student’s program of study.

3 Kohlberg, Lawrence. Essays on Moral Development: The Psychology of Moral Development (V. II). San Francisco: Harper & Row, 1984.

4 One of my students once remarked that his senior “professional seminar” was essentially a course in, as he put it, “CYA” (covering your butt). In other words, he was taught about the laws he would not want to run afoul of for prudential reasons.

5 The above points apply, mutatis mutandis, to ethics codes. I refer to my prescription as the “cure for the common code.”

6 An Inquiry into the Nature and Causes of the Wealth of Nations (B. IV, Ch. 2, Para. 9).

7 Smith himself disdained this possibility. He concluded the quoted passage by stating, "I have never known much good done by those who affected to trade for the public good.”

8 “SRI” stands for “socially responsible investing,” some of whose more over-the-top advocates appear to believe that investing in the most ethical businesses guarantees maximum returns in the long run. The idea that sacrifice is sometimes necessary for the moral life may occasionally be forgotten by members of this otherwise laudable and level-headed movement.

9 In my teaching I drive home the analogous point that the vast majority of academic cheating goes undetected and/or unpunished. See my “Cheating 101: Ethics as a Lab Course” in Teaching Philosophy 26:2 (131-145), June 2003.

10 In a fascinating article in the National Post (of Canada) Business Magazine (March 2004, pp. 78-87), the newspaper’s editorials editor Jonathan Kay (who is also a tax attorney) acknowledges the gap between these two views of professional ethics – which he labels (after philosopher Wesley Cragg) “the business case for ethics” and “the ethics case for ethics” – and comes down unapologetically on the side of the former. He concludes, “To the extent that academics can make the case that doing the right thing is good for profits, then by all means let students be taught to do the right thing. But it will only do harm to their companies and careers if they are taught to take their business management cues from Albert Schweitzer instead of Adam Smith.” (Despite his thesis, however, Kay’s argument seems to be an ethically ethical one.)

11 This fallacy was added after publication of this article.

12 Professional ethics should also be integrated throughout the professional curriculum, with campus-wide coordination and collaboration, including team-teaching, among theorists and professional practitioners on the faculty, who, ideally, would also be fellows in a campus research center that would sponsor visiting speakers and the like. My emphasis in this essay, however, has been on professional ethics as a specialization.

13 This essay has benefited from critical commentary by Janet Gillespie, Michael Morris, and an anonymous reviewer for this journal. Any remaining causes for complaint are due to the author’s pig-headedness.