Logical Fallacies and the Art of Debate

Contents:

Introduction
So why learn logical fallacies at all?
Logic as a form of rhetoric
Committing your very own logical fallacies
The list of logical fallacies

Introduction

This is a guide to using logical fallacies in debate. And when I say "using," I don't mean just pointing them out when opposing debaters commit them -- I mean deliberately committing them oneself, or finding ways to transform fallacious arguments into perfectly good ones.

Debate is, fortunately or not, an exercise in persuasion, wit, and rhetoric, not just logic. In a debate format that limits each debater's speaking time, it is simply not reasonable to expect every proposition or conclusion to follow precisely and rigorously from a clear set of premises stated at the outset. Instead, debaters have to bring together various facts, insights, and values that others share or can be persuaded to accept, and then show that those ideas lead more or less plausibly to a conclusion. Logic is a useful tool in this process, but it is not the only tool -- after all, "plausibility" is a fairly subjective matter that does not follow strict logical rules. Ultimately, the judge in a debate round has to decide which side's position is more plausible in light of the arguments given -- and the judge is required to pick one of those sides, even if logic alone dictates that "we do not know" is the answer to the question at hand.

Besides, let's be honest: debate is not just about finding truth, it's also about winning. If you think a fallacious argument can slide by and persuade the judge to vote for you, you're going to make it, right? The trick is not getting caught.
 

So why learn logical fallacies at all?

I can think of a couple of good reasons. First, it makes you look smart. If you can not only show that the opposition has made an error in reasoning but also give that error a name (in Latin!), it shows that you can think on your feet and that you understand the opposition's argument, possibly better than they do.

Second, and maybe more importantly, pointing out a logical fallacy is a way of removing an argument from the debate rather than just weakening it. Much of the time, a debater will respond to an argument by simply stating a counterargument showing why the original argument is not terribly significant in comparison to other concerns, or shouldn't be taken seriously, or whatever. That kind of response is fine, except that the original argument still remains in the debate, albeit in a less persuasive form, and the opposition is free to mount a rhetorical offensive saying why it's important after all. On the other hand, if you can show that the original argument actually commits a logical fallacy, you put the opposition in the position of justifying why their original argument should be considered at all. If they can't come up with a darn good reason, then the argument is actually removed from the round.
 

Logic as a form of rhetoric

Unfortunately, the account I have just given is a bit idealized. Not every judge will immediately recognize the importance of the logical fallacy you've pointed out in your opposition's argument. Even if a logician would immediately accept the accuracy of your point, in a debate round it's the judge that counts.

It is therefore not enough simply to point out a logical fallacy and move on; there is an art to pointing out logical fallacies in your opposition's arguments, and to doing so in a way that the judge will find both clear and persuasive.

Committing your very own logical fallacies

In general, of course, it's a good idea to avoid logical fallacies if at all possible, because a good debater will almost always catch you. It is especially important to avoid obvious logical fallacies such as argumentum ad populum, because they are vulnerable to such powerful (and persuasive) refutations. But sometimes, a logical fallacy -- or at least an unjustified logical leap -- is unavoidable. And there are some types of argument that are listed as logical fallacies in logic textbooks, but that are perfectly acceptable in the context of the rules of debate. The most important guideline for committing such fallacies yourself is to know when you are doing it, and to be prepared to justify yourself later if the opposition tries to call you down for it. For examples of logical fallacies that can sometimes be acceptable in the context of debate, see ad ignorantiam, ad logicam, complex question, slippery slope, straw man, and tu quoque in the list below.
 

The list of logical fallacies

What follows is not a comprehensive list of all the known logical fallacies. Nor is this intended as a rigorous philosophical treatise on logical reasoning. (If that's what you're looking for, you should check out the following excellent web resources: The Atheist Web's logic page, or San Jose State University's Mission: Critical page; I owe a debt to these pages for reminding me of a number of fallacies I had forgotten about.) What I have done is compile a list of fallacies that debaters should be familiar with -- either for pointing them out in others' arguments or for using and defending them in one's own.

Argumentum ad antiquitatem (the argument to antiquity or tradition). This is the familiar argument that some policy, behavior, or practice is right or acceptable because "it's always been done that way." This is an extremely popular fallacy in debate rounds; for example, "Every great civilization in history has provided state subsidies for art and culture!" But that fact does not justify continuing the policy.

Because an argumentum ad antiquitatem is easily refuted by simply pointing it out, in general it should be avoided. But if you must make such an argument -- perhaps because you can't come up with anything better -- you can at least make it marginally more acceptable by providing some reason why tradition should usually be respected. For instance, you might make an evolutionary argument to the effect that the prevalence of a particular practice in existing societies is evidence that societies that failed to adopt it were weeded out by natural selection. This argument is weak, but better than the fallacy alone.

Argumentum ad hominem (argument directed at the person). This is the error of attacking the character or motives of a person who has stated an idea, rather than the idea itself. The most obvious example of this fallacy is when one debater maligns the character of another debater (e.g., "The members of the opposition are a couple of fascists!"), but this is actually not that common. A more typical manifestation of argumentum ad hominem is attacking a source of information -- for example, responding to a quotation from Richard Nixon on the subject of free trade with China by saying, "We all know Nixon was a liar and a cheat, so why should we believe anything he says?" Argumentum ad hominem also occurs when someone's arguments are discounted merely because they stand to benefit from the policy they advocate -- such as Bill Gates arguing against antitrust, rich people arguing for lower taxes, white people arguing against affirmative action, minorities arguing for affirmative action, etc. In all of these cases, the relevant question is not who makes the argument, but whether the argument is valid.

It is always bad form to use the fallacy of argumentum ad hominem. But there are some cases when it is not really a fallacy, such as when one needs to evaluate the truth of factual statements (as opposed to lines of argument or statements of value) made by interested parties. If someone has an incentive to lie about something, then it would be naive to accept his statements about that subject without question. It is also possible to restate many ad hominem arguments so as to redirect them toward ideas rather than people, such as by replacing "My opponents are fascists" with "My opponents' arguments are fascist."

Argumentum ad ignorantiam (argument to ignorance). This is the fallacy of assuming something is true simply because it hasn't been proven false. For example, someone might argue that global warming is certainly occurring because nobody has demonstrated conclusively that it is not. But failing to prove the global warming theory false is not the same as proving it true.

Whether or not an argumentum ad ignorantiam is really fallacious depends crucially upon the burden of proof. In an American courtroom, where the burden of proof rests with the prosecution, it would be fallacious for the prosecution to argue, "The defendant has no alibi, therefore he must have committed the crime." But it would be perfectly valid for the defense to argue, "The prosecution has not proven the defendant committed the crime, therefore you should declare him not guilty." Both statements have the form of an argumentum ad ignorantiam; the difference is the burden of proof.

The proposing team in a debate round usually (but not always) has the burden of proof, which means that if the team fails to prove the proposition to the satisfaction of the judge, the opposition wins. In a sense, the opposition team's case is assumed true until proven false. But the burden of proof can sometimes be shifted; for example, in some forms of debate, the proposing team can shift the burden of proof to the opposing team by presenting a prima facie case that would, in the absence of refutation, be sufficient to affirm the proposition. Still, the higher burden generally rests with the proposing team, which means that only the opposition is in a position to make an accusation of argumentum ad ignorantiam with respect to proving the proposition.

Argumentum ad logicam (argument to logic). This is the fallacy of assuming that something is false simply because a proof or argument that someone has offered for it is invalid; this reasoning is fallacious because there may be another proof or argument that successfully supports the proposition. This fallacy often appears in the context of a straw man argument.

This is another case in which the burden of proof determines whether it is actually a fallacy or not. If a proposing team fails to provide sufficient support for its case, the burden of proof dictates they should lose the debate, even if there exist other arguments (not presented by the proposing team) that could have supported the case successfully. Moreover, it is common practice in debate for judges to give no weight to a point supported by an argument that has been proven invalid by the other team, even if there might be a valid argument the team failed to make that would have supported the same point; this is because the implicit burden of proof rests with the team that brought up the argument. For further commentary on burdens of proof, see argumentum ad ignorantiam, above.

Argumentum ad misericordiam (argument or appeal to pity). The English translation pretty much says it all. Example: "Think of all the poor, starving Ethiopian children! How could we be so cruel as not to help them?" The problem with such an argument is that no amount of special pleading can make the impossible possible, the false true, the expensive costless, etc.

It is, of course, perfectly legitimate to point out the severity of a problem as part of the justification for adopting a proposed solution. The fallacy comes in when other aspects of the proposed solution (such as whether it is possible, how much it costs, who else might be harmed by adopting the policy) are ignored or responded to only with more impassioned pleas. You should not call your opposition down for committing this fallacy unless they rely on appeals to pity to the exclusion of the other necessary arguments. It is perfectly acceptable to use appeal to pity in order to argue that the benefits of the proposed policy are greater than they might at first appear (and hence capable of justifying larger costs).

Argumentum ad nauseam (argument to the point of disgust; i.e., by repetition). This is the fallacy of trying to prove something by saying it again and again. But no matter how many times you repeat something, it will not become any more or less true than it was in the first place. Of course, it is not a fallacy to state the truth again and again; what is fallacious is to expect the repetition alone to substitute for real arguments.

Nonetheless, this is a very popular fallacy in debate, and with good reason: the more times you say something, the more likely it is that the judge will remember it. The first thing they'll teach you in any public speaking course is that you should "Tell 'em what you're gonna tell 'em, then tell 'em, and then tell 'em what you told 'em." Unfortunately, some debaters think that's all there is to it, with no substantiation necessary! The appropriate time to mention argumentum ad nauseam in a debate round is when the other team has made some assertion, failed to justify it, and then stated it again and again. The Latin wording is particularly nice here, since it is evocative of what the opposition's assertions make you want to do: retch. "Sir, our opponents tell us drugs are wrong, drugs are wrong, drugs are wrong, again and again and again. But this argumentum ad nauseam can't and won't win this debate for them, because they've given us no justification for their bald assertions!"

Argumentum ad numerum (argument or appeal to numbers). This fallacy is the attempt to prove something by showing how many people think that it's true. But no matter how many people believe something, that doesn't necessarily make it true or right. Example: "At least 70% of all Americans support restrictions on access to abortions." Well, maybe 70% of Americans are wrong!

This fallacy is very similar to argumentum ad populum, the appeal to the people or to popularity. When a distinction is made between the two, ad populum is construed narrowly to designate an appeal to the opinions of people in the immediate vicinity, perhaps in hope of getting others (such as judges) to jump on the bandwagon, whereas ad numerum is used to designate appeals based purely on the number of people who hold a particular belief. The distinction is a fine one, and in general the terms can be used interchangeably in debate rounds. (I've found that ad populum has better rhetorical effect.)

Argumentum ad populum (argument or appeal to the public). This is the fallacy of trying to prove something by showing that the public agrees with you. For an example, see above. This fallacy is nearly identical to argumentum ad numerum, which you should see for more details.

Argumentum ad verecundiam (argument or appeal to authority). This fallacy occurs when someone tries to demonstrate the truth of a proposition by citing some person who agrees, even though that person may have no expertise in the given area. For instance, some people like to quote Einstein's opinions about politics (he tended to have fairly left-wing views), as though Einstein were a political philosopher rather than a physicist. Of course, it is not a fallacy at all to rely on authorities whose expertise relates to the question at hand, especially with regard to questions of fact that could not easily be answered by a layman -- for instance, it makes perfect sense to quote Stephen Hawking on the subject of black holes.

At least in some forms of debate, quoting various sources to support one's position is not just acceptable but mandatory. In general, there is nothing wrong with doing so. Even if the person quoted has no particular expertise in the area, he may have had a particularly eloquent way of saying something that makes for a more persuasive speech. In general, debaters should be called down for committing argumentum ad verecundiam only when (a) they rely on an unqualified source for information about facts without other (qualified) sources of verification, or (b) they imply that some policy must be right simply because so-and-so thought so.

Circulus in demonstrando (circular argument). Circular argumentation occurs when someone uses what they are trying to prove as part of the proof of that thing. Here is one of my favorite examples (in pared down form): "Marijuana is illegal in every state in the nation. And we all know that you shouldn't violate the law. Since smoking pot is illegal, you shouldn't smoke pot. And since you shouldn't smoke pot, it is the duty of the government to stop people from smoking it, which is why marijuana is illegal!"

Circular arguments appear a lot in debate, but they are not always so easy to spot as the example above. They are always illegitimate, though, and pointing them out in a debate round looks really good if you can do it. The best strategy for pointing out a circular argument is to make sure you can state clearly the proposition being proven, and then pinpoint where that proposition appears in the proof. A good summing up statement is, "In other words, they are trying to tell us that X is true because X is true! But they have yet to tell us why it's true."

Complex question. A complex question is a question that implicitly assumes something to be true by its construction, such as "Have you stopped beating your wife?" A question like this is fallacious only if the thing presumed true (in this case, that you beat your wife) has not been established.

Complex questions are a well established and time-honored practice in debate, although they are rarely so bald-faced as the example just given. Complex questions usually appear in cross-examination or points of information when the questioner wants the questionee to inadvertently admit something that she might not admit if asked directly. For instance, one might say, "Inasmuch as the majority of black Americans live in poverty, do you really think that self-help within the black community is sufficient to address their problems?" Of course, the introductory clause about the majority of black Americans living in poverty may not be true (in fact, it is false), but an unwary debater might not think quickly enough to notice that the stowaway statement is questionable. This is a sneaky tactic, but debate is sometimes a sneaky business. You wouldn't want to put a question like that in your master's thesis, but it might work in a debate. But be careful -- if you try to pull a fast one on someone who is alert enough to catch you, you'll look stupid. "The assumption behind your question is simply false. The majority of blacks do not live in poverty. Get your facts straight before you interrupt me again!"

Cum hoc ergo propter hoc (with this, therefore because of this). This is the familiar fallacy of mistaking correlation for causation -- i.e., thinking that because two things occur simultaneously, one must be a cause of the other. A popular example of this fallacy is the argument that "President Clinton has great economic policies; just look at how well the economy is doing while he's in office!" The problem here is that two things may happen at the same time merely by coincidence (e.g., the President may have a negligible effect on the economy, and the real driving force is technological growth), or the causative link between one thing and another may be lagged in time (e.g., the current economy's health is determined by the actions of previous presidents), or the two things may be unconnected to each other but related to a common cause (e.g., downsizing upset a lot of voters, causing them to elect a new president just before the economy began to benefit from the downsizing).

It is always fallacious to suppose that there is a causative link between two things simply because they coexist. But a correlation is usually considered acceptable supporting evidence for theories that argue for a causative link between two things. For instance, some economic theories suggest that substantially reducing the federal budget deficit should cause the economy to do better (loosely speaking), so the coincidence of deficit reductions under Clinton and the economy's relative health might be taken as evidence in favor of those economic theories. In debate rounds, what this means is that it is acceptable to demonstrate a correlation between two phenomena and to say one caused the other if you can also come up with convincing reasons why the correlation is no accident.

Cum hoc ergo propter hoc is very similar to post hoc ergo propter hoc, below. The two terms can be used almost interchangeably, post hoc (as it is affectionately called) being the preferred term.

Dicto simpliciter (spoken simply, i.e., sweeping generalization). This is the fallacy of making a sweeping statement and expecting it to be true of every specific case -- in other words, stereotyping. Example: "Women are on average not as strong as men and less able to carry a gun. Therefore women can't pull their weight in a military unit." The problem is that the sweeping statement may be true (on average, women are indeed weaker than men), but it is not necessarily true for every member of the group in question (there are some women who are much stronger than the average).

As the example indicates, dicto simpliciter is fairly common in debate rounds. Most of the time, it is not necessary to call an opposing debater down for making this fallacy -- it is enough to point out why the sweeping generalization they have made fails to prove their point. Since everybody knows what a sweeping generalization is, using the Latin in this case will usually sound condescending. It is also important to note that some generalizations are perfectly valid and apply directly to all individual cases, and therefore do not commit the fallacy of dicto simpliciter (for example, "All human males have a Y chromosome" is, to my knowledge, absolutely correct).

Nature, appeal to. This is the fallacy of assuming that whatever is "natural" or consistent with "nature" (somehow defined) is good, or that whatever conflicts with nature is bad. For example, "Sodomy is unnatural; anal sex is not the evolutionary function of a penis or an anus. Therefore sodomy is wrong." But aside from the difficulty of defining what "natural" even means, there is no particular reason to suppose that unnatural and wrong are the same thing. After all, wearing clothes, tilling the soil, and using fire might be considered unnatural since no other animals do so, but humans do these things all the time and to great benefit.

The appeal to nature appears occasionally in debate, often in the form of naive environmentalist arguments for preserving pristine wilderness or resources. The argument is very weak and should always be shot down. It can, however, be made stronger by showing why at least in specific cases, there may be a (possibly unspecifiable) benefit to preserving nature as it is. A typical ecological argument along these lines is that human beings are part of a complex biological system that is highly sensitive to shocks, and therefore it is dangerous for humans to engage in activities that might damage the system in ways we cannot predict. Note, however, that this approach no longer appeals to nature itself, but to the value of human survival.

For further comment on this subject, see the naturalistic fallacy.

Naturalistic fallacy. This is the fallacy of trying to derive conclusions about what is right or good (that is, about values) from statements of fact alone. This is invalid because no matter how many statements of fact you assemble, any logical inference from them will be another statement of fact, not a statement of value. If you wish to reach conclusions about values, then you must include amongst your assumptions (or axioms, or premises) a statement of value. Once you have an axiomatic statement of value, then you may use it in conjunction with statements of fact to reach value-laden conclusions.

For example, someone might argue that the premise, "This medicine will prevent you from dying" immediately leads to the conclusion, "You should take this medicine." But this reasoning is invalid, because the former statement is a statement of fact, while the latter is a statement of value. To reach the conclusion that you ought to take the medicine, you would need at least one more premise: "You ought to try to preserve your life whenever possible."

The naturalistic fallacy appears in many forms. Two examples are argumentum ad antiquitatem (saying something's right because it's always been done that way) and the appeal to nature (saying something's right because it's natural). In both of these fallacies, the speaker is trying to reach a conclusion about what we ought to do or ought to value based solely on what is the case. David Hume famously identified this problem, and "bridging the is-ought gap" is a nice phrase to use in debate rounds where your opponent is committing the naturalistic fallacy.

One unsettling implication of taking the naturalistic fallacy seriously is that, in order to reach any conclusions of value, one must be willing to posit some initial statement or statements of value that will be treated as axioms, and which cannot themselves be justified on purely logical grounds. Fortunately, debate does not restrict itself to purely logical grounds of argumentation. For example, suppose your opponent has stated axiomatically that "whatever is natural is good." Inasmuch as this statement is an axiom rather than the conclusion of a logical proof, there can be no purely logical argument against it. But some nonetheless appropriate responses to such an absolute statement of value include: (a) questioning whether anyone -- you, your judge, or even your opponent himself -- really believes that "whatever is natural is good"; (b) stating a competing axiomatic value statement, like "whatever enhances human life is good," and forcing the judge to choose between them; and (c) pointing out logical implications of the statement "whatever is natural is good" that conflict with our most basic intuitions about right and wrong.

Non sequitur ("it does not follow"). This is the simple fallacy of stating, as a conclusion, something that does not strictly follow from the premises. For example, "Racism is wrong. Therefore, we need affirmative action." Obviously, there is at least one missing step in this argument, because the wrongness of racism does not imply a need for affirmative action without some additional support (such as, "Racism is common," "Affirmative action would reduce racism," "There are no superior alternatives to affirmative action," etc.).

Not surprisingly, debate rounds are rife with non sequiturs. But that is partly just a result of having to work within the time constraints of a debate round, and partly a result of using good strategy. A debate team arguing for affirmative action would be foolish to say in their first speech, "We also believe that affirmative action does not lead to a racist backlash," because doing so might give the other side a hint about a good argument to make. A better strategy (usually) is to wait for the other team to bring up an argument, and then refute it; that way, you don't end up wasting your time by refuting arguments that the opposition has never made in the first place. (This strategy is not always preferable, though, because some counterarguments are so obvious and important that it makes sense to address them early and nip them in the bud.)

For these reasons, it is generally bad form to scream "non sequitur" just because your opposition has failed to anticipate every counterargument you might make. The best time to point out a non sequitur is when your opposition is trying to construct a chain of causation (A leads to B leads to C, etc.) without justifying each step in the chain. For each step in the chain they fail to justify, point out the non sequitur, so that it is obvious by the end that the alleged chain of causation is tenuous and implausible.

Petitio principii (begging the question). This is the fallacy of assuming, when trying to prove something, what it is that you are trying to prove. For all practical purposes, this fallacy is indistinguishable from circular argumentation.

The main thing to remember about this fallacy is that the term "begging the question" has a very specific meaning. It is common to hear debaters saying things like, "They say pornography should be legal because it is a form of free expression. But this begs the question of what free expression means." This is a misuse of terminology. Something may inspire or motivate us to ask a particular question without begging the question. A question is begged only when the arguer assumes, as a premise, the very point that is in dispute -- that is, when the conclusion is smuggled into the argument offered for it. If somebody said, "The fact that we believe pornography should be legal means that it is a valid form of free expression. And since it's free expression, it shouldn't be banned," that would be begging the question.

Post hoc ergo propter hoc (after this, therefore because of this). This is the fallacy of assuming that A caused B simply because A happened prior to B. A favorite example: "Most rapists read pornography when they were teenagers; obviously, pornography causes violence toward women." The conclusion is invalid, because there can be a correlation between two phenomena without one causing the other. Often, this is because both phenomena may be linked to the same cause. In the example given, it is possible that some psychological factor -- say, a frustrated sex drive -- might cause both a tendency toward sexual violence and a desire for pornographic material, in which case the pornography would not be the true cause of the violence.

Post hoc ergo propter hoc is nearly identical to cum hoc ergo propter hoc, which you should see for further details.

Red herring. This means exactly what you think it means: introducing irrelevant facts or arguments to distract from the question at hand. For example, "The opposition claims that welfare dependency leads to higher crime rates -- but how are poor people supposed to keep a roof over their heads without our help?" It is perfectly valid to ask this question as part of the broader debate, but to pose it as a response to the argument about welfare leading to crime is fallacious. (There is also an element of ad misericordiam in this example.)

It is not fallacious, however, to argue that benefits of one kind may justify incurring costs of another kind. In the example given, concern about providing shelter for the poor would not refute concerns about crime, but one could plausibly argue that a somewhat higher level of crime is a justifiable price given the need to alleviate poverty. This is a debatable point of view, but it is no longer a fallacious one.

The term red herring is sometimes used loosely to refer to any kind of diversionary tactic, such as presenting relatively unimportant arguments that will use up the other debaters' speaking time and distract them from more important issues. This kind of a red herring is a wonderful strategic maneuver with which every debater should be familiar.

Slippery slope. A slippery slope argument is not always a fallacy. The slippery slope fallacy is committed when someone argues that adopting one policy or taking one action will lead to a series of other policies or actions, without showing a causal connection between the advocated policy and the consequent policies. A popular example of the slippery slope fallacy is, "If we legalize marijuana, the next thing you know we'll legalize heroin, LSD, and crack cocaine." This slippery slope is a form of non sequitur, because no reason has been provided for why legalization of one thing leads to legalization of another. Tobacco and alcohol are currently legal, and yet other drugs have somehow remained illegal.

There are a variety of ways to turn a slippery slope fallacy into a valid (or at least plausible) argument. All you need to do is provide some reason why the adoption of one policy will lead to the adoption of another. For example, you could argue that legalizing marijuana would cause more people to consider the use of mind-altering drugs acceptable, and those people will support more permissive drug policies across the board. An alternative to the slippery slope argument is simply to point out that the principles espoused by your opposition imply the acceptability of certain other policies, so if we don't like those other policies, we should question whether we really buy those principles. For instance, if the proposing team argued for legalizing marijuana by saying, "individuals should be able to do whatever they want with their own bodies," the opposition could point out that that principle would also justify legalizing a variety of other drugs -- so if we don't support legalizing other drugs, then maybe we don't really believe in that principle.

Straw man. This is the fallacy of refuting a caricatured or extreme version of somebody's argument, rather than the actual argument they've made. Often this fallacy involves putting words into somebody's mouth by saying they've made arguments they haven't actually made, in which case the straw man argument is a veiled version of argumentum ad logicam. One example of a straw man argument would be to say, "Mr. Jones thinks that capitalism is good because everybody earns whatever wealth they have, but this is clearly false because many people just inherit their fortunes," when in fact Mr. Jones had not made the "earnings" argument and had instead argued, say, that capitalism gives most people an incentive to work and save. The fact that some arguments made for a policy are wrong does not imply that the policy itself is wrong.

In debate, strategic use of a straw man can be very effective. A carefully constructed straw man can sometimes entice an unsuspecting opponent into defending a silly argument that he would not have tried to defend otherwise. But this strategy only works if the straw man is not too different from the arguments your opponent has actually made, because a really outrageous straw man will be recognized as just that. The best straw man is not, in fact, a fallacy at all, but simply a logical extension or amplification of an argument your opponent has made.

Tu quoque ("you too"). This is the fallacy of defending an error in one's reasoning by pointing out that one's opponent has made the same error. An error is still an error, regardless of how many people make it. For example, "They accuse us of making unjustified assertions. But they asserted a lot of things, too!"

Although clearly fallacious, tu quoque arguments play an important role in debate because they may help establish who has done a better job of debating (setting aside the issue of whether the proposition is true or not). If both teams have engaged in ad hominem attacks, or both teams have made a few appeals to pity, then it would hardly be fair to penalize one team for it but not the other. In addition, it is not fallacious at all to point out that certain advantages or disadvantages may apply equally to both positions presented in a debate, and therefore they cannot provide a reason for favoring one position over the other (such disadvantages are referred to as "non-unique"). In general, using tu quoque statements is a good way to assure that judges make decisions based only on factors that distinguish between the two sides.


