“Of course we know we shouldn’t touch those vines, Reverend Smith; every village child is told the story of the evil spirits that are forced to live in them, and how touching them allows the evil spirits to escape into your skin.”
“Evil spirits? No, you’re confused: it’s just poison oak… the plant has chemical compounds that cause you to break out in rashes; there are no ‘evil spirits’ involved.”
“See, you people think you are so understanding with such talk, but how else can you explain why the god-blessed aloe poultice helps fight off the bad spirits?”
Although this is a slight caricature of certain tribal understandings of poisonous compounds and plants, the underlying idea (that some people rely on such witch doctor explanations) is real. Unfortunately, this is not just a problem in remote parts of the world; it can regularly be seen in competitive debate rounds and discussions. Witch doctor theory is an issue that is often left unaddressed until it blooms into large, noticeable problems. Hoping to promote awareness of this problem, I am writing this article to detail what the issue is, why it matters, what some common current examples are, and how to address it.
Witch Doctor Theory?
Of course, I’m not suggesting that people explain the stock issues through, say, the “god of topicality” or the “Kurse of kritiks.” Rather, by witch doctor explanations/theory I am referring to explanations or ideas that ultimately lead to correct end conclusions but are built on flawed premises or reasoning. To return to the opening example: the villagers correctly knew that they shouldn’t touch poison oak or else they would get painful rashes. However, their reason for believing they shouldn’t touch the vine (because it held evil spirits) was completely wrong. Still, this term (as I use it) does not apply only to seemingly radical explanations: another, less exotic example is the common misconception that the moon consistently passes through phases because of the Earth’s shadow. It is correct that the moon consistently passes through phases, but these phases are certainly not caused by the Earth’s shadow; again, correct conclusion, but incorrect reasoning. While this may not sound like a crazy explanation to many people (I didn’t know it was wrong until about a month ago, although it’s actually rather silly once you begin to imagine lunar motion), that’s exactly the issue with witch doctor theory: the mistakes being made don’t always sound so radical to us.
Why this matters
Many times, the initial mistakes are actually very minor or largely irrelevant, just like knowing the way the moon’s phases work is not critical to everyday life. Thus, you may be asking “Then why do I need to recognize it?” The answer is multifaceted.
First of all, I should clarify that you don’t always need to wholly reject inaccurate explanations. Although you should generally still seek to recognize them, later in this article I will provide examples where outright rejection isn’t as critical. Outside of those exceptions, however, you should discourage or reject inaccurate explanations for a few main reasons:
- Although the mistakes are usually minor, they are not always minor; some explanations are so flawed that they pose serious problems.
- Even though individual mistakes can be minor, a number of individually minor misconceptions can become major problems when they compound or interact.
- Even initially small misconceptions can later grow into more serious misconceptions.
These cases and others can all give rise to the variety of problems associated with witch doctor explanations. Some of the primary consequences are:
- Having flawed foundations, leading to incorrect conclusions elsewhere. For example, as detailed later, having the wrong understanding of fiat power in policy debate leads to incorrect beliefs about topicality requirements for cases.
- Having weak foundations, leading to abandonment of the correct conclusion when the explanation is shown to be false. Returning to the poison oak example: suppose a village child who has never seen the effects of poison oak is told only that it does not contain evil spirits. That child might then believe that it’s okay to touch the poison oak. Thus, it’s best not to base your conclusions on weak foundations.
- Being unable to convince others of the correct conclusion. Again referencing the poison oak example, suppose someone who did not believe in evil spirits and had never heard of poison oak was told that it was harmful because it contained evil spirits. That person would not be convinced of either the explanation or the conclusion because he does not see the reasoning as compelling.
Ultimately, there is a multitude of serious problems that accompany the usage of witch doctor explanations. This is especially true for debate, where I have witnessed all kinds of convoluted or otherwise errant explanations.
Witch doctor theory in the debate world
Debate is all about finding the truth and convincing others of it, but in many cases, preferring simplicity in the former task (superficiality) or ease in the latter (sophistry) leads to explanations which, even when they get the point across, are seriously flawed. As stated, some of these inaccuracies are not major enough to address here. However, in my experience there are a few misconceptions which really need to be exposed and countered. Before I do that, a disclaimer: debate theory is rarely a matter of well-established, scientific fact, so some of the conclusions I support may be wrong. What I can at least do is point out what are arguably bad ways to support those conclusions.
Fiat and Extratopicality
Many policy debaters at least think they are familiar with the idea of fiat power, which is the hypothetical “it shall be so” premise adopted so that no negative team can argue “Congress/SCOTUS/POTUS would never pass this.” Really, though, fiat power shouldn’t even need to be a concept. The whole debate is simply “Should the USFG reform its ____ policy, based on the option(s) you’ve heard?” rather than “We will pass this if you support us.” Unfortunately, some debaters I know personally misunderstand fiat power and topicality so badly as to explain it in the following way: “Judge, you can imagine yourself as a court judge, like in a traffic, civil, or criminal court. You are given cases which are supposed to meet the jurisdiction or ‘topic’ of the court, which in this round is the USFG’s _____ policy. You are given what is known as fiat power to pass plans which fall under your court’s jurisdiction.” While this is all highly inaccurate (because it imposes convoluted conditions which aren’t found in or derived from the rules), it usually supports the debaters’ (correct) conclusion: that affirmative cases must be topical. And because it seems to persuade community judges of that conclusion, the debaters use it rather often. However, this explanation becomes problematic when an issue such as extratopicality arises (where only part of the plan is topical). The debaters will say that the judge cannot use their magical fiat power to pass anything which is extratopical. Although some might not realize it, this is an incorrect conclusion (in fact, you simply can’t claim any advantages from extratopical mandates). Thus, this incorrect explanation leads to the first problem: accepting incorrect conclusions elsewhere.
Fiat and Counterplans
This might shock some debaters out there, but there are actually debaters and coaches who believe that the only legitimate counterplans are topical counterplans. That’s right: they consider non-topical counterplans to be illegitimate. Why? Although different groups have slightly different reasoning, the most common explanation (in my experience) builds on the one given in the previous point: that the judge only has fiat power over topical things and therefore cannot fiat a non-topical plan. For many of you, this probably sounds rather crazy. I will admit that there is some genuine debate to be had over non-topical vs. topical counterplans, but appealing to these misconceptions of fiat power is not a valid approach. Unfortunately, these are the kinds of faulty and/or weak foundations that arise from witch doctor theory. And there are more.
“The Four-legged Table”
Many TPers are probably familiar with the “four-legged table” explanation of the stock issues. The metaphor is that all four stock issues (“topicality, inherency, significance, solvency,” depending on whom you ask) must stand; if the negative can knock out just one of the stock issues, then the whole table falls and a negative ballot is warranted. While some see this as a convenient explanation (because we all know that three-legged tables don’t exist, right?), it can be misinterpreted. In principle, completely (100%) knocking out an entire leg (stock issue) should mean the affirmative loses, but it isn’t always argued this way. In some rounds, the negative team merely heavily (say, 95%) undermines solvency, ignores all the other aspects of the case, and proudly concludes with the four-legged table metaphor. The problem is that when the significance is extremely great (for explanation purposes, suppose the harms are global famine and war), even 5% solvency will outweigh no disadvantages. Cutting off 95% of a table’s leg might make the table collapse, but the actual policy might still be a good policy. Once again, faulty premises lead to incorrect conclusions elsewhere.
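The arithmetic behind this point can be sketched in a few lines. The numbers below are purely hypothetical (they are my illustration, not figures from any actual round): even a plan reduced to 5% solvency can carry a positive net benefit when the negative presents no disadvantages.

```python
# Toy cost-benefit sketch (hypothetical numbers, for illustration only).
harms_averted_if_fully_solved = 1_000_000  # impact units if the plan fully works
solvency = 0.05                            # negative undermined solvency to 5%
disadvantages = 0                          # negative ran no disadvantages

# Expected net benefit of passing the plan versus doing nothing.
net_benefit = solvency * harms_averted_if_fully_solved - disadvantages
print(net_benefit)      # 50000.0
print(net_benefit > 0)  # True: a mostly-broken leg can still support the table
```

However small the solvency multiplier gets, the comparison stays positive so long as the other side of the ledger is literally zero, which is exactly why "95% of a leg" does not automatically warrant a negative ballot.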
Topical Counterplans and the Rules
This topic is obviously rather controversial, but that being said, I am of the mindset that although topical counterplans (TCPs) should be legitimate with some restrictions (e.g. “cannot be too similar to aff’s plan”), under current (Stoa TP) rules they aren’t. You can disagree with me all you would like, but for this issue I am supposing that it is the correct conclusion. What I find particularly troubling is when someone actually claims that TCPs are illegitimate because the rules clearly say so (a problem prominent in some regions of NCFCA, even though there are no clear rules like in Stoa). Although I believe the ultimate conclusion is correct, such reasoning is completely wrong. This is problematic because if someone disapproves of TCPs because they think the rules specifically bar TCPs, when they find out that their premise is incorrect they might think “Well, I guess I was wrong: TCPs are okay.” This is the second type of problem: abandoning a correct conclusion. Additionally, even if they don’t change their own opinions, they face the third problem: they will not be able to convince anyone if their support is clearly flawed.
These are just some of the examples with which I have personally had to deal. There are others (such as misconceptions on how to respond to nuclear war DAs), but I think I’ve made the point clear: witch doctor theory is a problem that cannot just be ignored. Thus, the question becomes how and when to avoid or counter it.
Preventing/Countering witch doctor theory: How
Of course, you first need to make sure that the explanation at hand is actually incorrect; especially in the whimsical world of debate theory, it may be you who has the wrong understanding. The best way to avoid this trap is to resist the easiest, most superficial explanations you find: dig deeper than what makes sense on the surface, and challenge assumptions. The next issue is establishing that witch doctor theory actually matters. If you’ve been reading up until now, you personally should know this, but other people may not. “After all,” they might say, “if I win rounds, what does it matter?” Thus, you have to explain not just that someone is wrong, but why they should care. And that’s basically all there is to it. The more complicated question is when this requires serious attention.
Preventing/Countering witch doctor theory: When
I’m not going to be absolutist or purist about this: if you are in the middle of a round and someone is making a theory argument which you know in your head is wrong, but you can’t quite articulate why, I’m not advocating that you stay silent. If an imperfect explanation is your best available answer, use it. Sometimes these stopgap explanations are okay, if only because you don’t yet know the correct one. However, you must still recognize that such an explanation is only a stopgap, and it should not be treated as a solid foundation for deeper or broader thought. If the issue arose in a debate round, try to work out the correct explanation after the round.
Ultimately, some inaccuracies may seem very insignificant, and depending on the circumstances, they may not merit complete rejection. It is still important to stay vigilant: at least acknowledge when an explanation is inaccurate, and ensure that such explanations do not become the basis for further inaccuracies.
Bad explanations are not a rare occurrence; people hold misconceptions about plenty of things. It’s just that these mistakes are usually either so obviously incorrect that people recognize them and change their views, or so minor that they don’t really have serious consequences. Some explanations, however, fall right between clearly wrong and insignificantly wrong, leading to a host of unrecognized consequences. This happens much more often in the wild world of debate theory, where things are not well defined and persuasion tends to take precedence over strong foundations. I strongly urge against careless acceptance of witch doctor theory, and I hope that you will likewise emphasize correct reasoning in your arguments’ foundations.