“And this brings me to the next disad disad 2 nuclear war card 1 Matthews ‘08 nuclear war most gasp-gasp likely in Asia reads evidence… and now gasp-gasp card 2 Greer ‘09 South China Sea (SCS) unstable reads evidence… now we gasp-gasp have link 1 their plan cooperates with China-that would tip the balance of the SCS dispute in China’s favor gasp-gasp link 2 if China secures the SCS they become more aggressive, and try to take more territory link 3 aggressiveness leads to miscalculations link 4-reference card 1: miscalculations would go nuclear.
Therefore, providing China with this device to clean up their jellyfish problem would cause nuclear war gasp-gasp
Now I impact out nuclear holocaust card 1 Jameson ‘05 reads evidence…”
You may be thinking that this is some satire of policy debate, right? Well, as crazy as you may think this example is (especially those of you in Stoa or NCFCA), I have some bad news: this actually happens in public high school leagues. In fact, the above argument was made in the very first round I judged: the aff team’s harms stemmed from jellyfish overpopulation in China, so their plan was to provide them with a type of robot that is designed to capture or kill the jellyfish. The neg team argued that cooperating with China would (somehow) lead them to make a “miscalculation” and start nuclear war (and then they thought it necessary to explain with evidence for about 45 seconds why a nuclear holocaust was bad).
I wasn’t happy, to say the least.
However, what really got me wasn’t that the neg made the argument. What really frustrated (and shocked) me was that the aff didn’t know how to respond. By that, I mean their entire argument was “That’s absurd and unlikely: [reasons why the result is unlikely].”
Unfortunately, I imagine that many other debaters would approach the argument in the same, insufficient way. Therein lies the problem: these “outlandish arguments” are as prevalent as they are because people don’t know the most effective way to respond, and thus the arguments can win rounds—especially when done “correctly.” Therefore, in this article I will detail the more effective response and explain why this is better than “That’s absurd.”
There are many other crazy arguments like nuclear war DAs (NW DAs): that climate change will reach a tipping point and humanity will go extinct; that a super-bacterium will be created and humanity will go extinct; that an Orwellian, 1984-style society will be instituted and humanity will become slaves; etc. There is a common construction to them: people often use some sort of generic evidence to build a complicated and/or unrealistic chain of links, all leading to massive impacts (e.g. extinction). Although it is not always stated so eloquently, the underlying idea is that “this plan could, just maybe, cause really major problems (e.g. extinction), and therefore we shouldn’t risk it: don’t pass the plan.”
In practice, many people know these kinds of arguments are generally flawed. The problem is that many people don’t understand why, and therefore don’t know how to effectively respond.
The common response
In my round, the aff basically responded with “That’s an absurd argument, and is not based on empirics. Furthermore, their evidence is weak and outdated. We, on the other hand, have definite impacts that weren’t disputed by the neg team. Their arguments need to have compelling support.”
Essentially, they tried to delink the DA (i.e. they tried to say the result likely wouldn’t happen). This is what people normally do: they just give delinking arguments such as:
- “These scenarios have never happened, even when people in the past suggested they would (delink).”
- “The real world has systems (e.g. diplomacy) that keep these scenarios from happening (delink).”
Some people, after making delinking arguments, even go so far as to kritik/object to the arguments, saying “Their argument is so dumb that it harms us in the real world by making TP look bad and/or promotes more use of this bad argument.” In sum, though, the most common response is some form of “Wow, that DA is stupid and unlikely.”
The problem is that technically, that response isn’t sufficient.
Why that doesn’t work
The problem is that simply delinking a massive disad—where you argue the likelihood of the disadvantage happening is small—will almost never be enough if the DA’s impacts are large enough (e.g. extinction). After all, we intuitively know that an outcome does not become irrelevant just because it is unlikely; we wouldn’t say the threat of terrorism is irrelevant just because attacks are not always likely. Rather, we often instinctively make some sort of calculation resembling “expectation = probability × consequences.” Oversimplified though it may be, for our purposes this is an accurate way of comparing choices. Using this equation, let’s turn to the example of a NW DA.
Would you risk a 3% chance of extinction for an uncontested (~100% chance) $30B?
Assuming the money simply translates to happiness (rather than also being a tool for other ends), decision theory would say no: the risk of extinction is not worth taking even if the money were tripled or quadrupled.
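To make that comparison concrete, here is a minimal sketch of the “expectation = probability × consequences” calculation in Python. The utility numbers are invented purely for illustration (nothing in decision theory dictates these particular values):

```python
# Toy expected-value comparison: accept $30B with a 3% risk of extinction,
# or decline and keep the status quo. All utility numbers are made up
# for illustration only.

EXTINCTION_UTILITY = -1e13   # an arbitrarily enormous loss
MONEY_UTILITY = 30e9         # treat $30B as 30 billion "happiness units"

def expected_value(p_extinction: float, payoff: float) -> float:
    """expectation = probability * consequences, summed over outcomes."""
    return p_extinction * EXTINCTION_UTILITY + (1 - p_extinction) * payoff

ev_pass = expected_value(0.03, MONEY_UTILITY)   # pass the plan
ev_reject = expected_value(0.0, 0.0)            # status quo: no money, no risk

print(ev_pass < ev_reject)                      # True: reject the plan
# Even quadrupling the payoff doesn't flip the decision:
print(expected_value(0.03, 4 * MONEY_UTILITY) < ev_reject)  # still True
```

The point of the sketch is only that once the downside is large enough, no realistic delinking of the probability alone makes the expectation come out positive.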
See, the only reason NW DAs are often terrible is that the team making the argument tries to claim the nukes will assuredly start flying the moment the plan passes. The legitimate argument would be “No, it’s not likely, but a risk-reward assessment shows that, as much as we may want the money, it’s not worth even a slight increase in the possibility of nuclear holocaust.” If the neg in my round had argued this, I might have voted for them, because no matter how much delinking the aff does, they can’t argue the DA has a 0% chance of occurring. Yet, stepping back from the math, you may think this is crazy—that something is wrong in our calculations.
If you think this, you are correct. The question, again, is why?
How people should respond
Rather than trying solely to delink their argument, your best approach is generally to delink first, but then turn their links or otherwise show how your plan/side prevents similar impacts that outweigh theirs (e.g. nuclear war). In the round I watched, I was astonished that the aff never tried to say “Hey, maybe if we solve their jellyfish problem, the mass starvation [which, no joke, was one of their uncontested harms] will stop, and thus they will become less desperate, and therefore less likely to lash out in desperation” or “Cooperating with China will improve our diplomatic capital, better allowing us to resolve or prevent those kinds of disputes,” in addition to their basic “Their argument is highly unlikely: we are talking about jellyfish problems, and they are suggesting nuclear war.”
In general, if the risk of something massive is very low (e.g. 1–4%) for reasons such as highly dubious support, you can find some kind of reasoning that the opposite is the case, or you can do some kind of impact calculus where you say “our plan/side is more likely to prevent [insert catastrophe here].”
Yet, occasionally it is not this simple. Rather, sometimes the best you can argue is “their argument is so unrealistic/unlikely that it is just canceled out by arguments on our side. Ultimately, the margins are so slim that the best way to judge is to focus on the more likely results, since that is what people tend to do and we have never had a moment of extinction before… [etc.]”
Theory in action: Batman vs. Superman
See the following video for context.
When I first saw this trailer, I was shocked that this would be Batma… Bruce Wayne’s justification for “destroying Superman.” I asked various people what they thought of it, and I was even more shocked by their responses.
The argument is very similar to the NW DAs people occasionally make: “If there’s just a 1% chance this plan could cause nuclear war and kill us all, we shouldn’t pass it” (except, in this case, the aff’s harms are “1% chance of extinction”). I imagine some people know this isn’t an adequate justification for destroying Superman, but they can’t always explain why. Many people would probably just respond by saying “That’s dumb; Superman would never turn on us, so we don’t even need to worry about that.” But, if you have tracked with what I’ve been saying so far, you’ll recognize that doesn’t quite work—or it certainly wouldn’t convince Mr. Wayne, which is what really matters. So, how do we refute this?
After first undermining the argument’s support/links (i.e. explaining why it’s unlikely he will kill us all) there are at least two great turns you can make:
- He could save us from extinction (e.g. asteroids, aliens, nuclear war). Since he is far more likely to save us than kill us, we should let him live.
- Trying to capture him will likely fail (insolvency), but also it will probably make him more likely to kill us all. Thus, the probability of extinction-by-Superman goes up after passing this plan.
You may be able to come up with other examples, but just one of these would be sufficient, because in the end the judge sees that extinction is more likely with the aff’s plan.
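The same toy expectation math shows why a single link turn is enough. The probabilities below are invented for illustration; the only claim is the structural one, that once the plan makes the catastrophe more likely than the status quo does, the disad flips sides:

```python
# Toy probabilities, invented for illustration: the second turn claims
# that trying to capture Superman *raises* the chance of extinction.

p_extinction_status_quo = 0.01   # Bruce Wayne's "1% chance" Superman turns on us
p_extinction_with_plan = 0.05    # provoking him makes things worse (the link turn)

# When both sides carry the same impact (extinction), the side with the
# lower probability wins the impact calculus, no matter how scary the
# impact itself is.
better_to_reject_plan = p_extinction_with_plan > p_extinction_status_quo
print(better_to_reject_plan)  # True: the "1% chance" logic now cuts against the plan
```

Notice that the “he could save us from asteroids” turn works the same way: it lowers the status-quo probability of extinction, widening the same gap.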
Unfortunately, policy debate has lost a lot of respect because of these types of arguments. Now, I’m not at all saying massive DAs are inherently bad; I once argued that passing a Taiwan FTA increased our risk of war with China by enough to outweigh the economic benefits. No, the problem is that people are misusing them. And this problem persists because people don’t know how to respond.
When faced with these kinds of “bad arguments,” rather than superficially dismissing them, we need to dissect their content and expose their true flaws, not those conveniently at the surface. I believe that if enough people learn how to respond to bad arguments such as these, we will not have to deal with them as often.
Harrison Durland is a blogging intern at Ethos. Now a college student at Ole Miss, he is studying international affairs, Russian, (hopefully public policy,) and intelligence and security studies, seeking to do analyst work and perhaps later move into public policy or organizational administration. He began debate in his sophomore year of high school, in Stoa. Despite an unenthusiastic first year, he later found that he had a passion for debate, especially policy debate. His third and final year of high school debate was 2016, during which year he qualified to NITOC. His primary interests outside of debate and academics include his faith, ethics, and game and decision theory.