
The true cost of conspiracy theories

Conspiracy theories raise a simple but difficult philosophical question: why should we not believe in them? Finding an answer to that is much harder than we like to think.

Philippe Huneman
17 December 2015
Imagining conspiracies. Flickr/Lettuce. Some rights reserved.

There have been many studies in psychology and sociology about who believes in conspiracy theories, and why one might believe them. Which features of the human mind, and of human societies, make these beliefs possible? Studies range from the detection and measurement of such beliefs, to their political and psychological predictors, to the cognitive biases (regarding causal or intentional attribution) that make us prone to adopt conspiracy theories, and to the social mechanisms that support their diffusion, in the real world or online.

But conspiracy theories raise a simple but difficult philosophical question: why should we not believe in them? When we hear someone claim that the moon landing was faked, or that Princess Diana was killed by the Mossad, we know that these are silly fantasies. But what is the general principle according to which those theories are, indeed, wrong or irrational? This is much less intuitive or obvious, and it is actually very hard to make such a principle explicit.

Psychologists used to talk about “conspiracist ideation”, or a “conspiracy worldview”, to define their object when they studied the class of believers in conspiracy theories. This notion is justified by the documented fact that the chances of believing in a given conspiracy theory are higher when one already believes in other conspiracy theories. These theories make up a system, and there should be psychological underpinnings behind such a system – in a closed system of this kind, as Goertzel emphasized, each theory is less likely to be discussed and debated than to be reinforced by other theories of the same kind.

I argue that there is a specific epistemic stance taken by those who believe in conspiracy theories. I will try to characterize it, and to show that there is something intrinsically wrong with it. I’ll call it a “conspiracist epistemology” and will analyse some of its characteristics.

Of course, once one holds such a stance, she’ll be likely to believe in many theories of this sort – hence the “worldview” character of conspiracist ideation. Recently Quassim Cassam argued that, if one wants to understand why some people believe in the Illuminati plot or the 9-11 conspiracy, one should appeal to the notion of intellectual vices – the negative counterparts of the intellectual virtues required to understand how knowledge and truth are actually reached by rational enquirers, over and above the enquiry respecting certain formal rules.

Why should we not believe in them?

But I’ll suggest that conspiracy theories could also be addressed according to a kind of economic terminology, namely, in terms of epistemic benefits and costs, adding an economic perspective to Cassam’s ethical approach.

We should note that “conspiracy theory” may not be enough to characterize the faked moon landing belief, the chemtrails idea, the suggestion that terrible shootings or bombings are hoaxes, and so on, because some events have indeed been actual conspiracies – first of all, 9-11 was a conspiracy led by Islamic terrorists – and many conspiracies have occurred across human history, especially twentieth-century American history.

Many events can thus be explained by theories that appeal to a conspiracy, in the sense of a collective plan secretly carried out by a group of agents with specific intentions and great powers.

Aaronovitch, in Voodoo Histories: The Role of the Conspiracy Theory in Shaping Modern History (Vintage, 2009), defines a “conspiracy theory” as follows: “a conspiracy theory is the unnecessary assumption of conspiracy where other explanations are more probable”. This seems commonsensical: we don’t need, for instance, a class of evil white males to explain the emergence of the HIV epidemic, so this belief is a conspiracy theory.

But the problem is that whoever holds a conspiracy theory will never accept that her postulation of the conspiring agents is unnecessary. The issue is what a satisfying explanation actually is: is there a rational way to assess competing hypotheses, including a conspiracy theory?

The dual face of conspiracy theories

Shutterstock/aga7ta. All rights reserved.

Let’s acknowledge that conspiracy theories don’t seem to rely on crazy elements. One major theory, “the Illuminati rule the world” – so obviously wrong that it is mocked even by other conspiracy theorists – includes very general notions about human affairs that are commonsensical: some people have more power than others; a strong ability to influence world events accompanies fame, wealth or political office; people often group together in order to act more efficiently; and they often act in a hidden manner, because revealing their plans would preclude their chances of success.

But ascribing the major events of human history to a general conspiracy is not plausible, because it forgets that, in many cases, planned operations result in unintended effects, and that the rich and famous have, more often than not, conflicting interests.

Conspiracy theories take truths on board. What appears unnecessary is often the appeal to the conspiracy itself.

But this feature is important for two reasons. First, under a coarse-grained definition, conspiracy theories may appear well-founded. A Counterpoint poll claimed that 51 percent of the French believe in conspiracy theories. However, the question actually asked was: “do you feel that someone pulls the strings?” That claim is much too general, and hard to falsify.

Conspiracy theories can also be perceptive regarding the relations of power.

Of course, some people do influence events more than others, and of course these influences are generally not well known: think of the economic interests at play in political affairs or conflicts. The feeling that the governing forces act in a secret fashion is at work in many of the films or books we love, from Don DeLillo’s novels to the Jason Bourne series.

Secondly, starting with Hofstadter’s famous paper on the “paranoid style in American politics” (1964), many authors have considered that, even though conspiracy theories are wrong and sometimes crazy, they can also be perceptive regarding the relations of power, class and race in modern western societies. Mark Fenster argues in Conspiracy Theories: Secrecy and Power in American Culture: “Similarly, overarching conspiracy theories may be wrong or overly simplistic, but they may sometimes be on to something. Specifically, they may well address real structural inequities, albeit ideologically, and they may well constitute a response, albeit in a simplistic and decidedly non-pragmatic form, to an unjust political order, a barren or dysfunctional civil society, and/or an exploitative economic system”.

Many conspiracy theories entertained by black people in the US (that HIV was created and spread by white people to poison black people, and so on) are indeed a false representation of an oppression that actually does exist (think of the figures on recent killings of black people by police, or of all the forms of discrimination in professional life). An emphasis on this “core of truth” in conspiracy theories contrasts with the view that they illustrate human cognition going awry, and the complexity of conspiracy theories as a philosophical object lies between those two poles.

So, what exactly makes a conspiracist epistemology, and why does human cognition go awry when it subscribes to it?

Hyperbolic doubt

Flickr/Daniel Horacio Agostini. Some rights reserved.

An obvious mark of conspiracy theorising is the inversion of ordinary evidential weight. Details become significant, whereas major pieces of evidence are downplayed, because they are presented by experts or institutions seen as suspicious (the “official version”). This parallels the method usually employed by deniers of the Jewish genocide, who would take advantage of a minor contradiction between two reports about a truck delivering victims to an extermination camp to say that the whole extermination system is questionable. This is called “doubt” by their supporters, and conspiracy theorists often claim to be the genuine adepts of the scientific method.

But it is what commentators on Descartes would call “hyperbolic doubt”, and it has no role in science. Scientific doubt is, in principle, limited. In biology, for example, one will not doubt that living beings exist, or wonder whether the laboratory itself might be unreal.

Such a doubt, a major ingredient of the conspiracist epistemology, has two features that make it irrational:

– The obsessive quest for any gap or conflict within the evidence on which the “received view” relies.

– The systematic dismissal of any official discourse and experts (governmental reports, scientific institutions). This it shares with science denial in all its forms: climate denial, creationism and vaccine hostility.

The first point is a principled mistake: one should never postulate that all the data proper to a phenomenon will be coherent. Any curve-fitting procedure in science will leave some data points off the curve, which means that, as data, they will be set aside. Any signal comes with some noise around it. And science is not about integrating everything in a coherent manner, but precisely about correctly partitioning signal and noise.
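
To make this concrete, here is a minimal sketch in Python (my illustration, not anything from the article): even when the underlying law is exactly linear, noisy measurements never all lie on the fitted curve, and a sound analysis treats the residuals as noise to be partitioned out rather than as anomalies demanding explanation.

```python
# Minimal sketch (illustrative only): fit a line to noisy data and
# inspect the residuals. The "gaps" a conspiracist would seize on are
# exactly the noise any honest fit is expected to leave behind.
import numpy as np

rng = np.random.default_rng(0)

# True process: y = 2x + 1, observed with Gaussian measurement noise.
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(scale=1.0, size=x.size)

# Ordinary least-squares fit of a straight line.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

print(f"fitted law: y = {slope:.2f}x + {intercept:.2f}")
print(f"points lying exactly on the curve: {(residuals == 0).sum()} of {x.size}")
print(f"spread of the leftover noise: {residuals.std():.2f}")
```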

Conspiracist doubt is often introduced by the phrase “this can’t be there by chance”. Here again, however, it often relies on prescientific, mistaken notions of what chance and randomness are. As mathematicians such as Chaitin or Martin-Löf, and psychologists, have made clear, genuine (mathematical) randomness does not always produce homogeneous-looking patterns, and many coincidences are precisely what should be expected if a process is indeed random. Here a psychological bias (regarding our expectations of randomness) is co-opted in favour of a conspiracist view.
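
A quick simulation makes the same point (again a sketch of mine, not the author’s): streaks that look “too regular to be chance” are exactly what fair coin flips typically produce.

```python
# Illustrative sketch: how long a run of identical outcomes should we
# expect in 200 fair coin flips? Intuition says streaks are suspicious;
# simulation says they are the norm.
import random

def longest_run(flips):
    """Length of the longest run of consecutive identical outcomes."""
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

random.seed(0)
runs = [longest_run([random.randint(0, 1) for _ in range(200)])
        for _ in range(10_000)]

print(f"average longest run: {sum(runs) / len(runs):.1f}")
print(f"sequences containing a run of 7 or more: "
      f"{sum(r >= 7 for r in runs) / len(runs):.0%}")
```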

We are more likely to notice expert disagreement than expert agreement.

The second feature is motivated by the fact that, by definition of a conspiracy, evidence is precisely acted upon by the conspirators and distorted in a purposeful way. If one assumes a conspiracy, it is not irrational to be cautious with apparently convincing evidence. Taken to its limit, however, this doubt involves a performative contradiction that makes any rational enquiry impossible: the more convincing a piece of evidence seems, the more one should reject it, because it has been designed to be convincing. Such doubt casts a problematic suspicion on any expertise.

While evidence as such may in some cases be distorted, experts in general should not be dismissed in principle, and it is their task, as scientists, to be critical of any piece of evidence themselves. Their reliability partly hinges upon the cost that the scientific institution puts on any deliberate lack of reliability: fraud, lies and other cheating behaviours are highly costly once discovered; and because of the competitive nature of science, all scientists are in effect paid to check and test the work of their colleagues (the process of peer review), which makes the whole system quite reliable.

Dismissal of expertise is irrational for the following reason: the very possibility of holding certain beliefs to be true ultimately relies on our deference to some experts. After all, I know that there are innumerable galaxies in the universe, and that there are 8 or 9 planets in our solar system, because I trust physicists. And my practical life is possible only because I trust the engineers who conceived my car and the computer scientists who designed my MacBook. Even if there are (often) disagreements among experts, they occur against the background of a shared consensus. Within evolutionary theory, biologists disagree a lot about the weight of various factors in evolution, but they all agree that evolution occurred and that natural selection played an important role in it.

But we are more likely to notice expert disagreement than expert agreement, since the latter grounds our ordinary knowledge.

A well-conceived notion of expertise also implies that an expert in X is a non-expert in anything but X. This is crucial, because conspiracists, from creationists to climate deniers, often present their own counter-experts, who lack the relevant expertise (biochemists who advocate creationism, paleontologists who discuss the collapse pattern of the World Trade Center, and so on); these counter-experts form another element of the conspiracist epistemology.

Epistemic costs and benefits

Demotix/P Nutt. All rights reserved.

More generally, this doubt about expertise is irrational because, if I can doubt some experts, it is precisely because I rely on what I know from other experts. If I genuinely doubted experts in principle, I could not even buy drugs prescribed by doctors, or trust newspapers – the cost would be very high indeed. This flaw in the conspiracist epistemology leads to its major problem, which concerns epistemic costs and benefits.

Think of Flat Earthism: accepting the theory that the Earth is actually flat would involve revising astronomy, geology and geography. Who would pay that price? As for believing in a faked moon landing, if we still wanted to be coherent we would have to doubt not only the Apollo missions but Nasa’s findings in general, astronomy, and even parts of sociology and psychology (which suggest that keeping a secret involving so many people for such a long time is impossible).

Conspiracy theories have a very high epistemic cost, because they necessitate the revision of so much of our usual knowledge – while the benefit they provide is very low. Great scientific revolutions have forced us to revise many scientific notions (from Copernicus to Darwin), but they provided huge benefits in return: the representation and calculation of celestial motions, and, in the case of evolutionary theory, a unique degree of integration of all the biological sciences (taxonomy, biogeography and morphology).

Quine’s view of science helps to make sense of this epistemic feature. For him, any piece of knowledge can, in principle, be revised. But science is structured like an onion: there are more or less central layers, namely the various disciplines or theories, with upper layers using the lower layers to justify their assumptions or axioms. Any experiment tests the whole theory as such, and hence does not tell us which layer should be revised if our predictions fail. Modifying a layer affects only the upper layers, not the ones on which it relies. For instance, volcanology is more external than biochemistry or quantum physics, hence any finding on volcanoes that contradicts our theories should first be dealt with by modifying volcanology, and only if that is impossible should one turn to the lower layers. Conversely, revising a core layer of knowledge such as mechanics may affect many other layers – as happened with evolutionary theory, or relativity.

At the core of the ‘onion of science’ are mathematics and logic. Any modification there would affect all the upper layers (physics, chemistry, and so on); hence, even if they are not a priori immune to revision, the mathematical-logical layers should be revised only very parsimoniously. The cost of any revision of the system of science, proposed to cope with experimental findings, can therefore be estimated by considering the number of revisions to be made in other layers of the system – which means that the more central the layer of knowledge involved, the more costly the revision.
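
As a toy illustration of this cost estimate (my sketch; neither Quine nor this article offers such a formalism, and the layer names and dependencies are invented), one can model the onion as a dependency graph and count, for each layer, how many other layers rest on it:

```python
# Toy model of the "onion of science": each layer lists the more central
# layers it rests on. The cost of revising a layer is estimated here as
# the number of layers that depend on it, directly or indirectly.
DEPENDS_ON = {
    "logic/mathematics": [],
    "physics": ["logic/mathematics"],
    "chemistry": ["physics"],
    "biochemistry": ["chemistry"],
    "geology": ["physics", "chemistry"],
    "volcanology": ["geology"],
}

def dependents_of(layer):
    """All layers resting, directly or indirectly, on `layer`."""
    found = {d for d, deps in DEPENDS_ON.items() if layer in deps}
    frontier = list(found)
    while frontier:
        current = frontier.pop()
        for d, deps in DEPENDS_ON.items():
            if current in deps and d not in found:
                found.add(d)
                frontier.append(d)
    return found

for layer in DEPENDS_ON:
    print(f"revising {layer}: {len(dependents_of(layer))} other layers disturbed")
# Ranks logic/mathematics as costliest to revise and volcanology as
# cheapest, matching the ordering described in the text.
```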

It is therefore rational to maximize the epistemic payoff by weighing all epistemic benefits and costs – taking the maximization of a payoff of some kind as a requisite of rationality, in accordance with the way economists conceive of rationality as maximizing utility. Any testimony about ghosts, for instance, can legitimately be deemed very improbable, because it is less costly to assume some local hallucination than to revise all the laws of physics and biology (which ghosts obviously transgress).

Conspiracy theories have a very high epistemic cost.

As for conspiracist epistemology, it obviously incurs huge epistemic costs: the dismissal of expertise, and the theoretical entities posited in order to provide a scenario replacing the official view (superpowerful agents such as the CIA, secret agencies, or the Illuminati). All of that implies major revisions in many layers of our system of knowledge. It is therefore, in general, less rational than making fewer major revisions (for instance, accepting some level of randomness in events, or accepting that some data are just wrong and should be neglected). This does not preclude that, in principle, a conspiracist view of some set of events could one day be rational, or that adopting a conspiracist epistemology (in which official data and experts are doubted in principle, and incoherencies in narratives are taken very seriously) could be warranted by the facts.

Suppose that one lives somewhere like North Korea: all that is reliably known of such regimes supports the idea that their citizens are systematically deluded. This is part of our web of knowledge (it is located somewhere in sociology and political theory). Accepting the official versions put out by those regimes would therefore impose a major revision of some parts of our web of knowledge, and incur a cost that is not incurred by people living within the system of expertise, media, pluralist press and institutions seen in democratic countries.

And the epistemic cost should not be evaluated only according to the specific revisions made to the web of knowledge – hence to specific theories. The methods used to assess evidence can themselves be deemed more or less costly, given the systematic revisions, or revision biases, they impose on the web of knowledge. This is a major element in our appreciation of conspiracy theories, and of the fact that a conspiracist epistemology, with its inverted process of assessing evidence (for instance, giving more weight to an anonymous blogger than to a professional scientist), is less rational than ordinary stances.

Finally, the evidence-assessing protocol proper to the conspiracist epistemology differs crucially from what the scientific method would recommend in the nature of its initial assumption. Being suspicious of experts and their reports in principle, on the grounds that they may be distorting reality in order to hide a plot, means that one already assumes that some conspiracy is going on. There is a crucial difference between starting from the hypothesis that historical events are sufficiently explained by the explicit intentions of agents, the laws of nature and some random circumstances, and occasionally doubting this explanation when certain empirical data challenge it, and, on the other hand, starting from the hypothesis of conspiring agents and then doubting any alternative hypothesis that does not accept this postulate.

This latter methodological option is proper to a conspiracist epistemology. It puts the burden of proof on the shoulders of those who do not hold a conspiracist worldview, even though it is difficult – indeed impossible – to prove the non-existence of something (for example, a specific conspiracy). Hence the often noted robustness of conspiracist theses in the face of empirical refutation.

Granting this methodological option, the epistemic benefits and costs of the conspiracist epistemology may be comparable to those of ordinary epistemologies. However, the grounding hypothesis itself globally imposes a very high cost on the web of knowledge, because at least all the layers of knowledge related to the human and social sciences would have to be revised. For instance, the Illuminati theory, the epitome of conspiracist worldviews, requires that all the history told by professional historians be trashed.

The theory of intellectual vices and virtues should therefore be supplemented by a kind of economics of rationality. Conspiracist epistemology seems irrational because it imposes extremely high costs that rival explanations do not incur.

Rationalities and the conspiracy theorist

Flickr/Guy Mayer. Some rights reserved.

All of this raises a major question regarding the rationality of the conspiracist. Consider someone holding a conspiracist epistemology. How come she is willing to pay the very high epistemic costs I have sketched out here? Wouldn’t that be fully irrational? The answer lies in our notion of rationality.

Notice that I have focused on epistemic costs, regarding the web of knowledge as our sciences constitute it. However, all the beliefs of an individual constitute a web of belief with the same structure: some beliefs support others, and any revision of a belief involves costs and benefits that are partly defined by the amount of overall revision it imposes elsewhere in the web. Therefore, if our conspiracist aims at a coherent web of belief, and the beliefs she holds oblige her to pay all the costs proper to her position without maximizing the benefits, then her conspiracist epistemology is irrational.

Yet rationality in the very minimal sense of economists, which is the sense implicitly used here, is notoriously value-free: whatever the preferences of the subject are, rationality is about maximizing the utilities related to those preferences. The preferences themselves are exogenous to rationality; that is, they cannot be rational or irrational. (Economists can define the strategy a rational agent would choose even when she is a drug addict who has heroin as her top preference.)

To this extent, one could also argue that the conspiracist is still rational. All the beliefs in her web of beliefs could indeed be ascribed a sort of utility in the economic sense, and an overall utility could be calculated for any change or addition of beliefs. Then it might be that the revisions imposed by her conspiracist belief upon her total web of knowledge are not so costly, because a few beliefs underpinning her acceptance of a conspiracy theory are so highly valued by her that their preservation compensates for the highly costly theory she wants to support.

One could also argue that the conspiracist is still rational.

For instance, suppose that the belief that the federal government is evil, or that the Jews rule the world, is given the most precious value in her web of belief. Then adopting a conspiracy theory that is in sync with those views would not be so costly overall, because it preserves the most valuable beliefs, whereas the official version would be at odds with them and therefore be more costly.
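
A toy calculation (with values invented purely for illustration; nothing here comes from the article) shows how such compensation works in this minimal economic sense:

```python
# Toy illustration: an agent's web of beliefs with subjective values.
# Each account is scored by the total value of the beliefs it forces
# the agent to give up; the "rational" choice minimizes that loss.
WEB = {
    "protected core belief (e.g. 'the government is evil')": 100,
    "trust in experts": 10,
    "standard history": 10,
    "ordinary causal explanations": 10,
}

# Beliefs each account would force this agent to revise (assumed).
REVISED_BY = {
    "official account": ["protected core belief (e.g. 'the government is evil')"],
    "conspiracy theory": ["trust in experts", "standard history",
                          "ordinary causal explanations"],
}

for account, revisions in REVISED_BY.items():
    loss = sum(WEB[b] for b in revisions)
    print(f"{account}: total value of beliefs given up = {loss}")
# With these weights the conspiracy theory costs 30 against 100 for the
# official account, so this agent, maximizing her own utilities, adopts it.
```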

That’s an example of how, if one uses a minimalist notion of rationality, neutral regarding the intrinsic value of beliefs, the conspiracist epistemology may not be wholly irrational.

But it is also plausible to think that those key beliefs, highly valued and supportive of the overall benefit of the conspiracist views, are themselves poorly supported by evidence, and therefore already require a conspiracist epistemology – all of which involves some vicious circularity.

