The Gordian Knot of Demarcation: Tying Up Some Loose Ends

ABSTRACT In this article, I seek to improve upon a definition of pseudoscience put forward by Sven Ove Hansson. I argue that not only does its use of ‘pseudoscientific statement’ as definiendum inadequately address the theoretical issue of demarcation, it also makes the definition inapt for practical demarcation. Moreover, I argue that Hansson’s definition subsumes statements and associated practices that are forms of bad science, resulting in an unfavourably wide concept. I try to save the definition from the brunt of this critique by arguing that some pseudoscientific statements are particularly significant when demarcating a discipline as a pseudoscience.


Introduction
Whereas 'science' is typically cast in a positive light and accepted as the highest authority on epistemic issues, 'pseudoscience' is a derogatory term. The perceived relationship between these concepts can be put somewhat bluntly: pseudoscience is parasitic on science as the paradigm for knowledge and knowledge production, leeching off the authority and legitimacy of proper science.
While there is often agreement on the pseudoscientific status of disciplines like astrology and homeopathy, there is none concerning the criteria of demarcation (Hansson 2017). Hansson suggests demarcating pseudoscience from science by postulating science as epistemically authoritative, and defining as pseudoscientific unreliable doctrines on scientific issues: 'Instead of specifying the features of science that make it epistemically privileged, it cuts through that issue and simply assumes that science provides us with the most epistemically warranted statements that can be made, at the time being, on the subject matter within its domain' (Hansson 2009, 240). Hansson holds that proponents of competing criteria for demarcation largely agree on particular demarcations: their criteria are simply alternative approaches to the above issue of warrant (Hansson 2009).
However, Hansson only defines 'pseudoscientific statement'. The adjective 'pseudoscientific' is derived from the noun 'pseudoscience', meaning 'of', or 'relating to', pseudoscience. Thus, defining 'pseudoscientific statement' only takes us so far towards an understanding of the pseudosciences.
Moreover, if applied, Hansson's take on pseudoscience as stand-alone statements permits only a fragmentary and inefficient approach to the practical problem of demarcation: instead of challenging the disciplines head-on, and in one fell swoop stripping a wide range of statements of an ill-gotten scientific status, it leaves us the tedious task of assessing numerous individual statements one by one.
An additional issue is that the definition, despite Hansson's intention, subsumes several statements and associated practices that are instances of bad science. I shall make this argument by demonstrating how a particular case of bad science, scientific myths, i.e. uncorroborated or discredited statements widely propagated by authoritative research publications, meet all the criteria in Hansson's definition, and thereby qualify as pseudoscientific statements. I shall argue that 'pseudoscience' ought to be explicated in a way that distinguishes pseudoscience from bad science.
I shall attempt to resolve all three issues by making 'pseudoscience' the definiendum, and by adding a criterion of significance to Hansson's definition. Introducing a working distinction between core and subsidiary statements, I shall argue that core statements, unlike subsidiary statements, are significant to the identity of the corresponding epistemic discipline, and that criticism of the core statements will have greater impact on the conceived legitimacy of the discipline than criticism of the subsidiary statements. Although reminiscent of Lakatos's distinction between core and auxiliary hypotheses (Lakatos 1978a), I will explain how they differ in section 3.

Pseudoscience as Unreliable Doctrines
Hansson develops his definition in the course of three articles (Hansson 1996, 2009, 2013; but see also 2016). Although my critique in general applies to each of these versions, I shall take as my point of departure his 2013 account:

A statement is pseudoscientific if and only if it satisfies the following three criteria:
1. It pertains to an issue within the domains of science in the broad sense (the criterion of scientific domain).
2. It suffers from such a severe lack of reliability that it cannot at all be trusted (the criterion of unreliability).
3. It is part of a doctrine whose major proponents try to create the impression that it represents the most reliable knowledge on its subject matter (the criterion of deviant doctrine). (Hansson 2013, 70-71)

In criterion 1, of scientific domain, Hansson has opted to include all academic disciplines, rather than using 'science' in the narrower sense of the term, covering merely the natural sciences. He stipulates 'science' as synonymous with the German Wissenschaft, which also includes the social sciences and the humanities. The rationale is that the sciences in this broad sense produce 'the most epistemically warranted statements that can be made, at the time being, on the subject matter within their respective domains' (Hansson 2013, 63). This extended concept of science also enables critique of epistemic disciplines like Holocaust denialism, Bible codes, and their likes (Hansson 2013, 65), as well as a host of parapsychological disciplines perhaps otherwise excluded. Pseudoscientific beliefs are not an issue exclusive to the natural sciences. A wide concept of science, and subsequently also of pseudoscience, meets this need for demarcation.
Criterion 2, of unreliability, stresses the statement's want of warrant: a pseudoscientific statement 'suffers from such a severe lack of reliability that it cannot at all be trusted' (Hansson 2013, 67). This criterion is consistent with several conceivable explanations of the intrinsic untrustworthiness of the pseudosciences, whether rooted in dogmatism, the use of weak or inept methods, or vague or ambivalent concepts. Furthermore, by stressing reliability rather than veracity, the definition avoids labelling large parts of the history of science as less than scientific (Hansson 2013).
In criterion 3, Hansson contrasts a doctrinal aspect of pseudoscience with the individual mishaps and mismeasurements of bad science. He exemplifies bad science with the following imaginary case: A biochemist performs an experiment that she interprets as showing that a particular protein has an essential rôle in muscle contraction. There is a consensus among her colleagues that the result is a mere artefact, due to experimental error (Hansson 1996, 173; 2017).
While the above example is an instance of bad science, a pseudoscientific statement is 'part of a doctrine whose major proponents try to create the impression that it represents the most reliable knowledge on its subject matter'. According to Hansson, the continued propagation of unwarranted, unreliable beliefs as authoritative is what separates pseudoscience from isolated and accidental instances of poor workmanship. Hansson exemplifies pseudoscience thus: A biochemist goes on performing one sloppy experiment after the other. She consistently interprets them as showing that a particular protein has a rôle in muscle contraction not accepted by other scientists (Hansson 1996, 173; 2017).
Unlike the first case, here the mishaps have become a practice, and the researcher is repeatedly trying to validate the statement. While bad science is limited to discrete episodes of failure, pseudoscience becomes manifest at a systematic level; it is 'a sustained effort to promote teachings different from those that have scientific legitimacy at the time' (Hansson 1996, 173; 2009, 240; 2017).
However, this is, as noted, a definition of 'pseudoscientific statement', and not of 'pseudoscience'. How we can construct a noun from this adjective, and a discipline from these statements, Hansson does not say. While it seems plausible that a pseudoscience contains a greater measure of pseudoscientific statements than do the sciences, it is not clear how many are needed in order to rightly classify it as a pseudoscience.
Second, assessing individual pseudoscientific statements detached from their disciplines will be laborious, time-consuming, and inefficient. The Italian programmer Alberto Brandolini is attributed an informal yet illustrative aphorism referred to as the 'Bullshit Asymmetry Principle' or 'Brandolini's Law': 'the amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it' (Nicoulaud 2014). James Ladyman, more specific and less tongue-in-cheek, writes: '… confronting pseudoscience in this way is problematic: it consumes too much time and too many resources, is not useful when engaging in public debates that operate at a general level, and is too detailed for scientifically nonliterate audiences' (Ladyman 2013, 49-50).

In recent years, psychologists and political scientists have debated whether debunking efforts are also thwarted by a 'backfire effect': debunking a piece of bunkum may contribute to the cementation of the very same claim. Nyhan and Reifler (2010) seminally described such an effect when studying the correction of political misperceptions; however, a larger study (Wood and Porter 2019) failed to replicate it. One recent meta-analysis, while not rejecting the possibility of a backfire effect, shows that debunking efforts are more likely to succeed if they also provide the public with a coherent explanation of how the false information originated (Walter and Murphy 2018). Nonetheless, the relevant upshot is that debunking is a demanding task in terms of time, resources, effort, and communication. If the ambition is to efficiently counter the diffusion of pseudoscientific statements, their potentially vast number suggests rejecting an exclusively piecemeal approach. Admittedly, Hansson does not consider the definition practically applicable, owing to its lack of specific methodological criteria (Hansson 2013). However, pseudoscientific statements can be compared to scientifically warranted statements, and the definition can thus be put into operation. I shall attempt to demonstrate this in the next section.
Third, bad science can in fact be doctrinal. I shall demonstrate this using as an example the supra-individual nature of scientific myths: the distribution of a myth is a collective phenomenon, and scientific myths qualify as deviant doctrines through their extensive and persistent proliferation. Consequently, Hansson's distinction between pseudoscience and bad science does not hold. Consider these three true-life scenarios:

(i) An educational researcher argues for the importance of practice and active learning strategies. In his discussion, he lends support from an educational model ('the Learning Pyramid') frequently quoted within his field. He does not consult or cite sources for this model himself, because it appears to be common knowledge.

(ii) Another educational researcher contends that digital technology can enhance education, and quotes a version of the above educational model that seemingly supports the use of such technology. She does consult the source she cites, but fails to recognise that it offers no evidence in favour of the model whatsoever.

(iii) Yet another educational researcher quotes this educational model. She cites and consults an oft-cited source, and recognises that the evidence is inadequate. Nevertheless, she chooses to ignore this problem, as the general approval of the model will be adequate for her argument to be accepted.
These examples all represent offences against norms of good research, and serve to exemplify different forms and grades of bad science. The reader will presumably consider (ii) to be a rather minor breach, while (i), not citing a source for what one conceives to be common knowledge in the field, is arguably consistent with the guidelines of major citation style guides, despite representing a rather slapdash approach in this context. On the other hand, (iii) is arguably a case of deliberate misconduct.
It seems unfitting to classify such presumably common deviations from responsible research practice as pseudoscience. Nevertheless, according to Hansson's definition the unwarranted statements proliferated in (i)-(iii) are pseudoscientific. Although these examples may be indicative of a shared culture of sloppiness, the responsibility for all these breaches belongs to the individual researchers, and they are discrete incidents with individual explanations. Observe, however, the following context: these three researchers are accompanied by hundreds of others in distributing this model in peer-reviewed journals and field-specific encyclopaedias, and the academic distribution of the model has persisted for a century and a half (Letrud and Hernes 2016, 2018). Incorporating the three above episodes within this collectivistic perspective exposes them as parts of a larger issue: as instances of scientific myth propagation.
The above-mentioned educational model, as well as several other scientific myths propagated by academic journals, satisfies the criterion of scientific domain, as well as the criterion of unreliability (see, for instance, critique of the Hawthorne Effect in Carey 1967; Jones 1992; Kompier 2006, and of the Yerkes-Dodson law in Teigen 1994; Hancock and Ganey 2003; Corbett 2015). They also satisfy the criterion of deviant doctrine: their proponents do indeed seek to create the impression that these statements represent state-of-the-art knowledge on these particular subject matters.
Unfortunately, Hansson does not explicate what distinguishes pseudoscience from bad science beyond this demonstrably inadequate doctrinal component. I agree, however, that we ought to sustain the distinction between bad science and pseudoscience. Otherwise, several deviations from responsible research practice, however trivial, may fall under the concept of pseudoscience. I would consider this to be a rather heavy-handed normative approach, given the derogatory nature of the term and the presumed prevalence of such practices exemplified by (i)-(iii). And, if this concept is set to also comprise all sorts of sloppy epistemic work, the more fundamental issues with pseudoscience will be lost from sight.
Admittedly, assessing theories and expertise based on scientific myths (e.g. learning styles) or pseudoscience (e.g. astrology) may amount more or less to the same problem in professional deliberations. However, separating pseudoscience from bad science allows us to recognise bad science as an internal problem as well as an internal responsibility. While the pseudosciences may be parasitic on the authority of the sciences, bad science represents not only a threat to the authority of science, but also to its legitimacy.
I believe Hansson's definition can be salvaged if the pseudoscientific statements are linked to their respective disciplines: adding a disciplinary perspective will frame these pseudoscientific statements as parts of a pseudoscience, reduce the number of statements that need to be assessed, and delimit pseudoscience from bad science.

The Criterion of Significance
I shall replace the criterion of deviant doctrine with a criterion of disciplinary significance in an attempt to address the issues raised in the previous section. The argument that I shall pursue is that Hansson's pseudoscientific statements ought to be limited to those that are closely associated with the identity of their discipline. By adding this criterion of significance to the unreliable yet persistent statements identified as pseudoscientific by Hansson, I aim to make these statements indicative of their discipline's legitimacy.
This effort at formulating an additional criterion takes as its basis a conceived distinction between what I shall refer to as 'core statements' of a discipline, and 'subsidiary statements'. The argument that I shall seek to develop is that core statements are pivotal for the identity of their epistemic disciplines, in that one cannot reject them and still be regarded as a supporter of the discipline, at least not in its present form. Subsidiary statements, on the other hand, may be widely held and frequently cited among the discipline's participants, but it is possible to contradict these subsidiary statements (albeit not without controversy) while continuing to be an adherent of the discipline. One can be an astrologist while believing that Virgos are loud, outgoing people (apparently, they are not). But one cannot hold that the positions of the stars and the character and behaviour of people are unrelated. Critique of core statements will also extend to the identity and legitimacy of the discipline. If critique brings about a revision of these core statements, the resulting discipline will acquire a different identity.
The core/subsidiary distinction resembles the Lakatosian distinction between hard core and auxiliary hypotheses (Lakatos 1978a). However, in this case it is more pertinent to speak about core and subsidiary statements rather than hard core and auxiliary hypotheses: first, Lakatos models his hypotheses on historical cases of physics, distinguishing, albeit not systematically, between non-empirical laws on one hand, and observation statements on the other (Musgrave and Pigden 2016). This division cannot easily be made for sciences in the wide sense, i.e. the statements of the social sciences and the humanities. Second, core and auxiliary hypotheses are integral parts of Lakatos's own definition of pseudoscience: when a research programme based on these hypotheses is unable to discover new facts, it is degenerative, or pseudoscientific (Lakatos 1978b). The core/subsidiary distinction allows me to draw a line of demarcation different from the one drawn by Lakatos.
I shall borrow a set of statements from homeopathy in order to capture the core/subsidiary distinction in a working criterion: while the field of homeopathy presumably contains statements consistent with the medical sciences, such as recommendations of drinking water when suffering from urinary infections, it also includes several statements that fail to be recognised by physicists, chemists and medical researchers. Some of these statements are only found within, and are significant to the very identity of, the discipline. Consider the following statements:

(1) The more stages of dilution and succussion the preparation has gone through, the more potent the medicine is … (British Homeopathic Association n.d.-b)

(2) [W]ater is capable of storing information relating to substances with which it has previously been in contact. (British Homeopathic Association n.d.-a)

According to homeopathic potentisation theory, diluting a working substance in water makes the substance stronger, more potent. The most potent dilutions do not retain any molecules of the original substance, and homeopaths theorise that water somehow remembers the molecular structure, or other qualities, of the substance. This imprint is produced by a particular way of shaking the dilutions, 'succussion'.
Both the potentisation and the water memory hypotheses are rejected outside the realms of homeopathy (Ball 2008). Both are, however, major theoretical and practical premises for the homeopathic discipline. Criticising the correctness of statements (1) and (2) amounts to a radical challenge to the entire field. This is not the case with subsidiary statements. Consider the following cure for the fear of heights published on the webpages of the British Homeopathic Association:

(3) The picture of the fire brigade being called out to rescue a kitten from the top of a tree is almost a cliché. The kitten has no trouble getting up there, yet seems rooted with fear to the branch. No amount of coaxing seems to help. Yet logically, cats are climbers. It will not fall, it will eventually climb down. It is a question of self-belief. You do not, after all, find cat skeletons in trees. … The Lacs (the Milk remedies) have certain themes in common. Milk is the food used to nurture all young animals. This indicates their use in situations in people's lives which revolve around issues of dependence and independence. And about bonding between parents and child. Each of the milk remedies will manifest this in a particular way. Lac felinum [cat's milk] is very useful for fear of heights as illustrated beautifully in this case. (Souther and Coates n.d.)

Perhaps unsurprisingly, searches for relevant studies in PubMed and Scopus with the search strings 'cat's milk' and 'lac felinum' produced no results. Furthermore, it seems uncontroversial to claim that anecdotes about feline behaviour are insufficient evidence for the efficacy of cat's milk against the fear of heights. Yet, rejecting the entire epistemic discipline of homeopathy based on (3) would be premature, as opposed to doing so based on the above examples (1) and (2). The distinction between core and subsidiary statements that I seek to establish is probably not clear-cut. Nevertheless, the resulting definition ought to be functional.

I propose the following modification of Hansson's definition:
A pseudoscience is an epistemic discipline that contains a sufficient number of statements that satisfy all of the following three criteria:
1. It pertains to an issue within the domains of science in the broad sense (the criterion of scientific domain).
2. It suffers from such a severe lack of reliability that it cannot at all be trusted (the criterion of unreliability).
3. It is a core statement of its discipline (the criterion of significance).

I am, admittedly, unable to state the specific number of such statements needed to qualify a discipline as a pseudoscience. Theoretically, the number of core statements of a given discipline may vary, and those with several core statements would conceivably be less vulnerable to criticism than disciplines containing only a few. However, the suggested definition will at least reduce the practical problem of the piecemeal strategy: by limiting the relevant statements to those that are core statements of the discipline, there will be fewer statements to criticise. And, due to the central role played by these statements, criticising them becomes a matter of urgency, and debunkings will presumably gain gravity. Therefore, using time and energy debunking core statements can more easily be justified, because it will be more efficient.
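The logical form of the modified definition can be rendered schematically; the predicate symbols Dom, Unrel, and Core, and the threshold n, are illustrative shorthand of my own, not part of the definition itself:

```latex
% Schematic rendering of the modified definition.
% Dom(s):     s pertains to an issue within the domains of science in the broad sense
% Unrel(s):   s suffers from such a severe lack of reliability that it cannot be trusted
% Core(s, D): s is a core statement of discipline D
% n:          the (unspecified) threshold for 'a sufficient number'
\mathrm{Pseudoscience}(D) \iff
  \bigl|\{\, s \in D : \mathrm{Dom}(s) \wedge \mathrm{Unrel}(s) \wedge \mathrm{Core}(s, D) \,\}\bigr| \geq n
```

The schema makes explicit that the admitted open question above is precisely the value of n, while the restriction to core statements does the work of shrinking the set that must be assessed.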
Finally, by replacing the criterion of deviant doctrine with the criterion of significance, the modified definition delimits pseudoscience from bad science more efficiently than did the original definition: while flawed statements are seemingly rampant in some epistemic disciplines (e.g. educational studies), disciplines are pseudosciences only when they are based on such statements. The same statement can thus, in theory, be both pseudoscientific and a case of bad science. Consider the widespread claims that we only use 10% of the brain, or that sugar makes children hyperactive. If these statements were widely distributed by academic publications, they could qualify as scientific myths. If they were made the bases of epistemic disciplines, e.g. 'Deci-cerebral Neuropsychology' or 'Saccharide Behaviourism', they would enable us to classify these disciplines as pseudosciences.
The definition of pseudoscience offered by this paper admittedly makes trade-offs for simplicity and applicability. By focusing on statements that deviate from epistemically warranted statements, both the modified and the original definition largely reduce the 'pseudo' of pseudoscience to an epistemic issue, leaving much of the normative aspect unaddressed.
I suspect that there is also an element of sham that puts the 'pseudo' in 'pseudoscience'. Picture, for instance, an attentive, soft-speaking homeopath wearing a doctor's coat and a stethoscope, with diplomas from accredited homeopathy colleges displayed on the wall of his office, or a homeopathic researcher wearing a lab coat, making careful measurements in a lab with computers, tubes and microscopes (e.g. images.google.com, search string 'homeopath*'). Categorising homeopathy as a pseudoscience is not exclusively a question of unreliability. It is also an issue of plucking a borrowed medical plume: a competence and legitimacy acquired through the appropriation of these symbols of authority and professionalism. I surmise that most pseudosciences will feign some sort of scientific authority, whether by wrongfully associating themselves with meticulous and methodical studies, collective critical efforts, institutions for research and education, or even legends of inspiration and genius. If this is correct, a cultural and sociological approach to the issue of pseudoscience will better address this derogatory aspect of the term.