Rationality, Science, Pseudoscience & Policy

Scientific thinking is rational, evidence-based, and sceptical.

Pseudoscientific thinking is irrational, anecdotal rather than evidence-based, and gullible.

Generally, conventional science is scientific, and alternative science is pseudoscientific – but like most generalizations, this one sweeps important issues under the carpet.

Conventional Science

Conventional science is generally rational, but there are important flaws in its rationality. Conventional science relies heavily on peer review. Peer review is probably the best available method of assessment, but it certainly isn’t foolproof.

For one thing, there are several possible motivations for people to falsify the results of their experiments, and peer review gives no guarantee of catching them out. It may be claimed that such dishonesty will be uncovered in the longer term, but this isn’t necessarily true – or the longer term may be too long. One possible reason for a failure to expose false results is that they’re in an obscure area of science in which nobody takes a great deal of interest. One might argue that this really doesn’t matter much, and that’s fair enough. More seriously, the failure might be because it’s considered unethical to repeat an experiment in which the control group are given less than the best available medical treatment. There might also be controversy over the political implications of the work, resulting in some fudged compromise among the academics involved in the peer review process – leaving it difficult or impossible to determine whether the results were actually falsified.

Even where results are not deliberately falsified, whole groups of academics in fields where public scrutiny is difficult or impossible – which is many of them – may get wrapped up in their collective ideas. Where the only available peers to review your work are all part of the same community, it’s quite possible to go off on the most fantastic wild goose chases. Such a community might build a great tower of deductions, each reasonably secure in itself – say about 80% certain to be right. You only need five such reasonably secure deductions piled on top of one another for the composite deduction to be less than 33% likely to be right. Pile seven of them up, and even if each is 90% secure, the combined pile is more likely wrong than right. These “probabilities of correctness” are of course very subjective, too – it’s horribly easy to be over-optimistic about them, which is particularly problematical when there’s a pile of them... (For more detail on this, see Logic and Reality.)
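The arithmetic is easy to check. Here’s a minimal sketch (the per-step figures are of course illustrative, as in the text above, not measured):

```python
# Probability that a whole chain of deductions is correct, assuming the
# steps are independent and each has the same chance of being right.
def chain_confidence(per_step: float, steps: int) -> float:
    return per_step ** steps

print(chain_confidence(0.8, 5))  # 0.32768   - five 80% steps: under 33%
print(chain_confidence(0.9, 7))  # 0.4782969 - seven 90% steps: more likely wrong than right
```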

Another major issue, affecting medical research especially, is the non-reporting of negative results. Setting deliberate falsification aside, scientists will generally report both the negative and the positive results of an experiment honestly within any paper they write. The distortion happens one level up: when an experiment gives mainly negative results, the scientists are less likely to submit a report for publication at all, and even if they do submit it, journals are less likely to publish it. Negative results are less interesting than positive ones! The result is that any meta-analysis of published reports of the same or similar experiments is almost certain to give a more positive impression (of the effectiveness and lack of deleterious side-effects of a drug, for example) than is actually justified. (See also Medicines and Snake Oil.)
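A toy simulation makes the effect concrete (a sketch only – the effect size, noise level, and odds of publication are all invented for illustration):

```python
import random

random.seed(1)

TRUE_EFFECT = 0.0   # assumed: the drug actually does nothing
TRIALS = 1000

published = []
for _ in range(TRIALS):
    observed = random.gauss(TRUE_EFFECT, 1.0)   # honest but noisy measurement
    # Assumed odds that a result is submitted and accepted:
    chance_of_publication = 0.9 if observed > 0 else 0.2
    if random.random() < chance_of_publication:
        published.append(observed)

# A meta-analysis of the published record alone sees a spurious benefit.
print(sum(published) / len(published))   # noticeably above the true effect of 0.0
```

Every individual report in this model is honest; the distortion comes entirely from which reports survive to be meta-analysed.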

Yet another issue is sampling bias. See “Can we trust observational data? Keeping bias in mind” for more on this. (My cousin Martin is one of the authors; we’ve never discussed this issue as far as I can remember, but we are obviously of one mind on this subject!)

An important part of peer review should be replication – repeating someone else’s experiment to see whether you get the same results, or something different. Sadly, there’s little kudos or financial reward in doing this, and it’s not done nearly as often as it needs to be.

There’s an excellent section on all these issues, and some related ones, in Scientific American, October 2018, “How to Fix Science” (“Science Funding is Broken” in the online version) – but sadly that’s behind a paywall. Another relevant Scientific American article, also behind the paywall, is “A Significant Problem” (October 2019), “The Significant Problem of P Values” in the online version.

Alternative Science

Alternative science may not be rational, but that doesn’t mean it always gets everything wrong. Traditional knowledge may not have been consciously derived from meticulously recorded, planned experiments; but equally, it may well have been subconsciously derived from huge numbers of vaguely remembered unplanned experiments, passed on haphazardly by word of mouth. This is not a completely useless means of acquiring knowledge: it works reasonably well, which is why natural selection favoured our species’s possession of it. Some branches of alternative science give considerable weight to traditional knowledge.

Still, the bulk of alternative science is, in my opinion, pseudoscience. Even where its beliefs are correct, the thinking behind them may well be irrational.

There’s considerable antagonism between the advocates of conventional science and those of alternative science, resulting in misrepresentations of each other’s views on both sides.

For example, conventional scientists often state that the basic premise of homeopathy is that its active ingredients are more effective when more dilute, and most effective when diluted to such an extent that there is unlikely to be a single molecule of the active ingredient remaining in the dose finally administered. Some advocates of homeopathy do indeed make such claims, but it’s certainly not the basic premise, which is this: certain human afflictions may be cured by the administration of dilute solutions of poisons that, when taken in larger quantities, cause symptoms that resemble the affliction concerned.

Whoever first thought of trying this may or may not have had a rational reason; we will never know. (The origins of homeopathy go back much further than Samuel Hahnemann, who is often regarded as its founder. He was no such thing: he was merely the chap who coined the term.) Whatever the original reason was, there have been many rationalizations since, some more rational than others. Whatever the rationale, it works – in some cases. For example, digitalis from foxgloves does control some heart problems. It’s a traditional cure, based on the homeopathic principle (though the first people to use it didn’t call it that). (See also Old Wives' Tales.)

Traditional cures have been extensively “mined” by the pharmaceutical industry for real, working cures. What is left is probably largely ineffective, because what was effective has already been adopted by conventional medicine. Certainly the “dilute beyond the point of zero concentration” technique is demonstrably ineffective – with the important proviso that it can, like chalk and sugar tablets, work by reassuring the patient that something is being done. This even works on small children and animals, if only by second-hand reassurance. Long live the placebo effect!

Policy Making

The divide between conventional and alternative thinking is at its most controversial when it comes to policy making – and at its least clear cut.

There are those who claim that opposition to nuclear power and genetic engineering falls plainly in the pseudoscientific camp. There are those who claim that worries about rising carbon dioxide levels in the atmosphere fall plainly in the pseudoscientific camp.

Equally, there are those who would make precisely the converse claims.

Neither claim is appropriate. Certainly when either side makes pseudoscientific statements, it should be challenged about them. But these are questions of policy, not simply of scientific truth. Neither side of any of these debates has a monopoly on science. The policy decisions need to be made on the basis of an understanding of the scientific truths, but they cannot be determined on scientific grounds alone: there are real choices to be made between alternative courses of action, with different consequences in each case – and, more problematically, with different probabilities of various consequences. Science can help us determine the probabilities of the consequences of the various courses of action, but it’s a political question which of those consequences – and probabilities – we prefer.
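One way to see the division of labour is a toy expected-utility calculation (everything below is invented for illustration: the policies are hypothetical, the probabilities stand in for what science might supply, and the utilities are value judgements):

```python
# Hypothetical policies, each with science-supplied outcome probabilities.
policies = {
    "build":       {"cheap power": 0.70, "serious accident": 0.01, "status quo": 0.29},
    "don't build": {"cheap power": 0.00, "serious accident": 0.00, "status quo": 1.00},
}

def expected_utility(outcomes, utilities):
    return sum(p * utilities[outcome] for outcome, p in outcomes.items())

# Two sets of value judgements (utilities) over the same outcomes:
cautious = {"cheap power": 1.0, "serious accident": -200.0, "status quo": 0.0}
bullish  = {"cheap power": 5.0, "serious accident": -50.0,  "status quo": 0.0}

for name, outcomes in policies.items():
    print(name, expected_utility(outcomes, cautious), expected_utility(outcomes, bullish))
# build:       cautious -1.3, bullish 3.0
# don't build: cautious  0.0, bullish 0.0
```

With the cautious weights the calculation favours not building; with the bullish weights it favours building. The science (the probabilities) is unchanged throughout; only the political valuation of the outcomes differs.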

The important thing is not to allow either side to attempt to influence policy on the basis of pseudoscience – either totally unsupported claims as to the consequences of proposed courses of action, or claims of an unrealistic level of certainty as to the outcomes.