Re: How Many Coincidences Make a Conspiracy?

I got a somewhat humorous two-page email on conflicting claims about Covid-19 a few weeks back. I could probably, off the top of my head, produce a 20-page article on conflicting claims.

Interesting take. Some of it, I suspect, is right, but I do not agree with the take on the severity of Covid-19. Everyone seems to be able to interpret the numbers in a way that buttresses their own biases.

So, this fellow Dr Vernon Coleman makes a case that a number of others have made with respect to “follow the money” issues. He asserts that institutional corruption, big pharmaceutical corporate influence, Bill Gates’ Machiavellianism, and other corrupt factors are at play here. I have sympathy with those views, but do not have access to all of the relevant information, since of course it is probably a conspiracy at some level. I do not know where the truth resides; I have only suspicions.

Having read opinions that diverge widely on just about any issue, I have lost faith in the ability of people, all people, myself included, to actually have a correct understanding of any moderately complex issue.  Much recent work showing how science is failing us only makes things seem worse.

With Covid-19, I have seen more conflicting opinion than any person can make sense of. Of course, all claim that evidence, data and logic are on their side. Sometimes, opinions fall more or less along the right-left political divide, but on Op Ed News, a progressive site I have often read, some of the folks I previously thought had a good understanding of things have been coming out with views on Covid-19 that many associate with the right.

Some are more ready to see conspiracy than others, although for those tending towards seeing conspiracy, there is a lot of dispute about the nature and breadth of any conspiracy.

Since we still have to act, even with uncertain knowledge, I believe that risk management for Covid-19 suggests that:

  • the disease must be treated seriously (the downside of such infection control measures may or may not be exaggerated)
  • it is way worse than the flu both in its effects (not just death) and its infectivity
  • social distancing is prudent to reduce viral load
  • masks are useful, both to keep an infected wearer from spreading the virus and to protect the wearer from infection; they do not have to work 100%, or filter out virus-sized particles (the downside seems exaggerated IMHO). The main thing is to reduce viral load
  • reduce viral load, and you are less likely to have a severe infection (a claim I read recently that one virion is enough to infect you makes little sense to me – we do have immune systems)
  • various vitamins, minerals, and supplements give greater protection (D3, K2, B complex, Zinc, Selenium, C, NAC, Quercetin, Calcium, Magnesium, Potassium, fish oil, Omega-3, healthy fats)
  • we cannot always get what we need from food, for various reasons – organic may well be better in its nutritional profile, but maybe not always
  • in many cases we do not get enough sunshine, so many of us are quite Vitamin D deficient
  • stress reduction – keep the cortisol from damaging your body with sleep, exercise, meditation, good relationships, and other stress reducing measures
  • adequate sleep helps increase immunity
  • above all, you want to reduce your viral load – eliminating it would be nice, but the body, with a good immune system, can more often than not fight off smaller amounts of the virus
  • we may never reach herd immunity; there are indications that immunity is short-lived
  • an effective vaccine may never be developed
  • vaccines have a downside, and frequently injure a lot of people

There are risk factors: older age, elevated blood pressure, obesity, impaired sugar metabolism (pre-diabetes or full-blown), and probably other things. However, some studies have shown that low Vitamin D is a huge risk factor for severity of the disease. That is something that can be remedied easily.

Some of my thinking will eventually prove to be mistaken, but these are my working assumptions.

Denis Rancourt, a formerly tenured and now fired professor from the University of Ottawa, has written a paper claiming that no research shows that masks work. I don’t think he is correct, and am aware of studies showing the opposite. Are they good studies? Maybe. However, Dr. Rancourt is not alone in his views. Russell Blaylock, a retired U.S. neurosurgeon, is saying similar things. Who is correct? I dunno. See my comments above on risk management.

With respect to HCQ fraud (talked about in the video above): thinking about what is happening and who is benefiting, I only have suspicions. I have noticed some patterns that make the best sense in “follow the money” terms. I don’t think I could do this one justice if trying to write about it.

Other researchers are finding smoke in the current pandemic response, and writing about the things that smell funny, but without documents, leaked correspondence, or surreptitiously recorded video, we are left with speculation and circumstantial evidence. Even if we get the straight goods, it is not something likely to penetrate through to the masses, since the mainstream media does a pretty good job of gatekeeping.

Reliable and Probative

“Observations are not simply vetted by presuming the reliability of their source, but also by the probative value which they may serve to impart to the question at hand. If you reject highly probative observations, simply because you have made an assumption as to their reliability – this is a practice of weak science.” — The Ethical Skeptic

So, if reliable and probative are independent concepts (orthogonal), it follows that we can make a two dimensional matrix for them, and classify various things on both dimensions, to see where they fall. Maybe instead of the five dollar word probative, we should use the one dollar word ‘relevant.’ For the word reliable, we could use ‘certainty that it is correct.’

So, what fits into the matrix cells? I have no examples at present. However, the idea seems interesting.

  • Strongly probative, highly reliable: best evidence, both certain and relevant
  • Strongly probative, unreliable: not so good, relevant but not at all certain
  • Weakly probative, highly reliable: not so good, certain but with no bearing on the situation
  • Weakly probative, unreliable: poorest evidence, neither certain nor relevant
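As a toy illustration (the function name and labels are mine, not from any source), the four cells can be expressed as a tiny classifier:

```python
# A toy classifier for the two-by-two evidence matrix; names are illustrative.
def evidence_quadrant(reliable: bool, probative: bool) -> str:
    """Place a piece of evidence in one of the four cells."""
    if probative and reliable:
        return "best evidence: both certain and relevant"
    if probative:
        return "not so good: relevant, but not at all certain"
    if reliable:
        return "not so good: certain, but no bearing on the situation"
    return "poorest evidence: neither certain nor relevant"
```

Treating the two dimensions as independent booleans is of course a simplification; in practice both reliability and probative value come in degrees.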

“probative value

The ability of a piece of evidence to make a relevant disputed point more or less true.

For example: In a trial of a defendant for murder, the defendant’s dispute with his neighbor (unrelated to the crime) has no probative value because it provides no relevant information to the trier of fact. However, if the defendant’s dispute was with the victim, this has a much higher probative value as it could be a motive for the murder.”

Reasoning from evidence


My intention is to discuss how we can improve our confidence in our beliefs, and thus have greater assurance that our actions, based on our beliefs, will be appropriate.

Thinking about this a little bit more, I decided it does not make sense to talk about “science” as though it is some monolithic creation. There are individuals and institutions involved in the scientific enterprise: administering, funding, publishing, supporting, doing, … . Some are brilliant, some are drones. Some are careerists and opportunists; some just really are driven to know stuff. Some are honest; some are lacking in the integrity department. Some do good work; some produce pretty shoddy stuff.

Since this is an ambitious program, it will take me a lot of time, and will surely result in multiple posts.

I have been saying for quite a while now that not only is science broken, but all disciplines are broken. I have informed opinions on the brokenness of science and the work of academics in general.

I wish science worked better, in any number of ways. It is not nearly as effective at figuring things out as many believe, as I used to believe. Many groups, doing studies, looking at the underpinnings of the scientific enterprise have revealed its inadequacies. It is worth looking at some of this literature, just to become aware of some of the issues, if you have not done so.

It is probably the best method we have regardless, for examining many types of issues, despite its flaws, which are numerous. Enthusiasm for science needs to be tempered.

These flaws are in the areas of: bias, careerism, dishonesty, incompetence, inadequate methodology, statistical inadequacy and misunderstanding, flawed peer review, poor research design, dogmatism, fashion, corporate vested interest, failure to replicate, triviality of research, lack of relevance of studies to the issue and so on. I wish it were a lot better. The criticisms by John P. Ioannidis on the problems with tests of significance are really troubling, since it seems to show that what many of us were taught and have believed about statistics was quite a bit off the mark.

See, for instance, web search results for “failure to replicate”.

 A good researcher might say:

My interpretation of the evidence leads me to believe such and such. However, I realize that the evidence may be ambiguous and my interpretation unsound. I may be missing critical evidence. I may be ignoring, or unfairly rejecting, other interpretations. I may be biased. I may not have thought things through clearly. I also realize that the evidence itself may be fraudulent, manufactured. As a result, based on the evidence that I am familiar with my conclusion points in this direction. Others may agree; some may disagree. I know that science is not settled and may never be.  

Given the uncertainty of opinion, how do we increase our confidence in our conclusions, i.e., how do we know what to believe?

Remember, hold your views lightly, there is an excellent chance a lot of them are wrong, sometimes disastrously wrong.

Possible Future topics:

The Interpretation of Evidence


  • Recognition, manipulation and creation of patterns is the essence of thinking


  • Evidence – how do we get it?
  • Interpretation of evidence


  • It may be that there are facts, but we only have evidence, which we must interpret.


  • The meaning of proof

Assessing information

  • Information, disinformation, misinformation: we must interpret and evaluate it


  • Kidding ourselves, this is what we want to reduce
  • Ambiguity of evidence
  • Ambiguity of research results

Subjective or objective

  • All studies have to be interpreted – not without subjectivity
  • Subjective or objective – a distinction without that much merit


  • The reliability of evidence and interpretation


  • Probative value


  • We want a plurality of sources
  • Plurality means multiplicity

Ockham’s razor

  • Occam’s razor, the law of parsimony, is past its prime; it never was a very good rule

Anecdotes as evidence

  • What is the role of anecdotal evidence in investigation?
  • Anecdotal evidence is still evidence
  • Anecdotes are essential for navigating through life, and have a necessary place in scholarly study
  • The dismissal of anecdotal evidence without consideration is irrational, and shows the malign influence of pseudo-skepticism on scholarship
  • See for a bit more.

Bias affecting interpretation

  • Cherry picking of data
  • Cherry picking of evidence

Causation is not shown by correlation

  • Causation is not shown by correlation

Use of logic


  • Analogy
  • Cumulative argument
  • Meaning
  • Relationships
  • Semantic nets

Classical logic

  • Fallacies
  • Syllogisms
  • Logic trees

Computational logic

  • Boolean algebra
  • If-then-else logic
  • Truth tables

Inductive logic

  • Inductive argument

Mathematical Logic

  • First-order predicate calculus
  • Gödel’s theorem
  • Symbolic logic
  • Functional analysis
  • Mathematical mapping
  • Mathematics – subset of logic


  • Fuzzy logic
  • Multi-valued logic

Use of measurement

Big data

  • Big data analysis

Data quality

  • Data quality

Display of data

  • Displaying data as graphs, charts, trees, networks, semantic nets, other graphical modelling techniques


  • Qualitative versus quantitative thinking

The peer review fallacy

  • Peer review is flawed. Peer review serves more to impede innovative work and force group consensus than to improve scientific findings

Some basic ideas on statistics

Abuse of statistics

  • Misuse of statistics
  • Lies, damned lies and statistics


  • Bayesian probability/statistics


  • Correlation, co-variance
  • Multiple regression
  • Time series
  • Time-lagged multiple regression

Effect size

  • Small effects, big effects
  • Size of the effect versus the statistical significance of the effect

Hypothesis (null)

  • Hypothesis testing
  • Setting up a null hypothesis

Power of a study

  • The power of a statistical study and the number of observations (n)


  • Odds – calculated and estimated

Significance levels

  • Tests of significance
  • The magical 95% level of significance
  • If the effects are strong and clear, the sample size large, tests of significance probably don’t add that much value
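The last point is easy to demonstrate with a rough sketch (the helper function below is mine, built on a simple two-sample z test): with an effect of a full standard deviation and a thousand observations per group, the p-value is so small that the formal test tells us nothing inspection had not already.

```python
import math

def two_sample_z_p(mean_diff, sd, n_per_group):
    """Two-sided p-value from a two-sample z test (equal n, known sd)."""
    se = sd * math.sqrt(2.0 / n_per_group)     # standard error of the difference
    z = mean_diff / se
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided tail probability

# A strong, clear effect: one full standard deviation, n = 1000 per group.
p = two_sample_z_p(mean_diff=1.0, sd=1.0, n_per_group=1000)
# p is astronomically small; the significance test adds nothing to the picture.
```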

Size of effect

  • The effect may be too small to be important, but still statistically significant
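The converse can be sketched the same way (again, the helper is illustrative, using a simple two-sample z test): an effect of one hundredth of a standard deviation is practically negligible, yet with a million observations per group it clears the usual significance bar by a wide margin.

```python
import math

def two_sample_z_p(mean_diff, sd, n_per_group):
    """Two-sided p-value from a two-sample z test (equal n, known sd)."""
    se = sd * math.sqrt(2.0 / n_per_group)
    return math.erfc(abs(mean_diff / se) / math.sqrt(2.0))

# A trivially small effect (0.01 standard deviations) in an enormous sample:
p = two_sample_z_p(mean_diff=0.01, sd=1.0, n_per_group=1_000_000)
# p falls far below the usual 0.05 cutoff, yet the effect is unimportant.
```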


  • Statistical analysis


  • Variance and variability
  • Explained variance (R²) is the square of the correlation

Error types

  • Type I and type II errors
  • False positives, false negatives

Signal detection

  • Look at signal detection theory
  • Missing the fire, or false alarm
  • The receiver operating characteristic
  • We want a good signal to noise ratio
  • Improving the detector
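The standard signal-detection quantities can be computed from raw counts. The sketch below uses the textbook d′ formula (the difference of normal quantiles of the hit rate and false-alarm rate); the counts are illustrative numbers for a fire-alarm example:

```python
from statistics import NormalDist

def detection_stats(hits, misses, false_alarms, correct_rejections):
    """Hit rate, false-alarm rate, and d-prime (detector sensitivity)."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    nd = NormalDist()
    d_prime = nd.inv_cdf(hit_rate) - nd.inv_cdf(fa_rate)
    return hit_rate, fa_rate, d_prime

# A detector that catches 90 of 100 real fires, but raises 20 false alarms
# across 100 quiet periods:
hit_rate, fa_rate, d_prime = detection_stats(90, 10, 20, 80)
```

Improving the detector means raising d′: pushing the hit rate up without dragging the false-alarm rate up with it.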

Computer models

  • What they are
  • Uses
  • Limitations

Trusting the experts

Trust the experts? Why would I be daft enough to do that? It is not that experts are wrong, although they are often wrong. It is that we are exhorted to treat them with reverence when that defies reason. — ephektikoi


We routinely rely on experts for services and for advice. Does this work out? Sometimes the results are acceptable and sometimes they are barely adequate. In other cases, our interaction with experts becomes a disaster.

There are experts we can be pretty confident will solve our problems, and experts we need to treat with caution, if not just ignore. In some cases, the outcomes, positive or negative, only affect our pocketbook. In other cases, the outcomes affect our long-term health or our lives.

We are bombarded with opinion delivered by experts on a daily basis, some sounding authoritative, but with views contradicting one another. Logically they can’t all be right, at least in those areas where views conflict. Maybe none are correct in any significant respect on particular topics.

There is not a lot of reason to universally trust experts, since they are frequently wrong, and in some disciplines, may be right only at the level of chance – they disagree, they agree, they get it wrong, they get it right.

Expert opinion, is it better than ours? That depends. Listening to the experts is sometimes the best we can do, since the non-experts are not likely to be even as good. In some fields, experts are actually quite good, but in other important areas, they are not very good at all.

What Areas Of Expertise Are We Discussing?

Expertise can reside in trades, service industries, music, art, dance, sports, various professions, academia, healthcare, retail, consulting, finance, banking, mortgage, and on. Expertise can reside anywhere where knowledge, skill and experience go beyond the average. So, under this broad umbrella, many, if not most, people will have some areas of expertise.

I have several areas of expertise. That does not mean I am at the top of the hill in any of them, but I have competency in several things that are fairly complex.

What Services Do The Experts Provide?

Experts make the world run. They may do fabrication, construction, and repair; give advice; explain things; predict things; research things; teach things; and render opinions.


Too many “experts” start from the position that our current understanding is correct, and anomalous results must be wrong. You can be an absolute master of the body of knowledge of your discipline, and also of the schisms which invariably exist in any discipline, and be dead wrong on a large number of topics, because the body of knowledge in your discipline is very flawed. You can be an expert in bullshit to be more blunt.

Opinion, under the guise of journalism, is the domain of non-experts who think they are experts. It is often called (if honest) “opinion,” or called (if less honest) “reporting.” It is also the realm of any number of people who like to pontificate on things of which they have little first-hand knowledge, or relevant qualifications.

Surely We Can Trust Science

Scientists are experts, but the history of science is to a great extent the history of error, and of mistakes in expert opinion. The battleground of science is littered with the corpses of damaged findings, defeated theories and the detritus of discarded hypotheses. It is not even clear that it is as self-correcting as many would have it.

Recent informed critiques of science have pointed out the problems facing the scientific enterprise. There is a problem with the ability to replicate findings, for instance. Not only is replication seldom attempted, but when it is, the failure to replicate is quite high, in both soft and hard sciences. A critique published in PLOS Medicine, “Why Most Published Research Findings Are False” by John P. A. Ioannidis (August 30, 2005), has been a much-cited paper on various problems with bio-medical research. There are many other informed critiques in other papers, by other authors.

I have been saying for quite a while now that not only is science broken, but all disciplines are broken. I have informed opinions on the brokenness of science and the work of academics in general. Some thoughts:

  1. Science is never settled
  2. Science is not done by consensus
  3. Science is only somewhat self-correcting
  4. Dismissal of anecdotal evidence without consideration is irrational
  5. Ockham’s razor is past its prime; it never was a very good rule
  6. Experimental science is not the only type of science
  7. Science is not the only mode of useful inquiry
  8. If the effects are strong and clear, tests of significance probably don’t add that much value
  9. Peer review serves more to impede innovative work and force group consensus than to improve scientific findings

The disagreement of experts

Experts, like all of us, base their views on imperfect information. They fail to consider all of the available evidence, for a variety of reasons. They may:

  1. not be aware of evidence
  2. dismiss evidence without due consideration
  3. evaluate evidence according to their own prior beliefs and biases, in a theory-blind fashion
  4. evaluate within a flawed, imperfect body of knowledge from their domain of expertise
  5. not be particularly good at integrating large amounts of information
  6. lack sound judgment, common sense
  7. interpret things shaped by their culture, family experience, society, emotion, bias, exposure to the strictures of their discipline and incentives.


A lot of experts can tell a good story, a convincing story, but they will not agree. Different experts may tell different stories, and according to the law of contradiction they can’t all be right. Maybe none are right, in whole, or even in part. Well, maybe in part, sometimes.

Consequences of Error

When an expert is relied upon for his knowledge, and is wrong, the consequences may be bad, and sometimes they are life-threatening. We tend to defer to experts anyway, although this may not be a safe bet.

The Key Limiting Factors

Expert performance, professional capability and competence depend upon, at the least, these factors:

  1. The individual – human capabilities
  2. The incentives underlying the individual and the collective – financial and otherwise
  3. The cultural and social matrix – social pressures, social factors
  4. The Institutions – The institutions involved, and their influence on the discipline
  5. The discipline – methods and body of knowledge in the domain of the discipline and its limitations

All of these work to determine expert performance. I discuss each in turn below.

The individual

Individual and personal factors and capabilities of the expert are of course of great importance.


An expert can be competent because of his training, experience, and the quality of the discipline. That is, competence because of the discipline. An expert can also be competent despite the problems with the discipline, because of his overall intelligence and judgment, and understanding of the deficiencies in knowledge in his field.

I should note that on any dimension, half are below the median in ability, and half above; this is just basic statistics.


What are the biases of the practitioner? Experts, like all of us, are prone to rush to premature judgments. They examine a subset of the available evidence, and then draw a conclusion consistent with their current beliefs, according to their biases. We can only reason from current beliefs. It is difficult for many to honestly consider new information which conflicts with current views. More and more, their views are held with intensity, the person becoming emotionally invested in a certain perspective. More and more, their understanding is limited by confirmation and disconfirmation bias. They may also become theory-blind. By that I mean they see what they expect to see according to the dominant paradigm, and disregard things outside of the theory.

Interpretation of Evidence

Evidence must be interpreted, and this is done within the framework of existing beliefs and theories. These interpretations may sometimes be right, but clearly are often in error.

We all have idiosyncratic interpretations. It is inevitable that different individuals will understand things differently, interpret the evidence differently. Not only is evidence interpreted differently by different individuals, over time, interpretations by one individual may change.


No expert will understand, or retain in memory, every aspect of their professional canon. Far from it. In a complicated discipline, the task of learning the whole body of knowledge will be impossible. Forgetting starts immediately, and after a short period of time, only some of the material studied will be retained. There are individual differences in this regard, but the general case is that we forget a lot of material.


Intelligence enables or restricts the performance of an expert. How bright is the expert? Do they have the ability to integrate large amounts of perhaps overwhelming information? Does the practitioner have the types of intelligence demanded by their specialty?

Intelligence has different aspects. There may be some overall general intelligence, but there are surely special types of intelligence that give capabilities beyond that.

How much time does the expert give to reflection about his discipline? I consider reflectivity an aspect of intelligence, although it probably does not show up in IQ tests. IQ is based on speed; reflection is based on slow, never shallow, examination.

Emotional intelligence not only helps in dealing with people, but also is necessary for sound judgment.

I consider humility an aspect of intelligence, although it probably does not show up in IQ tests.

Is this person creative? Is the practitioner able to think outside the box, to think laterally?

I consider open-mindedness an aspect of intelligence. How open-minded are they? How close-minded? How willing is the expert to consider other points of view? How stubborn are they? This is a character trait, but it may influence intelligent thinking. When an expert is wrong, they will often hold fast to their dogma beyond all reason.


What is the depth of training and experience of the expert in his specialty? How well does the expert understand the body of knowledge of his discipline? How well does the expert understand the methods of his discipline?

Does he go back and refresh his knowledge at frequent intervals? How current is his knowledge?

What is the breadth of knowledge in other areas, maybe outside of the discipline, in other fields that may be of relevance? How generally broadly educated, well rounded, is the individual?

Experience and Judgment

How much experience has the expert had in trying things out in the world, to see if the theory is actually applicable? There are theory smart experts with no practical experience. How much real world smarts, as opposed to academic smarts does the expert possess? How sound is the judgment, how much common sense is evident for this person?


How much money, prestige, or careerism are associated with the discipline? Some professions are notorious for attracting careerists. People only interested in career advantage may give short shrift to serving others. They may sacrifice integrity in order to advance their careers. Also, see the incentives section.


Personal integrity, including honesty and self-honesty, make for a better world. Some experts are mostly honest, and some are mostly dishonest. There are those who routinely deceive, lie, shill, con others and generally show sociopathic or even psychopathic behaviour. There are others who believe in fair dealing, and work hard to show personal integrity.

Theory blind experts

Experts may be unable to see things outside of the bounds of the theories espoused by the discipline. I call this being “theory-blind.”

The Incentives

The incentives and disincentives operating on the individual and on the collective are a huge factor in shaping the judgment of the expert. “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!” — Attributed to Upton Sinclair

The cultural and social matrix

The cultural and social matrix influences the body of knowledge of the discipline and of the individual. Any expert will be enmeshed within various cultures and sub-cultures and their beliefs. These will invariably play a role in shaping the judgment of the expert. These beliefs may include shared intellectual delusions and pathological thinking such as group-think. Peer pressure can bias thinking considerably. This will shape and constrain the expert’s view of the world.

An expert may routinely interact with colleagues, employers, employees, clients, family, governments, representatives of institutions, bankers, regulatory bodies and others. All of these will provide the matrix for the actions of the expert. All of these will influence the judgment of the expert, and not necessarily in a favourable fashion.

The Institutions

Governments, institutions, politicians, organizations, businesses, unions, trade associations, professional organizations, regulatory agencies, bureaucracies can all influence an expert of any discipline. These agencies are engaged in such things as making recommendations, promoting view points, promulgating rules and regulations, and setting standards of practice.

There is also the issue of institutional integrity, with some institutions being more or less corrupt, becoming top heavy with individuals who wish only to enrich themselves. This becomes an institutional failure.

The Discipline

I’m going to use the word discipline to describe any organized body of knowledge. Each discipline will have methods of practice that govern how things are done.

Flawed bodies of knowledge

The body of knowledge of any discipline will of necessity be incomplete. It will have ambiguities. It will have errors. It may have self-contradictions, or be otherwise flawed. There may be competing and even contradictory schools of thought in the discipline.

Some possibilities for a metric for soundness

You could systematically arrive at a metric for soundness for various disciplines with sufficient work. We could rate disciplines on a scale according to their successes and failures in results, explanation and prediction. The omniscient one could do it for us, if we have direct access.

How sound is the body of knowledge claimed for the discipline? Does it lead to consistent, correct and positive results? How concrete is the discipline: is it one with directly observable effects, able to be seen and repeated in real time (e.g., materials science versus string theory).

How useful is the existing body of knowledge of the discipline in explaining and controlling the real world? Some disciplines have a body of knowledge and methods which have repeatedly been shown to be sound, to be useful, to be reliable. Others seem to be hopelessly inadequate in making explanations, predictions, or recommendations. This is one key dimension: the soundness of the body of knowledge.

Flawed methods yield flawed knowledge. What are the methods of practice used in the discipline? How well do they work; do they give consistent, correct and positive results?

Some possibilities for a metric for complexity

How do we measure difficulty of a discipline, that is, what is required to master it in terms of hours spent in study, training and practice and the sort and amount of intelligence required for mastery? The omniscient one could tell us, but given that we do not have divine intervention, we might try making some rough estimate of the factors and of the requirements of some representative fields of study.

Some disciplines are hard, requiring a great deal of highly specialized training in a broad range of difficult topics. The demands placed on the practitioner are great. I will call this dimension complexity.

Some disciplines involve a large number of things to be learned, they have great size. I will group this under complexity as well.

You could systematically arrive at a metric for complexity for various disciplines with sufficient work. We could rate disciplines on a scale according to the amount of schooling required, the ability required to complete the schooling, the number of textbooks, and presumably other factors.


We can have a two-dimensional classification. One dimension is soundness and the other is complexity. Let’s say we use a 1-to-7 scale for each. We can situate various disciplines on this matrix: for instance, electrical wiring is extremely sound, near the top, but its complexity relative to some other things is modest (I know this from experience, having worked in that field). Something like mathematics is also extremely sound, but also very complex, at least for most people. Something like astrology, according to my biases, is neither sound nor particularly complex. I suppose I could be wrong on both points. Take something like nutrition science. It’s not really all that sound, as far as I can see, but it’s probably moderately complex.

Feel free to quarrel with my rankings. Don’t get your nose bent out of shape if you and I disagree.
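For what it is worth, the sketch below encodes the examples from the paragraph above as (complexity, soundness) pairs on the 1-to-7 scale. The exact numbers are my illustrative guesses, not measurements:

```python
# (complexity, soundness) on a 1-to-7 scale; values are illustrative guesses.
ratings = {
    "electrical wiring": (4, 7),   # modest complexity, extremely sound
    "mathematics": (7, 7),         # very complex, extremely sound
    "astrology": (3, 1),           # neither sound nor particularly complex
    "nutrition science": (4, 3),   # moderately complex, not very sound
}

def sound_enough(ratings, threshold=5):
    """Names of disciplines whose soundness meets the threshold."""
    return sorted(name for name, (_, soundness) in ratings.items()
                  if soundness >= threshold)
```

The point of the exercise is only that, once rated, disciplines can be compared and filtered on either dimension.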

Example Ratings of Complexity Versus Soundness 1

Soundness runs from 1 (abysmal) through 4 (so-so) to 7 (stellar); complexity from 1 (dead easy) to 7 (very hard). Within each complexity row, entries run from lower to higher soundness:

  • 1 – dead easy: Dowsing
  • 2 – easy: Ditch digging
  • 3 – not quite easy: Fortune telling, Homeopathy, Astrology, Pseudo-skeptics
  • 4 – so-so: Journalism, Chiropractic, Nutrition science; any trade: Carpentry, Plumbing, Electrical, Auto body
  • 5 – somewhat hard: Stock market forecasting; Art criticism, Music criticism; Economics, Psychology; Investigative journalism
  • 6 – hard: Oncology, Neuro-psychology; Anaesthetics, Materials science
  • 7 – very hard: (no examples given)



Even if we have a highly capable individual, their judgment may be impaired by social factors. However, if the foundations of their discipline are unsound, the role of the individual is mostly irrelevant. They may be the best in their field, but if the field has a body of knowledge that is too erroneous to be worth pursuing there will be no cigar.

You might have thought this discussion was only about experts. Well, most of the same factors apply to all of us. Maintain a healthy skepticism about the opinions of mankind: media, prognosticators, consultants, scientists, and just about any other specialty, experts and non-experts of all persuasions. Also remember, you are part of mankind.

[1] I had the omniscient one give me this information, so if you have a problem with it, take it upstairs. More seriously, this is my very biased and ill-informed viewpoint, and to be taken as an illustration only.

Reading broadly

I have been thinking that it might be better not to know where the truth lies than to believe things that are mistaken. For this, you need a great ability to tolerate uncertainty. If we act on mistaken beliefs, we can end up with unfortunate results. If we keep our beliefs more fluid and investigate more, we may have better odds of discovering a correct basis for action. Maybe, maybe not; I can’t rigorously defend this at the moment. Is it better to be aware of multiple possibilities and uncertain, or to be certain and wrong?

I think we do benefit from reading broadly, across any spectrum of opinion you might care to name. Again, since there is going to be a great deal of contradiction across writers, we realize, if we think logically, that most assertions will be false, even if well argued. This breadth of reading may not help us know what is true, but it may make us a little more cautious in forming our beliefs. In the end, we will probably anchor to certain assertions, deeming them more likely to be correct. Unfortunately, science itself is a lot more flawed than we previously thought.

The key question for me is: do we improve our chances of getting things right by looking for multiple viewpoints, or do we just get more confused? We can certainly find out that our current viewpoint is only one of many. That should be a good thing for the open-minded to know. It might just create emotional distress (cognitive dissonance) in others, and particularly the dogmatists.

I have not found a good discussion of this issue. I suppose that most never even think of it, and some may believe that the answer is obvious. I am unsure of the best answer.

Specialist or generalist

If the specialist comes to know more and more about less and less, and eventually knows everything about nothing, does the generalist come to know less and less about more and more, and eventually know nothing about everything? Just asking. — Ephektikoi

Found an interesting site

The fellow has some interesting ideas, and claims experience in a couple of disciplines related to the analysis of evidence, but his writing style really lacks clarity. He could be a philosopher, a mathematician, or a post-modern academic, given his style. Regardless, I think there is meat there, but it is awfully hard to get off the bone.

Of course, my thoughts are subject to confirmation bias, since he defends the value of anecdotal evidence, disputes the common interpretation of Occam’s Razor, and takes the organized skeptics to task for the poverty of their understanding of how to interpret the world. This is a program I am much in sympathy with.

It is too bad that he does not show the gift of writing clear prose.

“Refreshing to new and weary seekers of truth alike. If you claim to be a skeptic and have not read The Ethical Skeptic, you risk echochamber irrelevancy.” -TRB

“I suspect that I possess neither the lifetime nor competencies to grasp all that is said therein; nevertheless inside I also suspect greatness.” -Tech Journalist

“An extraordinary work. Masterpiece.” -LS

“Essential for any philosophy of science course. The pageantry of pseudo-skepticism is abused to belie its truly corrupt core. What we lacked are the frameworks necessary in pinpointing the very flaws and deceptions many of us have sensed, but have been unable to articulate. That is, until now.” -ADR

“Sir, I hope you realize the high quality of material you have produced here. Hopefully you will choose a world stage someday and take personal credit for it. The material is that good.” -AOD

“This site/blog/whatever is messing with my mind and I love it.” -SR

“I love that blog by The Ethical Skeptic. It punches effectively and by the end I was cheering!” -PhD Physicist

“I am a military intelligence instructor. Honestly, your knowledge structure of deception ought to be standard teaching inside graduate level US military intelligence courses. Do you mind if I use your material to do so?” -JWH

“I was asked by a colleague, just whom I regarded to be a signature philosopher of our time, as viewed say a century into the future; to which I responded, ‘I don’t even know his name, other than ethical skeptic’.” -JKP

“[One of the] best non Cathedral empiricists outside Nassim Taleb.” -BH


There exists a pro-science, educated and rational movement of conscience, on the part of people just like you and me. Professionals who apply skepticism daily in their STEMM disciplines; who nonetheless are raising a warning flag of concern. Welcome to my blog. Within its pages, I hope to illustrate genuine skepticism, or what is called Ethical Skepticism. Indeed, its mission is to promote the wonder of science through a contrast of authentic skeptical discipline, versus its distorted, pseudo-intellectual and socio-politically motivated counterfeit. I am a graduate level science and engineering professional who laments the imprisonment of science by control-minded special interests and bullying dogmatic social epistemologists. As you survey my blog, hopefully you will encounter ideas you’ve never personally considered before. Indeed, its mission is to foster foremost a discerning perspective for us all on the Cabal of pretenders who abuse and seek control in the name of science. Science based upon a flawed philosophy called social skepticism. —

Contempt Prior to Investigation

There is a principle which is a bar against all information, which is proof against all arguments and which cannot fail to keep a man in everlasting ignorance – that principle is contempt prior to investigation.

— attributed to the English philosopher Herbert Spencer


I want to share some ideas I have about misinformation. Here are my contentions:

  1. If we have two assertions that are mutually contradictory, they cannot both be correct. This is the law of non-contradiction. It does not follow that either one is correct. If we have any number of mutually contradictory assertions, at most one of them can be correct, and there is no guarantee that any are.
  2. On any reasonably complex topic there is, more often than not, controversy, with mutually contradictory assertions proliferating. You can see this on almost any site or publication where opinions may be tendered. It follows from the logic above that most of these assertions are going to be wrong. We might get lucky and hold a set of views that is right, but in truth, only the omniscient one knows.
  3. A lot of people can tell a good story – seemingly buttressed by evidence, coherent, convincingly told.
  4. A lot of people can also tell an equally good story that contradicts many of the assertions of the first.
  5. I think the implication of this is that most of what we assert, believe, and opine rests on pretty shaky ground.
  6. But, I could be wrong about this. ;-)
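Contentions 1 and 2 can be put in simple arithmetic terms: if N assertions are mutually contradictory, at most one is true, so picking one at random leaves you wrong with probability at least (N − 1)/N. A tiny sketch of that arithmetic (the function name and the uniform-random-pick framing are mine, used purely for illustration):

```python
# Among N mutually contradictory assertions, the law of non-contradiction
# allows at most one to be true -- and possibly none are.
def worst_case_error_rate(n_assertions, one_is_true):
    """Probability of picking a false assertion uniformly at random."""
    if n_assertions < 1:
        raise ValueError("need at least one assertion")
    correct = 1 if one_is_true else 0   # at most one can be correct
    return (n_assertions - correct) / n_assertions

print(worst_case_error_rate(10, one_is_true=True))   # 0.9
print(worst_case_error_rate(10, one_is_true=False))  # 1.0
```

With ten competing opinions on a controversy, at least nine are wrong even in the best case, which is the sense in which "most assertions will be false, even if well-argued."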