Reasoning from evidence

Overview

My intention is to discuss how we can improve our confidence in our beliefs, and thus have greater assurance that the actions we base on those beliefs will be appropriate.

Thinking about this a little bit more, I decided it does not make sense to talk about “science” as though it is some monolithic creation. There are individuals and institutions involved in the scientific enterprise: administering, funding, publishing, supporting, doing, … . Some are brilliant, some are drones. Some are careerists and opportunists; some just really are driven to know stuff. Some are honest; some are lacking in the integrity department. Some do good work; some produce pretty shoddy stuff.

Since this is an ambitious program, it will take me a lot of time, and will surely result in multiple posts.

I have been saying for quite a while now that not only is science broken, but all disciplines are broken. I have informed opinions on the brokenness of science and the work of academics in general.

I wish science worked better, in any number of ways. It is not nearly as effective at figuring things out as many believe, and as I used to believe. Many groups, in studies looking at the underpinnings of the scientific enterprise, have revealed its inadequacies. If you have not already done so, it is worth looking at some of this literature, just to become aware of some of the issues.

Despite its numerous flaws, it is probably still the best method we have for examining many types of issues. Even so, enthusiasm for science needs to be tempered.

These flaws are in the areas of bias, careerism, dishonesty, incompetence, inadequate methodology, statistical inadequacy and misunderstanding, flawed peer review, poor research design, dogmatism, fashion, corporate vested interest, failure to replicate, triviality of research, lack of relevance of studies to the issue, and so on. I wish it were a lot better. The criticisms by John P. Ioannidis of the problems with tests of significance are really troubling, since they seem to show that what many of us were taught and have believed about statistics was quite a bit off the mark.

See for instance:

https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124
https://www.firstthings.com/article/2016/05/scientific-regress

https://www.firstthings.com/article/2017/11/the-myth-of-scientific-objectivity

https://slate.com/technology/2017/08/science-is-not-self-correcting-science-is-broken.html

https://www.vox.com/2015/5/13/8591837/how-science-is-broken

https://duckduckgo.com/?t=ffsb&q=%22failure+to+replicate%22&ia=web

A good researcher might say:

My interpretation of the evidence leads me to believe such and such. However, I realize that the evidence may be ambiguous and my interpretation unsound. I may be missing critical evidence. I may be ignoring, or unfairly rejecting, other interpretations. I may be biased. I may not have thought things through clearly. I also realize that the evidence itself may be fraudulent or manufactured. As a result, based on the evidence that I am familiar with, my conclusion points in this direction. Others may agree; some may disagree. I know that science is not settled and may never be.

Given the uncertainty of opinion, how do we increase our confidence in our conclusions, i.e., how do we know what to believe?

Remember to hold your views lightly; there is an excellent chance that a lot of them are wrong, sometimes disastrously wrong.

Possible future topics:

The Interpretation of Evidence

Thinking

  • Recognition, manipulation, and creation of patterns is the essence of thinking

Evidence

  • Evidence – how do we get it?
  • Interpretation of evidence

Facts

  • It may be that there are facts, but we only have evidence, which we must interpret.

Proof

  • The meaning of proof

Assessing information

  • Information, disinformation, misinformation: we must interpret and evaluate it

Ambiguity

  • Kidding ourselves is what we want to reduce
  • Ambiguity of evidence
  • Ambiguity of research results

Subjective or objective

  • All studies have to be interpreted – not without subjectivity
  • Subjective or objective: a distinction without that much merit

Reliability

  • The reliability of evidence and interpretation

Probative

  • Probative value

Plurality

  • We want a plurality of sources
  • Plurality means multiplicity

Ockham’s razor

  • Ockham’s razor, the law of parsimony, is past its prime; it never was a very good rule

Anecdotes as evidence

  • What is the role of anecdotal evidence in investigation?
  • Anecdotal evidence is still evidence
  • Anecdotes are essential for navigating through life, and have a necessary place in scholarly study
  • The dismissal of anecdotal evidence without consideration is irrational, and shows the malign influence of pseudo-skepticism on scholarship
  • See https://ephektikoi.ca/2020/05/14/anecdotal-evidence/ for a bit more.

Bias affecting interpretation

  • Cherry picking of data
  • Cherry picking of evidence

Causation is not shown by correlation

  • Causation is not shown by correlation
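
As a toy illustration of that point (assuming numpy purely as a convenient tool, not anything from the original post): two series that merely share a common trend can correlate strongly even though neither causes the other, and removing the shared trend makes most of the association disappear.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy series that share a common upward trend (time) but are otherwise
# independent: a classic recipe for a strong correlation with no causal link.
t = np.arange(100)
x = 1.0 * t + rng.normal(0, 10, 100)
y = 2.0 * t + rng.normal(0, 10, 100)

print("raw correlation:", round(np.corrcoef(x, y)[0, 1], 2))

# Remove the shared trend from each series and correlate the residuals;
# most of the association vanishes once the common driver is accounted for.
x_resid = x - np.polyval(np.polyfit(t, x, 1), t)
y_resid = y - np.polyval(np.polyfit(t, y, 1), t)
print("correlation after detrending:", round(np.corrcoef(x_resid, y_resid)[0, 1], 2))
```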

Use of logic

Argument

  • Analogy
  • Cumulative argument
  • Meaning
  • Relationships
  • Semantic nets

Classical logic

  • Fallacies
  • Syllogisms
  • Logic trees

Computational logic

  • Boolean algebra
  • If-then-else logic
  • Truth tables
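
A minimal sketch of the truth-table idea, written in Python only as an assumed illustration language: enumerate every combination of inputs and evaluate a Boolean expression for each row.

```python
from itertools import product

def truth_table(expr, variables):
    """Print a truth table for expr, a function of len(variables) booleans."""
    print(" | ".join(variables) + " | result")
    for values in product([False, True], repeat=len(variables)):
        row = " | ".join(str(int(v)) for v in values)
        print(f"{row} | {int(expr(*values))}")

# Material implication, A -> B, is equivalent to (not A) or B.
truth_table(lambda a, b: (not a) or b, ["A", "B"])
```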

Inductive logic

  • Inductive argument

Mathematical Logic

  • First-order predicate calculus
  • Gödel’s theorem
  • Symbolic logic
  • Functional analysis
  • Mathematical mapping
  • Mathematics – subset of logic

Modern

  • Fuzzy logic
  • Multi-valued logic

Use of measurement

Big data

  • Big data analysis

Data quality

  • Data quality

Display of data

  • Displaying data as graphs, charts, trees, networks, semantic nets, other graphical modelling techniques
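
A rough sketch, assuming matplotlib and numpy as the tooling (my assumption, nothing specified above): the same data shown two ways, since the choice of display shapes what a reader notices.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
x = np.arange(50)
y = 0.5 * x + rng.normal(0, 3, 50)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(x, y)          # line chart: emphasizes the trend over time
ax1.set_title("Trend")
ax2.hist(y, bins=10)    # histogram: emphasizes the distribution of values
ax2.set_title("Distribution")
fig.tight_layout()
plt.show()
```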

Measurement

  • Qualitative versus quantitative thinking

The peer review fallacy

  • Peer review is flawed; it serves more to impede innovative work and enforce group consensus than to improve scientific findings

Some basic ideas on statistics

Abuse of statistics

  • Misuse of statistics
  • Lies, damned lies and statistics

Bayes

  • Bayesian probability/statistics
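
A worked example of Bayes’ theorem applied to a screening test; the prevalence, sensitivity, and specificity below are illustrative assumptions, not real figures. Even a fairly accurate test yields mostly false positives when the condition is rare.

```python
# Bayes' theorem for a screening test (illustrative numbers only).
# P(condition | positive) = P(positive | condition) * P(condition) / P(positive)

prevalence  = 0.01   # prior: 1% of the population has the condition (assumed)
sensitivity = 0.90   # P(test positive | condition)                   (assumed)
specificity = 0.95   # P(test negative | no condition)                (assumed)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
posterior  = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {posterior:.3f}")  # roughly 0.15
```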

Correlation

  • Correlation, co-variance
  • Multiple regression
  • Time series
  • Time-lagged multiple regression
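
A small sketch, assuming numpy, of computing covariance, correlation, and a multiple regression by ordinary least squares on simulated data with two predictors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: an outcome y driven by two predictors x1 and x2 plus noise.
n  = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y  = 1.5 * x1 - 0.5 * x2 + rng.normal(scale=1.0, size=n)

print("corr(x1, y) =", round(np.corrcoef(x1, y)[0, 1], 2))
print("cov(x1, y)  =", round(np.cov(x1, y)[0, 1], 2))

# Multiple regression by ordinary least squares: solve for the
# coefficients of [intercept, x1, x2] that best predict y.
X = np.column_stack([np.ones(n), x1, x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("OLS coefficients (intercept, x1, x2):", np.round(coef, 2))
```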

Effect size

  • Small effects, big effects
  • Size of the effect versus the statistical significance of the effect

Hypothesis (null)

  • Hypothesis testing
  • Setting up a null hypothesis
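
A minimal sketch of null hypothesis testing, assuming numpy and scipy as the tooling: the null is that two groups share the same mean, and the p-value says how surprising the observed difference would be if that null were true.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Null hypothesis: the two groups have the same mean.
# Here the groups are simulated with a real difference of 0.5.
group_a = rng.normal(loc=0.0, scale=1.0, size=50)
group_b = rng.normal(loc=0.5, scale=1.0, size=50)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value only says the data are unlikely *if* the null is true;
# it does not measure the size or importance of the difference.
```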

Power of a study

  • The power of a statistical study and the number of observations (n)
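
One way to see the link between power and n is by simulation. This sketch (again assuming numpy and scipy) estimates power as the fraction of repeated simulated experiments that reject the null for a fixed effect size; the same effect becomes much easier to detect as n grows.

```python
import numpy as np
from scipy import stats

def estimated_power(effect_size, n, alpha=0.05, n_sims=2000, seed=3):
    """Estimate power by simulation: the fraction of repeated experiments
    in which a two-sample t-test rejects the null at the given alpha."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect_size, 1.0, n)
        _, p = stats.ttest_ind(a, b)
        rejections += p < alpha
    return rejections / n_sims

# The same effect (0.5 standard deviations) is far easier to detect with more observations.
for n in (10, 30, 100):
    print(f"n = {n:3d}: power is roughly {estimated_power(0.5, n):.2f}")
```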

Probability

  • Odds – calculated and estimated
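
For reference, odds and probability are simple transformations of each other; a probability of 0.75 corresponds to odds of 3 to 1 in favour.

```python
def probability_to_odds(p):
    """Odds in favour: p / (1 - p)."""
    return p / (1 - p)

def odds_to_probability(odds):
    return odds / (1 + odds)

print(probability_to_odds(0.75))   # 3.0, i.e. odds of 3 to 1
print(odds_to_probability(3.0))    # 0.75
```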

Significance levels

  • Tests of significance
  • The magical 95% confidence level (p < 0.05)
  • If the effects are strong and clear and the sample size is large, tests of significance probably don’t add that much value

Size of effect

  • The effect may be too small to be important, but still statistically significant
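
A simulated illustration of that point (assuming numpy and scipy): with a million observations per group, a difference of two hundredths of a standard deviation is overwhelmingly “significant” yet trivially small as an effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# A tiny true difference (0.02 standard deviations) with a huge sample.
n = 1_000_000
a = rng.normal(0.00, 1.0, n)
b = rng.normal(0.02, 1.0, n)

t_stat, p_value = stats.ttest_ind(a, b)
cohens_d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)

print(f"p = {p_value:.2e}")           # easily below 0.05
print(f"Cohen's d = {cohens_d:.3f}")  # about 0.02: trivially small
```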

Statistics

  • Statistical analysis

Variance

  • Variance and variability
  • The squared correlation gives the proportion of variance explained (R²); see the sketch below
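
A quick numerical check of that relationship, assuming numpy: in a simple linear regression, the squared correlation matches the share of the outcome’s variance that the fitted line accounts for.

```python
import numpy as np

rng = np.random.default_rng(5)

x = rng.normal(size=500)
y = 0.6 * x + rng.normal(scale=0.8, size=500)

r = np.corrcoef(x, y)[0, 1]

# Fit a simple regression and compare r**2 with the share of y's variance
# that the fitted line accounts for.
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept
explained_share = 1 - np.var(y - y_hat) / np.var(y)

print(f"r^2 = {r**2:.3f}, variance explained = {explained_share:.3f}")  # the two match
```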

Error types

  • Type I and type II errors
  • False positives, false negatives

Signal detection

  • Look at signal detection theory
  • Missing the fire (a miss) versus sounding a false alarm
  • The receiver operating characteristic
  • We want a good signal to noise ratio
  • Improving the detector
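
A small simulation, assuming numpy, of the signal detection framing: a detector whose signal trials sit one standard deviation above the noise trials. Sweeping the decision criterion trades misses against false alarms, which is exactly what the receiver operating characteristic summarizes.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated detector: "noise" trials and "signal" trials, where signal trials
# shift the internal response up by one standard deviation (the signal-to-noise separation).
noise  = rng.normal(0.0, 1.0, 10_000)
signal = rng.normal(1.0, 1.0, 10_000)

# Sweep the decision criterion to trace out the receiver operating
# characteristic: each criterion trades misses against false alarms.
for criterion in (-1.0, 0.0, 0.5, 1.0, 2.0):
    hit_rate         = np.mean(signal > criterion)   # correct detections
    false_alarm_rate = np.mean(noise  > criterion)   # false positives
    print(f"criterion {criterion:+.1f}: hits {hit_rate:.2f}, "
          f"false alarms {false_alarm_rate:.2f}")
```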

Computer models

  • What they are
  • Uses
  • Limitations
