My intention is to discuss how we can improve our confidence in our beliefs, and thus have greater assurance that our actions, based on our beliefs, will be appropriate.
Thinking about this a little bit more, I decided it does not make sense to talk about “science” as though it is some monolithic creation. There are individuals and institutions involved in the scientific enterprise: administering, funding, publishing, supporting, doing, … . Some are brilliant, some are drones. Some are careerists and opportunists; some just really are driven to know stuff. Some are honest; some are lacking in the integrity department. Some do good work; some produce pretty shoddy stuff.
Since this is an ambitious program, it will take me a lot of time, and will surely result in multiple posts.
I have been saying for quite a while now that not only is science broken, but all disciplines are broken. I have informed opinions on the brokenness of science and the work of academics in general.
I wish science worked better, in any number of ways. It is not nearly as effective at figuring things out as many believe, and as I used to believe. Many groups, through studies of the underpinnings of the scientific enterprise, have revealed its inadequacies. If you have not done so, it is worth looking at some of this literature just to become aware of the issues.
Despite its numerous flaws, it is probably still the best method we have for examining many types of issues. Enthusiasm for science, however, needs to be tempered.
These flaws fall in the areas of bias, careerism, dishonesty, incompetence, inadequate methodology, statistical inadequacy and misunderstanding, flawed peer review, poor research design, dogmatism, fashion, corporate vested interest, failure to replicate, triviality of research, lack of relevance of studies to the issue at hand, and so on. I wish it were a lot better. The criticisms by John P. Ioannidis of the problems with tests of significance are really troubling, since they seem to show that what many of us were taught and have believed about statistics was quite a bit off the mark.
See for instance:
- https://www.firstthings.com/article/2016/05/scientific-regress
- https://www.firstthings.com/article/2017/11/the-myth-of-scientific-objectivity
- https://slate.com/technology/2017/08/science-is-not-self-correcting-science-is-broken.html
A good researcher might say:
My interpretation of the evidence leads me to believe such and such. However, I realize that the evidence may be ambiguous and my interpretation unsound. I may be missing critical evidence. I may be ignoring, or unfairly rejecting, other interpretations. I may be biased. I may not have thought things through clearly. I also realize that the evidence itself may be fraudulent or manufactured. As a result, based on the evidence that I am familiar with, my conclusion points in this direction. Others may agree; some may disagree. I know that science is not settled and may never be.
Given the uncertainty of opinion, how do we increase our confidence in our conclusions, i.e., how do we know what to believe?
Remember: hold your views lightly. There is an excellent chance a lot of them are wrong, sometimes disastrously wrong.
Possible future topics:
The Interpretation of Evidence
- Recognition, manipulation and creation of patterns is the essence of thinking
- Evidence – how do we get it?
- Interpretation of evidence
- It may be that there are facts, but we only have evidence, which we must interpret.
- The meaning of proof
- Information, disinformation, misinformation: we must interpret and evaluate it
- Kidding ourselves is what we want to reduce
- Ambiguity of evidence
- Ambiguity of research results
Subjective or objective
- All studies have to be interpreted – not without subjectivity
- Subjective or objective – a distinction without that much merit
- The reliability of evidence and interpretation
- Probative value
- We want a plurality of sources
- Plurality means multiplicity
- Occam’s razor, the law of parsimony is past its prime, it never was a very good rule
Anecdotes as evidence
- What is the role of anecdotal evidence in investigation?
- Anecdotal evidence is still evidence
- Anecdotes are essential for navigating through life, and have a necessary place in scholarly study
- The dismissal of anecdotal evidence without consideration is irrational, and shows the malign influence of pseudo-skepticism on scholarship
- See https://ephektikoi.ca/2020/05/14/anecdotal-evidence/ for a bit more.
Bias affecting interpretation
- Cherry picking of data
- Cherry picking of evidence
Causation is not shown by correlation
- Correlation alone cannot establish causation; confounding and coincidence must be ruled out
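As a quick illustration of the point above, a small simulation (with entirely made-up numbers) shows how a lurking confounder can produce a strong correlation between two variables that do not cause one another: here, temperature drives both hypothetical ice-cream sales and drowning counts.

```python
import random

random.seed(0)

# Hypothetical numbers: temperature (the confounder) drives both
# ice-cream sales and drowning counts, so the two correlate strongly
# even though neither causes the other.
temps = [random.uniform(0, 35) for _ in range(500)]
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temps]
drownings = [0.3 * t + random.gauss(0, 2) for t in temps]

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson_r(ice_cream, drownings)
print(f"correlation between ice-cream sales and drownings: r = {r:.2f}")
```

Comparing only days with similar temperatures would shrink this correlation, which is the intuition behind controlling for confounding variables.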
Use of logic
- Cumulative argument
- Semantic nets
- Logic trees
- Boolean algebra
- If-then-else logic
- Truth tables
- Inductive argument
- First-order predicate calculus
- Gödel’s theorem
- Symbolic logic
- Functional analysis
- Mathematical mapping
- Mathematics – subset of logic
- Fuzzy logic
- Multi-valued logic
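A couple of the items above (truth tables, Boolean logic) can be sketched in a few lines of code; this is a generic illustration, not tied to any particular formal system.

```python
from itertools import product

# Material implication (p -> q), defined as (not p) or q.
def implies(p, q):
    return (not p) or q

# Enumerate the full truth table for p -> q.
rows = []
for p, q in product([True, False], repeat=2):
    rows.append((p, q, implies(p, q)))
    print(f"p={p!s:5}  q={q!s:5}  p->q={implies(p, q)}")

# Tautology check: ((p -> q) and p) -> q (modus ponens) holds in every row.
assert all(implies(implies(p, q) and p, q)
           for p, q in product([True, False], repeat=2))
```

The same enumeration technique extends to any Boolean formula: generate all assignments, evaluate, and check whether every row comes out true.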
Use of measurement
- Big data analysis
- Data quality
Display of data
- Displaying data as graphs, charts, trees, networks, semantic nets, other graphical modelling techniques
- Qualitative versus quantitative thinking
The peer review fallacy
- Peer review is flawed: it serves more to impede innovative work and enforce group consensus than to improve scientific findings
Some basic ideas on statistics
Abuse of statistics
- Misuse of statistics
- Lies, damned lies and statistics
- Bayesian probability/statistics
- Correlation, co-variance
- Multiple regression
- Time series
- Time-lagged multiple regression
- Small effects, big effects
- Size of the effect versus the statistical significance of the effect
- Hypothesis testing
- Setting up a null hypothesis
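As a rough sketch of the hypothesis-testing items above: set up a fair-coin null, observe a hypothetical 60 heads in 100 flips, and compute an exact two-sided p-value from the binomial distribution (all numbers are illustrative).

```python
from math import comb

# Null hypothesis: the coin is fair (p = 0.5).
# Hypothetical observation: 60 heads in 100 flips.
n, k = 100, 60

def binom_pmf(n, k, p=0.5):
    """Probability of exactly k heads in n flips of a coin with bias p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Exact two-sided p-value: total probability, under the null, of every
# outcome at least as extreme (as improbable) as the one observed.
observed = binom_pmf(n, k)
p_value = sum(binom_pmf(n, i) for i in range(n + 1)
              if binom_pmf(n, i) <= observed)
print(f"two-sided p-value: {p_value:.4f}")
```

Note that this p-value lands just above 0.05, a reminder of how arbitrary the conventional cutoff is: one more head and the same coin would be declared "significantly" biased.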
Power of a study
- The power of a statistical study and the number of observations (n)
- Odds – calculated and estimated
- Tests of significance
- The magical 95% level of significance
- If the effects are strong and clear, the sample size large, tests of significance probably don’t add that much value
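To illustrate the relationship between power and n noted above, here is a rough Monte Carlo sketch (all parameters are assumptions for the example): the same true bias in a coin is detected far more reliably with a larger sample.

```python
import random
from math import comb

random.seed(42)

# Illustrative setup: power of a one-sided binomial test of "the coin
# is fair" when the coin is actually biased (true p = 0.6), at
# alpha = 0.05, for two sample sizes.

def critical_value(n, alpha=0.05):
    """Smallest head count k with P(X >= k | fair coin) <= alpha."""
    tail = 0.0
    for k in range(n, -1, -1):
        tail += comb(n, k) * 0.5 ** n
        if tail > alpha:
            return k + 1
    return 0

def power(n, true_p=0.6, trials=2000):
    """Fraction of simulated experiments that reject the null."""
    crit = critical_value(n)
    rejections = sum(
        sum(random.random() < true_p for _ in range(n)) >= crit
        for _ in range(trials)
    )
    return rejections / trials

powers = {}
for n in (20, 200):
    powers[n] = power(n)
    print(f"n = {n:3d}: estimated power ~ {powers[n]:.2f}")
```

With n = 20 the biased coin is usually missed; with n = 200 the same bias is detected most of the time. An underpowered study that "finds nothing" tells us very little.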
Size of effect
- The effect may be too small to be important, but still statistically significant
- Statistical analysis
- Variance and variability
- Explained variance is the squared correlation (r²)
- Type I and type II errors
- False positives, false negatives
- Look at signal detection theory
- Missing the fire, or false alarm
- The receiver operating characteristic
- We want a good signal to noise ratio
- Improving the detector
- What they are
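The signal-detection items above can be illustrated with a minimal threshold detector on simulated data (the distributions and threshold values are assumptions for the sketch): false alarms are Type I errors, misses are Type II errors, and sweeping the threshold traces out the receiver operating characteristic.

```python
import random

random.seed(1)

# Assumed distributions: noise-only observations ~ N(0, 1),
# signal-plus-noise ~ N(1.5, 1). The detector reports "signal"
# whenever an observation exceeds a threshold.
N = 10_000
noise = [random.gauss(0.0, 1.0) for _ in range(N)]
signal = [random.gauss(1.5, 1.0) for _ in range(N)]

rates = {}
for threshold in (0.5, 1.0, 1.5):
    false_alarm = sum(x > threshold for x in noise) / N   # Type I error rate
    hit = sum(x > threshold for x in signal) / N          # 1 - miss rate (miss = Type II)
    rates[threshold] = (hit, false_alarm)
    print(f"threshold {threshold}: hit rate {hit:.2f}, "
          f"false-alarm rate {false_alarm:.2f}")
```

Raising the threshold trades false alarms for misses; plotting hit rate against false-alarm rate across all thresholds gives the ROC curve, and improving the detector means pushing that whole curve toward the top-left corner.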