“In online discussions, this is usually due to someone misreading the headline and not reading the study. Other times, it is due to expectation bias, the human tendency that can cause people to see or experience what they expect, despite the fact that it is not there. Such things are possible because the nature of our perception is constructive. We don’t see the world as it is; our brain constructs a world for us to experience. Because it often does so based on what it expects to be there, we will often literally see just what we expect to see. One can easily read a study and literally see something that is not there.” — David Kyle Johnson Ph.D. https://www.psychologytoday.com/ca/blog/logical-take/202007/yes-masks-work-debunking-the-pseudoscience
Month: July 2020
I am going to use the neutral term communication to cover the transmission of information from one person to another. This may be spoken or written information, or perhaps communications transmitted by other means. We do not have a special term for information that is correct, although we have special terms for information that is incorrect, misinformation, and for information that is deliberately deceptive, disinformation.
Below is a mind map showing these relationships. After the diagram, I will expand a bit on these ideas.
Sometimes communication contains true assertions. This can happen in at least two ways:
- Logical truth
- Coincidental truth
Some true assertions are arrived at correctly, logically, reasoning from evidence. If the premises are correct, and the logic is sound, then the conclusions are supported by reason.
Some true assertions are arrived at by chance, guess work, or whim. They are not arrived at from true premises and logical reasoning. Although the conclusions are true, this is only a coincidence.
Sometimes communication contains false assertions, but not outright lies. This can happen in at least these ways:
- Poor logic
- Mistaken evidence
- Incorrect premises
- Clinical delusion
Poor logic means that even if the premises are true, the conclusions do not follow because the logic is shaky. If the premises are false, you will not have a logically supported conclusion either.
On the other hand, with coincidentally correct assertions, it does not matter whether the premises or the logic are shaky; you get a true conclusion that is not justified.
Evidence requires interpretation. Sometimes this is straightforward, but frequently it is not. So, conclusions based on suspect interpretations of evidence are of course also suspect.
Incorrect premises cannot produce correct assertions through logic. Sometimes our premises are articulated explicitly, and sometimes they are left ambiguous. In either case, if they are wrong we cannot reach a correct conclusion by reasoning from them.
Clinical delusion will result in an inability to think coherently and consistently. It is not likely to lead to logically supported assertions.
Confabulation is another pathological condition, where people come up with fact-free explanations for events in order to interpret life experiences and answer questions. It is not deliberate lying, but it results in inadvertent falsehoods.
Disinformation is deliberate deception. It comes in at least three varieties:
- Self-serving lies: propaganda, advertising, and cover-ups
- Recreational lies
- Lies and evasions of convenience
Governments, organizations, businesses and individuals engage in deliberate deception all too often, in order to gain an advantage at the expense of others, or to cover their own misadventures. It is called propaganda when done by governments and organizations, advertising when done by business, and lies when done by the rest of us. In any case, it generally puts the recipient of the disinformation at a disadvantage, and sabotages trust if found out.
Recreational lies and hoaxes are common among people who take delight in duping or gas-lighting others. It takes a certain amount of malevolence, narcissism, Machiavellianism, sociopathy or even psychopathy to engage in this activity. Online trolls are one aspect of this. Hoaxers are another. There are just far too many who delight in spreading bull excrement.
Lies and evasions of convenience are common. People do this both deliberately and reflexively, to get themselves out of immediate trouble or socially awkward situations, or to avoid giving offence. Some of these are classed as “white lies.” They can often backfire. Children do this all of the time to avoid negative consequences. Adults may not be that much different. Such lies are often discovered, reduce trust, and may anger others. Short term gain and long term pain may be the real consequence. In situation comedies, it has been the norm to create entire episodes around such lies and their outcomes.
From Wikipedia; see https://en.wikipedia.org/wiki/Motivated_reasoning:
“Motivated reasoning is a phenomenon studied in cognitive science and social psychology that uses emotionally-biased reasoning to produce justifications or make decisions that are most desired rather than those that accurately reflect the evidence, while still reducing cognitive dissonance. In other words, motivated reasoning is the “tendency to find arguments in favor of conclusions we want to believe to be stronger than arguments for conclusions we do not want to believe”. It can lead to forming and clinging to false beliefs despite substantial evidence to the contrary. The desired outcome acts as a filter that affects evaluation of scientific evidence and of other people.“
“Motivated reasoning is similar to confirmation bias, where evidence that confirms a belief (which might be a logical belief, rather than an emotional one) is either sought after more or given more credibility than evidence that disconfirms a belief. It stands in contrast to critical thinking where beliefs are approached in a skeptical and unbiased fashion.”
My response is: are any beliefs not motivated, not subject to confirmation and disconfirmation bias? We are more emotionally invested in some beliefs than in others of course.
This emotional attachment seems to me to be the basis for the idea of cognitive dissonance, a discomfort when confronted with ideas that challenge our beliefs. Many people parrot the term cognitive dissonance, but I suspect they have not looked at the underlying studies by Leon Festinger. See https://en.wikipedia.org/wiki/Leon_Festinger
As I have said elsewhere, we can only reason from the basis of existing beliefs, and it could not be otherwise. These beliefs serve as the grounds for further belief, and are resistant to revision if they link to other beliefs in an extensive network, and also are held with emotion and conviction. https://ephektikoi.ca/2020/04/24/clarity-or-murk/, https://ephektikoi.ca/2020/04/30/the-fundamental-problem-is-belief/
Also, from https://ephektikoi.ca/2020/04/29/interpreting-the-world/:
“We don’t understand the world as much as interpret it. Things and events come to our attention, are perceived, and interpreted; sometimes more or less correctly; frequently quite incorrectly. We understand things in the context of our current beliefs, values, biases and emotional investment. It could not be otherwise. Some types of events lead to a more correct understanding. Some types of events will probably always be beyond our ability to comprehend.”— Ephektikoi
I know that my views are subject to these processes, despite my best intentions. However, those without emotional intelligence also reason badly, so we cannot be Mr. Spock from the fictional Star Trek series and still function rationally. See: https://en.wikipedia.org/wiki/Emotional_intelligence
Choking down word salad
“This person must be a brilliant writer; I have no idea what they are trying to say.” — Ephektikoi
Word salad is a medical symptom, a manifestation of clinical and neurological problems; it is called schizophasia.
I don’t wish to trivialize schizophasia; however, in the writings of some people we routinely see unclear, disorganized, and chaotic exposition, almost a word salad. We might see:
- Odd assertions
- Random phrases
- Inappropriate catchphrases
- Trite clichés
- Limping analogies
- Strange metaphors
Such writing presents fragmentary ideas, not even proper assertions, and how they relate is never made clear. There must be a connection in the mind of the writer, but it is not necessarily a coherent connection. It is quite subjective, solipsistic almost, and not something with a clear and unambiguous meaning.
Clarity is a virtue in communication. However, such writing is incoherent and not cohesive; it is almost free association. It leads a person to wonder: is the intent to communicate or to obfuscate? Does it simply mean that no effort was put into crafting the prose? Perhaps poor writing springs from poor reasoning and disorganized thinking. Is it possible that chaotic words are a manifestation of chaotic thoughts? Whatever is behind it, it is bad, bad writing.
Word salad speech – claimed to be a transcription of a video with Donald Trump https://www.c-span.org/video/?c4546796/user-clip-donald-trump-sentence:
“Look, having nuclear—my uncle was a great professor and scientist and engineer, Dr. John Trump at MIT; good genes, very good genes,
OK, very smart, the Wharton School of Finance, very good, very smart
—you know, if you’re a conservative Republican, if I were a liberal, if,
like, OK, if I ran as a liberal Democrat, they would say I’m one of the
smartest people anywhere in the world—it’s true!—but when you’re a
conservative Republican they try—oh, do they do a number—that’s
why I always start off: Went to Wharton, was a good student, went
there, went there, did this, built a fortune—you know I have to give my
like credentials all the time, because we’re a little disadvantaged—but
you look at the nuclear deal, the thing that really bothers me—it would
have been so easy, and it’s not as important as these lives are (nuclear
is powerful; my uncle explained that to me many, many years ago, the
power and that was 35 years ago; he would explain the power of
what’s going to happen and he was right—who would have thought?),
but when you look at what’s going on with the four prisoners—now it
used to be three, now it’s four—but when it was three and even now, I
would have said it’s all in the messenger; fellas, and it is fellas because,
you know, they don’t, they haven’t figured that the women are smarter
right now than the men, so, you know, it’s gonna take them about
another 150 years—but the Persians are great negotiators, the Iranians are great negotiators, so, and they, they just killed, they just killed us.”
Word salad from a word salad generator http://artatom.com/lyric_creator/word_salad_generator.php
“There is no escape
and that’s fantastic
This is the end we won’t seek any more
Say goodbye to the world you live in
You’ve always been searching
but now you’re hiding”
Word salad from an academic paper discussed at http://profron.net/fun/BadWriting.html
“The visual is essentially pornographic, which is to say that it has its end in rapt, mindless fascination; thinking about its attributes becomes an adjunct to that, if it is unwilling to betray its object; while the most austere films necessarily draw their energy from the attempt to repress their own excess (rather than from the more thankless effort to discipline the viewer).” — Fredric Jameson
Risk – threats and opportunities
- There is a calculus of risk
- Threats are downside risk
- Opportunities are upside risks
- Risk has odds and risk has outcomes
- Odds are measured as probabilities
- Outcomes are measured as impacts, favourable and unfavourable
- We can to some extent manage risk, looking at potential impacts, reducing or enhancing the outcomes.
Risk is the stuff of life
- There is a yin and yang of risk
- We can avoid threats but perhaps miss opportunities.
- We can embrace opportunities and perhaps expose ourselves to threats.
- There are common everyday risks: to health, travel, jobs, having a shower, climbing a ladder, walking down the stairs, recreation, many domains of life
- There are rare risks beyond our control in many ways such as natural disasters like tornadoes and earthquakes
- Driving a car presents both threat and opportunity
- The same for swimming in a lake, stepping outside, or staying inside.
- There is a calculus of risk and opportunity
Who are the pros at risk management?
Risk managers may be found as:
- Disaster response planners
- Project managers
- Financial experts
Professionals manage risks in a systematic way. Although they may use a formal plan, methods will differ, according to the discipline and its needs, and the training of the risk manager. A management approach might include the following elements:
- Plan to manage the risks
- Prepare for the execution of the plan
- Execute the plan
- Revise the plan based on new information
Some risk managers, such as project managers, focus mostly on threats. Some, such as investors, focus also on opportunities. Financial forecasters, actuaries, disaster planners, entrepreneurs and surely others, each seek their own balance between threat and opportunity.
Why concern yourself with this?
- You can get by without using this level of discipline in your daily life.
- We assess risk all of the time but do we do it rationally?
- We do informally assess threats and opportunities all of the time in daily living, and in planning for the future.
- If knowledge of the methods improves accuracy and brings greater success, the cost-benefit trade-off improves.
- Risk management can give a better response to opportunities and threats.
- There’s less unnecessary panic and more reasoned actions.
- Done well, it can make a group more successful; obviously a good thing.
- However you don’t have to be a pro to use this thinking.
Look at the odds
- Events have odds; we can act to reduce or enhance the odds of events
- Outcomes have odds; we can act to reduce or enhance the odds of outcomes
- There are ways that things can go wrong and ways that things can go right
- The number of ways that things can go right is small
- The number of ways that things can go wrong is large
- So the ratio of bad to good possibilities is huge
- There are everyday threats and opportunities
- There are rare threats and opportunities
- Black swan events are rare events that we have not been able to anticipate.
- Odds lead to predictions but these have a poor record of success in many fields.
- Predictions are based on probabilities
- We calculate odds based on data, or make an intuitive wild-ass guess (a SWAG)
- Statisticians calculate odds: frequentists calculate statistics one way; Bayesians calculate statistics another way
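To make the frequentist/Bayesian contrast concrete, here is a toy sketch in Python; the coin-flip numbers and the uniform prior are my own illustrative assumptions, not anything from the discussion above:

```python
# Estimating the chance a coin lands heads, from 10 flips with 7 heads.
heads, flips = 7, 10

# Frequentist: the estimate is the observed long-run frequency.
freq_estimate = heads / flips

# Bayesian: start from a prior belief and update it with the data.
# With a uniform Beta(1, 1) prior the posterior is Beta(1 + heads, 1 + tails),
# whose mean is (heads + 1) / (flips + 2).
bayes_estimate = (heads + 1) / (flips + 2)

print(freq_estimate)             # 0.7
print(round(bayes_estimate, 3))  # 0.667
```

The Bayesian estimate is pulled slightly toward the prior's 50/50 expectation; with more data, the two numbers converge.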
What are the potential impacts?
- We can get rewarded, we can get punished
- Identify positive and negative outcomes
Costs and Benefits
Do a cost and benefit analysis on the outcomes
- There is a cost to managing risk
- There is a cost to missed opportunities
- There is a cost to missed threats
- There is a benefit to realized opportunities
- There is a benefit to managed threats
- Predictability and unpredictability give you odds we can chart
- We can have impacts on some scale from low to high
- We can have odds on some scale from low to high
- Impacts with low odds are things we should devote little time to
- Impacts with high odds are things that we should give a lot of tender loving care to
- Low impact and low odds are things we should not waste our time with
- Low impact but high odds are something we should devote some time to
| Odds | Low impact | High impact |
|---|---|---|
| Threat odds high | Light management | Manage well |
| Opportunity odds high | Light management | Manage well |
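The odds-by-impact triage above can be sketched as a tiny lookup function; the numeric thresholds and the labels here are my own illustrative choices, not any standard:

```python
# A minimal odds-by-impact triage rule. Thresholds are illustrative only.
def triage(odds: float, impact: float) -> str:
    """Return a management level for a risk, given odds and impact in [0, 1]."""
    high_odds = odds >= 0.5
    high_impact = impact >= 0.5
    if high_odds and high_impact:
        return "manage well"
    if high_odds or high_impact:
        return "light management"
    return "don't waste time"

print(triage(0.8, 0.9))  # high odds, high impact
print(triage(0.8, 0.1))  # high odds, low impact
print(triage(0.1, 0.1))  # low odds, low impact
```

In real risk registers the scales are usually coarser (e.g. a 1–5 matrix), but the idea is the same: priority is a function of both odds and impact, never of either alone.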
Managing to a Plan
Plan to manage the risks
There is a certain way of thinking about risk
- Prepare emergency response plans
- Change the odds – reduce the odds of threats, or increase the odds of opportunities
- Avoid – steer clear of negative outcomes where possible
- Accept – accept the risk and do not try to manage it
- Mitigate – reduce the severity of negative outcomes
- Enhance – seek out or strengthen positive outcomes
Identify threats and opportunities
- Assess things
- Look at business and life risks
- Identify potential events both favourable and unfavourable
- Potential impacts, good and bad
- Odds for impacts
- Some risks are manageable, some are not
Decide how to manage
- Avoid if possible
- Mitigate if possible
Identify trigger events
- What are you going to look for, to alert you that some action is needed?
Write it all down
- Unless you are totally intuitive with a fantastic memory, put your thoughts into written form
Prepare for the execution of the plan
- Get your ducks in a row
- Obtain necessary resources and supports
- Trial runs
- Train responses
- Drill responses
Execute the plan
- Execute response plans
- Identify potential events both anticipated and unanticipated
- Look at emergent potential risk both positive and negative
- Monitor the environment, look for emerging trends
- Assess ongoing changes
- Manage the plan and the responses
- Look for triggers during execution
Revise the plan based on new information
- In light of ongoing experience, improve the plan
- Adapt to changed circumstances
- Which door would you choose? The one that leads to the tiger or the one that leads to the diamonds?
- Black clouds may have a silver lining; it follows that silver linings may have black clouds.
- Within threats, there may be opportunities and within opportunities, there may be threats.
- Be cautious, be courageous, and assess your risks well.
- Life is like that.
Bad writing of academics is frustrating me
If the intent of writing is to analyze and express thoughts clearly, and communicate them to others, much academic writing is very, very bad. Sometimes, this may be because the writer has no real intention of being well understood; they wish their writing to be pompous and obscure. Other times, it may reflect incoherence in the ideas of the author. In other situations, it may just be the case that the topic is inherently complicated and complex. On the other hand, the writer may simply be unskilled, unclear on what it takes to craft readable, understandable prose. That can be remedied.
I once read a few pages in an old book on a difficult topic. I am not completely sure what the book was; it might have been “Foundations of Belief” by Balfour. At one point, half of a page was taken up with one paragraph, and that paragraph was a single sentence, sub-clauses and semicolons interspersed freely. I have no idea how the author could have thought such a style was suitable for conveying his thoughts. Although an extreme example, it is not untypical of much current academic writing.
It is hard to say which discipline has the worst writers. Academia has no shortage of bad examples. Since I like to explore philosophical ideas, I am most unhappy with writers of philosophy. Much of the writing is abysmal – so unreadable, so unclear.
This is something I find extremely frustrating; it could be remedied. Perhaps the arguments in philosophy are inherently difficult, but by writing with greater skill, with more clarity, the worth of the writings of philosophers would be vastly increased. There are a few philosophers who succeed at this. Most don’t.
Sometimes, the intended audience will be specialists in the author’s field of expertise. So, there will be concepts and jargon appropriate to the discipline used freely. However, no matter what the discipline, unclear writing is unclear writing. Some of the more common faults are:
- co-opting common words with a specialized and non-standard meaning
- using obscure words and generally unfamiliar words when there are words or phrases with the same meaning in more common use, more readily understandable
- use of more words than is necessary because the sentence structure is awkward
- unnecessary use of the passive voice which increases length and decreases concision and clarity
- lack of cohesion in paragraphs
- inclusion of extraneous detail
- excessive use of footnotes
- excessive use of parentheses interrupting the flow
- run-on sentences
- too many conditional phrases
- forgetting that full stops, i.e. the period, are available.
There are many books on the craft of writing; clearly many authors have not studied this material.
Concentration of wealth
The most unhappy feature of many societies is the growing gap between the rich and the poor, the extreme concentration of wealth in a few hands. Wealth gives power and power creates more wealth. Wealth becomes more and more concentrated, unless society sets up mechanisms to reduce the concentration. This reduction is somewhat possible, but not to a great extent.
The wealthy use their wealth to ensure that the deck is heavily stacked in their favour. They also control the organs of propaganda, i.e., the media, the politicians and their flunkies, and now, the social media. This prevents most people from really understanding what is going on.
See: “Is the basic inequality in wealth distribution repeated in country after country the result of the talents—or lack thereof—of a country’s citizens? Not according to a new study. According to this excerpt from a Harvard Business Review article, the disparity ‘appears to be something akin to a law of economic life that emerges naturally as an organizational feature of a network.’” See https://hbswk.hbs.edu/archive/wealth-happens-wealth-distribution-and-the-role-of-networks
and also see: “We introduce a simple model of economy, where the time evolution is described by an equation capturing both exchange between individuals and random speculative trading, in such a way that the fundamental symmetry of the economy under an arbitrary change of monetary units is insured. We investigate a mean-field limit of this equation and show that the distribution of wealth is of the Pareto (power-law) type. The Pareto behaviour of the tails of this distribution appears to be robust for finite range models, as shown using both a mapping to the random `directed polymer’ problem, as well as numerical simulations. In this context, a transition between an economy dominated by a few individuals from a situation where the wealth is more evenly spread out, is found. An interesting outcome is that the distribution of wealth tends to be very broadly distributed when exchanges are limited, either in amplitude or topologically. Favoring exchanges (and, less surprisingly, increasing taxes) seems to be an efficient way to reduce inequalities.” from Wealth condensation in a simple model of economy, Jean-Philippe Bouchaud, Marc Mezard (CEA-Saclay, Science et Finance and LPTENS-Paris, https://arxiv.org/abs/cond-mat/0002374
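The flavour of such exchange models can be captured in a few lines of code. This is not a reimplementation of the Bouchaud–Mézard model, just a far simpler random-exchange toy of my own, showing how wealth drifts into fewer hands even when every exchange is a fair coin flip:

```python
import random

# A toy random-exchange economy: at each step, a random pair of agents
# stakes a fixed amount and one of them wins it. All exchanges are fair,
# yet wealth still concentrates over time.
random.seed(42)
agents = [100.0] * 200  # everyone starts equal

for _ in range(100_000):
    a, b = random.sample(range(len(agents)), 2)
    stake = 1.0
    if agents[a] >= stake and agents[b] >= stake:
        winner, loser = (a, b) if random.random() < 0.5 else (b, a)
        agents[winner] += stake
        agents[loser] -= stake

agents.sort(reverse=True)
top_decile_share = sum(agents[:20]) / sum(agents)
print(f"share of wealth held by the richest 10%: {top_decile_share:.0%}")
```

Total wealth is conserved; only its distribution changes. The richest tenth ends up holding noticeably more than a tenth of the total, which is the qualitative point of the paper quoted above.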
Here is the core of the concept of underdetermination: the evidence available to us at a given time may not be enough to determine what we should believe. We might see this with a simple example:
- A dozen eggs costs $6
- An apple costs $1
- Carrots cost $2 a bunch.
- I spent $12 on apples, eggs and carrots.
- How many of each did I get?
You can deduce that $6 was spent on eggs, leaving $6 for apples and carrots. However, you could buy 2 bunches of carrots and 2 apples, or 1 bunch of carrots and 4 apples. The problem is underdetermined.
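The enumeration is easy to check by brute force; here is a minimal sketch, assuming at least one of each item is bought:

```python
# Enumerate grocery baskets: a dozen eggs $6, an apple $1, carrots $2/bunch.
# Total spent: $12, with at least one of each item.
solutions = []
for eggs in range(1, 3):         # dozens of eggs
    for carrots in range(1, 7):  # bunches of carrots
        apples = 12 - 6 * eggs - 2 * carrots
        if apples >= 1:
            solutions.append((eggs, apples, carrots))

for eggs, apples, carrots in solutions:
    print(f"{eggs} dozen eggs, {apples} apples, {carrots} bunches of carrots")
# Two baskets fit the evidence, so the problem is underdetermined.
```

The data narrow the possibilities down but cannot pick between the two remaining baskets; that is underdetermination in miniature.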
We might know that just because a correlation was found between two factors, it does not follow that one causes the other. In short, correlation does not imply causation. This is also an example of underdetermination. See https://ephektikoi.ca/2020/07/13/correlating-fish-and-water/
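A tiny simulation can show how a hidden common cause manufactures a correlation. The ice-cream/drowning example and all the numbers below are hypothetical illustrations of my own:

```python
import random

# A confounder sketch: hot weather drives both ice-cream sales and drowning
# counts, so the two correlate with no causal link between them.
random.seed(0)
n = 1000
heat = [random.gauss(0, 1) for _ in range(n)]
ice_cream = [h + random.gauss(0, 0.5) for h in heat]
drownings = [h + random.gauss(0, 0.5) for h in heat]

def corr(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Strong correlation, yet neither variable causes the other.
print(f"correlation: {corr(ice_cream, drownings):.2f}")
```

Both series are driven by `heat`; remove that common cause and the correlation vanishes. Observed correlation alone cannot distinguish these cases.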
Various philosophers such as René Descartes and David Hume discussed ideas that presaged underdetermination, but it came to the forefront of discussion with arguments by Pierre Duhem and, later, by W.V.O. Quine. Duhem’s version might be called holist underdetermination, and Quine’s version contrastive underdetermination. Both, if taken seriously, have strong implications for how much we can demonstrate with scientific evidence.
Holist underdetermination argues that, within a given research paradigm, we must of necessity consider our research evidence within a web of supporting hypotheses. When an experiment contradicts a prediction, we cannot conclude that the main hypothesis is false; it might be that one of the supporting hypotheses is false and needs to be reconsidered. We need to examine the whole web of belief of the researcher, the discipline, and perhaps society in order to make sense of research results.
Contrastive underdetermination argues that for any research result, or any set of results, we can develop alternative theories which do an equivalent job of explaining the evidence.
It is still a topic of much debate among philosophers just what this implies for our understanding of scientific evidence, or our understanding of the world.
For a more thorough, though somewhat difficult, discussion see Stanford Encyclopedia of Philosophy, Underdetermination of Scientific Theory, Kyle Stanford, https://plato.stanford.edu/entries/scientific-underdetermination/, 2017.
Relative and absolute risk
Relative risk versus absolute risk: one cannot be interpreted without the other
Marlies Noordzij, Merel van Diepen, Fergus C. Caskey, Kitty J. Jager
“For the presentation of risk, both relative and absolute measures can be used. The relative risk is most often used, especially in studies showing the effects of a treatment. Relative risks have the appealing feature of summarizing two numbers (the risk in one group and the risk in the other) into one. However, this feature also represents their major weakness, that the underlying absolute risks are concealed and readers tend to overestimate the effect when it is presented in relative terms. In many situations, the absolute risk gives a better representation of the actual situation and also from the patient’s point of view absolute risks often give more relevant information. In this article, we explain the concepts of both relative and absolute risk measures. Using examples from nephrology literature we illustrate that unless ratio measures are reported with the underlying absolute risks, readers cannot judge the clinical relevance of the effect. We therefore recommend to report both the relative risk and the absolute risk with their 95% confidence intervals, as together they provide a complete picture of the effect and its implications.” — https://academic.oup.com/ndt/article/32/suppl_2/ii13/3056571
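To see why the two measures tell different stories, here is a sketch with made-up numbers; they are purely illustrative and not taken from the cited article:

```python
# Illustrative numbers: a treatment cuts the event rate
# from 4 per 1000 patients to 2 per 1000 patients.
risk_control = 4 / 1000
risk_treatment = 2 / 1000

# Relative risk: sounds dramatic ("halves the risk").
relative_risk = risk_treatment / risk_control

# Absolute risk reduction: only 0.2 percentage points.
absolute_reduction = risk_control - risk_treatment

# Number needed to treat to prevent one event.
number_needed_to_treat = 1 / absolute_reduction

print(f"relative risk: {relative_risk:.2f}")
print(f"absolute risk reduction: {absolute_reduction:.3%}")
print(f"number needed to treat: {number_needed_to_treat:.0f}")
```

A headline saying "risk halved" and one saying "500 people must be treated to prevent a single event" describe the same data, which is exactly why the authors recommend reporting both measures.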
Warts from touching a toad
Musings on science
Science provides a systematic method of investigation which has often produced a useful and secure understanding of some aspects of the world. But it is frayed around the edges and maybe a little moth-eaten. (See John P. Ioannidis at https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124. Also see, for instance: https://slate.com/technology/2017/08/science-is-not-self-correcting-science-is-broken.html)
Science is also quite incomplete, probably massively incomplete (although John Horgan disagrees: https://www.goodreads.com/book/show/250814.The_End_of_Science ).
Each discipline and sub-discipline has a body of knowledge consisting of the findings of its research and the methods used in that research.
All of these findings should be held tentatively, as useful approximations, subject to revision as understanding deepens. Scientific understanding may change incrementally, or change in large ways, called a paradigm shift by philosopher of science Thomas Kuhn (see https://www.goodreads.com/book/show/61539.The_Structure_of_Scientific_Revolutions). It does not necessarily follow that the change is for the better, and there are some examples of scientific regress.
For a further analysis of other factors affecting scientific progress, see my post Trusting the experts https://ephektikoi.ca/2020/06/27/trusting-the-experts/
Science is often described as being self-correcting in the long run. Maybe it is, overall, but in my view, it seems to make progress by lurching, sometimes forwards, sometimes backwards. It is not always easy to see the self-correcting aspects, although I will allow that they may be there.
Fun for Uber-geeks? Maybe we can look at a frivolous example of how we might approach a topic with scientific analysis:
- Can you catch warts from touching a toad? Maybe, or maybe not!
- How would you find out?
- Why would you want to?
- What if there were a researcher with unlimited time who wished to win the Ig Nobel Prize? (see https://www.improbable.com/ig-about/winners/)
- Maybe that person might be interested in running a program of studies.
- What would such a program look like?
- How would it be funded?
- Why don’t you set up your own research designs for this program of study?
- Some of the considerations are outlined below.
The culture and infrastructure around science determine what gets studied, what gets funded, what gets published, and what notice is taken of research. For a discussion of these issues, see my post Trusting the experts https://ephektikoi.ca/2020/06/27/trusting-the-experts/
In research, we attempt to understand the world, in a systematic and useful fashion. We look for explanations, either to explain what has gone on in the past (postdiction) or to explain the course of future events (prediction). We look for regularities, consistent and useful patterns of explanation, and try to refine them and document them. We attempt to grasp them qualitatively and quantitatively.
Evidence is typically ambiguous. We have to interpret it, and each person may arrive at a different interpretation of any piece of evidence. We can only interpret things in terms of our prior beliefs about the world, and we are always subject to incentive, bias, and self-deception. Our understanding of events may be thoroughly confounded, confused, perplexed and baffled.
In order to remove some possible causes of misunderstanding, we attempt to use research designs that reduce confounding factors. We call these methods experimental controls. There is a whole literature on research control procedures, and interested readers can start with the discussion at Wikipedia (see https://en.wikipedia.org/wiki/Scientific_control). The “gold standard” would be studies that reduce bias with full randomization, control groups, and double blinding. However, not all areas can be explored using these techniques. In some research areas, controlled experimental studies can only play a very minor role.
Causality is a deep topic, the subject of numerous discussions by philosophers, yet part of everyday experience. In simplest terms, some event or events happen and, as a consequence, another event happens. Causes can be chained together, and always will be in a thorough analysis.
Variability is the notion that things are subject to change (and these changes often seem random). Don’t underestimate how important this aspect of the universe is to human understanding.
Determinism is a philosophical view holding that all events are dependent completely on previously existing causes, and if we could set up identical conditions for another run, we would always get identical results. It has been debated for millennia.
Underdetermination happens when the available evidence is insufficient to determine which conclusion we can reach. Some philosophers make a case that all conclusions are underdetermined in one manner or another. See https://ephektikoi.ca/2020/07/16/underdetermination/ for more discussion.
Confounding factors are those not currently under investigation that may have caused the result. The interpretation of the study results is thus ambiguous. Confounding factors in research are also called ‘confounds’ for brevity.
Coincidence is seen when there is no causal link, yet there appears to be a pattern of causality.
We really want to establish the truth, accuracy, or reality of research claims. One method that is supposed to be used, but seldom is, is replication of the study: by replicating, a researcher hopes to obtain the same results. In practice, replication research seldom gets funded, and replication studies seldom get published. When systematic replication attempts have been made, they have frequently found that studies do not replicate at a satisfactory rate. This is still an area of ongoing debate; it has been termed the replication crisis. See for instance https://www.embopress.org/doi/10.15252/embr.201744876, https://www.nature.com/collections/prbfkwmwvz/ and https://www.wyatt.com/blogs/quality-standards-for-the-life-sciences-pqs.html.
Ethics committees examine proposed research to determine if ethical guidelines for the institution and the discipline will be adhered to. These guidelines can apply to both human and animal research subjects.
Dissemination of Body of Knowledge
Scientific knowledge is disseminated in various ways. These include:
- Conferences where there is the presentation of papers.
- Informal exchanges of information among experts such as informal get-togethers, colloquia, and chats over drinks.
- Papers peer reviewed and formally published in research journals.
- Scholarly visits, where researchers may come from another institution for a period of time, maybe for a research sabbatical.
- Formal and informal teaching, where experts systematically explain the body of knowledge of their discipline.
There are problematic aspects to many of these activities, and the progress of science is undoubtedly retarded because of them.
The peer review of journal articles prior to publication is supposed to make sure that research is reasonably sound and that the results may be trusted. The peer review process is unfortunately flawed. In numerous cases, research studies make it through peer review without being properly vetted. On the other hand, the process tends to filter out studies which challenge accepted dogma. There are numerous critiques of the peer review process, and these can be readily found through Internet searching. See for instance https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7197125/
Statistics is a difficult mathematical study for most of us. I am not going to try to teach anyone statistics; that would be foolish. I am going to present as clearly and as briefly as I can some of the important ideas. This is a very cursory overview. Don’t take any of this as gospel; it has been decades since I formally studied the subject, and statistics for me was three courses and a few years running data through computer programs, not a deep study.
Statistics can be broken into two major areas, descriptive statistics and inferential statistics. I will give a quick explanation of each below.
Population and Sample
Inferences can be made from the results in a sample to the population at large using probabilistic methods, statistical inference.
Statistics starts with measurement. There must be some scale or metric, and there must be a device for making the measurement. There will be some error in any measurement; we and our machines are not perfect. There will also be some variability in the measure, since the world seems to give us variable results, even when we are trying to be very careful. Some types of measurement are horrible in this regard; some are not.
There are three sorts of scales used in measurement for statistical purposes. These are categorical, ordinal, and continuous. For more information, see: https://www.scalelive.com/scales-of-measurement.html
Measures taken of some factor of interest are called variables. Variables that we wish to study as outcomes of manipulations are termed dependent. Variables which we wish to manipulate to see their effect on outcomes are called independent. Experimental studies aim to determine how changes in the values of the independent variables in a sample bring about changes in the dependent variables. It is possible to investigate whether the relationships are causal.
There are three types of measurements all called averages. The first, the mean, is what most of us think of as average. For this statistic, add up all the figures and divide by the number of figures. The second, the mode, is not so well known. It is the figure that occurs most often in the set of measurements. The third, the median, is also not well known. It is the number having half of the measurements lower, and half of the measurements higher.
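The three averages described above can be computed with Python's standard statistics module; a small sketch, with the data values invented purely for illustration:

```python
# Illustrating the three "averages" with Python's standard library.
import statistics

data = [2, 3, 3, 5, 7, 10, 12]

mean = statistics.mean(data)      # sum of the values divided by their count
mode = statistics.mode(data)      # the value that occurs most often
median = statistics.median(data)  # the middle value when the data are sorted

print(mean, mode, median)  # 6, 3, 5
```

Note that the three averages differ even for this small data set; which one best describes the "typical" value depends on the shape of the data.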
Dispersion of a Set of Measurements
There are three common measures of how spread out, how dispersed the data are. These are range, variance and standard deviation. The first, range, is simply the difference between the lowest and highest numbers. The second, variance, is a measure of how far a set of numbers are spread out from their average value. The third, standard deviation, is the square root of the variance. There are many tutorials on the Internet showing how to calculate these statistics.
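These three measures of dispersion can also be sketched with the standard library; again, the data values are made up for illustration:

```python
# Range, variance, and standard deviation with Python's standard library.
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

data_range = max(data) - min(data)     # difference between largest and smallest
variance = statistics.pvariance(data)  # mean squared deviation from the mean
std_dev = statistics.pstdev(data)      # square root of the variance

print(data_range, variance, std_dev)  # 7, 4, 2.0
```

Here the population formulas (pvariance, pstdev) are used; statistics.variance and statistics.stdev give the sample versions, which divide by n minus one instead of n.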
Histograms and Graphical Representations
Data are often displayed with charts and graphs to give insight into patterns of the measurements. Pie charts, bar graphs, scatter plots, histograms, two dimensional and three dimensional plots, and other graphical displays are routinely used.
It is also very common to display data in tabular format, to aid in understanding. Data may be charted using multiple dimensions specific to the needs of the analyst.
Shape of a Distribution
When a set of continuous numbers is plotted on a graph, using the count for a variable on the vertical axis and the value of the variable on the horizontal axis, we get a graph with a certain shape. A very common shape for many measures is the normal distribution, the so-called bell-shaped curve. A distribution may not be totally symmetrical, and there are some technical terms for the degree of deviation from the bell shape: kurtosis and skew. Kurtosis is a measure of how much the distribution is shaped by extreme values, outliers as they are called. Skew is a measure of asymmetry around the centre. More thorough explanations of these terms are again found on the Internet.
Descriptive statistics are combined with probability calculations to yield probabilistic inferences about the interpretation of the data. These come in two major varieties: frequentist statistics and Bayesian statistics. I formally studied frequentist statistics for research a few decades ago, and have only retained a bit. I have not studied Bayesian statistics formally, and have not yet succeeded in understanding it. Frequentist statistics base the odds of something being true on the statistical properties of standard distributions. Bayesian statistics base the probability of something being true on prior odds, updated in light of the evidence. Note that this is my conception, and may not be quite correct.
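The Bayesian idea of updating a prior probability with evidence can at least be sketched with Bayes' theorem. The classic example is a diagnostic test; all the probabilities below are invented for illustration:

```python
# A minimal sketch of Bayesian updating: Bayes' theorem applied to a
# diagnostic test. All the numbers here are invented for illustration.

prior = 0.01        # P(condition): 1% of the population has the condition
sensitivity = 0.99  # P(positive test | condition)
false_pos = 0.05    # P(positive test | no condition)

# Total probability of testing positive, by the law of total probability
p_positive = sensitivity * prior + false_pos * (1 - prior)

# Bayes' theorem: probability of the condition given a positive test
posterior = sensitivity * prior / p_positive

print(round(posterior, 3))  # about 0.167
```

The perhaps surprising result is that even with a very accurate test, a positive result only raises the probability of the condition to about one in six, because the condition is rare to begin with. That is the role of the prior.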
If you are running an experiment and want to select subjects or cases for your study, you want to control factors which can bias the results. So, if you want your experimental groups to be similar, or at least not biased in a way that can skew results, you need to take care with assigning subjects to each experimental group. The subjects should be assigned randomly, or maybe selected according to categories that are equivalently stratified in all cases. Without this, the interpretation of your results can be very problematic.
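As a toy sketch of random assignment (the subject identifiers and the two-group design are invented for illustration):

```python
# Randomly assigning subjects to two experimental groups.
import random

# Made-up subject identifiers
subjects = ["s01", "s02", "s03", "s04", "s05", "s06", "s07", "s08"]

random.seed(42)           # fixed seed only so the example is repeatable
random.shuffle(subjects)  # put the subjects in a random order

# Split the shuffled list into two equal-sized groups
half = len(subjects) // 2
treatment_group = subjects[:half]
control_group = subjects[half:]

print(treatment_group)
print(control_group)
```

Because the ordering is random, any characteristic of the subjects should be roughly balanced between the two groups, which is the whole point of the exercise.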
Correlation is the tendency of two or more things to vary together. There is a reciprocal and mutual relationship between them. I have briefly discussed correlation at https://ephektikoi.ca/2020/07/13/correlating-fish-and-water/
Linear regression is a method of using the ideas underlying correlation to predict the values of one dependent variable from the values of another independent variable. It yields a linear equation. Multiple regression extends this idea to predictions involving more than one independent variable.
Odds, chance, probability
The concept of chance is known to almost everyone. It is commonplace to assess the odds intuitively in almost all situations where there is uncertainty about outcomes. We are not particularly good at it, but we do it on a routine basis. Mathematicians, psychologists, and numerous other academics have studied probability from multiple perspectives. Statisticians use it as the basis of their discipline. Statistics is the study of computed probabilities, and how to draw inferences from them.
Sample versus population
When conducting a study, usually a subset of the total population is used as a sample of the broader group. Inferential statistics uses measurement from the sample, combined with probability calculations, to make inferences about the population. The researcher will explore the likelihood of apparent relationships being true in the population. The mathematics of probability and statistics allow this to be done.
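A toy simulation can illustrate the basic idea: the mean of a random sample is used as an estimate of the mean of the whole population. The "population" below is invented for illustration:

```python
# Estimating a population mean from a sample mean.
import random
import statistics

random.seed(0)  # fixed seed only so the example is repeatable

# A made-up population of 10,000 measurements centred near 50
population = [random.gauss(50, 10) for _ in range(10_000)]

# Draw a random sample of 100 without replacement
sample = random.sample(population, 100)

print(statistics.mean(population))
print(statistics.mean(sample))
```

The two means come out close but not identical; inferential statistics is, in essence, the mathematics of saying how close we should expect them to be, and how confident we can be in conclusions drawn from the sample alone.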
Being right and being wrong
In a binary world, there are two ways of being right and two ways of being wrong. We can say there was an effect when there truly was an effect, or we can say there was no effect when there truly was no effect. We could call these respectively true positive and true negative, although no one bothers with these terms. Conversely, we can say there was an effect when there was truly no effect, or we can say there was no effect when there truly was an effect. We call these respectively false positive (type I error) and false negative (type II error). These are analogous to giving a false alarm, and to missing that the house was on fire, respectively.
In the statistical trade, these are referred to as type I and type II errors, although I don’t really like the terms, since they are arbitrary and confusing labels. Every time I use them, I have to go out and look them up again, since the terms do not stay in memory for me. I suspect that others have the same problem.
Analysis methods and tests
Analysis methods and statistical tests proliferate. A person really needs a course in basic statistics to grasp the ideas, but they include such things as regression, multiple regression, analysis of co-variance (ANCOVA), analysis of variance (ANOVA), T-tests, Chi Square, and on and on. Each one yields probability values and maybe other derived statistics, primarily for the frequentist approach to analysis.
The power of a statistical test is a measure of the trust we might place in our study to find a real effect in the population, if such exists. This concept of finding a real effect is called hypothesis testing. The power of a test is the probability of detecting a real effect when one exists; equivalently, it is one minus the probability of making a false negative finding, a type II error.
It is common in research to arbitrarily set a threshold of probability for a finding. Different disciplines use different thresholds. It is usual in many fields to accept a result if a statistical confidence threshold of 95% is reached. The thinking is that you will be right 95 times out of 100, making a type I error 5% of the time. This is common in the frequentist approach, and has been criticized as being wrongheaded and misapplied. The Bayesian statisticians seem to have a different take on this, but I have yet to understand their reasoning.
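A toy simulation can show what the 5% threshold means: when the null hypothesis is true, a test at that level should reject it, a type I error, about 5% of the time. The coin-flip setup below is invented for illustration:

```python
# Simulating the type I error rate of a 5%-level test on a fair coin.
import random

random.seed(1)  # fixed seed only so the example is repeatable

n_flips = 100
trials = 10_000

# Under a fair coin, the heads count has mean 50 and standard deviation 5
# (sqrt of 100 * 0.5 * 0.5). Counts more than 1.96 standard deviations from
# 50 fall in the extreme 5% of the distribution (normal approximation).
lower, upper = 50 - 1.96 * 5, 50 + 1.96 * 5

rejections = 0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if heads < lower or heads > upper:
        rejections += 1

print(rejections / trials)  # close to 0.05
```

Every one of those rejections is a false positive, since the simulated coin really is fair; the 5% threshold simply caps how often we tolerate that mistake.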
It is common to see reports on studies in the media saying that such and such a finding was significant. The problem with this is that the common implication of significant is important. The research meaning of significant is that the finding is likely to be real, and says nothing about how big the effect is. We can talk about the size of the effect as percentages or as actual values. We can give the effect size as absolute figures or figures relative to some other measurement. We can talk about the amount of variability accounted for in the dependent measure by the independent measures. All of these show the actual significance in terms of how big an effect we have found, as opposed to the likelihood of it being real.