MESSAGE
DATE: 2020-10-12
FROM: Ruben Safir
SUBJECT: [Hangout - NYLXS] what we learn about the nature of science
What the Pandemic Has Taught Us About Science
Matt Ridley
The Covid-19 pandemic has stretched the bond between the public and the
scientific profession as never before. Scientists have been revealed to
be neither omniscient demigods whose opinions automatically outweigh all
political disagreement, nor unscrupulous fraudsters pursuing a political
agenda under a cloak of impartiality. Somewhere between the two lies the
truth: Science is a flawed and all too human affair, but it can generate
timeless truths, and reliable practical guidance, in a way that other
approaches cannot.
In a lecture at Cornell University in 1964, the physicist Richard
Feynman defined the scientific method. First, you guess, he said, to a
ripple of laughter. Then you compute the consequences of your guess.
Then you compare those consequences with the evidence from observations
or experiments. “If [your guess] disagrees with experiment, it’s wrong.
In that simple statement is the key to science. It does not make a
difference how beautiful the guess is, how smart you are, who made the
guess or what his name is…it’s wrong.”
So when people started falling ill last winter with a respiratory
illness, some scientists guessed that a novel coronavirus was
responsible. The evidence proved them right. Some guessed it had come
from an animal sold in the Wuhan wildlife market. The evidence proved
them wrong. Some guessed vaccines could be developed that would prevent
infection. The jury is still out.
Seeing science as a game of guess-and-test clarifies what has been
happening these past months. Science is not about pronouncing with
certainty on the known facts of the world; it is about exploring the
unknown by testing guesses, some of which prove wrong.
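
To make the loop concrete, here is a minimal sketch in Python of guess-and-test; the case counts, candidate growth factors and the 10% tolerance are all invented purely for illustration.

# Toy version of the guess-and-test loop: guess a daily growth factor,
# compute the consequences, and reject any guess whose predictions
# disagree with observation. All numbers are invented for illustration.

observed = [100, 130, 169, 220, 286]   # hypothetical daily case counts

def predict(growth, days, start=100):
    """Compute the consequences of a guessed daily growth factor."""
    return [start * growth ** d for d in range(days)]

for guess in (1.1, 1.3, 1.5):          # candidate guesses
    predicted = predict(guess, len(observed))
    # "If it disagrees with experiment, it's wrong": reject any guess whose
    # predictions miss an observation by more than 10%.
    wrong = any(abs(p - o) / o > 0.10 for p, o in zip(predicted, observed))
    print(f"growth factor {guess}: {'wrong' if wrong else 'consistent with the data'}")

Run with these made-up numbers, the first and third guesses fail the comparison and only the middle one survives contact with the observations; even then it is not proven true, merely not yet proven wrong.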
Bad practice can corrupt all stages of the process. Some scientists fall
so in love with their guesses that they fail to test them against
evidence. They just compute the consequences and stop there.
Mathematical models are elaborate, formal guesses, and there has been a
disturbing tendency in recent years to describe their output with words
like data, result or outcome. They are nothing of the sort.
An epidemiological model developed last March at Imperial College London
was treated by politicians as hard evidence that without lockdowns, the
pandemic could kill 2.2 million Americans, 510,000 Britons and 96,000
Swedes. The Swedes tested the model against the real world and found it
wanting: They decided to forgo a lockdown, and fewer than 6,000 have
died there.
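
A toy projection makes the point that a model's output is a consequence of its guessed inputs, not data. The sketch below is a bare-bones SIR-style model written in Python; its population, transmission and recovery rates, and fatality ratio are invented for illustration and bear no relation to the Imperial College model's actual parameters.

# Minimal SIR-style sketch: a model's "result" follows from its guessed inputs.
# Population, rates and fatality ratio are invented for illustration and are
# NOT the Imperial College model's parameters.

def project_deaths(beta, gamma=0.1, ifr=0.01, population=10_000_000, days=365):
    """Discrete-time SIR model; returns projected deaths after `days` days."""
    s, i, r = population - 100.0, 100.0, 0.0   # susceptible, infected, recovered
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return ifr * (r + i)   # deaths = fatality ratio x everyone ever infected

# Two different guesses about transmissibility give very different projections.
for beta in (0.15, 0.30):
    print(f"beta = {beta}: roughly {project_deaths(beta):,.0f} projected deaths")

Change nothing but the guessed transmission rate and the projected death toll shifts by tens of thousands; until such projections are compared with what actually happens, they remain the consequences of a guess, not evidence.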
In general, science is much better at telling you about the past and the
present than the future. As Philip Tetlock of the University of
Pennsylvania and others have shown, forecasting economic, meteorological
or epidemiological events more than a short time ahead continues to
prove frustratingly hard, and experts are sometimes worse at it than
amateurs, because they overemphasize their pet causal theories.
[Photo: Work on a potential Covid-19 vaccine in June at a lab in Beerse, Belgium.]
A second mistake is to gather flawed data. On May 22, the respected
medical journals the Lancet and the New England Journal of Medicine
published a study based on the medical records of 96,000 patients from
671 hospitals around the world that appeared to disprove the guess that
the drug hydroxychloroquine could cure Covid-19. The study caused the
World Health Organization to halt trials of the drug.
It then emerged, however, that the database came from Surgisphere, a
small company with little track record, few employees and no independent
scientific board. When challenged, Surgisphere failed to produce the raw
data. The papers were retracted with abject apologies from the journals.
Nor has hydroxychloroquine since been proven to work. Uncertainty about
it persists.
A third problem is that data can be trustworthy but inadequate.
Evidence-based medicine teaches doctors to fully trust only science
based on the gold standard of randomized controlled trials. But there
have been no randomized controlled trials on the wearing of masks to
prevent the spread of respiratory diseases (though one is now under way
in Denmark). In the West, unlike in Asia, there were months of
disagreement this year about the value of masks, culminating in the
somewhat desperate argument of mask foes that people might behave too
complacently when wearing them. The scientific consensus is that the
evidence is good enough and the inconvenience small enough that we need
not wait for absolute certainty before advising people to wear masks.
This is an inverted form of the so-called precautionary principle, which
holds that uncertainty about possible hazards is a strong reason to
limit or ban new technologies. But the principle cuts both ways. If a
course of action is known to be safe and cheap and might help to prevent
or cure diseases—like wearing a face mask or taking vitamin D
supplements, in the case of Covid-19—then uncertainty is no excuse for
not trying it.
[Photo: Passengers wear face masks aboard a New York City subway traveling through Brooklyn in August.]
A fourth mistake is to gather data that are compatible with your guess
but to ignore data that contest it. This is known as confirmation bias.
You should test the proposition that all swans are white by looking for
black ones, not by finding more white ones. Yet scientists “believe” in
their guesses, so they often accumulate evidence compatible with them
but discount as aberrations evidence that would falsify them—saying, for
example, that black swans in Australia don’t count.
Advocates of competing theories are apt to see the same data in
different ways. Last January, Chinese scientists published a genome
sequence known as RaTG13 from the virus most closely related to the one
that causes Covid-19, isolated from a horseshoe bat in 2013. But there
are questions surrounding the data. When the sequence was published, the
researchers made no reference to the previous name given to the sample
or to the outbreak of illness in 2012 that led to the investigation of
the mine where the bat lived. It emerged only in July that the sample
had been sequenced in 2017-2018 instead of post-Covid, as originally
claimed.
These anomalies have led some scientists, including Dr. Li-Meng Yan, who
recently left the University of Hong Kong School of Public Health and is
a strong critic of the Chinese government, to claim that the bat virus
genome sequence was fabricated to distract attention from the truth that
the SARS-CoV-2 virus was actually manufactured from other viruses in a
laboratory. These scientists continue to seek evidence, such as a lack
of expected bacterial DNA in the supposedly fecal sample, that casts
doubt on the official story.
By contrast, Dr. Kristian Andersen of Scripps Research in California has
looked at the same confused announcements and stated that he does not
“believe that any type of laboratory-based scenario is plausible.”
Having checked the raw data, he has “no concerns about the overall
quality of [the genome of] RaTG13.”
Given that Dr. Andersen’s standing in the scientific world is higher
than Dr. Yan’s, much of the media treats Dr. Yan as a crank or
conspiracy theorist. Even many of those who think a laboratory leak of
the virus causing Covid-19 is possible or likely do not go so far as to
claim that a bat virus sequence was fabricated as a distraction. But it
is likely that all sides in this debate are succumbing to confirmation
bias to some extent, seeking evidence that is compatible with their
preferred theory and discounting contradictory evidence.
Dr. Andersen, for instance, has argued that although the virus causing
Covid-19 has a “high affinity” for human cell receptors, “computational
analyses predict that the interaction is not ideal” and is different
from that of SARS, which is “strong evidence that SARS-CoV-2 is not the
product of purposeful manipulation.” Yet, even if he is right, many of
those who agree the virus is natural would not see this evidence as a
slam dunk.
As this example illustrates, one of the hardest questions a science
commentator faces is when to take a heretic seriously. It’s tempting for
established scientists to use arguments from authority to dismiss
reasonable challenges, but not every maverick is a new Galileo. As the
astronomer Carl Sagan once put it, “Too much openness and you accept
every notion, idea and hypothesis—which is tantamount to knowing
nothing. Too much skepticism—especially rejection of new ideas before
they are adequately tested—and you’re not only unpleasantly grumpy, but
also closed to the advance of science.” In other words, as some wit once
put it, don’t be so open-minded that your brains fall out.
Peer review is supposed to be the device that guides us away from
unreliable heretics: on this view, a scientific result is reliable only when
reputable scholars have given it their approval. Dr. Yan's report has not been
peer reviewed. But in recent years, peer review’s reputation has been
tarnished by a series of scandals. The Surgisphere study was peer
reviewed, as was the study by Dr. Andrew Wakefield, hero of the
anti-vaccine movement, claiming that the MMR vaccine (for measles, mumps
and rubella) caused autism. Investigations show that peer review is
often perfunctory rather than thorough; often exploited by chums to help
each other; and frequently used by gatekeepers to exclude and extinguish
legitimate minority scientific opinions in a field.
Herbert Ayres, an expert in operations research, summarized the problem
well several decades ago: “As a referee of a paper that threatens to
disrupt his life, [a professor] is in a conflict-of-interest position,
pure and simple. Unless we’re convinced that he, we, and all our friends
who referee have integrity in the upper fifth percentile of those who
have so far qualified for sainthood, it is beyond naive to believe that
censorship does not occur.” Rosalyn Yalow, winner of the Nobel Prize in
medicine, was fond of displaying the letter she received in 1955 from
the Journal of Clinical Investigation noting that the reviewers were
“particularly emphatic in rejecting” her paper.
The health of science depends on tolerating, even encouraging, at least
some disagreement. In practice, science is prevented from turning into
religion not by asking scientists to challenge their own theories but by
getting them to challenge each other, sometimes with gusto. Where
science becomes political, as in climate change and Covid-19, this
diversity of opinion is sometimes extinguished in the pursuit of a
consensus to present to a politician or a press conference, and to deny
the oxygen of publicity to cranks. This year has driven home as never
before the message that there is no such thing as “the science”; there
are different scientific views on how to suppress the virus.
[Photo: Anthony Fauci, director of the U.S. National Institute of Allergy and Infectious Diseases, and his Swedish counterpart, state epidemiologist Anders Tegnell, have taken different approaches to combating the pandemic.]
Anthony Fauci, the chief scientific adviser in the U.S., was adamant in
the spring that a lockdown was necessary and continues to defend the
policy. His equivalent in Sweden, Anders Tegnell, by contrast, insisted
that his country would not impose a formal lockdown and would keep
borders, schools, restaurants and fitness centers open while encouraging
voluntary social distancing. At first, Dr. Tegnell's experiment looked
foolish as Sweden's case load increased. Now, with cases low and the
Swedish economy in much better health than those of other countries, he
looks wise. Both are good scientists looking at similar
evidence, but they came to different conclusions.
Having proved a guess right, scientists must then repeat the experiment.
Here too there are problems. A replication crisis has shocked psychology
and medicine in recent years, with many scientific conclusions proving
impossible to replicate because they were rushed into print with
“publication bias” in favor of marginally and accidentally significant
results. As the psychologist Stuart Ritchie of King's College London
argues in his new book, “Science Fictions: Exposing Fraud, Bias,
Negligence and Hype in Science,” unreliable and even fraudulent papers
are now known to lie behind some influential theories.
For example, “priming”—the phenomenon by which people can be induced to
behave differently by suggestive words or stimuli—was until recently
thought to be a firmly established fact, but studies consistently fail
to replicate it. In the famous 1971 Stanford prison experiment, taught
to generations of psychology students, role-playing volunteers
supposedly chose to behave sadistically toward “prisoners.” Tapes have
revealed that the “guards” were actually instructed to behave that way.
A widely believed study, subject of a hugely popular TED talk, showing
that “power posing” gives you a hormonal boost, cannot be replicated.
And a much-publicized discovery that ocean acidification alters fish
behavior turned out to be bunk.
[Photo: The famous 1971 Stanford prison experiment, which purported to show how assigned roles shape behavior, has been debunked by new evidence.]
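
The "publication bias" mentioned above is easy to simulate. In the Python sketch below, none of the experiments has any real effect, yet writing up only the results that cross the conventional p < 0.05 threshold still produces a stack of apparent discoveries; the group sizes and the threshold are arbitrary illustrative choices.

# Simulation of publication bias: with no true effect anywhere, selectively
# publishing only the "statistically significant" results still fills the
# record with apparent discoveries. Group sizes and the 0.05 threshold are
# arbitrary illustrative choices.
import random
from math import sqrt, erf
from statistics import mean, stdev

def two_sample_p(a, b):
    """Two-sided p-value from a simple two-sample z-test (normal approximation)."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = abs(mean(a) - mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

random.seed(0)
experiments, published = 1000, 0
for _ in range(experiments):
    # Both groups are drawn from the same distribution: there is no true effect.
    treatment = [random.gauss(0, 1) for _ in range(30)]
    control = [random.gauss(0, 1) for _ in range(30)]
    if two_sample_p(treatment, control) < 0.05:
        published += 1   # only the "significant" result gets written up

print(f"{published} of {experiments} no-effect experiments look like discoveries")

With a 5% significance threshold, roughly one in twenty no-effect experiments clears the bar by chance alone, so a literature that prints only the "hits" can look convincing even when there is nothing to find.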
Prof. Ritchie argues that the way scientists are funded, published and
promoted is corrupting: “Peer review is far from the guarantee of
reliability it is cracked up to be, while the system of publication
that’s supposed to be a crucial strength of science has become its
Achilles heel.” He says that we have “ended up with a scientific system
that doesn’t just overlook our human foibles but amplifies them.”
At times, people with great expertise have been humiliated during this
pandemic by the way the virus has defied their predictions. Feynman also
said: “Science is the belief in the ignorance of experts.” But a
theoretical physicist can afford such a view; it is not much comfort to
an ordinary person trying to stay safe during the pandemic or a
politician looking for advice on how to prevent the spread of the virus.
Organized science is indeed able to distill sufficient expertise out of
debate in such a way as to solve practical problems. It does so
imperfectly, and with wrong turns, but it still does so.
How should the public begin to make sense of the flurry of sometimes
contradictory scientific views generated by the Covid-19 crisis? There
is no shortcut. The only way to be absolutely sure that one scientific
pronouncement is reliable and another is not is to examine the evidence
yourself. Relying on the reputation of the scientist, or of the reporter
covering the claim, is the way many of us go; it is better than nothing,
but it is not infallible. If in doubt, do your homework.
Mr. Ridley is a member of the House of Lords and the author, most
recently, of “How Innovation Works: And Why It Flourishes in Freedom.”
--
So many immigrant groups have swept through our town
that Brooklyn, like Atlantis, reaches mythological
proportions in the mind of the world - RI Safir 1998
http://www.mrbrklyn.com
DRM is THEFT - We are the STAKEHOLDERS - RI Safir 2002
http://www.nylxs.com - Leadership Development in Free Software
http://www.brooklyn-living.com
Being so tracked is for FARM ANIMALS and extermination camps,
but incompatible with living as a free human being. -RI Safir 2013
_______________________________________________
Hangout mailing list
Hangout-at-nylxs.com
http://lists.mrbrklyn.com/mailman/listinfo/hangout