Two weeks ago, a minor squabble broke out between two eminent health researchers. On one side stood Stanford professor John P.A. Ioannidis, a leader in the "meta-research" movement and a frequent critic of the reliability of health research. Writing on March 17th, Ioannidis argued that "data collected so far on how many people are infected and how the epidemic is evolving are utterly unreliable," and that this "evidence fiasco" created too much uncertainty to justify long-term countermeasures.
The next day, Marc Lipsitch, an infectious disease epidemiologist and director of Harvard's Center for Communicable Disease Dynamics, politely but vehemently disagreed. While concurring that a lack of good data contributes to considerable uncertainty at this stage of the pandemic, Lipsitch argued that we have enough information to justify extreme social distancing measures, and that waiting for perfectly reliable data would do more harm than good.
This debate illustrates the larger problem of the use of evidence, and the role of uncertainty, during the COVID-19 pandemic and in public policy more generally. In this brief, I discuss several key questions: Why do we need evidence? What do we need it for? What kind of evidence do we need? I end with a few observations about evidence and uncertainty during the pandemic.
Why do we need evidence?
What function does evidence serve in the policy process? The most common answer is that it improves decision-making by providing relevant, objective, and unbiased information to policy-makers. An evidence-based approach has the virtue of reducing the impact of personal, financial, and ideological biases in policy development, and helping decision-makers produce the best possible policies and interventions.
Ignorance in a crisis can be fatal. Georgia governor Brian Kemp, apparently (perhaps willfully) unaware that asymptomatic people could transmit the virus, needlessly delayed implementing countermeasures. Yet evidence rarely if ever provides certainty. Ioannidis, acutely sensitive to the perils of false certainty, insisted that we wait for better evidence. Lipsitch, sensitive to the need to act even under conditions of extreme uncertainty, countered that we have enough evidence to implement at least some countermeasures. Savvy producers and consumers of evidence understand both its virtues and its limitations.
A second function of evidence is that it provides a shared, objective basis for decision-making, which facilitates public accountability and promotes public trust in decision-makers and their decisions. Policies and interventions based on evidence, which can be evaluated and challenged by citizens and third parties, are more trustworthy than policies based on the personal whims of politicians or the hidden agendas of lobbyists and special interests. The ability to examine the evidentiary basis for policies can also act as a safeguard against discrimination and the arbitrary imposition of power. This last function is particularly important during the COVID-19 pandemic, as marginalized populations are frequently targeted with discrimination and stigmatization from both individuals and political institutions.
Finally, a shared evidence base can help to ensure consistency and predictability in behavior at multiple levels. On an individual level, evidence can help ensure that citizens respond in a predictable and consistent manner to policies that require concerted collective action, such as isolation and social distancing. Acting on the same evidence promotes coordination across levels of government, from municipalities all the way up to intergovernmental organizations such as the World Health Organization. In the United States, there is considerable inconsistency in responses, based not only on partisan differences but also on different understandings of the severity of the pandemic.
What do we need evidence for?
As Taylor Owen noted in a previous brief, we are in the midst of an "infodemic." While having too little information clearly presents a problem for policy, so does having too much. Not all information is evidence. Separating reliable from unreliable information, asking whether particular information is true or not, is only the first step. The next and arguably more difficult step is determining whether and how information is useful. All evidence is evidence for a particular purpose. In the context of the COVID-19 pandemic, we need evidence to support the following activities:
- Description. How bad is the pandemic? Where is it? Who is contracting, recovering, and dying from it? How is it transmitted? How is it changing?
- Prediction. How much worse could it get? What are the best- and worst-case scenarios? What impact might different kinds of intervention measures have?
- Explanation. Why did it get so bad? What actions (or inactions) made it worse? What actions have made it better?
- Evaluation. Did we actually make it better? Which interventions worked? Which did not? Did different interventions work differently in different settings? Can things that worked in one place work in another?
Different evidence is needed for different purposes, and collecting each kind of evidence poses unique challenges. We may also need to employ different evidentiary standards for different purposes, because the costs and benefits associated with decisions can vary widely. Moderate social distancing and the use of home-made face masks incur few harms or costs, so we may recommend them even without robust evidence to support these recommendations. Conversely, drug regulators usually require a very high degree of certainty, based on high-quality evidence, because of the high costs of approving an ineffective or harmful drug. The U.S. Food and Drug Administration's emergency authorization of hydroxychloroquine despite limited evidence thus represents a major step.
What evidence do we need?
At the most basic level, we need to know the absolute morbidity and mortality associated with the disease, to understand the appropriate scale of the response. A pandemic that kills 5,000 people globally demands, and justifies, a very different response than one that kills 500,000 or 5 million people. We also need to know mortality among different populations, to guide targeted interventions and ensure that responses prevent health inequities. Almost every health problem takes a disproportionate toll on economically and socially marginalized populations; early evidence of higher numbers of cases and deaths among African Americans suggests that the COVID-19 pandemic is no different.
While mortality is usually one of the most straightforward health indicators (there is no ambiguity about whether a person is alive or dead), and is routinely collected by governments across the world, its collection poses two challenges. First, during a novel and slow-moving pandemic, interventions must be based on projections of mortality, not actual figures. If we wait for mortality to grow large enough to justify vigorous intervention, it will be too late. So the salient mortality data during this pandemic are not straightforward death counts, but rather predictions emerging from models that are themselves based on assumptions incorporating other kinds of data.
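To see why such projections are so sensitive to their underlying assumptions, here is a minimal, purely illustrative sketch. It assumes a constant doubling time and uses hypothetical numbers; real epidemic models are far more sophisticated than this.

```python
# Illustrative sketch only: a naive exponential projection of cumulative deaths,
# assuming a constant doubling time. All numbers are hypothetical, not estimates.

def project_deaths(current_deaths: float, doubling_time_days: float, horizon_days: int) -> float:
    """Project cumulative deaths horizon_days ahead, assuming unchanged exponential growth."""
    return current_deaths * 2 ** (horizon_days / doubling_time_days)

if __name__ == "__main__":
    # Hypothetical inputs: 100 deaths today, deaths doubling every 5 days.
    for horizon in (7, 14, 28):
        print(f"Day {horizon}: ~{project_deaths(100, 5, horizon):,.0f} cumulative deaths projected")
    # A longer assumed doubling time (for example, because distancing measures
    # take hold) shifts these projections dramatically.
```

Even this toy calculation shows how a small change in a single assumption, the doubling time, moves a four-week projection by thousands of deaths, which is why competing models can produce such different headline numbers.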
A second challenge is one of ascription: which deaths should we attribute to the COVID-19 pandemic? Clearly, we should attribute the death of an individual who dies of respiratory failure while testing positive for COVID-19 to the pandemic. But what about an elderly hospital patient with chronic obstructive pulmonary disorder who suffers respiratory failure and dies because there are no ventilators available? Or an otherwise healthy individual who suffers a heart attack but does not receive prompt treatment, in order to protect health care workers from possible COVID-19 transmission? While neither of these people died from the COVID-19 virus, we can and perhaps should attribute their deaths to the pandemic. For policy purposes, we are concerned with overall mortality, including the "collateral damage" from overwhelmed health care systems. The "flattening the curve" strategy is driven at least in part by the importance of reducing the overall burden on the health care system.
Morbidity, or the absolute number of cases, is important in its own right, but also because it informs the case fatality rate. The case fatality rate (or ratio) is mathematically straightforward but practically challenging to measure, as it depends upon accurately counting both the number of deaths (the numerator) and the number of cases (the denominator). The observed case fatality rate varies over time: in many epidemics, including COVID-19, it tends to start high and then drop as we get a better handle on the number of asymptomatic and minimally symptomatic cases. It also varies by population: case fatality rates are higher among older individuals, particularly those over age 70, and among men, possibly because of underlying health behaviors and chronic health conditions; they also vary across countries and sub-country regions, and possibly between other groups. Accurate assessment of case fatality rates among different populations is crucial to the response, as it signals who is at highest risk of death or severe illness, and can help both with targeting resources and planning interventions. For example, some epidemiologists have suggested eventually moving toward a staged reopening that lifts the lockdown only for healthy individuals under the age of 50.
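A minimal sketch, using entirely hypothetical numbers, of why the denominator matters so much: the same count of deaths implies very different fatality rates depending on how many infections go undetected.

```python
# Illustrative sketch only: how the observed case fatality rate (CFR) shifts when
# asymptomatic or untested cases are missed. All numbers are hypothetical.

def case_fatality_rate(deaths: int, confirmed_cases: int) -> float:
    """Naive CFR: deaths divided by confirmed cases."""
    return deaths / confirmed_cases

def adjusted_cfr(deaths: int, confirmed_cases: int, detection_fraction: float) -> float:
    """CFR after scaling the denominator up to account for undetected infections."""
    estimated_infections = confirmed_cases / detection_fraction
    return deaths / estimated_infections

deaths, confirmed = 500, 10_000  # hypothetical counts
print(f"Observed CFR: {case_fatality_rate(deaths, confirmed):.1%}")  # 5.0%
for detected in (0.5, 0.2, 0.1):  # if only 50%, 20%, or 10% of infections are detected
    print(f"Detection {detected:.0%}: adjusted CFR {adjusted_cfr(deaths, confirmed, detected):.1%}")
# Prints 2.5%, 1.0%, and 0.5%: the same death count implies very different
# fatality rates depending on how complete case ascertainment is.
```

This is one reason the observed rate tends to fall over an epidemic's course: as testing expands, the denominator catches up with the true number of infections.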
We also need to know the transmissibility, or R0, of the COVID-19 virus. R0 depends both on universal biological factors (e.g., whether or not a virus can be transmitted through the air, or how long it can survive on surfaces) and on geographically and temporally specific factors (e.g., population density, or the effectiveness of social distancing measures). Epidemic countermeasures such as isolation and social distancing seek to minimize the "effective" R0.
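As a rough illustration of that logic, here is a simple sketch; the R0 value and contact-reduction figures are assumptions chosen for the example, not estimates for COVID-19.

```python
# Illustrative sketch only: a simple way to think about the "effective" reproduction
# number. The R0 and contact-reduction values below are assumed for illustration.

def effective_r(r0: float, contact_reduction: float, susceptible_fraction: float = 1.0) -> float:
    """Effective reproduction number when contacts are cut by contact_reduction
    and only susceptible_fraction of the population remains susceptible."""
    return r0 * (1.0 - contact_reduction) * susceptible_fraction

r0 = 2.5  # assumed basic reproduction number (hypothetical)
for reduction in (0.0, 0.3, 0.5, 0.75):
    r_eff = effective_r(r0, reduction)
    trend = "epidemic grows" if r_eff > 1 else "epidemic shrinks"
    print(f"Contact reduction {reduction:.0%}: effective R = {r_eff:.2f} ({trend})")
# The goal of distancing measures is to push the effective reproduction number below 1.
```

The policy-relevant threshold is an effective reproduction number of 1: above it, each case generates more than one new case and the epidemic grows; below it, the epidemic shrinks.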
On a longer time scale, it is useful to know the mutation rate and patterns of the COVID-19 virus. Understanding the virus's mutations helps track its spread, determine whether it is becoming more or less harmful, and inform the development of vaccines and therapies.
Finally, and perhaps most important for policy purposes, we need evidence of the effectiveness of mitigation measures, including behavioral interventions such as social distancing and face masks, policy interventions like prohibitions on travel, and pharmacological treatments and vaccines. Collecting this evidence poses an additional challenge: it takes time. Randomized controlled trials can take weeks or months to produce reliable answers about the effectiveness of vaccines and treatments. Understanding the effectiveness of even an apparently straightforward intervention like wearing face masks can be challenging.
The politics of evidence and uncertainty in the COVID-19 pandemic
Responses to the COVID-19 pandemic have varied considerably between and within countries. The politics of evidence plays an important role in this variation. While we may take comfort in the idea of evidence "speaking truth to power," in practice evidence requires institutional support to counteract arbitrary and ideologically-driven power. The COVID-19 epidemic has been shaped not only by the decisions of individual leaders, but also by the larger institutional arrangements and cultures around evidence and expertise.
This is seen most clearly in the United States and the United Kingdom, which have responded comparatively slowly to the pandemic, despite abundant evidence that delay would lead to catastrophe. It is no coincidence that these two countries are led by men who have built their political identities around disregard and contempt for both evidence and the "elite" experts who speak on its behalf. Corrosive in itself, this contempt has greatly exacerbated the impact of COVID-19. In the midst of an "infodemic," decision-making by individuals and policymakers in these two countries is driven by personal whim, financial interests, and ideological bias. Without a shared basis of reliable evidence, responses are haphazard and unpredictable. After years of goading by populist leaders to distrust expert advice, many citizens continue to disregard clear evidence-based advice and those who offer it. Discrimination and xenophobia are evident from individual acts of violence all the way up to presidential proclamations.
We are also seeing the impact of a sustained politics of certainty. A virtue of the scientific way of producing evidence is its commitment to understanding, quantifying, and communicating uncertainty. Good predictive models provide best- and worst-case scenarios, and are revised as new information comes to light. Good decisions require evidence producers to communicate uncertainty honestly, and decision-makers to adequately understand uncertainty and act accordingly. Unfortunately, two trends impede the ability to make good decisions under conditions of uncertainty such as the COVID-19 pandemic.
A principal weapon in the populist arsenal is to proclaim absolute certainty, celebrating authority and personal charisma while demeaning sources of evidence other than a powerful individual. Certainty is an indication of power and control; uncertainty, a risible sign of weakness rather than an indication of an honest appraisal of the situation. One of the Trump administration's first public acts was to repeatedly double down on claims that the president's inauguration was the largest in history, despite overwhelming evidence to the contrary. This display of power over subordinates and willful ignorance of the facts was simply the first in a long series of steps toward the Trump administration's catastrophic handling of the COVID-19 pandemic.
A second trend, common not only among populists but also among powerful special interests, is to insist on impossible levels of certainty to justify changes to the status quo, and to disparage one's opponents when they cannot provide it. Tobacco companies, climate change deniers, and other "merchants of doubt" have employed this tactic for decades. We see echoes in contemporary attacks on epidemiologists who revise their COVID-19 mortality estimates in light of new information about current interventions. Where these two trends converge, we see policy responses that oscillate wildly between false certainty and conspiracy theory.
Evidence, no matter how robust or persuasive, cannot on its own counter these trends. Effectively responding to COVID-19 requires a commitment to producing, understanding, and acting on the best available evidence, fully recognizing its attendant uncertainties and accepting accountability for decisions. It is clear that powerful leaders like Trump and other populists maintain no such commitments, even in the face of an historic public health crisis. We must thus look to designing democratic institutions that cherish and maintain these values.
This briefing note was prepared by Nicholas King in response to his webinar delivered on March 25, 2020.
Nicholas King
Associate Professor in Biomedical Ethics and in the Department of Social Studies of Medicine, 成人VR视频