Date: Thursday, March 28, 2024

March 2024 marked four years since the World Health Organization declared COVID-19 a global pandemic. In those four years, we’ve seen a rapid proliferation of laws enacted to, among other things, establish or block pandemic mitigation measures like vaccines and mask wearing. 

We are now also seeing a growing number of studies attempting to assess the efficacy of such measures. These studies necessarily rely on legal data measuring the population’s “exposure” to the measures (for example, who has been affected by the laws, and for how long). Given the importance, and often politically charged nature, of some of the interventions being studied, this month we are sharing some key factors to consider when assessing an evaluation study, particularly the legal data used in that study.

Creating legal data for use in research requires a systematic process: researchers identify which features of laws they want to collect, find the legal texts where those features are codified, locate those features within the texts, and then code them to create the data.
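To make that process a bit more concrete, here is a purely illustrative sketch of what a single coded record might look like once features have been identified and coded. The field names, citation, and values below are hypothetical and are not drawn from any particular dataset or study.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical example of one coded record: the researchers chose the
# features (variables), located them in the legal text, and coded them
# into structured fields.
@dataclass
class CodedLawRecord:
    jurisdiction: str        # where the law applies
    citation: str            # the primary legal text the coding is based on
    effective_date: date     # when the provision took effect
    mask_mandate: bool       # example coded feature
    exemptions: list[str]    # example coded feature

record = CodedLawRecord(
    jurisdiction="State A",              # hypothetical jurisdiction
    citation="Exec. Order 2020-XX",      # hypothetical citation
    effective_date=date(2020, 4, 1),
    mask_mandate=True,
    exemptions=["children under 2", "medical exemption"],
)
print(record)
```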

All these steps should be carried out with careful quality control mechanisms in place, and the process should be transparent and replicable — that is, the authors of a study should provide sufficient detail about how they produced the data so that a reader can understand how the data were compiled, what limitations they may have, and how to replicate the research. This is standard science and applies to legal data just like any other data.

No matter how carefully the rest of the study is designed and conducted, if the legal data are not transparent and rigorously compiled, the study’s findings must be considered unreliable.

Here are three important factors to consider when assessing legal data sources:

  1. Is there a description of the process used to compile the legal data? Does the research paper describe where the data came from and how they were created? Legal data sources should always provide details of the methods and processes used to create the data. That documentation should describe the overall scope of the data (i.e., what is included and what is not) and should provide definitions for the measured variables. Quality control measures (e.g., redundancy, secondary source verification, independent review) implemented during the data creation process should also be described.

  2. Did the study use the best evidence of what’s in the law? The best evidence is the actual law, order, or rule. Press releases, news accounts, web pages, and other secondary sources (unfortunately used in too many COVID-19-era studies) are no more reliable as measures of law than they would be as measures of, say, vaccination rates. Aside from their obvious limitations as secondary sources, they often omit key features of the law, such as the date the law went into effect (the “effective date”) and any exemptions from a mandate, two features that are essential to good measurement of the law and its effects.

  3. Do the legal data properly measure exposure to the law? Remember that measuring the effects of a law requires measuring exposure to that law and its features. To do so, the data must capture where (i.e., in what jurisdictions) and when (i.e., for what time periods) the law was in effect. While this may seem straightforward, the many possible dates associated with a law can complicate measurement and lead to errors. When considering a study’s reliability, we want to ask: Is the study measuring from the date the bill was signed into law by the governor, from the date the law became effective under the legislation itself or a default state rule, or from an implementation date stated within the text of the law? All three of those dates (signed date, effective date, and implementation date) can differ for any given law, so it is important to recognize which date the source is using and whether that choice is consistent across all jurisdictions in the study. The brief sketch below illustrates how the choice of date can change the measured exposure.
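As a minimal, hypothetical sketch (the dates and study window below are invented for illustration only), here is how choosing the signed date rather than the effective date changes the exposure period attributed to a single law in one jurisdiction:

```python
from datetime import date

# Hypothetical dates for one law in one jurisdiction.
signed_date = date(2020, 3, 20)      # governor signs the bill
effective_date = date(2020, 4, 15)   # law takes effect per its own terms
study_start = date(2020, 3, 1)
study_end = date(2020, 6, 30)

def days_exposed(law_start: date, start: date, end: date) -> int:
    """Days within the study window during which the law was in force."""
    exposure_begins = max(law_start, start)
    return max((end - exposure_begins).days + 1, 0)

# Using the signed date rather than the effective date overstates exposure.
print(days_exposed(signed_date, study_start, study_end))     # 103 days
print(days_exposed(effective_date, study_start, study_end))  # 77 days
```

In this invented example, the two date choices differ by nearly a month of measured exposure, which is exactly the kind of inconsistency that can bias a study if it varies across jurisdictions.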

Most consumers of research on the effects of laws and legal practices do not feel qualified to assess the methods used to support the findings. They have to take the statistical work largely on trust. But lawyers and others with some exposure to legal epidemiology can readily assess the quality and transparency of the legal data. If the legal data do not meet basic standards, the rest just doesn’t matter: the paper’s findings cannot be deemed credible.

We can and must do our part to ensure that legal research and data analyses remain valuable tools for improving people’s lives by evaluating what truly works and what doesn’t. And if you have been asked to engage with, respond to, or rely on studies that involve legal data that do not satisfy these three requirements, especially where vaccines or other essential, valuable public health services are concerned, please reach out to the Center for Public Health Law Research or other legal experts for support.

This first appeared as the March 2024 Legislative Update for the Act for Public Health initiative. To receive those updates, sign up here.
