Mercury Project Findings: Intention2Action
Over the past three years, the Social Science Research Council’s Mercury Project has supported 18 research teams around the globe in evaluating interventions designed to increase vaccination and other evidence-based health behaviors. We are excited to present the policy-relevant, actionable insights from their projects.
A Mercury Project team examined whether SMS-based interventions grounded in behavioral research and designed to increase Covid-19 booster rates translate into real-world outcomes.
This work was recently published in Nature Human Behaviour and is part of our Mercury Project Findings series on SMS-based Interventions.
The problem
Policymakers often rely on applied or theoretical behavioral science to decide which policy solutions to implement. However, solutions guided by behavioral science research can include interventions shown to be effective in field tests, interventions that appear effective in hypothetical studies but have yet to be field tested, and interventions derived from theory in the academic literature. Especially in the latter two cases, the transferability of an intervention from a hypothetical to a real-world context, and the longevity of its impacts, are not guaranteed.
In public health settings, implementing “best guess” policy interventions based on hypothetical or theoretical evidence can be cost-ineffective and even damaging to public health. Selecting such interventions was an even riskier gamble at the peak of Covid-19, when health officials were desperate for long-lasting solutions to rising infection rates. To better understand the transferability of behavioral insights, a Mercury research team examined whether SMS-based interventions shown to be effective at increasing Covid-19 booster uptake in hypothetical or predictive studies translate to field settings.
The intervention
The team conducted three randomized controlled trials (RCTs) with a total of 386,615 adults in California who had received their initial Covid-19 vaccination but not a follow-up booster. Each RCT was supported by a different type of behavioral research evidence.
The first RCT consisted of field-tested SMS interventions that had successfully increased uptake of the initial Covid-19 vaccine. This first set of SMS interventions combined messaging that tells patients a booster is waiting for them (sometimes called “ownership-based language”), doctor recommendations to get vaccinated, and embedded links to schedule appointments. The second RCT consisted of interventions developed from a survey of Covid-19 beliefs and subsequently tested online, where they were shown to be hypothetically effective. The messaging in these interventions aimed to highlight the difference between the Covid-19 vaccine and booster, inform patients that they were eligible for the booster, and/or compliment patients on receiving the first Covid-19 vaccine to encourage continued pro-vaccination behavior. Lastly, the third RCT consisted of a single intervention derived from the academic literature and forecast by experts and laypeople to be effective, in which patients received information about bundling their Covid-19 booster with a flu shot.
Table 1. Interventions from each RCT
Using state immunization registry records, researchers checked whether study participants received a Covid-19 booster anywhere in California within four weeks of the SMS interventions.
Results
Booster rates increased by an average of 13.5% across all 14 interventions. Overall, 13 of the 14 interventions significantly increased Covid-19 booster rates; the exception was the third RCT’s intervention, which experts had predicted would be effective.
The average effect of the interventions in the first RCT, which included only field-tested interventions, was a 1.13 percentage point increase in booster uptake within four weeks relative to a control group. This effect was larger than the average effects in the second and third RCTs. The most effective intervention in this first group combined ownership-based language with a link to a specific vaccination site’s scheduling page. Adding a doctor recommendation to this most effective intervention did not significantly improve booster uptake.
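To see how a percentage-point effect of this size squares with the 13.5% average relative increase reported above, note that the two figures are linked by the control group’s baseline uptake rate. As a rough illustration only (the baseline rate below is hypothetical, chosen to make the arithmetic concrete; the published paper reports the actual control-group rates):

$$\text{relative increase} = \frac{p_{\text{treatment}} - p_{\text{control}}}{p_{\text{control}}}, \qquad \text{e.g.} \quad \frac{9.53\% - 8.40\%}{8.40\%} = \frac{1.13}{8.40} \approx 13.5\%.$$

In other words, a modest absolute effect on a low-baseline behavior can correspond to a sizeable relative gain.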
Figure 1.
The second RCT found that interventions that work in a survey-based online exercise do not necessarily translate to real-world results. While these interventions still led to a significant increase in booster uptake relative to the control group, the individual interventions did not behave as the online survey predicted. For example, the survey suggested that adding language addressing misconceptions about vaccines, or a compliment encouraging continued vaccination efforts, would be more effective than a simple reminder message. In the field, neither addition improved booster uptake rates.
Lastly, results from the third RCT suggest that the predictions of experts and laypeople do not necessarily translate into real-world results, underscoring the importance of both survey piloting and field testing prior to implementation.