Policy learning and change during crisis: COVID‐19 policy responses across six states

Associated Data

All policy documents and summary data will be made available to other researchers through the research team's website or a data repository preferred by the journal.

Abstract

Whereas policy change is often characterized as a gradual and incremental process, effective crisis response necessitates that organizations adapt to evolving problems in near real time. Nowhere is this dynamic more evident than in the case of COVID‐19, which forced subnational governments to constantly adjust and recalibrate public health and disease mitigation measures in the face of changing patterns of viral transmission and the emergence of new information. This study assesses (a) the extent to which subnational policies changed over the course of the pandemic; (b) whether these changes are emblematic of policy learning; and (c) the drivers of these changes, namely changing political and public health conditions. Using a novel dataset analyzing each policy's content, including its timing of enactment, substantive focus, stringency, and similar variables, results indicate the pandemic response varied significantly across states. The states examined were responsive to both changing public health and political conditions. This study identifies patterns of preemptive policy learning, which denotes learning in anticipation of an emerging hazard. In doing so, the study provides important insights into the dynamics of policy learning and change during disaster.

Keywords: COVID‐19, policy change, policy learning, state policymaking


INTRODUCTION

When and under what conditions do policy change and learning occur during long duration crisis events? Extant policy research provides important insights into the dynamics of lesson learning in the aftermath of disasters (Birkland, 2004a, 2006; Crow et al., 2018; O'Donovan, 2017a; Sabatier & Jenkins‐Smith, 1993), yet little is known about the conditions leading to learning during a crisis. This omission is problematic given the marked uptick of long duration crises over the last two decades, including various climate‐related disasters and novel disease outbreaks (DeLeo et al., 2021).

It is against this backdrop that the following study assesses subnational learning during the COVID‐19 pandemic. The COVID‐19 pandemic, which has lasted over 2 years and resulted in over 1 million deaths in the United States alone, presents a useful context for assessing governmental learning and policy change during an evolving crisis (Boin et al., 2020; Dostal, 2020). The crisis has forced states to grapple with various changes in public health guidance on everything from mask wearing to social distancing protocols while managing the emergence of novel disease variants. Nor were the effects of the virus confined to the public health domain. Instead, COVID‐19 is a uniquely boundary spanning problem that impacted virtually every sector of society and the economy. Complicating matters further, a vacuum in pandemic response leadership rooted in federal inaction during the initial phase of the COVID‐19 crisis provided both responsibilities and opportunities for state‐level policy change and learning (Birkland et al., 2021; Fowler et al., 2021; Kettl, 2020; Taylor et al., 2022). The COVID‐19 pandemic thus provides an opportunity to assess whether such change and learning can occur in near real time and in response to rapidly changing social, economic, political, and public health environments.

To assess variation in subnational policy change and learning, we examine COVID‐19 policy making in six geographically and politically distinct states in the United States, including initial policies and subsequent modifications during 2020. We specifically seek to assess (a) the extent to which subnational policies changed over the course of the pandemic; (b) whether these changes are emblematic of policy learning; and (c) what the drivers of these changes are, namely changing political and public health conditions. We develop a novel dataset analyzing each policy's content, including its timing of enactment, substantive focus, stringency, and similar variables. Results suggest the pandemic response varied significantly across states, with Democratic‐led states engaging earlier in the pandemic timeline than Republican‐led states. Specifically, the states examined in our study were responsive to both changing public health and political conditions. We identify patterns of what we call preemptive policy learning, which denotes learning in anticipation of an emerging hazard. In doing so, our study provides important insights into the dynamics of policy learning and change during disaster.

THE IDEA OF CRISIS‐INITIATED LEARNING

The varieties of policy change

Policy change is broadly defined as the replacement of one policy with one or more new policies. Policy change can include situations where a new policy is adopted, an existing policy is changed, or an old policy is terminated (Lester & Stewart, 1996). Policy change is not the byproduct of a binary, “go/no go” decision. It does not end with the passage of a law. Instead, policy change involves actions and decisions taken across time and in response to shifting demands (Capano & Howlett, 2009; Šinko, 2016). In this respect, policy change can be measured in degrees. At one extreme, policy innovation encompasses pioneering decisions that seek to involve government in a new area. At the other extreme, policy termination refers to a policy that is abandoned or wound down. Between these two poles sits policy maintenance, which refers to minor adjustments to help ensure a policy continues to meet its goals.

Major policy reform is relatively rare, especially within the United States (Baumgartner & Jones, 2010; Peters, 2018). Consequently, we focus not only on innovations and termination (the two extremes) but also on what Hall (1993) calls “first order” reforms or policy changes that recalibrate the application of existing policy instruments and tools to better address changes in a problem condition. First‐order reforms are more or less akin to policy maintenance in that they do not involve government in new policy initiation but instead seek to improve upon or alter policy that is already in place. This is especially critical within the context of a long duration crisis event like the COVID‐19 pandemic because most states, particularly prior to deployment of vaccines, focused on managing the crisis as opposed to fashioning new legislation aimed at making lasting changes to their public health policy regime. DeLeo (2015) observed a similar pattern during the 2009 swine flu pandemic, noting that most major reforms were put on hold during the crisis as the federal government shifted its attention to more technocratic policy decisions like vaccine deployment and bolstering hospital surge capacity. In stark contrast to the focusing event literature that suggests sudden onset events with a rapid accumulation of problem indicators will be effective at capturing fleeting policy attention, events that involve a slow accumulation of indicators may not promote learning as government organizations and their publics slowly become acclimatized to the problem as a new normal. However, in a longer duration crisis where indicators accumulate rapidly but also endure over time, we may observe learning, as evidenced by policy changes and modifications that respond to changes in indicators. The COVID‐19 pandemic represents this last category of a long duration crisis with rapid accumulation of problem indicators (e.g., COVID‐19 cases). Therefore, it may be possible to observe policy changes that indicate government organizations' learning as the pandemic crisis evolves.

Policy change, and first‐order reforms in particular, can vary depending on the policy domain and situational context. For the purposes of public health policy making, two concepts are especially important. First, policy change can present itself as change in the relative stringency of a particular policy. The concept of policy stringency has been widely applied within the environmental policy context and is used to describe the extent to which a policy puts an explicit or implicit price on failing to comply with different rules and regulations (see, e.g., the OECD Environmental Policy Stringency Index; Galeotti et al., 2020). Unlike most environmental policies that are often monitored at the firm level, the various nonpharmaceutical interventions used during the COVID‐19 pandemic frequently asked individuals to engage in certain risk reduction strategies, like mask wearing, social distancing, or staying at home. Stringency within the context of COVID‐19 thus ranges from simply recommending certain risk mitigation behaviors to mandating individual compliance.

Second, complex problems like environmental and health protection typically require the creation of governance regimes encompassing an array of distinct but interrelated instruments and policies, colloquially referred to as policy mixes (Howlett & Rayner, 2013). According to Howlett (2005), different policy mixes are implemented based on capacity to affect behavior change and the types of actors that governments must engage with when enacting their programs and policies.

Put differently, policy makers are rarely presented with a single “silver bullet” solution but instead need to select from a variety of different interventions based on their perceived efficacy, legitimacy, equity, and partisan support (Howlett, 2005; Salamon, 1989). Policy change thus involves critical decisions about which policies to add or subtract from an existing mix or portfolio across time and, as noted below, in response to changing political and problem conditions. Regarding the COVID‐19 pandemic, policy scholars have advocated for deeper understanding of policy mixes and policy designs to examine the mechanisms that result in policy outcomes (Dunlop et al., 2020). The varieties of policies that could be included in a COVID‐19 mix are substantial, ranging from nonpharmaceutical interventions to economic development packages, vaccination campaigns, or online learning programs.

The dynamics of policy learning

Policy learning is the "relatively enduring alterations of thought or behavioral intentions which result from experience and which are concerned with the attainment (or revision) of policy objectives" (Sabatier, 1988, p. 133). Individuals in their roles as officials in public organizations—agencies, legislatures, and executive offices—act on new information, which in turn potentially leads to organizational‐level change (Heikkila & Gerlak, 2013). Learning occurs when individuals within organizations come to discover or realize the significance of policy problems, take up new information about how to address a problem, and potentially change policies (Albright & Crow, 2021; Birkland, 2004a, 2004b, 2006; Crow et al., 2018; O'Donovan, 2017b; Sabatier & Jenkins‐Smith, 1993). Although policy learning is an important part of the policy process, policy change may not necessarily occur as an outcome of learning (O'Donovan, 2017b; Taylor et al., 2021).

Learning is arrayed along a spectrum from instrumental learning about policy tools and their successes or failures to social learning about the underlying causes of policy problems to political learning about the various strategies and tactics that can be used when advocating for a particular policy (Albright & Crow, 2021; May, 1992; O'Donovan, 2017b; Sabatier, 1988). In this study, we focus on instrumental policy learning and political learning because social learning typically occurs in the aftermath of a crisis rather than during management or response. First, we use instrumental policy learning to assess the extent to which state governments changed policy because of learning about policy tools and instruments to address the problems presented by the pandemic. Instrumental policy learning is the most common form of policy learning, particularly compared to social learning (Birkland, 2006, 2009; May, 1992; O'Donovan, 2017b). It involves learning about new or existing policy tools and instruments that can be used to address a policy problem (Birkland, 2006; May, 1992; O'Donovan, 2017b). The empirical connection between instrumental learning and change is clear (Birkland, 2009; Crow et al., 2018; Howlett, 2012). Policies represent the lessons—political, instrumental, organizational, social—derived from a particular crisis (Albright & Crow, 2021; Crow & Albright, 2021). Policy change is generally considered to be one outcome of the learning process (Heikkila & Gerlak, 2013). In the context of the COVID‐19 pandemic, our expectation is that the lessons will be instrumental, meaning that states will engage in instrumental policy learning because of a change in problem indicators (i.e., COVID‐19 case counts).

Second, we examine the prospect of political learning in states during the COVID‐19 pandemic. Political learning is observed in cases where political influences—such as public opinion and electoral cycles—or considerations of political strategy lead to changes in policies. Political learning may include learning about the credibility or public acceptability of certain types of policies (Taylor et al., 2021) as well as the political strategies and tactics used by public organizations (Crow & Albright, 2019; May, 1992). In the absence of new information about a problem or other traditional drivers of learning, political learning can also take the form of policy mimicking, in which decision makers "copy" policies used in nearby or similar jurisdictions (May, 1992), perhaps because they simply observed that those policies worked well elsewhere (Shipan & Volden, 2014). Similarly, there may be political opposition to some forms of change that prevents the putative lessons from an event or phenomenon from being "learned" to the extent that they are applied in the form of a new or revised policy. Our expectation in this study is that political learning will occur when there is no new information, either about the policy instruments or COVID‐19 case counts, to inform choices by policy makers. As a result, states will mimic the policies of states that are politically similar, whether in partisanship or ideology.

Because we are examining the policies that are adopted, we look to policies as the products of learning. Building on this idea, we conceive of policy change—be it innovation, maintenance, termination, or adjustments in a policy's stringency—as potential evidence of learning. This is often referred to as prima facie evidence of policy learning. Relying on prima facie evidence of policy learning was a practice initially used by May (1992), meaning that evidence of policy learning can be inferred from the face value of government documents in the absence of direct evidence gathered by surveying policy makers and government officials. This practice of inferring policy learning from prima facie evidence has been further refined over time (Albright & Crow, 2015; Birkland, 2004a, 2004b, 2006, 2009; Crow et al., 2018; Crow & Albright, 2019; O'Donovan, 2017b). This approach is especially useful within the context of learning during crisis since policy makers are not responding to a single, discrete disaster, but rather to an ongoing crisis that is perpetually evolving and changing.

Crisis as a catalyst for learning and potential policy change

A number of factors can catalyze policy change and learning, particularly within the context of crises and hazards. Kingdon (2011) introduced the term "focusing event" as a "little push" that can elevate issues on a government's or the public's agenda. Birkland (1997, 1998) later refined Kingdon's description by defining a potential focusing event as an event that is sudden, rare, harmful, or illustrative of future harms, that affects a particular geographic area or a community of interest, and that is known to policy elites and the general public nearly simultaneously. Focusing events create pressure for public organizations to learn from disaster and, ideally, change policy in ways that improve future performance (Birkland, 2006; May, 1992; O'Donovan, 2019; Taylor et al., 2021).

Focusing events allow certain issues to bowl their way onto the policy agenda, effectively leapfrogging other items in the wake of disaster (Kingdon, 2011). COVID‐19 similarly skyrocketed to the top of federal and state policy agendas in March of 2020, but the crisis does not meet Birkland's definition of the term in that it was neither sudden nor confined to a specific geographic area—a pandemic is by definition a global phenomenon (DeLeo et al., 2021). Instead, the COVID‐19 crisis revealed itself through the accumulation of what scholars call problem indicators, or measures and metrics of a policy problem. The most prominent indicators within the context of COVID‐19—indeed all public health crises (DeLeo, 2018)—included the number of cases, deaths, and hospitalizations resulting from the virus.

Whereas most federal policy during the first year of the pandemic focused on passing fairly large relief packages to support individuals and state governments during the crisis (DeLeo et al., 2021), subnational governments were more dynamic because states and localities were tasked with directly managing the crisis. This critical response function most clearly manifested in decisions regarding whether or not to require various nonpharmaceutical interventions, like mask mandates, stay‐at‐home orders, and social distancing requirements, as well as various policy interventions aimed at buoying the economic, education, and housing sectors, among others. Because COVID‐19 was a long duration crisis event, states had to consider adjusting and recalibrating their policies in the face of improving or worsening indicators of the crisis like cases, deaths, and hospital capacity.

Oftentimes, public health problems are said to be subject to indicator lock, meaning there is general consensus that cases and deaths represent the only viable metrics of changes in problem conditions (DeLeo, 2018; see also Jones & Baumgartner, 2005). However, this was not the case for the COVID‐19 pandemic, which had profound effects across multiple sectors. Of particular importance were the virus's economic effects, which were readily tracked and tabulated alongside public health metrics (Carrieri et al., 2021). Thus, indicator change at the state level should facilitate not only greater policy change (DeLeo et al., 2021) but also greater policy mixing, both with respect to the types of public health interventions applied and with respect to policies aimed at mitigating the deleterious economic and social effects of the crisis.

Crises can force governments to both change and learn. While learning after a disaster is normatively desirable, certain factors such as the presence of a focusing event, issue salience, and group mobilization can make learning more likely, and it may take several iterations of crises (e.g., repeated disasters) for learning to occur. Disasters provide the impetus for policy learning because they often lay bare flaws or shortcomings within the existing crisis management regime. However, much of the literature has focused on change and learning after crisis, a testament to the fact that many of the crises studied are short duration, rapid onset events like hurricanes, terrorist attacks, tornadoes, and flooding (Albright & Crow, 2021; Birkland, 2006; O'Donovan, 2017a). COVID‐19, by contrast, has lasted for more than 2 years, thereby creating pathways for learning during disaster and in response to changing indicators. This type of long duration crisis event can therefore teach us a great deal about learning during disaster, rather than the more commonly studied learning after disaster.

Of course, problem indicators do not present themselves in a vacuum. Instead, they interact with competing social and political forces that ultimately shape how they are perceived by policy makers at any point in time, a phenomenon known as indicator politicization (DeLeo & Duarte, 2021). Various studies have demonstrated the profound effect of partisanship on COVID‐19 governance. Fowler et al. (2021) observe partisan divergence with respect to the timing and enactment of state emergency declarations, noting that Democratic governors were quicker to declare emergencies than their Republican counterparts. Birkland et al. (2021) echo this finding, adding that states not only differed in terms of the timing of emergency declarations but also the adoption of various nonpharmaceutical interventions. Grossman et al. (2020) find that Republicans were far more likely to support a return to in‐person learning than Democrats, regardless of COVID‐19 severity in their state. Taken together, these studies suggest that while indicators remain an important driver of change and learning throughout the pandemic, the crisis unfolded in a hyper‐politicized environment where Democratic versus Republican‐controlled states pursued different policies. Thus, political pressures can both promote and stymie normatively desirable learning depending on the state context.

RESEARCH QUESTIONS AND METHODS

While policy change and learning after disaster are widely studied phenomena, much less is known about learning during disaster, particularly long duration crises like the COVID‐19 pandemic. We stipulate that the literature on policy and organizational learning rests not on careful assessment of whether and to what extent individuals experienced cognitive change, but on the extent to which a reasonable case can be made that policy changed in the face of new information (Busenberg, 2001) and that this policy change is an artifact of learning (May, 1992). Based on these assumptions, we ask:

RQ: When and under what conditions do policy change and learning occur during long duration crisis events?

We conceive of learning in a way that is influenced by the severity of the policy problem, as signaled by changes in public health problem indicators over time as well as different political contexts. Thus, we hypothesize:

H1: State policies will change in response to changing problem indicators, with increases in problem indicators leading to greater policy promulgation and stringency and decreases leading to greater policy termination or relaxation.

H2: State policies will change according to political conditions: Republican governors in states that voted for President Trump will be less likely to adopt policies designed to slow the pandemic; Democratic governors in states that did not vote for President Trump will be more likely to adopt more, and more stringent, policies; and states whose governors' parties differ from their electorates' partisan preferences will adopt an intermediate number of policies with an intermediate level of stringency.

We also expect to see multiple patterns of policy mixing, a testament to COVID‐19's boundary spanning effects and a desire by state elected officials to throw all possible tools at the problem, given its severity. We hypothesize the following with regard to policy mixing:

H3: Policy mixes with greater policy variety will be associated with increases in problem indicators.

H4: Policy mixes with greater policy variety indicate instrumental policy learning as opposed to political learning by state governments.

Research design

The study's hypotheses are examined using a comparative case analysis approach. We selected six states for comparison: Colorado, Iowa, Louisiana, Massachusetts, Michigan, and Washington. We chose these states based on variation in regional, political, and economic characteristics, COVID‐19 case rates, and early actions in response to COVID‐19 (Table 1). Most importantly for the key variables analyzed here, politically induced and indicator‐driven policy change, the sample of states includes those governed by both Democrats and Republicans, states that took early and frequent COVID‐19 policy action as well as those that did not, and states with early outbreaks as well as those that saw more severe outbreaks later in 2020. The time period of analysis is March 2020 through December 2020. This period is characterized by a particular type of policy response focused on risk mitigation, limiting spread of the virus, and responding to corollary effects of the pandemic across sectors. It also predates the release of COVID‐19 vaccines and is therefore a markedly different timeframe in terms of the understanding of risk and of an eventual end to the pandemic.

TABLE 1

Characteristics of U.S. states included in analysis

State | Region | Political Party of Governor / 2020 Presidential Vote | Unemployment rate, June 2020 (%) | Unemployment rate, June 2021 (%) | Cumulative COVID‐19 cases per 1,000,000 as of February 27, 2022
Colorado | Mountain West | Democrat / Democrat | 10.6 | 6.2 | 228,708
Iowa | Midwest | Republican / Republican | 8.4 | 4.0 | 237,097
Louisiana | South | Democrat / Republican | 9.5 | 6.9 | 263,646
Massachusetts | Northeast | Republican / Democrat | 17.7 | 4.9 | 242,233
Michigan | Midwest | Democrat / Democrat | 14.9 | 5.0 | 235,541
Washington | West | Democrat / Democrat | 10.0 | 5.2 | 186,357

State policies

State‐level policy documents were collected by scraping relevant state policy documents and information from the websites of state governors' offices, state health agencies, and state‐sponsored COVID‐19 websites from March 2020 to December 2020. These documents included executive orders, proclamations, directives, emergency health orders, and other documents that were determined to constitute a policy. The policy documents were cross‐referenced with state press releases, the National Governors Association's list of state COVID‐19 policies, and the University of Washington's COVID‐19 State Policy database to ensure that all significant policies were collected. While these other sources provide a useful check on the Risk and Social Policy Working Group's database used in this analysis, they do not serve as a substitute, as the database of policies collected for this study includes all COVID‐19‐related policy topics (e.g., social distancing, mask wearing, business closures, housing policies, tax moratoria, etc.), which casts a wider net than other publicly available databases.
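
To illustrate the collection step, the sketch below shows how policy documents might be scraped from a state index page. It is a minimal example only: the URL, page structure, and file-type filter are hypothetical placeholders, not the actual state sites or the Risk and Social Policy Working Group's pipeline.

```python
# Minimal sketch of the document-collection step; the URL and HTML layout
# are hypothetical, and each state site would need its own selectors.
import requests
from bs4 import BeautifulSoup


def collect_policy_links(index_url: str) -> list:
    """Return title/URL pairs for policy documents linked from an index page."""
    response = requests.get(index_url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    documents = []
    for link in soup.find_all("a", href=True):
        href = link["href"]
        if href.lower().endswith(".pdf"):  # executive orders are often posted as PDFs
            documents.append({"title": link.get_text(strip=True), "url": href})
    return documents


if __name__ == "__main__":
    # Hypothetical index page, not an actual state URL.
    links = collect_policy_links("https://example-state.gov/covid-19/orders")
    print(f"Found {len(links)} candidate policy documents")
```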

External indicators and political drivers

To assess the relationship between problem indicators, policy change, and learning, our analysis includes variables measuring changes in COVID‐19 cases in each of the six states included in the study (Table 1). Additionally, we consider political drivers of policy change and learning. Data sources are listed in Table 2.

TABLE 2

Data sources used to examine the relationships between policy change and learning and problem indicators

Variable | Source of data
Dependent variable: policy change and learning | Policy documents downloaded from state websites from March 2020 to December 2020
COVID‐19 case numbers | Centers for Disease Control and Prevention a and the COVID‐19 Tracking Project b
State‐level 2020 voting data | Associated Press c
State‐level partisan composition | National Conference of State Legislatures (NCSL) d
Unemployment rates | Bureau of Labor Statistics

b COVID‐19 Tracking Project: https://covidtracking.com/data
c Associated Press: https://elections.ap.org/dailykos/results/2020-11-03/state/US

Data analysis

The state policies were coded using a multi‐step process. First, we used topic modeling (Latent Dirichlet Allocation) to identify clusters of COVID‐19‐related words in the state policy documents (included in Appendix B). Key policy topics were identified based on these clusters (e.g., masks, long‐term care facilities, and gatherings, among others; see Appendix B). The policy topics fall into three broad categories: (1) risk mitigation policies, which include stay‐at‐home orders, masks, events and gatherings, businesses, testing, and correctional facilities; (2) social support policies, which include social and financial supports from the government and housing policies; and (3) medical capacity, including policies governing medical and long‐term care facilities and elective surgeries. The documents (n = 581) were then analyzed for the presence of these topics using dictionaries of words commonly related to a topic, developed from the topic modeling. Topics whose dictionary words accounted for more than one percent of the words in a policy document were captured. When multiple topics were detected in a single document, we focused on the three most common topics. This allowed the researchers to systematically identify the units of analysis (i.e., distinct policies) within each document, as some policy documents introduced or modified multiple policies at once.
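
A condensed sketch of this two-stage procedure is shown below (in Python, using scikit-learn). The placeholder corpus, number of topics, and topic dictionaries are illustrative assumptions rather than the study's actual parameters or the Appendix B dictionaries.

```python
# Sketch of the automated coding stage: (1) fit an LDA topic model to surface
# word clusters, (2) score each document against topic dictionaries and keep
# topics covering more than 1% of its words, up to the three most common.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "All persons shall wear a face covering or mask in public indoor spaces",
    "Indoor gatherings and events are limited to ten people until further notice",
]  # placeholder corpus standing in for the 581 policy documents

# Stage 1: LDA to identify candidate clusters of COVID-19-related words.
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(documents)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(dtm)
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:8]]
    print(f"Topic {k}: {top_terms}")

# Stage 2: dictionary-based coding of each document (illustrative dictionaries).
topic_dictionaries = {
    "masks": {"mask", "masks", "face", "covering", "coverings"},
    "gatherings": {"gathering", "gatherings", "event", "events", "crowd"},
}

def code_document(text, dictionaries, threshold=0.01, top_n=3):
    """Return the topics whose dictionary words exceed the share threshold."""
    words = text.lower().split()
    shares = {topic: sum(word in vocab for word in words) / max(len(words), 1)
              for topic, vocab in dictionaries.items()}
    detected = {topic: share for topic, share in shares.items() if share > threshold}
    return sorted(detected, key=detected.get, reverse=True)[:top_n]

print(code_document(documents[0], topic_dictionaries))
```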

Second, each distinct policy identified through the automated analysis was coded manually (codebook available in Supplemental Material) to identify the following: (1) the issuer of the identified policy, (2) timing of enactment (or revision/termination), (3) policy design (e.g., mandates, economic incentives, persuasion, etc.), (4) stringency (the highest stringency policies are mandates that apply to all people in a state, while the lowest stringency policies are recommendations), (5) policy targets, and (6) policy topics that were missed or inaccurate in the topic modeling stage of coding. Coders then returned to inaccurate or incomplete topics and coded those by hand. The final dataset contains all distinct policies related to the identified topics (n = 748). To examine whether risk mitigation policies became more or less restrictive over time, policies were ordered chronologically and adjacent coded policies were compared, including codes for stringency, whether a policy was new, revised, or continued, and the targets and goals of the policy (e.g., reopening or reducing COVID‐19 prevalence).
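
The chronological comparison can be made concrete with a brief pandas sketch. The column names, the ordinal stringency scale (1 = recommendation, 3 = statewide mandate), and the example rows are hypothetical, shown only to illustrate the adjacent-policy comparison logic rather than the study's coding scheme.

```python
# Sketch of the adjacent-policy comparison: order coded policies by state and
# topic, then compare each policy's stringency with the one before it.
import pandas as pd

policies = pd.DataFrame([
    {"state": "CO", "topic": "masks", "date": "2020-04-03", "stringency": 1},  # recommendation
    {"state": "CO", "topic": "masks", "date": "2020-07-16", "stringency": 3},  # statewide mandate
    {"state": "CO", "topic": "masks", "date": "2020-11-05", "stringency": 3},  # continuation
])
policies["date"] = pd.to_datetime(policies["date"])
policies = policies.sort_values(["state", "topic", "date"])

# Positive values = more restrictive than the prior policy on the same topic,
# negative = relaxation, zero = continuation at the same stringency.
policies["stringency_change"] = policies.groupby(["state", "topic"])["stringency"].diff()
print(policies)
```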

Intercoder reliability for the manual coding was established on a subset of policy documents. Two pairs of coders coded a set of identical documents separately. The coded data were analyzed with ReCal2 to determine percent agreement and Scott's Pi. Table 3 outlines the reliability scores for each of the variables used in the analysis presented next. According to Krippendorff (2018), a Scott's Pi value above .80 is acceptable for reliability of coded data. Importantly, some categories (i.e., stringency) allowed coders to select all that apply from a list of common policy characteristics. Allowing multiple responses captures more complexity but yields lower intercoder reliability: if two coders selected three policy tools, for instance, and disagreed on only one of the three, the item was counted as a disagreement. Because percent agreement for these variables was still acceptable, the variables are included in this analysis despite lower Scott's Pi measures.
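
To make the reliability calculation explicit, the sketch below computes percent agreement and Scott's Pi for two coders on a single nominal variable. The study used ReCal2 for these calculations; this stand-alone version and its example codes are illustrative only.

```python
# Percent agreement and Scott's Pi for two coders on one nominal variable.
from collections import Counter

def percent_agreement(coder_a, coder_b):
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

def scotts_pi(coder_a, coder_b):
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement is based on the pooled distribution of both coders' codes.
    pooled = Counter(coder_a) + Counter(coder_b)
    expected = sum((count / (2 * n)) ** 2 for count in pooled.values())
    return (observed - expected) / (1 - expected)

# Hypothetical codes for an 'issuing office' variable.
pair_a = ["governor", "health_dept", "governor", "governor", "health_dept"]
pair_b = ["governor", "health_dept", "governor", "health_dept", "health_dept"]
print(percent_agreement(pair_a, pair_b))   # 80.0
print(round(scotts_pi(pair_a, pair_b), 3))
```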

TABLE 3

Intercoder reliability scores for the manually coded variables

Variable name | Pair one: Scott's Pi | Pair one: Percent agreement | Pair two: Scott's Pi | Pair two: Percent agreement
Effective date | 0.871 | 88.2 | 0.872 | 88.9
Expiration date | 0.797 | 82.4 | 0.804 | 83.3
Issuing office | 0.871 | 94.1 | 1 | 100
Stringency | 0.437 | 88.9 | 0.624 | 77.8