Newswise — School accountability policies from around the world list an array of mandates and recommendations to improve schools. One prevalent mandate, especially in the United States, calls for the development of a school improvement plan (SIP). Since the 1970s, many U.S. states have required that schools develop SIPs, and, in the 1990s, the U.S. federal government started to require that all state-designated underperforming schools develop SIPs (IASA, 1994; Odden & Dougherty, 1982). These school accountability policy mandates assert that SIPs are an improvement tool for educators to use to set direction, organize resources, and take actions to enhance school performance (Beach & Lindahl, 2007; Doud, 1995). Studies have found that higher-quality SIPs are positively—but often not significantly—correlated with better student achievement in English/language arts (ELA) and/or mathematics (Fernandez, 2011; Huber & Conway, 2015; Strunk et al., 2016; VanGronigen & Meyers, 2022). Yet, other work suggests that educators charged with developing SIPs consider the process to be more of a compliance exercise than a legitimate tool for improving their schools (Meyers & VanGronigen, 2019; Mintrop et al., 2001). As a result, educators create SIPs that are just “good enough” (Simon, 1957, p. xxv) to be approved by their school district or their state education agency (SEA) so they can check the SIP off their to-do list and return to work they deem more important (Duke, 2015; Duke et al., 2013).
Study Purpose and Research Questions
The U.S. federal government and many SEAs, scholars, and practitioners have developed a range of resources over the last 30 years to aid educators in crafting high-quality SIPs, from targeted professional workshops to extensive school improvement toolkits (Anfara et al., 2006; Rhim et al., 2007; Scott et al., 2009). In the present study, we focus on one resource: SIP templates. Prior work (e.g., Miller, 2003; Rentner et al., 2017; Rhim & Redding, 2011; White & Smith, 2010) has found that some SEAs devise their own SIP templates whereas other SEAs adopt an external SIP template, such as those from the Indistar® online planning platform. Still other SEAs provide their schools with no SIP template, leaving the design of SIPs up to school- and school district–level officials.
Research (e.g., Louis & Robinson, 2012) suggests that the design of school accountability policy mandates like SIPs influences the work of educators. Thus, the design and characteristics of SIP templates can signal to educators what is and is not important when developing and implementing a SIP (Mintrop et al., 2001). A SIP template that does not require an analysis of student attendance data, for example, may prompt fewer educators to address chronic absenteeism among students. A SIP template that does not call for family-school-community engagement strategies may see fewer educators invest time in building relationships with people outside their school. Despite their potential influence, though, few peer-reviewed empirical studies have intentionally investigated SIP templates in general, much less the specific influences SIPs may have on school improvement efforts. The present study is in direct response to both this gap and calls from scholars (e.g., Bickmore et al., 2021; Dunaway et al., 2014) to better describe the SIP development and implementation process.
The broad purpose of the present study was to better understand the design and characteristics of SIP templates used in public schools around the United States. To strengthen our analysis and examine variation over time, we gathered SIP templates used before and after the 2015 passage of the Every Student Succeeds Act (ESSA). ESSA devolved some federal authority over school improvement efforts back to states, and we wanted to explore the potential influence of this devolution on SIP templates. In service of our purpose and desired examination of change over time, this exploratory qualitative content analysis study asked the following two research questions:
1. What are the design and characteristics of SIP templates used before and after ESSA’s passage?
2. How does the typical pre- and post-ESSA SIP template espouse the SIP development and implementation process?
Findings
Research Question 1: SIP Template Characteristics
Our first research question asked about the characteristics of SIP templates before and after ESSA’s passage (the prior era and the current era, respectively).
General SIP Template Characteristics
To get a general sense of the data, we calculated the overall prevalence of the 103 SIP template characteristics across the prior and current eras (see Table 1). Starting with the prior era, the most prevalent characteristics that appeared in at least half of states were a general description of goals (84%), a required ELA goal (68%), a general description of action steps (61%), a general description of strategies to implement goals (59%), and the school principal’s name (50%). On the other hand, we observed no instances of 12 characteristics across prior-era SIP templates, such as including early warning data, rationales for objectives, recommendations for future school years, expected results from improvement efforts, or a description of the school’s cultural competency plan. As a reminder, these characteristics came from our set of deductive codes derived from extant literature.
For the current era, the most prevalent characteristics appearing in at least half of states were a general description of goals (65%), a required ELA goal (54%), a general description of action steps (52%), and a required mathematics goal (52%). We observed no instances of 15 characteristics across current-era SIP templates, such as staff and community demographic data, several details related to objectives (e.g., rationale, supporting evidence, timeline), expected results of improvement efforts, a description of feedback loops between the school and parents and surrounding community, or a description of the school’s cultural competency plan.
To examine more nuanced changes in SIP template characteristic prevalence rates after ESSA’s passage, we ranked the prevalence of all characteristics within each era with Rank 1 being the most prevalent and Rank 103 being the least. Ranking changes from the prior to the current era permitted us to consider the extent to which ESSA’s mandates—such as SQSS indicators (e.g., students’ social-emotional learning) and needs assessments—were present in our sample of SIP templates. Looking to the ranking distribution’s tails, 11 characteristics decreased at least 26 ranks (i.e., one quartile) between eras while 12 characteristics increased at least 26 ranks. Fewer SIP templates used after ESSA’s passage included staff demographic data (↓65 ranks), a list of stakeholders involved in developing the SIP (↓48), explicit strategies to communicate information to parents (↓39), subject area test scores by student subgroup (↓38), a required science test score goal (↓38), a description of monitoring progress on meeting goals (↓33), and a required social studies test score goal (↓29). Conversely, more SIP templates used after ESSA’s passage included purpose statements (↑44), a required student-focused social-emotional learning (SEL) goal (↑39), measurable outcomes for strategies (↑37), supporting evidence for goals (↑33), supporting evidence for strategies (↑31), a timeline for meeting goals (↑30), progress benchmarks for action steps (↑28), and goals in the SMART goal format (↑27).
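To make the ranking procedure concrete, the short Python sketch below ranks prevalence values within each era and reports rank shifts against the one-quartile threshold. The characteristic names and prevalence values are hypothetical placeholders, not the study's data, and ties are broken by sort order here rather than handled formally.

```python
# Rank characteristics by prevalence within each era (Rank 1 = most
# prevalent), then compare ranks across eras. All names and prevalence
# values below are hypothetical placeholders.
prior = {"goal_description": 0.84, "staff_demographics": 0.30, "purpose_statement": 0.11}
current = {"goal_description": 0.65, "staff_demographics": 0.02, "purpose_statement": 0.46}

def rank(prevalence: dict[str, float]) -> dict[str, int]:
    """Rank 1 = most prevalent; ties broken by sort order in this sketch."""
    ordered = sorted(prevalence, key=prevalence.get, reverse=True)
    return {name: position + 1 for position, name in enumerate(ordered)}

prior_ranks, current_ranks = rank(prior), rank(current)
QUARTILE = 26  # one quartile of the 103 ranks, the threshold used above

for name in prior_ranks:
    shift = prior_ranks[name] - current_ranks[name]  # positive = rose in prevalence
    flag = " (at least one quartile)" if abs(shift) >= QUARTILE else ""
    print(f"{name}: {shift:+d} ranks{flag}")
```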
We then categorized SIP template characteristic prevalence rates from both eras into the six emphasis areas of the SIP development and implementation process (see Table 2). Of the 737 characteristics we observed in prior-era SIP templates, 31% focused on assessing current conditions, 10% on determining needs, 28% on setting direction, 8% on organizing resources, 16% on taking action, and 7% on evaluating progress. Of the 663 characteristics we observed in current-era SIP templates, 25% focused on assessing current conditions, 12% on determining needs, 33% on setting direction, 8% on organizing resources, 14% on taking action, and 8% on evaluating progress. Considering change over time, SIP templates used after ESSA's passage had more characteristics that emphasized setting direction (+5%), evaluating progress (+2%), and determining needs (+1%)—and fewer characteristics that emphasized assessing current conditions (−6%) and taking action (−2%).
SIP Template Characteristics by State
Turning to findings by state, we calculated a general "coverage rate" for each state's SIP template in each era: the number of characteristics observed in a SIP template divided by 103. For the prior era, coverage rates ranged from 7% (Arkansas, Hawaii, Iowa) to 29% (New Jersey) with an average of 16%, meaning a prior-era SIP template included—at most—30 of the 103 characteristics. For the current era, coverage rates ranged from 3% (Utah) to 25% (New Mexico) with an average of 14%, meaning a current-era SIP template included—at most—26 of the 103 characteristics.
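As a concrete illustration of this arithmetic, the minimal Python sketch below computes a coverage rate; the 30-characteristic count mirrors the New Jersey figure above, and the function name is ours, not part of the study's materials.

```python
# Coverage rate: characteristics observed in a SIP template divided by
# the 103 characteristics in the coding scheme.
TOTAL_CHARACTERISTICS = 103

def coverage_rate(observed_count: int) -> float:
    """Share of the 103 coded characteristics present in one template."""
    return observed_count / TOTAL_CHARACTERISTICS

# A template with 30 observed characteristics, like New Jersey's
# prior-era template, yields a coverage rate of about 29%.
print(f"{coverage_rate(30):.0%}")  # -> 29%
```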
To examine more nuanced changes in coverage rates after ESSA's passage, we calculated the differences in coverage rates between eras for the 44 states with both prior- and current-era SIP templates. Across these 44 states, coverage rate changes ranged from a 19% decrease (Minnesota) to an 11% increase (Florida), with an average of −3%, suggesting that SIP templates used after ESSA's passage included fewer characteristics compared to before ESSA's passage. Looking to the coverage rate distribution's tails, current-era SIP templates in nine states included at least 10% fewer characteristics compared to their prior-era SIP templates: Minnesota (−19%), Tennessee (−17%), Rhode Island (−15%), Texas (−13%), Michigan (−11%), Virginia (−11%), Illinois (−10%), New Jersey (−10%), and Utah (−10%). By contrast, only one state—Florida—included at least 10% more characteristics in its current-era SIP template compared to its prior-era SIP template.
We then explored state findings with respect to the six emphasis areas of the SIP development and implementation process. For each state’s prior- and current-era SIP template, we calculated “focus rates” to assess the extent to which a SIP template focused on the six emphasis areas. To calculate these focus rates, we divided each emphasis area’s observed characteristic count by the total number of observed characteristics in that SIP template. For example, Alabama’s prior-era SIP template included 29 total characteristics, eight of which aligned with the assessing current conditions emphasis area while two aligned with the determining needs emphasis area. The resulting focus rates for these two emphasis areas in Alabama’s prior-era SIP template were 28% (eight divided by 29) and 7% (two divided by 29), respectively.
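The sketch below reproduces this focus-rate arithmetic for the Alabama example. Only the first two counts (eight and two) come from the text; the remaining four emphasis-area counts are hypothetical placeholders chosen so the total matches the 29 observed characteristics.

```python
# Focus rate: for one template, the share of its observed characteristics
# that fall within each of the six emphasis areas.
from collections import Counter

alabama_prior = Counter({
    "assessing current conditions": 8,   # from the worked example
    "determining needs": 2,              # from the worked example
    "setting direction": 10,             # hypothetical placeholder
    "organizing resources": 3,           # hypothetical placeholder
    "taking action": 4,                  # hypothetical placeholder
    "evaluating progress": 2,            # hypothetical placeholder
})

total = sum(alabama_prior.values())  # 29 observed characteristics
for area, count in alabama_prior.items():
    print(f"{area}: {count / total:.0%}")
# -> assessing current conditions: 28%, determining needs: 7%, ...
```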
Across all 44 prior-era SIP templates, average focus rates for the six emphasis areas were 29% for assessing current conditions, 10% for determining needs, 31% for setting direction, 8% for organizing resources, 16% for taking action, and 6% for evaluating progress. Although all prior-era SIP templates had at least one characteristic emphasizing setting direction, various states had 0% focus rates for the other five emphasis areas: 4 states included no characteristics about assessing current conditions, 13 states included nothing about determining needs, 15 states included nothing about organizing resources, 2 states included nothing about taking action, and 15 states included nothing about evaluating progress. See Table 4 for a full listing of prior-era focus rates for each emphasis area by state.
Turning to the 48 current-era SIP templates, average focus rates for the six emphasis areas were 23% for assessing current conditions, 11% for determining needs, 36% for setting direction, 8% for organizing resources, 14% for taking action, and 8% for evaluating progress. Similar to the prior era, all current-era SIP templates included at least one characteristic emphasizing setting direction, but some states had 0% focus rates for the other five emphasis areas: 8 states included no characteristics about assessing current conditions, 13 states included nothing about determining needs, 18 states included nothing about organizing resources, 8 states included nothing about taking action, and 12 states included nothing about evaluating progress. See Table 5 for a full listing of current-era focus rates for each emphasis area by state.
Comparing focus rates between the 44 states with SIP templates from both eras, current-era SIP templates had higher focus rates in determining needs (+1%), setting direction (+5%), and evaluating progress (+2%) and lower focus rates in assessing current conditions (−6%) and taking action (−2%). Focus rates in organizing resources did not change between eras. The number of states with 0% focus rates in certain emphasis areas also changed between eras: four more states in the current era included no characteristics about assessing current conditions, three more states included nothing about organizing resources, and six more states included nothing about taking action. Three fewer states in the current era had a 0% focus rate for evaluating progress, while the number of states with 0% focus rates for determining needs and setting direction remained the same between eras. See Table 6 for a full listing of focus rate changes between eras for each emphasis area by state.
Research Question 2: Espousals of the SIP Development and Implementation Process
Drawing upon our conceptual framework, our second research question asked how our sample of SIP templates—through their design and characteristics—espoused the SIP development and implementation process before and after ESSA's passage. The next sections, though, do not make value judgments about whether such espousals were "good" or "bad." Our goal with the present study was to describe, not evaluate. In a later section, we critically reflect upon these espousals and our findings more generally, especially with respect to extant literature.
To set the stage for these espousals, our coding scheme included SIP template characteristics about general school details (e.g., principal name); school demographic data; SIP development details; school performance data; early warning data; needs assessment data; goals; objectives; strategies; action steps; family and community engagement; budgeting; and other information, such as schools' plans for staff professional learning, technology, and cultural competency. Extant research (e.g., Duke et al., 2013) suggests that the bulk of a SIP's content focuses on sections related to "goals, objectives, strategies, and action steps," which—for simplicity—we abbreviated as GOSAS. Across all SIP templates, we coded whether each GOSAS included a general description, a rationale for selection, evidence to support selection, progress benchmarks, measurable outcomes, a timeline for completion, progress monitoring information, and those responsible for doing the work (see Table 1).
Prior-Era Espousal
The typical SIP template used during the NCLB era included approximately 17 of the 103 SIP template characteristics. A SIP template from the prior era often included a mission/vision/purpose statement; a required ELA goal; and general descriptions of goals, strategies, and action steps. A prior-era SIP template did not often include more granular details about GOSAS, especially information related to why a particular GOSAS was selected (e.g., a connection to prior school performance data or current needs assessment data) or how progress and ultimate success for a particular GOSAS would be measured (e.g., student formative assessment scores, end-of-year standardized test scores). A SIP template from the prior era also did not include early warning data or a cultural competency plan. Finally, 77% of the characteristics included in a typical prior-era SIP template emphasized developing the SIP (i.e., assessing current conditions, determining needs, setting direction, organizing resources) whereas 23% emphasized implementing the SIP (i.e., taking action, evaluating progress). Although this prior-era espousal aligns with NCLB's (2002) focus on student achievement in ELA, the lack of SIP template characteristics related to SIP implementation—especially monitoring and measuring progress—suggests that SEAs charged educators more so with developing improvement efforts and less so with implementing those efforts. Such an espousal comports with extant literature published before ESSA's passage asserting that educators develop SIPs and then rarely refer to them as implementation occurs during the school year (e.g., Duke, 2015; Duke et al., 2013).
Current-Era Espousal
The typical SIP template used during the ESSA era included approximately 14 of the 103 SIP template characteristics. Although a current-era SIP template included many of the same characteristics as a prior-era SIP template (e.g., a mission/vision/purpose statement, a required ELA goal), more states called for SIPs to include SMART goals, goals related to students’ nonacademic outcomes (e.g., behavior, social-emotional learning), and evidence for selecting strategies and setting measurable outcomes for strategies. Fewer states called for SIPs to include science and social studies goals, details about monitoring progress on meeting goals, subject area test scores disaggregated by student subgroups, and staff and community demographic data. Similar to the prior era, though, the typical current-era SIP template included no characteristics about a school’s cultural competency plan. Finally, fewer characteristics in a typical current-era SIP template emphasized assessing current conditions whereas more characteristics prompted educators to set direction. This current-era espousal aligned with some of ESSA’s tenets (Hale et al., 2017), such as drafting SIP goals related to more than student achievement in ELA and mathematics and considering evidence with respect to school improvement strategy selection. Despite these increased emphases, the typical current-era SIP template included fewer characteristics than its prior-era predecessor. The next section expounds upon positive and negative consequences of these changes between eras.
Discussion
ESSA’s Lackluster Influence
Given our interest in change over time, we start by returning to our conceptual framework to discuss differences in states' espoused theories of school improvement planning between the prior era and the current era. Despite ESSA's passage, the typical current-era SIP template—by and large—looked rather similar to the typical prior-era SIP template. As a result, the espoused SIP development and implementation process will likely remain rather similar during the current era.
Curiously, though, some states appeared to use their ESSA-granted autonomy to decrease the number of characteristics in their SIP templates. From one perspective, fewer SIP template characteristics can provide educators with more autonomy over school improvement efforts (see Mintrop & Sunderman, 2009), which was one of ESSA’s espoused goals (Portz & Beauchamp, 2022). Such autonomy can create conditions for educators to proactively identify and address internally developed, school-specific needs rather than reactively respond to mandates from externally developed school accountability policies (Altrichter & Kemethofer, 2015).
From a different perspective, fewer SIP template characteristics may prompt less attention on certain critical issues, especially equity. Current-era SIP templates from four states studied by Wronowski and colleagues (2022), for instance, included few characteristics related to enhancing educators’ cultural competency to better serve increasingly diverse student populations, shifting educators’ deficit views to better serve families and communities, or involving community members in school improvement efforts. Although fewer SIP template characteristics may enhance educator autonomy and promote educator professionalization, such omissions place greater responsibility on educators—especially educational leaders—to use their preparation to ensure improvement efforts address student, teacher, and community needs.
Relatedly, we observed that fewer SIP templates used after ESSA’s passage called for educators to provide some kind of evidence for GOSAS selection—a finding that stood in direct contrast to ESSA’s charge that select improvement strategies, especially those used in underperforming schools, needed to be supported by evidence (Hale et al., 2017). This finding aligns with recent work on states’ ESSA plans that found few SEAs themselves included evidence to support their espoused approaches to school improvement more generally (VanGronigen et al., 2022). This lack of modeling at the state level may prompt school- and school district–level officials to act similarly.
There was one bright spot, though—more SIP templates used after ESSA’s passage called for educators to include nonacademic goals, such as those focused on student behavior generally and SEL for students specifically. Our findings suggest that ESSA’s provision that states develop broader criteria to measure school performance took root in some states’ current-era SIP templates. Despite being incremental in the larger scheme of our findings, this encouraging change suggests that some states may have used their ESSA-granted autonomy to reshape their espoused theories of school improvement planning, signaling to educators that students’ nonacademic outcomes deserved attention alongside students’ academic outcomes.
Implications, Recommendations for Future Work, and Conclusion
Our findings prompt several implications for policy, preparation, and practice. Regarding policy, states must develop and disseminate a coherent theory of action about how they think school improvement happens, where a SIP resides in that theory of action, and how educators—especially principals charged with leading the SIP development and implementation process—can be supported to develop and implement a SIP that recognizes both their school's needs and the state's theory of action. Given our finding that SIP templates included no cultural competency-related characteristics, for instance, would such a topic be on the typical principal's radar when leading the SIP development and implementation process? This implication walks a fine line, though, between mandate and recommendation. Although states may not require certain characteristics in their SIP templates, they can still list them to prompt educators to reflect on key topics (e.g., equity) and encourage educators to consider those key topics in their SIPs.
Turning to preparation, principals are often the primary drivers of developing and implementing SIPs. As a result, educational leadership preparation programs (ELPPs) should allocate specific space and time in their programs of study for aspiring leaders to review, discuss, and critique their state's SIP template. ELPPs could also share SIP templates from other states and prompt aspiring leaders to consider alternative ways the SIP development and implementation process may unfold and ultimately be accomplished. Moreover, ELPPs should provide aspiring leaders with explicit training that describes school improvement as a systems issue—and that even if a SIP template does not prompt for information about certain parts of the system (e.g., a reflection on early warning data), aspiring leaders should still consider the wider system when gathering and analyzing information for inclusion in their SIPs.
Extending the previous implications to practice, our findings suggest that educators in some states may receive very little guidance from their SIP templates about improvement efforts generally and the SIP development and implementation process specifically. Consequently, the onus of identifying and addressing schools' unique, contextualized needs while also satisfying external mandates continues to rest mostly with school-level educators—not other actors in the system. As such, school district officials, in particular, should take an active role in supporting school-level SIP development and implementation efforts. First, these officials should emphasize to their school-level leaders the need to prioritize the SIP development and implementation process and the need for SIPs to comport with school district goals (e.g., those listed in a school district's strategic plan) and school-specific needs. Second, school district officials should critically review SIPs early in their development to ensure alignment among state regulations (e.g., SIP template prompts), school district goals, and school-specific needs. Meetings held before the start of the fall and spring semesters can provide important opportunities for school district officials to offer essential feedback before school-level leaders finalize and begin implementing a SIP. Third, school district officials should spend more time with school-level leaders, especially principals, throughout the school year to monitor SIP implementation. These monitoring efforts, which school district officials should take responsibility for initiating and sustaining, may occur monthly and consist of reviewing a school's progress toward meeting SIP goals and discussing whether any revisions to SIP contents are warranted based on implementation (e.g., new strategies or action steps). We recognize, though, that these officials may not know how to support SIP-related efforts, so school-level officials—especially principals—may need to provide contextual insight to aid school district officials in providing feedback on early SIP development and later SIP implementation efforts.
We also recommend that future research continue this line of inquiry, detailing and comparing how educators in a few states develop and implement SIPs. Colorado's current-era SIP template, for instance, included several reflection prompts whereas New Mexico's current-era SIP template was organized around plan-do-study-act (PDSA) cycles. Original qualitative data collection using interviews and/or focus groups could explore how educators interact with SIP templates and what subsequent SIP implementation looks like within and across states. Such work would offer insight into enacted theories of school improvement planning and be an excellent complement to the present study's focus on espoused theories.
To close, the present study was among the first to specifically examine the characteristics of SIP templates used in states before and after ESSA's passage. Although we identified some encouraging changes in what SIP templates prompted after ESSA's passage, the typical SIP template used during the ESSA era looked much like the typical SIP template used during the NCLB era. Consequently, not much is poised to change in the near future with respect to the SIP development and implementation process. We nevertheless remain steadfast that SIP templates can be a tool to help educators identify and address a range of important issues in their schools, especially those related to equity and social justice. States, especially SEAs, occupy powerful positions to help shape that kind of work—and an intentionally developed, comprehensive SIP template is one tool that educators can use to foster more high-quality, equitable learning experiences for all students.