Continuous Quality Improvement in Afterschool Settings: Impact Findings from the Youth Program Quality Intervention Study

Abstract

Background: Out-of-school time programs can have positive effects on young people’s development; however, programs do not always produce such effects. The quality of instructional practices is logically a key factor, but quality improvement interventions must be understood within a multi-level framework, including policy, organization, and point of service, if they are to be both effective and scalable.

Purpose: To evaluate the effectiveness of the Youth Program Quality Intervention (YPQI), a data-driven continuous improvement model for afterschool systems. Research questions include:

  • Does the YPQI increase managers’ focus on instruction and the use of continuous improvement practices by site-based teams?
  • Does the YPQI improve the quality of afterschool instruction?
  • Does the YPQI increase staff tenure?
  • Can the YPQI be taken to scale across programs that vary widely in structure, purpose, and funding, using resources available to public agencies and community-based organizations?
  • Will afterschool organizations implement the YPQI under lower-stakes conditions, where compliance with the model focuses on the improvement process rather than on attainment of predetermined quality ratings?

Participants: Eighty-seven afterschool sites in five diverse afterschool networks participated in the study. Each site employed the equivalent of one full-time program manager and between two and ten direct staff; had an average annual enrollment of 216 youth; and had an average daily attendance of 87 youth.

Research Design: This is a cluster randomized trial. Within each of the five networks, between 17 and 21 sites were randomly assigned to an intervention (N=43) or control group (N=44). Survey data were collected from managers, staff, and youth in all sites at baseline prior to randomization (spring 2006), at the end of the implementation year of the study (spring 2007) and again at the end of the follow-up year (spring 2008). External observers rated instructional practices at baseline and at the end of the implementation year. Implementation data were collected from both intervention and control groups. Hierarchical linear models were used to produce impact estimates.
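
As context for the impact estimates reported below, the sketch that follows illustrates one common way a hierarchical linear model might be fit for a cluster randomized design of this kind: respondents nested within sites, a site-level treatment indicator, and network fixed effects reflecting the blocked randomization. This is a minimal illustration using hypothetical file and column names; it is not the study’s actual estimation code.

    # Minimal sketch of a two-level hierarchical linear model for a cluster
    # randomized trial. Assumes a long-format file with hypothetical columns:
    # site_id, network, treatment (0/1), baseline_score, outcome_score.
    # Illustrative only; not the estimation code used in the study.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("ypqi_outcomes.csv")  # hypothetical file name

    # Fixed effects: treatment indicator, baseline covariate, and network
    # indicators (randomization was blocked within networks); a random
    # intercept for each site accounts for clustering of respondents.
    model = smf.mixedlm(
        "outcome_score ~ treatment + baseline_score + C(network)",
        data=df,
        groups=df["site_id"],
    )
    result = model.fit()
    print(result.summary())

    # A standardized effect size can be approximated by dividing the
    # treatment coefficient by the standard deviation of the outcome.
    print(result.params["treatment"] / df["outcome_score"].std())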

Findings: The impacts of the YPQI on the central outcome variables were positive and statistically significant. The YPQI produced gains in continuous improvement practices with effect sizes of .98 for managers and .52 for staff. The YPQI improved the quality of staff instructional practices, with an effect size of .55. Higher implementation of continuous improvement practices was associated with higher levels of instructional quality, with effects nearly three times greater than the overall experimental impact. Level of implementation was sustained in intervention group sites in the follow-up year.

Conclusions: This study demonstrates that a sequence of continuous improvement practices implemented by a site-based team – standardized assessment of instruction, planning for improvement, coaching from a site manager, and training in specific instructional methods – improves the quality of instruction available to children and youth. The YPQI produces a cascade of positive effects, beginning with the provision of standards, training, and technical assistance, flowing through managers’ and staff members’ implementation of continuous improvement practices, and resulting in effects on staff instructional practices. Evidence also suggests that participation in the YPQI may increase the length of staff tenure and that YPQI impacts are both sustainable and scalable.

Quality at the Point of Service: Profiles of Practice in After-School Settings

Abstract

A unique observational data set was used to explore quality at the point of service in after-school programs. Staff practices in after-school settings were represented on a series of unidimensional scales closely indexed to staff behavior. In order to account for heterogeneity of staff performances, pattern-centered methods were used to construct profiles of common staff practices. Results revealed six pedagogy profiles that were classified in terms of three broad types of performances delivered by after-school staff: (1) positive youth development, (2) staff-centered, and (3) low-quality. Staff membership in these profiles was not related to youth-staff ratio. However, results revealed significant differences between the profiles on the content of the offering and the age of youth in the setting.
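
As a rough illustration of the pattern-centered approach described above (the study’s actual procedure, such as a latent profile or cluster analysis with formal model selection, may differ), staff observation scores on several unidimensional practice scales could be grouped into profiles as sketched below; the file and column names are hypothetical.

    # Rough illustration of a pattern-centered analysis: grouping staff
    # observations into practice profiles based on several scale scores.
    # Hypothetical file and column names; not the study's actual method.
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    df = pd.read_csv("staff_observations.csv")  # hypothetical file name
    scales = ["safe_environment", "supportive_environment",
              "interaction", "engagement"]  # hypothetical PQA-style scales

    X = StandardScaler().fit_transform(df[scales])

    # Fit a six-cluster solution, mirroring the six pedagogy profiles
    # reported above; in practice the number of profiles would be chosen
    # by comparing fit across candidate solutions.
    kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)
    df["profile"] = kmeans.labels_

    # Inspect mean scale scores per profile to interpret each pattern.
    print(df.groupby("profile")[scales].mean().round(2))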

Quality and Accountability in the Out-of-School Time Sector

In the fragmented out-of-school-time sector, defining and measuring quality in terms of staff behaviors at the point of service provides a common framework that can reduce obstacles to cross-sector and cross-program performance improvement efforts and streamline the adoption of data-driven accountability policies. This chapter views the point of service, that is, the microsettings where adults and youth purposefully interact, as the critical unit of study because it is ubiquitous across out-of-school-time programs and because it is the place where key developmental experiences are intentionally delivered. However, because point-of-service behaviors are embedded within multilevel systems where managers set priorities and institutional incentives constrain innovation, effective quality interventions must contend with and attend to this broader policy environment. The Youth Program Quality Assessment (Youth PQA) is one of an emerging class of observational assessment tools that measure staff performances at the point of service and, depending on the methodology of use, can help create the conditions that managers and youth workers need to accept, adopt, and sustain accountability initiatives. Observational assessment tools can be flexible enough to be used for program self-assessment (appropriate for low-stakes, non-normative learning purposes), external assessment (appropriate for higher-stakes uses such as normative comparisons and performance accountability), and various hybrids that combine elements of each. We offer advice for decision makers on how to use the Youth PQA and similar measurement tools most effectively, depending on how clearly the purposes of accountability and improvement policies are articulated and how implementation is sequenced.

Quality Accountability: Improving Fidelity of Broad Developmentally Focused Interventions

Abstract

This chapter describes the Youth Program Quality Intervention (YPQI), a setting-level intervention model designed to raise quality in out-of-school time programs. The YPQI takes managers and staff from a network of youth programs through a process of identifying and addressing strengths and areas for improvement, using a standardized assessment tool. This tool operationalizes a definition of program quality based on providing youth access to key developmental experiences. Descriptive findings about the quality of youth programs are presented. A three-level model of settings also addresses system accountability, management, and the point of service in youth programs. The chapter discusses accountability structures ranging from low stakes to higher stakes, and presents a generic model for setting change.

Palm Beach Quality Improvement System Pilot: Final Report

This report on the Palm Beach County Quality Improvement System (QIS) pilot provides evaluative findings from a four-year effort to imagine and implement a powerful quality accountability and improvement policy in a countywide network of after-school programs. The Palm Beach QIS is an assessment-driven, multi-level intervention designed to raise quality in after-school programs, and thereby raise the level of access to key developmental and learning experiences for the youth who attend. At its core, the QIS asks providers to identify and address strengths and areas for improvement based on use of the Palm Beach County Program Quality Assessment (PBC-PQA), a diagnostic and prescriptive quality assessment tool, and then to develop and enact quality improvement plans. Throughout this process, training and technical assistance are provided by several local and national intermediary organizations.

We present baseline and post-pilot quality ratings for 38 after-school programs that volunteered to participate in the Palm Beach QIS pilot over a two-year cycle. These data are the routine output of the QIS and are designed to support evaluative decisions by program staff and regional decision makers. In addition to the typical QIS output, we also provide as much detail as possible about the depth of participation in the various elements of the improvement initiative and offer a few opinions about what worked.

Primary findings include:

  • Quality changed at both the point-of-service and management levels. During the QIS pilot, quality scores changed substantially at both levels, suggesting that the delivery of key developmental and learning experiences to children and youth increased between the baseline and post-pilot rounds of data collection.
    • Point-of-service quality increased most substantially in areas related to environmental supports for learning and peer interaction, but positive and statistically significant gains were evident in all assessed domains of quality.
    • The incidence of organizational best practices and policies increased in all assessed management-level domains, especially staff expectations, family connections, and organizational logistics.
  • Planning strategies that targeted specific improvement areas were effective. Pilot sites registered larger quality gains on point-of-service metrics that were aligned with intentionally selected areas for improvement. This indicates that the quality improvement planning process effectively channels improvement energies (a minimal analysis sketch follows this list).
  • Site managers and frontline staff participated in core elements of the QIS at high rates. Relative to other samples, participation by frontline staff was especially high, suggesting that the core tools and practices of the QIS are reasonably easy for site managers to introduce into their organizations.
  • The core tools and practices of the QIS were adopted at high rates. Thirty-five of 38 sites (92%) completed the self-assessment process and 28 sites (74%) completed all of the steps necessary to submit a quality improvement plan.
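
The sketch below illustrates, in minimal form, the kinds of comparisons behind the first two bullets above: paired baseline versus post-pilot scores within each quality domain, and gains on scales selected as improvement targets versus gains on other scales. All file and column names are hypothetical; this is not the analysis code used for the pilot report.

    # Minimal sketch of the pre/post comparisons summarized above. Assumes a
    # hypothetical file with one row per site-by-scale combination and columns:
    # site_id, scale, baseline, post_pilot, targeted (True if the scale was an
    # intentionally selected improvement area). Illustrative only.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("pbc_qis_scores.csv")  # hypothetical file name
    df["gain"] = df["post_pilot"] - df["baseline"]

    # Paired comparison of baseline vs. post-pilot scores within each domain.
    for scale, grp in df.groupby("scale"):
        t, p = stats.ttest_rel(grp["post_pilot"], grp["baseline"])
        print(f"{scale}: mean gain = {grp['gain'].mean():.2f}, t = {t:.2f}, p = {p:.3f}")

    # Compare gains on intentionally targeted scales with gains on other
    # scales, echoing the improvement-planning finding.
    print(df.groupby("targeted")["gain"].mean())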

Several secondary questions posed by stakeholders or relevant to policy were also explored. These secondary findings must be treated with caution since they are drawn from a small sample and, in some cases, less than perfect data sources. Secondary findings include:

  • The low-stakes approach to accountability within the QIS model appears to have increased provider buy-in. Review of secondary documents and quantitative data suggests that the QIS emphasis on partnership rather than external evaluation achieved buy-in from pilot-group providers for the self-assessment and improvement planning process.
  • The self-assessment and improvement planning sequence was associated with change in quality scores. Programs that participated in the self-assessment process were more likely than those that did not to experience improvement in their quality scores.
  • Structural characteristics such as organization type, licensing status, supervisor education and experience levels were not strongly related to point-of-service quality. This suggests that the variables most often manipulated by reform initiatives are, at best, weak drivers of setting quality and thus less-than-ideal policy targets. Put another way, these several program “credentials”, while reasonably easy to measure, were poor proxies for quality.

Quality in the Out-of-School Time Sector: Insights from the Youth PQA Validation Study

From the Introduction:

Over the last decade the High/Scope Educational Research Foundation has developed and validated an observational assessment instrument for out-of-school time (OST) programs, the Youth Program Quality Assessment, and several methodologies for its use. This experience has been instrumental in shaping our ideas about what program quality is, how it works, and how OST organizations can consistently produce it. There is much discussion in the field of youth development about the nature and effects of program quality—even, arguably, a rough consensus on the program practices and elements that define quality in youth development settings. However, there is less guidance available regarding the relative importance of specific quality practices, how to know if a program is producing them well enough, and, perhaps most importantly, how these elements of quality can be intentionally applied to improve OST settings. This article attempts to join what we have learned about program quality in OST programs—what counts and how best to see it—to a framework that describes organizational structure and change. We hope that this effort will inform setting-level intervention, improvement, and accountability work in the OST field.

After collecting hundreds of structured observations in a wide variety of youth work settings, we frame the issue like this: a high-quality program provides youth with access to key experiences that advance adaptive, developmental, and learning outcomes. However, OST organizations frequently miss opportunities to provide these key experiences for the youth who attend them. This underperformance is systemic because the missed opportunities stem from existing structures, practices, and policies across the OST sector. The areas of underperformance can be identified, described, and assessed through two setting-level constructs: quality at the point of service (POS quality) and quality of the professional learning community (PLC quality). The POS occurs where youth, staff, and resources come together, and POS quality involves both (1) the delivery of key developmental experiences and (2) the level of access participating youth have to these experiences. In high-quality programs, the PLC exists primarily to build and sustain the POS. PLC quality, as we construe it, is primarily focused on (1) the role of supervisors as human resource managers and (2) the creation of knowledge management systems that facilitate the translation of program data and information into plans for action related to POS quality. Our ideas about the elements of quality that make up the POS and PLC constructs are not new. What is important is how these setting-level constructs allow us to see quality more clearly and in ways that are linked to structure and change dynamics in OST organizations.

In [this paper] we examine in turn, a diagram about how the PLC and POS settings occur in organizations, a theory of dynamics that influence these settings (make quality higher or lower), and review the contents and psychometric characteristics of our primary POS quality measure, the Youth Program Quality Assessment (Youth PQA). With these pieces in hand, we then move to the task of defining the empirical context of POS and PLC quality across a wide range of OST settings.