Moving the Needle on “Moving the Needle”

Summary

This paper introduces the nomenclature of performance-based accountability systems (PBAS) to the expanded learning field, provides a policy case study of a countywide system in southern Florida, and uses data from that system to explore the issue of quality thresholds. We present an expanded design standard to guide the development and improvement of PBAS policies and further develop a theory of lower-stakes accountability to guide effective use of incentives of various types. Findings suggest that (1) the PBAS framework defines critical concepts and improves our ability to describe existing quality improvement systems, (2) the Youth Program Quality Assessment (Youth PQA) can be used to produce a program rating of sufficient reliability for use in a PBAS, and (3) the Palm Beach County PBAS design is an exemplar for expanded learning policies.

General recommendations for PBAS designs include:

  • PBAS designs should differentiate roles and link performance measures to incentives targeted at specific management and service delivery roles.
  • PBAS designs should include program ratings for multiple service domains linked to a mix of higher- and lower-stakes incentives.
  • PBAS designs should emphasize participants’ understanding of performance levels and sense of fairness while evolving toward higher-stakes incentives over time.

Detailed recommendations for Weikart Center clients using the Youth Program Quality Intervention and related Program Quality Assessments as the basis for an expanded learning PBAS design include:

  • Recommendations for best practice in each of the seven elements of the PBAS design standard.
  • Detailed description of a composition map for program ratings and performance levels for nine commonly used measures in expanded learning PBAS.
  • A PBAS design exemplar based on the Palm Beach County case, describing specific combinations of four types of incentives (financial, customer review, supervisory review, access to data) with two types of performance levels (high and low) and nine program ratings to achieve an optimal, lower-stakes PBAS design with higher-stakes elements.

Moving the needle on moving the needle: Next-stage technical guidance for performance-based accountability systems in the expanded learning field, with a focus on performance levels for the quality of instructional services


Citation: Smith, C., Akiva, T., McGovern, G. and Peck, S.C. (2014), Afterschool quality. New Directions for Youth Development, 2014: 31-44. https://doi.org/10.1002/yd.20111

Building citywide systems for quality: Guide and case studies for afterschool leaders

How-to guide for cities, drawing on experience from implementation of QIS in 40+ cities and states.

Citation: Yohalem, N., Devaney, E., Smith, C., & Wilson-Ahlstrom, A. (2012). Building Citywide Systems for Quality: A Guide and Case Studies for Afterschool Leaders.

Continuous quality improvement in afterschool settings: Impact findings from the Youth Program Quality Intervention study

Multi-site randomized controlled trial that identifies a substantively large and statistically significant cross-level cascade of QIS effects from network to organization to point-of-service instruction.

Citation: Akiva, T., Sugar, S.A., Smith, C., Pearson, L.M., Peck, S.C., Denault, A., & Blazevski, J. (2012). Continuous Quality Improvement in Afterschool Settings: Impact findings from the Youth Program Quality Intervention Study.

Understanding the “how” of effective quality improvement: Lessons from the Rhode Island Program Quality Intervention

Case study of scaled QIS implementation in Rhode Island with focus on manager skills and implementation at the organization level.

Citation: Devaney, E., Smith, C., & Wong, K.K. (2012). Understanding the “How” of Quality Improvement: Lessons from the Rhode Island Program Quality Intervention. Afterschool Matters.

Final report on the Palm Beach quality improvement system pilot: Model implementation and program quality improvement in 38 after-school programs

Mixed methods study of QIS implementation and effects in afterschool programs in Palm Beach County, FL.

Citation: Smith, C., Akiva, T., Blazevski, J., & Pelle, L. (2008). Final report on the Palm Beach Quality Improvement System pilot: Model implementation and program quality improvement in 38 after-school programs. Forum for Youth Investment, David P. Weikart Center for Youth Program Quality.

Palm Beach Quality Improvement System Pilot: Final Report

This report on the Palm Beach County Quality Improvement System (QIS) pilot provides evaluative findings from a four-year effort to imagine and implement a powerful quality accountability and improvement policy in a countywide network of after-school programs. The Palm Beach QIS is an assessment-driven, multi-level intervention designed to raise quality in after-school programs, and thereby raise the level of access to key developmental and learning experiences for the youth who attend. At its core, the QIS asks providers to identify and address strengths and areas for improvement based on use of the Palm Beach County Program Quality Assessment (PBC-PQA), a diagnostic and prescriptive quality assessment tool, and then to develop and enact quality improvement plans. Throughout this process, training and technical assistance are provided by several local and national intermediary organizations.

We present baseline and post-pilot quality ratings for 38 after-school programs that volunteered to participate in the Palm Beach QIS pilot over a two-year cycle. These data are the routine output of the QIS and are designed to support evaluative decisions by program staff and regional decision-makers. In addition to the typical QIS output, we also provide as much detail as possible about the depth of participation in the various elements of the improvement initiative and offer a few opinions about what worked.

Primary findings include:

  • Quality changed at both the point-of-service and management levels. During the QIS, quality scores changed substantially at both levels, suggesting that the delivery of key developmental and learning experiences to children and youth increased between the baseline and post-pilot rounds of data collection.
    • Point-of-service quality increased most substantially in areas related to environmental supports for learning and peer interaction, but positive and statistically significant gains were evidenced in all assessed domains of quality.
    • The incidence of organizational best practices and policies increased in all assessed management-level domains, especially staff expectations, family connections and organizational logistics.
  • Planning strategies that targeted specific improvement areas were effective. Pilot sites registered larger quality gains on point-of-service metrics that were aligned with intentionally selected areas for improvement, indicating that the quality improvement planning process effectively channels improvement energies.
  • Site managers and front line staff participated in core elements of the QIS at high rates. Relative to other samples, participation by front line staff was especially high, suggesting that the core tools and practices of the QIS are reasonably easy for site managers to introduce into their organizations.
  • The core tools and practices of the QIS were adopted at high rates. Thirty-five of 38 sites (92%) completed the self-assessment process and 28 sites (74%) completed all of the steps necessary to submit a quality improvement plan.

Several secondary questions posed by stakeholders or relevant to policy were also explored. These secondary findings must be treated with caution, since they are drawn from a small sample and, in some cases, from less-than-perfect data sources. Secondary findings include:

  • The low-stakes approach to accountability within the QIS model appears to have increased provider buy-in. Review of secondary documents and quantitative data suggests that the QIS emphasis on partnership, rather than external evaluation, achieved buy-in from pilot group providers for the self-assessment and improvement planning process.
  • The self-assessment and improvement planning sequence was associated with change in quality scores. Programs that participated in the self-assessment process were more likely than those that did not to experience improvement in their quality scores.
  • Structural characteristics such as organization type, licensing status, and supervisor education and experience levels were not strongly related to point-of-service quality. This suggests that the variables most often manipulated by reform initiatives are, at best, weak drivers of setting quality and thus less-than-ideal policy targets. Put another way, these program “credentials,” while reasonably easy to measure, were poor proxies for quality.

Quality accountability: Improving fidelity of broad developmentally focused interventions

Design theory and formative evidence to define (1) the system/network level within which organization and point-of-service settings are materially nested and (2) the lower-stakes QIS design that networks and organizations should implement for improved performance.

Citation: Smith, C.M., & Akiva, T. (2008). Quality Accountability: Improving Fidelity of Broad Developmentally Focused Interventions.

Quality in the Out-of-School Time Sector: Insights from the Youth PQA Validation Study

From the Introduction:

Over the last decade the High/Scope Educational Research Foundation has developed and validated an observational assessment instrument for out-of-school time (OST) programs, the Youth Program Quality Assessment, and several methodologies for its use. This experience has been instrumental in shaping our ideas about what program quality is, how it works, and how OST organizations can consistently produce it. There is much discussion in the field of youth development about the nature and effects of program quality—even arguably a rough consensus on program practices and elements that define quality in youth development settings. However, there is less guidance available regarding the relative importance of specific quality practices, how to know if a program is producing them well enough, and perhaps most importantly, how these elements of quality can be intentionally applied to improve OST settings. This article attempts to join what we have learned about program quality in OST programs—what counts and how best to see it—to a framework that describes organizational structure and change. We hope that this effort will inform setting-level intervention, improvement, and accountability work in the OST field.

After collecting hundreds of structured observations in a wide variety of youth work settings, we frame the issue like this: a high quality program provides youth with access to key experiences that advance adaptive, developmental and learning outcomes. However, OST organizations frequently miss opportunities to provide these key experiences for the youth who attend them. This is an area of systemic underperformance because these missed opportunities occur due to existing structures, practices and policies across the OST sector. The areas of underperformance can be identified, described and assessed through two setting-level constructs: quality at the point of service (POS quality) and quality of the professional learning community (PLC quality). POS occurs where youth, staff, and resources come together, and POS quality involves both (1) the delivery of key developmental experiences and (2) the level of access participating youth have to these experiences. In high quality programs, the PLC exists primarily to build and sustain the POS. PLC quality, as we construe it, is primarily focused on (1) the role of supervisors as human resource managers and (2) the creation of knowledge management systems that facilitate the translation of program data/information into plans for action related to POS quality. Our ideas about the elements of quality that make up the POS and PLC constructs are not new. What is important is how these setting-level constructs allow us to see quality more clearly and in ways that are linked to structure and change dynamics in OST organizations.

In [this paper] we examine, in turn, a diagram of how the PLC and POS settings occur in organizations and a theory of the dynamics that influence these settings (making quality higher or lower), and we review the contents and psychometric characteristics of our primary POS quality measure, the Youth Program Quality Assessment (Youth PQA). With these pieces in hand, we then move to the task of defining the empirical context of POS and PLC quality across a wide range of OST settings.