Moving the Needle on “Moving the Needle”

Summary

This paper introduces the nomenclature of performance-based accountability systems (PBAS) to the expanded learning field, provides a policy case study of a countywide system in southern Florida, and uses data from that system to explore the issue of quality thresholds. We present an expanded design standard to guide development and improvement of PBAS policies and further develop a theory of lower-stakes accountability to guide effective use of incentives of various types. Findings suggest that (1) the PBAS framework defines critical concepts and improves our ability to describe existing quality improvement systems, (2) the Youth Program Quality Assessment (Youth PQA) can be used to produce a program rating of sufficient reliability for use in a PBAS, and (3) the Palm Beach County PBAS design is an exemplar for expanded learning policies.

General recommendations for PBAS designs include:

  • PBAS design should differentiate roles and link performance measures to incentives targeted at specific management and service delivery roles.
  • PBAS designs should include program ratings for multiple service domains linked to a mix of higher- and lower-stakes incentives.
  • PBAS designs should emphasize participants’ understanding of performance levels and sense of fairness while evolving toward higher-stakes incentives over time.

Detailed recommendations for Weikart Center clients using the Youth Program Quality Intervention and related Program Quality Assessments as the basis for an expanded learning PBAS design include:

  • Recommendations for best practice in each of the seven elements of the PBAS design standard.
  • Detailed description of a composition map for program ratings and performance levels for nine commonly used measures in expanded learning PBAS.
  • A PBAS design exemplar based on the Palm Beach County case, describing specific combinations of four types of incentives (financial, customer review, supervisory review, access to data) with two types of performance levels (high and low) and nine program ratings to achieve an optimal, lower-stakes PBAS design with higher-stakes elements.

Measuring Youth Skills in Expanded Learning Systems: Case Study for Reliability and Validity of YDEKC Skill Measures and Technical Guidance for Local Evaluators

Weikart Center Expanded Learning Initiative, Technical Working Paper #4 for the Ready by 21 Project at the Forum for Youth Investment

Summary

  • YDEKC has made great progress toward developing skill measures for expanded learning service providers that serve the multiple purposes of community positioning, performance improvement, and proof of program effectiveness. Already, YDEKC’s efforts have advanced the field toward the most important questions: What are the important skills of interest for the expanded learning field? How do expanded learning settings cause change in these skills?
  • The current set of YDEKC measures (Table 1) is valuable for positioning in relation to community goals because these measures state the intentions of YDEKC providers. The measures use scales that are reliable (in the sense of internal consistency) but have weak evidence of construct validity because many of the scales and items are highly correlated.
  • An improved set of skill measures (Table 8) can be extracted from the YDEKC skill measures, with stronger evidence of reliability and construct validity and additional evidence of convergent validity. This structure was replicated in important subgroups in the YDEKC sample, including middle school youth, high school youth, and at-risk youth.
  • Additional evidence for convergent validity includes:
    • External measures of program quality are positively associated with youth reports of the program fit for skill building.
    • Youth reports of the program fit for skill building are positively associated with most of the other youth skill measures.
    • Measures related to managing academic work are positively associated with youth reports on school success measures, including grades and attendance in the past month.
  • YDEKC data can be used to create multivariate skill profiles that better reflect the integrated nature of skill learning and demonstration. These profiles indicate that a subgroup of youth in lower skill profiles can be identified and that these youth are spread across nearly all YDEKC partner organizations.
  • Due to within-program heterogeneity of skills, program-level averages should not be used. At the same time, all measures considered here show substantial negative skew or ceiling effects, which limits their usefulness in multiple-time-point designs.
  • We recommend a three-step method that addresses the integrated nature of skill learning as well as the use of youth skill measures that have lower construct validity and ceiling effects: (a) identify dimensionality in the data to best reflect the independent components of an individual’s integrated skill set, (b) use pattern-centered methods to identify distinct profiles or subgroups of individuals defined by similar skill sets, and (c) collect data at multiple time points for youth in the lower skill profiles at baseline (a brief code sketch follows this list).
  • We carried out a similar set of analyses using data from the YDEKC school survey, finding substantial positive evidence for the reliability and construct validity of these measures (see Appendix D).
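The recommended method can be sketched briefly in code. The following is a minimal illustration, not YDEKC’s actual analysis, of how steps (a) and (b) might be implemented in Python with scikit-learn; the file name, the “skill_” column prefix, and the numbers of components and profiles are assumptions made for the example.

```python
# A minimal sketch of steps (a) and (b) of the recommended method: reduce the
# skill items to a small number of dimensions, then use a pattern-centered
# (cluster) analysis to group youth into skill profiles. The file name, the
# "skill_" column prefix, and the numbers of components and profiles are
# illustrative assumptions, not YDEKC specifications.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

survey = pd.read_csv("ydekc_youth_survey.csv")   # hypothetical data file
skill_items = survey.filter(like="skill_")       # youth skill item columns

# (a) Identify dimensionality: standardize the items and extract components.
component_scores = PCA(n_components=4).fit_transform(
    StandardScaler().fit_transform(skill_items)
)

# (b) Pattern-centered step: cluster youth with similar component scores
# into profiles.
survey["skill_profile"] = KMeans(
    n_clusters=5, n_init=10, random_state=0
).fit_predict(component_scores)

# (c) Youth in the lowest-scoring profile(s) at baseline would then be
# flagged for data collection at later time points.
print(survey["skill_profile"].value_counts())
```

In practice, the number of dimensions and the number of profiles would be chosen from the data (for example, with scree plots or model fit indices) rather than fixed in advance.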

Building Citywide Systems for Quality: A Guide and Case Studies for Afterschool Leaders

Summary

High-quality programming is essential in order for afterschool efforts to generate positive effects for youth. This guide can help those working to create better, more coordinated afterschool programming get started building a quality improvement system (QIS), or further develop existing efforts. A quality improvement system is an intentional effort to raise the quality of afterschool programming in an ongoing, organized fashion.

Components of an Effective Quality Improvement System 

Shared definition of quality: There should be general agreement on what constitutes a high-quality program.

Lead organization: Not having a clear leader can cause confusion. Lead organizations can be stand-alone technical assistance organizations, intermediaries, city agencies, funding entities, or policy/advocacy organizations.

Engaged stakeholders: A QIS is more likely to be effective, sustainable, and scalable if a defined group of organizations supports it.

Continuous improvement model: Effective models typically include a standard for high-quality performance, an assessment tool, and aligned improvement supports such as planning, coaching, and training.

Information system(s): Quality improvement systems generate data. These data can include administrative audits, tracking of QIS participation and engagement, or data about program attendance and/or child outcomes. The QIS should capture and store such information.

Guidelines and incentives for participation: Guidelines and incentives bring coherence and relevance to the system.

Adequate resources: Funding is necessary for sustainability. 

System-building Stages and Tasks

The guide describes a series of tasks organized into three broad stages.

  1. Plan and Engage. Specific tasks include assessing readiness, forming a work group, making the case, engaging stakeholders, identifying a lead organization, defining quality, clarifying purpose, considering information needs, and determining costs and potential resources.
  2. Design and Build. This stage takes the process from the conceptual to the practical. Specific tasks include designing the continuous improvement model that programs will experience, developing system-level supports for the model, recruiting pilot sites, and piloting the continuous improvement cycle.
  3. Adjust and Sustain. Specific tasks include refining the continuous improvement model and system supports, building capacity of the lead organization, engaging new programs and sectors, evaluating, and embedding and sustaining the system.

In addition to a QIS Capacity Self-Assessment Tool, the guide includes case studies of six communities’ efforts to build quality improvement systems.

Key Takeaways

  • Developing a quality improvement system for afterschool and youth development programs in a community is important, complex work.
  • Although quality improvement systems vary, mature, effective systems share some common components and characteristics. These include a shared definition of quality, engaged stakeholders, and adequate resources.
  • There are three stages for developing a quality improvement system: Plan and engage, design and build, and adjust and sustain. 

The STEM Supplement to the Youth Program Quality Assessment

Introduction

As part of the Afterzone Summer Scholars program, partner content providers delivered curricula from the fields of environmental science, technology, engineering, and mathematics (“STEM”) at 10 sites (one partner delivered the same curriculum at two sites). The offerings were organized at 10 school-based Afterzone sites, and each offering included field work in the local Providence region. Across the 10 sites, STEM curricula were delivered to a total of approximately 250 middle school students (about 25 students per Afterzone section).

In order to evaluate the Afterzone Summer Scholars model and collect information for future improvement, PASA (a) hired an external evaluator for the project; (b) committed to providing continuous improvement supports to participating program managers and content providers (quality assessment and coaching); and (c) formed an evaluation advisory board to monitor the development and implementation of the external evaluation. In addition, PASA contracted with the David P. Weikart Center for Youth Program Quality (Weikart Center) at the Forum for Youth Investment to develop an observation-based measure of instructional practices to support continuous improvement during STEM programming. This report describes the development of the STEM supplement to the Youth Program Quality Assessment (Youth PQA; HighScope, 2005) and preliminary reliability and validity evidence based on data collected during the Afterzone Summer Scholars program.

Continuous Quality Improvement in Afterschool Settings: Impact Findings from the Youth Program Quality Intervention Study

Abstract

Background: Out-of-school time programs can have positive effects on young people’s development; however, programs do not always produce such effects. The quality of instructional practices is logically a key factor, but quality improvement interventions must be understood within a multi-level framework, including policy, organization, and point of service, if they are to be both effective and scalable.

Purpose: To evaluate the effectiveness of the Youth Program Quality Intervention (YPQI), a data-driven continuous improvement model for afterschool systems. Research questions include:

  • Does the YPQI increase managers’ focus on instruction and the use of continuous improvement practices by site-based teams?
  • Does the YPQI improve the quality of afterschool instruction?
  • Does the YPQI increase staff tenure?
  • Can the YPQI be taken to scale across programs that vary widely in structure, purposes, and funding, using resources available to public agencies and community-based organizations?
  • Will afterschool organizations implement the YPQI under lower-stakes conditions, where compliance with the model is focused on the improvement process rather than attainment of pre-determined quality ratings?

Participants: Eighty-seven afterschool sites in five diverse afterschool networks participated in the study. Each site employed the equivalent of one full-time program manager and between two and ten direct staff; had an average annual enrollment of 216 youth; and had an average daily attendance of 87 youth.

Research Design: This is a cluster randomized trial. Within each of the five networks, between 17 and 21 sites were randomly assigned to an intervention (N=43) or control group (N=44). Survey data were collected from managers, staff, and youth in all sites at baseline prior to randomization (spring 2006), at the end of the implementation year of the study (spring 2007) and again at the end of the follow-up year (spring 2008). External observers rated instructional practices at baseline and at the end of the implementation year. Implementation data were collected from both intervention and control groups. Hierarchical linear models were used to produce impact estimates.
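As a rough illustration of the analytic approach, the sketch below shows how a two-level impact model of this kind might be fit in Python with statsmodels. It is not the study’s analysis code; the file name and variable names (outcome_post, treat, outcome_base, network, site_id) are hypothetical, and a fuller specification would also account for staff and youth nested within sites.

```python
# A minimal sketch (not the study's analysis code) of a two-level impact
# model: random intercepts for sites, fixed effects for networks, with the
# treatment indicator "treat" coded 1 for intervention sites and 0 for
# control sites. The file and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ypqi_staff_outcomes.csv")  # hypothetical analysis file

# Follow-up outcome regressed on treatment status, the baseline score, and
# network dummies; random intercepts are grouped by site.
model = smf.mixedlm(
    "outcome_post ~ treat + outcome_base + C(network)",
    data=df,
    groups=df["site_id"],
)
result = model.fit()
print(result.summary())

# A rough standardized effect size: the treatment coefficient divided by the
# baseline standard deviation of the outcome.
effect_size = result.params["treat"] / df["outcome_base"].std()
print(f"Approximate effect size: {effect_size:.2f}")
```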

Findings: The impacts of the YPQI on the central outcome variables were positive and statistically significant. The YPQI produced gains in continuous improvement practices with effect sizes of .98 for managers and .52 for staff. The YPQI improved the quality of staff instructional practices, with an effect size of .55. Higher implementation of continuous improvement practices was associated with higher levels of instructional quality, with effects nearly three times greater than the overall experimental impact. Level of implementation was sustained in intervention group sites in the follow-up year.

Conclusions: This study demonstrates that a sequence of continuous improvement practices implemented by a site-based team – standardized assessment of instruction, planning for improvement, coaching from a site manager, and training for specific instructional methods – improves the quality of instruction available to children and youth. The YPQI produces a cascade of positive effects beginning with the provision of standards, training, and technical assistance, flowing through managers and staff implementation of continuous improvement practices, and resulting in effects on staff instructional practices. Evidence also suggests that participation in the YPQI may increase the length of staff tenure and that YPQI impacts are both sustainable and scalable.

Understanding the “How” of Quality Improvement: Lessons from the Rhode Island Program Quality Intervention

Introduction

Over the past 10 years, afterschool and youth development programming has moved from providing childcare for working parents to being an integral component of the learning day, supporting the academic, social, and emotional development of young people (C. S. Mott Foundation, 2007; Durlak & Weissberg, 2007). An important part of that transition has been a growing emphasis on improving program quality. Many communities around the country have begun to create site-level continuous improvement models (Wilson-Ahlstrom & Yohalem, 2008; Yohalem & Wilson-Ahlstrom, 2009). Aligned performance measures help program administrators evaluate the quality of young people’s experience and give them a framework for improvement.

Quality at the Point of Service: Profiles of Practice in After-School Settings

Abstract

A unique observational data set was used to explore quality at the point of service in after-school programs. Staff practices in after-school settings were represented on a series of unidimensional scales closely indexed to staff behavior. In order to account for heterogeneity of staff performances, pattern-centered methods were used to construct profiles of common staff practices. Results revealed six pedagogy profiles that were classified in terms of three broad types of performances delivered by after-school staff: (1) positive youth development, (2) staff-centered, and (3) low-quality. Staff membership in these profiles was not related to youth-staff ratio. However, results revealed significant differences between the profiles on the content of the offering and the age of youth in the setting.

Linking after-school instructional practices to youth engagement: A pattern-centered approach

Youth participation in after-school settings has been linked to numerous positive outcomes, but this link is inconsistent across studies. Two necessary ingredients for this link are proposed: appropriate instructional practices and youth engagement. Both are profiled using cluster analysis in a dataset including observations of staff instructional practices in 151 youth program offerings and 1176 surveys from youth attending these offerings. Instructional practice profiles suggest patterns corresponding to positive youth development (PYD), staff-centered (SC), and low quality. Youth engagement profiles range from low to medium to high, and further vary across perceived learning and voice. Cross-tabulations reveal strong positive links between PYD and high engagement and between low-quality and low engagement; and strong negative links between PYD and low engagement and between low-quality and high engagement.
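The cross-tabulation step lends itself to a short illustration. The sketch below assumes that offering-level profile assignments from the two cluster analyses have already been merged into a single file; the file and column names are hypothetical.

```python
# A minimal sketch of the cross-tabulation step, assuming offering-level
# profile assignments from the two cluster analyses have already been merged
# into one file. The file and column names are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

linked = pd.read_csv("offering_profiles.csv")
table = pd.crosstab(linked["instruction_profile"], linked["engagement_profile"])

chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.1f}, p = {p:.3f}")

# Standardized residuals, (observed - expected) / sqrt(expected), show which
# profile pairings occur more or less often than chance would predict.
residuals = (table - expected) / (expected ** 0.5)
print(residuals.round(1))
```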

Quality and Accountability in the Out-of-School Time Sector

In the fragmented out-of-school-time sector, defining and measuring quality in terms of staff behaviors at the point of service provides a common framework that can reduce obstacles to cross-sector and cross-program performance improvement efforts and streamline adoption of data-driven accountability policies. This chapter views the point of service, that is, the microsettings where adults and youth purposefully interact, as the critical unit of study because it is ubiquitous across out-of-school-time programs and because it is the place where key developmental experiences are intentionally delivered. However, because point-of-service behaviors are embedded within multilevel systems where managers set priorities and institutional incentives constrain innovation, effective quality interventions must contend with and attend to this broader policy environment. The Youth Program Quality Assessment (Youth PQA) is one of an emerging class of observational assessment tools that measure staff performances at the point of service and, depending on the methodology of use, can help create the conditions that managers and youth workers need to accept, adopt, and sustain accountability initiatives. Observational assessment tools can be flexible enough to be used for program self-assessment (appropriate for low-stakes, non-normative learning purposes), external assessment (appropriate for higher stakes, normative comparisons, and performance accountability), and various hybrids that combine elements from each. We provide advice for decision makers regarding how to most effectively use the Youth PQA and similar measurement tools, depending on the clear articulation of the purposes for which accountability and improvement policies are enacted and on the effective sequencing of implementation.

Linking Management Practices to Instructional Performances in OST Organizations
