Bringing in the Community: Partnerships and Quality Assurance

As a matter of policy, 21st Century Community Learning Centers rely heavily on community organizations to provide a variety of instructional programs. In this way, 21st Century sites tap the depth and breadth of knowledge available in their communities to provide non-traditional learning experiences that can better meet young participants’ need for engagement and relevance than can a simple extension of the school-day routine. However, the inclusion of multiple partners alongside school-based site staff at any given 21st Century site means that the quality of instruction can be extremely uneven. How do school districts that receive 21st Century grants, and the coordinators of each of their sites, ensure high quality across a wide variety of offerings led by staff from many different organizations? To begin to answer this question, we first explored the extent to which 21st Century sites in Michigan are actually partnering with community organizations. We then researched selected high-quality sites to arrive at an instructional partnerships model of quality assurance practices whose wide adoption could have a significant impact on 21st Century policy and on the youth development field as a whole.

Quality Accountability: Improving Fidelity of Broad Developmentally Focused Interventions

Abstract

This chapter describes the Youth Program Quality Intervention (YPQI), a setting-level intervention model designed to raise quality in out-of-school time programs. The YPQI takes managers and staff from a network of youth programs through a process of identifying and addressing strengths and areas for improvement, using a standardized assessment tool. This tool operationalizes a definition of program quality based on providing youth access to key developmental experiences. Descriptive findings about the quality of youth programs are presented. A three-level model of settings, spanning the system accountability, management, and point-of-service levels of youth programs, is also presented. The chapter discusses accountability structures ranging from low stakes to higher stakes and presents a generic model for setting change.

Palm Beach Quality Improvement System Pilot: Final Report

This report on the Palm Beach County Quality Improvement System (QIS) pilot provides evaluative findings from a four-year effort to design and implement a powerful quality accountability and improvement policy in a countywide network of after-school programs. The Palm Beach QIS is an assessment-driven, multi-level intervention designed to raise quality in after-school programs, and thereby raise the level of access to key developmental and learning experiences for the youth who attend. At its core, the QIS asks providers to identify and address strengths and areas for improvement based on use of the Palm Beach County Program Quality Assessment (PBC-PQA), a diagnostic and prescriptive quality assessment tool, and then to develop and enact quality improvement plans. Throughout this process, training and technical assistance are provided by several local and national intermediary organizations.

We present baseline and post-pilot quality ratings for 38 after-school programs that volunteered to participate in the Palm Beach QIS pilot over a two-year cycle. These data are the routine output of the QIS and are designed to support evaluative decisions by program staff and regional decision-makers. In addition to the typical QIS output, we also provide as much detail as possible about the depth of participation in the various elements of the improvement initiative and offer several observations about what worked.

Primary findings include:

  • Quality improved at both the point-of-service and management levels. During the QIS pilot, quality scores rose substantially at both levels, suggesting that the delivery of key developmental and learning experiences to children and youth increased between the baseline and post-pilot rounds of data collection.
    • Point-of-service quality increased most substantially in areas related to environmental supports for learning and peer interaction, but positive and statistically significant gains were evident in all assessed domains of quality.
    • The incidence of organizational best practices and policies increased in all assessed management-level domains, especially staff expectations, family connections, and organizational logistics.
  • Planning strategies that targeted specific improvement areas were effective. Pilot sites registered larger quality gains on point-of-service metrics that were aligned with intentionally selected areas for improvement, indicating that the quality improvement planning process effectively channels improvement effort.
  • Site managers and front-line staff participated in core elements of the QIS at high rates. Relative to other samples, participation by front-line staff was especially high, suggesting that the core tools and practices of the QIS are reasonably easy for site managers to introduce into their organizations.
  • The core tools and practices of the QIS were adopted at high rates. Thirty-five of 38 sites (92%) completed the self-assessment process, and 28 sites (74%) completed all of the steps necessary to submit a quality improvement plan (see the arithmetic sketch after this list).
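To make the arithmetic behind the adoption rates concrete, and to illustrate the kind of paired baseline-to-post-pilot comparison described in the first bullet, here is a minimal sketch in Python. The scipy dependency and every score in it are assumptions for illustration, not Pilot data.

```python
# A minimal sketch, not the study's analysis: adoption rates plus a
# paired baseline -> post-pilot comparison on hypothetical 1-5 ratings.
from scipy import stats

n_sites = 38
print(f"Self-assessment completed:  {35 / n_sites:.0%}")  # ~92%
print(f"Improvement plan submitted: {28 / n_sites:.0%}")  # ~74%

# Hypothetical point-of-service quality scores for six pilot sites.
baseline = [2.8, 3.1, 3.4, 2.9, 3.6, 3.0]
post     = [3.4, 3.5, 3.9, 3.2, 3.8, 3.6]
gain = sum(b - a for a, b in zip(baseline, post)) / len(baseline)
t, p = stats.ttest_rel(post, baseline)  # paired t-test across sites
print(f"Mean gain: {gain:.2f} (t = {t:.2f}, p = {p:.3f})")
```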

Several secondary questions posed by stakeholders or relevant to policy were also explored. These secondary findings must be treated with caution, since they are drawn from a small sample and, in some cases, less-than-perfect data sources. Secondary findings include:

  • The low-stakes approach to accountability within the QIS model appears to have increased provider buy-in. Review of secondary documents and quantitative data suggests that the QIS emphasis on partnership, rather than external evaluation, achieved buy-in from pilot-group providers for the self-assessment and improvement planning process.
  • The self-assessment and improvement planning sequence was associated with change in quality scores. Programs that participated in the self-assessment process were more likely than those that did not to experience improvement in their quality scores.
  • Structural characteristics such as organization type, licensing status, and supervisor education and experience levels were not strongly related to point-of-service quality. This suggests that the variables most often manipulated by reform initiatives are, at best, weak drivers of setting quality and thus less-than-ideal policy targets. Put another way, these program “credentials,” while reasonably easy to measure, were poor proxies for quality (see the regression sketch after this list).
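The last bullet describes a null result that is easy to picture as a regression of point-of-service quality on structural characteristics. The sketch below shows that kind of check on hypothetical data; the statsmodels dependency and all values are assumptions, and the data are simulated so that the “credentials” are unrelated to quality by design.

```python
# Illustrative only: regress point-of-service quality on structural
# "credentials"; the data are simulated so no real relationship exists.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 38                                   # matches the pilot sample size
licensed = rng.integers(0, 2, n)         # 0/1 licensing status
supervisor_yrs = rng.integers(1, 15, n)  # supervisor experience, in years
quality = rng.normal(3.2, 0.5, n)        # 1-5 POS quality, independent

X = sm.add_constant(np.column_stack([licensed, supervisor_yrs]))
model = sm.OLS(quality, X).fit()
print(model.summary())  # expect small, non-significant coefficients
```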

Quality in the Out-of-School Time Sector: Insights from the Youth PQA Validation Study

From the Introduction:

Over the last decade the High/Scope Educational Research Foundation has developed and validated an observational assessment instrument for out-of-school time (OST) programs, the Youth Program Quality Assessment, along with several methodologies for its use. This experience has been instrumental in shaping our ideas about what program quality is, how it works, and how OST organizations can consistently produce it. There is much discussion in the field of youth development about the nature and effects of program quality, and arguably even a rough consensus on the program practices and elements that define quality in youth development settings. However, there is less guidance available on the relative importance of specific quality practices, on how to know whether a program is producing them well enough, and, perhaps most importantly, on how these elements of quality can be intentionally applied to improve OST settings. This article attempts to join what we have learned about program quality in OST programs, namely what counts and how best to see it, to a framework that describes organizational structure and change. We hope that this effort will inform setting-level intervention, improvement, and accountability work in the OST field.

After collecting hundreds of structured observations in a wide variety of youth work settings, we frame the issue like this: a high-quality program provides youth with access to key experiences that advance adaptive, developmental, and learning outcomes. However, OST organizations frequently miss opportunities to provide these key experiences for the youth who attend them. These missed opportunities amount to systemic underperformance because they stem from existing structures, practices, and policies across the OST sector. The areas of underperformance can be identified, described, and assessed through two setting-level constructs: quality at the point of service (POS quality) and quality of the professional learning community (PLC quality). The POS occurs where youth, staff, and resources come together, and POS quality involves both (1) the delivery of key developmental experiences and (2) the level of access participating youth have to these experiences. In high-quality programs, the PLC exists primarily to build and sustain the POS. PLC quality, as we construe it, is primarily focused on (1) the role of supervisors as human resource managers and (2) the creation of knowledge management systems that facilitate the translation of program data and information into plans for action related to POS quality. Our ideas about the elements of quality that make up the POS and PLC constructs are not new. What is important is how these setting-level constructs allow us to see quality more clearly and in ways that are linked to structure and change dynamics in OST organizations.

In [this paper] we examine, in turn, a diagram of how the PLC and POS settings occur in organizations and a theory of the dynamics that influence these settings (making quality higher or lower), and we review the contents and psychometric characteristics of our primary POS quality measure, the Youth Program Quality Assessment (Youth PQA). With these pieces in hand, we then move to the task of defining the empirical context of POS and PLC quality across a wide range of OST settings.

Findings from the Self-Assessment Pilot in Michigan 21st Century Learning Centers

Overall, 24 sites within 17 grantees participated in the self-assessment pilot study, assembling staff teams to collect data and score the Youth Program Quality Assessment (PQA).

At each site, an average of five staff members spent an average of 13 staff hours completing the self-assessment process.

Whether using an absolute standard or group norms as the benchmark for interpreting data from the Youth PQA Self-Assessment Pilot Study (hereafter, the Pilot Study), quality scores were very positive for participating programs, while also reflecting the tendency of self-assessment scores to be biased toward higher quality levels.

The quality scores followed the same pattern as outside-observer scores in other samples: highest for issues of safety and staff support, and lowest on higher-order practices focused on interaction and engagement.

Youth PQA data collected using the self-assessment method demonstrated promising patterns of both internal consistency and concurrent validity with aligned youth survey responses.
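As a rough illustration of those two checks, the sketch below computes Cronbach's alpha for internal consistency and a Pearson correlation against aligned survey scores for concurrent validity. The numpy and scipy dependencies and all values are assumptions, not Pilot Study data.

```python
# A minimal sketch of internal consistency (Cronbach's alpha) and
# concurrent validity (Pearson r) on hypothetical data.
import numpy as np
from scipy import stats

# Hypothetical: 6 sites x 4 items from one Youth PQA subscale (1-5 ratings).
items = np.array([
    [4, 4, 3, 4],
    [3, 3, 3, 2],
    [5, 4, 5, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
    [3, 3, 4, 3],
])

k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha: {alpha:.2f}")

# Hypothetical aligned youth-survey means for the same six sites.
survey = np.array([3.8, 2.9, 4.6, 2.4, 4.1, 3.2])
r, p = stats.pearsonr(items.mean(axis=1), survey)
print(f"Concurrent validity: r = {r:.2f} (p = {p:.3f})")
```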

Two-thirds or more of sites reported that the observation and scoring process helped the self-assessment team gain greater insight into the operation of their programs, talk in greater depth than usual about program quality, and develop a more concrete understanding of program quality.

Site directors and local evaluators said that the self-assessment process was a source of good conversations about program priorities and how to meet them. In almost all cases, concrete action followed from the self-assessment process.

Site directors and local evaluators demonstrated the ability to adapt the self-assessment method to fit local needs.

Program directors, site coordinators, and local evaluators have used the Youth PQA and statewide Youth PQA data to generate statewide program change models, suggesting that the instrument and data are useful for setting system-level improvement priorities.

Original Validation of the Youth Program Quality Assessment (Youth PQA)

Summary

The Youth Program Quality Assessment (PQA) is an assessment of best practices in afterschool programs, community organizations, schools, summer programs, and other places where youth have fun, work, and learn with adults. The Youth PQA creates understanding and accountability focused on the point of service, where youth and adults come together to coproduce developmental experiences. The ultimate purposes of the Youth PQA are empowering staff to envision optimal programming and building youth motivation to participate and engage. As an approach to assessment at the systems level, the Youth PQA links accountability to equity by focusing on access to high-quality learning environments for all youth who enroll. As a research tool, the Youth PQA improves measurement of instructional process in places where young people learn.

The Youth PQA consists of seven sections, or subscales, each bearing on one dimension of program quality critical for positive youth development: safe environment, supportive environment, interaction, engagement, youth-centered policies and practices, high expectations, and access. Administration of the Youth PQA employs direct observation of youth program activities for the first four sections and a structured interview with a program director for the remaining three. The instrument can be used by outside observers, producing the most precise data, or as a program self-assessment aimed at generating rich conversations among staff.
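For readers who script against Youth PQA data, one hypothetical way to encode the structure just described is a simple mapping from subscale to administration mode; the names below are illustrative, not an official schema.

```python
# Illustrative encoding of the seven Youth PQA subscales and how each
# is administered, per the description above. Not an official schema.
YOUTH_PQA_SUBSCALES = {
    "Safe environment":                      "observation",
    "Supportive environment":                "observation",
    "Interaction":                           "observation",
    "Engagement":                            "observation",
    "Youth-centered policies and practices": "interview",
    "High expectations":                     "interview",
    "Access":                                "interview",
}

observed = [s for s, mode in YOUTH_PQA_SUBSCALES.items()
            if mode == "observation"]
print("Sections requiring direct observation:", ", ".join(observed))
```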

The Youth PQA Validation Study was a 4-year effort to develop and validate a tool to assess program quality in youth settings. Through the process of instrument development, dozens of expert practitioners and researchers were brought together to provide input on the tool. In total, the validation study encompassed 59 organizations in Michigan and more than 300 Youth PQA observations and interviews conducted in programs serving 1,635 youth. Most of these youth programs were afterschool programs that met weekly or daily over several months. The average age of youth in the sample was 14 years, and more than half were attending programs in an urban context.

The Youth PQA Validation Study employed multiple, independent data sources, including interviews with program administrators, observations in youth work settings, surveys of program youth, expert opinions, and verified reports of staff training. The study’s primary concurrent measure of program quality was the Youth Survey from Youth Development Strategies, Inc. All Youth Survey data were independently collected and prepared for analysis by Youth Development Strategies, Inc.

In general, findings from the study demonstrate that the Youth PQA is a valid, reliable, and highly usable measure of youth program quality. Principal findings include:

  1. The Youth PQA measurement rubrics are well calibrated for use in a wide range of youth-serving organizations. Average scores fall near the center of the scale and are spread across all five scale points.
  2. Pairs of data collectors were able to achieve acceptable levels of inter-rater reliability on most of the Youth PQA’s measurement constructs (see the sketch after this list).
  3. The Youth PQA subscales are reliable measures of several dimensions of quality. Key subscales demonstrated acceptable levels of internal consistency in two samples.
  4. The Youth PQA can be used to assess specific components of programs and is not just a single global quality rating. In repeated factor analyses on two waves of Youth PQA data, the subscales were validated as separate, distinguishable constructs.
  5. Youth PQA quality ratings reflect youth reports about their own experiences in the same program offerings. Youth PQA scores demonstrate concurrent validity through significant positive correlation with aligned scores from the YDSI Youth Survey.
  6. The Youth PQA measures dimensions of quality that are related to positive outcomes for youth, such as youths’ sense of challenge and growth from the program. Youth PQA scores demonstrate predictive validity in multivariate and multilevel models of the data, controlling for youth background variables.
  7. Staff in 21st Century afterschool programs find the instrument to have face validity, to be applicable to their current work, and to be a foundation for purposeful change in their programs.
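As an illustration of the inter-rater check in finding 2, here is a minimal sketch that computes Cohen's kappa on paired ratings; the scikit-learn dependency and the ratings themselves are assumptions, not validation-study data.

```python
# A minimal sketch of inter-rater agreement using Cohen's kappa on
# hypothetical paired ratings from two observers (1-5 rubric scores).
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 5, 5, 1, 3, 5, 1, 3, 5, 3]
rater_b = [3, 5, 3, 1, 3, 5, 1, 5, 5, 3]
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.69 for this toy example
```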