How could OST address climate change?


With the publication of the IPCC report[1], it’s not difficult to conclude that our current political leadership is not going to take us where we need to go, and we can’t wait anymore. The scientists are telling us right now, in clear language, that time is up: Major transformations in our thinking and behavior around energy use must happen now. We’re already on track to exceed the 1.5°C threshold of global warming (above pre-industrial levels) by the early 2030s, and the weather patterns projected to ensue, long before 2040, are going to be catastrophic for everyone on the planet. That’s the very difficult news. The amazingly lucky news, however, is that there is still time to do something. So, what should that something be for OST professionals as OST professionals? What should we be asking our leading agencies and philanthropies to step in and fund quickly?

Building on Strengths

For the children, youth, families, and communities served by OST programs, socio-emotional skills are at the top of my list, including getting up to speed on trauma-informed practices. We are likely to encounter more trauma from climate-related dislocation, intermittent power, failing groundwater systems, and the like.

Just as important: How do we protect children from talk about the ongoing climate catastrophe? How do we nurture their hopefulness rather than sharing our adult fear and unhealthy denial? How do we nurture our own hopefulness as an example? This is a profoundly important task if we are going to do anything at all as a field. The main point of this blog is that it’s our ethical obligation to the children and families we serve to demonstrate hopefulness through our own OST practice – setting examples in the time that we have rather than sticking our heads in the sand. It almost doesn’t matter how we go about addressing it as long as we communicate hope rather than fear while we’re doing it.

Another big one that we already know about is demographic change. We need to understand south-to-north migrations and how to be intentional about multiculturalism in practice and policy. This will be a place to use all of the equity work that so many of us have been doing and thinking about. We need to prepare for a society with a lot of folks from southern US states, Central America, Mexico, and the islands arriving in the next decade. It’s already happening. The good news is that many of us are building our capacities to engage multicultural youth to embrace differences, learn about their own cultures (e.g., intersectionality), and understand inequities.

There is a lot going on in our field in youth civic engagement. This is a small subset of the OST field, but these folks have been doing it for a long time, know how to do it, and have written a lot about it. The trick here is to refocus what we know about how to do youth activism – e.g., community and service learning, voter registration and other issue-driven campaigns, peaceful protest – with the mental model that Greta Thunberg is providing about breaking carbon and consumption cycles. I can easily convince myself that learning how to break carbon and consumption cycles in everyday life is now more important than any time spent on academic testing, for example (or advanced math, or competitive sport, and so on).

 

Filling in STEM Denial

One of the huge opportunities to demonstrate hope is precisely where we are least prepared: all of the new habits of daily life that we’re going to need to learn in a zero-carbon world. We are facing a task like the early twentieth-century policy of agricultural extension, which sought to educate a whole country of beginner farmers about agricultural science and how not to starve in the countryside. But what institutions are filling this void?

Elsewhere we’ve referred to these habits as the applied science of ecological stewardship[2] – again, to push the agricultural extension analogy, think home economics for zero carbon. The new zero-carbon habits are all STEM skills, even though the OST STEM field seems to be in denial. From my reading, these are the STEM issues that are directly and substantively related to reducing carbon use and living well in the future:

First, there is a lot to learn about energy use technologies and energy conservation skills. Dramatically reducing carbon use in the next three years while staying comfortable in everyday life is something we should all be learning about, and there is almost certainly some content already available for use in OST programs. This ranges from simple things like when best to turn the air conditioning on and off to the basics of the new energy technologies and infrastructure that every neighborhood will need.

Another important set of STEM skills is related to food types and sources. It is an uncomfortable fact that two big ways to cut greenhouse gases and carbon use are to stop eating meat and to grow food locally. The arguments about the ecological implications of plant-based diets are clear and should at least be available to everyone. It’s also clear that growing your own local food in any city is likely to happen on a brownfield where the underlying soil is already contaminated. This requires some horticultural design know-how and a little bit of soil science that many urban farmers are learning.

Another big one may surprise you: Almost all old trees (i.e., the largest biomass retaining carbon) have been eliminated, except for those in the urban forests maintained by our city and town governments and in national parks. The carbon-retaining potential of the urban forests is critical, and OST programs could learn a lot about trees (e.g., dendrology) and carbon retention from the several STEM disciplines involved. Few afterschool programs are actually located in non-urban areas (i.e., even in the countryside, OST programs are typically located in a small town). These cities and towns are home to the last remaining 60-year-old-plus trees in the United States and represent an important part of the solution for carbon retention.

Access to clean water is a major challenge of climate change, and procuring clean drinking water is going to involve all of the STEM disciplines – both building mental models for applied stewardship and then acting in those terms. Shouldn’t all OST students learn about the local clean water agenda and learn how to read contaminant levels in water-testing results? Shouldn’t all students learn about the science and technology involved with water filtration at home?

For an OST Social Movement

What about the power of the OST social movement – we professionals as a group? How could OST leaders help us act together as a coordinated profession to use our power? Are there a few obvious choices that we could ask leading agencies, membership organizations, and philanthropies to help us engage the field around?

The first question is: How do we quickly integrate all of this content into our professional conferences and workplaces so that we can figure out how to help the next generation of ecological citizens, scientists, educators, advocates, and policy makers who are inheriting our legacy? A second question follows from the ethical charge to demonstrate hopefulness to children: How should we engage local administrators and governments on carbon-reducing practices for our buildings and program offerings? Finally: How do OST professionals engage in political activism as an interest group to identify and advocate for breaking carbon and brownfield consumption cycles in personal lives and governance?

Although it may be uncomfortable to discuss these issues – the fear for all of us is real – it is our obligation to the children we serve to put them in the best position to deal with the situation, by building our own and their socio-emotional skills and mental models about the new zero-carbon world that’s coming. Note also that all of the issues addressed above represent the major job categories of the future – for those still defining everything we do in terms of the economy. And finally, note that the OST profession is filled with rational people with progressive views; if we chose to, we could likely act as one. If we don’t act now, it will actually be too late. How can we organize ourselves, our young people, and our communities to amplify their voices, needs, and realities? How would we like our leaders at all levels to help?



[1] Intergovernmental Panel on Climate Change (IPCC). (2022). Climate Change 2022: Mitigation of Climate Change – Summary for Policymakers. [https://www.ipcc.ch/report/ar6/wg3/]

[2] Smith, C. (2019). SEMIS coalition for place-based ecological stewardship: Growing a movement, getting ready for growth. [https://www.qturngroup.com/wp-content/uploads/2022/04/2022-04-11_SEMIS_WP-v7.pdf]


 

How the Q-ODM impact model is a more cost-effective form of the quasi-experimental design (QED)


The Quality-Outcomes Design and Methods (Q-ODM) approach to program evaluation increases the use value of all estimates produced as part of an impact analysis. Put simply: We replace the “no-treatment” counterfactual condition (i.e., children who were not exposed to an afterschool program) with low-implementation conditions (e.g., children who were exposed to lower-quality instructional practices in an afterschool program) in order to describe the impact of optimal implementation on child outcomes (e.g., socio-emotional skill change, equity effects).  Said again: The “control group” in our impact model is any quality profile, subgroup configuration, or pathway (e.g., low-quality practices profile) that is contrasted with an optimal “treatment” group (e.g., high-quality practices profile).[1]
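For readers who like to see the arithmetic, here is a minimal sketch of that contrast in Python. This is not the actual Q-ODM tooling; the data, profile labels, and column names are hypothetical, and a real analysis would first derive the quality profiles from observed instructional practices:

```python
import pandas as pd

# Hypothetical data: each row is a child, with an observed quality profile
# (derived from instructional-practice measures) and pre/post skill scores.
df = pd.DataFrame({
    "quality_profile": ["high", "high", "high", "low", "low", "low"],
    "skill_pre":  [2.1, 2.4, 1.9, 2.2, 2.0, 2.3],
    "skill_post": [3.0, 3.1, 2.8, 2.3, 2.1, 2.5],
})

# Skill change for every child -- each estimate refers to a real, observed setting.
df["skill_change"] = df["skill_post"] - df["skill_pre"]

# Mean change within each observed quality profile.
change_by_profile = df.groupby("quality_profile")["skill_change"].mean()

# The Q-ODM-style impact estimate: optimal-implementation change minus
# low-implementation change (both terms describe real groups of children).
impact = change_by_profile["high"] - change_by_profile["low"]
print(change_by_profile)
print(f"Impact of optimal vs. low implementation: {impact:.2f}")
```

The point of the sketch is simply that both quantities being subtracted describe groups of children who actually attended the program.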

The “Analytic Tools” section of White Paper 3 provides an introductory discussion of Q-ODM impact models for student skill and equity outcomes. Also, check out this UK impact evaluation.

Now, let’s talk about three reasons why our approach is a cost-effective choice for CEOs seeking evidence about impact and equity outcomes:

Lots of Reality-Based Estimates that Analogize to Action. Our point about cost effectiveness is this: Every estimate produced in this impact model is useful. When coupled with QTurn measures, Q-ODM impact estimates are interpretable in terms of specific adult and child behaviors and contexts. This means that there is a direct analogy from the meaning encoded in the data to meaningful teacher and student behavior that occurs in the classroom – a direct analogy from data to reality. The data used to identify the lower-quality profile actually identify the lower-quality settings! The amount of skill change that occurs in the high-quality setting actually demonstrates what’s possible in the program; that is, it sets the benchmark for other programs.

An impact estimate implies a subtraction of one magnitude from another. What use is a counterfactual estimate if there is no such thing as a counterfactual condition? Doesn’t that just mean that we are subtracting an imaginary quantity from a real one?

Using Natural Groupings to Address Threats to Valid Inference. It’s not just the usefulness of estimates (consequential validity); we argue this is also a more valid way to rule out primary threats to the inference that the treatment caused an effect. Two points: The children in the low-quality group are more likely to be similar to the kids in the high-quality group for all of the right reasons (i.e., SEL histories) that are missed by most efforts at matching individuals or groups using demographic and education data.

The case that families in one group have more education-relevant resources (e.g., SEL histories) than families in the other group plays out in two ways. When families have unmeasured resources before the child attends, we are talking about selection effects. When families use those unmeasured resources during the program intervention, we are talking about history effects. We argue, and present evidence, that the Q-ODM method better addresses these threats to valid inferences about impact than the pernicious and unethical use of race/ethnicity and social address variables as covariates – pretend “controls” – in linear models.

Capturing Full Information from Small Samples. Our method is designed to detect differences in the ways things go together in the real world, in or around the average expectable environments characterizing human development and socialization (cf. Magnusson, 2003). This in-the-world structure is a constraint on the states that can and cannot occur during development. In the pattern-centered frame, small cell sizes indicate the sensitivity of the approach. Relatively low Ns are not necessarily a problem for the distribution-free statistical tests used in pattern-centered impact analyses.
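To make the small-sample point concrete, here is one hedged illustration using Fisher’s exact test, a distribution-free test that does not rely on large-sample approximations. The counts are invented, and pattern-centered analyses may use other tests (e.g., configural frequency analysis); this is only a sketch:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table of counts:
# rows = quality profile (high, low); cols = child outcome (grew, did not grow)
table = [[9, 3],   # high-quality profile: 9 children grew, 3 did not
         [4, 8]]   # low-quality profile:  4 children grew, 8 did not

# Fisher's exact test makes no asymptotic (large-sample) assumptions,
# so small Ns and small cells are not, by themselves, a problem.
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.3f}")
```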

 

[1] We realize that others would claim that our designs are not QED at all. We delve deeper into the rationales used to disqualify “groups that receive different dosages of a treatment” from being considered “control groups” within the context of experimental design in White Paper 4.

 

Why are Q-ODM’s Pattern-Centered Methods (PCM) More Realistic and Useful for Evaluators?


Pattern-centered theory and methods (PCM) can be used to tell simple and accurate stories about how real persons grow in real school and afterschool classrooms. Stories about the quality and outcomes (i.e., causes and effects) that are modeled using PCM are particularly useful because they can address questions related to “how” programs and classrooms work and “how much” children grow skills.

Most training for education researchers and evaluators is focused on variable-centered methods (VCM), also called linear statistical methods (regression, the analysis of variance, and structural equation modeling) or the general linear model. VCM are powerful in cases where the causes and effects are similar across individuals and classrooms. In cases where that’s not true – which is most school and afterschool classrooms – VCM designs tend to provide information that means practically nothing about the actual people or contexts involved. Some of the basic issues have been summarized nicely by Todd Rose in the following TEDx presentation: https://youtu.be/4eBmyttcfU4 (“The Myth of Average”), but the critique is not new.

To better illustrate the point, let’s talk about three basic assumptions about the person-reality in afterschool classrooms and how PCM applies:

A person’s socio-emotional skills are most accurately represented as a pattern with multiple skills indicated simultaneously. This is not just about more information from more variables, although that is also a fundamental advantage of pattern-centered methods. The neuroperson is also a “multilevel system” – which is a mouthful but, as detailed in White Paper 1: Different parts of mental skill change for different reasons, on different timelines, and cause different types of behavior! This means different amounts and types of cause are involved in changing any mental skill or behavior. How could the one-variable-at-a-time constraints of VCM ever do an adequate job of representing socio-emotional skill? PCM are uniquely fit for sorting out multilevel causal dynamics so that the full meaning encoded in the data can emerge.

Change in socio-emotional skill is always qualitative, from one pattern to a different pattern at a later time point. Given the multilevel nature of socio-emotional skills, the combination of skill parts is likely to differ at different time points and in different settings. The fact that skills turn into different skills as they change has been an Achilles heel for VCM. Check out the “Analytic Tools” section of White Paper 3 to see how PCM can be applied to (a) identify each individual’s unique pattern of skill parts at different points in time and then (b) compare across those qualitatively different patterns to detect stability, growth, or decline for each individual. When coupled with the sensitivity of optimal skill measures (see White Paper 2), PCM are ideal for describing the how (e.g., an individual child’s movement from one pattern to a subsequent pattern) and how much (e.g., how many children grew) of skill change over short time periods, such as a semester or school year.

The same classroom causes different patterns of change for different subgroups of children. An adage from mid-20th century psychology (Kluckhohn and Murray, 1948, p. 35) is a helpful reminder: Any individual can, for different causal variables, be simultaneously like all others, like some others, or like no others. VCM work only in the first case, where every person experiences a very similar type of cause and effect. Case-study and qualitative methods are preferred in the third case, where the causes and effects may apply only to a single person. PCM are uniquely fit for the second case; that is, where different subgroups of children with different socio-emotional histories have qualitatively different types of responses to the same education settings.

In the end, VCM assumptions about the validity of single variables, the quantitative nature of skill change, and the homogeneity of causal dynamics lead to an impoverished view of reality – and likely a lot of inaccurate conclusions about what to do.
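As a rough sketch of the pattern-to-pattern comparison described in the second assumption above, the snippet below (a) assigns each child a profile of skill parts at two time points and (b) cross-tabulates the profiles to show stability or movement. This is a generic pattern-centered workflow, not the specific procedure in White Paper 3; the data, clustering choice, and profile labels are hypothetical:

```python
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical skill-part scores (e.g., schemas, beliefs, awareness) for the
# same four children at two time points.
t1 = pd.DataFrame({"schemas": [1, 3, 1, 3], "beliefs": [2, 3, 1, 2], "awareness": [1, 3, 2, 3]})
t2 = pd.DataFrame({"schemas": [2, 3, 1, 3], "beliefs": [3, 3, 1, 3], "awareness": [2, 3, 2, 3]})

# (a) Identify each individual's pattern (profile) at each time point,
# using profiles defined in a common space across both waves.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pd.concat([t1, t2]))
profile_t1 = km.predict(t1)
profile_t2 = km.predict(t2)

# (b) Compare patterns across time: the transition table shows, per child,
# stability (same profile) or movement to a different profile.
transitions = pd.crosstab(pd.Series(profile_t1, name="profile_t1"),
                          pd.Series(profile_t2, name="profile_t2"))
print(transitions)
```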

Introduction to White Paper 3


Greetings friends! In this third White Paper, Realist(ic) Evaluation Tools for OST Programs: The Quality-Outcomes Design and Methods (Q-ODM) Toolbox, we extend from the neuroperson framework for socio-emotional skills to a focus on evaluation design and impact evidence. The methods used to evaluate out-of-school time (OST) programs and to assess their impact on student skill growth are a critical issue, especially given the ambiguity about impacts from gold-standard evaluations of publicly funded afterschool programs. Are programs producing weak or no effects? Or are gold-standard designs missing something?

We offer a sequence of evaluation questions that chart the course to realistic evidence about quality and outcomes (i.e., cause and effect, or “how” and “how much”) – evidence that is useful to managers, teachers, coaches, and evaluators. We’ve learned these questions over the past two decades by asking tens of thousands of afterschool, early childhood, and school-day teachers what kinds of data and results about their own work are most useful to them.

Getting the evaluation questions right calls for measurement and analytics tools that:

…reflect the assumption that children have mental skills that are causes of their behavior…. These mental skills are conceived of as several different aspects of mental functioning (i.e., schemas, beliefs, & awareness) that exist within every biologically-intact person, enable behavioral skills, and can be assessed, more or less accurately, using properly-aligned measures. When the parts and patterns of skill are reflected in theory and measures, the accuracy and meaningfulness of data about program quality and SEL skill – and all subsequent manipulations and uses of the data – are dramatically improved.

Our thinking is deeply anchored in pattern- and person-centered science. Check out a related blog here: Why are Q-ODM’s Pattern-Centered Methods (PCM) More Realistic and Useful?

Finally, we provide data visualization examples that complete an unbroken chain of encoded meaning, from the observation of students’ socio-emotional skills in an afterschool classroom, to the decoding of the data visualization by an end-user. We’re pleased to share these insights. Cheers!

P.S. For CEOs who need impact evidence: Why are gold-standard designs not as cost-effective as we might think? Elsewhere, we have argued that gold-standard designs for afterschool programs are misspecified models because they lack key moderator and mediator variables (e.g., instructional quality and socio-emotional skills). For example, the large impacts (often equity effects, as predicted by the neuroperson framework) that we typically find for students who start programs with lower socio-emotional skills but who receive high-quality instruction cannot be detected using most gold-standard designs. As a result, it is difficult (or impossible) to analogize from the results of gold-standard designs to the real actions taken by real people; thus, those designs are not very cost-effective for improvement or for telling compelling stories about impact. Check out a related blog here: How the Q-ODM impact model is a more cost-effective form of the quasi-experimental design (QED).

Introduction to White Paper 2


Welcome back! In this second white paper, Measuring Socio-Emotional Skill, Impact, and Equity Outcomes, we extend from the White Paper 1 skill framework to discuss implications for accurate measurement.

We are pleased to share these hard-won lessons from two decades of trying to describe the actual outcomes of “broad developmentally-focused programs” – which means trying to figure out how to measure socio-emotional skill changes of both adults (i.e., quality practices) and children over short time periods. In the paper, we work through the logic of measurement in a way that we hope non-technical readers can follow with minimal suffering. There are no Greek symbols!

We’re passionate about this subject because the potential is real. Getting measurement right will make a big difference for the oft-ignored questions about how and how much skills change during relatively short periods, such as a semester or school year. To put the conclusion up front: Maximize measurement sensitivity in applied settings by using (a) adult ratings of child behavior that (b) reference periods of no more than the past two weeks and (c) use a scale anchored by frequency of behavior – what we call optimal skill measures.

Another message is that, regardless of measure choice, items should analogize to actual mental and behavioral skill states that occur in real time, using words that all raters understand in the same way. Without this power of analogy from the rater’s concept of the verb/predicate in the written item to an observed quality in the room, external raters can’t make clear comparisons before checking the box. The same is true for self-raters observing thoughts and feelings happening inside their own mind/body.

The kicker is that as inaccurate data are aggregated, the extent of invalidity is compounded. What if the ambivalent impact findings repeatedly demonstrated by gold-standard evaluations in publicly-funded afterschool programs were caused by leaving out accurate information about socio-emotional skills? (This is, in fact, a key argument elaborated in White Papers 3 and 4.) Thanks for checking out our work!

P.S. for the psychometrically minded: Why are many SEL skill measurement constructs likely to be inaccurate, despite psychometric evidence of reliability and validity? First, many measures of SEL skill lump things together that they shouldn’t. For example, mixing self-report items about beliefs about emotional control in general (efficacy), the felt level of charged energy in the body (motivation), and the specific behaviors that follow (taking initiative) creates scale scores that obscure distinct parts of skill that change on different timelines and with different causes.

Second, it turns out that most measures young people encounter in school day and OST settings are self-reports of beliefs about skills. Students are rarely trained in the meaning of the words in the items that they are responding to while coming from different histories and different “untrained” perspectives on emotion-related words. We just don’t know what the words mean to the self-reporter, particularly the relative intensities offered in multi-point Likert-type response scales.

Third, items that refer to the use of skills in general (i.e., a verb without clear predicating context or time period) are much less sensitive to specific skill changes that actually occur over short periods of time. We refer to these as measures of functional skill levels that change more slowly over time.

In the new year, we’re highlighting the third white paper, Realist(ic) Evaluation Tools for OST Programs: The Quality-Outcomes Design and Methods (Q-ODM) Toolbox. In this paper, Charles Smith and Steve Peck extend the ideas introduced in White Paper 1 (socio-emotional framework) and White Paper 2 (socio-emotional measures) to program evaluation and impact evidence.

Reflections on White Paper 1


In conjunction with the release of White Paper 1 this week – A Framework for Socio-Emotional Skills, Quality, and Equity – we want to mention a few of the highlights:

What are socio-emotional skills? In our view, a person’s socio-emotional skills are integrated sets of mental and behavioral parts and processes (i.e., schemas, beliefs, and awareness); these integrated systems are socio-emotional skills and produce both basic and advanced forms of agency.

Why are socio-emotional skills important? Socio-emotional skills have a compounding effect on many developmental outcomes that has been described as dynamic complementarity (Heckman, 2007); that is, socio-emotional skills beget other types of skills. Children and adults operating at high levels of SEL skill can more easily get on to the business of learning what the context has to offer. Settings that do not address SEL skills can become a further cause of educational inequity.

Why are organizations and policies struggling to implement socio-emotional skill reforms? A recent review found over 100 different frameworks describing SEL skills and supports (Berg et al., 2017). This cacophony of words and concepts undermines the shared understanding and language necessary for coordinated action, both within organizations doing the work and among evaluators producing the evidence.[i] Confusion about what constitutes SEL skill, and how “skill” may or may not differ from many other concepts – such as, competence, abilities, traits, attitudes, and mindsets – undermines scientific progress and slows policy processes that rely on at least approximate consensus around shared meanings and objects of measurement.

How can the QTurn socio-emotional skills framework help increase the effectiveness of reform? By defining, naming, and sorting out the key parts of integrated SEL skill sets, we can much more effectively measure and model both changes in socio-emotional skills and, ultimately, impacts on outcomes and equity. In White Paper 2, we extend from the socio-emotional skills framework described in White Paper 1 to corresponding guidance for measuring socio-emotional skills with increased precision, accuracy, and sensitivity.

We’ll be back with more soon…

 


[i] Given the extent of diversity across SEL frameworks, Jones et al. (2019) developed resources to help stakeholders understand the unique strengths of different frameworks as well as the alignment between core elements of these different frameworks. The general conclusions from this work are (a) there is currently no single consensus framework that is obviously more scientifically or practically valid than any or all of the others, and (b) the use of the same terms by different frameworks to refer to presumably different things (i.e., jingle fallacies), and the use of different terms by different frameworks to refer to presumably the same things (i.e., jangle fallacies), are abiding challenges faced by stakeholders charged with making funding, evaluation, training, performance, measurement, and analysis decisions. Our approach is designed to help solve these problems.

Introduction to QTurn White Papers


We at QTurn are pleased to share the first three in a series of four white papers. White Paper 1, Socio-Emotional Skills, Quality, and Equity (Peck & Smith, 2020), provides a translational framework for understanding our relatively unique view of the key parts of a socio-emotional skill set. In short, we develop a case for supplementing the traditional focus on student beliefs and behavior with a much more extensive focus on students’ emotional life and the attention skills necessary for becoming the primary authors of their own development.

You can download White Paper 1 from our website or ResearchGate. We’ve also published a blog describing what we think are some of the important points and implications of White Paper 1.

Although our work is anchored in the wide and deep range of developmental supports that are currently evident in the out-of-school time (OST) field, we view the “neuroperson” model described in White Paper 1 as applying to all adults and children in all settings. Quoting from the paper:

We introduce a theoretical framework designed to describe the integrated set of mental and behavioral parts and processes (i.e., schemas, beliefs, and awareness) that are socio-emotional skills and that produce both basic and advanced forms of agency. With improved definitions and understanding of SEL skills, and the causes of SEL skill growth, we hope to improve reasoning about programs and policies for socio-emotional supports in any setting where children spend time. Perhaps most importantly, we hope to inform policy decisions and advance applied developmental science by improving the accuracy and meaningfulness of basic data on children’s SEL skill growth. (p. 3)

The series of white papers will define what exactly we do and believe at QTurn. After the translational framework is explained in White Paper 1, White Paper 2 – Measuring Socio-Emotional Skill, Impact, and Equity Outcomes (Smith & Peck, 2020a) – provides guidance for selecting feasible and valid SEL skill measures. White Paper 3 –  Realist(ic) Evaluation Tools for OST Programs – integrates the SEL framework and measures with a pattern-centered approach to both CQI and impact evaluation. White Paper 4 – Citizen Science and Advocacy in OST (Smith & Peck, 2020b) – presents an alternative evidence-based approach to improving both the impact and equity of OST investments. Over the next few weeks, we’ll be releasing blogs related to White Papers 2 and 3.

We’ll also be updating our website as we go along and hope to be joined in the blogging by a couple of expert clients in Flint and London. That’s it for now. We look forward to sharing further information in the coming months and would love to receive any feedback you think might help further the cause of supporting OST staff and students.

A Compassionate Evaluation using the GOLD Assessment in Genesee Intermediate School District

In early 2020, COVID-19 rates were soaring. Masks, cleaning supplies, and clear information were in short supply. This was especially true for schools across the country. Teachers, parents, and students were unsure about what was going to happen next. On April 1, 2020, in-person school was still in session in the Genesee Intermediate School District (GISD), and the students and staff of the 21st CCLC Bridges to Success afterschool programs were looking forward to a four-day weekend. On April 2, Governor Whitmer issued Executive Order 2020-35, immediately suspending school for the remainder of the school year and drastically changing how school and afterschool services would be delivered (e.g., shifting from in-person to remote interactions with children and youth).

This was the second year QTurn partnered with GISD’s Bridges to Success programs, and we quickly realized two things. First, our original continuous quality improvement (CQI) cycle (with a heavy focus on Socio-Emotional Learning) was, in spirit, more relevant than ever but, in implementation, completely inappropriate. Second, the setting-based tools (such as the Youth and School-Age PQA) available to evaluators in the OST field in April of 2020 could not provide valid program quality data and could not demonstrate how afterschool programs, like Bridges to Success, were pivoting to meet the needs of children.

During this pivot point, we discovered the four “rules” for designing and implementing a compassionate evaluation. It felt unethical (to us and to the Bridges to Success leadership) to ask staff to take part in an external evaluation process ill-fitted to the transitional physical setting and to the greater context of learning during a global pandemic. Our partners didn’t need the added pressure of an external observation while they were still figuring out what it meant to offer virtual and non-virtual programming to students and families.

By the time schools were closed in Michigan, QTurn was in the midst of developing a self-assessment tool for evaluating program quality during the COVID pandemic, which would eventually become the Guidance for OST Learning at a Distance (GOLD). The development of the GOLD, funded by the Michigan Afterschool Association (MAA) and the Michigan Department of Education (MDE), was the culmination of interviews, workshops, and reviews with over 25 youth development and OST experts from the State of Michigan. Because the GOLD materials would not be released to the public for another month, the QTurn team and Bridges to Success leadership decided to use the 27 best practices described in the GOLD as a framework for coding a series of interviews in order to tell the story of Bridges to Success’ response to the pandemic.

Over the course of two weeks, the QTurn team interviewed 15 afterschool staff, from 9 GISD afterschool sites, and 1 administrator from the GISD office. Each interview was structured around the following five questions:

  • What is the experience of transitioning from in-person to distance programming?
  • What are you hearing from students and families?
  • What are the barriers to students’ virtual learning?
  • Where are you experiencing success?
  • Where could you be successful with more support?

Some calls were quick, lasting only 35 minutes. Some ran over an hour. We asked questions, and we listened, asked follow-up questions, and listened more. Every conversation was an intense opportunity for direct staff, program administrators, and team leads to tell someone outside of their world what was going on. We heard many sentiments filled with hope and gratitude, confusion and uncertainty about their impact, moments of fear and sadness, and overwhelming concern for the students and families in their programs.

Although each of the afterschool sites used their own approach to providing services intended to facilitate learning at a distance, with no systematic coordination across sites, five key themes emerged from our analysis of the interview responses:

  1. Staff were unsure about how to define some aspects of program quality and/or professional practice within the context of learning at a distance; particularly, how to most effectively monitor children’s (a) socio-emotional well-being, (b) academic effort and progress, and (c) attendance.
  2. GISD Bridges to Success leadership style and organizational culture were important sources of support for staff experiencing programmatic uncertainty and professional disequilibrium.
  3. Learning at a distance both exacerbates and clarifies inequities. GISD responded by providing a diverse set of programming options, such as: (a) virtual communication and supports, (b) non-virtual communication and supports, and (c) supports for adults supporting younger children.
  4. GISD continued to deliver a whole-child curriculum that provided supports for safety, fun, academic work, and socio-emotional skill building.
  5. Increased flexibility of staff schedules was necessary to meet student and family needs, even though it often increased the length of staff workdays.
2020-2021 Bridges to Success CQI Design

When we started the 2020-2021 school year, we realized that the setting-based assessments required by the Michigan Department of Education were once again going to offer incomplete data on the impact of the programming being offered by Bridges to Success. The afterschool model was no longer face-to-face general enrichment programming. Bridges to Success was still responding to families in crisis, so their approach was a casework model focused on socio-emotional and academic support. Knowing this, we conducted external (virtual) program evaluations using the MDE-recommended PQA items plus two scales from the SEL PQA (emotion management and empathy). But the PQA data alone felt incomplete. To supplement, we again interviewed site leads and coded the transcripts to the GOLD. By interviewing site leads with general questions, and letting them talk, we were able to learn not only about their experiences but also about what they were most concerned with and focused on. The GOLD demonstrated what the PQA alone could not: that lots of SEL programming was being done, but mostly one-on-one with children and families, outside of their regularly scheduled virtual programming.

In early spring 2021, QTurn and Bridges to Success staff came back together to decide how we could work together for the rest of the year. Schools were opening again, and in-person afterschool programming would be offered again. We decided that we would do external observations using the SEL PQA and use a custom distance-learning external assessment tool designed specifically for GISD.

By adding the GOLD into our CQI plan, we were able to really define the quality and breadth of services. No two sites were operating in exactly the same way – but every site was working with its school to meet the needs of children. Using setting-based assessment tools not designed for virtual learning (or non-virtual distanced learning) would only have scratched the surface of what programs like Bridges to Success accomplished in 2020-21. And by working with our partners and by centering compassion, our evaluation not only articulated but honored the heroic effort and continuous dedication of the Bridges to Success program to their communities during a difficult year.

What Exactly is Compassionate Evaluation?

Compassion has a lot of definitions, but most have to do with recognition of suffering, action to alleviate suffering, and tolerance of discomfort during the action.[i] By April of 2020, we knew that our afterschool partners in Genesee County, Michigan (including the city of Flint), and many of the children and families that they served, were suffering. A significantly higher proportion of those families than usual were in crisis mode. For afterschool educators, the learning environment had moved, and the means of delivering programs had changed dramatically. A “pivot” was required.

When our partners told us how evaluation could help, they emphasized a compassionate approach to the work that would address suffering in multiple ways: by reducing workload related to evaluation, by providing an evaluation design that was of timely value in the current moment of challenge, and, wherever possible, by reflecting back to staff their own incredible commitment and ingenuity in meeting those challenges.

We translated this desire for an experience of compassion into a few rules about method:

Rule 1 was make it quick. We knew that staff were in crisis mode and that time was precious. We eliminated all data collection responsibilities for staff. Staff had only to schedule dates for observers, sit for a 45-minute interview, review the report during a 70-minute training, and then review the report again during a subsequent 15-minute portion of an all-staff meeting. This meant less than 4 hours of total time for a site coordinator engaging in required evaluation activities between September and December 2021.

Rule 2 was prioritize local expertise. When program practices and objectives are changing rapidly (the pivot mentioned above), prior evaluation designs (including measures) are of reduced validity. This was true simply because it was no longer the same service – models varied widely both within and across programs.[ii] We asked open-ended questions about what was and wasn’t working and then coded text segments to existing program standards for program fidelity, instructional quality, and students’ socio-emotional skills. In this way, we identified standards that were applicable in the new situation, named new priorities in those terms, and respected site managers as expert sources of data about what works.

Rule 3 was ask about what is changing (and reflect strengths). Afterschool staff told us they felt like their professional tools had become outdated overnight because of the pandemic, and it was not a good feeling. We spent our moments of access to leaders and site managers asking how it was going, letting them give voice to however it was going by taking the conversation wherever it went, and then intentionally reflecting strengths back to them. Although this therapeutic aspect of our service may feel a bit uncomfortable to some evaluators, the situation required it as an aspect of method. As evaluators, we were “giving value to” leaders’ and site managers’ experiences by letting them flow some ideas and emotion while answering our questions.

Rule 4 was write it down. By asking staff about practices and coding their transcribed responses into categories, we were identifying sentences written by program staff that describe specific local best practices in specific local terms. By identifying and writing down local best practices in the words of program staff, the evaluator helps speed up the development of shared mental models about what the pivoted service is. This helps service providers demonstrate accountability in the sense of “this is actually what we did every day.” It also makes it possible for leaders to pivot the service more easily in the future by returning to documentation for crisis- or emergency-management.


[i] Strauss, C., Lever Taylor, B., Gu, J., Kuyken, W., Baer, R., Jones, F., & Cavanagh, K. (2016). What is compassion and how can we measure it? A review of definitions and measures. Clinical Psychology Review, 47, 15–27. https://doi-org.proxy.lib.umich.edu/10.1016/j.cpr.2016.05.004

[ii] Roy, L., & Smith, C. (2021). YouthQuest Interim Evaluation Report [Grantee Evaluation Report]. QTurn; Smith, C., & Roy, L. (2021). Best Practices for Afterschool Learning at a Distance: GISD Bridges to Success [Grantee Evaluation Report]. QTurn.

Continuous Quality Improvement and Evaluation in 2020: A Plan for 21st Century Community Learning Centers

During times of crisis when programs are under tremendous pressures, evaluation and assessment can be challenging. Programs enter triage mode, putting their limited time and energy into the most urgent tasks. This heightens the need for evaluation that reduces strain and improves capacity. When conditions that created the crisis are long-lasting, like the coronavirus pandemic, it becomes necessary to revisit and restore vital activities that may have been moved to the backburner, but to do this successfully often requires intelligent redesign. How can the same needs be met in a new way? How can evaluation and assessment be adapted to succeed in challenging conditions?

QTurn has developed a comprehensive evaluation plan for afterschool programs at a moment when redesign of the service and delivery of the redesigned service are happening at the same time. This plan, the “Afterschool Evaluation Plan 2020,” was developed to address the unique needs of programs in the 2020-2021 school year – and to support compliance with the specific requirements for the 21st Century Community Learning Centers. An evaluation plan for 2020 must remove burdens rather than add to them, and make life easier, not harder. Aware of these needs, QTurn’s design includes short, validated assessment tools, guidance available through online trainings, and a reassuring, therapeutic approach referred to as the “lower stakes” model.

A Lower Stakes Approach

The lower stakes model is a strong component of QTurn’s work. Lower stakes means that the results of assessments are used to support and inform, and program staff are able to interpret the meaning of their own individual and group performance data. In a lower stakes model, the results of assessments do not influence funding or prompt sanctions. Instead, low assessment scores are opportunities for mutual learning, support, and growth.

While this approach has been integral to the work of QTurn’s founder, Charles Smith, for decades, lower stakes is especially critical in the 2020-2021 school year as program staff strive to adapt to a new normal. The AEP 2020 is intentionally designed to alleviate stress and confusion and help staff adapt to rapid change and achieve shared meaning.

User-Friendly Assessment Tools

The Afterschool Evaluation Plan 2020 includes three assessment tools designed for remote or in-person programming (or a combination of both). The first measures fidelity to best practices at the management level. The second captures quality at the point-of-service (and applies to home learning environments). The third charts the growth of social and emotional learning (SEL) skills among youth. The tools are short and easy to use. Designed to work together as part of the cycle, they also support impact evaluation.

Management Practices Self Assessment (MPSA).  The MPSA was developed with extensive input from 21st CCLC program project directors and aligns with the core requirements for 21st CCLC programs in Michigan. With 24 indicators forming eight standards in four domains, the tool requires less than two hours for program managers to complete.

Guidance for Out-of-School Time Learning at a Distance (GOLD) Self-Assessment. Site managers and staff complete a self-assessment produced with extensive input from other expert  21st CCLC site managers. GOLD contains 27 indicators that form 11 standards in four domains. The four domains represent point-of-service quality in the individual learning environment.

Adult Rating of Youth Behavior (ARYB). Each child is rated on the ARYB in November and April. By completing the assessment at two time points (earlier in the school year and again toward the end of the year), the ARYB is able to capture growth in social and emotional skills across the school year. The ARYB has 30 items that form six skill domains, including emotion management, teamwork, and responsibility.
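As a hedged sketch of how two rating waves can be turned into growth scores, the snippet below assumes a simple long-format table of domain-level ratings; the column names and data are illustrative assumptions, not the published ARYB scoring procedure:

```python
import pandas as pd

# Hypothetical long-format ratings: one row per child x wave x skill domain,
# where 'rating' is the domain score assigned by the adult rater.
ratings = pd.DataFrame({
    "child_id": [1, 1, 2, 2],
    "wave":     ["november", "april", "november", "april"],
    "domain":   ["emotion_management"] * 4,
    "rating":   [2.0, 3.2, 2.5, 2.4],
})

# Pivot to one row per child/domain with a column per wave, then compute growth
# as the April rating minus the November rating.
wide = ratings.pivot_table(index=["child_id", "domain"], columns="wave", values="rating")
wide["growth"] = wide["april"] - wide["november"]
print(wide)
```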

Additional Features:

Cost-Effective.  The Afterschool Evaluation Plan (AEP) 2020 can be adapted to a wide range of cost structures. The Guidebooks and scoring forms are available for free to all users. Additionally, the demands on time are low. Completing the assessments can require as little as 2 hours for program directors and 3 hours for site managers and staff per year.

Guidance Through Online Training.  QTurn will offer live online trainings covering the use of the MPSA, GOLD, and ARYB. Support also includes online trainings that equip leaders and staff to do data-informed planning.

Emphasis on School-Day Alignment.  QTurn’s AEP 2020 helps programs pivot toward greater integration with schools during a time when school has become more challenging for many children.

Support for Impact Evaluation.  Finally, data obtained using the assessment tools can be used to evaluate the overall impact of programs, particularly across multiple programs.

To adopt the AEP 2020, begin by downloading the assessment tools and resources.

For support with implementation of the AEP 2020, please contact the QTurn Team.