Child Labor – the Missing ACE that Hides Child Abusers


I've always thought there were at least two missing ACEs (adverse childhood experiences), owing to the fundamentally bourgeois experience of the people who think these things through. One of those missing ACEs is child labor, visible for the moment through the lens of fresh New York Times reporting (https://nyti.ms/3Ld1bdV). Child labor can be hard to define, in part because the laws have kept the definitions ambiguous to protect agricultural-sector interests. A few of those interests represent actual small and mid-sized farm families who still do their own dirty work, but most are agribusiness.

I felt as though I had to blog about this issue now because it's near and dear to my own childhood experience – and it is the reason I've spent my second career trying to make educational settings do less harm to children with ACEs, visible and invisible. Perhaps this time our field can rise above the embarrassing non-response to the child separation policy under the Trump administration. Weren't those children in the cages "out of school"?

Since a big part of the New York Times story was about meatpacking plants – and because many of the violations occurred in my own state of Michigan – let's talk a bit about two kinds of work experience available to children in meatpacking plants: dangerous machines and brutality toward animals.

Traumatic Memory Lane 

I know these issues well. As the oldest child of a single-parent dairy farmer between 1968 and 1988, I spent roughly 20 years in compulsory labor that included plenty of dangerous machines and the regular murder and torture of animals.

The compulsory labor started at age 6 and included about 4 hours each day after school and then dawn to after dark on weekends and summers. Now, there are many kids who work to help their families, and there are conditions of family labor in which children are protected from the parts of work that are “too much, too soon.” But that’s not what we’re talking about here. These are cases where children as young as 13 are being pressed into service by bosses, not parents. 

First the machines: I can still remember standing over a late-1950s Gehl silage wagon (google it) while blowing silage at age 13, a John Deere 3010 at full throttle right next to me running a blower fan strong enough to blow heavy chopped corn 65 feet up a tube to the top of the silo… and another tractor's power take-off shaft running the wagon's conveyor chains and beaters – all in a tight space where I had to stand to control the machines and manage the load of silage as it met the beaters. My intrusive memory from this experience is always the same: standing on the front edge of the wagon with a long pitchfork, trying to pull the top of the load down evenly so it didn't clog the blower… and in doing so coming within inches of many moving parts that were not going to stop if I got caught up in them. I was working alone. No one would have known.

This is a traumatic memory and, while many years of well-guided work have reduced its powerful emotional charge, the memory will never go away. Even at 56 years old, it re-emerges every time I feel taken advantage of, every time I feel that someone does not care about the risk they are asking me to take on for their advantage. Let's be clear: the threats are no longer the same, now that I work in a collared shirt. Still, when a situation fits the pattern, I automatically project a composite of all of those "dangerous machine" days onto the present, including the physical and emotional experiences that the 13-year-old was having during those stressful and scary moments. If you've ever spent any time in a meatpacking plant, it's nothing but wickedly dangerous machines with sharp edges that require cleaning with high heat and caustic chemicals.

But that wasn't the worst. If you think the terms murder and torture are too strong:

You've never seen a calf's knees buckle, or its aimless wandering, after its budding horns were burned back into its skull with a hot electric iron.

You’ve never heard the desperate lowing of a young steer as you first cut both its horns off (crushed bone and blood squirting everywhere) and then castrate it, often falling to the ground in the process. 

You’ve never had to end the life of an animal that was down – or, worse, tried to get it on the sale-truck still alive with a loader and chains. 

I could go on. This is just what it takes to make meat and milk – but children shouldn't be involved. I use the terms murder and torture deliberately because I left my 20 years as a steward of 300+ dairy animals convinced that cows and pigs have emotions and memories. After those experiences, they never forget, just like me, and just like the kids at the plant. They don't want it to happen, and the fear and pain in their eyes and voices is clear – just as it was for me, and just as it will be for the kids at the plant. To this day, these memories are vivid, and when they come it can take days or weeks to get them suppressed again.

Do a little reading on the subject – going back to Upton Sinclair's The Jungle in 1906 – and you will see that, while regulated, these are industries where the protective shields on the machines are often disregarded for the sake of profit. By this, I mean very specifically that the machines are in disrepair, moving parts are not shielded and are open to the operator, and unnecessary brutality to animals is accepted to make it all go faster. Even if you kept the kids away from the most dangerous machines and the most egregious experiences of animal death (which I doubt), do you want 13-year-olds hanging around with non-family adults while those adults are having these experiences?

This is what the people who want children to work think is okay for children to experience in meatpacking plants. These people are child abusers and should be in jail, not watching their companies pay fines with no personal accountability. If you think that's extreme, look at the science showing that the kinds of child labor I've been describing have results similar to severe cases of sexual abuse. That's why, in the past, child labor and child sexual abuse were both called trafficking. The people who want children to work in meatpacking plants are like sexual abusers.

What the Science Says 

In contrast to the anti-science gaslighting of conservative state legislators and corporate CEOs who believe capitalism should exploit the vulnerable, there is science to guide us here. It can help sort out the causes and effects of traumatic experiences and where, specifically, they might occur in children's experience of work and the conditions around work.

Again, children’s work in benevolent circumstances, under the protection of caregivers in conditions of family economy, is not necessarily a bad thing. I also learned a lot from my first career in childhood – but the “too much, too soon” stuff made it a bad trade. 

Here are a couple of important definitions from the United Nations Children’s Fund: 

Child Labor. Includes work by children too young to perform it and/or work that is likely to harm a child’s health, safety or morals. Excludes permitted light work by children of a specific age range.  

Hazardous Work. Work that is likely to harm a child’s health, safety or morals. Each country creates their own hazardous work list. This typically involves activities that expose children to abuse; to work underground, underwater, at dangerous heights, or in confined spaces; and/or to heavy loads or an unhealthy environment. It also includes work in difficult conditions (e.g., long hours). 

Although national prevalence statistics (as with police homicides) conveniently do not exist for the United States, the more general categories of child exploitation and child trafficking typically include both sexual and labor abuses, and many older studies put them in the same category. For some purposes that works, but here is the more nuanced story: sexual abuse survivors have more messed-up family situations and more prior experience with out-of-home care, child protective services, courts, and the like – at approximately four times the rate (Hopper & Gonzalez, 2018). They are also somewhat more frequently girls. One thing this means is that messed-up families are a risk factor for sexual abuse. Family dysregulation makes children more vulnerable, less protected.

But child labor is often an intensive form of mistreatment too and, like sexual exploitation, is a source of the more severe form of post-traumatic stress disorder (PTSD) and its attendant mental and physical symptoms. Rather than "event trauma," we are talking about "complex trauma," which comes from long-term, repeated exposure to too much, too soon. While children who experience event trauma (e.g., a single episode of sexual abuse by a non-caregiver, a climate disaster, witnessing an act of violence) can have high rates of recovery, the same is not true for complex trauma (also called complex PTSD – see Pete Walker). Here is what corporate CEOs and state legislators around the country are committing these children to:

Hopper and Gonzalez reported high rates of depression and PTSD among participants with a history of labor trafficking (Hopper & Gonzalez, 2018). The majority of individuals fulfilled the DSM criteria for depression (72%) and PTSD (54%), and specific symptoms commonly assessed in a review of systems were endorsed; for example, sleep disturbance (85%), fatigue (71%), weight changes (54%), suicidal ideation (43%), and nightmares (57%). Other symptoms of emotional distress that may be assessed during medical and mental health visits include intrusive thoughts (80%), avoidance of thoughts/memories (77%), hypervigilance (63%), concentration problems (45%), and somatic dysregulation (38%). 

The story here is that even though "hazardous" child labor causes symptoms of complex PTSD, the sequence of exposure differs. In sexual abuse cases, a prior family history of dysregulation makes sexual abuse more likely. In child labor, it is the initial child labor situation itself that makes the child more likely to experience additional abuses of different types, often including being underpaid (no irony here for the bosses – kids are cheap labor).

Child labor co-occurs with all of the other types of mistreatment. Making or allowing children to work makes them more vulnerable to other forms of mistreatment and atypical patterns of child development. Because child labor, often rural, can be invisible to the upper-middle class and urbane folks who do child advocacy and developmental science, most of the good science here is brand new: 

Koegler et al. (2020) documented threats, as well as physical and sexual abuse, as co-occurring exposures during the labor-trafficking experience. 

Hopper and Gonzales (2012) reported significant rates of sexual violence and physical assault co-occurring with child labor.  

Zhang (2012) reported 15% of those involved in labor trafficking endured threats to their physical integrity.  

Marquez et al. (2020) highlighted a youth involved in labor and sex trafficking that met criteria for PTSD due in part to pre- and peri-traumatic exposures during both events.  

Macias-Konstantopoulos (2017) noted in the non-scientific literature that the medical needs of labor trafficking survivors may result from violence exposure, hazardous working conditions, and lack of healthcare access (suggesting the labor-trafficking experience may include barriers to self-agency and self-protection).  

Protect Children or Be an Abuser 

Under the Trafficking Victims Protection Act (TVPA), it is considered child trafficking when force, fraud, or coercion is used to compel a minor to labor (United States Government, 2000). Doesn’t this seem a bit weak? What happens when a caregiver adult encourages a child to work? Does that make it better? The answer to this question goes back to the too much, too soon point. It may be okay if a caregiver agrees with the decision to have a child do labor, if that labor is explicitly of the non-hazardous kind, and if it happens in the context of family economy where a family labors together. Everyone learns from work. However, it’s funny how it’s okay for poor rural kids to work (in non-hazardous conditions), but the children of the people who write about ACEs, and all of their children’s friends, spend their time studying and skill building, often never experiencing real labor with back and hands in their whole lifetime. Hence a missing ACE. 

It comes down to this: the New York Times reporting and the current science say that a significant number of our children are being exposed to hazardous working conditions that will hurt them badly, often for the rest of their lives. It's pretty simple for those of us in the child development field: we need to decide which side we are on. Just saying that we use a developmental method when we've got the children in our setting is not enough. Like the kids who were separated from their parents at the border, these children are being set up with powerful and self-destructive memories that will last a lifetime. What will the out-of-school time field do? Because we are all "developmentalists," shouldn't we be using our privileged knowledge of psychology to take a position? Shouldn't we be speaking truth to the men and women in power who are abusing our children? What should we ask our philanthropic and agency leaders to do?

Please feel free to share this blog with legislators in your state. From a non-exhaustive review, the legislatures of Iowa, Minnesota, Ohio, Michigan, Wisconsin, Arkansas, Georgia, South Dakota, New Jersey, New Hampshire, and Illinois have all recently introduced or passed legislation making it easier to exploit children. See: Republicans in Some States Want to Ease Child Labor Laws to Fill Jobs (businessinsider.com); States Look to Ease Some Child-Labor Laws Amid Tight Market – WSJ

How could OST address climate change?


With the publication of the IPCC report[1], it's not difficult to conclude that our current political leadership is not going to take us where we need to go, and we can't wait any longer. The scientists are telling us right now, in clear language, that time is up: major transformations in our thinking and behavior around energy use must happen now. We're already on track to exceed the 1.5°C threshold of global warming (above pre-industrial levels) by the early 2030s, and the weather patterns projected to ensue, long before 2040, will be catastrophic for everyone on the planet. That's the very difficult news. The amazingly lucky news is that there is still time to do something. So, what should that something be for OST professionals as OST professionals? What should we be asking our leading agencies and philanthropies to step in and fund quickly?

Building on Strengths

For the children, youth, families, and communities served by OST programs, socio-emotional skills are at the top of my list, including getting up to speed on trauma-informed practices. We are likely to encounter more trauma from climate-related dislocation, intermittency of power, failing ground water systems, etc.

Just as important: how do we protect children from talk about the ongoing climate catastrophe? How do we nurture their hopefulness rather than sharing our adult fear and unhealthy denial? How do we nurture our own hopefulness as an example? This is a profoundly important task if we are going to do anything at all as a field. The main point of this blog is that it is our ethical obligation to the children and families we serve to demonstrate hopefulness through our own OST practice – setting examples in the time that we have rather than sticking our heads in the sand. It almost doesn't matter how we go about addressing the problem as long as we communicate hope rather than fear while we're doing it.

Another big one that we already know about is demographic change. We need to understand south-to-north migrations and how to be intentional about multiculturalism in practice and policy. This will be a place to use all of the equity work that so many of us have been doing and thinking about. We need to prepare for a society in which many folks from southern US states, Central America, Mexico, and the islands arrive over the next decade. It's already happening. The good news is that many of us are building our capacities to engage multicultural youth in embracing differences, learning about their own cultures (e.g., intersectionality), and understanding inequities.

There is a lot going on in our field in youth civic engagement. It is a small subset of the OST field, but these folks have been doing the work for a long time, know how to do it, and have written a lot about it. The trick here is to re-focus what we know about youth activism – e.g., community and service learning, voter registration and other issue-driven campaigns, peaceful protest – around the mental model that Greta Thunberg is providing about breaking carbon and consumption cycles. I can easily convince myself that learning how to break carbon and consumption cycles in everyday life is now more important than any time spent on academic testing (or advanced math, or competitive sport, and so on).

 

Filling in STEM Denial

One of the huge opportunities to demonstrate hope is precisely where we are least prepared: all of the new habits of daily life that we’re going to need to learn in a zero-carbon world. We are facing a task like the early twentieth century policy of agricultural extension that sought to educate a whole country of beginner-farmers about agricultural science and how not to starve in the countryside. But what institutions are filling this void?

Elsewhere we've referred to these habits as the applied science of ecological stewardship[2] – again, to push the agricultural extension analogy, think home economics for zero carbon. The new zero-carbon habits are all STEM skills, even though the OST STEM field seems to be in denial. From my reading, these are the STEM issues that are directly and substantively related to reducing carbon use and living well in the future:

First, there is a lot to learn about energy-use technologies and energy conservation skills. Dramatically reducing carbon use in the next three years while staying comfortable in everyday life is something we should all be learning about, and there is almost certainly content available for use in OST programs. This ranges from the simple stuff, like when best to turn the air conditioning on and off, to the basics of the new energy technologies and infrastructure that every neighborhood will need.

Another important set of STEM skills is related to food types and sources. It is an uncomfortable fact that two of the biggest ways to cut greenhouse gases and carbon use are to stop eating meat and to grow food locally. The arguments about the ecological implications of plant-based diets are clear and should at least be available to everyone. It's also clear that growing your own local food in any city is likely to happen on a brownfield where the underlying soil is already contaminated. That requires some horticultural design know-how and a little soil science that many urban farmers are learning.

Another big one may surprise you: almost all old trees (i.e., the largest carbon-retaining biomass) have been eliminated, except for those in the urban forests maintained by our city and town governments and in national parks. The carbon-retaining potential of the urban forests is critical, and OST programs could learn a lot about trees (e.g., dendrology) and carbon retention from the several STEM disciplines involved. Few afterschool programs are actually located in non-urban areas (i.e., even in the countryside, OST programs are typically located in a small town). These cities and towns are home to the last remaining trees 60 years old and older in the United States and represent an important part of the solution for carbon retention.

Access to clean water is a major challenge of climate change, and procuring clean drinking water is going to involve all of the STEM disciplines – both building mental models for applied stewardship and then acting in those terms. Shouldn’t all OST students learn about the local clean water agenda and learn how to understand contaminants in water testing output? Shouldn’t all students learn about the science and technology involved with water filtration at home?

For an OST Social Movement

What about the power of the OST social movement – we professionals as a group? How could OST leaders help us act together as a coordinated profession to use our power? Are there a few obvious choices that we could ask leading agencies, membership organizations, and philanthropies to help us engage the field around?

The first question is: How do we integrate all of this content into our professional conferences and workplaces so that we can quickly figure out how to help the next generation of ecological citizens, scientists, educators, advocates, and policy makers who are inheriting our legacy? A second question follows from the ethical charge to demonstrate hopefulness to children: How should we engage local administrators and governments on carbon-reducing practices for our buildings and program offerings? Finally: How do OST professionals engage in political activism as an interest group to identify and advocate for breaking carbon and consumption cycles in personal lives and governance?

Although it may be uncomfortable to discuss these issues – the fear is real for all of us – it is our obligation to the children we serve to put them in the best position to deal with the situation by building our own and their socio-emotional skills and mental models for the zero-carbon world that's coming. Note also that all of the issues addressed above represent the major job categories of the future – for those still defining everything we do in terms of the economy. Finally, note that the OST profession is filled with rational people with progressive views; if we chose to, we could likely act as one. If we don't act now, it will be too late. How can we organize ourselves, our young people, and our communities to amplify their voices, needs, and realities? How would we like our leaders at all levels to help?



[1] Intergovernmental Panel on Climate Change (IPCC; 2022), Climate Change 2022, Mitigation of Climate Change, Summary for Policy Makers [https://www.ipcc.ch/report/ar6/wg3/]

[2] Smith, C. (2019). SEMIS coalition for place-based ecological stewardship: Growing a movement, getting ready for growth. [https://www.qturngroup.com/wp-content/uploads/2022/04/2022-04-11_SEMIS_WP-v7.pdf]


 

How the Q-ODM impact model is a more cost-effective form of the quasi-experimental design (QED)


The Quality-Outcomes Design and Methods (Q-ODM) approach to program evaluation increases the use value of all estimates produced as part of an impact analysis. Put simply: We replace the “no-treatment” counterfactual condition (i.e., children who were not exposed to an afterschool program) with low-implementation conditions (e.g., children who were exposed to lower-quality instructional practices in an afterschool program) in order to describe the impact of optimal implementation on child outcomes (e.g., socio-emotional skill change, equity effects).  Said again: The “control group” in our impact model is any quality profile, subgroup configuration, or pathway (e.g., low-quality practices profile) that is contrasted with an optimal “treatment” group (e.g., high-quality practices profile).[1]
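To make the contrast concrete, here is a minimal sketch in Python of estimating impact as the difference in skill change between quality profiles. The column names and toy values are hypothetical, not QTurn's implementation or data.

```python
# Minimal sketch (toy data): contrast skill change across observed quality
# profiles instead of against an unexposed "no-treatment" group.
import pandas as pd

df = pd.DataFrame({
    "quality_profile": ["high", "high", "high", "low", "low", "low"],
    "sel_t1": [2.1, 1.8, 2.4, 2.0, 1.9, 2.3],  # skill rating at start of term
    "sel_t2": [3.0, 2.9, 3.1, 2.2, 1.8, 2.4],  # skill rating at end of term
})
df["change"] = df["sel_t2"] - df["sel_t1"]

# The impact contrast is the difference in mean change between the
# optimal-implementation profile and the low-implementation profile.
mean_change = df.groupby("quality_profile")["change"].mean()
impact_estimate = mean_change["high"] - mean_change["low"]
print(mean_change)
print(f"Impact of optimal vs. low implementation: {impact_estimate:.2f}")
```

Because both groups are drawn from children who actually attended, every quantity in the subtraction refers to something observed in a real classroom.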

The “Analytic Tools” section of White Paper 3 provides an introductory discussion of Q-ODM impact models for student skill and equity outcomes. Also, check out this UK impact evaluation.

Now, let’s talk about three reasons why our approach is a cost-effective choice for CEOs seeking evidence about impact and equity outcomes:

Lots of Reality-Based Estimates that Analogize to Action. Our point about cost effectiveness is this: every estimate produced in this impact model is useful. When coupled with QTurn measures, Q-ODM impact estimates are interpretable in terms of specific adult and child behaviors and contexts. This means there is a direct analogy from the meaning encoded in the data to the meaningful teacher and student behavior that occurs in the classroom – a direct analogy from data to reality. The data used to identify the lower-quality profile actually identify the lower-quality settings! The amount of skill change that occurs in the high-quality setting actually demonstrates what's possible in the program; that is, it sets the benchmark for other programs.

An impact estimate implies subtracting one magnitude from another. What use is a counterfactual estimate if there is no such thing as a counterfactual condition? Doesn't that just mean we are subtracting an imaginary quantity from a real one?

Using Natural Groupings to Address Threats to Valid Inference. It's not just the usefulness of estimates (consequential validity) but, we argue, a more valid way to rule out the primary threats to the inference that the treatment caused an effect. Two points follow. First, the children in the low-quality group are more likely to be similar to the kids in the high-quality group for all of the right reasons (i.e., SEL histories) – reasons that are missed by most efforts at matching individuals or groups using demographic and education data.

Second, the case that families in one group have more education-relevant resources (e.g., SEL histories) than families in the other group plays out in two ways. When families have unmeasured resources before the child attends, we are talking about selection effects. When families use those unmeasured resources during the program intervention, we are talking about history effects. We argue, and present evidence, that the Q-ODM method better addresses these threats to valid inference about impact than the pernicious and unethical use of race/ethnicity and social-address variables as covariates – pretend "controls" – in linear models.

Capturing Full Information from Small Samples. Our method is designed to detect differences in the ways things go together in the real world, in or around the average expectable environments characterizing human development and socialization (cf. Magnusson, 2003). This in-the-world structure is a constraint on the states that can and cannot occur during development. In the pattern-centered frame, small cell sizes indicate the sensitivity of the approach. Relatively low Ns are not necessarily a problem for the distribution-free statistical tests used in pattern-centered impact analyses.
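For instance, here is a minimal sketch of one such distribution-free comparison – rank-based, with invented change scores; the specific test is our illustrative choice, not a prescription from the Q-ODM toolbox.

```python
# Minimal sketch (invented data): a rank-based, distribution-free test comparing
# skill change between a high-quality and a low-quality practices profile.
from scipy.stats import mannwhitneyu

high_quality_change = [0.9, 1.1, 0.7, 0.8, 1.2]   # optimal-implementation profile
low_quality_change = [0.2, -0.1, 0.1, 0.3, 0.0]   # low-implementation profile

statistic, p_value = mannwhitneyu(high_quality_change, low_quality_change,
                                  alternative="greater")
print(f"U = {statistic}, one-sided p = {p_value:.3f}")
```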

 

[1] We realize that others would claim that our designs are not QED at all. We delve deeper into the rationales used to disqualify “groups that receive different dosages of a treatment” from being considered “control groups” within the context of experimental design in White Paper 4.

 

Why are Q-ODM’s Pattern-Centered Methods (PCM) More Realistic and Useful for Evaluators?


Pattern-centered theory and methods (PCM) can be used to tell simple and accurate stories about how real persons grow in real school and afterschool classrooms. Stories about the quality and outcomes (i.e., causes and effects) that are modeled using PCM are particularly useful because they can address questions related to “how” programs and classrooms work and “how much” children grow skills.

Most training for education researchers and evaluators is focused on variable-centered methods (VCM), also called linear statistical methods (regression, the analysis of variance, and structural equation modeling) or the general linear model. VCM are powerful in cases where the causes and effects are similar across individuals and classrooms. In cases where that’s not true – which is most school and afterschool classrooms – VCM designs tend to provide information that means practically nothing about the actual people or contexts involved. Some of the basic issues have been summarized nicely by Todd Rose in the following TEDx presentation: https://youtu.be/4eBmyttcfU4 (“The Myth of Average”), but the critique is not new.

To better illustrate the point, let's talk about three basic assumptions about the person-reality in afterschool classrooms and how PCM apply:

A person's socio-emotional skills are most accurately represented as a pattern, with multiple skills indicated simultaneously. This is not just about more information from more variables, although that is also a fundamental advantage of pattern-centered methods. The neuroperson is also a "multilevel system" – which is a mouthful but, as detailed in White Paper 1, means that different parts of mental skill change for different reasons, on different timelines, and cause different types of behavior! This means different amounts and types of cause are involved in changing any mental skill or behavior. How could the one-variable-at-a-time constraints of VCM ever do an adequate job of representing socio-emotional skill? PCM are uniquely fit for sorting out multilevel causal dynamics so that the full meaning encoded in the data can emerge.

Change in socio-emotional skill is always qualitative, from one pattern to a different pattern at a later time point. Given the multilevel nature of socio-emotional skills, the combination of skill parts is likely to differ at different time points and in different settings. The fact that skills turn into different skills as they change has been an Achilles heel for VCM. Check out the "Analytic Tools" section of White Paper 3 to see how PCM can be applied to (a) identify each individual's unique pattern of skill parts at different points in time and then (b) compare across those qualitatively different patterns to detect stability, growth, or decline for each individual (a sketch follows the third assumption below). When coupled with the sensitivity of optimal skill measures (see White Paper 2), PCM are ideal for describing the how (e.g., an individual child's movement from one pattern to a subsequent pattern) and how much (e.g., how many children grew) of skill change over short time periods, such as a semester or school year.

The same classroom causes different patterns of change for different subgroups of children. An adage from mid-20th century psychology (Kluckhohn and Murray, 1948, p. 35) is a helpful reminder: Any individual can, for different causal variables, be simultaneously like all others, like some others, or like no others. VCM work only in the first case, where every person experiences a very similar type of cause and effect. Case-study and qualitative methods are preferred in the third case, where the causes and effects may apply only to a single person. PCM are uniquely fit for the second case; that is, where different subgroups of children with different socio-emotional histories have qualitatively different types of responses to the same education settings.
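To make the second and third assumptions more concrete, here is a minimal sketch that identifies skill-part profiles at two time points and tabulates each child's movement between them. The data are invented, and a generic clustering routine stands in for QTurn's actual pattern-centered tools.

```python
# Illustrative only: identify patterns of skill parts at two time points, then
# cross-tabulate profile membership to see how each individual moved between
# qualitatively different patterns.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three hypothetical skill parts (e.g., schemas, beliefs, awareness) for 40 children.
skills_t1 = rng.normal(loc=2.5, scale=0.6, size=(40, 3))
skills_t2 = skills_t1 + rng.normal(loc=0.4, scale=0.5, size=(40, 3))  # simulated growth

# Identify profiles (patterns) separately at each time point.
profile_t1 = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(skills_t1)
profile_t2 = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(skills_t2)

# Each child's movement from one pattern to another is the unit of analysis.
# In practice, profiles would be substantively labeled before interpreting
# stability, growth, or decline.
transitions = pd.crosstab(pd.Series(profile_t1, name="profile_t1"),
                          pd.Series(profile_t2, name="profile_t2"))
print(transitions)
```

Subgroup questions (the third assumption) follow naturally: the same transition table can be computed separately for children in, say, high- and low-quality classrooms.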

In the end, VCM assumptions about the validity of single variables, the quantitative nature of skill change, and the homogeneity of causal dynamics lead to an impoverished view of reality – and likely a lot of inaccurate conclusions about what to do.

Introduction to White Paper 3


Greetings friends! In this third White Paper, Realist(ic) Evaluation Tools for OST Programs: The Quality-Outcomes Design and Methods (Q-ODM) Toolbox, we extend from the neuroperson framework for socio-emotional skills to a focus on evaluation design and impact evidence. The methods used to evaluate out-of-school time (OST) programs and to assess impact on student skill growth are a critical issue, especially given the ambiguity about impacts from gold-standard evaluations of publicly funded afterschool programs. Are programs producing weak or no effects? Or are gold-standard designs missing something?

We offer a sequence of evaluation questions that chart the course to realistic evidence about quality and outcomes (i.e., cause and effect, or "how" and "how much") – evidence that is useful to managers, teachers, coaches, and evaluators. We've honed these questions over the past two decades by asking tens of thousands of afterschool, early childhood, and school-day teachers how data and results about their own work can work best for them.

Getting the evaluation questions right calls for measurement and analytics tools that:

…reflect the assumption that children have mental skills that are causes of their behavior…. These mental skills are conceived of as several different aspects of mental functioning (i.e., schemas, beliefs, & awareness) that exist within every biologically-intact person, enable behavioral skills, and can be assessed, more or less accurately, using properly-aligned measures. When the parts and patterns of skill are reflected in theory and measures, the accuracy and meaningfulness of data about program quality and SEL skill – and all subsequent manipulations and uses of the data – are dramatically improved.

Our thinking is deeply anchored in pattern- and person-centered science. Check out a related blog here: Why are Q-ODM’s Pattern-Centered Methods (PCM) More Realistic and Useful?

Finally, we provide data visualization examples that complete an unbroken chain of encoded meaning, from the observation of students’ socio-emotional skills in an afterschool classroom, to the decoding of the data visualization by an end-user. We’re pleased to share these insights. Cheers!

P.S. For CEOs who need impact evidence: Why are gold-standard designs not as cost-effective as we might think? Elsewhere, we have argued that gold-standard designs for afterschool programs are misspecified models because they lack key moderator and mediator variables (e.g., instructional quality and socio-emotional skills). For example, the large impacts (often equity effects, as predicted by the neuroperson framework) that we typically find for students who start programs with lower socio-emotional skills but receive high-quality instruction cannot be detected using most gold-standard designs. As a result, it is difficult (or impossible) to analogize from the results of gold-standard designs to the real actions taken by real people; thus, those designs are not very cost-effective for improvement or for telling compelling stories about impact. Check out a related blog here: How the Q-ODM impact model is a more cost-effective form of the quasi-experimental design (QED).
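As a toy illustration of the misspecification argument above – invented numbers, not a reanalysis of any actual evaluation – an overall program-versus-no-program average can look modest even when children who start with lower skills and receive high-quality instruction grow substantially:

```python
# Toy simulation: the overall program effect is modest, while the subgroup that
# starts with lower skills AND receives high-quality instruction shows a much
# larger effect. All quantities are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 400
program = rng.integers(0, 2, n)                 # 1 = attended the afterschool program
high_quality = rng.integers(0, 2, n) * program  # quality observed only in-program
low_baseline = rng.integers(0, 2, n)            # 1 = lower starting SEL skills

# Growth occurs mainly for low-baseline children in high-quality settings.
growth = 0.05 * program + 0.9 * high_quality * low_baseline + rng.normal(0, 0.5, n)
df = pd.DataFrame({"program": program, "high_quality": high_quality,
                   "low_baseline": low_baseline, "growth": growth})

overall = df.groupby("program")["growth"].mean()
print("Average program effect (all children):", round(overall[1] - overall[0], 2))

treated = df[(df.low_baseline == 1) & (df.high_quality == 1)]["growth"].mean()
comparison = df[(df.low_baseline == 1) & (df.program == 0)]["growth"].mean()
print("Effect for low-baseline children in high-quality settings:",
      round(treated - comparison, 2))
```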

Introduction to White Paper 2


Welcome back! In this second white paper, Measuring Socio-Emotional Skill, Impact, and Equity Outcomes, we extend from the White Paper 1 skill framework to discuss implications for accurate measurement.

We are pleased to share these hard-won lessons from two decades of trying to describe the actual outcomes of “broad developmentally-focused programs” – which means trying to figure out how to measure socio-emotional skill changes of both adults (i.e., quality practices) and children over short time periods. In the paper, we work through the logic of measurement in a way that we hope non-technical readers can follow with minimal suffering. There are no Greek symbols!

We're passionate about this subject because the potential is real. Getting measurement right will make a big difference for the oft-ignored questions about how and how much skills change during relatively short periods, such as a semester or school year. To put the conclusion up front: maximize measurement sensitivity in applied settings by using (a) adult ratings of child behavior that (b) reference a period of not more than two weeks past and (c) use a scale anchored by frequency of behavior – what we call optimal skill measures.
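As a rough sketch of what those three criteria could look like when operationalized – the item text, anchors, and field names below are hypothetical, not drawn from QTurn's instruments:

```python
# Hypothetical sketch of an "optimal skill measure" item under criteria (a)-(c).
optimal_item = {
    "rater": "adult",                            # (a) adult rating of child behavior
    "reference_period_days": 14,                 # (b) no more than two weeks past
    "behavior": "asked a peer for help when stuck on a task",  # invented item text
    "response_scale": {                          # (c) anchored by frequency of behavior
        0: "not yet observed",
        1: "observed once or twice",
        2: "observed most days",
    },
}

def score(ratings):
    """Average frequency rating across items for one child."""
    return sum(ratings) / len(ratings)

print(round(score([2, 1, 2]), 2))  # 1.67 on the 0-2 frequency scale
```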

Another message is that, regardless of measure choice, items should analogize to actual mental and behavioral skill states that occur in real time, using words that all raters understand in the same way. Without this power of analogy – from the rater's concept of the verb/predicate in the written item to an observed quality in the room – external raters can't make clear comparisons before checking the box. The same is true for self-raters observing thoughts and feelings happening inside their own mind/body.

The kicker is that as inaccurate data are aggregated, the extent of invalidity is compounded. What if the ambivalent impact findings repeatedly demonstrated by gold-standard evaluations in publicly-funded afterschool programs were caused by leaving out accurate information about socio-emotional skills? (This is, in fact, a key argument elaborated in White Papers 3 and 4.) Thanks for checking out our work!

P.S. for the psychometrically minded: Why are many SEL skill measurement constructs likely to be inaccurate, despite psychometric evidence of reliability and validity? First, many measures of SEL skill lump together things that they shouldn't. For example, mixing self-report items about beliefs about emotional control in general (efficacy), the felt level of charged energy in the body (motivation), and the specific behaviors that follow (taking initiative) creates scale scores that obscure distinct parts of skill that change on different timelines and with different causes.

Second, it turns out that most measures young people encounter in school-day and OST settings are self-reports of beliefs about skills. Students are rarely trained in the meaning of the words in the items they respond to, and they come to those items from different histories and different "untrained" perspectives on emotion-related words. We just don't know what the words mean to the self-reporter, particularly the relative intensities offered in multi-point Likert-type response scales.

Third, items that refer to the use of skills in general (i.e., a verb without clear predicating context or time period) are much less sensitive to specific skill changes that actually occur over short periods of time. We refer to these as measures of functional skill levels that change more slowly over time.

In the new year, we’re highlighting the third white paper, Realist(ic) Evaluation Tools for OST Programs: The Quality-Outcomes Design and Methods (Q-ODM) Toolbox. In this paper, Charles Smith and Steve Peck extend the ideas introduced in White Paper 1 (socio-emotional framework) and White Paper 2 (socio-emotional measures) to program evaluation and impact evidence.

Reflections on White Paper 1


In conjunction with the release of White Paper 1 this week – A Framework for Socio-Emotional Skills, Quality, and Equity – we want to mention a few of the highlights:

What are socio-emotional skills? In our view, a person’s socio-emotional skills are integrated sets of mental and behavioral parts and processes (i.e., schemas, beliefs, and awareness); these integrated systems are socio-emotional skills and produce both basic and advanced forms of agency.

Why are socio-emotional skills important? Socio-emotional skills have a compounding effect on many developmental outcomes that has been described as dynamic complementarity (Heckman, 2007); that is, socio-emotional skills beget other types of skills. Children and adults operating at high levels of SEL skill can more easily get on to the business of learning what the context has to offer. Settings that do not address SEL skills can become a further cause of educational inequity.

Why are organizations and policies struggling to implement socio-emotional skill reforms? A recent review found over 100 different frameworks describing SEL skills and supports (Berg et al., 2017). This cacophony of words and concepts undermines the shared understanding and language necessary for coordinated action, both within organizations doing the work and among evaluators producing the evidence.[i] Confusion about what constitutes SEL skill, and how “skill” may or may not differ from many other concepts – such as, competence, abilities, traits, attitudes, and mindsets – undermines scientific progress and slows policy processes that rely on at least approximate consensus around shared meanings and objects of measurement.

How can the QTurn socio-emotional skills framework help increase the effectiveness of reform? By defining, naming, and sorting out the key parts of integrated SEL skill sets, we can much more effectively measure and model both changes in socio-emotional skills and, ultimately, impacts on outcomes and equity. In White Paper 2, we extend from the socio-emotional skills framework described in White Paper 1 to corresponding guidance for measuring socio-emotional skills with increased precision, accuracy, and sensitivity.

We’ll be back with more soon…

 


[i] Given the extent of diversity across SEL frameworks, Jones et al. (2019) developed resources to help stakeholders understand the unique strengths of different frameworks as well as the alignment between core elements of these different frameworks. The general conclusions from this work are (a) there is currently no single consensus framework that is obviously more scientifically or practically valid than any or all of the others, and (b) the use of the same terms by different frameworks where presumably referring to different things (i.e., jingle fallacies), and the use of different terms by different frameworks where presumably referring to the same things (i.e., jangle fallacies), are abiding challenges faced by stakeholders charged with making funding, evaluation, training, performance, measurement, and analysis decisions. Our approach is designed to help solve these problems.

Introduction to QTurn White Papers


We at QTurn are pleased to share the first three in a series of four white papers. White Paper 1, Socio-Emotional Skills, Quality, and Equity (Peck & Smith, 2020), provides a translational framework for understanding our distinctive view of the key parts of a socio-emotional skill set. In short, we develop a case for supplementing the traditional focus on student beliefs and behavior with a much more extensive focus on students' emotional life and the attention skills necessary for becoming the primary authors of their own development.

You can download White Paper 1 from our website or ResearchGate. We’ve also published a blog describing what we think are some of the important points and implications of White Paper 1.

Although our work is anchored in the wide and deep range of developmental supports that are currently evident in the out-of-school time (OST) field, we view the “neuroperson” model described in White Paper 1 as applying to all adults and children in all settings. Quoting from the paper:

We introduce a theoretical framework designed to describe the integrated set of mental and behavioral parts and processes (i.e., schemas, beliefs, and awareness) that are socio-emotional skills and that produce both basic and advanced forms of agency. With improved definitions and understanding of SEL skills, and the causes of SEL skill growth, we hope to improve reasoning about programs and policies for socio-emotional supports in any setting where children spend time. Perhaps most importantly, we hope to inform policy decisions and advance applied developmental science by improving the accuracy and meaningfulness of basic data on children’s SEL skill growth. (p. 3)

The series of white papers will define what exactly we do and believe at QTurn. After the translational framework is explained in White Paper 1, White Paper 2 – Measuring Socio-Emotional Skill, Impact, and Equity Outcomes (Smith & Peck, 2020a) – provides guidance for selecting feasible and valid SEL skill measures. White Paper 3 –  Realist(ic) Evaluation Tools for OST Programs – integrates the SEL framework and measures with a pattern-centered approach to both CQI and impact evaluation. White Paper 4 – Citizen Science and Advocacy in OST (Smith & Peck, 2020b) – presents an alternative evidence-based approach to improving both the impact and equity of OST investments. Over the next few weeks, we’ll be releasing blogs related to White Papers 2 and 3.

We’ll also be updating our website as we go along and hope to be joined in the blogging by a couple of expert clients in Flint and London. That’s it for now. We look forward to sharing further information in the coming months and would love to receive any feedback you think might help further the cause of supporting OST staff and students.

A Compassionate Evaluation using the GOLD Assessment in Genesee Intermediate School District

In early 2020, COVID-19 rates were soaring. Masks, cleaning supplies, and clear information were in short supply. This was especially true for schools across the country. Teachers, parents, and students were unsure about what was going to happen next. On April 1, 2020, (in-person) school was still in session in the Genesee Intermediate School District (GISD), and the students and staff of the 21st CCLC Bridges to Success afterschool programs were looking forward to a 4-day weekend. On April 2, Governor Whitmer released Executive Order 2020-35, immediately suspending school for the remainder of the school year and drastically changing how school and afterschool services would be delivered (e.g., shifting from in-person to remote interactions with children and youth).

This was the second year QTurn partnered with GISD’s Bridges to Success programs, and we quickly realized two things. First, our original continuous quality improvement (CQI) cycle (with a heavy focus on Socio-Emotional Learning) was, in spirit, more relevant than ever but, in implementation, completely inappropriate. Second, the setting-based tools (such as the Youth and School-Age PQA) available to evaluators in the OST field in April of 2020 could not provide valid program quality data and could not demonstrate how afterschool programs, like Bridges to Success, were pivoting to meet the needs of children.

During this pivot point, we discovered the four "rules" for designing and implementing a compassionate evaluation. It felt unethical (to us and to the Bridges to Success leadership) to ask staff to take part in an external evaluation process ill-suited to the transitional physical setting and to the larger context of learning during a global pandemic. Our partners didn't need the added pressure of an external observation while they were still figuring out what it meant to offer virtual and non-virtual programming to students and families.

By the time schools were closed in Michigan, QTurn was in the midst of developing a self-assessment tool for evaluating program quality during the COVID pandemic, which would eventually become the Guidance for OST Learning at a Distance (GOLD). The development of the GOLD, funded by the Michigan Afterschool Association (MAA) and the Michigan Department of Education (MDE), was the culmination of interviews, workshops, and reviews with over 25 youth development and OST experts from the State of Michigan. Because the GOLD materials would not be released to the public for another month, the QTurn team and Bridges to Success leadership decided to use the 27 best practices described in the GOLD as a framework for coding a series of interviews in order to tell the story of Bridges to Success' response to the pandemic.

Over the course of two weeks, the QTurn team interviewed 15 afterschool staff, from 9 GISD afterschool sites, and 1 administrator from the GISD office. Each interview was structured around the following five questions:

  • What is the experience of transitioning from in-person to distance programming?
  • What are you hearing from students and families?
  • What are the barriers to students’ virtual learning?
  • Where are you experiencing success?
  • Where could you be successful with more support?

Some calls were quick, lasting only 35 minutes. Some ran over an hour. We asked questions and we listened, asked follow-up questions, and listened more. Every conversation was an intense opportunity for direct staff, program administrators, and team leads to tell someone outside of their world what was going on. We heard many sentiments filled with hope and gratitude, confusion and uncertainty about their impact, moments of fear and sadness, and overwhelming concern for the students and families in their programs.

Although each of the afterschool sites used their own approach to providing services intended to facilitate learning at a distance, with no systematic coordination across sites, five key themes emerged from our analysis of the interview responses:

  1. Staff were unsure about how to define some aspects of program quality and/or professional practice within the context of learning at a distance; particularly, how to most effectively monitor children’s (a) socio-emotional well-being, (b) academic effort and progress, and (c) attendance.
  2. GISD Bridges to Success leadership style and organizational culture were important sources of support for staff experiencing programmatic uncertainty and professional disequilibrium.
  3. Learning at a distance both exacerbates and clarifies inequities. GISD responded by providing a diverse set of programming options, such as: (a) virtual communication and supports, (b) non-virtual communication and supports, and (c) supports for adults supporting younger children.
  4. GISD continued to deliver a whole-child curriculum that provided supports for safety, fun, academic work, and socio-emotional skill building.
  5. Increased flexibility of staff schedules was necessary to meet student and family needs, even though it often increased the length of staff workdays.
2020-2021 Bridges to Success CQI Design

When we started the 2020-2021 school year, we realized that the setting-based assessments required by the Michigan Department of Education were once again going to offer incomplete data on the impact of the programming being offered by Bridges to Success. The afterschool model was no longer face-to-face general enrichment programming. Bridges to Success was still responding to families in crisis, so their approach was a casework model focused on socio-emotional and academic support. Knowing this, we conducted external (virtual) program evaluations using the MDE-recommended PQA items plus two scales from the SEL PQA (emotion management and empathy). But the PQA data alone felt incomplete. To supplement, we again interviewed site leads and coded the transcripts to the GOLD. By interviewing site leads with general questions, and letting them talk, we were able to learn not only about their experiences but also about what they were most concerned with and focused on. The GOLD demonstrated what the PQA alone could not: a lot of SEL programming was being done, but mostly one-on-one with children and families, outside of their regularly-scheduled virtual programming.

In early spring 2021, QTurn and Bridges to Success staff came back together to decide how we could work together for the rest of the year. Schools were opening again, and in-person afterschool programming would be offered again. We decided that we would do external observations using the SEL PQA along with a custom distance-learning external assessment tool designed specifically for GISD.

By adding the GOLD into our CQI plan, we were able to really define the quality and breadth of services. No two sites were operating in exactly the same way – but every site was working with its school to meet the needs of children. Setting-based assessment tools not designed for virtual learning (or non-virtual distanced learning) only scratched the surface of what programs like Bridges to Success accomplished in 2020-21. By working with our partners and centering compassion, our evaluation not only articulated but also honored the heroic effort and continuous dedication of the Bridges to Success program to their communities during a difficult year.

What Exactly is Compassionate Evaluation?

Compassion has many definitions, but most have to do with recognizing suffering, acting to alleviate suffering, and tolerating discomfort during that action.[i] By April of 2020, we knew that our afterschool partners in Genesee County (including the city of Flint), Michigan, and many of the children and families they served, were suffering. A significantly higher proportion of those families than usual were in crisis mode. For afterschool educators, the learning environment had moved, and the means of delivering programs had changed dramatically. A "pivot" was required.

When our partners told us how evaluation could help, they emphasized a compassionate approach to the work that would address suffering in multiple ways: by reducing evaluation-related workload, by providing an evaluation design of timely value in the current moment of challenge, and, wherever possible, by reflecting back to staff their own incredible commitment and ingenuity in meeting those challenges.

We translated this desire for an experience of compassion into a few rules about method:

Rule 1 was make it quick. We knew that staff were in crisis mode and that time was precious. We eliminated all data collection responsibilities for staff. Staff had only to schedule dates for observers, sit for a 45-minute interview, review the report during a 70-minute training, and then review the report again during a subsequent 15-minute portion of an all-staff meeting. That meant less than 4 hours of total time for a site coordinator engaging in required evaluation activities between September and December 2021.

Rule 2 was prioritize local expertise. When program practices and objectives are changing rapidly (the pivot mentioned above), prior evaluation designs (including measures) are of reduced validity. This was true simply because it was no longer the same service – models varied widely both within and across programs.[ii] We asked open-ended questions about what was and wasn’t working and then coded text segments to existing program standards for program fidelity, instructional quality, and students’ socio-emotional skills. In this way, we identified standards that were applicable in the new situation, named new priorities in those terms, and respected site managers as expert sources of data about what works.

Rule 3 was ask about what is changing (and reflect strengths). Afterschool staff told us they felt as though the pandemic had made their professional tools outdated overnight, and it was not a good feeling. We spent our moments of access to leaders and site managers asking how it was going, letting them give voice to however it was going by taking the conversation wherever it went, and then intentionally reflecting strengths back to them. Although this therapeutic aspect of our service may feel a bit uncomfortable to some evaluators, the situation required it as an aspect of method. As evaluators, we were "giving value to" leaders' and site managers' experiences by letting them flow some ideas and emotion while answering our questions.

Rule 4 was write it down. By asking staff about practices and coding their transcribed responses into categories, we identified sentences, in program staff's own words, that describe specific local best practices in specific local terms. By identifying and writing down local best practices in the words of program staff, the evaluator helps speed up the development of shared mental models about what the pivoted service is. This helps service providers demonstrate accountability in the sense of "this is actually what we did every day." It also makes it easier for leaders to pivot the service again in the future by returning to documentation for crisis or emergency management.
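Here is a minimal sketch of what the coding-and-writing-down step can look like in code, assuming hypothetical category names and invented quotes (the actual GOLD indicators and GISD staff responses are not reproduced here):

```python
# Hypothetical sketch of Rule 4: coded interview segments tallied by practice
# category and echoed back verbatim as local best practices.
from collections import defaultdict

coded_segments = [
    ("family_communication", "We call every family at least twice a week."),
    ("sel_check_ins", "Each session starts with a one-on-one feelings check-in."),
    ("family_communication", "Site leads text homework packets to parents."),
]

by_practice = defaultdict(list)
for practice, quote in coded_segments:
    by_practice[practice].append(quote)

for practice, quotes in by_practice.items():
    print(f"{practice} ({len(quotes)} coded segments)")
    for quote in quotes:
        print(f'  - "{quote}"')
```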


[i] Strauss, C., Lever Taylor, B., Gu, J., Kuyken, W., Baer, R., Jones, F., & Cavanagh, K. (2016). What is compassion and how can we measure it? A review of definitions and measures. Clinical Psychology Review, 47, 15–27. https://doi.org/10.1016/j.cpr.2016.05.004

[ii] Roy, L., & Smith, C. (2021). YouthQuest Interim Evaluation Report [Grantee evaluation report]. QTurn; Smith, C., & Roy, L. (2021). Best Practices for Afterschool Learning at a Distance: GISD Bridges to Success [Grantee evaluation report]. QTurn.