How the Q-ODM impact model is a more cost-effective form of the quasi-experimental design (QED)


The Quality-Outcomes Design and Methods (Q-ODM) approach to program evaluation increases the use value of all estimates produced as part of an impact analysis. Put simply: We replace the “no-treatment” counterfactual condition (i.e., children who were not exposed to an afterschool program) with low-implementation conditions (e.g., children who were exposed to lower-quality instructional practices in an afterschool program) in order to describe the impact of optimal implementation on child outcomes (e.g., socio-emotional skill change, equity effects). Said again: The “control group” in our impact model is any quality profile, subgroup configuration, or pathway (e.g., low-quality practices profile) that is contrasted with an optimal “treatment” group (e.g., high-quality practices profile).[1]

The “Analytic Tools” section of White Paper 3 provides an introductory discussion of Q-ODM impact models for student skill and equity outcomes. Also, check out this UK impact evaluation.

Now, let’s talk about three reasons why our approach is a cost-effective choice for CEOs seeking evidence about impact and equity outcomes:

Lots of Reality-Based Estimates that Analogize to Action. Our point about cost effectiveness is this: Every estimate produced in this impact model is useful. When coupled with QTurn measures, Q-ODM impact estimates are interpretable in terms of specific adult and child behaviors and contexts. This means there is a direct analogy from the meaning encoded in the data to the meaningful teacher and student behavior that occurs in the classroom – a direct analogy from data to reality. The data used to identify the lower-quality profile actually identify the lower-quality settings! The amount of skill change that occurs in the high-quality setting actually demonstrates what’s possible in the program; that is, it sets the benchmark for other programs.
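To make “every estimate is useful” concrete, here is a minimal sketch in Python of how the profile contrast might be computed. The column names (quality_profile, sel_pre, sel_post) and the toy values are our own illustrative assumptions, not QTurn’s actual measures or schema:

```python
import pandas as pd

# Hypothetical data: one row per child, with an observed quality-profile
# label and pre/post socio-emotional (SEL) skill scores. Column names
# and values are illustrative only.
df = pd.DataFrame({
    "quality_profile": ["high", "high", "low", "low", "high", "low"],
    "sel_pre":  [2.1, 2.4, 2.2, 2.0, 2.3, 2.5],
    "sel_post": [3.0, 3.1, 2.3, 2.1, 2.9, 2.6],
})

df["sel_change"] = df["sel_post"] - df["sel_pre"]

# Mean skill change within each observed quality profile. The high-quality
# mean is itself a usable estimate: a benchmark for what the program can do.
change_by_profile = df.groupby("quality_profile")["sel_change"].mean()

# The Q-ODM-style impact contrast: optimal vs. low implementation.
impact = change_by_profile["high"] - change_by_profile["low"]
print(change_by_profile)
print(f"Impact of high- vs. low-quality implementation: {impact:.2f}")
```

Note that the within-profile means are informative on their own, before any subtraction happens: the high-quality mean is the benchmark.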

An impact estimate implies a subtraction of one magnitude from another. What use is a counterfactual estimate if there is no such thing as a counterfactual condition? Doesn’t that just mean that we are subtracting an imaginary quantity from a real one?
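To put the contrast in symbols: in a conventional QED, the second term of the subtraction must be imputed for a condition in which nobody was observed, whereas in the Q-ODM contrast both terms are estimated from observed groups. A sketch in our own notation, with ΔY denoting pre-to-post skill change:

```latex
% Conventional counterfactual estimand: the second term is imputed,
% since the treated children were never observed untreated.
\text{Impact}_{\mathrm{QED}}
  = \mathbb{E}[\Delta Y \mid \text{treated}]
  - \mathbb{E}[\Delta Y \mid \text{counterfactual (unobserved)}]

% Q-ODM contrast: both terms come from observed implementation profiles.
\text{Impact}_{\mathrm{Q\text{-}ODM}}
  = \mathbb{E}[\Delta Y \mid \text{high-quality profile}]
  - \mathbb{E}[\Delta Y \mid \text{low-quality profile}]
```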

Using Natural Groupings to Address Threats to Valid Inference. Its not just usefulness of estimates (consequential validity) but, we argue, a more valid way to rule out primary threats to validity of inference that the treatment caused an effect. Two points: The children in the low-quality group are more likely to be similar to the kids in the high-quality group for all of the right reasons (i.e., SEL histories) that are missed by most efforts at matching individuals or groups using demographic and education data.

Second, the threat that families in one group have more education-relevant resources (e.g., SEL histories) than families in the other group plays out in two ways. When families have unmeasured resources before the child attends, we are talking about selection effects. When families use those unmeasured resources during the program intervention, we are talking about history effects. We argue, and present evidence, that the Q-ODM method better addresses these threats to valid inferences about impact than the pernicious and unethical use of race/ethnicity and social-address variables as covariates – pretended “controls” – in linear models.
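As an illustration of grouping on observed practice patterns rather than adjusting for demographic covariates, here is a minimal sketch. It uses k-means clustering as a generic stand-in for profile identification; QTurn’s actual pattern-centered procedure is not specified here, and the ratings are simulated:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical point-of-service quality ratings (rows = program offerings,
# columns = observed instructional practices on a 1-5 scale). These
# dimensions stand in for whatever a quality rubric actually measures.
rng = np.random.default_rng(0)
ratings = np.vstack([
    rng.normal(4.2, 0.3, size=(20, 3)),  # higher-quality settings
    rng.normal(2.4, 0.3, size=(20, 3)),  # lower-quality settings
])

# Group settings by their observed practice patterns -- not by children's
# race/ethnicity or other social-address variables entered as "controls".
profiles = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ratings)

# The resulting profile labels define the treatment contrast directly:
# the data that identify the lower-quality profile ARE the lower-quality settings.
print(np.bincount(profiles))
```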

Capturing Full Information from Small Samples. Our method is designed to detect differences in the ways things go together in the real world, in or around the average expectable environments characterizing human development and socialization (cf. Magnusson, 2003). This in-the-world structure is a constraint on the states that can and cannot occur during development. In the pattern-centered frame, small cell sizes indicate the sensitivity of the approach, and relatively low Ns are not necessarily a problem for the distribution-free statistical tests used in pattern-centered impact analyses.
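For example, an exact-style permutation test of the profile contrast makes no normality or large-sample assumptions, so it remains usable at small Ns. A minimal sketch with toy data (the specific distribution-free tests used in Q-ODM analyses may differ):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical skill-change scores for two small profile groups.
high_q = np.array([0.9, 0.7, 1.1, 0.8, 1.0])
low_q  = np.array([0.2, 0.4, 0.1, 0.3])

observed = high_q.mean() - low_q.mean()

# Permutation test: repeatedly shuffle group labels and recompute the
# contrast to build the null distribution, with no distributional assumptions.
pooled = np.concatenate([high_q, low_q])
n_high = len(high_q)
n_perm = 10_000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    diff = pooled[:n_high].mean() - pooled[n_high:].mean()
    if diff >= observed:
        count += 1

p_value = (count + 1) / (n_perm + 1)  # add-one correction for an exact-style estimate
print(f"Observed contrast: {observed:.2f}, one-sided p = {p_value:.4f}")
```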

 

[1] We realize that others would claim that our designs are not QED at all. We delve deeper into the rationales used to disqualify “groups that receive different dosages of a treatment” from being considered “control groups” within the context of experimental design in White Paper 4.

 
