Frustration and confusion often arise when practitioners require detailed information about program processes for continuous quality improvement (CQI), while policy-makers require evidence of outcome effects for accountability and funding. Impact studies are often prioritized over continuous improvement studies, yet they seldom provide information that practitioners can use. Consistent with the conference theme, this situation fosters a worldview that emphasizes the limitations of social science methods for achieving practical purposes and accepts arbitrary decision making (i.e., Type-2 error) in the absence of better evidence and arguments.
This paper describes a generic quality-outcomes design (Q-O design): a performance measurement methodology that supports concurrent, integrated impact evaluation and continuous improvement within the same organization; that is, measure once, cut twice.
Smith, C., Peck, S., Roy, R., & Smith, L. (2019). Measure once, cut twice: Using data for continuous improvement and impact evaluation in education programs. Paper presented at the annual meeting of the American Educational Research Association, Toronto, ON, Canada.