November 5, 2009
Alliance evaluators shared recent study findings in five sessions at the annual conference of the American Evaluation Association in Orlando, Florida, November 9-14, 2009 (visit www.eval.org for conference details). Summaries of the presentations appear below.
Rethinking Students' Educational Expectations in the Context of Evaluation
Presented by Elise Laorenza and Stephanie Feger
This study examined whether after-school program participation narrowed the gap between minority students' educational aspirations and expectations. Results suggest that participants' educational expectations increased to more closely match their aspirations.
Evaluating Short-Term Impacts on Student Achievement: What Does Motivation Tell Us?
Presented by Elise Laorenza and Stephanie Feger
This roundtable discussion focused on how stakeholders received motivation as an indicator of impact and on the usefulness of this measure within the evaluation context. Evidence from two evaluation studies was shared.
Capturing Context in a Multi-site Evaluation: An Examination of Magnet Program Impact in Four School Districts
Presented by Amy Burns
This roundtable presentation described the statistical models and tools used in the rigorous evaluation of four magnet programs, as well as the nuances in district-level contexts that influenced study design. The presentation also identified the benefits and challenges of this multi-faceted approach and discussed ways these challenges might be addressed.
Context Matters: How Context Impacts Implementation of the Federal Smaller Learning Communities Program Across Sites
Presented by Beth Ann Tek
Flexibility in federal program guidelines, along with needs that vary from district to district, contributes to the varied implementation of the SLC program across sites. Three case studies were presented, highlighting the role of context, the methods used to capture this important element, and its impact on implementation.
Implications for Evaluation: Barriers to Implementation Fidelity in a Randomized Controlled Trial
Presented by Kimberley Sprague
This panel session presented four large-scale federal studies examining intervention effects on struggling readers, followed by a discussion, led by experts in the field, of the challenges in implementation and the shortcomings in its measurement. Presenters discussed the types of fidelity issues they faced, the efforts made to correct those issues, and the analytic techniques used to adjust for the potential biases they introduced, allowing the studies to leverage more information from their designs than would otherwise have been possible.