Increasingly, funders such as the U.S. Department of Education are calling for rigorous evaluation designs. The Education Alliance conducts experimental evaluations using methodologies such as random assignment with control groups, growth modeling, and hierarchical linear modeling. Our evaluations use multiple measures to assess fidelity of implementation and establish causal links between programs and impacts.
Alliance staff are experienced in fielding randomized controlled trial evaluation studies.
Recent experience has focused on a federally funded national initiative to investigate
interventions that improve literacy. This and other studies build on the Alliance's
work in adolescent literacy and high school redesign, as well as more recent initiatives
in early childhood literacy. Alliance evaluators work with school staff and
program providers to clarify program specifications and monitor program implementation,
ensuring the integrity of evaluation data collection and analysis. Valid and
reliable measures are obtained from extant standardized assessments or collected directly
by trained Alliance staff. Evaluators use these measures to answer the central evaluation
questions directly: whether program interventions work and what effects they have on students.
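To make the logic of random assignment concrete, the following is a minimal sketch, with hypothetical function names and scores, of how a roster might be split into treatment and control groups and a simple impact estimate computed. It illustrates the general technique only, not the Alliance's actual analysis code.

```python
import random
import statistics

def randomly_assign(student_ids, seed=42):
    """Randomly split a roster into treatment and control groups."""
    ids = list(student_ids)
    random.Random(seed).shuffle(ids)  # seeded for a reproducible assignment
    half = len(ids) // 2
    return ids[:half], ids[half:]  # (treatment, control)

def estimate_impact(treatment_scores, control_scores):
    """Under random assignment, the difference in group means is an
    unbiased estimate of the program's average effect."""
    return statistics.mean(treatment_scores) - statistics.mean(control_scores)

# Hypothetical roster and post-test scores, for illustration only
treatment, control = randomly_assign(range(100))
effect = estimate_impact([72, 75, 71], [68, 70, 69])
```

In practice the impact estimate would come with a standard error and significance test, and designs like those described above often layer growth modeling or hierarchical linear modeling on top of this basic comparison.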
Examples of recent projects include:
Striving Readers Program: Evaluation
The Springfield-Chicopee, MA, school districts contracted with The Education Alliance to assess the impact of the U.S. Department of Education's Striving Readers program on student achievement, using a randomized controlled trial in five high schools.
Ready to Learn Providence: Evaluation
Ready to Learn Providence (R2LP) contracted with The Education Alliance to conduct an initial evaluation of R2LP's work. R2LP is an initiative to improve early learning opportunities for all children in Providence, RI, while focusing specifically on the eight linguistically and culturally diverse neighborhoods exhibiting the greatest need.
When experimental methodologies are not feasible, Alliance evaluators employ
quasi-experimental designs to examine program impact and implementation. Again,
collaboration is key to fitting the evaluation design to the local context and
available data. Primary data, collected through surveys developed and field-tested
by the Alliance for validity and reliability or through extant surveys and
interview and observation protocols established in the research literature,
are combined with secondary data sources to provide the outcome and context
measures that rigorous evaluation studies require. The Alliance
has applied matched samples, interrupted time series, and other quasi-experimental
design options to examine school, teacher, or learner effects in evaluations
of magnet schools, smaller learning communities, bilingual and ESL programs,
after-school and summer program initiatives, comprehensive school reform, Early
Reading First, and professional development programs such as Teaching American
History, Math and Science Partnerships, and NIH and NSF science, technology,
engineering, and math (STEM) initiatives.
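As one example of the quasi-experimental options named above, an interrupted time series design fits the pre-intervention trend, projects it forward, and measures how far post-intervention outcomes depart from that projection. A minimal sketch with hypothetical data and function names (illustrative of the technique, not Alliance code):

```python
def fit_trend(times, values):
    """Ordinary least squares fit of values = a + b * time."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    b = (sum((t - mt) * (v - mv) for t, v in zip(times, values))
         / sum((t - mt) ** 2 for t in times))
    return mv - b * mt, b  # (intercept, slope)

def its_effect(pre, post):
    """Interrupted time series effect estimate: the mean deviation of
    post-intervention outcomes from the projected pre-intervention trend.

    pre, post: lists of (time, outcome) pairs.
    """
    a, b = fit_trend(*zip(*pre))
    return sum(v - (a + b * t) for t, v in post) / len(post)

# Hypothetical test scores: a steady pre-program trend, then a jump
pre = [(0, 10.0), (1, 11.0), (2, 12.0), (3, 13.0)]
post = [(4, 16.0), (5, 17.0)]
effect = its_effect(pre, post)  # +2.0 above the projected trend
```

A full analysis would also model a possible slope change after the intervention and account for serial correlation; this sketch shows only the core counterfactual-projection idea.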
Examples of recent projects include:
Advancing Rhode Island Science Education (ARISE)
ARISE is a Brown University program funded by the National Institutes of Health through its Science Education Partnership Awards program. ARISE is designed to engage students in inquiry-based approaches to learning about science, bring cutting-edge research into the classroom, and improve students' understanding of the relevance of science to everyday life. The Education Alliance's evaluation design for this program includes components to examine critical outcomes of the ARISE program, both developmental and performance-driven.
Implementing for Success: An Analysis of Five CSR Models
The U.S. Department of Education contracted with The Education Alliance to evaluate the implementation of six widely used comprehensive school reform (CSR) models, along with changes in teacher practice and student performance, in 90 predominantly low-performing schools across the eastern United States.
Magnet School Assistance Programs: Evaluation
Massachusetts, Connecticut, New York, Tennessee, Florida
American Education Solutions and The Education Alliance have partnered for over eight years to conduct evaluations of the quality and effectiveness of federally funded Magnet School Assistance Programs (MSAP). Evaluations conducted through this partnership target educational opportunities designed to benefit 45,000 minority students in diverse school districts from New York to Florida.
Rhode Island Technology Enhanced Science Program (RITES) Evaluation
The Education Alliance is leading evaluation efforts to assess the Rhode Island Technology Enhanced Science (RITES) program, a major statewide initiative to improve middle and high school science and mathematics education. Funded by a $12.5 million award from the National Science Foundation's Math and Science Partnership program, RITES builds on extant initiatives across the state focused on inquiry-based science through a rollout of professional development, online resources, research-based content, and partnered support. The evaluation design includes quasi-experimental approaches as well as qualitative data collection to continuously inform program staff about all aspects of program implementation and improvement.
Teaching American History: Evaluation
Boston, MA; Salem and Sussex, MA; Danbury, CT; Bronx, NY; and Jamestown, NY
School districts in Massachusetts, Connecticut and New York along with the Southeastern Massachusetts Teaching American History Consortium contracted individually with The Education Alliance to conduct evaluations of their Teaching American History programs.