Evaluation-Specific Methodology


Dr. Michael Scriven – Distinguished Professor, Claremont Graduate University and Co-Director, Claremont Evaluation Center
Tuesday, April 29, 2014
Location: 4410 Ellsworth Hall
Time: Noon – 1pm

Professional, scientifically acceptable evaluation requires a wide range of competencies from the social science methodology toolkit, but that’s not enough. Unlike the old-time social scientists, you’ve got to get to an evaluative conclusion—that’s what it means to say you’re doing evaluation—and to do that you apparently have to have some evaluative premises. Where do those come from, and how can you validate them against attack by people who don’t like your conclusion? The way most evaluators—and the texts—do this is by relying on common agreement or intuitions about what value claims are correct, but of course that won’t work when there’s deep disagreement, e.g., about abortion, suicide hot lines, creationism in the science curriculum, ‘natural’ medicine, healthcare for the poor, torture, spying, and war. Evaluation-specific methodology covers the selection and verification/refutation of all value claims we encounter in professional evaluation; how to integrate them with data claims (and data syntheses) by inferences from and to them; and how to represent the integrated result in an evaluation report. Important sub-topics include rubrics, needs assessment, the measurement of values, crowd-sourced evaluation, and the special case of ethical value claims. (Not to mention a completely new philosophy of science.)

Racial Equity Lens for Culturally Sensitive Evaluation Practices


Video

Mr. Willard Walker, Senior Policy Consultant, Public Policy Associates; Dr. Paul Elam, Project Manager, Public Policy Associates; and Dr. Christopher Dunbar, Professor, MSU
Wednesday, April 16, 2014
Location: 4410 Ellsworth Hall
Time: Noon – 1pm

We believe that this work will play a critical role in efforts to identify particular aspects of diversity, inclusion, and equity, and their relevance to an evaluation process.  We have intentionally embedded these concepts to emphasize the importance of racial and cultural proficiency throughout analysis and evaluation processes.  In this way, a researcher is moved away from simply assessing outcomes to recognizing the historical events that play a part in maintaining the adverse conditions that exist in underserved communities.  We do our work through a lens that acknowledges white privilege and structural racism as fundamental forces in understanding how conditions came to be as they are today.

Summary of Evaluation Through a Culturally Responsive and Racial Equity Lens

Draft Racial Equity Self-Assessment for Evaluators

Draft Template for Racial Equity Evaluation

CANCELED: The Detroit Sexual Assault Kit (SAK) Action Research Project: Developmental Evaluation in Practice


THIS EVENT WILL BE RESCHEDULED FOR FALL 2014

Dr. Rebecca Campbell—Professor of Ecological-Community Psychology, MSU, and Recipient of AEA’s 2013 Outstanding Evaluation Award
Wednesday, April 9, 2014
Location: 4410 Ellsworth Hall
Time: Noon – 1pm

In 2009, 11,000+ sexual assault kits (SAKs) were discovered in a Detroit Police Department property storage facility, most of which had never been forensically tested and/or investigated by the police.  In 2011, a multi-stakeholder group convened to develop long-term response strategies, including protocols for notifying victims whose kits had been part of the backlog.  In this presentation, I will describe the process by which we used developmental evaluation theory to create a mixed-methods evaluation of this initiative. This presentation will summarize the numerous challenges (psychological, ethical, and legal) we have faced attempting to locate survivors so many years later to evaluate the efficacy of the protocols developed by the multidisciplinary team.

Assessing Cultural Competence in Evaluation


Video

Slides

Worksheet

Dr. Jody Brylinsky—Professor of Health, Physical Education, and Recreation, WMU
Wednesday, April 2, 2014
Location: 4410 Ellsworth Hall
Time: Noon – 1pm

Cultural competence is the ability to function effectively in diverse cultures. This presentation will attempt to create an awareness of how cultural competence affects evaluation in subtle, often implicit ways. Participants will be asked to reflect on how their particular culture may be biasing their evaluation practices. Understanding one’s personal level of cultural bias is important because it has an impact on

  • Academic and interpersonal skills
  • Understanding and appreciation of cultural differences & similarities
  • Willingness to draw on community-based values, traditions, customs
  • Valuing diversity both between and within groups

Guidelines for Developing Evaluation Recommendations


Slides

Dr. Lori Wingate—Assistant Director, The Evaluation Center, WMU
Wednesday, March 26, 2014
Location: 4410 Ellsworth Hall
Time: Noon – 1pm

Michael Scriven has said that “an evaluation without recommendations is like a fish without a bicycle.” Nonetheless, most evaluation clients expect an evaluator to produce recommendations for program improvement. In this session, I will present guidelines for evaluation recommendations, covering their development, delivery, and follow-up.  Most of the session will be discussion—participants are invited to share their trials, tribulations, successes, and lessons learned with regard to providing recommendations based on evaluation findings.

ASSESS: Web-based Assessment Instrument Selection for Engineering Education


Video

Handout

Dr. Denny Davis—Emeritus Professor, Washington State University
Wednesday, March 19, 2014
Location: 4410 Ellsworth Hall
Time: Noon – 1pm

Engineering educators and scholars of student learning are challenged to improve and document student achievement for program accreditation, grading, and institutional accountability. The difficulty of finding appropriate assessment instruments is exacerbated when engineering educators are untrained in educational assessment and evaluation professionals are unfamiliar with the specialized knowledge and professional skills of engineering. The Appraisal System for Superior Engineering Education Evaluation-instrument Sharing and Scholarship (ASSESS) addresses these challenges by providing educators and evaluation professionals with a web-based catalog of information about engineering education assessment instruments. ASSESS enables searches and comparisons to facilitate selection of instruments based on the constructs assessed, ABET (formerly Accreditation Board for Engineering and Technology) criteria, and technical and administrative characteristics.
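
As a rough illustration of the kind of faceted search a catalog like ASSESS supports, the sketch below filters a small, made-up set of instrument records by construct, ABET criterion, and administration time. The field names and example entries are hypothetical and are not drawn from the actual ASSESS database.

```python
# Hypothetical sketch of faceted instrument search; records and field names
# are invented for illustration, not taken from the ASSESS catalog.
from dataclasses import dataclass, field


@dataclass
class Instrument:
    name: str
    constructs: set = field(default_factory=set)     # e.g., {"teamwork"}
    abet_criteria: set = field(default_factory=set)  # e.g., {"3.5"}
    admin_time_min: int = 0                          # administration time


CATALOG = [
    Instrument("Team Skills Survey (hypothetical)",
               {"teamwork", "communication"}, {"3.5"}, 15),
    Instrument("Design Process Rubric (hypothetical)",
               {"design"}, {"3.2"}, 30),
]


def search(catalog, construct=None, abet=None, max_time=None):
    """Return instruments matching every filter the caller supplies."""
    results = []
    for inst in catalog:
        if construct and construct not in inst.constructs:
            continue
        if abet and abet not in inst.abet_criteria:
            continue
        if max_time is not None and inst.admin_time_min > max_time:
            continue
        results.append(inst)
    return results


if __name__ == "__main__":
    for inst in search(CATALOG, construct="teamwork", max_time=20):
        print(inst.name)
```

The point of the sketch is simply that each facet narrows the candidate list independently, which is what makes side-by-side comparison of the remaining instruments manageable.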

Improving the Design of Cluster Randomized Trials


Carl Westine—IDPE Student, WMU
Wednesday, March 12, 2014
Location: 4410 Ellsworth Hall
Time: Noon – 1pm

Evaluators and researchers rely on estimates of parameter values, including effect sizes and variances (unconditional or conditional), to appropriately power cluster randomized trial designs.  Individual disciplines, including education, increasingly test interventions using more complex hierarchical linear model structures.  To improve the design of these studies, researchers have emphasized the development of precise parameter value estimates through meta-analyses and empirical research.  In this presentation, I summarize recent research on empirically estimating design parameters, with an emphasis on intraclass correlations and the percent of variance explained by covariates (R²).  I then demonstrate how these parameter values are becoming increasingly accessible through software.  Using Optimal Design Plus, I show how evaluators and researchers can use these parameter value estimates to improve the design of cluster randomized trials.
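
To make the role of these design parameters concrete, the sketch below computes the minimum detectable effect size (MDES) for a balanced two-level cluster randomized trial using the standard formula that underlies power-analysis tools such as Optimal Design Plus. The specific values (ICC = 0.15, cluster-level R² = 0.50, 40 clusters of 25) are illustrative assumptions, not figures from the talk.

```python
# Minimal sketch of the standard two-level MDES calculation for a balanced
# cluster randomized trial with treatment assigned at the cluster level.
# All parameter values below are illustrative assumptions.
from scipy.stats import t


def crt_mdes(n_clusters, cluster_size, icc, r2_between=0.0, r2_within=0.0,
             n_cluster_covariates=0, p_treat=0.5, alpha=0.05, power=0.80):
    """Minimum detectable effect size (in standard deviation units)."""
    # Degrees of freedom: clusters minus intercept, treatment indicator,
    # and any cluster-level covariates.
    df = n_clusters - n_cluster_covariates - 2
    # Multiplier combining the two-tailed alpha and the target power.
    multiplier = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    # Variance of the standardized treatment-effect estimate: between- and
    # within-cluster components, each reduced by the variance explained by
    # covariates at that level.
    var = (icc * (1 - r2_between) / (p_treat * (1 - p_treat) * n_clusters)
           + (1 - icc) * (1 - r2_within)
           / (p_treat * (1 - p_treat) * n_clusters * cluster_size))
    return multiplier * var ** 0.5


if __name__ == "__main__":
    # Hypothetical design: 40 schools of 25 students, ICC = 0.15, and one
    # cluster-level covariate (e.g., a pretest mean) explaining 50% of the
    # between-cluster variance.
    print(round(crt_mdes(40, 25, icc=0.15, r2_between=0.50,
                         n_cluster_covariates=1), 3))
```

Because the between-cluster term usually dominates, even modest changes in the assumed ICC or cluster-level R² shift the detectable effect size noticeably, which is why precise empirical estimates of these parameters matter for planning.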


“Expectations to Change” (E2C): A Participatory Method for Facilitating Stakeholder Engagement with Evaluation Findings


Handout

Dr. Adrienne Adams—Assistant Professor of Ecological-Community Psychology, Michigan State University
Wednesday, February 26, 2014
Location: 4410 Ellsworth Hall
Time: Noon – 1pm

Program evaluators strive to conduct evaluations that are useful to stakeholders. To achieve this goal, it is critical to engage stakeholders in meaningful ways throughout the evaluation. The “Expectations to Change” (E2C) process is an interactive, workshop-based method for engaging stakeholders with their evaluation findings as a means of promoting evaluation use and building evaluation capacity. In the E2C process, stakeholders are guided through establishing standards, comparing the actual results to those standards to identify areas for improvement, and then generating recommendations and concrete action steps to implement desired changes. In this presentation, I will describe the process, share findings from an evaluation of its effectiveness, and discuss its general utility in evaluation practice.

The Process of Evaluating Grant Applications at the National Institutes of Health (NIH)


Dr. Stephen Magura—Director of The Evaluation Center, WMU
Wednesday, February 19, 2014
Location: 4410 Ellsworth Hall
Time: Noon – 1pm

The National Institutes of Health (NIH) grant application evaluation process is considered a model for the field. The presentation will describe this process and discuss recent changes that were made in the rating scheme in response to perceived shortcomings.  The effect of these changes on the evaluation of grants will be discussed and critiqued. (The presenter is a permanent member of an NIH grant review committee and has been reviewing NIH grants for 25 years.)

International Large-Scale Assessments: TIMSS and PIRLS – A Guide to Understanding Learning Outcomes Locally and Globally


Video

Slides

Handout

Dr. Hans Wagemaker—Executive Director of the International Association for the Evaluation of Educational Achievement (IEA)
Thursday, February 13, 2014
Location: Bernhard Center 204
Time: Noon – 1:30pm

The last three decades have witnessed considerable growth in and development of international large-scale assessments.  Despite the wealth of data collected through the Progress in International Reading Literacy Study (PIRLS) and the Trends in International Mathematics and Science Study (TIMSS) research programs, much of the media attention continues to focus on international rankings. This presentation will provide an overview of the International Association for the Evaluation of Educational Achievement’s (IEA) TIMSS and PIRLS, with a view to providing participants with an understanding of the purpose, practices, and challenges associated with international large-scale assessments.  Using examples from the US and other countries, it will address some of the common concerns expressed about the validity of these assessments and their use in understanding learning outcomes locally and globally.

For more information visit http://www.iea.nl/.