2007 – 2008

September 18, 2007
Watch the video
Title: Evaluation of the Federal Railroad Administration’s Confidential Close Call Reporting System

Presenters: Jonny Morell – TechTeam Government Solutions, Inc. (Ann Arbor, MI) & Liesel Ritchie – Senior Research Associate, The Evaluation Center, WMU
Abstract: TechTeam Government Solutions and WMU's Evaluation Center are cooperating on a five-year evaluation of the Federal Railroad Administration's Confidential Close Call Reporting System (C3RS). The methodology calls for quantitative analysis of many types of data over time and across multiple railroads, along with qualitative analysis of C3RS' implementation and execution. Process, outcome, and sustainability dimensions are included throughout. Rich links with all stakeholders are nurtured to assist with evaluation implementation and knowledge transfer.

September 25, 2007
Watch the video
View the slides
Title: The Heifer Hoofprint Model: An Approach to Impact Evaluations of International Development Projects

Presenter: Thomaz Chianca – Interdisciplinary Evaluation Doctoral Student, WMU
Abstract: Since 2005, The Evaluation Center has been working with Heifer International as an independent agency to evaluate its efforts focused on ending hunger and caring for the environment throughout the world. So far, external evaluation teams comprising mostly professors and Ph.D. students from the Interdisciplinary Ph.D. in Evaluation program and Evaluation Center staff have evaluated 74 development projects supported by Heifer in Albania, Cameroon, China, Kenya, Peru, Tanzania, Thailand, Nepal, and the South-Central U.S. A specific approach – the Heifer Hoofprint Model – was devised for these evaluations and has been refined over the years. During this Eval Café, participants will have the opportunity to learn about the new features of the evaluation approach and the major challenges in implementing them. They will also be invited to contribute ideas to improve the process for future years. The session will be facilitated by the evaluation manager; members of the evaluation teams responsible for the site visits to different countries may also contribute to the discussions.

October 2, 2007
Watch the video
View the slides
Title: Getting to Good: Evaluating a Personnel Selection System

Presenter: Wes Martz, Vice President, Marketing, Kadant Inc. (Three Rivers, MI) and Interdisciplinary Evaluation Doctoral Student, WMU
Abstract: The personnel and industrial psychology literature is filled with research on the validity of interviews, assessment tests, criterion measures, and a variety of related subtopics. However, less attention has been given to evaluating the essential components of a selection system and the selection system's link to organizational performance. A Personnel Selection System Maturity Model that defines a "good" system and its links to organizational performance will be presented to illustrate the practical elements associated with evaluating a personnel selection system.

October 9, 2007
Watch the video
View the slides
Title: Making the Most of Your Thesis or Dissertation: Publication Strategies for Graduate Students and Recent Graduates in Evaluation and the Social Sciences

Presenter: Chris L. S. Coryn – Interdisciplinary Evaluation Ph.D. Program Director, WMU
Abstract: Nearly 90 percent of papers published in academic journals are never cited. In fact, as many as 50 percent are never even read by anyone other than their authors, referees, and journal editors. The purpose of this Evaluation Café is to provide useful information and strategies for current graduate students and recent graduates on the process of (re)structuring a dissertation or thesis into a publishable journal or book manuscript, getting it published, and, perhaps most important, getting it noticed. Other topics that will be discussed include targeting a journal, submitting a manuscript, understanding the review process, deciphering the editor's letter, revising and resubmitting the manuscript, and regrouping after rejection.

October 16, 2007

Watch the video
Title: Evaluating the Kalamazoo Promise

Presenters: Gary Miron – Chief of Staff, & Stephanie Evergreen – Project Manager, The Evaluation Center, WMU
Abstract: The presentation will provide an overview and explanation of the overall approach and design for the evaluation of the Kalamazoo Promise, a scholarship program that provides up to four years of tuition and fees at any two-year or four-year public college or university in Michigan for students who graduate from Kalamazoo Public Schools. The methods for the collection, entry, and analysis of data will also be discussed. Preliminary results from year 1 of the evaluation may also be included in the presentation if they are first cleared by the district.

October 23, 2007
Watch the video
View the slides
Title: Sustainability Evaluation: A Checklist Approach

Presenter: Daniela Schroeter – Interdisciplinary Evaluation Doctoral Student, WMU
Abstract: Checklists in evaluation are tools to support practitioners in the design, implementation, and metaevaluation of evaluations. They do not quite represent methodologies or theories, but they commonly incorporate a set of components, dimensions, criteria, tasks, and strategies that are critical in evaluation. The development of such checklists requires focusing the task; gathering all relevant information; and categorizing, classifying, verifying, field-testing, and evaluating the information compiled. This Evaluation Café is intended to introduce an initial draft of a sustainability evaluation checklist. After key concepts are summarized, the checklist will be shared and discussed. Finally, attendees will be asked to provide critical feedback.

October 30, 2007
Watch the video
View the slides
Title: Summative Confidence: An Introductory Overview

Presenter: Cristian Gugiu – Interdisciplinary Evaluation Doctoral Student, WMU
Abstract: One of the cornerstones of methodology is that "a weak design yields unreliable conclusions." While this principle is certainly true, the constraints of conducting evaluations in real-world settings often necessitate the implementation of less-than-ideal designs. To date, no quantitative or qualitative method exists for estimating the impact of sampling, measurement error, and design on the precision of an evaluative conclusion. Consequently, evaluators formulate recommendations and decision makers implement program and policy changes without full knowledge of the robustness of an evaluative conclusion. In light of the billions of dollars spent annually on evaluations and the countless millions of lives that are affected, the impact of decision error can be disastrous. This paper will introduce an analytical method for estimating the degree of confidence that can be placed in an evaluative conclusion and discuss the factors that affect the precision of a summative conclusion.

November 13, 2007
Watch the video
Title: The Adoption of an Arts Integration Program for Whole School Reform: Exploring the Line between Evaluation and Research

Presenters: Allison Downey – Assistant Professor, Elementary Education, & Claudia Ceccon – Educational Leadership, Research, and Technology Doctoral Student, WMU
Abstract: The presenters will share preliminary findings from a three-year study of the adoption of the A+ Program (an arts integration and interdisciplinary curriculum approach) for whole school reform by an elementary and middle school. After describing the research project, entitled “Study of Arts Integration Practices and School Climate at A Elementary & B Middle School,” its design, methodology and results, the presenters will address the distinction between their goals and approaches as researchers and those of evaluators. Finally, the audience will be invited to discuss the line between research and evaluation from a variety of perspectives.

November 21, 2007
Watch the video
View the slides
Title: RealWorld Evaluation: Working Under Budget, Time, Data, and Political Constraints

Presenter: Jim Rugh – Consultant and former Coordinator for Design, Monitoring and Evaluation, CARE International
Abstract: After learning good evaluation theory, you may face a rude awakening when you begin practicing in the real world – a world where all too often there is no baseline, there can be no comparison group, time is short, the budget is very limited, and political pressures abound, yet your clients expect "rigorous impact evaluation"! What's a conscientious evaluator to do?! Jim Rugh, who has led workshops on the subject at American Evaluation Association conferences and in many countries, will present a very brief overview of the RealWorld Evaluation approach (see the book, Sage, 2006) and then facilitate a discussion on the issues raised.

November 27, 2007
Title: Evaluation Café Book Group: Evaluation Ethics for Best Practice

Facilitator: Lori Wingate – Assistant to the Director, The Evaluation Center, WMU

December 4, 2007
Watch the video

View the slides

Title: Controlling for Selection and Endogeneity Biases in Estimating Alcoholics Anonymous' Outcomes in Project MATCH

Presenter: Stephen Magura – Director, The Evaluation Center, WMU
Abstract: There is a considerable research literature that apparently shows a relationship between Alcoholics Anonymous (AA) participation and reduced drinking or abstinence. The studies are almost universally correlational in nature, however, including those with longitudinal data. Randomized controlled trials of mutual aid are rare and extremely difficult. This literature is known to be susceptible to two main kinds of artifacts. One is selection bias, e.g., where different types of people choose to participate or not participate in AA. The second is endogeneity or "reverse causation," i.e., the possibility that reducing or stopping drinking leads to increased or sustained AA participation. It would be important to control for these possible biases to determine the "true" magnitude of effect and causal direction in the relation between AA participation and drinking behavior. To address these issues, the presentation will outline a plan to conduct a secondary analysis of AA participation and drinking behavior in a large national alcoholism treatment database, Project MATCH. Three statistical techniques – propensity score matching, instrumental variable analysis, and structural equation modeling with cross-lagged panels – that are designed to control for possible selection and endogeneity biases in correlational data will be discussed in relation to Project MATCH.
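
For readers unfamiliar with the first of these techniques, the following is a minimal sketch of 1:1 nearest-neighbor propensity score matching on simulated data. It is not drawn from the presentation or from Project MATCH itself; the covariates, effect sizes, and variable names are all invented for illustration.

```python
# Illustrative sketch of propensity score matching on simulated data.
# NOT the presenters' actual Project MATCH analysis; covariate names,
# the treatment assignment model, and the built-in effect are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Simulated baseline covariates (hypothetical stand-ins).
severity = rng.normal(size=n)      # e.g., baseline drinking severity
motivation = rng.normal(size=n)    # e.g., motivation for change
X = np.column_stack([severity, motivation])

# Simulated "treatment": AA participation, more likely for motivated clients,
# so naive comparisons of participants vs. non-participants are biased.
aa = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * motivation - 0.2 * severity))))

# Simulated outcome with a built-in AA effect of +5 points.
outcome = 50 + 5 * aa + 4 * motivation - 3 * severity + rng.normal(scale=5, size=n)

# Step 1: estimate propensity scores, P(AA participation | covariates).
ps = LogisticRegression().fit(X, aa).predict_proba(X)[:, 1]

# Step 2: match each participant to the non-participant with the closest
# propensity score (1:1 nearest-neighbor matching, with replacement).
treated = np.where(aa == 1)[0]
control = np.where(aa == 0)[0]
matches = control[np.abs(ps[control][None, :] - ps[treated][:, None]).argmin(axis=1)]

# Step 3: estimate the effect on the treated as the mean difference
# in outcomes across matched pairs.
att = (outcome[treated] - outcome[matches]).mean()
print(f"Estimated effect of AA participation: {att:.2f}")
```

The estimate should land near the simulated effect of 5 because matching on the propensity score balances the covariates that drive both participation and the outcome; in real data, unmeasured confounding and reverse causation can still remain, which is why the abstract also mentions instrumental variables and cross-lagged structural equation models.
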

January 22, 2008
View the handout
Title: Evaluating and Synthesizing the Research on School Choice

Presenters: Gary Miron – Chief of Staff, Stephanie Evergreen – Project Manager, & Jessica Urschel – Research Assistant, The Evaluation Center, WMU
Abstract: This presentation will summarize the methods and techniques the presenters are using to synthesize the extensive body of empirical research on school choice. Six distinctive types of school choice are considered: (1) vouchers/tuition tax credits, (2) charter schools, (3) magnet schools and interdistrict choice programs, (4) intradistrict choice programs, (5) cyber schools, and (6) homeschooling. During the presentation, specific attention will be given to the following: selection or inclusion criteria, the weighting scheme used to weight outcomes based on the quality and scope of the empirical studies, and graphical techniques for presenting findings.

January 29, 2008
Watch the video
Title: Building Strong Collaboration Between an Academic Institution and an International NGO through Evaluation Use: The Case of Heifer International and The Evaluation Center

Presenter: Tererai Trent – Deputy Director for Planning, Monitoring & Evaluation, Heifer International (Little Rock, AR) & Interdisciplinary Evaluation Doctoral Student, WMU
Abstract: The presentation will summarize, from a client's perspective, the effectiveness and usefulness of the findings and results of an evaluation conducted by the WMU Evaluation Center. Heifer International is an international NGO working mostly in developing countries and has been working with the Center to conduct impact evaluations on three continents. The discussion will focus on the use of the evaluation results at various levels of the organization. Five distinct users within the organization will be considered: (1) Heifer International's marketing and fundraising division, (2) donors, (3) Heifer International's senior leadership, (4) international country offices, and (5) impactees, indicating the pragmatic use of the evaluation findings at the grassroots community level. Perspectives on The Evaluation Center's methodology and the opportunities that can be realized when an academic institution and an international NGO collaborate for institutional change will be explored.

February 5, 2008
Watch the video

View the slides

Title: How to Publish an Article in the American Journal of Evaluation (and Similar Journals): Guidance for Graduate Students and First-Time Authors

Presenter: Robin Miller – Associate Professor of Psychology, Michigan State University, & Editor, American Journal of Evaluation
Abstract: Publishing in peer-refereed journals is essential to success in many research- and evaluation-oriented careers. Publishing in refereed journals also provides an important means to influence evaluation theory, method, and practice. Despite the centrality of publishing to evaluation scholarship, graduate students may have limited opportunities to learn about journals and how their editorial processes work while completing their studies. As part of an effort by the AJE editorial board to encourage paper submissions from students and junior colleagues, I will provide an overview of how the journal's editorial process works and offer advice on how to succeed in publishing work in refereed journals.

February 12, 2008
Watch the video
Title: Evaluation of Team Performance in Medical Simulation

Presenter: Amy Gullickson – Interdisciplinary Evaluation Doctoral Student, WMU
Abstract: Every year at least 40,000 people in the U.S. are victims of medical errors resulting in severe injury or death. The TriCorridor Center of Excellence in Simulation Research, housed at WMU’s College of Aviation, is attempting to reduce that number. Our efforts involve the use of simulations conducted in hospitals, which re-create real-life situations in order to evaluate the performance of medical professionals in terms of teamwork skills rather than technical medical skills. This presentation will discuss the challenges of creating a taxonomy, developing scales, and training raters for this project.

February 19, 2008
View the slides
Watch the video

Title: The Basic Characteristics and Experimental Designs of Group Randomized Trials Funded by the Institute of Education Sciences

Presenter: Jessaca Spybrook – Senior Research Associate, The Evaluation Center, WMU
Abstract: In federally sponsored education research, randomized trials, particularly those that randomize entire classrooms or schools, have been deemed the most effective method for establishing strong evidence of the effectiveness of an intervention. Consequently, there has been a dramatic increase in the number of federally funded evaluations of educational programs that utilize experimental methods such as group randomized trials. In this Evaluation Café, I present the rationale behind the federal shift towards randomized trials for establishing evidence of the effectiveness of interventions in education. In addition, I describe the basic characteristics and study designs of the first wave of group randomized trials funded by the Institute of Education Sciences.

February 26, 2008
Watch the video
Title: Concepts Underlying Evaluating Organizational Effectiveness

Presenter: Wes Martz – Vice President, Marketing, Kadant Inc. (Three Rivers, MI) and Interdisciplinary Evaluation Doctoral Student, WMU
Abstract: All evaluations of organizational effectiveness consist of two components: facts and values. However, the value-based nature of the effectiveness construct is generally ignored in models of organizational effectiveness. Although numerous suggestions have been made to improve the assessment of organizational effectiveness, issues related to value premises, criterion instability, conflicting criteria, weighting criteria of merit, sub-optimization, boundary specifications, inclusion of multiple stakeholders, and the struggle to synthesize evaluative findings into an overall conclusion abound. This presentation outlines the traditional approaches to assessing organizational effectiveness, highlighting the advantages and limitations of each. An introduction to the organizational effectiveness evaluation checklist is presented as an alternative approach for practitioners and professional evaluators to use when conducting an evaluation of organizational effectiveness.

March 11, 2008
No video
Title: Tools for Conceptualizing, Planning, Writing, and Evaluating a Doctoral Dissertation

Presenter: Chris L. S. Coryn – Interdisciplinary Evaluation Program Manager, WMU
Abstract: Of the numerous obstacles encountered by students seeking a doctoral degree, none is more formidable than completing a dissertation. In this presentation, several tools designed to assist students in conceptualizing, preparing, writing, and evaluating a doctoral dissertation prospectus or proposal will be presented, including an agenda for research on evaluation. These tools were specifically developed around the standard to which an Interdisciplinary Ph.D. in Evaluation (IDPE) program dissertation is intended to be held: "A unique, significant contribution to evaluation theory, methodology, or practice." These tools have the potential for both formative and summative applications in that they can be used by students for conceptualizing, designing, planning, and improving a dissertation prospectus or proposal and by advisors and committee members for rendering judgments about the merit of a proposed or completed dissertation.

March 18, 2008

Watch the video

View the slides

Title: Validating Evaluation Checklists

Presenter: Daniela Schroeter – Interdisciplinary Evaluation Doctoral Student, WMU
Abstract: Evaluation checklists are usually validated and refined through use over long periods of time and have credibility because of the extensive experience, knowledge, and professional status of their originators. This Evaluation Café introduces some means for validating checklists that have not yet been used for extensive periods of time. It is argued that evaluation checklists are unique meta-tools with features that guide evaluation efforts, rather than assessment instruments intended to collect data. Therefore, content and multicultural validity will be discussed. After the audience is introduced to some theoretical basics for validating checklists, two methods for validating the sustainability evaluation checklist will be introduced: a qualitative open-ended interview process and a questionnaire to potential users.

April 1, 2008
View the slides

View the handout

Title: The Role of Interaction and General Adjustment in Expatriate Attitudes

Presenter: Jennifer Palthe – Associate Professor of Management, WMU
Abstract: Using an international field study of executive expatriates, this research explored the relationship between cross-cultural adjustment (work, interaction, and general) and expatriate attitudes. While much research demonstrates the relationship between work and general adjustment and expatriate outcomes, the role of interaction adjustment has received less attention. The results of this study demonstrate the influence of each facet of adjustment on expatriate attitudes. Future research directions and implications for practice are offered.

April 8, 2008
View the slides

View the handout

Title: Kingdom of Ends: Developing a Community of Practice in Advocacy Evaluation

Presenter: Kristin Kaylor Richardson – Interdisciplinary Evaluation Doctoral Student, WMU
Abstract: Although there is a long history of evaluating social service programs and policies, evaluating advocacy and activist approaches to creating policy change is a relatively undeveloped field of practice. During the last few years, a number of useful evaluation frameworks for advocacy work have emerged. Familiarity with these frameworks can inform evaluation practice; however, it is equally important to critically examine the unique attributes, contributions, and challenges of advocacy initiatives and activities. Understanding the advocacy process can lay a foundation for mindful, ethical, high-quality evaluation. This presentation will provide a brief overview of advocacy and its roles, functions, and purposes within the policy arena; describe and critique historical and contemporary advocacy evaluation models; and conclude with questions and considerations to expand and deepen a "community of practice" in advocacy evaluation.

April 15, 2008
View the slides

Watch the video

Title: Design Issues Re: An Evaluation Association to Improve Evaluation of Development Assistance Programs

Presenter: Paul Clements – Associate Professor of Political Science & Master of Development Administration Program Director, WMU
Abstract: At present, the evaluation of international development assistance programs and projects is highly inconsistent. Many evaluations are methodologically and/or analytically weak and/or offer conclusions and ratings that are positively biased, and this undermines learning and accountability throughout the development community. Aid could be rendered substantially more cost-effective if each evaluation offered a consistent and reliable estimate of the intervention's cost-effectiveness. This presentation rehearses the reasons for establishing an association to this end, along the lines of professional associations of accountants and auditors; presents features of the proposed association's design; and discusses challenges to be overcome in establishing the association.