2010 – 2011


Using Nonequivalent Dependent Variables to Reduce Internal Validity Threats: Rationale, History, and Examples from Practice

Date: September 14th, 2010
Presenter(s):

Chris Coryn — Director, Interdisciplinary Ph.D. in Evaluation Program, WMU & Kristin A. Hobson — IDPE Doctoral Student, WMU
Abstract:
Legitimate knowledge claims about causal relationships have been a central concern among evaluators and applied researchers for several decades and often have been the subject of heated debates. Since publication of Campbell and Stanley’s (1966) Experimental and Quasi-Experimental Designs for Research, which was followed more than a decade later by Cook and Campbell’s (1979) Quasi-Experimentation: Design and Analysis for Field Settings, and thereafter by Shadish, Cook, and Campbell’s (2002) Experimental and Quasi-Experimental Designs for Generalized Causal Inference, alternative explanations for effects or outcomes observed from studies of applied interventions have been the bane of practicing social scientists and evaluators. Collectively, these alternative explanations are generally known as threats to validity and are largely related to the correctness of, or support for, one or more types of inference that arise from investigations of applied interventions. This Evaluation Café will present and discuss the rationale for, the history of, and practical examples of using nonequivalent dependent variables as one strategy to reduce the number and plausibility of certain types of internal validity threats.


Wine Evaluation

Date: September 21st, 2010
Presenter(s):
Terry Stingley — The Wine Guru, Harding’s Marketplace
Abstract: As The Wine Guru, Terry Stingley brings over 30 years of wine knowledge and passion to his presentation at The Evaluation Center’s Evaluation Café. In this lunchtime talk, Terry will introduce the main criteria or qualities of good red and white wine and show audience members how to make informed, evaluative conclusions about wine quality. Seating is limited. Identification will be checked at the door.


Challenges of Small ‘n’ Numbers in Evaluating NSF GK-12 Projects

Date: September 28th, 2010
Presenter(s):
Mark Jenness — Senior Researcher, SAMPI, WMU & Bob Ruhf — Research Associate, SAMPI, WMU
Abstract: Based on their experiences as external evaluators of three GK-12 projects, the presenters will discuss the challenges of gathering impact data that satisfy both project staff and the funding agency. Additional challenges include balancing formative and summative evaluation efforts; meeting the changing needs and expectations of the funding agency; and providing useful evaluation information for project staff and participants.


Elucidating the Synergistic Effects of Cooperative Efforts

Date: October 5th, 2010
Presenter(s):
Krystin Martens — IDPE Doctoral Candidate, WMU
Abstract: POSTPONED!

Synergy is a word that is often bandied about but little understood. The broad idea that by pairing a few components together it is possible to receive great benefit from minimal effort is enticing. This session seeks, through an evaluative eye, to shed light on cooperative efforts and their much sought after, but often elusive, synergistic effects. To do so, an exemplar containing both internal and external partners will be presented. The example, Coordinated School Health, is a federal initiative of interest because it is set in the public school domain yet brings together, among others, the medical and mental health disciplines. It is hoped that a lively discussion will ensue.


Help Wanted: The Essential Need for Evaluators in Public Policy in a Contradictory Environment

Date: October 5th, 2010
Presenter(s):
Todd Harcek — Doctoral Candidate, Interdisciplinary PhD in Evaluation, WMU
Abstract: The need for solid evaluation in American government has never been greater, yet today’s political climate puts evaluators in a very difficult position. With record-high federal deficits, depleted state budgets, and an angry electoral base, the need to evaluate government programs and policies is urgent. For example, the success or failure of economic stimulus seems to be determined by political or philosophical perspective rather than by sound, evaluative conclusions. Evaluators can provide an invaluable service in assisting decision-makers in determining the merit or worth of various initiatives, yet they will likely find themselves in the middle of a political tug-of-war. This presentation will explore the role evaluators can and should play in a volatile, and often hostile, political environment and discuss ways that evaluators can provide high value while staying above the fray.


Decision Process for Integration of Design and Methods

Date: October 12th, 2010
Presenter(s):
Jason Burkhardt — IDPE Doctoral Student, WMU & Lee Balcom — IDPE Doctoral Student, WMU
Abstract: This presentation provides a comprehensive flowchart representation of a “design with the end in mind” approach to evaluation. The content centers on a flowchart that encapsulates the conceptualization of the evaluation question, the identification of the evidence needed to answer the question, the methods used to gather that evidence, and the interpretation of the gathered evidence to illuminate the knowledge base of practicing evaluators. Those who view the poster as part of this presentation will have the opportunity to provide direct feedback on the development of a comprehensive model of design choice, as well as to be acknowledged in the final product if they so wish.


Legal Matters for Evaluators

Date: October 19th, 2010
Presenter(s):
Carrick Craig — Deputy General Counsel, Legal Affairs, WMU
Abstract: This presentation will review legal issues related to the operation of The Evaluation Center. Topics will include the legal structure of the University and the Center, the essential legal elements of agreements to engage in evaluation activities, the allocation of legal risk in those agreements, the execution of those agreements, and post-evaluation legal issues. The review is intended to give faculty and administrative staff a general overview of how the legal department at the University views and manages legal review, along with practical advice on how to make the legal review process more efficient and comprehensive. The presentation will be followed by an opportunity to ask follow-up questions.


Strengthening Evaluation Policy in Saudi Arabia for Higher Evaluation Quality

Date: November 2nd, 2010
Presenter(s):
Mohammed Alyami — IDPE Doctoral Student, WMU
Abstract: The Saudi Arabian government and organizations have begun to consider the importance of professional evaluation. However, there are few formal evaluation policies to support professional evaluation standards and practices. The Joint Committee’s Program Evaluation Standards (1994) are used as a guiding standard (and informal policy) for evaluation, although other frameworks may influence how evaluation is practiced. This presentation will address two main issues. First, what evaluation standards are used to guide evaluation policy in Saudi Arabia, and how can these be modified or adapted to ensure high-quality evaluation practice? Second, what are good evaluation models, and how can these approaches improve the overall quality of evaluation in this national context?


The Moral Art of Evaluation

Date: November 16th, 2010
Presenter(s):
Jan Kevin Fields — IDPE Doctoral Student, WMU
Abstract: During this session, we’ll explore the ontological and epistemological groundings of evaluation. We will discuss the distinction between fact and value and the schema of value theory. We will also delve into the murky world of value praxis, used here to represent an amalgam of rationality and science with morals, emotions, and ethics. From there we will explore the more practical concept of conflict resolution, which has major implications for evaluation use.


Standardized Measures of Effectiveness for National Science Foundation (NSF) Advanced Technological Education (ATE) Centers/Projects

Date: November 30th, 2010
Presenter(s):
Stephen Magura — Director, The Evaluation Center, WMU & Kelly Robertson — Project Manager, The Evaluation Center, WMU
Abstract: The presentation will describe the common metrics and methodologies developed to measure the effectiveness of NSF Advanced Technological Education (ATE) center and project activities. The central goals of NSF ATE are to produce more science and engineering technicians to meet workforce demands and to improve the technical skills and general educational preparation of these technicians and their educators. Prior to this study, there were no generally accepted means of measuring the effectiveness of ATE activities; thus, the results of this study should allow NSF to better understand variations in the success of its ATE grantees and to apply an objective effectiveness measurement strategy to ATE and similar programs in the future.


Evaluating the Integrated Model of International Food Protection Training Institute

Date: December 7th, 2010
Presenter(s):
Kieran Fogarty — Associate Professor, Department of Occupational Therapy, WMU
Abstract: The International Food Protection Training Institute is primarily focused on providing standards-based (IACET, ANSI) food protection training for the 45,000 food regulators at the state and local levels. The assessment and evaluation of these national efforts will be discussed, including the feasibility of establishing a national impact model for determining the levels of attribution of an integrated national food protection training system in reducing the prevalence of foodborne illness in the United States.


Race Exhibit and Research Dialogue – Note Day Change

Date: January 13th, 2011
Presenter(s):
Lauren Freedman — Professor, Literacy Studies & Dannie Alexander — Director of Athletic Facilities, Intercollegiate Athletics Department, WMU
Abstract: The Race Exhibit at the Kalamazoo Valley Museum has been a catalyst for discussion and action in the Kalamazoo community. The session will focus on the three themes of the Race Exhibit: history, biology, and lived experience. Connections to the themes of the exhibit will be used to elicit comments, questions, and further dialogue about connections to research and evaluation and issues of race.


Community Indicators: Implications for Evaluation

Date: January 25th, 2011
Presenter(s):
John Risley — Senior Researcher, Community Research Institute, Grand Valley State University
Abstract: Many definitions of community (or neighborhood, or social) indicators incorporate the ideas of assessing current and past conditions and enabling the task of program evaluation. Technological advances over the past two decades (better computing capacity, more automated administrative data available) have led to increasing use of indicators in communities across the nation. But how are these community indicator systems—systems that seek to reflect conditions at the community level—affecting evaluation at the program level? Can evaluation theory and practice contribute to the thinking about community indicators? This presentation will examine these questions and discuss how evaluation can shape our thinking about community indicators.


When Good Evaluations Go Bad: Lessons Learned the Hard Way

Date: February 1st, 2011
Presenter(s):
Lindsay Noakes & Robert McCowen — IDPE Doctoral Students, WMU
Abstract: As aspiring new evaluators, we began work on a fairly large evaluation project filled with excitement and hopes of making an impact. It soon became clear, however, that things were not going to go as planned. Come join us in an honest discussion about the political nature of evaluation and what to do when things begin to spiral out of control. Be prepared to (anonymously) share your own horror stories as well and offer suggestions for heading off problems. There will also be a drawing for books and resources for new evaluators.


The Use of Sensory and Analytical Evaluation to Match the Flavor of Onion Powder in a White Sauce

Date: February 8th, 2011
Presenter(s):
Polly Barrett — Senior Manager of Sensory Evaluation, Kalsec, Inc. & Doctoral Student in Evaluation, Measurement, & Research, WMU
Abstract: Dehydrated onion consumption in the United States is estimated at 200 million dry pounds annually. Increased consumption, a single growing region, and bacterial concerns with dehydrated onion create a need for a low-cost, low-microbial-count onion powder replacer, such as a natural onion extract. In this Evaluation Café, Polly Barrett discusses how sensory panels were used to evaluate flavor profiles in onion powders and onion extracts, using a white sauce matrix, so that production can meet demand. Barrett shares the results of the sensory panel, as well as the correlation of sensory data with analytical data, to show how the flavor profile of an onion extract can be shifted toward that of an onion powder. These modified onion extracts could be used in most food applications where dehydrated onion products are used.


Fractal Evaluation: New Methodology Based On New Science

Date: February 15th, 2011
Presenter(s):
Kurt Wilson — IDPE Doctoral Student, WMU
Abstract: In this Evaluation Café, Kurt Wilson draws on the classic book “Leadership and the New Science,” in which Margaret Wheatley, EdD, challenges traditional organizational structures based on Newtonian science, demonstrating how ideas drawn from quantum physics, chaos theory, and molecular biology could improve organizational performance. These ideas pose a provocative challenge to evaluation, which has similarly been rooted in a Newtonian perspective, with linear logic models, measurement tools based on averages, and the like. This Evaluation Café will provide a brief background to these ideas with the intent of sparking a lively discussion about the specific needs and opportunities for new evaluation theory and methodology.


Bias in the Success Case Method: A Monte Carlo Simulation Approach

Date: February 22nd, 2011
Presenter(s):
Julio Cesar Hernandez-Correa — IDPE Doctoral Student, WMU
Abstract: Recently, the success case method (SCM) has become one of the most popular methods in applied evaluation. However, Brinkerhoff (2002) acknowledges that the SCM produces information that is biased with respect to the central value. I use Monte Carlo simulations to assess several issues related to bias in SCM estimates, and I propose intermediate steps to correct bias in SCM results.
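To make the simulation idea concrete, here is a minimal sketch of the kind of Monte Carlo exercise the abstract describes (this is an illustrative assumption, not the presenter's actual simulation; the function name, distribution, and parameters are all hypothetical): because the SCM deliberately samples extreme success cases, the mean outcome of those cases systematically overstates the central value of the full population, and repeated simulation can quantify that gap.

```python
import numpy as np

rng = np.random.default_rng(42)

def scm_bias_simulation(n_reps=10_000, n=200, top_frac=0.10, mu=0.0, sigma=1.0):
    """Estimate the bias of a success-case mean relative to the true
    population mean via Monte Carlo (illustrative sketch only)."""
    biases = np.empty(n_reps)
    for i in range(n_reps):
        # Simulate outcomes for all program participants.
        outcomes = rng.normal(mu, sigma, size=n)
        # SCM-style selection: keep only the top "success" cases.
        cutoff = np.quantile(outcomes, 1 - top_frac)
        success_cases = outcomes[outcomes >= cutoff]
        # Bias of the success-case mean relative to the central value.
        biases[i] = success_cases.mean() - mu
    return biases.mean()

print(f"Average bias of success-case mean: {scm_bias_simulation():.3f}")
```

Under these assumptions (normal outcomes, top 10% selected), the success-case mean overstates the central value by roughly 1.75 standard deviations on average, which illustrates why correction steps of the kind the abstract proposes would be needed before generalizing from success cases alone.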


Communicating about Evaluation: Style and Substance

Date: March 8th, 2011
Presenter(s):
Lori Wingate — Assistant Director, The Evaluation Center
Abstract: Evaluation use—types of use, factors that affect use, and the impacts of use—has been the subject of considerable discussion, theorizing, and research within the evaluation discipline since the 1970s, and evaluation utilization is a ubiquitous concern for practicing evaluators. In this session, we’ll go over some simple things evaluators can do to enhance their communication about evaluation to clients. Attention to report organization and formatting, slide design, graphic presentation of data, and writing quality in support of sound content can facilitate users’ receptivity to, interest in, and—most importantly—use of evaluation information.


Safe On Campus Training – Note: Two Hours

Date: March 15th, 2011
Presenter(s):
Sarah Stangl — Coordinator, LBGT Student Services, WMU
Abstract: The Level I Safe on Campus Training provides a general overview, educating individuals about gender identity and sexual orientation. This training is intended for those who have little experience with individuals of varying orientations and identities. A brief introduction to terminology and basic information on being an ally will accompany stories of Coming Out. Special attention will be given to the implications for research and evaluation activities.


Structural Issues in Engaging Evaluation Clients – Lessons From the Counseling World

Date: March 29th, 2011
Presenter(s):
Jason Burkhardt — IDPE Doctoral Student, WMU
Abstract: This Eval Café will be a workshop where the participants will discuss issues related to developing rapport, techniques for understanding client frameworks that can generate resistance to evaluation, and strategies for managing entry into client “systems.” The moderator will discuss techniques learned from his time as a counselor, and will also use the experience of the participants to develop participant skills.


If only I’d known: Help in Navigating the Dissertation Process

Date: April 5th, 2011
Presenter(s):
Amy Gullickson, Margaret Richardson & Anne Cullen — Recent Graduates of the IDPE Doctoral Program, WMU
Abstract: A panel of three recent IDPE graduates (Anne Cullen, Margaret Richardson, and Amy Gullickson) will convene to discuss the dissertation process, from choosing a topic, a committee, and a research plan; to writing and defending a proposal; conducting the research; writing and defending the dissertation; and meeting graduate college requirements. Bring your questions to get honest (and perhaps terrifying) answers.


What’s New?: A Discussion on Revisions to the Program Evaluation Standards

Date: April 12th, 2011
Presenter(s):
Lee Balcom — IDPE Doctoral Student, WMU
Abstract: The Program Evaluation Standards were recently revised in a third edition. Numerous changes are apparent when comparing the third edition to previous ones. Some changes amount to little more than semantics, while others have major implications for evaluation practice. Join us in this Evaluation Café for a dialogue on the differences that seem most substantial and how they can be reflected in our day-to-day work. Audience members should be familiar with at least the second and third editions of the Standards. Hard copies can be borrowed from The Evaluation Center’s library, and summary statements of the Standards can be downloaded from: http://www.jcsee.org/program-evaluation-standards/program-evaluation-standards-statements


Evaluation, Community Collaboration, and Race: Lessons Learned from a Multicultural Project

Date: April 19th, 2011
Presenter(s):
Diane Rogers & Mohammed Alyami — IDPE Doctoral Students, WMU
Abstract: Changing a system or set of practices requires certain elements, such as time, resources, and experience. What happens when you do not have enough of these elements? In this presentation, a multicultural team with diverse backgrounds and levels of experience discusses the challenges and successes of evaluating a community-based anti-racism project. Topics such as team formation, the team’s relationships with stakeholders, multi-level communication, team management, and clarity of the evaluand will be discussed. Participants will be asked to provide feedback for both an upcoming publication and an AEA presentation. We encourage you to think critically about new approaches and/or adaptations to evaluation practices in light of the needs of our increasingly multicultural society.


Conversation with Craig and Karen

Date: May 6th, 2011
Presenter(s):
Craig Russon — Senior Evaluation Officer, International Labour Organization &
Karen Russon — President, Evaluation Capacity Development Group
Abstract: Craig and Karen Russon have been collaborating on evaluation-related activities over the past three decades. Some of their latest collaborations have been related to:
- an evaluation approach based on Chinese philosophy (published in JMDE)
- a network of evaluators in Geneva (GEN)
- a project to develop international standards for evaluation capacity development
Craig and Karen are co-founders of the Evaluation Capacity Development Group (ECDG). Karen is President and Craig is Board Chair. Information about ECDG can be found at www.ecdg.net. Craig is a former staff member of the Evaluation Center and the W.K. Kellogg Foundation. He is currently a Senior Evaluation Officer at the International Labour Organization.


Program Evaluation, 4th edition

Date: May 18th, 2011
Presenter(s):
Jim Sanders — Professor Emeritus, Western Michigan University
Abstract: With the 4th edition of Program Evaluation released in October 2010, Jim Sanders will discuss major changes to the book. From the product description on Amazon: As schools and other organizations increase their demand for information on program effectiveness and outcomes, it has become even more important for students to understand the prevalent approaches and models for evaluation, including approaches based on objectives and logic models, as well as participative and decision-making approaches. The new fourth edition of Program Evaluation not only covers these vital approaches but also teaches readers how to best mix and match elements of different approaches to conduct optimal evaluation studies for individual programs.
The revised edition of the text includes new approaches to program evaluation, an expanded discussion of logic models, added information on mixed models, and, as always, updated coverage of the most current trends and controversial issues in evaluation.
