2005 – 2006


September 14, 2005
Watch the video
Title: Theory-Free Evaluation
Presenter: Michael Scriven, Ph.D. – Professor of Philosophy, Associate Director of The Evaluation Center, and Interdisciplinary Ph.D. in Evaluation Program Director, Western Michigan University

Abstract: Theories come into evaluation today in two ways: once at the metalevel, via theories (aka models) of evaluation, and once at the program level, as program logic or program theories. I’ll look critically at both these levels and argue for a new assessment of each, on the grounds of performance.


September 22, 2005
Title: Building Capacity for Evaluation and Providing Evaluation Services in Changing Organizations: Lessons from the Field
Presenters: Gary Miron, Ph.D. – Principal Research Associate and Chief of Staff, and Barbara Wygant – Project Manager, The Evaluation Center, Western Michigan University

Abstract: The presentation will include a brief review of contractual services provided by The Evaluation Center to diverse and changing organizations over the past few years. A number of issues and concerns arising from our experience will be discussed, including (i) why and how flexible contractual arrangements can be made, (ii) the importance of separating technical assistance from actual evaluation services, (iii) the means and importance of ensuring client ownership, (iv) working in diverse communities, and (v) tools and tips for communicating findings to clients and key stakeholders.


September 29, 2005
View the slides
Title: Closing the Deal: Tips for Working Effectively with Evaluation Clients
Presenter: Jerry Horn, Ed.D. – Principal Research Associate, The Evaluation Center, Western Michigan University

Abstract: As we initiate and continue discussions on prospective evaluation projects, it is important for us to understand the interests, concerns, and issues from the client’s perspective. During this session, I will present some ideas and practices that have worked for me over the years and facilitate discussion among participants about how we can be more effective in the process of “closing the deal” for evaluation arrangements and contracts.


October 6, 2005
View the handouts
Title: Making a Causal Argument for Training Impact
Presenter: Robert Brinkerhoff, Ed.D. – Professor and Coordinator of Graduate Programs in Human Resources Development, Western Michigan University

Abstract: When training outcomes and value are defined as the results of improved performance (e.g., an increase in sales, an increase in productivity), a multitude of other “causes” enters the picture and clouds claims that training had “impact.” But clients for evaluations of training programs want “proof” that training did or did not have impact. The Success Case Method attempts to support such claims by making a “beyond reasonable doubt” argument. Come learn how the evaluator attempts to make these arguments and to provide critique.


October 13, 2005
View the slides
Title: Market Pricing for Evaluation Contracting
Presenter: Carl Hanssen, Ph.D. – Senior Research Associate, The Evaluation Center, Western Michigan University

Abstract: This session presents a method for developing evaluation pricing using a market-based approach. The approach is predicated on the idea that university-based research units should seek, where possible, to over-recover their costs for conducting evaluation and research contracts. Over-recovered funds can then be used to fund research, scholarship, and other development activities. Key components of the model include: (1) market-based billing rates, (2) allowances for other direct costs, (3) allowances for indirect costs, and (4) a parallel budget that reflects actual costs.
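For readers unfamiliar with the arithmetic behind such a model, the sketch below illustrates the general idea under invented assumptions: hypothetical billing rates, hours, an indirect-cost allowance, and a parallel actual-cost budget whose difference from the market price is the over-recovery. None of the figures, names, or rates are taken from the session itself.

```python
# Hypothetical illustration of a market-based pricing model for evaluation
# contracts. All roles, rates, hours, and cost figures are invented for the
# example; the actual model presented in the session may differ.

MARKET_BILLING_RATES = {      # (1) market-based billing rates, $/hour
    "principal_investigator": 150.0,
    "research_associate": 95.0,
    "graduate_assistant": 45.0,
}

ACTUAL_COST_RATES = {         # salary plus fringe actually paid, $/hour
    "principal_investigator": 90.0,
    "research_associate": 60.0,
    "graduate_assistant": 30.0,
}

def build_budgets(hours, other_direct_costs, indirect_rate=0.30):
    """Return (market-priced budget, parallel actual-cost budget, over-recovery).

    hours: dict mapping role -> estimated hours
    other_direct_costs: total allowance for travel, supplies, etc.  (2)
    indirect_rate: allowance for indirect costs as a fraction of direct costs  (3)
    """
    def total(rates):
        labor = sum(rates[role] * h for role, h in hours.items())
        direct = labor + other_direct_costs
        return direct * (1.0 + indirect_rate)

    market_price = total(MARKET_BILLING_RATES)   # what the client is charged
    actual_cost = total(ACTUAL_COST_RATES)       # (4) parallel actual-cost budget
    over_recovery = market_price - actual_cost   # funds freed for research and scholarship
    return market_price, actual_cost, over_recovery

if __name__ == "__main__":
    hours = {"principal_investigator": 100, "research_associate": 400, "graduate_assistant": 300}
    price, cost, surplus = build_budgets(hours, other_direct_costs=12_000)
    print(f"Market price: ${price:,.0f}  Actual cost: ${cost:,.0f}  Over-recovery: ${surplus:,.0f}")
```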


October 20, 2005
View the slides
Title: The Malcolm Baldrige National Quality Award Business Evaluation Process: Using the Evaluation Criteria to Improve Business Outcomes
Presenter: Keith Ruckstuhl, Ph.D. – Organizational Development Consultant [internal], Pfizer Global Manufacturing (Kalamazoo, MI)

Abstract: The Malcolm Baldrige National Quality Award, established in 1987, was developed to promote practices that enhance the quality and competitiveness of American companies, and the evaluation process has continued to evolve over time. The evaluation process is not prescriptive in nature; rather, it compares the applicant’s self-reported critical characteristics against their business processes to determine alignment with the applicant’s described needs. Finally, the applicant’s business results are reviewed to determine the effectiveness of those business processes at achieving critical outcomes. I’ll review the Baldrige evaluation framework and process, discuss methods for learning the evaluation framework, and describe working with different business units at a prior employer to help them develop applications for a state quality award (which used the Baldrige evaluation process) and the impact this had on business results in different corporate cultures.


November 3, 2005
Read the article on which this presentation was based
Title: Measuring the Impact of Electronic Business Systems at the Defense Logistics Agency: Lessons Learned From Three Evaluations
Presenter: Jonathan A. Morell, Ph.D. – Senior Policy Analyst, Altarum (Ann Arbor, MI)

Abstract: The consequences of deploying three electronic business systems at the Defense Logistics Agency were evaluated: Central Contractor Registration (CCR), Electronic Document Access (EDA), and DoD Emall. Findings will be presented, with an emphasis on lessons learned about evaluating the impact of IT systems that are inserted into complex, changing organizations. Lessons fall into five categories: metrics, methodology, logic models, adaptive systems, and realistic expectations. Interactions among these categories will also be discussed.


November 10, 2005
View the slides
Title: Goal-Free Evaluation
Presenter: Brandon Youker – Interdisciplinary Evaluation Doctoral Student, Western Michigan University

Abstract: The presenter conducted a goal-free evaluation (GFE) of a local middle school summer enrichment program. The GFE was a supplement to a goal-based evaluation (GBE) by two other evaluators on the evaluation team. The combination of the GFE and GBE approaches resulted in a more comprehensive evaluation than either would have provided on its own. GFE methodology will be discussed, in addition to the strengths, weaknesses, and challenges related to synthesizing and combining the two evaluation methodologies.


November 17, 2005
View the slides
Title: Identifying Relevant Evaluative Criteria: Lessons Learned from the Evaluation of a Middle School Enrichment Program
Presenters: Daniela Schroeter & Chris Coryn – Interdisciplinary Evaluation Doctoral Students, Western Michigan University

Abstract: The evaluation of a local middle school summer enrichment program consisted of two simultaneous, independent evaluations: one goal-based and the other goal-free. Central to any credible evaluation is the process of identifying relevant evaluative criteria or dimensions of merit—the attributes by which evaluators determine how good, effective, or valuable a program is. While the goal-based evaluation emphasized evaluative criteria centering on program goals and desires, a long list of value-oriented criteria was also identified using Scriven’s (2005) Key Evaluation Checklist. The most important lesson learned from this evaluation was the usefulness of goal-free evaluation as a supplementary, albeit crucial, mechanism for supporting and informing goal-based evaluation, particularly in identifying criteria of merit related to program side effects, side impacts, and unintended outcomes.


December 1, 2005
Title: The Comparative Method and the Schema of Cost-Effectiveness in the Evaluation of Foreign Aid
Presenter: Paul Clements, Ph.D. – Associate Professor of Political Science and Master of Development Administration Program Director, Western Michigan University

Abstract: The structural conditions of foreign aid present particularly severe challenges of accountability and learning. Due to the diversity and complexity of the tasks, the intense competition for resources, and the real human costs of failure, effective management in this field is particularly difficult. The primary role for evaluation under these circumstances, I argue, is to sustain an effective orientation to cost-effectiveness in the allocation of resources, largely in the area of program design. While rigorous evaluation designs may be needed to generate valid data on program impacts, it is the comparative method and a consistent orientation to cost-effectiveness that are essential to support the judgments on which effective foreign aid depends.


January 18, 2006
Title: Reconsidering Qualitative Research Methods and Reinstating Evaluation as the Heart of the Social Sciences
Presenter: Dr. Michael Scriven – Professor of Philosophy, Associate Director of The Evaluation Center, and Interdisciplinary Ph.D. in Evaluation Program Director, Western Michigan University

Abstract: It’s often said, with some justification, that economics is too important to be left to the economists, and it’s clear that qualitative methods of inquiry are too important to be left to the authors of most books about them. Those texts load their treatment with epistemological larding that is not in fact implied, and is certainly repellent to most scientists. Dr. Scriven will comment briefly on the usual basic entries in the qualitative research stakes and say a little more about the three most notably crippled treatments: causation, comprehension, and evaluation. The total impact of this reconsideration is what could be called the revaluing of the social sciences, making them both more valuable and more securely based.


January 25, 2006
View the slides
Title: Evaluator Skills: What’s Taught vs. What’s Sought
Presenters: Dr. Carolyn Sullins – Senior Research Associate, The Evaluation Center; Ms. Daniela Schroeter – Interdisciplinary Evaluation Doctoral Student; Ms. Christine Ellis – Educational Studies Master’s Student

Abstract: What roles, competencies, and skills do employers look for when hiring evaluators? Are these in line with what is emphasized in graduate programs in evaluation? To explore these questions, we conducted (1) a literature review, (2) pilot surveys of both job candidates and employers, and (3) an analysis of AEA Job Bank entries. The preliminary findings were discussed during a Think Tank presentation at the AEA/CES conference in October 2005. This presentation will highlight the findings from the various measures, present perspectives from the Think Tank, and discuss further directions for the ongoing study.


February 1, 2006
View the slides
Title: Enhancing Disaster Resilience Through Evaluation
Presenter: Dr. Liesel Ritchie – Senior Research Associate, The Evaluation Center, Western Michigan University

Abstract: In recent months, national and international attention has been focused on events surrounding natural and technological disasters. This session will present various perspectives on social impacts of disasters, exploring ways in which the evaluation community might contribute to this substantial and growing body of research. Questions for discussion will include: What approaches, practices, concepts, and theories from evaluation might be employed to enhance disaster preparedness, response, recovery, and community resilience? How might experiences from the field of evaluation improve understanding of and ability to address challenging contexts in which disaster-related evaluations are conducted, as well as use of evaluation findings in these settings?


February 8, 2006
View the slides
Title: Strengthening Capacity for Evaluation in the Context of Developing Countries
Presenter: Dr. Gary Miron – Principal Research Associate and Chief of Staff, The Evaluation Center, Western Michigan University

Abstract: Knowledge is said to be universal. The same could be said of the available stock of evaluation methods, techniques and experiences accumulated in the industrialized nations. Advocating indigenous evaluation and research does not necessarily mean disregarding the knowledge available. Yet a problem often arises in the interpretation of the experiences accrued in the industrialized nations and in the application of the methods and techniques utilized in drastically different climates. Most approaches, designs, and methods are universally applicable; however, adjustments and adaptations must be made from country to country and from context to context. There are many directions and paths that the developing countries might take in their quest for more effective and appropriate means of evaluation and planning their educational programs. The promotion of South-South cooperation and the development of a better understanding of the practical experiences and genuine conditions and obstacles present in the context of developing countries are obvious first steps.

Participants will be encouraged to engage in a discussion regarding the means and obstacles for sharing and adapting the theory and methods of evaluation across national and cultural borders.


February 15, 2006
View the slides
Title: Evaluating a Nonprofit Organization: Methodological and Management Strategies from the Foods Resource Bank Evaluation
Presenters: Mr. Thomaz Chianca and Mr. John Risley – Interdisciplinary Evaluation Doctoral Students, Western Michigan University

Abstract: The presenters will discuss their experience conducting an organizational evaluation of Foods Resource Bank, a nonprofit working with U.S.-based donors (growing projects) and international recipients (food security programs). Specifically, they will discuss the Success Case Method (SCM) component of the evaluation. The SCM addressed questions including: Why are some projects more successful than others? What are the key factors affecting the performance of the field coordinators supporting local projects? What critical, contextual aspects must be understood, and possibly leveraged, to help projects succeed? This presentation explores the advantages and limitations of integrating SCM into a broader evaluation design and reflects on how SCM can be adapted successfully to an organizational evaluation context. The presentation will also discuss management strategies and issues addressed during this evaluation, including the use of evaluation advisory committees, how to increase participation, issues faced while reporting evaluation findings, and working with organization staff. Ms. Bev Abma, Foods Resource Bank’s Executive Director for Programming, will be present in the session to discuss the institutional perspective on our evaluation.


February 22, 2006
View the slides
Title: Keys to Global Executive Success
Presenter: Dr. Jennifer Palthe – Assistant Professor of Management, Western Michigan University

Abstract: This study extends previous research on cross-cultural adjustment through a field study of 196 American business executives on assignment in Japan, the Netherlands, and South Korea. The results demonstrate the relative importance of learning orientation, self-efficacy, parent and host company socialization, and work and non-work variables for three facets of cross-cultural adjustment (work, interaction, and general). While past research has consistently shown that family adjustment is by far the strongest predictor of cross-cultural adjustment, this study reveals that socialization at the host company, previously proposed yet unmeasured, may be as strong a predictor. Implications for practice and directions for future research are offered.


March 8, 2006
View the slides and handout
Title: Standards for Educational Evaluation: The Case of Propriety
Presenter: Dr. Arlen Gullickson – Director of The Evaluation Center, Western Michigan University

Abstract: The Program, Personnel, and Student Evaluation Standards, developed by the Joint Committee on Standards for Educational Evaluation, provide guidelines for ensuring that evaluations meet the standards of utility, feasibility, propriety, and accuracy. Dr. Arlen Gullickson, Chair of the Joint Committee on Standards for Educational Evaluation, will present a brief history and overview of the Program and Student Evaluation Standards. A case study will be presented to facilitate in-depth examination and application of one Propriety standard.


March 15, 2006
Title: Doing, Knowing, and Being: Integrating Assumptions and Actions in Evaluation Practice
Presenter: Dr. Eileen Stryker – President, Stryker and Endias, Inc., Research, Planning and Evaluation Services, Kalamazoo, Michigan

Abstract: Evaluation methods (doing) are grounded in epistemological (knowing) and ontological (being) assumptions. This is an invitation to discuss our journeys toward integrating our most deeply held beliefs about the nature of truth and reality with the ways we practice evaluation. How do we deepen conversations about method and design with stakeholders so as to reveal and negotiate shared and differing assumptions? Examples from my own practice to start the discussion include: If reality is essentially connected, and knowing is essentially grounded in love, what does evaluation practice look like? If an educational program rests on a belief that learning requires positive emotional experiences, and the client wants an evaluation consistent with that belief: a) would you take the contract? b) what would the evaluation practice look like? c) what about “negative” findings? When working across cultures (and when are we not working across cultures?), how do we discover and cross boundaries of language, religion, and power, and how do we negotiate evaluation issues across worldviews shaped by differing experiences? I look forward to learning together.


March 22, 2006
View the slides
Title: Project versus Cluster Evaluation
Presenter: Dr. Teri Behrens – Director of Evaluation, W.K. Kellogg Foundation

Abstract: The W. K. Kellogg Foundation has a long-standing view that the evaluation of its grantmaking is for the purpose of learning. After several years of funding intentional “clusters” of projects, Foundation staff initiated a new approach to evaluating groups of related projects that came to be known as “cluster evaluation.” In more recent years, WKKF has begun funding “strategic initiatives,” designed to create systems change. Dr. Behrens will discuss the implications for evaluation of this more strategic approach to grantmaking and will share a preliminary typology of types of change efforts.


March 29, 2006
View the handout
Title: Collaborative Evaluations: A Step-by-Step Model for the Evaluator
Presenter: Dr. Liliana Rodriguez-Campos – Assistant Professor of Educational Studies, Western Michigan University

Abstract: Dr. Rodriguez-Campos’s presentation is based on her book of the same title. She will present the Model for Collaborative Evaluations (MCE), which emerged from a wide range of collaboration efforts that she conducted in the private sector, nonprofit organizations, and institutions of higher education. MCE has six major components: (1) identify the situation, (2) clarify the expectations, (3) establish a shared commitment, (4) ensure open communication, (5) encourage best practices, and (6) follow specific guidelines. Dr. Rodriguez-Campos will outline key concepts and methods to help master the mechanics of collaborative evaluations. Practical tips for “real-life” applications and step-by-step suggestions and guidelines on how to apply this information will be shared.


April 5, 2006
Watch the video
View the slides
Title: Synthesizing Multiple Evaluative Statements into a Summative Evaluative Conclusion
Presenter: P. Cristian Gugiu – Interdisciplinary Evaluation Doctoral Student, Western Michigan University

Abstract: The synthesis of multiple evaluative statements into a summative conclusion is a task often overlooked or avoided by evaluators, perhaps owing to its complexity. Despite this complexity, synthesis should be a critical component of all evaluations. Synthesis is the process of combining evaluative conclusions derived from measures of performance across several dimensions of the evaluand into an overall rating and conclusion. This paper will discuss the strengths and weaknesses of two methodological approaches (i.e., qualitative-weight-and-sum versus quantitative-weight-and-sum) that can be used to synthesize micro-evaluative statements into a macro-evaluative conclusion. Finally, the paper will introduce the concept of summative confidence to highlight the need to determine the degree of confidence with which a summative statement can be delivered. Detailed illustrative examples from an actual evaluation will be provided for each concept.
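As a rough illustration of what the quantitative-weight-and-sum approach involves, the sketch below combines per-dimension performance ratings and importance weights into a single overall rating. The dimensions, weights, and ratings are invented for the example and are not drawn from the evaluation discussed in the talk; the qualitative-weight-and-sum variant would instead work with ordinal importance labels and rating bands rather than this arithmetic.

```python
# Minimal sketch of a quantitative weight-and-sum synthesis.
# Hypothetical dimensions of merit, importance weights, and 1-5 ratings.

dimensions = {
    # dimension: (importance weight, performance rating on a 1-5 scale)
    "outcomes":       (0.40, 4.0),
    "implementation": (0.25, 3.5),
    "cost":           (0.20, 3.0),
    "sustainability": (0.15, 2.5),
}

def weight_and_sum(dims):
    """Combine per-dimension ratings into one overall rating (weighted mean).

    Dividing by the total weight keeps the result on the original 1-5 scale
    even if the weights do not sum to 1.
    """
    total_weight = sum(w for w, _ in dims.values())
    return sum(w * rating for w, rating in dims.values()) / total_weight

overall = weight_and_sum(dimensions)
print(f"Overall rating: {overall:.2f} / 5")  # could then be mapped to a grade such as "good"
```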


April 12, 2006
Title: Evaluation in Brazil
Presenter: Dr. Ana Carolina Letichevsky – Coordinator, Department of Statistics, Cesgranrio Foundation (Brazil)

Abstract: This presentation will include a brief review of evaluation in Brazil over the past few years. It includes a summary of evaluative processes in different areas (social, educational, and corporate), as well as a description of the structure of a Brazilian nonprofit evaluation organization, the Cesgranrio Foundation. Today, the main challenges for Brazilian evaluators are the following: (a) to implement the principles and standards that guide formal and professional evaluation, ensuring the quality of the evaluation; (b) to adapt evaluative approaches and methodologies to the Brazilian context; (c) to ensure that evaluation results are used to improve the evaluative focus; and (d) to qualify a larger number of professionals to work in evaluation, that is, to develop evaluators.


April 19, 2006
View the slides
Title: Lessons Learned: A Value-Added Product of the Project Life Cycle
Presenter: Rebecca Gilman – Senior Principal Consultant, Keane

Abstract: Five or ten years ago, lessons learned may have been gathered during a meeting at the end of a project and were often used for a single purpose, such as explaining project overages or failed deliverables. Often, organizations ignored lessons learned and failures were repeated. Project and business environments are becoming more diverse, more complex, and continually changing. Today, lessons learned are evolving into a broader concept that may be used to refine and transform entire systems and organizations. What are lessons learned? How does Keane identify and work with them? What value do they provide for the project team, Keane, and our clients? These are the questions we will explore in this presentation.


May 9, 2006
Title: Some Rights of Those Researching and Those Researched
Presenter: Robert Stake, Ph.D. – Director, Center for Instructional Research and Curriculum Evaluation, University of Illinois

Abstract: Bob Stake has been one of the most creative and productive evaluators throughout the development of the discipline since its emergence around 1960. Beginning as a measurement specialist, he went on to introduce the notion of responsive evaluation as a reaction to the standard social science model of hypothesis testing, and he has written one of the very rare treatises on the evaluation of arts education, as well as two books on the case study method. The latest of these, which came out a few months ago, extends the approach to multiple case studies. Those interested in evaluation, experimental design, or new problems surrounding the use of human subjects in any area of research will find this talk important.

