2002 – 2003


January 22, 2003




Title: Shaping Stories to Fit Evaluation Issues

Presenters: Dr. Robert Stake, Director, Center for Instructional Research and Curriculum Evaluation, University of Illinois; and Dr. Laurie Thorp, Michigan State University
Abstract: This is a continuation of a conversational presentation at AEA with fellow panelists Yvonna Lincoln and Craig Russon on the storied nature of evaluation. One story that occurs is that of the evaluand program. Another is the story of the project director, or of a trainee. Some breakthrough or calamity may be captured in story form. Such a story often fits only loosely with the main evaluation questions being raised. Are there times when the issues should be retrofitted to the theme of the story? Or should the evaluator keep probing the story until connections with the theme are more apparent?

January 30, 2003

View the Slides

Title: The Evaluator’s Methodological Toolkit: A Helicopter Tour with a Few Landings

Presenter: Dr. Jane Davidson – Associate Director, The Evaluation Center & Assistant Professor, Sociology

Abstract: This presentation will cover the main methodologies that are very specific to evaluation and not just borrowed from applied social science research methods: needs assessment, merit determination (combining descriptive facts with assessments of need/value), and synthesis (ways of systematically condensing ratings on multiple dimensions into more concise profiles and/or to draw overall conclusions about merit). This special event was cosponsored with the Dept. of Sociology.

February 13, 2003
Title: Complex Systems Theory as a Perspective on Theory Building in Evaluation

Presenter: Dr. Jonathan A. Morell – Senior Policy Analyst, Altarum
Abstract: We use program theory to shape evaluation. We ask stakeholders to articulate program theory. We search the literature to discover relationships between action and result. We choose variables based on program theory. We execute methodologies and interpret findings based on program theory. These activities assume that critical elements of a system can be identified and that relationships among those elements can be articulated. Although we certainly believe in interactions, feedback loops, and unpredictable environments, we have faith that to some reasonable degree, relationships among critical variables can be defined, and will remain stable over time. Without a doubt, our faith is often rewarded by well-designed programs and valid, useful evaluation. Often, though, either the program, or its evaluation, or both, do not work out as expected. The usual prescription to remedy failure is to do what we have always done, but to try harder. We must do a better job of understanding how a program really works. We must put more work into assuring fidelity of program reality to program theory. We must change the variables we measure, or do a better job of measurement, or use a more powerful mix of methodologies. But there is another possible explanation for failure. It may be that our theories miss the true dynamics that explain system behavior. This would be the case for evaluation settings that are best described with the principles of Complex Systems Theory.

February 27, 2003
Title: Toward a Methodology of Evaluation

Presenter: Dr. Christopher Nelson – Senior Research Associate, The Evaluation Center
Abstract: Evaluative inferences are a combination of two sets of claims: (a) factual descriptions about the evaluand and (b) evaluative criteria that provide a basis for saying whether the evaluand is good, bad, or otherwise. The focus on evaluative criteria is, indeed, what distinguishes evaluation from applied social science and other endeavors. Yet, with a few notable exceptions, the evaluation literature is strangely underdeveloped in this area. This talk seeks to do three things. First, it outlines a number of core questions that must be addressed in a fully adequate methodology of valuation. Among these questions are:
• What is the knowledge-status of value claims in real-world evaluations? That is, can they be justified in the same way as empirical claims?
• Where should evaluative criteria come from?
• How can evaluators address value tradeoffs?
Second, it draws upon developments in a number of fields that provide useful perspectives on these core questions. In particular, the talk draws upon political science, welfare economics, and administrative law. Finally, the talk outlines some ways in which these existing techniques might be improved or elaborated to meet the needs of practitioners.

March 10, 2003
Title: The Success Case Method for Evaluation
Presenter: Dr. Robert Brinkerhoff – Professor of Counseling Psychology & Counselor Education, Western Michigan University

Abstract: Dr. Brinkerhoff will explain how the Success Case Method (SCM) works and describe several recent applications in Fortune 500 companies. SCM is an innovative evaluation approach that organizations can use to find out quickly what is working and what is not with new organizational change initiatives. Called “The next big thing in evaluation” by Michael Scriven, SCM is faster and less expensive than traditional evaluation methods and provides compelling evidence about the impact of organizational interventions such as training programs, technology installations, and so forth.


March 24, 2003

View the Slides

Title: Beyond Use: A Theory of Evaluation Influence
Presenter: Dr. Gary Henry – Professor of Public Administration and Urban Studies, Political Science, and Educational Policy Studies; Georgia State University

Abstract: Dr. Henry’s presentation will be based on a paper he coauthored with Mel Mark.

Use is a core construct in the field of evaluation. Nevertheless, the term “use” conjures up different meanings, alternatively signifying empirical, normative, conceptual, and procedural interpretations. We present a number of reasons why the field should move beyond use in order to make progress on understanding how and through what mechanisms evaluation influences attitudes and actions. We summarize a multi-level framework for understanding the effects of evaluation at three levels of analysis: individual, interpersonal, and collective. Elaborating on this framework, we illustrate alternative mechanisms through which evaluation can influence change, such as policy diffusion, minority-opinion influence, and attitude change. We also illustrate how different mechanisms can be triggered sequentially, forming different “pathways” of influence. We discuss several of the potential benefits of the proposed framework. The framework offers a way of thinking about and improving the evidence base concerning evaluation outcomes as well as a new vocabulary for specifying, targeting, and measuring the influence of evaluations.


April 7, 2003
Title: The Boundaries of Evaluation
Presenter: Dr. Carolyn Sullins – Senior Research Associate, The Evaluation Center, Western Michigan University

Abstract: What do you do when your evaluation client wants or needs services or products that seem somewhat outside the realm of traditional evaluation (e.g., program development or management consultation)? What do you do if circumstances make these auxiliary roles more appropriate than evaluation – especially if this is not discovered until partway through the evaluation? How can you clearly communicate the boundaries of your roles to clients? What do you do when these roles need to be renegotiated?

Dr. Sullins will provide examples of these challenging situations, including some from her own work, and invite the participants to share their own experiences as well.


April 28, 2003
Title: Developing Capacity in Evaluation and Assessment in Southern Africa
Presenter: Dr. Sarah Howie – Director of the Centre for Evaluation and Assessment, University of Pretoria (South Africa)
Abstract: In May 2002, the University of Pretoria in Pretoria, South Africa, established the Centre for Evaluation and Assessment (CEA) in the Faculty of Education. It was envisaged that the Centre would provide leadership in research within these fields and produce researchers and professionals who would build and strengthen a base of expertise within agencies and institutions, both governmental and non-governmental, dealing with education and the quality thereof in South Africa and in the region. In keeping with this vision, the CEA has undertaken a number of research projects and has initiated a Masters programme in Evaluation and Assessment. In 2004, a new seminar-based Ph.D. programme will be introduced, aimed at developing leadership in the country. During this presentation, I will discuss some of the exciting endeavours and the challenges that the new Centre is facing in its key task to develop capacity in Southern Africa.

May 14, 2003
Title: Improving Public Schools by Example Rather Than Competition: Results From the Statewide Evaluation of Charter Schools in Connecticut

Presenter: Dr. Gary Miron – Principal Research Associate, The Evaluation Center, Western Michigan University

Abstract: In terms of performance accountability and regulatory accountability, charter schools in Connecticut are among the very best in the country. This judgement is based on our work evaluating charter schools in four other states, as well as the extensive literature reviews and the meta-analysis of results from studies of charter schools across the country that we have conducted. Some reasons for the exceptional performance of Connecticut’s charter schools are that these schools have received better funding and more technical assistance than charter schools have received in other states. Perhaps the most important factor is that the demands for accountability in Connecticut are more rigorous than in any other state we have studied. The small size of the reform has made it possible for the State to provide effective assistance and oversight. At the same time, the very small size of the reform suggests that the greatest hope for positive impact will be the examples these schools set for others, rather than the competitive effect that would put pressure on districts to improve.

July 11, 2003
Title: Prospective Evaluation: Thoughts on How to Assess the Potential Merit and Worth of Programs

Presenter: Dr. Christopher Nelson – Senior Research Associate, The Evaluation Center, Western Michigan University
Abstract: Program and policy evaluators are often called upon to help decision makers arrive at judgments about the potential merit and worth of programs. Even retrospective, summative evaluations are usually designed to inform decisions about “investments” in future program activities. Taking the notion of investment as a point of departure, this talk conceptualizes some of the core issues in prospective evaluation and considers a range of useful methods. Throughout, the talk will draw upon the author’s experience in helping the U.S. Department of Transportation evaluate the potential merit and worth of safety interventions in the railroad industry.

July 22, 2003
Title: Evaluation in the Private Sector: A Glimpse Inside Andersen Learning and Personal Growth’s Evaluation and Measurement Function

Presenter: Mr. Carl Hanssen – Research Associate, The Evaluation Center, Western Michigan University
Abstract: Mr. Hanssen spent six years as part of the Evaluation and Measurement team within Andersen Learning & Personal Growth. Following the breakup of that organization in 2002, he offers a perspective on a group that, during its existence, was widely regarded as one of the premier evaluation functions in the United States. This presentation and discussion will explore several topics aimed at providing insights into what evaluators might expect if they pursue an evaluation career in the private sector. Topics that will be examined include:
· Evaluator roles within Andersen Learning & Personal Growth
· Evaluation orientation and typical projects
· Funding mechanisms for evaluation
· Individual and team performance measures
· Selected evaluation issues