2009 – 2010


September 15, 2009

Watch the video

Title: Contemporary Trends & Movements in Evaluation: Evidence-Based, Participatory & Empowerment, & Theory-Driven Evaluation

Presenter: Chris Coryn – Director, Interdisciplinary Ph.D. in Evaluation Program, WMU

Abstract: In recent years, three major movements have dominated evaluation practice: evidence-based evaluation, participatory and empowerment evaluation, and theory-driven evaluation (TDE). Of these, the first has received the greatest attention, particularly in education, health, and medicine, mostly centered on randomized controlled trials (RCTs). Whereas RCTs are premised on the assumption that they are the most appropriate method for generating strong knowledge claims as to “does it work?” (i.e., causal description), TDE is premised on the assumption that, by formulating and testing complex causal hypotheses, it generates knowledge claims as to “how?” “for whom?” and “under what conditions?” (i.e., causal explanation) a treatment or intervention works. Participatory and empowerment approaches, by contrast, are premised on the assumption that engaging stakeholders in the evaluation process generates greater buy-in and that such evaluations are therefore more likely to be used and to have an impact than more scientifically rigorous approaches that often exclude stakeholders from the evaluation process. In this Café, the key assumptions, strengths, weaknesses, and commonalities of these three approaches will be critically examined, with an emphasis on the specific contexts in which each is appropriate and their implications for contemporary evaluation theory and practice.

September 24, 2009

Watch the video

Title: RealWorld Evaluation: Maximizing Utility in Spite of Inadequate Budget, Time, & Data As Well As Conflicting Political Pressures

Presenter: Jim Rugh – Independent Consultant

Abstract: Jim Rugh was a coauthor (with Michael Bamberger and Linda Mabry) of the RealWorld Evaluation book published by Sage in 2006. That book and the many workshops on the subject at AEA and in many countries around the world have continued to be very popular – probably because many new and even experienced evaluators are looking for practical ideas, backed by experience, on how to conduct adequately credible program evaluations in spite of constraints such as inadequate budget and time, lack of data (such as from baseline and comparison groups), political pressures, and so on. Jim will summarize the RealWorld Evaluation approach and address lessons learned from many workshops and current discussions among networks of evaluators on conducting impact evaluations, especially of international development programs.

September 29, 2009

View the slides

Title: Evaluating School Mathematics, Science Textbooks, & Classroom Practice Using an Assessment for Learning Framework

Presenters: Amy Bentz & Jonathan Engelman – Assessment for Learning Scholars, WMU; Steven Ziebarth – Associate Professor, Mathematics, WMU

Abstract: This session examines research into the extent to which new mathematics and science textbooks have incorporated assessment for learning (AfL) practices in their materials and, using a similar analysis framework, how we have developed an observation protocol to help gather data about AfL teaching practices in secondary classrooms. We will examine one section of the framework, and attendees will have an opportunity to use a portion of the protocol.

October 6, 2009

Title: (Meta)Evaluation at the International Labour Organization: Current Practice, Potential for Improvement, & Prospects for In-Depth Study

Presenters: Daniela Schroeter – Director of Research, The Evaluation Center; Anne Cullen – Senior Research Associate, The Evaluation Center; Kelly Robertson – Project Manager, The Evaluation Center

Abstract: In this session, we will first discuss the independent appraisal of a sample of the International Labour Organization’s (ILO) 2008 evaluation reports developed for its Evaluation Unit (EVAL), using adapted metaevaluation tools and methodologies. Second, we will emphasize differences between evaluation methodologies used in the past two years and invite attendees to join the discussion of means for improving appraisals for ILO EVAL. Third, we will discuss the challenges of comparing metaevaluation findings across years and raters. Finally, we will introduce our ideas for studying ILO evaluation in more detail.

October 15, 2009

Title: Exploring a Utilization-Focused Evaluation Option: Developmental Evaluation Using Systems Thinking & Complex Nonlinear Dynamics

Presenter: Michael Quinn Patton – Independent organizational development and evaluation consultant

Abstract: Systems thinking and attention to complex nonlinear dynamics have created new opportunities—and new challenges—in evaluation. Developmental evaluation offers an alternative to traditional linear logic models and an additional option beyond formative/summative designs, one especially suited to innovative initiatives operating in turbulent and uncertain conditions that display the characteristics of emergent complexity. Patton will discuss developmental evaluation as an option in support of more sophisticated matching of evaluation to the nature of interventions within a utilization-focused evaluation framework. Students will also have the opportunity to review chapters of, and provide feedback on, Patton’s new book on Developmental Evaluation just before he submits the final manuscript for publication.

October 20, 2009

Title: Evaluating Evidence-Based Medicine

Presenter: Cristian Gugiu – Doctoral student, Interdisciplinary Ph.D. in Evaluation Program, WMU

Abstract: While there is much to admire about existing efforts at grading the strength of evidence, considerable shortcomings and unsubstantiated logic limit their usefulness and validity. An abundance of grading guidelines exists for evaluating the quality of evidence produced by a study and for generating informed recommendations on the basis of that evidence, the balance between desirable and undesirable consequences, and consideration of costs and relevant values. On this basis, clinicians are able to develop treatment plans tailored to meet evidence-based standards as well as patient needs and desires. Over time, grading systems have become more sophisticated and complex in the methodology they employ to evaluate the strength of evidence. Although over the past five years a consensus appears to be emerging with regard to how evidence should be evaluated, it is important to question research assumptions periodically because methodological hubris can be just as dangerous as medical hubris. This presentation will review several popular grading guidelines used by medical researchers. Furthermore, two new grading guidelines, one for grading research designs and the other for grading the statistical components of a study, will be introduced.

October 27, 2009

Watch the video

Title: Evaluation – A Quasi-Profession?

Presenter: Wolfgang Beywl – Berne University/Univation Cologne

Abstract: A recent monograph on Evaluation in Germany (Brandt, 2009) contends that evaluation is far from becoming a profession. A comparison of three German-speaking countries across ten broad evaluation fields (Widmer/Beywl/Fabian, 2009) comes to similar conclusions. A main argument is that evaluation has no unique theoretical core in relation to contiguous academic disciplines or management approaches. Wolfgang Beywl will present theses on attributes of evaluation which, in specific combinations, constitute such a theoretical/methodological core. Uniqueness could be established at least for program evaluation in human services sectors, a main branch of the evaluation business.

November 3, 2009


Watch the video

Title: Documenting Dependency Relationships Among the Standards to Facilitate Metaevaluation

Presenter: Carl Westine – Doctoral Associate, Interdisciplinary Ph.D. in Evaluation Program, WMU

Abstract: The 30 standards set forth by The Joint Committee on Standards for Educational Evaluation in The Program Evaluation Standards (PES) (Joint Committee, 1994) form the basis for a checklist used in metaevaluation (Stufflebeam, 1999). However, identifying overlap among the standards should simplify the metaevaluation process. Through a systematic content analysis, we learn what the PES reveals about the overlapping nature of the standards. For most standards, explicit references to as many as ten other standards appear in the textual overview, guidelines, and common errors sections of the PES. Additional references among standards that are not explicitly stated are also documented. Incongruence in references between standards implies that a dependency relationship exists. Moreover, the PES functional table of contents outlines further dependency relationships among the standards. Documenting well-defined dependency relationships has implications for how the standards could be differentially weighted in a condensed and efficient instrument to facilitate metaevaluation.

November 24, 2009

Title: Clients’ Perceptions of Buprenorphine Versus Methadone Maintenance Treatments

Presenters: Stephen Magura – Director, The Evaluation Center, WMU; Awgu Ezechewcu – Doctoral student, Interdisciplinary Ph.D. in Evaluation Program, WMU

Abstract: Methadone and buprenorphine are maintenance treatments for opioid addiction. Unlike methadone, buprenorphine is relatively new and is not the primary standard maintenance treatment in the USA or internationally. This study examines clients’ perceptions of methadone versus buprenorphine treatments offered to inmates at the Key Extended Entry Program (KEEP) in the Rikers Island jail, New York City. The method consists of an analysis of structured and semi-structured interviews to investigate (a) clients’ satisfaction with the KEEP program, (b) clients’ satisfaction with methadone versus buprenorphine treatments, and (c) clients’ overall perceptions of methadone versus buprenorphine treatments.

December 1, 2009

Title: Evaluative Tools for Assessing Quality in Healthcare Organizations

Presenter: Stephanie Means – Doctoral student, Interdisciplinary Ph.D. in Evaluation Program, WMU

Abstract: Quality in health care is important, and so is measuring it the right way. Accurate, evaluative quality assessment leads to improved healthcare structures, processes, and outcomes. A review of the literature revealed differences in how healthcare quality is assessed by the Joint Commission on Accreditation of Healthcare Organizations and the Malcolm Baldrige National Quality Award. In recent years, the government, healthcare administrators and personnel, and researchers have recognized the need for innovative ways to evaluate and quickly disseminate information concerning quality in healthcare organizations. This presentation will identify appropriate evaluation tools for assessing quality at different stages of the healthcare delivery process.

January 27, 2010

Watch the video

Title: An Epidemiologist’s Perspective on Some Evaluation Research Issues

Presenter: Jim Anthony – Professor of Epidemiology, Michigan State University

Abstract: If a study is to be “epidemiological,” there must be a theory or concept of a “source population” under study. Or, there must be a specific study population roster to be studied or sampled, often nested within a source population, as is required for estimation of standardized mortality ratios and standardized morbidity ratios. This basic principle of epidemiological research is the starting point for today’s talk, which includes a selective and somewhat autobiographical review of evaluation research issues that arise when evaluation researchers might choose to assume the epidemiologist’s perspective.
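
The abstract’s passing reference to standardized mortality ratios can be made concrete with a small worked example. The following is a minimal, hypothetical sketch (not material from the talk) showing how an SMR compares deaths observed in a study population with the deaths expected if the source population’s age-specific rates applied; every stratum, rate, and count below is invented for illustration.

```python
# Hypothetical illustration: standardized mortality ratio (SMR).
# SMR = observed deaths / expected deaths, where expected deaths come from
# applying source-population death rates to the study population's
# person-years within each age stratum.

# (age stratum, person-years in study population, observed deaths) -- invented
study = [("40-49", 1200.0, 3), ("50-59", 900.0, 7), ("60-69", 400.0, 9)]

# Source-population death rates per person-year, by stratum -- invented
source_rates = {"40-49": 0.002, "50-59": 0.006, "60-69": 0.018}

observed = sum(deaths for _, _, deaths in study)
expected = sum(person_years * source_rates[stratum]
               for stratum, person_years, _ in study)

print(f"Observed: {observed}, Expected: {expected:.1f}, SMR: {observed / expected:.2f}")
```

An SMR above 1 indicates more deaths in the study population than the source-population rates would predict; this is why a clearly defined source population is central to the epidemiological perspective described above.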

February 2, 2010

Watch the video

Title: The Utility of Clinical Diagnostic Reasoning in Performance Analysis

Presenter: Nicholas Andreadis – Acting Dean, Lee Honors College, WMU

Abstract: The effective practice of organizational development (OD) requires that interventions be based on the results of a thorough performance analysis. Practitioners are taught to recognize suboptimal performance, collect and analyze relevant data, diagnose and draw conclusions, and then introduce rational interventions. Cognitive reasoning processes are inherent in these analytical approaches, and making the right diagnosis is essential to the performance improvement process. In this discussion Dr. Andreadis will argue that the art and science of clinical diagnosis, as performed by experienced physicians, has utility in organizational settings as a method of reasoning helpful to OD practitioners.

February 9, 2010

Title: The Direction for Research at Western Michigan University

Presenter: Paula Kohler – Associate Vice President, WMU

Abstract: Given the change in federal administration and the rapid development of new legislation and technologies, opportunities abound for research and evaluation. Western Michigan University is poised to build on its existing research and evaluation capacities and to take greater steps toward collaboration with federal departments. As the new Associate Vice President for Research, Dr. Kohler will speak about the evolution of research opportunities for faculty and staff and her vision for Western’s part in the increasing call for evidence-based practices.

February 16, 2010

Watch the video

Title: Evaluation Reports as Metaevaluation Data: How Much Can and Should Written Reports Tell Us About an Evaluation’s Quality?

Presenter: Lori Wingate – Senior Research Associate, The Evaluation Center, WMU

Abstract: Evaluation reports often are the primary source of data used for a metaevaluation, but they offer only a limited view of how an evaluation was conducted. In this session, results will be presented from a metaevaluation in which 30 independent raters judged how well 10 evaluations met the Program Evaluation Standards. An analysis of the relative frequency with which raters refrained from judgment because of “insufficient information” across the 30 Standards will serve as a springboard for a facilitated discussion about (a) what information can and should be included in evaluation reports, (b) which Standards can be applied when reports are the only source of data for an evaluation, and (c) which Standards metaevaluators should exclude from consideration if they do not have access to additional data.

February 23, 2010

Title: Nonprofits and Evaluation: Internal and External Accountability

Presenter: Bobbe Luce – Director of ONEplace, Kalamazoo Public Library

Abstract: Nonprofit organizations are mission-driven, public entities that typically focus most of their energies on the programs and services they believe benefit their constituents. More and more, donors and funders are asking them to “prove” that benefits actually occur; that outcomes, rather than just numbers served, are delivered effectively and efficiently. Government also requires them to increase internal operational competencies, transparency, and accountability. While evaluation is increasingly recognized as necessary and valuable, many nonprofits don’t know how to get started, which tools to use, what to measure, how to use data, or how to work with outside evaluators. Bobbe A. Luce, director of ONEplace @ kpl, will explore these challenges and what you need to know about nonprofits to work with them in the evaluation process.

March 9, 2010

Title: An Introduction to Analyzing Text Data with MAXQDA

Presenter: Chris Coryn – Director, Interdisciplinary Ph.D. in Evaluation Program, WMU

Abstract: No longer are colored pencils, sticky notes, or selected text segments the only means available for organizing, analyzing, and interpreting text data. In recent years, a proliferation of technological advances has made computer-assisted analysis of such data more efficient, systematic, and reliable. Even so, few are either aware of or able to use such technologies effectively and, unfortunately, they are not often a part of courses on qualitative research methods. In this Evaluation Café, Coryn will introduce and demonstrate some basic methods for managing and analyzing text data in MAXQDA (formerly WINMAX), including structured coding procedures, methods for examining relationships in qualitative data, and exporting results to statistical packages for other types of analysis. To follow the demonstration, attendees are encouraged to bring a laptop computer and to download the free trial version of MAXQDA from http://www.maxqda.com/. We anticipate that attendees will leave the session with a very general understanding of how to use MAXQDA for systematic analysis of text data.
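
For readers who have never worked with computer-assisted text analysis, the sketch below is a deliberately simplified, hypothetical illustration in plain Python (it is not MAXQDA and uses none of its features or file formats) of the general workflow the demonstration covers: attaching codes to text segments and exporting code frequencies in a form a statistical package can import. All codes and interview excerpts are invented.

```python
# Hypothetical sketch of structured coding of text data (not MAXQDA).
# Each coded segment pairs a document, a text excerpt, and a code;
# code frequencies are then exported to CSV for statistical analysis.
import csv
from collections import Counter

coded_segments = [
    ("interview_01", "The program helped me find steady work.", "employment"),
    ("interview_01", "Child care was the biggest barrier.", "barriers"),
    ("interview_02", "Transportation made attendance difficult.", "barriers"),
    ("interview_02", "Staff were supportive throughout.", "program_support"),
]

# Tally how often each code was applied across all documents.
code_frequencies = Counter(code for _, _, code in coded_segments)

# Export the frequencies so a statistical package can read them.
with open("code_frequencies.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["code", "frequency"])
    for code, freq in code_frequencies.most_common():
        writer.writerow([code, freq])

print(dict(code_frequencies))
```

Dedicated packages such as MAXQDA layer document management, memoing, retrieval, and visualization on top of this basic segment-code structure, which is what makes them more efficient than manual coding for larger corpora.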

March 16, 2010

Title: Evaluation of the “Letter and Life” Professional Development Program for Teachers

Presenter: Adriana Bauer – Visiting Scholar, Interdisciplinary Ph.D. in Evaluation Program, WMU

Abstract: “Letter and Life” was a professional development program for teachers proposed by the São Paulo State Secretary of Education as a key piece of the São Paulo educational system reform of 2003. The program was developed in a context of educational management oriented toward better results and quality of education, one that prioritized the production of data to support decisions. In that context, evaluation of students’ knowledge is understood as a management tool, and its results provide feedback to the Letter and Life program. The program has changed since its implementation in response to student results and to informal evaluations based on questionnaires answered by course participants at the end of the course. Bauer, a Ph.D. student, is proposing an extension of this evaluation research to understand the program’s possible effects on student achievement and teacher practices.

March 23, 2010

Watch the video

Title: Foundation Evaluation

Presenter: Teresa Behrens – Editor-in-Chief, The Foundation Review & Senior Research Associate, Community Research Institute

Abstract: Teri Behrens is the former Director of Evaluation at the W. K. Kellogg Foundation. She left WKKF in January 2009 to join the Johnson Center for Philanthropy at GVSU, where she launched The Foundation Review, the first peer-reviewed journal of philanthropy. Teri will discuss why she started this journal and the role and challenges of peer review in the philanthropic sector. She’ll also provide an overview of the Johnson Center for Philanthropy.

March 30, 2010

View the slides | View the handout

Title: Michigan $aves: Evaluability Assessment & Process Evaluation of an Innovative Energy Program

Presenters: Daniela Schroeter – Director of Research; Anne Cullen – Senior Research Associate; Kelly Robertson – Research Associate, The Evaluation Center, WMU

Abstract: Michigan Saves is a newly funded, innovative, statewide program for financing energy efficiency and distributed renewable energy, intended to stimulate Michigan’s green economy. The Evaluation Center has been contracted to conduct an evaluability assessment and process evaluation of Michigan Saves. This session will discuss concept mapping as a tool employed during the evaluability assessment and outline (1) what concept mapping is, (2) how it can be used for program and evaluation planning, and (3) how it has been applied in the context of Michigan Saves to develop a process evaluation plan. To learn more about Michigan Saves, visit http://www.michigansaves.org/

April 6, 2010

Title: Models to Evaluate System Change in Community Initiatives

Presenter: Margaret Richardson – Doctoral student, Interdisciplinary Ph.D. in Evaluation Program, WMU

Abstract: Because of the circular and intertwining nature of system interactions, social system change initiatives pose challenges for robust evaluation. Logic models provide a static picture of change but lack a conceptualization of social change within a more fluid systems change framework. The more dynamic the system change initiative, the more dynamic the evaluation model needs to be to capture the nuances that enhance or inhibit change in the larger system. A theoretical framework for initiating system change will be shared to help understand the implementation factors necessary for system change, along with an example of an applied, structured framework for evaluating system change initiatives.

April 14, 2010

Title: Workshop: Participatory Evaluation Up Close

Presenter: J. Bradley Cousins – Professor of Educational Administration, University of Ottawa

Abstract: This full-day workshop will consider connections between the theory and practice of participatory evaluation and the promise and potential of research on participatory approaches. Two streams, practical and transformative participatory evaluation, will be differentiated, and consideration will be given to important process dimensions. In addition to practical, illustrative examples, a tentative agenda for ongoing research will be discussed.

April 20, 2010

Watch the video

Title: Racism, Evaluation, & Research

Presenters: Kelly Robertson – Research Associate, The Evaluation Center, WMU; Diane Rogers – Assessment for Learning Fellow, WMU

Abstract: This presentation will examine the construct validity of race and its impact on research and evaluation. First, a brief history and discussion of the construct of race will be provided, followed by an examination of how racism has changed over time and evidence of racism today. The majority of the presentation will focus on examples of how constructs of race impact research/evaluation and how to counter that impact. Throughout the presentation we will highlight how addressing these issues can improve the quality of scientific research/evaluation. We will conclude with next steps for personal and professional growth.

 

