2011 – 2012


Evaluation Myths, Misconceptions, and Mistakes

Date: April 17th, 2012
Presenter(s): Dr. Lori Wingate—Assistant Director, The Evaluation Center, WMU
Abstract: In this Eval Café session, Lori Wingate will engage participants in a discussion of some common misunderstandings when it comes to program evaluation language and practice. In a profession where virtually anyone can put “evaluator” on their business card, it is immensely important that we protect the integrity of the terminology we use and be conscientious about how our individual practices collectively influence the field. We will look at some widely repeated evaluation mistakes and missteps, consider the causes and consequences of these problems, and generate ideas about how we can each work to set the record straight and keep it that way. Participants are invited to bring their favorite examples of evaluation myths, misconceptions, and mistakes to enrich and broaden the discussion.

Concept of Validity in the Context of Evaluation

Date: April 10th, 2012
Presenter(s): Dr. Brooks Applegate—Professor, Educational Leadership, Research, and Technology and Director, Evaluation, Measurement, and Research, WMU
Abstract: No abstract available.

A Collaborative Partnership to Define and Measure Empowering Practice within a Domestic Violence Shelter Program

Date: April 3rd, 2012
Presenter(s): Dr. Cris Sullivan—Professor, Ecological and Community Psychology and Coordinator, Violence Against Women Research and Outreach Initiative, MSU
Abstract: The goal of empowerment-based programs is to help clients increase their personal, interpersonal, and political power. This talk describes a collaborative partnership with a domestic violence shelter program interested in evaluating how well they integrated the empowerment model into day-to-day service provision and whether their approach to empowerment-based service delivery was contributing to the intended “empowered outcomes” for the women with whom they work. In the presentation, I will describe how we jointly defined empowerment within this setting, how the empowerment-based practices and intended empowered outcomes were measured, and how the process has impacted the work of the advocates.

Pass the Aspirin: When Projects Become Headaches

Date: March 27th, 2012
Presenter(s): Dr. Mary Anne Sydlik—Director, SAMPI, WMU; Dr. Bob Ruhf—Senior Research Associate, SAMPI, WMU; and Kristin Everet—Senior Research Assistant, SAMPI, WMU
Abstract: Science and Mathematics Program Improvement (SAMPI) at Western Michigan University currently has a number of evaluation projects underway, seven out for review, and six in the early stages of development with potential clients. Members of the SAMPI evaluation team will address challenges that can arise 1) during the pre-submission proposal/project development phase; 2) while trying to coordinate evaluation and project activities with another organization; and 3) when clients’ expectations change mid-course in ways that exceed the evaluation budget and the evaluator’s time and energy, and cost overruns threaten to shut down the evaluation before it can be completed.

“What is in Question?”

Date: March 20th, 2012
Presenter(s): Lee Balcom—Interdisciplinary Evaluation Doctoral Student, WMU
Abstract: Framing the evaluation question precedes all other design elements in an evaluation. It is also a key factor distinguishing the inquiry process of evaluation from that of research. However, the literature makes little mention of the particulars of choosing an evaluation question. This presentation identifies theoretical perspectives and technical implications involved in framing the evaluation question and presents a tool for selecting questions deliberately.

Utilizing the PMBOK and PRINCE2 for Project Evaluation Management

Date: March 13th, 2012
Presenter(s): Willis H. Thomas, Ph.D., CPT, PMP—Alumnus, Interdisciplinary Ph.D. in Evaluation, WMU
Abstract: The Project Management Body of Knowledge (PMBOK) and Projects in Controlled Environments (PRINCE2) are two well-recognized global standards for project management. Millions of practitioners utilize these standards to support project evaluations. Evaluators are engaged in all types of projects and have to choose from a multitude of models, methods, and approaches. This presentation discusses managing evaluation projects using both of these standards.

The Value of Voice: Gaining Access to Marginalized Populations

Date: February 22nd, 2012
Presenter(s): Dr. Karen Vocke—Associate Professor, Department of English, WMU; and Dr. Brooks Applegate—Professor, Educational Leadership, Research, and Technology and Director, Evaluation, Measurement, and Research, WMU
Abstract: This presentation focuses on issues related to evaluating marginalized populations. Known challenges include gaining access and the complicated issues of ethical representation necessary for authentic evaluation. However, critical examination shows that when the evaluation process includes members of marginalized populations, results are more tangible, valid, and generalizable, and participation among the sample under study increases. Because the needs of marginalized populations are nuanced and diverse, evaluators must carefully consider the procedures and analyses involving the evaluation participants, especially the need for authentic, rather than token, participation. This presentation offers a protocol for access, collaboration, and evaluation for working with marginalized subpopulations in the K-12 setting, namely children and families of migrant farm workers and students with disabilities. Strategies will be presented for planning evaluations, accessing populations, developing survey instruments, building a collaborative team, collecting data, and analyzing data.

Rawlsian Political Analysis and its Links with Evaluation

Date: February 14th, 2012
Presenter(s): Dr. Paul Clements—Professor, Political Science and Director, MIDA Program, WMU
Abstract: My forthcoming book, Rawlsian Political Analysis: Rethinking the Microfoundations of Social Science, offers a new approach to social analysis as an improvement on neoclassical economics and rational choice theory. While these approaches are based on the assumption of rational utility maximization, I follow Rawls and Kant and assume that choice is based on independent reasonable and rational capacities. I will begin this presentation by laying out the book’s principal arguments. Then I will discuss two of the book’s applications of the proposed approach that directly involve evaluation: analyses of the Grameen Bank of Bangladesh and of the ethics and politics of climate change.

Writing Your HSIRB Protocol and the HSIRB Review Process

Date: February 7th, 2012
Presenter(s): Julia Mays—Research Compliance Coordinator, WMU
Abstract: To assist you with the HSIRB protocol and review process, the Research Compliance Coordinator will discuss general HSIRB processes, levels of review, and how to submit, and will provide examples of research to help define “human subject research” under the Federal definition. Topics will include common errors that slow down the review process from submission to approval. If you are preparing to write an HSIRB protocol, please consider attending, as we will discuss applications, protocol outlines, and consent document requirements.

Veterans: Understanding and Serving this Unique and Growing Population on Campus

Date: February 1st, 2012
Presenter(s): Tracey Moon—Director, Office of Military and Veterans Affairs, WMU
Abstract: When evaluating veterans, it is essential to be aware of the unique attributes of this population. This workshop will cover background information on veterans, including their strengths and struggles, and suggestions for conducting an effective data collection process. Community resources for accessing participants, questions to ask or avoid, and other considerations to be mindful of when interviewing both male and female service members will be discussed as well. Participants will leave with a better understanding of military and veteran perspectives and with specific strategies for evaluating this diverse group more effectively.

What Does it Take for an Outsider to Evaluate in Cross-Cultural Contexts: What About the Cultural Nuances & Subtleties in Language Dialects?

Date: January 25th, 2012
Presenter(s): Dr. Tererai Trent—Founder and Principal Evaluator, Tinogona Evaluation and Research Consulting
Abstract: Debate continues on the value of evaluations performed by outsider evaluators in cross-cultural settings. One view maintains that it does not matter as long as the individual is a professional evaluator; if cultural competence is an issue, a local translator can be brought into the process. Others strongly believe that the nuances of local cultures and the subtleties of language dialects are too often lost in translation to the outsider evaluator. Through a review of the literature and experience, the question shifts from 'who has the right to evaluate?' to 'what does it take for an outsider to evaluate in cross-cultural contexts?' To gain insight into whether an outsider can possess the credibility, validity, and cultural competence required in a cross-cultural context, my observations draw on the work of Betty LaDuke, a renowned artist with an international reputation for her murals, paintings, and sketches.

Meta-Analysis De-Mystified: A Step-by-Step Guide

Date: January 17th, 2012
Presenter(s): Robert McCowen—Interdisciplinary Evaluation Doctoral Student, WMU
Abstract: Meta-analysis is a powerful technique for synthesizing the results of multiple studies into a single conclusion. It yields objective, empirical, and largely value-neutral evidence; using meta-analysis, evaluators can estimate the central tendency of study outcomes, test the pattern of variations in outcomes, and estimate the overall effects of and relationships between variables. This Eval Cafe session will focus on providing an overview of this tool, including examples and resources for audience members who are interested in finding out how to conduct their own meta-analyses.
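
To make the mechanics concrete, here is a minimal sketch of the fixed-effect, inverse-variance approach that underlies most meta-analyses. The effect sizes and variances are hypothetical, and this is an illustration of the general technique, not the presenter's own procedure.

```python
# A minimal fixed-effect (inverse-variance) meta-analysis.
# Each study contributes an effect size and its sampling variance;
# more precise studies (smaller variance) receive more weight.
import math

# Hypothetical (effect_size, variance) pairs from five studies.
studies = [(0.42, 0.04), (0.31, 0.09), (0.55, 0.02), (0.12, 0.16), (0.38, 0.06)]

weights = [1.0 / var for _, var in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect

print(f"Pooled effect: {pooled:.3f}")
print(f"95% CI: [{pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}]")
```

The inverse-variance weighting is what lets the pooled estimate reflect the central tendency of study outcomes while downweighting small, noisy studies.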

A Preview of Evaluation Theory, Models, and Applications (2nd ed.)

Date: December 13th, 2011
Presenter(s): Chris Coryn — Director, Interdisciplinary Ph.D. in Evaluation Program, WMU
Abstract: The second edition of Evaluation Theory, Models, and Applications has undergone substantial revision since the first edition was published by Stufflebeam and Shinkfield in 2007. One major change is the inclusion of several additional evaluation approaches (and the addition of a new second author). In the first edition, 26 unique evaluation approaches were introduced and described. In this edition, many of the evaluation approaches have been substantially revised and updated. Other evaluation approaches have been added, including transformative evaluation, participatory evaluation, consumer feedback, research synthesis, and meta-analysis. Additionally, the second edition has a dedicated website provided by Jossey-Bass which includes course syllabi, PowerPoint summaries of each chapter, additional study questions, and links to other relevant websites. In this Café, Coryn and Stufflebeam will compare and contrast the two editions.

Meta-Analysis as a Method of Multi-Site Evaluation of International Development Programs & Projects

Date: December 10th, 2011
Presenter(s): Dr. Chris Coryn—Associate Professor and Director, Interdisciplinary Ph.D. in Evaluation Program, WMU; and Kristin A. Hobson—Interdisciplinary Evaluation Doctoral Student, WMU
Abstract: In this Evaluation Café, the presenters will demonstrate how meta-analysis methods can be used to study the effects of multi-site programs and projects, including how meta-analysis can be used to investigate and explain between-site variability. The presenters will also discuss some of the advantages and disadvantages of the meta-analysis approach over more traditional multi-site evaluation methods using an example from a recent study of Heifer International projects in three countries.
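
As a rough illustration of how between-site variability can be examined, the sketch below computes Cochran's Q and the I² statistic for a set of hypothetical site-level effects. It is not drawn from the Heifer International study itself.

```python
# Gauging between-site variability with Cochran's Q and I^2.
# The site-level effects and variances below are hypothetical.

sites = [(0.50, 0.05), (0.10, 0.08), (0.65, 0.04)]  # (effect, variance) per site

weights = [1.0 / var for _, var in sites]
pooled = sum(w * es for (es, _), w in zip(sites, weights)) / sum(weights)

# Cochran's Q: weighted squared deviations of site effects from the pooled effect.
q = sum(w * (es - pooled) ** 2 for (es, _), w in zip(sites, weights))
df = len(sites) - 1

# I^2: the share of total variation attributable to real between-site
# differences rather than sampling error (floored at zero).
i2 = max(0.0, (q - df) / q) if q > 0 else 0.0

print(f"Q = {q:.2f} on {df} df; I^2 = {100 * i2:.0f}%")
```

A large Q relative to its degrees of freedom (and a high I²) signals that the sites genuinely differ, which is the kind of between-site variability the presenters propose to investigate and explain.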

Evaluation of Mobile Clinic & Emergency Department Case Management

Date: December 6th, 2011
Presenter(s): Raymond Higbea — Assistant Professor, Public Affairs & Administration, WMU
Abstract: This presentation will compare and contrast summative evaluations of the implementation phase of two projects that address access to healthcare services at opposite ends of the access-to-healthcare spectrum. First, the Nursing Clinic of Battle Creek launched a mobile healthcare clinic to address the lack of access to healthcare services among identified uninsured and underinsured populations of Calhoun County. The goals of the mobile clinic are to connect individuals with a medical home as often as possible, provide primary care services for individuals seeking treatment, connect individuals with chronic illnesses with a primary care provider, and refer all emergent cases to the closest emergency department. Second, a case management system was launched in the Bronson Battle Creek Emergency Department to address the needs of individuals who overuse and misuse the healthcare system. The goals of the Emergency Department Case Management program are to decrease the frequency of Emergency Department visits, connect individuals with a medical home as often as possible, increase medical regimen compliance, increase health literacy, and increase lifestyle function.

A Lesson in Carefully Managing Resources: A Case Study from an Evaluation of a Local Program

Date: November 29th, 2011
Presenter(s): Kristin Hobson & Jason Burkhardt — Project Staff, The Evaluation Center
Abstract: Join Jason Burkhardt and Kristin Hobson in examining and discussing an evaluation of a small-scale program. This presentation will explore the time, data, and budget constraints involved in the evaluation of the Marvelous Music! Program. Through a guided discussion, attendees will reflect upon the evaluation plan and process and will compare small- and large-scale evaluations in terms of rigor, reliability, and validity.

Evaluation as a Non-profit Capacity Building Strategy

Date: November 8th, 2011
Presenter(s): Meg Blinkiewicz — Principal, Innovatus Consulting
Abstract: Too often, organizations think they are operating with a shared understanding of their purpose, values, assumptions, and goals, but in reality this shared meaning does not exist. Evaluation as an ongoing, embedded process helps organizations translate their vision, mission, and values into measurable goals, standardized practices, and improved results. This session will illustrate how the process of ongoing evaluation, or continuous quality improvement, can impact organizations’ hiring practices, training, budgeting, programming, and fund development. Examples of successful strategies will be presented and discussed. Specific and useful evaluator skills will also be highlighted and discussed.

Intellectual Property and Evaluation

Date: October 25th, 2011
Presenter(s): Michael Sharer — Director of IP Management and Commercialization, WMU
Abstract: The Intellectual Property (IP) Management and Commercialization office manages WMU’s intellectual property and is responsible for commercializing new technologies developed from research performed at the university. This “commercialization” step can take place by offering IP rights to existing corporations or by forming new for-profit enterprises specifically for marketing and distributing a new technology. This seminar will give an overview of how this process works, along with what constitutes “IP” at WMU and the different types of legal protection available for new technologies and discoveries.

Beer Evaluation

Date: October 18th, 2011
Presenter(s): Tim Suprise — Founder & President, Arcadia Brewing Company
Abstract: We all know evaluation logic can apply to programs and personnel, but it also can be applied to products—in this case, beer. Stretch your thinking about evaluation and join Tim Suprise of Arcadia Brewing Company in Battle Creek as he shares with us what beer evaluators would look for when taste testing. Seating is limited. Identification will be checked at the door.

AEA Preview Session 2

Date: October 11th, 2011
Presenter(s): Various Presenters
Abstract: In anticipation of the upcoming American Evaluation Association annual conference, we will feature four preregistered presenters who will screen their conference sessions for our Evaluation Café audience. Each presenter is limited to 15 minutes, which includes content and question-and-answer time. This session is perfect for attendees who cannot attend the annual conference or who wish to free up their conference schedules by catching WMU-related sessions ahead of time. Attendees will be asked to complete brief feedback forms for each presenter.

AEA Preview Session 1

Date: October 4th, 2011
Presenter(s): Various Presenters
Abstract: In anticipation of the upcoming American Evaluation Association annual conference, we will feature four preregistered presenters who will screen their conference sessions for our Evaluation Café audience. Each presenter is limited to 15 minutes, which includes content and question-and-answer time. This session is perfect for attendees who cannot attend the annual conference or who wish to free up their conference schedules by catching WMU-related sessions ahead of time. Attendees will be asked to complete brief feedback forms for each presenter.

Evaluation and the Public Good

Date: September 28th, 2011
Presenter(s): Rodney Hopson — Hillman Distinguished Professor, Duquesne University
Abstract: Note: This Eval Café will be two hours in length, from noon to 2 p.m. on Wednesday, September 28, in Room 210, Bernhard Center. Evaluation takes place in complex ecologies where we evaluators play important roles in building better organizations and communities and in creating opportunities for a better world. At the core of our work is an attention to the relationships, responsibilities, and relevance that make up these complex program, policy (and political), institutional, fiscal, environmental, and sociocultural ecologies. The concern for relationships obliges evaluators to consider questions such as: What key interactions, variables, stakeholders, or consumers do we need to attend to in an evaluation? The attention to responsibilities requires evaluators to consider questions such as: To whom do we owe what in evaluations? The need for relevance suggests that evaluations consider questions such as: How do we make evaluations beneficial and meaningful to the diverse communities, contexts, and cultures in which we work? The presentation will focus on questions that evaluators, those who use evaluations, and those concerned with the public good face and should be concerned about, including ways to promote the liberatory role of evaluation in the service of public values, interests, and good.

Cross-Lagged Panel Analysis of Alcoholics Anonymous Effects on Drinking

Date: September 13th, 2011
Presenter(s): Stephen Magura — Director, The Evaluation Center
Abstract: Evaluation studies consistently report correlations between Alcoholics Anonymous (AA) participation and less drinking or abstinence. Randomizing alcoholics to AA or non-AA conditions is impractical and difficult. Unfortunately, non-randomized studies are susceptible to artifacts due to endogeneity bias, where variables assumed to be exogenous (“independent variables”) may actually be endogenous (“dependent variables”). A common artifact is reverse causation, where reduced drinking leads to increased AA participation, the opposite of what is typically assumed. The presentation focuses on a secondary analysis of a national alcoholism treatment data set, Project MATCH, which consists of multi-wave data on AA participation and severity of drinking over a 15-month period. An autoregressive cross-lagged model was formulated and indicated the predominance of AA effects on reduction of drinking, not the reverse. The presentation will be accessible to evaluators without advanced statistical training.
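
For readers unfamiliar with the technique, a generic two-variable autoregressive cross-lagged specification (a sketch of the approach, not necessarily the exact model fitted in the Project MATCH analysis) takes the form:

```latex
\begin{aligned}
\mathrm{AA}_{t}    &= \alpha_{1}\,\mathrm{AA}_{t-1} + \beta_{1}\,\mathrm{Drink}_{t-1} + \varepsilon_{t}, \\
\mathrm{Drink}_{t} &= \alpha_{2}\,\mathrm{Drink}_{t-1} + \beta_{2}\,\mathrm{AA}_{t-1} + \delta_{t}.
\end{aligned}
```

The autoregressive terms ($\alpha_{1}$, $\alpha_{2}$) capture each variable's stability across waves, while the cross-lagged coefficients pit the two causal directions against each other: $\beta_{2}$ tests whether earlier AA participation predicts later drinking, and $\beta_{1}$ tests the reverse-causation path in which earlier drinking predicts later AA participation. A substantial $\beta_{2}$ alongside a negligible $\beta_{1}$ corresponds to the pattern the abstract reports, with AA effects on drinking predominating rather than the reverse.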
