2008 – 2009


September 16, 2008


Watch the video
Title: The New Metaevaluation Standards: Metaevaluation Joins Utility, Feasibility, Propriety, and Accuracy Among the Program Evaluation Standards

Presenter: Lori Wingate – Project Manager, The Evaluation Center, WMU

Abstract: In this session, we will (1) learn about the new standards on metaevaluation that have been drafted and are currently under review for the forthcoming third edition of The Program Evaluation Standards and (2) discuss ways in which they could be improved. The group’s feedback will be summarized for contribution to the national online hearing for The Program Evaluation Standards that is being conducted by The Center for Evaluation and Assessment at the University of Iowa on behalf of the Joint Committee on Standards for Educational Evaluation. The three new standards state that metaevaluations should “be responsive to the needs of their intended users” (M1 Purposes), “identify and apply appropriate standards of quality” (M2 Standards of Quality), and “be based on adequate and accurate documentation” (M3 Documentation).

September 23, 2008

Watch the video
View the slides
Title: Managing the Values of the Key Stakeholders in a Health-Care Setting

Presenter: Ezechukwu Awgu – Interdisciplinary Evaluation Doctoral Student, WMU

Abstract: Stakeholders constitute one element of the political environment or context in which health care operates. The values of key stakeholders consist of such things as their desires and goals. In the past decade, the number of stakeholders in health-care settings has been on the rise, which has increased the complexity and diversity of values. This diversity of values has decreased supportiveness and put pressure on the activities and organizational goals of health-care settings, such as clinical quality, profitability, and so forth. This presentation will address these questions: What are the values in health-care settings? What are the values of the key stakeholders? How can the values and standards of the key stakeholders be synthesized into merit and worth? Additionally, the author will discuss the qualitative synthesis method and stakeholders’ assessment mapping.

September 30, 2008

Watch the video
Title: The Student Evaluation Standards

Presenter: Arlen Gullickson – Researcher Emeritus, The Evaluation Center, WMU

Abstract: Part one of a three-part series on improving student evaluation and assessment. The Student Evaluation Standards is the product of a collaborative effort sponsored by 16 national and international organizations and certified by ANSI. This session will focus on The Student Evaluation Standards and how they serve educators by providing guidelines for evaluating students in the classroom. Specifically, the presenter will discuss how the standards were created, why they are important, and how they can be applied, using case examples to show what can go wrong when the standards are not followed. The four attributes of evaluation practice that educators strive to meet will also be introduced.

October 7, 2008

Watch the video
View the slides
Title: Assessment for Learning

Presenter: Steven Ziebarth – Associate Professor, Mathematics, WMU

Abstract: Part two of the three-part series on improving student evaluation and assessment. This session focuses on Assessment for Learning (AfL), a cross-discipline approach to student assessment that incorporates formative assessment techniques. Teachers use student assessment outcomes to modify instruction to meet the needs of their students. AfL places responsibility for learning on individual students in an attempt to foster independent learners. Students take an active role in the learning process through the use of peer assessments, self-assessments, and teacher feedback. This session will highlight research conducted on AfL and the main components of AfL implementation in a classroom.

October 14, 2008

Watch the video
View the slides
Title: Benchmarking to Improve Student Evaluation Practices

Presenters: Katharine Cummings – Associate Dean, College of Education, WMU; and Lindsay Noakes and Emily Rusieki – Interdisciplinary Evaluation Doctoral Students, WMU

Abstract: Part three of the three-part series on improving student evaluation and assessment. Benchmarking, not to be confused with benchmarks or standards, is a process for improving performance by identifying, understanding, and implementing best practices. Benchmarking is common practice among large corporations and is becoming more prevalent in education and government. To help teachers use benchmarking as a method of professional development to improve their student evaluation practices, a benchmarking toolkit is being developed that incorporates The Student Evaluation Standards and Assessment for Learning principles. The process of benchmarking and the work completed to date on the teacher toolkit will be shared.

October 24, 2008

Watch the video
Title: Contemporary Thinking about Causation in Education Evaluation: A Dialogue with Tom Cook and Michael Scriven

Presenters: Tom Cook – Professor of Sociology, Psychology, Education, and Social Policy, Northwestern University’s Institute for Policy Research; and Michael Scriven – Distinguished Professor, School of Organizational and Behavioral Sciences, Claremont Graduate University

Abstract: Contemporary thinking about causal inference in educational research and evaluation, largely centered around randomized controlled trials (RCTs), has been a point of high priority as well as contention. In this Café, Cook and Scriven will discuss their recent thinking about causation in general, as well as recent methodological developments and alternatives to RCTs for cause-probing investigations in educational and other settings more specifically.

October 28, 2008

Watch the video
Title: Needs Assessment with Youth Participants

Presenter: Brandy Pratt – Interdisciplinary Evaluation Doctoral Student, WMU

Abstract: This session will explore conducting needs assessments with youth participants through the lens of the Grand Rapids Youth Master Plan. In particular, the presenter will convey the theoretical basis, methods, and current practice. Additionally, the benefits of and challenges to adhering to youth participation principles will be addressed.

November 11, 2008

Watch the video
View the slides
Title: Evaluation, Organizational Change, and Evaluation Capacity Building at the Federal Railroad Administration

Presenter: Jonny Morell – TechTeam Government Solutions, Inc.

Abstract: The Federal Railroad Administration has been engaged in three related streams of activity: evaluating innovative safety programs, organizational change, and evaluation capacity building. What began as a single evaluation developed into a comparative evaluation of several different ways to achieve the same end. As these programs were deployed, they coalesced into an organizational change within the agency—a change that in itself had evaluation requirements. All this activity led to an emphasis on data-informed decision making and the evaluation capacity needed to support it. This presentation will summarize each of these three activities and how they interacted with one another.

November 18, 2008

Watch the video
Browse the website on which this presentation was based
Title: Better Strategies for Great Results: Using Logic Models for Quality Improvement in Planning, Managing, and Evaluation

Presenters: Cynthia Phillips and Lisa Wyatt Knowlton – Phillips Wyatt Knowlton, Inc.

Abstract: What advances have been made in creating and using logic models since the publication of the W.K. Kellogg Foundation Logic Model Development Guide in 2000?

How can they be used to improve the quality of evaluation design?
How can they be developed more quickly and efficiently?
How are they being used by stakeholders for purposes other than evaluation?

Lisa Wyatt Knowlton, Ed.D., and Cynthia C. Phillips, Ph.D., authors of “The Logic Model Guidebook: Better Strategies for Great Results” (Sage, 2009), walk through the newest ideas and innovations in the modeling process step by step:

  • Explore the role of beliefs and assumptions – Is the thinking expressed by the model informed by knowledge?
  • Learn a process for encouraging better thinking (and thus better models) – Has the modeling process helped stakeholders question their program design and/or contribute to evaluation design?
  • Discuss the importance of quality evaluation data – Are evaluation data relevant, comprehensible, and applicable to key stakeholders?
  • Consider the role of organizational learning – Are stakeholders reflective about their outcomes, and are they willing to change strategies to obtain better results?

November 25, 2008

View the slides
Title: Methodological Challenges of Collecting Evaluation Data from Traumatized Clients/Consumers

Presenter: Rebecca Campbell – Michigan State University

Abstract: This project integrates elements of responsive evaluation and participatory evaluation to compare three evaluation data collection methods for use with a hard-to-find (HTF), traumatized, vulnerable population: rape victims seeking postassault medical forensic care. The three methods yielded significantly different response rates, but all produced positive feedback about both the services received and the method of data collection. The findings suggest that evaluations with HTF service clients may need to be integrated into on-site services because other methods may not yield sufficient response rates.

December 2, 2008

View the slides
Title: Validating Self-Reports of Illegal Drug Use to Evaluate National Drug Control Policy: Taking the Data for a Spin

Presenter: Stephen Magura – Director, The Evaluation Center, WMU
Abstract: Illicit drug use remains at high levels in the U.S. The federal Office of National Drug Control Policy evaluates the outcomes of national drug demand reduction policies by assessing changes in the levels of drug use, including measures of change from several federally sponsored annual national surveys. This analysis critiques a major validity study of self-reported drug use conducted by the federal government, showing that the favorable summary offered for public consumption is highly misleading. Specifically, the findings of the validity study, which compared self-reports with urine tests, are consistent with prior research showing that self-reports substantially underestimate drug use and can dramatically affect indicators of change.

January 20, 2009

Title: HSIRB, Evaluation, and You

Presenter: Vicki Janson – Research Compliance Coordinator, Office of the Vice President for Research, WMU

Abstract: Passing a proposal through the Human Subjects Institutional Review Board can be a complicated task, particularly for evaluators. This presentation reviews HSIRB’s distinction between research and evaluation, including when evaluation projects do and do not need review. Janson will also give advice on the construction of an HSIRB application, reflecting the most current thinking by the board and federal regulations. Time will be allowed for questions from the audience.

January 27, 2009

View the slides

View the handout

Title: The Politics of Evaluating Local Reform: Caught in the Crosshairs of Conflicting Stakeholder Ideologies

Presenter: Nancy Mansberger – Associate Professor, Department of Teaching, Learning, and Leadership, WMU
Abstract: Though the Kalamazoo Promise was designed as an economic revival tool, local educators rallied in response to create more equitable opportunities for disenfranchised students. As these secondary reform efforts gained momentum, unexpected resistance emerged from powerful elites, thereby throwing the role of the external evaluation into a dilemma: How does one identify impacts and communicate about outcomes when the reform is interpreted by key stakeholders as revolution? What can this experience help us understand about the role and function of evaluation in school reform projects?

February 3, 2009

View the slides
View the handout
Title: Cultural Competence in Research and Evaluation: The “Kalamazoo Wraps” Study
Presenters: Carolyn Sullins – Senior Research Associate, Kercher Center for Social Research; and Ladel Lewis – Doctoral Student, Department of Sociology, WMU

Abstract: The Kercher Center is responsible for evaluating “Kalamazoo Wraps,” a federally funded initiative to improve services for children with mental health issues and their families. Perspectives of all consumers must be heard, understood, and acted upon—but many people are understandably reluctant to participate in an evaluation concerning such sensitive issues.

  • How do we break the barriers to participation?
  • How do we balance the academic and legal perspectives of “informed consent” and “confidentiality” with those of the families who participate?
  • How do we balance the need for consistent measures in our national study with the local realities of our participants?
  • How do we interpret and report the results?

These dilemmas and strategies to overcome them will be presented.


February 10, 2009
Title: Evalua|t|e Open House – Note Extended Time – 11:30 a.m. – 1:30 p.m.
Presenters: Stephanie Evergreen – Project Manager; Arlen Gullickson – Researcher Emeritus; and Lori Wingate – Project Manager, The Evaluation Center, WMU

Abstract: Evalua|t|e is the new evaluation resource center for the National Science Foundation’s Advanced Technological Education (ATE) program. The ATE program awards nearly $50 million a year to support efforts to improve the quality of technician education and increase the number of highly qualified technicians in high-tech fields. Evalua|t|e helps the program’s grantees obtain and conduct high-quality evaluations by providing evaluation resources, technical assistance, and training to those leading and evaluating ATE projects. Join Evalua|t|e staff in our new office for refreshments and to learn about this exciting new endeavor, which includes opportunities for faculty and staff to engage with us through research, collaboration, and field experiences. You won’t leave empty-handed — we have books, booklets, brochures, newsletters, and more to give away!

February 17, 2009
View the slides
Title: The Critical Importance of Establishing Good Working Relationships in External Evaluations

Presenter: Robert Wertkin – Professor, School of Social Work, WMU

Abstract: Establishing good working relationships between the evaluator and the consumer has numerous benefits. Unfortunately, relationship-building skills may not receive adequate attention in the training of evaluators. Dr. Wertkin will share the lessons learned from 20 years as an external evaluator to human service and educational organizations. This includes methods for gaining trust, establishing rapport, demonstrating empathy, and managing the dual role of evaluator and partner. The presentation also will address the need to understand the context within which evaluation takes place and the feelings of staff affected by evaluation processes and outcomes.

February 24, 2009
Title: Power Analysis for Univariate and Multivariate Parametric and Nonparametric Study Designs

Presenter: Jason Davey – Doctoral Student, Evaluation, Measurement, and Research, WMU
Abstract: Do you know how many subjects your study needs in order to address your questions of interest? Knowing the answer may mean the difference between a failed study and a successful one. It may mean the difference between spending a year or more collecting and analyzing data that have little hope of revealing significant differences even when such differences really do exist. It may mean the difference between committing too many resources to data collection rather than diverting those resources to other parts of your project. A power analysis incorporated into planning and study design will tell you the minimum number of subjects from whom you need to collect data in order to detect a statistically significant difference, provided, of course, that such a difference really exists. This lecture will introduce parametric and nonparametric univariate and multivariate power analysis.
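As a concrete illustration of the kind of calculation described above, here is a minimal sketch of an a priori power analysis for a two-sample t-test in Python, using the statsmodels package. The effect size, significance level, and power target below are illustrative assumptions, not values from the presentation.

```python
# Minimal a priori power analysis for a two-sample t-test.
# Requires: pip install statsmodels
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Illustrative assumptions: a medium effect (Cohen's d = 0.5),
# a 5% significance level, and a target of 80% power.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)

print(f"Required subjects per group: {n_per_group:.1f}")  # about 64
```

Varying the assumed effect size shows why the calculation matters at the planning stage: because required sample size grows with the inverse square of the detectable effect, halving the effect size roughly quadruples the number of subjects needed.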

March 17, 2009

View the slides
View the handout
Title: An Empirical Review of Theory-Driven Evaluation Practice

Presenters: Chris Coryn – Interdisciplinary Evaluation Ph.D. Program Director, WMU; Lindsay Noakes – Interdisciplinary Evaluation Doctoral Student, WMU; Daniela Schröter – Director of Research, The Evaluation Center, WMU; Carl Westine – Interdisciplinary Evaluation Doctoral Student, WMU

Abstract:
Evaluation theories are models for evaluation practice. They are intended to guide practice rather than explain phenomena, and they are prescriptions for the ideal. Such theories address the focus and role of evaluation, specific evaluation questions to be studied, evaluation design and implementation, and use of evaluation results. Although the origins of theory-driven evaluation can be traced to Ralph Tyler in the 1930s (with his notion of formulating and testing program theory for evaluative purposes), and the approach reappeared in the 1960s and 1970s and again in the 1980s, it was not until 1990 that it resonated more widely in the evaluation community, with the publication of Huey Chen’s book, Theory-Driven Evaluations. Since then, conceptual and theoretical writings—and, to a lesser extent, actual case examples—on the approach have become commonplace. Nonetheless, the degree to which practice actually adheres to and exemplifies the central principles of theory-driven evaluation as described and prescribed by prominent theoretical writers is disputable. This study is intended to address the dearth of empirical knowledge about evaluation practice and, more specifically, to examine whether theoretical prescriptions and real-world practices align.

March 24, 2009
View the slides
View the handout
Title: A Comparative Study of Extended Meta-Ethnography and Meta-Analysis Based on the Fundamental Micro-Purposes of a Literature Review

Presenter: Rhae-Ann Booker – Director of Pre-College Programs, Calvin College
Abstract: The purpose of this study was to explore the results of literature review methods operating out of interpretivist and positivist paradigms. My investigation included an examination of the research processes and results of an extended meta-ethnography (EME) and a published meta-analysis (PMA). The premise of my investigation was that both review methods include, and indeed require, some level of interpretation as researchers seek answers to their questions of interest. Furthermore, my exploration included comparing the EME and PMA results using a newly developed analytical framework based on the four primary micro-purposes of a literature review.

March 31, 2009

View the slides
View the handout
Title: The Shape of Evaluation in Organizations

Presenter: Amy Gullickson – Doctoral candidate, Interdisciplinary Evaluation, WMU
Abstract: For the past 10 years, The Evaluation Center has surveyed the projects and centers funded by the National Science Foundation’s ATE program. Evaluation is a required component of ATE grants; however, not all projects and centers use evaluation in the same way or to the same extent. The survey database includes information on needs assessment practices, the kinds of evaluators employed, the usefulness of evaluation reports, and projects’ interactions with their evaluators. The Evaluation Café presentation will include a discussion of the survey findings from 2005-2008 on these topics, as well as conversation about possible organization-level influences on the kinds of evaluation conducted and their uses.

April 7, 2009
View the slides
View the handout
Title: Statistical Prediction Models in Track and Field World Record Performances

Presenter: Yuanlong Liu – Professor, Department of Health, Physical Education, and Recreation, WMU

Abstract: For the past one hundred years, the Olympic slogan “run faster, jump higher, and throw more strongly” has been not only the theme of the Olympic Games but also the objective of all track and field competitions. In the new millennium, athletes, coaches, and sports fans find that new track and field records have become more and more difficult to achieve. Many interesting questions may call for measurement and evaluation specialists’ help, such as “Is the sky the limit? If the world records have reached an asymptotic level, will there be no new records?” That does not sound like a statistical or measurement question, does it? This presentation will explore whether we measurement specialists can provide some of the possible answers.
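One common way to probe the “is the sky the limit?” question is to fit a curve that decays toward a fixed floor and examine where the fitted asymptote lands. The sketch below does this in Python with scipy; the exponential-decay model form and the illustrative record progression are assumptions for demonstration, not the models or data from Liu’s presentation.

```python
# Fitting an asymptotic (exponential-decay) model to an illustrative
# world-record progression to estimate the ultimate achievable record.
# Requires: pip install numpy scipy
import numpy as np
from scipy.optimize import curve_fit

# Illustrative men's 100 m record progression (year, seconds).
years = np.array([1912.0, 1936.0, 1968.0, 1983.0, 1999.0, 2009.0])
times = np.array([10.60, 10.20, 9.95, 9.93, 9.79, 9.58])

def record_model(year, asymptote, scale, rate):
    """Record time decays exponentially toward a fixed asymptote."""
    return asymptote + scale * np.exp(-rate * (year - 1912.0))

# Initial guesses: floor near 9.4 s, ~1.2 s of total improvement,
# slow decay rate per year.
params, _ = curve_fit(record_model, years, times, p0=[9.4, 1.2, 0.02])
print(f"Estimated asymptotic record: {params[0]:.2f} seconds")
```

Under such a model a new record is always mathematically possible, but each improvement shrinks as performances approach the fitted asymptote, which is one way to formalize the “no record anymore?” intuition.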

April 14, 2009

Watch the video

View the slides

View the handout

Title: A Radical Reconceptualization of the Discipline of Evaluation

Presenter: Michael Scriven – Professor of Psychology, Claremont Graduate University

Abstract: Most of you know about the many interesting “models” (aka theories, or approaches) concerning the nature of evaluation, including current examples such as “theory-driven” (aka “realist,” and so forth), hoofprint, checklist, and appreciative inquiry approaches. I call these “internal” theories and will try here to sketch and integrate a hierarchy of “external” theories, ranging from an evolutionary account, a tacit cognitive processing account, and a neuroeconomics account to the “Country of the Disciplines” account, into what I call the “alpha discipline” theory. (My evangelical efforts this year, in Rio, Orlando, Stockholm, and the six Heifer countries, center on this approach, and I’m counting on getting suggestions for improving it [or abandoning it!] in Kalamazoo.)

May 4, 2009

View the video

View the slides

View the handout

Title: Visible Learning: A Synthesis of 800 Meta-Analyses Relating to Achievement

Presenter: Dr. John Hattie – Professor of Education, Auckland University, New Zealand

Abstract: This unique and groundbreaking book is the result of 15 years of research and synthesizes more than 800 meta-analyses of the influences on achievement in school-aged students. It builds a story about the power of teachers, feedback, and a model of learning and understanding. The research involves many millions of students and represents the largest-ever evidence-based inquiry into what actually works in schools to improve learning. Areas covered include the influence of the student, home, school, curricula, teacher, and teaching strategies. A model of teaching and learning was developed based on the notion of visible teaching and visible learning. A major message is that what works best for students is similar to what works best for teachers—attention to setting challenging learning intentions, being clear about what success means, and attention to learning strategies for developing conceptual understanding about what teachers and students know and understand. Although the current evidence-based fad has turned into a debate about test scores, this book is about using evidence to build and defend a model of teaching and learning. A major contribution is a fascinating benchmark/dashboard for comparing many innovations in teaching and schools.

May 5, 2009
View the slides
View the handout
Title: Local Context, Use, and Adaptation of the Centers for Disease Control & Prevention Evaluation Framework for Public Health Initiatives

Presenter: Dr. Janet Clinton – Director of the Bachelor of Health Science Program, School of Population Health, Auckland University, New Zealand

Abstract: The presentation will describe a process for the use and adaptation of the Centers for Disease Control & Prevention (CDC) evaluation framework for public health initiatives. This framework is acknowledged as scientifically acceptable for use in large-scale community-based evaluations. Examples from a number of public health programs in the greater Auckland region of New Zealand demonstrate that the CDC framework needs to be adapted to the local context and made more inclusive and inviting. Furthermore, to ensure the evaluation adds value, the framework must be culturally responsive. Consequently, within the framework, the process of stakeholder engagement needs to be extensive and highly structured to ensure that the evaluation is embedded in the program and that stakeholders fully engage in evaluation activities. Moreover, the ethics and protocols of the framework need to reflect the cultural worldview of the local context. It is argued that adapting the framework appropriately will ensure that evaluation becomes a vehicle of change by promoting a learning environment. Further, it is suggested that using the framework in this manner will increase the probability of program success.
