2006 – 2007



September 19, 2006
Watch the Video
Title: The Latest Battle in the War over Research Designs for Establishing Causation

Presenter: Michael Scriven – Professor of Philosophy, Associate Director of The Evaluation Center, and Interdisciplinary Ph.D. in Evaluation Program Director, WMU

Abstract: The extremist position in the causal wars (i.e., the view that the only scientific basis for causal claims is randomized controlled experiments) is now more or less abandoned, but since the supporters of that position now control well over half a billion dollars per annum in U.S. government research funding, they can afford to concede a little. However, the modified position is open to a tough attack for which there is currently no defense, and there is no sign that its indefensibility has been acknowledged. I’ll discuss this situation and try to answer the question: What should be done now to level the playing field?

September 26, 2006
Watch the Video

View the handout
Title: Ten Things Evaluation Needs: An Evaluation Needs Assessment

Presenter: James Sanders – Professor Emeritus of Educational Studies and former Associate Director of The Evaluation Center, WMU

Abstract: For evaluation to continue to develop as a profession, a rolling needs assessment would be useful. By identifying issues and problems in evaluation for which we have no good answers, we can concentrate our research on evaluation and pool wisdom gained from evaluative experiences. If for no other reason, this continuing look at the field of evaluation will provide topics for dissertations, research proposals, and communications among evaluation professionals. Be sure to bring your favorite candidate for this top 10 list.

October 3, 2006
Watch the Video

Title: American Evaluation Association Conference Preview

Presenters: Amy Gullickson & Wes Martz – Interdisciplinary Evaluation Doctoral Students; Michael Scriven – Professor of Philosophy, Associate Director of The Evaluation Center, and Interdisciplinary Ph.D. in Evaluation Program Director, WMU

Abstracts: This session includes abbreviated presentations of papers prepared for the upcoming American Evaluation Association conference.

Strategic Evaluation of Business and Industry: Evaluative Approaches for Improving Organizational Culture (Amy Gullickson) – Employee engagement and satisfaction are integral components for adding value to an organization and its products. Forward-thinking corporations are moving away from command-and-control models toward learning organizations and complex adaptive systems. This paper explores how evaluative tools such as needs assessment, after-action review, and minimum specification documents can be used to help increase employee satisfaction and performance and, thus, results for shareholders.

Building Shareholder Value Using Formative Evaluation (Wes Martz) – The use of formal evaluation as a tool to drive value from improved operational efficiency presents an opportunity to strengthen an organization’s performance and shareholder value. This presentation explores the application of evaluation outside the scope of human resource development initiatives and considers evaluation as a tool to build shareholder value. Specifically, a case study of a formative evaluation conducted at an operating division of a U.S.-based global manufacturer of industrial products is presented.

The Evaluator’s Responsibility for the Consequences of an Evaluation (Michael Scriven) – Evaluations often have consequences, some intended, some unintended. The hard questions concern the extent to which the evaluator is responsible for these consequences. If the evaluation concludes with recommendations, then it’s reasonable to suppose, and it’s legally likely, that the evaluator will be held (at least partly) responsible for those consequences. But the much more fundamental question is how a typical evaluative conclusion can imply consequences at all. I will examine the traps that evaluators have fallen into when moving too quickly from evaluative conclusions to recommendations, and indicate how and when to avoid the traps – or to avoid making recommendations.

October 10, 2006
Watch the Video

View the slides
Title: How to Plan and Start a Consulting Business

Presenter: Wes Martz – Vice President, Corporate Marketing, Kadant Johnson (Three Rivers, MI); Founder, Martz Marketing Group (Kalamazoo, MI); and Interdisciplinary Evaluation Doctoral Student, WMU

Abstract: A new business is created every 11 seconds in the U.S. Within four years, the majority of them have failed. Understanding the basics of planning and starting a business provides the foundation for long-term business success. This presentation highlights the seven steps to opening for business and is designed for people who are thinking about starting a consulting or other service business, as well as those who have recently formed a business and are looking for additional insight into start-up activities and organizational structure.

Wes Martz has more than 15 years of experience in business development, planning, and strategy. He was previously named by West Michigan Business Review as a “Top 40 Under 40 Business Leader” influencing the course of business in West Michigan. As a SCORE volunteer, Mr. Martz has advised more than 300 small business owners and entrepreneurs and has been frequently quoted in leading business publications, including Sales & Marketing Management, MyBusiness, American Express’ Ventures, and Pitney Bowes’ award-winning magazine Priority.

October 17, 2006
View the slides
Title: New Directions in K-12 Mathematics Curriculum Evaluation?

Presenter: Steven Ziebarth – Professor of Mathematics, WMU

Abstract: This talk will examine some of the national pressures that have emerged over the last decade and that influence current evaluation work in K-12 mathematics curricula, namely the work of the What Works Clearinghouse and the National Research Council Panel on Evaluating Curriculum Effectiveness. These two groups will serve as background for examining the evaluation of a particular high school curriculum project developed here at WMU, Core-Plus Mathematics.

October 24, 2006
Watch the Video

View the handouts
Title: A Review of the History and Current Practice of Aid Evaluation

Presenter: Ryoh Sasaki – Interdisciplinary Evaluation Doctoral Student, WMU

Abstract: The effectiveness of aid has been questioned for a long time. Following a brief review of preceding studies, this presentation critically examines aid evaluation systems. First, the current volume and types of aid are examined. Second, the history of aid from the 1940s to the 2000s is reviewed. Third, more than fifty aid agencies are reviewed in terms of their major characteristics and the criteria, types, and extent of their evaluation activities. Finally, the remaining issues that the aid community should consider for better aid evaluation are discussed.

November 7, 2006
Watch the Video

View the handout
Title: Evaluating Poverty Interventions in Albania, Nepal, and Thailand: The Summer 2006 Heifer Impact Study

Presenter: Thomaz Chianca – Interdisciplinary Evaluation Doctoral Student, WMU

Abstract: Since 2005, The Evaluation Center at WMU has been working with Heifer International as an independent party to evaluate their projects in five countries: the U.S., Peru, Thailand, Nepal, and Albania. A specific evaluation approach, the Heifer Hoofprint Model, has been devised that not only responds to Heifer’s main interest in assessing the impact of its programming on the lives of project recipients but also provides a comprehensive assessment of the merit, worth, and significance of Heifer’s work. The project has involved about 20 evaluators (2/3 from the U.S. and 1/3 from other countries) and an investment of approximately $300,000. This presentation will focus primarily on the Hoofprint Model, the major challenges we faced in designing and implementing the evaluations, and initial accounts of the uses and consequences of those evaluations for different stakeholders, including Heifer headquarters, country offices and local projects, and the WMU Evaluation Center. Participants will be invited to contribute ideas to improve the next round of Heifer “impact” evaluations.

November 14, 2006
Watch the Video

View the slides
Title: How Well Did WMU’s Graduate Program Review Meet the Government Auditing Standards?

Presenter: Daniel Stufflebeam – Distinguished University Professor and Former Director of The Evaluation Center, WMU

Abstract: In the past 30 years, WMU has twice attempted to evaluate its programs. The 1979 program review’s purpose was to “. . . improve decision making by departments and all administrative levels . . .” The 2006 review’s goal was to determine which graduate programs are the highest strategic priorities for funding. Both reviews were important, neither was keyed to approved standards for program evaluations, and both failed. It is relevant to consider whether such reviews would be more likely to succeed if keyed to appropriate standards. One set of standards for consideration in future program reviews is the GAO Government Auditing Standards, which cover program evaluations as well as financial audits. Using the WMU program review context, this presentation will summarize GAO’s most recent version of the Government Auditing Standards and consider their potential utility for Western.

November 21, 2006

View the slides
Title: Demonstrating Value through Learning Analytics

Presenters: Jeffrey Berk – Vice President of Products and Strategy & Susan Johnston – Account Manager, KnowledgeAdvisors (Chicago)

Abstract: This presentation will discuss measurement trends in learning evaluation and present a model and toolset for practical, scalable, and repeatable learning analytics. Measurement processes, templates, and standards will be discussed, along with an approach to building a measurement dashboard.

Jeffrey Berk is Vice President of Products and Strategy for KnowledgeAdvisors, a business intelligence company that helps organizations measure and manage their learning investments. Berk is the author of the organization’s proprietary learning measurement methodologies. He is also the functional architect of the technology product Metrics that Matter, which helps organizations measure the effectiveness of learning investments through automation and technology.

November 28, 2006
Watch the Video
Title: Evaluation in Kalamazoo

Presenters: Denise Hartsough – Director of Community Investment; Suprotik Stotz-Ghosh – Associate Director of Community Investment; Ronda Webber – Community Investment Associate, Greater Kalamazoo United Way

Abstract: Members of the Community Investment staff of the Greater Kalamazoo United Way (GKUW) will discuss how GKUW, WMU’s Interdisciplinary Ph.D. in Evaluation, and The Evaluation Center engage in evaluation in the Kalamazoo area and will brainstorm how their efforts can be mutually supportive.

December 5, 2006

Watch the Video

View the slides
Title: Exploring Cost Analysis: A Case Study Illustrating How Different Assumptions Influence Evaluative Conclusions
Presenter: Nadini Persaud – Interdisciplinary Evaluation Doctoral Student, WMU

Abstract: This presentation will use a case study to illustrate the differences between rudimentary and sophisticated economic analyses. Additionally, it will show how different assumptions affect evaluative conclusions and, in some instances, completely reverse an evaluative conclusion. The importance of sensitivity analysis in cost studies will be highlighted, using salary and discount rate to illustrate this concept. Finally, the paper will briefly discuss real-world constraints on data availability and how this problem was resolved in the ABC study, and explain why sophisticated economic analyses may be difficult for stakeholders to understand.
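For readers unfamiliar with how a discount-rate assumption can flip a cost comparison, the sketch below works through a toy calculation (hypothetical cost figures, not drawn from the ABC study): two program cost streams whose present-value ranking reverses when the assumed discount rate changes.

```python
# Minimal sketch (hypothetical numbers): how the assumed discount rate can
# reverse a present-value cost comparison between two program options.

def present_value(costs, rate):
    """Discount an annual cost stream (year 0 first) back to present value."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(costs))

# Option A: large up-front investment, low recurring costs (in $000s).
# Option B: small up-front investment, higher recurring costs.
option_a = [100, 10, 10, 10, 10]
option_b = [30, 30, 30, 30, 30]

for rate in (0.03, 0.10):  # sensitivity analysis over the discount rate
    pv_a = present_value(option_a, rate)
    pv_b = present_value(option_b, rate)
    cheaper = "A" if pv_a < pv_b else "B"
    print(f"rate={rate:.0%}: PV(A)={pv_a:.1f}, PV(B)={pv_b:.1f} -> Option {cheaper} looks cheaper")
```

With these illustrative figures, Option A comes out cheaper at a 3 percent rate, while at 10 percent the ranking reverses – exactly the kind of sensitivity the abstract describes.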

January 16, 2007
Watch the Video
Title: I Think . . . Therefore I Need Funding: Or, What Can Evaluation Offer in Terms of Allocating, Apportioning, and Distributing Federal Research Funds

Presenter: Chris Coryn – Interdisciplinary Evaluation graduate student and Research Associate, The Evaluation Center, WMU

Abstract: This Evaluation Café is intended to serve as an open forum, or dialogue, to explore what evaluation has to offer in the way of allocating, apportioning, or distributing national research funding and setting national research initiatives and agendas. During the Cold War era, the notion of research as an autonomous pursuit, free of interference by sponsors, was asserted in the 1940s by Vannevar Bush, the United States’ presidential science advisor. Bush managed to instill the idea of a generously funded yet self-governing scientific establishment by stressing the importance and inevitable benefits of research. In the last 50 years, the rationale for government support of research has been the contribution of science and technology to military security and national prestige, coupled with a sense – taken mainly on faith – that a strong research community will more than pay for itself in economic and social benefits. Following the end of the Cold War, old questions about the control of the United States’ research agenda and about procedures and methods for determining the allocation of resources among fields and disciplines, research institutions, and regions once again surfaced.

January 23, 2007
Watch the Video
Title: The Mysteries of Educational Technology and its Evaluation

Presenter: Michael Scriven – Professor of Philosophy, Associate Director of The Evaluation Center, and Interdisciplinary Ph.D. in Evaluation Program Director, WMU

Abstract: In 1960, O.K. Moore, a professor at Yale, was running – and getting considerable publicity about – the “talking typewriter” experiment at a preschool in New Haven. The children were becoming competent touch typists and learning spelling and composition on a couple of mainframe-driven electric IBM typewriters with a tape-drive attachment – and enjoying the experience. In 2001, Larry Cuban, a professor at Stanford, published a book (Oversold and Underused: Computers in the Classroom) in which he concluded that computers, although now ubiquitous, had made no discernible contribution to literacy or other academic subjects. There is no reference in his index to Moore or his work; it has vanished from the radar. There are other cases like this. What is the real truth? And what can we learn from this extraordinary story of paradox and prejudice? It turns out to be a story with shocking implications for business as well as education.

January 30, 2007
View the slides
Title: The Newest Issue in Aid Evaluation
Presenter: Ryoh Sasaki – Interdisciplinary Evaluation Doctoral Student, WMU

Abstract: Recently a new aid approach, the Sector-Wide Approach (SWAp), has been proposed and is rapidly becoming dominant, replacing the traditional stand-alone project approach. This trend is seriously challenging the validity and usefulness of current aid evaluation practices. However, the ideal monitoring and evaluation (M&E) mechanism for SWAps is still under discussion. The presenter, who has four years of experience supporting the application of this new approach in Tanzania’s agricultural sector, will present his view on SWAps and discuss issues related to, and possible recommendations for, an appropriate M&E mechanism.

February 6, 2007
Watch the Video

View the slides
Title: Evaluations in the U.S. Department of Health and Human Services

Presenter: Ann Maxwell – Regional Inspector General, Office of Inspector General, U.S. Department of Health and Human Services (Chicago)

Abstract: Within each federal department there is an independent oversight body charged with the task of evaluating the department’s programs to prevent fraud, waste, and abuse as well as promote economy and efficiency and protect beneficiaries. This discussion will introduce you to these offices, the Offices of Inspector General, and the type of evaluative work they conduct. Specifically, we will focus on their approach to applied research, the methodologies they typically employ, the topics they cover, and the impact their work has had. Specific examples will be provided. The presentation will last approximately 30 minutes, leaving ample time for any questions you might have about this type of evaluative work.

February 13, 2007

View the slides
Title: Using System Dynamics to Extend Evaluation: A Case Example from HIV Prevention

Presenter: Robin Miller – Associate Professor of Psychology, Michigan State University & Editor, American Journal of Evaluation

Abstract: Systems thinking and tools provide intriguing opportunities to extend the ways in which evaluators conceptualize programs and explore program processes and outcomes. One popular tool, system dynamics, has principally been used to guide evaluation planning. In this presentation, I will demonstrate how system dynamics may be used to synthesize evaluation data to provide insight about program processes and outcomes. I will present data from an ongoing investigation in which my colleagues and I are applying system dynamics to the evaluation of evidence-based HIV prevention programs.
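To give a flavor of what a system dynamics model involves, here is a minimal stock-and-flow sketch (illustrative parameters only; not the model used in the study described above): a susceptible/infected structure in which a prevention program lowers the transmission rate, with the stocks updated by simple Euler steps.

```python
# Minimal stock-and-flow sketch (illustrative parameters, not from the study):
# a susceptible/infected population where a prevention program lowers the
# transmission rate; the stocks are updated with simple Euler integration.

def simulate(transmission_rate, population=10_000, infected=100.0, years=10, dt=0.1):
    susceptible = population - infected
    for _ in range(int(years / dt)):
        # Flow from the susceptible stock into the infected stock.
        new_infections = transmission_rate * susceptible * infected / population * dt
        susceptible -= new_infections
        infected += new_infections
    return infected

baseline = simulate(transmission_rate=0.30)       # no program
with_program = simulate(transmission_rate=0.18)   # program cuts transmission by 40%
print(f"Infected after 10 years - baseline: {baseline:.0f}, with program: {with_program:.0f}")
```

Comparing the two runs illustrates how a dynamic model can translate an evaluation’s measured change in a rate (here, transmission) into projected program-level outcomes over time.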

February 20, 2007
View the slides
Title: Challenges to Personnel Evaluations: People Evaluating People

Presenter: Fred Brown – President, Quantum Services, Inc. (Grand Rapids, MI)

Abstract: Perhaps one cannot totally eliminate the subjectivity in the vital tasks of hiring, rewarding, and promoting personnel. Fred Brown, President of Quantum Services, will describe how his company has provided processes and tools to help organizations introduce some objectivity into an otherwise subjective process. He will present and discuss two processes, Targeted Selection and 360 Feedback, both of which have been used successfully to evaluate prospective and incumbent personnel for many of Quantum’s customers.

February 27, 2007
Watch the video

View the slides
Title: Evaluability Assessment from 1986-2006: A Review of Literature

Presenter: Mike Trevisan—Professor and Director, Assessment and Evaluation Center, Washington State University & Visiting Scholar, The Evaluation Center, WMU

Abstract: An appraisal of the state of practice of evaluability assessment (EA) as represented in the archived literature will be presented. Twenty studies were located, representing a variety of programs, disciplines, and settings. Most studies employed common EA methodologies, such as document reviews, site visits, and interviews. Other methodologies, such as expert testimony and examination of program statistics, were also found in the literature. The main purpose for conducting EA mentioned in these studies was to determine whether or not a program was ready for full evaluation. Outcomes included the construction of a program logic model, program development, and/or program modification. The findings suggest that EA may be employed more widely than previously thought. Recommendations to enhance EA practice are offered.

March 13, 2007
Watch the video

View the slides and handout
Title: Legislative Program Evaluation in the American States
Presenter: John Risley – Interdisciplinary Evaluation graduate student and Research Associate, College of Health and Human Services, WMU

Abstract: This session examines reports written by state government legislative program evaluation organizations in the United States. These reports (variously called performance evaluations, program evaluations, effectiveness and efficiency audits, managerial audits, or, more commonly, performance audits) are an important part of legislative oversight and policymaking. A random sample of reports released by state legislative agencies from 2001 through 2005 was examined to learn their common and differing methods, goals, orientations, timelines, and scope. An approach was developed to metaevaluate these reports using criteria from the program evaluation standards, government performance auditing standards, and the Key Evaluation Checklist. The session also draws on the literature exploring metaevaluation across large numbers of evaluation reports. The two goals of this session are 1) to provide a base of information about reports from state legislative program evaluation units and 2) to test an approach for feasibly and competently metaevaluating such reports.

March 20, 2007
Watch the video

Read the paper on which this presentation was based.
Title: What Are We? Chopped Liver? Or Why It Matters if the Comparisons are Active, and What To Do

Presenter: Lois-ellin Datta – President, Datta Analysis, Captain Cook, HI

Abstract: Could a randomized control design be used in 2007 for an evaluation of Sesame Street, as it was in 1968? At that time, the evaluation was a test of whether television could be an effective way of helping preschoolers get ready for kindergarten, with “ready” including pre-academic, cognitive, social, and other skills.

Probably not, because Sesame Street is so infused into the everyday experiences of children that a meaningful comparison or control group could not be found in 2007. At least some of the popular 2007 television programs for preschoolers have so many elements akin to Sesame Street that a “compared to what” could not test the original question, long since answered, I believe, with “Yes.” Further, the likelihood might be close to zero that a randomly selected group of control parents and children would agree – for the sake of an evaluation – not to watch Sesame Street or any other television program. The control group would, in all likelihood, be active . . . the children, if not the parents!

I’m not sure how many proponents of the widespread applicability of RCTs, when attribution is a purpose, would agree with this argument. One might think, “Quite a few.” In actuality, there doesn’t seem to be adequate recognition of the challenge of the active control group in areas such as education, health, and social services. For example, a congressionally mandated randomized control experiment to prove, once and for all, whether Head Start is worthwhile will soon issue its final report. In technical terms, “Aargh!” For example, Cordray and Lipsey, to their immense credit, reported in thorough detail what happened in a large, very costly national experiment testing the effectiveness of several approaches to treatment for men with multiple diagnoses when the control-group men decided they wanted to choose their own treatments. For example, if you are currently doing an evaluation in a school, find out what else is happening that is relevant to your outcomes in addition to the program you are judging. Dollars to doughnuts, your “experimental” or focal group is someone else’s comparison group, and vice versa.

The active control group can matter a great deal when it comes to analysis and conclusions. The effects on variance can lead to macro-negative effects, something known since about 1972, when Stallings et al. published their SRCD monograph and other reports on the Follow Through experiment. Arguably, the more active the control group is in finding experiences similar to those of the treatment group, the more threatened the logic of RCT or comparison designs. Sometimes discussed as “contamination,” this seems to me a factor that is both different and more difficult to deal with.

What to do? The Café will include a discussion of some approaches that seem useful and that, in at least some instances, have proven able to sort out effects that truly exist, thus avoiding unnecessary death by evaluation for worthy ideas.


March 27, 2007
Watch the Video
Title: March Madness and Methods: A Case Study of Evaluating Tsunami Awareness & Preparedness

Presenter: Liesel Ritchie – Senior Research Associate, The Evaluation Center, WMU

Abstract: After the Indian Ocean tsunami disaster in 2004, a group of U.S. universities received a National Science Foundation grant to evaluate the effectiveness of tsunami warnings in the United States. Communities in Alaska, California, Hawaii, North Carolina, Oregon, and Washington are being studied, with the intent that findings will be used to improve the effectiveness of tsunami readiness efforts and warning messages in these and other communities. This presentation will provide an overview of the study and preliminary findings from the Kodiak, Alaska, site (where work is being led by The Evaluation Center), then focus on how findings will be used to evaluate awareness and education activities in various locations.

April 3, 2007
Title: Evidence-Based Community Interventions: International and Cross-Cultural Considerations

Presenter: Mozdeh Bruss – Associate Professor of Family and Consumer Science, WMU

Abstract: The increasing demand for evidence-based practices and the need for cultural relevancy pose both a challenge and an opportunity for program developers and evaluators working in international settings with limited resources. We will examine a case from the Pacific and review lessons learned in the development and evaluation of programs that target public health issues in complex community settings. Guests from the Commonwealth of the Northern Mariana Islands, program partners Rosa Palacios and Jackie Quitigua, will also participate in the session.

April 16, 2007
Watch the Video

View the slides
Title: Evaluating Motives

Presenter: Gene Glass – Regents’ Professor of Educational Leadership & Policy Studies, Arizona State University

Abstract: Are some evaluations incomplete unless they examine the motives of the actors? Are motives “fair game” in an evaluation? And if they are, what is the best way to find out about them? What if certain actions are driven primarily by xenophobia, or worse, racial suspicions and hatred? Do we stand a chance of documenting this? How? And finally, should we pay consultants who only ask questions and have no answers?

April 24, 2007
Watch the Video

View the slides
Title: Yes, When Will We Ever Learn? Strategies For Causal Attribution in Evaluation

Presenter: Patricia Rogers – Associate Professor in Public Sector Evaluation & Director, CIRCLE (Collaborative Institute for Research, Consulting and Learning in Evaluation), Royal Melbourne Institute of Technology, Australia

Abstract: The substantial international efforts currently underway to improve the quality of evaluations, particularly in international development, have drawn attention to inadequacies in providing credible evidence of impact – most notably in the Center for Global Development report “When Will We Ever Learn?” Remarkably, these efforts have focused almost exclusively on the use of randomized controlled trials, with little or no recognition of their limitations or of the development of alternatives that are better suited to the evaluation of complex interventions in open implementation environments. This presentation will discuss the evaluation of interventions involving complex causal relationships – such as where an intervention is necessary but not sufficient (with other contributing factors needed for success), or sufficient but not necessary (with alternative causal paths available), or where the causal relationships are ones of interdependence rather than simple linear causality – and present examples of ways to provide credible evidence of impact without RCTs.