2003 – 2004


September 8, 2003
Title: Combining “Needs Assessment Plus” with Theory-Based Evaluation: A Hawai`ian Excursion Beyond the Deficit Model

Presenter: Dr. Jane Davidson – Associate Director, The Evaluation Center & Assistant Professor, Sociology

Abstract: In its simplest form, a needs assessment is the systematic documentation of the dimensions in which a community, organization, or group of individuals is functioning at a level considered less than satisfactory in that context. However, there are several ways one can design a needs assessment that pushes beyond simply gathering and presenting (often depressing) statistics about a community or organization’s current plight – an approach sometimes referred to as the “deficit model” of needs assessment. By complementing information about needs with information about strengths, one can gain a better understanding of the key leverage points (or advantages) that will help a project succeed. Another useful strategy is to dig one level beneath the surface needs to show an understanding of their underlying causes. Information about both strengths and the underlying causes of need leads naturally to the development of a logic model, which can then be used to evaluate the extent to which the needs and their underlying causes are being met. An applied example drawn from a community nutrition project in Hawai`i will be used to illustrate the key points.




September 22, 2003
Title: Measuring the Impact of Electronic Business Systems at the Defense Logistics Agency: Lessons Learned From Three Evaluations

Presenter: Dr. Jonathan A. Morell – Senior Policy Analyst, Altarum

Abstract: The consequences of deploying three electronic business systems at the Defense Logistics Agency were evaluated: Central Contractor Registration (CCR), Electronic Document Access (EDA), and DoD Emall. Findings will be presented, with an emphasis on lessons learned about evaluating the impact of IT systems that are inserted into complex, changing organizations. Lessons fall into five categories: metrics, methodology, logic models, adaptive systems, and realistic expectations. Interactions among these categories will also be discussed.

October 7, 2003

Read the paper on which this presentation was based.
Title: Monitoring and Evaluation for Cost-Effectiveness in Development Management

Presenter: Dr. Paul Clements – Associate Professor, Department of Political Science, Western Michigan University

Abstract: Recent efforts to focus aid on countries with favorable economic policies have been motivated in part by the fact that cross-country econometric studies have failed to find a causal relationship between development assistance and economic growth. This can lead to a neglect of many of the neediest, however, as many of the poorest countries also have poor policies. An alternative account of aid’s limited impacts can be found in weaknesses in monitoring and evaluation systems. The on-average positive findings from project evaluations are hard to reconcile with the macroeconomic evidence, and the structural conditions of development assistance make evaluation vulnerable to positive bias. While much of the evaluation literature focuses on attaining statistically valid impact estimates or on participatory approaches that empower beneficiary populations, this essay presents an approach to monitoring and evaluation that aims to strengthen judgments of cost-effectiveness. The proposed approach involves evaluators achieving independence from project management in a manner similar to that of accountants in the private sector.

October 30, 2003



View the slides.

Title: The Rural Systemic Initiatives Study

Presenter: Dr. Jerry Horn – Principal Research Associate, The Evaluation Center, Western Michigan University

Abstract: The Rural Systemic Initiatives (RSI) Evaluation Study is a longitudinal investigation of one component of the National Science Foundation’s large-scale school improvement efforts in science, mathematics, and technology. The study is unique in that it has been conducted from the perspective of local rural, economically poor communities, and it is designed to better understand how the context and characteristics of these communities affect reform efforts and student achievement. At the same time, it is not an evaluation of local schools, RSI collaboratives, or NSF’s program.

The study began in late 1998 and is scheduled to be completed in May 2004. Drawing on an array of methodologies, including case studies, Delphi techniques, surveys, interviews, focus groups, document reviews, and secondary data analyses, the study has sought to identify distinctive differences and approaches to school improvement across six collaboratives. In total, the collaboratives include participating schools from 17 states, and while the RSIs pursue common goals guided by six drivers of educational reform, each faces a number of unique challenges and conditions. The evaluation study has attempted to understand these challenges, with particular attention to the values of poor, rural communities.

Because of the nature of the project and other considerations, a somewhat unusual staffing arrangement has been used, including a ten-person Research Advisory Team. The members of this team are among the most highly respected people in their field, and they have served on case study site-visit teams as well as acting as important advisors to the project. Special challenges in conducting this type of project will be discussed, along with plans for dissemination of the findings.

November 13, 2003



View the slides.

Title: Unfunded Mandates as an Opportunity for Evaluability Assessment: The Evaluation of State Funded Drug Courts in Michigan

Presenter: Dr. David Hartmann – Professor, Department of Sociology & Director, Kercher Center for Social Research; Western Michigan University

Abstract: Federal and state pressures for statewide evaluation of drug courts in Michigan led to an RFP from the Michigan Supreme Court, State Court Administrator’s Office. Unfortunately, that RFP stipulated a timeline and budget that would not support meaningful outcome evaluation. The evaluation team therefore negotiated a scope of work that included 1) development of a model protocol for outcome assessment, 2) a site-based evaluability assessment of two sites relative to this protocol, and 3) a process assessment of all sites based on operational indicators of an accepted model of drug court operation. The evaluation work as a whole was therefore presented not as the final word, but as a step toward a full and systematic evaluation program that would require ongoing support.

December 3, 2003
Title: Towards an Assessment-Based Registry of Professional Evaluators

Presenter: Dr. Daniel Stufflebeam – Distinguished University Professor, WMU

Abstract: For many years, members of the evaluation field have discussed and debated the need for, and feasibility of, establishing a system for certifying competent evaluators. Despite considerable exchange on the topic, no concrete effort to evaluate and attest to the competence of evaluators has yet resulted. The American Evaluation Association is probably too large, broadly focused, diverse, and loosely governed to reach consensus any time soon on a functional system for certifying evaluators. A strong, efficient, and dependable system could help evaluation clients looking for credible assessments of the competence of prospective evaluators. The Evaluation Center might be able to achieve such an objective and thereby provide a valuable service to both evaluation clients and practitioners. Overcoming the potential difficulties of this endeavor in order to help further professionalize the evaluation field is directly relevant to the Center’s leadership mission and consistent with its tradition of courage in delivering effective service. The presentation will engage participants on two questions: 1) Why should The Evaluation Center develop and administer a system for certifying evaluators? 2) Why should The Evaluation Center not develop and administer a system for certifying evaluators?


January 28, 2004

View the slides
Title: The Global Evaluation Community, the Brazilian Evaluation Network and the Evaluation Center

Presenter: Thomaz Chianca – Evaluation Ph.D. student

Abstract: In the last six years, the number of evaluation associations, societies, and networks in countries and regions throughout the world has increased significantly – from fewer than 10 in 1997 to around 50 in 2003. Initiatives such as the International Organization for Cooperation in Evaluation (IOCE) and the International Development Evaluation Association (IDEAS) are attempting to create a global community of evaluators. In Brazil, the growth of the evaluation field has followed the consolidation of democracy. Government agencies, philanthropic organizations, universities, and businesses are increasingly interested in using evaluation to improve effectiveness and promote accountability. Brazilian evaluators, though most are not formally trained, are starting to organize evaluation as a professional field. Established in 2002, the Brazilian Evaluation Network has local groups in four states and in the Federal District and currently mobilizes more than 300 evaluators. The field’s growth in Brazil and several other developing countries creates many opportunities for collaboration, given these countries’ current lack of a strong body of professional evaluators. Educational institutions, such as The Evaluation Center, that possess solid experience in training evaluators and conducting quality evaluations are in a special position to contribute greatly to the profession’s future in the international arena. This session will present information on the development of the evaluation field worldwide, and particularly in Brazil, and promote a collective discussion of the opportunities and challenges for The Evaluation Center in assuming a leading role in promoting the evaluation profession internationally.


February 9, 2004

View the slides

Title: Interviewing Techniques: An Interactive Workshop

Presenter: Dr. Carolyn Sullins – Senior Research Associate, The Evaluation Center, WMU

Abstract: Interviews can generate rich qualitative data. However, the quality of the data is limited by the skills of the interviewer. Interviewing is an art as well as a skill, and one that takes practice and critical feedback. This interactive workshop will cover the basics of interviewing. Topics include (1) choosing the most appropriate type of interview (structured, semi-structured, or unstructured), (2) generating follow-up questions and probes, (3) leading vs. non-leading questions, (4) attending to body language and other nonverbal cues, and (5) the process of member checking. The presenter will demonstrate an interview, giving participants the chance to critique it. Participants will then have the opportunity to practice interviewing skills with one another.


February 26, 2004

View the slides
Title: Public Policy Evaluation

Presenter: John Risley – Evaluation Ph.D. student

Abstract: The study of public policy has many similarities with evaluation. The growth of both disciplines (both within and outside the United States) can be traced to the expansion of government involvement in society through taxation, spending, and regulation. This session addresses how evaluation is often viewed as a sub-aspect of public policy studies, relegated to the final stage of the policy analysis process. The presenter attempts to lay the groundwork for integrating evaluation into all aspects of public policy analysis, thus aiding the process of developing and choosing policy proposals.


April 8, 2004

View the slides

Title: Using Microsoft Project for Evaluation Planning, Organizing, and Monitoring

Presenter: Chris Coryn – Evaluation Ph.D. student

Abstract: Managing evaluation projects is a sophisticated and demanding endeavor, and project management software has become a critical tool for doing it systematically and well. This presentation will provide a brief overview of Microsoft Project for planning, organizing, and monitoring evaluation and other projects. Microsoft Project supports multiple aspects of project management, including proposals, budgets, tasks, resources, mapping, and reports. Tabular and graphic presentation of essential project features makes projects more transparent to internal and external personnel and collaborators, and ongoing documentation establishes a basis for metaevaluation or accountability assessment throughout the lifecycle of a project. An introduction to the software will illustrate its basic functions and techniques for all phases and aspects of project management. Evaluation Center staff and students who wish to incorporate Microsoft Project into future and ongoing projects will have the opportunity to participate in a hands-on workshop in which they can explore the application further and gain the skills needed to start using it.


May 26, 2004
Title: Evaluating Michigan’s Alternative Dispute Resolution Process

Presenter: Doug Van Epps – Director, Office of Dispute Resolution, Michigan State Court Administrative Office

Abstract: In 2000, the Michigan Supreme Court adopted rules authorizing judges to order parties in virtually any kind of civil litigation proceeding to try an alternative dispute resolution (ADR) process. The task force that developed the rules considered several goals including providing more dispute resolution choices for litigants, promoting early resolution of disputes, increasing the involvement of parties in the process of resolving their disputes, assisting parties in developing a wider range of outcomes than are available through adjudication, decreasing the cost to parties of resolving disputes, and increasing the court’s ability to resolve cases within given resources. The State Court Administrative Office (SCAO) is chiefly interested in assessing the impact of the rules on the courts’ case disposition time and “judicial efficiency” and intends to conduct a formal evaluation beginning later this year. Mr. Van Epps would like to use this session to receive advice from attendees on designing and conducting the evaluation.


May 21, 2004
Title: Using Evaluation Checklists to Guide Good Evaluation

Presenter: Lori Wingate – Assistant to the Director, The Evaluation Center

Abstract: The session will introduce participants to the use of checklists to enhance the quality and consistency of program evaluations. Checklists have many virtues that make them indispensable tools for practical evaluations of public health programs. They reduce the chances of overlooking important factors, reduce biases, and increase the defensibility and utility of evaluation findings. The Evaluation Center has devoted significant resources to developing a collection of evaluation checklists that address evaluation management, evaluation models, values and criteria, and metaevaluation. Session activities will orient participants to the checklist resources available on The Evaluation Center’s Web site, demonstrate how multiple checklists can be used together to maximize effectiveness, and provide opportunities for hands-on experience in applying and developing checklists. [Ms. Wingate will be leading this workshop at the Centers for Disease Control Evaluation Institute in June and welcomes feedback for improving the materials and activities.]
