EVAL 6000: Foundations of Evaluation

Course Description 

With an emphasis on constructing a sound foundational knowledge base, this course provides an overview of both past and contemporary perspectives on evaluation theory, method, and practice. Course topics include, but are not limited to, basic evaluation concepts and definitions; evaluation as a cognitive activity; the view of evaluation as a transdiscipline; the general and working logic of evaluation; an overview of the history of the field; distinctions between evaluation and basic and applied social science research; evaluation-specific methods (e.g., needs assessment, stakeholder analysis, identifying evaluative criteria, standard setting); reasons and motives for conducting evaluation; central types and purposes of evaluation; objectivity, bias, and validity; the function of program theory in evaluation; evaluator roles; core competencies required for conducting high-quality, professional evaluation; audiences and users of evaluation; alternative evaluation models and approaches; the political nature of evaluation and its implications for practice; professional standards and codes of conduct; and emerging and enduring issues in evaluation theory, method, and practice. Although the major focus of the course is program evaluation in multiple settings (e.g., education, criminal justice, health and medicine, human and social services, international development, science and technology), examples from personnel evaluation, policy analysis, and product evaluation are also used to illustrate foundational concepts. Throughout the course, critical thinking and active learning are emphasized.

Syllabus 

Course Syllabus PDF

Instructor

Dr. Chris L. S. Coryn

Teaching Assistant

Nicholas A. Saxton

Required Textbooks

Alkin, M. C. (Ed.). (2012). Evaluation roots: A wider perspective of theorists’ views and influences (2nd ed.). Thousand Oaks, CA: Sage.

Mathison, S. (Ed.). (2005). Encyclopedia of evaluation. Thousand Oaks, CA: Sage.

Stufflebeam, D. L., & Coryn, C. L. S. (2014). Evaluation theory, models, & applications (2nd ed.). San Francisco, CA: Jossey-Bass.


Required & Supplementary Readings 

These readings are for instructional purposes only.

Chelimsky, E. (1985). Comparing and contrasting auditing and evaluation: Some notes on their relationship. Evaluation Review, 9(4), 483-508. PDF

Chelimsky, E. (1997). The political environment of evaluation and what it means for the development of the field. In E. Chelimsky & W. R. Shadish (Eds.), Evaluation for the 21st century: A handbook (pp. 53-68). Thousand Oaks, CA: Sage. PDF

Chelimsky, E. (1998). The role of experience in formulating theories of evaluation practice. American Journal of Evaluation, 19(1), 35-55. PDF

Christie, C. A. (2003). What guides evaluation? A study of how evaluation practice maps onto evaluation theory. In C. A. Christie (Ed.), The practice-theory relationship in evaluation (pp. 7-35). New Directions for Evaluation, 97. San Francisco, CA: Jossey-Bass. PDF

Christie, C. A. (2007). Reported influence of evaluation data on decision makers’ actions. American Journal of Evaluation, 28(1), 8-25. PDF

Conlin, S., & Stirrat, R. L. (2008). Current challenges in development evaluation. Evaluation, 14(2), 193-208. PDF

Cook, T. D., Scriven, M., Coryn, C. L. S., & Evergreen, S. D. H. (2010). Contemporary thinking about causation in evaluation: A dialogue with Tom Cook and Michael Scriven. American Journal of Evaluation, 31(1), 105-117. PDF

Cooksy, L. J., & Caracelli, V. J. (2005). Quality, context and use: Issues in achieving the goals of metaevaluation. American Journal of Evaluation, 26(1), 31-42. PDF

Coryn, C. L. S., & Hobson, K. A. (2011). Using nonequivalent dependent variables to reduce internal validity threats in quasi-experiments: Rationale, history, and examples from practice. In S. Mathison (Ed.), Really new directions in evaluation. New Directions for Evaluation, 131. San Francisco, CA: Jossey-Bass. PDF

Coryn, C. L. S., Noakes, L. A., Westine, C. D., & Schröter, D. C. (2011). A systematic review of theory-driven evaluation practice from 1990 to 2009. American Journal of Evaluation, 32(3), 199-226. PDF

Coryn, C. L. S., Schröter, D. C., & Hanssen, C. E. (2009). Adding a time-series design element to the Success Case Method to improve methodological rigor: An application for non-profit program evaluation. American Journal of Evaluation, 30(1), 80-92. PDF

Cousins, J. B. (2004). Commentary: Minimizing evaluation misuse as principled practice. American Journal of Evaluation, 25(3), 391-397. PDF

Cousins, J. B., & Earl, L. M. (1992). The case for participatory evaluation. Educational Evaluation and Policy Analysis, 14(4), 397-418. PDF

Cousins, J. B., & Leithwood, K. A. (1986). Current empirical research on evaluation utilization. Review of Educational Research, 56(3), 331-364. PDF

Cullen, A. E., Coryn, C. L. S., & Rugh, J. (2011). The politics and consequences of including stakeholders in international development evaluations. American Journal of Evaluation. PDF

Datta, L-E. (2011). Politics and evaluation: More than methodology. American Journal of Evaluation, 32(2), 273-294. PDF

Dewey, J. D., Montrosse, B. E., Schröter, D. C., Sullins, C. D., & Mattox II, J. R. (2008). Evaluator competencies: What’s taught versus what’s sought. American Journal of Evaluation, 29(3), 268-287. PDF

Fleischer, D. N., & Christie, C. A. (2009). Evaluation use: Results from a survey of U.S. American Evaluation Association members. American Journal of Evaluation, 30(2), 158-175. PDF

Fournier, D. M. (1995). Establishing evaluative conclusions: A distinction between general and working logic. In D. M. Fournier (Ed.), Reasoning in evaluation: Inferential links and leaps (pp. 15-32). New Directions for Evaluation, 68. San Francisco, CA: Jossey-Bass. PDF

Heberger, A. E., Christie, C. A., & Alkin, M. C. (2010). A bibliometric analysis of the academic influences of and on evaluation theorists’ published works. American Journal of Evaluation, 31(1), 24-44. PDF

Henry, G. T., & Mark, M. M. (2003). Toward an agenda for research on evaluation. In C. A. Christie (Ed.), The practice-theory relationship in evaluation (pp. 69-80). New Directions for Evaluation, 97. San Francisco, CA: Jossey-Bass. PDF

Johnson, K., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F., & Volkov, B. (2009). Research on evaluation use: A review of the empirical literature from 1986 to 2005. American Journal of Evaluation, 30(3), 377-410. PDF

Joint Committee on Standards for Educational Evaluation. (2011). The program evaluation standards (3rd ed.). Thousand Oaks, CA: Sage. HTML

Mark, M. M. (2007). Building a better evidence base for evaluation theory: Beyond general calls to a framework of types of research on evaluation. In N. L. Smith & P. R. Brandon (Eds.), Fundamental issues in evaluation (pp. 111-134). New York, NY: Guilford. PDF

Mathison, S. (2007). What is the difference between research and evaluation—and why do we care? In N. L. Smith & P. R. Brandon (Eds.), Fundamental issues in evaluation (pp. 183-196). New York, NY: Guilford. PDF

Miller, R. L. (2010). Developing standards for empirical examinations of evaluation theory. American Journal of Evaluation, 31(3), 390-399. PDF

Miller, R. L., & Campbell, R. (2006). Taking stock of empowerment evaluation: An empirical review. American Journal of Evaluation, 27(3), 296-319. PDF

Newman, D. L., Scheirer, M. A., Shadish, W. R., & Wye, C. (1995). Guiding principles for evaluators. In W. R. Shadish, D. L. Newman, M. A. Scheirer, & C. Wye (Eds.), Guiding principles for evaluators (pp. 19-26). New Directions for Evaluation, 66. San Francisco, CA: Jossey-Bass. HTML

Patton, M. Q. (2001). Evaluation, knowledge management, best practices, and high quality lessons learned. American Journal of Evaluation, 22(3), 329-336. PDF

Picciotto, R. (2003). International trends and development evaluation: The need for ideas. American Journal of Evaluation, 24(2), 227-234. PDF

Picciotto, R. (2007). The new environment for development evaluation. American Journal of Evaluation, 28(4), 509-521. PDF

Rogers, P. J. (2008). Using programme theory to evaluate complicated and complex aspects of interventions. Evaluation, 14(1), 29-48. PDF

Rogers, P. J., Petrosino, A., Huebner, T. A., & Hacsi, T. A. (2000). Program theory evaluation: Practice, promise, and problems. In P. J. Rogers, T. A. Hacsi, A. Petrosino, & T. A. Huebner (Eds.), Program theory in evaluation: Challenges and opportunities (pp. 5-14). New Directions for Evaluation, 87. San Francisco, CA: Jossey-Bass. PDF

Scriven, M. (1986). New frontiers of evaluation. Evaluation Practice, 7(1), 7-44. PDF

Scriven, M. (1994a). The final synthesis. Evaluation Practice, 15(3), 367-382. PDF

Scriven, M. (1994b). The fine line between evaluation and explanation. Evaluation Practice, 15(1), 75-77. PDF

Scriven, M. (1994c). Product evaluation—The state of the art. Evaluation Practice, 15(1), 45-62. PDF

Scriven, M. (1998). Minimalist theory: The least theory that practice requires. American Journal of Evaluation, 19(1), 57-70. PDF

Scriven, M. (2001). Evaluation future tense. American Journal of Evaluation, 22(3), 301-307. PDF

Scriven, M. (2007). Key evaluation checklist (KEC). Kalamazoo, MI: Western Michigan University, The Evaluation Center. PDF

Shadish, W. R. (1994). Need-based evaluation theory: What do you need to know to do good evaluation? Evaluation Practice, 15(3), 347-358. PDF

Shadish, W. R. (1998). Evaluation theory is who we are. American Journal of Evaluation, 19(1), 1-19. PDF

Shulha, L. M., & Cousins, J. B. (1997). Evaluation use: Theory, research, and practice since 1986. Evaluation Practice, 18(1), 195-208. PDF

Smith, N. L. (1993). Improving evaluation theory through the empirical study of evaluation practice. Evaluation Practice, 14(3), 237-242. PDF

Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43-59. PDF

Stufflebeam, D. L. (2001a). The metaevaluation imperative. American Journal of Evaluation, 22(2), 183-209. PDF

Stufflebeam, D. L. (2001b). Evaluation models. New Directions for Evaluation, 89. San Francisco, CA: Jossey-Bass. PDF

Tourmen, C. (2009). Evaluators’ decision making. American Journal of Evaluation, 30(1), 7-30. PDF

Assignments 

Critical Reading Papers PDF

Application Paper PDF

Thought Paper PDF

Assignment Grading Rubrics

Critical Reading Assignments PDF

Lecture Notes

Lecture #1 PPTX

Lecture #2 PPTX

Lecture #3 PPTX

Lecture #4 PPTX

Lecture #5 PPTX

Lecture #6 PPTX

Lecture #7 PPTX

Lecture #8 PPTX

Videos

Coryn, C. L. S. (2009, September). Contemporary trends & movements in evaluation: Evidence-based, participatory & empowerment, & theory-driven evaluation. Kalamazoo, MI: Western Michigan University, The Evaluation Center. HTML

Datta, L-E. (2007, March). What are we? Chopped liver? Or why it matters if comparison groups are active, and what to do. Kalamazoo, MI: Western Michigan University, The Evaluation Center. HTML

Rugh, J. (2007, November). RealWorld evaluation: Working under budget, time, data, and political constraints. Kalamazoo, MI: Western Michigan University, The Evaluation Center. HTML

Rugh, J. (2009, September). RealWorld evaluation: Maximizing utility in spite of inadequate budget, time, & data as well as conflicting political pressures. Kalamazoo, MI: Western Michigan University, The Evaluation Center. HTML

Scriven, M. (2005, September). Theory-free evaluation. Kalamazoo, MI: Western Michigan University, The Evaluation Center. HTML

Scriven, M. (2006, September). The latest battle in the war over research designs for establishing causation. Kalamazoo, MI: Western Michigan University, The Evaluation Center. HTML

Self-Assessments

Ten Questions About Evaluation Theory PDF

Essential Competencies for Program Evaluators Self-Assessment PDF

Supplementary Materials

Key Evaluation Checklist PDF

 
