Evaluation-Specific Methodology


Dr. Michael Scriven – Distinguished Professor, Claremont Graduate University and Co-Director, Claremont Evaluation Center
Tuesday, April 29, 2014
Location: 4410 Ellsworth Hall
Time: Noon – 1pm

Professional, scientifically acceptable evaluation requires a wide range of competencies from the social science methodology toolkit, but that's not enough. Unlike the old-time social scientists, you've got to get to an evaluative conclusion—that's what it means to say you're doing evaluation—and to do that you apparently have to have some evaluative premises. Where do those come from, and how can you validate them against attack by people who don't like your conclusion? The way most evaluators—and the texts—handle this is by relying on common agreement or intuitions about which value claims are correct, but of course that won't work when there's deep disagreement, e.g., about abortion, suicide hotlines, creationism in the science curriculum, 'natural' medicine, healthcare for the poor, torture, spying, and war. Evaluation-specific methodology covers the selection and verification/refutation of all value claims we encounter in professional evaluation; how to integrate them with data claims (and data syntheses) by inferences from and to them; and how to represent the integrated result in an evaluation report. Important sub-topics include: rubrics, needs assessment, the measurement of values, crowd-sourced evaluation, and the special case of ethical value claims. (Not to mention a completely new philosophy of science.)
