Doctoral Dissertation Announcement
Candidate: John S. Risley
Doctor of Philosophy
Department: The Evaluation Center
Title: Legislative Program Evaluation Conducted by State Legislatures in the United States
Dr. Michael Scriven, Chair
Dr. James Sanders
Dr. Chris Coryn
Date: Wednesday, November 28, 2007, 2:00–4:00 p.m.
Ellsworth Hall, Room 4405
This study examines how U.S. state legislative staffs conduct evaluations. It addresses the prevalence of state legislative program evaluation (LPE) units, the standards those units follow, the recommendations that LPE reports offer, and the quality of the reports as judged against several criteria. The study also addresses the feasibility of using metaevaluation to evaluate a large number of reports using solely the information contained in the reports themselves.
The study uses metaevaluation criteria developed by combining aspects of, primarily, the Generally Accepted Government Auditing Standards (GAGAS) for performance audits and the Joint Committee’s Program Evaluation Standards (PES), and, secondarily, Scriven’s Key Evaluation Checklist. In the process of developing the metaevaluation criteria, the GAGAS and the PES are closely compared. The criteria were applied to a random sample of 100 of the 1,911 LPE reports published by state LPE units from 2001 through 2005.
The study finds that state LPE units, and consequently the reports they produce, are far more closely tied to performance auditing and the GAGAS than to evaluation and evaluation standards. The metaevaluation criterion on which the LPE reports varied most was the comparisons criterion: roughly a third of all LPE reports were graded excellent or good, another third fair, and the final third poor, the last group typically making no mention of comparisons at all. Evaluations were more likely than performance audits to be graded excellent or good on this criterion.
This study also seeks to test a methodological model: using metaevaluation to examine a large number of reports. The results of this attempt are mixed. Using metaevaluation in this way can identify the specific areas where evaluation reports excel or fall short. However, accurately and fairly evaluating reports solely from the reports themselves presents major problems, chief among them the inability to verify either the accuracy of most data collected or the propriety of the techniques used to collect data from human subjects. Nevertheless, important conclusions can still be drawn, including how well LPE reports use comparative studies in reaching their conclusions, how focused the reports are on goals and objectives, and how closely the reports follow established professional standards.