EMR 6550: Experimental and Quasi-Experimental Designs

Course Description 

Design provides the conceptual framework, built from structural elements, from which a study is planned and executed. It also sets the basic conditions under which facts and conclusions are inferred. As such, design warrants special treatment, given that even the most sophisticated and elegant statistical procedures can rarely, if ever, correct for poor design. Design is one of three discrete, yet interrelated, parts of what social scientists often refer to as method or methodology, and it is perhaps the most important.

With an emphasis on causal inference and various types of validity, the course systematically examines the theoretical, philosophical, and ideological foundations of, and principles for, designing experimental, quasi-experimental, and, to a lesser extent, nonexperimental investigations for applied research and evaluation. The primary foci of the course are quasi-experimental designs that lack either a comparison group or a pretest observation; quasi-experimental designs that use both control groups and pretests, including interrupted time-series and regression discontinuity designs; and randomized experimental designs, including the conditions conducive to conducting them and practical matters such as ethical considerations, attrition, and random assignment. Students will also be introduced to design sensitivity/statistical power for individual-level and group-level studies. Each of the major designs (as well as statistical power) includes data analysis applications; therefore, students should have at least a fundamental knowledge of statistics to succeed in the course.
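
As an illustration of the kind of data analysis application involved, below is a minimal sketch of a design sensitivity calculation for an individual-level, two-group randomized experiment (assuming Python with the statsmodels package, neither of which is prescribed by the course):

    # Illustrative sketch only: solve for the per-group sample size needed
    # to detect a medium standardized effect (d = 0.5) with a two-sided
    # independent-samples t test at alpha = .05 and power = .80.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
    print(round(n_per_group))  # approximately 64 participants per group

Group-level (cluster) designs require inflating such estimates by a design effect that depends on the intraclass correlation, which is one reason the course treats individual-level and group-level power separately.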

Syllabus 

Course Syllabus PDF

Instructors

Dr. Chris L. S. Coryn

Teaching Assistant

Kristin A. Hobson

Required Textbook

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

Lecture Notes

Lecture #1 PPTX

Lecture #2 PPTX

Lecture #3 PPTX

Lecture #4 PPTX

Lecture #5 PPTX

Lecture #6 PPTX

Lecture #7 PPTX

Supplementary Readings

Boruch, R. F. (1998). Randomized controlled experiments for evaluation and planning. In L. Bickman & D. J. Rog (Eds.), Handbook of applied social research methods (pp. 161-192). Thousand Oaks, CA: Sage.

Boruch, R. F., & Rui, N. (2008). From randomized controlled trials to evidence grading schemes: Current state of evidence-based practice in social sciences. Journal of Evidence-Based Medicine, 1(1), 41-49.

Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81-105.

Cook, T. D. (2002). Randomized experiments in educational policy research: A critical examination of the reasons the educational evaluation community has offered for not doing them. Educational Evaluation and Policy Analysis, 24(3), 175-199.

Cook, T. D. (2006). Describing what is special about the role of experiments in contemporary educational research: Putting the “gold standard” rhetoric into perspective. Journal of MultiDisciplinary Evaluation, 3(6), 1-7.

Cordray, D. S., & Pion, G. M. (2006). Treatment strength and integrity: Models and methods. In R. R. Bootzin & P. E. McKnight (Eds.), Strengthening research methodology: Psychological measurement and evaluation (pp. 103-124). Washington, DC: American Psychological Association.

Coryn, C. L. S., & Hobson, K. A. (2011). Using nonequivalent dependent variables to reduce internal validity threats in quasi-experiments: Rationale, history, and examples from practice. In S. Mathison (Ed.), Really new directions in evaluation: Young evaluators’ perspectives. New Directions for Evaluation, No. 131, 31-39. San Francisco, CA: Jossey-Bass.

Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281-302.

Flay, B. R., Biglan, A., Boruch, R. F., Castro, F. G., Gottfredson, D., Kellam, S., Mościcki, E. K., Schinke, S., Valentine, J. C., & Ji, P. (2005). Standards of evidence: Criteria for efficacy, effectiveness and dissemination. Prevention Science, 6(3), 151-175.

Gugiu, P. C., Westine, C. D., Coryn, C. L. S., & Hobson, K. A. (2012). An application of a new evidence grading system to research on the Chronic Care Model. Evaluation & the Health Professions.

Lipsey, M. W., & Cordray, D. S. (2000). Evaluation methods for social intervention. Annual Review of Psychology, 51, 345-375.

Lipsey, M. W., & Hurley, S. M. (2009). Design sensitivity: Statistical power for applied experimental research. In L. Bickman & D. J. Rog (Eds.), The Sage handbook of applied social research methods (2nd ed.; pp. 44-76). Thousand Oaks, CA: Sage.

Reichardt, C. S., & Mark, M. M. (1998). Quasi-experimentation. In L. Bickman & D. J. Rog (Eds.), Handbook of applied social research methods (pp. 193-228). Thousand Oaks, CA: Sage.

Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5), 688-701.

Smith, G. C. S., & Pell, J. P. (2003). Parachute use to prevent death and major trauma related to gravitational challenge: Systematic review of randomised controlled trials. British Medical Journal, 327, 1459-1461.

Power and Precision

Order Form DOCX

Power Analysis for Cluster, Interrupted-Time Series, and Regression Discontinuity Designs XLSX

Homework

Homework #1 PDF

Homework #2 PDF

Homework #3 PDF

Homework #4 PDF
