Your class presentation must be a web-based presentation. We will have a one-day workshop on using FrontPage to create a web page. The workshop will be held in one of the computer classrooms, Columbine 223.
Prior to the workshop you will need to ask Computing Services to set up a FrontPage account for you on the NT server. Go to Dwire Hall 250 and ask to have your "Front Page, NT server" account set up. It may take a few days for the account to be created.
Make sure you know your NT username and password when you arrive at the workshop.
The homework assignment is to work through these workshop notes prior to coming to the workshop.
Whitley, B. E., Jr. (1996). The internal validity of research. In Principles of research in behavioral science (pp. 203-231). Mountain View, CA: Mayfield.
Whitley, B. E., Jr. (1996). The external validity of research. In Principles of research in behavioral science (pp. 465-491). Mountain View, CA: Mayfield.
Additional (optional) reading:
Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.
March 9 to March 21
IX. Statistical Analysis of Quasi-Experimental Designs
The focus of the statistics course (Psy 585) is on experimental designs. In this section we extend that knowledge to quasi-experimental designs. What techniques are available to design and analyze studies where you have not randomly assigned participants to conditions? For example, suppose you were interested in the personality of people who had suffered a closed-head injury. You want to compare closed-head-injured participants with a normative group of people who had not suffered a closed-head injury. You can't randomly assign people to treatment and control conditions, so this is a quasi-experimental design. What special precautions are necessary for quasi-experimental designs? Randomization is used to equate groups prior to treatment in a true experiment; how can the experimental and control groups be equated in a quasi-experiment? We will look at a priori procedures for selecting participants and at post hoc statistical procedures for equating quasi-experimental groups, such as matching, blocking, and analysis of covariance.
X. Effect Size
The APA publication manual strongly recommends reporting effect sizes. We will discuss how to compute and interpret effect size measures such as Cohen's d and the effect-size correlation.
Rosnow, R. L., & Rosenthal, R. (1996). Computing contrasts, effect sizes, and counternulls on other people's published data: General Procedures for Research Consumers. Psychological Methods, 1, 331–340.
Schmidt, F. L. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for training of researchers. Psychological Methods, 1, 115–129.
Van Etten, M. L., & Taylor, S. (1998). Comparative efficacy of treatments for posttraumatic stress disorder: A meta-analysis. Clinical Psychology and Psychotherapy.
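As a preview of the computations we will cover, Cohen's d for two independent groups is the difference between the group means divided by the pooled standard deviation, and the effect-size correlation can be obtained from d. A minimal sketch (function names are illustrative, not taken from the readings; the r conversion shown assumes equal group sizes):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (n - 1 in the denominator)
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def effect_size_r(d):
    """Effect-size correlation from d, assuming equal group sizes:
    r = d / sqrt(d^2 + 4)."""
    return d / math.sqrt(d ** 2 + 4)
```

For example, `cohens_d([2, 4, 6], [1, 3, 5])` returns 0.5, a "medium" effect by Cohen's conventional benchmarks.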
XI. Clinical Significance
Clinical psychologists are interested in whether a treatment produced clinically significant effects rather than merely whether the effect is statistically significant. We will look at some of the methods that are being developed to assess clinical significance.
Tingey, R. C., Lambert, M. J., Burlingame, G. M., & Hansen, N. B. (1996). Assessing clinical significance: Proposed extensions to method. Psychotherapy Research, 6, 109-123.
Jacobson, N. S., & Truax, P. (1991). Clinical significance: A statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12-19.
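One of the methods we will examine is the Reliable Change Index from the Jacobson and Truax (1991) reading, which asks whether a client's pretest-to-posttest change exceeds what measurement error alone could produce. A rough sketch of the computation (the function name and parameterization are illustrative):

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson & Truax (1991) Reliable Change Index.

    RC = (post - pre) / S_diff, where
    S_diff = sqrt(2 * SE**2) and SE = sd_pre * sqrt(1 - reliability).
    |RC| > 1.96 suggests the change is unlikely to be due to
    measurement error alone (p < .05).
    """
    se = sd_pre * math.sqrt(1 - reliability)  # standard error of measurement
    s_diff = math.sqrt(2 * se ** 2)           # SD of the difference scores
    return (post - pre) / s_diff
```

For instance, with a pretest SD of 10 and test reliability of .84, a drop from 40 to 25 on a symptom measure yields an RC of about -2.65, which exceeds the conventional 1.96 cutoff in magnitude.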
XII. Single-Case Experiments
Single-case experiments involve a more detailed analysis of an individual's response to a treatment. They evolved out of the behaviorist perspective. We will survey some of the single-case experimental designs and examine a case study that uses one.