General Linear Model (GLM): 2-Way, Between-Subjects Designs

GLM: 2-way factorial

Reading: SPSS Base 9.0 User's Guide: Chapter 20, GLM Univariate Analysis
                 SPSS Advanced Models 9.0:
                     Syntax - GLM Overview, pp.  312-319
                     Syntax - GLM Univariate, pp. 320-341
Homework:
Download: glm_2way.sav        (Download Tips)

1. Overview

This set of notes describes how to analyze data from analysis of variance designs that have more than one factor. We will begin with a two-way, factorial design.

In a factorial analysis of variance design, each level of a treatment occurs under each level of every other treatment. A 2 x 3 factorial design, which has 6 cells, is shown below.

Table 1. Schematic of a 2 x 3 Full-Factorial ANOVA Design

                          Factor B
                    B1       B2       B3
Factor A    A1     A1B1     A1B2     A1B3
            A2     A2B1     A2B2     A2B3

Level A1 occurs under all levels of B, and level A2 also occurs under all levels of B.

This set of notes describes how to analyze data of this type with the GLM: Univariate procedure.



2. The Data

The data for this example come from a behavioral study of performance. The participants are 24 monkeys. Their task is to solve an "oddity" problem: they are shown three objects, e.g., two circles and a square, and the odd object has a food reward placed in a small well beneath it. The dependent measure (score) is the total number of trials on which they select the odd object and get the reward. There are two independent variables: magnitude of reward (reward), with three levels (1 grape, 3 grapes, or 5 grapes), and drive level (drive), with two levels (1 hour or 24 hours of food deprivation).

The data are saved in the file glm_2way.sav. The variables in glm_2way.sav are shown in Table 2.

Table 2. Variables in the glm_2way.sav Data File

Variable   Variable Label / Value Labels / Missing Values
reward     Magnitude of reward /
              1 = "1 grape"
              2 = "3 grapes"
              3 = "5 grapes"
drive      Drive Level of the Animals (hours of deprivation) /
              1 = "1 hour"
              2 = "24 hours"
score      Number correct on the 20 training trials



3. Testing the Assumptions

The assumptions of a full-factorial, between-subjects analysis of variance are shown in Table 3.

Table 3. ANOVA Assumptions
1. The observations in each of the cells are independent.
2. The scale of measurement for the dependent variable is at least interval.
3. The shapes of the distributions in each cell are symmetric. 
4. The distributions in each of the cells are homogeneous.

Assumption 1 (independence). The data are independent; there are different participants in each cell of the design.

Assumption 2 (scale of measurement). The scale of measurement for the total number of correct responses is ratio.

Assumption 3 (normality). It is assumed that the distributions in each of the six cells of the design are normal. The analysis of variance is robust if each of the distributions is symmetric. This assumption can be tested using explore to run normality tests for each cell of the design. However, the default analysis for explore is to run the requested statistics on the main effects for each of the selected factors. For these data, when you place both reward and drive in the Factor List: box, explore will perform normality tests within the three levels of the reward main effect and within the two levels of the drive main effect. It will not run the requested statistics within each of the six cells of the design, as required by the assumption. So we will have to make a change to the syntax command in order to produce the required tests.

Open the explore dialog box

Analyze
   Descriptive Statistics
        Explore ...

Move the variable score to the Dependent List: window and the variables drive and reward to the Factor List: window. Select stem-and-leaf plots, boxplots with factor levels together, normality plots with tests, power estimation for the spread-vs.-level plots with Levene test, and descriptive statistics. You can run explore to see that it does not give you the correct tests of the ANOVA assumptions. Note that the output displays statistics for each of the factors separately. That is, the output describes the main effects of drive level and reward; it does not give the statistics for each of the six cells of the design.

Get back to the explore dialog box and Paste the explore commands into the syntax editor. Here is what they should look like:

Table 4. Examine Syntax
EXAMINE
VARIABLES=score BY drive reward
/PLOT BOXPLOT STEMLEAF NPPLOT SPREADLEVEL
/COMPARE GROUP
/STATISTICS DESCRIPTIVES
/CINTERVAL 95
/MISSING LISTWISE
/NOTOTAL.

To get the statistics for each of the cells in this 2 x 3 design, merely insert the keyword BY between the two factors, drive and reward, in the following line of syntax:

VARIABLES = score BY drive reward

The corrected syntax is shown in Table 5.

Table 5. Revised Examine Syntax
EXAMINE
VARIABLES=score BY drive BY reward
/PLOT BOXPLOT STEMLEAF NPPLOT SPREADLEVEL
/COMPARE GROUP
/STATISTICS DESCRIPTIVES
/CINTERVAL 95
/MISSING LISTWISE
/NOTOTAL.

Then Run the syntax commands.

Note that the homogeneity of variance assumption can be tested in GLM or by running the Examine procedure. The Examine procedure will suggest a power transformation that you could use to reduce heterogeneity of variance; GLM will not suggest a transformation.

The skewness and kurtosis statistics are summarized in Table 6. They indicate that the distributions in each of the cells are reasonably symmetric. There is some kurtosis in the 24-hour, 5-grape condition, but it is not a concern because the ANOVA is robust when the distributions are symmetric.

Table 6. Skewness (SE Skewness) and Kurtosis (SE Kurtosis) Statistics

                                            Reward
                                1 grape        3 grapes        5 grapes
Drive    1 hour     skewness    0.63 (1.01)    0.00 (1.01)    -0.60 (1.01)
Level               kurtosis   -1.70 (2.62)   -4.34 (2.62)    -0.77 (2.62)
         24 hours   skewness   -0.60 (1.01)    0.00 (1.01)     0.00 (1.01)
                    kurtosis   -0.77 (2.62)   -3.30 (2.62)    -5.41 (2.62)

The Kolmogorov-Smirnov normality tests do not provide significance levels because of the small ns in each cell.

Assumption 4 (homogeneity). Levene's test of homogeneity indicates that the variances are homogeneous; see Table 7.

Table 7. Test of Homogeneity of Variance Based on the Mean

                                            Levene Statistic   df1   df2   Sig.
Number correct on the 20 training trials         1.000           5    18   .446

Whenever you run the Levene statistic you should always check the degrees of freedom. df1, the numerator, should be the number of between-subjects cells in the design minus 1. In this design there are 6 between-subjects cells, so df1 is 5. If you forget to add the BY term in the syntax for explore, there will be several Levene tests, one for each factor in the design. In this example there would have been two Levene tests: one for the drive level factor with df1 = 1, and one for the reward factor with df1 = 2.

The boxplots provide a nice visual sense of what's happening in this study; see Figure 1. First of all, the variances look reasonably homogeneous. It looks as though performance is not influenced by the magnitude of the reward for the animals with the high drive level, those that have been deprived for 24 hours. However, for animals with the low drive level, those that have been deprived for 1 hour, the higher the magnitude of the reward, the higher the performance. The boxplots suggest that the ANOVA should show a significant interaction between reward and drive level.

Figure 1. Boxplots showing the number of correct responses in the reward by drive level conditions.

Your boxplots may not look like the boxplots in Figure 1. The boxplots are organized according to the order in which the factors are entered into the Factor List: box. Table 8 shows the order of the boxplots (left to right) when drive level of the animals is the first variable entered into the Factor List: box, followed by the magnitude of reward.

Table 8. Order of Boxes when the Factor List Order is: Drive Level, Reward Magnitude

Box order          Drive Level   Reward Magnitude
(left to right)    (2 levels)    (3 levels)
1st box                 1               1
2nd box                 1               2
3rd box                 1               3
4th box                 2               1
5th box                 2               2
6th box                 2               3

The rule is: the index for the factor entered last is the fastest moving. In this case drive level remains at 1 while the index for the last factor (reward magnitude) is incremented through all its levels. Then drive level is incremented to the next level (2) and the index for the last factor is again incremented through all its levels.

Table 9 shows the order of the boxplots (left to right) when magnitude of reward is the first variable entered into the Factor List: box, followed by the drive level of the animals. The same rule for determining the left to right order of the boxes applies.

Table 9. Order of Boxes when the Factor List Order is: Reward Magnitude, Drive Level

Box order          Reward Magnitude   Drive Level
(left to right)    (3 levels)         (2 levels)
1st box                  1                 1
2nd box                  1                 2
3rd box                  2                 1
4th box                  2                 2
5th box                  3                 1
6th box                  3                 2



4. Select the GLM General Factorial Procedure

Select the GLM General Factorial procedure by clicking on

Analyze
   General Linear Model
        Univariate ...

Move the variable score to the Dependent Variable: window.

Reward and drive are both fixed factors; move them to the Fixed Factor(s): window. In this study there are no random factors or covariates. In fact, I can't remember any undergraduate or graduate thesis in psychology that used a random factor (other than subjects).

A variable can be assigned as a weighting variable in a weighted least squares analysis (WLS Weight).  This is an uncommon option in psychological research.

There are a couple of different approaches we could take at this point. We could go through all the option buttons in the dialog box, select every available option, and then try to wade through all the output that GLM produces. Or, my preferred approach, we could look at the basic output generated by GLM and then decide what additional information would be useful. So, after identifying the dependent variable and the factors and requesting descriptive statistics, click the OK button and see what we get.
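If you press Paste instead of OK, the pasted syntax should look roughly like the following sketch; the exact subcommands may vary a bit with the SPSS version and the options you selected:

GLM
  score BY drive reward
  /METHOD = SSTYPE(3)
  /INTERCEPT = INCLUDE
  /PRINT = DESCRIPTIVE
  /CRITERIA = ALPHA(.05)
  /DESIGN = drive reward drive*reward .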



5. The Basic GLM Output

The basic GLM output includes three tables. The first table is a list of the between-subjects factors and the ns for each level of those factors; see Table 10.

Table 10. Between-Subjects Factors

                                                      Value Label          N
Drive level of animals (hours of deprivation)   1     1 hour deprived     12
                                                2     24 hours deprived   12
Magnitude of reward                             1     1 grape              8
                                                2     3 grapes             8
                                                3     5 grapes             8

The second table contains the descriptive statistics; these are described in the next section, 6. Interpreting Significant Effects: Displaying the Means.

The third table contains the results of the analysis of variance; see Table 11. It includes the sums of squares, F values, and significance levels. There is one significant effect, the interaction between drive and reward, F(2, 18) = 3.927, p = .038. Neither of the main effects is significant: drive, F(1, 18) = 1.309, p = .268; reward, F(2, 18) = 3.055, p = .072.

Table 11. Tests of Between-Subjects Effects
Dependent Variable: Number correct on the 20 training trials

Source            Type III Sum of Squares   df   Mean Square         F   Sig.
Corrected Model          280.000(a)          5       56.000      3.055   .036
Intercept               2400.000             1     2400.000    130.909   .000
DRIVE                     24.000             1       24.000      1.309   .268
REWARD                   112.000             2       56.000      3.055   .072
DRIVE * REWARD           144.000             2       72.000      3.927   .038
Error                    330.000            18       18.333
Total                   3010.000            24
Corrected Total          610.000            23

a R Squared = .459 (Adjusted R Squared = .309)

Intercept. The intercept term in this ANOVA is a test of whether the grand mean is different from zero. Because all the dependent variable scores are positive, the grand mean is clearly different from zero, so the test of the intercept is not of interest to us.

Corrected model. The corrected model, with 5 df, is the overall model. It includes the variance due to the two main effects and the interaction, hence the 5 degrees of freedom. When the design is balanced, that is, when there are equal ns in each cell and there are no missing cells, the sum of squares for the corrected model will be the sum of the sums of squares for each of the main effects and interactions. In this case

SScorrected model = SSdrive + SSreward + SSdrive*reward
           280.00 = 24.00 + 112.00 + 144.00

There is an R squared associated with the corrected model; see note a. The R squared is the amount of dependent variable variance that is accounted for by the corrected model. In this case the two main effects and the interaction account for 46% of the variance in the scores. The R squared for a particular sample will always be larger than the R squared for the population from which the sample was drawn; R squared takes advantage of chance variation in the sample that will not be present in the population as a whole. The adjusted R squared is an estimate of the predictability of the model in the population as a whole. It is always smaller than R squared. How much smaller depends upon the number of variables in the model and the sample size. The smaller the sample size, holding constant the number of variables, the larger the correction. The larger the number of variables in the model, holding sample size constant, the larger the correction. In this case the model is expected to account for 31% of the variance in the dependent variable in the general population. The adjusted R squared is sometimes called the shrunken R squared.
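One common form of the adjustment, which reproduces the value in Table 11 (assuming it is the formula SPSS uses here), is

Adjusted R squared = 1 - (1 - R squared)(N - 1)/(N - k - 1)
                   = 1 - (1 - .459)(24 - 1)/(24 - 5 - 1)
                   = 1 - (.541)(1.278)
                   = .309

where N is the number of cases (24) and k is the number of degrees of freedom for the corrected model (5).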

The sum of squares for the corrected total is the sum of the sums of squares for the corrected model and the error term.

In psychology it is unusual to report statistics for the corrected model. We are typically interested in the main effects and interactions rather than the model as a whole.

The sum of squares for the total is the sum of the sums of squares for the intercept, the main effects, the interaction, and the error term.

The statistics for the within-cells error term are reported in the Error row. This mean square error is used to test the main effects and the interaction.
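For example, the F for the interaction is its mean square divided by the mean square error:

F(drive*reward) = 72.000 / 18.333 = 3.927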



6. Interpreting Significant Effects: Displaying the Means

There was one significant effect in the ANOVA table, the interaction between reward and drive level. The next step is to try to understand the interaction. We need to look at the means for the interaction. For this 2 x 3 design we have three different ways of looking at the interaction means.

Descriptives

The first option is to look at the descriptive statistics. The descriptive statistics will give the means, standard deviations, and ns for each between-subjects cell in the design. Because we have a 2-way ANOVA these descriptive statistics are also the 2-way interaction means. If we had a 3-way or larger design, the descriptive statistics would give us the means for the highest order interaction, but not for any of the lower order interactions. The check box for descriptive statistics is located in the Options... dialog box. Find the Display section at the lower left and click the Descriptive Statistics box.

The descriptive statistics are shown in Table 12. Let's look at how the means are organized in Table 12. As you go across the table, the first variable displayed is drive level; the second variable displayed is the magnitude of reward. Because of this organization the means for each level of magnitude of reward are shown within each of the two drive levels. The means displayed in the Total summary at the bottom of the table are the main effect means for magnitude of reward. This order was determined when the factors were initially selected in the dialog box. In this instance the drive factor was listed first. If the magnitude of reward had been the first variable listed, then the first variable in Table 12 would have been magnitude of reward, the means for each level of drive would have been shown within each level of reward, and the summary at the bottom would have been the main effect means for the drive level main effect.

Table 12. Descriptive Statistics
Dependent Variable: Number correct on the 20 training trials

Drive level of animals
(hours of deprivation)   Magnitude of reward    Mean   Std. Deviation    N
1 hour deprived          1 grape                3.00        3.16         4
                         3 grapes              10.00        4.76         4
                         5 grapes              14.00        3.92         4
                         Total                  9.00        5.97        12
24 hours deprived        1 grape               11.00        3.92         4
                         3 grapes              12.00        5.48         4
                         5 grapes              10.00        4.08         4
                         Total                 11.00        4.20        12
Total                    1 grape                7.00        5.40         8
                         3 grapes              11.00        4.87         8
                         5 grapes              12.00        4.28         8
                         Total                 10.00        5.15        24

The main effect means can be found in the rows identified as Total. For example, the main effect means for rewards of 1 grape, 3 grapes, and 5 grapes are 7.00, 11.00, and 12.00, respectively. The main effect means for 1 hour deprived and 24 hours deprived are 9.00 and 11.00, respectively.

Note: As described in the notes on unequal-n designs, the main effect means shown in the Descriptive Statistics table are only appropriate if you have an equal number of cases in each cell of the design. In this case there are 4 animals in each cell of the design, so these main effect means are appropriate. If the cell sizes were not equal, then you would report the "estimated marginal means" rather than the means from the Descriptives table.

Estimated Marginal Means

There is another way to display the means for significant effects. The top of the Options... dialog box has a section called Estimated Marginal Means. The box at the left displays the names of the Factor(s) and Factor Interactions:. Moving any main effect or interaction to the Display Means for: box will cause those main effect or interaction means to be displayed. Move the significant drive*reward interaction term to the Display Means for: box.
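In syntax this corresponds to an EMMEANS subcommand on the GLM command, roughly:

/EMMEANS = TABLES(drive*reward)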

The estimated marginal means for the drive*reward interaction are shown in Table 13. In addition to the means, the table also displays the standard error and the 95% confidence interval for each mean. Note that cell sizes and standard deviations are not printed in the estimated marginal means table.

Table 13. Estimated Marginal Means for the Drive Level by Magnitude of Reward Interaction
Drive level of animals (hours of deprivation) * Magnitude of reward
Dependent Variable: Number correct on the 20 training trials

Drive level of animals                                          95% Confidence Interval
(hours of deprivation)   Magnitude of reward    Mean   Std. Error   Lower Bound   Upper Bound
1 hour deprived          1 grape                3.000     2.141       -1.498          7.498
                         3 grapes              10.000     2.141        5.502         14.498
                         5 grapes              14.000     2.141        9.502         18.498
24 hours deprived        1 grape               11.000     2.141        6.502         15.498
                         3 grapes              12.000     2.141        7.502         16.498
                         5 grapes              10.000     2.141        5.502         14.498

Profile Plots

The means can be displayed graphically using the Plots... option, which displays profile plots. The vertical axis of a profile plot is always the dependent variable (score in this example). You can select any factor as the horizontal axis of the plot. For this study let's choose the reward factor for the horizontal axis. Next, choose which of the factors will be displayed as separate lines or as separate plots. If you select drive level to be displayed as separate lines, then each drive level will be displayed as a separate line in the profile plot. If you select drive level to be displayed as separate plots, then the two levels of drive will be displayed in separate plots. Let's display drive level as separate lines. Finally, press the Add button to complete the process of defining the profile plot. The profile plot is shown in Figure 2.
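In syntax, the plot corresponds to a PLOT subcommand; in this sketch the factor named first forms the horizontal axis and the second factor forms the separate lines:

/PLOT = PROFILE(reward*drive)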

Figure 2. Profile Plot of the Reward by Drive Level Interaction

 


This profile plot emphasizes the effects of reward within each of the drive level conditions by drawing a line for each of the drive level conditions. Inspection of the plot clearly shows that the two lines are not parallel. Nonparallel lines are the defining feature of an interaction; parallel lines would indicate no interaction. The line representing the high drive level (24 hours deprived) is relatively flat. That is, it looks as though the magnitude of the reward does not influence performance when the animals have a high drive level. The line representing the low drive level condition (1 hour deprived) has a positive slope: the larger the reward, the better the performance of the animals. In summary, inspection of the plot suggests that reward has no effect for animals that have a high drive level, while reward is positively related to performance for animals that have a low drive level.

If you choose the drive factor as the horizontal axis, the resulting profile plot will emphasize the effects of drive level within each level of reward. After you look at the profile plot you can always go back and choose the other factor as the horizontal axis if you think it will prove more helpful in describing the data. For this set of data, try each of the factors as the horizontal axis and think about which of the displays is most helpful in describing the data.

You can create profile plots for main effects by selecting a factor for the horizontal axis only. That is, don't select a factor for either the separate lines or separate plots option. Try creating a profile plot for the reward main effect.



7. Interpreting Significant Effects: Post-Hoc Pairwise Comparisons

Main Effects

There are no significant main effects in this analysis. If there were, post hoc tests for those significant main effects could be performed in two ways in GLM:

(a) Go to the Post Hoc... dialog box. Select the desired main effect(s), select the type(s) of test(s) you wish to perform, and then press Continue. The post hoc tests are the same as those described in the oneway procedure.

(b) Go to the Estimated Marginal Means section of the Options... dialog box. Select the desired main effect(s) from the Factor(s) and Factor Interactions: window and move them over to the Display Means for: window. Check the Compare main effects option, and press Continue. Using this option there are three possible ways to adjust the confidence intervals: (i) none, in which case the comparisons are made by uncorrected t tests (this is also called the LSD test); (ii) Bonferroni, in which case the alpha level is found as

Bonferroni alpha = alpha/C

where C is the number of possible paired comparisons for that main effect test; and (iii) Sidak, in which case the alpha level is found as

Sidak alpha = 1 - (1 - alpha)^(1/C)

where C is the number of possible paired comparisons for that main effect test.
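For example, the reward main effect has three levels, so there are C = 3 possible paired comparisons and the per-comparison alpha levels work out to

Bonferroni alpha = .05/3 = .0167
Sidak alpha = 1 - (1 - .05)^(1/3) = .0170

In syntax, option (b) corresponds to adding COMPARE and an adjustment keyword to the EMMEANS subcommand; this is a sketch, and ADJ(SIDAK) or ADJ(LSD) could be substituted:

/EMMEANS = TABLES(reward) COMPARE ADJ(BONFERRONI)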

For practice run the Tukey HSD post hoc test on the reward main effect. The output should show no differences between any of the means because the reward main effect was not significant.
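In syntax, that practice run might look something like this sketch:

GLM
  score BY drive reward
  /POSTHOC = reward(TUKEY)
  /DESIGN = drive reward drive*reward .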

Interaction Effects

Inspection of the interaction means suggests that there is no effect of reward within the high drive conditions, but that there is an effect of reward within the low drive conditions: the higher the reward, the better the performance. One possible way of statistically testing those observations is to run post hoc tests. Post hoc tests are included in the Post Hoc... option.

The Factor(s): window in the upper left displays the factors for which post hoc tests can be performed. Move a factor to the Post Hoc Tests for: window to perform the post hoc tests for that factor. As you can see, there is a problem: we want to test the interaction means, but only the main effects are listed.

There is no way around this problem within the dialog box. If you try to edit the GLM syntax for the post hoc test, you get an error message stating that the asterisk is an illegal symbol and that only main effects can be specified in post hoc tests. GLM will only perform post hoc tests on main effect factors.

We will discuss two options for making paired comparison tests for significant interactions in this section: (a) running the test by hand using Tukey's HSD; and (b) creating a single factor from the interaction and testing that factor in GLM.

Interaction Effects: Running Tukey's HSD Test by Hand

One solution for testing the differences among the interaction means is to run the post hoc tests by hand. For equal-n designs the critical difference, psi(HSD), is

psi(HSD) = q(alpha, p, v) * sqrt(MSerror / n)

where MSerror is the MSerror from the analysis of variance; n is the number of cases in each cell; and q(alpha, p, v) is obtained from the Percentage Points of the Studentized Range Statistic table at a given significance level, alpha, with p means, and v degrees of freedom for the MSerror term.

In this example MSerror = 18.333, n = 4, alpha = .05, p = 6 cells, and v = 18. The value q(.05, 6, 18) from the Studentized Range Statistic table is 4.49. Substituting into the HSD formula we get

psi(HSD) = 4.49 * sqrt(18.333 / 4)
         = 4.49 * 2.14
         = 9.61

Mean differences that are larger than 9.61 are significantly different from each other. 

The differences between all pairs of means in this study are shown in Table 14.

Table 14. Mean Differences for all Possible Paired Comparisons
Diff(i - j) = row(i) mean - column(j) mean

                                             1 hour deprived              24 hours deprived
                                        1 grape  3 grapes  5 grapes   1 grape  3 grapes  5 grapes
Drive level      Reward        Mean       3.0      10.0      14.0       11.0      12.0      10.0
1 hour           1 grape        3.0        --      -7.0     -11.0*      -8.0      -9.0      -7.0
deprived         3 grapes      10.0                 --       -4.0       -1.0      -2.0       0.0
                 5 grapes      14.0                           --         3.0       2.0       4.0
24 hours         1 grape       11.0                                      --       -1.0       1.0
deprived         3 grapes      12.0                                                --        2.0
                 5 grapes      10.0                                                          --

* different at p < .05 using Tukey's HSD procedure.

The only significant difference among the paired comparisons is between the low drive (1 hour deprived), 1-grape condition, M = 3.00, and the low drive (1 hour deprived), 5-grape condition, M = 14.00. The difference between those two conditions, 11.00, is larger than the HSD critical difference of 9.61.

 

Interaction Effects: Creating a Oneway Effect from the 2-Way Interaction

In this solution we (a) transform the six cells in the two-way interaction into a single factor with 6 levels, and then (b) run a single-factor ANOVA asking for an appropriate post hoc test (e.g., Tukey's HSD).

Transform the 2-way interaction into a main effect. The IF data transformation can be used to create a factor that includes each of the cells in the 2-way interaction. The syntax for creating a new factor, called "int" (for interaction), is shown in Table 15. The IF syntax can be entered by hand or created using the COMPUTE transformation dialog box. If you use the dialog box you will need to individually paste each IF to the syntax window; it takes more time to use the dialog box than to just enter the syntax commands directly.

Table 15. Using IF transformations to create a new factor based on the interaction means

IF (drive = 1 and reward = 1) int = 1 .
IF (drive = 1 and reward = 2) int = 2 .
IF (drive = 1 and reward = 3) int = 3 .
IF (drive = 2 and reward = 1) int = 4 .
IF (drive = 2 and reward = 2) int = 5 .
IF (drive = 2 and reward = 3) int = 6 .
VARIABLE LABEL int 'factor for post hoc test of the interaction'.
VALUE LABELS int 1 'drive = 1, reward = 1'
                 2 'drive = 1, reward = 2'
                 3 'drive = 1, reward = 3'
                 4 'drive = 2, reward = 1'
                 5 'drive = 2, reward = 2'
                 6 'drive = 2, reward = 3' .
EXECUTE .

You should include the VARIABLE LABEL and VALUE LABELS commands in the syntax, or else go into the newly created variable, int, and manually add the labels.

Running the post hoc test. After you run the syntax commands you can go into GLM: Univariate, enter score as the dependent variable and int as the factor. Then you can test the int "main effect" in either of the ways described above under Main Effects; a syntax sketch is shown below.
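For example, the whole command might look like this sketch (the EMMEANS route described above would work as well):

GLM
  score BY int
  /POSTHOC = int(TUKEY)
  /DESIGN = int .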

You should get the same results whether you run the HSD paired comparisons by hand or create the interaction factor and run the Tukey test on that factor.

Here are a few things to be careful about when you run this type of analysis: 

1) Always check your transformations to make sure they are doing what you expected them to do.

2) After running the analysis, check the analysis of variance table to make sure that the df and MS for the error term are the same as the df and MS for the error term in the original analysis. If they are not the same, then you have done something wrong.

3) Do not be concerned with the F statistic for the 'int' main effect term; it is not relevant for this analysis.



8. Simple Main Effects Analysis

Another way to interpret 2-way interaction effects is through the use of a simple main effects analysis. In a simple main effects analysis the main effects of one variable are analyzed within each of the levels of the other variable.

Simple main effects analyses are described in

GLM: Simple Main Effects
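As a preview, more recent versions of SPSS also let you request simple main effects directly on the EMMEANS subcommand; this sketch is an assumption to check against your version's syntax reference, as the option is not in the 9.0 dialog boxes:

/EMMEANS = TABLES(drive*reward) COMPARE(reward) ADJ(BONFERRONI)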



9. Interpreting Effects: Effect Size and Observed Power

Two other statistics that are often useful in interpreting an analysis of variance are an estimate of effect size and the observed power of an effect. Estimates of effect size and observed power can be selected in the Options... dialog box. The resulting statistics are shown in Table 16.
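In syntax these options correspond to keywords on the PRINT subcommand, roughly:

/PRINT = ETASQ OPOWER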

Table 16. Tests of Between-Subjects Effects
Dependent Variable: Number correct on the 20 training trials

Source           Type III SS   df   Mean Square         F   Sig.   Eta Squared   Noncent. Parameter   Observed Power(a)
Corrected Model    280.000(b)   5       56.000       3.055   .036       .459            15.273                .742
Intercept         2400.000      1     2400.000     130.909   .000       .879           130.909               1.000
DRIVE               24.000      1       24.000       1.309   .268       .068             1.309                .192
REWARD             112.000      2       56.000       3.055   .072       .253             6.109                .517
DRIVE * REWARD     144.000      2       72.000       3.927   .038       .304             7.855                .630
Error              330.000     18       18.333
Total             3010.000     24
Corrected Total    610.000     23

a Computed using alpha = .05
b R Squared = .459 (Adjusted R Squared = .309)

Eta squared. The effect size is measured as the partial eta squared. The partial eta squared describes the proportion of the variability in the dependent measure that is attributable to a factor. For univariate F tests and t tests the formula for the partial eta squared is

partial eta squared = SSeffect / (SSeffect + SSerror)

where SSeffect is the sum of squares for the effect and SSerror is the sum of squares for the error term.

The eta squared for the interaction effect would be

eta squared = 144.00 / (144.00 + 330.00)
            = 144.00 / 474.00
            = .304

Thirty percent of the variability in the performance scores can be attributed to the interaction effect.

See Statistical note: Effect size for additional information about different measures of effect size.

Observed Power. Power is the probability of correctly rejecting a false null hypothesis. The power of the interaction effect is .630: if the study were to be replicated 100 times, we would correctly reject the null hypothesis in about 63 of those replications. Power can be improved by increasing the sample size, by decreasing sources of error in the study, or by increasing the magnitude of the effect itself. In this example perhaps the magnitude of the effect could be increased by increasing the magnitude of the highest reward.

Noncentrality parameter. The noncentrality parameter is always displayed when you ask for observed power. If the null hypothesis is not true, then the F statistic has a noncentral sampling distribution with an associated noncentrality parameter. This noncentrality parameter is used to compute the observed power.



© Lee A. Becker, 1997-1999. Revised 11/28/99.