Growth Curve Analysis, Part 2

Dan Mirman

13 March 2018

Modeling non-linear change over time

Function must be adequate to data

Two kinds of non-linearity

  1. Non-linear in variables (Time): \(Y_{ij} = \beta_{0i} + \beta_{1i} \cdot Time_{j} + \beta_{2i} \cdot Time^2_{j} + \epsilon_{ij}\)
  2. Non-linear in parameters (b): \(Y = \frac{p-b}{1+\exp \left( 4 \cdot \frac{s}{p-b} \cdot (c-t) \right)} + b\)
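
A minimal sketch of the distinction using simulated data (variable and model names here are illustrative, not from the workshop): a model that is non-linear in the variables is still linear in its parameters, so lm() works; a model that is non-linear in its parameters requires non-linear estimation, e.g. nls().

# hypothetical data: a noisy logistic rise over 10 time points
t <- 1:10
y <- 1/(1 + exp(-(t - 5))) + rnorm(10, sd=0.05)
d <- data.frame(t, y)
# 1. non-linear in variables: lm() with polynomial time terms
m.poly <- lm(y ~ poly(t, 2), data=d)
# 2. non-linear in parameters: non-linear least squares with a
#    self-starting logistic function
m.logis <- nls(y ~ SSlogis(t, Asym, xmid, scal), data=d)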

Example of lack of dynamic consistency

Dynamic consistency means that the model of the average equals the average of the individual models; many functions that are non-linear in their parameters lack this property. Example: a logistic power peak function (Scheepers, Keller, & Lapata, 2008) fit to semantic competition data (Mirman & Magnuson, 2009).

Prediction: Two kinds

Fits: Statistical models

Using higher-order polynomials

Natural vs. Orthogonal polynomials

Interpreting orthogonal polynomial terms

Intercept (\(\beta_0\)): Overall average

Linear (\(\beta_1\)): Overall slope

Quadratic (\(\beta_2\)): Rise and fall rate around the center

Cubic, Quartic, … (\(\beta_3, \beta_4, ...\)): Steepness around the inflection points
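
A quick way to see the difference between natural and orthogonal time terms, using base R's poly():

t <- 1:10
nat <- poly(t, 2, raw=TRUE)  # natural: t and t^2
ort <- poly(t, 2)            # orthogonal
cor(nat[,1], nat[,2])  # ~0.97: natural linear and quadratic terms are highly correlated
cor(ort[,1], ort[,2])  # 0: orthogonal terms are uncorrelated
colMeans(ort)          # both orthogonal terms are centered on 0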

Example

Effect of transitional probability on word learning

summary(WordLearnEx)
##     Subject       TP          Block         Accuracy    
##  244    : 10   Low :280   Min.   : 1.0   Min.   :0.000  
##  253    : 10   High:280   1st Qu.: 3.0   1st Qu.:0.667  
##  302    : 10              Median : 5.5   Median :0.833  
##  303    : 10              Mean   : 5.5   Mean   :0.805  
##  305    : 10              3rd Qu.: 8.0   3rd Qu.:1.000  
##  306    : 10              Max.   :10.0   Max.   :1.000  
##  (Other):500
ggplot(WordLearnEx, aes(Block, Accuracy, color=TP)) + 
  stat_summary(fun.data=mean_se, geom="pointrange") + 
  stat_summary(fun.y=mean, geom="line")

Example: Prepare for GCA

Create 2nd-order orthogonal polynomial

WordLearn.gca <- code_poly(WordLearnEx, predictor="Block", poly.order=2, orthogonal=TRUE)

summary(WordLearn.gca)
##     Subject       TP          Block         Accuracy      Block.Index  
##  244    : 10   Low :280   Min.   : 1.0   Min.   :0.000   Min.   : 1.0  
##  253    : 10   High:280   1st Qu.: 3.0   1st Qu.:0.667   1st Qu.: 3.0  
##  302    : 10              Median : 5.5   Median :0.833   Median : 5.5  
##  303    : 10              Mean   : 5.5   Mean   :0.805   Mean   : 5.5  
##  305    : 10              3rd Qu.: 8.0   3rd Qu.:1.000   3rd Qu.: 8.0  
##  306    : 10              Max.   :10.0   Max.   :1.000   Max.   :10.0  
##  (Other):500                                                           
##      poly1            poly2       
##  Min.   :-0.495   Min.   :-0.348  
##  1st Qu.:-0.275   1st Qu.:-0.261  
##  Median : 0.000   Median :-0.087  
##  Mean   : 0.000   Mean   : 0.000  
##  3rd Qu.: 0.275   3rd Qu.: 0.174  
##  Max.   : 0.495   Max.   : 0.522  
## 

Example: Fit the models

library(lme4)
# fit base model; REML=F (maximum likelihood) so that models differing
# in fixed effects can be compared with likelihood ratio tests
m.base <- lmer(Accuracy ~ (poly1+poly2) + (poly1+poly2 | Subject), 
               data=WordLearn.gca, REML=F)
#add effect of TP on intercept 
m.0 <- lmer(Accuracy ~ (poly1+poly2) + TP + (poly1+poly2 | Subject), 
            data=WordLearn.gca, REML=F)
#add effect on slope
m.1 <- lmer(Accuracy ~ (poly1+poly2) + TP + TP:poly1 + (poly1+poly2 | Subject), 
            data=WordLearn.gca, REML=F)
#add effect on quadratic
m.2 <- lmer(Accuracy ~ (poly1+poly2)*TP + (poly1+poly2 | Subject), 
            data=WordLearn.gca, REML=F)

Example: Compare the model fits

anova(m.base, m.0, m.1, m.2)
## Data: WordLearn.gca
## Models:
## m.base: Accuracy ~ (poly1 + poly2) + (poly1 + poly2 | Subject)
## m.0: Accuracy ~ (poly1 + poly2) + TP + (poly1 + poly2 | Subject)
## m.1: Accuracy ~ (poly1 + poly2) + TP + TP:poly1 + (poly1 + poly2 | 
## m.1:     Subject)
## m.2: Accuracy ~ (poly1 + poly2) * TP + (poly1 + poly2 | Subject)
##        Df  AIC  BIC logLik deviance Chisq Chi Df Pr(>Chisq)  
## m.base 10 -331 -288    175     -351                          
## m.0    11 -330 -283    176     -352  1.55      1      0.213  
## m.1    12 -329 -277    176     -353  0.36      1      0.550  
## m.2    13 -333 -276    179     -359  5.95      1      0.015 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
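
The Chisq column is the change in deviance between adjacent nested models; for example, the test of m.2 against m.1 can be reproduced by hand:

dev.diff <- as.numeric(-2*(logLik(m.1) - logLik(m.2)))  # ~5.95
pchisq(dev.diff, df=1, lower.tail=FALSE)  # ~0.015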

Example: Parameter estimates with p-values

get_pvalues(m.2)
##               Estimate Std..Error  t.value  p.normal p.normal.star
## (Intercept)   0.778525    0.02173 35.83141 0.000e+00           ***
## poly1         0.286315    0.03779  7.57682 3.531e-14           ***
## poly2        -0.050849    0.03319 -1.53217 1.255e-01              
## TPHigh        0.052961    0.03073  1.72357 8.478e-02             .
## poly1:TPHigh  0.001075    0.05344  0.02012 9.839e-01              
## poly2:TPHigh -0.116455    0.04693 -2.48121 1.309e-02             *
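
The p.normal column treats the t-values as z-scores (the normal approximation); an equivalent computation from the model summary:

coefs <- data.frame(coef(summary(m.2)))
coefs$p.normal <- 2*(1 - pnorm(abs(coefs$t.value)))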

Example: Plot the model fit

ggplot(WordLearn.gca, aes(Block, Accuracy, color=TP)) + 
  stat_summary(fun.data=mean_se, geom="pointrange") + 
  stat_summary(aes(y=fitted(m.2)), fun.y=mean, geom="line") +
  theme_bw() + expand_limits(y=c(0.5, 1)) + scale_x_continuous(breaks = 1:10)

Questions?

Random effects

Within-subject effects

Example: Target fixation in spoken word-to-picture matching (visual world paradigm, VWP)

# note: legend_positioning() appears to be a workshop helper function; the
# plain-ggplot2 equivalent is theme(legend.position=c(1,1), legend.justification=c(1,1))
ggplot(TargetFix, aes(Time, meanFix, color=Condition, fill=Condition)) +
  stat_summary(fun.y=mean, geom="line") +
  stat_summary(fun.data=mean_se, geom="ribbon", color=NA, alpha=0.3) +
  theme_bw() + expand_limits(y=c(0,1)) + legend_positioning(c(1,1)) +
  labs(y="Fixation Proportion", x="Time since word onset (ms)")

Within-subject effects: prep for GCA

TargetFix.gca <- code_poly(TargetFix, predictor="Time", poly.order=3)

str(TargetFix.gca)
## 'data.frame':    300 obs. of  11 variables:
##  $ Subject   : Factor w/ 10 levels "708","712","715",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ Time      : num  300 300 350 350 400 400 450 450 500 500 ...
##  $ timeBin   : num  1 1 2 2 3 3 4 4 5 5 ...
##  $ Condition : Factor w/ 2 levels "High","Low": 1 2 1 2 1 2 1 2 1 2 ...
##  $ meanFix   : num  0.1944 0.0286 0.25 0.1143 0.2778 ...
##  $ sumFix    : num  7 1 9 4 10 5 13 5 14 6 ...
##  $ N         : int  36 35 36 35 36 35 36 35 36 35 ...
##  $ Time.Index: num  1 1 2 2 3 3 4 4 5 5 ...
##  $ poly1     : num  -0.418 -0.418 -0.359 -0.359 -0.299 ...
##  $ poly2     : num  0.4723 0.4723 0.2699 0.2699 0.0986 ...
##  $ poly3     : num  -0.4563 -0.4563 -0.0652 -0.0652 0.1755 ...

Within-subject effects: fit full GCA model

m.full <- lmer(meanFix ~ (poly1+poly2+poly3)*Condition + #fixed effects
                 (poly1+poly2+poly3 | Subject) + #random effects of Subject
                 (poly1+poly2+poly3 | Subject:Condition), #random effects of Subj by Cond
               data=TargetFix.gca, REML=F)
get_pvalues(m.full)
##                      Estimate Std..Error   t.value  p.normal p.normal.star
## (Intercept)         0.4773228    0.01385 34.457777 0.000e+00           ***
## poly1               0.6385604    0.05994 10.654180 0.000e+00           ***
## poly2              -0.1095979    0.03849 -2.847573 4.405e-03            **
## poly3              -0.0932612    0.02330 -4.002200 6.276e-05           ***
## ConditionLow       -0.0581122    0.01879 -3.093227 1.980e-03            **
## poly1:ConditionLow  0.0003188    0.06579  0.004846 9.961e-01              
## poly2:ConditionLow  0.1635455    0.05393  3.032545 2.425e-03            **
## poly3:ConditionLow -0.0020869    0.02704 -0.077168 9.385e-01

Within-subject effects: random effects

head(ranef(m.full)$"Subject")
##     (Intercept)    poly1      poly2     poly3
## 708   -0.000158  0.01086 -0.0035599 -0.004931
## 712    0.011022  0.10692 -0.0093299 -0.036514
## 715    0.011375  0.11482 -0.0109608 -0.039651
## 720   -0.002071 -0.01300 -0.0003585  0.003742
## 722    0.013828  0.16224 -0.0200789 -0.058174
## 725   -0.017827 -0.20734  0.0253442  0.074200
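
The random effects for the other grouping factor are stored under its own name in the list returned by ranef():

head(ranef(m.full)$"Subject:Condition")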

Within-subject effects: random effects

# str(ranef(m.full))
VarCorr(m.full)
##  Groups            Name        Std.Dev. Corr             
##  Subject:Condition (Intercept) 0.0405                    
##                    poly1       0.1404   -0.43            
##                    poly2       0.1124   -0.33  0.72      
##                    poly3       0.0417    0.13 -0.49 -0.43
##  Subject           (Intercept) 0.0124                    
##                    poly1       0.1195    0.91            
##                    poly2       0.0165   -0.42 -0.76      
##                    poly3       0.0421   -0.85 -0.99  0.83
##  Residual                      0.0438
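
VarCorr() prints standard deviations by default; lme4's print method can show the variances instead:

print(VarCorr(m.full), comp="Variance")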

What is being estimated?

This is why degrees of freedom for parameter estimates are poorly defined in multilevel regression (MLR).

Alternative random effects structure

m.left <- lmer(meanFix ~ (poly1+poly2+poly3)*Condition + #fixed effects
                ((poly1+poly2+poly3)*Condition | Subject), #random effects
              data=TargetFix.gca, REML=F)
## Warning in commonArgs(par, fn, control, environment()): maxfun < 10 *
## length(par)^2 is not recommended.
get_pvalues(m.left)
##                      Estimate Std..Error   t.value  p.normal p.normal.star
## (Intercept)         0.4773228    0.01679 28.434031 0.000e+00           ***
## poly1               0.6385604    0.05146 12.409015 0.000e+00           ***
## poly2              -0.1095979    0.03717 -2.948228 3.196e-03            **
## poly3              -0.0932612    0.02062 -4.523841 6.073e-06           ***
## ConditionLow       -0.0581122    0.02110 -2.753958 5.888e-03            **
## poly1:ConditionLow  0.0003188    0.07477  0.004264 9.966e-01              
## poly2:ConditionLow  0.1635455    0.06274  2.606891 9.137e-03            **
## poly3:ConditionLow -0.0020869    0.03334 -0.062598 9.501e-01

Alternative random effects structure

# str(ranef(m.left))
# head(ranef(m.left)$"Subject")
VarCorr(m.left)
##  Groups   Name               Std.Dev. Corr                                     
##  Subject  (Intercept)        0.0519                                            
##           poly1              0.1570    0.18                                    
##           poly2              0.1094   -0.29  0.03                              
##           poly3              0.0490   -0.28 -0.37  0.63                        
##           ConditionLow       0.0649   -0.89 -0.13  0.49  0.34                  
##           poly1:ConditionLow 0.2285    0.38 -0.46 -0.62  0.10 -0.56            
##           poly2:ConditionLow 0.1889    0.20  0.08 -0.81 -0.22 -0.43  0.74      
##           poly3:ConditionLow 0.0862   -0.08 -0.40 -0.06 -0.47  0.06 -0.27 -0.43
##  Residual                    0.0430

This random effects structure makes fewer assumptions: it allows unequal variances across conditions and a more flexible covariance structure between the time terms and the Condition effects.

Alternative random effects structure requires more parameters

Convergence problems

Warning message: 
In (function (fn, par, lower = rep.int(-Inf, n), upper = rep.int(Inf, : 
    failure to converge in 10000 evaluations

Need to simplify random effects
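
Two common simplifications, sketched here (model names are illustrative and these are not the only options): drop the random-effect correlations with lme4's double-bar syntax, or remove the highest-order time term from the random effects only.

# option 1: zero out random-effect correlations (caveat: || does not
# fully separate terms involving factors such as Condition)
m.left2 <- lmer(meanFix ~ (poly1+poly2+poly3)*Condition +
                  ((poly1+poly2+poly3)*Condition || Subject),
                data=TargetFix.gca, REML=F)
# option 2: keep the cubic term in the fixed effects but drop it
# from the random effects
m.left3 <- lmer(meanFix ~ (poly1+poly2+poly3)*Condition +
                  ((poly1+poly2)*Condition | Subject),
                data=TargetFix.gca, REML=F)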

Participants as fixed vs. random effects

In general, participants should be treated as random effects.

This captures the typical assumption of random sampling from some population to which we wish to generalize.

Pooling

http://tjmahr.github.io/plotting-partial-pooling-in-mixed-effects-models/

Shrinkage

http://tjmahr.github.io/plotting-partial-pooling-in-mixed-effects-models/
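
A sketch of the three pooling strategies discussed in the linked post, applied to these data (model names are illustrative):

m.pooled  <- lm(meanFix ~ poly1, data=TargetFix.gca)          # complete pooling
m.nopool  <- lm(meanFix ~ poly1*Subject, data=TargetFix.gca)  # no pooling
m.partial <- lmer(meanFix ~ poly1 + (poly1 | Subject),
                  data=TargetFix.gca)  # partial pooling: estimates shrink toward the group mean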

Exceptions are possible: e.g., neurological/neuropsychological case studies where the goal is to characterize the pattern of performance for each participant, not generalize to a population.

Exercises

Exercise 2 (CP: d’ peak at category boundary): Compare categorical perception along spectral vs. temporal dimensions using second-order orthogonal polynomials.

Exercise 3 (Az): Use natural (not orthogonal) polynomials to analyze the decline in performance of 30 individuals with probable Alzheimer’s disease on three different kinds of tasks: Memory, complex activities of daily living (ADL), and simple ADL. (Why are natural polynomials more useful for these data?)

For each one: