Growth Curve Analysis, Part 2

Dan Mirman

22 May 2020

Modeling non-linear change over time

Function must be adequate to data

Two kinds of non-linearity

  1. Non-linear in variables (Time): \(Y_{ij} = \beta_{0i} + \beta_{1i} \cdot Time_{j} + \beta_{2i} \cdot Time^2_{j} + \epsilon_{ij}\)
  2. Non-linear in parameters (\(b, p, s, c\)): \(Y = \frac{p-b}{1+\exp(4 \cdot \frac{s}{p-b} \cdot (c-t))} + b\)
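
The second form can be written as a small R function (a sketch; the argument names follow the symbols in the equation: baseline \(b\), peak \(p\), slope \(s\), crossover point \(c\)):

```r
# Four-parameter logistic: rises from baseline b to peak p,
# with slope s, crossing the halfway point at t = c
logistic4 <- function(t, b, p, s, c) {
  (p - b) / (1 + exp(4 * s / (p - b) * (c - t))) + b
}

logistic4(5, b = 0.5, p = 1, s = 0.2, c = 5)  # at t = c: (p + b) / 2 = 0.75
```

Because the parameters sit inside `exp()`, no rearrangement makes \(Y\) a linear combination of them, so this function cannot be fit with `lm()` or `lmer()`; it requires nonlinear estimation (e.g., `nls()`).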

Example of lack of dynamic consistency

Logistic power peak function (Scheepers, Keller, & Lapata, 2008) fit to semantic competition data (Mirman & Magnuson, 2009).

Prediction: Two kinds

Fits: statistical models that describe the observed data

Forecasts: predictions about new, not-yet-observed data

Using higher-order polynomials

Natural vs. Orthogonal polynomials
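
The difference is easy to see with base R's `poly()` (a minimal sketch on a 10-point time vector, matching the 10 blocks in the example below):

```r
t <- 1:10
# Natural (raw) polynomial: Time and Time^2 are nearly collinear,
# which inflates standard errors and makes the terms hard to interpret
raw <- poly(t, 2, raw = TRUE)
cor(raw[, 1], raw[, 2])               # close to 1
# Orthogonal polynomial: the terms are uncorrelated by construction,
# so each one captures an independent component of the curve shape
orth <- poly(t, 2)
round(cor(orth[, 1], orth[, 2]), 12)  # 0
```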

Interpreting orthogonal polynomial terms

Intercept (\(\beta_0\)): Overall average

Linear (\(\beta_1\)): Overall slope

Quadratic (\(\beta_2\)): Centered rise and fall rate

Cubic, Quartic, … (\(\beta_3, \beta_4, ...\)): Inflection steepness
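
A quick way to see the "overall average" interpretation in action: with an orthogonal polynomial predictor, the model intercept equals the mean of the outcome, which is not true with natural polynomials (a minimal sketch on simulated data; the numbers are made up for illustration):

```r
set.seed(1)
t <- 1:10
y <- 0.5 + 0.04 * t - 0.002 * t^2 + rnorm(10, sd = 0.05)

m_orth <- lm(y ~ poly(t, 2))              # orthogonal polynomial
m_raw  <- lm(y ~ poly(t, 2, raw = TRUE))  # natural polynomial
unname(coef(m_orth)[1]) - mean(y)  # essentially 0: intercept = overall mean
unname(coef(m_raw)[1])             # predicted value at t = 0, not the mean
```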

Example

Effect of transitional probability on word learning

summary(WordLearnEx)
##     Subject       TP          Block         Accuracy    
##  244    : 10   Low :280   Min.   : 1.0   Min.   :0.000  
##  253    : 10   High:280   1st Qu.: 3.0   1st Qu.:0.667  
##  302    : 10              Median : 5.5   Median :0.833  
##  303    : 10              Mean   : 5.5   Mean   :0.805  
##  305    : 10              3rd Qu.: 8.0   3rd Qu.:1.000  
##  306    : 10              Max.   :10.0   Max.   :1.000  
##  (Other):500
ggplot(WordLearnEx, aes(Block, Accuracy, color=TP)) + 
  stat_summary(fun.data=mean_se, geom="pointrange") + 
  stat_summary(fun=mean, geom="line")

Example: Prepare for GCA

Create a 2nd-order orthogonal polynomial version of Block (`code_poly` adds a `Block.Index` column plus `poly1` and `poly2` predictor columns):

WordLearn.gca <- code_poly(WordLearnEx, predictor="Block", poly.order=2, orthogonal=TRUE)

summary(WordLearn.gca)
##     Subject       TP          Block         Accuracy      Block.Index  
##  244    : 10   Low :280   Min.   : 1.0   Min.   :0.000   Min.   : 1.0  
##  253    : 10   High:280   1st Qu.: 3.0   1st Qu.:0.667   1st Qu.: 3.0  
##  302    : 10              Median : 5.5   Median :0.833   Median : 5.5  
##  303    : 10              Mean   : 5.5   Mean   :0.805   Mean   : 5.5  
##  305    : 10              3rd Qu.: 8.0   3rd Qu.:1.000   3rd Qu.: 8.0  
##  306    : 10              Max.   :10.0   Max.   :1.000   Max.   :10.0  
##  (Other):500                                                           
##      poly1            poly2       
##  Min.   :-0.495   Min.   :-0.348  
##  1st Qu.:-0.275   1st Qu.:-0.261  
##  Median : 0.000   Median :-0.087  
##  Mean   : 0.000   Mean   : 0.000  
##  3rd Qu.: 0.275   3rd Qu.: 0.174  
##  Max.   : 0.495   Max.   : 0.522  
## 
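
As a sanity check, the `poly1` and `poly2` ranges in this summary are exactly what base R's `poly()` gives for a 10-level predictor (a sketch: `poly()` reproduces the same columns `code_poly` created above):

```r
op <- poly(1:10, 2)       # 2nd-order orthogonal polynomial over the 10 blocks
round(range(op[, 1]), 3)  # -0.495 0.495, matching the poly1 summary above
round(range(op[, 2]), 3)  # -0.348 0.522, matching the poly2 summary above
```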

Example: Fit the models

library(lme4)
library(lmerTest)
# fit base model
m.base <- lmer(Accuracy ~ (poly1+poly2) + (poly1+poly2 | Subject), 
               data=WordLearn.gca, REML=FALSE)
# add effect of TP on intercept
m.0 <- lmer(Accuracy ~ (poly1+poly2) + TP + (poly1+poly2 | Subject), 
            data=WordLearn.gca, REML=FALSE)
# add effect of TP on linear slope
m.1 <- lmer(Accuracy ~ (poly1+poly2) + TP + TP:poly1 + (poly1+poly2 | Subject), 
            data=WordLearn.gca, REML=FALSE)
# add effect of TP on quadratic term
m.2 <- lmer(Accuracy ~ (poly1+poly2)*TP + (poly1+poly2 | Subject), 
            data=WordLearn.gca, REML=FALSE)

Example: Compare the model fits

anova(m.base, m.0, m.1, m.2)
## Data: WordLearn.gca
## Models:
## m.base: Accuracy ~ (poly1 + poly2) + (poly1 + poly2 | Subject)
## m.0: Accuracy ~ (poly1 + poly2) + TP + (poly1 + poly2 | Subject)
## m.1: Accuracy ~ (poly1 + poly2) + TP + TP:poly1 + (poly1 + poly2 | 
## m.1:     Subject)
## m.2: Accuracy ~ (poly1 + poly2) * TP + (poly1 + poly2 | Subject)
##        npar  AIC  BIC logLik deviance Chisq Df Pr(>Chisq)  
## m.base   10 -331 -288    175     -351                      
## m.0      11 -330 -283    176     -352  1.55  1      0.213  
## m.1      12 -329 -277    176     -353  0.36  1      0.550  
## m.2      13 -333 -276    179     -359  5.95  1      0.015 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
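
For ML-fitted `lmer` models like these, the Chisq column in the `anova()` table is just the change in deviance (\(-2\) log-likelihood) between adjacent models. The mechanics can be sketched with built-in data, since the word-learning models themselves are not reproducible here (`cars` and a quadratic term stand in for the real comparison):

```r
# Likelihood-ratio test by hand: one added parameter, so df = 1
m_a <- lm(dist ~ speed, data = cars)
m_b <- lm(dist ~ poly(speed, 2), data = cars)
chisq <- as.numeric(2 * (logLik(m_b) - logLik(m_a)))
pchisq(chisq, df = 1, lower.tail = FALSE)  # likelihood-ratio test p-value
```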

Example: Inspect model

summary(m.2)
## Linear mixed model fit by maximum likelihood . t-tests use Satterthwaite's
##   method [lmerModLmerTest]
## Formula: Accuracy ~ (poly1 + poly2) * TP + (poly1 + poly2 | Subject)
##    Data: WordLearn.gca
## 
##      AIC      BIC   logLik deviance df.resid 
##   -332.6   -276.4    179.3   -358.6      547 
## 
## Scaled residuals: 
##    Min     1Q Median     3Q    Max 
## -3.618 -0.536  0.126  0.567  2.616 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr       
##  Subject  (Intercept) 0.01076  0.1037              
##           poly1       0.01542  0.1242   -0.33      
##           poly2       0.00628  0.0792   -0.28 -0.82
##  Residual             0.02456  0.1567              
## Number of obs: 560, groups:  Subject, 56
## 
## Fixed effects:
##              Estimate Std. Error       df t value Pr(>|t|)    
## (Intercept)   0.77853    0.02173 56.02039   35.83  < 2e-16 ***
## poly1         0.28632    0.03779 62.51319    7.58  2.1e-10 ***
## poly2        -0.05085    0.03319 93.21826   -1.53    0.129    
## TPHigh        0.05296    0.03073 56.02039    1.72    0.090 .  
## poly1:TPHigh  0.00108    0.05344 62.51319    0.02    0.984    
## poly2:TPHigh -0.11645    0.04693 93.21826   -2.48    0.015 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) poly1  poly2  TPHigh p1:TPH
## poly1       -0.183                            
## poly2       -0.114 -0.229                     
## TPHigh      -0.707  0.129  0.081              
## poly1:TPHgh  0.129 -0.707  0.162 -0.183       
## poly2:TPHgh  0.081  0.162 -0.707 -0.114 -0.229
## convergence code: 0
## boundary (singular) fit: see ?isSingular

Example: Plot the model fit

ggplot(WordLearn.gca, aes(Block, Accuracy, color=TP)) + 
  stat_summary(fun.data=mean_se, geom="pointrange") + 
  stat_summary(aes(y=fitted(m.2)), fun=mean, geom="line") +
  theme_bw() + expand_limits(y=c(0.5, 1)) + scale_x_continuous(breaks = 1:10)

Exercise 2

CP (d’ peak at category boundary): Compare categorical perception along spectral vs. temporal dimensions using second-order orthogonal polynomials.