Growth Curve Analysis, Part 2

Dan Mirman

28 November 2019

Modeling non-linear change over time

Function must be adequate to data

Two kinds of non-linearity

  1. Non-linear in variables (Time): \(Y_{ij} = \beta_{0i} + \beta_{1i} \cdot Time_{j} + \beta_{2i} \cdot Time^2_{j} + \epsilon_{ij}\)
  2. Non-linear in parameters (\(b, p, s, c\)): \(Y = \frac{p-b}{1+\exp\left(4 \cdot \frac{s}{p-b} \cdot (c-t)\right)} + b\)
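As a sketch, the curve in (2) can be written as a small R function, assuming the usual roles for the parameters (\(b\) = lower asymptote, \(p\) = peak, \(s\) = slope at the crossover, \(c\) = crossover time):

```r
# Four-parameter logistic from equation (2); parameter roles assumed:
# b = lower asymptote, p = peak, s = slope at crossover, c = crossover time
logistic4 <- function(t, b, p, s, c) {
  (p - b) / (1 + exp(4 * s / (p - b) * (c - t))) + b
}

# At t = c the curve is exactly halfway between b and p
logistic4(5, b = 0.5, p = 1, s = 0.3, c = 5)
## [1] 0.75
```

Note that \(s\), \(c\), and \(b\) appear inside the exponential, so no re-coding of the time variable can make this model linear in its parameters.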

Example of lack of dynamic consistency

Logistic power peak function (Scheepers, Keller, & Lapata, 2008) fit to semantic competition data (Mirman & Magnuson, 2009).

Prediction: Two kinds

Fits: Statistical models

Using higher-order polynomials

Natural vs. Orthogonal polynomials
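The practical difference can be seen directly with base R's `poly()`: natural (raw) polynomial terms of Time are highly correlated with each other, while orthogonal terms are uncorrelated by construction. A minimal sketch over 10 time points:

```r
t <- 1:10

# Natural (raw) polynomial: linear and quadratic terms are highly correlated
raw <- poly(t, degree = 2, raw = TRUE)
round(cor(raw[, 1], raw[, 2]), 3)
## [1] 0.975

# Orthogonal polynomial (poly()'s default): terms are uncorrelated by construction
orth <- poly(t, degree = 2)
round(cor(orth[, 1], orth[, 2]), 3)
## [1] 0
```

That collinearity is why estimates for natural polynomial terms shift when higher-order terms are added, while orthogonal-term estimates stay stable.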

Interpreting orthogonal polynomial terms

Intercept (\(\beta_0\)): Overall average


Linear (\(\beta_1\)): Overall slope

Quadratic (\(\beta_2\)): Centered rise and fall rate

Cubic, Quartic, … (\(\beta_3, \beta_4, ...\)): Inflection steepness
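Because orthogonal polynomial predictors are centered (each term has mean zero), the intercept estimates the overall average of the outcome across the full time window. A minimal sketch with `lm()` and simulated data:

```r
set.seed(1)
t <- 1:10
y <- 0.5 + 0.04 * t + rnorm(10, sd = 0.02)  # simulated accuracy-like data

# With orthogonal (centered) time terms, the intercept equals the overall mean
m <- lm(y ~ poly(t, 2))
all.equal(unname(coef(m)[1]), mean(y))
## [1] TRUE
```

With natural polynomials, by contrast, the intercept would estimate the outcome at Time = 0, which may fall outside the observed window.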

Example

Effect of transitional probability on word learning

summary(WordLearnEx)
##     Subject       TP          Block         Accuracy    
##  244    : 10   Low :280   Min.   : 1.0   Min.   :0.000  
##  253    : 10   High:280   1st Qu.: 3.0   1st Qu.:0.667  
##  302    : 10              Median : 5.5   Median :0.833  
##  303    : 10              Mean   : 5.5   Mean   :0.805  
##  305    : 10              3rd Qu.: 8.0   3rd Qu.:1.000  
##  306    : 10              Max.   :10.0   Max.   :1.000  
##  (Other):500
ggplot(WordLearnEx, aes(Block, Accuracy, color=TP)) + 
  stat_summary(fun.data=mean_se, geom="pointrange") + 
  stat_summary(fun.y=mean, geom="line")

Example: Prepare for GCA

Create 2nd-order orthogonal polynomial

WordLearn.gca <- code_poly(WordLearnEx, predictor="Block", poly.order=2, orthogonal=TRUE)

summary(WordLearn.gca)
##     Subject       TP          Block         Accuracy      Block.Index  
##  244    : 10   Low :280   Min.   : 1.0   Min.   :0.000   Min.   : 1.0  
##  253    : 10   High:280   1st Qu.: 3.0   1st Qu.:0.667   1st Qu.: 3.0  
##  302    : 10              Median : 5.5   Median :0.833   Median : 5.5  
##  303    : 10              Mean   : 5.5   Mean   :0.805   Mean   : 5.5  
##  305    : 10              3rd Qu.: 8.0   3rd Qu.:1.000   3rd Qu.: 8.0  
##  306    : 10              Max.   :10.0   Max.   :1.000   Max.   :10.0  
##  (Other):500                                                           
##      poly1            poly2       
##  Min.   :-0.495   Min.   :-0.348  
##  1st Qu.:-0.275   1st Qu.:-0.261  
##  Median : 0.000   Median :-0.087  
##  Mean   : 0.000   Mean   : 0.000  
##  3rd Qu.: 0.275   3rd Qu.: 0.174  
##  Max.   : 0.495   Max.   : 0.522  
## 

Example: Fit the models

library(lme4)
#fit base model
m.base <- lmer(Accuracy ~ (poly1+poly2) + (poly1+poly2 | Subject), 
               data=WordLearn.gca, REML=F)
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl =
## control$checkConv, : Model failed to converge with max|grad| = 0.00242054
## (tol = 0.002, component 1)
#add effect of TP on intercept 
m.0 <- lmer(Accuracy ~ (poly1+poly2) + TP + (poly1+poly2 | Subject), 
            data=WordLearn.gca, REML=F)
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl =
## control$checkConv, : Model failed to converge with max|grad| = 0.00573494
## (tol = 0.002, component 1)
#add effect on slope
m.1 <- lmer(Accuracy ~ (poly1+poly2) + TP + TP:poly1 + (poly1+poly2 | Subject), 
            data=WordLearn.gca, REML=F)
#add effect on quadratic
m.2 <- lmer(Accuracy ~ (poly1+poly2)*TP + (poly1+poly2 | Subject), 
            data=WordLearn.gca, REML=F)

Example: Compare the model fits

anova(m.base, m.0, m.1, m.2)
## Data: WordLearn.gca
## Models:
## m.base: Accuracy ~ (poly1 + poly2) + (poly1 + poly2 | Subject)
## m.0: Accuracy ~ (poly1 + poly2) + TP + (poly1 + poly2 | Subject)
## m.1: Accuracy ~ (poly1 + poly2) + TP + TP:poly1 + (poly1 + poly2 | 
## m.1:     Subject)
## m.2: Accuracy ~ (poly1 + poly2) * TP + (poly1 + poly2 | Subject)
##        Df  AIC  BIC logLik deviance Chisq Chi Df Pr(>Chisq)  
## m.base 10 -331 -288    175     -351                          
## m.0    11 -330 -283    176     -352  1.55      1      0.213  
## m.1    12 -329 -277    176     -353  0.36      1      0.550  
## m.2    13 -333 -276    179     -359  5.95      1      0.015 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
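`anova()` on nested `lmer()` fits is a likelihood-ratio test: each `Chisq` is twice the difference in log-likelihood between adjacent models, referred to a \(\chi^2\) distribution with df equal to the number of added parameters. For example, the p-value for the `m.2` comparison can be reproduced from the printed statistic:

```r
# Likelihood-ratio test p-value from the chi-square printed above
# (5.95 on 1 df: one parameter added, poly2:TP)
round(pchisq(5.95, df = 1, lower.tail = FALSE), 4)
## [1] 0.0147
```

Note that deviance and log-likelihood comparisons like this require models fit by maximum likelihood (`REML=F`), as was done above.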

Example: Parameter estimates with p-values

get_pvalues(m.2)
##               Estimate Std..Error  t.value  p.normal p.normal.star
## (Intercept)   0.778525    0.02173 35.82968 0.000e+00           ***
## poly1         0.286315    0.03779  7.57631 3.553e-14           ***
## poly2        -0.050849    0.03319 -1.53223 1.255e-01              
## TPHigh        0.052961    0.03073  1.72349 8.480e-02             .
## poly1:TPHigh  0.001075    0.05344  0.02012 9.839e-01              
## poly2:TPHigh -0.116455    0.04693 -2.48132 1.309e-02             *
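The `p.normal` column comes from the normal approximation: each t-value is treated as a z-score, which is reasonable with this many observations. For example, the `poly2:TPHigh` p-value can be reproduced by hand:

```r
# Normal-approximation p-value: treat the t-value as a z-score
t_val <- -2.48132   # poly2:TPHigh from the table above
round(2 * pnorm(abs(t_val), lower.tail = FALSE), 5)
## [1] 0.01309
```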

Example: Plot the model fit

ggplot(WordLearn.gca, aes(Block, Accuracy, color=TP)) + 
  stat_summary(fun.data=mean_se, geom="pointrange") + 
  stat_summary(aes(y=fitted(m.2)), fun.y=mean, geom="line") +
  theme_bw() + expand_limits(y=c(0.5, 1)) + scale_x_continuous(breaks = 1:10)

Exercise 2

CP (d’ peak at category boundary): Compare categorical perception along spectral vs. temporal dimensions using second-order orthogonal polynomials.