Dan Mirman
28 November 2019
Example: Target fixation in spoken word-to-picture matching (VWP)
ggplot(TargetFix, aes(Time, meanFix, color=Condition, fill=Condition)) +
stat_summary(fun.y=mean, geom="line") +
stat_summary(fun.data=mean_se, geom="ribbon", color=NA, alpha=0.3) +
theme_bw() + expand_limits(y=c(0,1)) +
theme(legend.position=c(1,1), legend.justification=c(1,1)) +
labs(y="Fixation Proportion", x="Time since word onset (ms)")
str(TargetFix)
## 'data.frame': 300 obs. of 11 variables:
## $ Subject : Factor w/ 10 levels "708","712","715",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ Time : num 300 300 350 350 400 400 450 450 500 500 ...
## $ timeBin : num 1 1 2 2 3 3 4 4 5 5 ...
## $ Condition : Factor w/ 2 levels "High","Low": 1 2 1 2 1 2 1 2 1 2 ...
## $ meanFix : num 0.1944 0.0286 0.25 0.1143 0.2778 ...
## $ sumFix : num 7 1 9 4 10 5 13 5 14 6 ...
## $ N : int 36 35 36 35 36 35 36 35 36 35 ...
## $ Time.Index: num 1 1 2 2 3 3 4 4 5 5 ...
## $ poly1 : num -0.418 -0.418 -0.359 -0.359 -0.299 ...
## $ poly2 : num 0.4723 0.4723 0.2699 0.2699 0.0986 ...
## $ poly3 : num -0.4563 -0.4563 -0.0652 -0.0652 0.1755 ...
m.full <- lmer(meanFix ~ (poly1+poly2+poly3)*Condition + #fixed effects
(poly1+poly2+poly3 | Subject) + #random effects of Subject
(poly1+poly2+poly3 | Subject:Condition), #random effects of Subj by Cond
data=TargetFix.gca, REML=FALSE)
## boundary (singular) fit: see ?isSingular
## Estimate Std..Error t.value p.normal p.normal.star
## (Intercept) 0.4773228 0.01385 34.467183 0.000e+00 ***
## poly1 0.6385604 0.05990 10.660663 0.000e+00 ***
## poly2 -0.1095979 0.03850 -2.847031 4.413e-03 **
## poly3 -0.0932612 0.02330 -4.002883 6.258e-05 ***
## ConditionLow -0.0581122 0.01879 -3.092739 1.983e-03 **
## poly1:ConditionLow 0.0003188 0.06578 0.004847 9.961e-01
## poly2:ConditionLow 0.1635455 0.05393 3.032282 2.427e-03 **
## poly3:ConditionLow -0.0020869 0.02705 -0.077136 9.385e-01
## (Intercept) poly1 poly2 poly3
## 708 -0.0001059 0.0108 -0.0035370 -0.004883
## 712 0.0109835 0.1068 -0.0094313 -0.036512
## 715 0.0113630 0.1147 -0.0110401 -0.039625
## 720 -0.0020526 -0.0130 -0.0003739 0.003741
## 722 0.0138860 0.1620 -0.0201892 -0.058087
## 725 -0.0179045 -0.2070 0.0254648 0.074079
## (Intercept) poly1 poly2 poly3
## 708:High 0.012228 -0.13114 -0.12988 0.01513
## 708:Low -0.061265 0.17044 0.06224 0.01229
## 712:High 0.021266 0.08300 0.02812 0.02003
## 712:Low -0.014420 0.04493 0.03263 -0.01733
## 715:High 0.012373 0.05982 0.05525 -0.02503
## 715:Low -0.008674 0.10608 0.13040 -0.03952
## Groups Name Std.Dev. Corr
## Subject:Condition (Intercept) 0.0405
## poly1 0.1404 -0.43
## poly2 0.1124 -0.33 0.71
## poly3 0.0418 0.13 -0.49 -0.43
## Subject (Intercept) 0.0124
## poly1 0.1193 0.91
## poly2 0.0166 -0.43 -0.76
## poly3 0.0421 -0.86 -0.99 0.83
## Residual 0.0438
What is being estimated? Fixed effects, random-effect variances and correlations, and the individual random effects themselves.
This is why degrees of freedom for parameter estimates are poorly defined in multilevel regression (MLR).
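Each of those estimated pieces can be pulled out of the fitted model directly; a minimal sketch using standard lme4 accessors (assuming `m.full` from above):

```r
library(lme4)

fixef(m.full)            # fixed-effect estimates (the regression coefficients)
ranef(m.full)$Subject    # predicted random effects (BLUPs) for each subject
VarCorr(m.full)          # random-effect standard deviations and correlations
sigma(m.full)            # residual standard deviation
```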
m.left <- lmer(meanFix ~ (poly1+poly2+poly3)*Condition + #fixed effects
((poly1+poly2+poly3)*Condition | Subject), #random effects
data=TargetFix.gca, REML=FALSE)
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl =
## control$checkConv, : Model failed to converge with max|grad| = 0.00930636
## (tol = 0.002, component 1)
## Estimate Std..Error t.value p.normal p.normal.star
## (Intercept) 0.4773228 0.01678 28.440232 0.000e+00 ***
## poly1 0.6385604 0.05142 12.419150 0.000e+00 ***
## poly2 -0.1095979 0.03720 -2.946310 3.216e-03 **
## poly3 -0.0932612 0.02061 -4.524719 6.048e-06 ***
## ConditionLow -0.0581122 0.02110 -2.754137 5.885e-03 **
## poly1:ConditionLow 0.0003188 0.07483 0.004261 9.966e-01
## poly2:ConditionLow 0.1635455 0.06281 2.603953 9.216e-03 **
## poly3:ConditionLow -0.0020869 0.03328 -0.062709 9.500e-01
## Groups Name Std.Dev. Corr
## Subject (Intercept) 0.0519
## poly1 0.1568 0.18
## poly2 0.1095 -0.29 0.02
## poly3 0.0490 -0.28 -0.37 0.63
## ConditionLow 0.0649 -0.89 -0.13 0.49 0.34
## poly1:ConditionLow 0.2287 0.38 -0.46 -0.63 0.10 -0.56
## poly2:ConditionLow 0.1891 0.20 0.08 -0.82 -0.22 -0.43 0.74
## poly3:ConditionLow 0.0859 -0.08 -0.40 -0.06 -0.47 0.06 -0.27 -0.43
## Residual 0.0430
This random effect structure makes fewer assumptions, but it requires estimating many more parameters, and the model can fail to converge:
## Warning message:
## In (function (fn, par, lower = rep.int(-Inf, n), upper = rep.int(Inf, :
##   failure to converge in 10000 evaluations
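Before simplifying, it can be worth checking whether a convergence warning is benign; a sketch using lme4's control options (the optimizer name and iteration limit here are just one reasonable choice):

```r
library(lme4)

# refit with a different optimizer and a higher evaluation limit
m.left2 <- update(m.left,
                  control = lmerControl(optimizer = "bobyqa",
                                        optCtrl = list(maxfun = 1e5)))

# near-identical log-likelihoods suggest the warning was benign
logLik(m.left)
logLik(m.left2)
```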
Need to simplify the random effects, e.g., by dropping higher-order terms or the correlations among them:
Outcome ~ (poly1+poly2+poly3)*Condition + (poly1+poly2+poly3 | Subject)
Outcome ~ (poly1+poly2+poly3)*Condition + (poly1+poly2 | Subject)
Outcome ~ (poly1+poly2+poly3)*Condition + (1 | Subject) +
(0+poly1 | Subject) + (0+poly2 | Subject) + (0+poly3 | Subject)
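The last option (uncorrelated random effects) can also be written with lme4's double-bar shorthand, which expands to the same independent terms when the predictors are numeric, as the orthogonal polynomials are here:

```r
library(lme4)

# zero-correlation-parameter model: same random slopes, no correlations
m.zcp <- lmer(meanFix ~ (poly1+poly2+poly3)*Condition +
                (poly1+poly2+poly3 || Subject),
              data=TargetFix.gca, REML=FALSE)
```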
In general, participants should be treated as random effects.
This captures the typical assumption of random sampling from some population to which we wish to generalize.
Pooling
Shrinkage
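Shrinkage can be seen directly by comparing per-subject estimates from separate regressions against the partially pooled estimates from a mixed model; a sketch using lme4's built-in sleepstudy data (not the TargetFix data):

```r
library(lme4)

# no pooling: a separate regression for each subject
m.sep <- lmList(Reaction ~ Days | Subject, data = sleepstudy)

# partial pooling: subjects as random effects
m.mix <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

# the mixed-model estimates are pulled ("shrunk") toward the fixed-effect mean
head(coef(m.sep))
head(coef(m.mix)$Subject)
```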
Exceptions are possible: e.g., neurological/neuropsychological case studies where the goal is to characterize the pattern of performance for each participant, not generalize to a population.
Az: Use natural (not orthogonal) polynomials to analyze decline in performance of 30 individuals with probable Alzheimer’s disease on three different kinds of tasks: Memory, complex ADL (activities of daily living), and simple ADL.
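A starting point for that exercise might look like the following; the column names (`Time`, `Task`, `Performance`) are assumptions about the Az data, so adjust them to match its actual structure:

```r
library(lme4)

# natural (raw) polynomial time terms keep the intercept interpretable
# as estimated performance at Time = 0 (column names are assumed)
m.az <- lmer(Performance ~ (Time + I(Time^2)) * Task +
               (Time + I(Time^2) | Subject),
             data = Az, REML = FALSE)
summary(m.az)
```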