``[1] "Created: Wed Apr  1 16:44:31 2015"``

See the introduction for an overview.

``````library(lme4)
library(ggplot2)
options(digits=5,show.signif.stars=FALSE)``````

Load up and look at the data:

``````data(pulp, package="faraway")
pulp``````
``````   bright operator
1    59.8        a
2    60.0        a
3    60.8        a
4    60.8        a
5    59.8        a
6    59.8        b
7    60.2        b
8    60.4        b
9    59.9        b
10   60.0        b
11   60.7        c
12   60.7        c
13   60.5        c
14   60.9        c
15   60.3        c
16   61.0        d
17   60.8        d
18   60.6        d
19   60.5        d
20   60.5        d``````
``ggplot(pulp, aes(x=operator, y=bright))+geom_point(position = position_jitter(width=0.1, height=0.0))``

## Model

Fit a model with a fixed intercept and random effect for the operators:

``````mmod <- lmer(bright ~ 1+(1|operator), pulp)
summary(mmod)``````
``````Linear mixed model fit by REML ['lmerMod']
Formula: bright ~ 1 + (1 | operator)
Data: pulp

REML criterion at convergence: 18.6

Scaled residuals:
Min     1Q Median     3Q    Max
-1.467 -0.759 -0.124  0.628  1.601

Random effects:
Groups   Name        Variance Std.Dev.
operator (Intercept) 0.0681   0.261
Residual             0.1062   0.326
Number of obs: 20, groups:  operator, 4

Fixed effects:
Estimate Std. Error t value
(Intercept)   60.400      0.149     404``````

## Fixed Effects

There is only an intercept term, which we are not normally interested in testing. We can use a normal approximation to the t-statistic of 404 given in the output to test the hypothesis that the intercept is zero. Clearly the intercept is very far from zero.
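As a quick check, the Wald statistic and a normal-approximation confidence interval can be reproduced directly from the estimate and standard error copied from the summary output above (a sketch; small differences from the reported t value are due to rounding in the printed output):

```r
# Normal approximation using the estimate and SE from the summary above
est <- 60.400
se  <- 0.149
z   <- est / se                      # close to the reported t value of 404
2 * pnorm(-abs(z))                   # p-value: effectively zero
est + c(-1, 1) * qnorm(0.975) * se   # Wald 95% interval for the intercept
```

The Wald interval agrees closely with the profile-likelihood interval for the intercept shown below.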

We can produce confidence intervals using the profile likelihood (which is based on the likelihood ratio test):

``confint(mmod)``
``````               2.5 %   97.5 %
.sig01       0.00000  0.61790
.sigma       0.23891  0.48218
(Intercept) 60.07130 60.72870``````

This also produces confidence intervals for the two random effect parameters.
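`confint` for `lmer` fits also supports other interval methods; a sketch using `mmod` as fitted above (the `method` and `nsim` arguments belong to lme4's `confint.merMod`, and the bootstrap results vary from run to run):

```r
# Alternatives to the default profile-likelihood intervals
confint(mmod, method = "Wald")               # normal approximation; NA for the variance parameters
confint(mmod, method = "boot", nsim = 1000)  # parametric bootstrap (randomized)
```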

## Random Effects

The `RLRsim` package offers a convenient way of using the parametric bootstrap method:

``````library(RLRsim)
smod <- lmer(bright ~ 1+(1|operator), pulp, REML=FALSE)
nullmod <- lm(bright ~ 1, pulp)
exactLRT(smod, nullmod)``````
``````
simulated finite sample distribution of LRT. (p-value based on
10000 simulated values)

data:
LRT = 2.5684, p-value = 0.0232``````

We see that the random effect term, `operator`, is statistically significant.
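The same parametric bootstrap can be coded by hand: simulate responses under the null model, refit both models, and compare the observed LRT (the 2.5684 above) to the simulated null distribution. This is a sketch; the seed and number of simulations are arbitrary choices, and `lmer` may warn about boundary (singular) fits on some simulated datasets:

```r
# Hand-rolled parametric bootstrap of the LRT (sketch)
set.seed(123)                          # arbitrary seed for reproducibility
lrtobs <- as.numeric(2 * (logLik(smod) - logLik(nullmod)))
lrtsim <- numeric(1000)
for (i in 1:1000) {
  y <- unlist(simulate(nullmod))       # generate data under the null model
  balt  <- lmer(y ~ 1 + (1 | operator), pulp, REML = FALSE)
  bnull <- lm(y ~ 1, pulp)
  lrtsim[i] <- as.numeric(2 * (logLik(balt) - logLik(bnull)))
}
mean(lrtsim > lrtobs)                  # bootstrap p-value, comparable to exactLRT's
```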

The `lmerTest` package also allows a test on the random effect term:

``````library(lmerTest)
smod <- lmer(bright ~ 1+(1|operator), pulp, REML=FALSE)
rand(smod)``````
``````Analysis of Random effects Table:
Chi.sq Chi.DF p.value
operator   3.47      1    0.06``````

This test uses twice the difference in the log-likelihoods as a test statistic with a chi-squared null distribution to generate the p-value. As explained in the text, this test is not accurate because the null hypothesis, that the variance is zero, lies on the boundary of the parameter space. This makes the asymptotic chi-squared approximation invalid.
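A common remedy is to treat the null distribution of the LRT as a 50:50 mixture of a point mass at zero and a chi-squared(1), which amounts to halving the chi-squared p-value (the statistic 3.47 is taken from the `lmerTest` output above):

```r
# Halve the chi-squared(1) tail probability to adjust for the boundary
0.5 * pchisq(3.47, df = 1, lower.tail = FALSE)   # about 0.031
```

This adjusted p-value is closer to the parametric bootstrap result than the unadjusted 0.06.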

## MCMCglmm

We can also take a Bayesian approach to analyzing this data using the `MCMCglmm` package.

We fit the model using all the default settings, which involve almost uninformative priors, 10,000 iterations after burn-in and sampling every 10th value:

``````library(MCMCglmm)
bmod <- MCMCglmm(bright ~ 1, random= ~operator, data=pulp, verbose=FALSE)``````

Before drawing any conclusions, we need to verify that the MCMC has converged by checking for stability in the iterations:

First check the intercept term:

``xyplot(bmod$Sol)``