The measurement error models in inlamemi are hierarchical models with three or more sub-models. To construct these kinds of models in R-INLA, the structure of the data plays an important role (see for instance chapter 6.4 in “Bayesian Inference with INLA” by Virgilio Gómez-Rubio).

The data for each of the sub-models is organized and stacked together in a specific way, and this structure, together with the formula, tells R-INLA how to construct the model. As a user, you do not need to understand these structures, but this vignette is included nonetheless to give some more insight for those who are interested.

Defining the model: formula + structured data stacks

f_moi <- y ~ x + z
f_imp <- x ~ z

When using the inlamemi package, the user specifies two formulas: one for the main model of interest, and one for the imputation model. But these are just two of the sub-models; there is also at least one measurement error model under the hood. All of these sub-models need to be communicated to the inla() function, which only takes a single formula argument. So all the formula terms are simply added together into one big formula, and in order to tell inla() which terms apply to which sub-model, we supply the data in structured matrices, where each layer of the matrix shows which terms are “activated” in that layer.
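To make this mechanism concrete, here is a minimal, self-contained sketch of how R-INLA handles several likelihoods in general (independent of inlamemi): the response is a matrix with one column per sub-model, and a covariate that should only be “active” in one sub-model is set to NA in the rows belonging to the other sub-models. All names and the simulated data below are made up purely for illustration and do not correspond to the actual stacks built by inlamemi.

library(INLA)
set.seed(1)

# Toy data: n observations for each of two Gaussian sub-models
n <- 50
z <- rnorm(n)
y1 <- 1 + 2 * z + rnorm(n)     # response of sub-model 1
y2 <- -1 + 0.5 * z + rnorm(n)  # response of sub-model 2

# Matrix response: column 1 is used for the first n rows,
# column 2 for the last n rows, and NA everywhere else
Y <- matrix(NA, nrow = 2 * n, ncol = 2)
Y[1:n, 1] <- y1
Y[n + 1:n, 2] <- y2

# Covariates are stacked the same way: an NA entry means
# "this term contributes nothing in this row"
toy_data <- list(Y = Y,
                 intercept1 = c(rep(1, n), rep(NA, n)),
                 intercept2 = c(rep(NA, n), rep(1, n)),
                 z1 = c(z, rep(NA, n)),
                 z2 = c(rep(NA, n), z))

toy_fit <- inla(Y ~ -1 + intercept1 + intercept2 + z1 + z2,
                family = c("gaussian", "gaussian"),
                data = toy_data)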

In practice, this can be done either through manually constructed matrices (see for instance https://emmaskarstein.github.io/Missing-data-and-measurement-error/simulation_example.html for a detailed explanation of that approach), or by using the inla.stack function to specify the components for each sub-model separately and then stacking them together. The inlamemi package structures the data using inla.stack objects. The outcome is the same, but it makes the construction a bit more generalizable. The stack construction is done in the function make_inlamemi_stacks, which is then called by fit_inlamemi, so this is not a function the user would typically need to interact with.
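As a hedged sketch of the same idea expressed through stacks (again using the made-up toy objects from the sketch above, not the stacks that make_inlamemi_stacks actually builds), each sub-model gets its own inla.stack with a tag, and the stacks are then combined with inla.stack.join:

# One stack per sub-model; the response matrix has NA in the
# columns that do not belong to that sub-model
stk1 <- inla.stack(data = list(Y = cbind(y1, NA)),
                   A = list(1),
                   effects = list(data.frame(intercept1 = 1, z1 = z)),
                   tag = "sub1")
stk2 <- inla.stack(data = list(Y = cbind(NA, y2)),
                   A = list(1),
                   effects = list(data.frame(intercept2 = 1, z2 = z)),
                   tag = "sub2")

# Joining fills in NA for effects that are absent from a stack
stk_full <- inla.stack.join(stk1, stk2)

toy_fit2 <- inla(Y ~ -1 + intercept1 + intercept2 + z1 + z2,
                 family = c("gaussian", "gaussian"),
                 data = inla.stack.data(stk_full),
                 control.predictor = list(A = inla.stack.A(stk_full)))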

The function show_data_structure converts the stack object to LaTeX matrices, and only shows the first and last row for each sub-model, making it easier to get a clear overview of the structure of the model. Now, we will take a look at these matrices for two different models: one with Berkson measurement error, and one without.

Structure for a classical measurement error model

All models fit with inlamemi contain a classical measurement error layer: even if there is no classical measurement error, this layer is needed for technical reasons in order to do the covariate imputation. In cases where there is no classical ME, we scale the classical error precision to be very large, thus effectively “turning off” the error adjustment while keeping the imputation model.
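As a rough illustration of what “a very large precision” means in practice: in R-INLA, the precision of a Gaussian level can be fixed at a high value through control.family, so that this level adds essentially no extra noise. The snippet below is a generic sketch of that mechanism, not necessarily the exact implementation used inside inlamemi.

# Fix the log-precision of one Gaussian likelihood at a large value,
# so its precision is exp(15) and is not estimated
high_prec_family <- list(
  hyper = list(prec = list(initial = 15,  # log-precision
                           fixed = TRUE))
)
# With several likelihoods, control.family takes one such list per
# likelihood (hypothetical placeholders used for the other levels):
# inla(..., control.family = list(moi_spec, high_prec_family, imp_spec))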

In this example, our hierarchical model consists of the main model of interest:
$$\boldsymbol{y} = \beta_0 + \beta_x \boldsymbol{x} + \beta_z \boldsymbol{z} + \boldsymbol{\varepsilon} \ , \quad \boldsymbol{\varepsilon} \sim N(\boldsymbol{0}, \tau_y\boldsymbol{I}) \ ,$$
the classical error model:
$$\boldsymbol{w} = \boldsymbol{x} + \boldsymbol{u}_c \ , \quad \boldsymbol{u}_c \sim N(\boldsymbol{0}, \tau_{u_c}\boldsymbol{I}) \ ,$$
and the imputation model:
$$\boldsymbol{x} = \alpha_0 + \alpha_z \boldsymbol{z} + \boldsymbol{\varepsilon}_x \ , \quad \boldsymbol{\varepsilon}_x \sim N(\boldsymbol{0}, \tau_x\boldsymbol{I}) \ .$$

These need to be rewritten for R-INLA such that there are no latent effects on the left hand side:

\begin{align}
\boldsymbol{y} &= \beta_0 + \beta_x \boldsymbol{x} + \beta_z \boldsymbol{z} + \boldsymbol{\varepsilon} \ , \quad &\boldsymbol{\varepsilon} \sim N(\boldsymbol{0}, \tau_y\boldsymbol{I}) \ , \\
\boldsymbol{w} &= \boldsymbol{x} + \boldsymbol{u}_c \ , \quad &\boldsymbol{u}_c \sim N(\boldsymbol{0}, \tau_{u_c}\boldsymbol{I}) \ , \\
\boldsymbol{0} &= -\boldsymbol{x} + \alpha_0 + \alpha_z \boldsymbol{z} + \boldsymbol{\varepsilon}_x \ , \quad &\boldsymbol{\varepsilon}_x \sim N(\boldsymbol{0}, \tau_x\boldsymbol{I}) \ .
\end{align}

We can construct the necessary stacks for this model using the make_inlamemi_stacks function:

classical_stack <- make_inlamemi_stacks(data = simple_data,
                                        formula_moi = f_moi,
                                        formula_imp = f_imp,
                                        error_type = "classical")

and then we can visualize the data in these stacks with the show_data_structure function:

classical_summary <- show_data_structure(classical_stack)

The LaTeX code is stored in classical_summary$matrix_string. If you want to display the matrices in an rmarkdown document, you can use the cat function, and just remember to set results = 'asis' as the chunk option.

cat(classical_summary$matrix_string)

$$
\underbrace{\begin{bmatrix} 10.81 & \texttt{NA} & \texttt{NA}\\ \vdots & \vdots & \vdots\\ 6.18 & \texttt{NA} & \texttt{NA}\\ \texttt{NA} & 4.98 & \texttt{NA}\\ \vdots & \vdots & \vdots\\ \texttt{NA} & 1.46 & \texttt{NA}\\ \texttt{NA} & \texttt{NA} & 0\\ \vdots & \vdots & \vdots\\ \texttt{NA} & \texttt{NA} & 0\\ \end{bmatrix}}_{\texttt{Y}} =
\beta_{0}\underbrace{\begin{bmatrix} 1\\ \vdots\\ 1\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{beta.0}} +
\beta_{x}\underbrace{\begin{bmatrix} 1\\ \vdots\\ 1000\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{beta.x}} +
\beta_{z}\underbrace{\begin{bmatrix} 0.98\\ \vdots\\ 0.77\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{beta.z}} +
\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ 1\\ \vdots\\ 1000\\ -1\\ \vdots\\ -1000\\ \end{bmatrix}}_{\texttt{id.x}} +
\alpha_{x,0}\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ 1\\ \vdots\\ 1\\ \end{bmatrix}}_{\texttt{alpha.x.0}} +
\alpha_{x,z}\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ 0.98\\ \vdots\\ 0.77\\ \end{bmatrix}}_{\texttt{alpha.x.z}}
$$

Structure for Berkson and classical measurement error model

Next, we consider a case where we have Berkson measurement error, along with either classical ME, missingness, or both in the same covariate. Which of these we have makes no difference for this example, as it only affects the scaling of the measurement error precision.

In this example, our hierarchical model consists of four levels: the main model of interest:
$$\boldsymbol{y} = \beta_0 + \beta_x \boldsymbol{x} + \beta_z \boldsymbol{z} + \boldsymbol{\varepsilon} \ , \quad \boldsymbol{\varepsilon} \sim N(\boldsymbol{0}, \tau_y\boldsymbol{I}) \ ,$$
the Berkson error model:
$$\boldsymbol{x} = \boldsymbol{r} + \boldsymbol{u}_b \ , \quad \boldsymbol{u}_b \sim N(\boldsymbol{0}, \tau_{u_b}\boldsymbol{I}) \ ,$$
the classical error model:
$$\boldsymbol{w} = \boldsymbol{r} + \boldsymbol{u}_c \ , \quad \boldsymbol{u}_c \sim N(\boldsymbol{0}, \tau_{u_c}\boldsymbol{I}) \ ,$$
and the imputation model:
$$\boldsymbol{r} = \alpha_0 + \alpha_z \boldsymbol{z} + \boldsymbol{\varepsilon}_r \ , \quad \boldsymbol{\varepsilon}_r \sim N(\boldsymbol{0}, \tau_r\boldsymbol{I}) \ .$$

Notice that we now have a new latent variable $\boldsymbol{r}$, which serves as a link between the sub-models and corresponds to the variable that is subject to Berkson error but does not yet have the classical error added.

Rewritten for R-INLA:

\begin{align}
\boldsymbol{y} &= \beta_0 + \beta_x \boldsymbol{x} + \beta_z \boldsymbol{z} + \boldsymbol{\varepsilon} \ , \quad &\boldsymbol{\varepsilon} &\sim N(\boldsymbol{0}, \tau_y\boldsymbol{I}) \ , \\
\boldsymbol{0} &= -\boldsymbol{x} + \boldsymbol{r} + \boldsymbol{u}_b \ , \quad &\boldsymbol{u}_b &\sim N(\boldsymbol{0}, \tau_{u_b}\boldsymbol{I}) \ , \\
\boldsymbol{w} &= \boldsymbol{r} + \boldsymbol{u}_c \ , \quad &\boldsymbol{u}_c &\sim N(\boldsymbol{0}, \tau_{u_c}\boldsymbol{I}) \ , \\
\boldsymbol{0} &= -\boldsymbol{r} + \alpha_0 + \alpha_z \boldsymbol{z} + \boldsymbol{\varepsilon}_r \ , \quad &\boldsymbol{\varepsilon}_r &\sim N(\boldsymbol{0}, \tau_r\boldsymbol{I}) \ .
\end{align}

Just like above, we construct the stacks:

berkson_stack <- make_inlamemi_stacks(data = simple_data,
                                      formula_moi = f_moi,
                                      formula_imp = f_imp,
                                      error_type = c("classical", "berkson", "missing"))

and then visualize the resulting matrices:

berkson_summary <- show_data_structure(berkson_stack)
cat(berkson_summary$matrix_string)

$$
\underbrace{\begin{bmatrix} 10.81 & \texttt{NA} & \texttt{NA} & \texttt{NA}\\ \vdots & \vdots & \vdots & \vdots\\ 6.18 & \texttt{NA} & \texttt{NA} & \texttt{NA}\\ \texttt{NA} & 0 & \texttt{NA} & \texttt{NA}\\ \vdots & \vdots & \vdots & \vdots\\ \texttt{NA} & 0 & \texttt{NA} & \texttt{NA}\\ \texttt{NA} & \texttt{NA} & 4.98 & \texttt{NA}\\ \vdots & \vdots & \vdots & \vdots\\ \texttt{NA} & \texttt{NA} & 1.46 & \texttt{NA}\\ \texttt{NA} & \texttt{NA} & \texttt{NA} & 0\\ \vdots & \vdots & \vdots & \vdots\\ \texttt{NA} & \texttt{NA} & \texttt{NA} & 0\\ \end{bmatrix}}_{\texttt{Y}} =
\beta_{0}\underbrace{\begin{bmatrix} 1\\ \vdots\\ 1\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{beta.0}} +
\beta_{x}\underbrace{\begin{bmatrix} 1\\ \vdots\\ 1000\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{beta.x}} +
\beta_{z}\underbrace{\begin{bmatrix} 0.98\\ \vdots\\ 0.77\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{beta.z}} +
\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ -1\\ \vdots\\ -1000\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{id.x}} +
\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ 1\\ \vdots\\ 1000\\ 1\\ \vdots\\ 1000\\ -1\\ \vdots\\ -1000\\ \end{bmatrix}}_{\texttt{id.r}} +
\alpha_{x,0}\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ 1\\ \vdots\\ 1\\ \end{bmatrix}}_{\texttt{alpha.x.0}} +
\alpha_{x,z}\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ 0.98\\ \vdots\\ 0.77\\ \end{bmatrix}}_{\texttt{alpha.x.z}}
$$

As you can see, the only changes are that another model level has been added, which leads to an additional column in the response matrix, and that there is a new column for the latent effect $\boldsymbol{r}$.

Accessing the stacks from the model object

The data stacks that are generated by make_inlamemi_stacks inside fit_inlamemi are also returned with the inlamemi model object, so you don’t need to run make_inlamemi_stacks yourself. This is how to access the stacks from the model object:

inlamemi_model <- fit_inlamemi(data = simple_data,
                               formula_moi = f_moi,
                               formula_imp = f_imp,
                               family_moi = "gaussian",
                               error_type = c("berkson", "classical"),
                               prior.prec.moi = c(0.5, 0.5),
                               prior.prec.berkson = c(10, 9),
                               prior.prec.classical = c(10, 9),
                               prior.prec.imp = c(0.5, 0.5),
                               initial.prec.moi = 1,
                               initial.prec.berkson = 1,
                               initial.prec.classical = 1,
                               initial.prec.imp = 1)

cat(show_data_structure(inlamemi_model$stack_data)$matrix_string)

$$
\underbrace{\begin{bmatrix} 10.81 & \texttt{NA} & \texttt{NA} & \texttt{NA}\\ \vdots & \vdots & \vdots & \vdots\\ 6.18 & \texttt{NA} & \texttt{NA} & \texttt{NA}\\ \texttt{NA} & 0 & \texttt{NA} & \texttt{NA}\\ \vdots & \vdots & \vdots & \vdots\\ \texttt{NA} & 0 & \texttt{NA} & \texttt{NA}\\ \texttt{NA} & \texttt{NA} & 4.98 & \texttt{NA}\\ \vdots & \vdots & \vdots & \vdots\\ \texttt{NA} & \texttt{NA} & 1.46 & \texttt{NA}\\ \texttt{NA} & \texttt{NA} & \texttt{NA} & 0\\ \vdots & \vdots & \vdots & \vdots\\ \texttt{NA} & \texttt{NA} & \texttt{NA} & 0\\ \end{bmatrix}}_{\texttt{Y}} =
\beta_{0}\underbrace{\begin{bmatrix} 1\\ \vdots\\ 1\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{beta.0}} +
\beta_{x}\underbrace{\begin{bmatrix} 1\\ \vdots\\ 1000\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{beta.x}} +
\beta_{z}\underbrace{\begin{bmatrix} 0.98\\ \vdots\\ 0.77\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{beta.z}} +
\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ -1\\ \vdots\\ -1000\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{id.x}} +
\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ 1\\ \vdots\\ 1000\\ 1\\ \vdots\\ 1000\\ -1\\ \vdots\\ -1000\\ \end{bmatrix}}_{\texttt{id.r}} +
\alpha_{x,0}\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ 1\\ \vdots\\ 1\\ \end{bmatrix}}_{\texttt{alpha.x.0}} +
\alpha_{x,z}\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ 0.98\\ \vdots\\ 0.77\\ \end{bmatrix}}_{\texttt{alpha.x.z}}
$$
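If you want to look at the raw stacked data rather than the LaTeX summary, the returned stack can also be unpacked with R-INLA’s own helper functions. This sketch assumes the object is a standard inla.stack and that the response is stored under the name Y, as the matrix labels above suggest:

stack_list <- inla.stack.data(inlamemi_model$stack_data)
names(stack_list)   # names of the stacked response and covariate columns
head(stack_list$Y)  # the stacked response matrix (assumed to be called "Y")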

Finally, here is an example where the missingness mechanism itself is modelled through formula_mis, which adds yet another sub-model (with coefficients gamma) to the stack:

mis_mod <- fit_inlamemi(formula_moi = y ~ x + z1 + z2,
                        formula_imp = x ~ z1,
                        formula_mis = m ~ z2 + x,
                        family_moi = "gaussian",
                        data = mar_data,
                        error_type = "missing",
                        prior.beta.error = c(0, 1/1000),
                        prior.gamma.error = c(0, 1/1000),
                        prior.prec.moi = c(10, 9),
                        prior.prec.imp = c(10, 9),
                        initial.prec.moi = 1,
                        initial.prec.imp = 1)
cat(show_data_structure(mis_mod$stack_data)$matrix_string)

$$
\underbrace{\begin{bmatrix} 7.9 & \texttt{NA} & \texttt{NA} & \texttt{NA}\\ \vdots & \vdots & \vdots & \vdots\\ 2.58 & \texttt{NA} & \texttt{NA} & \texttt{NA}\\ \texttt{NA} & 2.31 & \texttt{NA} & \texttt{NA}\\ \vdots & \vdots & \vdots & \vdots\\ \texttt{NA} & \texttt{NA} & \texttt{NA} & \texttt{NA}\\ \texttt{NA} & \texttt{NA} & 0 & \texttt{NA}\\ \vdots & \vdots & \vdots & \vdots\\ \texttt{NA} & \texttt{NA} & 0 & \texttt{NA}\\ \texttt{NA} & \texttt{NA} & \texttt{NA} & 0\\ \vdots & \vdots & \vdots & \vdots\\ \texttt{NA} & \texttt{NA} & \texttt{NA} & 1\\ \end{bmatrix}}_{\texttt{Y}} =
\beta_{0}\underbrace{\begin{bmatrix} 1\\ \vdots\\ 1\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{beta.0}} +
\beta_{x}\underbrace{\begin{bmatrix} 1\\ \vdots\\ 1000\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{beta.x}} +
\beta_{z1}\underbrace{\begin{bmatrix} 0.98\\ \vdots\\ 0.77\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{beta.z1}} +
\beta_{z2}\underbrace{\begin{bmatrix} -0.05\\ \vdots\\ -0.72\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{beta.z2}} +
\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ 1\\ \vdots\\ 1000\\ -1\\ \vdots\\ -1000\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{id.x}} +
\alpha_{x,0}\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ 1\\ \vdots\\ 1\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{alpha.x.0}} +
\alpha_{x,z1}\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ 0.98\\ \vdots\\ 0.77\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \end{bmatrix}}_{\texttt{alpha.x.z1}} +
\gamma_{x,0}\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ 1\\ \vdots\\ 1\\ \end{bmatrix}}_{\texttt{gamma.x.0}} +
\gamma_{x,z2}\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ -0.05\\ \vdots\\ -0.72\\ \end{bmatrix}}_{\texttt{gamma.x.z2}} +
\gamma_{x}\underbrace{\begin{bmatrix} \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ \texttt{NA}\\ \vdots\\ \texttt{NA}\\ 1\\ \vdots\\ 1000\\ \end{bmatrix}}_{\texttt{gamma.x}}
$$