Two-Way Tables (Reading: Faraway (2006), Sections 4.1–4.2)
The data shown below were collected as part of a quality improvement study at a semiconductor factory.
A sample of wafers was drawn and cross-classified according to whether a particle was found on the die that produced the wafer and whether the wafer was good or bad. First, let us set up the data in a convenient form for analysis:
> y <- c(320, 14, 80, 36)
> particle <- gl(2,1,4,labels=c("no","yes"))
> quality <- gl(2,2,labels=c("good","bad"))
> wafer <- data.frame(y,particle,quality)
We will need the data in this form, with one observation per line, for model fitting, but it is usually easier to view the data in table form:
> (ov <- xtabs(y ~ quality+particle))
The data might have arisen under several possible sampling schemes:
observe the manufacturing process for a certain period of time and observe 450 wafers ⇒ all the numbers are random ⇒ we could use a Poisson model
decide to sample 450 wafers ⇒ 450 is a fixed number and the other numbers are random ⇒ we could use a multinomial model
select 400 wafers without particles and 50 wafers with particles ⇒ 400, 50, and 450 are fixed numbers and the other numbers are random ⇒ we could use a binomial model
select 400 wafers without particles and 50 wafers with particles that also include, by design, 334 good wafers and 116 bad ones ⇒ 400, 50, 334, 116, and 450 are fixed numbers and the other numbers are random ⇒ we could use a hypergeometric model
The main question of interest concerning these data is whether the presence of particles on the wafer affects the quality outcome. We shall see in the following that all four sampling schemes lead to exactly the same conclusion.
Sampling Scheme 1: Poisson Model.
Under sampling scheme 1, it would be natural to view these outcomes, Yij, 1 ≤ i ≤ I, 1 ≤ j ≤ J, as occurring at different rates, λpij, where λ is an unknown size parameter and pij is the probability that a randomly selected subject falls in the (i,j)th cell. We can form a Poisson model for these counts, i.e., Yij ~ Poisson(λpij). Suppose that we fit a Poisson GLM with only main effect terms:
> modl <- glm(y ~ particle+quality, family=poisson, wafer)
the null model, which suggests all four outcomes occur at the same rate, does not fit because the deviance of 474.10 is very large for 3 degrees of freedom
the fitted model has a residual deviance of 54.03, which is clearly an improvement over the null model. An alternative goodness-of-fit measure is the Pearson X2:
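One way to compute it is from the Pearson residuals; a sketch, assuming modl is the main-effects fit above:

```r
# data setup as above
y <- c(320, 14, 80, 36)
particle <- gl(2, 1, 4, labels=c("no","yes"))
quality <- gl(2, 2, labels=c("good","bad"))
modl <- glm(y ~ particle + quality, family=poisson)

# Pearson X2 goodness-of-fit statistic for the main-effects Poisson model
sum(residuals(modl, type="pearson")^2)   # approximately 62.81
```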
we might also want to test the significance of the individual predictors. The null hypotheses of these tests correspond to equality of the marginal proportions, i.e., p1+ = p2+ = ... = pI+ or p+1 = p+2 = ... = p+J. For this table, we could use the z-values in the output, but it is better to use the difference-in-deviance test:
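The difference-in-deviance (likelihood-ratio) tests can be obtained with drop1; a sketch, assuming modl is the main-effects fit above:

```r
# data setup as above
y <- c(320, 14, 80, 36)
particle <- gl(2, 1, 4, labels=c("no","yes"))
quality <- gl(2, 2, labels=c("good","bad"))
modl <- glm(y ~ particle + quality, family=poisson)

# likelihood-ratio test for dropping each predictor in turn
drop1(modl, test="Chi")
```

Each row of the output compares the fitted model against the model with that term removed.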
We see that both predictors are significant relative to the fitted model, suggesting that the marginal proportions are not equal for either predictor.
by examining the coefficients, we see that wafers without particles occur at a significantly higher rate than wafers with particles because the estimated coefficient of particleyes is negative. Similarly, we see that good-quality wafers occur at a significantly higher rate than bad-quality wafers because the estimated coefficient of qualitybad is negative
the coefficients of a model containing only main effect terms are closely related to the marginal totals because the MLE of the rates μij = λpij must satisfy XTy = XT(μ-hat), where XTy is, in this example:
> (t(model.matrix(modl)) %*% y)[,]
So we see that the fitted values, μ-hat, must be a function of the marginal totals. The coding we use will determine exactly how the coefficients relate to the marginal totals. For example, consider the particle factor, which is coded as 0 and 1 for the no particle and particle categories, respectively. Exponentiating its coefficient (together with the 0 for the reference category) gives:
> (coefparticle <- exp(c(0, coef(modl)["particleyes"])))
After normalization, these are just the marginal proportions for the no particle wafers and particle wafers, respectively:
> coefparticle/sum(coefparticle)
> c(400, 50)/450
this model has a deviance of 54.03 on one degree of freedom and so does not fit the data. A Poisson GLM with only main effect terms corresponds to an independent contingency table, i.e., pij = pi+p+j. Because the model does not fit, we reject the hypothesis that good- or bad-quality outcomes occur independently of whether a particle was found on the wafer.
the addition of an interaction term would saturate the model, so it would have zero deviance and zero degrees of freedom. Hence a hypothesis test comparing the models with and without interaction would use a test statistic of 54.03 on one degree of freedom, which is exactly the goodness-of-fit test. The hypothesis of no interaction is rejected: the need to fit a Poisson GLM with interaction indicates that the two predictors are associated, i.e., dependent.
for each cell, the fitted values under the Poisson GLM with only main effect terms are:
Notice that these fitted values form an independent contingency table because the odds ratio is one:
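These quantities can be reproduced as follows; a sketch, assuming modl and wafer are the objects created above (the name pofv is illustrative):

```r
# data and model setup as above
y <- c(320, 14, 80, 36)
particle <- gl(2, 1, 4, labels=c("no","yes"))
quality <- gl(2, 2, labels=c("good","bad"))
wafer <- data.frame(y, particle, quality)
modl <- glm(y ~ particle + quality, family=poisson, wafer)

# fitted values of the main-effects Poisson model, arranged as a table
(pofv <- xtabs(fitted(modl) ~ quality + particle, wafer))
# the fitted table satisfies independence: its odds ratio is exactly one
(pofv[1,1]*pofv[2,2])/(pofv[1,2]*pofv[2,1])
```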
Sampling Scheme 2: Multinomial Model.
Under sampling scheme 2, the total count of 450 is fixed and the cell counts follow a multinomial distribution. Under the independence hypothesis, pij = pi+p+j, the MLEs of the marginal proportions are simply the observed marginal proportions:
> (pp <- prop.table( xtabs(y ~ particle)))
> (qp <- prop.table( xtabs(y ~ quality)))
and the fitted values, Yij-hat, of each cell under the independence hypothesis, i.e., pij = pi+p+j, are y++(pi+-hat)(p+j-hat):
> (fv <- outer(qp,pp)*450)
Notice that the fitted values are the same as that in the Poisson GLM with only main effect terms.
To test whether the fitted values are close to the observed values, we can compare this model against the saturated model, for which the MLEs of the cell counts are the observed values Yij. So, the log-likelihood-ratio test statistic for independence is 2 Σi Σj Yij log[Yij/(Yij-hat)] [Note: this is because the log-likelihood of the multinomial is Σi Σj Yij log(pij)], which computes to:
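In R, assuming ov and fv are the observed and fitted tables constructed above:

```r
# data setup and observed/fitted tables as above
y <- c(320, 14, 80, 36)
particle <- gl(2, 1, 4, labels=c("no","yes"))
quality <- gl(2, 2, labels=c("good","bad"))
ov <- xtabs(y ~ quality + particle)
pp <- prop.table(xtabs(y ~ particle))
qp <- prop.table(xtabs(y ~ quality))
fv <- outer(qp, pp)*450

# log-likelihood-ratio statistic for independence in the multinomial model
2*sum(ov*log(ov/fv))   # approximately 54.03
```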
This is the same deviance we observed in the Poisson model with only main effect terms. So, we see that the test for independence in the multinomial model coincides with the test for no interaction in the Poisson model. The latter test is easier to execute in R, so we shall usually take that approach.
One alternative to the deviance is the Pearson X2 statistic Σi Σj [(Oij-Eij)^2/Eij]:
> sum( (ov-fv)^2/fv)
which is also the same as that in the Poisson model with only main effect terms. We can apply Yates' continuity correction to the Pearson X2 statistic, which gives superior results for small samples. The correction is implemented in, e.g., chisq.test:
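For a 2×2 table, chisq.test applies the continuity correction by default; assuming ov is the observed table from above:

```r
# data setup as above
y <- c(320, 14, 80, 36)
particle <- gl(2, 1, 4, labels=c("no","yes"))
quality <- gl(2, 2, labels=c("good","bad"))
ov <- xtabs(y ~ quality + particle)

# Pearson X2 test with Yates' continuity correction (the default for 2x2 tables)
chisq.test(ov)
```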
The X2 statistic with continuity correction is 60.1239 on one degree of freedom, which clearly rejects the null hypothesis of independence.
Sampling Scheme 3: Binomial Model.
Under sampling scheme 3, it would be natural to view the presence of the particle as a cause that affects the quality of the wafer. In other words, we would view the variable quality as the response and the variable particle as a predictor. When the number of wafers with no particles is fixed at 400 and the number with particles at 50, we can use a binomial model for the response in both groups. To fit a binomial GLM to the response in R, we need to form a two-column matrix with the 1st column representing the number of "successes" and the 2nd column the number of "failures":
> (m <- matrix(y,nrow=2))
Let us fit a binomial GLM that only contains a constant term, which suggests that the probability of the response is the same in both the particle and no particle group, i.e., a homogeneous model:
> modb <- glm(m ~ 1, family=binomial)
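The fit can be inspected directly; a sketch, assuming m is the matrix formed above:

```r
# data setup as above
y <- c(320, 14, 80, 36)
m <- matrix(y, nrow=2)
modb <- glm(m ~ 1, family=binomial)

# residual deviance of the homogeneous (intercept-only) binomial model
deviance(modb)   # approximately 54.03
```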
the deviance of the homogeneous binomial model, 54.03, is exactly the same as the deviance of the Poisson model containing only main effect terms
in other words, the hypothesis of homogeneity in the binomial GLM corresponds exactly to the hypothesis of independence in the Poisson GLM
Because the row (or column) marginal totals are fixed under this sampling scheme, the marginal totals in the table do not carry any useful information about the marginal proportions pi+ and p+j. Therefore, it does not make sense to test the equivalence of marginal proportions for data obtained from a sampling scheme with fixed marginal totals.
Sampling Scheme 4: Hypergeometric Model.
Under sampling scheme 4, if we fix any number in the table, say Y11, the remaining three numbers are completely determined because the row and column marginal totals are known. In this circumstance, the distribution of Y11 under the null hypothesis pij = pi+p+j is hypergeometric. Because there are only a limited number of values that Y11 can possibly take, we can compute the total probability of all outcomes that are "more extreme" than the one observed. This total probability is then the p-value. The method is called Fisher's exact test:
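The test is available as fisher.test; a sketch, assuming ov is the observed table from above:

```r
# data setup as above
y <- c(320, 14, 80, 36)
particle <- gl(2, 1, 4, labels=c("no","yes"))
quality <- gl(2, 2, labels=c("good","bad"))
ov <- xtabs(y ~ quality + particle)

# Fisher's exact test for independence in the 2x2 table
fisher.test(ov)
```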
the default takes as "more extreme" all values of Y11 with probabilities (under the null) less than or equal to that of the observed Y11
the exact p-value, 2.955e-13, also suggests that the two variables quality and particle are not independent
the method estimates an odds ratio of 10.21331 with an exact confidence interval [5.090628, 21.544071]. The odds ratio is a measure of association for a 2×2 table. We see that 10.21331 is much larger than one (odds ratio = 1 ⇔ independence) and also that the confidence interval does not contain one.
the sample odds ratio, which is (Y11Y22)/(Y12Y21), takes the value:
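Assuming ov is the observed table from above:

```r
# data setup as above
y <- c(320, 14, 80, 36)
particle <- gl(2, 1, 4, labels=c("no","yes"))
quality <- gl(2, 2, labels=c("good","bad"))
ov <- xtabs(y ~ quality + particle)

# sample (unconditional) odds ratio (Y11*Y22)/(Y12*Y21)
(ov[1,1]*ov[2,2])/(ov[1,2]*ov[2,1])   # 10.28571
```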
which is very close to the estimate of 10.21331 from Fisher's exact test (the latter is a conditional MLE, which is why the two differ slightly). An exact confidence interval can also be calculated, as we saw in the output of Fisher's exact test.
As a second example, consider a sample of people cross-classified according to hair and eye color:
> haireye <- read.table("haireye.txt")
The data are more conveniently displayed using:
> (ct <- xtabs(y ~ hair + eye, haireye))
(Q: what information have you found by merely looking at the table?)
We can execute the usual Pearson's X2 test for independence as:
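One way is to apply chisq.test to the table; a sketch, assuming ct is the table constructed above (the test below rebuilds an equivalent table from R's built-in HairEyeColor data, summed over sex):

```r
# a 4x4 hair-by-eye table; here built from R's HairEyeColor data, summed over sex
ct <- apply(HairEyeColor, c(1, 2), sum)

# Pearson X2 test for independence of hair and eye color
chisq.test(ct)
```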
we see that hair and eye color are clearly not independent because an X2 of 138.29 on 9 degrees of freedom is way too large
9=(4-1)(4-1) is the degrees of freedom for the interaction between the two categorical variables.
Now, let us fit a Poisson GLM containing only main effect terms:
> modc <- glm(y ~ hair + eye, family=poisson, haireye)
most of the levels of hair and eye color show up as significantly different from the reference levels of black hair and green eyes, which indicates that there are higher numbers of people with some hair colors than others and some eye colors than others, i.e., the marginal proportions of hair and eye color are not equivalent
the deviance of 146.44 on 9 degrees of freedom shows that hair color and eye color are clearly dependent
we can use the Pearson residuals under this Poisson model to calculate a Pearson X2:
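A sketch, assuming modc is the main-effects fit above (here rebuilt from R's built-in HairEyeColor data, summed over sex):

```r
# rebuild a hair-by-eye frequency data frame from HairEyeColor, summed over sex
ct <- apply(HairEyeColor, c(1, 2), sum)
haireye <- as.data.frame(as.table(ct))   # columns Hair, Eye, Freq
modc <- glm(Freq ~ Hair + Eye, family=poisson, haireye)

# Pearson X2 computed from the Pearson residuals of the main-effects model
sum(residuals(modc, type="pearson")^2)   # approximately 138.29
```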
One option for displaying contingency table data is a graphical tool, dotchart:
An alternative graphical tool is the mosaic plot, which divides the plot region according to the frequency of each level in a recursive manner:
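Both displays can be produced directly from the table ct; a sketch (the plot title and options are illustrative):

```r
# a 4x4 hair-by-eye table; here built from R's HairEyeColor data, summed over sex
ct <- apply(HairEyeColor, c(1, 2), sum)

# dot chart of the counts, grouped by hair color
dotchart(as.matrix(ct))

# mosaic plot: region areas are proportional to cell frequencies,
# dividing first by hair color, then by eye color within hair color
mosaicplot(ct, color=TRUE, main="hair vs eye color")
```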
In the plot,
the area is first divided according to the frequency of hair color
within each hair color, the area is then divided according to the frequency of eye color
we can easily read the information of pi+ and pj|i from the plot
a different plot could be constructed by reversing the order of hair and eye in the xtabs command above
we can now readily see the frequency of various outcomes. For example, brown hair and brown eyes is the most common combination while black hair and green eyes is the least common
(Q: what patterns will you see in the dotchart and mosaic plots if the two variables are independent? what if they have equivalent marginal proportions? Generate some tables with the two properties and draw the plots.)