One Factor Designs


Chapter 5: One Factor Designs



As explained in Simple Linear Regression Analysis and Multiple Linear Regression Analysis, the analysis of observational studies involves the use of regression models. The analysis of experimental studies involves the use of analysis of variance (ANOVA) models. For a comparison of the two models see Fitting ANOVA Models. In single factor experiments, ANOVA models are used to compare the mean response values at different levels of the factor. Each level of the factor is investigated to see if the response is significantly different from the response at other levels of the factor. The analysis of single factor experiments is often referred to as one-way ANOVA.

To illustrate the use of ANOVA models in the analysis of experiments, consider a single factor experiment where the analyst wants to see if the surface finish of certain parts is affected by the speed of a lathe machine. Data is collected for three speeds (or three treatments). Each treatment is replicated four times. Therefore, this experiment design is balanced. Surface finish values recorded using randomization are shown in the following table.


Surface finish values for three speeds of a lathe machine.


The ANOVA model for this experiment can be stated as follows:


{{Y}_{ij}}={{\mu }_{i}}+{{\epsilon }_{ij}}\,\!


The ANOVA model assumes that the response at each factor level, i\,\!, is the sum of the mean response at the i\,\!th level, {{\mu }_{i}}\,\!, and a random error term, {{\epsilon }_{ij}}\,\!. The subscript i\,\! denotes the factor level while the subscript j\,\! denotes the replicate. If there are {{n}_{a}}\,\! levels of the factor and m\,\! replicates at each level then i=1,2,...,{{n}_{a}}\,\! and j=1,2,...,m\,\!. The random error terms, {{\epsilon }_{ij}}\,\!, are assumed to be normally and independently distributed with a mean of zero and variance of {{\sigma }^{2}}\,\!. Therefore, the response at each level can be thought of as a normally distributed population with a mean of {{\mu }_{i}}\,\! and constant variance of {{\sigma }^{2}}\,\!. The equation given above is referred to as the means model.

The means model can also be written using {{\mu }_{i}}=\mu +{{\tau }_{i}}\,\!, where \mu \,\! represents the overall mean and {{\tau }_{i}}\,\! represents the effect due to the i\,\!th treatment:


{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!


Such an ANOVA model is called the effects model. In the effects model, the treatment effects, {{\tau }_{i}}\,\!, represent the deviations from the overall mean, \mu \,\!. Therefore, the following constraint exists on the {{\tau }_{i}}\,\!s:


\underset{i=1}{\overset{{{n}_{a}}}{\mathop \sum }}\,{{\tau }_{i}}=0\,\!
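To make the effects model concrete, data can be simulated from it; a minimal Python sketch, where the overall mean, treatment effects and error standard deviation are illustrative assumed values, not estimates from the lathe data:

```python
import random

random.seed(1)

mu = 15.0                # overall mean (illustrative assumption)
tau = [-5.0, -0.5, 5.5]  # treatment effects; chosen to sum to zero
m = 4                    # replicates per level
sigma = 3.0              # common error standard deviation (assumed)

# Generate Y_ij = mu + tau_i + eps_ij for each level i and replicate j,
# with eps_ij normally distributed with mean 0 and variance sigma^2
data = [[mu + tau[i] + random.gauss(0.0, sigma) for _ in range(m)]
        for i in range(3)]

assert abs(sum(tau)) < 1e-12  # the constraint sum(tau_i) = 0 holds
```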


Fitting ANOVA Models

To fit ANOVA models and carry out hypothesis testing in single factor experiments, it is convenient to express the effects model in the form y=X\beta +\epsilon \,\! (the form that was used for multiple linear regression models in Multiple Linear Regression Analysis). This can be done as shown next. Using the effects model, the ANOVA model for the single factor experiment in the first table can be expressed as:


{{Y}_{ij}}=\mu +{{\tau }_{i}}+{{\epsilon }_{ij}}\,\!


where \mu \,\! represents the overall mean and {{\tau }_{i}}\,\! represents the i\,\!th treatment effect. There are three treatments in the first table (500, 600 and 700). Therefore, there are three treatment effects, {{\tau }_{1}}\,\!, {{\tau }_{2}}\,\! and {{\tau }_{3}}\,\!. The following constraint exists for these effects:


\begin{align}
\underset{i=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{i}}= & 0 \\ 
\text{or   } {{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= & 0  
\end{align}\,\!


For the first treatment, the ANOVA model for the single factor experiment in the above table can be written as:


{{Y}_{1j}}=\mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+0\cdot {{\tau }_{3}}+{{\epsilon }_{1j}}\,\!


Using {{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!, the model for the first treatment is:


\begin{align}
{{Y}_{1j}}= & \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}-0\cdot ({{\tau }_{1}}+{{\tau }_{2}})+{{\epsilon }_{1j}} \\ 
\text{or   }{{Y}_{1j}}= & \mu +{{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}}  
\end{align}\,\!


Models for the second and third treatments can be obtained in a similar way. The models for the three treatments are:


\begin{align}
\text{First Treatment}: & {{Y}_{1j}}=1\cdot \mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{1j}} \\ 
\text{Second Treatment}: & {{Y}_{2j}}=1\cdot \mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{2j}} \\ 
\text{Third Treatment}: & {{Y}_{3j}}=1\cdot \mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{3j}}  
\end{align}\,\!


The coefficients of the treatment effects {{\tau }_{1}}\,\! and {{\tau }_{2}}\,\! can be expressed using two indicator variables, {{x}_{1}}\,\! and {{x}_{2}}\,\!, as follows:


\begin{align}
\text{Treatment Effect }{{\tau }_{1}}: & {{x}_{1}}=1,\text{   }{{x}_{2}}=0 \\ 
\text{Treatment Effect }{{\tau }_{2}}: & {{x}_{1}}=0,\text{   }{{x}_{2}}=1\text{           } \\ 
\text{Treatment Effect }{{\tau }_{3}}: & {{x}_{1}}=-1,\text{   }{{x}_{2}}=-1\text{     }  
\end{align}\,\!
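This effect (sum-to-zero) coding can be expressed as a small helper function; a sketch in Python (the function name `effect_code` is ours, not from the text):

```python
def effect_code(level, n_levels=3):
    """Return (x1, ..., x_{n_levels-1}) for a 1-based factor level,
    using effect (sum-to-zero) coding: the last level is coded all -1."""
    if level == n_levels:
        return tuple([-1] * (n_levels - 1))
    return tuple(1 if k == level else 0 for k in range(1, n_levels))

assert effect_code(1) == (1, 0)    # treatment effect tau_1
assert effect_code(2) == (0, 1)    # treatment effect tau_2
assert effect_code(3) == (-1, -1)  # treatment effect tau_3
```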


Using the indicator variables {{x}_{1}}\,\! and {{x}_{2}}\,\!, the ANOVA model for the data in the first table now becomes:

Y=\mu +{{x}_{1}}\cdot {{\tau }_{1}}+{{x}_{2}}\cdot {{\tau }_{2}}+\epsilon \,\!


The equation can be rewritten by including subscripts i\,\! (for the level of the factor) and j\,\! (for the replicate number) as:

{{Y}_{ij}}=\mu +{{x}_{i1}}\cdot {{\tau }_{1}}+{{x}_{i2}}\cdot {{\tau }_{2}}+{{\epsilon }_{ij}}\,\!


The equation given above represents the "regression version" of the ANOVA model.


Treat Numerical Factors as Qualitative or Quantitative?

It can be seen from the equation given above that in an ANOVA model each factor is treated as a qualitative factor. In the present example the factor, lathe speed, is a quantitative factor with three levels. But the ANOVA model treats this factor as a qualitative factor with three levels. Therefore, two indicator variables, {{x}_{1}}\,\! and {{x}_{2}}\,\!, are required to represent this factor.

Note that in a regression model a variable can be treated as either a quantitative or a qualitative variable. The factor, lathe speed, would be used as a quantitative factor and represented with a single predictor variable in a regression model. For example, if a first order model were to be fitted to the data in the first table, then the regression model would take the form {{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\epsilon }_{ij}}\,\!. If a second order regression model were to be fitted, the regression model would be {{Y}_{ij}}={{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}x_{i1}^{2}+{{\epsilon }_{ij}}\,\!. Notice that unlike these regression models, the regression version of the ANOVA model does not make any assumption about the nature of the relationship between the response and the factor being investigated.

The choice of treating a particular factor as a quantitative or qualitative variable depends on the objective of the experimenter. In the case of the data of the first table, the objective of the experimenter is to compare the levels of the factor to see if a change in the levels leads to a significant change in the response. The objective is not to make predictions on the response for a given level of the factor. Therefore, the factor is treated as a qualitative factor in this case. If the objective of the experimenter were prediction or optimization, the experimenter would focus on aspects such as the nature of the relationship between the factor, lathe speed, and the response, surface finish, and the factor would be modeled as a quantitative factor to obtain accurate predictions.

Expression of the ANOVA Model as y = Xβ + ε

The regression version of the ANOVA model can be expanded for the three treatments and four replicates of the data in the first table as follows:


\begin{align}
{{Y}_{11}}= & 6=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{11}}\text{    Level 1, Replicate 1} \\ 
{{Y}_{21}}= & 13=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{21}}\text{  Level 2, Replicate 1} \\ 
{{Y}_{31}}= & 23=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{31}}\text{  Level 3, Replicate 1} \\ 
{{Y}_{12}}= & 13=\mu +1\cdot {{\tau }_{1}}+0\cdot {{\tau }_{2}}+{{\epsilon }_{12}}\text{  Level 1, Replicate 2} \\ 
{{Y}_{22}}= & 16=\mu +0\cdot {{\tau }_{1}}+1\cdot {{\tau }_{2}}+{{\epsilon }_{22}}\text{  Level 2, Replicate 2} \\ 
{{Y}_{32}}= & 20=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{32}}\text{  Level 3, Replicate 2} \\ 
 &  & ... \\ 
{{Y}_{34}}= & 18=\mu -1\cdot {{\tau }_{1}}-1\cdot {{\tau }_{2}}+{{\epsilon }_{34}}\text{  Level 3, Replicate 4}  
\end{align}\,\!


The corresponding matrix notation is:


y=X\beta +\epsilon \,\!


where


y=\left[ \begin{matrix}
   {{Y}_{11}}  \\
   {{Y}_{21}}  \\
   {{Y}_{31}}  \\
   {{Y}_{12}}  \\
   {{Y}_{22}}  \\
   .  \\
   .  \\
   .  \\
   {{Y}_{34}}  \\
\end{matrix} \right]=X\beta +\epsilon =\left[ \begin{matrix}
   1 & 1 & 0  \\
   1 & 0 & 1  \\
   1 & -1 & -1  \\
   1 & 1 & 0  \\
   1 & 0 & 1  \\
   . & . & .  \\
   . & . & .  \\
   . & . & .  \\
   1 & -1 & -1  \\
\end{matrix} \right]\left[ \begin{matrix}
   \mu   \\
   {{\tau }_{1}}  \\
   {{\tau }_{2}}  \\
\end{matrix} \right]+\left[ \begin{matrix}
   {{\epsilon }_{11}}  \\
   {{\epsilon }_{21}}  \\
   {{\epsilon }_{31}}  \\
   {{\epsilon }_{12}}  \\
   {{\epsilon }_{22}}  \\
   .  \\
   .  \\
   .  \\
   {{\epsilon }_{34}}  \\
\end{matrix} \right]\,\!


Thus:


\begin{align}
y= & X\beta +\epsilon  \\ 
 &  &  \\ 
 & \left[ \begin{matrix}
   6  \\
   13  \\
   23  \\
   13  \\
   16  \\
   .  \\
   .  \\
   .  \\
   18  \\
\end{matrix} \right]= & \left[ \begin{matrix}
   1 & 1 & 0  \\
   1 & 0 & 1  \\
   1 & -1 & -1  \\
   1 & 1 & 0  \\
   1 & 0 & 1  \\
   . & . & .  \\
   . & . & .  \\
   . & . & .  \\
   1 & -1 & -1  \\
\end{matrix} \right]\left[ \begin{matrix}
   \mu   \\
   {{\tau }_{1}}  \\
   {{\tau }_{2}}  \\
\end{matrix} \right]+\left[ \begin{matrix}
   {{\epsilon }_{11}}  \\
   {{\epsilon }_{21}}  \\
   {{\epsilon }_{31}}  \\
   {{\epsilon }_{12}}  \\
   {{\epsilon }_{22}}  \\
   .  \\
   .  \\
   .  \\
   {{\epsilon }_{34}}  \\
\end{matrix} \right]  
\end{align}\,\!


The matrices y\,\!, X\,\! and \beta \,\! are used in the calculation of the sum of squares in the next section. The data in the first table can be entered into DOE++ as shown in the figure below.


Single factor experiment design for the data in the first table.
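As a sketch, the y\,\! vector and effect-coded X\,\! matrix can be assembled and the normal equations solved directly in Python. The level-2 values 14 and 10 and the level-3 value 16 below are reconstructed so that the group sums match the article's sums of squares (the run order within each level is an assumption; it does not affect the least squares estimates):

```python
# Responses by level; values 14, 10 (level 2) and 16 (level 3) are
# reconstructed to match the article's sums of squares.
groups = [[6, 13, 7, 8], [13, 16, 14, 10], [23, 20, 16, 18]]

coding = {0: (1, 0), 1: (0, 1), 2: (-1, -1)}  # effect coding per level

# Build y and X row by row: one row per observation (12 rows, 3 columns)
y, X = [], []
for i, obs in enumerate(groups):
    for v in obs:
        y.append(float(v))
        X.append([1.0, float(coding[i][0]), float(coding[i][1])])

# Normal equations: (X'X) beta = X'y, solved by Gauss-Jordan elimination
XtX = [[sum(r[a] * r[b] for r in X) for b in range(3)] for a in range(3)]
Xty = [sum(r[a] * v for r, v in zip(X, y)) for a in range(3)]

A = [row[:] + [b] for row, b in zip(XtX, Xty)]  # augmented matrix
for c in range(3):
    p = max(range(c, 3), key=lambda r: abs(A[r][c]))  # partial pivoting
    A[c], A[p] = A[p], A[c]
    for r in range(3):
        if r != c:
            f = A[r][c] / A[c][c]
            A[r] = [x - f * z for x, z in zip(A[r], A[c])]
beta = [A[r][3] / A[r][r] for r in range(3)]  # [mu, tau1, tau2]

grand_mean = sum(y) / len(y)
assert abs(beta[0] - grand_mean) < 1e-9          # mu-hat = overall mean
assert abs(beta[1] - (8.5 - grand_mean)) < 1e-9  # tau1-hat = ybar1. - ybar
```

With effect coding and a balanced design, the estimates reduce to the overall mean and the deviations of the treatment means from it, which the assertions confirm.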

Hypothesis Test in Single Factor Experiments

The hypothesis test in single factor experiments examines the ANOVA model to see if the response at any level of the investigated factor is significantly different from that at the other levels. If the response is not significantly different at any level, it can be concluded that the investigated factor does not affect the response. The test on the ANOVA model is carried out by checking to see if any of the treatment effects, {{\tau }_{i}}\,\!, are non-zero. The test is similar to the test of significance of regression mentioned in Simple Linear Regression Analysis and Multiple Linear Regression Analysis in the context of regression models. The hypothesis statements for this test are:


\begin{align}
  & {{H}_{0}}: & {{\tau }_{1}}={{\tau }_{2}}=...={{\tau }_{{{n}_{a}}}}=0 \\ 
 & {{H}_{1}}: & {{\tau }_{i}}\ne 0\text{    for at least one }i  
\end{align}\,\!


The test for {{H}_{0}}\,\! is carried out using the following statistic:


{{F}_{0}}=\frac{M{{S}_{TR}}}{M{{S}_{E}}}\,\!


where M{{S}_{TR}}\,\! represents the mean square for the ANOVA model and M{{S}_{E}}\,\! is the error mean square. Note that in the case of ANOVA models we use the notation M{{S}_{TR}}\,\! (treatment mean square) for the model mean square and S{{S}_{TR}}\,\! (treatment sum of squares) for the model sum of squares (instead of M{{S}_{R}}\,\!, regression mean square, and S{{S}_{R}}\,\!, regression sum of squares, used in Simple Linear Regression Analysis and Multiple Linear Regression Analysis). This is done to indicate that the model under consideration is the ANOVA model and not the regression model. The calculations to obtain M{{S}_{TR}}\,\! and S{{S}_{TR}}\,\! are identical to the calculations to obtain M{{S}_{R}}\,\! and S{{S}_{R}}\,\! explained in Multiple Linear Regression Analysis.


Calculation of the Statistic {{F}_{0}}\,\!

The sum of squares to obtain the statistic {{F}_{0}}\,\! can be calculated as explained in Multiple Linear Regression Analysis. Using the data in the first table, the model sum of squares, S{{S}_{TR}}\,\!, can be calculated as:


\begin{align}
S{{S}_{TR}}= & {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ 
= & {{\left[ \begin{matrix}
   6  \\
   13  \\
   .  \\
   .  \\
   18  \\
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}
   0.1667 & -0.0833 & . & . & -0.0833  \\
   -0.0833 & 0.1667 & . & . & -0.0833  \\
   . & . & . & . & .  \\
   . & . & . & . & .  \\
   -0.0833 & -0.0833 & . & . & 0.1667  \\
\end{matrix} \right]\left[ \begin{matrix}
   6  \\
   13  \\
   .  \\
   .  \\
   18  \\
\end{matrix} \right] \\ 
= & 232.1667  
\end{align}\,\!


In the previous equation, {{n}_{a}}\,\! represents the number of levels of the factor, m\,\! represents the replicates at each level, y\,\! represents the vector of the response values, H\,\! represents the hat matrix and J\,\! represents the matrix of ones. (For details on each of these terms, refer to Multiple Linear Regression Analysis.) Since two effect terms, {{\tau }_{1}}\,\! and {{\tau }_{2}}\,\!, are used in the regression version of the ANOVA model, the degrees of freedom associated with the model sum of squares, S{{S}_{TR}}\,\!, is two.


dof(S{{S}_{TR}})=2\,\!


The total sum of squares, S{{S}_{T}}\,\!, can be obtained as follows:


\begin{align}
S{{S}_{T}}= & {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot m})J]y \\ 
= & {{\left[ \begin{matrix}
   6  \\
   13  \\
   .  \\
   .  \\
   18  \\
\end{matrix} \right]}^{\prime }}\left[ \begin{matrix}
   0.9167 & -0.0833 & . & . & -0.0833  \\
   -0.0833 & 0.9167 & . & . & -0.0833  \\
   . & . & . & . & .  \\
   . & . & . & . & .  \\
   -0.0833 & -0.0833 & . & . & 0.9167  \\
\end{matrix} \right]\left[ \begin{matrix}
   6  \\
   13  \\
   .  \\
   .  \\
   18  \\
\end{matrix} \right] \\ 
= & 306.6667  
\end{align}\,\!


In the previous equation, I\,\! is the identity matrix. Since there are 12 data points in all, the number of degrees of freedom associated with S{{S}_{T}}\,\! is 11.


dof(S{{S}_{T}})=11\,\!


Knowing S{{S}_{T}}\,\! and S{{S}_{TR}}\,\!, the error sum of squares is:


\begin{align}
S{{S}_{E}}= & S{{S}_{T}}-S{{S}_{TR}} \\ 
= & 306.6667-232.1667 \\ 
= & 74.5  
\end{align}\,\!


The number of degrees of freedom associated with S{{S}_{E}}\,\! is:


\begin{align}
dof(S{{S}_{E}})= & dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ 
= & 11-2 \\ 
= & 9  
\end{align}\,\!


The test statistic can now be calculated using the equation given in Hypothesis Test in Single Factor Experiments as:


\begin{align}
{{f}_{0}}= & \frac{M{{S}_{TR}}}{M{{S}_{E}}} \\ 
= & \frac{S{{S}_{TR}}/dof(S{{S}_{TR}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ 
= & \frac{232.1667/2}{74.5/9} \\ 
= & 14.0235  
\end{align}\,\!
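For this balanced one-way model, the matrix expressions reduce to the familiar group-mean identities, so the sums of squares and {{f}_{0}}\,\! can be checked with plain Python. Two level-2 values and one level-3 value below are reconstructed so the group sums match the article's totals (an assumption about the raw data, not about the results):

```python
# values 14, 10 and 16 reconstructed to match the published sums of squares
groups = [[6, 13, 7, 8], [13, 16, 14, 10], [23, 20, 16, 18]]

m = 4                                   # replicates per level
all_y = [v for g in groups for v in g]
grand = sum(all_y) / len(all_y)         # overall mean
means = [sum(g) / m for g in groups]    # treatment means

ss_tr = m * sum((mi - grand) ** 2 for mi in means)  # treatment SS, dof = 2
ss_t = sum((v - grand) ** 2 for v in all_y)         # total SS, dof = 11
ss_e = ss_t - ss_tr                                 # error SS, dof = 9

f0 = (ss_tr / 2) / (ss_e / 9)           # MS_TR / MS_E

assert abs(ss_tr - 232.1667) < 1e-3
assert abs(ss_e - 74.5) < 1e-6
assert abs(f0 - 14.0235) < 1e-3
```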


The p\,\! value for the statistic based on the F\,\! distribution with 2 degrees of freedom in the numerator and 9 degrees of freedom in the denominator is:


\begin{align}
p\text{ }value= & 1-P(F\le {{f}_{0}}) \\ 
= & 1-0.9983 \\ 
= & 0.0017  
\end{align}\,\!


Assuming that the desired significance level is 0.1, since p\,\! value < 0.1, {{H}_{0}}\,\! is rejected and it is concluded that change in the lathe speed has a significant effect on the surface finish. DOE++ displays these results in the ANOVA table, as shown in the figure below. The values of S and R-sq are the standard error and the coefficient of determination for the model, respectively. These values are explained in Multiple Linear Regression Analysis and indicate how well the model fits the data. The values in the figure below indicate that the fit of the ANOVA model is fair.


ANOVA table for the data in the first table.

Confidence Interval on the ith Treatment Mean

The response at each treatment of a single factor experiment can be assumed to be a normal population with a mean of {{\mu }_{i}}\,\! and variance of {{\sigma }^{2}}\,\! provided that the error terms can be assumed to be normally distributed. A point estimator of {{\mu }_{i}}\,\! is the average response at each treatment, {{\bar{y}}_{i\cdot }}\,\!. Since this is a sample average, the associated variance is {{\sigma }^{2}}/{{m}_{i}}\,\!, where {{m}_{i}}\,\! is the number of replicates at the i\,\!th treatment. Therefore, the confidence interval on {{\mu }_{i}}\,\! is based on the t\,\! distribution. Recall from Statistical Background on DOE (inference on population mean when variance is unknown) that:


\begin{align}
{{T}_{0}}= & \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{{{{\hat{\sigma }}}^{2}}/{{m}_{i}}}} \\ 
= & \frac{{{{\bar{y}}}_{i\cdot }}-{{\mu }_{i}}}{\sqrt{M{{S}_{E}}/{{m}_{i}}}}  
\end{align}\,\!


has a t\,\! distribution with degrees of freedom =dof(S{{S}_{E}})\,\!. Therefore, a 100 (1-\alpha \,\!) percent confidence interval on the i\,\!th treatment mean, {{\mu }_{i}}\,\!, is:


{{\bar{y}}_{i\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}}\,\!


For example, for the first treatment of the lathe speed we have:


\begin{align}
{{{\hat{\mu }}}_{1}}= & {{{\bar{y}}}_{1\cdot }} \\ 
= & \frac{6+13+7+8}{4} \\ 
= & 8.5  
\end{align}\,\!


In DOE++, this value is displayed as the Estimated Mean for the first level, as shown in the Data Summary table in the figure below. The value displayed as the standard deviation for this level is simply the sample standard deviation calculated using the observations corresponding to this level. The 90% confidence interval for this treatment is:


\begin{align}
= & {{{\bar{y}}}_{1\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{M{{S}_{E}}}{{{m}_{i}}}} \\ 
= & {{{\bar{y}}}_{1\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{M{{S}_{E}}}{4}} \\ 
= & 8.5\pm 1.833\sqrt{\frac{(74.5/9)}{4}} \\ 
= & 8.5\pm 1.833(1.44) \\ 
= & 8.5\pm 2.64  
\end{align}\,\!


The 90% limits on {{\mu }_{1}}\,\! are 5.9 and 11.1, respectively.
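This interval can be verified numerically; a short Python sketch using the quantities from the text ({{t}_{0.05,9}}=1.833\,\! is taken from the calculation above):

```python
import math

mse = 74.5 / 9   # error mean square from the ANOVA results
m_i = 4          # replicates at the first level
ybar1 = 8.5      # estimated mean for the first level
t_crit = 1.833   # t_{0.05, 9}

half_width = t_crit * math.sqrt(mse / m_i)
lo, hi = ybar1 - half_width, ybar1 + half_width

assert abs(half_width - 2.64) < 0.01
assert round(lo, 1) == 5.9 and round(hi, 1) == 11.1
```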


Data Summary table for the single factor experiment in the first table.

Confidence Interval on the Difference in Two Treatment Means

The confidence interval on the difference in two treatment means, {{\mu }_{i}}-{{\mu }_{j}}\,\!, is used to compare two levels of the factor at a given significance. If the confidence interval does not include the value of zero, it is concluded that the two levels of the factor are significantly different. The point estimator of {{\mu }_{i}}-{{\mu }_{j}}\,\! is {{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\!. The variance for {{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\! is:


\begin{align}
var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})= & var({{{\bar{y}}}_{i\cdot }})+var({{{\bar{y}}}_{j\cdot }}) \\ 
= & {{\sigma }^{2}}/{{m}_{i}}+{{\sigma }^{2}}/{{m}_{j}}  
\end{align}\,\!


For balanced designs all {{m}_{i}}=m\,\!. Therefore:


var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})=2{{\sigma }^{2}}/m\,\!


The standard deviation for {{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\,\! can be obtained by taking the square root of var({{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }})\,\! and is referred to as the pooled standard error:


\begin{align}
Pooled\text{ }Std.\text{ }Error= & \sqrt{var({{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }})} \\ 
= & \sqrt{2{{{\hat{\sigma }}}^{2}}/m}  
\end{align}\,\!


The t\,\! statistic for the difference is:


\begin{align}
{{T}_{0}}= & \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2{{{\hat{\sigma }}}^{2}}/m}} \\ 
= & \frac{{{{\bar{y}}}_{i\cdot }}-{{{\bar{y}}}_{j\cdot }}-({{\mu }_{i}}-{{\mu }_{j}})}{\sqrt{2M{{S}_{E}}/m}}  
\end{align}\,\!


Then a 100 (1- \alpha \,\!) percent confidence interval on the difference in two treatment means, {{\mu }_{i}}-{{\mu }_{j}}\,\!, is:


{{\bar{y}}_{i\cdot }}-{{\bar{y}}_{j\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}}\,\!


For example, an estimate of the difference in the first and second treatment means of the lathe speed, {{\mu }_{1}}-{{\mu }_{2}}\,\!, is:

\begin{align}
{{{\hat{\mu }}}_{1}}-{{{\hat{\mu }}}_{2}}= & {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }} \\ 
= & 8.5-13.25 \\ 
= & -4.75  
\end{align}\,\!


The pooled standard error for this difference is:


\begin{align}
Pooled\text{ }Std.\text{ }Error= & \sqrt{var({{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }})} \\ 
= & \sqrt{2{{{\hat{\sigma }}}^{2}}/m} \\ 
= & \sqrt{2M{{S}_{E}}/m} \\ 
= & \sqrt{\frac{2(74.5/9)}{4}} \\ 
= & 2.0344  
\end{align}\,\!


To test {{H}_{0}}:{{\mu }_{1}}-{{\mu }_{2}}=0\,\!, the t\,\! statistic is:


\begin{align}
{{t}_{0}}= & \frac{{{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}-({{\mu }_{1}}-{{\mu }_{2}})}{\sqrt{2M{{S}_{E}}/m}} \\ 
= & \frac{-4.75-(0)}{\sqrt{\tfrac{2(74.5/9)}{4}}} \\ 
= & \frac{-4.75}{2.0344} \\ 
= & -2.3348  
\end{align}\,\!


In DOE++, the value of the statistic is displayed in the Mean Comparisons table under the column T Value as shown in the figure below. The 90% confidence interval on the difference {{\mu }_{1}}-{{\mu }_{2}}\,\! is:


\begin{align}
= & {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{\alpha /2,dof(S{{S}_{E}})}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ 
= & {{{\bar{y}}}_{1\cdot }}-{{{\bar{y}}}_{2\cdot }}\pm {{t}_{0.05,9}}\sqrt{\frac{2M{{S}_{E}}}{m}} \\ 
= & -4.75\pm 1.833\sqrt{\frac{2(74.5/9)}{4}} \\ 
= & -4.75\pm 1.833(2.0344) \\ 
= & -4.75\pm 3.729  
\end{align}\,\!


Hence the 90% limits on {{\mu }_{1}}-{{\mu }_{2}}\,\! are -8.479\,\! and -1.021\,\!, respectively. These values are displayed under the Low CI and High CI columns in the following figure. Since the confidence interval for this pair of means does not include zero, it can be concluded that these means are significantly different at 90% confidence. This conclusion can also be reached using the p\,\! value, noting that the hypothesis is two-sided. The p\,\! value corresponding to the statistic {{t}_{0}}=-2.3348\,\!, based on the t\,\! distribution with 9 degrees of freedom, is:


\begin{align}
p\text{ }value= & 2\times (1-P(T\le |{{t}_{0}}|)) \\ 
= & 2\times (1-P(T\le 2.3348)) \\ 
= & 2\times (1-0.9778) \\ 
= & 0.0444  
\end{align}\,\!


Since p\,\! value < 0.1, the means are significantly different at 90% confidence. Bounds on the difference between other treatment pairs can be obtained in a similar manner and it is concluded that all treatments are significantly different.
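The comparison of the first two treatment means can be reproduced numerically; a short Python sketch using the quantities computed in the text:

```python
import math

mse = 74.5 / 9     # error mean square
m = 4              # replicates per level (balanced design)
diff = 8.5 - 13.25 # ybar1. - ybar2.
t_crit = 1.833     # t_{0.05, 9}

pooled_se = math.sqrt(2 * mse / m)
t0 = diff / pooled_se
lo, hi = diff - t_crit * pooled_se, diff + t_crit * pooled_se

assert abs(pooled_se - 2.0344) < 1e-3
assert abs(t0 - (-2.3348)) < 1e-3
assert not (lo <= 0.0 <= hi)  # interval excludes zero: means differ at 90%
```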


Mean Comparisons table for the data in the first table.

Residual Analysis

Plots of residuals, {{e}_{ij}}\,\!, similar to the ones discussed in the previous chapters on regression, are used to ensure that the assumptions associated with the ANOVA model are not violated. The ANOVA model assumes that the random error terms, {{\epsilon }_{ij}}\,\!, are normally and independently distributed with the same variance for each treatment. The normality assumption can be checked by obtaining a normal probability plot of the residuals.


Equality of variance is checked by plotting residuals against the treatments and the treatment averages, {{\bar{y}}_{i\cdot }}\,\! (also referred to as fitted values), and inspecting the spread in the residuals. If a pattern is seen in these plots, a suitable transformation on the response is needed to restore variance equality; Box-Cox transformations are discussed in the next section. To check for independence of the random error terms, residuals are plotted against time or run order to ensure that no pattern exists in these plots. Residual plots for the given example are shown in the following two figures. The plots show that the assumptions associated with the ANOVA model are not violated.


Normal probability plot of residuals for the single factor experiment in the first table.


Plot of residuals against fitted values for the single factor experiment in the first table.

Box-Cox Method

Transformations on the response may be used when residual plots for an experiment show a pattern. This indicates that the equality of variance does not hold for the residuals of the given model. The Box-Cox method can be used to automatically identify a suitable power transformation for the data based on the relation:


{{Y}^{*}}={{Y}^{\lambda }}\,\!


\lambda \,\! is determined from the given data such that S{{S}_{E}}\,\! is minimized. The values of {{Y}^{\lambda }}\,\! are not used directly because of issues with calculating and comparing S{{S}_{E}}\,\! values across different values of \lambda \,\!. For example, for \lambda =0\,\! all response values would become 1. Therefore, the following relation is used to obtain {{Y}^{\lambda }}\,\!:


{{Y}^{\lambda }}=\begin{cases}
   \frac{{{Y}^{\lambda }}-1}{\lambda {{{\dot{y}}}^{\lambda -1}}} & \lambda \ne 0  \\
   \dot{y}\ln y & \lambda =0  \\
\end{cases}\,\!


where \dot{y}={{\ln }^{-1}}[(1/n)\sum \ln y]\,\! is the geometric mean of the response values. Once all {{Y}^{\lambda }}\,\! values are obtained for a value of \lambda \,\!, the corresponding S{{S}_{E}}\,\! for these values is obtained using {{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!. The process is repeated for a number of \lambda \,\! values to obtain a plot of S{{S}_{E}}\,\! against \lambda \,\!. Then the value of \lambda \,\! corresponding to the minimum S{{S}_{E}}\,\! is selected as the required transformation for the given data. DOE++ plots \ln S{{S}_{E}}\,\! values against \lambda \,\! values because the range of S{{S}_{E}}\,\! values is large and, if this were not done, all values could not be displayed on the same plot. The range of search for the best \lambda \,\! value in the software is from -5\,\! to 5\,\!, because larger values of \lambda \,\! are usually not meaningful. DOE++ also displays a recommended transformation based on the best \lambda \,\! value obtained, as per the second table.


Recommended Box-Cox power transformations.


Confidence intervals on the selected \lambda \,\! values are also available. Let S{{S}_{E}}(\lambda )\,\! be the value of S{{S}_{E}}\,\! corresponding to the selected value of \lambda \,\!. Then, to calculate the 100 (1- \alpha \,\!) percent confidence intervals on \lambda \,\!, we need to calculate S{{S}^{*}}\,\! as shown next:


S{{S}^{*}}=S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right)\,\!


The required limits for \lambda \,\! are the two values of \lambda \,\! corresponding to the value S{{S}^{*}}\,\! (on the plot of S{{S}_{E}}\,\! against \lambda \,\!). If the limits for \lambda \,\! do not include the value of one, then the transformation is applicable for the given data. Note that the power transformations are not defined for response values that are negative or zero. DOE++ deals with negative and zero response values using the following equations (that involve addition of a suitable quantity to all of the response values if a zero or negative response value is encountered).


\begin{align}
 y(i)& =  y(i)+\left| {{y}_{\min }} \right|\times 1.1 & \text{Negative Response} \\ 
 y(i)& =  y(i)+1                                      & \text{Zero Response}
\end{align}\,\!


Here {{y}_{\min }}\,\! represents the minimum response value and \left| {{y}_{\min }} \right|\,\! represents the absolute value of the minimum response.

Example

To illustrate the Box-Cox method, consider the experiment given in the first table. Transformed response values for various values of \lambda \,\! can be calculated using the equation for {Y}^{\lambda}\,\! given in Box-Cox Method. Knowing the hat matrix, H\,\!, S{{S}_{E}}\,\! values corresponding to each of these \lambda \,\! values can easily be obtained using {{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!. S{{S}_{E}}\,\! values calculated for \lambda \,\! values between -5\,\! and 5\,\! for the given data are shown below:


\begin{matrix}
   \lambda  & {} & S{{S}_{E}} & \ln S{{S}_{E}}  \\
   {} & {} & {} & {}  \\
   -5 & {} & 5947.8 & 8.6908  \\
   -4 & {} & 1946.4 & 7.5737  \\
   -3 & {} & 696.5 & 6.5461  \\
   -2 & {} & 282.2 & 5.6425  \\
   -1 & {} & 135.8 & 4.9114  \\
   0 & {} & 83.9 & 4.4299  \\
   1 & {} & 74.5 & 4.3108  \\
   2 & {} & 101.0 & 4.6154  \\
   3 & {} & 190.4 & 5.2491  \\
   4 & {} & 429.5 & 6.0627  \\
   5 & {} & 1057.6 & 6.9638  \\
\end{matrix}\,\!
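These tabulated values can be reproduced with a short Python sketch. The level-2 values 14 and 10 and the level-3 value 16 are reconstructed to match the article's sums of squares (the run order within levels is an assumption), and the within-treatment sum of squares is used in place of {{y}^{\lambda \prime }}[I-H]{{y}^{\lambda }}\,\!, to which it is equivalent for this one-way model:

```python
import math

# values 14, 10 and 16 reconstructed to match the published sums of squares
groups = [[6, 13, 7, 8], [13, 16, 14, 10], [23, 20, 16, 18]]
all_y = [v for g in groups for v in g]

# y-dot: geometric mean of the responses
y_dot = math.exp(sum(math.log(v) for v in all_y) / len(all_y))

def transform(y, lam):
    """Scaled Box-Cox power transform from the text."""
    if lam == 0:
        return y_dot * math.log(y)
    return (y ** lam - 1) / (lam * y_dot ** (lam - 1))

def sse(lam):
    """Within-treatment (error) sum of squares of the transformed data."""
    total = 0.0
    for g in groups:
        z = [transform(v, lam) for v in g]
        zbar = sum(z) / len(z)
        total += sum((zi - zbar) ** 2 for zi in z)
    return total

assert abs(sse(1.0) - 74.5) < 1e-6  # lambda = 1 is a shift only: SSE unchanged
assert abs(sse(0.0) - 83.9) < 0.1   # matches the tabulated value for lambda = 0
```

Evaluating `sse` over a grid of \lambda \,\! values reproduces the table above; minimizing it numerically gives the best \lambda \,\!.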


A plot of \ln S{{S}_{E}}\,\! for various \lambda \,\! values, as obtained from DOE++, is shown in the following figure. The value of \lambda \,\! that gives the minimum S{{S}_{E}}\,\! is identified as 0.7841. The S{{S}_{E}}\,\! value corresponding to this value of \lambda \,\! is 73.74. A 90% confidence interval on this \lambda \,\! value is calculated as follows. S{{S}^{*}}\,\! can be obtained as shown next:


\begin{align}
S{{S}^{*}}= & S{{S}_{E}}(\lambda )\left( 1+\frac{t_{\alpha /2,dof(S{{S}_{E}})}^{2}}{dof(S{{S}_{E}})} \right) \\ 
= & 73.74\left( 1+\frac{t_{0.05,9}^{2}}{9} \right) \\ 
= & 73.74\left( 1+\frac{{{1.833}^{2}}}{9} \right) \\ 
= & 101.27  
\end{align}\,\!


Therefore, \ln S{{S}^{*}}=4.6178\,\!. The \lambda \,\! values corresponding to this value from the following figure are -0.4689\,\! and 2.0054\,\!. Therefore, the 90% confidence limits on \lambda \,\! are -0.4689\,\! and 2.0054\,\!. Since the confidence limits include the value of 1, a transformation is not required for the data in the first table.


Box-Cox power transformation plot for the data in the first table.