# Unbalanced Two Way ANOVA Calculator

No repeated measures - a repeated measure occurs when the same subjects are measured under multiple conditions or at multiple time points.

Two factor ANOVA with replication - enter **all the replications in one cell**, separated by Enter or a comma (,).

ANOVA without replication - enter **one value per cell**.

The tool ignores empty cells or non-numeric cells.

## Unbalanced two way ANOVA calculator

#### Models

There are many possible models; this calculator currently deals only with the following models:

- **Fixed effects model (A-Fixed, B-Fixed), no repeats** - both factors are fixed.
- **Mixed effects model (A-Random, B-Fixed), no repeats** - factor A is random, factor B is fixed, each subject is measured only once.
- **Mixed effects model (A-Fixed, B-Random), no repeats** - factor A is fixed, factor B is random, each subject is measured only once.
- **Random effects model (A-Random, B-Random), no repeats** - both factors are random.

#### Design

##### Balanced design

A balanced design has the same number of observations in each cell - each combination of factor levels.

Since there are no overlaps between any combination of the factors and the interaction: SS_{T} = SS_{A} + SS_{B} + SS_{AB} + SS_{E}.
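This decomposition can be checked directly. Below is a minimal numpy sketch, using made-up data for a balanced 2×2 design with three replicates per cell; the multipliers count how many observations share each mean (for example, b·r observations share each Ȳ_{i}):

```python
import numpy as np

# Made-up balanced data: data[i][j] holds the replicates for level i of
# factor A and level j of factor B; shape is (a, b, replicates).
data = np.array([[[4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
                 [[5.0, 6.0, 7.0], [9.0, 10.0, 11.0]]])
a, b, r = data.shape

grand = data.mean()                  # Ȳ, the grand mean
row_means = data.mean(axis=(1, 2))   # Ȳ_{i}, one per level of A
col_means = data.mean(axis=(0, 2))   # Ȳ_{j}, one per level of B
cell_means = data.mean(axis=2)       # Ȳ_{i,j}, one per cell

ss_t = ((data - grand) ** 2).sum()
ss_a = b * r * ((row_means - grand) ** 2).sum()
ss_b = a * r * ((col_means - grand) ** 2).sum()
ss_ab = r * ((cell_means - row_means[:, None]
              - col_means[None, :] + grand) ** 2).sum()
ss_e = ((data - cell_means[:, :, None]) ** 2).sum()

# In a balanced design the components add up exactly to SS_T.
print(np.isclose(ss_t, ss_a + ss_b + ss_ab + ss_e))  # True
```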

##### Unbalanced design

When the design is **unbalanced**, it leads to correlations. When the distribution of data between the cells matches the population distribution, the correlation will exist only between each factor and the interaction. However, when the distribution of data between the cells does not match the population distribution, correlations will occur both between each factor and the interaction, and among the factors themselves.

When correlation exists, there is overlap among the sums of squares (SS). If we calculate the SS as we do for the balanced model, the result will be incorrect, yielding a combined SS larger than the actual SS, and there is no unique way to allocate the shared SS between the two factors and between the factors and the interaction. There are several methods for dealing with the shared sum of squares.

##### Unbalanced design - Type I (Sequential Sum of Squares):

Type I is sequential: the first sum of squares (SS) you calculate receives the shared sum of squares, so the order matters! The sum of squares formulas follow:

Factor A: SS_{A} = SSR(y = β_{0} + β_{1}A).

Factor B: SS_{B|A} = SSR(y = β_{0} + β_{1}A + β_{2}B) - SS_{A}.

Interaction AB: SS_{AB|A,B} = SSR(y = β_{0} + β_{1}A + β_{2}B + β_{3}AB) - SSR(y = β_{0} + β_{1}A + β_{2}B).
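The sequential formulas above can be sketched as regression computations. A minimal numpy sketch, assuming dummy (0/1) coding and made-up unbalanced data (the variable names are illustrative):

```python
import numpy as np

def ssr(X, y):
    """Regression sum of squares for the linear model y = X @ beta
    (X must include an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (((X @ beta) - y.mean()) ** 2).sum()

# Made-up unbalanced data: two levels per factor, unequal cell sizes.
y = np.array([3.0, 4.0, 6.0, 7.0, 5.0, 9.0, 10.0])
dA = np.array([0, 0, 0, 0, 1, 1, 1], dtype=float)  # dummy for factor A
dB = np.array([0, 0, 1, 1, 0, 1, 1], dtype=float)  # dummy for factor B
dAB = dA * dB                                      # interaction column
ones = np.ones_like(y)

X_a = np.column_stack([ones, dA])            # y ~ A
X_ab = np.column_stack([ones, dA, dB])       # y ~ A + B
X_full = np.column_stack([ones, dA, dB, dAB])  # y ~ A + B + AB

ss_A = ssr(X_a, y)                           # SS_A: A entered first
ss_B_given_A = ssr(X_ab, y) - ssr(X_a, y)    # SS_{B|A}
ss_AB = ssr(X_full, y) - ssr(X_ab, y)        # SS_{AB|A,B}
```

Because the comparisons telescope, the three Type I components always add up to the regression SS of the full model.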

##### Unbalanced design - Type II (Partial Sum of Squares):

Type II is conservative: it assumes there is no interaction between the factors and ignores the shared SS between the factors. The sum of squares formulas follow:

Factor A: SS_{A|B} = SSR(y = β_{0} + β_{1}A + β_{2}B) - SS_{B}.

Factor B: SS_{B|A} = SSR(y = β_{0} + β_{1}A + β_{2}B) - SS_{A}.

In this case we assume no interaction, but we also test it using the following:

Interaction AB: SS_{AB|A,B} = SSR(y = β_{0} + β_{1}A + β_{2}B + β_{3}AB) - SSR(y = β_{0} + β_{1}A + β_{2}B).
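The Type II formulas can be sketched the same way: each factor's SS is taken after adjusting for the other factor, ignoring the interaction. A minimal numpy sketch with made-up unbalanced data and dummy coding (names are illustrative):

```python
import numpy as np

def ssr(X, y):
    """Regression sum of squares for the linear model y = X @ beta."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (((X @ beta) - y.mean()) ** 2).sum()

# Made-up unbalanced data, dummy-coded factors.
y = np.array([3.0, 4.0, 6.0, 7.0, 5.0, 9.0, 10.0])
dA = np.array([0, 0, 0, 0, 1, 1, 1], dtype=float)
dB = np.array([0, 0, 1, 1, 0, 1, 1], dtype=float)
ones = np.ones_like(y)

X_A = np.column_stack([ones, dA])        # y ~ A
X_B = np.column_stack([ones, dB])        # y ~ B
X_AB = np.column_stack([ones, dA, dB])   # y ~ A + B

# Type II: each factor adjusted for the other, interaction ignored.
ss_A_given_B = ssr(X_AB, y) - ssr(X_B, y)   # SS_{A|B}
ss_B_given_A = ssr(X_AB, y) - ssr(X_A, y)   # SS_{B|A}
```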

##### Unbalanced design - Type III (Marginal Sum of Squares)

Type III assumes there is an interaction between the factors; it ignores all the shared SS between the factors and between the factors and the interaction. The sum of squares formulas follow:

Factor A: SS_{A|B, AB} = SSR(y = β_{0} + β_{1}A + β_{2}B + β_{3}AB) - SSR(y = β_{0} + β_{1}B + β_{2}AB).

Factor B: SS_{B|A, AB} = SSR(y = β_{0} + β_{1}A + β_{2}B + β_{3}AB) - SSR(y = β_{0} + β_{1}A + β_{2}AB).

In this case the interaction is part of the model, and it is tested using the following:

Interaction AB: SS_{AB|A,B} = SSR(y = β_{0} + β_{1}A + β_{2}B + β_{3}AB) - SSR(y = β_{0} + β_{1}A + β_{2}B).

If the interaction does not exist in the population, the Type II method is a more powerful test than the Type III method.
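One detail the Type III formulas hide: the results depend on how the factors are coded, and the marginal comparisons are only meaningful with sum-to-zero (effect) coding, not 0/1 dummy coding. A minimal numpy sketch under that assumption, with made-up unbalanced data:

```python
import numpy as np

def ssr(X, y):
    """Regression sum of squares for the linear model y = X @ beta."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (((X @ beta) - y.mean()) ** 2).sum()

# Made-up unbalanced data; levels coded +1 / -1 (sum-to-zero contrasts).
y = np.array([3.0, 4.0, 6.0, 7.0, 5.0, 9.0, 10.0])
A = np.array([0, 0, 0, 0, 1, 1, 1])
B = np.array([0, 0, 1, 1, 0, 1, 1])
eA = np.where(A == 0, 1.0, -1.0)   # effect coding for A
eB = np.where(B == 0, 1.0, -1.0)   # effect coding for B
eAB = eA * eB                      # interaction column
ones = np.ones_like(y)

X_full = np.column_stack([ones, eA, eB, eAB])  # y ~ A + B + AB

# Type III: drop one term at a time from the full model.
ss_A = ssr(X_full, y) - ssr(np.column_stack([ones, eB, eAB]), y)  # SS_{A|B,AB}
ss_B = ssr(X_full, y) - ssr(np.column_stack([ones, eA, eAB]), y)  # SS_{B|A,AB}
ss_AB = ssr(X_full, y) - ssr(np.column_stack([ones, eA, eB]), y)  # SS_{AB|A,B}
```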

#### Glossary

SS_{T} is the sum of squared differences between the dependent variable and the grand mean.

SS_{Model} = SS_{A} + SS_{B} + SS_{AB}.

For the balanced model and for Type I, SS_{Model} + SS_{E} = SS_{T}.

#### Targets

The two way ANOVA test checks the following targets using sample data:

- Checks if the difference between **Factor A** averages of two or more categories is significant
- Checks if the difference between **Factor B** averages of two or more categories is significant
- Checks if there is an interaction between **Factor A** and **Factor B**

The F statistic represents the ratio of the variance between the groups to the variance within the groups. The smaller the F statistic, the more likely it is that the averages are equal.

The ANOVA test is a **right-tailed** F test; only the right tail is used.

## Two-way ANOVA

H_{0}: μ_{1} = .. = μ_{a} (Factor A)

H_{0}: μ_{1} = .. = μ_{b} (Factor B)

H_{0}: Interaction(A_{i}B_{j}) = 0 (∀ i = 1 to a, j = 1 to b)

There is no interaction between variable A and variable B, i.e., for all the cells, the effect of variable A on the cells' means does not depend on the effect of variable B, and vice versa.

Ratio | Fixed Model | Mixed Model | Random Model | Mixed Repeated |
---|---|---|---|---|
F_{A} | MS_{A} / MS_{E} | MS_{A} / MS_{AB} | MS_{A} / MS_{AB} | MS_{A} / MS_{SWA} |
F_{B} | MS_{B} / MS_{E} | MS_{B} / MS_{E} | MS_{B} / MS_{AB} | MS_{B} / MS_{BSWA} |
F_{AB} | MS_{AB} / MS_{E} | MS_{AB} / MS_{E} | MS_{AB} / MS_{E} | MS_{AB} / MS_{BSWA} |

## Assumptions

- The dependent variable is continuous (ratio or interval)
- Two categorical independent variables
- Independent observations (no repeated measure)
- The residuals are normally distributed
- Homogeneity of variances: a similar variance in each cell

## Required Sample Data

Sample data from all compared groups.

## Parameters

**a** - the number of categories in variable A, the number of rows.

**b** - the number of categories in variable B, the number of columns.

**n_{i}** - sample size of category i of variable A (row i).

**n_{j}** - sample size of category j of variable B (column j).

**n_{i,j}** - sample size of cell i,j (row i, column j). In the balanced design n_{i,j} = n/(a*b).

**n** - overall sample size, including all the groups (Σn_{i,j}, i=1 to a, j=1 to b).

**Ȳ_{i}** - average of all the observations of category i of variable A (row i).

**Ȳ_{j}** - average of all the observations of category j of variable B (column j).

**Ȳ** - overall average (ΣY_{i,j,k} / n, i=1 to a, j=1 to b, k=1 to n_{i,j}).

### Repeated measures ANOVA

**s** - the order of the subject within category i (subject 1 in category 1 is different from subject 1 in category 2).

**sub** - the number of subjects per cell; a cell is one combination of variable A and variable B. For the balanced design: N = a*b*sub.

**Ȳ_{i,s}** - subject average: the average of all the observations of subject s within category i (ΣY_{i,j,s} / b, j=1 to b).

**Ȳ** - overall average (ΣY_{i,j,s} / n).

## Results calculations

### Sum of squares

The sum of squares accumulates the squared differences related to the effect we try to estimate.

**SS_{A}** - the squared differences related to the effect of variable A. You compare the average of every category to the total average. This is the same value as the sum of squares between groups in one way ANOVA.

**SS_{B}** - the same as SS_{A}, for variable B.

**SS_{AB}** - the squared differences related to the effect of the combination of variable A and variable B in each cell. Since we try to understand the influence of the interaction AB, the interaction of the specific value of variable A and the specific value of variable B, we take the average of each cell, remove the influence of variable A and variable B, and compare to the total average.

**A effect** = Ȳ_{i} - Ȳ.

**B effect** = Ȳ_{j} - Ȳ.

**AB effect** = Cell average - A effect - B effect - Total average
= Ȳ_{i,j} - (Ȳ_{i} - Ȳ) - (Ȳ_{j} - Ȳ) - Ȳ
= Ȳ_{i,j} - Ȳ_{i} - Ȳ_{j} + Ȳ.

Take the square of each difference: (Ȳ_{i,j} - Ȳ_{i} - Ȳ_{j} + Ȳ)^{2}.

Count the squared differences of each value in the cell, hence multiply by the sample size of each cell (n_{i,j}):

SS_{AB} = Σ_{i}^{a}Σ_{j}^{b} n_{i,j}(Ȳ_{i,j} - Ȳ_{i} - Ȳ_{j} + Ȳ)^{2}

### Fixed and Random Effects

The fixed and random effects are related to the independent variables.

#### Fixed Effect

The effect is constant across individuals.

- The categories of the variable contain the entire list of categories
- The effect of this variable is interesting; the difference between the categories is important
- There is no known pattern in the difference between the categories

#### Random Effect

The effect varies across individuals; the individuals may be people, products, etc.

- The categories' list is only a sample from the entire list of categories
- The effect of this variable is not interesting by itself; the difference between the categories is not important
- There is no known pattern in the difference between the categories

For example, the schools in a study may be a random factor: they are a sample from the entire population of schools; there is no pattern to the difference between the schools (if there were a pattern, it would be another factor, like school size); and each school is not important by itself.

When you change the **interaction field** or the **model**, the following ANOVA table and diagram will be adjusted!

### ANOVA table - With interaction - Type II

Source | Degrees of Freedom (DF) | Sum of Squares (SS) | Mean Square (MS) | F statistic | p-value |
---|---|---|---|---|---|
Factor A (rows): between the categories of factor A | DF_{A} = a - 1 | SS_{A} = SS(A\|B) = SS(A, B) - SS(B) | MS_{A} = SS_{A} / DF_{A} | F_{A} = MS_{A} / MS_{E} | P(x > F_{A}) |
Factor B (columns): between the categories of factor B | DF_{B} = b - 1 | SS_{B} = SS(B\|A) = SS(A, B) - SS(A) | MS_{B} = SS_{B} / DF_{B} | F_{B} = MS_{B} / MS_{E} | P(x > F_{B}) |
Interaction AB: between the cells after removing the factor A and factor B effects | DF_{AB} = (a - 1)(b - 1) | SS_{AB} = SS(AB\|A, B) = SS(A, B, AB) - SS(A, B) | MS_{AB} = SS_{AB} / DF_{AB} | F_{AB} = MS_{AB} / MS_{E} | P(x > F_{AB}) |
Error: within the cells | DF_{E} = n - a*b | SS_{E} = Σ_{i}^{a}Σ_{j}^{b}Σ_{k}^{n_{i,j}}(Y_{i,j,k} - Ȳ_{i,j})^{2} | MS_{E} = SS_{E} / DF_{E} | | |
Total: all the deviations from the average | DF_{T} = n - 1 | SS_{T} = Σ_{i}^{a}Σ_{j}^{b}Σ_{k}^{n_{i,j}}(Y_{i,j,k} - Ȳ)^{2} = Sample Variance * (n - 1) | MS_{T} = S^{2} = SS_{T} / (n - 1) | | |
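The Type II table can be reproduced with a short regression-based numpy sketch (made-up unbalanced data; the SS(·) differences are computed from residual sums of squares, which is equivalent). p-values would additionally require an F-distribution tail function such as scipy.stats.f.sf:

```python
import numpy as np

def sse(X, y):
    """Residual sum of squares for the linear model y = X @ beta."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return ((y - X @ beta) ** 2).sum()

# Made-up unbalanced 2x2 data (a = b = 2, unequal cell sizes).
y = np.array([3.0, 4.0, 6.0, 7.0, 5.0, 9.0, 10.0, 8.0])
A = np.array([0, 0, 0, 0, 1, 1, 1, 1])
B = np.array([0, 0, 1, 1, 0, 1, 1, 1])
a, b, n = 2, 2, len(y)

ones = np.ones_like(y)
dA, dB = A.astype(float), B.astype(float)
X_A = np.column_stack([ones, dA])                 # y ~ A
X_B = np.column_stack([ones, dB])                 # y ~ B
X_AB = np.column_stack([ones, dA, dB])            # y ~ A + B
X_full = np.column_stack([ones, dA, dB, dA * dB]) # y ~ A + B + AB

ss_a = sse(X_B, y) - sse(X_AB, y)      # SS(A|B) = SS(A, B) - SS(B)
ss_b = sse(X_A, y) - sse(X_AB, y)      # SS(B|A) = SS(A, B) - SS(A)
ss_ab = sse(X_AB, y) - sse(X_full, y)  # SS(AB|A, B)
ss_e = sse(X_full, y)                  # SS_E

df_a, df_b, df_ab, df_e = a - 1, b - 1, (a - 1) * (b - 1), n - a * b
F_a = (ss_a / df_a) / (ss_e / df_e)
F_b = (ss_b / df_b) / (ss_e / df_e)
F_ab = (ss_ab / df_ab) / (ss_e / df_e)
# Right-tailed p-values, e.g.: scipy.stats.f.sf(F_a, df_a, df_e)
```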

### Sum of squares diagram - with interaction

In the following diagram you may see the differences per each observation Y_{i,j,k} that are used to calculate the sums of squares.

A effect: Ȳ_{i} - Ȳ.

B effect: Ȳ_{j} - Ȳ.

AB effect: Ȳ_{i,j} - Ȳ_{i} - Ȳ_{j} + Ȳ.

Error: Y_{i,j,k} - Ȳ_{i,j}.

Total: Y_{i,j,k} - Ȳ.