This tool calculates and tests the correlation.
The covariance measures the relationship between two variables.
The covariance is unbounded: it ranges from negative infinity to positive infinity. For independent variables, the covariance is zero.
Positive covariance - the changes go in the same direction: when one variable increases, the second variable usually increases as well, and when one variable decreases, the second usually decreases too.
Negative covariance - the changes go in opposite directions: when one variable increases, the second variable usually decreases, and when one variable decreases, the second usually increases.
$$S_{XY} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{n - 1}$$
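As a quick check of the formula, here is a minimal sketch in Python with NumPy; the data values are made up for illustration.

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])
y = np.array([1.0, 3.0, 2.0, 5.0])

# S_XY = sum((x_i - x_bar)(y_i - y_bar)) / (n - 1)
s_xy = np.sum((x - x.mean()) * (y - y.mean())) / (len(x) - 1)

# np.cov uses the same n - 1 denominator, so the two values should match.
print(s_xy, np.cov(x, y)[0, 1])
```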
Two variables are said to be correlated, or statistically associated, when the value of one variable can at least partially predict the value of the other variable.
The correlation is a standardized covariance; the correlation ranges between -1 and 1.
The correlation ignores the question of cause and effect: whether X depends on Y, Y depends on X, or both variables depend on a third variable Z.
As with the covariance, the correlation of independent variables is zero.
Positive correlation - the changes go in the same direction: when one variable increases, the second variable usually increases as well, and when one variable decreases, the second usually decreases too.
Negative correlation - the changes go in opposite directions: when one variable increases, the second variable usually decreases, and when one variable decreases, the second usually increases.
Perfect correlation - when you know the value of one variable, you can calculate the exact value of the second variable. For a perfect positive correlation r = 1, and for a perfect negative correlation r = -1.
The Pearson correlation coefficient is a type of correlation that measures the linear association between two variables.
Population definition:

$$\rho_{XY} = \frac{E\left[(X - E[X])(Y - E[Y])\right]}{\sigma_X \sigma_Y} = \frac{\operatorname{Cov}(X, Y)}{\sigma_X \sigma_Y}$$

Sample estimate:

$$r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \sum_i (y_i - \bar{y})^2}} = \frac{S_{XY}}{S_X S_Y}$$
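The sample formula translates directly to code. A minimal sketch with NumPy and SciPy, using made-up data; `scipy.stats.pearsonr` should reproduce the manual value.

```python
import numpy as np
from scipy import stats

x = np.array([2.0, 4.0, 6.0, 8.0])
y = np.array([1.0, 3.0, 2.0, 5.0])

dx, dy = x - x.mean(), y - y.mean()
# r = sum(dx * dy) / sqrt(sum(dx^2) * sum(dy^2))
r = np.sum(dx * dy) / np.sqrt(np.sum(dx**2) * np.sum(dy**2))

# pearsonr returns the coefficient and a p-value; the coefficient should match.
print(r, stats.pearsonr(x, y)[0])
```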
The correlation coefficient itself also serves as the effect size.
Defining the level of the effect size is only a rule of thumb. The table below follows Cohen's guidelines (Cohen 1988, p. 413):
| Correlation value (r) | Level |
|---|---|
| \|r\| < 0.1 | Very small |
| 0.1 ≤ \|r\| < 0.3 | Small |
| 0.3 ≤ \|r\| < 0.5 | Medium |
| 0.5 ≤ \|r\| | Large |
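The thresholds are easy to encode. A small helper in Python that applies the table above; the function name is ours, not part of any library.

```python
def cohen_effect_size(r: float) -> str:
    """Classify a correlation coefficient per Cohen's rule-of-thumb thresholds."""
    abs_r = abs(r)  # the sign is ignored; only the magnitude matters
    if abs_r < 0.1:
        return "Very small"
    if abs_r < 0.3:
        return "Small"
    if abs_r < 0.5:
        return "Medium"
    return "Large"

print(cohen_effect_size(0.42))   # Medium
print(cohen_effect_size(-0.72))  # Large
```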
When the null hypothesis is ρ₀ = 0 (independent variables), and X and Y have a bivariate normal distribution or the sample size is large, you may use the t-test.
When ρ₀ ≠ 0, the sampling distribution of r is not symmetrical, hence you can't use the t distribution. In this case, you should use the Fisher transformation.
After applying the transformation, the sampling distribution tends toward the normal distribution.
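A minimal sketch of the Fisher transformation in Python, assuming bivariate normality; the sample values r = 0.62 and n = 30 are made up. It also shows the usual approximate confidence interval for ρ built on the transformed scale.

```python
import numpy as np
from scipy import stats

r, n = 0.62, 30          # sample correlation and sample size (illustrative)

r_prime = np.arctanh(r)  # Fisher transformation: r' = artanh(r)
se = 1 / np.sqrt(n - 3)  # standard error of r' is approximately 1/sqrt(n - 3)

z_crit = stats.norm.ppf(0.975)  # 95% two-sided critical value
lo, hi = r_prime - z_crit * se, r_prime + z_crit * se

# Transform the interval back to the correlation scale with tanh.
print(np.tanh(lo), np.tanh(hi))
```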
Spearman's rank correlation coefficient is a non-parametric statistic that measures the monotonic association between two variables.
What is a monotonic association? When one variable increases, the second variable usually also increases, or when one variable increases, the second variable usually decreases.
You may use Spearman's rank correlation when the two variables do not meet the Pearson correlation assumptions.
To compute Spearman's coefficient, rank the data separately for each variable and then calculate the Pearson correlation of the ranked data (see the sketch after the tables below).
The smallest value gets rank 1, the second smallest gets rank 2, and so on. Ranking in the opposite direction, with the largest value as 1, yields the same correlation value.
When the data contains repeated values, each of them gets the average of their ranks. In the example below, the two occurrences of the value 8 occupy ranks 4 and 5, hence both get the average rank: (4 + 5)/2 = 4.5.
Raw data:

| X | Y |
|---|---|
| 7.3 | 7 |
| 8 | 6.6 |
| 5.4 | 5.4 |
| 2.7 | 3.7 |
| 8 | 9.9 |
| 9.1 | 11 |

Ranked data:

| X | Y |
|---|---|
| 3 | 4 |
| 4.5 | 3 |
| 2 | 2 |
| 1 | 1 |
| 4.5 | 5 |
| 6 | 6 |
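A sketch of the ranking procedure in Python, using the example data above. SciPy's `rankdata` assigns average ranks to ties by default, so the two 8s get 4.5 each.

```python
import numpy as np
from scipy import stats

x = np.array([7.3, 8, 5.4, 2.7, 8, 9.1])
y = np.array([7, 6.6, 5.4, 3.7, 9.9, 11])

rx, ry = stats.rankdata(x), stats.rankdata(y)  # ties 8, 8 -> 4.5, 4.5
print(rx)  # [3.  4.5 2.  1.  4.5 6. ]

# Spearman's coefficient is the Pearson correlation of the ranks,
# which should agree with scipy.stats.spearmanr on the raw data.
print(stats.pearsonr(rx, ry)[0], stats.spearmanr(x, y)[0])
```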
When ρ₀ ≠ 0, the sampling distribution is not symmetric; in this case, the tool will use the normal distribution over the Fisher transformation.
When ρ₀ = 0, you have several options, including the t-test and the Fisher-transformation z-test shown below.
The confidence interval based on the Fisher transformation gives better results.
$$t = \frac{r\sqrt{n - 2}}{\sqrt{1 - r^2}}$$

$$z = \frac{r' - \rho'_0}{\sigma'}, \qquad \text{where } r' = \operatorname{artanh}(r),\ \rho'_0 = \operatorname{artanh}(\rho_0),\ \sigma' = \frac{1}{\sqrt{n - 3}}.$$
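A sketch of both test statistics with two-sided p-values in Python; the values r = 0.62, n = 30, and ρ₀ = 0.3 are made up for illustration.

```python
import numpy as np
from scipy import stats

r, n = 0.62, 30  # sample correlation and sample size (illustrative)

# t-test for H0: rho = 0, with n - 2 degrees of freedom
t = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
p_t = 2 * stats.t.sf(abs(t), df=n - 2)

# Fisher z-test for H0: rho = rho0 (also valid when rho0 != 0)
rho0 = 0.3
z = (np.arctanh(r) - np.arctanh(rho0)) / (1 / np.sqrt(n - 3))
p_z = 2 * stats.norm.sf(abs(z))

print(t, p_t, z, p_z)
```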
Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers.