**The probability that the test will reject an incorrect H_{0}, for a predefined effect size.**

The significance level (α) represents the probability of rejecting a correct H_{0} (Type I error).

This is a risk we are not willing to take. Usually α is 0.05, meaning there is a 5% chance of making this error.

When planning the study, we determine the required a priori test power.

The statistical power represents the probability that the test will reject an incorrect H_{0}, but it depends on the value of the mean under H_{1}, i.e., on the effect size.

Commonly, an a priori power of 0.8 is required, where

statistical power = 1 - β.

β is the probability of a Type II error, meaning the test won't reject an incorrect H_{0}.

Be careful not to confuse the a priori power with the observed power; they are different measurements.

The a priori power is calculated before collecting data, while the observed power is calculated after collecting data.

The observed power represents the probability that the test would reject H_{0}, computed from the observed effect.
It is directly related to the significance level (α): when H_{0} is accepted, the observed effect is small and the observed power is low;
conversely, when H_{0} is rejected, the observed effect is large and the observed power is high.

In this tool, I prefer to treat the expected effect as a proportion of the H_{0} value.

**Two-tailed test, H_{1}: μ ≠ μ_{0}**

The test needs to reject H_{0} when μ = μ_{1} = μ_{0} ± d.

Since the z distribution is symmetrical, we get the same result when calculating for μ_{0} + d or for μ_{0} - d:

$$p(z < Z_1) = \alpha/2 \:,\: p(z < Z_2) = 1-\alpha/2$$
The test's acceptance range for x̄ is [R_{1}, R_{2}].
Reject H_{0} when x̄ < R_{1} or x̄ > R_{2}.

- $$p(z < \frac { R_1 - \mu_0} {\frac {\sigma }{\sqrt{n}}}) = \alpha/2 \:\rightarrow\: R_1= \mu_0 + Z_{\alpha/2}* \frac {\sigma }{\sqrt{n}}$$
- $$p(z < \frac { R_2 - \mu_0} {\frac {\sigma }{\sqrt{n}}}) = 1-\alpha/2 \:\rightarrow\: R_2= \mu_0 + Z_{1-\alpha/2}* \frac {\sigma }{\sqrt{n}}$$
- Now we can calculate the probability to reject H_{0} when we know the mean is μ_{1} instead of μ_{0}: $$\overline{X} \sim N(\mu_1, \frac {\sigma }{\sqrt{n}}) $$
- $$power = p(\overline{x} < R_1 \mid \mu=\mu_1 ) + p(\overline{x} > R_2 \mid \mu=\mu_1) $$
- $$power = p(z < \frac { R_1 - \mu_1} {\frac {\sigma }{\sqrt{n}}}) + 1 - p(z < \frac { R_2 - \mu_1} {\frac {\sigma }{\sqrt{n}}})$$
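The two-tailed power calculation above can be sketched in Python with the standard library; the values μ_{0} = 100, μ_{1} = 105, σ = 10, n = 16 are illustrative assumptions, not numbers from the text.

```python
from math import sqrt
from statistics import NormalDist

def power_two_tailed(mu0, mu1, sigma, n, alpha=0.05):
    """Power of a two-tailed one-sample z-test when the true mean is mu1."""
    se = sigma / sqrt(n)                       # standard error of the sample mean
    z = NormalDist()                           # standard normal distribution
    r1 = mu0 + z.inv_cdf(alpha / 2) * se       # lower acceptance bound R1
    r2 = mu0 + z.inv_cdf(1 - alpha / 2) * se   # upper acceptance bound R2
    xbar = NormalDist(mu1, se)                 # distribution of x-bar when mu = mu1
    return xbar.cdf(r1) + 1 - xbar.cdf(r2)     # p(x-bar < R1) + p(x-bar > R2)

print(power_two_tailed(mu0=100, mu1=105, sigma=10, n=16))  # ≈ 0.516
```

Note that most of the power comes from the tail on the same side as the true mean; the opposite tail contributes almost nothing.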

**Left-tailed test, H_{1}: μ < μ_{0}**

The test needs to reject H_{0} when μ = μ_{1} = μ_{0} - d.

$$p(z < Z_1) = \alpha$$
The test's acceptance range for x̄ is [R_{1}, ∞). Reject H_{0} when x̄ < R_{1}.

- $$p(z < \frac { R_1 - \mu_0} {\frac {\sigma }{\sqrt{n}}}) = \alpha \:\rightarrow\: R_1= \mu_0 + Z_{\alpha}* \frac {\sigma }{\sqrt{n}}$$
- Now we can calculate the probability to reject H_{0} when we know the mean is μ_{1} instead of μ_{0}: $$\overline{X} \sim N(\mu_1, \frac {\sigma }{\sqrt{n}}) $$
- $$power = p(\overline{x} < R_1 \mid \mu=\mu_1 ) $$
- $$power = p(z < \frac { R_1 - \mu_1} {\frac {\sigma }{\sqrt{n}}})$$
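A minimal Python sketch of the left-tailed power formula above, with assumed illustrative values (μ_{0} = 100, μ_{1} = 95, σ = 10, n = 16):

```python
from math import sqrt
from statistics import NormalDist

def power_left_tailed(mu0, mu1, sigma, n, alpha=0.05):
    """Power of a left-tailed one-sample z-test when the true mean is mu1 < mu0."""
    se = sigma / sqrt(n)                          # standard error of the sample mean
    r1 = mu0 + NormalDist().inv_cdf(alpha) * se   # acceptance bound R1 (inv_cdf(alpha) is negative)
    return NormalDist(mu1, se).cdf(r1)            # p(x-bar < R1 | mu = mu1)

print(power_left_tailed(mu0=100, mu1=95, sigma=10, n=16))  # ≈ 0.639
```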

**Right-tailed test, H_{1}: μ > μ_{0}**

The test needs to reject H_{0} when μ = μ_{1} = μ_{0} + d.

$$p(z < Z_2) = 1-\alpha$$
The test's acceptance range for x̄ is (-∞, R_{2}]. Reject H_{0} when x̄ > R_{2}.

- $$p(z < \frac { R_2 - \mu_0} {\frac {\sigma }{\sqrt{n}}}) = 1-\alpha \:\rightarrow\: R_2= \mu_0 + Z_{1-\alpha}* \frac {\sigma }{\sqrt{n}}$$
- Now we can calculate the probability to reject H_{0} when we know the mean is μ_{1} instead of μ_{0}: $$\overline{X} \sim N(\mu_1, \frac {\sigma }{\sqrt{n}}) $$
- $$power = p(\overline{x} > R_2 \mid \mu=\mu_1) $$
- $$power = 1 - p(z < \frac { R_2 - \mu_1} {\frac {\sigma }{\sqrt{n}}})$$
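A minimal Python sketch of the right-tailed power formula above, with assumed illustrative values (μ_{0} = 100, μ_{1} = 105, σ = 10, n = 16):

```python
from math import sqrt
from statistics import NormalDist

def power_right_tailed(mu0, mu1, sigma, n, alpha=0.05):
    """Power of a right-tailed one-sample z-test when the true mean is mu1 > mu0."""
    se = sigma / sqrt(n)                              # standard error of the sample mean
    r2 = mu0 + NormalDist().inv_cdf(1 - alpha) * se   # acceptance bound R2
    return 1 - NormalDist(mu1, se).cdf(r2)            # p(x-bar > R2 | mu = mu1)

print(power_right_tailed(mu0=100, mu1=105, sigma=10, n=16))  # ≈ 0.639
```

Because the normal distribution is symmetrical, this gives the same power as a left-tailed test with μ_{1} the same distance below μ_{0}.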

An effect size is a measurement of the magnitude of a statistical phenomenon, for example a mean difference, a correlation, etc. There are different ways to measure the effect size.

The unstandardized effect size is the pure effect as it is: for example, if we need to identify a change of 1 mm in the size of a mechanical part, the effect size is 1 mm.

A standardized effect size is used when there is no clear-cut definition of the required effect, when the scale is arbitrary, or to compare between different studies that use different scales.

This site uses Cohen's d as the standardized effect size.

Expected Cohen's d $$\:d=\frac{|\mu_1-\mu_0|}{\sigma}$$
Observed Cohen's d $$\:d=\frac{|\overline{x}-\mu_0|}{\sigma} $$
Cohen's standardized effect sizes are conventionally labeled as follows:

- 0.2 - small effect
- 0.5 - medium effect
- 0.8 - large effect
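Both Cohen's d formulas above are direct to compute; this sketch uses assumed example numbers (μ_{0} = 100, μ_{1} = 105, σ = 10), not values from the text.

```python
def expected_cohens_d(mu0, mu1, sigma):
    """Expected Cohen's d: |mu1 - mu0| in standard-deviation units."""
    return abs(mu1 - mu0) / sigma

def observed_cohens_d(xbar, mu0, sigma):
    """Observed Cohen's d: |x-bar - mu0| in standard-deviation units."""
    return abs(xbar - mu0) / sigma

print(expected_cohens_d(100, 105, 10))  # 0.5, a "medium" effect by Cohen's labels
```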

A relative effect size is used when there is no clear-cut definition of the required effect; the effect size is compared to the current expected value.

Calculate the effect size as a ratio of the expected (H_{0}) value.
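A minimal sketch of this convention; the helper name and the example (a 5% effect on μ_{0} = 100) are assumptions for illustration.

```python
def effect_from_ratio(mu0, ratio):
    """Absolute effect d stated as a fraction of the H0 mean, e.g. ratio=0.05 for 5%."""
    return mu0 * ratio

print(effect_from_ratio(100, 0.05))  # 5.0, so mu1 = mu0 ± 5
```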