**What is Significance Level in A/B Testing and CRO?**

Significance level, also known as the alpha level, is a statistical term representing the probability of rejecting the null hypothesis when it is in fact true. In other words, it is the likelihood of concluding that a difference between two groups is statistically significant when no true difference exists.

**How is significance level calculated?**

The significance level is usually set before conducting a statistical test and is typically denoted by the Greek letter alpha (α). It is commonly set at 0.05 or 0.01, meaning there is a 5% or 1% chance of rejecting the null hypothesis when it is true.

To determine whether a statistical result is significant, researchers compare the p-value of the test to the significance level. The p-value is the probability of observing a test statistic as extreme or more extreme than the one observed, assuming the null hypothesis is true. If the p-value is smaller than the significance level, the result is considered statistically significant, and the null hypothesis is rejected.
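The decision rule above can be sketched in a few lines of Python (the function name and default alpha are illustrative, not part of any standard library):

```python
def is_significant(p_value, alpha=0.05):
    """Reject the null hypothesis when the p-value falls below alpha."""
    return p_value < alpha

# A p-value of 0.03 clears a 0.05 threshold but not a 0.01 threshold
print(is_significant(0.03))              # True
print(is_significant(0.03, alpha=0.01))  # False
```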

**Example of significance level**

For example, suppose a researcher is testing whether there is a difference in the average weight of two groups of people. They set the significance level at 0.05 and obtain a p-value of 0.03. Since the p-value is smaller than the significance level, they conclude that there is a statistically significant difference in average weight between the two groups.
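The weight example can be worked through numerically. The sketch below uses a two-sample z-test with a normal approximation for the p-value; the group data are hypothetical, and for small samples like these a t-test would be the more rigorous choice:

```python
import math
import statistics

def two_sample_z_test(a, b):
    """Two-sided z-test for a difference in means (normal approximation)."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    z = (statistics.fmean(a) - statistics.fmean(b)) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p

group_a = [70, 72, 68, 71, 69, 73, 70, 72]  # weights in kg (hypothetical)
group_b = [75, 77, 74, 76, 78, 75, 76, 77]
z, p = two_sample_z_test(group_a, group_b)
print(f"z = {z:.2f}, p = {p:.4g}")  # p falls well below alpha = 0.05
```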

It is important to note that the significance level is only a probability threshold for deciding whether a result counts as statistically significant.

It says nothing about the practical significance or importance of the result. Researchers should therefore consider the effect size and practical relevance, in addition to the significance level, when interpreting statistical results.
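One common effect-size measure is Cohen's d, the difference between two means expressed in units of their pooled standard deviation. A minimal sketch, using hypothetical data:

```python
import math
import statistics

def cohens_d(a, b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.fmean(a) - statistics.fmean(b)) / math.sqrt(pooled_var)

# Conventional rough guide: |d| ~ 0.2 small, 0.5 medium, 0.8 large
d = cohens_d([70, 72, 68, 71, 69, 73, 70, 72],
             [75, 77, 74, 76, 78, 75, 76, 77])
print(f"d = {d:.2f}")
```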

**Why is significance level important in A/B testing and CRO?**

Without a significance level, it would be difficult to know whether an observed difference in conversion rates or other metrics is statistically significant or merely the result of random variation. That distinction matters: decisions based on unreliable data can lead to costly mistakes and missed opportunities.
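For conversion rates specifically, a two-proportion z-test is one common way to check whether the difference between variants exceeds random variation. A minimal sketch with made-up traffic numbers:

```python
import math

def two_proportion_z_test(conv_a, visitors_a, conv_b, visitors_b):
    """Two-sided z-test for a difference between two conversion rates."""
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_a - rate_b) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p

# Hypothetical A/B test: 5.0% vs 6.5% conversion on 4,000 visitors each
z, p = two_proportion_z_test(200, 4000, 260, 4000)
print(f"z = {z:.2f}, p = {p:.4f}")  # significant at alpha = 0.05
```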