Hypothesis testing is a statistical technique for deciding whether a claim about a population is supported by a sample of data. For example, "a man cannot be (biologically) pregnant" is a claim about human pregnancy. In hypothesis testing, such a factual assertion is framed as the "null hypothesis" and assessed for validity (right or wrong).

In most hypothesis tests, the null hypothesis is assumed true by default; it provides the baseline against which the claim is tested, and that is important to remember. The opposing statement, called the "alternative hypothesis," challenges the null hypothesis. Hypothesis testing weighs these two statements against the sample data to decide which one the evidence supports.

A Type I error occurs when a null hypothesis that is actually correct is rejected and the alternative hypothesis is accepted. In the cover-image example, the null hypothesis is that a man cannot be pregnant, and the alternative hypothesis is that he is. If the doctor rejects the null hypothesis anyway, he commits a Type I error. To put it another way, the man is considered to be pregnant!

A Type II error occurs when a null hypothesis that is actually false is accepted and the alternative hypothesis is rejected. Suppose the null hypothesis is that a woman is not pregnant, and the alternative hypothesis is that she is. If a doctor declares that the woman is not pregnant when she in fact is, that is a Type II error: the false null hypothesis is taken to be correct, and the true alternative hypothesis is rejected.

**A Type II error in hypothesis testing occurs when a test fails to reject an incorrect null hypothesis**

To put it another way, the test leads the analyst to incorrectly retain a false null hypothesis, because it lacks the statistical power to find enough evidence for the alternative hypothesis. A Type II error is also known as a false negative.
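Both error types can be seen directly in a small simulation. The sketch below assumes a one-sided z-test of H0: μ = 0 with known σ; the sample size, effect size, and trial count are illustrative values, not prescribed by the article.

```python
import random
from statistics import NormalDist, mean

random.seed(42)
z_crit = NormalDist().inv_cdf(0.95)   # one-sided rejection threshold at alpha = 0.05
n, sigma, trials = 20, 1.0, 2000      # illustrative parameters

def rejects(true_mu):
    """Draw one sample of size n and test H0: mu = 0 against H1: mu > 0."""
    sample = [random.gauss(true_mu, sigma) for _ in range(n)]
    z = mean(sample) * n ** 0.5 / sigma
    return z > z_crit

# Type I rate: H0 is true (mu = 0) but we reject it (false positive).
type1 = sum(rejects(0.0) for _ in range(trials)) / trials

# Type II rate: H1 is true (mu = 0.5) but we fail to reject H0 (false negative).
type2 = sum(not rejects(0.5) for _ in range(trials)) / trials
```

The simulated Type I rate hovers near the chosen alpha of 0.05, while the Type II rate depends on the true effect size and the sample size.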

The power of a statistical test is inversely related to the Type II error rate: the greater the test's power, the less likely it is to make a Type II error. The Type II error rate is denoted beta (β), and the statistical power is 1 − β.
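For a one-sided z-test with known σ, the relationship power = 1 − β can be computed in closed form. The setup below is an illustrative sketch; μ₁ (the true mean under the alternative), σ, and n are assumed values.

```python
from statistics import NormalDist

def power_one_sample_z(mu1, sigma, n, alpha=0.05):
    """Power of a one-sided z-test of H0: mu = 0 vs H1: mu = mu1 > 0."""
    z_crit = NormalDist().inv_cdf(1 - alpha)   # rejection threshold in z units
    shift = mu1 * n ** 0.5 / sigma             # true effect expressed in z units
    beta = NormalDist().cdf(z_crit - shift)    # P(fail to reject | H1 is true)
    return 1 - beta                            # power = 1 - beta

power = power_one_sample_z(mu1=0.5, sigma=1.0, n=30, alpha=0.05)
```

Here β is the probability that the test statistic falls below the rejection threshold even though the alternative is true, and power is simply its complement.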

**Type I vs Type II error → false positive vs false negative**

Like the Type I error, the Type II error cannot be entirely eliminated from a hypothesis test; it can only be made less likely. Because the chance of a Type II error is tied directly to the power of the test, raising the test's power reduces the likelihood of the error occurring.

One of the simplest ways to reduce the risk of a Type II error is to increase the sample size. In a hypothesis test, the sample size largely determines the level of sampling error, which in turn determines the test's capacity to detect real differences.

**A larger sample size increases the likelihood of capturing real differences in statistical tests while also boosting the test's power**
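The effect of sample size on power can be illustrated with the same one-sided z-test sketch as before; the effect size, σ, and the list of sample sizes are assumed values chosen for illustration.

```python
from statistics import NormalDist

def power_at_n(n, mu1=0.5, sigma=1.0, alpha=0.05):
    """Power of a one-sided z-test of H0: mu = 0 at sample size n."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    return 1 - NormalDist().cdf(z_crit - mu1 * n ** 0.5 / sigma)

# Power climbs steadily as the sample size doubles.
powers = {n: round(power_at_n(n), 3) for n in (10, 20, 40, 80)}
```

Doubling the sample size repeatedly drives the power toward 1, which is exactly why sample-size planning is the standard first lever for controlling the Type II error rate.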

**Increase the level of significance**

Another option is to choose a higher significance level. For example, instead of the conventionally accepted 0.05 threshold, a researcher might use a significance level of 0.15.

When the null hypothesis is true, a greater significance level means a larger probability of rejecting it.

**The greater the probability of rejecting the null hypothesis, the lower the risk of making a Type II error, whereas the risk of making a Type I error rises**
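This trade-off can be made concrete with the same one-sided z-test sketch, comparing the article's two example thresholds, 0.05 and 0.15 (the effect size, σ, and n are assumed illustrative values).

```python
from statistics import NormalDist

def type2_rate(alpha, mu1=0.5, sigma=1.0, n=20):
    """Beta (Type II error rate) for a one-sided z-test of H0: mu = 0 vs H1: mu = mu1."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    return NormalDist().cdf(z_crit - mu1 * n ** 0.5 / sigma)

beta_05 = type2_rate(alpha=0.05)   # stricter test: lower Type I risk, higher Type II risk
beta_15 = type2_rate(alpha=0.15)   # more lenient test: higher Type I risk, lower Type II risk
```

Raising alpha from 0.05 to 0.15 lowers beta substantially, but only by accepting three times the false-positive rate, which is the trade-off the callout above describes.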

The risk of wrongly retaining the null hypothesis when it does not actually hold for the population is known as a Type II error. In essence, a Type II error is a false negative.

Less stringent criteria for rejecting the null hypothesis can reduce Type II errors, but this raises the possibility of a false positive.

Type II errors must therefore be weighed against Type I errors in terms of both likelihood and impact.