Statistical power (also called the power of a statistical test, and sometimes sensitivity) is the probability that a hypothesis test correctly rejects the null hypothesis when the alternative hypothesis is true. In other words, it represents the likelihood that a study will distinguish an actual effect from chance. It is thus a measure of a test's ability to detect a real effect when one exists.
Mathematically, it can be expressed as:
Power = Pr(reject H0 | H1 is true) = 1 − β, where β is the probability of a Type II error.
A high statistical power indicates that the test is likely to detect an effect if one exists (in fact, statistical power is the complement of the probability of a Type II error: power = 1 − β). A low value means that a real effect may easily go undetected, making the test results doubtful. Therefore, a minimal level of statistical power is necessary to justify the results.
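As a minimal sketch of this relationship, the power of a two-sided one-sample z-test (assuming, for simplicity, a known variance of 1) can be computed with Python's standard library alone; the function name and the example effect size are illustrative choices, not part of any particular package:

```python
from statistics import NormalDist

def power_one_sample_z(effect_size, n, alpha=0.05):
    """Power of a two-sided one-sample z-test with known variance.

    effect_size: standardized mean difference (Cohen's d)
    n: sample size
    alpha: significance level (Type I error probability)
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)   # two-sided critical value
    shift = effect_size * n ** 0.5       # shift of the test statistic under H1
    # probability the test statistic falls outside +/- z_crit under H1
    return (1 - nd.cdf(z_crit - shift)) + nd.cdf(-z_crit - shift)

print(round(power_one_sample_z(0.5, 40), 3))  # a medium effect, n = 40
```

With a medium effect size of 0.5 and 40 observations, the test detects the effect in roughly 88–89% of studies, i.e. β ≈ 0.11.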
Statistical power is influenced by four factors:

- the sample size,
- the effect size (the magnitude of the difference to be detected),
- the significance level (alpha, the probability of a Type I error), and
- the variability of the data.
Consequently, to increase the statistical power of a test, the following actions are possible:

- increasing the sample size,
- studying a larger effect,
- raising the significance level (at the cost of more Type I errors), or
- reducing the variability of the measurements.
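The effect of the first factor, sample size, can be checked empirically. The sketch below (a hypothetical Monte Carlo set-up, again assuming a one-sample z-test with known unit variance) repeatedly draws samples under the alternative and counts how often the null is rejected:

```python
import random
from statistics import NormalDist, mean

def simulated_power(effect_size, n, alpha=0.05, reps=2000, seed=0):
    """Estimate the power of a two-sided one-sample z-test by simulation."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(reps):
        # draw a sample under H1: true mean equals the effect size, sigma = 1
        sample = [rng.gauss(effect_size, 1.0) for _ in range(n)]
        z_stat = mean(sample) * n ** 0.5
        if abs(z_stat) > z_crit:
            rejections += 1
    return rejections / reps

# the same effect is detected far more often with the larger sample
print(simulated_power(0.5, 20), simulated_power(0.5, 80))
```

For an effect size of 0.5, the estimated power climbs from roughly 0.6 at n = 20 to well above 0.99 at n = 80, illustrating why increasing the sample size is the most common lever.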
One of the main applications of statistical power is determining the minimal sample size for a given error tolerance. This minimal sample size has many implications. From an experimental perspective, it determines whether the study is practical at all; a huge sample, for example, may be too difficult to collect and handle. Similarly, from an economic point of view, it indicates whether the research is financially feasible.
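A minimal sample-size calculation can be sketched by inverting the power computation: search for the smallest n whose power reaches a target level. The function below is an illustrative example for the same simplified one-sample z-test (known unit variance); real studies would typically use a dedicated power-analysis routine instead:

```python
from statistics import NormalDist

def min_sample_size(effect_size, power=0.80, alpha=0.05):
    """Smallest n at which a two-sided one-sample z-test reaches the target power."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    n = 1
    while True:
        shift = effect_size * n ** 0.5
        achieved = (1 - nd.cdf(z_crit - shift)) + nd.cdf(-z_crit - shift)
        if achieved >= power:
            return n
        n += 1

print(min_sample_size(0.5))  # → 32 observations for 80% power at alpha = 0.05
```

Note the trade-offs the search makes visible: halving the effect size roughly quadruples the required sample, which is exactly the kind of feasibility question raised above.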
When working with experiments, determining the right sample size is critical; the same is true when building machine learning models. These calculations require skill and statistical knowledge.
LogicPlum’s platform eliminates this burden by automating these calculations, so users can concentrate on applying the models and interpreting the results.
For those interested in Python:
Brownlee, J. (2020). A Gentle Introduction to Statistical Power and Power Analysis in Python. Available at https://machinelearningmastery.com/statistical-power-and-power-analysis-in-python/