# What is Empirical Probability?

Empirical probability, also called experimental probability or relative frequency, is a way to estimate probabilities from observations. It is defined as the ratio of the number of trials in which an event occurs to the total number of trials.

Mathematically, this can be expressed as:

P(E) = m / n

where m is the number of trials in which the event E occurred and n is the total number of trials.

Statistically, the empirical probability is an estimate of the true probability of an event. This estimate can be improved by building a more sophisticated statistical model; if the model is accurate, it can then be used to estimate the chances of an event.
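As a concrete illustration, the relative-frequency ratio can be computed from simulated trials. The sketch below is illustrative only: the die-rolling setup, the seed, and the `empirical_probability` helper are assumptions, not part of any particular library.

```python
import random

def empirical_probability(trials, event):
    """Estimate P(event) as the fraction of trials in which the event occurred."""
    return sum(1 for outcome in trials if event(outcome)) / len(trials)

# Simulate 10,000 rolls of a fair six-sided die (seeded for reproducibility).
rng = random.Random(42)
rolls = [rng.randint(1, 6) for _ in range(10_000)]

# Empirical probability of rolling a six; the theoretical value is 1/6 ≈ 0.167.
p_six = empirical_probability(rolls, lambda r: r == 6)
```

With more trials, the estimate tends to settle closer to the theoretical value, which is the intuition behind using relative frequency as an estimator.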

The main disadvantage of empirical probability arises when estimating very small probabilities, i.e., those close to zero. In these cases, very large samples may be needed to reach an acceptable level of accuracy.
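This sample-size effect can be seen in a short simulation. The sketch below is hypothetical: the true probability `p_true` and the two sample sizes are arbitrary choices made for illustration.

```python
import random

def estimate(p, n, rng):
    """Empirical estimate of an event probability p from n Bernoulli trials."""
    return sum(rng.random() < p for _ in range(n)) / n

rng = random.Random(0)
p_true = 0.001  # a rare event

# With 1,000 trials we expect about one occurrence, so the estimate is very
# noisy (often exactly 0). With 1,000,000 trials the estimate stabilizes.
small_sample = estimate(p_true, 1_000, rng)
large_sample = estimate(p_true, 1_000_000, rng)
```

Roughly speaking, the standard error of the estimate shrinks like the square root of the sample size, so rare events demand disproportionately many trials.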

### Why is Empirical Probability Significant?

Empirical probability is a convenient way to estimate probabilities, as data can be drawn from experiments or historical data sources.

It can also be used to estimate probability distributions, called empirical probability distributions, or relative frequency distributions.

Both estimates have important uses in analytics. Frequency distributions are commonly presented in dashboards in the form of histograms. These graphs are well suited to presenting a summary view of a dataset, as they can show location, spread, and skewness. They can also reveal whether the distribution is unimodal, bimodal, or multimodal, and help identify outliers.
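An empirical (relative frequency) distribution of this kind can be tabulated in a few lines of Python. This is a minimal sketch; the simulated die rolls stand in for whatever observed data a dashboard would actually use.

```python
from collections import Counter
import random

# Stand-in data: 1,000 simulated rolls of a fair six-sided die.
rng = random.Random(1)
data = [rng.randint(1, 6) for _ in range(1_000)]

counts = Counter(data)
n = len(data)

# Relative frequency (empirical probability) of each observed outcome;
# these are exactly the bar heights of a normalized histogram.
rel_freq = {outcome: count / n for outcome, count in sorted(counts.items())}
```

The relative frequencies sum to 1 by construction, so the table is a valid empirical probability distribution.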

### Empirical Probability and LogicPlum

LogicPlum’s platform is a tool for machine-learning-based modeling. The great advantage that this platform brings to its users is automation.

Automation allows for sample re-shaping, training, testing hundreds of different models, and selecting the best alternative based on performance measures. When any of these operations needs to estimate a probability or a frequency distribution, the platform performs the calculation without requiring human intervention. The advantages of this approach are twofold: first, users don't need a sound knowledge of statistics; and second, in the case of very large samples, technologies such as Hadoop are applied by design.