Relative Frequency Value


A Relative Frequency Value is a Statistical Frequency Value from a relative frequency function (the ratio of an absolute frequency value to the multiset cardinality).



  • (Wikipedia, 2019) ⇒ Retrieved:2019-10-4.
    • The cumulative frequency is the total of the absolute frequencies of all events at or below a certain point in an ordered list of events.[1]

      The relative frequency (or empirical probability) of an event is the absolute frequency normalized by the total number of events:

      [math]\displaystyle{ f_i = \frac{n_i}{N} = \frac{n_i}{\sum_j n_j}. }[/math]

      The values of [math]\displaystyle{ f_i }[/math] for all events [math]\displaystyle{ i }[/math] can be plotted to produce a frequency distribution.

      In the case when [math]\displaystyle{ n_i = 0 }[/math] for certain i, pseudocounts can be added.

  1. Kenney, J. F.; Keeping, E. S. (1962). "Mathematics of Statistics, Part 1 (3rd ed.)". Princeton, NJ: Van Nostrand Reinhold.
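The normalization above can be sketched in a few lines of Python. The function name `relative_frequencies` and the optional `pseudocount` parameter are illustrative choices, not from the source; pseudocounts are applied uniformly to every observed event here, assuming the event space is the set of observed outcomes.

```python
from collections import Counter

def relative_frequencies(observations, pseudocount=0.0):
    """Compute f_i = n_i / N for each distinct event, optionally
    adding a uniform pseudocount to every absolute frequency."""
    counts = Counter(observations)
    total = sum(counts.values()) + pseudocount * len(counts)
    return {event: (n + pseudocount) / total for event, n in counts.items()}

freqs = relative_frequencies(["a", "a", "b", "c"])
print(freqs)  # {'a': 0.5, 'b': 0.25, 'c': 0.25}
```

By construction the values sum to 1, so they form a valid frequency distribution over the observed events.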


    • The relative frequency density of occurrence of an event is the relative frequency of [math]\displaystyle{ i }[/math] divided by the size of the bin used to classify i.
    • For example: if the class being measured has a lower extreme of 15 and an upper extreme of 30, and a relative frequency of 0.0625, the frequency density for this class is calculated as:
      • Relative frequency / (Upper extreme of class − lower extreme of class) = density
      • 0.0625 / (30 − 15) = 0.0625 / 15 = 0.004166... That is: 0.00417 to 3 S.F.
    • In biology, relative frequency is the occurrence of a single gene in a specific species that makes up a gene pool.
    • The limiting relative frequency of an event over a long series of trials is the conceptual foundation of the frequency interpretation of probability. In this framework, it is assumed that as the length of the series increases without bound, the fraction of the experiments in which we observe the event will stabilize. This interpretation is often contrasted with Bayesian probability.
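The density calculation in the worked example above reduces to a one-line helper. This is a minimal sketch; the function name and argument names are illustrative.

```python
def relative_frequency_density(relative_frequency, lower, upper):
    """Relative frequency density = relative frequency / class width,
    where the class width is (upper extreme - lower extreme)."""
    return relative_frequency / (upper - lower)

# The worked example: class [15, 30) with relative frequency 0.0625.
density = relative_frequency_density(0.0625, 15, 30)
print(round(density, 5))  # 0.00417
```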




  • (Hogg & Ledolter, 1987) ⇒ Robert V. Hogg, and Johannes Ledolter. (1987). “Engineering Statistics.” Macmillan Publishing.
    • The collection of all possible outcomes, namely [math]\displaystyle{ S }[/math] = {H,T}, is called the sample space. Suppose that we are interested in a subset [math]\displaystyle{ A }[/math] of our sample space; for example, in our case, let A={H} represent heads. Repeat this random experiment a number of times, say [math]\displaystyle{ n }[/math], and count the number of times, say [math]\displaystyle{ f }[/math], that the experiment ended in A. Here [math]\displaystyle{ f }[/math] is called the frequency of the event A and the ratio f/n is called the relative frequency of the event [math]\displaystyle{ A }[/math] in the [math]\displaystyle{ n }[/math] trials of the experiment.