Maximum Likelihood Estimation System

From GM-RKB

A Maximum Likelihood Estimation System is a continuous optimization system that implements an MLE algorithm to solve an MLE task.
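As a minimal sketch of such a system (the data, names, and parameter values here are illustrative, not from the source): a negative log-likelihood is defined for the observed data, and a continuous optimizer minimizes it to recover the parameters.

```python
import numpy as np
import scipy.stats as stats
from scipy.optimize import minimize

# Illustrative data: a normal sample with known mean 10 and sd 2
rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=1000)

def neg_log_lik(params):
    mu, sd = params
    # Negative log-likelihood of the sample under Normal(mu, sd)
    return -np.sum(stats.norm.logpdf(sample, loc=mu, scale=sd))

# Moment estimates make reasonable starting values for the optimizer
start = [sample.mean(), sample.std()]
result = minimize(neg_log_lik, start, method='Nelder-Mead')
mu_hat, sd_hat = result.x
```

The minimized negative log-likelihood yields estimates close to the generating parameters (here, mean 10 and sd 2).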



References

2014

2013

# import the packages
import numpy as np
from scipy.optimize import minimize
import scipy.stats as stats
#
# Set up your x values
x = np.linspace(0, 100, num=100)
#
# Set up your observed y values with a known slope (2.4), intercept (5), and sd (4)
yObs = 5 + 2.4*x + np.random.normal(0, 4, 100)
#
# Define the likelihood function where params is a list of initial parameter estimates
def regressLL(params):
    # Resave the initial parameter guesses
    b0 = params[0]
    b1 = params[1]
    sd = params[2]
    #
    # Calculate the predicted values from the initial parameter guesses
    yPred = b0 + b1*x
    #
    # Calculate the negative log-likelihood as the negative sum of the log of a normal
    # PDF where the observed values are normally distributed around the mean (yPred)
    # with a standard deviation of sd
    logLik = -np.sum(stats.norm.logpdf(yObs, loc=yPred, scale=sd))
    #
    # Tell the function to return the NLL (this is what will be minimized)
    return logLik
#
# Make a list of initial parameter guesses (b0, b1, sd)
initParams = [1, 1, 1]
#
# Run the minimizer; results.x holds the fitted (b0, b1, sd)
results = minimize(regressLL, initParams, method='Nelder-Mead')
print(results.x)

2012

# draw from a Gumbel distribution using the inverse-CDF simulation method
library(MASS)  # provides fitdistr
e.1 <- -log(-log(runif(10000, 0, 1)))
e.2 <- -log(-log(runif(10000, 0, 1)))
u <- e.2 - e.1  # u follows a logistic distribution (difference of two Gumbels)
fitdistr(u, densfun=dlogis, start=list(location=0, scale=1))
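The same simulation-and-fit can be sketched in Python (a hedged equivalent, using SciPy's built-in MLE fitter rather than MASS's fitdistr): draw two standard Gumbel samples by the inverse-CDF method, take their difference, and fit a logistic distribution by maximum likelihood.

```python
import numpy as np
import scipy.stats as stats

rng = np.random.default_rng(42)
# Inverse-CDF method: if U ~ Uniform(0,1), then -log(-log(U)) ~ Gumbel(0,1)
u1 = rng.uniform(0.0, 1.0, 10000)
u2 = rng.uniform(0.0, 1.0, 10000)
e1 = -np.log(-np.log(u1))
e2 = -np.log(-np.log(u2))
d = e2 - e1  # difference of two independent Gumbels ~ Logistic(0, 1)

# MLE fit of a logistic distribution; returns (location, scale)
loc_hat, scale_hat = stats.logistic.fit(d)
```

With 10,000 draws, the fitted location and scale land close to the true values of 0 and 1.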