Theoretical derivation of MLE for Gaussian Distribution:

As a prerequisite, check out the previous article on the logic behind deriving the maximum likelihood estimator for a given PDF.

Let X = (x₁, x₂, …, x_N) be N independent samples drawn from a Gaussian distribution with mean μ and variance σ², whose probability density function is given by

$$ f(x;\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right] $$

Calculating the Likelihood

Since the samples are independent and identically distributed, the likelihood function is the product of the individual densities,

$$ L(\mu;\mathbf{X}) = \prod_{i=1}^{N} f(x_i;\mu,\sigma^2) = \left(2\pi\sigma^2\right)^{-N/2}\exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^{N}(x_i-\mu)^2\right] $$

The log likelihood is given by

$$ \ln L(\mu;\mathbf{X}) = -\frac{N}{2}\ln\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{N}(x_i-\mu)^2 $$

Differentiating with respect to μ and equating to zero to find the maximum (equivalently, setting the score to zero),

$$ \frac{\partial}{\partial\mu}\ln L(\mu;\mathbf{X}) = \frac{1}{\sigma^2}\sum_{i=1}^{N}(x_i-\mu) = 0 \quad\Rightarrow\quad \hat{\mu}_{ML} = \frac{1}{N}\sum_{i=1}^{N}x_i $$

Thus the sample mean gives the maximum likelihood estimate of the parameter μ.
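As a numerical sanity check of this result, here is a minimal Python sketch (using NumPy; the sample size, seed, and parameter values are hypothetical and not part of the original derivation). It evaluates the Gaussian log likelihood over a grid of candidate μ values and confirms that the maximizer coincides with the sample mean.

```python
import numpy as np

# Hypothetical illustration: draw N Gaussian samples and verify that the
# sample mean maximizes the log likelihood over a grid of candidate mu values.
rng = np.random.default_rng(0)
N, true_mu, sigma = 1000, 2.5, 1.2
x = rng.normal(true_mu, sigma, size=N)

def log_likelihood(mu, x, sigma):
    """Gaussian log likelihood ln L(mu; X) with known sigma."""
    return (-0.5 * x.size * np.log(2 * np.pi * sigma**2)
            - np.sum((x - mu) ** 2) / (2 * sigma**2))

# Brute-force search over a fine grid of candidate mu values
mu_grid = np.linspace(x.min(), x.max(), 10001)
ll = np.array([log_likelihood(mu, x, sigma) for mu in mu_grid])

mu_hat_grid = mu_grid[np.argmax(ll)]   # numerical maximizer
mu_hat_mle = x.mean()                  # analytical MLE (sample mean)

print(f"grid search : {mu_hat_grid:.4f}")
print(f"sample mean : {mu_hat_mle:.4f}")
```

The two printed values agree to within the spacing of the grid, consistent with the closed-form result derived above.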

For the derivation of the MLE for other PDFs, see the following links:
Theoretical derivation of Maximum Likelihood Estimator for Poisson PDF
Theoretical derivation of Maximum Likelihood Estimator for Exponential PDF

See also:

[1] An Introduction to Estimation Theory
[2] Bias of an Estimator
[3] Minimum Variance Unbiased Estimators (MVUE)
[4] Maximum Likelihood Estimation
[5] Maximum Likelihood Decoding
[6] Probability and Random Process
[7] Likelihood Function and Maximum Likelihood Estimation (MLE)

