Cramér-Rao Lower Bound (CRLB): Scalar Parameter Estimation

Key focus: Discuss scalar parameter estimation using the CRLB. Estimate a DC component from observed data in the presence of AWGN. Consider a set of observed data samples from which a scalar parameter is to be estimated. The accuracy of the estimate depends on how well the observed data is influenced … Read more
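Below is a minimal numerical sketch of the DC-in-AWGN setup described above, assuming the usual model x[n] = A + w[n] with w[n] drawn from N(0, σ²); the signal model, parameter values and the use of NumPy are illustrative assumptions, not details taken from the article. It shows the sample-mean estimator of A reaching the CRLB σ²/N.

```python
import numpy as np

# Assumed model for illustration: x[n] = A + w[n], w[n] ~ N(0, sigma^2)
rng = np.random.default_rng(0)
A, sigma, N, trials = 5.0, 2.0, 100, 10_000

# Monte Carlo: variance of the sample-mean estimator of the DC level A
x = A + sigma * rng.standard_normal((trials, N))
A_hat = x.mean(axis=1)

crlb = sigma**2 / N  # CRLB for estimating a DC level in AWGN
print(f"var(A_hat) = {A_hat.var():.4f}, CRLB = {crlb:.4f}")
```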

Cramér-Rao Lower Bound: Introduction

Key concept: The Cramér-Rao bound is a lower bound on the variance of unbiased estimators of deterministic parameters. Introduction: The criteria for the existence of a Minimum Variance Unbiased Estimator (MVUE) were discussed in a previous article. To have an MVUE, it is necessary to have estimates that are unbiased and that give minimum variance (compared … Read more
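For reference, the bound itself, in its standard scalar form and under the usual regularity conditions, can be written as:

```latex
% Scalar Cramér-Rao lower bound for an unbiased estimator \hat{\theta}
\operatorname{var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) = -\,E\!\left[\frac{\partial^{2} \ln p(\mathbf{x};\theta)}{\partial \theta^{2}}\right]
```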

Score, Fisher Information and Estimator Sensitivity

As we have seen in the previous articles, the estimation of a parameter from a set of data samples depends strongly on the underlying PDF. The accuracy of the estimation is inversely proportional to the variance of the underlying PDF. That is, the smaller the variance of the PDF, the higher the accuracy of estimation, and vice … Read more
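The score and the Fisher information referred to in the title have the following standard definitions for a scalar parameter θ (the expectation is taken with respect to p(x;θ)):

```latex
% Score and Fisher information for a scalar parameter \theta
s(\theta) = \frac{\partial \ln p(\mathbf{x};\theta)}{\partial \theta},
\qquad
I(\theta) = E\!\left[s(\theta)^{2}\right]
          = -\,E\!\left[\frac{\partial^{2} \ln p(\mathbf{x};\theta)}{\partial \theta^{2}}\right]
```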

Theoretical derivation of MLE for Gaussian Distribution:

As a prerequisite, check out the previous article on the logic behind deriving the maximum likelihood estimator for a given PDF. Let X=(x1,x2,…, xN) be the samples taken from a Gaussian distribution. Calculating the Likelihood: the log likelihood is written out, then differentiated and equated to zero to find the maximum (otherwise, equating the score … Read more
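For reference, the standard result this derivation arrives at, assuming the familiar N(μ, σ²) form of the Gaussian PDF, is:

```latex
\ln L(\mu,\sigma^{2})
  = -\frac{N}{2}\ln\!\left(2\pi\sigma^{2}\right)
    - \frac{1}{2\sigma^{2}}\sum_{i=1}^{N}\left(x_{i}-\mu\right)^{2}
\;\;\Longrightarrow\;\;
\hat{\mu} = \frac{1}{N}\sum_{i=1}^{N} x_{i},
\qquad
\hat{\sigma}^{2} = \frac{1}{N}\sum_{i=1}^{N}\left(x_{i}-\hat{\mu}\right)^{2}
```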

Theoretical derivation of MLE for Exponential Distribution:

As a prerequisite, check out the previous article on the logic behind deriving the maximum likelihood estimator for a given PDF. Let X=(x1,x2,…, xN) be the samples taken from an Exponential distribution. Calculating the Likelihood: the log likelihood is written out, then differentiated and equated to zero to find the maximum (otherwise, equating the score … Read more
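Assuming the rate parameterization f(x;λ) = λe^(−λx), x ≥ 0 (the article may use the scale form instead), the derivation leads to:

```latex
\ln L(\lambda) = N\ln\lambda - \lambda\sum_{i=1}^{N} x_{i}
\;\;\Longrightarrow\;\;
\frac{\partial \ln L}{\partial \lambda} = \frac{N}{\lambda} - \sum_{i=1}^{N} x_{i} = 0
\;\;\Longrightarrow\;\;
\hat{\lambda} = \frac{N}{\sum_{i=1}^{N} x_{i}} = \frac{1}{\bar{x}}
```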

Theoretical derivation of Maximum Likelihood Estimator for Poisson PDF:

Suppose X=(x1,x2,…, xN) are the samples taken from a random distribution whose PDF is parameterized by the parameter θ. If the PDF of the underlying distribution satisfies some regularity condition (i.e., the log of the PDF is differentiable), then the likelihood function is given by the joint PDF fN(xN;θ) of the underlying distribution. Hereafter we … Read more
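For the Poisson PMF P(x;λ) = λ^x e^(−λ)/x!, the standard result this derivation arrives at is:

```latex
\ln L(\lambda) = -N\lambda + \left(\sum_{i=1}^{N} x_{i}\right)\ln\lambda - \sum_{i=1}^{N}\ln(x_{i}!)
\;\;\Longrightarrow\;\;
\frac{\partial \ln L}{\partial \lambda} = -N + \frac{1}{\lambda}\sum_{i=1}^{N} x_{i} = 0
\;\;\Longrightarrow\;\;
\hat{\lambda} = \bar{x}
```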

Maximum Likelihood Estimation (MLE): Understand with example

Key focus: Understand maximum likelihood estimation (MLE) using a hands-on example. Know the importance of the log-likelihood function and its use in estimation problems. Likelihood Function: Suppose X=(x1,x2,…, xN) are the samples taken from a random distribution whose PDF is parameterized by the parameter θ. The likelihood function is given by fN(xN;θ). Here fN(xN;θ) is the PDF … Read more
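Below is a minimal numerical sketch of the log-likelihood idea, assuming Gaussian samples with known unit variance; the distribution, parameter values and the grid-search approach are illustrative choices, not necessarily those used in the article. Summing log-PDF values avoids the numerical underflow of multiplying many small likelihood terms.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical example: estimate the mean theta of N(theta, 1) from data
rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=1.0, size=200)

# Log-likelihood: sum of log-PDF values over all samples, evaluated on a grid of theta
theta_grid = np.linspace(0, 6, 601)
log_lik = np.array([norm.logpdf(data, loc=t, scale=1.0).sum() for t in theta_grid])

# The grid point maximizing the log-likelihood is close to the sample mean
theta_mle = theta_grid[np.argmax(log_lik)]
print(f"grid-search MLE = {theta_mle:.3f}, sample mean = {data.mean():.3f}")
```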

Maximum Likelihood Estimation

Keywords: maximum likelihood estimation, statistical method, probability distribution, MLE, models, practical applications, finance, economics, natural sciences. Introduction: Maximum Likelihood Estimation (MLE) is a statistical method used to estimate the parameters of a probability distribution by finding the set of values that maximize the likelihood function of the observed data. In other words, MLE is a … Read more