Consider a set of observed data samples \( X={X_1,X_2,X_3,\cdots X_N} \) and \( \theta \) is the scalar parameter that is to be estimated from the observed samples. The accuracy…

Read More Cramer Rao Lower Bound for Scalar Parameter Estimation
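As a quick numerical illustration of the bound mentioned in this excerpt (a sketch, not taken from the linked article): for \(N\) i.i.d. Gaussian samples with known variance \(\sigma^2\), the CRLB on any unbiased estimator of the mean is \(\sigma^2/N\), and the sample mean attains it. The sample sizes and seed below are arbitrary choices for the demo.

```python
import numpy as np

# Illustrative sketch: CRLB for estimating the mean of N i.i.d. Gaussian
# samples with known variance sigma^2 is sigma^2 / N.
rng = np.random.default_rng(0)
sigma, N, trials = 2.0, 50, 20000

crlb = sigma**2 / N  # theoretical lower bound on estimator variance

# Monte Carlo: the sample mean is unbiased and attains the CRLB.
estimates = rng.normal(loc=1.0, scale=sigma, size=(trials, N)).mean(axis=1)
empirical_var = estimates.var()

print(crlb, empirical_var)  # empirical variance should be close to crlb
```

The empirical variance of the sample-mean estimator should land within a few percent of the theoretical bound.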

The criterion for the existence of a Minimum Variance Unbiased Estimator (MVUE) was discussed in a previous article. To have an MVUE, it is necessary to have estimates that are…

As we have seen in previous articles, the estimation of a parameter from a set of data samples depends strongly on the underlying PDF. The accuracy of the estimation…

Read More Score, Fisher Information and Estimator Sensitivity
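To make the score/Fisher-information relationship in this excerpt concrete, here is a hedged sketch (the formulas are standard for the Gaussian mean, but the variable names and numbers are my own): the score of the Gaussian mean \(\theta\) with known \(\sigma^2\) is \(s(\theta)=\sum_i (x_i-\theta)/\sigma^2\), which vanishes at the MLE, and its negative slope equals the Fisher information \(N/\sigma^2\).

```python
import numpy as np

# Sketch: score function and Fisher information for the mean theta of
# N i.i.d. Gaussian samples with known variance sigma^2.
# Score: s(theta) = sum(x_i - theta) / sigma^2 ; Fisher info: N / sigma^2.
rng = np.random.default_rng(1)
sigma, N, theta_true = 1.5, 40, 3.0
x = rng.normal(theta_true, sigma, size=N)

def score(theta):
    return np.sum(x - theta) / sigma**2

fisher_info = N / sigma**2

# The score is zero at the MLE (the sample mean here), and its negative
# slope equals the Fisher information.
mle = x.mean()
slope = (score(mle + 1e-4) - score(mle - 1e-4)) / 2e-4
print(score(mle), -slope, fisher_info)
```

Because the score is linear in \(\theta\) here, the central difference recovers \(N/\sigma^2\) essentially exactly.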

As a prerequisite, check out the previous article on the logic behind deriving the maximum likelihood estimator for a given PDF. Let \(X=(x_1,x_2,\cdots, x_N)\) be the samples taken from a Gaussian…

Read More Theoretical derivation of MLE for Gaussian Distribution:
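A minimal numerical companion to this derivation (assumed setup, not from the linked article): for i.i.d. Gaussian samples, the MLE of the mean is the sample mean, and the MLE of the variance is the *biased* sample variance, dividing by \(N\) rather than \(N-1\).

```python
import numpy as np

# Sketch: MLEs for Gaussian samples with unknown mean and variance.
# True values (arbitrary demo choices): mean = 5.0, variance = 4.0.
rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, scale=2.0, size=100_000)

mu_mle = x.mean()                   # MLE of the mean
var_mle = np.mean((x - mu_mle)**2)  # MLE of the variance (divides by N)

print(mu_mle, var_mle)  # close to 5.0 and 4.0 for this sample size
```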

As a prerequisite, check out the previous article on the logic behind deriving the maximum likelihood estimator for a given PDF. Let \(X=(x_1,x_2,\cdots, x_N)\) be the samples taken from an Exponential…

Read More Theoretical derivation of MLE for Exponential Distribution:
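As a sketch of the result this excerpt leads to (under the common rate parameterization \(f(x;\lambda)=\lambda e^{-\lambda x}\), an assumption on my part since the excerpt is truncated): the MLE of the rate \(\lambda\) is \(N/\sum_i x_i\), the reciprocal of the sample mean.

```python
import numpy as np

# Sketch: MLE of the exponential rate lambda under f(x) = lambda*exp(-lambda*x).
# True rate (arbitrary demo choice): lambda = 0.5, i.e. mean 1/lambda = 2.0.
rng = np.random.default_rng(3)
lam_true = 0.5
x = rng.exponential(scale=1.0 / lam_true, size=100_000)

lam_mle = 1.0 / x.mean()  # equivalently N / sum(x_i)
print(lam_mle)  # close to 0.5 for this sample size
```

Note that NumPy's `exponential` takes the scale \(1/\lambda\), not the rate, hence the reciprocal in the sampling call.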

Suppose \(X=(x_1,x_2,\cdots, x_N)\) are the samples taken from a random distribution whose PDF is parameterized by the parameter \(\theta\). If the PDF of the underlying parameter satisfies some…

Read More Theoretical derivation of Maximum Likelihood Estimator for Poisson PDF:
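A small numerical check of the standard result this derivation arrives at (the parameter name \(\theta\) follows the excerpt; the numbers are my own demo choices): for Poisson-distributed counts, the MLE of the rate parameter is the sample mean of the observed counts.

```python
import numpy as np

# Sketch: MLE of the Poisson rate theta is the sample mean of the counts.
# True rate (arbitrary demo choice): theta = 3.2.
rng = np.random.default_rng(4)
theta_true = 3.2
x = rng.poisson(lam=theta_true, size=100_000)

theta_mle = x.mean()  # maximizer of prod_i theta^{x_i} e^{-theta} / x_i!
print(theta_mle)  # close to 3.2 for this sample size
```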

Likelihood Function: Suppose \(X=(x_1,x_2,\cdots, x_N)\) are the samples taken from a random distribution whose PDF is parameterized by the parameter \(\theta\). The likelihood function is given by \begin{align*}L(\theta;X) &= \prod_{i=1}^{N}…

Read More Likelihood Function and Maximum Likelihood Estimation (MLE)
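The product form of the likelihood shown in this excerpt is usually maximized through its logarithm. As a hedged sketch (Gaussian mean with known \(\sigma\), my own choice of example), a brute-force grid search over the log-likelihood recovers the closed-form MLE, the sample mean:

```python
import numpy as np

# Sketch: maximize the Gaussian log-likelihood in theta (the mean) by grid
# search and compare with the closed-form MLE (the sample mean).
rng = np.random.default_rng(5)
sigma = 1.0
x = rng.normal(loc=2.0, scale=sigma, size=500)

def log_likelihood(theta):
    # log prod_i f(x_i; theta), dropping theta-independent constants
    return -0.5 * np.sum((x - theta)**2) / sigma**2

thetas = np.linspace(0.0, 4.0, 4001)  # grid step 0.001
theta_numeric = thetas[np.argmax([log_likelihood(t) for t in thetas])]

print(theta_numeric, x.mean())  # numeric maximizer matches the sample mean
```

Working with the log turns the product over samples into a sum, which is both numerically stabler and easier to differentiate.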
