## Cramer Rao Lower Bound (CRLB) for Vector Parameter Estimation

The CRLB for scalar parameter estimation was discussed in previous posts. The same concept is now extended to vector parameter estimation. Consider a set of deterministic parameters $$\boldsymbol{\theta} = [\theta_1, \theta_2, \ldots, \theta_p]^T$$ that we wish to estimate. The estimate is denoted in vector form as $$\hat{\boldsymbol{\theta}} = [\hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_p]^T$$.
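For orientation, the standard generalization (consistent with the scalar definitions used elsewhere in these posts) replaces the scalar Fisher Information with a $$p \times p$$ Fisher Information matrix, and the bound for each component comes from the diagonal of its inverse:

$$\left[\mathbf{I}(\boldsymbol{\theta})\right]_{ij} = -E\left[\frac{\partial^2 \ln L(\mathbf{x};\boldsymbol{\theta})}{\partial \theta_i \, \partial \theta_j}\right], \qquad var(\hat{\theta}_i) \geq \left[\mathbf{I}^{-1}(\boldsymbol{\theta})\right]_{ii}$$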

## Normalized CRLB – an alternate form of CRLB and its relation to estimator sensitivity

The variance of an estimate is always greater than or equal to the Cramer Rao Lower Bound of the estimate. The CRLB is in turn given by the inverse of the Fisher Information. The following equation concisely summarizes the above point:

$$var(\hat{\theta}) \geq \frac{1}{I(\theta)}, \qquad I(\theta) = -E\left[\frac{\partial^2 \ln L(\mathbf{x};\theta)}{\partial \theta^2}\right]$$

The Fisher Information can be re-written as

$$I(\theta) = E\left[\left(\frac{\partial \ln L(\mathbf{x};\theta)}{\partial \theta}\right)^2\right]$$

Thus the variance of the...
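A small numerical sketch, using the simplest possible example (estimating the mean $$\theta$$ of $$N$$ Gaussian samples, an illustration of mine rather than the setting of the original post), shows the two forms agreeing: the Monte-Carlo second moment of the score matches the closed-form Fisher Information $$N/\sigma^2$$.

```python
import numpy as np

rng = np.random.default_rng(0)

N, sigma, theta = 50, 2.0, 1.5   # illustrative values (assumptions, not from the post)
trials = 100_000

# Score function for N i.i.d. samples x[n] ~ N(theta, sigma^2):
#   d/d(theta) ln L(x; theta) = sum_n (x[n] - theta) / sigma^2
x = theta + sigma * rng.standard_normal((trials, N))
score = (x - theta).sum(axis=1) / sigma**2

# Fisher Information as the second moment of the score, I = E[score^2]
I_mc = np.mean(score**2)
print(f"Monte-Carlo Fisher Information : {I_mc:.3f}")
print(f"Closed form N/sigma^2          : {N / sigma**2:.3f}")
print(f"CRLB = 1/I                     : {1 / I_mc:.5f}")
```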

## Cramer Rao Lower Bound for Phase Estimation

Consider the DSB carrier frequency estimation problem given in the introductory chapter to estimation theory. A message is sent across a channel, modulated by a sinusoidal carrier with carrier frequency $$f_c$$ and amplitude $$A$$. The transmitted signal is corrupted by zero-mean AWGN as it travels across the...
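For reference, the classical result for this kind of setup (see Kay, *Fundamentals of Statistical Signal Processing, Vol. I*; quoted here as background rather than taken from the truncated text above): for $$N$$ samples of $$x[n] = A\cos(2\pi f_c n + \phi) + w[n]$$ with $$w[n] \sim \mathcal{N}(0,\sigma^2)$$, the CRLB for the phase $$\phi$$ is approximately

$$var(\hat{\phi}) \geq \frac{2\sigma^2}{N A^2} = \frac{1}{N \cdot SNR}, \qquad SNR = \frac{A^2}{2\sigma^2}$$

(the approximation neglects a sinusoidal term that averages to zero over the observation window), so the bound tightens linearly with both the record length and the SNR.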

## Efficient Estimators and CRLB

It has been reiterated that not all estimators are efficient. Not even all MVUEs are efficient. Then how do we quantify whether an estimator we have designed is efficient or not? An efficient estimator is defined as one that is

* Unbiased (mean of the estimate = true value of the parameter), and
* attains the CRLB (variance of the estimate equals the bound).
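One standard way to quantify this (a common textbook definition; the notation here is mine) is the efficiency ratio

$$\eta(\hat{\theta}) = \frac{1/I(\theta)}{var(\hat{\theta})}, \qquad 0 < \eta(\hat{\theta}) \leq 1$$

An unbiased estimator is efficient exactly when $$\eta(\hat{\theta}) = 1$$, i.e., when its variance attains the CRLB.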

## Applying Cramer Rao Lower Bound (CRLB) to find a Minimum Variance Unbiased Estimator (MVUE)

It was mentioned in one of the earlier articles that the CRLB may provide a way to find an MVUE (Minimum Variance Unbiased Estimator). Theorem: There exists an unbiased estimator that attains the CRLB if and only if the score can be factored as

$$\frac{\partial \ln L(\mathbf{x};\theta)}{\partial \theta} = I(\theta)\left(g(\mathbf{x}) - \theta\right)$$

in which case $$\hat{\theta} = g(\mathbf{x})$$ is that estimator. Here $$\ln L(\mathbf{x};\theta)$$ is the log likelihood function of $$\mathbf{x}$$...
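As a quick worked instance (the classic DC-level-in-white-Gaussian-noise example, chosen here as an illustration): for $$x[n] = \theta + w[n]$$ with $$w[n] \sim \mathcal{N}(0,\sigma^2)$$, $$n = 0, 1, \cdots, N-1$$, the score factors exactly into the required form

$$\frac{\partial \ln L(\mathbf{x};\theta)}{\partial \theta} = \frac{1}{\sigma^2}\sum_{n=0}^{N-1}\left(x[n] - \theta\right) = \frac{N}{\sigma^2}\left(\bar{x} - \theta\right)$$

so with $$I(\theta) = N/\sigma^2$$ and $$g(\mathbf{x}) = \bar{x}$$, the sample mean is an efficient MVUE and attains the CRLB $$\sigma^2/N$$.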

## Cramer Rao Lower Bound for Scalar Parameter Estimation

Consider a set of observed data samples $$X = \{X_1, X_2, X_3, \cdots, X_N\}$$ and let $$\theta$$ be the scalar parameter that is to be estimated from the observed samples. The accuracy of the estimate depends on how strongly the observed data is influenced by the parameter $$\theta$$. The...
As discussed in the introduction to estimation theory, the goal of an estimation algorithm is to give an estimate of the random variable(s) that is unbiased and has minimum variance. These criteria are reproduced here for reference: $$E\left\{\hat{f}_0\right\} = f_0$$ $$\sigma^{2}_{\hat{f}_0} = E\left\{\left(\hat{f}_0 - E\left\{\hat{f}_0\right\}\right)^2\right\}$$ In...
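These two criteria are easy to check numerically for a simple case. The sketch below (my illustration, substituting mean estimation of Gaussian samples for the frequency-estimation setting of the post) measures the empirical bias and variance of the sample-mean estimator and compares the variance against its CRLB $$\sigma^2/N$$:

```python
import numpy as np

rng = np.random.default_rng(1)

N, sigma, theta = 50, 2.0, 1.5   # illustrative values (assumptions, not from the post)
trials = 100_000

# Many independent realizations of the experiment, one estimate per row
x = theta + sigma * rng.standard_normal((trials, N))
theta_hat = x.mean(axis=1)       # the estimator under test: the sample mean

bias = theta_hat.mean() - theta  # criterion 1: E{theta_hat} = theta
variance = theta_hat.var()       # criterion 2: variance as small as possible
crlb = sigma**2 / N              # the floor no unbiased estimator can beat

print(f"empirical bias     : {bias:+.5f}   (should be ~0)")
print(f"empirical variance : {variance:.5f}")
print(f"CRLB               : {crlb:.5f}   (the sample mean attains it)")
```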