Cramer Rao Lower Bound for Scalar Parameter Estimation

Consider a set of observed data samples \( X=\{x_1,x_2,x_3,\cdots,x_N\} \) and let \( \theta \) be the scalar parameter that is to be estimated from the observed samples. The accuracy of the estimate depends on how strongly the observed data is influenced by the parameter \( \theta \). The observed data is treated as random, with a probability density function (PDF) that is influenced by \( \theta \). The PDF \( p(x;\theta) \) describes this dependence of \( X \) on \( \theta \).

If the PDF \( p(x;\theta) \) depends only weakly on \( \theta \), the estimates will be poor. If the PDF \( p(x;\theta) \) depends strongly on \( \theta \), the estimates will be good.

As seen in the previous section, the curvature of the log likelihood function (the Fisher Information) is related to the concentration of the PDF. The greater the curvature, the more concentrated the PDF and the more accurate the estimates. The Fisher Information is calculated from the log likelihood function as,

$$ I(\theta) = -E\left[ \frac{\partial^2 \ln L(\theta)}{\partial \theta^2} \right] $$

This holds under the regularity condition that the expected value of the score of the log likelihood function is zero for all \( \theta \),

$$ E\left[ \frac{\partial \ln L(\theta)}{\partial \theta} \right] = 0 \;\;\; \forall\, \theta $$

The inverse of the Fisher Information gives the Cramer Rao Lower Bound (CRLB).
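In other words, for any unbiased estimator \( \hat{\theta} \),

$$ var(\hat{\theta}) \geq \frac{1}{I(\theta)} $$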

Theoretical method to find CRLB:

1) Given a model for the observed data samples \(x\), write the log likelihood function as a function of \( \theta \): \( \ln L(\theta) \)
2) Keeping \(x\) fixed, take the second partial derivative of the log likelihood function with respect to the parameter to be estimated:
\( \frac{\partial^2 \ln L(\theta)}{\partial \theta^2} \)
3) If the result depends on \(x\), fix \( \theta \) and take the expected value with respect to \(x\). This step can be skipped if the result does not depend on \(x\).
4) If the result depends on \( \theta \), evaluate it at the specific value(s) of \( \theta \) of interest.
5) Negate the result and take its reciprocal to obtain the CRLB (a symbolic sketch of these steps follows this list).
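To make the recipe concrete, here is a minimal symbolic sketch of the five steps, assuming SymPy is available. The model used here is not the one derived below; it is a hypothetical example of \(N\) i.i.d. samples from an exponential distribution with rate \( \lambda \), chosen only because its CRLB has a simple closed form, \( \lambda^2/N \).

```python
import sympy as sp

lam, N, sum_x = sp.symbols('lambda N sum_x', positive=True)

# Step 1: log likelihood of N i.i.d. exponential samples with rate lambda:
#         ln L(lambda) = N*ln(lambda) - lambda * sum(x[n])
lnL = N * sp.log(lam) - lam * sum_x

# Step 2: second partial derivative with respect to the parameter
d2 = sp.diff(lnL, lam, 2)        # -> -N/lambda**2

# Step 3: skipped, since the result does not depend on the data x
# Step 4: the result depends on lambda, so the CRLB is a function of lambda
# Step 5: negate the result and take its reciprocal
crlb = 1 / (-d2)
print(sp.simplify(crlb))         # lambda**2/N
```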

Derivation of CRLB for an embedded DC component in AWGN Noise:

$$ \text{Data Model: } x[n] = A + w[n], \;\;\; n=0,1,2,\cdots,N-1 $$

Here \(A\) is a constant DC value that has to be estimated from the observed data samples \(x[n]\), and \(w[n]\) is AWGN with zero mean and variance \( \sigma^2 \).

Given that the samples are corrupted by AWGN with zero mean and variance \( \sigma^2 \), the likelihood function can be written as

$$ L(x;A) = \frac{1}{\left(2\pi\sigma^2\right)^{N/2}} \exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}\left(x[n]-A\right)^2\right] $$

The log likelihood function is formed as,

$$ \ln L(x;A) = -\frac{N}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{n=0}^{N-1}\left(x[n]-A\right)^2 $$
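Before continuing the derivation, a quick numeric aside: the sketch below (assuming NumPy; all parameter values are illustrative) evaluates this log likelihood on a grid of candidate values of \(A\) for one simulated realization of the data model, showing that it peaks near the true value.

```python
import numpy as np

rng = np.random.default_rng(1)
A_true, sigma2, N = 1.0, 2.0, 100
x = A_true + rng.normal(0.0, np.sqrt(sigma2), N)   # x[n] = A + w[n]

# Evaluate ln L(x; A) over a grid of candidate A values
A_grid = np.linspace(0.0, 2.0, 201)
lnL = np.array([-N / 2 * np.log(2 * np.pi * sigma2)
                - np.sum((x - A) ** 2) / (2 * sigma2) for A in A_grid])

print("A maximizing ln L:", A_grid[np.argmax(lnL)])   # close to A_true = 1.0
```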

Taking the first partial derivative of the log likelihood function with respect to \(A\),

$$ \frac{\partial \ln L(x;A)}{\partial A} = \frac{1}{\sigma^2}\sum_{n=0}^{N-1}\left(x[n]-A\right) $$

Computing the second partial derivative of the log likelihood function by differentiating one more time,

$$ \frac{\partial^2 \ln L(x;A)}{\partial A^2} = -\frac{N}{\sigma^2} $$

The Fisher Information is given by taking the expectation and negating the result. Since the second derivative does not depend on \(x\), the expectation leaves it unchanged:

$$ I(A) = -E\left[ \frac{\partial^2 \ln L(x;A)}{\partial A^2} \right] = \frac{N}{\sigma^2} $$

The Cramer Rao Lower Bound is the reciprocal of the Fisher Information \( I(A) \):

$$ CRLB = \frac{1}{I(A)} = \frac{\sigma^2}{N} $$

The variance of any unbiased estimator of the DC component \(A\) from the given observed samples will always be greater than or equal to the CRLB. That is, the CRLB acts as a lower bound on the variance of the estimates. This can be conveniently represented as

$$ var(\hat{A}) \geq \frac{\sigma^2}{N}$$
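As a numerical sanity check (not part of the derivation above), the sketch below assumes NumPy and simulates the data model over many independent trials, estimating \(A\) with the sample mean in each trial; the empirical variance of the estimates is then compared with the bound \( \sigma^2/N \). All parameter values are illustrative.

```python
import numpy as np

A = 1.0            # true DC level to be estimated
sigma2 = 2.0       # noise variance
N = 100            # samples per trial
trials = 100_000   # number of Monte-Carlo trials

rng = np.random.default_rng(0)
# x[n] = A + w[n], with w[n] ~ N(0, sigma2); one row per trial
x = A + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))

A_hat = x.mean(axis=1)        # sample-mean estimate of A for each trial
emp_var = A_hat.var()         # empirical variance of the estimates
crlb = sigma2 / N             # theoretical lower bound

print(f"empirical variance = {emp_var:.5f}, CRLB = {crlb:.5f}")
```

Because the sample mean is an efficient estimator for this model, the empirical variance should match the bound closely rather than merely exceed it.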

Tweaking the CRLB:

Now that we have found an expression for the CRLB for the estimation of the DC component, we can look for schemes that may affect the CRLB. From the expression of the CRLB, the following points can be inferred (see the short tabulation after this list).

1) The CRLB does not depend on the parameter to be estimated (\(A\))
2) The CRLB increases linearly with \(\sigma^2 \)
3) The CRLB decreases inversely with \(N \)
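These scaling behaviours can be seen by simply tabulating \( \sigma^2/N \) for a few illustrative values (the values below are arbitrary choices):

```python
# Tabulate the bound sigma^2/N: it grows linearly with the noise variance
# and shrinks inversely with the number of samples.
for sigma2 in (0.5, 1.0, 2.0):
    for N in (10, 100, 1000):
        print(f"sigma^2 = {sigma2:4.1f}, N = {N:5d}, CRLB = {sigma2 / N:.5f}")
```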
