Consider a set of observed data samples \( X=\{X_1,X_2,X_3,\cdots,X_N\} \) and let \( \theta \) be the scalar parameter that is to be estimated from the observed samples. The accuracy of the estimate depends on how strongly the observed data is influenced by the parameter \( \theta \). The observed data is treated as random data whose PDF is influenced by \( \theta \). The PDF \( p(x;\theta)\) describes the dependence of \(X\) on \(\theta\).

If the PDF \(p(x;\theta)\) depends only weakly on \( \theta \), the estimates will be poor. If the PDF \( p(x;\theta)\) depends strongly on \( \theta \), the estimates will be good.

As seen in the previous section, the curvature of the likelihood function (the Fisher Information) is related to the concentration of the PDF. The greater the curvature, the more concentrated the PDF and the more accurate the estimates. The Fisher Information is calculated from the log likelihood function as,

$$I(\theta) = -E\left [ \frac{\partial^2 ln L(\theta)}{\partial \theta^2} \right ]$$

Under the regularity condition that the expected value of the score of the log likelihood function is zero,

$$ E\left [ \frac{\partial ln L(\theta) }{\partial \theta } \right ] = 0 \;\;\; \forall\theta $$

The inverse of the Fisher Information gives the Cramer Rao Lower Bound (CRLB).
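For an unbiased estimator \( \hat{\theta} \), this reads

$$ var(\hat{\theta}) \geq \frac{1}{I(\theta)} $$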

## Theoretical method to find CRLB:

1) Given a model for the observed data samples \(x\), write the log likelihood function as a function of \( \theta \): \( ln\;L(\theta)\)

2) Keeping \(x\) fixed, take the second partial derivative of the log likelihood function with respect to the parameter to be estimated: \( \frac{\partial^2 ln L(\theta)}{\partial \theta^2} \)

3) If the result depends on \(x\), fix \( \theta\) and take the expected value with respect to \(x\). This step can be skipped if the result does not depend on \(x\).

4) If the result depends on \( \theta \), evaluate the result at specific values of \( \theta \).

5) Negate the result and take its reciprocal.
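As a sketch of the steps above, the derivative and expectation can be checked symbolically. The example below (assuming SymPy is available) applies the procedure to a single Gaussian observation with unknown mean \( \theta \) and known variance \( \sigma^2 \); for this model the second derivative does not depend on \(x\), so the expectation step is skipped.

```python
import sympy as sp

# Symbols: one observation x, parameter theta, known noise std sigma
x, theta = sp.symbols('x theta', real=True)
sigma = sp.symbols('sigma', positive=True)

# Step 1: log likelihood of a single Gaussian sample x ~ N(theta, sigma^2)
lnL = -sp.Rational(1, 2)*sp.log(2*sp.pi*sigma**2) - (x - theta)**2/(2*sigma**2)

# Step 2: second partial derivative with respect to theta
d2 = sp.diff(lnL, theta, 2)   # -1/sigma**2, independent of x

# Steps 3-5: no expectation needed; negate, then take the reciprocal
I_theta = -d2                 # Fisher Information
crlb = 1/I_theta              # sigma**2
print(sp.simplify(crlb))
```

Running the same steps on any other likelihood model only changes the `lnL` expression; the differentiate-negate-invert machinery stays the same.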

## Derivation of CRLB for a DC component embedded in AWGN:

$$ \text{Data Model: } x[n] = A + w[n], \;\;\; n=0,1,2,\cdots,N-1 $$

Here \(A\) is a constant DC value that has to be estimated from the observed data samples \(x[n]\), and \(w[n]\) is AWGN with zero mean and variance \(\sigma^2 \).

Given that the samples are affected by AWGN with zero mean and variance \(\sigma^2 \), the likelihood function can be written as

$$ L(x;A) = \frac{1}{\left(2\pi\sigma^2\right)^{N/2}} exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A)^2\right] $$

The log likelihood function is formed as,

$$ ln\;L(x;A) =-\frac{N}{2}ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A)^2 $$

Taking the first partial derivative of the log likelihood function with respect to \(A\),

$$ \frac{\partial\; ln\;L(x;A)}{\partial A} = \frac{1}{\sigma^2}\sum_{n=0}^{N-1}(x[n]-A) $$

Computing the second partial derivative of the log likelihood function by differentiating once more,

$$ \frac{\partial^2\; ln\;L(x;A)}{\partial A^2} = -\frac{N}{\sigma^2} $$

The Fisher Information is obtained by taking the expectation and negating. Since the second derivative does not depend on \(x\), the expectation is trivial:

$$ I(A) = -E\left[ \frac{\partial^2\; ln\;L(x;A)}{\partial A^2} \right] = \frac{N}{\sigma^2} $$

The Cramer Rao Lower Bound is the reciprocal of the Fisher Information \(I(A)\):

$$ CRLB = \frac{1}{I(A)} = \frac{\sigma^2}{N} $$

The variance of any unbiased estimator that estimates the DC component \(A\) from the given observed samples will always be greater than or equal to the CRLB. That is, the CRLB acts as a lower bound on the variance of the estimates. This can be conveniently represented as

$$ var(\hat{A}) \geq \frac{\sigma^2}{N}$$
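As a quick numerical check (a sketch using NumPy, with the sample-mean estimator assumed for \( \hat{A} \)), the empirical variance over many independent records lands right at the bound \( \sigma^2/N \), since the sample mean is an efficient estimator for this model:

```python
import numpy as np

rng = np.random.default_rng(42)
A, sigma, N, trials = 5.0, 2.0, 100, 50000

# Generate 'trials' independent records of N samples: x[n] = A + w[n]
x = A + sigma * rng.standard_normal((trials, N))

# Sample-mean estimate of A for each record
A_hat = x.mean(axis=1)

crlb = sigma**2 / N                 # theoretical bound = 0.04
print(A_hat.var(), crlb)            # empirical variance is close to 0.04
```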

## Tweaking the CRLB:

Now that we have found an expression for the CRLB for the estimation of the DC component, we can look for schemes that may affect the CRLB. From the expression for the CRLB, the following points can be inferred.

1) The CRLB does not depend on the parameter to be estimated (\(A\))

2) The CRLB increases linearly with the noise variance \(\sigma^2 \)

3) The CRLB decreases inversely with the number of samples \(N \)
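These three observations can be confirmed directly from the expression \( \sigma^2/N \) (a minimal sketch):

```python
def crlb(sigma2, N):
    """CRLB for estimating a DC level in AWGN: sigma^2 / N."""
    return sigma2 / N

# 1) Independent of A: the parameter does not even appear in the formula.
# 2) Doubling the noise variance doubles the bound.
assert crlb(2.0, 100) == 2 * crlb(1.0, 100)
# 3) Doubling the number of samples halves the bound.
assert crlb(1.0, 200) == crlb(1.0, 100) / 2
```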