Normalized CRLB – an alternate form of CRLB

Key focus: The normalized CRLB is an alternate form of the Cramér-Rao Lower Bound (CRLB). Let's explore how the normalized CRLB is related to estimator sensitivity.

The variance of any unbiased estimate is always greater than or equal to the Cramér-Rao Lower Bound (CRLB) of the estimate. The CRLB is in turn given by the inverse of the Fisher Information. The following equations concisely summarize this point.

$latex var\left(\hat{\theta}\right) \geq CRLB = \frac{1}{I(\theta)}$

$latex I(\theta) = -E\left[\frac{\partial^2 \ln p(\mathbf{x},\theta)}{\partial \theta^2}\right]$

The Fisher Information can be re-written as
$latex I(\theta) = E\left[\left(\frac{\partial \ln p(\mathbf{x},\theta)}{\partial \theta}\right)^2\right]$
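
As a quick sketch of why the two forms agree, assuming the usual regularity condition that differentiation with respect to $latex \theta$ and integration over $latex \mathbf{x}$ can be interchanged: since $latex \int p(\mathbf{x},\theta)\,d\mathbf{x} = 1$ for every $latex \theta$, differentiating once gives $latex E\left[\frac{\partial \ln p(\mathbf{x},\theta)}{\partial \theta}\right] = 0$, and differentiating once more gives

$latex E\left[\frac{\partial^2 \ln p(\mathbf{x},\theta)}{\partial \theta^2}\right] + E\left[\left(\frac{\partial \ln p(\mathbf{x},\theta)}{\partial \theta}\right)^2\right] = 0$

which is exactly the equivalence used above.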

Thus the variance of the estimate can be written as
$latex var\left(\hat{\theta}\right) \geq \frac{1}{E\left[\left(\frac{\partial \ln p(\mathbf{x},\theta)}{\partial \theta}\right)^2\right]}$
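
As a quick numerical check of the bound in this form, here is a minimal Python sketch assuming $latex N$ independent Gaussian samples with unknown mean $latex \theta$ and known variance $latex \sigma^2$ (a setup chosen only for illustration; the closed-form Fisher Information in that case is $latex N/\sigma^2$).

```python
# Minimal sketch: verify the score-function form of the Fisher Information and
# the variance bound by Monte Carlo, for N i.i.d. samples x[n] ~ N(theta, sigma^2)
# with known sigma. The score is d/d(theta) ln p(x,theta) = sum(x[n]-theta)/sigma^2,
# so I(theta) = N/sigma^2 and CRLB = sigma^2/N.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, N, trials = 1.5, 2.0, 10, 200_000

x = rng.normal(theta, sigma, size=(trials, N))

# Score (derivative of the log-likelihood w.r.t. theta) evaluated at the true theta
score = np.sum(x - theta, axis=1) / sigma**2

fisher_mc = np.mean(score**2)     # Monte Carlo estimate of E[(d ln p / d theta)^2]
fisher_cf = N / sigma**2          # closed-form Fisher Information
crlb = 1.0 / fisher_cf            # CRLB = sigma^2 / N

theta_hat = x.mean(axis=1)        # sample-mean estimator of theta
var_hat = theta_hat.var()

print(f"Fisher Information (Monte Carlo) : {fisher_mc:.4f}")
print(f"Fisher Information (closed form) : {fisher_cf:.4f}")
print(f"CRLB = 1/I(theta)                : {crlb:.4f}")
print(f"Variance of sample mean          : {var_hat:.4f}")
```

The Monte Carlo estimate of $latex E\left[\left(\partial \ln p/\partial \theta\right)^2\right]$ should come out close to $latex N/\sigma^2$, and the variance of the sample mean should sit at (and, up to simulation error, never below) the CRLB $latex \sigma^2/N$.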

Consider an incremental change in $latex \theta$, that is, $latex \theta \rightarrow \theta + \Delta \theta$. This causes the PDF to change from $latex p(\mathbf{x},\theta) \rightarrow p(\mathbf{x},\theta + \Delta \theta)$. We wish to answer the following question: how sensitive is $latex p(\mathbf{x},\theta)$ to that change? Sensitivity (denoted by $latex \tilde{S}_{\theta}^{p}(\mathbf{x})$) is given by the ratio of the fractional change in $latex p(\mathbf{x},\theta)$ to the change in $latex \theta$.

$latex \tilde{S}_{\theta}^{p}(\mathbf{x}) = \frac{p(\mathbf{x},\theta + \Delta \theta) - p(\mathbf{x},\theta)}{\Delta \theta \; p(\mathbf{x},\theta)}$

Letting $latex \Delta \theta \rightarrow 0$

$latex \tilde{S}_{\theta}^{p}(\mathbf{x}) = \frac{1}{p(\mathbf{x},\theta)} \frac{\partial p(\mathbf{x},\theta)}{\partial \theta}$

From Calculus,

$latex \frac{\partial \ln p(\mathbf{x},\theta)}{\partial \theta} = \frac{1}{p(\mathbf{x},\theta)} \frac{\partial p(\mathbf{x},\theta)}{\partial \theta}$

Thus the sensitivity is given by,
$latex \tilde{S}_{\theta}^{p}(\mathbf{x}) = \frac{\partial \ln p(\mathbf{x},\theta)}{\partial \theta}$

The variance of the estimate can now be put in the following form.
$latex var\left(\hat{\theta}\right) \geq \frac{1}{E\left[\left(\tilde{S}_{\theta}^{p}(\mathbf{x})\right)^2\right]}$

The above expression is the normalized version of the CRLB. It can be interpreted as follows: the normalized CRLB is the inverse of the mean square sensitivity of the PDF to changes in $latex \theta$. The more sensitive the PDF is to an incremental change in $latex \theta$, the larger the mean square sensitivity and hence the lower the achievable variance of the estimate.
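
As a quick illustration (a standard textbook setup, used here only as an example), consider $latex N$ independent observations $latex x[n] = A + w[n]$ of a DC level $latex A$ in white Gaussian noise $latex w[n] \sim \mathcal{N}(0,\sigma^2)$ with known $latex \sigma^2$. The sensitivity of the PDF to $latex A$ is

$latex \tilde{S}_{A}^{p}(\mathbf{x}) = \frac{\partial \ln p(\mathbf{x},A)}{\partial A} = \frac{1}{\sigma^2}\sum_{n=0}^{N-1}\left(x[n] - A\right)$

so the mean square sensitivity is $latex E\left[\left(\tilde{S}_{A}^{p}(\mathbf{x})\right)^2\right] = N/\sigma^2$ and the normalized CRLB is $latex \sigma^2/N$. More samples or less noise makes the PDF more sensitive to a change in $latex A$, and the bound on the variance of the estimate tightens accordingly.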
