Cramer Rao Lower Bound (CRLB) for Vector Parameter Estimation

CRLB for scalar parameter estimation was discussed in previous posts. The same concept is extended to vector parameter estimation.

Consider a set of deterministic parameters \(\boldsymbol{\theta}=[ \theta_1, \theta_2, \cdots, \theta_p]^{T} \) that we wish to estimate.
The estimate is denoted in vector form as \(\boldsymbol{\hat{\theta}} = [ \hat{\theta}_1, \hat{\theta}_2, \cdots, \hat{\theta}_p ]^{T} \).

Assume that the estimate is unbiased: \(E[\boldsymbol{\hat{\theta}}] = \boldsymbol{\theta} \).

Covariance Matrix

For scalar parameter estimation, the variance of the estimate was considered. For vector parameter estimation, the covariance matrix of the vector of estimates is considered.

The covariance matrix for the vector of estimates is given by

$$ C_{\hat{\theta}} = var ( \hat{\theta} )  = E \left[  ( \hat{\theta} - \theta )( \hat{\theta} - \theta )^T \right] $$

For example, if \(A, B\) and \(C\) are the unknown parameters to be estimated, then the covariance matrix for the parameter vector \( \theta = [A,B,C]^T \) is given by

$$  C_{\hat{\theta}} = \left[  \begin{matrix} var(\hat{A}) & cov(\hat{A},\hat{B}) & cov(\hat{A},\hat{C}) \\  cov(\hat{B},\hat{A}) & var(\hat{B}) & cov(\hat{B},\hat{C}) \\  cov(\hat{C},\hat{A}) & cov(\hat{C},\hat{B}) & var(\hat{C}) \end{matrix} \right]  $$
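To make the definition concrete, the covariance matrix of a vector of estimates can be approximated by Monte Carlo simulation: form the estimate vector over many independent trials and average the outer products \( ( \hat{\theta} - \theta )( \hat{\theta} - \theta )^T \). The Python/NumPy sketch below is only an illustration; the model (jointly estimating the mean and variance of Gaussian samples) and all numeric values are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true, var_true, N, trials = 2.0, 1.5, 100, 20000

estimates = np.empty((trials, 2))
for t in range(trials):
    x = rng.normal(A_true, np.sqrt(var_true), N)
    A_hat = x.mean()            # estimate of the mean A
    var_hat = x.var(ddof=1)     # unbiased estimate of the variance sigma^2
    estimates[t] = (A_hat, var_hat)

theta_true = np.array([A_true, var_true])
err = estimates - theta_true    # (theta_hat - theta) for each trial
C_hat = err.T @ err / trials    # empirical E[(theta_hat - theta)(theta_hat - theta)^T]
print(C_hat)                    # diagonal: variances; off-diagonal: covariances
```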

Fisher Information Matrix

For scalar parameter estimation, the Fisher Information was considered. The same concept is extended to the vector case and is called the Fisher Information Matrix \(I(\theta)\). The \( (i,j)^{th} \) element of the Fisher Information Matrix \(I(\theta)\) (evaluated at the true values of the parameter vector) is given by

$$ [I(\theta)] _{ij} = -E \left[  \frac{\partial^2}{\partial \theta_i \partial \theta_j} ln \; p(\mathbf{x};\theta) \right]  \; \; i,j =1,2,3,\cdots,p$$
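As a numerical illustration (a minimal sketch with an assumed model, not part of the original derivation): for \(N\) i.i.d. Gaussian samples with unknown mean \(A\) and variance \(\sigma^2\), the Fisher Information Matrix is known in closed form as \(diag\left(N/\sigma^2, \; N/(2\sigma^4)\right)\). Under standard regularity conditions, the same matrix equals the expectation of the outer product of the score vector, which the sketch below estimates by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)
A, var, N, trials = 1.0, 2.0, 50, 50000

scores = np.empty((trials, 2))
for t in range(trials):
    x = rng.normal(A, np.sqrt(var), N)
    d_A   = np.sum(x - A) / var                                  # d ln p / dA
    d_var = -N / (2 * var) + np.sum((x - A)**2) / (2 * var**2)   # d ln p / d(sigma^2)
    scores[t] = (d_A, d_var)

I_mc = scores.T @ scores / trials              # Monte Carlo estimate of E[s s^T]
I_theory = np.diag([N / var, N / (2 * var**2)])
print(I_mc)
print(I_theory)                                # the two matrices should be close
```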

CRLB Matrix

Under the same regularity condition (as that of the scalar parameter estimation case),

$$ E \left[ \frac{\partial}{ \partial \theta} ln \; p(\mathbf{x};\theta) \right] = 0 \;\;\; \forall \theta $$

the CRLB Matrix is given by the inverse of the Fisher Information Matrix

$$ CRLB =  I^{-1}(\theta) $$

Note: For scalar parameter estimation, the CRLB was shown to be the reciprocal of the Fisher Information.

This implies that the variances of the individual estimates (the diagonal elements of the covariance matrix) are bounded by the corresponding diagonal elements of the CRLB matrix:

$$   \left[ C_{\hat{\theta}} \right]_{ii} \geq  \left[ I^{-1}(\theta) \right]_{ii}  $$

More generally, the condition given above is represented as

$$ C_{\hat{\theta}} -  I^{-1}(\theta) \; \; \text{is positive semi-definite} $$

Note: Positive semi-definite is the matrix analog of saying that a quantity is greater than or equal to zero. Similarly, positive definite is roughly the matrix analog of saying that a quantity is strictly greater than zero.
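A practical way to check positive semi-definiteness is to verify that all eigenvalues of \( C_{\hat{\theta}} - I^{-1}(\theta) \) are non-negative. The sketch below is an illustrative example with assumed numbers: for Gaussian data, the sample mean has variance \(\sigma^2/N\) (equal to its CRLB) and the unbiased sample variance has variance \(2\sigma^4/(N-1)\) against a CRLB of \(2\sigma^4/N\), so the difference matrix is positive semi-definite.

```python
import numpy as np

sigma2, N = 2.0, 100

# Covariance matrix of (sample mean, unbiased sample variance) for Gaussian data
C = np.diag([sigma2 / N, 2 * sigma2**2 / (N - 1)])

# CRLB matrix = inverse Fisher Information for theta = [A, sigma^2]
I_inv = np.diag([sigma2 / N, 2 * sigma2**2 / N])

eigvals = np.linalg.eigvalsh(C - I_inv)   # eigenvalues of the difference matrix
print(eigvals)                            # all >= 0  ->  C - I_inv is positive semi-definite
print(np.all(eigvals >= -1e-12))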

Emphasis was placed here on the diagonal elements of the Fisher Information Matrix. The effect of the off-diagonal elements should also be considered.

See Also

[1]An Introduction to Estimation Theory
[2]Bias of an Estimator
[3]Minimum Variance Unbiased Estimators (MVUE)
[4]Maximum Likelihood Estimation
[5]Maximum Likelihood Decoding
[6]Probability and Random Process
[7]Likelihood Function and Maximum Likelihood Estimation (MLE)
[8]Score, Fisher Information and Estimator Sensitivity
[9]Introduction to Cramer Rao Lower Bound (CRLB)
[10]Cramer Rao Lower Bound for Scalar Parameter Estimation
[11]Applying Cramer Rao Lower Bound (CRLB) to find a Minimum Variance Unbiased Estimator (MVUE)
[12]Efficient Estimators and CRLB
[13]Cramer Rao Lower Bound for Phase Estimation
[14]Normalized CRLB - an alternate form of CRLB and its relation to estimator sensitivity
[15]Cramer Rao Lower Bound (CRLB) for Vector Parameter Estimation
[16]The Mean Square Error – Why do we use it for estimation problems
[17]How to estimate unknown parameters using Ordinary Least Squares (OLS)
[18]Essential Preliminary Matrix Algebra for Signal Processing
[19]Why Cholesky Decomposition ? A sample case:
[20]Tests for Positive Definiteness of a Matrix
[21]Solving a Triangular Matrix using Forward & Backward Substitution
[22]Cholesky Factorization and Matlab code
[23]LTI system models for random signals – AR, MA and ARMA models
[24]Comparing AR and ARMA model - minimization of squared error
[25]Yule Walker Estimation
[26]AutoCorrelation (Correlogram) and persistence – Time series analysis
[27]Linear Models - Least Squares Estimator (LSE)
[28]Best Linear Unbiased Estimator (BLUE)

Normalized CRLB – an alternate form of CRLB and its relation to estimator sensitivity

The variance of an estimate is always greater than or equal to the Cramer Rao Lower Bound of the estimate. The CRLB, in turn, is given by the inverse of the Fisher Information. The following equations concisely summarize this point.

$$ var(\hat{\theta}) \geq CRLB = \frac{1}{I(\theta)} $$

$$ I(\theta) = -E\left[ \frac{\partial^2 \; ln \; p(\mathbf{x};\theta)}{\partial \theta^2} \right] $$

The Fisher Information can be re-written as
$$ I(\theta) = E\left[ \left( \frac{\partial \; ln \; p(\mathbf{x};\theta)}{\partial \theta} \right)^2 \right] $$

Thus the variance of the estimate can be written as
$$ var(\hat{\theta}) \geq \frac{1}{E\left[ \left( \frac{\partial \; ln \; p(\mathbf{x};\theta)}{\partial \theta} \right)^2 \right]} $$

Consider an incremental change in \(\theta\), that is, \( \theta \rightarrow \theta + \Delta \theta \). This causes the PDF to change from \( p(\mathbf{x};\theta) \rightarrow p(\mathbf{x};\theta + \Delta \theta)\). We wish to answer the following question: how sensitive is \( p(\mathbf{x};\theta) \) to that change? Sensitivity (denoted by \({\tilde{S}_\theta}^{p}(\mathbf{x}) \)) is given by the ratio of the relative change in \(p(\mathbf{x};\theta)\) to the change in \( \theta \).

$$ {\tilde{S}_\theta}^{p}(\mathbf{x}) = \frac{p(\mathbf{x};\theta + \Delta \theta) - p(\mathbf{x};\theta)}{p(\mathbf{x};\theta) \; \Delta \theta} $$

Letting \(\Delta \theta \rightarrow 0\)

$$ {\tilde{S}_\theta}^{p}(\mathbf{x}) = \frac{1}{p(\mathbf{x};\theta)} \frac{\partial \; p(\mathbf{x};\theta)}{\partial \theta} $$

From Calculus,

$$ \frac{\partial \; ln \; p(\mathbf{x};\theta)}{\partial \theta} = \frac{1}{p(\mathbf{x};\theta)} \frac{\partial \; p(\mathbf{x};\theta)}{\partial \theta} $$

Thus the sensitivity is given by,
$$ {\tilde{S}_\theta}^{p}(\mathbf{x}) = \frac{\partial \; ln \; p(\mathbf{x};\theta)}{\partial \theta} $$

The variance of the estimate can now be put in the following form.
$$ var(\hat{\theta}) \geq \frac{1}{E\left[ \left( {\tilde{S}_\theta}^{p}(\mathbf{x}) \right)^2 \right]} $$

The above expression is the normalized version of the CRLB. It can be interpreted as follows: the normalized CRLB is equal to the inverse of the mean square sensitivity.
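The following Monte Carlo sketch illustrates this interpretation for an assumed DC-in-AWGN model \(x[n] = A + w[n]\) (parameter values are hypothetical and not from the original article): the sensitivity is \( \tilde{S} = \partial \; ln \; p(\mathbf{x};A) / \partial A = \sum (x[n]-A)/\sigma^2 \), its mean square approaches the Fisher Information \(N/\sigma^2\), and the reciprocal of the mean square sensitivity approaches the CRLB \(\sigma^2/N\).

```python
import numpy as np

rng = np.random.default_rng(2)
A, sigma2, N, trials = 1.0, 0.5, 40, 100000

x = rng.normal(A, np.sqrt(sigma2), size=(trials, N))
sensitivity = np.sum(x - A, axis=1) / sigma2      # d ln p(x;A) / dA for each trial

mean_square_sensitivity = np.mean(sensitivity**2)
print(mean_square_sensitivity, N / sigma2)        # approx equal (Fisher Information)
print(1 / mean_square_sensitivity, sigma2 / N)    # approx equal (CRLB)
```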


Cramer Rao Lower Bound for Phase Estimation

Consider the DSB carrier frequency estimation problem given in the introductory chapter to estimation theory. A message is sent across a channel modulated by a sinusoidal carrier with carrier frequency = \(f_c\) and amplitude= \(A\). The transmitted signal gets affected by zero-mean AWGN noise when it travels across the medium. The receiver receives the signal and digitizes it for further processing.

To recover the message at the receiver, one has to know every detail of the sinusoid: 1) the amplitude \(A\), 2) the carrier frequency \( f_c\), and 3) its phase \( \phi_c \).

Given a set of digitized samples \(x[n]\) and assuming that both amplitude and carrier frequency are known, we are tasked with the objective of estimating the phase of the embedded sinusoid (cosine wave). For analyzing this scenario we should have a model to begin with.

The digitized samples at the receiver are modeled as

$$ x[n] = A cos(2 \pi f_c n+ \phi_c ) + w[n] , \;\;\; n=0,1,2,\cdots,N-1 $$

Here A and \( f_c \) are assumed to be known and \(w[n]\) is an AWGN noise with mean \(\mu=0\) and variance=\( \sigma^2 \). We will use CRLB and try to find an efficient estimator to estimate the phase component.

CRLB for Phase Estimation:

As a pre-requisite to this article, readers are advised to go through the previous chapter on “Steps to find CRLB”

In order to derive CRLB, we need to have a PDF (Probability Density Function) to begin with. Since the underlying noise is modeled as an AWGN noise with mean \(\mu=0\) and variance=\( \sigma^2 \), the PDF of the observed sample that gets affected by this noise is given by a multivariate Gaussian distribution function.

$$ p(\mathbf{x};\phi_c) = \frac{1}{\left(2 \pi \sigma^2\right)^{\frac{N}{2}}} exp\left[ -\frac{1}{2 \sigma^2} \sum_{n=0}^{N-1} \left( x[n] - \mu[n] \right)^2 \right] $$

The mean of each observed sample is given by

$$ \mu[n] = E\left[ x[n] \right] = A \; cos(2 \pi f_c n + \phi_c) $$

The PDF is re-written as
$$ p(\mathbf{x};\phi_c) = \frac{1}{\left(2 \pi \sigma^2\right)^{\frac{N}{2}}} exp\left[ -\frac{1}{2 \sigma^2} \sum_{n=0}^{N-1} \left( x[n] - A \; cos(2 \pi f_c n + \phi_c) \right)^2 \right] $$

Since the observed samples \(x[n]\) are fixed in the above equation, we will use the likelihood notation instead of the PDF notation. That is, \( p(\mathbf{x};\phi_c) \) is simply rewritten as \( L(\mathbf{x};\phi_c)\). The log likelihood function is given by

$$ ln \; L(\mathbf{x};\phi_c) = -\frac{N}{2} ln\left(2 \pi \sigma^2\right) - \frac{1}{2 \sigma^2} \sum_{n=0}^{N-1} \left( x[n] - A \; cos(2 \pi f_c n + \phi_c) \right)^2 $$

For simplicity, we will denote \( \phi_c \) as \( \phi \). Next, take the first partial derivative of the log likelihood function with respect to \( \phi \):

$$ \frac{\partial \; ln \; L(\mathbf{x};\phi)}{\partial \phi} = -\frac{A}{\sigma^2} \sum_{n=0}^{N-1} \left[ x[n] \; sin(2 \pi f_c n + \phi) - \frac{A}{2} sin(4 \pi f_c n + 2\phi) \right] $$

Taking the second partial derivative of the log likelihood function,

$$ \frac{\partial^2 \; ln \; L(\mathbf{x};\phi)}{\partial \phi^2} = -\frac{A}{\sigma^2} \sum_{n=0}^{N-1} \left[ x[n] \; cos(2 \pi f_c n + \phi) - A \; cos(4 \pi f_c n + 2\phi) \right] $$

Since the above term is still dependent on the observed samples \(x[n]\), take expectation of the entire equation to average out the variations.

$$ E\left[ \frac{\partial^2 \; ln \; L(\mathbf{x};\phi)}{\partial \phi^2} \right] = -\frac{A}{\sigma^2} \sum_{n=0}^{N-1} \left[ A \; cos^2(2 \pi f_c n + \phi) - A \; cos(4 \pi f_c n + 2\phi) \right] $$

$$ = -\frac{A^2}{\sigma^2} \sum_{n=0}^{N-1} \left[ \frac{1}{2} - \frac{1}{2} cos(4 \pi f_c n + 2\phi) \right] \approx -\frac{N A^2}{2 \sigma^2} $$

since \( \sum_{n=0}^{N-1} cos(4 \pi f_c n + 2\phi) \approx 0 \) when \( f_c \) is not close to \(0\) or \(1/2\).

Let's derive the Fisher Information and the CRLB, and find out whether an efficient estimator exists for this problem.

Fisher Information:

The Fisher Information can be derived using

$$ I(\phi) = -E\left[ \frac{\partial^2 \; ln \; L(\mathbf{x};\phi)}{\partial \phi^2} \right] \approx \frac{N A^2}{2 \sigma^2} $$

Cramer Rao Lower Bound:

The CRLB is the reciprocal of Fisher Information.

$$ CRLB = \frac{1}{I(\phi)} = \frac{2 \sigma^2}{N A^2} $$

The variance of any estimator that estimates the phase of the carrier for the given problem will always be greater than or equal to this CRLB. That is,

$$ var(\hat{\phi}) \geq \frac{2\sigma^2}{NA^2} $$

The variance of the phase estimate approaches the CRLB as \( N \to \infty \), that is, \( var(\hat{\phi}) \to CRLB \). Estimators with this property are called "Asymptotically Efficient Estimators".
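The Monte Carlo sketch below illustrates this behaviour. The phase estimator used here, \( \hat{\phi} = -tan^{-1}\left( \sum x[n] \, sin(2 \pi f_c n) \big/ \sum x[n] \, cos(2 \pi f_c n) \right) \), is the commonly used approximate maximum likelihood phase estimator; it is not derived in this article, and all numeric values are assumptions chosen for illustration. Its variance approaches \(2\sigma^2/(NA^2)\) as \(N\) grows.

```python
import numpy as np

rng = np.random.default_rng(3)
A, fc, phi, sigma2, trials = 1.0, 0.1, 0.4, 0.5, 5000

for N in (50, 200, 1000):
    n = np.arange(N)
    s = A * np.cos(2 * np.pi * fc * n + phi)                     # noiseless carrier samples
    x = s + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))   # AWGN observations

    # Approximate ML phase estimate via correlation with quadrature references
    c = x @ np.cos(2 * np.pi * fc * n)
    q = x @ np.sin(2 * np.pi * fc * n)
    phi_hat = -np.arctan2(q, c)

    err = np.angle(np.exp(1j * (phi_hat - phi)))                 # wrap the error to (-pi, pi]
    crlb = 2 * sigma2 / (N * A**2)
    print(N, np.var(err), crlb)                                  # estimator variance vs CRLB
```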

Asymptotically Efficient Estimators

An efficient estimator exists if and only if the first partial derivative of the log likelihood function can be written in the form

$$ \frac{\partial \; ln \; L(\mathbf{x};\phi)}{\partial \phi} = I(\phi) \left( g(\mathbf{x}) - \phi \right) $$

Re-writing our earlier result,

$$ \frac{\partial \; ln \; L(\mathbf{x};\phi)}{\partial \phi} = -\frac{A}{\sigma^2} \sum_{n=0}^{N-1} \left[ x[n] \; sin(2 \pi f_c n + \phi) - \frac{A}{2} sin(4 \pi f_c n + 2\phi) \right] $$

We can clearly see that the above result cannot be put in the required form. Thus, an efficient estimator does not exist for this problem.


Efficient Estimators and CRLB

It has been reiterated that not all estimators are efficient. Not even all MVUEs are efficient. Then how do we determine whether the estimator we designed is efficient or not?

An efficient estimator is defined as one that
* is unbiased (the mean of the estimate equals the true value of the parameter)
* attains the CRLB

How to Identify Efficient Estimators?

As mentioned in the previous article, the first partial derivative of the log likelihood function of the observed signal model may (though not always) be written in the form below.

$$\frac{\partial\;ln\;L(\mathbf{x};\theta) }{\partial \theta } = I(\theta)\left( g(\mathbf{x})-\theta \right) $$

If the first partial derivative of the log likelihood function can be written in the above form, then the estimator \( g(\mathbf{x})\) is an efficient estimator.

Example:

In another previous article, the CRLB for an estimator that estimates the DC component from a set of observed samples (affected by AWGN noise) was derived. The intermediate step that establishes the above form for that scenario is given below.

$$ \frac{\partial \; ln \; L(\mathbf{x};A)}{\partial A} = \frac{1}{\sigma^2} \sum_{n=0}^{N-1} \left( x[n] - A \right) = \frac{N}{\sigma^2} \left( \frac{1}{N} \sum_{n=0}^{N-1} x[n] - A \right) $$

From the above equation, it can be seen that an efficient estimator exists for this case and it is given by \( g(\mathbf{x})\): the sample mean of the observed samples.

$$ \hat{A} = g(\mathbf{x}) = \frac{1}{N} \sum_{n=0}^{N-1} x[n] $$
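A quick numerical confirmation (a minimal sketch; the parameter values are assumptions made for this example): since the sample mean is efficient for this model, its Monte Carlo variance should coincide with the CRLB \(\sigma^2/N\), and its mean should equal \(A\) (unbiasedness).

```python
import numpy as np

rng = np.random.default_rng(4)
A, sigma2, N, trials = 3.0, 1.0, 25, 200000

x = rng.normal(A, np.sqrt(sigma2), size=(trials, N))
A_hat = x.mean(axis=1)              # g(x): sample mean in each trial

print(np.mean(A_hat), A)            # unbiased: mean of the estimates ~ A
print(np.var(A_hat), sigma2 / N)    # efficient: variance ~ CRLB = sigma^2/N
```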


Applying Cramer Rao Lower Bound (CRLB) to find a Minimum Variance Unbiased Estimator (MVUE)

It was mentioned in one of the earlier articles that the CRLB may provide a way to find an MVUE (Minimum Variance Unbiased Estimator).

Theorem:

There exists an unbiased estimator that attains CRLB if and only if,

$$ \frac{\partial \; ln \; L(\mathbf{x};\theta)}{\partial \theta} = I(\theta) \left( g(\mathbf{x}) - \theta \right) $$

Here \( ln \; L(\mathbf{x};\theta) \) is the log likelihood function of x parameterized by \(\theta\) – the parameter to be estimated, \( I(\theta)\) is the Fisher Information and \( g(x)\) is some function.

Then, the estimator that attains CRLB is given by

$$ \hat{\theta} = g(\mathbf{x}), \;\;\; var(\hat{\theta}) = \frac{1}{I(\theta)} $$

Steps to find MVUE using CRLB:

If we can write the first partial derivative of the log likelihood function in the form given above, in terms of the Fisher Information \(I(\theta)\) and some function \( g(\mathbf{x})\), then \(g(\mathbf{x})\) is a Minimum Variance Unbiased Estimator.
1) Given a signal model \( x \), compute \(\frac{\partial\;ln\;L(\mathbf{x};\theta) }{\partial \theta }\)
2) Check if the above computation can be put in the form given in the above theorem
3) Then \(g(\mathbf{x})\) gives an MVUE

Let’s look at how CRLB can be used to find an MVUE for a signal that has a DC component embedded in AWGN noise.

Finding a MVUE to estimate DC component embedded in noise:

Consider the signal model where a DC component \(A\) is embedded in AWGN noise with zero mean and variance=\(\sigma^2 \).
Our goal is to find an MVUE that could estimate the DC component from the observed samples \(x[n]\).

$$x[n] = A + w[n], \;\;\; n=0,1,2,\cdots,N-1 $$

We calculate the CRLB and see if it can help us find an MVUE.

From the previous derivation

$$ \frac{\partial \; ln \; L(\mathbf{x};A)}{\partial A} = \frac{1}{\sigma^2} \sum_{n=0}^{N-1} \left( x[n] - A \right) $$

From the above equation we can readily identify \( I(A)\) and \(g(\mathbf{x})\) as follows

$$ \frac{\partial \; ln \; L(\mathbf{x};A)}{\partial A} = \frac{N}{\sigma^2} \left( \frac{1}{N} \sum_{n=0}^{N-1} x[n] - A \right) = I(A) \left( g(\mathbf{x}) - A \right) $$

Thus, the Fisher Information \(I(A)\) and the MVUE \(g(\mathbf{x})\) are given by

$$ I(A) = \frac{N}{\sigma^2}, \;\;\; g(\mathbf{x}) = \frac{1}{N} \sum_{n=0}^{N-1} x[n] $$

Thus for a signal model which has a DC component in AWGN, the sample mean of observed samples \(x[n]\) gives a Minimum Variance Unbiased Estimator to estimate the DC component.
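To see what "minimum variance" buys here, the sketch below (an illustrative Monte Carlo comparison with assumed parameter values, not part of the original article) compares the sample mean with another unbiased estimator of \(A\), the sample median. For Gaussian noise the median remains unbiased, but its variance stays roughly \(\pi/2\) times above the CRLB \(\sigma^2/N\), while the sample mean attains the bound.

```python
import numpy as np

rng = np.random.default_rng(5)
A, sigma2, N, trials = 1.0, 2.0, 101, 50000

x = rng.normal(A, np.sqrt(sigma2), size=(trials, N))
mean_est = x.mean(axis=1)           # MVUE for this model: attains the CRLB
median_est = np.median(x, axis=1)   # unbiased for symmetric noise, but not minimum variance

print("CRLB        :", sigma2 / N)
print("var(mean)   :", np.var(mean_est))      # ~ sigma^2/N
print("var(median) :", np.var(median_est))    # ~ (pi/2) * sigma^2/N, i.e. larger
```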


Cramer Rao Lower Bound for Scalar Parameter Estimation

Consider a set of observed data samples \( X=\{X_1,X_2,X_3,\cdots, X_N\} \) and a scalar parameter \( \theta \) that is to be estimated from the observed samples. The accuracy of the estimate depends on how strongly the observed data is influenced by the parameter \( \theta \). The observed data is considered as random data whose PDF is influenced by \( \theta \). The PDF \( p(x;\theta)\) describes the dependence of \(X\) on \(\theta\).

If the PDF \(p(x;\theta)\) depends weakly on \( \theta \), then the estimates will be poor. If the PDF \( p(x;\theta)\) depends strongly on \( \theta \), then the estimates will be good.

As seen in the previous section, the curvature of the likelihood function (the Fisher Information) is related to the concentration of the PDF: the greater the curvature, the more concentrated the PDF and the more accurate the estimates. The Fisher Information is calculated from the log likelihood function as

$$I(\theta) = -E\left [ \frac{\partial^2 ln L(\theta)}{\partial \theta^2} \right ]$$

Under the regularity condition that the expected value of the score of the log likelihood function is zero,

$$ E\left [ \frac{\partial ln L(\theta) }{\partial \theta } \right ] = 0 \;\;\; \forall\theta $$

The inverse of the Fisher Information gives the Cramer Rao Lower Bound (CRLB).

Theoretical method to find CRLB:

1) Given a model for the observed data samples \(x\), write the log likelihood function as a function of \( \theta \): \( ln\;L(\theta)\)
2) Keep \(x\) fixed and take the second partial derivative of the log likelihood function with respect to the parameter to be estimated:
\( \frac{\partial^2 ln L(\theta)}{\partial \theta^2} \)
3) If the result depends on \(x\), fix \( \theta\) and take the expected value with respect to \(x\). This step can be skipped if the result does not depend on \(x\).
4) If the result depends on \( \theta \), then evaluate the result at the specific values of \( \theta \) of interest.
5) Negate the result and take the reciprocal; this gives the CRLB.

Derivation of CRLB for an embedded DC component in AWGN Noise:

$$ \text{Data Model: } x[n] = A + w[n], \;\;\; n=0,1,2,\cdots,N-1 $$

Here \(A\) is a constant DC value that has to be estimated from the observed data samples \(x[n]\) and \(w[n]\) is the AWGN noise with zero mean and variance=\(\sigma^2 \).

Given the fact that the samples are influenced by AWGN noise with zero mean and variance=\(\sigma^2 \), the likelihood function can be written as

$$ L(x;A) = p(x;A) = \frac{1}{\left(2 \pi \sigma^2\right)^{\frac{N}{2}}} exp\left[ -\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A)^2 \right] $$

The log likelihood function is formed as,

$$ ln\;L(x;A) =-\frac{N}{2}ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A)^2 $$

Taking the first partial derivative of the log likelihood function with respect to \(A\),

$$ \frac{\partial \; ln\;L(x;A)}{\partial A} = \frac{1}{\sigma^2}\sum_{n=0}^{N-1}(x[n]-A) $$

Computing the second partial derivative of the log likelihood function by differentiating one more time,

$$ \frac{\partial^2 \; ln\;L(x;A)}{\partial A^2} = -\frac{N}{\sigma^2} $$

The Fisher Information is given by taking the expectation and negating it.

$$ I(A) = -E\left[ \frac{\partial^2 \; ln\;L(x;A)}{\partial A^2} \right] = \frac{N}{\sigma^2} $$

The Cramer Rao Lower Bound is the reciprocal of the Fisher Information \(I(A)\).

$$ CRLB = \frac{1}{I(A)} = \frac{\sigma^2}{N} $$

The variance of any estimator that estimates the DC component \(A\) from the given observed samples will always be greater than or equal to the CRLB. That is, the CRLB acts as the lower bound for the variance of the estimates. This can be conveniently represented as

$$ var(\hat{A}) \geq \frac{\sigma^2}{N}$$
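The five-step procedure listed earlier can be traced symbolically for this model. The SymPy sketch below is only an illustration (not part of the original article): it uses the log likelihood of a single sample and the standard fact that for i.i.d. samples the Fisher Information is \(N\) times the per-sample value; the expectation step is trivial here because the second derivative does not depend on \(x[n]\).

```python
import sympy as sp

# Symbols: DC level A and one observed sample x (real), noise variance and sample count (positive)
A, x = sp.symbols('A x', real=True)
sigma2, N = sp.symbols('sigma2 N', positive=True)

# Step 1: log likelihood of a single Gaussian sample x ~ N(A, sigma^2)
lnL_one = -sp.Rational(1, 2) * sp.log(2 * sp.pi * sigma2) - (x - A)**2 / (2 * sigma2)

# Step 2: second partial derivative with respect to A
d2 = sp.diff(lnL_one, A, 2)      # -> -1/sigma2 (does not depend on x)

# Steps 3 and 4: expectation over x is trivial and the result does not depend on A
# Step 5: negate, scale by N (i.i.d. samples), and take the reciprocal
fisher = -N * d2                 # I(A) = N/sigma2
crlb = sp.simplify(1 / fisher)   # sigma2/N

print(d2, fisher, crlb)
```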

Tweaking the CRLB:

Now that we have found an expression for the CRLB for estimating the DC component, we can look at what affects the CRLB. From the expression for the CRLB, the following points can be inferred.

1) The CRLB does not depend on the parameter to be estimated (\(A\))
2) The CRLB increases linearly with \(\sigma^2 \)
3) The CRLB decreases inversely with \(N \)


Introduction to Cramer Rao Lower Bound (CRLB)

The criteria for the existence of a Minimum Variance Unbiased Estimator (MVUE) were discussed in a previous article. To have an MVUE, it is necessary to have estimates that are unbiased and that have minimum variance among all unbiased estimators. This is given by the following two equations:
$$ E\left\{\hat{f}_0 \right\} = f_0 $$

$$ \sigma^{2}_{\hat{f}_0}=E\left\{(\hat{f}_0 - E[\hat{f}_0])^2 \right\} \; should \; be \; minimum $$

For an MVUE, it is easier to verify the first criterion (unbiasedness) using the first equation, but verifying the second criterion (minimum variance) is tricky. We can calculate the variance of our estimator, but how can we make sure that it is "the minimum"? There may exist numerous other unbiased estimators (which we may not know about) that give lower variance. In other words, how do we make sure that our estimator is the best of all unbiased estimators? The Cramer-Rao Lower Bound (CRLB) may come to our rescue.

Cramer-Rao Lower Bound (CRLB):

Harald Cramer and Radhakrishna Rao derived a way to express the lower bound on the variance of unbiased estimators that estimate deterministic parameters. This lower bound is called the Cramer-Rao Lower Bound (CRLB).

If \(\hat{\theta} \) is an unbiased estimate of a deterministic parameter \(\theta \), then the relationship between the variance of the estimates ( \( {{\sigma}^2}_{\hat{\theta}}\)) and CRLB can be expressed as

$$ {{\sigma}^2}_{\hat{\theta}} \left( \theta \right ) \geq CRLB \left( \theta \right) \Rightarrow {\sigma}_{\hat{\theta}} \left( \theta \right ) \geq \sqrt{CRLB \left( \theta \right)} $$

The CRLB tells us the minimum variance that we can expect to get from an unbiased estimator.

Applications of the CRLB include:

1) Making judgments on proposed estimators. Estimators whose variance is not close to the CRLB are considered inferior.
2) Doing feasibility studies as to whether a particular estimator/system can meet given specifications. It is also used to rule out impossible estimators – no estimator can beat the CRLB.
3) Serving as a benchmark for comparing unbiased estimators.
4) It may sometimes help in finding the MVUE: if an unbiased estimator achieves the CRLB, then it is an MVUE.

Feasibility Studies :

Derivation of the CRLB for a particular scenario or proposed estimation algorithm is often found in research texts. The derived theoretical CRLB for a system/algorithm is compared with the actual variance of the implemented system and conclusions are drawn. For example, in the paper titled "A Novel Frequency Synchronization Algorithm and its Cramer Rao Bound in Practical UWB Environment for MB-OFDM Systems"[1], a frequency offset estimation algorithm was proposed for estimating frequency offsets in multi-band orthogonal frequency division multiplexing (MB-OFDM) systems. The performance of the algorithm was studied by BER analysis (Eb/N0 vs BER curves). Additionally, the estimator performance was further validated by comparing the simulated estimator variance with the derived theoretical CRLB for four UWB channel models.

Reference

[1] Debrati et al,“A Novel Frequency Synchronization Algorithm and its Cramer Rao Bound in Practical UWB Environment for MB-OFDM Systems”, RADIOENGINEERING, VOL. 18, NO. 1, APRIL 2009


Minimum Variance Unbiased Estimators (MVUE)


As discussed in the introduction to estimation theory, the goal of an estimation algorithm is to give an estimate of the unknown parameter(s) that is unbiased and has minimum variance. These criteria are reproduced here for reference.

$$ E\left\{\hat{f}_0 \right\} = f_0 $$

$$ \sigma^{2}_{\hat{f}_0}=E\left\{(\hat{f}_0 - E[\hat{f}_0])^2 \right\} $$

In the above equations \( f_0 \) is the transmitted carrier frequency and \( \hat{f}_0 \) is the estimated frequency based on a set of observed data ( See previous article).

Existence of Minimum Variance Unbiased Estimator (MVUE):

The estimator described above is called a Minimum Variance Unbiased Estimator (MVUE), since the estimates are unbiased and have minimum variance. Sometimes there may not exist any MVUE for a given scenario or set of data. This can happen in two ways:
1) No unbiased estimator exists
2) Even if unbiased estimators exist, none of them gives uniformly minimum variance.

Consider that we have three unbiased estimators \( g_1 \), \( g_2 \) and \( g_3 \) that give estimates of a deterministic parameter \( \theta \). Let the unbiased estimates be \( \hat{\theta}_1 \), \( \hat{\theta}_2 \) and \( \hat{\theta}_3 \) respectively.

Figure 1 illustrates two scenarios for the existence of an MVUE among the three estimators. In Figure 1a, the third estimator gives uniformly minimum variance compared to the other two estimators. In Figure 1b, none of the estimators gives minimum variance that is uniform across the entire range of \(\theta\).

Figure 1: Illustration of the existence of a Minimum Variance Unbiased Estimator (MVUE)

Methods to find MVU Estimator:

1) Determine the Cramer-Rao Lower Bound (CRLB) and check if some estimator satisfies it. If an estimator exists whose variance equals the CRLB for each value of \( \theta \), then it must be the MVU estimator. It may happen that no estimator achieves the CRLB.

2) Use the Rao-Blackwell-Lehmann-Scheffe (RBLS) Theorem: Find a sufficient statistic and then find a function of the sufficient statistic. This function gives the MVUE. This approach is rarely used in practice.

3) Restrict the solution to linear estimators that are unbiased. This gives the Minimum Variance Linear Unbiased Estimator (MVLUE). This method gives the MVUE only if the problem is truly linear.

External Links:

[1] Notes on Cramer Rao Lower Bound (CRLB)
[2] Notes on Rao-Blackwell-Lehmann-Scheffe (RBLS) Theorem
