Cramér-Rao Lower Bound (CRLB) for Vector Parameter Estimation

Key focus: Applying the Cramér-Rao Lower Bound (CRLB) to vector parameter estimation. Learn about the covariance matrix, the Fisher information matrix and the CRLB matrix.

CRLB for Vector Parameter Estimation

The CRLB for scalar parameter estimation was discussed in previous posts. The same concept is now extended to vector parameter estimation.

Consider a set of deterministic parameters \theta = [\theta_1, \theta_2, \ldots, \theta_p]^T that we wish to estimate.
The estimate is denoted in vector form as \hat{\theta} = [\hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_p]^T.

Assume that the estimator is unbiased, i.e., E[\hat{\theta}] = \theta.

Covariance Matrix

For scalar parameter estimation, the variance of the estimate was considered. For vector parameter estimation, the covariance matrix of the vector of estimates is considered.

The covariance matrix for the vector of estimates is given by

C_{\hat{\theta}} =var ( \hat{\theta} )  = E \left[  ( \hat{\theta} - \theta )( \hat{\theta} - \theta )^T \right]

For example, if A, B and C are the unknown parameters to be estimated, then the covariance matrix for the parameter vector \theta = [A,B,C]^T is given by

C_{\hat{\theta}} = \left[  \begin{matrix} var(\hat{A}) & cov(\hat{A},\hat{B}) & cov(\hat{A},\hat{C}) \\  cov(\hat{B},\hat{A}) & var(\hat{B}) & cov(\hat{B},\hat{C}) \\  cov(\hat{C},\hat{A}) & cov(\hat{C},\hat{B}) & var(\hat{C}) \end{matrix} \right] 
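
As a quick numerical sketch (assuming hypothetical Monte-Carlo estimates of A, B and C, simulated here purely for illustration), the sample covariance matrix of the stacked estimates can be formed as follows:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: 1000 Monte-Carlo trials, each producing an estimate
# vector [A_hat, B_hat, C_hat]; the estimates are simply simulated around
# the assumed true values A=1, B=2, C=3 for illustration.
theta_true = np.array([1.0, 2.0, 3.0])
theta_hat = theta_true + 0.1 * rng.standard_normal((1000, 3))

# Sample estimate of C_theta_hat = E[(theta_hat - theta)(theta_hat - theta)^T].
# np.cov expects variables along rows and observations along columns.
C_theta_hat = np.cov(theta_hat.T)
print(C_theta_hat)  # 3x3: variances on the diagonal, covariances off it
```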

Fisher Information Matrix

For scalar parameter estimation, the Fisher Information was considered. The same concept is extended to the vector case and is called the Fisher Information Matrix I(\theta). The (i,j)-th element of the Fisher Information Matrix I(\theta) (evaluated at the true values of the parameter vector) is given by

[I(\theta)]_{ij} = \displaystyle{-E \left[ \frac{\partial^2}{\partial \theta_i \, \partial \theta_j} \ln p(x;\theta) \right]} \;\; i,j = 1,2,3,\cdots,p
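
As a minimal sketch of how this definition is evaluated, consider a hypothetical line-fitting model x[n] = A + B n + w[n], n = 0, 1, ..., N-1 (an assumed example, not taken from this post), where w[n] is white Gaussian noise of known variance \sigma^2 and \theta = [A, B]^T. For this model the second partial derivatives of \ln p(x;\theta) are constants, so the expectation in the definition is immediate:

```python
import numpy as np

# Assumed example model: x[n] = A + B*n + w[n], n = 0..N-1,
# with w[n] white Gaussian noise of known variance sigma2.
# Taking second derivatives of ln p(x; theta) and negating the expectation
# gives the 2x2 Fisher Information Matrix:
#   I(theta) = (1/sigma2) * [[N,      sum(n)   ],
#                            [sum(n), sum(n^2) ]]
N = 50
sigma2 = 0.5
n = np.arange(N)

I_theta = (1.0 / sigma2) * np.array([[N,       n.sum()],
                                     [n.sum(), (n**2).sum()]])
print(I_theta)
```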

CRLB Matrix

Under the same regularity condition (as that of the scalar parameter estimation case),

\displaystyle{E \left[ \frac{\partial}{\partial \theta} \ln p(x;\theta) \right] = 0} \;\;\; \forall \theta

the CRLB Matrix is given by the inverse of the Fisher Information Matrix

CRLB =  I^{-1}(\theta)

Note: For scalar parameter estimation, the CRLB was shown to be the reciprocal of the Fisher Information.

This implies that the variances of the individual parameter estimates (the diagonal elements of the covariance matrix) are bounded by the corresponding diagonal elements of the CRLB matrix:

\left[ C_{\hat{\theta}} \right]_{ii} \geq \left[ I^{-1}(\theta) \right]_{ii}, \;\; i = 1,2,\cdots,p
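
Continuing the hypothetical line-fitting sketch from the previous section, the CRLB matrix is obtained by inverting the 2x2 Fisher Information Matrix, and its diagonal entries give the variance bounds for any unbiased estimates of A and B:

```python
import numpy as np

# Hypothetical line-fitting model from the earlier sketch.
N, sigma2 = 50, 0.5
n = np.arange(N)
I_theta = (1.0 / sigma2) * np.array([[N,       n.sum()],
                                     [n.sum(), (n**2).sum()]])

# CRLB matrix = inverse of the Fisher Information Matrix.
CRLB = np.linalg.inv(I_theta)

# Diagonal entries: lower bounds on var(A_hat) and var(B_hat)
# for any unbiased estimator of theta = [A, B]^T.
print("CRLB for A:", CRLB[0, 0])
print("CRLB for B:", CRLB[1, 1])
```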

More generally, the condition given above is represented as

C_{\hat{\theta}} - I^{-1}(\theta) \;\; \text{is positive semi-definite}

Note: The term positive semi-definite is the matrix equivalent of saying that a scalar is greater than or equal to zero. Similarly, positive definite is roughly the matrix equivalent of saying that a scalar is strictly greater than zero.
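
As a small numerical sketch of what this matrix condition means (using made-up matrices purely for illustration), positive semi-definiteness of the difference can be tested by checking that all of its eigenvalues are non-negative:

```python
import numpy as np

def is_positive_semidefinite(M, tol=1e-10):
    """Check positive semi-definiteness of a symmetric matrix via its eigenvalues."""
    M = 0.5 * (M + M.T)  # symmetrize to guard against tiny numerical asymmetry
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

# Made-up numbers for illustration only: an estimator covariance matrix
# and a CRLB matrix that it is supposed to dominate.
C_hat = np.array([[0.12, 0.02],
                  [0.02, 0.08]])
CRLB  = np.array([[0.10, 0.01],
                  [0.01, 0.05]])

print(is_positive_semidefinite(C_hat - CRLB))  # True -> the bound is satisfied
```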

Emphasis was placed above on the diagonal elements of the covariance and CRLB matrices. The effect of the off-diagonal elements should also be considered.
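
One way to see the effect of the off-diagonal elements (again using the hypothetical line-fitting FIM from the earlier sketch) is to compare the joint bound [I^{-1}(\theta)]_{ii} with 1/[I(\theta)]_{ii}, the bound that would apply if the other parameter were already known. A non-zero off-diagonal element makes the joint bound strictly larger:

```python
import numpy as np

# Hypothetical line-fitting model from the earlier sketch.
N, sigma2 = 50, 0.5
n = np.arange(N)
I_theta = (1.0 / sigma2) * np.array([[N,       n.sum()],
                                     [n.sum(), (n**2).sum()]])

CRLB = np.linalg.inv(I_theta)

# Bound on var(A_hat) when A and B are jointly estimated (off-diagonals
# of I(theta) come into play) versus the bound if B were known exactly.
print("joint bound on var(A_hat)   :", CRLB[0, 0])
print("bound on var(A_hat), B known:", 1.0 / I_theta[0, 0])
```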

For further reading

[1] Debrati et al., "A Novel Frequency Synchronization Algorithm and its Cramer Rao Bound in Practical UWB Environment for MB-OFDM Systems," Radioengineering, vol. 18, no. 1, April 2009.↗

Similar topics:

[1] An Introduction to Estimation Theory
[2] Bias of an Estimator
[3] Minimum Variance Unbiased Estimators (MVUE)
[4] Maximum Likelihood Estimation
[5] Maximum Likelihood Decoding
[6] Probability and Random Process
[7] Likelihood Function and Maximum Likelihood Estimation (MLE)
[8] Score, Fisher Information and Estimator Sensitivity
[9] Introduction to Cramer Rao Lower Bound (CRLB)
[10] Cramer Rao Lower Bound for Scalar Parameter Estimation
[11] Applying Cramer Rao Lower Bound (CRLB) to find a Minimum Variance Unbiased Estimator (MVUE)
[12] Efficient Estimators and CRLB
[13] Cramer Rao Lower Bound for Phase Estimation
[14] Normalized CRLB - an alternate form of CRLB and its relation to estimator sensitivity
[15] Cramer Rao Lower Bound (CRLB) for Vector Parameter Estimation
[16] The Mean Square Error – Why do we use it for estimation problems
[17] How to estimate unknown parameters using Ordinary Least Squares (OLS)
[18] Essential Preliminary Matrix Algebra for Signal Processing
[19] Why Cholesky Decomposition ? A sample case:
[20] Tests for Positive Definiteness of a Matrix
[21] Solving a Triangular Matrix using Forward & Backward Substitution
[22] Cholesky Factorization - Matlab and Python
[23] LTI system models for random signals – AR, MA and ARMA models
[24] Comparing AR and ARMA model - minimization of squared error
[25] Yule Walker Estimation
[26] AutoCorrelation (Correlogram) and persistence – Time series analysis
[27] Linear Models - Least Squares Estimator (LSE)
[28] Best Linear Unbiased Estimator (BLUE)
