Why Cholesky Decomposition? A sample case


Matrix inversion appears ubiquitously in signal processing applications. For example, it is an important step in channel estimation and equalization. In a GSM normal burst, for instance, a 26-bit training sequence is embedded amid 114 information bits. When the burst travels over the air interface (the channel), it suffers distortions due to channel effects such as Inter-Symbol Interference (ISI). It therefore becomes necessary to estimate the channel impulse response (H) and equalize the effects of the channel before attempting to decode/demodulate the information bits. The training bits are used to estimate the channel impulse response.
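
To make this concrete, here is a minimal sketch of estimating an impulse response from known training symbols by least squares. All values are assumed for illustration (a hypothetical 3-tap channel, random training symbols rather than an actual GSM training sequence); note that this step, too, reduces to a matrix inversion:

```python
import numpy as np

rng = np.random.default_rng(1)

h_true = np.array([1.0, 0.5, 0.2])      # assumed 3-tap impulse response
t = rng.choice([-1.0, 1.0], size=26)    # known training symbols (26, as in GSM)

# Convolution matrix T of the training sequence, so that y = T h + noise.
T = np.column_stack([np.concatenate([np.zeros(k), t[:t.size - k]])
                     for k in range(h_true.size)])
y = T @ h_true + 0.05 * rng.standard_normal(t.size)

# Least-squares channel estimate via the normal equations:
# h_hat = (T^T T)^{-1} T^T y  -- note the matrix inverse again.
h_hat = np.linalg.solve(T.T @ T, T.T @ y)
print(h_hat)   # close to h_true
```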

If the transmitted signal "x" travels over a multipath fading channel (H) in the presence of additive white Gaussian noise (AWGN) "w", the received signal is modeled as

y = Hx + w

A Minimum Mean Square Error (MMSE) linear equalizer, employed to cancel out the effects of ISI, attempts to minimize the error between the equalizer output \hat{x} and the transmitted signal x. If the AWGN noise power is \sigma^2, the equalizer is given by the following equation [1]:

\hat{x} = \left( H^H H + \sigma^2 I \right)^{-1} H^H y

Note that this expression requires computing the matrix inverse \left( H^H H + \sigma^2 I \right)^{-1}.
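
To see the equalizer at work, here is a minimal numerical sketch (all values assumed for illustration: a hypothetical 3-tap channel and a length-8 BPSK block) that forms the channel matrix and applies the MMSE expression directly, with the inverse computed naively:

```python
import numpy as np

rng = np.random.default_rng(0)

h = np.array([1.0, 0.5, 0.2])   # assumed channel impulse response taps
N = 8                           # block length (assumed)
sigma2 = 0.1                    # assumed AWGN noise power

# Build the N x N channel convolution matrix H from the taps.
H = np.zeros((N, N))
for i in range(N):
    for j, tap in enumerate(h):
        if i + j < N:
            H[i + j, i] = tap

x = rng.choice([-1.0, 1.0], size=N)            # BPSK symbols
w = np.sqrt(sigma2) * rng.standard_normal(N)   # AWGN
y = H @ x + w                                  # received signal: y = Hx + w

# MMSE equalizer: x_hat = (H^H H + sigma^2 I)^{-1} H^H y
A = H.conj().T @ H + sigma2 * np.eye(N)
x_hat = np.linalg.inv(A) @ H.conj().T @ y      # explicit inverse (naive)

print(np.sign(x_hat))   # compare against the transmitted x
```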

Matrix inversion is a tricky subject. Not all matrices are invertible. Furthermore, the ordinary technique of inverting a matrix through its adjugate (adjoint) consumes a lot of memory and computation time. Physical layer (PHY) algorithm designers therefore typically use Cholesky decomposition: the matrix being inverted here, H^H H + \sigma^2 I, is Hermitian and positive definite, which is exactly the class of matrices Cholesky decomposition handles. This reduces the computational complexity of matrix inversion.
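
As a sketch of that idea (the function name and structure below are mine, not from the reference): since A = H^H H + \sigma^2 I factors as A = LL^H with L lower triangular, \hat{x} follows from one forward and one backward substitution, and no explicit inverse is ever formed:

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def mmse_equalize_cholesky(H, y, sigma2):
    """MMSE equalizer output via a Cholesky solve (no explicit inverse)."""
    A = H.conj().T @ H + sigma2 * np.eye(H.shape[1])  # Hermitian positive definite
    b = H.conj().T @ y
    L = cholesky(A, lower=True)                       # A = L L^H
    z = solve_triangular(L, b, lower=True)            # forward substitution:  L z = b
    x_hat = solve_triangular(L.conj().T, z, lower=False)  # backward: L^H x_hat = z
    return x_hat
```

The factorization costs roughly N^3/3 flops and each triangular solve only O(N^2), which is why PHY implementations prefer this route over forming the inverse outright.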

Reference:

[1] Marius Vollmer et al., "Comparative Study of Joint-Detection Techniques for TD-CDMA Based Mobile Radio Systems", IEEE Journal on Selected Areas in Communications, Vol. 19, No. 8, August 2001.

See Also:

[1] An Introduction to Estimation Theory
[2] Bias of an Estimator
[3] Minimum Variance Unbiased Estimators (MVUE)
[4] Maximum Likelihood Estimation
[5] Maximum Likelihood Decoding
[6] Probability and Random Process
[7] Likelihood Function and Maximum Likelihood Estimation (MLE)
[8] Score, Fisher Information and Estimator Sensitivity
[9] Introduction to Cramer Rao Lower Bound (CRLB)
[10] Cramer Rao Lower Bound for Scalar Parameter Estimation
[11] Applying Cramer Rao Lower Bound (CRLB) to find a Minimum Variance Unbiased Estimator (MVUE)
[12] Efficient Estimators and CRLB
[13] Cramer Rao Lower Bound for Phase Estimation
[14] Normalized CRLB - an alternate form of CRLB and its relation to estimator sensitivity
[15] Cramer Rao Lower Bound (CRLB) for Vector Parameter Estimation
[16] The Mean Square Error – Why do we use it for estimation problems
[17] How to estimate unknown parameters using Ordinary Least Squares (OLS)
[18] Essential Preliminary Matrix Algebra for Signal Processing
[19] Why Cholesky Decomposition? A sample case
[20] Tests for Positive Definiteness of a Matrix
[21] Solving a Triangular Matrix using Forward & Backward Substitution
[22] Cholesky Factorization - Matlab and Python
[23] LTI system models for random signals – AR, MA and ARMA models
[24] Comparing AR and ARMA model - minimization of squared error
[25] Yule Walker Estimation
[26] AutoCorrelation (Correlogram) and persistence – Time series analysis
[27] Linear Models - Least Squares Estimator (LSE)
[28] Best Linear Unbiased Estimator (BLUE)

Post your valuable comments!!!