Cholesky Factorization and Matlab code

Any \( n \times n\) symmetric positive definite matrix \( A \) can be factored as

$$ A=LL^T $$

where \( L \) is an \(n \times n\) lower triangular matrix. The lower triangular matrix \(L\) is often called the “Cholesky factor” of \( A \), and it can be interpreted as a square root of the positive definite matrix \(A\).
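For a quick check of this relationship, Matlab's built-in chol function returns the Cholesky factor directly (the rest of this post builds the factorization from scratch instead):

>> A = [4 2; 2 3];        % a small symmetric positive definite matrix
>> L = chol(A, 'lower')   % lower triangular Cholesky factor of A
>> disp(L*L' - A)         % zero up to round-off, confirming A = L*L'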

Basic algorithm to find the Cholesky factorization:

Note: In the following text, Greek letters denote scalars, lowercase Latin letters denote column vectors, and uppercase Latin letters denote matrices.

Given a positive definite matrix \( A \), it is partitioned as follows.

$$ A=\begin{bmatrix}
\alpha_{11} & a_{21}^T \\
a_{21} & A_{22}
\end{bmatrix}, \qquad
L=\begin{bmatrix}
\lambda_{11} & 0 \\
l_{21} & L_{22}
\end{bmatrix}$$

\( \alpha_{11}, \lambda_{11} = \) the first (top-left) elements of \(A\) and \(L\) respectively
\( a_{21}, l_{21} = \) the column vectors formed by the first column of \(A\) and \(L\) starting from the second row, respectively
\( A_{22}, L_{22} = \) the remaining trailing submatrices of \(A\) and \(L\) respectively, each of size \( (n-1) \times (n-1)\)

Having partitioned the matrix \(A\) as shown above, equate the blocks of \( A=LL^T \): this gives \( \alpha_{11}=\lambda_{11}^2 \), \( a_{21}=\lambda_{11} l_{21} \) and \( A_{22}=l_{21}l^T_{21}+L_{22}L^T_{22} \). Solving these relations in order yields the following iterative procedure for computing the Cholesky factorization.

Steps in computing the Cholesky factorization:

Step 1: Compute the scalar: \( \lambda_{11}=\sqrt{\alpha_{11}}\)
Step 2: Compute the column vector: \( l_{21}= a_{21}/\lambda_{11}\)
Step 3: Compute the Schur complement: \( A_{22} := A_{22}-l_{21} l^T_{21}\). Since \( A_{22}=l_{21} l^T_{21}+L_{22}L^T_{22}\), the updated \(A_{22}\) satisfies \( A_{22}=L_{22}L^T_{22}\), i.e., \(L_{22}\) is its Cholesky factor.
Step 4: Replace \( A\) with the updated \( A_{22}\), i.e., \( A=A_{22}\)
Step 5: Repeat from Step 1 until the matrix remaining in Step 4 is of size \(1 \times 1\); its Cholesky factor is simply its square root (see the worked example below).
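For example, applying these steps once to a \(2 \times 2\) matrix factors it completely:

$$ A=\begin{bmatrix} 4 & 2 \\ 2 & 3 \end{bmatrix}: \quad \lambda_{11}=\sqrt{4}=2, \quad l_{21}=2/2=1, \quad A_{22}-l_{21}l^T_{21}=3-1=2 $$

The remaining \(1 \times 1\) matrix contributes \( \sqrt{2} \), so

$$ L=\begin{bmatrix} 2 & 0 \\ 1 & \sqrt{2} \end{bmatrix}, \qquad LL^T=\begin{bmatrix} 4 & 2 \\ 2 & 3 \end{bmatrix}=A $$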

Matlab Program (implementing the above algorithm):

Function 1: [F]=cholesky(A,option)
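The original listing is not reproduced here; a minimal sketch consistent with the algorithm above and with the sample run below could look as follows (the loop structure and error message are assumptions, not the author's exact code):

function F = cholesky(A, option)
% Computes the Cholesky factor of a symmetric positive definite
% matrix A using the iterative procedure described above.
% option = 'Lower' returns L, option = 'upper' returns L^T.
if ~isPositiveDefinite(A)
    error('Input matrix is not positive definite');  % assumed message
end
n = size(A,1);
L = zeros(n,n);
for k = 1:n
    L(k,k)     = sqrt(A(k,k));          % Step 1: lambda_11
    L(k+1:n,k) = A(k+1:n,k)/L(k,k);     % Step 2: l_21
    % Steps 3 & 4: Schur complement update of the trailing submatrix
    A(k+1:n,k+1:n) = A(k+1:n,k+1:n) - L(k+1:n,k)*L(k+1:n,k)';
end
if strcmpi(option,'upper')
    F = L.';   % upper triangular factor L^T
else
    F = L;     % lower triangular factor L
end
end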

Function 2: x=isPositiveDefinite(A)
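Again a sketch rather than the original listing; one simple test (among several possible) checks symmetry and that all eigenvalues are strictly positive:

function x = isPositiveDefinite(A)
% Returns 1 if A is symmetric positive definite, 0 otherwise.
% A symmetric matrix is positive definite if and only if all of
% its eigenvalues are strictly positive.
x = isequal(A, A.') && all(eig(A) > 0);
end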

Sample Run:

A is a randomly generated positive definite matrix. To generate a random positive definite matrix, check the link in the “External Link” section below.

>> A=[3.3821, 0.8784, 0.3613, -2.0349; 0.8784, 2.0068, 0.5587, 0.1169; 0.3613, 0.5587, 3.6656, 0.7807; -2.0349, 0.1169, 0.7807, 2.5397];

$$A=\begin{bmatrix}
3.3821 & 0.8784 & 0.3613 & -2.0349 \\
0.8784 & 2.0068 & 0.5587 & 0.1169 \\
0.3613 & 0.5587 & 3.6656 & 0.7807 \\
-2.0349 & 0.1169 & 0.7807 & 2.5397
\end{bmatrix}$$

>> cholesky(A,'Lower')

$$ ans=\begin{bmatrix}
1.8391 & 0 & 0 & 0 \\
0.4776 & 1.3337 & 0 & 0 \\
0.1964 & 0.3486 & 1.8723 & 0 \\
-1.1065 & 0.4839 & 0.4430 & 0.9408
\end{bmatrix} $$

>> cholesky(A,'upper')

$$ans=\begin{bmatrix}
1.8391 & 0.4776 & 0.1964 & -1.1065\\
0 & 1.3337 & 0.3486 & 0.4839\\
0 & 0 & 1.8723 & 0.4430\\
0 & 0 & 0 & 0.9408
\end{bmatrix}$$
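Since the factor satisfies \( A=LL^T \), the sketch above can be checked numerically; the reconstruction error should be at round-off level:

>> L = cholesky(A,'Lower');
>> norm(L*L' - A)   % on the order of machine precision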

External Link

[1] Matlab script by Pablo Ñañez to generate positive definite matrix – http://www.mathworks.com/matlabcentral/newsreader/view_thread/163489
