Construct Autocorrelation Matrix in Matlab & Python

Auto-correlation, also called serial correlation, is the correlation of a given sequence with itself as a function of time lag. Cross-correlation is a more generic term that gives the correlation between two different sequences as a function of time lag.

Given two sequences $x[n]$ and $y[n]$, the cross-correlation at times separated by lag $i$ is given by

$$R_{xy}[i] = \sum_{n} x[n] \, y^*[n-i]$$

where $*$ denotes the complex conjugate operation.

Auto-correlation is a special case of cross-correlation, where x=y. One can use a brute force method (using for loops implementing the above equation) to compute the auto-correlation sequence. However, other alternatives are also at your disposal.
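
For instance, a brute-force computation in Python might look like the following (an illustrative sketch, not from the original post; the function name is hypothetical):

import numpy as np

def autocorr_bruteforce(x):
    # Compute R_xx[k] = sum_n x[n] * conj(x[n-k]) for lags k = -(N-1)..(N-1)
    x = np.asarray(x)
    N = len(x)
    acf = np.zeros(2 * N - 1, dtype=complex)
    for k in range(-(N - 1), N):
        s = 0
        for n in range(N):
            if 0 <= n - k < N:
                s += x[n] * np.conj(x[n - k])
        acf[k + N - 1] = s
    return acf

print(autocorr_bruteforce([1, 2, 3, 4]).real)  # [ 4. 11. 20. 30. 20. 11.  4.]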

Method 1: Auto-correlation using xcorr function

Matlab

For a given vector x of length N, the Matlab command xcorr(x,x) or simply xcorr(x) gives the auto-correlation sequence. For the input sequence x=[1,2,3,4], the command xcorr(x) gives the following result.

>> x=[1 2 3 4]
>> acf=xcorr(x)
acf= 4   11   20   30   20   11  4

Python

In Python, the autocorrelation of a 1-D sequence can be obtained using the numpy.correlate function. Set the parameter mode='full' to obtain the auto-correlation as a function of lag (all 2N-1 lags).

import numpy as np
x = np.asarray([1,2,3,4])
np.correlate(x, x, mode='full')
# output: array([ 4, 11, 20, 30, 20, 11,  4])

Method 2: Auto-correlation using Convolution

The auto-correlation sequence can be computed as the convolution between the given sequence and the reversed (flipped) version of the conjugate of the sequence. The conjugate operation is not needed if the input sequence is real.

Matlab

>> x=[1 2 3 4]
>> acf=conv(x,fliplr(conj(x)))
acf= 4   11   20   30   20   11  4

Python

import numpy as np
x = np.asarray([1,2,3,4])
np.convolve(x,np.conj(x)[::-1])
# output: array([ 4, 11, 20, 30, 20, 11,  4])

Method 3: Autocorrelation using Toeplitz matrix

The autocorrelation sequence can be found using Toeplitz matrices. An example of using the Toeplitz matrix structure for computing convolution is given here. The same technique is extended here, where one signal is set as the input sequence and the other is the flipped version of its conjugate. The conjugate operation is not needed if the input sequence is real.

Matlab

>> x=[1 2 3 4]
>> acf=toeplitz([x zeros(1,length(x)-1)],[x(1) zeros(1,length(x)-1)])*fliplr(x).'
acf= 4   11   20   30   20   11  4

Python

import numpy as np
from scipy.linalg import toeplitz
x = np.asarray([1,2,3,4])
first_col = np.pad(x, (0, len(x)-1), mode='constant')
first_row = np.pad([x[0]], (0, len(x)-1), mode='constant')
toeplitz(first_col, first_row) @ x[::-1]
# output: array([ 4, 11, 20, 30, 20, 11, 4])

Method 4: Auto-correlation using FFT/IFFT

The auto-correlation sequence can be found using FFT/IFFT pairs. An example of using FFT/IFFT for computing convolution is given here. The same technique is extended here, where one signal is set as the input sequence and the other is the flipped version of its conjugate. The conjugate operation is not needed if the input sequence is real. Note that the sequences are zero-padded to length L = 2N-1 before taking the FFT, so that the circular convolution implemented by the FFT/IFFT pair equals the desired linear convolution.

Matlab

>> x=[1 2 3 4]
>> L = 2*length(x)-1
>> acf=ifft(fft(x,L).*fft(fliplr(conj(x)),L))
acf= 4   11   20   30   20   11  4

Python

import numpy as np
from numpy.fft import fft, ifft  # scipy.fftpack is deprecated; numpy.fft works identically here

x = np.asarray([1,2,3,4])

L = 2*len(x) - 1
ifft(fft(x,L)*fft(np.conj(x)[::-1],L))
#output: array([ 4+0j, 11+0j, 20+0j, 30+0j, 20+0j, 11+0j, 4+0j])

Note that in all the above cases, due to the symmetry property of the auto-correlation function, $R_{xx}[-k] = R_{xx}^*[k]$, the center element represents $R_{xx}[0]$.

Constructing the Auto-correlation Matrix

The auto-correlation matrix is a special form of matrix constructed from the auto-correlation sequence. For a sequence of length N, it takes the following form:

$$R_x = \begin{bmatrix} R_{xx}[0] & R_{xx}^*[1] & \cdots & R_{xx}^*[N-1] \\ R_{xx}[1] & R_{xx}[0] & \cdots & R_{xx}^*[N-2] \\ \vdots & \vdots & \ddots & \vdots \\ R_{xx}[N-1] & R_{xx}[N-2] & \cdots & R_{xx}[0] \end{bmatrix}$$

The auto-correlation matrix is easily constructed once the auto-correlation sequence is known. The auto-correlation matrix is both a Hermitian matrix and a Toeplitz matrix, and this property is exploited in the following code for constructing it.

Matlab

>> x=[1+j 2+j 3-j] % x is complex
>> acf=conv(x,fliplr(conj(x))) % using Method 2 to compute the auto-correlation sequence
>> Rxx=acf(3:end); % R_xx(0) is the center element
>> Rx=toeplitz(Rxx,[Rxx(1) conj(Rxx(2:end))])

Python

import numpy as np
from scipy.linalg import toeplitz

x = np.asarray([1+1j, 2+1j, 3-1j])  # x is complex
acf = np.convolve(x, np.conj(x)[::-1])  # using Method 2 to compute the auto-correlation sequence
Rxx = acf[2:]  # R_xx(0) is the center element
Rx = toeplitz(Rxx, np.hstack((Rxx[0], np.conj(Rxx[1:]))))

Result:

Rx =
[[17.+0.j  8.+6.j  2.+4.j]
 [ 8.-6.j 17.+0.j  8.+6.j]
 [ 2.-4.j  8.-6.j 17.+0.j]]




Why Cholesky Decomposition? A sample case:

Matrix inversion is seen ubiquitously in signal processing applications. For example, matrix inversion is an important step in channel estimation and equalization. For instance, in the GSM normal burst, a 26-bit training sequence is placed in the middle of the 114 information bits. When the burst travels over the air interface (channel), it is subject to distortions due to channel effects such as Inter Symbol Interference (ISI). It becomes necessary to estimate the channel impulse response (H) and equalize the effects of the channel before attempting to decode/demodulate the information bits. The training bits are used to estimate the channel impulse response.

If the transmitted signal $x$ travels over a multipath fading channel ($H$) with AWGN noise $w$, the received signal is modeled as

$$y = Hx + w$$

A Minimum Mean Square Error (MMSE) linear equalizer, employed to cancel out the effects of ISI, attempts to minimize the error between the equalizer output $\hat{x}$ and the transmitted signal $x$. If the AWGN noise power is $\sigma^2$, then the equalizer is represented by the following equation [1].

$$W = \left(H^H H + \sigma^2 I\right)^{-1} H^H$$

Note that the expression involves the computation of the matrix inversion $\left(H^H H + \sigma^2 I\right)^{-1}$.

Matrix inversion is a tricky subject. Not all matrices are invertible. Furthermore, the ordinary matrix inversion technique of finding the adjoint of a matrix and using it to invert the matrix consumes a lot of memory and computation time. Physical layer (PHY) algorithm designers typically use Cholesky decomposition to invert the matrix, which helps to reduce the computational complexity of matrix inversion.
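
As a minimal sketch (not from the referenced paper), the following Python snippet solves the MMSE equation via Cholesky factorization instead of forming the inverse explicitly; the matrix sizes, random seed and noise power are made-up illustrative values.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Hypothetical example setup (illustrative values only)
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4))      # channel matrix (tall)
x = rng.standard_normal(4)           # transmitted symbols
w = 0.1 * rng.standard_normal(8)     # AWGN
y = H @ x + w                        # received signal: y = Hx + w
sigma2 = 0.01                        # noise power

# A = H^H H + sigma^2 I is symmetric positive definite,
# so it admits a Cholesky factorization A = L L^T
A = H.T @ H + sigma2 * np.eye(H.shape[1])
c, low = cho_factor(A)

# Solve A x_hat = H^H y by forward/backward substitution,
# avoiding an explicit matrix inverse
x_hat = cho_solve((c, low), H.T @ y)
print(x_hat)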

Reference:

[1] Marius Vollmer et al., "Comparative Study of Joint-Detection Techniques for TD-CDMA Based Mobile Radio Systems," IEEE Journal on Selected Areas in Communications, Vol. 19, No. 8, August 2001.

See Also:

[1] An Introduction to Estimation Theory
[2] Bias of an Estimator
[3] Minimum Variance Unbiased Estimators (MVUE)
[4] Maximum Likelihood Estimation
[5] Maximum Likelihood Decoding
[6] Probability and Random Process
[7] Likelihood Function and Maximum Likelihood Estimation (MLE)
[8] Score, Fisher Information and Estimator Sensitivity
[9] Introduction to Cramer Rao Lower Bound (CRLB)
[10] Cramer Rao Lower Bound for Scalar Parameter Estimation
[11] Applying Cramer Rao Lower Bound (CRLB) to find a Minimum Variance Unbiased Estimator (MVUE)
[12] Efficient Estimators and CRLB
[13] Cramer Rao Lower Bound for Phase Estimation
[14] Normalized CRLB - an alternate form of CRLB and its relation to estimator sensitivity
[15] Cramer Rao Lower Bound (CRLB) for Vector Parameter Estimation
[16] The Mean Square Error – Why do we use it for estimation problems
[17] How to estimate unknown parameters using Ordinary Least Squares (OLS)
[18] Essential Preliminary Matrix Algebra for Signal Processing
[19] Why Cholesky Decomposition? A sample case
[20] Tests for Positive Definiteness of a Matrix
[21] Solving a Triangular Matrix using Forward & Backward Substitution
[22] Cholesky Factorization - Matlab and Python
[23] LTI system models for random signals – AR, MA and ARMA models
[24] Comparing AR and ARMA model - minimization of squared error
[25] Yule Walker Estimation
[26] AutoCorrelation (Correlogram) and persistence – Time series analysis
[27] Linear Models - Least Squares Estimator (LSE)
[28] Best Linear Unbiased Estimator (BLUE)

Matrix Algebra for Signal Processing

Key focus: Essential matrix algebra: formation of matrices, determinants, rank, inverse & transpose of a matrix, and solving simultaneous equations.

I thought of making a post on Cholesky decomposition, which is a very essential technique in digital signal processing applications like generating correlated random variables, solving linear equations, channel estimation, etc. But jumping straight into Cholesky decomposition would leave many readers flummoxed. So I decided to touch on some essentials of basic matrix algebra before taking up advanced topics.

Prerequisite:

Basic knowledge of topics like the formation of matrices, determinants, rank of a matrix, inverse & transpose of a matrix, and solving a system of simultaneous equations using matrix algebra.

With the above prerequisites fulfilled, you will learn about different types of matrices in this post.

1. Vector

A matrix with only one row or one column.

2. Transpose of a Matrix

The transpose of a matrix is formed by interchanging rows and columns. For example, the first row of the matrix becomes the first column of the transpose matrix, the second row becomes the second column, and so on.
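
For example (an illustrative 3×3 instance, reconstructed since the original figure is unavailable):

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}, \qquad A^T = \begin{bmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{bmatrix}$$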

3. Square Matrix:

Matrix with equal number of rows and columns.
(Note: The sample matrices shown below are of 3×3 dimension. They can be readily extended to n×n dimension.)

A square matrix is further classified into the following types.

4. Symmetric Matrix :

If a square matrix and its transpose are equal, it is called a symmetric square matrix, or simply a symmetric matrix.
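
An illustrative example (values are arbitrary):

$$A = A^T = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 5 & 4 \\ 3 & 4 & 6 \end{bmatrix}$$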

5. Diagonal Matrix

A special kind of symmetric matrix, with zeros in off-diagonal locations.
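
For instance (illustrative values):

$$D = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 7 \end{bmatrix}$$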

6. Scalar matrix

A special kind of diagonal matrix, with equal values on the diagonal.
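
An illustrative case:

$$S = \begin{bmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{bmatrix}$$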

7. Identity Matrix

It is a special type of scalar matrix, where the leading diagonal elements are all one. It is denoted by I.
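
In the 3×3 case:

$$I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$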

8. Inverse of a Matrix or the notion of non-Singular Matrix:

Inverse matrices are defined for square matrices; for non-square matrices, inverses do not exist. The product of a square matrix and its inverse yields an identity matrix. For a square matrix $A$,

$$A A^{-1} = A^{-1} A = I$$

Here $A^{-1}$ denotes the inverse of the square matrix $A$.

It is possible for a square matrix to be non-invertible. Such non-invertible square matrices are called singular matrices; matrices that are invertible are called non-singular matrices.

9. Singular Matrix:

A non-invertible square matrix. Inverse does not exist.

10. Orthogonal Matrix

An invertible square matrix whose transpose and inverse are equal. If, for an invertible square matrix $A$,

$$A^T = A^{-1}$$

or equivalently $A A^T = A^T A = I$, then the matrix $A$ is called an orthogonal matrix.
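
For instance, a 2-D rotation matrix is a classic orthogonal matrix; a minimal Python check (illustrative values):

import numpy as np

theta = np.pi / 6
# A rotation matrix: its transpose equals its inverse
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(Q.T @ Q, np.eye(2)))  # True: Q^T Q = I, i.e., Q^T = Q^-1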

11. Complex Matrix :

A matrix with complex elements.

12. Complex Square Matrix

A complex matrix with an equal number of rows and columns.

13. Conjugate Transpose or Hermitian Transpose:

Similar to the transpose of a matrix defined above, with the only difference being that the complex conjugate of each element is taken when interchanging rows and columns. For a complex square matrix $A$, the conjugate transpose is denoted by $A^H$.

14. Hermitian Matrix:

A complex square matrix whose conjugate transpose is equal to itself. For a complex square matrix $A$, the Hermitian property is written as $A = A^H$. For a Hermitian matrix, the element of the complex square matrix located at the i-th row and j-th column is equal to the complex conjugate of the element located at the j-th row and i-th column.

That is, $a_{ij} = a_{ji}^*$.
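
As a quick Python check, using the auto-correlation matrix computed earlier in this post as the example, a Hermitian matrix equals its conjugate transpose:

import numpy as np

# Auto-correlation matrix from the earlier example (a Hermitian matrix)
A = np.array([[17+0j,  8+6j,  2+4j],
              [ 8-6j, 17+0j,  8+6j],
              [ 2-4j,  8-6j, 17+0j]])
print(np.allclose(A, A.conj().T))  # True: A = A^H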

15. Triangular Matrix:

A special type of square matrix, further classified into lower triangular and upper triangular matrices. For a lower triangular matrix, all the elements above the main diagonal are zeros. For an upper triangular matrix, all the elements below the main diagonal are zeros.
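
Illustrative 3×3 instances (arbitrary values):

$$L = \begin{bmatrix} 1 & 0 & 0 \\ 4 & 5 & 0 \\ 7 & 8 & 9 \end{bmatrix} \ \text{(lower)}, \qquad U = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 5 & 6 \\ 0 & 0 & 9 \end{bmatrix} \ \text{(upper)}$$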

16. Positive-Definite Matrix:

It is defined for both real matrices and complex matrices, and is the matrix equivalent of positivity of real numbers. For example, the numbers +3, +5 and +6 are definitely positive. Similarly, a positive definite matrix satisfies the following condition.

For the real case: an n×n symmetric real matrix $M$ is said to be positive definite if and only if, for all non-zero $x \in \mathbb{R}^n$,

$$x^T M x > 0$$

For the complex case: an n×n Hermitian complex matrix $M$ is said to be positive definite if and only if, for all non-zero $x \in \mathbb{C}^n$,

$$x^H M x > 0$$

17. Positive-semi-definite Matrix:

It is defined for both real matrices and complex matrices, and is the matrix equivalent of non-negativity of real numbers. For example, the numbers +3, +5 and +6 are definitely positive, but a set that also includes zero, such as {0, +3, +5, +6}, is not strictly positive, since zero is neither positive nor negative. Similarly, a positive semi-definite matrix satisfies the following condition.

For the real case: an n×n symmetric real matrix $M$ is said to be positive semi-definite if and only if, for all $x \in \mathbb{R}^n$,

$$x^T M x \geq 0$$

For the complex case: an n×n Hermitian complex matrix $M$ is said to be positive semi-definite if and only if, for all $x \in \mathbb{C}^n$,

$$x^H M x \geq 0$$

Note: The notation $\mathbb{R}$ stands for the set of real numbers; $\mathbb{R}^n$ denotes the set of real n-dimensional vectors. Similarly, $\mathbb{C}^n$ denotes the set of complex n-dimensional vectors.

18. Negative-Definite Matrix:

It is defined for both real matrices and complex matrices, and is the matrix equivalent of negativity of real numbers. For example, the numbers -3, -5 and -6 are definitely negative. Similarly, a negative definite matrix satisfies the following condition.

For the real case: an n×n symmetric real matrix $M$ is said to be negative definite if and only if, for all non-zero $x \in \mathbb{R}^n$,

$$x^T M x < 0$$

For the complex case: an n×n Hermitian complex matrix $M$ is said to be negative definite if and only if, for all non-zero $x \in \mathbb{C}^n$,

$$x^H M x < 0$$

19. Negative-semi-definite Matrix:

It is defined for both real matrices and complex matrices, and is the matrix equivalent of non-positivity of real numbers. For example, the numbers -3, -5 and -6 are definitely negative, but a set that also includes zero, such as {0, -3, -5, -6}, is not strictly negative, since zero is neither positive nor negative. Similarly, a negative semi-definite matrix satisfies the following condition.

For the real case: an n×n symmetric real matrix $M$ is said to be negative semi-definite if and only if, for all $x \in \mathbb{R}^n$,

$$x^T M x \leq 0$$

For the complex case: an n×n Hermitian complex matrix $M$ is said to be negative semi-definite if and only if, for all $x \in \mathbb{C}^n$,

$$x^H M x \leq 0$$

20. Indefinite Matrix :

A matrix that is neither positive definite, positive semi-definite, negative definite, nor negative semi-definite is called an indefinite matrix.
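
As an illustrative sketch (not from the original post), the definiteness class of a real symmetric matrix can be determined from the signs of its eigenvalues; the helper name and tolerance below are assumptions:

import numpy as np

def classify_definiteness(M, tol=1e-10):
    # Eigenvalues of a symmetric (or Hermitian) matrix are real
    eig = np.linalg.eigvalsh(M)
    if np.all(eig > tol):
        return "positive definite"
    if np.all(eig >= -tol):
        return "positive semi-definite"
    if np.all(eig < -tol):
        return "negative definite"
    if np.all(eig <= tol):
        return "negative semi-definite"
    return "indefinite"

print(classify_definiteness(np.array([[2.0, 0.0], [0.0, 3.0]])))   # positive definite
print(classify_definiteness(np.array([[1.0, 0.0], [0.0, -1.0]])))  # indefinite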

