Why can’t I just use a matrix to solve ARMA?

Linear Time-Invariant (LTI) System Model

Key focus: “Why can’t I just use a matrix to solve ARMA?” The answer is right there in the shape of the surface—you can’t solve a “warped” landscape with a linear equation. Introduction: In signal modeling, our goal is to find a set of coefficients ($a_k$ and $b_k$) that best describe an observed signal. We … Read more

Shaping Randomness: A Guide to AR, MA, and ARMA Models

Key focus: AR, MA & ARMA models express the nature of the transfer function of an LTI system. Understand the basic idea behind those models & know their frequency responses. How do you describe a complex, random signal—like the sound of a human voice or the fluctuating power of a fading channel—using only a few numbers? The … Read more
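As a quick sketch of the idea in this post, white noise pushed through a simple all-pole (AR) filter already acquires a distinct character. The coefficient value and sample count below are illustrative assumptions, not values from the article:

```python
import numpy as np

# Sketch: shape white Gaussian noise into an AR(1) process
#   x[n] = a * x[n-1] + w[n]
# The coefficient a = 0.9 is an assumed example value.
rng = np.random.default_rng(0)
a = 0.9
n_samples = 5000
w = rng.standard_normal(n_samples)   # white-noise driving input
x = np.zeros(n_samples)
for n in range(1, n_samples):
    x[n] = a * x[n - 1] + w[n]

# For a stationary AR(1), the lag-1 autocorrelation equals a:
rho1 = np.corrcoef(x[1:], x[:-1])[0, 1]
print(rho1)   # close to 0.9
```

The single coefficient `a` thus summarizes the correlation structure of the whole signal, which is the essence of these parametric models.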

Cramér-Rao Lower Bound (CRLB)-Vector Parameter Estimation

Key focus: Applying Cramér-Rao Lower Bound (CRLB) for vector parameter estimation. Know about covariance matrix, Fisher information matrix & CRLB matrix. CRLB for Vector Parameter Estimation CRLB for scalar parameter estimation was discussed in previous posts. The same concept is extended to vector parameter estimation. Consider a set of deterministic parameters $\boldsymbol{\theta} = [\theta_1, \theta_2,$ … Read more
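A minimal numerical illustration of the scalar case the post builds on (an assumed textbook example, not taken from the article): for estimating the mean $A$ of $N$ i.i.d. Gaussian samples, the Fisher information is $N/\sigma^2$, so the CRLB is $\sigma^2/N$, and the sample mean attains it.

```python
import numpy as np

# Sketch: CRLB for the mean A of N i.i.d. samples x[n] = A + w[n],
# w ~ N(0, sigma^2). Fisher information I(A) = N / sigma^2,
# hence CRLB = sigma^2 / N. Values below are assumed for illustration.
rng = np.random.default_rng(1)
A, sigma, N, trials = 2.0, 1.5, 100, 20000
x = A + sigma * rng.standard_normal((trials, N))
A_hat = x.mean(axis=1)        # sample-mean estimator, one per trial
crlb = sigma**2 / N           # theoretical lower bound on the variance
print(A_hat.var(), crlb)      # empirical variance sits at the bound
```

The vector-parameter case generalizes this: the scalar Fisher information becomes the Fisher information matrix, and the bound becomes a matrix inequality on the covariance.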

Introducing The Kalman Filter

Introducing The Kalman Filter – Ramsey Faragher PDF Text: click here Note: Click the playlist icon (located at the top left corner of the video frame) to watch all lectures Video Lectures: Watch, Listen and Learn !!! † Link will take you to external sites Disclaimer: All the materials posted in … Read more

Minimum-variance unbiased estimator (MVUE)

As discussed in the introduction to estimation theory, the goal of an estimation algorithm is to give an estimate of random variable(s) that is unbiased and has minimum variance. These criteria are reproduced here for reference: $E\left\{\hat{f}_0\right\} = f_0$, $\sigma^{2}_{\hat{f}_0} = E\left\{ \left( \hat{f}_0 - E[\hat{f}_0] \right)^2 \right\}$ … Read more
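The unbiasedness criterion $E\{\hat{\theta}\} = \theta$ can be made concrete with the classic variance-estimator example (an assumed illustration; the post itself concerns a frequency estimator $\hat{f}_0$): dividing by $N$ yields a biased estimate, dividing by $N-1$ an unbiased one.

```python
import numpy as np

# Sketch: check unbiasedness empirically for two variance estimators.
# Samples are N(0, 4), so the true variance is sigma2 = 4.
# All values are assumed for illustration.
rng = np.random.default_rng(2)
sigma2, N, trials = 4.0, 10, 200000
x = np.sqrt(sigma2) * rng.standard_normal((trials, N))
biased   = x.var(axis=1, ddof=0).mean()   # divide by N:   E ~ (N-1)/N * sigma2
unbiased = x.var(axis=1, ddof=1).mean()   # divide by N-1: E ~ sigma2
print(biased, unbiased)
```

Averaging over many trials, the `ddof=1` estimator lands on the true variance while the `ddof=0` estimator systematically undershoots, which is exactly what the expectation criterion above rules out.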

Cholesky decomposition: Python & Matlab

Cholesky decomposition is an efficient method for inversion of symmetric positive-definite matrices. Let’s demonstrate the method in Python and Matlab. Cholesky factor Any $n \times n$ symmetric positive definite matrix $A$ can be factored as $$A=LL^T$$ where $L$ is an $n \times n$ lower triangular matrix. The lower triangular matrix $L$ is often called … Read more
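As a minimal Python sketch of the factorization and its use (the $2 \times 2$ matrix below is an assumed example): compute $L$, then solve $Ax=b$ via two triangular solves instead of an explicit inverse.

```python
import numpy as np

# Sketch: factor A = L L^T and solve A x = b with two triangular
# solves. The matrix and right-hand side are assumed examples.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])          # symmetric positive definite
b = np.array([1.0, 2.0])

L = np.linalg.cholesky(A)           # lower triangular factor
y = np.linalg.solve(L, b)           # forward solve:  L y = b
x = np.linalg.solve(L.T, y)         # backward solve: L^T x = y
print(np.allclose(A @ x, b))        # True
```

Matlab's `chol(A, 'lower')` returns the same factor; the two-solve pattern is what makes Cholesky-based inversion efficient in practice.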

Check Positive Definite Matrix in Matlab

It is often required to check if a given matrix is positive definite or not. Three methods to check the positive definiteness of a matrix were discussed in a previous article. I will utilize test method 2 to implement a small Matlab code to check if a matrix is positive definite. The test … Read more

Solve Triangular Matrix – Forward & Backward Substitution

Key focus: Know the expressions to solve a triangular matrix system using forward and backward substitution techniques, and the FLOPS required for solving it. Forward Substitution: Consider a set of equations in matrix form $Ax = b$, where $A$ is a lower triangular matrix with non-zero diagonal elements. The equation is re-written in full matrix form … Read more
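A hand-rolled sketch of forward substitution for a lower triangular system (the $3 \times 3$ example values are assumed; backward substitution is the mirror image, sweeping from the last row up):

```python
import numpy as np

# Sketch: forward substitution for L x = b, L lower triangular
# with non-zero diagonal. Example values are assumed.
def forward_substitute(L, b):
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        # x[i] = (b[i] - sum_{j<i} L[i,j] * x[j]) / L[i,i]
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

L = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [4.0, 5.0, 6.0]])
b = np.array([2.0, 7.0, 32.0])
x = forward_substitute(L, b)
print(x)   # [1. 2. 3.]
```

Each row needs one division plus the multiply-adds for the already-solved unknowns, which is where the FLOP count discussed in the post comes from.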

Tests for Positive Definiteness of a Matrix

In order to perform Cholesky decomposition of a matrix, the matrix has to be a positive definite matrix. I have listed a few simple methods to test the positive definiteness of a matrix. Methods to test positive definiteness: Remember that the term positive definiteness is valid only for symmetric matrices. Test method 1: Existence … Read more
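Two of these tests can be sketched in a few lines of Python (the example matrices are assumed): attempt a Cholesky factorization, which succeeds only for symmetric positive definite matrices, or check that all eigenvalues are positive.

```python
import numpy as np

# Sketch: test positive definiteness by (1) attempting a Cholesky
# factorization and (2) checking the eigenvalue signs.
# Example matrices are assumed for illustration.
def is_positive_definite(A):
    try:
        np.linalg.cholesky(A)   # raises LinAlgError if A is not SPD
        return True
    except np.linalg.LinAlgError:
        return False

A_spd = np.array([[2.0, -1.0], [-1.0, 2.0]])   # eigenvalues 1, 3
A_not = np.array([[1.0,  2.0], [ 2.0, 1.0]])   # eigenvalues -1, 3
print(is_positive_definite(A_spd))             # True
print(is_positive_definite(A_not))             # False
print(np.all(np.linalg.eigvalsh(A_spd) > 0))   # True
```

The Cholesky attempt is usually the cheaper test in practice, since it stops at the first failure instead of computing a full eigendecomposition.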

Why Cholesky Decomposition? A Sample Case

Matrix inversion is seen ubiquitously in signal processing applications. For example, matrix inversion is an important step in channel estimation and equalization. For instance, in a GSM normal burst, a 26-bit training sequence is placed alongside 114 information bits. When the burst travels over the air interface (channel), it is … Read more