Statistical measures for stochastic signals

Key focus: Discuss statistical measures for stochastic signals: mean, variance, skewness, kurtosis, histogram, scatter plot, cross-correlation and auto-correlation.

Deterministic and stochastic signals

A deterministic signal is exactly predictable over the time span of interest and can be expressed in analytic form, for example x(t) = sin(2 π fc t).

Many of the signals that are encountered in real world signal processing applications cannot be expressed or predicted using analytic equations, because they have an element of uncertainty associated with them. As a consequence, such signals are characterized using statistical concepts. Therefore, these signals are outside the realm of deterministic signals and are classified as stochastic signals.

For example, consider an electrical circuit in which we monitor the voltage across a resistor for a period of time. Under an applied electric field, atomic particles inside the resistor tend to move randomly, and this motion manifests as heat. The random thermal motion causes random fluctuations in the voltage measured across the resistor. Therefore, the measured voltage can be characterized by a probability distribution and analyzed using statistical methods, but it cannot be predicted with precision. This is an example of a signal that is a stochastic function of time. Such functions usually evolve according to certain probabilistic laws and are assumed to be generated by an underlying stochastic process (thermal motion in the example above).

Figure 1: Examples for deterministic and stochastic signals

Given the amplitude of the stochastic voltage signal at time t, we cannot predict its exact value at another time t’. However, if we observe the signal for a sufficient amount of time, we can empirically determine its probability distribution, based on which we should be able to answer questions like:

  • Given the amplitude of the voltage at time t, what is the average (expected or mean) value of the voltage at time t’?
  • How much can we expect the voltage at time t’ to fluctuate from the mean? In other words, we are interested in the variance of the voltage at time t’.
  • What is the probability that the voltage at t’ exceeds or falls below a certain threshold value?
  • How are the voltages at times t and t’ related? In other words, we are interested in correlation.

Summary of descriptive statistical measures

Different statistical measures are available to gather, review, analyze and draw conclusions from stochastic signals. Some of the statistical measures are summarized below:

Quantitative measures of shape:

In many statistical analyses, the fundamental task is to get a general idea of the shape of a distribution, and this is done using moments.

Measure of central tendency – mean – the first moment:

Measures of central tendency attempt to identify the central position in the distribution of samples that make up the stochastic signal. Mean, mode and median are different measures of central tendency. In signal processing, we are mostly interested in computing the mean, which is the average of the given samples. Given a discrete random variable X with probability mass function pX(x), the mean is given by

\[\mu = E \left[ X\right] = \sum_{x: p_X(x) > 0} x p_X(x) \]

Measure of dispersion – variance – the second moment:

Measures of dispersion describe how the values in the signal samples are spread out around a central location. Variance and standard deviation are some of the measures of dispersion. If the values are widely dispersed, the central location is said to be less representative of the values as a whole. If the values are tightly dispersed, the central location is considered more reliable.

For a discrete random variable X with probability mass function pX(x) , the variance (σX2) is given by the second central moment. The term central moment implies that this is measured relative to mean. For an electrical signal, the second moment is proportional to the average power.

\[\sigma_X^2 = E \left[ \left(X - \mu \right)^2\right] = \sum_{x: p_X(x) > 0} \left(x - \mu \right)^2 p_X(x) \]

The square root of the variance is the standard deviation:

\[\sigma_X = \sqrt{E \left[ \left(X - \mu \right)^2\right]}\]
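As a quick illustration, the sample mean, variance and standard deviation can be estimated directly from the signal samples. The following Matlab sketch uses a synthetic Gaussian sequence; the signal and its parameters are assumed here purely for illustration.

N = 10000;                 % number of samples
x = 0.75 + 0.5*randn(N,1); % synthetic signal: Gaussian noise with a DC shift
mu     = mean(x);          % sample mean - first moment
sigma2 = var(x);           % sample variance - second central moment
sigma  = std(x);           % standard deviation = sqrt(variance)
fprintf('mean = %.3f, variance = %.3f, std = %.3f\n', mu, sigma2, sigma);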

Figure 2 demonstrates the use of a histogram for getting a visual perception of the shape of the distribution of samples from two different stochastic signals. Though the signals look similar in the time domain, their histograms reveal a different picture altogether. The central location (mean) of the first signal is around zero (no DC shift in the time-domain view), whereas the mean of the second signal is at 0.75. The average power of the first signal varies widely (its histogram is widely spread out) compared to that of the second signal.

Figure 2: Histogram is a visual method that provides a general idea about the shape of a distribution.
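A histogram of this kind takes only a few lines of Matlab. The sketch below uses two assumed synthetic signals whose means and spreads merely mimic the situation described above.

x1 = randn(10000,1);             % zero-mean signal with larger spread
x2 = 0.75 + 0.3*randn(10000,1);  % DC-shifted signal with smaller spread
histogram(x1, 50); hold on;      % 50 bins
histogram(x2, 50);
legend('signal 1 (mean ~ 0)','signal 2 (mean ~ 0.75)');
xlabel('amplitude'); ylabel('count');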

Higher order moments – skewness and kurtosis:

Further characterization of the shape of a distribution includes higher order moments: skewness and kurtosis. They help identify anomalies and outliers in many signal processing applications.

Skewness provides a measure to quantify the presence of asymmetry in the shape of the distribution. In effect, skewness measures the relative size of the two tails of a distribution. Presence of asymmetry manifests as a non-zero value of the standardized third central moment. The term “standardized” stands for the normalization of the third central moment by σ³.

\[Skewness = \frac{E \left[ \left( X - \mu \right)^3 \right]}{\sigma^3}\]
Figure 3: A random sequence showing positive skewness (asymmetry in shape) in its distribution

Kurtosis measures the amount of probability in the two tails of a distribution, relative to a normal distribution of the same variance. Kurtosis is 3 for the normal distribution (perfect bell-shaped curve). If the kurtosis of a distribution is greater than 3, it implies that the tails are heavier compared to those of the normal distribution. A value less than 3 implies lighter tails compared to those of the normal distribution. Essentially, kurtosis is a measure of outliers. Kurtosis is calculated as the standardized fourth central moment.

\[Kurtosis = \frac{E \left[ \left( X - \mu \right)^4 \right]}{\sigma^4}\]
Figure 4: Histogram showing kurtosis of a random sequence vs. kurtosis of normal distribution
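Sample skewness and kurtosis can be computed with the skewness() and kurtosis() functions from Matlab's Statistics and Machine Learning Toolbox. The sketch below compares an assumed symmetric Gaussian sequence with an exponentially distributed one, which is positively skewed and heavy-tailed; the sequences are illustrative only.

x_sym  = randn(10000,1);      % symmetric Gaussian sequence
x_skew = exprnd(1,10000,1);   % exponential sequence - positively skewed
fprintf('Gaussian:    skewness = %.2f, kurtosis = %.2f\n', skewness(x_sym),  kurtosis(x_sym));
fprintf('Exponential: skewness = %.2f, kurtosis = %.2f\n', skewness(x_skew), kurtosis(x_skew));
% For the Gaussian sequence, skewness is near 0 and kurtosis is near 3;
% the exponential sequence shows positive skew and kurtosis well above 3.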

Measures of association

Statistics, such as measures of central tendency, dispersion and higher order moments, that describe a single distribution are called univariate statistics. If we are interested in the relationship between two or more variables, we have to move to at least the realm of bivariate statistics.

Measures of association summarize the strength of association between two variables. Most measures of association are scaled to a range of values. For example, a measure of association can be constructed to have a range of 0 to 1, where the value 0 implies no relationship and a value of 1 implies a perfect relationship between the two variables under study. In another case, the measure can range from -1 to 1, which helps us determine whether the two variables have a negative or positive relationship with each other.

Scatter plot

A scatter plot is a great way to visually assess the nature of the relationship between two variables. The following figure contains scatter plots between different stochastic signals, exhibiting different strengths of relationship between the signals.

Figure 5: Scatter plot of stochastic signals exhibiting different strengths of relationship
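Scatter plots like these are straightforward to produce in Matlab. In the sketch below the two compared signals are synthetic and their correlation strengths are assumed purely for illustration.

n = 1000;
x = randn(n,1);
y_strong = 0.9*x + 0.1*randn(n,1);   % strongly related to x
y_none   = randn(n,1);               % unrelated to x
subplot(1,2,1); plot(x, y_strong, 'b.'); title('strong positive relationship');
subplot(1,2,2); plot(x, y_none,   'b.'); title('no relationship');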

Correlation

Correlation functions are commonly used in signal processing, for measuring the similarity of two signals. They are especially used in signal detection and pattern recognition.

Cross-correlation

Cross-correlation is commonly used for measuring the similarity between two different signals. In signal processing, cross-correlation measures the similarity of two waveforms as a function of the time lag applied to one of the waveforms.

For discrete-time waveforms – x[n] and y[n], cross correlation is defined as

\[Corr_{xy}[l] = \sum_{n=-\infty}^{\infty} x[n]^{\ast}\, y[n+l] = \sum_{n=-\infty}^{\infty} x[n-l]^{\ast}\, y[n]\]

where * denotes the complex conjugate operation and l is the discrete time lag applied to one of the waveforms. That is, the cross-correlation can be calculated as the dot product of a sequence with each shifted version of another sequence.
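The xcorr function from Matlab's Signal Processing Toolbox computes this sum over all available lags. In the sketch below, y is an assumed delayed and noisy copy of x, so the magnitude of the cross-correlation peaks near the true delay; the delay value and noise level are illustrative.

x = randn(1000,1);
d = 25;                                     % assumed delay, for illustration
y = [zeros(d,1); x(1:end-d)] + 0.1*randn(1000,1);
[c, lags] = xcorr(y, x);                    % cross-correlation over all lags
[~, idx] = max(abs(c));
fprintf('peak of |cross-correlation| at lag %d samples\n', lags(idx));
% With Matlab's xcorr lag convention, the peak is expected near lag = +25 here.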

Auto-correlation

Auto-correlation is the cross-correlation of a waveform/sequence with itself.

For discrete-time waveform – x[n], auto-correlation is defined as

\[Corr_{xx}[l] = \sum_{n=-\infty}^{\infty} x[n]^{\ast}\, x[n+l] = \sum_{n=-\infty}^{\infty} x[n-l]^{\ast}\, x[n] \]

where * denotes the complex conjugate operation and l is the discrete time lag applied to a copy of the same waveform.
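For a quick feel of auto-correlation, xcorr can also correlate a sequence with itself. The short sketch below uses an assumed uncorrelated synthetic sequence, so the normalized auto-correlation collapses to a single peak at zero lag.

x = randn(1000,1);                    % synthetic, uncorrelated sequence
[r, lags] = xcorr(x, 100, 'coeff');   % normalized auto-correlation up to lag 100
stem(lags, r);
xlabel('lag'); ylabel('normalized auto-correlation');
% r is ~1 at lag 0 and near 0 elsewhere for an uncorrelated sequence.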

Correlation properties are useful for identifying/distinguishing a known bit sequence from a set of other possible known sequences. For example, in GPS technology, each satellite is assigned a unique Gold code sequence of length 2^10 - 1 = 1023 chips, generated from 10-bit shift registers. The cross-correlation between different Gold code sequences is very low, since the Gold codes are nearly orthogonal to each other. However, the auto-correlation of a Gold code is maximum at zero lag. The satellites are identified using these correlation properties of Gold codes. (The hardware implementation of Gold codes is described in a separate article.)

The following plots illustrate the application of auto-correlation to audio analysis. The auto-correlation of an audio signal will have a peak at zero lag (i.e., where there is no time shifting when computing the correlation), as shown in Figure 6. Figure 7 contains the same audio file, synthetically processed to produce reverberation. By looking at the time series plot in Figure 7, we cannot infer anything. However, the auto-correlation plot reveals the reverberation characteristics embedded in the audio.

Figure 6: Normalized auto-correlation of an original sound


For further reading

[1] Steven M. Kay, “Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory”, ISBN: 978-0133457117, Prentice Hall, Edition 1, 1993.

Generate color noise using Auto-Regressive (AR) model

Key focus: Learn how to generate colored noise using an auto-regressive (AR) model. Apply the Yule-Walker equations for generating power-law noises: pink noise, Brownian noise.

Auto-Regressive (AR) model

An uncorrelated Gaussian random sequence can be transformed into a correlated Gaussian random sequence using an AR time-series model. Suppose a time series random sequence x[n] is assumed to follow an auto-regressive model of the form

\[x[n] = -\sum_{k=1}^{N} a_k x[n-k] + w[n]\]

where w[n] is an uncorrelated Gaussian sequence of zero mean and variance σ²; the natural tendency is to estimate the model parameters a1, a2, …, aN. The least squares method can be applied here to find the model parameters, but the computations become cumbersome as the model order N increases. Fortunately, the AR model coefficients can be solved for using the Yule-Walker equations.

The Yule-Walker equations relate the auto-regressive model parameters to the auto-correlation rxx[k] of the random process x[n]. Finding the model parameters using the Yule-Walker equations is a two-step process:

1. Given x[n], estimate the auto-correlation rxx[k] of the process. If rxx[k] is already specified as a function, utilize it as it is (see the auto-correlation equations for the Jakes spectrum or Doppler spectrum in section 11.3.2 of the book).

2. Solve the Yule-Walker equations to find the model parameters ak and the noise variance σ².

Yule-Walker equations

The Yule-Walker equations can be compactly written as

\[r_{xx}[l] + \sum_{k=1}^{N} a_k\, r_{xx}[l-k] = \sigma^2 \delta[l], \quad l \geq 0 \quad \quad (2)\]

Written in matrix form for l = 1, …, N, the Yule-Walker equations comprise a set of N linear equations in N unknown parameters:

\[\begin{bmatrix} r_{xx}[0] & r_{xx}[-1] & \cdots & r_{xx}[-(N-1)] \\ r_{xx}[1] & r_{xx}[0] & \cdots & r_{xx}[-(N-2)] \\ \vdots & \vdots & \ddots & \vdots \\ r_{xx}[N-1] & r_{xx}[N-2] & \cdots & r_{xx}[0] \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix} = -\begin{bmatrix} r_{xx}[1] \\ r_{xx}[2] \\ \vdots \\ r_{xx}[N] \end{bmatrix} \quad \quad (3)\]

Representing equation (3) in a compact form,

\[\mathbf{R}\,\mathbf{a} = -\mathbf{r}\]

the AR model parameters can be found by solving

\[\mathbf{a} = -\mathbf{R}^{-1}\,\mathbf{r}\]

After solving for the model parameters ak, the noise variance σ² can be found by applying the estimated values of ak in equation (2) and setting l = 0. The aryule command (in Matlab and in Python’s spectrum package) efficiently solves the Yule-Walker equations using the Levinson algorithm [1][2]. Once the model parameters are obtained, the AR model can be implemented as an infinite impulse response (IIR) filter of the form

\[H(z) = \frac{1}{1 + \sum_{k=1}^{N} a_k z^{-k}}\]
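As a quick sketch of how this looks in practice, the snippet below fits an AR model to some correlated data with aryule and then reuses the fitted model to synthesize a correlated Gaussian sequence. The model order and the reference data are assumed purely for illustration.

N = 3;
x = filter(1, [1 0.9 0.6 0.5], randn(5000,1));   % some correlated reference data
[a, p] = aryule(x, N);                           % a = [1 a1 ... aN], p = estimated noise variance
y = filter(1, a, sqrt(p)*randn(5000,1));         % drive the IIR model 1/A(z) with white noise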

Example: power law noise generation

A power law in the power spectrum characterizes the fluctuating observables in many natural systems. Many natural systems exhibit 1/f^α noise: a stochastic process whose power spectral density follows a power law with an exponent α that can take values between -2 and 2. Simply put, 1/f^α noise is a colored noise with a power spectral density proportional to 1/f^α over its entire frequency range.

The 1/f^α noise can be classified into different types based on the value of α:

Violet noise – α = -2; the power spectral density is proportional to f².
Blue noise – α = -1; the power spectral density is proportional to f.
White noise – α = 0; the power spectral density is flat across the whole spectrum.
Pink noise – α = 1; the power spectral density is proportional to 1/f, i.e., it decreases by 3 dB per octave with increase in frequency.
Brownian noise – α = 2; the power spectral density is proportional to 1/f², therefore it decreases by 6 dB per octave with increase in frequency.

The power-law noise can be generated by passing a zero-mean white noise sequence through an auto-regressive (AR) filter of order N:

\[x[n] = -\sum_{k=1}^{N} a_k x[n-k] + w[n]\]

where w[n] is a zero-mean white noise process. Following the AR generation method described in [3], the coefficients of the AR filter can be generated recursively as

\[a_0 = 1, \qquad a_k = \left(k - 1 - \frac{\alpha}{2}\right)\frac{a_{k-1}}{k}, \quad k = 1, \ldots, N\]

which can be implemented as an infinite impulse response (IIR) filter using the transfer function H(z) = 1/A(z) given in the previous section.

The following script implements this method and the sample results are plotted in the next Figure.

Refer to the book for the Matlab code.
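A minimal sketch of the idea (not the book's code) is shown below: it builds the AR coefficients with the recursion above, shapes white Gaussian noise with the resulting IIR filter, and inspects the PSD with pwelch. The filter order and the number of samples are assumed values.

alpha = 1;                     % 1 -> pink noise, 2 -> Brownian noise
N     = 64;                    % AR filter order (truncation length, illustrative)
L     = 10000;                 % number of output samples
a = zeros(1, N+1); a(1) = 1;
for k = 1:N
    a(k+1) = (k - 1 - alpha/2) * a(k) / k;   % AR coefficients a_k
end
x = filter(1, a, randn(L,1));  % shape white Gaussian noise with 1/A(z)
pwelch(x);                     % PSD estimate should fall off roughly as 1/f^alpha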

Figure 1: Simulated color noise samples and their PSD estimates: pink noise (α = 1) and Brownian noise (α = 2)


References

[1] Gene H. Golub, Charles F. Van Loan, Matrix Computations, ISBN 9780801854149, Johns Hopkins University Press, 1996, p. 143.
[2] J. Durbin, The fitting of time-series models, Review of the International Statistical Institute, 28:233-243, 1960.
[3] Kasdin, N.J., Discrete Simulation of Colored Noise and Stochastic Processes and 1/f^α Power Law Noise Generation, Proceedings of the IEEE, Vol. 83, No. 5, 1995, pp. 802-827.


Generating colored noise with Jakes PSD: Spectral factorization

The aim of this article is to demonstrate the application of spectral factorization method in generating colored noise having Jakes power spectral density. Before continuing, I urge the reader to go through this post: Introduction to generating correlated Gaussian sequences.


In the spectral factorization method, a filter is designed using the desired frequency domain characteristics (like the PSD) to transform an uncorrelated Gaussian sequence w[n] into a correlated sequence x[n]. In the model shown in Figure 1, the input to the LTI system is white noise whose amplitude follows a Gaussian distribution with zero mean and variance σ², and whose power spectral density (PSD) is constant across all frequencies.

The white noise sequence w[n] drives the LTI system with frequency response H(f), producing the signal of interest x[n]. The PSD of the output process is therefore

\[S_{xx}(f) = \left|H(f)\right|^2 \sigma^2\]

Figure 1: Relationship among various power spectral densities in a filtering process

If the desired power spectral density S_{xx}(f) of the colored noise sequence is given, then, assuming a unit-variance white noise input (σ² = 1), the frequency response can be chosen as H(f) = √S_{xx}(f), and the impulse response of the LTI filter can be found by taking the inverse Fourier transform of this frequency response.

Once the impulse response of the filter is obtained, the colored noise sequence can be produced by driving the filter with a zero-mean white noise sequence of unit variance.
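A minimal Matlab sketch of this procedure is given below for an assumed, purely illustrative target PSD; the windowed Jakes filter discussed next follows the same pattern with the Jakes PSD and the Bessel-function impulse response from the book. hamming() requires the Signal Processing Toolbox.

Nfft = 512;
f    = (-Nfft/2 : Nfft/2-1)/Nfft;            % normalized frequency grid
Sxx  = 1 ./ (1 + (f/0.1).^2);                % desired PSD shape (assumed example)
H    = sqrt(Sxx);                            % zero-phase frequency response
h    = fftshift(real(ifft(ifftshift(H))));   % centered impulse response
h    = h .* hamming(Nfft).';                 % window to reduce truncation effects
x    = filter(h, 1, randn(10000,1));         % FIR-filter unit-variance white noise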

Example: Generating colored noise with Jakes PSD

For example, we wish to generate a Gaussian noise sequence whose power spectral density follows the normalized Jakes power spectral density (see section 11.3.2 in the book), given by

\[S(f) = \frac{1}{\pi f_{max} \sqrt{1 - \left(f/f_{max}\right)^2}}, \quad |f| < f_{max}\]

Applying the spectral factorization method, the frequency response of the desired filter is

\[H(f) = \sqrt{S(f)}, \quad |f| < f_{max}\]

The impulse response of the filter, given in [1], is expressed in terms of the fractional Bessel function of the first kind, the sampling interval Ts used for implementing the digital filter, and a constant by which the impulse response is normalized.

The filter can be implemented as a finite impulse response (FIR) filter structure. However, the FIR implementation requires that the impulse response be truncated to a reasonable length. Such truncation leads to ringing effects due to the Gibbs phenomenon. To avoid distortions due to truncation, the filter impulse response is usually windowed using a window function such as the Hamming window,

where the Hamming window is defined as

\[w[n] = 0.54 - 0.46 \cos\left(\frac{2 \pi n}{N-1}\right), \quad 0 \leq n \leq N-1\]

The function given in the book in section 2.6.1 implements a windowed Jakes filter using the aforementioned equations. The impulse response and the spectral characteristics of the filter are plotted in Figure 2.

Figure 2: Impulse response & spectrum of the windowed Jakes filter (fmax = 10 Hz, Ts = 0.01 s, N = 512)

A white noise sequence can be transformed into a colored noise sequence with the Jakes PSD by processing it through the implemented filter. The script (given in the book in section 2.6.1) illustrates this concept by transforming a white noise sequence into a colored noise sequence. The simulated noise samples and their PSD are plotted in Figure 3.


Reference

[1] Jeruchim et al., Simulation of communication systems – modeling, methodology, and techniques, second edition, Kluwer Academic Publishers, 2002, ISBN: 0306462672.


Generate correlated Gaussian sequence (colored noise)

Key focus: A colored noise sequence (a.k.a. correlated Gaussian sequence) is a non-white random sequence with a non-constant power spectral density across frequencies.

Introduction

Speaking of Gaussian random sequences such as Gaussian noise, we generally think that the power spectral density (PSD) of such sequences is flat. We should understand that the PSD of a Gaussian sequence need not be flat. This brings out the difference between white and colored random sequences, as captured in Figure 1.

A white noise sequence is defined as any random sequence whose PSD is constant across all frequencies. Gaussian white noise is a random sequence whose amplitudes are Gaussian distributed and whose PSD is constant. Viewed another way, a constant PSD in the frequency domain implies that the average auto-correlation function in the time domain is an impulse function (Dirac delta function). That is, the amplitude of the noise at any given time instant is correlated only with itself. Therefore, such sequences are also referred to as uncorrelated random sequences. White Gaussian noise processes are completely characterized by their mean and variance.


Figure 1: Power spectral densities of white noise and colored noise

A colored noise sequence is simply a non-white random sequence, whose PSD varies with frequency. For colored noise, the amplitude of the noise at any given time instant is correlated with the amplitude of the noise at other instants of time. Hence, colored noise sequences have an auto-correlation function other than the impulse function. Such sequences are also referred to as correlated random sequences. Colored Gaussian noise processes are completely characterized by their mean and the shape of the power spectral density (or, equivalently, the shape of the auto-correlation function).

In mobile channel model simulations, it is often required to generate correlated Gaussian random sequences with a specified mean and power spectral density (like the Jakes PSD or Gaussian PSD given in section 11.3.2 in the book). An uncorrelated Gaussian sequence can be transformed into a correlated sequence through a filtering or linear transformation that preserves the Gaussian distribution of the amplitudes but alters only the correlation property (equivalently, the power spectral density). We shall see two methods to generate colored Gaussian noise for a given mean and PSD shape:

Spectral factorization method
Auto-regressive (AR) model

Motivation

Let’s say we observe a real-world signal x[n] that has an arbitrary spectrum X(ω). We would like to describe the long sequence x[n] using very few parameters, as in applications like linear predictive coding (LPC). The modeling approach described here tries to answer the following two questions:

• Is it possible to model the first order (mean/variance) and second order (correlations, spectrum) statistics of the signal just by shaping a white noise spectrum using a transfer function H(ω)? (see Figure 1)
• Does this produce the same statistics (spectrum, correlations, mean and variance) when the system is driven by a white noise input?

If the answer is yes to the above two questions, we can simply set the modeled parameters of the system and excite the system with white noise to produce the desired real-world signal. This reduces the amount of data we wish to transmit in a communication system application. The same approach can be used to transform an uncorrelated white Gaussian noise sequence into a colored Gaussian noise sequence with desired spectral properties.

Linear time invariant (LTI) system model

In the given model, the random signal x[n] is observed. Given the observed signal x[n], the goal here is to find a model that best describes the spectral properties of x[n] under the following assumptions:
• The sequence x[n] is WSS (wide sense stationary) and ergodic.
• The input sequence w[n] to the LTI system is white noise, whose amplitudes follow a Gaussian distribution with zero mean and variance σ², with a flat power spectral density.
• The LTI system is BIBO (bounded input bounded output) stable.

Read the continuation of this post: Spectral factorization method




AutoCorrelation (Correlogram) and persistence – Time series analysis

The agenda for the subsequent series of articles is to introduce the idea of autocorrelation, the AutoCorrelation Function (ACF), the Partial AutoCorrelation Function (PACF), and the use of ACF and PACF in system identification.

Introduction

Given time series data (stock market data, sunspot numbers over a period of years, signal samples received over a communication channel, etc.), successive values in the time series often correlate with each other. This serial correlation is termed persistence or inertia, and it leads to increased power at the lower frequencies of the frequency spectrum. Persistence can drastically reduce the degrees of freedom in time series modeling (AR, MA, ARMA models). In tests for statistical significance, the presence of persistence complicates the test as it reduces the number of independent observations.

Autocorrelation function

Correlation of a time series with its own past and future values is called autocorrelation. It is also referred to as “lagged” or “serial” correlation. Positive autocorrelation is an indication of a specific form of persistence, the tendency of a system to remain in the same state from one observation to the next (example: continuous runs of 0’s or 1’s). If a time series exhibits autocorrelation, the future values of the samples probabilistically depend on the current and past samples. Thus the existence of autocorrelation can be exploited in prediction as well as in modeling time series. Autocorrelation can be assessed using the following tools:

● Time series plot
● Lagged scatterplot
● AutoCorrelation Function (ACF)

Generating a sample time series

For the purpose of illustration, let’s begin by generating two time series using an Auto-Regressive AR(1) process. An AR(1) process relates the current output sample x[n] of an LTI system to its immediate past sample x[n-1] and the white noise term w[n].

A generic AR(1) system is given by

\[x[n] = a_0 + a_1 x[n-1] + w[n]\]

Here a_0 and a_1 are the model parameters, which we will tweak to generate different sets of time series data; a_0 is a constant term which will be set to zero. Thus the model can be equivalently written as

\[x[n] = a_1 x[n-1] + w[n]\]

Let’s generate two time series data from the above model.

Model 1: a0=0, a1=1

The filter function in Matlab will be utilized to generate the output process x[n]. The filter function, in its basic form X = filter(B,A,W), takes three inputs: the vectors B and A denote the numerator and denominator coefficients (the model parameters here) of the transfer function of the LTI system in standard difference equation form, W is the white noise input to the LTI filter, and X is the output of the filter.

The transfer function of model 1 is therefore

\[H_1(z) = \frac{1}{1 - z^{-1}}\]

where B = 1 and A = [1 -1], and the input W is white noise, which can be generated using the randn function. Therefore, the above model can be implemented with the command x = filter(1,[1 -1],randn(1000,1)) to generate 1000 samples of x[n].

A=[1 -1];  %model co-effs
% generating using numerator/denominator form with noise
x1 = filter(1,A,randn(1000,1));
plot(x1,'b');

Model 2: a0=1, a1=0.5

The transfer function of this model is

\[H_2(z) = \frac{1}{1 - 0.5 z^{-1}}\]

where B = 1 and A = [1 -0.5], and the input W is white noise, which can be generated using the randn function. Therefore, the above model can be implemented with the command x = filter(1,[1 -0.5],randn(1000,1)) to generate 1000 samples of x[n].

A=[1 -0.5];  %model co-effs
% generating using numerator/denominator form with noise
x2 = filter(1,A,randn(1000,1));
plot(x2,'r');
Time-series plot of two models – where one model shows persistence and the other does not

In the plot above, the output from model 1 exhibits persistence or positive correlation: positive deviations from the mean tend to be followed by positive deviations for some duration, and negative deviations from the mean tend to be followed by negative deviations for some time. When positive deviations are followed by negative deviations or vice-versa, it is a characteristic of negative correlation. Positive correlations are strong indications of long runs of several consecutive observations above or below the mean. Negative correlations indicate a low incidence of such runs. The output of model 2 always jumps around the mean value and there is no consistent departure from the mean – no persistence (no positive correlation). The interpretation of time series plots for clues on persistence is a subjective matter and is left for trained eyes; however, it can be considered as a preliminary analysis.

Persistence – an indication of non-stationarity:

For time series analysis, it is imperative to work with stationary processes. Many of the theorems formulated in statistical signal processing assume a series to be stationary (at least in the weak sense). Processes whose probability density functions do not change with time are termed stationary (sub-classifications include strict sense stationarity (SSS), weak sense stationarity (WSS), etc.). For analysis, the joint probability distribution must remain unchanged should there be any shift in the time series. Time series with persistence – a mean that changes with time – are non-stationary, and therefore many theorems in signal processing will not apply as such.

Plotting the histograms of the two series (see next figure), we can immediately identify that the data generated by model 1 is non-stationary – the histogram varies between selected portions of the signal. In contrast, the histogram of the output from model 2 stays essentially the same; therefore, this is a stationary signal and is suitable for further analysis.

Persistence – non-stationary and stationary signal

Lagged Scatter Plots

The autocorrelation trend can also be ascertained by lagged scatter plots. In lagged scatter plots, the samples of the time series are plotted against one another, one lag at a time. A strong positive autocorrelation will show up as a linear positive slope for the particular lag value. If the scatter plot is random, it indicates no correlation at that particular lag.

figure;
% lagged versions of x1 for scatter plots at lags 1 to 4
x12 = x1(1:end-1);
x21 = x1(2:end);
x13 = x1(1:end-2);
x31 = x1(3:end);
x14 = x1(1:end-3);
x41 = x1(4:end);
x15 = x1(1:end-4);
x51 = x1(5:end);
subplot(2,2,1)
plot(x12,x21,'b*');
xlabel('X_1'); ylabel('X_2');
subplot(2,2,2)
plot(x13,x31,'b*');
xlabel('X_1'); ylabel('X_3');
subplot(2,2,3)
plot(x14,x41,'b*');
xlabel('X_1'); ylabel('X_4');
subplot(2,2,4)
plot(x15,x51,'b*');
xlabel('X_1'); ylabel('X_5');

figure;
% lagged versions of x2 for scatter plots at lags 1 to 4
x12 = x2(1:end-1);
x21 = x2(2:end);
x13 = x2(1:end-2);
x31 = x2(3:end);
x14 = x2(1:end-3);
x41 = x2(4:end);
x15 = x2(1:end-4);
x51 = x2(5:end);
subplot(2,2,1)
plot(x12,x21,'b*');
xlabel('X_1'); ylabel('X_2');
subplot(2,2,2)
plot(x13,x31,'b*');
xlabel('X_1'); ylabel('X_3');
subplot(2,2,3)
plot(x14,x41,'b*');
xlabel('X_1'); ylabel('X_4');
subplot(2,2,4)
plot(x15,x51,'b*');
xlabel('X_1'); ylabel('X_5');

The scatter plots of model 1 for the first four lags indicate strong positive correlation at all four lag values. The scatter plots of model 2 indicate a slightly positive correlation at lag 1 and no correlation at the remaining lags. This trend can be clearly seen if we plot the Auto Correlation Function (ACF).

Auto Correlation Function (ACF) or Correlogram

The ACF plot summarizes the correlation of a time series at various lags. It plots the correlation coefficient of the series against lag, one delay at a time, in the same plot. The ACF for the output of both models is plotted with the code below.

[x1c,lags] = xcorr(x1,100,'coeff');
%Plotting only positive lag values - autocorrelation is symmetric
stem(lags(101:end),x1c(101:end));
[x2c,lags] = xcorr(x2,100,'coeff');
stem(lags(101:end),x2c(101:end))

The ACF plot of model 1 indicates strong persistence across all lags. The ACF plot of model 2 indicates significant correlation only at lag 1 (lag 0 will obviously correlate fully), which concurs with the lagged scatter plots.

Auto Correlation Function or correlogram

For a stationary series, the correlogram has very few significant spikes at small lags and dies down quickly. Thus model 2 produces a stationary series, whereas model 1 does not. Also, model 2 is suitable for further time series analysis.

Continue reading on constructing an autocorrelation matrix…




Yule Walker Estimation and simulation in Matlab

If a time series x[n] is assumed to be following an Auto-Regressive AR(N) model of the form

\[x[n] = -\sum_{k=1}^{N} a_k x[n-k] + w[n] \quad \quad (1)\]

the natural tendency is to estimate the model parameters a1, a2, …, aN.

Least squares method can be applied here to estimate the model parameters but the computations become cumbersome as the order N increases.

Fortunately, the AR model co-efficients can be solved for using Yule Walker Equations.

Yule-Walker Equations

The Yule-Walker equations relate the auto-regressive model parameters to the auto-covariance rxx[k] of the random process x[n]. They can be applied to find the AR(N) model parameters from given time-series data x[n] that can indeed be assumed to be an AR process (visually examining the auto-correlation function (ACF) and partial auto-correlation function (PACF) may give a clue as to whether the data can be assumed to be an AR or MA process with an appropriate model order N).

Steps to find model parameters using Yule-Walker Equations:

  • Given x[n], estimate the auto-covariance rxx[k] of the process
  • Solve the Yule-Walker equations to find ak (k = 1, …, N) and σ² from the equations derived below

To solve the Yule-Walker equations, one has to know the model order. Although the Yule-Walker equations can be used to find the model parameters, they cannot give any insight into the model order N directly. Several methods exist to ascertain the model order.

  • ACF and PACF: the Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF) may give insight into the model order of the underlying process
  • Akaike Information Criterion (AIC) [1]: it weighs the number of model parameters against the goodness of fit, and includes a penalty factor to avoid over-fitting (increasing the model order always improves the fit – the fitting process has to be stopped somewhere)
  • Bayesian Information Criterion (BIC) [2]: very similar to AIC
  • Cross validation [3]: the given time series is broken into subsets; one subset of data is taken at a time and the model parameters are estimated, then the estimated model parameters are cross-validated with data from the remaining subsets

Deriving Yule-Walker Equations

A generic AR model

\[x[n] = -\sum_{k=1}^{N} a_k x[n-k] + w[n] \quad \quad (1)\]

can be written in compact form as

\[\sum_{k=0}^{N} a_k x[n-k] = w[n] \quad \quad (2)\]

Note that the coefficient of the x[n] term is a0 = 1.

Multiplying (2) by x[n-l] and taking the expectation on both sides,

\[\sum_{k=0}^{N} a_k E\left\{ x[n-k]\, x[n-l] \right\} = E\left\{ w[n]\, x[n-l] \right\} \quad \quad (3)\]

One can readily identify the auto-correlation and cross-correlation terms:

\[\sum_{k=0}^{N} a_k\, r_{xx}[l-k] = r_{wx}[l] \quad \quad (4)\]

The next step is to compute the identified cross-correlation term rwx[l] and relate it to the auto-correlation term rxx[l-k].

The term x[n-l] can also be obtained from equation (1) as

\[x[n-l] = -\sum_{k=1}^{N} a_k x[n-l-k] + w[n-l] \quad \quad (5)\]

Note that data and noise are always uncorrelated, therefore E{x[n-l-k] w[n]} = 0. Also, the auto-correlation of the noise is zero at all lags except at lag 0, where its value is equal to σ² (remember the flat power spectral density of white noise and its impulse-like auto-correlation). These two properties are used in the following steps. Restricting the lags to zero and positive values,

\[r_{wx}[l] = E\left\{ w[n]\, x[n-l] \right\} = \sigma^2 \delta[l], \quad l \geq 0 \quad \quad (6)\]

Substituting (6) in (4),

\[\sum_{k=0}^{N} a_k\, r_{xx}[l-k] = \sigma^2 \delta[l], \quad l \geq 0 \quad \quad (7)\]

Here there are two cases to be solved: l > 0 and l = 0.

For the l > 0 case, equation (7) becomes

\[\sum_{k=1}^{N} a_k\, r_{xx}[l-k] = -r_{xx}[l], \quad l > 0 \quad \quad (8)\]

Clue: notice the lower limit of the summation, which changed from k = 0 to k = 1.

Now, equation (8) can be written in matrix form:

\[\begin{bmatrix} r_{xx}[0] & r_{xx}[-1] & \cdots & r_{xx}[-(N-1)] \\ r_{xx}[1] & r_{xx}[0] & \cdots & r_{xx}[-(N-2)] \\ \vdots & \vdots & \ddots & \vdots \\ r_{xx}[N-1] & r_{xx}[N-2] & \cdots & r_{xx}[0] \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix} = -\begin{bmatrix} r_{xx}[1] \\ r_{xx}[2] \\ \vdots \\ r_{xx}[N] \end{bmatrix} \quad \quad (9)\]

These are the Yule-Walker equations, which comprise a set of N linear equations in N unknown parameters.

Representing equation (9) in a compact form,

\[\mathbf{R}\,\mathbf{a} = -\mathbf{r}\]

the solution is obtained as

\[\mathbf{a} = -\mathbf{R}^{-1}\,\mathbf{r}\]

Once we solve for the vector a – equivalently the model parameters ak – the noise variance σ² can be found by applying the estimated values of ak in equation (7) and setting l = 0:

\[\sigma^2 = \sum_{k=0}^{N} a_k\, r_{xx}[-k] = r_{xx}[0] + \sum_{k=1}^{N} a_k\, r_{xx}[k]\]

Matlab’s aryule command efficiently solves the Yule-Walker equations using the Levinson algorithm [4][5].
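For concreteness, the matrix form above can also be solved directly in Matlab. The sketch below is illustrative only (aryule does the same job more efficiently via the Levinson-Durbin recursion); the AR(3) data it generates merely serves as an example.

N = 3;
x = filter(1,[1 0.9 0.6 0.5], randn(10000,1));   % sample AR(3) data (illustrative)
r = xcorr(x, N, 'biased');                       % auto-correlation estimates for lags -N..N
r = r(N+1:end);                                  % keep lags 0..N
R = toeplitz(r(1:N));                            % N x N Toeplitz auto-correlation matrix
a = -R \ r(2:N+1);                               % solve R a = -r for [a1; a2; a3]
sigma2 = r(1) + a.' * r(2:N+1);                  % noise variance from the l = 0 equation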

Simulation:

Let’s generate an AR(3) process and pretend that we do not know anything about the model parameters. We will take this as input data to the Yule-Walker method and check whether it can estimate the model parameters properly.

Generating the data from an AR(3) process with coefficients a = [1 0.9 0.6 0.5]:

a=[1  0.9   0.6 0.5]; %model parameters
x=filter(1,a,randn(1000,1)); %data generation with gaussian noise - 1000 samples
plot(x);
title('Sample data from AR(3) process');
xlabel('Sample index');
ylabel('Value');
Figure 1: Simulated data from AR(3) process

Plot the periodogram (PSD plot) for reference. The PSD plots from the estimated model will be checked against this reference plot later.

figure(2);
periodogram(x); hold on; %plot the original frequency response of the data

Running the simulation for three model orders and checking which model order suits the best.

N=[2,3,4];

for i=1:3,
    [a,p] = aryule(x,N(i))
    [H,w]=freqz(sqrt(p),a);
    hp = plot(w/pi,20*log10(2*abs(H)/(2*pi))); %plotting in log scale
end
xlabel('Normalized frequency (\times \pi rad/sample)');
ylabel('One-sided PSD (dB/rad/sample)');
legend('PSD estimate of x','PSD of model output N=2','PSD of model output N=3','PSD of model output N=4');
Figure 2: Power Spectral density of modeled data

The estimated model parameters and the noise variances computed by the Yule-Walker system are given below. It can be seen that the estimated parameters are approximately the same as expected. See how the error decreases as the model order N increases. The optimum model order in this case is N = 3, since the error does not change significantly when the order is increased further and the model parameter a4 of the AR(4) fit is not significantly different from zero.

Figure 3: Estimated model parameters and prediction errors for the given model orders


References

[1] Akaike H. (1998) Information Theory and an Extension of the Maximum Likelihood Principle. In: Parzen E., Tanabe K., Kitagawa G. (eds) Selected Papers of Hirotugu Akaike. Springer Series in Statistics (Perspectives in Statistics). Springer, New York, NY. https://doi.org/10.1007/978-1-4612-1694-0_15
[2] McQuarrie, A. D. R., and Tsai, C.-L., 1998. Regression and Time Series Model Selection. World Scientific.↗
[3] Arlot, Sylvain, and Alain Celisse. “A survey of cross-validation procedures for model selection.” Statistics surveys 4 (2010): 40-79.
[4] J. Durbin, The fitting of time-series models, Review of the International Statistical Institute, 28:233-243, 1960.
[5] G. H. Golub and C. F. Van Loan, “Matrix Computations”, The Johns Hopkins University Press, third edition, 1996.↗



Solving ARMA model – minimization of squared error

Key focus: Can a unique solution exists when solving ARMA (Auto Regressive Moving Average) model ? Apply minimization of squared error to find out.

As discussed in the previous post, the ARMA model is a generalized model that is a mix of both the AR and MA models. Given a signal x[n], an AR model is easier to fit than a suitable ARMA process model. Let’s see why this is so.

AR model error and minimization

In the AR model, the present output sample x[n] and the past N-1 output samples determine the source input w[n]. The difference equation that characterizes this model is given by

\[x[n] + \sum_{k=1}^{N-1} a_k x[n-k] = w[n]\]

The model can be viewed from another perspective, where the input noise w[n] is viewed as an error – the difference between the present output sample x[n] and the sample of x[n] predicted from the previous N-1 output samples. Let’s term this the “AR model error”. Rearranging the difference equation,

\[w[n] = x[n] - \left( -\sum_{k=1}^{N-1} a_k x[n-k] \right)\]

The summation term inside the brackets is viewed as the output sample predicted from the past N-1 output samples, and their difference is the error w[n].

The least squares estimates of the coefficients ak are found by evaluating the first derivative of the squared error with respect to ak and equating it to zero – finding the minima. From the equation above, w²[n] is the squared error that we wish to minimize. Here, w²[n] is a quadratic function of the unknown model parameters ak. Quadratic functions have a unique minimum; therefore it is easy to find the least squares estimates of ak by minimizing w²[n].
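A small Matlab sketch of this least squares idea is given below: it stacks the lagged output samples into a regression matrix and solves for the AR coefficients in one shot. The model order and the synthetic data are assumed purely for illustration.

N = 3;
x = filter(1,[1 0.9 0.6 0.5], randn(5000,1));     % sample AR(3) data (illustrative)
X = [-x(3:end-1), -x(2:end-2), -x(1:end-3)];      % regressors: -x[n-1], -x[n-2], -x[n-3]
y = x(4:end);                                     % targets: x[n]
a_hat = X \ y;                                    % least squares estimate of [a1; a2; a3]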

ARMA model error and minimization

The difference equation that characterizes this model is given by

\[x[n] = -\sum_{k=1}^{N} a_k x[n-k] + \sum_{k=0}^{N} b_k w[n-k]\]

Re-arranging (with b0 = 1), the ARMA model error w[n] is given by

\[w[n] = x[n] - \left( -\sum_{k=1}^{N} a_k x[n-k] + \sum_{k=1}^{N} b_k w[n-k] \right)\]

Now, the predictor (the terms inside the brackets) considers weighted combinations of past values of both the input and output samples.

The squared error w²[n] is NOT a quadratic function of the unknowns, and we have two sets of unknowns – ak and bk. Therefore, no unique solution may be available to minimize this squared error, since multiple minima pose a difficult numerical optimization problem.



For further reading

[1] Thiesson et al, “ARMA time series modeling with graphical models”, Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence, July 2004↗


Understand AR, MA and ARMA models

Key focus: AR, MA & ARMA models express the nature of the transfer function of an LTI system. Understand the basic idea behind these models and know their frequency responses.

Introduction

Signal models are used to analyze stationary univariate time series. The goal of signal modeling is to estimate the process from which the desired signal is generated. Though the concept described here is related to the topic of “system identification”, the two are quite different.

A signal model is a unique combination of a filter and a source input, each of which may fall into any of the following categories:

  • Filter: state-space model, AR, MA, ARMA (see below)
  • Source: pulse, pulse train, white noise, …

Motivation

Let’s say we observe a real world signal x[n] that has a spectrum X(ω) (the spectrum can be arbitrary – bandpass, baseband, etc.). We would like to describe the long sequence of x[n] using very few parameters (application: Linear Predictive Coding (LPC)). The modeling approach described here tries to answer the following two questions.

  • Is it possible to model the first order (mean/variance) and second order (correlations, spectrum) statistics of the signal just by shaping a white noise spectrum using a transfer function H(ω)? (see Figure 1)
  • Does this produce the same statistics (spectrum, correlations, mean and variance) when the system is driven by a white noise input?

If the answer is “yes” to the above two questions, we can simply set the modeled parameters of the system and excite the system with white (flat) noise to produce the desired real world signal. This reduces the amount of data we wish to transmit in a communication system application.

Figure 1: Shaping a white noise spectrum (flat spectrum) to achieve desired spectrum

LTI system model

In the model given below, the random signal x[n] is observed. Given the observed signal x[n], the goal here is to find a model that best describes the spectral properties of x[n] under the following assumptions

● x[n] is WSS (Wide Sense Stationary) and ergodic.
● The input signal to the LTI system is white noise following a Gaussian distribution – zero mean and variance σ².
● The LTI system is BIBO (Bounded Input Bounded Output) stable.

Figure 2: Linear Time Invariant (LTI) system – signal model

In the model shown above, the input to the LTI system is white noise following a Gaussian distribution – zero mean and variance σ². The power spectral density (PSD) of the noise w[n] is

\[S_{ww}\left(e^{j\omega}\right) = \sigma^2\]

The noise process drives the LTI system with frequency response H(e^{jω}), producing the signal of interest x[n]. The PSD of the output process is therefore

\[S_{xx}\left(e^{j\omega}\right) = \left|H\left(e^{j\omega}\right)\right|^2 \sigma^2\]

Three cases are possible given the nature of the transfer function of the LTI system that is under investigation here.

  • Auto Regressive (AR) models: H(e^{jω}) is an all-poles system
  • Moving Average (MA) models: H(e^{jω}) is an all-zeros system
  • Auto Regressive Moving Average (ARMA) models: H(e^{jω}) is a pole-zero system

Auto Regressive (AR) models (all-poles model)

In the AR model, the present output sample x[n] and the past N output samples determine the source input w[n]. The difference equation that characterizes this model is given by

\[x[n] = -\sum_{k=1}^{N} a_k x[n-k] + w[n]\]

Here, the LTI system is an Infinite Impulse Response (IIR) filter. This is evident from the fact that the above equation uses past samples of x[n] when determining the present output, thereby creating a feedback loop from the output of the filter.

The frequency response of the IIR filter is well known:

\[H\left(e^{j\omega}\right) = \frac{1}{1 + \sum_{k=1}^{N} a_k e^{-j\omega k}}\]

Figure 3: Spectrum of all-pole transfer function (representing AR model)

The transfer function H(e^{jω}) is an all-pole transfer function (when the denominator goes to zero, the transfer function goes to infinity, creating peaks in the spectrum). Poles are best suited to model resonant peaks in a given spectrum. At the peaks, the poles are closer to the unit circle. This model is well suited for modeling peaky spectra.
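The peaky behavior of an all-pole response is easy to visualize with freqz; the coefficients below are an assumed AR(2) example whose complex poles lie close to the unit circle.

a = [1 -1.5 0.9];                 % denominator of an assumed resonant AR(2) system
[H, w] = freqz(1, a, 512);        % all-pole frequency response
plot(w/pi, 20*log10(abs(H)));
xlabel('Normalized frequency (\times\pi rad/sample)'); ylabel('|H| (dB)');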

Read all articles tagged Auto-regressive model.

Moving Average (MA) models (all-zeros model)

In the MA model, the present output sample x[n] is determined by the present source input w[n] and the past N samples of the source input. The difference equation that characterizes this model is given by

\[x[n] = \sum_{k=0}^{N} b_k w[n-k]\]

Here, the LTI system is a Finite Impulse Response (FIR) filter. This is evident from the fact that the above equation involves no feedback from output to input.

The frequency response of the FIR filter is well known:

\[H\left(e^{j\omega}\right) = \sum_{k=0}^{N} b_k e^{-j\omega k}\]

The transfer function H(e^{jω}) is an all-zero transfer function (when the numerator goes to zero, the transfer function goes to zero, creating nulls in the spectrum). Zeros are best suited to model sharp nulls in a given spectrum.

Figure 4: Spectrum of all-zeros transfer function (representing MA model)

Auto Regressive Moving Average (ARMA) model (pole-zero model)

The ARMA model is a generalized model that is a combination of the AR and MA models. The output of the filter is a linear combination of both weighted inputs (present and past samples) and weighted outputs (present and past samples). The difference equation that characterizes this model is given by

\[x[n] = -\sum_{k=1}^{N} a_k x[n-k] + \sum_{k=0}^{N} b_k w[n-k]\]

The frequency response of this generalized filter is well known:

\[H\left(e^{j\omega}\right) = \frac{\sum_{k=0}^{N} b_k e^{-j\omega k}}{1 + \sum_{k=1}^{N} a_k e^{-j\omega k}}\]

The transfer function H(e^{jω}) is a pole-zero transfer function. It is best suited for modelling complex spectra having well-defined resonant peaks and nulls.
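As a pole-zero illustration (again with assumed example coefficients), placing zeros on the unit circle and poles just inside it yields a spectrum with both a sharp null and a resonant peak:

b = [1 -1.9 1];                   % numerator: zeros on the unit circle -> spectral null
a = [1 -1.5 0.9];                 % denominator: poles near the unit circle -> resonant peak
[H, w] = freqz(b, a, 512);
plot(w/pi, 20*log10(abs(H)));
xlabel('Normalized frequency (\times\pi rad/sample)'); ylabel('|H| (dB)');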

Next post: Comparing AR and ARMA model – minimization of squared error


