Simulate additive white Gaussian noise (AWGN) channel

In this article, the relationship between SNR-per-bit (Eb/N0) and SNR-per-symbol (Es/N0) is defined with respect to M-ary signaling schemes. Then the complex baseband model for an AWGN channel is discussed, followed by the theoretical error rates of various modulations over the additive white Gaussian noise (AWGN) channel. Finally, the complex baseband models for digital modulators and detectors, developed in the previous chapter of this book, are incorporated to build a complete communication system model.

If you would like to know more about the simulation and analysis of white noise, I urge you to read this article: White noise: Simulation & Analysis using Matlab.

Signal to noise ratio (SNR) definitions

Assuming a channel of bandwidth B, received signal power Pr and noise power spectral density (PSD) N0/2, the signal to noise ratio (SNR) is given by

\[SNR = \frac{P_r}{N_0 B}\]

Let a signal's energy-per-bit be denoted as Eb and the energy-per-symbol as Es; then γb=Eb/N0 and γs=Es/N0 are the SNR-per-bit and the SNR-per-symbol respectively.

For an uncoded M-ary signaling scheme with k = log2(M) bits per symbol, the signal energy per modulated symbol is given by

\[E_s = k \; E_b = log_2(M) \; E_b\]

The SNR per symbol is therefore given by

\[\gamma_s = \frac{E_s}{N_0} = log_2(M)\; \frac{E_b}{N_0} = k \; \gamma_b\]

AWGN channel model

In order to simulate a specific SNR point in performance simulations, the modulated signal from the transmitter needs to be added with random noise of specific strength. The strength of the generated noise depends on the desired SNR level, which usually is an input in such simulations. In practice, SNRs are specified in dB. Given a specific SNR point for simulation, let's see how we can simulate an AWGN channel that adds the correct level of white noise to the transmitted symbols.

Figure 1: Simplified simulation model for AWGN channel

Consider the AWGN channel model given in Figure 1. Given a specific SNR point to simulate, we wish to generate a white Gaussian noise vector of appropriate strength and add it to the incoming signal. The method described can be applied for both waveform simulations and complex baseband simulations. In the following text, the term SNR (γ) refers to γb = Eb/N0 when the modulation is of binary type (example: BPSK). For multilevel modulations such as QPSK and MQAM, the term SNR refers to γs = Es/N0.

(1) Assume s is a vector that represents the transmitted signal. We wish to generate a vector r that represents the signal after passing through the AWGN channel. The amount of noise added by the AWGN channel is controlled by the given SNR, γ.

(2) For the waveform simulation model, let the given oversampling ratio be denoted as L. On the other hand, if you are using the complex baseband models, set L=1.

(3) Let N denote the length of the vector s. The signal power of the vector s can be measured as

\[P = \frac{L}{N} \sum_{i=0}^{N-1} |s[i]|^2\]

(4) The required power spectral density of the noise vector n is computed as

\[N_0 = \frac{P}{\gamma}\]

(5) Assuming a complex IQ plane for all the digital modulations, the required noise variance (noise power) per dimension (real and imaginary) for generating the Gaussian random noise is given by

\[\sigma^2 = \frac{N_0}{2}\]

(6) Generate the noise vector n drawn from a normal distribution with mean set to zero and the standard deviation computed from the equation given above

\[n = \sqrt{\frac{N_0}{2}} \left( randn(1,N) + j\; randn(1,N) \right)\]

(7) Finally, add the generated noise vector (n) to the signal (s)

\[r = s + n\]

Matlab code

The following custom function, written in Matlab, can be used to add AWGN noise to an incoming signal. It can be used in waveform simulation as well as complex baseband simulation models.

%author - Mathuranathan Viswanathan (gaussianwaves.com)
%This code is part of the books: Wireless communication systems using Matlab & Digital modulations using Matlab.

function [r,n,N0] = add_awgn_noise(s,SNRdB,L)
%Function to add AWGN to the given signal
%[r,n,N0]= add_awgn_noise(s,SNRdB) adds AWGN noise vector to signal
%'s' to generate a resulting signal vector 'r' of specified SNR
%in dB. It also returns the noise vector 'n' that is added to the
%signal 's' and the spectral density N0 of noise added
%
%[r,n,N0]= add_awgn_noise(s,SNRdB,L) adds AWGN noise vector to
%signal 's' to generate a resulting signal vector 'r' of specified
%SNR in dB. The parameter 'L' specifies the oversampling ratio used
%in the system (for waveform simulation). It also returns the noise
%vector 'n' that is added to the signal 's' and the spectral
%density N0 of noise added
 s_temp=s;
 if iscolumn(s), s=s.'; end; %to return the result in same dim as 's'
 gamma = 10^(SNRdB/10); %SNR to linear scale
 
 if nargin==2, L=1; end %if third argument is not given, set it to 1
 
 if isvector(s),
  P=L*sum(abs(s).^2)/length(s);%Actual power in the vector
 else %for multi-dimensional signals like MFSK
  P=L*sum(sum(abs(s).^2))/length(s); %if s is a matrix [MxN]
 end
 
 N0=P/gamma; %Find the noise spectral density
 if(isreal(s)),
  n = sqrt(N0/2)*randn(size(s));%computed noise
 else
  n = sqrt(N0/2)*(randn(size(s))+1i*randn(size(s)));%computed noise
 end
 
 r = s + n; %received signal
 
 if iscolumn(s_temp), r=r.'; end;%return r in original format as s
end

Python code

The following custom function, written in Python 3, can be used to add AWGN noise to an incoming signal. It can be used in waveform simulation as well as complex baseband simulation models.

# author - Mathuranathan Viswanathan (gaussianwaves.com)
# This code is part of the book Digital Modulations using Python

from numpy import sum,isrealobj,sqrt
from numpy.random import standard_normal

def awgn(s,SNRdB,L=1):
    """
    AWGN channel
    Add AWGN noise to input signal. The function adds an AWGN noise vector to signal 's'
    to generate a resulting signal vector 'r' of the specified SNR in dB.
    Parameters:
        s : input/transmitted signal vector
        SNRdB : desired signal to noise ratio (expressed in dB) for the received signal
        L : oversampling factor (applicable for waveform simulation) default L = 1.
    Returns:
        r : received signal vector (r=s+n)
"""
    gamma = 10**(SNRdB/10) #SNR to linear scale
    if s.ndim==1:# if s is single dimensional vector
        P=L*sum(abs(s)**2)/len(s) #Actual power in the vector
    else: # multi-dimensional signals like MFSK
        P=L*sum(sum(abs(s)**2))/len(s) # if s is a matrix [MxN]
    N0=P/gamma # Find the noise spectral density
    if isrealobj(s):# check if input is real/complex object type
        n = sqrt(N0/2)*standard_normal(s.shape) # computed noise
    else:
        n = sqrt(N0/2)*(standard_normal(s.shape)+1j*standard_normal(s.shape))
    r = s + n # received signal
    return r
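
As a quick usage sketch (not from the book; the QPSK mapping and variable names below are illustrative only), the function can be exercised on a block of complex baseband symbols and the realized SNR estimated from the added noise:

import numpy as np
# assumes the awgn() function defined above is available in the same namespace
d = np.random.randint(0, 4, 10000)            # hypothetical QPSK symbol indices, for illustration
s = np.exp(1j*(2*np.pi*d/4 + np.pi/4))        # unit-energy QPSK symbols in complex baseband
r = awgn(s, SNRdB=10)                          # pass through the AWGN channel at 10 dB SNR
measuredSNR = 10*np.log10(np.mean(np.abs(s)**2)/np.mean(np.abs(r - s)**2))
print(measuredSNR)                             # should be close to 10 dB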

Theoretical symbol error rates for digital modulations in AWGN channel

Denoting the symbol error rate (SER) as Ps, SNR-per-bit as γb = Eb/N0 and SNR-per-symbol as γs = Es/N0, the symbol error rates for various modulation schemes over an AWGN channel are listed in Table 1 (refer [1]).

Table 1: Theoretical symbol error rate for various modulations in AWGN channel

The theoretical symbol error rates are coded as a reusable function. In this implementation, the erfc function is used instead of the Q function shown in Table 1. The following equation describes the relationship between the erfc function and the Q function.

\[Q(x) = \frac{1}{2} erfc \left( \frac{x}{\sqrt{2}} \right)\]
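
Since the book's reusable function is not reproduced in this article, here is a minimal sketch (my own naming, not the book's) of how such a function could look for M-PSK, using the standard approximation \(P_s \approx erfc\left(\sqrt{\gamma_s}\, sin(\pi/M)\right)\), which is exact for BPSK:

import numpy as np
from scipy.special import erfc

def ser_awgn_mpsk(EsN0dB, M):
    """Theoretical symbol error rate of M-PSK in AWGN; exact for BPSK (M=2),
    standard approximation for M > 2. EsN0dB is the SNR-per-symbol in dB."""
    gamma_s = 10**(np.asarray(EsN0dB)/10)
    if M == 2:
        return 0.5*erfc(np.sqrt(gamma_s))            # BPSK: Ps = Q(sqrt(2*gamma_b))
    return erfc(np.sqrt(gamma_s)*np.sin(np.pi/M))    # M-PSK: Ps ~ 2Q(sqrt(2*gamma_s)*sin(pi/M))

print(ser_awgn_mpsk(np.arange(0, 16, 2), 8))         # example: 8-PSK at Es/N0 = 0..14 dB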

Unified simulation model for performance simulation

In the previous chapter of the books, the code implementations of the complex baseband models for various digital modulators and demodulators are given. Using these models, we can create a unified simulation code for simulating the performance of various modulation techniques over an AWGN channel.

The complete simulation model for performance simulation over an AWGN channel is given in Figure 2. The figure is illustrated for a coherent communication system model (applicable to MPSK/MQAM/MPAM modulations).

Figure 2: Complete simulation model for a communication system with AWGN channel

The Matlab code implementing the aforementioned simulation model is given in the books. Here, a unified approach is employed to simulate the performance of any of the given modulation techniques: MPSK, MQAM, MPAM or MFSK (the MFSK simulation technique is available in the following books: Digital Modulations using Python and Digital Modulations using Matlab).

This article is part of the following books
Digital Modulations using Matlab : Build Simulation Models from Scratch, ISBN: 978-1521493885
Digital Modulations using Python ISBN: 978-1712321638
Wireless communication systems in Matlab ISBN: 979-8648350779
All books available in ebook (PDF) and Paperback formats

The simulation code automatically chooses the selected modulation type, performs the Monte Carlo simulation, computes the symbol error rates and plots them against the theoretical symbol error rates. The simulated performance results obtained for MQAM and MPSK modulations are shown in Figure 3 and Figure 4.
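
The unified code itself is available in the books; purely as an illustration of the simulation flow described above, the following self-contained sketch (illustrative names, QPSK chosen arbitrarily) runs a Monte Carlo simulation over the AWGN channel using the awgn() function defined earlier and compares the result against the standard M-PSK approximation:

import numpy as np
from scipy.special import erfc

M = 4                                   # QPSK, as an example
EsN0dBs = np.arange(0, 14, 2)           # SNR-per-symbol points to simulate
nSym = 100000                           # number of symbols per SNR point
constellation = np.exp(1j*2*np.pi*np.arange(M)/M)   # unit-energy M-PSK reference points

ser_sim = np.zeros(len(EsN0dBs))
for i, EsN0dB in enumerate(EsN0dBs):
    d = np.random.randint(0, M, nSym)   # random information symbols
    s = constellation[d]                # M-PSK mapping (modulator)
    r = awgn(s, EsN0dB)                 # AWGN channel, awgn() defined earlier in this article
    # coherent detection: nearest reference constellation point (minimum Euclidean distance)
    dCap = np.argmin(np.abs(r[:, None] - constellation[None, :]), axis=1)
    ser_sim[i] = np.mean(dCap != d)     # count symbol errors

ser_theory = erfc(np.sqrt(10**(EsN0dBs/10))*np.sin(np.pi/M))  # standard M-PSK approximation
print(ser_sim)
print(ser_theory)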

Figure 3: Simulated symbol error rate performance of M-QAM modulation over AWGN channel


References

[1] Andrea Goldsmith, Wireless Communications, Cambridge University Press, first edition, August 8, 2005.↗


Generate multiple sequences of correlated random variables

In the previous post, a method for generating two sequences of correlated random variables was discussed. Generation of multiple sequences of correlated random variables, given a correlation matrix is discussed here.

Correlation Matrix

A correlation matrix defines the correlation among N variables. It is a symmetric matrix with the (i,j)-th element equal to the correlation coefficient between the i-th and the j-th variable. The diagonal elements (correlations of variables with themselves) are always equal to 1.

Sample problem:

Let’s say we would like to generate three sets of random sequences X,Y,Z with the following correlation relationships.

  1. Correlation co-efficient between X and Y is 0.5
  2. Correlation co-efficient between X and Z is 0.3
  3. Obviously, the variable X correlates with itself 100%, i.e., the correlation coefficient is 1

Putting all these relationships in a compact matrix form gives the correlation matrix. We take an arbitrary correlation value (0.3) for the relationship between Y and Z.

\[C = \begin{bmatrix} 1 & 0.5 & 0.3 \\ 0.5 & 1 & 0.3 \\ 0.3 & 0.3 & 1 \end{bmatrix}\]

Now, the task is to generate three sets of random numbers X, Y and Z that follow the relationships above. The problem can be addressed in many ways. The two most common methods for finding the solution are

  1. Cholesky Decomposition method
  2. Spectral Decomposition ( also called Eigen Vector Decomposition) method

The Cholesky Decomposition method is discussed here.

This article is part of the book
Wireless Communication Systems in Matlab (second edition), ISBN: 979-8648350779 available in ebook (PDF) format and Paperback (hardcopy) format.

Generating Correlated random number using Cholesky Decomposition:

Cholesky decomposition is the matrix equivalent of taking the square root operation on a given matrix. As with scalar values, a real square root is only possible if the given number is positive (imaginary roots exist otherwise). Similarly, if a matrix needs to be decomposed into a square-root equivalent, the matrix needs to be positive definite.

The method discussed here seeks to decompose the given correlation matrix C using Cholesky decomposition

\[C = U^T U = L L^T\]

where U and L are upper and lower triangular matrices. We will consider the upper triangular matrix here. Equivalently, the lower triangular matrix can also be used, in which case the order of the output needs to be reversed.

For this decomposition to work, the correlation matrix C should be positive definite. The correlated random sequences Rc = [X Y Z] (where X, Y, Z are column vectors) that follow the above relationship can be generated by multiplying the matrix of uncorrelated random numbers R with U, i.e., Rc = R U.

Steps to follow:

Generate three sequences of uncorrelated random numbers in a matrix R, each column drawn from a standard normal distribution. For this case, the R matrix will be of size k×3, where k is the number of samples we wish to generate; the three columns are the placeholders for the variables X, Y and Z. Multiply this matrix with the Cholesky decomposed upper triangular version of the correlation matrix: Rc = R U.

Python code

import numpy as np
from scipy.linalg import cholesky
from scipy.stats import pearsonr #to calculate correlation coefficient

#for plotting and visualization
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
import seaborn as sns

C = np.array([[1, -0.5, 0.3], 
              [-0.5, 1, 0.2],
              [0.3, 0.2, 1]]) #Construct correlation matrix
U = cholesky(C) #Cholesky decomposition
R = np.random.randn(10000,3) #Three uncorrelated sequences
Rc = R @ U #Array of correlated random sequences

#compute and display correlation coeff from generated sequences
def pearsonCorr(x, y, **kws): 
    (r, _) = pearsonr(x, y) #returns (Pearson's correlation coefficient, 2-tailed p-value)
    ax = plt.gca()
    ax.annotate("r = {:.2f} ".format(r),xy=(.7, .9), xycoords=ax.transAxes)
    
#Visualization
df = pd.DataFrame(data=Rc, columns=['X','Y','Z'])
graph = sns.pairplot(df)
graph.map(pearsonCorr)
Figure 1: Pairplot of correlated random variables generated using Cholesky decomposition (Python)

Matlab code

x=[  1  0.5 0.3; 0.5  1  0.3; 0.3 0.3  1 ;]; %Correlation matrix
U=chol(x); %Cholesky decomposition 

R=randn(10000,3); %Random data in three columns each for X,Y and Z
Rc=R*U; %Correlated matrix Rc=[X Y Z]

%Verify Correlation-Coeffs of generated vectors
coeffMatrixXX=corrcoef(Rc(:,1),Rc(:,1));
coeffMatrixXY=corrcoef(Rc(:,1),Rc(:,2));
coeffMatrixXZ=corrcoef(Rc(:,1),Rc(:,3));

%Extract the required correlation coefficients
coeffXX=coeffMatrixXX(1,2) %Correlation Coeff for XX;
coeffXY=coeffMatrixXY(1,2) %Correlation Coeff for XY;
coeffXZ=coeffMatrixXZ(1,2) %Correlation Coeff for XZ;

%Scatterplots
subplot(3,1,1)
plot(Rc(:,1),Rc(:,1),'b.')
title(['Scattered Plot - X and X calculated \rho=' num2str(coeffXX)])
xlabel('X')
ylabel('X')

subplot(3,1,2)
plot(Rc(:,1),Rc(:,2),'r.')
title(['Scattered Plot - X and Y calculated \rho=' num2str(coeffXY)])
xlabel('X')
ylabel('Y')

subplot(3,1,3)
plot(Rc(:,1),Rc(:,3),'m.')
title(['Scattered Plot - X and Z calculated \rho=' num2str(coeffXZ)])
xlabel('X')
ylabel('Z')

Scattered plots to verify the simulated data


Further reading

[1] Richard Taylor, “Interpretation of correlation coefficient: A basic review”, Journal of diagnostic medical sonography, Jan/Feb 1990.↗

Topics in this chapter

Random Variables - Simulating Probabilistic Systems
● Introduction
Plotting the estimated PDF
● Univariate random variables
 □ Uniform random variable
 □ Bernoulli random variable
 □ Binomial random variable
 □ Exponential random variable
 □ Poisson process
 □ Gaussian random variable
 □ Chi-squared random variable
 □ Non-central Chi-Squared random variable
 □ Chi distributed random variable
 □ Rayleigh random variable
 □ Ricean random variable
 □ Nakagami-m distributed random variable
Central limit theorem - a demonstration
● Generating correlated random variables
 □ Generating two sequences of correlated random variables
 □ Generating multiple sequences of correlated random variables using Cholesky decomposition
Generating correlated Gaussian sequences
 □ Spectral factorization method
 □ Auto-Regressive (AR) model


Methods to compute linear convolution

Mathematical details of convolution, its relationship to polynomial multiplication and the application of Toeplitz matrices in computing linear convolution are discussed in the previous article. A short survey of different techniques to compute discrete linear convolution (with Matlab code) is given here.

Definition

Given an LTI (Linear Time Invariant) system with impulse response \(h[n]\) and an input sequence \(x[n]\), the output of the system \(y[n]\) is obtained by convolving the input sequence and impulse response.

\[ y[k]=h[n] \ast x[n] = \sum_{i= -\infty}^{\infty} x[i] h[k-i] \]

where, the sequence \(x[n]\) is of length \(N\) and \(h[n]\) is of length \(M\).

This article is part of the following books
Digital Modulations using Matlab : Build Simulation Models from Scratch, ISBN: 978-1521493885
Digital Modulations using Python ISBN: 978-1712321638
Wireless communication systems in Matlab ISBN: 979-8648350779
All books available in ebook (PDF) and Paperback formats

Method 1 – Brute-Force Method:

The above equation can simply be implemented using nested for-loops, which consume the most computational time of all the methods given here. Typically, the computational complexity is \(O(n^2)\) time.

Python

import numpy as np

def conv_brute_force(x,h):
	"""
	Brute force method to compute convolution
	Parameters:
	x, h : numpy vectors
	Returns:
		y : convolution of x and h
	"""
	N=len(x)
	M=len(h)
	y = np.zeros(N+M-1) #array filled with zeros
	
	for i in np.arange(0,N):
		for j in np.arange(0,M):
			y[i+j] = y[i+j] + x[i] * h[j]
	return y

Matlab

n = length(x); m = length(h);
y = zeros(1,n+m-1); %output vector initialized to zeros
for i = 1:n
    for k = 1:m
        y(i+k-1) = y(i+k-1) + x(i)*h(k);
    end
end

Method 2 – Using Toeplitz Matrix:

When the sequences \(h[n]\) and \(x[n]\) are represented as matrices, the linear convolution operation can be equivalently represented as

\[y=h \ast x=x \ast h=Toeplitz(h)\; x\]

Assume that the sequence \(h[n]\) is of length 4 given by \(h[n]=[h_0,h_1,h_2,h_3]\) and the sequence \(x[n]\) is of length 3 given by \(x[n]=[x_0,x_1,x_2]\). The convolution \(h[n] \ast x[n]\) is given by

\[y[k]=h[n] \ast x[n] = \displaystyle{\sum_{i=-\infty}^{\infty}x[i]h[k-i]}, \quad\quad k=0,1,\cdots,5\]

Equivalent representation of the above convolution can be written as

\[\begin{bmatrix}y[0]\\y[1]\\y[2]\\y[3]\\y[4]\\y[5]\end{bmatrix}=\begin{bmatrix}h[0] &0 & 0 \\h[1] &h[0] & 0 \\h[2] & h[1] & h[0] \\h[3] & h[2] & h[1] \\0 & h[3] & h[2] \\0 & 0 & h[3]\end{bmatrix}\begin{bmatrix}x[0]\\x[1]\\x[2] \end{bmatrix}\]

The matrix representing the incremental delays of \(h[n]\) used in the above equation is a special form of matrix called a Toeplitz matrix. Toeplitz matrices have constant entries along their diagonals. Toeplitz matrices are used to model systems that possess shift-invariant properties. The property of shift invariance is evident from the matrix structure itself. Since we are modelling a Linear Time Invariant system[1], Toeplitz matrices are our natural choice. On a side note, a special form of Toeplitz matrix called a “circulant matrix” is used in applications involving circular convolution and the Discrete Fourier Transform (DFT)[2].

For python code: refer the book – Digital modulations using Python

Matlab has an inbuilt function to compute the Toeplitz matrix from a given vector. The Toeplitz matrix of the sequence \(h\) is given as

\[Toeplitz (h) = \begin{bmatrix}h[0] &0 & 0 \\h[1] &h[0] & 0 \\h[2] & h[1] & h[0] \\h[3] & h[2] & h[1] \\0 & h[3] & h[2] \\0 & 0 & h[3]\end{bmatrix}\]

In Matlab, it is constructed by calling Toeplitz function[3] with two arguments. The first argument is the first column of the above matrix -\( [h_0,h_1,h_2,h_3,0,0]\) and the second argument is the first row of the above matrix – \([h_0,0,0]\).

y=toeplitz([h0 h1 h2 h3 0 0],[h0 0 0])*x.';

In the generalized case, where \(x\) and \(h\) are of any arbitrary lengths (say \(N\) and \(M\)), the code can be modified as follows [here it is assumed that \( length (x) \geq length(h)\) ]

k=toeplitz([h zeros(1,length(x)-1)],[h(1) zeros(1,length(x)-1)])*x.'
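
The book's Python implementation is not reproduced here; as a rough equivalent sketch (my own function name), the same idea can be expressed with scipy.linalg.toeplitz, again assuming length(x) ≥ length(h):

import numpy as np
from scipy.linalg import toeplitz

def conv_toeplitz(x, h):
    """Linear convolution y = h*x computed via a Toeplitz matrix built from h
    (assumes length(x) >= length(h), as in the Matlab snippet above)."""
    N = len(x)
    col = np.concatenate([h, np.zeros(N-1)])        # first column: [h0, h1, ..., 0, ..., 0]
    row = np.concatenate([[h[0]], np.zeros(N-1)])   # first row:    [h0, 0, ..., 0]
    return toeplitz(col, row) @ x                   # (N+M-1) x N matrix times x

x = np.random.rand(7); h = np.random.rand(3)
print(np.allclose(conv_toeplitz(x, h), np.convolve(x, h)))  # should print True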

Method 3: Using FFT:

Computation of convolution using the FFT (Fast Fourier Transform) has the advantage of reduced computational complexity when the lengths of the inputs are large. To compute the convolution, take the FFT of the two sequences \(x\) and \(h\) with the FFT length set to the convolution output length \(length (x)+length(h)-1\), multiply the results and convert back to the time domain using the IFFT (Inverse Fast Fourier Transform). Note that multiplying two FFTs corresponds to circular convolution in the time domain. Here we are attempting to compute linear convolution using circular convolution (via the FFT) by zero-padding the input sequences. This causes some inefficiency when compared to plain circular convolution. Nevertheless, this method still provides \(O\left(\frac{N}{log_2N}\right)\) savings over the brute-force method.

\[y(n) = IFFT[FFT_L (X) \ast FFT_L (H) ], \quad \; L=2^{nextpower2(N+M-1)}\]

Usually, the following simpler choice of FFT length suffices; it ignores the additional zeros in the output terms.

\[y(n) = IFFT [ FFT_L (X) \ast FFT_L (H)], \quad \; L=N+M-1\]

Python

from scipy.fftpack import fft,ifft
y=ifft(fft(x,L)*(fft(h,L))) #Convolution using FFT and IFFT

Matlab

y=ifft(fft(x,L).*(fft(h,L)))

Other Methods:

If the input sequence is of infinite length, or very large as in many real-time applications, block processing methods like the Overlap-Add and Overlap-Save methods can be used to compute convolution in a faster and more efficient way.

Standard Algorithms for Fast Convolution:

Standard algorithms for computing convolution that greatly reduce the computational complexity are

1) Cook-Toom Algorithm
2) Modified Cook-Toom Algorithm
3) Winograd Algorithm
4) Modified Winograd Algorithm
5) Iterated Convolution

Refer to [4], Fast Algorithms for Signal Processing by Richard E. Blahut, for more details on the above algorithms (see the reference section below).

Matlab Code

Implementing method 2 (convolution using Toeplitz matrix transformation) and method 3 (convolution using FFT) and comparing against Matlab’s standard conv function:

%Create random vectors for test
x=rand(1,7)+1i*rand(1,7);
h=rand(1,3)+1i*rand(1,3);
L=length(x)+length(h)-1;

%Convolution Using Toeplitz matrix
y1=toeplitz([h zeros(1,length(x)-1)],[h(1) zeros(1,length(x)-1)])*x.'
%Convolution FFT
y2=ifft(fft(x,L).*(fft(h,L))).'
%Matlab's standard function
y3=conv(h,x).'

Simulation output:

Comparing method 2 and method 3 against Matlab’s standard convolution function returns identical results

\[y1 =\begin{bmatrix} 0.2428 + 0.4536i\\ 0.1512 + 0.6668i\\ 0.8663 + 1.1188i\\ 0.5332 + 1.0162i\\ 0.8589 + 0.9822i\\ 0.4633 + 1.1006i\\ 0.4024 + 1.4706i\\ 0.3225 + 0.9790i\\ 0.2324 + 0.8227i\end{bmatrix}; y2 = \begin{bmatrix}0.2428 + 0.4536i\\ 0.1512 + 0.6668i\\ 0.8663 + 1.1188i\\ 0.5332 + 1.0162i\\ 0.8589 + 0.9822i\\ 0.4633 + 1.1006i\\ 0.4024 + 1.4706i\\ 0.3225 + 0.9790i\\ 0.2324 + 0.8227i\\ \end{bmatrix};y3 = \begin{bmatrix}0.2428 + 0.4536i\\ 0.1512 + 0.6668i\\ 0.8663 + 1.1188i\\ 0.5332 + 1.0162i\\ 0.8589 + 0.9822i\\ 0.4633 + 1.1006i\\ 0.4024 + 1.4706i\\ 0.3225 + 0.9790i\\ 0.2324 + 0.8227i\end{bmatrix}\]

References:

[1] Reddi.S.S,”Eigen Vector properties of Toeplitz matrices and their application to spectral analysis of time series, Signal Processing, Vol 7, North-Holland, 1984, pp 46-56.↗
[2] Robert M. Gray,”Toeplitz and circulant matrices – an overview”,Deptartment of Electrical Engineering,Stanford University,Stanford 94305,USA.↗
[3] Matlab documentation help on Toeplitz command.↗
[4] Richard E. Blahut,”Fast Algorithms for Signal Processing”,ISBN-13: 978-0521190497,Cambridge University Press,August 16, 2010.↗

Topics in this chapter

Essentials of Signal Processing
● Generating standard test signals
 □ Sinusoidal signals
 □ Square wave
 □ Rectangular pulse
 □ Gaussian pulse
 □ Chirp signal
Interpreting FFT results - complex DFT, frequency bins and FFTShift
 □ Real and complex DFT
 □ Fast Fourier Transform (FFT)
 □ Interpreting the FFT results
 □ FFTShift
 □ IFFTShift
Obtaining magnitude and phase information from FFT
 □ Discrete-time domain representation
 □ Representing the signal in frequency domain using FFT
 □ Reconstructing the time domain signal from the frequency domain samples
● Power spectral density
Power and energy of a signal
 □ Energy of a signal
 □ Power of a signal
 □ Classification of signals
 □ Computation of power of a signal - simulation and verification
Polynomials, convolution and Toeplitz matrices
 □ Polynomial functions
 □ Representing single variable polynomial functions
 □ Multiplication of polynomials and linear convolution
 □ Toeplitz matrix and convolution
Methods to compute convolution
 □ Method 1: Brute-force method
 □ Method 2: Using Toeplitz matrix
 □ Method 3: Using FFT to compute convolution
 □ Miscellaneous methods
Analytic signal and its applications
 □ Analytic signal and Fourier transform
 □ Extracting instantaneous amplitude, phase, frequency
 □ Phase demodulation using Hilbert transform
Choosing a filter : FIR or IIR : understanding the design perspective
 □ Design specification
 □ General considerations in design


Cholesky decomposition: Python & Matlab

Cholesky decomposition is an efficient method for inversion of symmetric positive-definite matrices. Let’s demonstrate the method in Python and Matlab.

Cholesky factor

Any symmetric positive definite matrix A can be factored as

\[A = L L^T\]

where L is a lower triangular matrix. The lower triangular matrix L is often called the “Cholesky factor of A”. The matrix L can be interpreted as the square root of the positive definite matrix A.

Basic Algorithm to find Cholesky Factorization:

Note: In the following text, the variables represented in Greek letters represent scalar values, the variables represented in small Latin letters are column vectors and the variables represented in capital Latin letters are Matrices.

Given a positive definite matrix \(A\), it is partitioned as follows.

\[A = \begin{bmatrix} \alpha & a^T \\ a & A_{22} \end{bmatrix}, \qquad L = \begin{bmatrix} \lambda & 0 \\ l & L_{22} \end{bmatrix}\]

\(\alpha, \lambda\) : first element of \(A\) and \(L\) respectively
\(a, l\) : column vector at the first column, starting from the second row, of \(A\) and \(L\) respectively
\(A_{22}, L_{22}\) : remaining lower part of the matrices \(A\) and \(L\) respectively, of size \((n-1) \times (n-1)\)

Having partitioned the matrix as shown above, the Cholesky factorization can be computed by the following iterative procedure.

Steps in computing the Cholesky factorization:

Step 1: Compute the scalar: \(\lambda = \sqrt{\alpha}\)
Step 2: Compute the column vector: \(l = a / \lambda\)
Step 3: Compute the matrix \(A_{22}^{new} = A_{22} - l\, l^T\)
Step 4: Replace \(A\) with \(A_{22}^{new}\), i.e., \(A \leftarrow A_{22} - l\, l^T\)
Step 5: Repeat from step 1 till the matrix size at Step 4 becomes \(1 \times 1\).
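
As a small worked example (added here for illustration, not from the original post), applying these steps to a 2×2 positive definite matrix gives:

\[A = \begin{bmatrix} 4 & 2 \\ 2 & 3 \end{bmatrix}: \quad \lambda = \sqrt{4} = 2, \quad l = 2/2 = 1, \quad A_{22}^{new} = 3 - 1 \cdot 1 = 2 \;\Rightarrow\; \lambda = \sqrt{2}\]

\[L = \begin{bmatrix} 2 & 0 \\ 1 & \sqrt{2} \end{bmatrix}, \qquad L L^T = \begin{bmatrix} 4 & 2 \\ 2 & 3 \end{bmatrix} = A\]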

Matlab Program (implementing the above algorithm):

Function 1: [F]=cholesky(A,option)

function [F]=cholesky(A,option)
%Function to find the Cholesky factor of a Positive Definite matrix A
%Author Mathuranathan for https://www.gaussianwaves.com
%Licensed under Creative Commons: CC-NC-BY-SA 3.0
%A = positive definite matrix
%Option can be one of the following 'Lower','Upper'
%L = Cholesky factorizaton of A such that A=LL^T
%If option ='Lower', then it returns the Cholesky factored matrix L in
%lower triangular form
%If option ='Upper', then it returns the Cholesky factored matrix L in
%upper triangular form    

%Test for positive definiteness (symmetricity need to satisfy)
%Check if the matrix is symmetric
if ~isequal(A,A'),
    error('Input Matrix is not Symmetric');
end
    
if isPositiveDefinite(A),
    [m,n]=size(A);
    L=zeros(m,m);%Initialize to all zeros
    row=1;col=1;
    j=1;
    for i=1:m,
        a11=sqrt(A(1,1));
        L(row,col)=a11;
        if(m~=1), %not yet reached the last 1x1 partition
            L21=A(j+1:m,1)/a11;
            L(row+1:end,col)=L21;
            A=(A(j+1:m,j+1:m)-L21*L21');
            [m,n]=size(A);
            row=row+1;
            col=col+1;
        end
    end
    switch nargin
        case 2
            if strcmpi(option,'upper'),F=L';
            else
                if strcmpi(option,'lower'),F=L;
                else error('Invalid option');
                end
            end
        case 1
            F=L;
        otherwise
            error('Not enough input arguments')
    end
else
        error('Given Matrix A is NOT Positive definite');
end
end

Function 2: x=isPositiveDefinite(A)

function x=isPositiveDefinite(A)
%Function to check whether a given matrix A is positive definite
%Author Mathuranathan for https://www.gaussianwaves.com
%Licensed under Creative Commons: CC-NC-BY-SA 3.0
%Returns x=1, if the input matrix is positive definite
%Returns x=0, if the input matrix is not positive definite
[m,~]=size(A);
    
%Test for positive definiteness
x=1; %Flag to check for positiveness
for i=1:m
    subA=A(1:i,1:i); %Extract upper left kxk submatrix
    if(det(subA)<=0); %Check if the determinant of the kxk submatrix is +ve
        x=0;
        break;
    end
end
%For debug
%if x, display('Given Matrix is Positive definite');
%else display('Given Matrix is NOT positive definite'); end    
end

Sample Run:

A is a randomly generated positive definite matrix. To generate a random positive definite matrix check the link in “external link” section below.

>> A=[3.3821 ,0.8784,0.3613,-2.0349; 0.8784, 2.0068, 0.5587, 0.1169; 0.3613, 0.5587, 3.6656, 0.7807; -2.0349, 0.1169, 0.7807, 2.5397];

>> cholesky(A,'Lower')

>> cholesky(A,'upper')

Python (numpy)

Let us verify the above results using Python's Numpy package. The numpy.linalg module contains the cholesky function for computing the Cholesky decomposition (it returns the factor in lower triangular form). It can be invoked as follows.

>>> import numpy as np

>>> A=[[3.3821 ,0.8784,0.3613,-2.0349],\
       [0.8784, 2.0068, 0.5587, 0.1169],\
       [0.3613, 0.5587, 3.6656, 0.7807],\
       [-2.0349, 0.1169, 0.7807, 2.5397]]
>>> np.linalg.cholesky(A)
>>>
array([[ 1.83904867,  0.        ,  0.        ,  0.        ],
       [ 0.47763826,  1.33366476,  0.        ,  0.        ],
       [ 0.19646027,  0.34856065,  1.87230041,  0.        ],
       [-1.106496  ,  0.48393333,  0.44298574,  0.94071184]])
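
As a quick check (not part of the original post), the returned factor can be verified by reconstructing A from it:

>>> L = np.linalg.cholesky(A)
>>> np.allclose(L @ L.T, A)   # expect True: the lower triangular factor reconstructs A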


Related Topics

[1]An Introduction to Estimation Theory
[2]Bias of an Estimator
[3]Minimum Variance Unbiased Estimators (MVUE)
[4]Maximum Likelihood Estimation
[5]Maximum Likelihood Decoding
[6]Probability and Random Process
[7]Likelihood Function and Maximum Likelihood Estimation (MLE)
[8]Score, Fisher Information and Estimator Sensitivity
[9]Introduction to Cramer Rao Lower Bound (CRLB)
[10]Cramer Rao Lower Bound for Scalar Parameter Estimation
[11]Applying Cramer Rao Lower Bound (CRLB) to find a Minimum Variance Unbiased Estimator (MVUE)
[12]Efficient Estimators and CRLB
[13]Cramer Rao Lower Bound for Phase Estimation
[14]Normalized CRLB - an alternate form of CRLB and its relation to estimator sensitivity
[15]Cramer Rao Lower Bound (CRLB) for Vector Parameter Estimation
[16]The Mean Square Error – Why do we use it for estimation problems
[17]How to estimate unknown parameters using Ordinary Least Squares (OLS)
[18]Essential Preliminary Matrix Algebra for Signal Processing
[19]Why Cholesky Decomposition ? A sample case:
[20]Tests for Positive Definiteness of a Matrix
[21]Solving a Triangular Matrix using Forward & Backward Substitution
[22]Cholesky Factorization - Matlab and Python
[23]LTI system models for random signals – AR, MA and ARMA models
[24]Comparing AR and ARMA model - minimization of squared error
[25]Yule Walker Estimation
[26]AutoCorrelation (Correlogram) and persistence – Time series analysis
[27]Linear Models - Least Squares Estimator (LSE)
[28]Best Linear Unbiased Estimator (BLUE)


Non-central Chi square distribution


If squares of k independent standard normal random variables are added, it gives rise to central Chi-squared distribution with ‘k’ degrees of freedom. Instead, if squares of k independent normal random variables with non-zero means are added, it gives rise to non-central Chi-squared distribution. Non-central Chi-square distribution is related to Ricean distribution, whereas the central Chi-squared distribution is related to Rayleigh distribution.

The non-central Chi-squared distribution is a generalization of the Chi-squared distribution. A non-central Chi-squared distribution is defined by two parameters: 1) degrees of freedom (k) and 2) non-centrality parameter λ.

As we know from the previous article, the degrees of freedom specify the number of independent random variables we want to square and sum up to make the Chi-squared distribution. The non-centrality parameter is the sum of squares of the means of each independent underlying normal random variable.

The non-centrality parameter is given by

\[\lambda = \sum_{i=1}^{k} \mu_i^2 \quad\quad (1)\]

The PDF of the non-central Chi-squared distribution having k degrees of freedom and non-centrality parameter λ is given by

\[f_X(x;k,\lambda) = \sum_{i=0}^{\infty} \frac{e^{-\lambda/2} \left(\lambda/2 \right)^i}{i\,!} f_{Y_{k+2i}}(x)\]

Here, the random variable \(Y_{k+2i}\) is central Chi-squared distributed with k+2i degrees of freedom. The factor \(\frac{e^{-\lambda/2} \left(\lambda/2 \right)^i}{i\,!}\) gives the probabilities of the Poisson distribution. Thus, the PDF of the non-central Chi-squared distribution can be seen as a Poisson-weighted sum of central Chi-squared PDFs.

Method of Generating non-central Chi-squared random variable:

The procedure for generating the samples from a non-central Chi-squared random variable is as follows (a small code sketch is given after this list).

● For a given degree of freedom k, let the normal random variables be X_1, X_2, ..., X_k with variances σ_1², ..., σ_k² and means μ_1, ..., μ_k respectively.
● The goal is to add the squares of these k independent normal random variables with variances set to one and means satisfying the condition set by equation (1).
● Set μ_1 = √λ and μ_2 = μ_3 = ... = μ_k = 0.
● Generate k-1 standard normal random variables (μ = 0, σ² = 1) and one normal random variable with μ = √λ and σ² = 1.
● Squaring and summing up all the k random variables gives the non-central Chi-squared random variable.
● The PDF of the generated samples can be plotted using the histogram method described here.
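
To make the above procedure concrete, here is a small illustrative sketch (not the book's code) that builds non-central Chi-squared samples directly from Gaussian random variables and compares the sample mean against the theoretical mean k + λ:

import numpy as np

k, lda, nSamp = 4, 3, 1000000   # degrees of freedom, non-centrality parameter, sample size
# k-1 standard normal variables plus one normal variable with mean sqrt(lda) and unit variance
X = np.random.standard_normal((nSamp, k-1))
Y = np.sqrt(lda) + np.random.standard_normal(nSamp)
C = np.sum(X**2, axis=1) + Y**2          # non-central Chi-squared samples

print(C.mean(), k + lda)                  # sample mean vs theoretical mean k + lambda
print(np.random.noncentral_chisquare(df=k, nonc=lda, size=nSamp).mean())  # numpy's generator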

Matlab Code:

Check this book for full Matlab code.
Wireless Communication Systems using Matlab – by Mathuranathan Viswanathan

Python Code:

Python's numpy package has a noncentral_chisquare() generator, which can be used in a straightforward manner to obtain non-central Chi-squared distributed sequences.

#---------Non-central Chi square distribution gaussianwaves.com-----
import numpy as np
import matplotlib.pyplot as plt
#%matplotlib inline
plt.style.use('ggplot')

ks=np.asarray([2,4]) #degrees of freedoms to simulate
ldas = np.asarray([1,2,3]) #non-centrality parameters to simulate
nSamp=1000000 #number of samples to generate

fig, ax = plt.subplots(ncols=1, nrows=1, constrained_layout=True)

for i,k in enumerate(ks):
    for j,lda in enumerate(ldas):
        #Generate non-central Chi-squared distributed random numbers
        X = np.random.noncentral_chisquare(df=k, nonc = lda, size = nSamp)
        ax.hist(X,bins=500,density=True,label=r'$k$={} $\lambda$={}'.format(k,lda),\
        histtype='step',alpha=0.75, linewidth=3)

ax.set_xlim(left=0,right=30);ax.legend()
ax.set_title('PDFs of non-central Chi square distribution');
plt.show()
Figure 1: Simulated PDFs of non-central Chi-Squared random variables


For further reading

[1] David A. Harville, “Linear Models and the Relevant Distributions and Matrix Algebra”, 978-1138578333, Chapman and Hall/CRC, 1 edition, March 2018.↗



MPSK modulation: simulate in Matlab & Python

A generic complex baseband simulation technique, to simulate all M-ary phase shift keying (M-PSK) modulation techniques is given here. The given simulation code is very generic, and it plots both simulated and theoretical symbol error rates for all MPSK modulation techniques.

M-ary phase shift keying (M-PSK) modulation

In phase shift keying, all the information gets encoded in the phase of the carrier signal. The M-PSK modulator transmits a series of information symbols drawn from the set m∈{1,2,…,M}. Each transmitted symbol holds k bits of information (k=log2(M)). The information symbols are modulated using M-PSK mapping.

Figure 1: Signal space constellations for various MPSK modulations

The general expression for an M-PSK signal set is given by

\[s_m(t) = A\, cos\left( 2 \pi f_c t + \frac{2 \pi (m-1)}{M}\right), \quad 0 \leq t \leq T_{sym},\; m=1,2,\cdots,M \quad\quad (1)\]

Here, M denotes the modulation order and it defines the number of constellation points in the reference constellation. The value of M depends on the parameter k, the number of bits we wish to squeeze into a single MPSK symbol. For example, if we wish to squeeze in 3 bits (k=3) in one transmit symbol, then \(M = 2^k = 2^3 = 8\) and this results in the 8-PSK configuration. M=2 gives the binary phase shift keying (BPSK) configuration. The configuration with M=4 is referred to as quadrature phase shift keying (QPSK). The parameter A is the amplitude scaling factor. Using a trigonometric identity, equation (1) can be separated into cosine and sine basis functions as follows

\[s_m(t) = A\, cos\left(\frac{2 \pi (m-1)}{M}\right) cos\left(2 \pi f_c t\right) - A\, sin\left(\frac{2 \pi (m-1)}{M}\right) sin\left(2 \pi f_c t\right) \quad\quad (2)\]

This can be expressed as a combination of in-phase and quadrature phase components on an I-Q plane as

\[s_m = s_{mi} + j\, s_{mq} = A\, cos\left(\frac{2 \pi (m-1)}{M}\right) + j\, A\, sin\left(\frac{2 \pi (m-1)}{M}\right) \quad\quad (3)\]

With the amplitude normalized, the points on the reference constellation will be placed on the unit circle. The MPSK modulator is constructed based on this equation and the ideal constellations for M=4, 8 and 16 PSK modulations are shown in Figure 1.

Matlab code

Full Matlab code available in the book Digital Modulations using Matlab – build simulation models from scratch

function [s,ref]=mpsk_modulator(M,d)
%Function to MPSK modulate the vector of data symbols - d
%[s,ref]=mpsk_modulator(M,d) modulates the symbols defined by the
%vector d using MPSK modulation, where M specifies the order of
%M-PSK modulation and the vector d contains symbols whose values
%in the range 1:M. The output s is the modulated output and ref
%represents the reference constellation that can be used in demod
   ref_i= 1/sqrt(2)*cos(((1:1:M)-1)/M*2*pi);
   ref_q= 1/sqrt(2)*sin(((1:1:M)-1)/M*2*pi);
   ref = ref_i+1i*ref_q;
   s = ref(d); %M-PSK Mapping
end

Python code

Full Python code available in the book Digital Modulations using Python

modem.py: PSK modem - derived class
class PSKModem(Modem):
	# Derived class: PSKModem
	def __init__(self, M):
		#Generate reference constellation
		m = np.arange(0,M) #all information symbols m={0,1,...,M-1}
		I = 1/np.sqrt(2)*np.cos(m/M*2*np.pi)
		Q = 1/np.sqrt(2)*np.sin(m/M*2*np.pi)
		constellation = I + 1j*Q #reference constellation
		Modem.__init__(self, M, constellation, name='PSK') #set the modem attributes
.
.
.

This article is part of the following books
Digital Modulations using Matlab : Build Simulation Models from Scratch, ISBN: 978-1521493885
Digital Modulations using Python ISBN: 978-1712321638
All books available in ebook (PDF) and Paperback formats

M-PSK demodulation (coherent detection)

Generally, the two main categories of detection techniques commonly applied for detecting digitally modulated data are coherent detection and non-coherent detection.

In the vector simulation model for coherent detection, the transmitter and receiver agree on the same reference constellation for modulating and demodulating the information. The modulators generate the reference constellation for the selected modulation type. The same reference constellation should be used if coherent detection is selected as the method of demodulating the received data vector.

On the other hand, in the non-coherent detection, the receiver is oblivious to the reference constellation used at the transmitter. The receiver uses methods like envelope detection to demodulate the data.

The IQ detection technique is an example of coherent detection. In the IQ detection technique, the first step is to compute the pair-wise Euclidean distance between the given two vectors – reference array and the received symbols corrupted with noise. Each symbol in the received symbol vector (represented on a p-dimensional plane) should be compared with every symbol in the reference array. Next, the symbols, from the reference array, that provide the minimum Euclidean distance are returned.

Let x=(x1,x2,…,xp) and y=(y1,y2,…,yp) be two points in p-dimensional space. The Euclidean distance between them is given by

\[d(x,y) = \sqrt{ \sum_{i=1}^{p} \left( x_i - y_i\right)^2 }\]

The pair-wise Euclidean distance between two sets of vectors, say x and y, on a p-dimensional space, can be computed using the vectorized code. The vectorized code returns the ideal signaling points from matrix y that provides the minimum Euclidean distance. Since the vectorized implementation is devoid of nested for-loops, the program executes significantly faster for larger input matrices. The given code is very generic in the sense that it can be easily reused to implement optimum coherent receivers for any N-dimensional digital modulation technique (Please refer the books Digital Modulations using Matlab and Digital Modulations using Python for complete simulation code) .

Matlab code

Full Matlab code available in the book Digital Modulations using Matlab

function [dCap]= mpsk_detector(M,r)
%Function to detect MPSK modulated symbols
%[dCap]= mpsk_detector(M,r) detects the received MPSK signal points
%points - 'r'. M is the modulation level of MPSK
   ref_i= 1/sqrt(2)*cos(((1:1:M)-1)/M*2*pi);
   ref_q= 1/sqrt(2)*sin(((1:1:M)-1)/M*2*pi);
   ref = ref_i+1i*ref_q; %reference constellation for MPSK
   [~,dCap]= iqOptDetector(r,ref); %IQ detection
end

Python code

Full Python code available in the book Digital Modulations using Python
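
The book's implementation is not reproduced here; the following minimal numpy sketch (illustrative names, mirroring the Matlab function above) performs the same vectorized minimum Euclidean distance detection:

import numpy as np

def mpsk_detector(M, r):
    """Detect M-PSK symbols in the received vector r (1-D numpy array) by choosing,
    for each received sample, the reference constellation point at minimum
    Euclidean distance. Returns symbol indices in the range 0..M-1."""
    m = np.arange(M)
    ref = 1/np.sqrt(2)*(np.cos(m/M*2*np.pi) + 1j*np.sin(m/M*2*np.pi))  # reference constellation
    dist = np.abs(r[:, None] - ref[None, :])   # pair-wise Euclidean distances, shape (len(r), M)
    return np.argmin(dist, axis=1)             # index of the nearest reference point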

Performance simulation results

The simulation results for the error rate performance of M-PSK modulations over an AWGN channel and a Rayleigh flat-fading channel are given in the following figures.

Figure 2: Error rate performance of MPSK modulations in AWGN channel


Reference

[1] John G. Proakis, “Digital Communications”, McGraw-Hill; 5th edition.↗

Related Topics

Digital Modulators and Demodulators - Complex Baseband Equivalent Models
Introduction
Complex baseband representation of modulated signal
Complex baseband representation of channel response
● Modulators for amplitude and phase modulations
 □ Pulse Amplitude Modulation (M-PAM)
 □ Phase Shift Keying Modulation (M-PSK)
 □ Quadrature Amplitude Modulation (M-QAM)
● Demodulators for amplitude and phase modulations
 □ M-PAM detection
 □ M-PSK detection
 □ M-QAM detection
 □ Optimum detector on IQ plane using minimum Euclidean distance
● M-ary FSK modulation and detection
 □ Modulator for M orthogonal signals
 □ M-FSK detection


Understand Moving Average Filter with Python & Matlab

The moving average filter is a simple Low Pass FIR (Finite Impulse Response) filter commonly used for smoothing an array of sampled data/signal. It takes L samples of input at a time, takes the average of those L samples and produces a single output point. It is a very simple LPF (Low Pass Filter) structure that comes in handy for scientists and engineers to filter unwanted noisy components from the intended data.

As the filter length (the parameter L) increases, the smoothness of the output increases, whereas the sharp transitions in the data are made increasingly blunt. This implies that this filter has an excellent time domain response but a poor frequency response.

The MA filter performs three important functions:

1) It takes L input points, computes the average of those L points and produces a single output point
2) Due to the computation/calculations involved, the filter introduces a definite amount of delay
3) The filter acts as a Low Pass Filter (with poor frequency domain response and a good time domain response).

Implementation

The difference equation for an L-point discrete-time moving average filter, with input represented by the vector \(x[n]\) and the averaged output vector \(y[n]\), is

\[y[n] = \displaystyle{\frac{1}{L} \sum_{k=0}^{L-1}x[n-k]} \quad\quad (1) \]

For example, a 5-point Moving Average FIR filter takes the current and previous four samples of input and calculates the average. This operation is represented as shown in Figure 1 with the following difference equation for the input-output relationship in discrete-time.

\[\begin{aligned} y[n] &= \displaystyle{\frac{1}{5} \left(x[n] + x[n-1] + x[n-2] + x[n-3] + x[n-4] \right) } \\ &= 0.2 \left(x[n] + x[n-1] + x[n-2] + x[n-3] + x[n-4] \right) \end{aligned} \quad\quad (2) \]
Figure 1: Discrete-time 5-point Moving Average FIR filter

The unit delay shown in Figure 1 is realized by either of the two options:

  • Representing the input samples as an array in the computer memory and processing them
  • Using D-Flip flop shift registers for digital hardware implementation. If each discrete value of the input \(x[n]\) is represented as a 12-bit signal line from an ADC (analog to digital converter), then we would require 4 sets of 12 D-Flip flops to implement the 5-point moving average filter shown in Figure 1.

Z-Transform and Transfer function

In signal processing, delaying a signal \(x[n]\) by one sample period (unit delay) is equivalent to multiplying its Z-transform \(X(z)\) by \(z^{-1}\). By applying this idea, we can find the Z-transform of the 5-point moving average filter in equation (2) as

\[ \begin{aligned} y[n] &= 0.2 \left(x[n] + x[n-1] + x[n-2] + x[n-3] + x[n-4] \right) \\ Y[z] &= 0.2 \left(z^0 + z^{-1} + z^{-2} + z^{-3} + z^{-4} \right) X[z] \\ Y[z] &= 0.2 \left(1 + z^{-1} + z^{-2} + z^{-3} + z^{-4} \right) X[z] \quad\quad (3) \end{aligned}\]

Similarly, the Z-transform of the generic L-sample Moving Average filter of equation (1) is

\[ Y(z) = \left(\displaystyle{\frac{1}{L} \sum_{k=0}^{L-1} z^{-k}} \right) X(z) \quad\quad (4) \]

The transfer function describes the input-output relationship of the system and for the L-point Moving Average filter, the transfer function is given by

\[H(z) = \displaystyle{\frac{Y(z)}{X(z)}} =\displaystyle{\frac{1}{L} \sum_{k=0}^{L-1} z^{-k}}  \quad\quad (5)\]

Simulating the filter in Matlab and Python

In Matlab, we can use the filter function or conv (convolution) to implement the moving average FIR filter.

In general, the Z-transform of a discrete-time filter’s output is related to the Z-transform of the input by

\[H(z) = \displaystyle{\frac{Y(z)}{X(z)}} = \displaystyle{\frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_n z^{-n}}{a_0 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_m z^{-m}}} \quad\quad (6) \]

where \(b_i\) and \(a_i\) are the filter coefficients and the order of the filter is the maximum of \(n\) and \(m\).

For implementing equation (6) using the filter function, the Matlab function is called as

B = [b_0, b_1, b_2, ..., b_n] %numerator coefficients
A = [a_0, a_1, a_2, ..., a_m] %denominator coefficients
y = filter(B,A,x) %filter input x and get result in y

From the difference equation and transfer function of the L-point moving average filter, we note the following values for the numerator coefficients \(b_i\) and denominator coefficients \(a_i\).

\[\begin{aligned} \{b_i\} &= \{1/L, 1/L, …., 1/L\} &,i=0,1,\cdots,L-1 \\ \{a_i\} &= {1} &,i=0 \end{aligned} \quad\quad (7)\]

Therefore, the 5-point moving average filter can be coded as

B = [0.2, 0.2, 0.2, 0.2, 0.2] %numerator coefficients
A = [1] %denominator coefficients
y = filter(B,A,x) %filter input x and get result in y

The numerator coefficients for the moving average filter can be conveniently expressed in short notation as shown below

 L = 5
B = ones(1,L)/L %numerator coefficients
A = [1] %denominator coefficients
x = rand(1,10) %random samples for x
y = filter(B,A,x) %filter input x and get result in y

When using conv function to implement the moving average filter, the following code can be used

L = 5;
x = rand(1,10) %random samples for x;
y = conv(x,ones(1,L)/L)

There exists a difference between using the conv function and the filter function for implementing an FIR filter. The conv function gives the result of the complete convolution and the length of the result is length(x) + L - 1. Whereas, the filter function gives an output that is of the same length as the input x.

In python, the filtering operation can be performed using the lfilter and convolve functions available in the scipy signal processing package. The equivalent python code is shown below.

import numpy as np
from scipy import signal
L=5 #L-point filter
b = (np.ones(L))/L #numerator co-effs of filter transfer function
a = np.ones(1)  #denominator co-effs of filter transfer function
x = np.random.randn(10) #10 random samples for x
y = signal.convolve(x,b) #filter output using convolution

y = signal.lfilter(b,a,x) #filter output using lfilter function
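
Continuing with the variables b, a and x defined in the snippet above, the difference in output lengths noted earlier for conv/filter also holds for convolve/lfilter:

print(len(signal.convolve(x,b)))   # len(x) + L - 1 = 10 + 5 - 1 = 14 samples (full convolution)
print(len(signal.lfilter(b,a,x)))  # 10 samples, same length as the input x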

Pole-zero plot and frequency response

A pole-zero plot for a filter transfer function \(H(z)\) displays the pole and zero locations in the z-plane. In the pole-zero plot, the zeros occur at locations (frequencies) where \(H(z) = 0\) and the poles occur at locations (frequencies) where \(H(z) = \infty\). Equivalently, zeros occur at frequencies for which the numerator of the transfer function in equation (6) becomes zero and poles occur at frequencies for which the denominator of the transfer function becomes zero.

In a pole-zero plot, the locations of the poles are usually marked by a cross (×) and the zeros are marked as circles (○). The poles and zeros of a transfer function effectively define the system response and determine the stability and performance of the filtering system.

In Matlab, the pole-zero plot and the frequency response of the L = 11-point moving average filter can be obtained as follows.

L=11
zplane([ones(1,L)]/L,1); %pole-zero plot
w = -pi:(pi/100):pi; %to plot frequency response
freqz([ones(1,L)]/L,1,w); %plot magnitude and phase response
Figure 2: Pole-Zero plot for L=11-point Moving Average filter
Figure 3: Magnitude and phase response of L=11-point Moving Average filter

The magnitude and phase frequency responses can be coded in Python as follows

[updated] See my Google colab for full Python code

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

L=11 #L-point filter
b = (np.ones(L))/L #numerator co-effs of filter transfer function
a = np.ones(1)  #denominator co-effs of filter transfer function

w, h = signal.freqz(b,a)
plt.subplot(2, 1, 1)
plt.plot(w, 20 * np.log10(abs(h)))
plt.ylabel('Magnitude [dB]')
plt.xlabel('Frequency [rad/sample]')

plt.subplot(2, 1, 2)
angles = np.unwrap(np.angle(h))
plt.plot(w, angles)
plt.ylabel('Angle (radians)')
plt.xlabel('Frequency [rad/sample]')
plt.show()
Figure 4: Impulse response, Pole-zero plot, magnitude and phase response of L=11 moving average filter

Case study:

The following figures depict the time domain and frequency domain responses of L-point Moving Average filters of various lengths. A noisy square wave signal is driven through the filters and the time domain responses are obtained.

Figure 5: Processing a signal through Moving average filter of various lengths

On the first plot, we have the noisy square wave signal that is going into the moving average filter. The input is noisy and our objective is to reduce the noise as much as possible. The next figure is the output response of a 3-point Moving Average filter. It can be deduced from the figure that the 3-point Moving Average filter has not done much in filtering out the noise. We increase the filter taps to 10 points and we can see that the noise in the output has reduced a lot, as depicted in the next figure.

With L=51 tap filter, though the noise is almost zero, the transitions are blunted out drastically (observe the slope on the either side of the signal and compare them with the ideal brick wall transitions in the input signal).

From the frequency response of the lower order filters (L=3, L=10), it can be asserted that the roll-off is very slow and the stop band attenuation is not good. Given this stop band attenuation, clearly, the moving average filter cannot separate one band of frequencies from another. As we increase the filter order to 51, the transitions are not preserved in the time domain. Good performance in the time domain results in poor performance in the frequency domain, and vice versa. A compromise is needed for optimal filter design.


References:

[1] Filtering – OpenCourseWare – MIT.↗
[2] HC Chen et al.,”Moving Average Filter with its application to QRS detection”,IEEE Computers in Cardiology, 2003.↗


Books by the author

Wireless Communication Systems in Matlab, Second Edition (PDF)
Digital Modulations using Python (PDF ebook)
Digital Modulations using Matlab (PDF ebook)
Hand-picked Best books on Communication Engineering
Best books on Signal Processing

QPSK – Quadrature Phase Shift Keying

Quadrature Phase Shift Keying (QPSK) is a form of phase modulation in which two information bits (combined as one symbol) are modulated at once, selecting one of four possible carrier phase states.

Figure 1: Waveform simulation model for QPSK modulation

The QPSK signal within a symbol duration \(T_{sym}\) is defined as

\[s(t) = A \cdot cos \left[2 \pi f_c t + \theta_n \right], \quad \quad 0 \leq t \leq T_{sym},\; n=1,2,3,4 \quad \quad (1) \]

where the signal phase is given by

\[\theta_n = \left(2n - 1 \right) \frac{\pi}{4} \quad \quad (2)\]

Therefore, the four possible initial signal phases are \(\pi/4, 3 \pi/4, 5 \pi/4\) and \(7 \pi/4\) radians. Equation (1) can be re-written as

\[\begin{align} s(t) &= A \cdot cos \theta_n \cdot cos \left( 2 \pi f_c t\right) - A \cdot sin \theta_n \cdot sin \left( 2 \pi f_c t\right) \\ &= s_{ni} \phi_i(t) + s_{nq} \phi_q(t) \quad\quad \quad \quad \quad\quad \quad \quad \quad\quad \quad \quad \quad\quad \quad \quad (3) \end{align} \]

The above expression indicates the use of two orthonormal basis functions: \( \left\langle \phi_i(t),\phi_q(t)\right\rangle\) together with the inphase and quadrature signaling points: \( \left\langle s_{ni}, s_{nq}\right\rangle\). Therefore, on a two dimensional co-ordinate system with the axes set to \( \phi_i(t)\) and \(\phi_q(t)\), the QPSK signal is represented by four constellation points dictated by the vectors \(\left\langle s_{ni}, s_{nq}\right\rangle\) with \( n=1,2,3,4\).
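
As a quick numerical check of equations (2) and (3), the following sketch evaluates the four phases \(\theta_n\) and the corresponding constellation points \(\left\langle s_{ni}, s_{nq}\right\rangle\) for an assumed amplitude \(A=1\).

import numpy as np

A = 1.0                                      # assumed carrier amplitude
n = np.arange(1, 5)                          # n = 1, 2, 3, 4
theta = (2*n - 1)*np.pi/4                    # phases from equation (2)
s_i, s_q = A*np.cos(theta), A*np.sin(theta)  # in-phase and quadrature signaling points
for k in range(4):
    print('n={}: theta={:.0f} deg, (s_ni, s_nq)=({:+.3f}, {:+.3f})'.format(
        n[k], np.degrees(theta[k]), s_i[k], s_q[k]))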

This article is part of the following books
Digital Modulations using Matlab : Build Simulation Models from Scratch, ISBN: 978-1521493885
Digital Modulations using Python ISBN: 978-1712321638
All books available in ebook (PDF) and Paperback formats

The transmitter

The QPSK transmitter, shown in Figure 1, is implemented as a Matlab function qpsk_mod. In this implementation, a splitter separates the odd and even bits from the generated information bits. The stream of odd bits (quadrature arm) and the stream of even bits (in-phase arm) are each converted to NRZ format in parallel.

Refer Digital Modulations using Matlab : Build Simulation Models from Scratch for full Matlab code.
Refer Digital Modulations using Python for full Python code

File 1: qpsk_mod.m: QPSK modulator

function [s,t,I,Q] = qpsk_mod(a,fc,OF)
%Modulate an incoming binary stream using conventional QPSK
%a - input binary data stream (0's and 1's) to modulate
%fc - carrier frequency in Hertz
%OF - oversampling factor (multiples of fc) - at least 4 is better
%s - QPSK modulated signal with carrier
%t - time base for the carrier modulated signal
%I - baseband I channel waveform (no carrier)
%Q - baseband Q channel waveform (no carrier)
L = 2*OF;%samples in each symbol (QPSK has 2 bits in each symbol)
ak = 2*a-1; %NRZ encoding 0-> -1, 1->+1
I = ak(1:2:end);Q = ak(2:2:end);%even and odd bit streams
I=repmat(I,1,L).'; Q=repmat(Q,1,L).';%even/odd streams at 1/2Tb baud
I = I(:).'; Q = Q(:).'; %serialize
fs = OF*fc; %sampling frequency
t=0:1/fs:(length(I)-1)/fs; %time base
iChannel = I.*cos(2*pi*fc*t);qChannel = -Q.*sin(2*pi*fc*t);
s = iChannel + qChannel; %QPSK modulated signal with carrier
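
For readers following the Python edition, a rough Python sketch of the same chain of operations (NRZ mapping, splitting into I/Q arms, rectangular pulse shaping and carrier mixing) is given below. The function name qpsk_mod_sketch and its parameters are illustrative assumptions, not the code from the book.

import numpy as np

def qpsk_mod_sketch(a, fc, OF):
    """Minimal QPSK waveform modulator sketch (assumed helper, not the book's qpsk_mod).
    a: binary array, fc: carrier frequency (Hz), OF: oversampling factor (fs = OF*fc)."""
    L = 2*OF                       # samples per QPSK symbol (2 bits per symbol)
    ak = 2*np.asarray(a) - 1       # NRZ encoding: 0 -> -1, 1 -> +1
    I, Q = ak[0::2], ak[1::2]      # alternate bits to I and Q arms
    I = np.repeat(I, L)            # rectangular pulse shaping: hold each symbol for L samples
    Q = np.repeat(Q, L)
    fs = OF*fc                     # sampling frequency
    t = np.arange(len(I))/fs       # time base
    s = I*np.cos(2*np.pi*fc*t) - Q*np.sin(2*np.pi*fc*t)  # carrier-modulated QPSK signal
    return s, t, I, Q

# example usage with assumed parameters
s, t, I, Q = qpsk_mod_sketch(np.random.randint(0, 2, 20), fc=100, OF=8)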

The timing diagram for BPSK and QPSK modulation is shown in Figure 2. For BPSK modulation, the symbol duration is the same as the bit duration; for QPSK, the symbol duration is twice the bit duration: \(T_{sym}=2T_b\). Therefore, if the QPSK symbols are transmitted at the same rate as BPSK, QPSK sends twice as much data as BPSK does. After oversampling and pulse shaping, it is intuitively clear that the signals on the I-arm and Q-arm are BPSK signals with symbol duration \(2T_b\). The signal on the in-phase arm is then multiplied by \(cos (2 \pi f_c t)\) and the signal on the quadrature arm is multiplied by \(-sin (2 \pi f_c t)\). The QPSK modulated signal is obtained by adding the signals from both the in-phase and quadrature arms.

Note: The oversampling rate for the simulation is chosen as \(L=2 f_s/f_c\), where \(f_c\) is the given carrier frequency and \(f_s\) is the sampling frequency satisfying the Nyquist sampling theorem with respect to the carrier frequency (\(f_s \geq 2 f_c\)). This configuration gives an integral number of carrier cycles for one symbol duration.

Figure 2: Timing diagram for BPSK and QPSK modulations

The receiver

Due to its special relationship with BPSK, the QPSK receiver takes the simple form shown in Figure 3. In this implementation, the I-channel and Q-channel signals are individually demodulated in the same way as in BPSK demodulation. After demodulation, the I-channel and Q-channel bit sequences are combined into a single sequence. The function qpsk_demod implements a QPSK demodulator as per Figure 3.
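
A matching coherent demodulator can be sketched in Python as follows. As with the modulator sketch above, qpsk_demod_sketch is a hypothetical helper that mirrors the structure of Figure 3 (I/Q correlators, integrate-and-dump over one symbol, threshold detection and bit interleaving), not the book's qpsk_demod implementation. Passing the noiseless output of qpsk_mod_sketch through it should recover the original bits.

import numpy as np

def qpsk_demod_sketch(r, fc, OF):
    """Minimal coherent QPSK demodulator sketch (assumed helper).
    r: received waveform, fc: carrier frequency (Hz), OF: oversampling factor (fs = OF*fc)."""
    L = 2*OF                                   # samples per QPSK symbol
    fs = OF*fc
    t = np.arange(len(r))/fs
    x = r*np.cos(2*np.pi*fc*t)                 # I-arm correlator input
    y = -r*np.sin(2*np.pi*fc*t)                # Q-arm correlator input
    x = np.add.reduceat(x, np.arange(0, len(x), L))  # integrate over each symbol
    y = np.add.reduceat(y, np.arange(0, len(y), L))
    a_hat = np.zeros(2*len(x), dtype=int)
    a_hat[0::2] = (x > 0)                      # even-position bits from I arm
    a_hat[1::2] = (y > 0)                      # odd-position bits from Q arm
    return a_hat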

Read more about QPSK, the implementation of its modulator and demodulator, and performance simulation in these books:

Figure 3: Waveform simulation model for QPSK demodulation

Performance simulation over AWGN

The complete waveform simulation for the aforementioned QPSK modulation and demodulation is given next. The simulation involves generating random message bits, modulating them using QPSK, adding AWGN channel noise corresponding to the given signal-to-noise ratio, and demodulating the noisy signal using a coherent QPSK receiver. The waveforms at the various stages of the modulator are shown in Figure 4.

Figure 4: Simulated QPSK waveforms at the transmitter side

The performance simulation for the QPSK transmitter-receiver combination is also part of the referenced simulation code, and the resulting bit-error rate performance curve is the same as that of conventional BPSK. A QPSK signal essentially combines two orthogonally modulated BPSK signals; therefore, the resulting performance curves for QPSK (\(E_b/N_0\) vs. bit error rate) are identical to those of conventional BPSK.
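
For reference, the theoretical bit error rate of coherently detected BPSK over AWGN, which also applies to Gray-coded QPSK, is

\[ P_b = Q\left( \sqrt{\frac{2 E_b}{N_0}} \right) = \frac{1}{2} \operatorname{erfc}\left( \sqrt{\frac{E_b}{N_0}} \right) \]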

QPSK variants

QPSK modulation has several variants; three popular flavors are: Offset QPSK, π/4-QPSK and π/4-DQPSK.

Offset-QPSK

Offset-QPSK is essentially the same as QPSK, except that the orthogonal carrier signals on the I-channel and the Q-channel are staggered (one of them is delayed in time). In OQPSK, the orthogonal components cannot change states at the same time, because the components change state only at the middle of the symbol periods (due to the half-symbol offset in the Q-channel). This eliminates 180° phase shifts altogether and limits the phase changes to 0° or 90° every bit period, as illustrated in the sketch below.
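
As a rough illustration of the staggering step (assuming rectangular-pulse-shaped I and Q streams with L samples per symbol, as produced by the hypothetical qpsk_mod_sketch above), the Q arm is simply delayed by half a symbol before carrier mixing.

import numpy as np

def oqpsk_stagger(I, Q, L):
    """Delay the Q arm by half a symbol (L//2 samples) before carrier modulation.
    Illustrative sketch only; the trailing L//2 samples of Q are truncated."""
    Q_delayed = np.concatenate((np.zeros(L//2), Q))[:len(Q)]  # half-symbol offset
    return I, Q_delayed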

Elimination of 180° phase shifts in OQPSK offers many advantages over QPSK. Unlike QPSK, the spectrum of OQPSK remains unchanged when band-limited [1]. Additionally, OQPSK performs better than QPSK when subjected to phase jitter [2]. Further improvements over OQPSK can be obtained if the phase transitions are avoided altogether, as in continuous phase modulation schemes like minimum shift keying (MSK).

π/4-QPSK and π/4-DQPSK

In π/4-QPSK, the signaling points of the modulated signal are chosen from two QPSK constellations that are shifted by π/4 radians (45°) with respect to each other. Switching between the two constellations on every successive symbol ensures that the phase changes are confined to odd multiples of 45°. Therefore, phase transitions of 90° and 180° are eliminated.
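
A minimal sketch of this constellation switching is given below; the bit-to-symbol mapping used here is an illustrative assumption, not a standard-compliant π/4-QPSK mapper.

import numpy as np

def pi4_qpsk_symbols(bits):
    """Map 2-bit groups onto two QPSK constellations offset by pi/4, alternating every symbol."""
    pairs = bits.reshape(-1, 2)                    # 2 bits per symbol
    idx = pairs[:, 0]*2 + pairs[:, 1]              # symbol index 0..3 (assumed mapping)
    base = (2*idx + 1)*np.pi/4                     # QPSK phases pi/4, 3pi/4, 5pi/4, 7pi/4
    offset = (np.arange(len(idx)) % 2)*np.pi/4     # extra pi/4 rotation on every other symbol
    return np.exp(1j*(base + offset))              # complex baseband constellation points

# successive phase differences are confined to odd multiples of 45 degrees
symbols = pi4_qpsk_symbols(np.random.randint(0, 2, 40))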

π/4-QPSK preserves the constant envelope property better than QPSK and OQPSK. Unlike QPSK and OQPSK, π/4-QPSK can be differentially encoded, enabling the use of both coherent and non-coherent demodulation techniques. The choice of non-coherent demodulation results in a simpler receiver design. Differentially encoded π/4-QPSK is referred to as π/4-DQPSK.

Read more about QPSK and its variants, the implementation of their modulators and demodulators, and performance simulation in these books:

Constellation diagram

The phase transition properties of the different QPSK variants are easily investigated using a constellation diagram. Refer to this article that discusses the method to plot signal space constellations for the various modulations used in the transmitter.

Refer Digital Modulations using Matlab : Build Simulation Models from Scratch for full Matlab code.
Refer Digital Modulations using Python for full Python code


References

[1] S. A. Rhodes, “Effects of hardlimiting on bandlimited transmissions with conventional and offset QPSK modulation”, in Proc. Nat. Telecommun. Conf., Houston, TX, 1972, pp. 20F/1-20F/7
[2] S. A. Rhodes, “Effect of noisy phase reference on coherent detection of offset QPSK signals”, IEEE Trans. Commun., vol. COM-22, pp. 1046-1055, Aug. 1974.↗

Books by the author

Wireless Communication Systems in Matlab, Second Edition (PDF)
Digital Modulations using Python (PDF ebook)
Digital Modulations using Matlab (PDF ebook)
Hand-picked Best books on Communication Engineering
Best books on Signal Processing

Topics in this chapter

Digital Modulators and Demodulators - Passband Simulation Models
Introduction
Binary Phase Shift Keying (BPSK)
 □ BPSK transmitter
 □ BPSK receiver
 □ End-to-end simulation
Coherent detection of Differentially Encoded BPSK (DEBPSK)
● Differential BPSK (D-BPSK)
 □ Sub-optimum receiver for DBPSK
 □ Optimum noncoherent receiver for DBPSK
Quadrature Phase Shift Keying (QPSK)
 □ QPSK transmitter
 □ QPSK receiver
 □ Performance simulation over AWGN
● Offset QPSK (O-QPSK)
● π/4-DQPSK
● Continuous Phase Modulation (CPM)
 □ Motivation behind CPM
 □ Continuous Phase Frequency Shift Keying (CPFSK) modulation
 □ Minimum Shift Keying (MSK)
Investigating phase transition properties
● Power Spectral Density (PSD) plots
Gaussian Minimum Shift Keying (GMSK)
 □ Pre-modulation Gaussian Low Pass Filter
 □ Quadrature implementation of GMSK modulator
 □ GMSK spectra
 □ GMSK demodulator
 □ Performance
● Frequency Shift Keying (FSK)
 □ Binary-FSK (BFSK)
 □ Orthogonality condition for non-coherent BFSK detection
 □ Orthogonality condition for coherent BFSK
 □ Modulator
 □ Coherent Demodulator
 □ Non-coherent Demodulator
 □ Performance simulation
 □ Power spectral density

BPSK – Binary Phase Shift Keying

Key focus: BPSK, Binary Phase Shift Keying, bpsk modulation, bpsk demodulation, BPSK matlab, BPSK python implementation, BPSK constellation

BPSK – introduction

BPSK stands for Binary Phase Shift Keying. It is a type of modulation used in digital communication systems to transmit binary data over a communication channel.

In BPSK, the carrier signal is modulated by changing its phase by 180 degrees for each binary symbol. Specifically, a binary 0 is represented by a phase shift of 180 degrees, while a binary 1 is represented by no phase shift.

BPSK is a straightforward and effective modulation method and is frequently utilized in applications where the communication channel is susceptible to noise and interference. It is also utilized in different wireless communication systems like Wi-Fi, Bluetooth, and satellite communication.

Implementation details

Binary Phase Shift Keying (BPSK) is a two phase modulation scheme, where the 0’s and 1’s in a binary message are represented by two different phase states in the carrier signal: \(\theta=0^{\circ}\) for binary 1 and \(\theta=180^{\circ}\) for binary 0.

This article is part of the following books
Digital Modulations using Matlab : Build Simulation Models from Scratch, ISBN: 978-1521493885
Digital Modulations using Python ISBN: 978-1712321638
All books available in ebook (PDF) and Paperback formats

In digital modulation techniques, a set of basis functions is chosen for a particular modulation scheme. Generally, the basis functions are orthogonal to each other. Basis functions can be derived using the Gram-Schmidt orthogonalization procedure [1]. Once the basis functions are chosen, any vector in the signal space can be represented as a linear combination of them. In BPSK, only one sinusoid is taken as the basis function. Modulation is achieved by varying the phase of the sinusoid depending on the message bits. Therefore, within a bit duration \(T_b\), the two different phase states of the carrier signal are represented as,

\begin{align*} s_1(t) &= A_c\; cos\left(2 \pi f_c t \right), & 0 \leq t \leq T_b \quad \text{for binary 1}\\ s_0(t) &= A_c\; cos\left(2 \pi f_c t + \pi \right), & 0 \leq t \leq T_b \quad \text{for binary 0} \end{align*}

where \(A_c\) is the amplitude of the sinusoidal signal, \(f_c\) is the carrier frequency in \(Hz\), \(t\) is the instantaneous time in seconds, and \(T_b\) is the bit period in seconds. The signal \(s_0(t)\) stands for the carrier signal when the information bit \(a_k=0\) was transmitted and the signal \(s_1(t)\) denotes the carrier signal when the information bit \(a_k=1\) was transmitted.

The constellation diagram for BPSK (Figure 3 below) shows two constellation points lying entirely on the x axis (in-phase), with no projection on the y axis (quadrature). This means that the BPSK modulated signal has an in-phase component but no quadrature component, because it has only one basis function. It can be noted that the carrier phases are \(180^{\circ}\) apart and the signal has a constant envelope. The carrier's phase contains all the information that is being transmitted.

BPSK transmitter

A BPSK transmitter, shown in Figure 1, is implemented by coding the message bits using NRZ coding (\(1\) represented by positive voltage and \(0\) represented by negative voltage) and multiplying the output by a reference oscillator running at carrier frequency \(f_c\).

Figure 1: BPSK transmitter

The following function (bpsk_mod) implements a baseband BPSK transmitter according to Figure 1. The output of the function is in baseband and it can optionally be multiplied with the carrier frequency outside the function. In order to get nice continuous curves, the oversampling factor (\(L\)) in the simulation should be appropriately chosen. If a carrier signal is used, it is convenient to choose the oversampling factor as the ratio of sampling frequency (\(f_s\)) and the carrier frequency (\(f_c\)). The chosen sampling frequency must satisfy the Nyquist sampling theorem with respect to carrier frequency. For baseband waveform simulation, the oversampling factor can simply be chosen as the ratio of bit period (\(T_b\)) to the chosen sampling period (\(T_s\)), where the sampling period is sufficiently smaller than the bit period.

Refer Digital Modulations using Matlab : Build Simulation Models from Scratch for full Matlab code.
Refer Digital Modulations using Python for full Python code

File 1: bpsk_mod.m: Baseband BPSK modulator

function [s_bb,t] = bpsk_mod(ak,L)
%Function to modulate an incoming binary stream using BPSK(baseband)
%ak - input binary data stream (0's and 1's) to modulate
%L - oversampling factor (Tb/Ts)
%s_bb - BPSK modulated signal(baseband)
%t - generated time base for the modulated signal
N = length(ak); %number of symbols
a = 2*ak-1; %BPSK modulation
ai=repmat(a,1,L).'; %bit stream at Tb baud with rect pulse shape
ai = ai(:).';%serialize
t=0:N*L-1; %time base
s_bb = ai;%BPSK modulated baseband signal

BPSK receiver

A correlation-type coherent detector, shown in Figure 2, is used for the receiver implementation. In coherent detection, the receiver must know the carrier frequency and phase. This can be achieved by using a Costas loop or a phase locked loop (PLL) at the receiver. For simulation purposes, we simply assume that carrier phase recovery has been done and therefore we directly use the locally generated reference \(cos( 2 \pi f_c t)\) at the receiver.

Figure 2: Coherent detection of BPSK (correlation type)

In the coherent receiver, the received signal is multiplied by a reference signal from carrier recovery blocks like a PLL or Costas loop. Here, it is assumed that the PLL/Costas loop output is completely synchronized. The multiplied output is integrated over one bit period using an integrator. A threshold detector makes a decision on each integrated bit based on a threshold. Since NRZ signaling was used in the transmitter, the threshold for the detector is set to \(0\). The function bpsk_demod implements a baseband BPSK receiver according to Figure 2. To use this function in a waveform simulation, the received waveform first has to be downconverted to baseband, and then the function may be called.
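
The following Python sketch mirrors this chain for a carrier-modulated BPSK waveform (multiplication by the recovered carrier, integrate-and-dump over one bit, threshold at zero). The function name and parameters are illustrative assumptions, not the book's code.

import numpy as np

def bpsk_waveform_demod_sketch(r, fc, fs, L):
    """Correlation-type coherent BPSK detector sketch (assumed helper, mirrors Figure 2).
    r: received passband waveform, fc: carrier (Hz), fs: sampling rate (Hz), L: samples per bit (Tb/Ts)."""
    t = np.arange(len(r))/fs
    x = r*np.cos(2*np.pi*fc*t)            # multiply by the recovered reference carrier
    x = np.convolve(x, np.ones(L))        # integrate over one bit period (integrate-and-dump)
    x = x[L-1::L]                         # sample at the end of each bit period
    return (x > 0).astype(int)            # threshold detector at 0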

Refer Digital Modulations using Matlab : Build Simulation Models from Scratch for full Matlab code.
Refer Digital Modulations using Python for full Python code

File 2: bpsk_demod.m: Baseband BPSK detection (correlation receiver)

function [ak_cap] = bpsk_demod(r_bb,L)
%Function to demodulate an BPSK(baseband) signal
%r_bb - received signal at the receiver front end (baseband)
%L - oversampling factor (Tsym/Ts); for BPSK, Tsym = Tb
%ak_cap - detected binary stream
x=real(r_bb); %I arm
x = conv(x,ones(1,L));%integrate for L (Tb) duration
x = x(L:L:end);%I arm - sample at every L
ak_cap = (x > 0).'; %threshold detector

End-to-end simulation

The complete waveform simulation for the end-to-end transmission of information using BPSK modulation is given next. The simulation involves: generating random message bits, modulating them using BPSK, adding AWGN noise according to the chosen signal-to-noise ratio and demodulating the noisy signal using a coherent receiver. The topic of adding AWGN noise according to the chosen signal-to-noise ratio is discussed in section 4.1 of chapter 4. The resulting waveform plots are shown in Figure 3. The performance simulation for the BPSK transmitter/receiver combination is also coded in the referenced program (see chapter 4 for more details on theoretical error rates).

The resulting performance curves will be the same as those obtained using the complex baseband equivalent simulation technique in Figure 4.4 of chapter 4.
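
As a minimal illustration of such an end-to-end chain, the complex-baseband sketch below (an assumption for illustration, not the book's bpsk_wfm_sim.m) generates random bits, maps them to BPSK, adds AWGN scaled for a target Eb/N0, detects the bits and compares the simulated error rate with theory.

import numpy as np
from scipy.special import erfc

N = 100000                                       # number of bits (assumed)
EbN0dB = np.arange(0, 11, 2)                     # Eb/N0 points to simulate (dB)
ber = []
for EbN0 in EbN0dB:
    a = np.random.randint(0, 2, N)               # random message bits
    s = 2*a - 1                                  # BPSK mapping: 0 -> -1, 1 -> +1 (Eb = 1)
    gamma = 10**(EbN0/10)                        # linear Eb/N0
    n = np.sqrt(1/(2*gamma))*np.random.randn(N)  # real AWGN with variance N0/2
    a_hat = ((s + n) > 0).astype(int)            # threshold detection at 0
    ber.append(np.mean(a_hat != a))              # simulated bit error rate
theory = 0.5*erfc(np.sqrt(10**(EbN0dB/10)))      # theoretical BPSK bit error rate
# ber and theory can now be plotted on a semilog scale for comparison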

Refer Digital Modulations using Matlab : Build Simulation Models from Scratch for full Matlab code.
Refer Digital Modulations using Python for full Python code

File 3: bpsk_wfm_sim.m: Waveform simulation for BPSK modulation and demodulation

Figure 3: (a) Baseband BPSK signal,(b) transmitted BPSK signal – with carrier, (c) constellation at transmitter and (d) received signal with AWGN noise

References:

[1] Lloyd N. Trefethen, David Bau III, Numerical Linear Algebra, Philadelphia: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-361-9, pp. 56

Books by the author

Wireless Communication Systems in Matlab, Second Edition (PDF)
Digital Modulations using Python (PDF ebook)
Digital Modulations using Matlab (PDF ebook)
Hand-picked Best books on Communication Engineering
Best books on Signal Processing

Topics in this chapter

Digital Modulators and Demodulators - Passband Simulation Models
Introduction
Binary Phase Shift Keying (BPSK)
 □ BPSK transmitter
 □ BPSK receiver
 □ End-to-end simulation
Coherent detection of Differentially Encoded BPSK (DEBPSK)
● Differential BPSK (D-BPSK)
 □ Sub-optimum receiver for DBPSK
 □ Optimum noncoherent receiver for DBPSK
Quadrature Phase Shift Keying (QPSK)
 □ QPSK transmitter
 □ QPSK receiver
 □ Performance simulation over AWGN
● Offset QPSK (O-QPSK)
● π/4-DQPSK
● Continuous Phase Modulation (CPM)
 □ Motivation behind CPM
 □ Continuous Phase Frequency Shift Keying (CPFSK) modulation
 □ Minimum Shift Keying (MSK)
Investigating phase transition properties
● Power Spectral Density (PSD) plots
Gaussian Minimum Shift Keying (GMSK)
 □ Pre-modulation Gaussian Low Pass Filter
 □ Quadrature implementation of GMSK modulator
 □ GMSK spectra
 □ GMSK demodulator
 □ Performance
● Frequency Shift Keying (FSK)
 □ Binary-FSK (BFSK)
 □ Orthogonality condition for non-coherent BFSK detection
 □ Orthogonality condition for coherent BFSK
 □ Modulator
 □ Coherent Demodulator
 □ Non-coherent Demodulator
 □ Performance simulation
 □ Power spectral density

Central Limit Theorem – a demonstration

Central Limit Theorem – What is it ?

The central limit theorem (CLT) is a fundamental concept in statistics and probability theory that explains how the sum of independent and identically distributed random variables behaves. The theorem states that as the number of these variables increases, the distribution of their sum tends to become more like a normal distribution, even if the variables themselves are not normally distributed.

The CLT states that the sum of independent and identically distributed (i.i.d) random variables (with finite mean and variance) approaches a normal distribution as the sample size \(N \rightarrow \infty\). In simpler terms, the theorem states that under certain general conditions, the sum of independent observations that follow the same underlying distribution approximates a normal distribution. The approximation steadily improves as the number of observations increases. The underlying distribution of the independent observations can be anything: binomial, Poisson, exponential, Chi-squared, etc.
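
Stated formally, if \(X_1, X_2, \dots, X_N\) are i.i.d random variables with mean \(\mu\) and finite variance \(\sigma^2\), the standardized sum converges in distribution to the standard normal:

\[ Z_N = \frac{\sum_{i=1}^{N} X_i \; - \; N\mu}{\sigma \sqrt{N}} \;\; \xrightarrow{d} \;\; \mathcal{N}(0,1) \quad \text{as} \quad N \rightarrow \infty \]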

Why CLT ?

CLT is an important concept in statistics because it allows us to make inferences about a population based on a sample, even if we do not know the distribution of the population. It is used in many statistical techniques, such as hypothesis testing and confidence intervals.

Applications of CLT

The central limit theorem (CLT) is applied in a vast range of applications including (but not limited to) signal processing, channel modeling, random processes, population statistics, engineering research, confidence interval estimation, hypothesis testing, etc. One such application in signal processing is deriving the response of a cascaded series of low-pass filters by applying the CLT. In the article titled ‘the central limit theorem and low-pass filters‘ the author illustrates how the response of a cascaded series of low-pass filters approaches a Gaussian shape as the number of filters in the series increases [1].

In digital communication, the effect of noise on a communication channel is modeled as additive white Gaussian noise. This follows from the fact that the noise in many physical channels can be considered approximately Gaussian. For example, the combined effect of a very large number of independent electron-level noise sources in semiconductor devices can be approximated by a Gaussian distribution by applying the central limit theorem.

Law of large numbers and CLT

There is a connection between the central limit theorem and the law of large numbers.

The law of large numbers is another important theorem in probability theory, which states that as the number of independent and identically distributed (iid) random variables increases, the average of those variables converges to the expected value of the distribution. In other words, as the sample size increases, the sample mean becomes more and more representative of the true population mean.

The central limit theorem, on the other hand, describes the distribution of the sum of iid random variables, and shows that as the sample size increases, the distribution of the sum approaches a normal distribution.

Both the law of large numbers and the CLT deal with the behavior of the sum or average of iid random variables as the sample size gets larger. The law of large numbers describes the behavior of the sample mean, while the CLT describes the behavior of the sum of the variables.

In essence, the law of large numbers is a precursor to the central limit theorem, as it establishes the fact that the sample mean becomes more and more representative of the true population mean as the sample size increases, and the central limit theorem shows that the distribution of the sum of iid random variables approaches a normal distribution as the sample size gets larger.
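
The convergence described by the law of large numbers can be visualized with a short sketch; the die-rolling setup below is an illustrative assumption.

import numpy as np
import matplotlib.pyplot as plt

# Law of large numbers: running average of fair die rolls converges to E[X] = 3.5
rolls = np.random.randint(1, 7, size=10000)                   # i.i.d rolls of a fair die
running_mean = np.cumsum(rolls)/np.arange(1, len(rolls) + 1)  # sample mean after each roll
plt.plot(running_mean, label='running sample mean')
plt.axhline(3.5, color='r', linestyle='--', label='expected value 3.5')
plt.xlabel('number of rolls'); plt.ylabel('sample mean'); plt.legend(); plt.show()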

Demonstration using Python

For the Matlab code, please refer to the following book: Wireless Communication Systems in Matlab, by Mathuranathan Viswanathan

The following Python code illustrates how the theorem comes into play when the number of observations is increased, for two separate experiments: rolling \(N\) unbiased dice and tossing \(N\) unbiased coins. The code generates \(N\) i.i.d discrete uniform random variables taking values from the set \(\left\{1,2,\dots,k\right\}\). In the case of the dice rolling experiment, \(k\) is set to \(6\), thus simulating a random pick from the sample space \(S=\left\{1,2,3,4,5,6\right\}\) with equal probability. For the coin tossing experiment, \(k\) is set to \(2\), thus simulating the sample space \(S=\left\{1,2\right\}\) representing head or tail events with equal probability. The rest of the code is self-explanatory.

Python code

#---------Central limit theorem - Author: Mathuranathan #gaussianwaves.com -----------------------
#
import numpy as np
import matplotlib.pyplot as plt
#%matplotlib inline

numIterations = np.asarray([1,2,5,10,50,100]); #number of i.i.d RVs
experiment = 'dice' #valid values: 'dice', 'coins'
maxNumForExperiment = {'dice':6,'coins':2} #max numbers represented on dice or coins
nSamp=100000

k = maxNumForExperiment[experiment]

fig, fig_axes = plt.subplots(ncols=3, nrows=2, constrained_layout=True)

for i,N in enumerate(numIterations):
    y = np.random.randint(low=1,high=k+1,size=(N,nSamp)).sum(axis=0)
    row = i//3;col=i%3;
    bins=np.arange(start=min(y),stop=max(y)+2,step=1)
    fig_axes[row,col].hist(y,bins=bins,density=True)
    fig_axes[row,col].set_title('N={} {}'.format(N,experiment))
plt.show()
Figure 1: Demonstrating the central limit theorem using the sum of N dice rolls
Figure 2: Demonstrating the central limit theorem using the sum of N coin tosses

References

[1] Engelberg, “The central limit theorem and low-pass filters”, Proceedings of the 2004 11th IEEE International Conference on Electronics, Circuits and Systems, 13-15 Dec. 2004, pp. 65-68.↗

Similar topics:

Random Variables - Simulating Probabilistic Systems
● Introduction
Plotting the estimated PDF
● Univariate random variables
 □ Uniform random variable
 □ Bernoulli random variable
 □ Binomial random variable
 □ Exponential random variable
 □ Poisson process
 □ Gaussian random variable
 □ Chi-squared random variable
 □ Non-central Chi-Squared random variable
 □ Chi distributed random variable
 □ Rayleigh random variable
 □ Ricean random variable
 □ Nakagami-m distributed random variable
Central limit theorem - a demonstration
● Generating correlated random variables
 □ Generating two sequences of correlated random variables
 □ Generating multiple sequences of correlated random variables using Cholesky decomposition
Generating correlated Gaussian sequences
 □ Spectral factorization method
 □ Auto-Regressive (AR) model

Books by the author

Wireless Communication Systems in Matlab, Second Edition (PDF)
Digital Modulations using Python (PDF ebook)
Digital Modulations using Matlab (PDF ebook)
Hand-picked Best books on Communication Engineering
Best books on Signal Processing