Generate color noise using Auto-Regressive (AR) model

Key focus: Learn how to generate colored noise using an auto-regressive (AR) model. Apply the Yule-Walker equations to generate power-law noises: pink noise and Brownian noise.

Auto-Regressive (AR) model

An uncorrelated Gaussian random sequence can be transformed into a correlated Gaussian random sequence using an AR time-series model. If a time-series random sequence x[n] is assumed to follow an auto-regressive model of the form

\[ x[n] = -\sum_{k=1}^{N} a_k \, x[n-k] + w[n] \quad\quad (1) \]

where w[n] is an uncorrelated Gaussian sequence of zero mean and variance σ², the natural tendency is to estimate the model parameters a_1, a_2, …, a_N. The least squares method can be applied here to find the model parameters, but the computations become cumbersome as the model order N increases. Fortunately, the AR model coefficients can be solved for using the Yule-Walker equations.

The Yule-Walker equations relate the auto-regressive model parameters a_k to the auto-correlation r_xx[k] of the random process x[n]. Finding the model parameters using the Yule-Walker equations is a two-step process:

1. Given x[n], estimate the auto-correlation r_xx[k] of the process. If the auto-correlation is already specified as a function, utilize it as it is (see the auto-correlation equations for the Jakes spectrum or the Doppler spectrum in section 11.3.2 in the book).

2. Solve the Yule-Walker equations to find the model parameters a_k and the noise variance σ².

Yule-Walker equations

The Yule-Walker equations can be compactly written as

\[ r_{xx}[l] + \sum_{k=1}^{N} a_k \, r_{xx}[l-k] = \begin{cases} \sigma^2, & l = 0 \\ 0, & l \ge 1 \end{cases} \quad\quad (2) \]

Written in matrix form, the Yule-Walker equations comprise a set of N linear equations in the N unknown parameters a_1, …, a_N:

\[ \begin{bmatrix} r_{xx}[0] & r_{xx}[-1] & \cdots & r_{xx}[1-N] \\ r_{xx}[1] & r_{xx}[0] & \cdots & r_{xx}[2-N] \\ \vdots & \vdots & \ddots & \vdots \\ r_{xx}[N-1] & r_{xx}[N-2] & \cdots & r_{xx}[0] \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix} = - \begin{bmatrix} r_{xx}[1] \\ r_{xx}[2] \\ \vdots \\ r_{xx}[N] \end{bmatrix} \quad\quad (3) \]

Representing equation (3) in a compact form,

\[ \mathbf{R} \, \mathbf{a} = -\mathbf{r} \quad\quad (4) \]

The AR model parameters can be found by solving

\[ \mathbf{a} = -\mathbf{R}^{-1} \mathbf{r} \quad\quad (5) \]

After solving for the model parameters a_k, the noise variance σ² can be found by applying the estimated values of a_k in equation (2) with l = 0. The aryule command (in Matlab and in Python's spectrum package) efficiently solves the Yule-Walker equations using the Levinson-Durbin algorithm [1][2]. Once the model parameters are obtained, the AR model can be implemented as an infinite impulse response (IIR) filter of the form

\[ H(z) = \frac{1}{A(z)} = \frac{1}{1 + \sum_{k=1}^{N} a_k z^{-k}} \quad\quad (6) \]
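As a rough illustration of step 2 (this is not the book's code, and the helper name yule_walker is just a placeholder), the Yule-Walker equations can also be solved directly with numpy/scipy:

# Minimal sketch (not from the book): estimate AR(N) parameters of a sequence x
# by solving the Yule-Walker equations. Assumes x is a 1-D numpy array.
import numpy as np
from scipy.linalg import toeplitz, solve

def yule_walker(x, N):
    """Return AR coefficients a_1..a_N and noise variance sigma^2."""
    x = np.asarray(x) - np.mean(x)
    # biased auto-correlation estimates r_xx[0..N]
    r = np.array([np.sum(x[:len(x)-k] * x[k:]) / len(x) for k in range(N + 1)])
    R = toeplitz(r[:N])                    # N x N auto-correlation matrix of equation (3)
    a = -solve(R, r[1:N+1])                # model parameters a_1..a_N, equation (5)
    sigma2 = r[0] + np.dot(a, r[1:N+1])    # noise variance from equation (2) with l = 0
    return a, sigma2

In practice, the aryule routine mentioned above does the same job more efficiently via the Levinson-Durbin recursion.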

Example: power law noise generation

The power law in the power spectrum characterizes the fluctuating observables in many natural systems. Many natural systems exhibit 1/f^α noise: a stochastic process whose power spectral density follows a power law S(f) ∝ 1/f^α, where the power exponent α typically takes values between -2 and 2. Simply put, 1/f^α noise is a colored noise with a power spectral density of 1/f^α over its entire frequency range.

The noise can be classified into different types based on the value of α.

Violet noise – α = -2, the power spectral density is proportional to f².
Blue noise – α = -1, the power spectral density is proportional to f.
White noise – α = 0, the power spectral density is flat across the whole spectrum.
Pink noise – α = 1, the power spectral density is proportional to 1/f, i.e., it decreases by 3 dB per octave with increase in frequency.
Brownian noise – α = 2, the power spectral density is proportional to 1/f², therefore it decreases by 6 dB per octave with increase in frequency.

The power law noise can be generated by passing a zero-mean white noise process through an auto-regressive (AR) filter of order N:

\[ x[n] = -\sum_{k=1}^{N} a_k \, x[n-k] + w[n] \]

where w[n] is a zero-mean white noise process. Following the AR generation method described in [3], the coefficients of the AR filter can be generated recursively as

\[ a_0 = 1, \qquad a_k = \left( k - 1 - \frac{\alpha}{2} \right) \frac{a_{k-1}}{k} \]

which can be implemented as an infinite impulse response (IIR) filter using the filter transfer function described in equation (6).

The following script implements this method and the sample results are plotted in the next Figure.

Refer to the book for the Matlab code.

Figure 1: Simulated colored noise samples and their PSD estimates: pink noise (α = 1) and Brownian noise (α = 2)
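For readers who do not have the book, the sketch below shows one possible Python implementation of the same idea; the function name powerlaw_noise, the filter order of 64 and the use of scipy.signal.lfilter are illustrative assumptions, not the book's script.

# Illustrative sketch (not the book's script): generate 1/f^alpha noise by
# filtering white Gaussian noise through the AR filter coefficients from [3].
import numpy as np
from scipy.signal import lfilter

def powerlaw_noise(alpha, n_samples, order=64):
    a = np.zeros(order + 1)
    a[0] = 1.0
    for k in range(1, order + 1):                 # recursion for the AR coefficients
        a[k] = (k - 1 - alpha / 2.0) * a[k - 1] / k
    w = np.random.randn(n_samples)                # zero-mean white Gaussian noise
    return lfilter([1.0], a, w)                   # IIR filter H(z) = 1/A(z), equation (6)

pink  = powerlaw_noise(alpha=1, n_samples=100000)   # 1/f   noise
brown = powerlaw_noise(alpha=2, n_samples=100000)   # 1/f^2 noise

Estimating the PSDs of pink and brown (for example, with a Welch estimator) should show slopes of roughly -10 dB per decade and -20 dB per decade, respectively.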


References

[1] Gene H. Golub, Charles F. Van Loan, Matrix Computations, ISBN-9780801854149, Johns Hopkins University Press, 1996, p. 143.↗
[2] J. Durbin, The fitting of time series models, Review of the International Statistical Institute, 28:233-243, 1960.↗
[3] Kasdin, N.J. Discrete Simulation of Colored Noise and Stochastic Processes and Power Law Noise Generation, Proceedings of the IEEE, Vol. 83, No. 5, 1995, pp. 802-827.↗


Topics in this chapter

Random Variables - Simulating Probabilistic Systems
● Introduction
Plotting the estimated PDF
● Univariate random variables
 □ Uniform random variable
 □ Bernoulli random variable
 □ Binomial random variable
 □ Exponential random variable
 □ Poisson process
 □ Gaussian random variable
 □ Chi-squared random variable
 □ Non-central Chi-Squared random variable
 □ Chi distributed random variable
 □ Rayleigh random variable
 □ Ricean random variable
 □ Nakagami-m distributed random variable
Central limit theorem - a demonstration
● Generating correlated random variables
 □ Generating two sequences of correlated random variables
 □ Generating multiple sequences of correlated random variables using Cholesky decomposition
Generating correlated Gaussian sequences
 □ Spectral factorization method
 □ Auto-Regressive (AR) model

Generating colored noise with Jakes PSD: Spectral factorization

The aim of this article is to demonstrate the application of the spectral factorization method in generating colored noise having the Jakes power spectral density. Before continuing, I urge the reader to go through this post: Introduction to generating correlated Gaussian sequences.

This article is part of the book
Wireless Communication Systems in Matlab (second edition), ISBN: 979-8648350779 available in ebook (PDF) format and Paperback (hardcopy) format.

In the spectral factorization method, a filter is designed using the desired frequency-domain characteristics (like the PSD) to transform an uncorrelated Gaussian sequence x[n] into a correlated sequence y[n]. In the model shown in Figure 1, the input to the LTI system is a white noise whose amplitude follows a Gaussian distribution with zero mean and variance σ², and whose power spectral density (PSD) is a constant across all frequencies.

The white noise sequence x[n] drives the LTI system with frequency response H(f), producing the signal of interest y[n]. The PSD of the output process is therefore

\[ S_{yy}(f) = |H(f)|^2 \, S_{xx}(f) = |H(f)|^2 \, \sigma^2 \]

Figure 1: Relationship among various power spectral densities in a filtering process

If the desired power spectral density S_yy(f) of the colored noise sequence y[n] is given, assuming σ² = 1, the impulse response h[n] of the LTI filter can be found by taking the inverse Fourier transform of the frequency response

\[ H(f) = \sqrt{S_{yy}(f)} \]

Once the impulse response of the filter is obtained, the colored noise sequence can be produced by driving the filter with a zero-mean white noise sequence of unit variance.

Example: Generating colored noise with Jakes PSD

For example, we wish to generate a Gaussian noise sequence whose power spectral density follows the normalized Jakes power spectral density (see section 11.3.2 in the book) given by

\[ S(f) = \frac{1}{\pi f_{max} \sqrt{1 - \left( f / f_{max} \right)^2}}, \qquad |f| \le f_{max} \]

Applying the spectral factorization method, the frequency response of the desired filter is

\[ H(f) = \sqrt{S(f)} = \frac{1}{\sqrt{\pi f_{max}} \left[ 1 - \left( f / f_{max} \right)^2 \right]^{1/4}} \]

The impulse response of the filter, given in [1], involves the fractional Bessel function of the first kind J_1/4(·), the sampling interval T_s for implementing the digital filter, and a constant K. The impulse response of the filter can be normalized by dividing it by K.

The filter can be implemented as a finite impulse response (FIR) filter structure. However, the FIR implementation requires that the impulse response be truncated to a reasonable length. Such truncation leads to ringing effects due to the Gibbs phenomenon. To avoid distortions due to truncation, the filter impulse response is usually windowed using a window function such as the Hamming window:

\[ h_w[n] = h[n] \, w[n] \]

where the Hamming window of length N is defined as

\[ w[n] = 0.54 - 0.46 \cos\left( \frac{2 \pi n}{N - 1} \right), \qquad 0 \le n \le N - 1 \]

The function given in the book in section 2.6.1 implements a windowed Jakes filter using the aforementioned equations. The impulse response and the spectral characteristics of the filter are plotted in Figure 2.

Figure 2: Impulse response and spectrum of the windowed Jakes filter (fmax = 10 Hz, Ts = 0.01 s, N = 512)

A white noise sequence can be transformed into a colored noise sequence with the Jakes PSD by processing the white noise through the implemented filter. The script (given in the book in section 2.6.1) illustrates this concept by transforming a white noise sequence into a colored noise sequence. The simulated noise samples and their PSD are plotted in Figure 3.
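For experimentation outside the book, here is a minimal Python sketch of the same frequency-domain idea (sample H(f) = sqrt(S(f)), inverse transform, window); the grid size, normalization and parameter values are assumptions chosen for illustration, not the book's windowed Bessel-function implementation.

# Illustrative sketch (not the book's section 2.6.1 code): shape white Gaussian
# noise to an approximate Jakes PSD via spectral factorization in the frequency domain.
import numpy as np
from scipy.signal import lfilter

fmax, Ts, N = 10.0, 0.01, 512                        # max Doppler, sampling interval, filter length
f = np.fft.fftshift(np.fft.fftfreq(N, d=Ts))         # centered frequency grid
S = np.zeros(N)
inband = np.abs(f) < fmax
S[inband] = 1.0/(np.pi*fmax*np.sqrt(1.0 - (f[inband]/fmax)**2))   # Jakes PSD
H = np.sqrt(S)                                       # spectral factorization: H(f) = sqrt(S_yy(f))
h = np.real(np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(H))))    # impulse response, centered
h *= np.hamming(N)                                   # Hamming window to reduce truncation ripple
h /= np.sqrt(np.sum(h**2))                           # normalize to unit energy
x = np.random.randn(10000)                           # zero-mean, unit-variance white noise
y = lfilter(h, 1.0, x)                               # colored noise with (approximately) Jakes PSD

A Welch PSD estimate of y should then resemble the characteristic bathtub-shaped Jakes spectrum up to fmax.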


Reference

[1] Jeruchim et., al, Simulation of communication systems – modeling, methodology, and techniques, second edition, Kluwer academic publishers, 2002, ISBN: 0306462672.↗


Generate correlated Gaussian sequence (colored noise)

Key focus: A colored noise sequence (a.k.a. correlated Gaussian sequence) is a non-white random sequence with a non-constant power spectral density across frequencies.

Introduction

Speaking of Gaussian random sequences such as Gaussian noise, we generally think that the power spectral density (PSD) of such Gaussian sequences is flat. We should understand that the PSD of a Gaussian sequence need not be flat. This brings out the difference between white and colored random sequences, as captured in Figure 1.

A white noise sequence is defined as any random sequence whose PSD is constant across all frequencies. Gaussian white noise is a Gaussian random sequence whose amplitude is Gaussian distributed and whose PSD is a constant. Viewed in another way, a constant PSD in the frequency domain implies that the average auto-correlation function in the time domain is an impulse function (Dirac delta function). That is, the amplitude of the noise at any given time instant is correlated only with itself. Therefore, such sequences are also referred to as uncorrelated random sequences. White Gaussian noise processes are completely characterized by their mean and variance.


Figure 1: Power spectral densities of white noise and colored noise

A colored noise sequence is simply a non-white random sequence whose PSD varies with frequency. For a colored noise, the amplitude of the noise at any given time instant is correlated with the amplitude of the noise occurring at other instants of time. Hence, colored noise sequences will have an auto-correlation function other than the impulse function. Such sequences are also referred to as correlated random sequences. Colored Gaussian noise processes are completely characterized by their mean and the shape of the power spectral density (or, equivalently, the shape of the auto-correlation function).

In mobile channel model simulations, it is often required to generate correlated Gaussian random sequences with a specified mean and power spectral density (like the Jakes PSD or the Gaussian PSD given in section 11.3.2 in the book). An uncorrelated Gaussian sequence can be transformed into a correlated sequence through filtering or a linear transformation that preserves the Gaussian distribution of the amplitudes but alters only the correlation property (equivalently, the power spectral density). We shall see two methods to generate colored Gaussian noise with a given mean and PSD shape (a short demonstration follows the list below):

Spectral factorization method
Auto-regressive (AR) model
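As a quick demonstration of the claim above (linear filtering keeps the amplitudes Gaussian while reshaping the PSD), here is a small Python sketch; the moving-average filter is an arbitrary choice used purely for illustration:

# Quick demonstration (illustrative only): filtering white Gaussian noise with a
# simple moving-average filter keeps the amplitudes Gaussian but colors the PSD.
import numpy as np
from scipy.signal import lfilter, welch

x = np.random.randn(100000)              # uncorrelated (white) Gaussian sequence
h = np.ones(8) / np.sqrt(8)              # arbitrary LTI filter (moving average)
y = lfilter(h, 1.0, x)                   # correlated (colored) Gaussian sequence

f, Pxx = welch(x)                        # approximately flat PSD
f, Pyy = welch(y)                        # low-pass shaped (colored) PSD
print(np.mean(y), np.var(y))             # still approximately zero-mean, unit-variance Gaussian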

Motivation

Let’s say we observe a real-world signal y[n] that has an arbitrary spectrum S_yy(f). We would like to describe the long sequence y[n] using very few parameters, as in applications like linear predictive coding (LPC). The modeling approach described here tries to answer the following two questions:

• Is it possible to model the first order (mean/variance) and second order (correlations, spectrum) statistics of the signal just by shaping a white noise spectrum using a transfer function H(f)? (see Figure 1)
• Does this produce the same statistics (spectrum, correlations, mean and variance) for a white noise input x[n]?

If the answer is yes to the above two questions, we can simply set the modeled parameters of the system and excite the system with white noise to produce the desired real-world signal. This reduces the amount of data we wish to transmit in a communication system application. This approach can also be used to transform an uncorrelated white Gaussian noise sequence into a colored Gaussian noise sequence with the desired spectral properties.

Linear time invariant (LTI) system model

In the given model, the random signal y[n] is observed. Given the observed signal y[n], the goal here is to find a model that best describes the spectral properties of y[n] under the following assumptions:
• The sequence y[n] is WSS (wide sense stationary) and ergodic.
• The input sequence x[n] to the LTI system is white noise whose amplitudes follow a Gaussian distribution with zero mean and variance σ², and whose power spectral density is flat.
• The LTI system is BIBO (bounded input, bounded output) stable.

Read the continuation of this post : Spectral factorization method




Plot histogram and estimated PDF in Matlab

Key focus: With examples, let’s estimate and plot the probability density function of a random variable using Matlab histogram function.

Generation of random variables with the required probability distribution characteristics is of paramount importance in simulating a communication system. Let's see how we can generate a simple random variable, estimate and plot its probability density function (PDF) from the generated data, and then match it with the intended theoretical PDF. A normal random variable is considered here for illustration. Other types of random variables, like uniform, Bernoulli, binomial, Chi-squared and Nakagami-m, are illustrated in the next section.

Note: If you are inclined towards programming in Python, visit this article

Step 1: Create the random variable

A survey of commonly used fundamental methods to generate a given random variable is given in [1]. For this demonstration, we will consider a normal random variable with the following parameters: μ – mean and σ – standard deviation. First, generate a vector of normally distributed random numbers of sufficient length (say 100000) with some valid values for μ and σ. There is more than one way to generate this. Some of them are given below.


● Method 1: Using the in-built random function (requires statistics toolbox)

mu=0;sigma=1;%mean=0,deviation=1
L=100000; %length of the random vector
R = random('Normal',mu,sigma,L,1);%method 1

●  Method 2: Using the randn function that generates normally distributed random numbers having μ = 0 and σ = 1

mu=0;sigma=1;%mean=0,deviation=1
L=100000; %length of the random vector
R = randn(L,1)*sigma + mu; %method 2

● Method 3: Box-Muller transformation [2] method using rand function that generates uniformly distributed random numbers

 mu=0;sigma=1;%mean=0,deviation=1
L=100000; %length of the random vector
U1 = rand(L,1); %uniformly distributed random numbers U(0,1)
U2 = rand(L,1); %uniformly distributed random numbers U(0,1)
Z = sqrt(-2*log(U1)).*cos(2*pi*U2);%Standard Normal distribution
R = Z*sigma+mu;%Normal distribution with mean and sigma

Step 2: Plot the estimated histogram

Typically, if we have a vector of random numbers that is drawn from a distribution, we can estimate the PDF using the histogram tool.  Matlab supports two in-built functions to compute and plot histograms:

● hist – introduced before R2006a
● histogram – introduced in R2014b

Which one to use? Matlab's help page points out that the hist function is not recommended for several reasons; the issue of inconsistency is one among them. The histogram function is the recommended function to use.

Estimate and plot the normalized histogram using the recommended histogram function, and, for verification, overlay the theoretical PDF for the intended distribution. When using the histogram function to plot the estimated PDF from the generated random data, use the 'pdf' option for the 'Normalization' parameter. Do not use the 'probability' option for 'Normalization', as it will not match the theoretical PDF curve.

histogram(R,'Normalization','pdf'); %plot estimated pdf from the generated data

X = -4:0.1:4; %range of x to compute the theoretical pdf
fx_theory = pdf('Normal',X,mu,sigma); %theoretical normal probability density
hold on; plot(X,fx_theory,'r'); %plot computed theoretical PDF
title('Probability Density Function'); xlabel('values - x'); ylabel('pdf - f(x)'); axis tight;
legend('simulated','theory');
Estimated PDF (using histogram function) and the theoretical PDF

However, if your Matlab version was released before R2014b (i.e., you do not have access to the histogram function), use the hist function to get the histogram frequency counts (f) and the bin centers (x). Using these data, normalize the frequency counts using the overall area under the histogram. Plot this normalized histogram and overlay the theoretical PDF for the chosen parameters.

%For those who don't have access to 'histogram' function
%get un-normalized values from hist function with same number of bins as histogram function
numBins=50; %choose appropriately
[f,x]=hist(R,numBins); %use hist function and get unnormalized values
figure; plot(x,f/trapz(x,f),'b-*');%plot normalized histogram from the generated data

X = -4:0.1:4; %range of x to compute the theoretical pdf
fx_theory =   pdf('Normal',X,mu,sigma); %theoretical normal probability density
hold on; plot(X,fx_theory,'r'); %plot computed theoretical PDF
title('Probability Density Function'); xlabel('values - x'); ylabel('pdf - f(x)'); axis tight;
legend('simulated','theory');

Step 3: Theoretical PDF:

The code snippets given above already include the command to plot the theoretical PDF by using the pdf function in Matlab. If you do not have access to this function, you could use the following equation for computing the theoretical PDF

f(x) = (1/√(2πσ²)) · exp(−(x − μ)²/(2σ²))

The code snippet for that purpose is given next.

X = -4:0.1:4; %range of x to compute the theoretical pdf
fx_theory = 1/sqrt(2*pi*sigma^2)*exp(-0.5*(X-mu).^2./sigma^2);
plot(X,fx_theory,'k'); %plot computed theoretical PDF

Note: The functions random and pdf require the Statistics Toolbox.


References:

[1] John Mount, ‘Six Fundamental Methods to Generate a Random Variable’, January 20, 2012.↗
[2] Thomas, D. B., Luk. W., Leong, P. H. W., and Villasenor, J. D. 2007. Gaussian random number generators. ACM Comput. Surv. 39, 4, Article 11 (October 2007), 38 pages DOI = 10.1145/1287620.1287622 http://doi.acm.org/10.1145/1287620.1287622.↗


Simulate additive white Gaussian noise (AWGN) channel

In this article, the relationship between SNR-per-bit (Eb/N0) and SNR-per-symbol (Es/N0) is defined with respect to M-ary signaling schemes. Then the complex baseband model for an AWGN channel is discussed, followed by the theoretical error rates of various modulations over the additive white Gaussian noise (AWGN) channel. Finally, the complex baseband models for digital modulators and detectors developed in the previous chapter of this book are incorporated to build a complete communication system model.

If you would like to know more about the simulation and analysis of white noise, I urge you to read this article: White noise: Simulation & Analysis using Matlab.

Signal to noise ratio (SNR) definitions

Assuming a channel of bandwidth B, received signal power Pr and noise power spectral density (PSD) N0/2, the signal to noise ratio (SNR) is given by

SNR = Pr / (N0 B)

Let a signal’s energy-per-bit be denoted as Eb and its energy-per-symbol as Es; then γb = Eb/N0 and γs = Es/N0 are the SNR-per-bit and the SNR-per-symbol, respectively.

For an uncoded M-ary signaling scheme with k = log2(M) bits per symbol, the signal energy per modulated symbol is given by

Es = k Eb = log2(M) Eb

The SNR per symbol is given by

γs = Es/N0 = k γb = log2(M) γb

AWGN channel model

In order to simulate a specific SNR point in performance simulations, random noise of a specific strength must be added to the modulated signal from the transmitter. The strength of the generated noise depends on the desired SNR level, which usually is an input in such simulations. In practice, SNRs are specified in dB. Given a specific SNR point for simulation, let's see how we can simulate an AWGN channel that adds the correct level of white noise to the transmitted symbols.

Figure 1: Simplified simulation model for awgn channel

Consider the AWGN channel model given in Figure 1. Given a specific SNR point to simulate, we wish to generate a white Gaussian noise vector of appropriate strength and add it to the incoming signal. The method described can be applied for both waveform simulations and complex baseband simulations. In the following text, the term SNR (γ) refers to γb = Eb/N0 when the modulation is of binary type (example: BPSK). For multilevel modulations such as QPSK and MQAM, the term SNR refers to γs = Es/N0.

(1) Assume s is a vector that represents the transmitted signal. We wish to generate a vector r that represents the signal after passing through the AWGN channel. The amount of noise added by the AWGN channel is controlled by the given SNR γ.

(2) For the waveform simulation model, let the given oversampling ratio be denoted as L. On the other hand, if you are using the complex baseband models, set L = 1.

(3) Let N denote the length of the vector s. The signal power for the vector s can be measured as

P = L · Σ |s[n]|² / N  (sum over the N samples of s)

(4) The required power spectral density of the noise vector n is computed as

N0 = P / γ

(5) Assuming a complex IQ plane for all the digital modulations, the required noise variance (noise power) for generating the Gaussian random noise is given by

σ² = N0 / 2

(6) Generate the noise vector n drawn from a normal distribution with the mean set to zero and the standard deviation computed from the equation given above.

(7) Finally, add the generated noise vector n to the signal s

r = s + n

Matlab code

The following custom function written in Matlab, can be used for adding AWGN noise to an incoming signal. It can be used in waveform simulation as well as complex baseband simulation models.

%author - Mathuranathan Viswanathan (gaussianwaves.com)
%This code is part of the books: Wireless communication systems using Matlab & Digital modulations using Matlab.

function [r,n,N0] = add_awgn_noise(s,SNRdB,L)
%Function to add AWGN to the given signal
%[r,n,N0]= add_awgn_noise(s,SNRdB) adds AWGN noise vector to signal
%'s' to generate a resulting signal vector 'r' of specified SNR
%in dB. It also returns the noise vector 'n' that is added to the
%signal 's' and the spectral density N0 of noise added
%
%[r,n,N0]= add_awgn_noise(s,SNRdB,L) adds AWGN noise vector to
%signal 's' to generate a resulting signal vector 'r' of specified
%SNR in dB. The parameter 'L' specifies the oversampling ratio used
%in the system (for waveform simulation). It also returns the noise
%vector 'n' that is added to the signal 's' and the spectral
%density N0 of noise added
 s_temp=s;
 if iscolumn(s), s=s.'; end; %to return the result in same dim as 's'
 gamma = 10^(SNRdB/10); %SNR to linear scale
 
 if nargin==2, L=1; end %if third argument is not given, set it to 1
 
 if isvector(s),
  P=L*sum(abs(s).^2)/length(s);%Actual power in the vector
 else %for multi-dimensional signals like MFSK
  P=L*sum(sum(abs(s).^2))/length(s); %if s is a matrix [MxN]
 end
 
 N0=P/gamma; %Find the noise spectral density
 if(isreal(s)),
  n = sqrt(N0/2)*randn(size(s));%computed noise
 else
  n = sqrt(N0/2)*(randn(size(s))+1i*randn(size(s)));%computed noise
 end
 
 r = s + n; %received signal
 
 if iscolumn(s_temp), r=r.'; end;%return r in original format as s
end

Python code

The following custom function written in Python 3, can be used for adding AWGN noise to an incoming signal. It can be used in waveform simulation as well as complex baseband simulation models.

# author - Mathuranathan Viswanathan (gaussianwaves.com)
# This code is part of the book Digital Modulations using Python

from numpy import sum,isrealobj,sqrt
from numpy.random import standard_normal

def awgn(s,SNRdB,L=1):
    """
    AWGN channel
    Add AWGN noise to the input signal. The function adds an AWGN noise vector
    to the signal 's' to generate a resulting signal vector 'r' of specified SNR in dB.
    Parameters:
        s : input/transmitted signal vector
        SNRdB : desired signal to noise ratio (expressed in dB) for the received signal
        L : oversampling factor (applicable for waveform simulation), default L = 1
    Returns:
        r : received signal vector (r = s + n)
    """
    gamma = 10**(SNRdB/10) # SNR to linear scale
    if s.ndim==1: # if s is a single dimensional vector
        P=L*sum(abs(s)**2)/len(s) # actual power in the vector
    else: # multi-dimensional signals like MFSK
        P=L*sum(sum(abs(s)**2))/len(s) # if s is a matrix [MxN]
    N0=P/gamma # find the noise spectral density
    if isrealobj(s): # check if input is real/complex object type
        n = sqrt(N0/2)*standard_normal(s.shape) # computed noise
    else:
        n = sqrt(N0/2)*(standard_normal(s.shape)+1j*standard_normal(s.shape))
    r = s + n # received signal
    return r
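A short usage sketch (not from the book) to sanity-check the function with unit-energy QPSK symbols on the complex baseband; the symbol mapping below is just an illustrative assumption:

# Usage sketch (illustrative): pass random QPSK symbols through the AWGN channel
# and verify that the measured SNR is close to the requested value.
import numpy as np

bits = np.random.randint(0, 2, (100000, 2))
s = ((2*bits[:, 0] - 1) + 1j*(2*bits[:, 1] - 1))/np.sqrt(2)   # unit-energy QPSK symbols
r = awgn(s, SNRdB=10)                  # received signal at 10 dB SNR (complex baseband, L=1)
n = r - s                              # the noise that was added
print(10*np.log10(np.mean(np.abs(s)**2)/np.mean(np.abs(n)**2)))  # expect roughly 10 dB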

Theoretical symbol error rates for digital modulations in AWGN channel

Denoting the symbol error rate (SER) as Ps, the SNR-per-bit as γb and the SNR-per-symbol as γs, the symbol error rates for various modulation schemes over the AWGN channel are listed in Table 1 (refer [1]).

Table 1: Theoretical symbol error rate for various modulations in AWGN channel

The theoretical symbol error rates are coded as a reusable function. In this implementation, the erfc function is used instead of the Q function shown in Table 1. The following equation describes the relationship between the erfc function and the Q function:

Q(x) = 0.5 · erfc(x/√2)
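For instance (an illustrative snippet, not the book's function), the Q function can be evaluated through scipy's erfc and used to compute a theoretical error-rate curve such as that of BPSK:

# Q function via erfc (illustrative): Q(x) = 0.5*erfc(x/sqrt(2))
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5*erfc(x/np.sqrt(2))

gamma_b = 10**(np.arange(0, 11)/10)        # Eb/N0 from 0 to 10 dB, in linear scale
Ps_bpsk = qfunc(np.sqrt(2*gamma_b))        # theoretical BPSK symbol error rate over AWGN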

Unified simulation model for performance simulation

In the previous chapter of the books, the code implementations of the complex baseband models for various digital modulators and demodulators are given. Using these models, we can create a unified simulation code for simulating the performance of various modulation techniques over the AWGN channel.

The complete simulation model for performance simulation over the AWGN channel is given in Figure 2. The figure is illustrated for a coherent communication system model (applicable for MPSK/MQAM/MPAM modulations).

Figure 2: Complete simulation model for a communication system with AWGN channel

The Matlab code implementing the aforementioned simulation model is given in the books. Here, a unified approach is employed to simulate the performance of any of the given modulation techniques – MPSK, MQAM, MPAM or MFSK (the MFSK simulation technique is available in the following books: Digital Modulations using Python and Digital Modulations using Matlab).

This article is part of the following books
Digital Modulations using Matlab : Build Simulation Models from Scratch, ISBN: 978-1521493885
Digital Modulations using Python ISBN: 978-1712321638
Wireless communication systems in Matlab ISBN: 979-8648350779
All books available in ebook (PDF) and Paperback formats

The simulation code will automatically choose the selected modulation type, perform the Monte Carlo simulation, compute the symbol error rates and plot them against the theoretical symbol error rates. The simulated performance results obtained for MQAM and MPSK modulations are shown in Figure 3 and Figure 4.

Figure 3: Simulated symbol error rate performance of M-QAM modulation over AWGN channel


References

[1] Andrea Goldsmith, Wireless Communications, Cambridge University Press, first edition, August 8, 2005.↗


Random Variables, CDF and PDF

Random Variable:

In a “coin-flipping” experiment, the outcome is not known prior to the experiment; that is, we cannot predict it with certainty (it is non-deterministic/stochastic). But we know all the possible outcomes – Head or Tail. Assign real numbers to all the possible events (the set of all possible outcomes is called the “sample space”), say “0” to “Head” and “1” to “Tail”, and associate a variable “X” that could take these two values. This variable “X” is called a random variable, since it can randomly take any value, ‘0’ or ‘1’, before performing the actual experiment.

Obviously, we do not want to wait until the coin-flipping experiment is done, as the outcome would then lose its significance; instead, we want to associate some probability with each possible event. In the coin-flipping experiment, all outcomes are equally probable (given that the coin is fair and unbiased). This means that the probability of getting Head (our random variable X = 0), as well as that of getting Tail (X = 1), is 0.5 (i.e., a 50-50 chance of getting Head/Tail).

This can be written as

P(X = 0) = P(X = 1) = 0.5

Cumulative Distribution Function:

Mathematically, a complete description of a random variable is given by the “Cumulative Distribution Function” – FX(x). Here the bold faced “X” is a random variable and “x” is a dummy variable which is a place holder for all possible outcomes (“0” and “1” in the above mentioned coin-flipping experiment). The Cumulative Distribution Function is defined as

FX(x) = P(X ≤ x)

If we plot the CDF for our coin-flipping experiment, it looks like a staircase: it is 0 for x < 0, jumps to 0.5 at x = 0 and to 1 at x = 1.
The example provided above is of a discrete nature, as the values taken by the random variable are discrete (either “0” or “1”); therefore the random variable is called a Discrete Random Variable.

If the values taken by the random variables are of continuous nature (Example: Measurement of temperature), then the random variable is called Continuous Random Variable and the corresponding cumulative distribution function will be smoother without discontinuities.

Probability Distribution function :

Consider an experiment in which the probabilities of the events are as follows: each of the numbers 1, 2, 3 and 4 occurs with its own specified probability. It will be more convenient for us if we have an equation for this experiment which gives these probabilities as a function of the outcome. Such an equation (equivalently, a function) is called the probability distribution function.

Probability Density function (PDF) and Probability Mass Function(PMF):

It is more common to deal with the Probability Density Function (PDF) / Probability Mass Function (PMF) than with the CDF.

The PDF (defined for Continuous Random Variables) is given by taking the first derivative of the CDF:

fX(x) = d FX(x) / dx

For a discrete random variable that takes on discrete values, it is common to define the Probability Mass Function (PMF).

The previous example was simple. The problem becomes slightly more complex if we are asked to find the probability of getting a value less than or equal to 3. Now the straightforward approach is to add the probabilities of getting the values 1, 2 and 3 individually. Equivalently, this can be modeled through the distribution function by accumulating (integrating, in the continuous case) the probability distribution function with limits 1 to 3.

Based on the shape of the probability density function, distributions fall into different categories: binomial, uniform, Gaussian, Chi-square, Rayleigh, Rician, etc. Out of these distributions, you will encounter the Gaussian distribution (Gaussian random variable) very often in digital communications.

Mean:

The mean of a random variable is defined as the weighted average of all possible values the random variable can take. Probability of each outcome is used to weight each value when calculating the mean. Mean is also called expectation (E[X])

For a continuous random variable X with probability density function fX(x),

E[X] = μX = ∫ x fX(x) dx  (integral taken over all x)

For a discrete random variable X, the mean is calculated as the weighted average of all possible values (xi), weighted with the individual probabilities (pi):

E[X] = μX = Σ xi pi

Variance :

Variance measures the spread of a distribution. For a continuous random variable X, the variance is defined as

Var(X) = σX² = E[(X − μX)²] = ∫ (x − μX)² fX(x) dx

For the discrete case, the variance is defined as

Var(X) = σX² = Σ (xi − μX)² pi

Standard deviation (σX) is defined as the square root of the variance:

σX = √(σX²)

Properties of Mean and Variance:

For a constant “c”, the following properties hold true for the mean:

E[c] = c,  E[X + c] = E[X] + c,  E[cX] = c E[X]

For a constant “c”, the following properties hold true for the variance:

Var(c) = 0,  Var(X + c) = Var(X),  Var(cX) = c² Var(X)
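As a quick empirical sanity check of these properties (an illustrative snippet, not part of the original article):

# Illustrative check of the mean/variance properties with a constant c.
import numpy as np

X = np.random.randn(100000)*3 + 5                              # some random variable
c = 2.5
print(np.isclose(np.mean(c*X), c*np.mean(X)))                  # E[cX] = c E[X]
print(np.isclose(np.var(X + c), np.var(X)))                    # Var(X + c) = Var(X)
print(np.isclose(np.var(c*X), c**2 * np.var(X)))               # Var(cX) = c^2 Var(X)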

The PDF and CDF define a random variable completely. For example, if two random variables X and Y have the same PDF, then they will have the same CDF and therefore their mean and variance will be the same.
On the other hand, the mean and variance describe a random variable only partially. If two random variables X and Y have the same mean and variance, they may or may not have the same PDF or CDF.

Gaussian Distribution :

The Gaussian PDF looks like a bell. It is used most widely in communication engineering. For example, channels are often assumed to be Additive White Gaussian Noise (AWGN) channels. What is the reason behind this? Gaussian noise gives the smallest channel capacity for a fixed noise power, which means it results in the worst channel impairment. So the coding designs done under this most adverse environment will give superior and satisfactory performance in real environments as well. For more information on “Gaussianity”, refer to [1].

The PDF of the Gaussian distribution (also called the Normal distribution) is completely characterized by its mean (μ) and variance (σ²):

fX(x) = (1/√(2πσ²)) exp(−(x − μ)²/(2σ²))

Since the PDF is defined as the first derivative of the CDF, reverse engineering tells us that the CDF can be obtained by taking the integral of the PDF. Thus, the CDF of the above given function is

FX(x) = ∫ fX(t) dt (from −∞ to x) = (1/2)[1 + erf((x − μ)/(σ√2))]
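As a quick numerical cross-check (illustrative, not part of the original article), integrating the Gaussian PDF reproduces the erf-based CDF:

# Illustrative check: numerically integrate the Gaussian PDF and compare with
# the closed-form CDF expressed through the error function.
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

mu, sigma = 0.0, 1.0
pdf = lambda t: np.exp(-(t - mu)**2/(2*sigma**2))/np.sqrt(2*np.pi*sigma**2)
cdf_closed_form = lambda x: 0.5*(1 + erf((x - mu)/(sigma*np.sqrt(2))))

x = 1.0
cdf_numeric, _ = quad(pdf, -np.inf, x)        # integrate PDF from -inf to x
print(cdf_numeric, cdf_closed_form(x))        # both approximately 0.8413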

Equations for PDF and CDF for certain distributions are consolidated below

Probability Distribution | Probability Density Function (PDF) | Cumulative Distribution Function (CDF)
Gaussian/Normal Distribution | fX(x) = (1/√(2πσ²)) exp(−(x − μ)²/(2σ²)) | FX(x) = (1/2)[1 + erf((x − μ)/(σ√2))]

Reference :

[1] S.Pasupathy, “Glories of Gaussianity”, IEEE Communications magazine, Aug 1989 – 1, pp 38.
