White Noise : Simulation and Analysis using Matlab

Definition

A random process (or signal) with a constant power spectral density (PSD) is called a white noise process.

Power Spectral Density

The power spectral density (PSD) function shows how much power is contained in each spectral component of a signal. For example, for a sine wave of fixed frequency, the PSD plot will contain only one spectral component at that frequency. The PSD is an even function of frequency, so the frequency components are mirrored across the Y-axis when plotted. Thus, for a sine wave of fixed frequency, the double-sided PSD plot has two components: one at the positive frequency and another at the negative frequency of the sine wave. (Know how to plot PSD/FFT in Python & in Matlab)
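This behaviour is easy to check numerically. Below is a minimal sketch in Python/NumPy (the article links Python equivalents throughout; the choices fs = 100 Hz, f = 10 Hz and N = 1000 samples are arbitrary): the double-sided PSD of a fixed-frequency sine wave shows exactly two dominant components, at +f and -f.

```python
import numpy as np

fs = 100.0   # sampling frequency (Hz), arbitrary choice
f = 10.0     # sine wave frequency (Hz), chosen to fall on an FFT bin
N = 1000     # number of samples

t = np.arange(N) / fs
x = np.sin(2 * np.pi * f * t)

# Double-sided PSD estimate via the FFT, zero frequency at the center
X = np.fft.fftshift(np.fft.fft(x)) / N
psd = np.abs(X) ** 2
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=1 / fs))

# The two dominant spectral components sit at -f and +f
peaks = freqs[np.argsort(psd)[-2:]]
print(np.sort(peaks))  # one component near -10 Hz, one near +10 Hz
```

Since f lands exactly on an FFT bin here, there is no spectral leakage and all other bins are numerically zero.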

Gaussian and Uniform White Noise:

A white noise signal (process) is constituted by a set of independent and identically distributed (i.i.d) random variables. In the discrete sense, a white noise signal is a series of samples that are independent and generated from the same probability distribution. For example, you can generate a white noise signal using a random number generator in which all the samples follow a given Gaussian distribution; this is called White Gaussian Noise (WGN) or Gaussian White Noise. Similarly, a white noise signal generated from a uniform distribution is called Uniform White Noise.

Gaussian noise and uniform noise are frequently used in system modelling. In modelling/simulation, white noise can be generated using an appropriate random number generator. White Gaussian Noise can be generated using the randn function in Matlab, which generates random numbers that follow a Gaussian distribution. Similarly, the rand function can be used to generate Uniform White Noise in Matlab, with samples that follow a uniform distribution. Each call to a random number generator produces a series of random numbers from the given distribution. Let's take the example of generating a White Gaussian Noise sequence of length 10 using the randn function in Matlab, with zero mean and standard deviation \(\sigma=1\).

>> mu=0;sigma=1;
>> noise= sigma *randn(1,10)+mu
noise =   -1.5121    0.7321   -0.1621    0.4651    1.4284    1.0955   -0.5586    1.4362   -0.8026    0.0949

What is i.i.d ?

This simply generates 10 random numbers from the standard normal distribution. A white noise process is a random process composed of several random variables following the same probability density function (PDF). The 10 random numbers above are generated from the same PDF (the standard normal distribution); this is the "identically distributed" condition. The individual samples above are also "independent" of each other. Furthermore, each sample can be viewed as a realization of one random variable. In effect, we have generated a random process composed of realizations of 10 random variables. Thus, the process above is constituted from "independent and identically distributed" (i.i.d) random variables.

Strictly and weakly defined white noise:

Since the white noise process is constructed from i.i.d random variables/samples, all the samples follow the same underlying probability distribution. Thus, the joint probability distribution of the process does not change with any shift in time; such a process is called a stationary process. Hence, white noise is a stationary process. Just as a stationary process can be classified as Strict Sense Stationary (SSS) or Wide Sense Stationary (WSS), we can have white noise that is SSS and white noise that is WSS. Correspondingly, they can be called strictly defined and weakly defined white noise signals.

What’s with Covariance Function/Matrix ?

A white noise signal defined in the weak sense, denoted by \(x(t)\), is the more practical condition. Here, the samples are statistically uncorrelated and identically distributed with variance equal to \(\sigma^2\). This condition is specified using the covariance function

\[COV \left(x_i, x_j \right) = \begin{cases} \sigma^2, & \quad i = j \\ 0, & \quad i \neq j \end{cases}\]

Why do we need a covariance function? Because we are dealing with a random process composed of \(n\) random variables (10 variables in the modelling example above). Such a process is viewed as a multivariate random vector or multivariate random variable.

For multivariate random variables, the covariance function specifies how each of the \(n\) variables in the given random process behaves with respect to the others. The covariance function generalizes the notion of variance to multiple dimensions.

The above equation when represented in the matrix form gives the covariance matrix of the white noise random process. Since the random variables in this process are statistically uncorrelated, the covariance function contains values only along the diagonal.

\[C_{xx} = \begin{bmatrix} \sigma^2 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \sigma^2\end{bmatrix} = \sigma^2 \mathbf{I} \]

The matrix above indicates that only the auto-correlation function exists for each random variable, while the cross-correlation values are zero (the samples/variables are statistically uncorrelated with respect to each other). The diagonal elements are equal to the variance and all other elements in the matrix are zero. The ensemble auto-correlation function of the weakly defined white noise is given by

\[R_{xx}(\tau) = E \left[ x(t) x^*(t-\tau)\right] = \sigma^2 \delta (\tau)\]

This indicates that the auto-correlation function of a weakly defined white noise process is zero everywhere except at lag \(\tau=0\).
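The diagonal structure of \(C_{xx}\) can be verified numerically. The following is a quick sketch (in Python/NumPy; the sample sizes are arbitrary) that estimates the covariance matrix from many realizations of a short white Gaussian noise vector and checks that it is close to \(\sigma^2 \mathbf{I}\).

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 2.0
L, N = 200000, 5   # L realizations of an N-sample white noise vector

# Each row is one realization of the N-variate white noise process
X = sigma * rng.standard_normal((L, N))

# Sample covariance matrix across realizations (N x N)
C = np.cov(X, rowvar=False)

# Diagonal entries approach sigma^2 = 4; off-diagonals approach 0,
# i.e. C approaches sigma^2 * I as L grows
print(np.round(C, 1))
```

The off-diagonal entries shrink toward zero roughly as \(1/\sqrt{L}\), which is why a large number of realizations is used here.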

Related topic: Constructing the auto-correlation matrix in Matlab

Frequency Domain Characteristics:

The Wiener-Khintchine theorem states that for a Wide Sense Stationary (WSS) process, the power spectral density \(S_{xx}(f)\) of the random process can be obtained as the Fourier transform of its auto-correlation function. In the continuous time domain, this is represented as

\[S_{xx}(f) = F \left[R_{xx}(\tau) \right] = \int_{-\infty}^{\infty} R_{xx} (\tau) e ^{- j 2 \pi f \tau} d \tau\]

For the weakly defined white noise process, the mean is a constant and the covariance does not vary with respect to time. This is a sufficient condition for a WSS process, so we can apply the Wiener-Khintchine theorem. Therefore, the power spectral density of the weakly defined white noise process is constant (flat) across the entire frequency spectrum (Figure 1). The value of the constant is equal to the variance (power) of the noise signal.

\[S_{xx}(f) = F \left[R_{xx}(\tau) \right] = \int_{-\infty}^{\infty} \sigma^2 \delta (\tau) e ^{- j 2 \pi f \tau} d \tau = \sigma^2 \int_{-\infty}^{\infty} \delta (\tau) e ^{- j 2 \pi f \tau} d \tau = \sigma^2\]
Figure 1: Wiener-Khintchine theorem illustrated

Testing the characteristics of White Gaussian Noise in Matlab:

Generate a Gaussian white noise signal of length \(L=100,000\) using the randn function in Matlab and plot it. Let's assume that the pdf is Gaussian with mean \(\mu=0\) and standard deviation \(\sigma=2\); thus the variance of the Gaussian pdf is \(\sigma^2=4\). The theoretical PDF of a Gaussian random variable is given by

\[f_X(x) = \frac{1}{\sigma \sqrt{2 \pi}} \cdot \exp \left[ - \frac{\left( x - \mu\right)^2}{2 \sigma^2} \right] \]


clear all; clc; close all;
L=100000; %Sample length for the random signal
mu=0;
sigma=2;
X=sigma*randn(L,1)+mu;

figure();
subplot(2,1,1)
plot(X);
title(['White noise : \mu_x=',num2str(mu),' \sigma^2=',num2str(sigma^2)])
xlabel('Samples')
ylabel('Sample Values')
grid on;
Figure 2: Simulated noise samples

Plot the histogram of the generated noise signal and verify the histogram by plotting against the theoretical pdf of the Gaussian random variable.

If you are inclined towards programming in Python, see this article on plotting histograms using the Matplotlib package.

subplot(2,1,2)
n=100; %number of histogram bins
[f,x]=hist(X,n);
bar(x,f/trapz(x,f)); hold on;
%Theoretical PDF of Gaussian Random Variable
g=(1/(sqrt(2*pi)*sigma))*exp(-((x-mu).^2)/(2*sigma^2));
plot(x,g);hold off; grid on;
title('Theoretical PDF and Simulated Histogram of White Gaussian Noise');
legend('Histogram','Theoretical PDF');
xlabel('Bins');
ylabel('PDF f_x(x)');
Figure 3: Plot of simulated & theoretical PDF for Gaussian RV

Compute the auto-correlation function of the white noise. The computed auto-correlation function has to be scaled properly. If the xcorr function (built into Matlab) is used to compute the auto-correlation, pass the 'biased' argument to the function to scale it properly.

figure();
Rxx=1/L*conv(flipud(X),X);
lags=(-L+1):1:(L-1);

%Alternative method
%[Rxx,lags] =xcorr(X,'biased'); 
%The argument 'biased' is used for proper scaling by 1/L
%Normalize auto-correlation with sample length for proper scaling

plot(lags,Rxx); 
title('Auto-correlation Function of white noise');
xlabel('Lags')
ylabel('Correlation')
grid on;
Figure 4: Autocorrelation function of generated noise

Simulating the PSD:

Simulating the power spectral density (PSD) of white noise is a little tricky. There are two issues here: 1) The generated samples are of finite length. This is equivalent to truncating an infinite series of random samples, which implies that the lags are defined over a fixed range (FFT and spectral leakage - an additional resource on this topic can be found here). 2) The random number generators used in simulations are pseudo-random generators. Due to these two reasons, you will not get a flat PSD when you apply the Fourier Transform over the generated auto-correlation values. The fluctuation of the PSD can be minimized by generating a sufficiently long random signal and averaging the PSD over several realizations of the random signal.
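Why averaging helps can be illustrated with a short sketch (shown here in Python/NumPy; the parameter choices are arbitrary): a single periodogram of white Gaussian noise and the periodogram averaged over many realizations both hover around the true level \(\sigma^2\), but the averaged one is far flatter.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, N, L = 2.0, 1024, 1000   # noise std-dev, samples, realizations

# L independent realizations of N-sample white Gaussian noise
z = sigma * rng.standard_normal((L, N))

# Periodogram of each realization, scaled so the mean level is sigma^2
Z = np.fft.fft(z, axis=1) / np.sqrt(N)
P = np.abs(Z) ** 2

single = P[0]              # one raw, "wavering" periodogram
averaged = P.mean(axis=0)  # periodogram averaged over L realizations

# Averaging shrinks the fluctuation around sigma^2 = 4 by roughly sqrt(L)
print(single.std(), averaged.std())
```

The standard deviation of the raw periodogram is on the order of \(\sigma^2\) itself, while the averaged estimate fluctuates by roughly \(\sigma^2/\sqrt{L}\).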

Simulating Gaussian White Noise as a Multivariate Gaussian Random Vector:

To verify the power spectral density of the white noise, we will use the approach of envisaging the noise as a composite of \(N\) Gaussian random variables. We want to average the PSD over \(L\) such realizations. Since there are \(N\) Gaussian random variables (\(N\) individual samples) per realization, the covariance matrix \( C_{xx}\) will be of dimension \(N \times N\). The vector of mean for this multivariate case will be of dimension \(1 \times N\).

Cholesky decomposition of the covariance matrix gives the equivalent standard deviation for the multivariate case; it can be viewed as a matrix square-root operation. Matlab's randn function is used here to generate the multivariate Gaussian random process with the given mean vector and covariance matrix.

%Verifying the constant PSD of White Gaussian Noise Process
%with arbitrary mean and standard deviation sigma

mu=0; %Mean of each realization of Noise Process
sigma=2; %Sigma of each realization of Noise Process

L = 1000; %Number of Random Signal realizations to average
N = 1024; %Sample length for each realization set as power of 2 for FFT

%Generating the Random Process - White Gaussian Noise process
MU=mu*ones(1,N); %Vector of mean for all realizations
Cxx=(sigma^2)*diag(ones(N,1)); %Covariance Matrix for the Random Process
R = chol(Cxx); %Cholesky of Covariance Matrix
%Generating a Multivariate Gaussian Distribution with given mean vector and
%Covariance Matrix Cxx
z = repmat(MU,L,1) + randn(L,N)*R;

Compute PSD of the above generated multi-dimensional process and average it to get a smooth plot.

%By default, FFT is done across each column - Normal command fft(z)
%Finding the FFT of the Multivariate Distribution across each row
%Command - fft(z,[],2)
Z = 1/sqrt(N)*fft(z,[],2); %Scaling by sqrt(N);
Pzavg = mean(Z.*conj(Z));%Computing the mean power from fft

normFreq=[-N/2:N/2-1]/N;
Pzavg=fftshift(Pzavg); %Shift zero-frequency component to center of spectrum
plot(normFreq,10*log10(Pzavg),'r');
axis([-0.5 0.5 0 10]); grid on;
ylabel('Power Spectral Density (dB/Hz)');
xlabel('Normalized Frequency');
title('Power spectral density of white noise');
Figure 5: Power spectral density of generated noise

The PSD plot of the generated noise shows almost constant power at all frequencies. In other words, for a white noise signal, the PSD is constant (flat) across all frequencies (\(- \infty\) to \(+\infty\)). The y-axis in the above plot is expressed in dB/Hz. We can see from the plot that the constant power level is \(10 \log_{10}(\sigma^2) = 10 \log_{10}(4) \approx 6\; dB\).

Application

In channel modeling, we often come across additive white Gaussian noise (AWGN) channel. To know more about the channel model and its simulation, continue reading this article: Simulate AWGN channel in Matlab & Python.




33 thoughts on “White Noise : Simulation and Analysis using Matlab”

  1. Dear sir, can you guide me that how to add non non uniform noise (other than AWGN) for early fault detection process in a dynamic system

  2. Sir, can you guide me, how can i join multiple plots in a series of x axis with different value in y axis, and x axis must be in an increasing order in matlab

  3. Hi, I’m trying to figure out how to convert the last example to an actual noise density with units of V/sqrt(Hz), assuming your random variable is volts sampled at 1kHz (for sake of discussion), to tie together the idea that the standard deviation = RMS = the integration of noise density (ND) across the bandwidth of interest (BW) as is commonly done in opamp or sensor noise analysis. So I’d like to be able to say RMS(volts) = ND * sqrt(BW). I think the value you show (6dBV = 4.0 V), actually has V^2 units, and you haven’t divided by any spectral bandwidth, but since you’re normalizing the BW to 1.0Hz (from -0.5Hz to 0.5Hz), no division is actually necessary in this particular case. So I *think* to convert to the type of noise density I’m looking for, one must divide the mean of the output array by something, then take the square root, to arrive at a single “noise density” figure which is valid for the whole spectrum of interest. For the example you show, I cheat and directly calculate the noise density ND as 2.0/sqrt(500 Hz) = 0.08944 V/sqrt(Hz). Thus 0.08944 * sqrt(500) = 2.0. But this is cheating because I know the answer I want, so I’m not exactly sure how to get there from the PSD array (which is essentially the RMS^2 times some factor). If this makes any sense, can you please shed some light on this? Thanks very much for the great post.

  4. Sir,
    When I give cov(A), where A is randn(m,n), it is supposed to make a diagonal matrix, but it is not happening. Can you please explain why?

    • If you want to test the covariance of white noise sequence, you need to take two realizations of the noise process and find the covariance matrix.

      A1 = randn(1,10000); %realization 1 of zero mean, unit variance white noise process
      A2 = randn(1,10000); %realization 2 of zero mean, unit variance white noise process

      cov(A1,A2)

      ans =
      0.9909 , 0.0045
      -0.0045 , 0.9999

      Diagonal elements will approximate to unity as the length of the sequences are increased further. This indicates that the variance of the underlying process is close to unity

      Equivalently, the two separate realizations can be stuffed as two separate columns of the same variable as
      A = randn(1000,2); %two separate realizations in each column
      cov(A)
      ans =
      0.9902 , -0.0233
      -0.0233 , 1.0857

      Extending it to three realizations
      A = randn(1000,3)
      cov(A)
      ans=
      0.9841 , 0.0000 , -0.0455
      0.0000 , 0.9933 , 0.0523
      -0.0455 , 0.0523 , 0.9862

  5. ok,Tks.Could you please tell me what is the definition of the power in frequency domain of white gauss noise?

    • The power spectrum of white noise is flat across all frequencies extending from -infinity to +infinity Hertz, as illustrated in the noise power spectral density graph on the right (First Figure).

      • Tks,but what’s the expession of the power in frequency domain?Is this as follows? where,N is the FFT number,X(n) is the Fourier transform of x(n).

        • yes, that is correct. In Matlab it can be computed as follows

          Z = 1/sqrt(N)*fft(z,[],2); %Scaling by sqrt(N);
          Pzavg = mean(Z.*conj(Z));%Computing the mean power from fft

  6. @humourmind:disqus can you please tell me how to plot power spectral density bar graph? Using FFT gives PSD line graph. In my problem I have to calculate PSD of a signal from 0 to 9 Hz having .25 Hz bandwidth and then plot its bar graph. I would really appreciate if you could provide me matlab code for the same. Thanks.

  7. hello sir ,is there any book or online material which focuses on how statistical properties of samples change when gaussian white noise is added to it for instance white noise has weighted identity covariance mat but when it is added to data and if now we find covariance mat of distorted data what form should it take

  8. Hello Sir, Does the book have implementation for OFDM receiver? I mainly need to refer the packet detection code, cross correlation code and carrier frequency output code for it. Thanks!

    • It has only a conceptual level (cyclic prefix, FFT/IFFT, parallel-serial & serial-parallel converters ) implementation of OFDM transmitter & receiver. Code for packet detection/cross-correlation/carrier frequency is not available. However, the simulation for the concept of cross-correlation is applied in another chapter dealing with spread spectrum

  9. hello,
    does the book has more code about generating white noise and random walk processes?i am interested in simulating an inertial measurement unit’s errors in matlab.will the book be useful for me?

  10. Hey there, do u have a way to change (demodulation) a digital signal into bits using Matlab?? If please can elaborate on how to do so??????

  11. dear sir do you hane any idea about function regression to calculat coeffienet and lamda use the white gaussian noise …. plz help me ….

    thx

  12. In the given link,he takes three noisy sensor data and estimates the best value from the three available data using kalman

  13. i am trying to find the time delay between two signals in matlab. the two signals are added with gaussian white noise.then i found the covariance matrix.the det of cov matrix is always zero at the delay point.i used the joint entropy based time delay estimation which gave me infinite value at the delay point.how to get a finite value from this

    • When the correlation is zero, it means that no correlation exist between the signals.
      For the time delay estimation to work, there exist some significant amount correlation between the shifted versions of the signal.

      Check if the noise added to signal is too severe. In this case the correlation may go to zero at delay point (autocorrelation of noise vs noise is zero at all time lags except at zero lag).

      Try if you can get reasonable value using the inbuilt Matlab function – “D = finddelay(X,Y)”

