Basic operations on signal sequences – Addition

Key focus: How to implement the basic addition operation on two discrete time signal sequences. Python code for signal addition is provided.

Signal addition

Given two discrete-time sequences \(x_1[n]\) and \(x_2[n]\), the addition of these two sequences is represented as \(x_1[n]+ x_2[n]\). The position of the sample at \(n=0\) can be different for these two sequences.

For example, if

\[ \begin{align} x_1[n] &= \left\{ -3, 12, -4, \underset{\uparrow}{9}, 2, 15\right\} \\ x_2[n] &= \left\{ -6, \underset{\uparrow}{4}, -2, 10\right\} \end{align} \]

the position of the sample at \(n=0\) is different in these sequences. Also, the lengths of these sequences are different.

The addition operation should take these differences into account. The following Python code adjusts for these differences before computing the addition. It creates two sequences \(y_1\) and \(y_2\) of equal length that span the minimum and maximum indices of \(n_1\) and \(n_2\). The sequences \(y_1\) and \(y_2\) are first filled with zeros, then the samples of \(x_1\) and \(x_2\) are copied over to \(y_1\) and \(y_2\) at the corresponding positions. The final result is computed as the sum of \(y_1\) and \(y_2\).

import numpy as np

def signal_add(x1, n1, x2, n2):
    '''
    Computes y(n) = x1(n) + x2(n)
    ---------------------------------
    [y, n] = signal_add(x1, n1, x2, n2)
    
    Parameters
    ----------
    n1 = indices of first sequence x1
    n2 = indices of second sequence x2
    x1 = first sequence defined for indices n1
    x2 = second sequence defined for indices n2
    Returns
    -------
    y = output sequence defined over indices n that
    covers whole range of n1 and n2
    '''

    n_start = min(min(n1), min(n2))
    n_end = max(max(n1), max(n2))
    n = np.arange(n_start, n_end + 1)  # duration of y(n)

    # zero-padded versions of x1 and x2 over the full index range n
    y1 = np.zeros_like(n, dtype=complex)
    y2 = np.zeros_like(n, dtype=complex)

    # copy x1 and x2 into the padded sequences at their corresponding positions
    y1[(n >= n1[0]) & (n <= n1[-1])] = x1
    y2[(n >= n2[0]) & (n <= n2[-1])] = x2

    y = y1 + y2

    return y, n

The following code snippet uses the function above to compute the addition of two sequences \(x_1[n]\) and \(x_2[n]\), defined as

\[\begin{align} x_1[n] &= \cos \left( 0.03 \pi n \right), & 0 \leq n \leq 50 \\ x_2[n] &= e^{0.06 n}, & -30 \leq n \leq 10 \end{align}\]
n1 = np.arange(0, 51)
n2 = np.arange(-30, 11)
x1 = np.cos(0.03*np.pi*n1)
x2 = np.exp(0.06*n2)
y, n = signal_add(x1, n1, x2, n2)

These discrete sequences are plotted (Figure 1) using the code below.

import matplotlib.pyplot as plt
f, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True, sharey=True)

ax1.stem(n1, x1, 'k', label = '$x_1[n]$')
ax2.stem(n2, x2, 'b', label = '$x_2[n]$')
ax3.stem(n, y, 'r', label = '$y[n] = x_1[n] + x_2[n]$')

ax1.legend(loc="upper right")
ax2.legend(loc="upper right")
ax3.legend(loc="upper right")

ax1.set_xlim((min(n), max(n)+1))
ax1.set_title('Addition of signals')
plt.show()
Figure 1: Addition of discrete signals

Real-valued exponential sequence

In digital signal processing, we utilize various elementary sequences for the purpose of analysis. In this series, we will see such sequences. One such elementary sequence is the real-valued exponential sequence. (see the articles on unit sample sequence, unit step sequence, complex exponential sequence)

An exponential sequence in signals and systems is a discrete-time sequence that exhibits exponential growth or decay. It is characterized by a constant base raised to the power of the index. The general form of the sequence is given by:

\[x[n] = A \cdot r^n, \quad \forall n; \quad r \in \mathbb{R}\]

where:

  • \(x[n]\) is the value of the sequence at index \(n\).
  • \(A\) is the initial amplitude or value at \(n = 0\).
  • \(r\) is the constant base, which determines the growth or decay behavior.
  • \(n\) represents the index of the sequence.

If the value of r is greater than 1, the sequence grows exponentially as n increases, resulting in exponential growth. Conversely, if r is between 0 and 1, the sequence decays exponentially, approaching zero as n increases.

Exponential sequences find various applications in fields such as finance, physics, and telecommunications. In signal processing and system analysis, these sequences are fundamental components used to model and analyze various system behaviors, such as stability, convergence, and frequency response.

The following Python code implements a real-valued exponential sequence for various values of \(r\).

import matplotlib.pyplot as plt
import numpy as np

def exponential_sequence(n, A, r):
    return A * np.power(r, n)

# Define the range of n
n = np.arange(0, 25)

# Define the values of r
r_values = [0.5, 0.8, 1.2]

# Plotting the exponential sequences for various values of r
fig, axs = plt.subplots(len(r_values), 1, figsize=(8, 6))

for i, r in enumerate(r_values):
    # Generate the exponential sequence for the current value of r
    x = exponential_sequence(n, 1, r)

    # Plot the exponential sequence in the current subplot
    axs[i].stem(n, x, 'k')
    axs[i].set_xlabel('n')
    axs[i].set_ylabel(f'x[n], r={r}')
    axs[i].set_title(f'Exponential Sequence, r={r}')
    axs[i].grid(True)

# Adjust spacing between subplots
plt.tight_layout()

# Display the plot
plt.show()
Figure 1: Real-valued exponential sequence \(x[n] = A \cdot r^n\) for \(r = 0.5, 0.8, 1.2\)

Applications

Real-valued exponential sequences have various applications in different fields, including:

Signal Processing: Exponential sequences are used to model and analyze signals in fields like audio processing, image processing, and telecommunications. They are fundamental in Fourier analysis, frequency response analysis, and filter design.

System Analysis: Exponential sequences are essential in understanding and characterizing the behavior of linear time-invariant (LTI) systems. They help analyze system stability, impulse response, and frequency response.

Finance: Exponential sequences find applications in finance and economics for modeling compound interest, population growth, investment returns, and other exponential growth/decay phenomena.

Physics: In physics, exponential sequences are used to describe natural phenomena such as radioactive decay, charging/discharging of capacitors, and decay of electrical or mechanical systems.

Control Systems: Exponential sequences play a crucial role in control systems engineering. They are used to model system dynamics, analyze stability, and design controllers for desired response characteristics.

Probability and Statistics: Exponential sequences are utilized in probability and statistics to model various distributions, including the exponential distribution, which represents events occurring randomly and independently over time.

Machine Learning: Exponential sequences are used in machine learning algorithms for tasks such as feature scaling, regularization, and gradient descent optimization.

These are just a few examples of the broad range of applications where real-valued exponential sequences are utilized. Their ability to represent exponential growth or decay makes them a valuable tool for modeling and understanding dynamic systems and phenomena in various disciplines.

Unit Step Sequence

In digital signal processing, we utilize various elementary sequences for the purpose of analysis. In this series, we will see such sequences. One such elementary sequence is the unit step sequence (see the articles on unit sample sequence, unit step sequence, real-valued exponential sequence, complex exponential sequence).


A unit step sequence is a discrete-time signal that represents a step function. In discrete-time signal processing, a sequence is a set of values indexed by integers. A unit step sequence, denoted as u[n], is defined as follows:

\[ u[n] = \begin{cases} 1 , & \quad n \geq 0 \\ 0, & \quad n \lt 0\end{cases} = \left\{ \cdots,0, 0, \underset{\uparrow}{1}, 1, 1, 1, \cdots\right\} \]

In this definition, the value of \(u[n]\) is 0 for negative values of \(n\) (\(n \lt 0\)), and it is 1 for non-negative values of \(n\) (\(n \geq 0\)). The up-arrow indicates the sample at \(n=0\).

Graphically, the sequence looks like a step function, where the value remains zero for negative indices and abruptly jumps to 1 at n=0, continuing at that value for positive indices.

In the equation above, the sequence value of 1 starts at index 0. In signal processing, the starting point (where the value 1 begins) can be shifted. The generalized formula for generating a shifted sequence is

\[ u[n - n_0] = \begin{cases} 1 , & \quad n \geq n_0 \\ 0, & \quad n \lt n_0\end{cases}\]

The following python code implements a unit step sequence over the interval \(n_1 \leq n \leq n_2\)

import matplotlib.pyplot as plt
import numpy as np

def unit_step_sequence(n0, n1, n2):
    """
    Generate unit step sequence u(n - n0); n1<=n<=n2
    n0 = number of samples to offset/shift
    n1 = starting number to generate the sequence index
    n2 = ending number to generate the sequence index
    """
    n = np.arange(n1,n2+1)
    u = np.zeros_like(n)
    u[n >= n0] = 1
    return u

# Generate the shifted unit step sequence for a given range and shift value
n0 = 2  # Shift value
n1 = -3
n2 = 9
u = unit_step_sequence(n0, n1, n2)

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)

# Plotting the shifted unit step sequence
plt.stem(np.arange(n1,n2+1), u,'r', markerfmt='o', basefmt='k')
plt.xlabel('n')
plt.ylabel(r'$u[n-n_0]$')
plt.title(r'Discrete-time Unit Step Sequence $u[n-2]$')
plt.grid(True)
plt.show()
Figure 1: Discrete unit step sequence \(u[n-n_0]\); shifted by \(n_0=2\), generated over the interval \(-3 \leq n \leq 9\)

Applications

The unit step sequence is often used as a fundamental building block in signal processing and system analysis. It serves as a basic reference for studying and analyzing other signals and systems. By manipulating the sequence, one can derive other useful sequences and functions, such as shifted step sequences, ramp sequences, and more complex signals.
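For instance, a small sketch (reusing the unit_step_sequence function defined above; the pulse length of 4 is an arbitrary choice) builds a rectangular pulse as the difference of two shifted unit steps:

import numpy as np

# Rectangular pulse of length 4: rect[n] = u[n] - u[n-4], generated over -3 <= n <= 9
n1, n2 = -3, 9
n = np.arange(n1, n2 + 1)
rect = unit_step_sequence(0, n1, n2) - unit_step_sequence(4, n1, n2)
print(rect)  # 1 for 0 <= n <= 3, 0 elsewhere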

This sequence is particularly important in the field of discrete-time systems, where it is used to analyze system behavior and characterize properties like stability, causality, and linearity. It is also employed in various applications, such as digital filters, signal modeling, and signal reconstruction.


Unit sample sequence

In digital signal processing, we utilize various elementary sequences for the purpose of analysis. In this series, we will see such sequences. One such elementary sequence is the unit sample sequence (see the articles on unit sample sequence, unit step sequence, real-valued exponential sequence, complex exponential sequence)

A unit sample sequence, also known as an impulse sequence or delta sequence, is a discrete sequence that consists of a single sample with the value of 1 at a specific index, and all other samples are zero. It is commonly represented as a discrete-time impulse function or delta function.

Mathematically, a unit sample sequence can be defined as:

\[x[n] = \delta[n - n_0] = \begin{cases} 1 & \quad n = n_0 \\ 0 & \quad n \neq n_0 \end{cases} \]

Where:

  • \(x[n]\) represents the value of the sequence at index \(n\).
  • \(\delta[n]\) is the discrete-time impulse function or delta function.
  • \(n_0\) is the index at which the impulse occurs. At this index, \(x[n]\) has the value of 1, and for all other indices, \(x[n]\) is zero.

This sequence represents a localized impulse or sudden change at that particular index. The unit sample sequence is commonly used in signal processing and system analysis to study the response of systems to impulses or abrupt changes. It serves as a fundamental tool for representing and analyzing signals and systems.

In Python, the scipy.signal.unit_impulse function can be used to generate an impulse sequence. For 1-D signals, the first argument to the function is the number of samples requested for generating the unit impulse and the second argument is the offset (i.e., \(n_0\))

import matplotlib.pyplot as plt
import numpy as np
from scipy import signal

n = 11 #number of samples
n0 = 5 # Offset (Shift index)
imp = signal.unit_impulse(n, n0) #unit impulse

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)

plt.stem(np.arange(0,n),imp,'r', markerfmt='ro', basefmt='k')
plt.xlabel('Sample #')
plt.ylabel(r'$\delta[n-n_0]$')
plt.title('Discrete-time Unit Impulse')
plt.show()
Figure 1: Discrete-time unit impulse

Applications of unit sample sequences

Unit sample sequences, also known as impulse sequences, have several applications in digital signal processing. Here are a few common applications:

System Analysis: Unit sample sequences are used to analyze and characterize the behavior of systems, such as filters and signal processors, in response to impulses or sudden changes.

Convolution: Unit sample sequences are essential in the mathematical operation of convolution, which is used for filtering, signal analysis, and processing tasks.

Signal Reconstruction: Unit sample sequences are employed in reconstructing continuous signals from their sampled versions using techniques like impulse sampling and interpolation.

System Identification: Unit sample sequences can be utilized to estimate the impulse response of a system, allowing for system identification and modeling.

These are just a few examples of the diverse applications of unit sample sequences in digital signal processing. They serve as fundamental tools for analyzing signals, characterizing systems, and performing various signal processing operations.


Analog and Discrete signals

In the context of signal and systems, analog and discrete signals are two different types of signals that convey information.

Analog signal

An analog signal is a continuous signal that varies smoothly over time. It can take on any value within a certain range. Analog signals are represented by physical quantities such as voltage, current, or sound waves. For example, the varying voltage produced by a microphone when recording sound is an analog signal. Analog signals are typically represented as continuous waveforms.

Let’s consider an example of a simple analog signal:

\[x(t) = A \cdot \sin \left(2 \pi f t + \phi \right)\]

    In this equation:

    • x(t) represents the value of the analog signal at time t.
    • A is the amplitude of the signal, which determines its maximum value.
    • f is the frequency of the signal, which represents the number of cycles per unit of time.
    • φ is the phase of the signal, which represents the offset or starting point of the waveform.

    This equation describes a sinusoidal analog signal, where the value of the signal varies continuously over time. The signal can have an infinite number of values at any given instant.

    Discrete signal

    On the other hand, a discrete signal is a signal that is defined only at specific instances of time and takes on a finite set of values. Discrete signals are often derived from analog signals by a process called sampling, where the continuous analog signal is measured or sampled at regular intervals. Each sample represents the value of the signal at a particular instant. These samples can be stored and processed using digital systems. Examples of discrete signals include digital audio, digital images, and the output of a digital sensor.

    Discrete signals are commonly used in digital signal processing and can be represented using mathematical equations.

    The general equation for a discrete signal can be written as:

    \[x[n] = f(n)\]

    In this equation:

    • x[n] represents the value of the discrete signal at time instance n.
    • f(n) is the function that determines the value of the signal at each specific time instance.

    The function f(n) can take various forms depending on the specific characteristics of the discrete signal. For example, let’s start with the equation for the analog sinusoidal signal:

    \[x(t) = A \cdot \sin \left(2 \pi f t + \phi \right)\]

    To obtain the discrete version of this signal, we need to sample it at regular intervals. The sampling process involves measuring the analog signal at equidistant points in time.

    Let’s define the sampling period as \(T_s\), which represents the time between two consecutive samples. The sampling rate is the inverse of the sampling period and is denoted as \(f_s = 1 / T_s\).

    Now, we can express the discrete version of the sinusoidal signal as:

    \[x[n] = x(n T_s) = A \cdot \sin(2 \pi f n T_s + \phi)\]

    In this equation:

    • x[n] represents the value of the discrete signal at sample index n.
    • f is the frequency of the sinusoidal signal in hertz
    • n represents the sample index, indicating which sample we are considering.
    • \(T_s\) is the sampling period.
    • \(f_s\) is the sampling frequency, which is the reciprocal of the sampling period.

    By substituting \(nT_s\) for t in the analog sinusoidal signal equation, we obtain the discrete version of the sinusoidal signal. The discrete signal represents the sampled values of the original analog signal at each specific time instance, \(nT_s\).

    It’s important to note that the accuracy of the discrete signal representation depends on the sampling rate. According to the Nyquist-Shannon sampling theorem, for real signals, the sampling rate should be at least twice the maximum frequency of the analog signal to avoid aliasing and accurately reconstruct the signal from its samples.
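    As a brief numeric illustration of this point (a sketch, not part of the original code; the tone frequencies are arbitrary), sampling a 9 Hz cosine at only 10 Hz produces exactly the same samples as a 1 Hz cosine, i.e. the higher tone aliases to the lower one:

    import numpy as np

    fs = 10.0                          # sampling rate in Hz (below Nyquist for a 9 Hz tone)
    n = np.arange(20)                  # sample indices
    x_9Hz = np.cos(2*np.pi*9.0*n/fs)   # samples of a 9 Hz cosine
    x_1Hz = np.cos(2*np.pi*1.0*n/fs)   # samples of a 1 Hz cosine

    print(np.allclose(x_9Hz, x_1Hz))   # True: the 9 Hz tone aliases to 1 Hz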

    Python code

    Following is an example Python code that simulates an analog sinusoidal signal, samples it to obtain a discrete version, and overlays the two signals for comparison

    import numpy as np
    import matplotlib.pyplot as plt
    
    # Parameters for the analog signal
    amplitude = 1.0  # Amplitude of the signal
    frequency = 2.0  # Frequency of the signal in Hz
    phase = 0.0  # Phase of the signal in radians
    
    # Parameters for the discrete signal
    sampling_rate = 10  # Number of samples per second
    num_samples = 20
    
    # Time arrays for the analog and discrete signals
    t_analog = np.linspace(0, num_samples / sampling_rate, num_samples * 10)  # Higher resolution for analog signal
    n_discrete = np.arange(num_samples)
    
    # Generate the analog signal
    analog_signal = amplitude * np.sin(2 * np.pi * frequency * t_analog + phase)
    
    # Sample the analog signal to obtain the discrete signal
    discrete_signal = amplitude * np.sin(2 * np.pi * frequency * n_discrete / sampling_rate + phase)
    
    # Plot the analog and discrete signals
    plt.plot(t_analog, analog_signal, label='Analog Signal')
    plt.stem(n_discrete / sampling_rate, discrete_signal, 'r', markerfmt='ro', basefmt=' ', label='Discrete Signal')
    plt.xlabel('Time')
    plt.ylabel('Amplitude')
    plt.title('Analog and Discrete Sinusoidal Signals')
    # Move the legend outside the figure
    plt.legend(loc='upper right', bbox_to_anchor=(1.1, 1))
    plt.grid(True)
    plt.show()

    Resulting plot

    Figure 1: Simulated analog and discrete sinusoidal signals

    Design FIR filter to reject unwanted frequencies

    Let’s see how to design a simple digital FIR filter to reject unwanted frequencies in an incoming signal. As a prerequisite, I urge you to read through this post: Introduction to digital filter design

    Background on transfer function

    The transfer function of a system provides the underlying support for ascertaining vital system response characteristics without solving the difference equations of the system. As mentioned earlier, the transfer function of a generic filter in the Z-domain is given by a ratio of polynomials in z

    \[H(z) = \frac{ \displaystyle{\sum_{k=0}^{M} b_k z^{-k}}}{ 1 + \displaystyle{\sum_{k=1}^{N} a_k z^{-k}} } \quad\quad (1)\]

    The values of z when H(z) =0 are called zeros of H(z). The values of z when H(z) = ∞ are called poles of H(z).

    It is often easy to solve for the zeros \(\{z_i\}\) and poles \(\{p_j\}\), when the polynomials in the numerator and denominator are expressed as resolvable factors.

    \[H(z) = \frac{ \displaystyle{\sum_{k=0}^{M} b_k z^{-k}}}{ 1 + \displaystyle{\sum_{k=1}^{N} a_k z^{-k}} } = \frac{N(z)}{D(z)} = \frac{b_M}{a_N} \frac{(z - z_1)(z - z_2)\cdots (z - z_M)}{(z - p_1)(z - p_2)\cdots (z - p_N)} \quad\quad (2) \]

    The zeros \(\{z_i\}\) are obtained by finding the roots of the equation

    \[N(z) = 0 \quad\quad (3)\]

    The poles \(\{p_j\}\) are obtained by finding the roots of the equation

    \[D(z) = 0 \quad\quad (4) \]

    Pole-zero plots are suited for visualizing the relationship between the Z-domain and the frequency response characteristics. As mentioned before, the frequency response of the system \(H\left(e^{j \omega}\right)\) can be computed by evaluating the transfer function H(z) at specific values of \(z = e^{j \omega}\). Because the frequency response is periodic with period \(2\pi\), it is sufficient to evaluate it over the range \(-\pi \leq \omega < \pi\) (that is, one loop around the unit circle on the z-plane starting from z = -1 and ending at the same point).

    FIR filter design

    FIR filters contain only zeros and no poles in the pole-zero plot (in fact all the poles sit at the origin for a causal FIR). For an FIR filter, the location of the zeros of H(z) on the unit circle nullifies specific frequencies. So, to design an FIR filter that nullifies a specific frequency ω, we just have to place the zeros at the corresponding locations on the unit circle (\(z=e^{j \omega}\)) where the gain of the filter needs to be 0. Let’s illustrate this using an example.
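    As a quick sketch of this idea (the angle of 2.0 radians is an arbitrary example, unrelated to the audio file used below), placing a conjugate pair of zeros on the unit circle and expanding the factored form with numpy yields the filter coefficients directly, and the gain at the nulled frequency is (numerically) zero:

    import numpy as np

    omega = 2.0                                    # frequency (radians/sample) to null -- arbitrary example
    zeros = [np.exp(1j*omega), np.exp(-1j*omega)]  # conjugate pair of zeros on the unit circle
    b = np.real(np.poly(zeros))                    # expand (z - z1)(z - z2) -> [1, -2cos(omega), 1]
    print(b)

    # gain of the resulting FIR filter at the nulled frequency
    print(abs(np.polyval(b, np.exp(1j*omega))))    # ~0 (up to floating point error)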

    For this illustration, we would use this audio file as an input to the filtering system. As a first step in the filter design process, we should understand the nature of the input signal. Discrete-time Fourier transform (DTFT) is a tool for analyzing the frequency domain characteristics of the given signal.

    The following function is used to compute the DTFT of the sequence read from the audio file.

    import numpy as np
    import matplotlib.pyplot as plt
    
    from math import ceil,log,pi,cos
    from scipy.fftpack import fft,fftfreq,fftshift
    
    def compute_DTFT(x,M=0):
        """
        Compute DTFT of the given sequence x
        M is the desired length for computing DTFT (optional).
        Returns the DTFT X[k] and corresponding frequencies w (omega) arranged as -pi to pi
        """
        N = max(M,len(x))
        N = 2**(ceil(log(N)/log(2)))
        
        X = fftshift(fft(x,N))
        w = 2*pi*fftshift(fftfreq(N))    
        return (X,w)

    Let’s read the audio file, load the samples as a signal sequence \(x[n]\), and plot the sequence in the time domain and in the frequency domain (using the DTFT).

    from scipy.io.wavfile import read
    
    samplerate, x = read('speechWithNoise.wav')
    duration = len(x)/samplerate
    time = np.arange(0,duration,1/samplerate)
    
    fig1, (ax1,ax2) = plt.subplots(nrows=2,ncols=1)
    ax1.plot(time,x)
    ax1.set_xlabel('Time (s)')
    ax1.set_ylabel('Amplitude')
    ax1.set_title('speechWithNoise.wav - x[n]')
    
    (X,w)= compute_DTFT(x)
    ax2.plot(w,abs(X))
    ax2.set_xlabel('Normalized frequency (radians/sample)')
    ax2.set_ylabel('|X[k]|')
    ax2.set_title('Magnitude vs. Frequency')
    Figure 1: Time-domain and frequency domain characteristics of the given audio sample

    The magnitude vs. frequency plot shows huge spikes at θ = ±1.32344 radians. The location of the spikes is captured by using the numpy.argmax function.

    maxIndex = np.argmax(np.abs(X))
    theta = w[maxIndex]
    print(theta)

    Since a sinusoid can be mathematically represented as

    \[x[n] = \cos (\theta n) = \frac{1}{2}\left( e^{j \theta n} + e^{-j \theta n }\right) \quad\quad (5)\]

    The two spikes at θ=±1.32344 radians in the frequency domain, will manifest as a sinusoidal signal in the time domain.

    Zooming in the area between θ= ±0.4 radians, the frequency domain plot reveals a hidden signal.

    Figure 2: Hidden signal revealed in frequency domain

    Now, our goal is to design an FIR filter that rejects the sinusoid at θ = ±1.32344 radians, so that only the hidden signal passes through.

    Since the sinusoid that we want to reject is occurring at some θ radians in the frequency domain, for the FIR filter design, we place two zeros at

    \[z_1 = e^{j \theta} \quad\quad z_2 = e^{-j \theta}\quad\quad (6)\]

    Therefore, the transfer function of the filter system is given by

    \[\begin{aligned} H_f(z) &= \left( 1 - z_1 z^{-1}\right)\left(1 - z_2 z^{-1} \right) \\ &= \left( 1 - e^{j \theta} z^{-1}\right)\left(1 -e^{-j \theta}z^{-1} \right) \\ & = 1 - 2\;\cos\left(\theta\right)z^{-1} + z^{-2} \end{aligned}\]

    This is a second order FIR filter. The coefficients of the filter are

    \[b_0 = 1,\; b_1 = - 2 \cos (\theta),\; b_2 = 1 \text{ and } a_0=1\]

    For the given problem, to reject the sinusoid at θ = ±1.32344 radians, we set θ = 1.32344 in the filter coefficients above.
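    As a quick numerical check (a sketch, reusing the value of θ found above), the designed coefficients give zero gain at the rejected frequency and non-zero gain elsewhere:

    import numpy as np

    theta = 1.32344                      # frequency to reject (radians/sample)
    b = [1.0, -2*np.cos(theta), 1.0]     # FIR coefficients b0, b1, b2

    def H(omega):
        """Frequency response H(e^jw) = b0 + b1*e^{-jw} + b2*e^{-j2w}"""
        return b[0] + b[1]*np.exp(-1j*omega) + b[2]*np.exp(-2j*omega)

    print(abs(H(theta)))   # ~0 : the sinusoid at theta is rejected
    print(abs(H(0.2)))     # non-zero : low-frequency content passes (with some gain)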

    Filter in action

    Filter the input audio signal through the designed filter and plot the filtered output in the time domain and the frequency domain. The lfilter function from the scipy.signal package is utilized for the filtering operation.

    from scipy.signal import lfilter
    b = [1.0, -2*cos(theta), 1.0] # filter coefficients from the design above
    a = [1.0]                     # FIR filter: no feedback coefficients
    y_signal = lfilter(b, a, x)
    fig3, (ax3,ax4) = plt.subplots(nrows=1,ncols=2)
    ax3.plot(time,y_signal,'g')
    ax3.set(title='Noise removed speech - y[n]',xlabel='Time (s)',ylabel='Amplitude')
    
    (Y,w)= compute_DTFT(y_signal)
    ax4.plot(w,abs(Y)/max(abs(Y)),'r')
    ax4.set(title='Frequency content of Y',xlabel='Normalized frequency (radians/sample)',ylabel='Magnitude - |Y[k]|')

    The filter has effectively removed the sinusoidal noise, as evident from both time-domain and frequency domain plots.

    Figure 4: Extracted speech and its frequency content

    Save the filtered output signal as .wav file for audio playback

    from scipy.io.wavfile import write
    output_data = np.asarray(y_signal, dtype=np.int16)#convert y to int16 format
    write("noise_removed_output.wav", samplerate, output_data)

    Characteristics of the designed filter

    Let’s compute the double sided frequency response of the designed FIR filter. The frequency response of the designed FIR digital filter is computed using the freqz function from the scipy.signal package.

    from scipy.signal import freqz
    b = [1,-2*cos(theta),1] #filter coefficients
    a = [1]
    w, h = freqz(b,a,whole=True)#frequency response h[e^(jw)]
    #whole = True returns output for whole range 0 to 2*pi
    #To plot double sided response, use fftshift
    w = w - 2*np.pi*(w>=np.pi) #convert to range -pi to pi
    w = fftshift(w)
    h = fftshift(h)

    Plot the magnitude response, phase response, pole-zero plot and the impulse response of the designed filter.

    #Plot Magnitude-frequency response
    fig2, (ax) = plt.subplots(nrows=2,ncols=2)
    ax[0,0].plot(w,abs(h),'b')
    ax[0,0].set(title='Magnitude response',xlabel='Frequency [radians/sample]',ylabel='Magnitude')
    ax[0,0].grid();ax[0,0].axis('tight');
    
    #Plot phase response
    angles = np.unwrap(np.angle(h))
    ax[0,1].plot(w,angles,'r')
    ax[0,1].set(title='Phase response',xlabel='Frequency [radians/sample]',ylabel='Angles [radians]')
    ax[0,1].grid();ax[0,1].axis('tight');
    
    #Transfer function to pole-zero representation
    from scipy.signal import tf2zpk
    z, p, k = tf2zpk(b, a)
    
    #Plot pole-zeros on a z-plane
    from  matplotlib import patches
    patch = patches.Circle((0,0), radius=1, fill=False,
                        color='black', ls='dashed')
    ax[1,0].add_patch(patch)
    ax[1,0].plot(np.real(z), np.imag(z), 'xb',label='Zeros')
    ax[1,0].plot(np.real(p), np.imag(p), 'or',label='Poles')
    ax[1,0].legend(loc=2)
    ax[1,0].set(title='Pole-Zero Plot',xlabel='Real',ylabel='Imaginary')
    ax[1,0].grid()
    
    #Impulse response
    #create an impulse signal
    imp = np.zeros(20)
    imp[0] = 1
    
    from scipy.signal import lfilter
    y_imp = lfilter(b, a, imp) #drive the impulse through the filter
    ax[1,1].stem(y_imp,linefmt='-.')
    ax[1,1].set(title='Impulse response',xlabel='Sample index [n]',ylabel='Amplitude')
    ax[1,1].axis('tight')
    Figure 3: Magnitude response, phase response, pole-zero plot and impulse response of the designed second order FIR filter

    Questions

    Use the comment form below to answer the following questions

    1) Is the filter that we designed a lowpass, highpass, bandpass or a bandstop filter ?
    2) To reject a single sinusoidal signal, why are two zeros needed in the above filter design ?
    3) What do you understand from the phase response plotted above ?



    Digital filter design – Introduction

    Key focus: Develop basic understanding of digital filter design. Learn about fundamentals of FIR and IIR filters and the design choices.

    Analog filters and digital filters are the two major classifications of filters, depending on the type of signal they process. An analog filter processes continuous-time analog signals, whereas digital filters process sampled, discrete-time signals.

    Design of digital filters involves the use of both frequency domain and time domain techniques. This is because the filter specifications are often stated in the frequency domain, while the implementation is done in the time domain in the form of difference equations. The Z-transform and the discrete-time Fourier transform (DTFT) are typical tools used for frequency domain analysis of filters.

    Linear Time-Invariant Causal filter

    The choice of filter and the design process depend on the design specification, the application and the performance issues associated with them. In this section, the focus is on the principles used for designing linear time-invariant causal filters.

    A linear system obeys the principle of superposition: its response to a weighted sum of inputs is the same weighted sum of the individual responses. An arbitrary signal, in turn, can be represented as a weighted sum of shifted unit impulse functions. In a linear time-invariant filtering system, the filter coefficients do not change with time; therefore, if the input signal is time-shifted, there will be a corresponding time-shift in the output signal. The term causal implies that the output of the system depends only on the present and past samples of the input or output, and not on future samples. Causality is required for real-time applications.

    Solution for LTI causal filtering

    The input-output relationship in a linear time-invariant causal filter system is mathematically described using the following difference equation

    \[ \begin{align} \sum_{k=0}^{N} a_k y[n-k] &= \sum_{i=0}^{M} b_i x[n-i] \\ \Rightarrow y[n] &= \sum_{i=0}^{M} b_i x[n-i] - \sum_{k=1}^{N} a_k y[n-k] \end{align} \quad \quad (1) \]

    where x[n] and y[n] are the input and output (sampled discrete-time) signals of the filtering system, and \(a_k\) and \(b_i\) are the coefficients of the filter, programmed to certain values for achieving the given filtering task. The values of M and N determine the number of such coefficients; in other words, the filter has M zeros and N poles.

    Linear time-invariant systems are completely characterized by their response to an impulse signal δ[n]. Therefore, the analysis and solution for a given filtering task are easily obtained by using the impulse response h[n], which is the response of the LTI system to an impulse signal δ[n].

    Therefore, the solution for the recursive equation in (1) is found by exciting the input with an impulse, that is, x[n] = δ[n].

    \[h[n] = \sum_{i=0}^{M} b_i \delta[n-i] - \sum_{k=1}^{N} a_k h[n-k] \quad \quad (2) \]
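    As a concrete, illustrative example (the coefficients below are arbitrary, not taken from the text), the impulse response of the first-order system y[n] = x[n] + 0.5 y[n-1] can be obtained by driving an impulse through scipy.signal.lfilter, which implements this type of difference equation:

    import numpy as np
    from scipy.signal import lfilter

    # Example system: y[n] = x[n] + 0.5*y[n-1]  =>  b = [1], a = [1, -0.5]
    b = [1.0]
    a = [1.0, -0.5]

    imp = np.zeros(10)
    imp[0] = 1.0                 # impulse input x[n] = delta[n]
    h = lfilter(b, a, imp)       # impulse response h[n]
    print(h)                     # 1, 0.5, 0.25, 0.125, ... = 0.5**n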

    Z-transform and DTFT

    The Z-transform and the discrete-time Fourier transform (DTFT) are important tools for analyzing difference equations and the frequency response of filters. The Z-transform converts a discrete-time signal into a complex frequency domain representation.

    The impulse response of a discrete-time causal system is analyzed using the unilateral or one-sided Z-transform. The unilateral Z-transform of a discrete-time signal x[n] is given by

    \[X(z) = \sum_{n=0}^{\infty} x[n] z^{-n} \quad \quad (3)\]

    where n is an integer and z is a complex number with magnitude r and complex argument ω in radians.

    \[ z = re^{j \omega} = r \cdot \left[ cos(\omega) + j\; sin (\omega) \right] \quad \quad (4)\]
    Figure 1: Z-transform and Z-plane

    If we let \(|z| = r = 1\), the Z-transform reduces to the discrete-time Fourier transform (DTFT).

    \[X\left(e^{j \omega}\right) = X(z) \Big|_{z = e^{j \omega}} = \sum_{n=0}^{\infty} x[n] e^{-j \omega n} \quad \quad (5)\]

    The Z-plane contains all values of z, whereas the unit circle (defined by \(|z|=r=1\)) contains only \(z=e^{j \omega}\) values. Therefore, we can state that the Z-transform may exist anywhere on the Z-plane while the DTFT exists only on the unit circle. One period of the DTFT is equivalent to one loop around the unit circle on the Z-plane.

    One of the important properties of the Z-transform concerns the time-shifting of discrete-time samples. A delay of K samples in the time domain is equivalent to multiplication by \(z^{-K}\) in the Z-transform domain.

    \[x[n-K] \rightleftharpoons z^{-K} X(z) \quad \quad (7)\]

    Re-writing equation (1) in the Z-domain, using the time-shift property,

    \[\sum_{k=0}^{N} a_k z^{-k} Y(z) = \sum_{i=0}^{M} b_i z^{-i} X(z) \quad \quad (8)\]

    Therefore, the transfer function of the generic digital filter is given by

    \[H(z) = \frac{Y(z)}{X(z)} = \frac{ \displaystyle{\sum_{i=0}^{M} b_i z^{-i}}}{ 1 + \displaystyle{\sum_{k=1}^{N} a_k z^{-k}} } \quad \quad (9)\]

    The frequency response of the system can therefore be computed by evaluating the transfer function H(z) on the unit circle (i.e., \(|z| = r = 1 \rightarrow z = e^{j \omega}\))

    \[H\left(e^{j \omega}\right) = H(z) \Big|_{z = e^{j \omega}} = \frac{ \displaystyle{\sum_{i=0}^{M} b_i e^{-j \omega i}}}{ 1 + \displaystyle{\sum_{k=1}^{N} a_k e^{-j \omega k}} } \quad \quad (10)\]

    IIR filter

    From implementation standpoint, there are two classes of digital filters: infinite impulse response (IIR) and finite impulse response (FIR) filters.

    When \(a_k \neq 0\) as in equation (2), the filter structure is characterized by the presence of feedback elements. Due to the presence of feedback elements, the impulse response of the filter may not become zero beyond a certain point in time, but continues indefinitely, hence the name infinite impulse response (IIR) filter.

    \[IIR: \quad \quad h[n] = \sum_{i=0}^{M} b_i \delta[n-i] - \sum_{k=1}^{N} a_k h[n-k], \quad a_k \neq 0 \quad \quad (11) \]

    Therefore, equations (9) and (10) are essentially the transfer function and the frequency response of an IIR filtering system.

    There exist different methods for implementing the filter structure. Examples include the direct form I structure, direct form II structure, lattice structure, transposed structures, state space representations, etc. Each method has its own advantages and disadvantages; for example, some methods may be more robust to precision issues and coefficient sensitivity, or require less memory and computation. The method is chosen depending on the application.

    Figure 2 shows the direct form I signal flow graph for the generic IIR filter described by equation (1). The boxes labelled \(z^{-1}\) are unit delays, because the Z-transform of a unit delay is \(z^{-1}\).

    Figure 2: Direct form I signal flow graph for an IIR filter

    FIR filter

    When \(a_k = 0\) for all \(k \geq 1\), there is no feedback in the filter structure and there are no poles in the pole-zero plot (in fact, all the poles sit at the origin for a causal FIR). The equation is no longer recursive. The impulse response of such a filter dies out (becomes zero) beyond a certain point in time, hence the name finite impulse response (FIR) filter.

    Setting x[n] = δ[n] and \(a_k = 0\) for all \(k \geq 1\), the impulse response of an FIR filter is given by

    \[FIR: \quad \quad h[n] = \sum_{i=0}^{M} b_i \delta[n-i] \quad \quad (13) \]

    Evaluating the Z-transform of the impulse input x[n] = δ[n], the following mapping holds in the Z-domain

    \[\delta[n] \rightleftharpoons 1 \quad \quad (14)\]

    From equations (13) and (14), the transfer function of the FIR filter in the Z-domain is given by

    \[FIR: \quad \quad H(z) = \sum_{i=0}^{M} b_i z^{-i} \quad \quad (15)\]

    The frequency response is given by evaluating the transfer function on the unit circle \(|z|=1\).

    \[FIR: \quad \quad H\left(e^{j \omega}\right) = \sum_{i=0}^{M} b_i e^{-j \omega i} \quad \quad (16)\]
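    As a small illustrative check (arbitrary example coefficients), driving an impulse through a filter with no feedback terms returns the feed-forward coefficients themselves as the impulse response, which then terminates:

    import numpy as np
    from scipy.signal import lfilter

    b = [0.2, 0.5, 0.2]          # example feed-forward coefficients (M = 2)
    a = [1.0]                    # no feedback terms -> FIR

    imp = np.zeros(8)
    imp[0] = 1.0
    h = lfilter(b, a, imp)
    print(h)                     # [0.2, 0.5, 0.2, 0, 0, 0, 0, 0] -> finite impulse response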

    The direct form I signal flow graph for the FIR filter is shown in Figure 3

    Figure 3: Direct form I signal flow graph for FIR filter





    Choosing FIR or IIR ? Understand design perspective

    “What is the best filter that I should use? FIR or IIR?” is often the question asked by many. There exist two different types of Linear Time Invariant (LTI) filters from a transfer function standpoint: FIR (Finite Impulse Response) and IIR (Infinite Impulse Response) filters, and myriad design techniques for designing them. The mere fact that there exist so many techniques for designing a filter suggests that there is no single optimal filter design. One has to weigh the pros and cons of choosing a filter design by considering the factors discussed here.

    Before proceeding to know the secrets of choosing a filter, I urge you to brush up the fundamentals of digital filter design.

    Design specification:

    A filter design starts with a specification. We may have a simple specification that just calls for removing an unwanted frequency component, or it can be a complete design specification with various parameters like the amount of ripple allowed in the passband, the stopband attenuation, the transition width, etc. The design specification usually calls for satisfying one or more of the following:

    ● Desired magnitude response – \(\left|H_{desired}\left(e^{j \omega}\right)\right|\)
    ● Desired phase response – \(\angle H_{desired}\left(e^{j \omega}\right)\)
    ● Tolerance specifications – that specify how much the filter response is allowed to vary when compared with the ideal response. Examples include how much ripple is allowed in the passband, the stopband, etc.

    Figure 1: FIR and IIR filters can be realized as direct-form structures. (a) Structure with feed-forward elements only – typical for FIR designs. (b) Structure with feed-back paths – generally results in IIR filters. Other structures like lattice implementations are also available.

    Given the specification above, the goal of the filter design process is to choose the parameters \(M\), \(N\), \(\{a_k\}\) and \(\{b_i\}\), such that the transfer function of the filter

    \[H(z) = \frac{ \displaystyle{\sum_{i=0}^{M} b_i z^{-i}}}{ 1 + \displaystyle{\sum_{k=1}^{N} a_k z^{-k}} } \quad \quad (1)\]

    yields the desired frequency response \(H\left(e^{j \omega}\right)\). In other words, the design process also involves choosing the number and location of the zeros \(\{z_i\}\) and poles \(\{p_j\}\) in the pole-zero plot.

    Two types of filters can arise from the given transfer function above.

    ● When \(a_k = 0\) for all \(k \geq 1\), there is no feedback in the filter structure and there are no poles in the pole-zero plot (in fact all the poles sit at the origin). The impulse response of such a filter dies out (becomes zero) beyond a certain point in time, and it is classified as a Finite Impulse Response (FIR) filter. It can provide a linear phase characteristic in the passband.
    ● When \(a_k \neq 0\), the filter structure is characterized by the presence of feedback elements. Due to the presence of feedback elements, the impulse response of the filter may not become zero beyond a certain point, but continues indefinitely, hence the name Infinite Impulse Response (IIR) filter.
    ● Caution: In most cases, the presence of feedback elements provides an infinite impulse response, but it is not always true. There are some exceptional cases where a feedback structure results in a finite impulse response. For example, a moving average filter has a finite impulse response, yet its output can be described using a recursive formula, which results in a structure with feedback elements (see the sketch below).
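    The moving-average caution above can be checked numerically. The following sketch (a 4-point average and a random test input, both arbitrary choices) implements the same L-point moving average as a plain FIR filter and in its recursive form y[n] = y[n-1] + (x[n] - x[n-L])/L; the outputs match even though the second structure contains feedback:

    import numpy as np
    from scipy.signal import lfilter

    L = 4                                    # moving average length
    x = np.random.randn(50)                  # arbitrary test input

    # FIR (non-recursive) form: y[n] = (x[n] + x[n-1] + ... + x[n-L+1]) / L
    y_fir = lfilter(np.ones(L)/L, [1.0], x)

    # Recursive form: y[n] = y[n-1] + (x[n] - x[n-L]) / L
    b = np.zeros(L + 1)
    b[0], b[L] = 1.0/L, -1.0/L
    a = [1.0, -1.0]
    y_rec = lfilter(b, a, x)

    print(np.allclose(y_fir, y_rec))         # True -- feedback structure, yet finite impulse response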

    General considerations in design:

    As specified earlier, the choice of filter and the design process depend on the design specification, the application and the performance issues associated with them. However, the following general considerations are applied in practical design.

    Minimizing number of computations

    In order to minimize the memory requirements for storing the filter coefficients \(\{a_k\}\), \(\{b_i\}\), and to minimize the number of computations, ideally we would like \(M\) and \(N\) to be as small as possible. For the same specification, IIR filters result in a much lower order when compared to their FIR counterparts. Therefore, IIR filters are efficient when viewed from this standpoint.
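    As a rough, illustrative sketch (the lowpass specification below is an arbitrary example, not from the original article), SciPy's order-estimation helpers show this gap for one and the same specification:

    from scipy.signal import buttord, kaiserord

    # Example spec: passband edge 0.2, stopband edge 0.3 (normalized to Nyquist),
    # at most 1 dB passband ripple and at least 40 dB stopband attenuation
    N_iir, Wn = buttord(wp=0.2, ws=0.3, gpass=1, gstop=40)
    numtaps, beta = kaiserord(ripple=40, width=0.3 - 0.2)

    print(N_iir)     # Butterworth IIR order -- typically a small number
    print(numtaps)   # Kaiser-window FIR length -- typically several times larger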

    Need for real-time processing

    The given application may require processing of input samples in real-time or the input samples may exist in a recorded state (example: video/audio playback, image processing applications, audio compression). From this perspective, we have two types of filter systems

    ● Causal Filter
    — Filter output depends on present and past input samples, not on the future samples. The output may also depend on the past output samples, as in IIR filters. Strictly no future samples.
    — Such filters are very much suited for real-time applications.

    ● Non-Causal Filter
    — There are many practical cases where a non-causal filter is required. Typically, such application warrants some form of post-processing, where the entire data stream is already stored in memory.
    — In such cases, a filter can be designed that takes in all types of input samples : present, past and future, for processing. These filters are classified as non-causal filters.
    — Non-causal filters have much simpler design methods.

    It is often stated in signal processing texts that causal filters are the practically realizable ones. That does not mean non-causal filters cannot be implemented; in fact, both types of filters are implementable and you can see them in many systems today. The question you must ask is: does your application require real-time processing, or processing of pre-recorded samples? If the application requires real-time processing, causal filters must be used. Otherwise, non-causal filters can be used.
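    To make the distinction concrete, the sketch below (illustrative only; the filter and data are arbitrary) filters the same record causally with lfilter and non-causally with filtfilt, which runs forward and backward over the stored data and therefore needs the entire record in advance:

    import numpy as np
    from scipy.signal import butter, lfilter, filtfilt

    b, a = butter(4, 0.2)              # example 4th-order lowpass, cutoff 0.2 x Nyquist
    x = np.random.randn(500)           # pre-recorded data

    y_causal = lfilter(b, a, x)        # causal: usable sample-by-sample in real time
    y_noncausal = filtfilt(b, a, x)    # non-causal: forward-backward (zero phase),
                                       # requires the whole record to be available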

    Consequences of causal filter:

    If the application requires real-time processing, causal filters are the only choice for implementation. Following consequences must be considered if causality is desired.

    Ideal filters with finite bands of zero response (example: brick-wall filters) cannot be implemented using a causal filter structure. A direct consequence of causality is that the response cannot be ideal. Hence, we must design a filter that provides a close approximation to the desired response \(H_{desired}\left(e^{j \omega}\right)\). If a tolerance specification is given, it has to be met.

    For example, in the case of designing a low pass filter with a given passband edge frequency (\(\omega_p\)) and stopband edge frequency (\(\omega_s\)), additional tolerance specifications like the allowable passband ripple factor (\(\delta_p\)) and stopband ripple factor (\(\delta_s\)) need to be considered for the design. Therefore, the practical filter design involves choosing \(\omega_p\), \(\omega_s\), \(\delta_p\) and \(\delta_s\), and then designing the filter with the \(M\), \(N\), \(\{a_k\}\) and \(\{b_i\}\) that satisfy all the given requirements/responses. Often, iterative procedures may be required to satisfy all the above (example: the Parks-McClellan algorithm used for designing optimal causal FIR filters [1]).

    Figure 2: A sample filter design specification

    For a causal filter, the real and imaginary parts of the frequency response form a Hilbert transform pair [2]. Hence, the magnitude and phase responses become interdependent.

    Stability

    A causal LTI digital filter is BIBO (Bounded Input Bounded Output) stable if and only if its impulse response h[n] is absolutely summable.

    \[\sum_{n=0}^{\infty} \left| h[n] \right| < \infty \quad \quad (2)\]

    The impulse response of an FIR filter is of finite duration and hence always absolutely summable, so FIR filters are inherently stable. On the other hand, an IIR filter may become unstable if not designed properly.

    Consider an IIR filter implemented using a floating point processor that has enough accuracy to represent all the coefficients in the transfer function below

    The corresponding impulse response is plotted in Figure 3(a). The plot shows that the impulse response decays rapidly to zero as \(n\) increases. For this case, the sum in equation (2) will be finite. Hence this IIR filter is stable.

    Suppose we were to implement the same filter on a fixed point processor and were forced to round off the coefficients to 2 digits after the decimal point; the transfer function coefficients are then slightly perturbed from their intended values.

    The corresponding impulse response, plotted in Figure 3(b), shows that the impulse response increases rapidly towards a constant value as \(n\) increases. For this case, the sum in equation (2) tends to infinity. Hence the implemented IIR filter is unstable.

    Figure 3: Impact of poorly implemented IIR filter on stability. (a) Stable IIR filter, (b) The same IIR filter becomes unstable due to rounding effects.

    Therefore, it is imperative that an IIR filter implementation be tested for stability. To analyze the stability of the filter, the infinite sum in equation (2) needs to be computed, and it is often difficult to compute this sum directly. Analysis of the pole-zero plot is an alternative solution to this problem. For a stable causal filter, the poles of the transfer function should lie strictly inside the unit circle on the pole-zero plot. The pole-zero plots for the two transfer functions above are shown in Figure 4. For the exactly-represented transfer function, all the poles lie within the unit circle (the region of stability), and hence it is a stable IIR filter. On the other hand, for the rounded transfer function, one pole lies exactly on the unit circle (i.e., just outside the region of stability), and hence the implemented IIR filter is unstable.
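    Since the coefficients of the example transfer functions are not reproduced here, the following is a generic sketch of the pole-magnitude test (with arbitrary example denominators): a causal filter is stable when every root of its denominator polynomial lies strictly inside the unit circle.

    import numpy as np

    def is_stable(a):
        """Check BIBO stability of a causal filter from its denominator coefficients a."""
        poles = np.roots(a)
        return bool(np.all(np.abs(poles) < 1.0))

    print(is_stable([1.0, -0.9]))    # pole at 0.9 -> inside the unit circle -> stable
    print(is_stable([1.0, -1.0]))    # pole at 1.0 -> on the unit circle    -> not stable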

    Linear phase requirement

    In many signal processing applications, a digital filter should not alter the angular relationship between the real and imaginary components of a signal, especially in the passband. In other words, the phase relationship between the signal components should be preserved in the filter’s passband. If not, we have phase distortion.

    Phase distortion is a concern in many signal processing applications. For example, in phase modulations like GMSK [3], the entire demodulation process hinges on the phase relationship between the in-phase and quadrature components of the incoming signal. If we have a phase-distorting filter in the demodulation chain, the entire detection process goes for a toss. Hence, we have to pay attention to the phase characteristics of such filters. To have no phase distortion when processing a signal through a filter, every spectral component of the signal inside the passband should be delayed by the same amount of time (delay measured in samples). In other words, the phase response in the passband should be a linear function (straight line) of frequency (except for the phase wraps at the band edges). A filter that satisfies this property is called a linear phase filter. FIR filters can provide a perfectly linear phase characteristic in the passband (Figure 5) and hence avoid phase distortion. IIR filters provide a non-linear phase characteristic. If a real-time application warrants zero phase distortion, FIR filters are the immediate choice for the design.

    It is intuitive to see that the phase response of a generalized linear phase filter should follow the relationship \(\angle H\left(e^{j \omega}\right) = \beta - \alpha \omega\), where \(-\alpha\) is the slope and \(\beta\) is the intercept when viewing the linear relationship between the frequency and the phase response in the passband (Figure 5). The phase delay and group delay are the important filter characteristics considered for ascertaining the phase distortion, and they relate to the intercept and the slope of the phase response in the passband. Linear phase filters are characterized by a constant group delay. Any deviation of the group delay from this constant value inside the passband indicates the presence of a certain degree of non-linearity in the phase, and hence causes phase distortion.

    Figure 5: FIR filter showing linear phase characteristics in the passband

    Phase delay is the time delay experienced by each spectral component of the input signal. For a filter with frequency response \(H\left(e^{j \omega}\right)\) and phase response \(\theta(\omega) = \angle H\left(e^{j \omega}\right)\), the phase delay response is defined as

    \[\tau_p(\omega) = - \frac{\theta(\omega)}{\omega}\]

    Group delay is the delay experienced by a group of spectral components within a narrow frequency interval about \(\omega\) [4]. The group delay response is defined as the negative derivative of the phase response \(\theta(\omega)\).

    \[\tau_g(\omega) = - \frac{d \theta(\omega)}{d \omega}\]

    For the generalized linear phase filter with \(\theta(\omega) = \beta - \alpha \omega\), the group delay and phase delay are given by

    \[\tau_g(\omega) = \alpha, \quad \quad \tau_p(\omega) = \alpha - \frac{\beta}{\omega}\]
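    A quick numeric check (an illustrative sketch with an arbitrary symmetric FIR lowpass) shows the constant group delay expected of a linear phase design:

    import numpy as np
    from scipy.signal import firwin, group_delay

    b = firwin(numtaps=21, cutoff=0.3)      # symmetric (linear phase) FIR lowpass
    w, gd = group_delay((b, [1.0]), w=np.linspace(0, 0.3*np.pi, 64))  # passband frequencies

    print(gd)   # constant (numtaps-1)/2 = 10 samples across the passband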

    Summary of design choices

    ● IIR filters are efficient: they can provide a similar magnitude response with fewer coefficients, or lower sidelobes for the same number of coefficients.
    ● For a linear phase requirement, FIR filters are the immediate choice for the design.
    ● FIR filters are inherently stable. IIR filters are susceptible to the finite word-length effects of fixed point arithmetic, and hence the design has to be rigorously tested for stability.
    ● IIR filters provide less average delay compared to their equivalent FIR counterparts. If the filter has to be used in a feedback path in a system, the amount of filter delay is very critical as it affects the stability of the overall system.
    ● Given a specification, an IIR design can be easily deduced based on closed-form expressions. However, satisfying the design requirements using an FIR design generally requires iterative procedures.


    References:

    [1] J. H. McClellan, T. W. Parks, and L. R. Rabiner, “A Computer Program for Designing Optimum FIR Linear Phase Digital Filters,” IEEE Trans. on Audio and Electroacoustics, Vol. AU-21, No. 6, pp. 506-526, December 1973.

    [2] Frank R. Kschischang, ‘The Hilbert Transform’, Department of Electrical and Computer Engineering, University of Toronto, October 22, 2006.

    [3] Thierry Turletti, ‘GMSK in a nutshell’, Telemedia Networks and Systems Group, Laboratory for Computer Science, Massachusetts Institute of Technology, April 1996.

    [4] Julius O. Smith III, ‘Introduction to digital filters – with audio applications’, Center for Computer Research in Music and Acoustics (CCRMA), Department of Music, Stanford University.



    Foundations of Signal Processing – Martin Vetterli, Jelena Kovacevic, and Vivek K Goyal


    Key focus: Free ebook on Signal processing

    Read it here: https://www.gaussianwaves.com/gaussianwaves/wp-content/uploads/2017/02/FSP_v1.1_2014.pdf

    Licensed under Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License

    Simulate additive white Gaussian noise (AWGN) channel

    In this article, the relationship between SNR-per-bit (Eb/N0) and SNR-per-symbol (Es/N0) is defined with respect to M-ary signaling schemes. Then the complex baseband model for an AWGN channel is discussed, followed by the theoretical error rates of various modulations over the additive white Gaussian noise (AWGN) channel. Finally, the complex baseband models for digital modulators and detectors developed in the previous chapter of this book are incorporated to build a complete communication system model.

    If you would like to know more about the simulation and analysis of white noise, I urge you to read this article: White noise: Simulation & Analysis using Matlab.

    Signal to noise ratio (SNR) definitions

    Assuming a channel of bandwidth B, received signal power Pr and noise power spectral density (PSD) N0/2, the signal-to-noise ratio (SNR) is given by

    \[ \gamma = \frac{P_r}{N_0 B} \]

    Let a signal’s energy-per-bit be denoted as Eb and its energy-per-symbol as Es; then γb = Eb/N0 and γs = Es/N0 are the SNR-per-bit and the SNR-per-symbol, respectively.

    For an uncoded M-ary signaling scheme with k = log2(M) bits per symbol, the signal energy per modulated symbol is given by

    \[ E_s = k \, E_b = \log_2(M) \, E_b \]

    The SNR per symbol is then

    \[ \gamma_s = \frac{E_s}{N_0} = \log_2(M) \, \gamma_b \]
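
    As a quick numerical illustration of this relation (an added sketch, not part of the original text), the snippet below converts a target Eb/N0 in dB to the corresponding Es/N0 for an assumed M-ary scheme:

    # Added sketch: Eb/N0 to Es/N0 conversion for an uncoded M-ary scheme
    import numpy as np

    M = 16                             # illustrative choice: 16-QAM
    k = np.log2(M)                     # bits per symbol
    EbN0dB = 10.0                      # SNR-per-bit in dB
    EsN0dB = EbN0dB + 10*np.log10(k)   # gamma_s = log2(M)*gamma_b, i.e. +10*log10(k) dB
    print(EsN0dB)                      # 16.02 dB for M = 16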

    AWGN channel model

    In order to simulate a specific SNR point in performance simulations, random noise of a specific strength needs to be added to the modulated signal from the transmitter. The strength of the generated noise depends on the desired SNR level, which is usually an input in such simulations. In practice, SNRs are specified in dB. Given a specific SNR point for simulation, let’s see how we can simulate an AWGN channel that adds the correct level of white noise to the transmitted symbols.

    Figure 1: Simplified simulation model for the AWGN channel

    Consider the AWGN channel model given in Figure 1. Given a specific SNR point to simulate, we wish to generate a white Gaussian noise vector of appropriate strength and add it to the incoming signal. The method described applies to both waveform simulations and complex baseband simulations. In the following text, the term SNR (γ) refers to γb = Eb/N0 when the modulation is of binary type (example: BPSK). For multilevel modulations such as QPSK and MQAM, the term SNR refers to γs = Es/N0.

    (1) Assume s is a vector that represents the transmitted signal. We wish to generate a vector r that represents the signal after passing through the AWGN channel. The amount of noise added by the AWGN channel is controlled by the given SNR γ.

    (2) For the waveform simulation model, let the given oversampling ratio be denoted as L. On the other hand, if you are using the complex baseband models, set L = 1.

    (3) Let N denote the length of the vector s. The signal power of the vector s is measured as

    \[ P = \frac{L}{N} \sum_{k=0}^{N-1} \left| s[k] \right|^2 \]

    (4) The required power spectral density of the noise vector n is computed from the measured signal power and the given SNR as

    \[ N_0 = \frac{P}{\gamma} \]

    (5) Assuming a complex IQ plane for all the digital modulations, the required noise variance (noise power) per dimension for generating the Gaussian random noise is given by

    \[ \sigma^2 = \frac{N_0}{2} \]

    (6) Generate the noise vector n drawn from a normal distribution with zero mean and the standard deviation σ computed above. For complex signals, independent noise of variance N0/2 is generated for the real and imaginary parts.

    (7) Finally, add the generated noise vector n to the signal s

    \[ r = s + n \]

    Matlab code

    The following custom function, written in Matlab, can be used for adding AWGN noise to an incoming signal. It can be used in waveform simulation as well as complex baseband simulation models.

    %author - Mathuranathan Viswanathan (gaussianwaves.com)
    %This code is part of the books: Wireless communication systems using Matlab & Digital modulations using Matlab.
    
    function [r,n,N0] = add_awgn_noise(s,SNRdB,L)
    %Function to add AWGN to the given signal
    %[r,n,N0]= add_awgn_noise(s,SNRdB) adds AWGN noise vector to signal
    %'s' to generate a resulting signal vector 'r' of specified SNR
    %in dB. It also returns the noise vector 'n' that is added to the
    %signal 's' and the spectral density N0 of noise added
    %
    %[r,n,N0]= add_awgn_noise(s,SNRdB,L) adds AWGN noise vector to
    %signal 's' to generate a resulting signal vector 'r' of specified
    %SNR in dB. The parameter 'L' specifies the oversampling ratio used
    %in the system (for waveform simulation). It also returns the noise
    %vector 'n' that is added to the signal 's' and the spectral
    %density N0 of noise added
     s_temp=s;
     if iscolumn(s), s=s.'; end; %to return the result in same dim as 's'
     gamma = 10^(SNRdB/10); %SNR to linear scale
     
     if nargin==2, L=1; end %if third argument is not given, set it to 1
     
     if isvector(s),
      P=L*sum(abs(s).^2)/length(s);%Actual power in the vector
     else %for multi-dimensional signals like MFSK
      P=L*sum(sum(abs(s).^2))/length(s); %if s is a matrix [MxN]
     end
     
     N0=P/gamma; %Find the noise spectral density
     if(isreal(s)),
      n = sqrt(N0/2)*randn(size(s));%computed noise
     else
      n = sqrt(N0/2)*(randn(size(s))+1i*randn(size(s)));%computed noise
     end
     
     r = s + n; %received signal
     
     if iscolumn(s_temp), r=r.'; end;%return r in original format as s
    end

    Python code

    The following custom function, written in Python 3, can be used for adding AWGN noise to an incoming signal. It can be used in waveform simulation as well as complex baseband simulation models.

    # author - Mathuranathan Viswanathan (gaussianwaves.com)
    # This code is part of the book Digital Modulations using Python
    
    from numpy import sum,isrealobj,sqrt
    from numpy.random import standard_normal
    
    def awgn(s,SNRdB,L=1):
        """
        AWGN channel
        Add AWGN noise to the input signal. The function adds an AWGN noise vector to the signal 's'
        to generate a resulting signal vector 'r' of the specified SNR in dB. The noise vector 'n'
        and its power spectral density N0 are computed internally from the measured signal power.
        Parameters:
            s : input/transmitted signal vector
            SNRdB : desired signal to noise ratio (expressed in dB) for the received signal
            L : oversampling factor (applicable for waveform simulation) default L = 1.
        Returns:
            r : received signal vector (r=s+n)
    """
        gamma = 10**(SNRdB/10) #SNR to linear scale
        if s.ndim==1:# if s is single dimensional vector
            P=L*sum(abs(s)**2)/len(s) #Actual power in the vector
        else: # multi-dimensional signals like MFSK
            P=L*sum(sum(abs(s)**2))/len(s) # if s is a matrix [MxN]
        N0=P/gamma # Find the noise spectral density
        if isrealobj(s):# check if input is real/complex object type
            n = sqrt(N0/2)*standard_normal(s.shape) # computed noise
        else:
            n = sqrt(N0/2)*(standard_normal(s.shape)+1j*standard_normal(s.shape))
        r = s + n # received signal
        return r
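
    A quick usage sketch for the function above (added for illustration; the QPSK mapping and parameter values are assumptions, not from the original listing):

    # Added usage sketch for the awgn() function defined above
    import numpy as np

    np.random.seed(1)                       # repeatable run
    bits = np.random.randint(0, 2, 2000)    # random bit stream
    # simple QPSK mapping onto the complex IQ plane (illustrative)
    symbols = ((1 - 2*bits[0::2]) + 1j*(1 - 2*bits[1::2]))/np.sqrt(2)

    r = awgn(symbols, SNRdB=10)             # received symbols at Es/N0 = 10 dB
    print(r[:5])                            # first few noisy symbols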

    Theoretical symbol error rates for digital modulations in AWGN channel

    Denoting the symbol error rate (SER) as \(P_s\), the SNR-per-bit as \(\gamma_b = E_b/N_0\) and the SNR-per-symbol as \(\gamma_s = E_s/N_0\), the symbol error rates for various modulation schemes over the AWGN channel are listed in Table 1 (refer [1]).

    Table 1: Theoretical symbol error rate for various modulations in AWGN channel

    The theoretical symbol error rates are coded as a reusable function. In this implementation, the erfc function is used instead of the Q function shown in Table 1. The two functions are related as

    \[ Q(x) = \frac{1}{2} \, erfc \left( \frac{x}{\sqrt{2}} \right) \]
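
    As an example of such a reusable function, the following sketch (an illustrative reimplementation, not the book’s exact listing) computes theoretical symbol error rates for BPSK and M-PSK using scipy’s erfc:

    # Added sketch: theoretical SER using erfc (illustrative only)
    import numpy as np
    from scipy.special import erfc

    def ser_bpsk(gamma_b):
        # exact BPSK symbol error rate; gamma_b = Eb/N0 on a linear scale
        return 0.5*erfc(np.sqrt(gamma_b))

    def ser_mpsk(gamma_s, M):
        # approximate M-PSK symbol error rate (tight for M > 2 at high SNR);
        # gamma_s = Es/N0 on a linear scale
        return erfc(np.sqrt(gamma_s)*np.sin(np.pi/M))

    EbN0dB = np.arange(0, 11, 2)
    print(ser_bpsk(10**(EbN0dB/10)))        # BPSK SER at 0, 2, ..., 10 dB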

    Unified simulation model for performance simulation

    In the previous chapter of the books, the code implementations of the complex baseband models for various digital modulators and demodulators are given. Using these models, we can create a unified simulation code for evaluating the performance of various modulation techniques over the AWGN channel.

    The complete simulation model for performance simulation over the AWGN channel is given in Figure 2. The figure illustrates a coherent communication system model (applicable to MPSK/MQAM/MPAM modulations).

    Figure 2: Complete simulation model for a communication system with AWGN channel

    The Matlab code implementing the aforementioned simulation model is given in the books. Here, a unified approach is employed to simulate the performance of any of the given modulation techniques – MPSK, MQAM, MPAM or MFSK (the MFSK simulation technique is available in the following books: Digital Modulations using Python and Digital Modulations using Matlab).

    This article is part of the following books
    Digital Modulations using Matlab : Build Simulation Models from Scratch, ISBN: 978-1521493885
    Digital Modulations using Python ISBN: 978-1712321638
    Wireless communication systems in Matlab ISBN: 979-8648350779
    All books available in ebook (PDF) and Paperback formats

    The simulation code automatically chooses the selected modulation type, performs the Monte Carlo simulation, computes the symbol error rates and plots them against the theoretical symbol error rates. The simulated performance results obtained for MQAM and MPSK modulations are shown in Figure 3 and Figure 4.
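
    The bare-bones sketch below (added here for illustration; it stands in for the full unified model given in the books and uses BPSK with assumed parameters) shows the shape of such a Monte Carlo loop, reusing the awgn() function defined earlier:

    # Added sketch: Monte Carlo SER simulation for BPSK over AWGN (not the book's full model)
    import numpy as np
    from scipy.special import erfc

    np.random.seed(10)
    nSym = 100000                             # symbols per SNR point
    EbN0dBs = np.arange(0, 11, 2)             # Eb/N0 points to simulate
    ser_sim = np.zeros(len(EbN0dBs))

    for i, EbN0dB in enumerate(EbN0dBs):
        bits = np.random.randint(0, 2, nSym)  # random source bits
        s = 1 - 2.0*bits                      # BPSK mapping: 0 -> +1, 1 -> -1
        r = awgn(s, EbN0dB)                   # AWGN channel, L = 1
        bits_hat = (r < 0).astype(int)        # threshold detector
        ser_sim[i] = np.sum(bits != bits_hat)/nSym

    ser_theory = 0.5*erfc(np.sqrt(10**(EbN0dBs/10)))  # theoretical BPSK SER
    print(np.vstack((ser_sim, ser_theory)).T)         # simulated vs theoretical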

    Figure 3: Simulated symbol error rate performance of M-QAM modulation over AWGN channel


    References

    [1] Andrea Goldsmith, Wireless Communications, Cambridge University Press, first edition, August 8, 2005.
