OFDM and Cyclic Prefix – a hands-on demonstration using Matlab

This tutorial is a part of the website http://www.gaussianwaves.com and created under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. You must credit the author in your work if you remix, tweak, and build upon the work below. More such tricks can be found in the ebook: Simulation of Digital communication systems using Matlab by Mathuranathan Viswanathan


An OFDM transceiver architecture is typically implemented using IFFT and FFT blocks. In an OFDM transmitter, the modulated information symbols are assigned to individual subcarriers, forming a single OFDM symbol in the frequency domain. The frequency domain OFDM symbol is converted to the time domain using the IFFT block. The time domain samples are then transmitted across a channel that is modeled using a Channel Impulse Response (CIR). On the other hand, the receiver takes the time domain samples of the received OFDM symbols and converts them to the frequency domain for further demodulation of the information in the individual subcarriers.

In reality, a multipath channel or any channel in nature acts as a \(linear\) filter on the transmitted OFDM symbols. Mathematically, the transmitted OFDM symbol (denoted as \(s[n]\)) gets \(linearly\) convolved with the CIR \(h[n]\) and gets corrupted with additive white Gaussian noise, designated as \(w[n]\). Denoting \(linear\;convolution\) as ‘\(\ast\)’ and using an ‘\(N\)’-point DFT, the received signal in discrete time can be represented as

$$ r[n] = h[n] \ast s[n] + w[n] \;\;\; (\ast \rightarrow linear\;convolution ) \;\;\;\;\;\; (1) $$

The idea behind using OFDM is to combat frequency selective fading, where different frequency components of a transmitted signal can undergo different levels of fading. OFDM divides the available spectrum into small chunks called sub-channels. Within the sub-channels, the fading experienced by an individual modulated symbol can be considered flat. This opens up the possibility of using a simple frequency domain equalizer to neutralize the channel effects in the individual sub-channels.

Circular convolution and designing a simple frequency domain equalizer

From one of the DFT properties, we know that the \(circular\;convolution\) of two sequences in the time domain is equivalent to the multiplication of their respective DFTs in the frequency domain.

Let \(s[n]\) and \(h[n]\) be two sequences of length \(N\) with their DFTs denoted as \(S[k]\) and \(H[k]\) respectively. Denoting \(circular\;convolution\) as \( \circledast \),

$$ h[n] \circledast s[n] \;\;\;\; \equiv H[k] S[k] \;\;\;\;\;\; (2) $$

If we ignore channel noise in the OFDM transmission, the received signal is written as
$$ r[n] = h[n] \ast s[n] \;\;\;\;\;\;\;\;\; (\ast \rightarrow linear\;convolution ) \;\;\;\;\;\; (3) $$

Note that the channel performs a linear convolution operation on the transmitted signal. Instead, if the channel performs circular convolution, then the equation would have been
$$ r[n] = h[n] \circledast s[n] \;\;\;\;\;\;\;\;\; (\circledast \rightarrow circular\;convolution ) \;\;\;\;\;\; (4) $$

Then, by applying the DFT property given in equation \(2\),
$$ r[n] = h[n] \circledast s[n] \;\;\;\; \equiv R[k] = H[k] S[k] \;\;\;\;\;\; (5) $$

As a consequence, the channel effects can be neutralized at the receiver using a simple frequency domain equalizer (actually this is a zero-forcing equalizer when viewed in time domain) that just inverts the \(estimated\) channel response and multiplies it with the frequency response of the received signal to obtain the estimates of the OFDM symbols in the subcarriers as

$$ \hat{S}[k] = \frac{R[k]}{H[k]} \;\;\;\;\;\; (6) $$
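As a preview, the one-tap equalizer of equation (6) can be sketched in a few lines. This is a NumPy stand-in for the article's Matlab workflow; the BPSK symbol values and the 3-tap channel below are made up, and the channel is assumed to perform circular convolution (which the cyclic prefix, discussed below, guarantees).

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8                                  # number of subcarriers (hypothetical)
S = rng.choice([-1, 1], N) + 0j        # BPSK symbols assigned to the subcarriers
h = np.array([0.9, 0.5, 0.2])          # a hypothetical 3-tap CIR

s = np.fft.ifft(S)                     # OFDM symbol in the time domain
# assume the channel performs circular convolution with h
r = np.fft.ifft(np.fft.fft(h, N) * np.fft.fft(s))

R = np.fft.fft(r)                      # back to frequency domain at the receiver
H = np.fft.fft(h, N)                   # channel response (perfect estimate assumed)
S_hat = R / H                          # one-tap frequency domain equalizer, eq. (6)

print(np.allclose(S_hat, S))           # True: symbols recovered exactly
```

With a perfect channel estimate and no noise, the division recovers the transmitted symbols exactly; in practice \(H[k]\) is estimated from pilots and noise enhancement limits this zero-forcing approach.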

Demonstrating the role of Cyclic Prefix

The simple frequency domain equalizer shown in equation \(6\) is possible only if the channel performs circular convolution. But in nature, all channels perform linear convolution. Fortunately, linear convolution can be converted into circular convolution by adding a Cyclic Prefix (CP) in the OFDM architecture. Let’s understand this by demonstration.

To simplify things, we will create two randomized vectors for \(s[n]\) and \(h[n]\): \(s[n]\) is of length \(8\), the channel impulse response \(h[n]\) is of length \(3\), and we intend to use \(N=8\)-point DFT/IDFT wherever applicable.

Now, convolve the vectors \(\mathbf{s}\) and \(\mathbf{h}\) linearly and circularly. The outputs are plotted in Figure \(1\). We note that linear convolution and circular convolution do not yield identical results.
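The same experiment can be sketched as follows (a NumPy stand-in for the Matlab listing; the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(8)          # random s[n] of length 8
h = rng.standard_normal(3)          # random CIR h[n] of length 3
N = 8

lin = np.convolve(s, h)             # linear convolution, length 8 + 3 - 1 = 10
# N-point circular convolution, computed via the DFT property
cir = np.real(np.fft.ifft(np.fft.fft(s, N) * np.fft.fft(h, N)))

print(lin.shape, cir.shape)         # (10,) (8,)
print(np.allclose(lin[:N], cir))    # False: the two operations differ
```

Only the first two samples differ: the tail of the linear convolution wraps around onto them in the circular case.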

Figure 1: Difference between linear convolution and circular convolution

Let’s append a cyclic prefix to the signal \(\mathbf{s}\) by copying the last \(N_{cp}\) symbols from \(\mathbf{s}\) and pasting them to its front. Since the CIR is of length 3, we need to add at least \(N_{cp}=2\) CP symbols (one less than the CIR length).

Let’s assume that we send the cyclic-prefixed OFDM symbol \(\mathbf{s_{cp}}\) through the channel, which performs linear filtering.

Compare the outputs cir_s_h \(=\mathbf{s} \circledast \mathbf{h} \) and lin_scp_h \(=\mathbf{s}_{cp} \ast \mathbf{h} \). We can immediately recognize that the middle section of lin_scp_h is exactly the same as the cir_s_h vector. This is shown in Figure \(2\).

Figure 2: Equivalence of circular convolution and linear convolution with cyclic prefixed signal

So far, we have faked the channel into performing circular convolution by adding a cyclic extension to the OFDM symbol. We are not done yet. At the receiver, we are only interested in the middle section of lin_scp_h, which is the channel output. The first block in the OFDM receiver removes the additional symbols from the front and back of the channel output. What remains becomes the received symbol \(\mathbf{r}\) after the removal of the cyclic prefix.
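The whole cyclic-prefix demonstration can be sketched as below (NumPy stand-in; the same randomized vectors as in the earlier sketch, regenerated so the snippet is self-contained):

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(8)                     # s[n], length 8
h = rng.standard_normal(3)                     # CIR h[n], length 3
N, Ncp = 8, 2                                  # CIR length 3 -> at least 2 CP symbols

s_cp = np.concatenate([s[-Ncp:], s])           # prepend cyclic prefix
lin_scp_h = np.convolve(s_cp, h)               # channel: linear filtering

# reference: N-point circular convolution of s and h
cir_s_h = np.real(np.fft.ifft(np.fft.fft(s, N) * np.fft.fft(h, N)))

r = lin_scp_h[Ncp:Ncp + N]                     # receiver: strip CP and channel tail
print(np.allclose(r, cir_s_h))                 # True
```

The middle \(N\) samples of the linearly filtered, cyclic-prefixed symbol coincide exactly with the circular convolution, which is what makes the one-tap equalizer valid.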

Verifying DFT property

The DFT property given in equation \(5\) can be re-written as
$$ \begin{align} r[n] & = IDFT \left\{ R[k] \right\} \\ & = IDFT \left\{H[k] S[k] \right\} \\ & = IDFT \left\{DFT \left( h[n] \right) DFT \left( s[n] \right) \right\} \\ & = h[n] \circledast s[n] \end{align}\;\;\;\;\;\; (7) $$

To verify this in Matlab, take \(N\)-point DFTs of the received signal and the CIR. Then, we can see that the IDFT of the product of the DFTs of \(\mathbf{s}\) and \(\mathbf{h}\) is equal to the \(N\)-point circular convolution of \(\mathbf{s}\) and \(\mathbf{h}\).
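A NumPy sketch of this verification (stand-in for the Matlab code; the direct circular convolution below is written out explicitly as the reference):

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(8)
h = rng.standard_normal(3)
N = 8

# IDFT of the product of the individual DFTs, as in equation (7)
r = np.real(np.fft.ifft(np.fft.fft(h, N) * np.fft.fft(s, N)))

# reference: direct N-point circular convolution of s and h
h_pad = np.concatenate([h, np.zeros(N - len(h))])
cir = np.array([np.sum(h_pad * s[(n - np.arange(N)) % N]) for n in range(N)])

print(np.allclose(r, cir))   # True: the DFT property holds
```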

Logical Effort



In today’s digital world, the most important aspect of any processor is how fast it can function and support multiple applications. Often, chip design engineers are confronted with bewildering questions in the process of designing a logic circuit:

  • What is the best topology to represent a given function?
  • How many logic stages provide the least delay?
  • How wide should the transistors be in order to optimize the design for area?

Logic designers often use Logical Effort to arrive at these conclusions. It uses a simple model for delay calculations and helps to make rapid comparisons between alternative structures.

CMOS inverter and sizing ratio:

As we all know, gates are made up of transistors. The most basic gate is the NOT gate, famously known as an inverter. Once the properties and operation of the basic inverter are clearly understood, designing more complex structures such as NAND gates, adders, multipliers and even a full scale microprocessor is greatly simplified.


Figure 1: CMOS Inverter

Figure 1 shows the circuit diagram of a CMOS inverter. Here, the input to the inverter is designated as \(A\) and the output as \(A^{*}\). An important thing to note here is that, in the small signal model, the input capacitance of an inverter with \(2:1\) sizing is \(3 C_{in}\). Now, the question is: why is the sizing taken to be \(2:1\)? The major concept behind it revolves around the switching time \(t_p\).

When the input is \(1\) (logic high), the N-MOS transistor conducts while the P-MOS does not, so the charge stored in the output capacitor discharges and the output = \(0\) (logic low). Conversely, when the input is \(0\) (logic low), the N-MOS is not conducting while the P-MOS is, so the output capacitor charges, i.e., output = \(1\) (logic high). Since this charging and discharging takes some time (\( \propto R_{eq} \times C_{L} \)), there will always be some delay between a change in the input and the corresponding change in the output. This is the major source of delay. Since the two transistors do not have similar properties, their individual charging and discharging times will differ. Hence, the gate sizing should be done in such a way that the resultant delay is minimum.

Figure 2 depicts the gate dimensions (\(W\) and \(L\)) of an N-type MOSFET. The ratio of \(W/L\) of the PMOS to that of the NMOS transistor is called the transistor sizing ratio \(\beta\). Normally, the PMOS and NMOS components have the same channel length \(L=L_{p}=L_{n}\), as dictated by the minimum feature size of a given process technology. In a static CMOS design, the driving strengths of the NMOS and the PMOS transistors are balanced by making the PMOS section wider than the NMOS section. Generally, the width of the PMOS section is chosen as 2 to 4 times the width of the NMOS section. This is done to maximize noise margins and to obtain symmetrical properties between the PMOS and NMOS transistors.


Figure 2: N-MOSFET with dimensions marked

Figure 3 shows the definition of various timing parameters used to characterize logic gates. With respect to the output of a logic gate, the following timing parameters are relevant for this discussion.


Figure 3: Definition of timing parameters of logic gates

  • \(t_{pHL}\) = the amount of time taken for the output to change from “high” to “low” level when the input changes. It is measured between the 90% and 10% levels of the amplitude of the output signal.
  • \(t_{pLH}\) = the amount of time taken for the output to change from “low” to “high” level when the input changes. It is measured between the 10% and 90% levels of the amplitude of the output signal.
  • \(t_p\) = the average time taken to change the output, computed as \(t_p= (t_{pHL}+t_{pLH})/2\).

Figure 4 shows the propagation delay of the CMOS inverter as a function of the sizing ratio \(\beta\). As we can see from the graph, \(t_{pLH} = t_{pHL}\) when \(\beta\) is slightly greater than \(2\), but \(t_p\) is least when \(\beta\) is slightly less than \(2\). Hence, in general, the transistor sizing ratio is taken as \(\beta=2\).

Figure 4: Propagation delay of CMOS inverter as a function of sizing ratio β

Inverter delay

Let’s assume a symmetrical inverter with identical rise-time and fall-time properties. Let \(C_L\) be the \(loading \; capacitance\), which is composed of the intrinsic capacitance \(C_{int}\) and the capacitance due to extrinsic components \(C_{ext}\).

$$ C_L = C_{int} + C_{ext} $$

Given the \(equivalent \; resistance\) of the gate \(R_{eq}\) and load capacitance \(C_L\), the propagation delay is given by

$$\begin{align*} t_p & = 0.69 \times R_{eq} \times C_L \\ & = 0.69 \times R_{eq} \left( C_{int}+C_{ext} \right ) \\ & = 0.69 \times R_{eq} \times C_{int} \left( 1+ \frac{C_{ext}}{C_{int}} \right) \\ & = t_{p0} \left (1+ \frac{C_{ext}}{C_{int}} \right ) \end{align*}$$

where \(t_{p0} = 0.69 \times R_{eq} \times C_{int} \) is the delay of inverter loaded by its intrinsic capacitance and is called \(intrinsic \; or \; unloaded \; delay\).
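To make the formula concrete, here is a tiny numerical sketch. The component values below are assumed for illustration only; they are not from the article.

```python
# Hypothetical component values, only to exercise the delay formula above
R_eq  = 10e3     # equivalent gate resistance in ohms (assumed)
C_int = 3e-15    # intrinsic capacitance in farads (assumed)
C_ext = 6e-15    # extrinsic load capacitance in farads (assumed)

t_p0 = 0.69 * R_eq * C_int            # intrinsic (unloaded) delay
t_p  = t_p0 * (1 + C_ext / C_int)     # loaded propagation delay
print(t_p0, t_p)                      # t_p grows linearly with C_ext / C_int
```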

Logical Effort Delay Model

As we know, the delay (\(d\)) of a logic gate has two components:
$$ d=p + f $$
where \(p\) – parasitic delay – is the intrinsic delay of the gate, and
\(f\) is the effort delay.

\(Effort \; delay\) has two components, logical effort (\(g\)) and electrical effort (\(h=C_{out}/C_{in}\)). The effort delay is given by \(f=g \times h\).

\(Electrical \; effort\) can be defined as the effective fanout of the gate: the ratio of the load capacitance to the input capacitance \(C_{in}\) of the gate.

\(Logical \; effort\) is defined as the ratio of the input capacitance of a gate to the input capacitance of an inverter delivering the same output current. Equivalently, it is the number of times worse the gate is at delivering output current than an inverter with identical input capacitance.

Logical effort (\(g\)) depends upon the following parameters:

  • the complexity of the logic function;
  • mainly the topology, not the sizing;
  • to a very small extent, the electrical characteristics of the fabrication process.

The logical effort depends heavily upon the topology because every topology has a different resultant input capacitance, which influences the overall logical effort.

A NAND gate is shown in Figure 5. Here, the input capacitance for input A or B is \(4 \times C_{in}\), while the input capacitance of the inverter was \( 3 \times C_{in}\). The logical effort of the inverter is taken to be \(1\); it serves as the reference for all delay oriented calculations in transistor logic. Hence, the logical effort of the NAND gate is \(4/3\).
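Putting the model together, the delay \(d = p + g \times h\) can be evaluated numerically. The parasitic delays and the fanout below are assumed example values (in units of the basic delay \(\tau\)); only the logical efforts come from the text.

```python
# Logical effort delay model d = p + g*h, in units of the basic delay tau
g_inv, g_nand = 1.0, 4.0 / 3.0   # logical efforts: inverter, 2-input NAND (from text)
p_inv, p_nand = 1.0, 2.0         # parasitic delays (assumed example values)
h = 4.0                          # electrical effort C_out/C_in (assumed fanout)

d_inv  = p_inv  + g_inv  * h     # inverter delay
d_nand = p_nand + g_nand * h     # NAND delay
print(d_inv, d_nand)             # 5.0  7.33...
```

Comparisons like this, repeated across candidate topologies, are how logical effort lets a designer rank alternatives without a full circuit simulation.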


Figure 5: NAND gate

The following table shows the logical efforts of other common gates/topologies.


Table 1: Logical efforts of common gates


How to interpret FFT results – obtaining magnitude and phase information


In the previous post, the interpretation of frequency bins and the frequency axis arrangement (fftshift/ifftshift) for the complex DFT were discussed. In this post, I intend to show you how to obtain magnitude and phase information from the FFT results.


In this discussion, I will take an arbitrary cosine function of the form \(x(t)= A cos \left(2 \pi f_c t + \phi \right) \) and proceed step by step as follows

  1. Represent the signal \(x(t)\) in computer (discrete-time) and plot the signal (time domain)
  2. Represent the signal in frequency domain using FFT (\(X[k]\))
  3. Extract amplitude and phase information from the FFT result
  4. Reconstruct the time domain signal from the frequency domain samples


1. Discrete-time domain representation

Consider a cosine signal of  amplitude \(A=0.5\), frequency \(f_c=10 Hz \) and phase \(\phi= \pi/6 \) radians  (or \(30^{\circ}\) )

$$ x(t) = 0.5 cos \left( 2 \pi 10 t + \pi/6 \right) $$

In order to represent the continuous time signal \(x(t)\) in computer memory, we need to sample the signal at a sufficiently high rate (according to the Nyquist sampling theorem). I have chosen an oversampling factor of \(32\), so that the sampling frequency is \(f_s = 32 \times f_c \), which gives \(640\) samples for a \(2\) second duration of the waveform record.
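This step can be sketched as follows (a NumPy stand-in for the Matlab listing, using the signal parameters given above):

```python
import numpy as np

A, fc, phi = 0.5, 10, np.pi / 6   # amplitude, frequency (Hz), phase (rad)
fs = 32 * fc                      # sampling frequency: 32x oversampling
t = np.arange(0, 2, 1 / fs)       # 2 seconds worth of sampling instants
x = A * np.cos(2 * np.pi * fc * t + phi)
print(len(x))                     # 640 samples
```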

Cosine wave with phase shift

2. Represent the signal in frequency domain using FFT

Let’s represent the signal in the frequency domain using the FFT function. The FFT function computes the \(N\)-point complex DFT. The length of the transformation \(N\) should cover the signal of interest; otherwise we will lose valuable information in the conversion process to the frequency domain. However, we can choose a reasonable length if we know about the nature of the signal.

For example, the cosine signal of our interest is periodic in nature and is of length \(640\) samples (for a 2 second duration signal). We can simply use a lower number \(N=256\) for computing the FFT. In this case, only the first \(256\) time domain samples will be considered for taking the FFT. There is no need to worry about loss of information in this case, as the 256 samples contain a sufficient number of signal cycles from which we can calculate the frequency information.
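A NumPy sketch of this step (stand-in for the Matlab code; the signal is regenerated so the snippet is self-contained):

```python
import numpy as np

A, fc, phi = 0.5, 10, np.pi / 6
fs = 32 * fc
t = np.arange(0, 2, 1 / fs)
x = A * np.cos(2 * np.pi * fc * t + phi)   # 640-sample cosine from step 1

N = 256                                    # FFT length; x is truncated to N samples
X = np.fft.fftshift(np.fft.fft(x, N)) / N  # complex DFT, 1/N scaled, re-ordered
print(X.shape)                             # (256,)
```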

In the code above, \(fftshift\) is used only for obtaining a nice double-sided frequency spectrum that delineates the negative and positive frequencies in order. This transformation is not strictly necessary. A scaling factor of \(1/N\) is used to account for the difference between the FFT implementation in Matlab and the textbook definition of the complex DFT.

3a. Extract amplitude of frequency components (amplitude spectrum)

The FFT function computes the complex DFT, and hence the result is a sequence of complex numbers of the form \(X_{re} + j X_{im} \). The amplitude spectrum is obtained as
$$ |X[k]| = \sqrt{X_{re}^2 + X_{im}^2 } $$

For obtaining a double-sided plot, the ordered frequency axis (result of fftshift) is computed based on the sampling frequency and the amplitude spectrum is plotted.
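A NumPy sketch of the amplitude spectrum computation (stand-in for the Matlab code):

```python
import numpy as np

A, fc, phi = 0.5, 10, np.pi / 6
fs = 32 * fc
t = np.arange(0, 2, 1 / fs)
x = A * np.cos(2 * np.pi * fc * t + phi)

N = 256
X = np.fft.fftshift(np.fft.fft(x, N)) / N
f = np.arange(-N // 2, N // 2) * fs / N    # ordered double-sided frequency axis
mag = np.abs(X)                            # sqrt(Xre^2 + Xim^2)

print(abs(f[np.argmax(mag)]))              # 10.0 -> spikes at +/-10 Hz
print(mag.max())                           # ~0.25, i.e. A/2 in each spike
```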

Amplitude spectrum

3b. Extract phase of frequency components (phase spectrum)

Extracting the correct phase spectrum is a tricky business. I will show you why it is so. The phase of the spectral components is computed as
$$ \angle X[k] = tan^{-1} \left( \frac{X_{im}}{X_{re}} \right) $$

That equation looks naive, but one should be careful when computing inverse tangents using computers. The obvious choice for implementation seems to be the \(atan\) function in Matlab. However, usage of the \(atan\) function will prove disastrous unless additional precautions are taken. The \(atan\) function computes the inverse tangent over two quadrants only, i.e., it returns values only in the \( [-\pi/2 , \pi/2] \) interval, so the phase needs to be unwrapped properly. We can fix this issue by computing the inverse tangent over all four quadrants using the \(atan2(X_{im},X_{re})\) function.

Let’s compute and plot the phase information using the \(atan2\) function and see how the phase spectrum looks.

Noisy phase spectrum

The phase spectrum is completely noisy. Unexpected! The phase spectrum is noisy due to the fact that the inverse tangents are computed from the \(ratio\) of the imaginary part to the real part of the FFT result. Even a small floating-point rounding error will be amplified and manifest incorrectly as useful phase information (read how a computer program approximates very small numbers).

To understand this, print the first few samples of the FFT result and observe that they are not absolute zeros; they are very small numbers of the order of \(10^{-16}\). Computing the inverse tangent of such values will produce incorrect results.

The solution is to define a tolerance threshold and ignore all the computed phase values that are below the threshold.
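The thresholding approach can be sketched as follows (NumPy stand-in; the tolerance value \(10^{-8}\) is an assumed choice, anything well above the \(10^{-16}\) round-off floor works):

```python
import numpy as np

A, fc, phi = 0.5, 10, np.pi / 6
fs = 32 * fc
t = np.arange(0, 2, 1 / fs)
x = A * np.cos(2 * np.pi * fc * t + phi)

N = 256
X = np.fft.fftshift(np.fft.fft(x, N)) / N
f = np.arange(-N // 2, N // 2) * fs / N

X2 = X.copy()
X2[np.abs(X2) < 1e-8] = 0                  # tolerance threshold kills round-off noise
phase = np.arctan2(X2.imag, X2.real)       # four-quadrant inverse tangent (atan2)

print(np.degrees(phase[f == 10.0][0]))     # ~ 30  -> +30 deg registered at +10 Hz
print(np.degrees(phase[f == -10.0][0]))    # ~ -30 -> anti-symmetric counterpart
```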

The recomputed phase spectrum is plotted below. The phase spectrum has correctly registered the \(30^{\circ}\) phase shift at the frequency \(f=10 Hz\). The phase spectrum is anti-symmetric (\( \phi=-30^{\circ} \) at \(f=-10 Hz \) ), which is expected for real-valued signals.
Phase spectrum after fixing rounding off errors

4. Reconstruct the time domain signal from the frequency domain samples

Reconstruction of the time domain signal from the frequency domain samples is pretty straightforward.
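A NumPy sketch of the reconstruction (stand-in for the Matlab code; note the \(1/N\) scaling applied earlier must be undone before the IFFT):

```python
import numpy as np

A, fc, phi = 0.5, 10, np.pi / 6
fs = 32 * fc
t = np.arange(0, 2, 1 / fs)
x = A * np.cos(2 * np.pi * fc * t + phi)

N = 256
X = np.fft.fft(x, N) / N                # forward FFT with the 1/N scaling
x_rec = np.real(np.fft.ifft(X * N))     # undo the scaling, back to time domain

print(np.allclose(x_rec, x[:N]))        # True: first 256 samples reconstructed
```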

Reconstructed signal

The reconstructed signal has preserved the initial phase shift and the frequency of the original signal. Note: the length of the reconstructed signal is only 256 samples (~0.8 seconds duration); this is because the size of the FFT was taken as \(N=256\). Since the signal is periodic, this is not a concern. For more complicated signals, an appropriate FFT length (preferably a value larger than the length of the signal) needs to be used.


Articles in this series:
How to Interpret FFT results – complex DFT, frequency bins and FFTShift
How to Interpret FFT results – obtaining Magnitude and Phase information (this article)
FFT and Spectral Leakage
How to plot FFT using Matlab – FFT of basic signals : Sine and Cosine waves
Generating Basic signals – Square Wave and Power Spectral Density using FFT
Generating Basic signals – Rectangular Pulse and Power Spectral Density using FFT
Generating Basic Signals – Gaussian Pulse and Power Spectral Density using FFT
Chirp Signal – Frequency Sweeping – FFT and power spectral density
Constructing the Auto Correlation Matrix using FFT


How to Interpret FFT results – complex DFT, frequency bins and FFTShift


Four types of Fourier Transforms:


Often, one is confronted with the problem of converting a time domain signal to the frequency domain and vice-versa. The Fourier Transform is an excellent tool to achieve this conversion and is ubiquitously used in many applications. In signal processing, a time domain signal can be \(continuous\) or \(discrete\), and it can be \(aperiodic\) or \(periodic\). This gives rise to four types of Fourier transforms.

Table 1: Four types of Fourier Transform

  • Fourier Transform (FT), a.k.a. Continuous Time Fourier Transform (CTFT): time domain signal is continuous, non-periodic; frequency spectrum is non-periodic, continuous
  • Discrete-Time Fourier Transform (DTFT): time domain signal is discrete, non-periodic; frequency spectrum is periodic, continuous
  • Fourier Series (FS): time domain signal is continuous, periodic; frequency spectrum is non-periodic, discrete
  • Discrete Fourier Transform (DFT): time domain signal is discrete, periodic; frequency spectrum is periodic, discrete

Note that when the signal is discrete in one domain, it will be periodic in the other domain. Similarly, if the signal is continuous in one domain, it will be aperiodic (non-periodic) in the other domain. For simplicity, let’s not venture into the specific equations for each of the transforms above. We will limit our discussion to the DFT, which is widely available as part of software packages like Matlab, Scipy (Python) etc.; moreover, the other transforms can be approximated using the DFT.

Real version and Complex version:

For each of the transforms listed above, there exists a real version and a complex version. The real version of the transform takes in real numbers and gives two sets of real frequency domain points: one set representing the coefficients over the \(cosine\) basis functions and the other set representing the coefficients over the \(sine\) basis functions. The complex version of the transform represents positive and negative frequencies in a single array. The complex versions are flexible in that they can process both complex-valued and real-valued signals. The following figure captures the difference between the real DFT and the complex DFT.

Figure 1: Real and complex DFT

Real DFT:

Consider the case of the N-point \(real\) DFT: it takes in \(N\) samples of a \(real-valued\) time domain waveform \(x[n]\) and gives two arrays of length \(N/2+1\), the sets projected on the cosine and sine functions respectively.

$$\begin{align} X_{re}[k] &= \frac{2}{N} \sum_{n=0}^{N-1} \displaystyle x[n] cos\left( \frac{2 \pi k n}{N} \right)  \nonumber \\ X_{im}[k] &= -\frac{2}{N} \sum_{n=0}^{N-1} \displaystyle x[n] sin\left( \frac{2 \pi k n}{N} \right)  \nonumber \end{align}$$

Here, the time domain index \(n\) runs from \(0 \rightarrow  N-1\), and the frequency domain index \(k\) runs from \(0 \rightarrow  N/2\)

The real-valued time domain signal \(x[n]\) can be synthesized from the real DFT pairs as

$$ x[n] =\sum_{k=0}^{N/2} \displaystyle X_{re}[k] \; cos\left( \frac{2 \pi k n}{N} \right)  -   X_{im}[k] \; sin\left( \frac{2 \pi k n}{N} \right)$$

Caveat: when using the synthesis equation, the values \(X_{re}[0]\) and \(X_{re}[N/2] \) must be divided by two. This quirk is due to the fact that we restrict the analysis to real values only. These types of problems can be avoided by using the complex version of the DFT.

Complex DFT:

Consider the case of the N-point \(complex\) DFT: it takes in \(N\) samples of a \(complex-valued\) time domain waveform \(x[n]\) and produces an array \(X[k]\) of length \(N\).

$$X[k]=\frac{1}{N} \sum_{n=0}^{N-1} x[n] e^{-j2 \pi k n/N}$$

The array values are interpreted as follows:

  • \(X[0]\) represents DC frequency component
  • The next \(N/2\) terms are positive frequency components, with \(X[N/2]\) being the Nyquist frequency (which is equal to half of the sampling frequency)
  • The next \(N/2-1\) terms are negative frequency components (note: negative frequency components are phasors rotating in the opposite direction; they can be optionally omitted depending on the application)

The corresponding synthesis equation (reconstruct \(x[n]\) from frequency domain samples \(X[k]\)) is

$$x[n]=\sum_{k=0}^{N-1} X[k] e^{j2 \pi k n/N} $$

From these equations we can see that the real DFT is computed by projecting the signal on cosine and sine basis functions. However, the complex DFT projects the input signal on exponential basis functions (Euler’s formula connects these two concepts).
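The analysis and synthesis equations above can be implemented directly and checked against each other. The sketch below (NumPy, with a made-up random test signal) follows the text's scaling convention, with the \(1/N\) factor in the forward transform:

```python
import numpy as np

def dft(x):
    """Analysis equation above: X[k] = (1/N) sum_n x[n] e^{-j 2 pi k n / N}."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # DFT matrix
    return (W @ x) / N

def idft(X):
    """Synthesis equation above: x[n] = sum_k X[k] e^{+j 2 pi k n / N}."""
    N = len(X)
    n = np.arange(N)
    W = np.exp(2j * np.pi * np.outer(n, n) / N)
    return W @ X

rng = np.random.default_rng(0)
x = rng.standard_normal(16) + 1j * rng.standard_normal(16)

print(np.allclose(idft(dft(x)), x))    # True: the pair is exactly invertible
```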

When the input signal in the time domain is real-valued, the complex DFT zero-fills the imaginary part during computation (that is its flexibility, and it avoids the caveat needed for the real DFT). The following figure shows how to interpret the raw FFT results in Matlab, which computes the complex DFT. The specifics will be discussed next with an example.

Figure 2: Interpretation of frequencies in complex DFT output

Fast Fourier Transform (FFT)

The FFT function in Matlab is based on an algorithm published in 1965 by J. W. Cooley and J. W. Tukey for efficiently calculating the DFT. It exploits the special structure of the DFT when the signal length is a power of 2; when this happens, the computation complexity is significantly reduced. The FFT length is generally taken as a power of 2; this is called a \(radix-2\) FFT, which exploits the twiddle factors. The FFT length can also be odd, as in the Prime-factor FFT algorithm, where the FFT length factors into co-primes.

FFT is widely available in software packages like Matlab, Scipy etc. The FFT in Matlab/Scipy implements the complex version of the DFT. Matlab’s FFT implementation computes the complex DFT very similarly to the above equations, except for the scaling factor. For comparison, Matlab’s FFT implementation computes the complex DFT and its inverse as

$$X[k]=\sum_{n=0}^{N-1} x[n] e^{-j2 \pi k n/N}$$
$$x[n]=\frac{1}{N}  \sum_{k=0}^{N-1} X[k] e^{j2 \pi k n/N} $$

The Matlab commands that implement the above equations are \(fft\) and \(ifft\) respectively.
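NumPy's fft/ifft pair follows the same convention as Matlab's, so a quick NumPy check (used here as a stand-in) confirms that the \(1/N\) factor lives in the inverse transform, not the forward one:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

X = np.fft.fft(x)         # like Matlab's fft: no 1/N factor in the forward DFT
x_back = np.fft.ifft(X)   # like Matlab's ifft: the 1/N factor is applied here

print(X[0].real)               # 10.0 -> DC bin equals the plain sum of samples
print(np.allclose(x_back, x))  # True: round trip recovers the signal
```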

Interpreting the FFT results

Let’s assume that \(x[n]\) is the time domain cosine signal of frequency \(f_c=10 \; Hz\) that is sampled at a frequency \(f_s=32 \times f_c\) for representing it in computer memory.

Figure 3: A 2 seconds record of 10 Hz cosine wave

Let’s consider taking an \(N=256\)-point FFT; \(256\) is the \(8^{th}\) power of \(2\).

Note: the FFT length should be sufficient to cover the entire length of the input signal. If \(N\) is less than the length of the input signal, the input signal will be truncated when computing the FFT. In our case, the cosine wave is of 2 seconds duration and will have 640 points (a \(10 \; Hz\) frequency wave sampled at a 32 times oversampling factor will have \(2 \times 32 \times 10 = 640\) samples in a 2 second record). Since our input signal is periodic, we can safely use the \(N=256\)-point FFT; the FFT will, in effect, periodically extend the signal (see the additional topic on spectral leakage that explains this extension concept).

Due to Matlab’s index starting at 1, the DC component of the FFT decomposition is present at index 1.

That’s pretty easy. Note that the indices of the raw FFT output are integers from \(1 \rightarrow N\). We need to process them to convert these integers to \(frequencies\). That is where the \(sampling\) frequency comes in.

Each point/bin in the FFT output array is spaced by the frequency resolution \(\Delta f\), that is calculated as
$$ \Delta f = \frac{f_s}{N} $$
where, \(f_s\) is the sampling frequency and \(N\) is the FFT size that is considered. Thus, for our example, each point in the array is spaced by the frequency resolution
$$ \Delta f = \frac{f_s}{N} = \frac{32*f_c}{256} = \frac{320}{256} = 1.25 Hz$$

Now, the \(10\; Hz\) cosine signal will leave a spike at the 8th bin (\(10/1.25=8\)), which is located at index 9 (see next figure).
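This bookkeeping can be sketched in a couple of lines (the +1 converts the zero-based bin offset to Matlab's 1-based index):

```python
fc, N = 10, 256
fs = 32 * fc

df = fs / N               # frequency resolution, Hz per bin
k = fc / df               # bin offset of the 10 Hz spike from DC
print(df, k, int(k) + 1)  # 1.25 8.0 9 -> spike at Matlab index 9
```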

Therefore, from the frequency resolution, the entire frequency axis can be computed as

Now we can plot the absolute value of the FFT against frequencies as

The following plot shows the frequency axis and the sample index as it is for the complex FFT output.

Figure 4: Magnitude response from FFT output plotted against sample index (top) and computed frequencies (bottom)

After the frequency axis is properly transformed with respect to the sampling frequency, we note that the cosine signal has registered a spike at \(10 \; Hz\). In addition, it has also registered a spike at the \(256-8=248^{th}\) sample, which belongs to the negative frequency portion. Since we know the nature of the signal, we can optionally ignore the negative frequencies. The sample at the Nyquist frequency (\(f_s/2 \)) marks the boundary between the positive and negative frequencies.

Note that the complex numbers surrounding the Nyquist index are complex conjugates and they represent positive and negative frequencies respectively.


From the plot we see that the frequency axis starts with DC, followed by the positive frequency terms, in turn followed by the negative frequency terms. To introduce proper order in the x-axis, one can use the \(fftshift\) function in Matlab, which arranges the frequencies in order: negative frequencies \(\rightarrow\) DC \(\rightarrow\) positive frequencies. The fftshift function needs to be used carefully when \(N\) is odd.

For even \(N\), the original order returned by the FFT is as follows (note: all indices below correspond to Matlab’s indexing)

  • \(X[1]\) represents DC frequency component
  • \(X[2]\) to \(X[N/2]\) terms are positive frequency components
  • \(X[N/2+1]\) is the Nyquist frequency (\(F_s/2\)) that is common to both positive and negative frequencies. We will consider it as part of negative frequencies to have the same equivalence with the fftshift function.
  • \(X[N/2+1]\) to \(X[N]\) terms are considered as negative frequency components

FFTshift shifts the DC component to the center of the spectrum. It is important to remember that the Nyquist frequency at the \((N/2+1)^{th}\) Matlab index is common to both the positive and negative frequency sides. The fftshift command puts the Nyquist frequency on the negative frequency side. This is captured in the following illustration.

Figure 5: Role of FFTShift in ordering the frequencies

Therefore, when \(N\) is even, ordered frequency axis is set as
$$f = \Delta f \left[ -\frac{N}{2}:1:\frac{N}{2}-1 \right] = \frac{f_s}{N} \left[ -\frac{N}{2}:1:\frac{N}{2}-1 \right] $$

When \(N\) is odd, the ordered frequency axis should be set as
$$f = \Delta f \left[ -\frac{N-1}{2}:1:\frac{N-1}{2} \right] = \frac{f_s}{N} \left[ -\frac{N-1}{2}:1:\frac{N-1}{2} \right] $$

Computing the fftshift using both the manual method and Matlab’s built-in command, and plotting the results superimposed on each other, shows that the two methods are in good agreement.
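A small NumPy stand-in for that comparison (assuming even \(N\); the manual method simply swaps the two halves of the array):

```python
import numpy as np

N = 8                                      # even-length example
X = np.arange(N, dtype=float)              # stand-in for a raw FFT output

manual = np.concatenate([X[N // 2:], X[:N // 2]])   # swap the two halves by hand
print(np.array_equal(manual, np.fft.fftshift(X)))   # True: matches the built-in
```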

Figure 6: Magnitude response of FFT result after applying FFTShift : plotted against sample index (top) and against computed frequencies (bottom)

Comparing the bottom plots of Figure 4 and Figure 6, we see that the ordered frequency axis is much easier to interpret.


One can undo the effect of fftshift by employing the \(ifftshift\) function, which restores the raw frequency order. If the FFT output has been reordered using \(fftshift\), the frequency components must be restored to their original order BEFORE taking the \(IFFT\); that is, \(ifft(ifftshift(fftshift(X)))\) is equivalent to \(ifft(X)\).
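A minimal NumPy sketch of this round trip (the NumPy functions follow the same conventions as their Matlab counterparts):

```python
import numpy as np

x = np.random.randn(16)                 # arbitrary time-domain signal
X = np.fft.fft(x)                       # FFT output in raw frequency order
X_ordered = np.fft.fftshift(X)          # DC moved to the center for display

# Restore raw order with ifftshift BEFORE taking the IFFT
x_back = np.fft.ifft(np.fft.ifftshift(X_ordered))
assert np.allclose(x, x_back)           # original signal recovered
```

Skipping the ifftshift step would, in general, hand the IFFT a circularly shifted spectrum and produce a modulated (wrong) time-domain result.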

Some observations on FFTShift and IFFTShift

When \(N\) is odd, the fftshift and ifftshift functions produce different results for an arbitrary sequence. However, when used in tandem, they restore the original sequence.

When \(N\) is even, the fftshift and ifftshift functions produce the same result for an arbitrary sequence. Used in tandem, they again restore the original sequence.
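These observations can be checked directly; a NumPy sketch (same conventions as the Matlab functions):

```python
import numpy as np

x_even = np.arange(6)   # N even
x_odd = np.arange(7)    # N odd

# Even N: fftshift and ifftshift coincide
assert np.array_equal(np.fft.fftshift(x_even), np.fft.ifftshift(x_even))

# Odd N: they differ ...
assert not np.array_equal(np.fft.fftshift(x_odd), np.fft.ifftshift(x_odd))

# ... but used in tandem (in either order) they restore the original sequence
assert np.array_equal(np.fft.ifftshift(np.fft.fftshift(x_odd)), x_odd)
assert np.array_equal(np.fft.fftshift(np.fft.ifftshift(x_odd)), x_odd)
```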



Articles in this series:
How to Interpret FFT results – complex DFT, frequency bins and FFTShift
How to Interpret FFT results – obtaining Magnitude and Phase information
FFT and Spectral Leakage
How to plot FFT using Matlab – FFT of basic signals : Sine and Cosine waves
Generating Basic signals – Square Wave and Power Spectral Density using FFT
Generating Basic signals – Rectangular Pulse and Power Spectral Density using FFT
Generating Basic Signals – Gaussian Pulse and Power Spectral Density using FFT
Chirp Signal – Frequency Sweeping – FFT and power spectral density
Constructing the Auto Correlation Matrix using FFT

Launched : New Q&A forum for discussions


‘Gaussianwaves Answers’ – a new Q&A-style forum – is now open and ready for use. The discussion forum is open to all, and everyone is encouraged to contribute their knowledge.

Posts in this forum can be about anything related to any area of electronics & communication engineering – something you have discovered or mastered that you think may be helpful to other forum users.

As with most other forums, all questions and posts will be open for answers and replies, so that the ideas presented can be discussed and questions answered. If you are the author of a question and decide some changes are necessary, please go ahead and edit it.

To get things started, I have posted “Your Best Resources To Learn Signal Processing” as the first post in the new Q&A site.

Some of the salient features of this forum include:

Amazing Features

  • Votes – Registered users can vote up or down on both questions and answers.
  • Subscriptions – Users can subscribe to tags, categories, or questions and get notified of activity.
  • Points and reputation – Users gain or lose reputation points for every activity.
  • Flag and moderate – Flag questions for moderation and review by the community.
  • Private Q&A – Users can ask private questions that are visible only to users at moderator level and above.
  • Social login – Any visitor can quickly log in using their social media accounts; no registration required.
  • Equations – Include LaTeX-formatted equations in your questions and replies (Posting LATEX equations).
  • Code – Include programming code in your questions and replies (Posting programming codes).
  • Image uploads – An inbuilt image uploader allows users to attach images to a question or answer.
  • Comments – Registered users can comment on both questions and answers.

User profile

  • Editable profile – Each user is provided with an editable profile.
  • Follow and followers – Follow other users to get updates and be followed by others.
  • About page – A place for a short bio.
  • Notification page – Notifications for users.
  • Subscription page – Shows all of a user’s subscribed questions, tags, and categories.

As you’ll see, all posts are subject to review, editing, or even deletion depending on their content. This is not censorship: posts will only be edited if they contain inappropriate content, and deletions will likely be very rare, occurring only if a post is spam or entirely inappropriate.

This forum could become a very useful knowledge base of user-generated information. The more participation there is, the better and more useful this forum will be, so please contribute!

Access the new forum here

Significance of RMS (Root Mean Square) value

The Root Mean Square (RMS) value is one of the most important parameters that signify the \(size \; of \; a \;signal\).

Defining the term “size”:

In signal processing, a signal is viewed as a function of time. The term “size of a signal” denotes the strength of the signal, and it is crucial to know the size of a signal used in a given application. For example, we may want to know how much electricity is needed to power an LCD monitor as opposed to a CRT monitor. These applications differ and have different tolerances; thus, the amount of electricity driving each device will also differ.

A given signal’s size can be measured in many ways. Some of them are:

  • Total energy
  • Square root of total energy
  • Integral absolute value
  • Maximum or peak absolute value
  • Root Mean Square (RMS) value
  • Average Absolute (AA) value

RMS value

The RMS value of a signal \(x(t)\) is the square root of the average of the squared signal, mathematically represented as $$E_{RMS} = \sqrt{ \frac{1}{T} \int_{0}^{T} x^2(t) \, dt} $$ For a signal represented by \(N\) discrete samples \([x_0,x_1,\cdots,x_{N-1}]\), the RMS value is given as $$E_{RMS} = \sqrt{\frac{x_0^2+x_1^2+\cdots+x_{N-1}^2}{N}} $$ If the signal’s frequency-domain representation \(X(f)\) is available, then, as a consequence of Parseval’s theorem, the RMS value can be calculated as $$E_{RMS} = \sqrt{\sum_{f} \left| \frac{X(f)}{N} \right|^2}$$

Implementing in Matlab:
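The original Matlab listing is not reproduced here; the following is an equivalent NumPy sketch computing the RMS value of a sine wave both from the discrete-time definition and via Parseval's theorem (the signal parameters are arbitrary choices for illustration):

```python
import numpy as np

# Test signal: amplitude-2 sine at 50 Hz, sampled at 1 kHz for one second
fs, f, A = 1000.0, 50.0, 2.0
t = np.arange(0, 1.0, 1.0 / fs)          # integer number of cycles
x = A * np.sin(2 * np.pi * f * t)
N = len(x)

rms_time = np.sqrt(np.mean(x ** 2))               # discrete-time definition
X = np.fft.fft(x)
rms_freq = np.sqrt(np.sum(np.abs(X / N) ** 2))    # via Parseval's theorem

assert np.isclose(rms_time, rms_freq)             # both routes agree
assert np.isclose(rms_time, A / np.sqrt(2))       # sine RMS = A / sqrt(2)
```

The two results agree because Parseval's theorem equates the total energy computed in the time domain with that computed in the frequency domain.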

Significance of RMS value

  • It is one of the most important parameters used to describe the strength of an Alternating Current (AC).
  • The RMS value of an AC voltage/current is equivalent to the DC voltage/current that produces the same heating effect when applied across an identical resistor. Hence, it is also a measure of the energy content of a given signal.
  • In statistics, for any zero-mean stationary random signal, the RMS value is the same as the standard deviation of the signal. Example: the \(delay \; spread \) of a multipath channel is often calculated as the RMS value of the \(power \; delay \; profile \) (PDP).
  • When two uncorrelated (or orthogonal) signals are added together, such as noise from two independent sources, the RMS value of their sum equals the square root of the sum of the squares of their individual RMS values.
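The last property is easy to verify numerically; a NumPy sketch using two independent Gaussian noise sources (the standard deviations 1.0 and 2.0 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
n1 = rng.normal(0, 1.0, 100000)     # noise source 1, RMS ~ 1.0
n2 = rng.normal(0, 2.0, 100000)     # independent noise source 2, RMS ~ 2.0

def rms(v):
    return np.sqrt(np.mean(v ** 2))

combined = rms(n1 + n2)                            # RMS of the sum
predicted = np.sqrt(rms(n1) ** 2 + rms(n2) ** 2)   # root-sum-square rule

# They agree to within sampling error (~ sqrt(5) here)
assert abs(combined - predicted) / predicted < 0.02
```

For correlated signals the cross-term \(2 \cdot \text{mean}(n_1 n_2)\) does not vanish, and the simple root-sum-square rule no longer holds.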


See also

Basics of – Power and Energy of a signal
Calculation of power of a signal and verifying it through Matlab.


Invitation to Submit your work for publication at gaussianwaves.com

GaussianWaves.com invites members and non-members to submit original feature articles in the field of electronics and communication engineering. The intended scope spans a wide range of sub-disciplines, such as signal processing, communications, electronics design, applied mathematics for signal processing, biomedical signal processing, and image processing.

The articles are categorized into following disciplines:

  • Multichannel & multimodal signal processing
  • Audio/Speech processing
  • Biomedical signal processing
  • Image/multimedia processing
  • Analog Communications
  • Digital communications
  • Implementation, design, and hardware for signal processing/communications
  • Statistical signal processing
  • Random process and Probability
  • Electronic Circuits
  • VLSI/embedded systems, etc.

Six types of submission are accepted for publication at Gaussianwaves.com:

  • Research article (empirical-quantitative and qualitative- and/or theoretical)
  • Research note/Tutorials
  • College projects
  • Blog posts on a particular topic
  • Code snippets (Matlab, Python, C, C++, etc.)
  • Book reviews

Acceptable format : MS Word document with relevant images (only original articles submitted in the author’s name are accepted)
Mode of submission : email to reach.gaussianwaves@gmail.com

Kind regards,