What is one reason to use the attenuator function that is present on many HF transceivers?
The attenuator reduces very strong signals’ strength to avoid receiver overload.
Which of the following is a function of a Digital Signal Processor?
A Digital Signal Processor (DSP) can help remove noise from received signals.
Which of the following is an advantage of a receiver DSP IF filter as compared to an analog filter?
A DSP IF filter offers a wide variety of filter shapes and bandwidths, as compared to analog filters.
Why is it good to match receiver bandwidth to the bandwidth of the operating mode?
Matching the receiver bandwidth to the bandwidth of the operating mode results in the best signal to noise ratio, as noise outside the signal receive band is filtered.
Which of the following is needed for a Digital Signal Processor IF filter?
To produce a DSP IF filter, an analog to digital converter feeds into a digital processor chip, and the output of the chip is fed into a digital to analog converter.
How is Digital Signal Processor filtering accomplished?
DSP filtering requires converting an analog signal into digital form, then operating on the digital signal with a digital processor.
What is meant by the term “software defined radio” (SDR)?
In an SDR radio, most major signal processing functions are performed by software.
Which of the following is an advantage of a transceiver controlled by a direct digital synthesizer (DDS)?
In today’s digital world, the most important aspects of any processor are how fast it can run and how well it can support multiple applications. Chip design engineers are often confronted with bewildering questions when designing a logic block:
What is the best topology to represent a given function?
How many logic stages give the least delay?
How wide should the transistors be to optimize the design for area?
Logic designers often use Logical Effort to answer these questions. It uses a simple model for delay calculation and enables rapid comparisons between alternative structures.
CMOS inverter and sizing ratio:
As we all know, gates are made up of transistors. The most basic gate is the NOT gate, better known as the inverter. Once the properties and operation of the basic inverter are clearly understood, designing more complex structures such as NAND gates, adders, multipliers and even a full-scale microprocessor is greatly simplified.
Figure 1: CMOS Inverter
Figure 1 shows the circuit diagram of a CMOS inverter. Here, the input to the inverter is designated \(A\) and the output is designated \(A^{*}\). An important point to note is that the input capacitance of the inverter with \(2:1\) sizing is \(3 \times C_{in}\). Now, why is the sizing taken to be \(2:1\)? The answer revolves around the switching time \(t_p\).
When the input is \(1\) (logic high), the NMOS transistor conducts and the PMOS does not, so the charge stored in the output capacitor discharges and the output is \(0\) (logic low). Conversely, when the input is \(0\) (logic low), the NMOS is off while the PMOS conducts, so the output capacitor charges, i.e., the output is \(1\) (logic high). Since this charging and discharging takes time (\( \propto R_{eq} \times C_{L} \)), there is always some delay between a change in the input and the corresponding change in the output. This is the major source of delay. Since the two transistors do not have identical properties, their individual charging and discharging times differ. Hence, the gates should be sized such that the resultant delay is minimum.
Figure 2 depicts the gate dimensions (\(W\) and \(L\)) on an N-type MOSFET. The ratio of \(W/L\) of the PMOS transistor to that of the NMOS transistor is called the transistor sizing ratio \(\beta\). Normally, the PMOS and NMOS devices have the same channel length \(L=L_{p}=L_{n}\), as dictated by the minimum feature size of a given process technology. In a static CMOS design, the driving strengths of the NMOS and PMOS transistors are balanced by making the PMOS section wider than the NMOS section; generally, the width of the PMOS section is chosen as 2 to 4 times the width of the NMOS section. This is done to maximize noise margins and to obtain symmetrical switching properties.
Figure 2: N-MOSFET with dimensions marked
Figure 3 shows the definition of the various timing parameters used to characterize logic gates. With respect to the output of a logic gate, the following timing parameters are relevant for this discussion.
Figure 3: Definition of timing parameters of logic gates
\(t_{pHL}\) = the time taken for the output to change from “high” to “low” when the input changes. It is measured between the 90% and 10% levels of the output signal amplitude.
\(t_{pLH}\) = the time taken for the output to change from “low” to “high” when the input changes. It is measured between the 10% and 90% levels of the output signal amplitude.
\(t_p\) = the average time taken to change the output, computed as \(t_p= (t_{pHL}+t_{pLH})/2\).
Figure 4 shows the propagation delay of the CMOS inverter as a function of the sizing ratio \(\beta\). As we can see from the graph, \(t_{pLH} = t_{pHL}\) when \(\beta\) is slightly greater than \(2\), but \(t_p\) is smallest when \(\beta\) is slightly less than \(2\). Hence, in general, the transistor sizing ratio is taken as \(\beta=2\).
Figure 4: Propagation delay of CMOS inverter as a function of sizing ratio β
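The trend in Figure 4 can be reproduced with a first-order sketch. Assume (an illustrative assumption, not taken from the figure) that at equal width the PMOS equivalent resistance is \(\gamma \approx 3\) times the NMOS's, and that the load capacitance grows with the total gate width \((1+\beta)\). Then \(t_{pHL} \propto (1+\beta)\), \(t_{pLH} \propto (\gamma/\beta)(1+\beta)\), and minimizing \(t_p\) gives \(\beta_{opt}=\sqrt{\gamma}\), which lands below 2, as the figure shows:

```python
import numpy as np

gamma = 3.0                    # assumed PMOS/NMOS resistance ratio at equal width
beta = np.linspace(0.5, 5, 1000)

t_phl = (1 + beta)                       # pull-down: R fixed, C grows with total width
t_plh = (gamma / beta) * (1 + beta)      # pull-up: PMOS resistance shrinks as 1/beta
t_p = 0.5 * (t_phl + t_plh)              # average propagation delay (arbitrary units)

beta_opt = beta[np.argmin(t_p)]
print(beta_opt)                          # close to sqrt(gamma), about 1.73
```

This minimal model explains why the optimum \(\beta\) sits below the "equal drive strength" value of 2: a wider PMOS lowers \(t_{pLH}\) but also adds load capacitance, and the two effects balance at \(\sqrt{\gamma}\).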
Inverter delay
Let’s assume a symmetrical inverter with identical rise-time and fall-time properties. Let \(C_L\) be the \(loading \; capacitance\), composed of the intrinsic capacitance \(C_{int}\) and the capacitance due to extrinsic components, \(C_{ext}\).
$$ C_L = C_{int} + C_{ext} $$
Given the \(equivalent \; resistance\) of the gate \(R_{eq}\) and the load capacitance \(C_L\), the propagation delay is given by

$$ t_p = 0.69 \times R_{eq} \times C_L = 0.69 \times R_{eq} \left( C_{int} + C_{ext} \right) = t_{p0} \left( 1 + \frac{C_{ext}}{C_{int}} \right) $$
where \(t_{p0} = 0.69 \times R_{eq} \times C_{int} \) is the delay of inverter loaded by its intrinsic capacitance and is called \(intrinsic \; or \; unloaded \; delay\).
Logical Effort Delay Model
As we know, the delay (\(d\)) of a logic gate has two components:
$$ d=p + f $$
where, \(p\) – Parasitic delay – is the intrinsic delay of the gate
\(f\) – Effort delay
\(Effort \; delay\) has two components, logical effort (\(g\)) and electrical effort (\(h=C_{out}/C_{in}\)). The effort delay is given by \(f=g \times h\).
\(Electrical \; effort\) can be defined as the effective fanout of the gate: the ratio of the load capacitance \(C_{out}\) to the input capacitance \(C_{in}\) of the gate.
\(Logical \; effort\) is defined as the ratio of the input capacitance of a gate to the input capacitance of an inverter delivering the same output current. Equivalently, it expresses how many times worse the gate is at delivering output current than an inverter with identical input capacitance.
Logical effort (\(g\)) depends upon following parameters:
Complexity of the logic function
Depends mainly on topology, not sizing.
To a small extent, it may also depend on the fabrication process.
The logical effort depends heavily on the topology because each topology presents a different input capacitance, which determines the overall logical effort.
A NAND gate is shown in Figure 5. Here the input capacitance at input A or B is \(4 \times C_{in}\), while the input capacitance of the inverter was \( 3 \times C_{in}\). The logical effort of the inverter is defined to be \(1\), and it serves as the reference for all delay-oriented calculations in transistor logic. Hence, the logical effort of the NAND gate is \(4/3\).
Figure 5: NAND gate
Following table shows the logical efforts of other common gates/topologies
Table 1: Logical efforts of common gates
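Putting the model together: in units of the inverter's intrinsic delay, the normalized delay of a gate is \(d = g \times h + p\). As a sketch (the parasitic delays here are the usual textbook estimates of 1 for an inverter and 2 for a 2-input NAND, not values from this article), compare an inverter and a NAND each driving a fanout of \(h=4\):

```python
def gate_delay(g, h, p):
    """Normalized logical-effort delay d = g*h + p (in units of the inverter delay)."""
    return g * h + p

# g values from the discussion above; p values are standard textbook estimates
inv_delay = gate_delay(g=1.0, h=4, p=1.0)        # inverter: 1*4 + 1 = 5
nand_delay = gate_delay(g=4 / 3.0, h=4, p=2.0)   # 2-input NAND: (4/3)*4 + 2, about 7.33
print(inv_delay, nand_delay)
```

The gap between the two numbers is exactly what logical effort quantifies: the NAND pays both a higher effort delay (worse current drive per unit input capacitance) and a higher parasitic delay.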
In the previous post, Interpretation of frequency bins, frequency axis arrangement (fftshift/ifftshift) for complex DFT were discussed. In this post, I intend to show you how to obtain magnitude and phase information from the FFT results.
Outline
In this discussion, I will take an arbitrary cosine function of the form \(x(t)= A cos \left(2 \pi f_c t + \phi \right) \) and proceed step by step as follows
Represent the signal \(x(t)\) in computer (discrete-time) and plot the signal (time domain)
Represent the signal in frequency domain using FFT (\(X[k]\))
Extract amplitude and phase information from the FFT result
Reconstruct the time domain signal from the frequency domain samples
Consider a cosine signal of amplitude \(A=0.5\), frequency \(f_c=10 Hz \) and phase \(\phi= \pi/6 \) radians (or \(30^{\circ}\) )
$$ x(t) = 0.5 cos \left( 2 \pi 10 t + \pi/6 \right) $$
In order to represent the continuous time signal \(x(t)\) in computer memory, we need to sample the signal at a sufficiently high rate (according to the Nyquist sampling theorem). I have chosen an oversampling factor of \(32\), so the sampling frequency is \(f_s = 32 \times f_c \), which gives \(640\) samples over a \(2\) second waveform record.
MATLAB
A=0.5;%amplitude of the cosine wave
fc=10;%frequency of the cosine wave
phase=30;%desired phase shift of the cosine in degrees
fs=32*fc;%sampling frequency with oversampling factor 32
t=0:1/fs:2-1/fs;%2 seconds duration
phi=phase*pi/180;%convert phase shift in degrees in radians
x=A*cos(2*pi*fc*t+phi);%time domain signal with phase shift
figure;plot(t,x);%plot the signal
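For readers working in Python, the same samples can be generated with NumPy (a line-for-line translation of the Matlab snippet above, with the same variable names):

```python
import numpy as np

A = 0.5                                    # amplitude of the cosine wave
fc = 10                                    # frequency of the cosine wave (Hz)
phase = 30                                 # desired phase shift in degrees
fs = 32 * fc                               # sampling frequency, oversampling factor 32
t = np.arange(0, 2, 1 / fs)                # 2 seconds duration
phi = phase * np.pi / 180                  # phase shift converted to radians
x = A * np.cos(2 * np.pi * fc * t + phi)   # time domain signal with phase shift
print(len(t))                              # 640 samples, as computed above
```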
Represent the signal in frequency domain using FFT
Let’s represent the signal in the frequency domain using the FFT function. The FFT function computes the \(N\)-point complex DFT. The transform length \(N\) should cover the signal of interest; otherwise we will lose valuable information in the conversion to the frequency domain. However, we can choose a shorter length if we know the nature of the signal.
For example, the cosine signal of interest is periodic and is \(640\) samples long (for the 2 second duration). We can simply use a lower number, \(N=256\), for computing the FFT; in this case only the first \(256\) time domain samples are considered. There is no need to worry about loss of information here, as the \(256\) samples contain enough signal cycles to extract the frequency information.
MATLAB
N=256;%FFT size
X=1/N*fftshift(fft(x,N));%N-point complex DFT
In the code above, \(fftshift\) is used only for obtaining a nice double-sided frequency spectrum that delineates negative frequencies and positive frequencies in order. This transformation is not necessary. A scaling factor \(1/N\) was used to account for the difference between the FFT implementation in Matlab and the text definition of complex DFT.
3a. Extract amplitude of frequency components (amplitude spectrum)
The FFT function computes the complex DFT, and hence the result is a sequence of complex numbers of the form \(X_{re} + j X_{im} \). The amplitude spectrum is obtained as
$$ |X[k]| = \sqrt{X_{re}^2 + X_{im}^2 } $$
For obtaining a double-sided plot, the ordered frequency axis (result of fftshift) is computed based on the sampling frequency and the amplitude spectrum is plotted.
MATLAB
df=fs/N;%frequency resolution
sampleIndex=-N/2:N/2-1;%ordered index for FFT plot
f=sampleIndex*df;%x-axis index converted to ordered frequencies
stem(f,abs(X));%magnitudes vs frequencies
xlabel('f (Hz)');ylabel('|X(k)|');
3b. Extract phase of frequency components (phase spectrum)
Extracting the correct phase spectrum is a tricky business. I will show you why it is so. The phase of the spectral components are computed as
$$ \angle X[k] = tan^{-1} \left( \frac{X_{im}}{X_{re}} \right) $$
That equation looks naive, but one should be careful when computing inverse tangents on a computer. The obvious choice for implementation seems to be the \(atan\) function in Matlab. However, using \(atan\) will prove disastrous unless additional precautions are taken. The \(atan\) function computes the inverse tangent over two quadrants only, i.e., it returns values only in the \( [-\pi/2 , \pi/2] \) interval, so phases that belong in the other two quadrants come out wrong. We can fix this by computing the inverse tangent over all four quadrants using the \(atan2(X_{im},X_{re})\) function.
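A quick numerical check (Python shown for illustration) makes the quadrant problem concrete: for a phasor in the third quadrant, the two-quadrant arctangent folds the angle back into \([-\pi/2, \pi/2]\), while the four-quadrant version recovers it correctly.

```python
import numpy as np

re, im = -1.0, -1.0   # a phasor in the third quadrant; the true angle is -135 degrees

two_quadrant = np.degrees(np.arctan(im / re))    # folds into [-90, 90]: gives +45
four_quadrant = np.degrees(np.arctan2(im, re))   # full range (-180, 180]: gives -135
print(two_quadrant, four_quadrant)
```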
Let’s compute and plot the phase information using the \(atan2\) function and see how the phase spectrum looks.
MATLAB
phase=atan2(imag(X),real(X))*180/pi;%phase information
plot(f,phase);%phase vs frequencies
The phase spectrum is completely noisy. Unexpected! The phase spectrum is noisy because the inverse tangent is computed from the \(ratio\) of the imaginary part to the real part of the FFT result. Even a small floating-point rounding error gets amplified and manifests incorrectly as useful phase information (read how a computer program approximates very small numbers).
To see this, print the first few samples of the FFT result and observe that they are not exact zeros, but very small numbers on the order of \(10^{-16}\). Computing the inverse tangent of their ratio gives incorrect results.
The solution is to define a tolerance threshold and ignore all the computed phase values that are below the threshold.
MATLAB
X2=X;%store the FFT results in another array
%detect noise (very small numbers in the order of eps) and ignore them
threshold=max(abs(X))/10000;%tolerance threshold
X2(abs(X)<threshold)=0;%mask out values that are below the threshold
phase=atan2(imag(X2),real(X2))*180/pi;%phase information
plot(f,phase);%phase vs frequencies
The recomputed phase spectrum is plotted below. The phase spectrum has correctly registered the \(30^{\circ}\) phase shift at the frequency \(f=10 Hz\). The phase spectrum is anti-symmetric (\( \phi=-30^{\circ} \) at \(f=-10 Hz \) ), which is expected for real-valued signals.
4. Reconstruct the time domain signal from the frequency domain samples
Reconstruction of the time domain signal from the frequency domain sample is pretty straightforward
MATLAB
x_recon=N*ifft(ifftshift(X),N);%reconstructed signal
t=[0:1:length(x_recon)-1]/fs;%recompute time index
plot(t,x_recon);%reconstructed signal
The reconstructed signal has preserved the initial phase shift and the frequency of the original signal. Note: the reconstructed signal is only \(256\) samples long (~0.8 seconds), because the FFT size was taken as \(N=256\). Since the signal is periodic, this is not a concern. For more complicated signals, an appropriate FFT length (preferably larger than the length of the signal) needs to be used.
Often, one is confronted with the problem of converting a time domain signal to the frequency domain and vice-versa. The Fourier Transform is an excellent tool to achieve this conversion and is ubiquitously used in many applications. In signal processing, a time domain signal can be \(continuous\) or \(discrete\), and it can be \(aperiodic\) or \(periodic\). This gives rise to four types of Fourier transforms.
Table 1: Four types of Fourier Transform

Transform | Nature of time domain signal | Nature of frequency spectrum
Fourier Transform (FT), a.k.a. Continuous Time Fourier Transform (CTFT) | continuous, non-periodic | non-periodic, continuous
Discrete-Time Fourier Transform (DTFT) | discrete, non-periodic | periodic, continuous
Fourier Series (FS) | continuous, periodic | non-periodic, discrete
Discrete Fourier Transform (DFT) | discrete, periodic | periodic, discrete
Note that when a signal is discrete in one domain, it is periodic in the other domain. Similarly, if a signal is continuous in one domain, it is aperiodic (non-periodic) in the other. For simplicity, let’s not venture into the specific equations for each of the transforms above. We will limit our discussion to the DFT, which is widely available in software packages like Matlab, Scipy (Python) etc.; the other transforms can be approximated using the DFT.
Real version and Complex version:
For each of the transforms listed above, there exists a real version and a complex version. The real version of the transform takes in real numbers and gives two sets of real frequency domain points: one set representing coefficients over the \(cosine\) basis functions, and the other set representing coefficients over the \(sine\) basis functions. The complex version represents positive and negative frequencies in a single array. The complex versions are flexible in that they can process both complex-valued and real-valued signals. The following figure captures the difference between the real DFT and the complex DFT.
Real DFT:
Consider the case of the N-point \(real\) DFT: it takes in \(N\) samples of a \(real-valued\) time domain waveform \(x[n]\) and gives two arrays of length \(N/2+1\), the projections onto the cosine and sine functions respectively.
Here, the time domain index \(n\) runs from \(0 \rightarrow N-1\) and the frequency domain index \(k\) runs from \(0 \rightarrow N/2\).
The real-valued time domain signal \(x[n]\) can be synthesized from the real DFT pairs as
$$ x[n] =\sum_{k=0}^{N/2} X_{re}[k] \; cos\left( \frac{2 \pi k n}{N} \right) - X_{im}[k] \; sin\left( \frac{2 \pi k n}{N} \right)$$
Caveat: when using the synthesis equation, the values \(X_{re}[0]\) and \(X_{re}[N/2] \) must be divided by two. This quirk arises because the analysis is restricted to real values only. These types of problems can be avoided by using the complex version of the DFT.
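The caveat can be verified numerically. The sketch below (Python/NumPy, using `rfft` as the real DFT; the scaling and variable names are mine) halves the DC and Nyquist terms and then runs the synthesis equation, recovering the original signal exactly:

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
x = rng.standard_normal(N)                 # arbitrary real-valued test signal

X = np.fft.rfft(x)                         # real DFT: N/2 + 1 complex bins
Xre = 2 / N * X.real                       # cosine coefficients
Xim = 2 / N * X.imag                       # sine coefficients
Xre[0] /= 2                                # the caveat: halve the DC term...
Xre[-1] /= 2                               # ...and the Nyquist term (N even)

n = np.arange(N)
k = np.arange(N // 2 + 1)[:, None]
x_syn = np.sum(Xre[:, None] * np.cos(2 * np.pi * k * n / N)
               - Xim[:, None] * np.sin(2 * np.pi * k * n / N), axis=0)
print(np.allclose(x, x_syn))               # True
```

Dropping the two `/= 2` lines makes the reconstruction fail, which is exactly the point of the caveat.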
Complex DFT:
Consider the case of the N-point \(complex\) DFT: it takes in \(N\) samples of a \(complex-valued\) time domain waveform \(x[n]\) and produces an array \(X[k]\) of length \(N\).
$$X[k]=\frac{1}{N} \sum_{n=0}^{N-1} x[n] e^{-j2 \pi k n/N}$$
The array values are interpreted as follows:
\(X[0]\) represents DC frequency component
Next \(N/2\) terms are positive frequency components with \(X[N/2]\) being the Nyquist frequency (which is equal to half of sampling frequency)
Next \(N/2-1\) terms are negative frequency components (note: negative frequency components are the phasors rotating in opposite direction, they can be optionally omitted depending on the application)
The corresponding synthesis equation (reconstruct \(x[n]\) from frequency domain samples \(X[k]\)) is
$$x[n]=\sum_{k=0}^{N-1} X[k] e^{j2 \pi k n/N} $$
From these equations we can see that the real DFT is computed by projecting the signal on cosine and sine basis functions. However, the complex DFT projects the input signal on exponential basis functions (Euler’s formula connects these two concepts).
When the input signal in the time domain is real-valued, the complex DFT zero-fills the imaginary part during computation (this is its flexibility, and it avoids the caveat needed for the real DFT). The following figure shows how to interpret the raw FFT results in Matlab, which computes the complex DFT. The specifics are discussed next with an example.
Fast Fourier Transform (FFT)
The FFT function in Matlab implements the algorithm published in 1965 by J. W. Cooley and J. W. Tukey for efficiently calculating the DFT. It exploits the special structure of the DFT when the signal length is a power of 2; in that case the computational complexity is significantly reduced. The FFT length is generally chosen as a power of 2 (this is called the \(radix-2\) FFT, which exploits the twiddle factors), but it can also be odd, as in the prime-factor FFT algorithm, where the FFT length factors into two coprimes.
FFT routines are widely available in software packages like Matlab, Scipy etc., and they implement the complex version of the DFT. Matlab’s FFT computes the complex DFT very similarly to the equations above, except for the scaling factor. For comparison, Matlab’s implementation computes the complex DFT and its inverse as
$$X[k]=\sum_{n=0}^{N-1} x[n] e^{-j2 \pi k n/N}$$
$$x[n]=\frac{1}{N} \sum_{k=0}^{N-1} X[k] e^{j2 \pi k n/N} $$
The Matlab commands that implement the above equations are \(FFT\) and \(IFFT\) respectively. The corresponding syntax is as follows
X=fft(x,N) %compute X[k]
x=ifft(X,N) %compute x[n]
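NumPy follows the same convention (no scaling on the forward transform, \(1/N\) on the inverse), so the equivalent Python calls round-trip a signal exactly. A small illustrative check:

```python
import numpy as np

N = 8
x = np.arange(N, dtype=float)        # arbitrary test signal

X = np.fft.fft(x, N)                 # X[k] = sum of x[n] e^{-j 2 pi k n / N}, no 1/N
x_back = np.fft.ifft(X, N)           # the inverse applies the 1/N factor

print(np.allclose(x, x_back.real))   # True
print(X[0])                          # DC bin equals sum(x) = 28
```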
Interpreting the FFT results
Let’s assume that \(x[n]\) is a time domain cosine signal of frequency \(f_c=10Hz\), sampled at a frequency \(f_s=32 \times f_c\) for representation in computer memory.
MATLAB
fc=10;%frequency of the carrier
fs=32*fc;%sampling frequency with oversampling factor=32
t=0:1/fs:2-1/fs;%2 seconds duration
x=cos(2*pi*fc*t);%time domain signal (real number)
Note: the FFT length should be sufficient to cover the entire length of the input signal. If \(N\) is less than the length of the input signal, the input signal is truncated when computing the FFT. In our case, the cosine wave is of 2 seconds duration and has 640 points (a \(10Hz\) wave sampled with a 32 times oversampling factor gives \(2 \times 32 \times 10 = 640\) samples in the 2 second record). Since the input signal is periodic, we can safely use an \(N=256\) point FFT: the 256 retained samples still contain a whole number of signal cycles (see the additional topic on spectral leakage, which explains why this matters).
Due to Matlab’s index starting at 1, the DC component of the FFT decomposition is present at index 1.
MATLAB
>> X(1)
ans = 1.1762e-14 (approximately equal to zero)
That’s pretty easy. Note that the indices of the raw FFT output run from \(1 \rightarrow N\). We need to convert these integer indices to \(frequencies\); that is where the \(sampling\) frequency comes in.
Each point/bin in the FFT output array is spaced by the frequency resolution \(\Delta f\), that is calculated as
$$ \Delta f = \frac{f_s}{N} $$
where, \(f_s\) is the sampling frequency and \(N\) is the FFT size that is considered. Thus, for our example, each point in the array is spaced by the frequency resolution
$$ \Delta f = \frac{f_s}{N} = \frac{32*f_c}{256} = \frac{320}{256} = 1.25 Hz$$
Now, the \(10 Hz\) cosine signal will leave a spike at the 8th sample (10/1.25=8), which is located at index 9 (See next figure).
MATLAB
>> abs(X(8:10)) %display bins 7 to 9 (Matlab indices 8 to 10)
ans = 0.0000  128.0000  0.0000
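The same bookkeeping in Python (0-based indexing, so the spike appears at index 8 rather than Matlab's index 9; parameters as in the snippet above):

```python
import numpy as np

fc, fs, N = 10, 320, 256
t = np.arange(0, 2, 1 / fs)
x = np.cos(2 * np.pi * fc * t)

X = np.fft.fft(x, N)                       # truncates x to the first 256 samples
df = fs / N                                # frequency resolution = 1.25 Hz
spike = int(np.argmax(np.abs(X[:N // 2]))) # search the positive-frequency half
print(spike, spike * df)                   # bin 8, i.e. 10.0 Hz; |X[8]| is N/2 = 128
```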
Therefore, from the frequency resolution, the entire frequency axis can be computed as
MATLAB
%calculate frequency bins with FFT
df=fs/N; %frequency resolution
sampleIndex=0:N-1; %raw index for FFT plot
f=sampleIndex*df; %x-axis index converted to frequencies
Now we can plot the absolute value of the FFT against frequencies as
MATLAB
subplot(3,1,2);stem(sampleIndex,abs(X)); %sample index on x-axis
xlabel('k'); ylabel('|X(k)|');
subplot(3,1,3);stem(f,abs(X)); %frequencies on x-axis
xlabel('f (Hz)'); ylabel('|X(k)|');
The following plot shows the frequency axis and the sample index as it is for the complex FFT output.
After the frequency axis is properly transformed with respect to the sampling frequency, we note that the cosine signal has registered a spike at \(10 Hz\). In addition, it has registered another spike at the \(248^{th}\) sample (\(256-8=248\)), which belongs to the negative frequency portion. Since we know the nature of the signal, we can optionally ignore the negative frequencies. The sample at the Nyquist frequency (\(f_s/2 \)) marks the boundary between the positive and negative frequencies.
MATLAB
>>nyquistIndex=N/2+1;
>>X(nyquistIndex-2:nyquistIndex+2).'
ans=
1.0e-13*
-0.2428+0.0404i
-0.1897+0.0999i
-0.3784
-0.1897-0.0999i
-0.2428-0.0404i
Note that the complex numbers surrounding the Nyquist index are complex conjugates and they represent positive and negative frequencies respectively.
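This conjugate symmetry holds across the whole spectrum of a real-valued signal, not just around the Nyquist bin: \(X[k] = X^{*}[N-k]\). A one-line NumPy check (illustrative, any real signal will do):

```python
import numpy as np

x = np.random.default_rng(1).standard_normal(256)   # any real-valued signal
X = np.fft.fft(x)

# bins 1..N-1 reversed are the conjugates of bins 1..N-1: X[k] == conj(X[N-k])
print(np.allclose(X[1:], np.conj(X[1:][::-1])))     # True
```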
FFTShift
From the plot we see that the frequency axis starts with DC, followed by the positive frequency terms, followed in turn by the negative frequency terms. To introduce proper order on the x-axis, one can use the \(fftshift\) function in Matlab, which arranges the frequencies in order: negative frequencies \(\rightarrow\) DC \(\rightarrow\) positive frequencies. The fftshift function needs to be used carefully when \(N\) is odd.
For even \(N\), the original order returned by the FFT is as follows (note: all indices below correspond to Matlab’s 1-based indexing):
\(X[1]\) represents DC frequency component
\(X[2]\) to \(X[N/2]\) terms are positive frequency components
\(X[N/2+1]\) is the Nyquist frequency (\(F_s/2\)) that is common to both positive and negative frequencies. We will consider it as part of negative frequencies to have the same equivalence with the fftshift function.
\(X[N/2+1]\) to \(X[N]\) terms are considered as negative frequency components
FFTshift shifts the DC component to the center of the spectrum. It is important to remember that the Nyquist frequency at the (N/2+1)th Matlab index is common to both positive and negative frequency sides. FFTshift command puts the Nyquist frequency in the negative frequency side. This is captured in the following illustration.
Therefore, when \(N\) is even, ordered frequency axis is set as
$$f = \Delta f \left[ -\frac{N}{2}:1:\frac{N}{2}-1 \right] = \frac{f_s}{N} \left[ -\frac{N}{2}:1:\frac{N}{2}-1 \right] $$
When \(N\) is odd, the ordered frequency axis should be set as
$$f = \Delta f \left[ -\frac{N-1}{2}:1:\frac{N-1}{2} \right] = \frac{f_s}{N} \left[ -\frac{N-1}{2}:1:\frac{N-1}{2} \right] $$
The following code snippet computes the shifted frequency axis using both the manual method and Matlab’s built-in fftshift command, and plots the results superimposed on each other. The plot shows that the manual method and the fftshift method are in good agreement.
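Since the snippet itself did not survive here, a reconstruction of the same idea in Python (the variable names are mine) compares the manually ordered axis against the library's fftfreq/fftshift for both an even and an odd FFT length:

```python
import numpy as np

fs = 320.0
for N in (256, 255):                 # one even and one odd FFT length
    df = fs / N
    if N % 2 == 0:                   # even N: -N/2 ... N/2-1
        f_manual = df * np.arange(-N // 2, N // 2)
    else:                            # odd N: -(N-1)/2 ... (N-1)/2
        f_manual = df * np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
    f_builtin = np.fft.fftshift(np.fft.fftfreq(N, d=1 / fs))
    print(N, np.allclose(f_manual, f_builtin))   # True for both
```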
Comparing the bottom figures in Figure 4 and Figure 6, we see that the ordered frequency axis is easier to interpret.
IFFTShift
One can undo the effect of fftshift by employing the \(ifftshift\) function, which restores the raw frequency order. If the FFT output was ordered using \(fftshift\), then the frequency components must be restored to their original order BEFORE taking the \(IFFT\). The following pairs of statements are equivalent.
X=fft(x,N) %compute X[k]
x=ifft(X,N) %compute x[n]
X=fftshift(fft(x,N)); %take FFT and rearrange frequency order (mainly for interpretation)
x=ifft(ifftshift(X),N); %restore raw frequency order and then take IFFT
Some observations on FFTShift and IFFTShift
When \(N\) is odd and for an arbitrary sequence, the fftshift and ifftshift functions will produce different results. However, when they are used in tandem, it restores the original sequence.
MATLAB
>> x=[0,1,2,3,4,5,6,7,8]
x = 0 1 2 3 4 5 6 7 8
>> fftshift(x)
ans = 5 6 7 8 0 1 2 3 4
>> ifftshift(x)
ans = 4 5 6 7 8 0 1 2 3
>> ifftshift(fftshift(x))
ans = 0 1 2 3 4 5 6 7 8
>> fftshift(ifftshift(x))
ans = 0 1 2 3 4 5 6 7 8
When \(N\) is even and for an arbitrary sequence, the fftshift and ifftshift functions will produce the same result. When they are used in tandem, it restores the original sequence.
MATLAB
>> x=[0,1,2,3,4,5,6,7]
x = 0 1 2 3 4 5 6 7
>> fftshift(x)
ans = 4 5 6 7 0 1 2 3
>> ifftshift(x)
ans = 4 5 6 7 0 1 2 3
>> ifftshift(fftshift(x))
ans = 0 1 2 3 4 5 6 7
>> fftshift(ifftshift(x))
ans = 0 1 2 3 4 5 6 7
‘Gaussianwaves Answers’ – a new Q&A-style forum – is now open and ready for use by everyone. This discussion forum is open to all, and everyone is encouraged to contribute their knowledge to it.
Posts in this forum can be about anything related to any area of electronics & communication engineering that you’ve discovered or mastered and that you think may be helpful to other forum users.
As with most other forums, all questions and posts will be open for answers and replies, so that the ideas presented can be discussed and questions can be answered. If you’re the author of a question and decide some changes are necessary, please go ahead and edit it.
Image uploads – an inbuilt image uploader allows users to attach images to questions and answers.
Comments – registered users can comment on both questions and answers.
User profile
Editable Profile – Each user is provided with an editable profile.
Follow and followers – Follow another user to get updates and be followed by others.
About page – a place for your short bio.
Notification page – notifications for users.
Subscription page – shows all the questions, tags and categories a user has subscribed to.
As you’ll see, all posts are subject to review, editing, or even deletion depending on their content. This is not censorship: posts will only be edited if they contain inappropriate content. Deletions will likely be very rare, and will occur only if a post is spam or inappropriate in its entirety.
This forum could become a very useful knowledge base of user-generated information. The more participation there is, the better and more useful this forum will be, so please contribute!
Root Mean Square (RMS) value is the most important parameter that signifies the \(size \; of \; a \; signal\).
Defining the term “size”:
In signal processing, a signal is viewed as a function of time. The term “size of a signal” is used to represent the “strength of the signal”. It is crucial to know the size of a signal used in a certain application. For example, we may be interested in knowing the amount of electricity needed to power an LCD monitor as opposed to a CRT monitor. The two applications are different and have different tolerances, so the amount of electricity driving these devices will also differ.
A given signal’s size can be measured in many ways; among them, the RMS value is the most common.

The RMS value of a signal \(x(t)\) is calculated as the square root of the average of its squared value:

$$E_{RMS} = \sqrt{ \frac{1}{T} \int_{0}^{T} x(t)^2 dt} $$

For a signal represented as \(N\) discrete sampled values \([x_0,x_1,\cdots,x_{N-1}]\), the RMS value is given as

$$E_{RMS} = \sqrt{\frac{x_0^2+x_1^2+\cdots+x_{N-1}^2}{N}} $$

If the signal is represented in the frequency domain as \(X(f)\), then, as a result of Parseval’s theorem, the RMS value can be calculated as

$$E_{RMS} = \sqrt{\sum \left| \frac{X(f)}{N} \right|^2}$$
Implementing in Matlab:
N=100; %length of the signal
x=randn(1,N); %a random signal to test
X=fft(x); %frequency domain representation of the signal
RMS1=sqrt(mean(x.^2)) %RMS value from time domain samples
RMS2=sqrt(sum(abs(X/N).^2)) %RMS value from frequency domain representation
%Result: RMS1-RMS2 = 1.1102e-16
Significance of RMS value
The RMS value is one of the most important parameters used to describe the strength of an Alternating Current (AC).
RMS value of an AC voltage/current is equivalent to the DC voltage/current that produces the same heating effect when applied across an identical resistor. Hence, it is also a measure of energy content in a given signal.
In statistics, for any zero-mean stationary random signal, the RMS value is the same as the standard deviation of the signal. Example: the \(Delay \; spread \) of a multipath channel is often calculated as the RMS value of the \(Power \; Delay \; Profile \) (PDP).
When two uncorrelated (or orthogonal) signals are added together, such as noise from two independent sources, the RMS value of their sum is equal to the square root of the sum of the squares of their individual RMS values.
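A quick numerical sanity check of the last property (Python, with two independent Gaussian noise sources; the match is only approximate because finite sample sets are only approximately uncorrelated):

```python
import numpy as np

rng = np.random.default_rng(42)
n1 = rng.standard_normal(1_000_000)          # noise source 1, RMS about 1
n2 = 2 * rng.standard_normal(1_000_000)      # noise source 2, RMS about 2

def rms(v):
    return np.sqrt(np.mean(v ** 2))

combined = rms(n1 + n2)                              # RMS of the sum
predicted = np.sqrt(rms(n1) ** 2 + rms(n2) ** 2)     # sqrt(1^2 + 2^2), about 2.236
print(combined, predicted)                           # nearly identical
```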
GaussianWaves.com invites members and non-members to submit original feature articles in the field of electronics and communication engineering. The intended discipline spans a wide range of sub-disciplines such as signal processing, communications, electronics design, applied mathematics for signal processing, biomedical signal processing, image processing, etc.
The articles are categorized into following disciplines:
Multichannel & multimodal signal processing
Audio/Speech processing
Biomedical signal processing
Image/multimedia processing
Analog Communications
Digital communications
Implementation, design, and hardware for signal processing/communications
Statistical signal processing
Random process and Probability
Electronic Circuits
VLSI/embedded/etc..,
Six types of submission are accepted for publication at Gaussianwaves.com:
Research article (empirical-quantitative and qualitative- and/or theoretical)
Research note/Tutorials
College projects
Blog posts on a particular topic
Code snippets (Matlab, Python, C, C++, etc.)
Book reviews
Acceptable formats: MS DOC with relevant images (only original articles in the name of the author are accepted)
Mode of submission: email to reach.gaussianwaves@gmail.com
We live in the age of smart phones that can be loaded with numerous applications to communicate with each other. What’s next ? Where do we go from here ?
Students at the MIT Media Lab have answered the call with a novel approach called “Physical Telepresence”, which provides the ability to remotely render the shapes of objects and people.
Physical telepresence is the result of the inFORM interface invented at the MIT Media Lab. inFORM is a self-aware interface that manipulates not only light but also shape. With this new technology, two remotely connected people can interact with each other by playing a ball game, manipulating shapes and objects together, and so on.