Symbol Timing Recovery for QPSK (digital modulations)

The goal of timing recovery is to estimate and correct the sampling instants and sampling phase at the receiver, so that the receiver can decode the transmitted symbols reliably.

What is symbol timing recovery?

When transmitting data across a communication system, three things are important: frequency of transmission, phase information and the symbol rate.

In coherent detection/demodulation, both the transmitter and receiver possess knowledge of the exact symbol sampling timing and symbol phase (and/or symbol frequency). While everything is set at the transmitter, the receiver is at the mercy of recovery algorithms to regenerate this information from the incoming signal itself. If the transmission is a passband transmission, the carrier recovery algorithm also recovers the carrier frequency. For phase-sensitive systems like BPSK, QPSK etc., the carrier recovery algorithm recovers the symbol phase so that it is synchronous with the transmitted symbol.

The first stage in the receiver of such an M-PSK system is to multiply the incoming signal by the sine and cosine components of the carrier wave.

The sine and cosine components are generated using a carrier recovery block (a Phase Locked Loop (PLL), or a local oscillator set to track the variations).

Synchronization in receivers with carrier recovery

Once the in-phase and quadrature signals are separated out properly, the next task is to match each symbol with the transmitted pulse shape such that the overall SNR of the system improves.

Implementing this in the digital domain, the architecture described so far would look like this (Note: the notation of the incoming signal changes from the analog, continuous-time domain to the digital, discrete-time domain).

In the digital architecture above, the matched filter is implemented as a simple finite impulse response (FIR) filter whose impulse response is matched to the transmitter pulse shape. It helps the receiver in timing recovery and also improves the overall SNR of the system by suppressing some amount of noise. The incoming signal, up to the point before the matched filter, may have amplitude fluctuations. The matched filter also behaves like an averaging filter that smooths out these variations.

Note that in this digital version, the incoming signal is already a sampled signal. It has already passed through an analog-to-digital converter that sampled the signal at some sampling rate. From the symbol perspective, the symbols have to be sampled at the optimum sampling instants to extract their content properly.

This requires a re-sampler, which resamples the averaged signal at the optimum sampling instant. If the original sampling instant is before or after the optimum sampling point, the timing recovery signal will help to re-sample (re-adjust sampling times) accordingly.

Let’s take a simple BPSK transmitter for illustration. This is equivalent to either of the single arms (the in-phase or the quadrature-phase arm) of a QPSK transmitter or receiver.

An alternating data pattern (symbols) – [+1, -1, +1, -1, \cdots] – is transmitted across the channel. Assume that each symbol occupies Tsym = 8 sample times.

clear all; clc;
n=10; %Number of data symbols
Tsym=8; %Symbol time in terms of sample time (or equivalently, the oversampling rate)
%data=2*(rand(n,1)<0.5)-1;
data=[1 -1 1 -1 1 -1 1 -1 1 -1]'; %BPSK data
bpsk=reshape(repmat(data,1,Tsym)',n*Tsym,1); %BPSK signal

figure('Color',[1 1 1]);
subplot(3,1,1);
plot(bpsk);
title('Ideal BPSK symbols');
xlabel('Sample index [n]');
ylabel('Amplitude')
set(gca,'XTick',0:8:80);
axis([1 80 -2 2]); grid on;

Let’s add additive white Gaussian noise (AWGN). Random noise with standard deviation 0.25 is generated and added to the generated BPSK symbols.

noise=0.25*randn(size(bpsk)); %Adding some amount of noise
received=bpsk+noise; %Received signal with noise

subplot(3,1,2);
plot(received);
title('Received BPSK symbols (with noise)');
xlabel('Sample index [n]');
ylabel('Amplitude')
set(gca,'XTick',0:8:80);
axis([1 80 -2 2]); grid on;

From the first plot, we see that the transmitted pulse is a rectangular pulse that spans Tsym samples. In this illustration, Tsym = 8. The best averaging filter (matched filter) for this case is a rectangular filter that also spans 8 samples (rectangular pulses are not preferred in practice; they are used here only to simplify the illustration). Such a rectangular pulse can be mathematically represented in terms of the unit step function as

$$ p[n] = u[n] - u[n - T_{sym}] $$

(Another type of averaging filter – “Moving Average Filter” is implemented here)

The resulting rectangular pulse will have a value of 0.5 at the edges of the sampling instants (index 0 and 7) and a value of ‘1’ at the remaining indices in between the edges. Such a rectangular function is indicated below.

The incoming signal is convolved with the averaging filter and the resultant output is given below

impRes=[0.5 ones(1,6) 0.5]; %Averaging filter -> u[n]-u[n-Tsym]
yy=conv(received,impRes,'full');
subplot(3,1,3);
plot(yy);
title('Matched Filter (Averaging Filter) output');
xlabel('Sample index [n]');
ylabel('Amplitude');

set(gca,'XTick',0:8:80);
axis([1 80 -10 10]); grid on;

We can note that the averaged output peaks at the end of each symbol period, i.e., at the optimum sampling instants. Thus, when the signal is sampled at those ideal locations, the BPSK symbols [+1, -1, +1, \cdots] can be recovered perfectly.
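As a quick check (assuming, for this illustration only, that the ideal timing is known to be at multiples of Tsym), sampling the matched filter output at those instants recovers the transmitted symbols:

idealInstants = Tsym:Tsym:n*Tsym; %peaks of the matched filter output (known here by construction)
recoveredSymbols = sign(yy(idealInstants)) %should match the transmitted data pattern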

In practice, a square root raised cosine (SRRC) filter is used both at the transmitter and the receiver (as a matched filter) to mitigate inter-symbol interference. An implementation of SRRC filter in Matlab is given here

But the problem here is: “How does the receiver know the ideal sampling instants?”. The solution is “someone has to supply those ideal sampling instants”. A symbol time recovery circuit is used for this purpose.

Coming back to the receiver architecture, let’s add a symbol timing recovery circuit that supplies the recovered timing instants. The signal will be re-sampled at the instants supplied by the recovery circuit.

The Algorithm behind Symbol Timing Recovery:

Different algorithms exist for symbol timing recovery and synchronization. An “Early/Late Symbol Recovery algorithm” is illustrated here.

The algorithm starts by selecting an arbitrary sample at some time instant (denoted by T). It captures the two adjacent samples (one on either side of the sampling instant T), each offset from T by δ seconds. The sample at instant T-δ is called the early sample and the sample at instant T+δ is called the late sample. The timing error is generated by comparing the amplitudes of the early and late samples. The next symbol sampling instant is either advanced or delayed based on the sign of the difference between the early and late samples.

1) If |Early Sample| = |Late Sample| : the peak occurs at the on-time sampling instant T. No adjustment in the timing is needed.
2) If |Early Sample| > |Late Sample| : the sampling is late; the sampling time is offset so that the next symbol is sampled T-δ/2 seconds after the current sampling time.
3) If |Early Sample| < |Late Sample| : the sampling is early; the sampling time is offset so that the next symbol is sampled T+δ/2 seconds after the current sampling time.

These three situations are shown next.

There exist many variations of the above-mentioned algorithm. The early/late synchronization technique given here is the simplest one, taken for illustration.
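To make the decision rule concrete, the following minimal sketch (an illustration only, not an optimized or production implementation) applies the early/late comparison to the matched filter output yy from the earlier code. The offset delta and the initial sampling index are arbitrary choices made for this illustration.

delta=2; %early/late offset in samples (illustrative choice)
T=Tsym; %initial sampling index (arbitrary starting guess)
recovered=zeros(1,n); %timing-adjusted symbol decisions
for k=1:n
    if T+delta > length(yy), break; end %stop if we run out of samples
    early=abs(yy(T-delta)); %magnitude of the early sample
    late=abs(yy(T+delta)); %magnitude of the late sample
    if early > late
        adjust=-round(delta/2); %sampling too late -> advance the next instant
    elseif early < late
        adjust=round(delta/2); %sampling too early -> delay the next instant
    else
        adjust=0; %on-time, no correction
    end
    recovered(k)=sign(yy(T)); %symbol decision at the current instant
    T=T+Tsym+adjust; %next sampling instant with timing correction
end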

Let’s complete the architecture with a signal quantization and constellation de-mapping block which gives out the estimated demodulated symbols.


For Further Reading:

[1] Technique for implementing an Early-Late Gate Synchronization structure for DPSK.↗
[2] Ying Li et al,”Hardware Implementation of Symbol Synchronization for Underwater FSK”, IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing (SUTC), p 82 – 88, 2010.↗
[3] Heinrich Meyr & Gerd Ascheid,”Digital Communication Receivers: Synchronization in Digital Communication Volume I, Phase-, Frequency-Locked Loops, and Amplitude Control (Wiley and Signal Processing)”,John Wiley & Sons; Volume 1 edition (March 1990),ISBN-13: 978-0471501930.↗
[4] Umberto Mengali,”Synchronization Techniques for Digital Receivers (Applications of Communications Theory)”,Springer; 1997 edition (October 31, 1997),ISBN-13: 978-0306457258.↗


Introduction to concepts in probability


What is Probability?

Probability is a branch of mathematics that deals with uncertainty. The term “probability” is used to quantify the degree of belief or confidence that something is true (or false). It gives us the likelihood of occurrence of a given event. It is expressed as a number that could take any value in the closed interval [0,1]

Consider the following experiment describing a simple communication system. A user transmits data through a noisy medium and another user receives it. Here, the sender utters a single syllable over the phone. Due to the noise characteristics of the communication medium, we do not know whether the user at the destination will be able to hear what the sender has spoken. Before performing the experiment, we would like to know the likelihood that the user at the destination hears the particular syllable (given the noise characteristics). This likelihood of the particular event is called the probability of the event.

Experiment:

Any activity that can produce observable results is called an experiment. For example, tossing a coin (observable results: Head/Tail), Rolling a die (observable results: numbers on the faces of the die), drawing a card from a deck (observable results: symbols, numbers and alphabets on the cards), sending & receiving bits in a communication system (observable results: bits/alphabets transferred or voltage level at the receiver).

Sample Space:

Given an experiment, the sample space comprises a set of all possible outcomes of the experiment. It plays the role of the universal set when modeling the experiment. It is denoted by the letter – ‘S’. Following examples illustrate the sample spaces for various experiments.

Event:

It is also a set of outcomes of an experiment. It is a subset of sample space. Each time the experiment is run, either a particular event occurs or it does not occur. Events are associated with a probability number.

Types of Events:

Events can be classified according to their relationship with one another. The following table shows the classification of events and their definition.

Computing Probability:

The probability of occurrence of an event (say ‘A’) is given by the ratio of the number of ways that particular event can happen to the total number of all possible outcomes.

For example, consider the experiment of rolling an unbiased die. The sample space is given by S={1,2,3,4,5,6}. Let’s say that an event is defined as getting ‘4’ when you roll the die. The probability of getting the face with ‘4’ (the event) can be calculated as follows.
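$$ P(\text{getting a 4}) = \frac{\text{number of favourable outcomes}}{\text{total number of possible outcomes}} = \frac{1}{6} $$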

Axioms of Probability:

Following definitions are assumed for the axioms listed below: ‘S’ denotes the sample space of an experiment, ‘A’ and ‘B’ are events and P(A) denotes the probability of occurrence of event ‘A’.
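Axiom 1: For any event A, P(A) ≥ 0.
Axiom 2: P(S) = 1, i.e., the probability of the entire sample space is 1.
Axiom 3: If A and B are mutually exclusive (disjoint) events, then P(A ∪ B) = P(A) + P(B).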

Properties of Probability:

The definition of probability – has some properties as listed below.
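Property 1: P(Ø) = 0
Property 2: P(Ā) = 1 − P(A)
Property 3: 0 ≤ P(A) ≤ 1
Property 4: If A ⊆ B, then P(A) ≤ P(B)
Property 5: P(A ∪ B) = P(A) + P(B) − P(A ∩ B)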


Here the symbol Ø indicates the null event and Ā indicates that the event A does NOT occur.

Joint probability and Marginal probability:

Joint probability is defined as the probability that two or more events occur simultaneously. For two events A and B, the joint probability is denoted by P(A,B) or P(A∩B).

Given two or more events, the marginal probability is the probability of occurrence of a single event. It is also called a-priori probability.

The following table illustrates the concept of computing joint and marginal probabilities. Here, four events (P, Q, R, S) are used for illustration. For example, the table indicates that the probability of occurrence of both events R and Q is b/n; this is the joint probability of R and Q. Adding all the probabilities either row-wise or column-wise gives the marginal probability of a single event. For example, adding a/n and b/n gives the marginal probability of event R; similarly, adding a/n and c/n gives the marginal probability of event P.
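The arrangement described above can be summarized as follows (each cell is a joint probability; the row and column sums are the marginals):

             P          Q          Marginal
R            a/n        b/n        (a+b)/n
S            c/n        d/n        (c+d)/n
Marginal     (a+c)/n    (b+d)/n    1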

Conditional probability or Posteriori probability:

Conditional probability (also called posterior probability) deals with dependent events. It is used to calculate the probability of an event given that some other event has already occurred.

It is denoted as P(B|A), meaning ‘the probability of event B given that event A has already occurred’. It is called “a-posteriori” because it is only available “after” observing A (the first event).

The conditional probability P(B|A) is mathematically computed as

$$ P(B|A) = \frac{P(A \cap B)}{P(A)}, \qquad P(A) > 0 $$
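For example, in the die-rolling experiment, let A be the event “the outcome is even” and B the event “the outcome is 4”. Then P(A) = 3/6 = 1/2, P(A ∩ B) = 1/6, and therefore P(B|A) = (1/6)/(1/2) = 1/3.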


Theoretical BER using Matlab – BERTOOL

When simulating digital modulations in Matlab, it is useful to verify the simulated BER performance curves against theoretical BER curves. Matlab has an inbuilt visualization tool, ‘BERTOOL’, for this purpose.

Matlab’s BERTOOL supports 6 types of digital modulation over 3 types of channel for plotting theoretical BER. The six supported modulations are PSK, DPSK, OQPSK, PAM, QAM and FSK, and the three channel types are AWGN, Rayleigh and Rician. It also has options to set the diversity order, channel coding, synchronization method and demodulation type (coherent/non-coherent).

A brief intro to the tool is given here. You might need to install the Communications Toolbox to invoke this tool.

Invoke the BERTOOL GUI using the command (tested with R2012b)
>>bertool

Set the desired configuration and click plot.

You can export the data from BERTOOL to the workspace. After you have exported it to the workspace, you can plot it against your simulation curve for verification.
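As a minimal sketch of that comparison (the variable names below are placeholders; use whatever names you chose in BERTOOL’s export dialog and in your own simulation script):

%EbNo_theory, ber_theory : assumed names for the data exported from BERTOOL
%EbNo_sim, ber_sim       : assumed names for your Monte-Carlo simulation results
semilogy(EbNo_theory,ber_theory,'r-'); hold on;
semilogy(EbNo_sim,ber_sim,'b*'); grid on;
xlabel('E_b/N_0 (dB)'); ylabel('Bit Error Rate');
legend('Theoretical (BERTOOL)','Simulated');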

For a detailed documentation: Matlab documentation on BERTOOL


Log Distance Path Loss or Log Normal Shadowing Model

Log distance path loss model

The log distance path loss model is an extension of the Friis free space model. It is used to predict the propagation loss for a wide range of environments, whereas the Friis free space model is restricted to an unobstructed, clear path between the transmitter and the receiver. The model encompasses random shadowing effects due to signal blockage by hills, trees, buildings etc. It is also referred to as the log normal shadowing model.

Figure 1: Simulated results for log distance path loss model

In the far field region of the transmitter, for distances beyond \(d_0\), if \(PL(d_0)\) is the path loss at a distance \(d_0\) meters from the transmitter, then the path loss at an arbitrary distance \(d > d_0\) is given by

$$ PL(d)\,[dB] = PL(d_0) + 10 \, n \, \log_{10}\left(\frac{d}{d_0}\right) + X_{\sigma} $$

where \(PL(d)\) is the path loss at an arbitrary distance \(d\) meters and \(n\) is the path loss exponent that depends on the type of environment, as given in the Table below. \(X_{\sigma}\) is a zero-mean Gaussian distributed random variable with standard deviation \(\sigma\) expressed in dB, used only when there is a shadowing effect. The reference path loss \(PL(d_0)\), evaluated at the close-in reference distance \(d_0\), is obtained by using the Friis path loss equation (equation 2 in this post) or by field measurements at \(d_0\). Typically, \(d_0\) is chosen between 1 m and 100 m for microcells and around 1 km for a large cell.

This article is part of the book
Wireless Communication Systems in Matlab (second edition), ISBN: 979-8648350779 available in ebook (PDF) format and Paperback (hardcopy) format.

The path-loss exponent (PLE) values given in the Table below are for reference only. They may or may not fit the actual environment we are trying to model. Usually, the PLE is considered to be known a-priori, but mostly that is not the case. Care must be taken to estimate the PLE for the given environment before design and modeling. The PLE is estimated by equating the observed (empirical) values over several time instants to the established theoretical values. Refer to [1] for literature on PLE estimation in large wireless networks.

logNormalShadowing.m: Function to model Log-normal shadowing (Refer the book for the Matlab code – click here)
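The exact listing is available in the book; the following is only a minimal sketch, written to be consistent with the call signature used in the test script below. It combines the Friis reference path loss at d0, the 10·n·log10(d/d0) distance term and a zero-mean Gaussian shadowing term; treat it as an illustration, not the book’s implementation.

function [PL,Pr] = logNormalShadowing(Pt_dBm,Gt_dBi,Gr_dBi,f,d0,d,L,sigma,n)
%Illustrative sketch of log-distance path loss with log-normal shadowing
%Pt_dBm: transmit power (dBm); Gt_dBi,Gr_dBi: antenna gains (dBi)
%f: frequency (Hz); d0: reference distance (m); d: vector of distances (m)
%L: other system losses (linear); sigma: shadowing std. dev. (dB); n: path loss exponent
lambda=3e8/f; %carrier wavelength in meters
PL_d0=20*log10(4*pi*d0/lambda)+10*log10(L); %Friis (free space) path loss at d0 in dB
X=sigma*randn(1,numel(d)); %zero-mean Gaussian shadowing term in dB
PL=PL_d0+10*n*log10(d./d0)+X; %total path loss in dB
Pr=Pt_dBm+Gt_dBi+Gr_dBi-PL; %received power in dBm
end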


The function to implement log-normal shadowing is given above and the test code is given next. Figure 1 shows the received signal power when there is no shadowing effect and when shadowing exists. The results are generated for an environment with PLE n = 2, transmission frequency f = 2.4 GHz, reference distance d0 = 1 m and log-normal shadowing standard deviation σ = 2 dB. The results clearly show that log-normal shadowing introduces randomness in the received signal power, which brings the model closer to reality.

log_distance_model_test.m: Simulate Log Normal Shadowing for a range of distances

Pt_dBm=0; %Input transmitted power in dBm
Gt_dBi=1; %Gain of the Transmitted antenna in dBi
Gr_dBi=1; %Gain of the Receiver antenna in dBi
f=2.4e9; %Transmitted signal frequency in Hertz
d0=1; %assume reference distance = 1m
d=100*(1:0.2:100); %Array of distances to simulate
L=1; %Other System Losses, No Loss case L=1
sigma=2;%Standard deviation of log Normal distribution (in dB)
n=2; % path loss exponent
%Log normal shadowing (with shadowing effect)
[PL_shadow,Pr_shadow] = logNormalShadowing(Pt_dBm,Gt_dBi,Gr_dBi,f,d0,d,L,sigma,n);
figure;plot(d,Pr_shadow,'b');hold on;
%Friis transmission (no shadowing effect)
[Pr_Friss,PL_Friss] = FriisModel(Pt_dBm,Gt_dBi,Gr_dBi,f,d,L,n);
plot(d,Pr_Friss,'r');grid on;
xlabel('Distance (m)'); ylabel('P_r (dBm)');
title('Log Normal Shadowing Model');legend('Log normal shadowing','Friis model');


References

[1] Srinivasan, S.; Haenggi, M., Path loss exponent estimation in large wireless networks, Information Theory and Applications Workshop, pp. 124 – 129, Feb 2009.↗



Friis Free Space Propagation Model

The Friis free space propagation model is used to model the line-of-sight (LOS) path loss incurred in a free space environment, devoid of any objects that create absorption, diffraction, reflections, or any other characteristic-altering phenomenon to a radiated wave. It is valid only in the far field region of the transmitting antenna [1] and is based on the inverse-square law of distance, which states that the received power at a particular distance from the transmitter decays as the square of the distance.

Figure 1: Received power using Friis model for WiFi transmission at f=2.4 GHz and f=5 GHz

The Friis equation for the received power is given by

$$ P_r(d) = \frac{P_t \, G_t \, G_r \, \lambda^2}{(4 \pi d)^2 \, L} $$

where \(P_r\) is the received signal power in Watts expressed as a function of the separation distance (\(d\) meters) between the transmitter and the receiver, \(P_t\) is the transmitted signal power in Watts, \(G_t\) and \(G_r\) are the gains of the transmitter and receiver antennas when compared to an isotropic radiator with unit gain, \(\lambda\) is the wavelength of the carrier in meters and \(L\) represents other losses that are not associated with the propagation loss. The parameter \(L\) may include system losses like loss at the antenna, transmission line attenuation, loss at various filters, etc. The factor \(L\) is usually greater than or equal to 1, with \(L = 1\) for no such system losses.

This article is part of the book
Wireless Communication Systems in Matlab (second edition), ISBN: 979-8648350779 available in ebook (PDF) format and Paperback (hardcopy) format.

The Friis equation can be modified to accommodate different environments, based on the observation that the received signal decreases as the nth power of distance, where the parameter n is the path-loss exponent (PLE) that takes constant values depending on the environment that is modeled (see the Table below for various empirical values of the PLE).

The propagation path loss in free space, denoted as PL, is the loss incurred by the transmitted signal during propagation. It is expressed as the signal loss between the feed points of two isotropic antennas in free space.

The propagation of an electromagnetic signal through free space is unaffected by its frequency of transmission and hence has no dependency on the wavelength λ. However, the variable λ exists in the path loss equation to account for the effective aperture of the receiving antenna, which is an indicator of the antenna’s ability to collect power. If the link between the transmitting and receiving antennas is something other than free space, penetration/absorption losses are also considered in the path loss calculation. Material penetration is fairly dependent on frequency; incorporation of penetration losses requires detailed analysis.

Usually, the transmitted power and the received power are specified in terms of dBm (power in decibels with respect to 1 mW) and the antenna gains in dBi (gain in decibels with respect to an isotropic antenna). Therefore, it is often convenient to work in log scale instead of linear scale. The alternative form of the Friis equation in log scale is given by

$$ P_r(d)\,[dBm] = P_t\,[dBm] + G_t\,[dBi] + G_r\,[dBi] + 20 \log_{10}\left(\frac{\lambda}{4 \pi d}\right) - 10 \log_{10}(L) $$

The following function implements a generic Friis equation that includes the path loss exponent, n, whose possible values are listed in Table 1.

FriisModel.m: Function implementing Friis propagation model (Refer the book for the Matlab code – click here)
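The book contains the full listing; the sketch below is an illustration only, written to match the calling convention [PL, Pr_dBm] = FriisModel(...) used in the test script that follows.

function [PL,Pr_dBm] = FriisModel(Pt_dBm,Gt_dBi,Gr_dBi,f,d,L,n)
%Illustrative sketch of the generalized Friis propagation model
%Pt_dBm: transmit power (dBm); Gt_dBi,Gr_dBi: antenna gains (dBi)
%f: frequency (Hz); d: distance(s) in meters; L: other system losses (linear)
%n: path loss exponent (n=2 for free space)
lambda=3e8/f; %carrier wavelength in meters
PL=20*log10(4*pi/lambda)+10*n*log10(d)+10*log10(L); %path loss in dB
Pr_dBm=Pt_dBm+Gt_dBi+Gr_dBi-PL; %received power in dBm
end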

For example, consider a WiFi (IEEE 802.11n standard↗) transmission-reception system operating in the f = 2.4 GHz or f = 5 GHz band, with 0 dBm (1 mW) output power from the transmitter. The gain of the transmitter antenna is 1 dBi and that of the receiving antenna is 1 dBi. It is assumed that there is no system loss, therefore L = 1. The following Matlab code uses the Friis equation and plots the received power in dBm for a range of distances (Figure 1 shown above). From the plot, the received power decreases by about 6 dB (since 20·log10(2) ≈ 6 dB) for every doubling of the distance.

Friis_model_test.m: Friis free space propagation model

%Matlab code to simulate Friis Free space equation
%-----------Input section------------------------
Pt_dBm=0; %Input - Transmitted power in dBm
Gt_dBi=1; %Gain of the Transmitter antenna in dBi
Gr_dBi=1; %Gain of the Receiver antenna in dBi
f=2.4e9; %Transmitted signal frequency in Hertz
d=1:1:200; %Array of input distances in meters
L=1; %Other System Losses, No Loss case L=1
n=2; %Path loss exponent for Free space
%----------------------------------------------------
[PL,Pr_dBm] = FriisModel(Pt_dBm,Gt_dBi,Gr_dBi,f,d,L,n);
plot(log10(d),Pr_dBm); title('Friis Path loss model');
xlabel('log10(d)'); ylabel('P_r (dBm)')


References

[1] Allen C. Newell, Near Field Antenna Measurement Theory, Planar, Cylindrical and Spherical, Nearfield Systems Inc.↗


Wireless channel models – an Introduction

The medium between the transmitting antenna and the receiving antenna is generally termed the “channel”. In wireless transmission, the characteristics of the signal change as it travels from the transmitter to the receiver. These changes are due to several phenomena:

1) the existence of a line-of-sight path between the antennas,
2) reflection, refraction and diffraction of the signal due to objects in between the antennas,
3) the relative motion between the transmitter and receiver and the objects in between them,
4) the attenuation of the signal as it travels through the medium,
5) noise.

The received signal can be obtained from the transmitted signal if we can accurately model the channel between the antennas. It is quite difficult to model real-world environments exactly. Scientists and engineers have studied various environments and provide us with channel models that approximate these real-world environments.

Consider a channel with impulse response h(t) between the transmitter and receiver antenna. In addition to the channel impulse response, the transmitted signal x(t) is also corrupted by the additive noise component n(t). The received signal y(t) is obtained by the convolution of h(t) and x(t) added with the noise component n(t).
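In equation form:

$$ y(t) = h(t) \ast x(t) + n(t) $$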

Components of channel response:

Consider a transmitter (base station) and a receiver (mobile) in a city environment. The medium between the transmitter and receiver is obstructed by several buildings, trees and other objects. The receiver keeps moving away from the transmitter, and we measure the signal power at the receiver as it moves. The ratio of the received signal power to the transmitted signal power is plotted against the distance, and we obtain a graph as shown in the left-hand-side plot below.

Figure 1: Components of response of a wireless channel

At first the signal appears very random. On closer inspection, we can break it down into three main components, as shown in the right half of the plot [Goldsmith2005].

1) Propagation Path Loss
2) Shadowing or slow fading
3) Multipath fading

Propagation Path Loss:

Path loss modeling is used to model the following scenarios:

* to predict the signal loss between the transmitter and the receiver when a clear, unobstructed path exists between them;

* to model the attenuation effect of various environments as the receiver and transmitter are separated by a distance.

The clear unobstructed path between the transmitter and the receiver is called the Line Of Sight (LOS) path.

The most important modeling methods available for path loss modeling are
* Friis Free Space Propagation Model – models path loss
* Log Distance Path Loss or Log Normal Shadowing Model – models both path loss and shadowing
* Empirical models:
* Hata – Okumura Model
* COST 231 Extension to Hata Model
* COST 231-Walfish-Ikegami Model
* Erceg Model
* Stanford University Interim (SUI) Channel Models
* ITU Path Loss Models

I have discussed in detail some of the propagation models in these articles. Do check them out.

Shadowing :

If the environment contains objects like buildings and trees, some part of the transmitted signal gets affected by absorption, reflection, diffraction and scattering. This effect is called shadowing. It is also referred to as slow fading or long-term fading.

Multipath fading:

A signal traveling in an environment may get reflected by several objects along the path. This gives rise to several reflected signals. The reflected signals arrive at the receiver at different time instants and with different intensities, leading to multipath propagation. Depending on the phase of each individual reflected signal, the received signal power may increase or decrease. A small variation in the phase of each reflected signal from each multipath may lead to a significant difference in the total received power. This phenomenon is also referred to as fast fading or short-term fading.

Figure 2: Illustrating manifestation of multipath propagation

The multiple reflected copies of the transmitted signal arrive at the receiver at different time instants and at different power levels. This characteristic of the multipath phenomenon is described by the Power Delay Profile (PDP), as shown below. The power delay profile gives a list of the time delays of each multipath component (the delays are due to the different lengths of the individual reflected paths) and the associated power levels.

This article discusses the power delay profile in detail.

Figure 3: A typical discrete power delay profile plot for a multipath channel with 3 paths

The maximum delay for which the received power level becomes negligible is called the maximum delay spread τmax. Sometimes the root-mean-square value τrms is used instead of the maximum value.
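As a quick illustration of how these quantities are computed (the three-path PDP values below are made up for the example and are not taken from Figure 3):

tau=[0 1e-6 3e-6]; %path delays in seconds (hypothetical values)
P=[1 0.5 0.2]; %linear power of each path (hypothetical values)
P=P/sum(P); %normalize the power delay profile
tau_mean=sum(P.*tau); %mean excess delay
tau_rms=sqrt(sum(P.*tau.^2)-tau_mean^2) %RMS delay spread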



Reference

[Goldsmith2005] Andrea Goldsmith, “Wireless Communications,” Cambridge University Press, 2005,644 pp.↗


Tips & Tricks : Indexing in Matlab

Let’s review indexing techniques in Matlab: indexing a one-dimensional array, indexing a two-dimensional array, logical indexing, and reversing a vector are covered.

Consider a sample vector in Matlab.

>> x=[9  21  6  8  7  3  18  -1  0  4]
x =
   9  21  6  8  7  3  18  -1  0  4

Index with single value

>> x(2)
ans =
  21

Index with range of values

>> x([1 4 6])
ans =
    9  8  3

Select a range of elements using ‘:’ operator.

Example: Select elements with index ranging from 1 to 5.

>> x(1:5)
ans =
     9  21  6  8  7

Making a new vector by swapping two halves of the vector x

>> x([6:10 1:5])
ans =
    3  18  -1  0  4  9  21  6  8  7

To refer to the last element of the vector x, use the ‘end’ operator.

>> x(end)
ans =
4

You can do arithmetic with the ‘end’ operator. Selecting all elements except the last element:

>> x(1:end-1)
ans =
   9   21   6   8  7  3  18  -1  0

Reversing the vector

>> x(end:-1:1)
ans =
     4  0  -1  18  3  7  8  6  21  9

Using the end operator in a range. Selecting from fifth element to the end

>> x(5:end)
ans =
   7  3  18  -1  0  4

Replacing specific elements in the array by placing the new values on the right side of the expression. Replacing the values of ‘x’ at positions [2,5,8] with new values

>> x([2 5 8])=[11 14 -6]
x =
   9  11  6  8  14  3  18  -6  0  4

Replacing specific elements with a same value. Replacing the values of ‘x’ at position [1 and 10] with 40

>> x([1 10]) = 40
x =
   40  11  6  8  14  3  18  -6  0  40

Two dimensional Arrays:

>> x=magic(4) %Create a magic array with 4x4 elements
x =
   16  2  3 13
    5 11 10  8
    9  7  6 12
    4 14 15  1

The two dimensional array elements are accessed with two subscript indices like x(i,j) – where ‘i’ represents row and ‘j’ represents column. Accessing the element located at second row and third column

>> x(2,3)
ans =
   10

The subscript indices can also be vectors. Selecting the elements in the first row – 4th,2nd and 1st column & third row – 4th,2nd and 1st column

>> x([1 3],[4 2 1])
ans =
    13   2   16
    12   7    9

The ‘:’ operator is the short form of 1:end. It is usually used to select all the elements in a specific row or column. Accessing all elements in the third row:

>> x(3,:)
ans =
   9  7  6  12

Extracting the last row using the “end” operator and the “:” operator

>> x(end,:)
ans =
   4  14  15  1

Logical Indexing:

Used to select the elements of a matrix that satisfy some criteria given by an expression

>> x=magic(4) %Create a magic array with 4x4 elements
x =
  16   2   3  13
   5  11  10   8
   9   7   6  12
   4  14  15   1

Getting all elements of the matrix whose value is above 10

>> x(x>10)
ans =
    16
    11
    14
    15
    13
    12

Getting the row and column subscripts of those elements using the “find” function.

>> [i,j]=find(x>10)
i =
     1
     2
     4
     4
     1
     3
j =
     1
     2
     2
     3
     4
     4

Sometimes, when you run your script you might encounter ‘Inf’ (IEEE arithmetic representation for positive infinity) or ‘-Inf’ (representation for negative infinity) values sitting in a matrix. See the following example.

>> k=[1 4 5 0 6 7 0 -1]
x=1./k %Finding the reciprocal of each element and storing it as a vector
k =
     1     4     5     0     6     7     0    -1
x =
    1.0000    0.2500    0.2000       Inf    0.1667    0.1429       Inf   -1.0000

The rest of your script may require you to fix this by replacing the Inf values with 0. This can be achieved using the “isinf” function in Matlab. The “isinf” function returns a logical array specifying whether each element is Inf (it also detects -Inf).

>> x(isinf(x))=0
x =
    1.0000    0.2500    0.2000         0    0.1667    0.1429         0   -1.0000

There are other functions similar to “isinf”, like “isspace”, “isnan”, etc., that look for a specific condition and return a logical array depending on the result.



Other Resources on Matlab’s Matrix Indexing:

[1] Matlab documentation on Matrix indexing↗
[2]A Book detailing how MATLAB code can be written in order to maximize its effectiveness.
Richard K Johnson, “The Elements of MATLAB Style “, Cambridge University Press,ISBN-13: 978-0521732581↗

Cholesky decomposition: Python & Matlab

Cholesky decomposition is an efficient method for factorization, and hence inversion, of symmetric positive-definite matrices. Let’s demonstrate the method in Python and Matlab.

Cholesky factor

Any symmetric positive definite matrix \(A\) can be factored as

$$ A = L L^T $$

where \(L\) is a lower triangular matrix. The lower triangular matrix \(L\) is often called the “Cholesky factor of \(A\)”. The matrix \(L\) can be interpreted as the square root of the positive definite matrix \(A\).

Basic Algorithm to find Cholesky Factorization:

Note: In the following text, variables represented by Greek letters are scalar values, variables represented by lowercase Latin letters are column vectors and variables represented by uppercase Latin letters are matrices.

Given a positive definite matrix \(A\) of size \(n \times n\), it is partitioned as follows.

$$ A = \begin{bmatrix} \alpha_{11} & a_{21}^T \\ a_{21} & A_{22} \end{bmatrix} \qquad L = \begin{bmatrix} l_{11} & 0 \\ l_{21} & L_{22} \end{bmatrix} $$

\(\alpha_{11}, l_{11}\) : first element of \(A\) and \(L\) respectively
\(a_{21}, l_{21}\) : column vector at the first column, starting from the second row, of \(A\) and \(L\) respectively
\(A_{22}, L_{22}\) : remaining lower part of the matrix of \(A\) and \(L\) respectively, of size \((n-1) \times (n-1)\)

Having partitioned the matrix \(A\) as shown above, the Cholesky factorization can be computed by the following iterative procedure.

Steps in computing the Cholesky factorization:

Step 1: Compute the scalar: \(l_{11} = \sqrt{\alpha_{11}}\)
Step 2: Compute the column vector: \(l_{21} = a_{21} / l_{11}\)
Step 3: Compute the matrix: \(A_{22}^{\prime} = A_{22} - l_{21} l_{21}^T\)
Step 4: Replace \(A\) with \(A_{22}^{\prime}\), i.e., \(A \leftarrow A_{22}^{\prime}\)
Step 5: Repeat from step 1 until the matrix size at Step 4 becomes \(1 \times 1\).

Matlab Program (implementing the above algorithm):

Function 1: [F]=cholesky(A,option)

function [F]=cholesky(A,option)
%Function to find the Cholesky factor of a Positive Definite matrix A
%Author Mathuranathan for https://www.gaussianwaves.com
%Licensed under Creative Commons: CC-NC-BY-SA 3.0
%A = positive definite matrix
%Option can be one of the following 'Lower','Upper'
%L = Cholesky factorization of A such that A=LL^T
%If option ='Lower', then it returns the Cholesky factored matrix L in
%lower triangular form
%If option ='Upper', then it returns the Cholesky factored matrix L in
%upper triangular form    

%Test for positive definiteness (symmetry also needs to be satisfied)
%Check if the matrix is symmetric
if ~isequal(A,A'),
    error('Input Matrix is not Symmetric');
end
    
if isPositiveDefinite(A),
    [m,n]=size(A);
    L=zeros(m,m);%Initialize to all zeros
    row=1;col=1;
    j=1;
    for i=1:m,
        a11=sqrt(A(1,1));
        L(row,col)=a11;
        if(m~=1), %Not yet reached the last 1x1 partition
            L21=A(j+1:m,1)/a11;
            L(row+1:end,col)=L21;
            A=(A(j+1:m,j+1:m)-L21*L21');
            [m,n]=size(A);
            row=row+1;
            col=col+1;
        end
    end
    switch nargin
        case 2
            if strcmpi(option,'upper'),F=L';
            else
                if strcmpi(option,'lower'),F=L;
                else error('Invalid option');
                end
            end
        case 1
            F=L;
        otherwise
            error('Not enough input arguments')
    end
else
        error('Given Matrix A is NOT Positive definite');
end
end

Function 2: x=isPositiveDefinite(A)

function x=isPositiveDefinite(A)
%Function to check whether a given matrix A is positive definite
%Author Mathuranathan for https://www.gaussianwaves.com
%Licensed under Creative Commons: CC-NC-BY-SA 3.0
%Returns x=1, if the input matrix is positive definite
%Returns x=0, if the input matrix is not positive definite
[m,~]=size(A);
    
%Test for positive definiteness
x=1; %Flag to check for positiveness
for i=1:m
    subA=A(1:i,1:i); %Extract upper left kxk submatrix
    if(det(subA)<=0), %If the determinant of the kxk submatrix is not positive, A is not positive definite
        x=0;
        break;
    end
end
%For debug
%if x, display('Given Matrix is Positive definite');
%else display('Given Matrix is NOT positive definite'); end    
end

Sample Run:

A is a randomly generated positive definite matrix. To generate a random positive definite matrix check the link in “external link” section below.

>> A=[3.3821 ,0.8784,0.3613,-2.0349; 0.8784, 2.0068, 0.5587, 0.1169; 0.3613, 0.5587, 3.6656, 0.7807; -2.0349, 0.1169, 0.7807, 2.5397];

>> cholesky(A,'Lower')

>> cholesky(A,'upper')
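As a quick sanity check (not part of the original listing), the returned factor should reconstruct A up to numerical round-off:

>> L_chol=cholesky(A,'Lower');
>> max(max(abs(L_chol*L_chol'-A))) %should be close to zero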

Python (numpy)

Let us verify the above results using Python’s Numpy package. The numpy module numpy.linalg contains the cholesky function for computing the Cholesky decomposition (it returns the factor in lower triangular form). It can be invoked as follows.

>>> import numpy as np

>>> A=[[3.3821 ,0.8784,0.3613,-2.0349],\
       [0.8784, 2.0068, 0.5587, 0.1169],\
       [0.3613, 0.5587, 3.6656, 0.7807],\
       [-2.0349, 0.1169, 0.7807, 2.5397]]
>>> np.linalg.cholesky(A)
>>>
array([[ 1.83904867,  0.        ,  0.        ,  0.        ],
       [ 0.47763826,  1.33366476,  0.        ,  0.        ],
       [ 0.19646027,  0.34856065,  1.87230041,  0.        ],
       [-1.106496  ,  0.48393333,  0.44298574,  0.94071184]])


Related Topics

[1]An Introduction to Estimation Theory
[2]Bias of an Estimator
[3]Minimum Variance Unbiased Estimators (MVUE)
[4]Maximum Likelihood Estimation
[5]Maximum Likelihood Decoding
[6]Probability and Random Process
[7]Likelihood Function and Maximum Likelihood Estimation (MLE)
[8]Score, Fisher Information and Estimator Sensitivity
[9]Introduction to Cramer Rao Lower Bound (CRLB)
[10]Cramer Rao Lower Bound for Scalar Parameter Estimation
[11]Applying Cramer Rao Lower Bound (CRLB) to find a Minimum Variance Unbiased Estimator (MVUE)
[12]Efficient Estimators and CRLB
[13]Cramer Rao Lower Bound for Phase Estimation
[14]Normalized CRLB - an alternate form of CRLB and its relation to estimator sensitivity
[15]Cramer Rao Lower Bound (CRLB) for Vector Parameter Estimation
[16]The Mean Square Error – Why do we use it for estimation problems
[17]How to estimate unknown parameters using Ordinary Least Squares (OLS)
[18]Essential Preliminary Matrix Algebra for Signal Processing
[19]Why Cholesky Decomposition ? A sample case:
[20]Tests for Positive Definiteness of a Matrix
[21]Solving a Triangular Matrix using Forward & Backward Substitution
[22]Cholesky Factorization - Matlab and Python
[23]LTI system models for random signals – AR, MA and ARMA models
[24]Comparing AR and ARMA model - minimization of squared error
[25]Yule Walker Estimation
[26]AutoCorrelation (Correlogram) and persistence – Time series analysis
[27]Linear Models - Least Squares Estimator (LSE)
[28]Best Linear Unbiased Estimator (BLUE)


Check Positive Definite Matrix in Matlab

It is often required to check if a given matrix is positive definite or not. Three methods to check the positive definiteness of a matrix were discussed in a previous article.

I will utilize test method 2 to implement a small Matlab code to check if a matrix is positive definite. Test method 2 relies on the fact that for a positive definite matrix, the determinants of all upper-left sub-matrices are positive. The following Matlab code uses an inbuilt Matlab function – ‘det’ – which gives the determinant of an input matrix.

Furthermore, the successive upper-left \(k \times k\) sub-matrices are obtained by using the following notation

subA=A(1:i,1:i)

I will explain how this notation works to give the required sub-matrices.

Consider a sample matrix given below

$$ \begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9 \end{bmatrix}$$

The sub-matrices for various combinations of row and column ranges in the above-mentioned code snippet are given below.

Notation Submatrix Comment
subA=A(1:1,1:1) \( \begin{bmatrix} 1 \end{bmatrix}\) Select elements from 1st row-1st column to 1st row-1st column
subA=A(1:2,1:2) \( \begin{bmatrix} 1 & 2 \\ 4 & 5 \end{bmatrix}\) Select elements from 1st row-1st column to 2nd row-2nd column
subA=A(1:3,1:3) \( \begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6 \\ 7 & 8 & 9\end{bmatrix}\) Select elements from 1st row-1st column to 3rd row-3rd column
subA=A(1:3,1:2) \( \begin{bmatrix} 1 & 2 \\ 4 & 5 \\ 7 & 8 \end{bmatrix}\) Select elements from 1st row-1st column to 3rd row-2nd column

Matlab Code to test if a matrix is positive definite:

function x=isPositiveDefinite(A)
%Function to check whether a given matrix A is positive definite
%Author Mathuranathan for https://www.gaussianwaves.com
%Returns x=1, if the input matrix is positive definite
%Returns x=0, if the input matrix is not positive definite
%Throws error if the input matrix is not symmetric

    %Check if the matrix is square (a prerequisite for symmetry)
    [m,n]=size(A); 
    if m~=n,
        error('A is not Symmetric');
    end
    
    %Test for positive definiteness
    x=1; %Flag to check for positiveness
    for i=1:m
        subA=A(1:i,1:i); %Extract upper left kxk submatrix
        if(det(subA)<=0), %If the determinant of the kxk submatrix is not positive, A is not positive definite
            x=0;
            break;
        end
    end
    
    if x
        display('Given Matrix is Positive definite');
    else
        display('Given Matrix is NOT positive definite');
    end      
end

Sample run:

>> A=[1 2 3; 4 5 6; 7 8 9]
\(A =\begin{bmatrix}
1 & 2 & 3\\
4 & 5 & 6\\
7 & 8 & 9\end{bmatrix}\)
>> x=isPositiveDefinite(A)
Given Matrix is NOT positive definite
x = 0
------------------------------------------
>> A=[25 15 -5; 15 18 0;-5 0 11]
\(A =\begin{bmatrix}
25 & 15 & -5\\
15 & 18 & 0\\
-5 & 0 & 11 \end{bmatrix}\)
>> x=isPositiveDefinite(A)
Given Matrix is Positive definite
x = 1
------------------------------------------
>> A=[1 2 3; 4 5 6]
\(A =\begin{bmatrix}
1 & 2 & 3\\
4 & 5 & 6 \end{bmatrix}\)
>> x=isPositiveDefinite(A)
Error using isPositiveDefinite (line 11)
A is not Symmetric
------------------------------------------