## Recent Articles

Tips & Tricks - Indexing in Matlab

Consider a sample vector in Matlab:

```matlab
>> x = [9 21 6 8 7 3 18 -1 0 4]
x = 9 21 6 8 7 3 18 -1 0 4

% Index with a single value
>> x(2)
ans = 21

% Index with a vector of indices
>> x([1 4 6])
ans = 9 8 3

% Select a range of elements using the ':' operator.
% Example: select elements with index ranging from 1 to 5
>> x(1:5)
ans = 9 21 6 8 7

% Make a new vector by swapping the two halves of the vector x
>> x([6:10 1:5])
ans = 3 18 -1 0 4 9 21 6 8 7

% Refer to the last element of the vector x with the 'end' keyword
>> x(end)
ans = 4

% Arithmetic is allowed on 'end': select all elements except the last
>> x(1:end-1)
ans = 9 21 6 8 7 3 18 -1 0

% Reverse the vector
>> x(end:-1:1)
ans = 4 0 -1 18 3 7 8 6 21 9

% Use 'end' in a range: select from the fifth element to the end
>> x(5:end)
ans = 7 3 18 -1 0 4

% Replace specific elements by placing the new values on the right-hand
% side of the expression: replace the values of x at positions [2 5 8]
>> x([2 5 8]) = [11 14 -6]
x = 9 11 6 8 14 3 18 -6 0 4

% Replace several elements with the same value: set positions 1 and 10 to 40
>> x([1 10]) = 40
x = 40 11 6 8 14 3 18 -6 0 40
```

Two dimensional arrays:

```matlab
>> x = magic(4)   % create a 4x4 magic square
x = 16  2  3 13
     5 11 10  8
     9  7  6 12
     4 14 15  1
```

The elements of a two-dimensional array are accessed with two subscript indices, as in x(i,j), where read more
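For readers more familiar with Python, the same idioms translate almost one-for-one to NumPy, keeping in mind that NumPy indexing is 0-based and that slice ranges exclude the end point. A quick sketch (this example is mine, not from the article):

```python
import numpy as np

x = np.array([9, 21, 6, 8, 7, 3, 18, -1, 0, 4])

print(x[1])          # single element, Matlab x(2) -> 21
print(x[[0, 3, 5]])  # index with a list of indices, Matlab x([1 4 6]) -> 9 8 3
print(x[0:5])        # range, Matlab x(1:5) -> 9 21 6 8 7
print(x[-1])         # last element, Matlab x(end) -> 4
print(x[:-1])        # all but the last element, Matlab x(1:end-1)
print(x[::-1])       # reversed, Matlab x(end:-1:1)

x[[1, 4, 7]] = [11, 14, -6]  # replace elements at Matlab positions [2 5 8]
x[[0, 9]] = 40               # replace positions 1 and 10 with the same value
print(x)                     # 40 11 6 8 14 3 18 -6 0 40
```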

Cholesky Factorization and Matlab code

Any $latex n \times n$ positive definite matrix $latex A$ can be factored as $latex A=LL^T &s=2$ where $latex L$ is an $latex n \times n$ lower triangular matrix. The lower triangular matrix $latex L$ is often called the “Cholesky factor of $latex A$” and can be interpreted as the square root of the positive definite matrix $latex A$.

Basic algorithm to find the Cholesky factorization:

Note: In the following text, variables represented in Greek letters are scalars, variables in lowercase Latin letters are column vectors, and variables in capital Latin letters are matrices.

Given a positive definite matrix $latex A$, it is partitioned as follows.

$latex \alpha_{11}, \lambda_{11}$ = first element of $latex A$ and $latex L$ respectively
$latex a_{21}, l_{21}$ = column vector holding the rest of the first column (rows 2 to n) of $latex A$ and $latex L$ respectively
$latex A_{22}, L_{22}$ = remaining lower-right $latex (n-1) \times (n-1)$ submatrix of $latex A$ and $latex L$ respectively

Having partitioned the matrix $latex A$ as shown above, the Cholesky factorization can be computed by the following iterative procedure.

Steps in computing the Cholesky factorization:
Step 1: Compute the scalar: $latex \lambda_{11}=\sqrt{\alpha_{11}}$
Step 2: Compute the column vector: $latex l_{21}= a_{21}/\lambda_{11}$
Step 3: Update the submatrix: $latex A_{22} = A_{22} - l_{21} l^T_{21}$ (this follows from the partition identity $latex A_{22}=l_{21} l^T_{21}+L_{22}L^T_{22}$)
Step 4: Replace $latex A$ with the updated $latex A_{22}$, i.e., $latex A=A_{22}$
Step 5: Repeat from Step 1 until the matrix size at Step 4 becomes $latex 1 \times 1$.
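The iterative procedure above is easy to translate into code. The sketch below is a NumPy rendering of the five steps (it is not the article's Matlab listing, and it omits the positive-definiteness guard for brevity):

```python
import numpy as np

def cholesky_lower(A):
    """Return lower-triangular L with A = L L^T, following the
    partition-based iteration: lambda11 = sqrt(alpha11),
    l21 = a21/lambda11, then recurse on A22 - l21 l21^T."""
    A = np.array(A, dtype=float)   # work on a copy
    n = A.shape[0]
    L = np.zeros((n, n))
    for k in range(n):
        L[k, k] = np.sqrt(A[k, k])                          # Step 1
        L[k+1:, k] = A[k+1:, k] / L[k, k]                   # Step 2
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], L[k+1:, k])   # Steps 3-5
    return L

A = np.array([[4.0, 2.0], [2.0, 3.0]])
L = cholesky_lower(A)
print(L)   # lower triangular, with L @ L.T reproducing A
```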
Matlab Program (implementing the above algorithm):

Function 1: [F]=cholesky(A,option)

```matlab
function [F]=cholesky(A,option)
%Function to find the Cholesky factor of a Positive Definite matrix A
%Author Mathuranathan for http://www.gaussianwaves.com
%Licensed under Creative Commons: CC-NC-BY-SA 3.0
%A = positive definite matrix
%Option can be one of the following: 'Lower','Upper'
%L = Cholesky factorization of A such that A=LL^T
%If option = 'Lower', it returns the Cholesky factored matrix L in
%lower triangular form
%If option = 'Upper', it returns the Cholesky factored matrix L in
%upper triangular form

%Test for positive definiteness
if isPositiveDefinite(A),
    [m,n]=size(A);
    L=zeros(m,m); %Initialize to all zeros
    row=1; col=1;
    j=1;
    for i=1:m,
        a11=sqrt(A(1,1));
        L(row,col)=a11;
```
read more

Check Positive Definite Matrix in Matlab

It is often required to check if a given matrix is positive definite or not. Three methods to check the positive definiteness of a matrix were discussed in a previous article. I will use test method 2 to implement a small Matlab code to check if a matrix is positive definite. Test method 2 relies on the fact that for a positive definite matrix, the determinants of all upper-left sub-matrices are positive. The following Matlab code uses the inbuilt Matlab function 'det', which gives the determinant of an input matrix. Furthermore, the successive upper-left kxk sub-matrices are obtained using the following notation:

subA=A(1:i,1:i)

I will explain how this notation gives the required sub-matrices. Consider the sample matrix given below:

$latex \begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9 \end{bmatrix}$

The sub-matrices for various combinations of row and column ranges in the above notation are given below.

| Notation | Submatrix | Comment |
|---|---|---|
| subA=A(1:1,1:1) | $latex \begin{bmatrix} 1 \end{bmatrix}$ | Select elements from 1st row-1st column to 1st row-1st column |
| subA=A(1:2,1:2) | $latex \begin{bmatrix} 1 & 2 \\ 4 & 5 \end{bmatrix}$ | Select elements from 1st row-1st column to 2nd row-2nd column |
| subA=A(1:3,1:3) | $latex \begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6 \\ 7 & 8 & 9\end{bmatrix}$ | Select elements from 1st row-1st column to 3rd row-3rd column |
| subA=A(1:3,1:2) | $latex \begin{bmatrix} 1 & 2 \\ 4 & 5 \\ 7 & 8 \end{bmatrix}$ | Select elements from 1st row-1st column to 3rd row-2nd column |

Matlab code to test if a matrix is positive definite:

```matlab
function x=isPositiveDefinite(A)
%Function to check whether a given matrix A is positive definite
%Author Mathuranathan for http://www.gaussianwaves.com
%Returns x=1, if the input matrix is positive definite
%Returns x=0, if the input matrix is not positive definite
%Throws error if the input matrix is not symmetric

%Check that the matrix is square (a prerequisite for symmetry)
[m,n]=size(A);
if m~=n,
    error('A is not Symmetric');
end

%Test for positive definiteness
x=1; %Flag to check for positiveness
for i=1:m
    subA=A(1:i,1:i); %Extract upper-left kxk submatrix
    if det(subA) <= 0 %If any determinant is non-positive, A is not positive definite
        x=0;
        break;
    end
end
```

```matlab
>> A=[1 2 3; 4 5 6; 7 8 9]
A = 1 2 3
    4 5 6
    7 8
```
read more
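For comparison, the same determinant-based test (method 2) can be sketched in NumPy; the function name mirrors the Matlab one but is otherwise an assumption of this sketch:

```python
import numpy as np

def is_positive_definite(A):
    """Test method 2: all leading principal minors (determinants of the
    upper-left k-by-k submatrices) must be positive."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    if m != n or not np.allclose(A, A.T):
        raise ValueError('A is not symmetric')
    for k in range(1, m + 1):
        if np.linalg.det(A[:k, :k]) <= 0:   # subA = A(1:k,1:k) in Matlab
            return False
    return True

print(is_positive_definite([[2, -1, 0], [-1, 2, -1], [0, -1, 2]]))  # True
```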

Solving a Triangular Matrix using Forward & Backward Substitution

Forward Substitution: Consider a set of equations in matrix form $latex Ax=b$, where $latex A$ is a lower triangular matrix with non-zero diagonal elements. Writing the equation out in full matrix form, it can be solved by computing the unknowns top-down: $latex x_1 = b_1/a_{11}$, and in general

$latex x_i = \left( b_i - \sum_{j=1}^{i-1} a_{ij} x_j \right) \Big/ a_{ii}, \quad i = 1, 2, \ldots, m &s=2$

From the DSP implementation point of view, computation of $latex x_1$ requires one floating-point operation (FLOP) – only one division. Computing $latex x_2$ requires 3 FLOPs – 1 multiplication, 1 division and 1 subtraction; $latex x_3$ requires 5 FLOPs – 2 multiplications, 1 division and 2 subtractions. Thus the computation of $latex x_m$ requires $latex (2m-1)$ FLOPs, and the overall cost of forward substitution is $latex 1+3+5+\cdots+(2m-1)$ $latex = m^2$ FLOPs.

Backward substitution: Consider a set of equations in matrix form $latex Ax=b$, where $latex A$ is an upper triangular matrix with non-zero diagonal elements. Writing the equation out in full matrix form, it is solved by the analogous algorithm that starts from the last unknown, $latex x_m = b_m/a_{mm}$, and proceeds bottom-up:

$latex x_i = \left( b_i - \sum_{j=i+1}^{m} a_{ij} x_j \right) \Big/ a_{ii}, \quad i = m, m-1, \ldots, 1 &s=2$

This also requires $latex m^2$ FLOPs. More on Estimation Theory: Recommended Books on Estimation read more
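The two algorithms can be sketched directly from the formulas above; this NumPy version (an illustrative sketch, not from the article) makes the growing inner-product structure behind the $latex m^2$ FLOP count explicit:

```python
import numpy as np

def forward_substitution(A, b):
    """Solve Ax = b where A is lower triangular with non-zero diagonal.
    x_i = (b_i - sum_{j<i} a_ij x_j) / a_ii, computed top-down."""
    m = len(b)
    x = np.zeros(m)
    for i in range(m):
        x[i] = (b[i] - A[i, :i] @ x[:i]) / A[i, i]
    return x

def backward_substitution(A, b):
    """Solve Ax = b where A is upper triangular with non-zero diagonal.
    x_i = (b_i - sum_{j>i} a_ij x_j) / a_ii, computed bottom-up."""
    m = len(b)
    x = np.zeros(m)
    for i in range(m - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

L = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [4.0, 5.0, 6.0]])
b = L @ np.array([1.0, 2.0, 3.0])   # so the exact solution is [1, 2, 3]
print(forward_substitution(L, b))   # [1. 2. 3.]
```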

A tutorial on Fourier Analysis - Fourier Series

Fourier analysis and Fourier synthesis: Fourier analysis, a term named after the French mathematician Joseph Fourier, is the process of breaking down a complex function and expressing it as a combination of simpler functions. The opposite process, combining simpler functions to reconstruct the complex function, is termed Fourier synthesis. Mostly, the simpler functions are chosen to be sine and cosine functions. Thus, "Fourier analysis" expresses a complex function in terms of sine and cosine terms, and "Fourier synthesis" reconstructs the complex function from those sine and cosine terms.

Frequency is a measure of the number of repetitive occurrences of a particular event. By definition, a sine wave is a smooth curve that repeats at a certain frequency. Thus, the terms "frequency" and "sine" are almost synonymous. A cosine wave is also a sine wave, but with a 90° phase shift. Therefore, when you talk about sine and cosine functions, you are talking in terms of "frequencies". That is why, in signal processing, Fourier analysis is applied in frequency (or spectrum) analysis.

Fourier series, the Continuous Fourier Transform, the Discrete Fourier Transform, and the Discrete Time Fourier Transform are some of the variants of Fourier analysis.

Fourier series: Applied to functions that are periodic. A periodic function is broken down and expressed in terms of sine and cosine terms. In mathematics, the term "series" represents the sum of a sequence of numbers. For example, we can form a series from a sequence of numbers that follows a geometric progression (constant common ratio between successive numbers). Common ratio = 3: 1+3+9+27+… An infinite series is a series that has an infinite number of terms. If the terms of an infinite geometric series have a common ratio whose magnitude is less than 1, the sum converges to a finite value. Fourier series falls under the category of trigonometric infinite series, where the individual elements of the series are expressed trigonometrically.

For a function of period $latex 2\pi$, the construct of the Fourier series is given by

$latex f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos nx + b_n \sin nx \right) &s=2$

Here $latex f(x)$ is the complex periodic function we wish to break down in terms of sine and cosine basis functions. The coefficients $latex a_0, a_1, \ldots$ and $latex b_1, b_2, \ldots$ can be found from

$latex a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos nx \; dx, \quad b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin nx \; dx &s=2$

Functions and symmetry: It is necessary to classify functions according to their symmetry properties; doing so saves computation time and effort. Functions fall into "odd symmetry", "even symmetry" or "no symmetry". Symmetry can be ascertained by plotting the function on graph paper and folding it along the y axis. Symmetry read more
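As a small illustration of Fourier synthesis (this example is mine, not the article's): the odd-harmonic sine series $latex \frac{4}{\pi}\sum_{k=0}^{\infty}\frac{\sin((2k+1)x)}{2k+1}$ converges to a unit square wave, and the partial sums get progressively closer away from the jump points:

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Fourier synthesis of a unit square wave from its sine series:
    (4/pi) * sum over the first n_terms odd harmonics of sin(n x)/n."""
    total = np.zeros_like(x, dtype=float)
    for k in range(n_terms):
        n = 2 * k + 1
        total += np.sin(n * x) / n
    return 4.0 / np.pi * total

x = np.linspace(0.5, np.pi - 0.5, 50)   # stay away from the discontinuities
err_10 = np.max(np.abs(square_wave_partial_sum(x, 10) - 1.0))
err_1000 = np.max(np.abs(square_wave_partial_sum(x, 1000) - 1.0))
print(err_10, err_1000)   # the error shrinks as more terms are added
```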

Data Acquisition through Wireless Sensor Networks – Part 2

Data Acquisition through Wireless Sensor Networks - Part 1

Authors: Akhilesh Agrawal & Rishav Rej - Birla Institute of Technology and Science (BITS), Pilani Hyderabad Campus

Abstract: This report deals with the design, development and implementation of a temperature sensor using a ZigBee module interfaced with a PIC microcontroller. The main aim of the work undertaken in this report is to sense the temperature and to display the result on the central server using ZigBee technology. ZigBee operates in the industrial, scientific and medical (ISM) radio bands: 868 MHz in Europe, 915 MHz in the USA and 2.4 GHz in most jurisdictions worldwide. The technology is intended to be simpler and cheaper than other WPANs such as Bluetooth. The most capable ZigBee node type is said to require only about 10% of the software of a typical Bluetooth or wireless Internet node, while the simplest nodes are about 2%; however, actual code sizes are much higher, more like 50% of the Bluetooth code size. In the design and development of the temperature sensor undertaken in this work, the sensed temperature signal is amplified and fed to the microcontroller, which is connected to a ZigBee module that transmits the data; at the other end, a receiving ZigBee module reads the data and displays it on the central server. The software developed is highly accurate and works at a very high speed. The method developed shows the effectiveness of the scheme employed.

Keywords—ZigBee, Microcontroller, PIC, Transmitter, Receiver, Synchronous, Bluetooth, Communication.

I. Introduction
This section gives a brief introduction to the work and describes all the components, namely ZigBee, the temperature sensor, and the PIC18F4550. ZigBee is a wireless network protocol specifically designed for low data rate sensor and control networks.
A brief literature survey of work related to the research topic is also presented in the following paragraphs. ZigBee is backed by a consortium of software, hardware and services companies that has developed a common standard for wireless networking of sensors and controllers. While other wireless standards are concerned with exchanging large amounts of data, ZigBee is for devices that have smaller throughput needs. The other driving factors are low cost, high reliability, high security, low battery usage, simplicity and interoperability with other ZigBee devices. Compared to other wireless protocols, the ZigBee wireless protocol offers low complexity. It also offers three frequency bands of operation along with a number of network configurations and optional security capability. It requires a supply voltage in the range of 2.8 V to 3.3 V. read more

Top books on basics of Communication Systems

Tests for Positive Definiteness of a Matrix

In order to perform Cholesky decomposition of a matrix, the matrix has to be positive definite. I have listed down a few simple methods to test the positive definiteness of a matrix.

Methods to test positive definiteness: Remember that the term positive definiteness is valid only for symmetric matrices.

Test method 1: Existence of all positive pivots. For a matrix to be positive definite, all the pivots of the matrix should be positive. Hmm.. what is a pivot? Pivots: Pivots are the first non-zero elements in each row of a matrix that is in row-echelon form. Row-echelon form of a matrix is the final resultant matrix of the Gaussian elimination technique. In the following matrices, the pivots are encircled. A positive definite matrix will have all positive pivots. Only the second matrix shown above is a positive definite matrix; it is also the only symmetric one.

Test method 2: Determinants of all upper-left sub-matrices are positive. The determinant of every $latex k \times k$ upper-left sub-matrix must be positive. Break the matrix into several sub-matrices by progressively taking $latex k \times k$ upper-left elements. If the determinants of all the sub-matrices are positive, then the original matrix is positive definite. Is the following matrix positive definite?

$latex \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \\ \end{bmatrix} &s=2$

Find the determinants of all possible $latex k \times k$ upper-left sub-matrices.

$latex \begin{vmatrix} 2 \end{vmatrix} = 2 &s=2$

$latex \begin{vmatrix} 2 & -1 \\-1 & 2 \\\end{vmatrix} = 3 &s=2$

$latex \begin{vmatrix} 2 & -1 & 0 \\-1 & 2 & -1 \\0 & -1 & 2 \\ \end{vmatrix} = 4 &s=2$

$latex \mbox{All determinants are positive} \Rightarrow \mbox{Positive definite}$

Test method 3: All positive eigenvalues. If all the eigenvalues of a symmetric matrix are positive, then it is a positive definite matrix. Is the following matrix positive definite?

$latex \begin{bmatrix} 9 & 0 & -8 \\ 6 & -5 & -2 \\ -9 & 3 & 3 \end{bmatrix} &s=2$

$latex \mbox{Eigenvalues are : -5.90, -1.57, 14.48} &s=2$

Since not all the eigenvalues are positive, the above matrix is NOT a positive definite matrix (note that it is not even symmetric, so it fails the symmetry prerequisite as well). There exist several methods to determine the positive definiteness of a matrix. The methods listed here are simple and can be carried out manually for smaller matrices. The following online tools will be of help: 1) Online tool to generate eigenvalues and eigenvectors 2) Transforming a matrix to reduced row-echelon form

See also: Books on Estimation read more
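Test method 3 is a one-liner in most numerical environments. A NumPy sketch (mine, not from the article), applied to the symmetric matrix from test method 2; note that `np.linalg.eigvalsh` assumes a symmetric input:

```python
import numpy as np

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

eigs = np.linalg.eigvalsh(A)       # eigenvalues of a symmetric matrix
print(eigs)                        # 2-sqrt(2), 2, 2+sqrt(2): all positive
print(bool(np.all(eigs > 0)))      # True -> positive definite
```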

Why Cholesky Decomposition ? A sample case:

Matrix inversion is seen ubiquitously in signal processing applications. For example, matrix inversion is an important step in channel estimation and equalization. For instance, in the GSM normal burst, a 26-bit training sequence is placed amidst the 114 information bits. When the burst travels over the air interface (channel), it is subject to distortions due to channel effects like Inter Symbol Interference (ISI). It becomes necessary to estimate the channel impulse response (H) and equalize the effects of the channel before attempting to decode/demodulate the information bits. The training bits are used to estimate the channel impulse response. If the transmitted signal "x" travels over a multipath fading channel (H) with AWGN noise "w", the received signal is modeled as

$latex y = Hx + w &s=2$

A Minimum Mean Square Error (MMSE) linear equalizer, employed to cancel out the effects of ISI, attempts to minimize the error between the equalizer output "$latex \hat{x}$" and the transmitted signal "$latex x$". If the AWGN noise power is $latex \sigma^2$, the equalizer is represented by the following equation [1]:

$latex \hat{x} = \left( H^H H + \sigma^2 I \right)^{-1} H^H y &s=2$

Note that the expression involves the computation of the matrix inverse $latex \left( H^H H + \sigma^2 I \right)^{-1}$. Matrix inversion is a tricky subject. Not all matrices are invertible. Furthermore, the ordinary matrix inversion technique of finding the adjoint of a matrix and using it to invert the matrix consumes lots of memory and computation time. Physical layer (PHY) algorithm designers typically use Cholesky decomposition to invert the matrix. This helps to reduce the computational complexity of matrix inversion.

Reference: [1] Marius Vollmer et al., "Comparative Study of Joint Detection Technique for DS-CDMA Based Mobile Radio System", IEEE Journal on Selected Areas in Communications, Vol. 19, No. 8, August

See Also: read more
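To make the point concrete, here is a NumPy sketch (my own illustration, with made-up dimensions and a real-valued channel) of the MMSE solve carried out via a Cholesky factor instead of an explicit inverse. For brevity it uses `np.linalg.solve` on the triangular factors rather than a dedicated triangular solver:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
H = rng.standard_normal((n, n))              # toy channel matrix (illustrative)
x = rng.standard_normal(n)                   # transmitted symbols
sigma2 = 0.01                                # AWGN noise power
y = H @ x + np.sqrt(sigma2) * rng.standard_normal(n)

# MMSE equalizer: x_hat = (H^T H + sigma^2 I)^{-1} H^T y (real-valued case).
# A = H^T H + sigma^2 I is positive definite, so factor A = L L^T and
# solve two triangular systems instead of forming the inverse.
A = H.T @ H + sigma2 * np.eye(n)
L = np.linalg.cholesky(A)
z = np.linalg.solve(L, H.T @ y)              # forward solve: L z = H^T y
x_hat = np.linalg.solve(L.T, z)              # backward solve: L^T x_hat = z
print(np.max(np.abs(x_hat - x)))             # estimation error
```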

Essential Preliminary Matrix Algebra for Signal Processing

I thought of making a post on Cholesky decomposition, which is a very essential technique in digital signal processing applications like generating correlated random variables, solving linear equations, channel estimation, etc. But jumping straight to the topic of Cholesky decomposition would leave many flummoxed. So I decided to touch on some essentials of basic matrix algebra before taking up advanced topics.

Prerequisites: Basic knowledge of topics like formation of matrices, determinants, rank of a matrix, inverse and transpose of a matrix, and solving a system of simultaneous equations using matrix algebra. With the prerequisites listed above fulfilled, you will learn about different types of matrices in this post.

1. Vector: A matrix with only one row or one column.

$latex \text{Row Vector with 5 elements: } A = \begin{bmatrix}a & b & c & d & e \end{bmatrix} &s=2$

$latex \text{Column Vector with 5 elements: } B = \begin{bmatrix}a \\ b \\ c \\ d \\ e \end{bmatrix} &s=2$

2. Transpose of a Matrix: The transpose of a matrix is formed by interchanging rows and columns. For example, the first row of the matrix becomes the first column of the transposed matrix, the second row becomes the second column, and so on.

$latex A = \begin{bmatrix} a & b & c \\ d & e & f\\ g & h & i \end{bmatrix} \;\;\; A^T = \begin{bmatrix} a & d & g \\ b & e & h\\ c & f & i \end{bmatrix} &s=2$

$latex (A^T)_{ij} = A_{ji} &s=2$

3. Square Matrix: A matrix with an equal number of rows and columns. (Note: The sample matrices shown below are of 3x3 dimension. They can be readily extended to nxn dimension.)

$latex \begin{bmatrix} a & b & c \\ d & e & f\\ g & h & i \end{bmatrix} &s=2$

A square matrix is further classified into the following types.

4. Symmetric Matrix: If a square matrix and its transpose are equal, then it is called a symmetric square matrix or simply symmetric matrix.
$latex A = \begin{bmatrix} a & b & c \\ b & d & e\\ c & e & f \end{bmatrix} \;\;\;\;\;A^T = \begin{bmatrix} a & b & c \\ b & d & e\\ c & e & f \end{bmatrix} \;\;\;\;\;\Rightarrow A = A^T &s=2$ 5. Diagonal Matrix A special kind of symmetric matrix, with zeros in off-diagonal locations. $latex A = \begin{bmatrix} a & 0 & 0 \\ 0 & b & 0\\ 0 & 0 & c \end{bmatrix} &s=2$ 6. Scalar matrix A special kind of diagonal matrix, with read more
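These definitions are easy to verify numerically; a small NumPy sketch (my own, for illustration):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# Transpose: rows become columns, i.e. (A^T)_{ij} = A_{ji}
print(np.array_equal(A.T[0], A[:, 0]))   # first row of A^T is first column of A

# A symmetric matrix equals its own transpose; A + A^T is one way to build one
S = A + A.T
print(np.array_equal(S, S.T))            # True

# A diagonal matrix is a symmetric matrix with zero off-diagonal entries
D = np.diag([1, 2, 3])
print(np.array_equal(D, D.T))            # True
```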

How to estimate unknown parameters using Ordinary Least Squares (OLS)

As mentioned in the previous post, it is often required to estimate parameters that are unknown to the receiver. For example, if a fading channel is encountered in a communication system, it is desirable to estimate the channel response and cancel out the fading effects during reception.

Assumptions and model selection: For any estimation algorithm, certain assumptions have to be made. One of the main assumptions is the choice of an appropriate model for estimation. The chosen model should produce minimum statistical deviation and therefore provide a "good fit". A metric is often employed to determine the "goodness of fit". Tests like the "likelihood ratio test", "Chi-Square test", "Akaike Information Criterion", etc., are used to measure the goodness of the assumed statistical model, and decisions are made on the validity of the model assumption. To keep things simple, we will consider only polynomial models.

Observed data and least squares estimation: Suppose an experiment is performed and the data points shown in the following table are observed. The variable "x" is the input to the experiment and "y" is the output obtained. The input variable is often called the "regressor", "independent variable", "manipulated variable", etc. Similarly, in regression-analysis jargon, the output (observed) variable is called the "regressand", "response variable", "explained variable", etc. The aim of the experiment is to fit the observations to a model with appropriate parameters. Once we know the model, we no longer need to perform the experiment to determine the output for any given arbitrary input. All we need to do is use the model and generate the desired output.

Generic outline of an estimation algorithm: We have a very small number of data points (four, in our case). To illustrate the concept, we will choose a linear model for parameter estimation. Higher order models will certainly give better performance. Remember!!! The higher the polynomial order, the more parameters have to be estimated, and therefore the higher the computational complexity. It is necessary to strike a balance between the required performance and the model order. We assume that the data points follow a linear trend, so the linear model is chosen for the estimation problem.

$latex y = \alpha_2 x + \alpha_1 &s=2$

Now the estimation problem simplifies to finding the parameters $latex \alpha_2$ and $latex \alpha_1$. We can write the relationship between the observed variable and the input variable for each data point as $latex y_i = \alpha_2 x_i + \alpha_1$. The next step is to solve the above read more
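Since the article's data table is not reproduced in this excerpt, the sketch below fits the linear model to hypothetical data points (assumed purely for illustration) with NumPy's least-squares solver:

```python
import numpy as np

# Hypothetical observations; the article's actual table is not shown here
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.1, 4.9, 7.2, 8.8])   # roughly follows y = 2x + 1

# Linear model y = alpha2*x + alpha1 -> design matrix with columns [x, 1]
X = np.column_stack([x, np.ones_like(x)])
(alpha2, alpha1), *_ = np.linalg.lstsq(X, y, rcond=None)
print(alpha2, alpha1)   # approximately 1.94 and 1.15
```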

The Mean Square Error - Why do we use it for estimation problems

"Mean Square Error", abbreviated as MSE, is a ubiquitous term found in texts on estimation theory. Have you ever wondered what this term actually means and why it is used so often in estimation theory? Any communication system has a transmitter, a channel or medium to communicate, and a receiver. Given the channel impulse response and the channel noise, the goal of a receiver is to decipher what was sent from the transmitter. A simple channel is usually characterized by a channel response $latex h$ and an additive noise term $latex n$. In the time domain, this can be written as

$latex y = h \circledast x + n &s=2$

Here, $latex \circledast$ is the convolution operation. Equivalently, since convolution in the time domain becomes multiplication in the frequency domain (and vice versa), we may write

$latex Y = HX + N &s=2$

Remember!!! Capitalized letters indicate frequency-domain representations and lowercase letters indicate time-domain representations.

The frequency domain equation looks simple, and the only spoiler is the noise term. The receiver receives the information over the channel, corrupted by noise, and tries to decipher what was sent from the transmitter, $latex X$. If the noise were cancelled out in the receiver, $latex N = 0$, the observed spectrum at the receiver would look like

$latex Y = HX &s=2$

Now, to know $latex X$, the receiver has to know $latex H$. Then it can simply divide the observed spectrum $latex Y$ by the channel frequency response $latex H$ to get $latex X$. Unfortunately, things are not that easy. Cancellation of noise from the received samples/spectrum is the hardest part. Complete nullification of noise at the receiver is practically impossible, and the entire communication system design revolves around reducing this noise to a minimum acceptable level to achieve acceptable performance. Given the noise term, how do we know $latex H$ from the observed/received spectrum $latex Y$? This is a classical estimation problem.

Usually a known sequence (a pilot sequence in OFDM, a training sequence in GSM, etc.) is transmitted across the channel, and from that the channel response $latex H$, or the impulse response $latex h$, is estimated. This estimated channel response is used to decipher the transmitted spectrum/sequence when receiving the actual data. This type of estimation is useful if and only if the channel response remains constant read more
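The pilot-based idea is simple to sketch: with a known pilot spectrum $latex X$, the per-tone estimate is $latex \hat{H} = Y/X$, and its residual error is set by the noise. A NumPy illustration (all names, dimensions and noise levels here are assumptions of the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                                             # number of pilot tones
X = np.exp(1j * rng.uniform(0, 2 * np.pi, n))      # known unit-magnitude pilots
H = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # true channel
N = 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

Y = H * X + N            # received pilot spectrum: Y = HX + N
H_est = Y / X            # per-tone estimate; exact only if N were zero
mse = np.mean(np.abs(H_est - H) ** 2)
print(mse)               # on the order of the noise power
```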

HDD servo system using GA based Fixed Structure Robust Loop Shaping Control

Abstract: The capacity of the Hard Disk Drive (HDD) is increasing at a very aggressive rate, which leads to higher Tracks Per Inch (TPI). The servo system in an HDD therefore needs improved tracking performance and robustness. The H∞ robust controller is one of the most popular techniques for this problem; however, the order of this controller is usually much higher than that of the plant, making it difficult to implement. To overcome this problem, this paper proposes a new technique for designing a robust controller for the HDD Voice Coil Motor (VCM) actuator. The control design problem, H∞ loop shaping with a structured controller, is solved by using a genetic algorithm (GA). The infinity norm of the transfer function from disturbances to states is used as the cost function in our GA optimization problem. The proposed technique not only solves the problem of a complicated, high-order controller but also retains the robust performance of the conventional H∞ control technique. In addition, a structured pre-filter is also designed by using the GA to enhance the time-domain tracking performance. Simulation results for an HDD control system show the effectiveness of the proposed technique.

Full Paper: HDD servo system using GA based Fixed Structure Robust Loop Shaping Control

About the Authors: *Amar Nath is an expert in Servo Engineering working with Seagate Technologies, Singapore - the world's top hard-disk manufacturer. *S. Kaitwanidvilai is with the Electrical Engineering Department and the Center of Excellence for Innovative Energy Systems, KMITL, Bangkok, Thailand. ***Reproduced with permission from the first author of the read more