Convolutional Coding

Convolutional codes differ from block codes in their method of operation. A convolutional encoder operates on a serial stream of data, whereas a block encoder operates on a block of input data at a time. Another difference is the use of memory elements: convolutional encoders employ memory elements, while block codes involve no memory in the generation of the encoded data.

Convolutional codes are specified as (n,k,L), where n is the number of output bits from the encoder, k is the number of input bits to the encoder and L is the constraint length of the encoder. Different textbooks use different expressions for the constraint length, but the fundamental idea is the same: the constraint length determines the number of memory stages (flip-flops) used in the encoder. As long as we know L and the underlying formula, we can calculate the number of memory stages (m), so it does not really matter which expression for L is used. The constraint length is expressed as

L = k(m+1)

In some textbooks it is expressed as L = k × m, and in others it is even expressed as L = m + 1. We will use the first expression, L = k(m+1), throughout our discussion.

We will take up a simple convolutional code (2,1,3), where n = 2, k = 1 and L = 3 (using the expression L = k(m+1)). Let's construct the encoder from this information. The encoder has 1 input bit, 2 output bits and 2 memory elements; note that the expression L = k(m+1) leads to 2 memory elements, which store the past 2 input bits. If the expression L = k × m were used instead, a (2,1,3) encoder (L = 3) would have 3 memory elements, storing the past 3 input bits. So the expression for the constraint length has to be interpreted carefully; any misinterpretation leads to a different encoder structure altogether.
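To make the constraint-length arithmetic concrete, here is a small Python sketch (illustrative only; the article's own examples use MATLAB) comparing the memory counts implied by the two conventions for the same (2,1,3) label:

```python
# Number of memory stages m implied by constraint length L and k input bits,
# under the two conventions discussed above.

def memory_stages_kmp1(L, k):
    """Convention L = k*(m+1)  ->  m = L/k - 1 (the one used in this article)."""
    return L // k - 1

def memory_stages_km(L, k):
    """Convention L = k*m  ->  m = L/k."""
    return L // k

# For the (2,1,3) code: n = 2, k = 1, L = 3
print(memory_stages_kmp1(3, 1))  # -> 2 memory elements (this article's encoder)
print(memory_stages_km(3, 1))    # -> 3 memory elements (a different encoder!)
```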

Now we know the number of bits going into the encoder, the number of bits coming out of it, and the number of memory elements. So far the encoder is a black box, in the sense that we do not know how the memory elements are utilized to generate the output bits from the input. To fully specify the encoder structure we need the "generator polynomials", which tell us how the memory elements are connected to achieve the encoding. The generator polynomials for a specific (n,k,L) convolutional code are usually found through computer search and simulation. The set (n,k,L), together with the n generator polynomials, completely describes a convolutional encoder.

For our (2,1,3) encoder, we use two generator polynomials, one for each output:

g0 = [1 0 1]

g1 = [1 1 1]

Let's put everything together to form the encoder structure.

Figure: Encoder structure of the (2,1,3) convolutional encoder
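The encoder structure in the figure can also be expressed in code. The following Python sketch (an illustration added here, not the article's own MATLAB code) implements the (2,1,3) encoder with g0 = [1 0 1] and g1 = [1 1 1], taking modulo-2 sums over the tapped positions of the shift register:

```python
def conv_encode(bits, g0=(1, 0, 1), g1=(1, 1, 1)):
    """Encode a bit sequence with the (2,1,3) convolutional encoder.

    The window [I0, I-1, I-2] holds the current input followed by the two
    past inputs stored in the memory elements.
    """
    memory = [0, 0]  # two memory elements, initially zero
    out = []
    for bit in bits:
        window = [bit] + memory
        v0 = sum(g * b for g, b in zip(g0, window)) % 2  # first output bit
        v1 = sum(g * b for g, b in zip(g1, window)) % 2  # second output bit
        out += [v0, v1]
        memory = [bit, memory[0]]  # shift the register by one position
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 0, 1, 0, 0, 1, 0]
```

Each input bit produces two output bits (rate 1/2), and the output depends on the current input plus the two stored past inputs, exactly as described above.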

Encoder Structure, State Diagram and Trellis:

The convolutional encoder can also be represented as a finite state machine, whose entire behavior is depicted by a state diagram. The number of states in the state diagram depends on the number of memory elements in the encoder: if the number of memory elements is m, the number of states is 2^m. For the (2,1,3) convolutional encoder the number of states is 4; the two memory elements store the past inputs I-1 and I-2. Transitions between the states depend on the present input I0. In the state diagram, solid lines indicate transitions due to the input I0 = 0 and dotted lines indicate transitions due to the input I0 = 1. The output bits generated during each state transition are written in dark red (along the transition lines).

State diagrams can be easily constructed with the help of a state table as shown below.

State Table - (2,1,3) Convolutional Encoder
(state = (I-1, I-2); outputs follow from g0 = [1 0 1] and g1 = [1 1 1])

Current State | Input | Next State | Output
      00      |   0   |     00     |   00
      00      |   1   |     10     |   11
      01      |   0   |     00     |   11
      01      |   1   |     10     |   00
      10      |   0   |     01     |   01
      10      |   1   |     11     |   10
      11      |   0   |     01     |   10
      11      |   1   |     11     |   01
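The state-table rows can also be generated programmatically from the generator polynomials. The following Python sketch (illustrative, using the convention above that the state is the pair (I-1, I-2)) enumerates every transition:

```python
def next_state_and_output(state, bit, g0=(1, 0, 1), g1=(1, 1, 1)):
    """Given state (I-1, I-2) and the present input bit, return the
    next state and the pair of output bits for that transition."""
    window = (bit,) + state                               # [I0, I-1, I-2]
    v0 = sum(g * b for g, b in zip(g0, window)) % 2
    v1 = sum(g * b for g, b in zip(g1, window)) % 2
    return (bit, state[0]), (v0, v1)                      # shift register moves

# Enumerate all 4 states x 2 inputs = 8 rows of the state table
for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for bit in (0, 1):
        nxt, out = next_state_and_output(state, bit)
        print(f"state {state} --input {bit}--> {nxt}, output {out}")
```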

Figure: State Diagram for the (2,1,3) Convolutional Encoder

A trellis diagram is constructed from the state diagram. The trellis diagram helps us understand the Viterbi decoding algorithm, which is used to decode convolutionally encoded data.

Figure: Single-stage trellis diagram and a trellis depicting four stages

A single-stage trellis is illustrated first for understanding. The four possible states of the encoder are depicted as four horizontal rows. The present states are drawn on the left side and the possible next states on the right side of the trellis. The connections between the present states and the next states follow the state diagram exactly. Again, the output bits generated during each state transition are indicated in dark red. A multi-stage trellis is constructed by replicating the single stage and juxtaposing the copies side by side, as shown in the figure.
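Chaining the replicated stages amounts to walking the trellis, one stage per input bit. This Python sketch (illustrative, assuming the encoder starts in the all-zero state) traces the sequence of states visited and the output pair on each branch for a sample input:

```python
def trellis_path(bits):
    """Walk the (2,1,3) trellis stage by stage from state (0,0), recording
    the state after each stage and the branch output at each stage."""
    g0, g1 = (1, 0, 1), (1, 1, 1)
    state = (0, 0)                 # (I-1, I-2), encoder starts cleared
    path, outputs = [state], []
    for bit in bits:
        window = (bit,) + state
        v0 = sum(g * b for g, b in zip(g0, window)) % 2
        v1 = sum(g * b for g, b in zip(g1, window)) % 2
        state = (bit, state[0])    # move to the next trellis column
        path.append(state)
        outputs.append((v0, v1))
    return path, outputs

path, outputs = trellis_path([1, 0, 1, 1])
print(path)     # -> [(0, 0), (1, 0), (0, 1), (1, 0), (1, 1)]
print(outputs)  # -> [(1, 1), (0, 1), (0, 0), (1, 0)]
```

A Viterbi decoder performs the reverse search: given noisy branch outputs, it finds the most likely such path through the multi-stage trellis.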

The concept of Viterbi decoding is discussed in the next article.


  • Kumud Altmayer

    Mr. Mathuranathan,

    Can you please add an appendix in your book with sample convolutional codes? I bought the book as well. Thank you!

  • Anna

    Sir, I bought the book but I do not find code for convolutional encoding and decoding. Could you please send the code to my mail id? It is urgent, and it would be very helpful if you could send it as soon as possible.

    • Hi Anna,

      The code for convolutional decoder is a bit complex and is not available in the ebook. Sorry if the book missed your expectation.

      However, if you have the Communications Toolbox, you can try this:


      t = poly2trellis([4 3],[4 5 17;7 4 2]); % Define trellis.
      code = convenc(ones(100,1),t); % Encode a string of ones.

      Decoding (Hard decision):
      tb = 2; % Traceback length for decoding
      decoded = vitdec(code,t,tb,'trunc','hard'); % Decode.

      • Anna

        Sir, if you could please point out the error in the code given below, it would be helpful.

        close all
        clear all

        nbitpersym = 52; % number of bits per OFDM symbol (same as the number of subcarriers for BPSK)
        nsym = 10^4; % number of symbols
        len_fft = 64; % fft size
        sub_car = 52; % number of data subcarriers
        EbNo = 0:2:12;
        Td =3.2*10e-6;%%data symbol duration
        Tcp =.8*10e-6;%%cyclic prefix duration= .25Td

        EsNo= EbNo + 10*log10(sub_car /len_fft)+ 10*log10(Td/(Td+Tcp)); % symbol to noise ratio

        snr = EsNo - 10*log10(sub_car/len_fft); % snr as to be used by awgn fn.

        M = modem.pskmod(2); % modulation object

        % Generating data

        t = poly2trellis([4 3],[4 5 17;7 4 2]); % Define trellis.
        enc = convenc(t_data,t); % Encode a string of ones.

        % modulating data

        mod_data = modulate(M,enc);

        % serial to parallel conversion

        par_data = reshape(mod_data,nbitpersym,nsym).';

        % pilot insertion

        pilot_ins_data=[zeros(nsym,6) par_data(:,[1:nbitpersym/2]) zeros(nsym,1) par_data(:,[nbitpersym/2+1:nbitpersym]) zeros(nsym,5)] ;

        % fourier transform of time domain data and normalizing the data

        IFFT_data = (len_fft/sqrt(sub_car))*ifft(fftshift(pilot_ins_data.')).';

        % addition of cyclic prefix

        cylic_add_data = [IFFT_data(:,[49:64]) IFFT_data].';

        % parallel to serial conversion

        ser_data = reshape(cylic_add_data,80*nsym,1);

        % passing thru channel

        for ii=1:length(snr)

        chan_awgn = sqrt(80/52)*awgn(ser_data,snr(ii)); % awgn addition

        ser_to_para = reshape(chan_awgn,80,nsym).'; % serial to parallel conversion

        cyclic_pre_rem = ser_to_para(:,[17:80]); % cyclic prefix removal

        FFT_recdata =(sqrt(52)/64)*fftshift(fft(cyclic_pre_rem.')).'; % freq domain transform

        rem_pilot = FFT_recdata (:,[6+[1:nbitpersym/2] 7+[nbitpersym/2+1:nbitpersym] ]); % pilot removal

        ser_data_1 = reshape(rem_pilot.',nbitpersym*nsym,1); % serial conversion

        z=modem.pskdemod(2); %demodulation object

        demod_Data = demodulate(z,ser_data_1); %demodulating the data
        tb = 2; % Traceback length for decoding
        decoded = vitdec(code,ser_data,tb,'trunc','hard'); % Decode

        [no_of_error(ii),ratio(ii)]=biterr(t_data,decoded) ; % error rate calculation

        theoryBer = (1/2)*erfc(sqrt(10.^(EbNo/10)));
        semilogy (EbNo,theoryBer,'-k');
        axis([0 12 10^-5 1])
        title('Bit error probability curve for BPSK using OFDM');
        legend('simulated awgn','theoretical awgn')

  • madhu4117

    Hello sir, I am doing a project on MIMO-OFDM systems. Does the book "Simulation of Digital Communication Systems Using Matlab [eBook] - Second Edition" cover convolutional coding and interleaving for MIMO-OFDM? If so, I will buy it.

    • Hi,
      MIMO-OFDM is not discussed in the ebook. Only basic OFDM and its simulation is available.

      • Madhusudan Kar

        Thank you sir,
        Sir can you please recommend any book for MIMO-OFDM in which I can get some MATLAB codes for this.

  • Pingback: Viterbi Decoding of Convolutional codes | GaussianWaves