Maximum Likelihood Decoding
Consider the set of all valid codewords (call it Y) that an encoder on the transmitter side can generate. We pick one codeword from this set (call it y) and transmit it over a Binary Symmetric Channel (BSC) with error probability p. At the receiver side we observe a possibly distorted version of y (call this received word x).
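To make the channel model concrete, here is a minimal sketch in Python (the helper name bsc_transmit is mine, not from the original post) that simulates a BSC by flipping each bit of the codeword independently with probability p:

```python
import random

def bsc_transmit(codeword, p):
    """Simulate a Binary Symmetric Channel: each bit of the codeword
    (a string of '0'/'1') is flipped independently with error probability p."""
    return ''.join(
        bit if random.random() >= p else ('1' if bit == '0' else '0')
        for bit in codeword
    )

# Example: transmit y over a BSC with error probability 0.1
y = '11001001'
x = bsc_transmit(y, 0.1)   # x is a (possibly) distorted version of y
```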
Maximum Likelihood Decoding chooses the codeword from Y (the set of all valid codewords) that maximizes the probability of the received word given the transmitted one:

ŷ = argmax over y ∈ Y of P(x received | y sent)

That is, the receiver computes P(x received | y1 sent), P(x received | y2 sent), …, one for each codeword in Y, and picks the codeword y that gives the maximum probability.
Example of Maximum Likelihood Decoding:
Let y = 11001001 and x = 10011001. Assume a binomial distribution model for the bit errors, with error probability p = 0.1 (i.e. the reliability of the BSC is 1-p = 0.9). The Hamming distance between the two codewords is d(y, x) = 2. Under the binomial model,

P(x received | y sent) = p^d · (1-p)^(n-d)

where
d = the Hamming distance between the received and the sent codewords
n = the codeword length
p = the error probability of the BSC
1-p = the reliability of the BSC

For this example, P(x received | y sent) = (0.1)^2 · (0.9)^6 ≈ 0.0053.
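As a quick check of the arithmetic, here is a small Python sketch (the helper names are mine, chosen for illustration) that evaluates this likelihood for the example above:

```python
def hamming_distance(a, b):
    """Number of bit positions in which two equal-length bit strings differ."""
    return sum(bit_a != bit_b for bit_a, bit_b in zip(a, b))

def bsc_likelihood(x, y, p):
    """P(x received | y sent) over a BSC with error probability p,
    under the binomial error model: p^d * (1-p)^(n-d)."""
    n = len(y)
    d = hamming_distance(x, y)
    return (p ** d) * ((1 - p) ** (n - d))

y = '11001001'
x = '10011001'
print(bsc_likelihood(x, y, 0.1))   # 0.1^2 * 0.9^6 ≈ 0.00531441
```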
As mentioned earlier, in practice the receiver does not know which y was sent. Let's see how the receiver uses P(x received | y sent), computed under the binomial model, when y is unknown.
Since the receiver is unaware of which particular y produced the received x, it computes P(x received | y sent) for every codeword y in Y. The y that gives the maximum probability is concluded to be the codeword that was sent.
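Putting the pieces together, here is a minimal maximum likelihood decoder sketch. The codeword set Y below is a hypothetical example of my own choosing, used only for illustration:

```python
def bsc_likelihood(x, y, p):
    """P(x received | y sent) on a BSC: p^d * (1-p)^(n-d),
    where d is the Hamming distance between x and y."""
    d = sum(a != b for a, b in zip(x, y))
    return p ** d * (1 - p) ** (len(y) - d)

def ml_decode(x, codewords, p):
    """Return the codeword y that maximizes P(x received | y sent)."""
    return max(codewords, key=lambda y: bsc_likelihood(x, y, p))

# Hypothetical codeword set Y (for illustration only)
Y = ['11001001', '00000000', '11111111', '10011001']
x = '10011001'                 # received word
print(ml_decode(x, Y, 0.1))    # -> '10011001' (smallest Hamming distance)
```

Note that for p < 0.5, maximizing p^d · (1-p)^(n-d) over y is equivalent to minimizing the Hamming distance d, so maximum likelihood decoding over a BSC reduces to minimum-distance (nearest-codeword) decoding.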
See also:
An Introduction to Estimation Theory
Maximum Likelihood Estimation
Probability and Random Process
Hard and Soft decision decoding