# Maximum Likelihood Decoding


Consider the set \(Y\) of valid codewords generated by an encoder at the transmitter. We pick one codeword from this set (call it \(y\)) and transmit it over a Binary Symmetric Channel (BSC) with error probability \(p\). At the receiver we observe a distorted version of \(y\) (call this erroneous word \(x\)).

Maximum Likelihood Decoding chooses the codeword from \(Y\) (the list of all possible codewords) that maximizes the following probability:

$$\mathbb{P}(x\ \text{received} \mid y\ \text{sent})$$

That is, the receiver computes \(P(x \mid y_1), P(x \mid y_2), P(x \mid y_3), \ldots, P(x \mid y_M)\) for all \(M\) candidate codewords and chooses the codeword \(y\) that gives the maximum probability.



**Example of Maximum Likelihood Decoding:**

Let \(y=11001001\) be the transmitted codeword and \(x=10011001\) the received word. Assume a binomial model for the bit errors with error probability \(p = 0.1\) (i.e., the reliability of the BSC is \(1-p = 0.9\)). The Hamming distance between the two words is \(d(y,x) = 2\). Under the binomial model,

$$\mathbb{P}(x\ \text{received} \mid y\ \text{sent}) = (1-p)^{n-d} \cdot p^{d}$$

where

\(d\) = the Hamming distance between the received and the sent codewords,

\(n\) = the length of the codewords,

\(p\) = the error probability of the BSC,

\(1-p\) = the reliability of the BSC.
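The likelihood above can be evaluated directly for the example in the text (\(n = 8\), \(d = 2\), \(p = 0.1\)); a minimal sketch in Python, where `bsc_likelihood` is an illustrative helper name, not from the original article:

```python
def bsc_likelihood(y: str, x: str, p: float) -> float:
    """P(x received | y sent) over a BSC: (1-p)^(n-d) * p^d."""
    n = len(y)
    d = sum(a != b for a, b in zip(y, x))  # Hamming distance
    return (1 - p) ** (n - d) * p ** d

# Worked example from the text: y = 11001001, x = 10011001, p = 0.1, d = 2
prob = bsc_likelihood("11001001", "10011001", 0.1)
print(prob)  # 0.9^6 * 0.1^2, approximately 0.00531441
```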

**Note:** Here, the Hamming distance is used to compute the probability. Since \(p < 0.5\), the likelihood decreases as \(d\) grows, so the decoding can be called either "minimum distance decoding" (which minimizes the Hamming distance) or "maximum likelihood decoding". Euclidean distance may also be used to compute the conditional probability.
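The equivalence of the two names is easy to check numerically: for a BSC with \(p < 0.5\), each additional bit error multiplies the likelihood by \(p/(1-p) < 1\), so the likelihood is strictly decreasing in \(d\). A quick sanity check, assuming the example's values \(n = 8\), \(p = 0.1\):

```python
# Likelihood (1-p)^(n-d) * p^d for every possible Hamming distance d.
n, p = 8, 0.1
likelihoods = [(1 - p) ** (n - d) * p ** d for d in range(n + 1)]

# Strictly decreasing in d: maximizing likelihood == minimizing distance.
assert all(likelihoods[d] > likelihoods[d + 1] for d in range(n))
```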

As mentioned earlier, in practice \(y\) is not known at the receiver. Let's see how the receiver uses \(P(x\ \text{received} \mid y\ \text{sent})\), based on the binomial model, to decide which \(y\) was sent.

Since the receiver is unaware of which particular \(y\) produced the received word \(x\), it computes \(P(x\ \text{received} \mid y\ \text{sent})\) for each codeword in \(Y\). The \(y\) which gives the maximum probability is concluded to be the codeword that was sent.
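The full decoding rule can be sketched as follows. The candidate set `Y` below is a hypothetical toy example (not from the original article); `ml_decode` and `hamming` are illustrative names:

```python
def hamming(a: str, b: str) -> int:
    """Number of bit positions in which a and b differ."""
    return sum(u != v for u, v in zip(a, b))

def ml_decode(Y, x, p):
    """Return the codeword y in Y maximizing P(x received | y sent)."""
    n = len(x)
    return max(Y, key=lambda y: (1 - p) ** (n - hamming(y, x)) * p ** hamming(y, x))

# Toy candidate set; the received word is x = 10011001.
Y = ["11001001", "00000000", "11111111", "10011011"]
print(ml_decode(Y, "10011001", 0.1))  # "10011011", the closest codeword (d = 1)
```

Because the likelihood is decreasing in the Hamming distance, the same result would be obtained by simply returning the codeword at minimum distance from \(x\).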

