Cramer Rao Lower Bound for Phase Estimation

Key focus: Derive the Cramer-Rao lower bound (CRLB) for estimating the phase of a sinusoidal carrier in DSB transmission, and find out whether an efficient estimator exists for this problem.

Problem formulation

Consider the DSB transmission problem given in the introductory chapter on estimation theory. A message is sent across a channel after being modulated by a sinusoidal carrier of frequency fc and amplitude A. The transmitted signal is corrupted by zero-mean additive white Gaussian noise (AWGN) as it travels across the medium. The receiver picks up the signal and digitizes it for further processing.

To recover the message at the receiver, one has to know every detail of the sinusoid:
1) Amplitude, A
2) Carrier frequency, fc
3) Phase, ϕc

Given a set of digitized samples x[n], and assuming that both the amplitude and the carrier frequency are known, our objective is to estimate the phase of the embedded sinusoid (cosine wave). To analyze this scenario, we need a model to begin with.

The digitized samples at the receiver are modeled as

\displaystyle{ x[n] = A \cos \left(2 \pi f_c n + \phi_c \right) + w[n], \quad n=0,1,\cdots,N-1 }

Here, A and fc are assumed known, and w[n] is AWGN with mean μ=0 and variance σ2. We will derive the CRLB and check whether an efficient estimator can be found for the phase.
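To make the model concrete, here is a minimal Python sketch that generates the observed samples x[n] according to this model. The parameter values (A, fc, ϕc, σ2, N) are arbitrary choices for illustration, not values from the text:

```python
import numpy as np

# Illustrative parameters (arbitrary choices for demonstration)
A = 1.0        # known amplitude
fc = 0.1       # known normalized carrier frequency (cycles/sample)
phi_c = 0.7    # true phase (radians) -- the parameter to be estimated
sigma2 = 0.5   # noise variance sigma^2
N = 100        # number of observed samples

rng = np.random.default_rng(0)
n = np.arange(N)
w = rng.normal(0.0, np.sqrt(sigma2), N)         # AWGN: mean 0, variance sigma^2
x = A * np.cos(2 * np.pi * fc * n + phi_c) + w  # observed samples x[n]
```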

CRLB for Phase Estimation:

As a prerequisite to this article, readers are advised to go through the previous article on “Steps to find CRLB”.

In order to derive the CRLB, we need a PDF (probability density function) to begin with. Since the underlying noise is modeled as AWGN with mean μ=0 and variance σ2, the joint PDF of the observed samples is a multivariate Gaussian distribution:

\begin{aligned} p(\mathbf{x} ; \phi) &= \prod_{n=0}^{N-1} \frac{1}{\sqrt{2 \pi \sigma^2}} \exp \left[ -\frac{\left( x[n] - \mu_x \right)^2}{2 \sigma^2} \right] \\ &= \frac{1}{\left( 2 \pi \sigma^2 \right)^{\frac{N}{2}}} \exp \left[ - \frac{1}{2 \sigma^2} \sum_{n=0}^{N-1} \left( x[n] - \mu_x \right)^2 \right] \end{aligned}

Here, μx is the mean of the observed sample x[n]. Since the noise w[n] has zero mean,

\begin{aligned} \mu_x &= E\left( x[n] \right) = E\left[ A \cos \left\{ 2 \pi f_c n + \phi_c \right\} + w[n] \right] \\ &= A \cos \left\{ 2 \pi f_c n + \phi_c \right\} + E\left( w[n] \right) \\ &= A \cos \left\{ 2 \pi f_c n + \phi_c \right\} + 0 \\ &= A \cos \left\{ 2 \pi f_c n + \phi_c \right\} \end{aligned}

The PDF is re-written as

\displaystyle{ p(\mathbf{x}; \phi) = \frac{1}{\left( 2 \pi \sigma^2 \right)^{\frac{N}{2}}} \exp \left[ -\frac{1}{2 \sigma^2} \sum_{n=0}^{N-1} \left[ x[n] - A \cos\left( 2 \pi f_c n + \phi_c \right)\right]^2 \right]}

Since the observed samples x[n] are fixed in the above equation, we view p(\mathbf{x};\phi) as a function of the unknown parameter and use the likelihood notation L(\mathbf{x};\phi) instead of the PDF notation. Taking the natural logarithm, the log-likelihood function is given by

\displaystyle{ \ln L(\mathbf{x};\phi) = - \frac{N}{2} \ln \left( 2 \pi \sigma^2\right) - \frac{1}{2 \sigma^2} \sum_{n=0}^{N-1} \left( x[n] - A \cos \left\{ 2 \pi f_c n + \phi_c \right\} \right)^2 }
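As a sanity check, this log-likelihood translates directly into code. The sketch below reuses the samples x generated earlier; the function name log_likelihood is an illustrative choice:

```python
def log_likelihood(phi, x, A, fc, sigma2):
    """Evaluate ln L(x; phi) for the sinusoidal phase estimation model."""
    N = len(x)
    n = np.arange(N)
    residual = x - A * np.cos(2 * np.pi * fc * n + phi)
    return -0.5 * N * np.log(2 * np.pi * sigma2) - np.sum(residual**2) / (2 * sigma2)
```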

For simplicity, we will denote ϕc as ϕ. Next, take the first partial derivative of the log-likelihood function with respect to ϕ.

\displaystyle{ \begin{aligned}\frac{\partial \ln L( \mathbf{x};\phi)}{\partial \phi} &= -\frac{1}{\sigma^2} \sum_{n=0}^{N-1} \left[ x[n] - A \cos \left( 2 \pi f_c n + \phi \right) \right] A \sin \left( 2 \pi f_c n + \phi \right) \\ &=-\frac{A}{\sigma^2} \sum_{n=0}^{N-1} \left[ x[n] \sin (2 \pi f_c n + \phi) - \frac{A}{2} \sin( 4 \pi f_c n + 2 \phi) \right] \end{aligned}}
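This closed-form derivative (the score) can be cross-checked numerically against a central finite difference of the log_likelihood function above; the two values should agree closely. This is only a consistency check, not part of the derivation:

```python
def score(phi, x, A, fc, sigma2):
    """Closed-form first derivative of ln L(x; phi) with respect to phi."""
    n = np.arange(len(x))
    theta = 2 * np.pi * fc * n + phi
    return -(A / sigma2) * np.sum((x - A * np.cos(theta)) * np.sin(theta))

# Central finite-difference approximation of the derivative at phi = phi_c
eps = 1e-6
numeric = (log_likelihood(phi_c + eps, x, A, fc, sigma2)
           - log_likelihood(phi_c - eps, x, A, fc, sigma2)) / (2 * eps)
print(score(phi_c, x, A, fc, sigma2), numeric)  # both values should match
```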

Taking the second partial derivative of the log likelihood function,

\displaystyle{ \frac{\partial^2 \ln L( \mathbf{x};\phi)}{\partial \phi^2} = -\frac{A}{\sigma^2} \sum_{n=0}^{N-1} \left[ x[n] \cos (2 \pi f_c n + \phi) - A \cos( 4 \pi f_c n + 2 \phi) \right] }

Since the above term still depends on the observed samples x[n], take the expectation of the entire equation to average out the variations.

\displaystyle{\begin{aligned} -E \left[\frac{\partial^2 \ln L(\mathbf{x};\phi)}{\partial \phi^2} \right] &= \frac{A}{\sigma^2} \sum_{n=0}^{N-1} \left[ E(x[n])\cos\left( 2 \pi f_c n + \phi \right) - A \cos\left( 4 \pi f_c n + 2 \phi \right) \right] \\ &= \frac{A}{\sigma^2} \sum_{n=0}^{N-1} \left[ A \cos^2 \left( 2 \pi f_c n + \phi \right) - A \cos \left( 4 \pi f_c n + 2 \phi \right)\right] \\ &= \frac{A^2}{\sigma^2} \sum_{n=0}^{N-1} \left[ \frac{1}{2} + \frac{1}{2} \cos \left( 4 \pi f_c n + 2 \phi \right) - \cos\left( 4 \pi f_c n + 2 \phi \right) \right] \\ &=\frac{A^2}{2 \sigma^2} \sum_{n=0}^{N-1} \left[ 1 - \cos \left( 4 \pi f_c n + 2 \phi\right)\right] \\ & \approx \frac{A^2}{2 \sigma^2} \left[ N - 0 \right] = \frac{N A^2}{2 \sigma^2}\end{aligned}}
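The last step uses the fact that the double-frequency sum Σ cos(4πfcn + 2ϕ) averages out to nearly zero, provided fc is not too close to 0 or 1/2. A quick numeric check of this approximation, using the illustrative parameters from earlier:

```python
# The double-frequency term should be negligible compared to N
double_freq_sum = np.sum(np.cos(4 * np.pi * fc * np.arange(N) + 2 * phi_c))
print(double_freq_sum, N)  # the sum is tiny relative to N, justifying the approximation
```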

Let’s now identify the Fisher information and the CRLB from this result, and find out whether an efficient estimator exists.

Fisher Information:

The Fisher Information for the given problem is

\displaystyle{ I(\phi) = -E \left[\frac{\partial^2 \ln L(\mathbf{x};\phi)}{\partial \phi^2} \right] = \frac{N A^2}{ 2 \sigma^2}}

Cramer Rao Lower Bound:

The CRLB is the reciprocal of Fisher Information.

\displaystyle{CRLB = \frac{1}{I(\phi)} = \frac{2 \sigma^2}{N A^2}}

The variance of any unbiased estimator of the carrier phase for this problem can never be lower than this CRLB. That is,

\displaystyle{ var(\hat{\phi}) \geq \frac{2\sigma^2}{NA^2}}
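Plugging in the illustrative parameter values used earlier gives a feel for the numbers:

```python
crlb = 2 * sigma2 / (N * A**2)          # bound on var(phi_hat), in radians^2
print(crlb, np.degrees(np.sqrt(crlb)))  # 0.01 rad^2, i.e. an RMS floor of about 5.7 degrees
```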

As shown later, no estimator attains this bound with equality for finite N. However, there exist estimators (such as the maximum likelihood estimator) whose variance satisfies var(\hat{\phi}) \to CRLB as N \to \infty . Such estimators are called asymptotically efficient estimators.

Figure 1: Asymptotically efficient estimator and the Cramer-Rao lower bound
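To see this asymptotic behavior empirically, the sketch below uses a correlator-based phase estimator, which approximates the maximum likelihood estimator for this model, and compares its Monte Carlo variance against the CRLB for increasing N. The estimator form and trial counts are illustrative assumptions, not from the original text:

```python
def phase_estimate(x, fc):
    """Correlator-based phase estimate (approximate MLE for this model)."""
    n = np.arange(len(x))
    i_corr = np.sum(x * np.cos(2 * np.pi * fc * n))  # in-phase correlation
    q_corr = np.sum(x * np.sin(2 * np.pi * fc * n))  # quadrature correlation
    return np.arctan2(-q_corr, i_corr)

for N_trial in (20, 100, 500):
    n = np.arange(N_trial)
    clean = A * np.cos(2 * np.pi * fc * n + phi_c)
    est = [phase_estimate(clean + rng.normal(0.0, np.sqrt(sigma2), N_trial), fc)
           for _ in range(5000)]
    crlb = 2 * sigma2 / (N_trial * A**2)
    print(N_trial, np.var(est), crlb)  # estimator variance approaches the CRLB as N grows
```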

An efficient estimator exists if and only if the first partial derivative of the log-likelihood function can be written in the form

\displaystyle{ \frac{\partial \ln L(\mathbf{x};\phi) }{\partial \phi} = I(\phi) \left[ g(\mathbf{x}) - \phi \right]}

Re-writing our earlier result,

\displaystyle{ \frac{\partial \ln L(\mathbf{x};\phi) }{\partial \phi} = - \frac{A}{\sigma^2} \sum_{n=0}^{N-1} \left[ x[n] \sin \left( 2 \pi f_c n + \phi\right) - \frac{A}{2} \sin \left( 4 \pi f_c n + 2 \phi \right) \right]}

Clearly, the derivative cannot be factored into the required form I(\phi) \left[ g(\mathbf{x}) - \phi \right]. Thus, an efficient estimator does not exist for this problem.


For further reading

[1] Steven M. Kay, “Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory”, ISBN: 978-0133457117, Prentice Hall, Edition 1, 1993.
