
Method description

Spectral factorization constructs a minimum-phase signal from its spectrum. The algorithm, suggested by Wilson (1969), approaches this problem directly using Newton's iterative method. In $Z$-transform notation, Wilson's method amounts to solving the equation

$\displaystyle S(Z) = A(Z)\bar{A}(1/Z)$ (1)

for a given spectrum $S(Z)$ and an unknown minimum-phase signal $A(Z)$ by means of the iterative linearization
$\displaystyle S(Z) = A_t(Z)\,\bar A_t(1/Z) + A_t(Z)\left[\bar A_{t+1}(1/Z)-\bar A_t(1/Z)\right] + \bar A_t(1/Z)\left[A_{t+1}(Z)-A_t(Z)\right]$
$\displaystyle \phantom{S(Z)} = A_t(Z)\,\bar A_{t+1}(1/Z) + \bar A_t(1/Z)\,A_{t+1}(Z) - A_t(Z)\,\bar A_t(1/Z)\;,$ (2)

where $A_t(Z)$ denotes the signal estimate at iteration $t$. Starting from some initial estimate $A_0(Z)$, such as $A_0(Z)=1$, one iteratively solves the linear system (2) for the updated signal $A_{t+1}(Z)$. Wilson (1969) gives a rigorous proof that iteration (2) preserves the minimum-phase property provided that the initial estimate $A_0(Z)$ is minimum-phase. The original algorithm obtains the new approximation $A_{t+1}(Z)$ through the matrix inversion implied in solving system (2).
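For a short 1-D signal, the linear system implied by equation (2) can be assembled explicitly by matching its nonnegative-lag coefficients. The following Python sketch shows what the matrix-inversion step of the original Wilson algorithm could look like; the function name wilson_step and the coefficient conventions are illustrative assumptions, not part of the original formulation.

    import numpy as np

    def wilson_step(s, a):
        # One Newton step of Wilson's method: match the coefficients of
        # Z^0, ..., Z^N in equation (2) and solve the resulting linear
        # system for the updated coefficients of A_{t+1}(Z).
        # s: one-sided autocorrelation [s_0, ..., s_N]
        # a: current estimate A_t(Z) as coefficients [a_0, ..., a_N]
        s = np.asarray(s, dtype=float)
        a = np.asarray(a, dtype=float)
        n = len(s)
        r = np.correlate(a, a, mode="full")[n - 1:]   # autocorrelation of A_t
        rhs = s + r                                   # left side of (2), lags 0..N
        M = np.zeros((n, n))
        for k in range(n):                            # equation for lag k
            for m in range(n):                        # unknown coefficient m
                if k + m < n:
                    M[k, m] += a[k + m]               # from A_t(Z) * conj(A_{t+1})
                if m >= k:
                    M[k, m] += a[m - k]               # from conj(A_t) * A_{t+1}(Z)
        return np.linalg.solve(M, rhs)                # the matrix-inversion step

Starting from a minimum-phase guess such as $A_0(Z)=1$, repeated calls to this step are expected to converge to the minimum-phase factor, at the cost of a dense linear solve at every iteration; eliminating that solve is the point of the Wilson-Burg form below.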

Burg (1998, personal communication) recognized that dividing both sides of equation (2) by $ \bar A_t(1/Z) A_t(Z)$ leads to a particularly convenient form, where the terms on the left are symmetric, and the two terms on the right are correspondingly strictly causal and anticausal:

$\displaystyle 1 \ +\ {S(Z) \over \bar A_t(1/Z)\ A_t(Z)} = {A_{t+1}(Z) \over A_t(Z)} \ +\ {\bar A_{t+1}(1/Z) \over \bar A_t(1/Z)}$ (3)

Equation (3) leads to the Wilson-Burg algorithm, which accomplishes spectral factorization by a recursive application of convolution (polynomial multiplication) and deconvolution (polynomial division). The algorithm proceeds as follows (a short sketch in Python appears after the list):

  1. Compute the left side of equation (3) using forward and adjoint polynomial division.
  2. Abandon the negative lags to keep only the causal part of the signal, and retain half of the zero lag. This gives $A_{t+1}(Z)/A_t(Z)$.
  3. Multiply out (convolve) the denominator $ A_t(Z)$ . Now we have the desired result $ A_{t+1}(Z)$ .
  4. Iterate until convergence.
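For a simple 1-D signal, these steps could be sketched as follows. This is only an illustration: the function name wilson_burg is assumed here, and the causal/anticausal polynomial division of step 1 is replaced by division in the Fourier domain, which keeps the example compact and self-contained.

    import numpy as np

    def wilson_burg(s, niter=10, nfft=1024):
        # Factor a one-sided autocorrelation s = [s_0, ..., s_N] into a
        # minimum-phase signal a = [a_0, ..., a_N] (1-D illustration).
        s = np.asarray(s, dtype=float)
        n = len(s)
        lags = np.zeros(nfft)                 # two-sided autocorrelation
        lags[:n] = s
        if n > 1:
            lags[-(n - 1):] = s[1:][::-1]
        S = np.fft.fft(lags).real             # spectrum S(Z) on the unit circle

        a = np.zeros(n)
        a[0] = np.sqrt(s[0])                  # minimum-phase initial estimate A_0
        for _ in range(niter):
            A = np.fft.fft(a, nfft)           # |A|^2 stands in for A_t(Z) conj(A_t)(1/Z)
            T = 1.0 + S / np.abs(A) ** 2      # step 1: left side of (3), via FFT division
            t = np.fft.ifft(T).real           # symmetric sequence of lags
            b = np.zeros(n)
            b[0] = 0.5 * t[0]                 # step 2: keep half of the zero lag ...
            b[1:] = t[1:n]                    # ... and the positive (causal) lags
            a = np.convolve(a, b)[:n]         # step 3: convolve with A_t(Z)
        return a

A practical implementation would use the recursive polynomial division described above rather than FFT division.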

iter        $a_0$        $a_1$       $a_2$       $a_3$
   0     1.000000     0.000000    0.000000    0.000000
   1    36.523964    23.737839    6.625787    0.657103
   2    26.243151    25.726116    8.471050    0.914951
   3    24.162354    25.991493    8.962727    0.990802
   4    24.001223    25.999662    9.000164    0.999200
   5    24.000015    25.999977    9.000029    0.999944
   6    23.999998    26.000002    9.000003    0.999996
   7    23.999998    26.000004    9.000001    1.000000
   8    23.999998    25.999998    9.000000    1.000000
   9    24.000000    26.000000    9.000000    1.000000

Table 1. Example convergence of the Wilson-Burg iteration

Table 1 shows an example of Wilson-Burg convergence for a simple 1-D signal. The autocorrelation $S(Z)$ in this case is $1334 + 867\left(Z + 1/Z\right) + 242\left(Z^2 + 1/Z^2\right) + 24\left(Z^3 + 1/Z^3\right)$, and the corresponding minimum-phase signal is $A(Z) = (2+Z)(3+Z)(4+Z) = 24 + 26 Z + 9 Z^2 + Z^3$. The quadratic rate of convergence is evident from the table. Convergence slows down for signals whose polynomial roots lie close to the unit circle (Wilson, 1969).
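For reference, running the hypothetical wilson_burg sketch given above on this autocorrelation would be expected to recover the same factor:

    s = [1334.0, 867.0, 242.0, 24.0]      # autocorrelation of (2+Z)(3+Z)(4+Z)
    print(wilson_burg(s, niter=10))       # approaches [24, 26, 9, 1]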

The clear advantage of the Wilson-Burg algorithm over the original Wilson algorithm is the elimination of the expensive matrix-inversion step: each iteration uses only convolution and deconvolution operations.



