WIENER FILTERS

Advanced Digital Signal Processing and Noise Reduction, Second Edition. Saeed V. Vaseghi. Copyright © 2000 John Wiley & Sons Ltd. ISBNs: 0-471-62692-9 (Hardback); 0-470-84162-1 (Electronic)

6 WIENER FILTERS

6.1 Wiener Filters: Least Square Error Estimation
6.2 Block-Data Formulation of the Wiener Filter
6.3 Interpretation of Wiener Filters as Projection in Vector Space
6.4 Analysis of the Least Mean Square Error Signal
6.5 Formulation of Wiener Filters in the Frequency Domain
6.6 Some Applications of Wiener Filters
6.7 The Choice of Wiener Filter Order
6.8 Summary

Wiener theory, formulated by Norbert Wiener, forms the foundation of data-dependent linear least square error filters. Wiener filters play a central role in a wide range of applications such as linear prediction, echo cancellation, signal restoration, channel equalisation and system identification. The coefficients of a Wiener filter are calculated to minimise the average squared distance between the filter output and a desired signal. In its basic form, the Wiener theory assumes that the signals are stationary processes. However, if the filter coefficients are periodically recalculated for every block of N signal samples then the filter adapts itself to the average characteristics of the signals within the blocks and becomes block-adaptive. A block-adaptive (or segment-adaptive) filter can be used for signals such as speech and image that may be considered almost stationary over a relatively small block of samples.

In this chapter, we study Wiener filter theory, and consider alternative methods of formulation of the Wiener filter problem. We consider the application of Wiener filters in channel equalisation, time-delay estimation and additive noise reduction. A case study of the frequency response of a Wiener filter, for additive noise reduction, provides useful insight into the operation of the filter. We also deal with some implementation issues of Wiener filters.


6.1 Wiener Filters: Least Square Error Estimation

Wiener formulated the continuous-time, least mean square error, estimation problem in his classic work on interpolation, extrapolation and smoothing of time series (Wiener 1949). The extension of the Wiener theory from continuous time to discrete time is simple, and of more practical use for implementation on digital signal processors. A Wiener filter can be an infinite-duration impulse response (IIR) filter or a finite-duration impulse response (FIR) filter. In general, the formulation of an IIR Wiener filter results in a set of non-linear equations, whereas the formulation of an FIR Wiener filter results in a set of linear equations and has a closed-form solution. In this chapter, we consider FIR Wiener filters, since they are relatively simple to compute, inherently stable and more practical. The main drawback of FIR filters compared with IIR filters is that they may need a large number of coefficients to approximate a desired response.

Figure 6.1 illustrates a Wiener filter represented by the coefficient vector w. The filter takes as input a signal y(m), and produces an output signal x̂(m), where x̂(m) is the least mean square error estimate of a desired or target signal x(m). The filter input–output relation is given by

$$
\hat{x}(m) = \sum_{k=0}^{P-1} w_k\, y(m-k) = \mathbf{w}^{\mathrm T}\mathbf{y}
\tag{6.1}
$$

where m is the discrete-time index, y^T = [y(m), y(m−1), ..., y(m−P+1)] is the filter input signal, and the parameter vector w^T = [w_0, w_1, ..., w_{P−1}] is the Wiener filter coefficient vector. In Equation (6.1), the filtering operation is expressed in two alternative and equivalent forms: a convolutional sum and an inner vector product. The Wiener filter error signal, e(m), is defined as the difference between the desired signal x(m) and the filter output signal x̂(m):

$$
e(m) = x(m) - \hat{x}(m) = x(m) - \mathbf{w}^{\mathrm T}\mathbf{y}
\tag{6.2}
$$

In Equation (6.2), for a given input signal y(m) and a desired signal x(m), the filter error e(m) depends on the filter coefficient vector w.
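The two equivalent forms of Equation (6.1) can be checked with a short sketch (the coefficient and signal values below are made up for illustration; the direct loop mirrors the convolutional sum and matches NumPy's convolution):

```python
import numpy as np

def fir_filter(w, y):
    """FIR filter output of Eq. (6.1): xhat(m) = sum_k w_k * y(m-k).

    The initial samples y(-1), ..., y(1-P) are taken as zero."""
    P = len(w)
    xhat = np.zeros(len(y))
    for m in range(len(y)):
        for k in range(P):
            if m - k >= 0:
                xhat[m] += w[k] * y[m - k]
    return xhat

w = np.array([0.5, 0.3, 0.2])        # example coefficient vector w
y = np.array([1.0, 2.0, 3.0, 4.0])   # example input signal y(m)
print(fir_filter(w, y))              # same values as np.convolve(w, y)[:len(y)]
```

The inner-product form w^T y of Equation (6.1) produces the same numbers when y holds the P most recent input samples at each time m.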


Figure 6.1 Illustration of a Wiener filter structure: the input samples y(m), y(m−1), ..., y(m−P+1), obtained through a chain of z⁻¹ delay elements, are weighted by the coefficients w_0, w_1, ..., w_{P−1} and summed to form the output x̂(m); the coefficients are set to w = R_yy⁻¹ r_yx.

To explore the relation between the filter coefficient vector w and the error signal e(m), we expand Equation (6.2) for N samples of the signals x(m) and y(m):

$$
\begin{bmatrix} e(0)\\ e(1)\\ e(2)\\ \vdots\\ e(N-1) \end{bmatrix}
=
\begin{bmatrix} x(0)\\ x(1)\\ x(2)\\ \vdots\\ x(N-1) \end{bmatrix}
-
\begin{bmatrix}
y(0) & y(-1) & y(-2) & \cdots & y(1-P)\\
y(1) & y(0) & y(-1) & \cdots & y(2-P)\\
y(2) & y(1) & y(0) & \cdots & y(3-P)\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
y(N-1) & y(N-2) & y(N-3) & \cdots & y(N-P)
\end{bmatrix}
\begin{bmatrix} w_0\\ w_1\\ w_2\\ \vdots\\ w_{P-1} \end{bmatrix}
\tag{6.3}
$$

In a compact vector notation this matrix equation may be written as

$$
\mathbf{e} = \mathbf{x} - \mathbf{Y}\mathbf{w}
\tag{6.4}
$$

where e is the error vector, x is the desired signal vector, Y is the input signal matrix and Yw = x̂ is the Wiener filter output signal vector. It is assumed that the P−1 initial input signal samples [y(−1), ..., y(1−P)] are either known or set to zero.

In Equation (6.3), if the number of signal samples is equal to the number of filter coefficients, N = P, then we have a square matrix equation, and there is a unique filter solution w with a zero estimation error e = 0, such that x̂ = Yw = x. If N < P, then the number of signal samples N is insufficient to obtain a unique solution for the filter coefficients; in this case there are an infinite number of solutions with zero estimation error, and the matrix equation is said to be underdetermined. In practice, the number of signal samples is much larger than the filter length, N > P; in this case, the matrix equation is said to be overdetermined and has a unique solution, usually with a non-zero error.

When N > P, the filter coefficients are calculated to minimise an average error cost function, such as the average absolute value of error E[|e(m)|] or the mean square error E[e²(m)], where E[·] is the expectation operator. The choice of the error function affects the optimality and the computational complexity of the solution. In Wiener theory, the objective criterion is the least square error (LSE) between the filter output and the desired signal. The least square error criterion is optimal for Gaussian-distributed signals. As shown in the following, for FIR filters the LSE criterion leads to a linear and closed-form solution. The Wiener filter coefficients are obtained by minimising an average squared error function E[e²(m)] with respect to the filter coefficient vector w. From Equation (6.2), the mean square estimation error is given by

$$
\begin{aligned}
\mathcal{E}[e^2(m)] &= \mathcal{E}[(x(m)-\mathbf{w}^{\mathrm T}\mathbf{y})^2] \\
&= \mathcal{E}[x^2(m)] - 2\mathbf{w}^{\mathrm T}\mathcal{E}[\mathbf{y}\,x(m)] + \mathbf{w}^{\mathrm T}\mathcal{E}[\mathbf{y}\mathbf{y}^{\mathrm T}]\mathbf{w} \\
&= r_{xx}(0) - 2\mathbf{w}^{\mathrm T}\mathbf{r}_{yx} + \mathbf{w}^{\mathrm T}\mathbf{R}_{yy}\mathbf{w}
\end{aligned}
\tag{6.5}
$$

where R_yy = E[y(m)y^T(m)] is the autocorrelation matrix of the input signal and r_yx = E[y(m)x(m)] is the cross-correlation vector of the input and the desired signals. An expanded form of Equation (6.5) can be obtained as

$$
\mathcal{E}[e^2(m)] = r_{xx}(0) - 2\sum_{k=0}^{P-1} w_k\, r_{yx}(k) + \sum_{k=0}^{P-1}\sum_{j=0}^{P-1} w_k w_j\, r_{yy}(k-j)
\tag{6.6}
$$

where r_yy(k) and r_yx(k) are the elements of the autocorrelation matrix R_yy and the cross-correlation vector r_yx respectively. From Equation (6.5), the mean square error for an FIR filter is a quadratic function of the filter coefficient vector w and has a single minimum point. For example, for a filter with only two coefficients (w0, w1), the mean square error function is a

bowl-shaped surface with a single minimum point, as illustrated in Figure 6.2.

Figure 6.2 Mean square error surface for a two-tap FIR filter.

The least mean square error point corresponds to the minimum error power. At this optimal operating point the mean square error surface has zero gradient. From Equation (6.5), the gradient of the mean square error function with respect to the filter coefficient vector is given by

$$
\frac{\partial}{\partial \mathbf{w}}\,\mathcal{E}[e^2(m)]
= -2\,\mathcal{E}[x(m)\mathbf{y}(m)] + 2\,\mathbf{w}^{\mathrm T}\mathcal{E}[\mathbf{y}(m)\mathbf{y}^{\mathrm T}(m)]
= -2\,\mathbf{r}_{yx} + 2\,\mathbf{w}^{\mathrm T}\mathbf{R}_{yy}
\tag{6.7}
$$

where the gradient vector is defined as

$$
\frac{\partial}{\partial \mathbf{w}}
= \left[\frac{\partial}{\partial w_0}, \frac{\partial}{\partial w_1}, \frac{\partial}{\partial w_2}, \ldots, \frac{\partial}{\partial w_{P-1}}\right]^{\mathrm T}
\tag{6.8}
$$

The minimum mean square error Wiener filter is obtained by setting Equation (6.7) to zero:

$$
\mathbf{R}_{yy}\,\mathbf{w} = \mathbf{r}_{yx}
\tag{6.9}
$$


or, equivalently,

$$
\mathbf{w} = \mathbf{R}_{yy}^{-1}\,\mathbf{r}_{yx}
\tag{6.10}
$$

In an expanded form, the Wiener filter solution of Equation (6.10) can be written as

$$
\begin{bmatrix} w_0\\ w_1\\ w_2\\ \vdots\\ w_{P-1} \end{bmatrix}
=
\begin{bmatrix}
r_{yy}(0) & r_{yy}(1) & r_{yy}(2) & \cdots & r_{yy}(P-1)\\
r_{yy}(1) & r_{yy}(0) & r_{yy}(1) & \cdots & r_{yy}(P-2)\\
r_{yy}(2) & r_{yy}(1) & r_{yy}(0) & \cdots & r_{yy}(P-3)\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
r_{yy}(P-1) & r_{yy}(P-2) & r_{yy}(P-3) & \cdots & r_{yy}(0)
\end{bmatrix}^{-1}
\begin{bmatrix} r_{yx}(0)\\ r_{yx}(1)\\ r_{yx}(2)\\ \vdots\\ r_{yx}(P-1) \end{bmatrix}
\tag{6.11}
$$

From Equation (6.11), the calculation of the Wiener filter coefficients requires the autocorrelation matrix of the input signal and the cross-correlation vector of the input and the desired signals. In statistical signal processing theory, the correlation values of a random process are obtained as averages taken across the ensemble of different realisations of the process, as described in Chapter 3. However, in many practical situations there are only one or two finite-duration realisations of the signals x(m) and y(m). In such cases, assuming the signals are correlation-ergodic, we can use time averages instead of ensemble averages. For a signal record of length N samples, the time-averaged correlation values are computed as

$$
r_{yy}(k) = \frac{1}{N}\sum_{m=0}^{N-1} y(m)\, y(m+k)
\tag{6.12}
$$

Note from Equation (6.11) that the autocorrelation matrix R_yy has a highly regular Toeplitz structure. A Toeplitz matrix has constant elements along its left–right diagonals. Furthermore, the correlation matrix is also symmetric about the main diagonal. There are a number of efficient methods for solving the linear matrix Equation (6.11), including the Cholesky decomposition, the singular value decomposition and the QR decomposition methods.
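Equations (6.9)–(6.12) can be sketched directly in code (an illustrative prototype, not taken from the text; the 2-tap relation between x(m) and y(m) is contrived so the answer is known in advance):

```python
import numpy as np

def wiener_coefficients(y, x, P):
    """Solve R_yy w = r_yx, Eqs. (6.9)-(6.11), using the time-averaged
    correlation estimates of Eq. (6.12)."""
    N = len(y)
    ryy = np.array([y[: N - k] @ y[k:] for k in range(P)]) / N  # r_yy(k)
    ryx = np.array([x[k:] @ y[: N - k] for k in range(P)]) / N  # r_yx(k)
    # Build the symmetric Toeplitz autocorrelation matrix of Eq. (6.11).
    Ryy = np.array([[ryy[abs(i - j)] for j in range(P)] for i in range(P)])
    return np.linalg.solve(Ryy, ryx)

# Desired signal constructed as x(m) = 0.8 y(m) + 0.2 y(m-1), so the
# solution should come out close to w = [0.8, 0.2].
rng = np.random.default_rng(0)
y = rng.standard_normal(4000)
x = 0.8 * y + 0.2 * np.concatenate(([0.0], y[:-1]))
print(wiener_coefficients(y, x, 2))
```

Because R_yy is Toeplitz, a Levinson–Durbin-type recursion can replace the generic linear solver for long filters, but the plain solve keeps the sketch transparent.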


6.2 Block-Data Formulation of the Wiener Filter

In this section we consider an alternative formulation of a Wiener filter for a block of N samples of the input signal [y(0), y(1), ..., y(N−1)] and the desired signal [x(0), x(1), ..., x(N−1)]. The set of N linear equations describing the Wiener filter input/output relation can be written in matrix form as

$$
\begin{bmatrix} \hat{x}(0)\\ \hat{x}(1)\\ \hat{x}(2)\\ \vdots\\ \hat{x}(N-2)\\ \hat{x}(N-1) \end{bmatrix}
=
\begin{bmatrix}
y(0) & y(-1) & y(-2) & \cdots & y(2-P) & y(1-P)\\
y(1) & y(0) & y(-1) & \cdots & y(3-P) & y(2-P)\\
y(2) & y(1) & y(0) & \cdots & y(4-P) & y(3-P)\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
y(N-2) & y(N-3) & y(N-4) & \cdots & y(N-P) & y(N-1-P)\\
y(N-1) & y(N-2) & y(N-3) & \cdots & y(N+1-P) & y(N-P)
\end{bmatrix}
\begin{bmatrix} w_0\\ w_1\\ w_2\\ \vdots\\ w_{P-2}\\ w_{P-1} \end{bmatrix}
\tag{6.13}
$$

Equation (6.13) can be rewritten in compact matrix notation as

$$
\hat{\mathbf{x}} = \mathbf{Y}\mathbf{w}
\tag{6.14}
$$

The Wiener filter error is the difference between the desired signal and the filter output:

$$
\mathbf{e} = \mathbf{x} - \hat{\mathbf{x}} = \mathbf{x} - \mathbf{Y}\mathbf{w}
\tag{6.15}
$$

The energy of the error vector, that is the sum of the squared elements of the error vector, is given by the inner vector product

$$
\mathbf{e}^{\mathrm T}\mathbf{e}
= (\mathbf{x}-\mathbf{Y}\mathbf{w})^{\mathrm T}(\mathbf{x}-\mathbf{Y}\mathbf{w})
= \mathbf{x}^{\mathrm T}\mathbf{x} - \mathbf{x}^{\mathrm T}\mathbf{Y}\mathbf{w} - \mathbf{w}^{\mathrm T}\mathbf{Y}^{\mathrm T}\mathbf{x} + \mathbf{w}^{\mathrm T}\mathbf{Y}^{\mathrm T}\mathbf{Y}\mathbf{w}
\tag{6.16}
$$

The gradient of the squared error function with respect to the Wiener filter coefficients is obtained by differentiating Equation (6.16):

$$
\frac{\partial\, \mathbf{e}^{\mathrm T}\mathbf{e}}{\partial \mathbf{w}}
= -2\,\mathbf{x}^{\mathrm T}\mathbf{Y} + 2\,\mathbf{w}^{\mathrm T}\mathbf{Y}^{\mathrm T}\mathbf{Y}
\tag{6.17}
$$


The Wiener filter coefficients are obtained by setting the gradient of the squared error function of Equation (6.17) to zero; this yields

$$
(\mathbf{Y}^{\mathrm T}\mathbf{Y})\,\mathbf{w} = \mathbf{Y}^{\mathrm T}\mathbf{x}
\tag{6.18}
$$

or

$$
\mathbf{w} = (\mathbf{Y}^{\mathrm T}\mathbf{Y})^{-1}\mathbf{Y}^{\mathrm T}\mathbf{x}
\tag{6.19}
$$

Note that the matrix Y^T Y is a time-averaged estimate of the autocorrelation matrix R_yy of the filter input signal, and that the vector Y^T x is a time-averaged estimate of r_yx, the cross-correlation vector of the input and the desired signals. Theoretically, the Wiener filter is obtained from minimisation of the squared error across the ensemble of different realisations of a process, as described in the previous section. For a correlation-ergodic process, as the signal length N approaches infinity, the block-data Wiener filter of Equation (6.19) approaches the Wiener filter of Equation (6.10):

$$
\lim_{N\to\infty}\left[\mathbf{w} = (\mathbf{Y}^{\mathrm T}\mathbf{Y})^{-1}\mathbf{Y}^{\mathrm T}\mathbf{x}\right]
= \mathbf{R}_{yy}^{-1}\,\mathbf{r}_{yx}
\tag{6.20}
$$

Since the least square error method described in this section requires a block of N samples of the input and the desired signals, it is also referred to as the block least square (BLS) error estimation method. The block estimation method is appropriate for processing of signals that can be considered as time-invariant over the duration of the block.
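A minimal sketch of the BLS solution of Equation (6.19), using a least squares solver rather than an explicit matrix inverse (the 2-tap desired signal below is contrived so the exact answer is known):

```python
import numpy as np

def block_wiener(y, x, P):
    """Block least square solution w = (Y^T Y)^{-1} Y^T x, Eq. (6.19).

    Rows of Y are the tap-input vectors of Eq. (6.13), with the initial
    samples y(-1), ..., y(1-P) set to zero."""
    N = len(y)
    Y = np.zeros((N, P))
    for k in range(P):
        Y[k:, k] = y[: N - k]          # column k holds y(m - k)
    w, *_ = np.linalg.lstsq(Y, x, rcond=None)   # solves the normal equations
    return w

rng = np.random.default_rng(1)
y = rng.standard_normal(1000)
x = np.convolve([0.5, -0.3], y)[:1000]   # x(m) = 0.5 y(m) - 0.3 y(m-1)
print(block_wiener(y, x, 2))             # recovers [0.5, -0.3]
```

Using `lstsq` avoids forming Y^T Y explicitly, which is better conditioned numerically than the direct inverse of Equation (6.19).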

6.2.1 QR Decomposition of the Least Square Error Equation

An efficient and robust method for solving the least square error Equation (6.19) is the QR decomposition (QRD) method. In this method, the N × P signal matrix Y is decomposed into the product of an N × N orthonormal matrix Q and a P × P upper-triangular matrix R as

$$
\mathbf{Q}\mathbf{Y} = \begin{bmatrix} \mathbf{R} \\ \mathbf{0} \end{bmatrix}
\tag{6.21}
$$


where 0 is the (N−P) × P null matrix, Q^T Q = Q Q^T = I, and the upper-triangular matrix R is of the form

$$
\mathbf{R} =
\begin{bmatrix}
r_{00} & r_{01} & r_{02} & r_{03} & \cdots & r_{0\,P-1}\\
0 & r_{11} & r_{12} & r_{13} & \cdots & r_{1\,P-1}\\
0 & 0 & r_{22} & r_{23} & \cdots & r_{2\,P-1}\\
0 & 0 & 0 & r_{33} & \cdots & r_{3\,P-1}\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & 0 & \cdots & r_{P-1\,P-1}
\end{bmatrix}
\tag{6.22}
$$

Substitution of Equation (6.21) in Equation (6.18) yields

$$
\begin{bmatrix} \mathbf{R} \\ \mathbf{0} \end{bmatrix}^{\mathrm T} \mathbf{Q}\,\mathbf{Q}^{\mathrm T} \begin{bmatrix} \mathbf{R} \\ \mathbf{0} \end{bmatrix} \mathbf{w}
= \begin{bmatrix} \mathbf{R} \\ \mathbf{0} \end{bmatrix}^{\mathrm T} \mathbf{Q}\,\mathbf{x}
\tag{6.23}
$$

From Equation (6.23) we have

$$
\begin{bmatrix} \mathbf{R} \\ \mathbf{0} \end{bmatrix} \mathbf{w} = \mathbf{Q}\,\mathbf{x}
\tag{6.24}
$$

and hence

$$
\mathbf{R}\,\mathbf{w} = \mathbf{x}_Q
\tag{6.25}
$$

where the vector x_Q on the right-hand side of Equation (6.25) is composed of the first P elements of the product Qx. Since the matrix R is upper-triangular, the coefficients of the least square error filter can be obtained easily through a process of back substitution from Equation (6.25), starting with the coefficient w_{P−1} = x_Q(P−1)/r_{P−1 P−1}. The main computational steps in the QR decomposition are the determination of the orthonormal matrix Q and of the upper-triangular matrix R. The decomposition of a matrix into Q and R matrices can be achieved using a number of methods, including the Gram–Schmidt orthogonalisation method, the Householder method and the Givens rotation method.
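The QR route can be sketched with NumPy. Note one convention difference: `np.linalg.qr` returns the reduced factorisation Y = QR with an N × P orthonormal Q, rather than the QY = [R; 0] form of Equation (6.21), but the back-substitution step of Equation (6.25) is the same:

```python
import numpy as np

def qr_wiener(Y, x):
    """Least square filter coefficients via QR decomposition and explicit
    back substitution, Eqs. (6.21)-(6.25)."""
    N, P = Y.shape
    Q, R = np.linalg.qr(Y)       # reduced QR: Q is N x P, R is P x P upper-triangular
    xQ = Q.T @ x                 # the rotated desired signal (first P elements)
    w = np.zeros(P)
    for i in range(P - 1, -1, -1):               # back substitution from w_{P-1} down
        w[i] = (xQ[i] - R[i, i + 1:] @ w[i + 1:]) / R[i, i]
    return w

# Random overdetermined system for illustration (not from the text):
rng = np.random.default_rng(2)
Y = rng.standard_normal((200, 3))
x = rng.standard_normal(200)
print(qr_wiener(Y, x))   # matches np.linalg.lstsq(Y, x, rcond=None)[0]
```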


Figure 6.3 The least square error projection of a desired signal vector x onto a plane containing the input signal vectors y1 and y2 is the perpendicular projection of x, shown as the shaded vector.

6.3 Interpretation of Wiener Filters as Projection in Vector Space

In this section, we consider an alternative formulation of Wiener filters where the least square error estimate is visualised as the perpendicular minimum-distance projection of the desired signal vector onto the vector space of the input signal. A vector space is the collection of the infinite number of vectors that can be obtained from linear combinations of a number of independent vectors. In order to develop a vector space interpretation of the least square error estimation problem, we rewrite the matrix Equation (6.13) and express the filter output vector x̂ as a linear weighted combination of the column vectors of the input signal matrix as


$$
\begin{bmatrix} \hat{x}(0)\\ \hat{x}(1)\\ \hat{x}(2)\\ \vdots\\ \hat{x}(N-2)\\ \hat{x}(N-1) \end{bmatrix}
= w_0 \begin{bmatrix} y(0)\\ y(1)\\ y(2)\\ \vdots\\ y(N-2)\\ y(N-1) \end{bmatrix}
+ w_1 \begin{bmatrix} y(-1)\\ y(0)\\ y(1)\\ \vdots\\ y(N-3)\\ y(N-2) \end{bmatrix}
+ \cdots
+ w_{P-1} \begin{bmatrix} y(1-P)\\ y(2-P)\\ y(3-P)\\ \vdots\\ y(N-1-P)\\ y(N-P) \end{bmatrix}
\tag{6.26}
$$

In compact notation, Equation (6.26) may be written as

$$
\hat{\mathbf{x}} = w_0\mathbf{y}_0 + w_1\mathbf{y}_1 + \cdots + w_{P-1}\mathbf{y}_{P-1}
\tag{6.27}
$$

In Equation (6.27) the signal estimate x̂ is a linear combination of P basis vectors [y0, y1, ..., yP−1], and hence it can be said that the estimate x̂ lies in the vector subspace formed by the input signal vectors [y0, y1, ..., yP−1]. In general, the P N-dimensional input signal vectors in Equation (6.27) define the basis vectors for a subspace in an N-dimensional signal space. If P, the number of basis vectors, is equal to N, the vector dimension, then the subspace defined by the input signal vectors encompasses the entire N-dimensional signal space and includes the desired signal vector x. In this case, the signal estimate x̂ = x and the estimation error is zero. However, in practice N > P, and the signal space defined by the P input signal vectors of Equation (6.27) is only a subspace of the N-dimensional signal space. In this case, the estimation error is zero only if the desired signal x happens to lie in the subspace of the input signal; otherwise the best estimate of x is the perpendicular projection of the vector x onto the vector space of the input signal [y0, y1, ..., yP−1], as explained in the following example.

Example 6.1 Figure 6.3 illustrates a vector space interpretation of a simple least square error estimation problem, where y^T = [y(2), y(1), y(0), y(−1)] is the input signal, x^T = [x(2), x(1), x(0)] is the desired signal and w^T = [w0, w1] is the filter coefficient vector. As in Equation (6.26), the filter output can be written as


$$
\begin{bmatrix} \hat{x}(2)\\ \hat{x}(1)\\ \hat{x}(0) \end{bmatrix}
= w_0 \begin{bmatrix} y(2)\\ y(1)\\ y(0) \end{bmatrix}
+ w_1 \begin{bmatrix} y(1)\\ y(0)\\ y(-1) \end{bmatrix}
\tag{6.28}
$$

In Equation (6.28), the input signal vectors y1^T = [y(2), y(1), y(0)] and y2^T = [y(1), y(0), y(−1)] are 3-dimensional vectors. The subspace defined by the linear combinations of the two input vectors [y1, y2] is a 2-dimensional plane in a 3-dimensional signal space. The filter output is a linear combination of y1 and y2, and hence it is confined to the plane containing these two vectors. The least square error estimate of x is the orthogonal projection of x onto the plane of [y1, y2], as shown by the shaded vector x̂. If the desired vector happens to be in the plane defined by the vectors y1 and y2 then the estimation error will be zero; otherwise the estimation error will be the perpendicular distance of x from the plane containing y1 and y2.
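A quick numerical illustration of this projection picture (the vectors below are made up, not taken from the text): the least square solution leaves an error orthogonal to every basis vector of the input subspace.

```python
import numpy as np

# Hypothetical 3-dimensional input vectors spanning a 2-D plane, and a
# desired vector x that lies outside that plane.
y1 = np.array([1.0, 2.0, 3.0])
y2 = np.array([2.0, 3.0, 4.0])
x = np.array([1.0, 1.0, 2.0])

Y = np.column_stack([y1, y2])
w, *_ = np.linalg.lstsq(Y, x, rcond=None)   # least square coefficients [w0, w1]
xhat = Y @ w                                # perpendicular projection of x
e = x - xhat                                # estimation error

# The error is orthogonal to the subspace of the input signal:
print(np.dot(e, y1), np.dot(e, y2))         # both effectively zero
```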

6.4 Analysis of the Least Mean Square Error Signal

The optimality criterion in the formulation of the Wiener filter is the least mean square distance between the filter output and the desired signal. In this section, the variance of the filter error signal is analysed. Substituting the Wiener equation R_yy w = r_yx in Equation (6.5) gives the least mean square error:

$$
\mathcal{E}[e^2(m)] = r_{xx}(0) - \mathbf{w}^{\mathrm T}\mathbf{r}_{yx}
= r_{xx}(0) - \mathbf{w}^{\mathrm T}\mathbf{R}_{yy}\mathbf{w}
\tag{6.29}
$$

Now, for zero-mean signals, it is easy to show that in Equation (6.29) the term w^T R_yy w is the variance of the Wiener filter output x̂(m):

$$
\sigma_{\hat{x}}^2 = \mathcal{E}[\hat{x}^2(m)] = \mathbf{w}^{\mathrm T}\mathbf{R}_{yy}\mathbf{w}
\tag{6.30}
$$

Therefore Equation (6.29) may be written as

$$
\sigma_e^2 = \sigma_x^2 - \sigma_{\hat{x}}^2
\tag{6.31}
$$


where σ_x² = E[x²(m)], σ_x̂² = E[x̂²(m)] and σ_e² = E[e²(m)] are the variances of the desired signal, the filter estimate of the desired signal and the error signal respectively. In general, the filter input y(m) is composed of a signal component x_c(m) and a random noise n(m):

$$
y(m) = x_c(m) + n(m)
\tag{6.32}
$$

where the signal x_c(m) is the part of the observation that is correlated with the desired signal x(m), and it is this part of the input signal that may be transformable through a Wiener filter to the desired signal. Using Equation (6.32), the Wiener filter error may be decomposed into two distinct components:

$$
e(m) = x(m) - \sum_{k=0}^{P-1} w_k\, y(m-k)
= \left[x(m) - \sum_{k=0}^{P-1} w_k\, x_c(m-k)\right] - \sum_{k=0}^{P-1} w_k\, n(m-k)
\tag{6.33}
$$

or

$$
e(m) = e_x(m) + e_n(m)
\tag{6.34}
$$

where e_x(m) is the difference between the desired signal x(m) and the output of the filter in response to the input signal component x_c(m), i.e.

$$
e_x(m) = x(m) - \sum_{k=0}^{P-1} w_k\, x_c(m-k)
\tag{6.35}
$$

and e_n(m) is the error in the filter output due to the presence of noise n(m) in the input signal:

$$
e_n(m) = -\sum_{k=0}^{P-1} w_k\, n(m-k)
\tag{6.36}
$$

The variance of the filter error can then be written as

$$
\sigma_e^2 = \sigma_{e_x}^2 + \sigma_{e_n}^2
\tag{6.37}
$$


Note that in Equation (6.34), e_x(m) is the part of the signal that cannot be recovered by the Wiener filter, and represents distortion in the signal output, and e_n(m) is the part of the noise that cannot be blocked by the Wiener filter. Ideally, e_x(m) = 0 and e_n(m) = 0, but this ideal situation is possible only if the following conditions are satisfied:

(a) the spectra of the signal and the noise are separable by a linear filter;
(b) the signal component of the input, that is x_c(m), is linearly transformable to x(m);
(c) the filter length P is sufficiently large.

The issue of signal and noise separability is addressed in Section 6.6.

6.5 Formulation of Wiener Filters in the Frequency Domain

In the frequency domain, the Wiener filter output X̂(f) is the product of the input signal Y(f) and the filter frequency response W(f):

$$
\hat{X}(f) = W(f)\,Y(f)
\tag{6.38}
$$

The estimation error signal E(f) is defined as the difference between the desired signal X(f) and the filter output X̂(f):

$$
E(f) = X(f) - \hat{X}(f) = X(f) - W(f)\,Y(f)
\tag{6.39}
$$

and the mean square error at a frequency f is given by

$$
\mathcal{E}\!\left[|E(f)|^2\right]
= \mathcal{E}\!\left[\left(X(f)-W(f)Y(f)\right)^{*}\left(X(f)-W(f)Y(f)\right)\right]
\tag{6.40}
$$

where E[·] is the expectation function and the symbol * denotes the complex conjugate. Note from Parseval's theorem that the mean square errors in the time and frequency domains are related by

$$
\sum_{m=0}^{N-1} e^2(m) = \int_{-1/2}^{1/2} |E(f)|^2\, df
\tag{6.41}
$$

To obtain the least mean square error filter we set the complex derivative of Equation (6.40) with respect to the filter W(f) to zero:

$$
\frac{\partial\, \mathcal{E}[|E(f)|^2]}{\partial W(f)}
= 2\,W(f)\,P_{YY}(f) - 2\,P_{XY}(f) = 0
\tag{6.42}
$$

where P_YY(f) = E[Y(f)Y*(f)] and P_XY(f) = E[X(f)Y*(f)] are the power spectrum of Y(f), and the cross-power spectrum of X(f) and Y(f), respectively. From Equation (6.42), the least mean square error Wiener filter in the frequency domain is given by

$$
W(f) = \frac{P_{XY}(f)}{P_{YY}(f)}
\tag{6.43}
$$

Alternatively, the frequency-domain Wiener filter Equation (6.43) can be obtained from the Fourier transform of the time-domain Wiener Equation (6.9):

$$
\sum_{m}\sum_{k=0}^{P-1} w_k\, r_{yy}(m-k)\, e^{-j\omega m}
= \sum_{m} r_{yx}(m)\, e^{-j\omega m}
\tag{6.44}
$$

From the Wiener–Khinchine relation, the correlation and power-spectral functions are Fourier transform pairs. Using this relation, and the Fourier transform property that convolution in time corresponds to multiplication in frequency, it is easy to show that the Wiener filter is given by Equation (6.43).
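In practice the expectations in Equation (6.43) are replaced by averages. A minimal sketch (synthetic signals, not from the text) that estimates W(f) by averaging FFT-domain spectra over an ensemble of frames:

```python
import numpy as np

def wiener_freq_response(x_frames, y_frames):
    """Estimate W(f) = P_XY(f) / P_YY(f), Eq. (6.43), replacing the
    expectation by an average over an ensemble of signal frames."""
    X = np.fft.rfft(x_frames, axis=1)
    Y = np.fft.rfft(y_frames, axis=1)
    Pyy = np.mean(Y * np.conj(Y), axis=0).real   # P_YY(f) = E[Y(f)Y*(f)]
    Pxy = np.mean(X * np.conj(Y), axis=0)        # P_XY(f) = E[X(f)Y*(f)]
    return Pxy / Pyy

# For y = x + n with white x and n, W(f) should sit near
# Pxx / (Pxx + Pnn) = 1 / (1 + 0.25) = 0.8 at every frequency.
rng = np.random.default_rng(3)
x = rng.standard_normal((500, 64))
y = x + 0.5 * rng.standard_normal((500, 64))
W = wiener_freq_response(x, y)
```

With more frames the estimate converges to the flat 0.8 response; with few frames it fluctuates, which is the practical cost of estimating the spectra from data.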

6.6 Some Applications of Wiener Filters

In this section, we consider some applications of the Wiener filter in reducing broadband additive noise, in time-alignment of signals in multichannel or multisensor systems, and in channel equalisation.

Figure 6.4 Variation of the gain of the Wiener filter frequency response with SNR.

6.6.1 Wiener Filter for Additive Noise Reduction

Consider a signal x(m) observed in a broadband additive noise n(m), and modelled as

$$
y(m) = x(m) + n(m)
\tag{6.45}
$$

Assuming that the signal and the noise are uncorrelated, it follows that the autocorrelation matrix of the noisy signal is the sum of the autocorrelation matrices of the signal x(m) and the noise n(m):

$$
\mathbf{R}_{yy} = \mathbf{R}_{xx} + \mathbf{R}_{nn}
\tag{6.46}
$$

and we can also write

$$
\mathbf{r}_{yx} = \mathbf{r}_{xx}
\tag{6.47}
$$

where R_yy, R_xx and R_nn are the autocorrelation matrices of the noisy signal, the noise-free signal and the noise respectively, and r_yx is the cross-correlation vector of the noisy signal and the noise-free signal. Substitution of Equations (6.46) and (6.47) in the Wiener filter Equation (6.10) yields

$$
\mathbf{w} = (\mathbf{R}_{xx} + \mathbf{R}_{nn})^{-1}\,\mathbf{r}_{xx}
\tag{6.48}
$$

Equation (6.48) is the optimal linear filter for the removal of additive noise. In the following, a study of the frequency response of the Wiener filter provides useful insight into its operation. In the frequency domain, the noisy signal Y(f) is given by

Figure 6.5 Illustration of the variation of the Wiener filter frequency response with the signal spectrum for additive white noise. The Wiener filter response broadly follows the signal spectrum.

$$
Y(f) = X(f) + N(f)
\tag{6.49}
$$

where X(f) and N(f) are the signal and noise spectra. For a signal observed in additive random noise, the frequency-domain Wiener filter is obtained as

$$
W(f) = \frac{P_{XX}(f)}{P_{XX}(f) + P_{NN}(f)}
\tag{6.50}
$$

where P_XX(f) and P_NN(f) are the signal and noise power spectra. Dividing the numerator and the denominator of Equation (6.50) by the noise power spectrum P_NN(f), and substituting the variable SNR(f) = P_XX(f)/P_NN(f), yields

$$
W(f) = \frac{SNR(f)}{SNR(f) + 1}
\tag{6.51}
$$

where SNR(f) is a signal-to-noise ratio measure. Note that SNR(f) is expressed in terms of the power-spectral ratio, and not in the more usual terms of log power ratio; therefore SNR(f) = 0 corresponds to −∞ dB. From Equation (6.51), the following interpretation of the Wiener filter frequency response W(f) in terms of the signal-to-noise ratio can be

Figure 6.6 Illustration of separability: (a) the signal and noise spectra do not overlap, and the signal can be recovered by a low-pass filter; (b) the signal and noise spectra overlap, and the noise can be reduced but not completely removed.

deduced. For additive noise, the Wiener filter frequency response is a real positive number in the range 0 ≤ W(f) ≤ 1. Now consider the two limiting cases of (a) a noise-free signal, SNR(f) = ∞, and (b) an extremely noisy signal, SNR(f) = 0. At very high SNR, W(f) ≈ 1, and the filter applies little or no attenuation to the noise-free frequency component. At the other extreme, when SNR(f) = 0, W(f) = 0. Therefore, for additive noise, the Wiener filter attenuates each frequency component in proportion to an estimate of the signal-to-noise ratio. Figure 6.4 shows the variation of the Wiener filter response W(f) with the signal-to-noise ratio SNR(f).

An alternative illustration of the variation of the Wiener filter frequency response with SNR(f) is shown in Figure 6.5. It illustrates the similarity between the Wiener filter frequency response and the signal spectrum for the case of an additive white noise disturbance. Note that at a spectral peak of the signal spectrum, where the SNR(f) is relatively high, the Wiener filter frequency response is also high, and the filter applies little attenuation. At a signal trough, the signal-to-noise ratio is low, and so is the Wiener filter response. Hence, for additive white noise, the Wiener filter response broadly follows the signal spectrum.

6.6.2 Wiener Filter and the Separability of Signal and Noise

A signal is completely recoverable from noise if the spectra of the signal and the noise do not overlap. An example of a noisy signal with separable signal and noise spectra is shown in Figure 6.6(a). In this case, the signal


and the noise occupy different parts of the frequency spectrum, and can be separated with a low-pass, or a high-pass, filter. Figure 6.6(b) illustrates a more common example of a signal and noise process with overlapping spectra. For this case, it is not possible to completely separate the signal from the noise. However, the effects of the noise can be reduced by using a Wiener filter that attenuates each noisy signal frequency in proportion to an estimate of the signal-to-noise ratio as described by Equation (6.51).
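The gain curve of Equation (6.51) is easy to tabulate (a minimal sketch; the SNR values below are arbitrary):

```python
import numpy as np

# Wiener gain W(f) = SNR(f) / (SNR(f) + 1), Eq. (6.51). SNR(f) is a
# power ratio, so dB values must be converted before applying the formula.
snr_db = np.array([-20.0, 0.0, 20.0])
snr = 10.0 ** (snr_db / 10.0)
W = snr / (snr + 1.0)
print(W)   # heavy attenuation at -20 dB, W = 0.5 at 0 dB, near unity at +20 dB
```

This makes the limiting cases of Section 6.6.1 concrete: the gain falls towards zero as SNR(f) → 0 and rises towards one as SNR(f) → ∞.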

6.6.3 The Square-Root Wiener Filter

In the frequency domain, the Wiener filter output X̂(f) is the product of the input frequency X(f) and the filter response W(f), as expressed in Equation (6.38). Taking the expectation of the squared magnitude of both sides of Equation (6.38) yields the power spectrum of the filtered signal as

$$
\mathcal{E}[|\hat{X}(f)|^2] = |W(f)|^2\,\mathcal{E}[|Y(f)|^2] = |W(f)|^2 P_{YY}(f)
\tag{6.52}
$$

Substitution of W(f) from Equation (6.43) in Equation (6.52) yields

$$
\mathcal{E}[|\hat{X}(f)|^2] = \frac{|P_{XY}(f)|^2}{P_{YY}(f)}
\tag{6.53}
$$

Now, for a signal observed in an uncorrelated additive noise we have

$$
P_{YY}(f) = P_{XX}(f) + P_{NN}(f)
\tag{6.54}
$$

and

$$
P_{XY}(f) = P_{XX}(f)
\tag{6.55}
$$

Substitution of Equations (6.54) and (6.55) in Equation (6.53) yields

$$
\mathcal{E}[|\hat{X}(f)|^2] = \frac{P_{XX}^2(f)}{P_{XX}(f) + P_{NN}(f)}
\tag{6.56}
$$

Now, if in Equation (6.38) the square root of the Wiener filter magnitude frequency response is used instead of the Wiener filter itself, the result is

$$
\hat{X}(f) = |W(f)|^{1/2}\, Y(f)
\tag{6.57}
$$

and the power spectrum of the signal filtered by the square-root Wiener filter is given by

$$
\mathcal{E}[|\hat{X}(f)|^2]
= \left(|W(f)|^{1/2}\right)^2 \mathcal{E}[|Y(f)|^2]
= \frac{P_{XY}(f)}{P_{YY}(f)}\,P_{YY}(f)
= P_{XY}(f)
\tag{6.58}
$$

Now, for uncorrelated signal and noise Equation (6.58) becomes

$$
\mathcal{E}[|\hat{X}(f)|^2] = P_{XX}(f)
\tag{6.59}
$$

Thus, for additive noise the power spectrum of the output of the square-root Wiener filter is the same as the power spectrum of the desired signal.
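The effect can be checked with flat (white) spectra; the numbers below are arbitrary example values, not from the text:

```python
import numpy as np

# Flat signal and noise power spectra (arbitrary example values).
Pxx, Pnn = 1.0, 0.25
W = Pxx / (Pxx + Pnn)              # Wiener filter, Eq. (6.50)

P_wiener = W**2 * (Pxx + Pnn)      # output power of the Wiener filter, Eq. (6.56)
P_sqrt = W * (Pxx + Pnn)           # output power of the square-root filter, Eq. (6.58)

print(P_wiener, P_sqrt)            # the square-root filter restores Pxx exactly
```

The ordinary Wiener filter output power, P_XX²/(P_XX + P_NN), is always below P_XX, while the square-root filter returns exactly P_XX, which is the point of Equation (6.59).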

6.6.4 Wiener Channel Equaliser

Communication channel distortions may be modelled by a combination of a linear filter and an additive random noise source, as shown in Figure 6.7. The input/output relation of a linear time-invariant channel can be modelled as

$$
y(m) = \sum_{k=0}^{P-1} h_k\, x(m-k) + n(m)
\tag{6.60}
$$

where x(m) and y(m) are the transmitted and received signals, [h_k] is the impulse response of a linear filter model of the channel, and n(m) models the channel noise. In the frequency domain, Equation (6.60) becomes

$$
Y(f) = X(f)H(f) + N(f)
\tag{6.61}
$$

where X(f), Y(f), H(f) and N(f) are the signal, noisy signal, channel and noise spectra respectively. To remove the channel distortions, the receiver is followed by an equaliser. The equaliser input is the distorted channel output, and the desired signal is the channel input. Using Equation (6.43), it is easy to show that the Wiener equaliser in the frequency domain is given by

Figure 6.7 Illustration of a channel model followed by an equaliser.

$$
W(f) = \frac{P_{XX}(f)\,H^{*}(f)}{P_{XX}(f)\,|H(f)|^2 + P_{NN}(f)}
\tag{6.62}
$$

where it is assumed that the channel noise and the signal are uncorrelated. In the absence of channel noise, P_NN(f) = 0, and the Wiener filter is simply the inverse of the channel filter model, W(f) = H⁻¹(f). The equalisation problem is treated in detail in Chapter 15.
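Equation (6.62) and its noiseless limit can be sketched directly (the channel response samples below are hypothetical, not from the text):

```python
import numpy as np

def wiener_equaliser(H, Pxx, Pnn):
    """Frequency-domain Wiener equaliser of Eq. (6.62)."""
    return Pxx * np.conj(H) / (Pxx * np.abs(H) ** 2 + Pnn)

# Hypothetical channel frequency-response samples:
H = np.array([1.0 + 0.5j, 0.3 - 0.2j, 2.0 + 0.0j])

# In the noiseless limit the equaliser reduces to the channel inverse 1/H(f):
print(wiener_equaliser(H, Pxx=1.0, Pnn=0.0))

# With heavy noise the equaliser gain backs off rather than amplifying noise:
print(np.abs(wiener_equaliser(H, Pxx=1.0, Pnn=100.0)))
```

The second case shows why the Wiener equaliser is preferred to a plain channel inverse: at frequencies where |H(f)| is small and the noise dominates, 1/H(f) would boost the noise, while Equation (6.62) attenuates instead.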

6.6.5 Time-Alignment of Signals in Multichannel/Multisensor Systems

In multichannel/multisensor signal processing there are a number of noisy and distorted versions of a signal x(m), and the objective is to use all the observations in estimating x(m), as illustrated in Figure 6.8, where the phase and frequency characteristics of each channel are modelled by a linear filter. As a simple example, consider the problem of time-alignment of two noisy records of a signal given as

$$
y_1(m) = x(m) + n_1(m)
\tag{6.63}
$$

$$
y_2(m) = A\,x(m-D) + n_2(m)
\tag{6.64}
$$

where y1(m) and y2(m) are the noisy observations from channels 1 and 2, n1(m) and n2(m) are uncorrelated noise in each channel, D is the time delay of arrival between the two signals, and A is an amplitude scaling factor. Now assume that y1(m) is used as the input to a Wiener filter and that, in the absence of the signal x(m), y2(m) is used as the "desired" signal. The error signal is given by

Figure 6.8 Illustration of a multichannel system where Wiener filters are used to time-align the signals from different channels.

e(m) = y_2(m) - \sum_{k=0}^{P-1} w_k y_1(m-k)
     = \left[ A\, x(m-D) - \sum_{k=0}^{P-1} w_k x(m-k) \right] + \left[ \sum_{k=0}^{P-1} w_k n_1(m-k) \right] + n_2(m)    (6.65)

The Wiener filter strives to minimise the terms shown inside the square brackets in Equation (6.65). Using the Wiener filter Equation (6.10), we have

w = R_{y_1 y_1}^{-1} r_{y_1 y_2} = (R_{xx} + R_{n_1 n_1})^{-1} A\, r_{xx}(D)    (6.66)

where r_{xx}(D) = E[x(m-D)x(m)]. The frequency-domain equivalent of Equation (6.66) can be derived as

W(f) = \frac{P_{XX}(f)\, A e^{-j\omega D}}{P_{XX}(f) + P_{N_1 N_1}(f)}    (6.67)

Note that in the absence of noise, the Wiener filter becomes a pure phase (or a pure delay) filter with a flat magnitude response.
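In the time domain, Equation (6.66) shows that the Wiener coefficient vector is proportional to the signal autocorrelation shifted to lag D, so the delay can be read off the peak of the cross-correlation between the two records. The sketch below illustrates this on synthetic data; the delay D, gain A, signal and noise levels are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic records: the signal in y2 is a scaled copy of the signal
# in y1, delayed by D samples, as in Eqs. (6.63)-(6.64).
N, D, A = 1024, 7, 0.8
x = rng.standard_normal(N + D)
y1 = x[D:] + 0.1 * rng.standard_normal(N)      # y1(m) = x(m) + n1(m)
y2 = A * x[:N] + 0.1 * rng.standard_normal(N)  # y2(m) = A x(m-D) + n2(m)

def delay_estimate(y1, y2, max_lag):
    # Peak of the cross-correlation r_{y1 y2}(k) estimates the delay,
    # mirroring the peak of the Wiener coefficient vector in Eq. (6.66).
    n = len(y1)
    r = [np.dot(y2[k:], y1[:n - k]) for k in range(max_lag + 1)]
    return int(np.argmax(r))

D_hat = delay_estimate(y1, y2, max_lag=20)  # peak lag estimates D
```

With these signal lengths the correlation peak at lag D stands far above the off-peak values, so the estimate is reliable; Chapter 12 treats delay estimation in more detail.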


Figure 6.9 Configuration of a system for estimation of the frequency-domain Wiener filter.

6.6.6 Implementation of Wiener Filters

The implementation of a Wiener filter for additive noise reduction, using Equations (6.48)–(6.50), requires the autocorrelation functions, or equivalently the power spectra, of the signal and noise. The noise power spectrum can be obtained from the signal-inactive, noise-only, periods. The assumption is that the noise is quasi-stationary, and that its power spectrum remains relatively stationary between the update periods. This is a reasonable assumption for many noisy environments, such as the noise inside a car emanating from the engine, aircraft noise, or office noise from computers.

The main practical problem in the implementation of a Wiener filter is that the desired signal is often observed in noise, and that the autocorrelation or power spectra of the desired signal are not readily available. Figure 6.9 illustrates the block-diagram configuration of a system for implementation of a Wiener filter for additive noise reduction. An estimate of the desired signal power spectrum is obtained by subtracting an estimate of the noise spectrum from that of the noisy signal. A filter-bank implementation of the Wiener filter is shown in Figure 6.10, where the incoming signal is divided into N bands of frequencies. A first-order integrator, placed at the output of each band-pass filter, gives an estimate of the power spectrum of the noisy signal in that band. The power spectrum of the original signal is obtained by subtracting an estimate of the noise power spectrum from that of the noisy signal. In a Bayesian implementation of the Wiener filter, prior models of speech and noise, such as hidden Markov models, are used to obtain the power spectra of speech and noise required for calculation of the filter coefficients.
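A minimal sketch of the spectral-subtraction step of Figure 6.9: the clean-signal spectrum is estimated as P_XX(f) = P_YY(f) − P_NN(f), and the Wiener gain follows as P_XX(f)/P_YY(f), which is equivalent to P_XX/(P_XX + P_NN) since P_YY = P_XX + P_NN for uncorrelated signal and noise. The band powers and the clipping floor below are illustrative assumptions.

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, floor=1e-6):
    """Frequency-domain Wiener gain from noisy-signal and noise spectra.

    The clean-signal spectrum is estimated by spectral subtraction,
    P_XX = P_YY - P_NN, as in Figure 6.9; the gain is P_XX / P_YY.
    Negative subtraction results are clipped to a small floor.
    """
    signal_psd = np.maximum(noisy_psd - noise_psd, floor)
    return signal_psd / np.maximum(noisy_psd, floor)

# Hypothetical band powers; the noise spectrum would in practice be
# estimated during signal-inactive periods flagged by a silence detector.
noisy = np.array([4.0, 2.0, 1.5, 1.0])
noise = np.array([1.0, 1.0, 1.0, 1.0])
gain = wiener_gain(noisy, noise)
# High-SNR bands keep a gain near 1; noise-dominated bands are attenuated.
```

The flooring of negative subtraction results is a practical necessity: short-time spectral estimates fluctuate, so P_YY − P_NN can go negative in noise-dominated bands even though the underlying quantity cannot.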


Figure 6.10 A filter-bank implementation of a Wiener filter.

6.7 The Choice of Wiener Filter Order

The choice of Wiener filter order affects: (a) the ability of the filter to remove distortions and reduce the noise; (b) the computational complexity of the filter; and (c) the numerical stability of the Wiener solution, Equation (6.10). The choice of the filter length also depends on the application and the method of implementation of the Wiener filter. For example, in a filter-bank implementation of the Wiener filter for additive noise reduction, the number of filter coefficients is equal to the number of filter banks, and typically the number of filter banks is between 16 and 64. On the other hand, for many applications a direct implementation of the time-domain Wiener filter requires a larger filter length, say between 64 and 256 taps. A reduction in the required length of a time-domain Wiener filter can be achieved by dividing the time-domain signal into N sub-band signals. Each sub-band signal can then be decimated by a factor of N. The decimation results in a reduction, by a factor of N, in the required length of each sub-band Wiener filter. In Chapter 14, a sub-band echo canceller is described.
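The effect of the filter order can be checked directly by solving the time-domain Wiener equation w = R_yy^{-1} r_yx of Equation (6.10) for different orders P, with the correlations estimated by time averages. In the sketch below the input and desired signals are related by a short FIR system invented for the example, so an order no smaller than the system length can recover it, while a shorter filter must truncate it and leaves a residual error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Input y and desired x related by a hypothetical 3-tap FIR system h.
N = 4000
y = rng.standard_normal(N)
h = np.array([1.0, -0.5, 0.25])
x = np.convolve(y, h)[:N]

def wiener_fir(y, x, P):
    # Time-domain Wiener solution w = R_yy^{-1} r_yx of Eq. (6.10),
    # with the correlations estimated by time averages.
    n = len(y)
    r = np.array([np.dot(y[:n - k], y[k:]) for k in range(P)]) / n
    R = np.array([[r[abs(i - j)] for j in range(P)] for i in range(P)])
    p = np.array([np.dot(x[k:], y[:n - k]) for k in range(P)]) / n
    return np.linalg.solve(R, p)

w3 = wiener_fir(y, x, 3)  # order 3 approximately recovers h
w2 = wiener_fir(y, x, 2)  # order 2 must truncate the response
```

For P = 2 the solution is forced to approximate a three-tap system with two taps, leaving a non-zero minimum mean square error: a direct illustration of the order/performance trade-off described above.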

6.8 Summary

A Wiener filter is formulated to map an input signal to an output that is as close to a desired signal as possible. This chapter began with the derivation of the least square error Wiener filter. In Section 6.2, we derived the block-data least square error Wiener filter for applications where only finite-length realisations of the input and the desired signals are available. In such cases, the filter is obtained by minimising a time-averaged squared error function. In Section 6.3, we considered a vector space interpretation of the Wiener filter as the perpendicular projection of the desired signal onto the space of the input signal. In Section 6.4, the least mean square error signal was analysed. The mean square error is zero only if the input signal is related to the desired signal through a linear and invertible filter. In most cases, owing to noise and/or nonlinear distortions of the input signal, the minimum mean square error is non-zero. In Section 6.5, we derived the Wiener filter in the frequency domain, and considered the issue of separability of signal and noise using a linear filter. Finally, in Section 6.6, we considered some applications of Wiener filters in noise reduction, time-delay estimation and channel equalisation.

Bibliography

AKAIKE H. (1974) A New Look at Statistical Model Identification. IEEE Trans. Automatic Control, AC-19, pp. 716–23.
ALEXANDER S.T. (1986) Adaptive Signal Processing: Theory and Applications. Springer-Verlag, New York.


ANDERSON B.D. and MOORE J.B. (1979) Linear Optimal Control. Prentice-Hall, Englewood Cliffs, NJ.
DORNY C.N. (1975) A Vector Space Approach to Models and Optimisation. Wiley, New York.
DURBIN J. (1959) Efficient Estimation of Parameters in Moving Average Models. Biometrika, 46, pp. 306–16.
GIORDANO A.A. and HSU F.M. (1985) Least Square Estimation with Applications to Digital Signal Processing. Wiley, New York.
GIVENS W. (1958) Computation of Plane Unitary Rotations Transforming a General Matrix to Triangular Form. SIAM J. Appl. Math., 6, pp. 26–50.
GOLUB G.H. and REINSCH C. (1970) Singular Value Decomposition and Least Squares Solutions. Numerische Mathematik, 14, pp. 403–20.
GOLUB G.H. and VAN LOAN C.F. (1983) Matrix Computations. Johns Hopkins University Press, Baltimore, MD.
GOLUB G.H. and VAN LOAN C.F. (1980) An Analysis of the Total Least Squares Problem. SIAM Journal of Numerical Analysis, 17, pp. 883–93.
HALMOS P.R. (1974) Finite-Dimensional Vector Spaces. Springer-Verlag, New York.
HAYKIN S. (1991) Adaptive Filter Theory, 2nd Ed. Prentice-Hall, Englewood Cliffs, NJ.
HOUSEHOLDER A.S. (1964) The Theory of Matrices in Numerical Analysis. Blaisdell, Waltham, MA.
KAILATH T. (1974) A View of Three Decades of Linear Filtering Theory. IEEE Trans. Info. Theory, IT-20, pp. 146–81.
KAILATH T. (1977) Linear Least Squares Estimation, Benchmark Papers in Electrical Engineering and Computer Science. Dowden, Hutchinson & Ross.
KAILATH T. (1980) Linear Systems. Prentice-Hall, Englewood Cliffs, NJ.
KLEMA V.C. and LAUB A.J. (1980) The Singular Value Decomposition: Its Computation and Some Applications. IEEE Trans. Automatic Control, AC-25, pp. 164–76.
KOLMOGOROV A.N. (1939) Sur l'Interpolation et Extrapolation des Suites Stationnaires. Comptes Rendus de l'Académie des Sciences, 208, pp. 2043–2046.
LAWSON C.L. and HANSON R.J. (1974) Solving Least Squares Problems. Prentice-Hall, Englewood Cliffs, NJ.
ORFANIDIS S.J. (1988) Optimum Signal Processing: An Introduction, 2nd Ed. Macmillan, New York.


SCHARF L.L. (1991) Statistical Signal Processing: Detection, Estimation, and Time Series Analysis. Addison-Wesley, Reading, MA.
STRANG G. (1976) Linear Algebra and Its Applications, 3rd Ed. Harcourt Brace Jovanovich, San Diego, CA.
WIENER N. (1949) Extrapolation, Interpolation and Smoothing of Stationary Time Series. MIT Press, Cambridge, MA.
WILKINSON J.H. (1965) The Algebraic Eigenvalue Problem. Oxford University Press, Oxford.
WHITTLE P.W. (1983) Prediction and Regulation by Linear Least-Squares Methods. University of Minnesota Press, Minneapolis, MN.
WOLD H. (1954) The Analysis of Stationary Time Series, 2nd Ed. Almqvist and Wiksell, Uppsala.