Filters can also be classified according to whether they are linear and whether they are time invariant. Non-linear filters were determined to be beyond the scope of this project. The LMS filter is a time-varying, linear filter, which makes it relatively straightforward to implement and understand. Strictly speaking, the filter is not actually linear, but its final output behaves as though the filter were linear.
GRAPHIC TO BE INSERTED HERE (LMS Filter Schematic)
It can be seen that the LMS filter looks remarkably similar to a standard FIR filter. The input signal passes through a sequence of delays, each of which has a coefficient associated with it. These coefficients are the "weights" of the adaptive filter. The difference between the FIR and LMS filters is that in an LMS adaptive filter, after each sample is processed, the output signal is compared to some reference signal and the weights are updated via an adaptation algorithm.
The algorithm is sometimes referred to as the "steepest descent" algorithm. We begin by examining the equations in the diagram below.
INSERT GRAPHIC OF EQUATIONS DESCRIBING LEAST SQUARES
From the first equation it can be seen that we obtain an output, 'y', by convolving the input with the filter weights. The error in the output signal is defined as the difference between the output signal and some reference, or "desired", signal, 'd'. The gradient of the squared error determines the direction in which the weights must be adjusted in order to minimize the error.
The actual adjustment to the weights is then determined by taking the previous value of the weights and adding the gradient quantity scaled by some factor, mu. It turns out that there is no precise mathematical definition of what the value of mu should be; it controls the rate of convergence of the adaptive filter. Large values cause the filter to converge rapidly (sometimes within a few samples), while smaller values cause the filter to converge slowly. While it would seem that large values make the most sense, large values of mu, in the presence of a signal with widely varying noise levels, can cause the filter to oscillate or ring about its desired convergence point. In one early experiment, our filter oscillated to such a large extent that it exceeded the floating-point range of the machine within a few samples.
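Since the equation graphic is not reproduced here, the relations it describes can be stated compactly. In the notation assumed below, u(n) is the vector of the most recent 'len' input samples, w(n) is the weight vector, d(n) is the desired signal, and mu is the adaptation step size; this is the standard LMS (steepest descent) recursion, consistent with the description above:

    y(n)   = w(n)' * u(n)
    e(n)   = d(n) - y(n)
    w(n+1) = w(n) + mu * e(n) * u(n)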
The reference signal is the signal (or an approximation to it) that is to be removed from the primary or input signal stream. For example, in a noise cancellation application, the reference signal must be something akin to the noise to be removed from the primary signal.
Depending upon the nature of the filter application, the reference signal is either generated within the system as a result of filter operation, or it is applied from an external source. How the reference signal is derived is explained in the next section.
INSERT DIAGRAM OF FOUR SYSTEM SCHEMATICS
Type I - Identification Filter
Let us assume that we have some "plant" that provides an unknown impulse response. The signal that feeds the plant is also fed to the filter. Note that "noise" is not an issue here. The outputs of the filter and the plant are subtracted from one another. The result is the "reference" or "desired" signal, which is sent back to the filter to cause it to adjust its weights. When the difference is zero, the reference signal is zero, implying that there is nothing to be removed from the input; thus, the filter will no longer adjust its weights. Also, if the difference is zero, it follows that the impulse response of the filter must be identical to that of the plant.
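As a concrete illustration, the following Matlab sketch (with hypothetical signal names and a made-up plant) wires up the identification configuration using the adapt() routine listed later in this section; after convergence the leading filter weights should approximate the plant's impulse response.

% Hypothetical Type I (identification) simulation.
h = [0.8 -0.4 0.2];        % "unknown" plant impulse response (assumed for the example)
u = randn(1,1000);         % input fed to both the plant and the adaptive filter
d = filter(h,1,u);         % plant output, used as the desired signal
[e,y,W] = adapt(8,u,d);    % LMS routine listed later in this section
% When e has converged toward zero, W(1:3) should be close to h.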
Type II - Inverse Modelling Filter
Let us again assume that some input is fed into a plant of unknown impulse response. However, now the output of the plant is fed into the filter. The input, delayed so that it matches the delay it underwent in the plant, is subtracted from the filter output. Again, the difference is the reference signal, which is sent to the filter to adjust the weights. Note, though, that if the reference signal goes to zero, the output of the filter must be identical to the input signal. Thus, the filter now has the inverse impulse response of the plant.
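A corresponding sketch for the inverse-modelling configuration (again with hypothetical names; the plant and delay are chosen arbitrarily for illustration) might look like this:

% Hypothetical Type II (inverse modelling) simulation.
h = [1 0.5 0.25];                       % "unknown" plant (minimum phase, so an inverse exists)
u = randn(1,2000);                      % original input signal
x = filter(h,1,u);                      % plant output, fed into the adaptive filter
delay = 8;                              % delay matching the plant/filter latency (assumed)
d = [zeros(1,delay) u(1:end-delay)];    % delayed copy of the input as the desired signal
[e,y,W] = adapt(16,x,d);                % W converges toward a delayed inverse of the plant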
Type III - Predictive Filter
The input, after being delayed by one sample, is sent to the filter. The output of the filter is then subtracted directly from the undelayed input. In essence, the filter is always one sample behind the input; if its reference signal is to go to zero, it must "guess" what the next sample will be and generate the corresponding output.
Filters of this type are used in attempts to remove random noise from a signal. The LMS filter, relying upon statistical correlation, will not work on truly random noise, and hence this configuration is not used in our application.
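Although this configuration is not used in our application, a brief sketch (hypothetical, with a deliberately predictable sinusoidal input) shows how it would be wired:

% Hypothetical Type III (prediction) simulation.
n = 0:999;
s = sin(2*pi*0.05*n);       % a predictable, periodic signal
u = [0 s(1:end-1)];         % the same signal delayed by one sample, fed to the filter
[e,y,W] = adapt(16,u,s);    % the filter must "guess" the next sample; e shrinks toward zero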
Type IV - Noise Cancellation
The last type of filter system requires that two separate signals be fed to it. The primary signal does not go through the filter at all, an odd thought, one might suppose, when the whole purpose is to filter noise from the primary signal. Instead, the reference signal is fed into the filter, and the filter output is subtracted from the primary signal. Let us assume for a moment that the reference signal is an exact duplicate of the noise found in the primary signal. It is clear that the subtraction will leave an error signal which is exactly the wanted signal. Because this output then has minimum power (all of the reference signal has been removed), the filter weights will remain constant so long as the reference signal remains constant.
Suppose now that the reference signal is not an exact duplicate of the noise in the primary signal. It may be shifted in time (delayed, which corresponds to a phase shift in the frequency domain), it may have a different amplitude, and it may be at a slightly different frequency. Now, in order to minimize the power in the output signal, the filter must adapt so that the reference signal matches the noise in the primary signal as closely as possible. This can occur only if the noise in the primary signal and the reference signal are statistically correlated, and if the noise and the wanted signal in the primary channel are NOT correlated.
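To make the noise-cancellation arrangement concrete, the following hypothetical sketch feeds the reference through the adaptive filter and subtracts the result from the primary signal; the error output then approximates the wanted signal. The signals and noise path are assumed for illustration only.

% Hypothetical Type IV (noise cancellation) simulation.
n = 0:1999;
s = sin(2*pi*0.01*n);                   % wanted signal (assumed)
noise = randn(1,2000);                  % noise source, correlated with the noise in the primary channel
primary = s + filter([1 0.5],1,noise);  % primary channel: signal plus filtered noise
[e,y,W] = adapt(8,noise,primary);       % reference goes through the filter; e approximates s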
The Matlab function we wrote to implement the adaptive filter is listed below.
function [err,output,tap_wts] = adapt(len,u,d)
% LMS Active Adaptive Filter Routine
% copyright 1996, Ian Gravagne, all rights reserved
%
% [err,y,W] = ADAPT(len,u,d)
%
% len : the desired FIR filter length
% u   : a vector of input values to the adaptive filter
% d   : a vector of desired, or reference, values. This vector must be
%       the same length as the input vector. If "d" is not specified,
%       it will be assumed to be 0 (i.e. the filter will converge to
%       zero output).
% err : err = d - y, where y is the output of the FIR filter.
% y   : the output of the filter when u is applied.
% W   : a column vector describing the weights of the FIR filter.
%
% set up filter for zero or non-zero output
if nargin < 3, d = zeros(size(u)); end
N = length(u);
U = zeros(len,1);                 % tapped delay line (most recent input samples)
W = zeros(len,1);                 % adaptive filter weights
y = zeros(1,N);
e = zeros(1,N);
for n = 1:N
   U = [u(n); U(1:len-1)];        % shift the new sample into the delay line
   y(n) = W'*U;                   % FIR output: weights applied to delayed inputs
   e(n) = d(n) - y(n);            % error against the desired signal
   W = W + .025*U*conj(e(n));     % LMS weight update, step size mu = 0.025
end
if nargout == 1,
   err = e;
else
   err = e;
   output = y;
   tap_wts = W;
end
Given the preceding explanations of filter operation, it should be clear to the reader that the function takes two inputs (the primary and reference signals) and produces an output signal. In this routine the weights are recomputed after every sample; it is also possible to recompute the weights only after every 'n' samples, which is called a 'block' weighting routine. An example of this type of function will be found in the Results and Discussion section.
We ran the following series of signal simulations using Matlab: