In this section, I am going to explore the Daubechies Wavelet in one dimension. The Haar Wavelet can be described as two filters, a low-pass filter and a high-pass filter, that separate the signal into two different bands. The Daubechies Wavelet does this as well; however, it has more vanishing moments. In the words of Mr. Peyré, "Daubechies Wavelets extend the Haar Wavelets by using longer filters, that produce smoother scaling functions and wavelets." (1) Mr. Peyré goes on to say that there is no one Wavelet that is the best in all applications. The best Wavelet to use depends on the nature of the signal itself. In the case of the Daubechies Wavelet, it is very well suited for linear signals. (2)
From here, the perspective of filters is heavily used. I have started reading "Wavelets and Filter Banks" by Gilbert Strang and Truong Nguyen, which does an excellent job of explaining Wavelets from a filtering perspective. For example, the Haar Wavelet coefficients are generated by a downsampled low-pass and high-pass filter. To downsample, one removes the odd-numbered rows from the filters' output. This is done because after running the signal through two filters, one has twice as many samples as in the original signal. Going from Haar to Daubechies requires constructing a longer filter, which Mr. Peyré kindly does for us with his compute_wavelet_filter function.
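As a rough illustration of what such a filter pair looks like, here is a minimal Matlab sketch (not Peyré's actual compute_wavelet_filter, which handles other numbers of vanishing moments and normalizations): it hard-codes the classic Daubechies-4 low-pass filter, which has two vanishing moments, and builds the matching high-pass filter with the standard alternating-flip (quadrature mirror) rule.

% Minimal sketch, not Peyré's compute_wavelet_filter: the Daubechies-4
% low-pass filter h, normalized so its coefficients sum to sqrt(2).
h = [1+sqrt(3), 3+sqrt(3), 3-sqrt(3), 1-sqrt(3)] / (4*sqrt(2));

% Companion high-pass filter via the alternating flip g(n) = (-1)^n * h(L-1-n),
% which makes the pair an orthogonal filter bank.
L = numel(h);
g = h(L:-1:1) .* (-1).^(0:L-1);

disp(h)   % low-pass (scaling) filter
disp(g)   % high-pass (wavelet) filter

Note that the low-pass coefficients sum to sqrt(2) while the high-pass coefficients sum to zero, which is the first vanishing moment showing up directly in the filter.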
The frequency response of these filters is plotted below.
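A frequency-response plot like that one can be generated from the filters in a couple of lines of Matlab; the 512-point zero padding and the normalized-frequency axis below are my own choices, not necessarily Peyré's.

% Magnitude of the zero-padded DFT of each filter (one guess at how such
% a plot is produced; padding length and axis scaling are assumptions).
h = [1+sqrt(3), 3+sqrt(3), 3-sqrt(3), 1-sqrt(3)] / (4*sqrt(2));
g = h(end:-1:1) .* (-1).^(0:3);
w = linspace(0, 1, 512);                       % normalized frequency
plot(w, abs(fft(h, 512)), w, abs(fft(g, 512)));
legend('|H(\omega)| (low pass)', '|G(\omega)| (high pass)');
xlabel('normalized frequency');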
With these filters, we then take a signal and perform a convolution with each of them, followed by a downsampling. No worries though, the original signal can still be reconstructed with upsampling, where one puts a 0 in between each row of the downsampled matrix. When the inverse of the downsampled filtering matrix is its transpose, the set or bank of filters is called orthogonal. When the inverse is not necessarily the transpose, the system is called biorthogonal. Again, I highly recommend "Wavelets and Filter Banks," as it describes the above in much further detail.
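To make the orthogonality claim concrete, the sketch below runs one analysis/synthesis step on a short random signal: take inner products with even circular shifts of h and g (which is the same as filtering and then keeping every other output sample), then apply the transpose of that operation to rebuild the signal. The toy signal, its length, and the circular boundary handling are my own assumptions, not Peyré's code.

% One analysis/synthesis step of the orthogonal filter bank on a toy signal.
h = [1+sqrt(3), 3+sqrt(3), 3-sqrt(3), 1-sqrt(3)] / (4*sqrt(2));
g = h(end:-1:1) .* (-1).^(0:3);

N = 16;                         % toy signal length (must be even)
x = randn(1, N);                % toy signal

a = zeros(1, N/2);  d = zeros(1, N/2);
for k = 1:N/2
    idx = mod((0:3) + 2*(k-1), N) + 1;   % circular (periodic) indexing
    a(k) = h * x(idx)';                  % coarse (low-pass) coefficient
    d(k) = g * x(idx)';                  % detail (high-pass) coefficient
end

% Synthesis is the transpose of analysis: upsampling followed by
% convolution with the time-reversed filters.
xr = zeros(1, N);
for k = 1:N/2
    idx = mod((0:3) + 2*(k-1), N) + 1;
    xr(idx) = xr(idx) + a(k)*h + d(k)*g;
end

max(abs(x - xr))                % ~1e-15: the transpose really is the inverse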
Here again is the signal used with the Haar Wavelets in the 1-D Haar Wavelet section.
Then, the wavelet iterations are begun. This is best described on Mr. Peyré's website, but I will put it into my own words.
The wavelet coefficients will be stored in a vector called "fw." Note that we are performing the Forward Wavelet Transform. Referencing the multi-resolution aspect of Wavelets again, the number of scales is set by the length of the signal: a signal of length 512 passes through log2(512) = 9 stages. At each stage, two sets of coefficients are generated from the two filters, downsampled, and then concatenated into the fw vector, so that at each stage part of fw is overwritten with new coefficients.
The coefficients generated by the first filter make up the first half of the transformed plot. In subsequent iterations, one generates coarser-scale coefficients by filtering the previous iteration's version of fw. Note that fw is made of the wavelet coefficients generated in the previous iteration. For example, the above plot titled "Transformed" is made up of the concatenated entries of the downsampled filters. In the next iteration, only the first half of fw will be filtered; that is, the half that looks very similar to the signal itself but is only half as long, which is a result of the downsampling.
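Putting that bookkeeping into code, here is a rough sketch of the forward-transform loop. It is not Peyré's exact code: the filters, the placeholder signal, and the stopping scale are my own assumptions.

% Rough sketch of the Forward Wavelet Transform loop described above.
h = [1+sqrt(3), 3+sqrt(3), 3-sqrt(3), 1-sqrt(3)] / (4*sqrt(2));
g = h(end:-1:1) .* (-1).^(0:3);

N  = 512;
f  = cumsum(randn(1, N));     % placeholder signal, length 512
fw = f;                       % fw starts out as the signal itself

% This sketch stops when the coarse block reaches the filter length;
% Peyré's version keeps going until a single coefficient remains.
n = N;                        % length of the part still to be split
while n >= 4
    a = fw(1:n);              % current coarse approximation
    c = zeros(1, n/2);  d = zeros(1, n/2);
    for k = 1:n/2             % filter + downsample (circular boundary)
        idx = mod((0:3) + 2*(k-1), n) + 1;
        c(k) = h * a(idx)';   % coarse (low-pass) coefficients
        d(k) = g * a(idx)';   % detail (high-pass) coefficients
    end
    fw(1:n) = [c, d];         % overwrite the first n entries of fw
    n = n/2;                  % the next iteration refines only the coarse half
end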
This plot shows the first three stages of coefficients from the "g" filter. Note that in these plots, Mr. Peyré is plotting the wavelet coefficients as if they are 512 in length; they are really 256, 128, 64, and so on in size, and Matlab simply stretches them out for display. The last stage of the FWT (Forward Wavelet Transform) contains only one coefficient from each filter. Each stage achieves a "zooming in" on the signal, as if looking at Earth, then the United States, then the East Coast, then New York, then Manhattan, etc. All of the coefficients from the FWT are shown below.
Now, we work to reconstruct the signal. Wavelets allow us to reconstruct the signal at various stages, by omitting the Wavelet coefficients from the later reconstruction stages. One starts at the lowest resolution scale, which contains only one coefficient. Recursively, one upsamples both sets of coefficients, then convolves them with the reverse of the original filters, h and g.
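Written out in one common notation (the symbols here are mine, not necessarily Peyré's), with a_j and d_j the coarse and detail coefficients at scale j, \uparrow 2 denoting upsampling by zero insertion, and \tilde{h}, \tilde{g} the time-reversed filters, one reconstruction step reads:

a_{j-1} = (a_j \uparrow 2) \star \tilde{h} + (d_j \uparrow 2) \star \tilde{g}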
The formula above shows the upsampling and convolution with the reverse of the filters h and g. The plot below shows the partial reconstructions from the high-pass Wavelet Coefficients.
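As with the forward pass, the reconstruction recursion can be sketched in a few lines of Matlab. This assumes f, fw, h, and g come from the forward-transform sketch above, and it rebuilds the full signal; the partial reconstructions in the plot would be obtained by first zeroing out the detail blocks beyond a chosen scale.

% Rough sketch of the reconstruction recursion (run the forward sketch
% above first so that f, fw, h, and g exist). Each step is the transpose
% of the corresponding forward step: upsample, then convolve with the
% reversed filters.
f1 = fw;
n  = 4;                        % size of the coarsest block in this sketch
while n <= 512
    c = f1(1:n/2);             % coarse coefficients at this scale
    d = f1(n/2+1:n);           % detail coefficients at this scale
    a = zeros(1, n);
    for k = 1:n/2
        idx = mod((0:3) + 2*(k-1), n) + 1;
        a(idx) = a(idx) + c(k)*h + d(k)*g;
    end
    f1(1:n) = a;               % the finer approximation replaces both blocks
    n = 2*n;
end
max(abs(f - f1))               % at machine precision: perfect reconstruction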
As one might expect, at each stage the signal reconstruction gets better, because more high-frequency details are being added. These are not necessarily the most fundamental structures of the signal; the earlier stages offer a crude reconstruction. When j = 5, one can still get a relative sense of the original signal. By adding more coefficients in stages 6, 7, and 8, the reconstruction approximation gets better. Of course, the familiar nonlinear thresholding is implemented. To recap, one decides on a value, and any coefficient whose absolute value falls below it is set to zero. Mr. Peyré chooses 0.5 as his thresholding value. This is very easy to implement, and is done by Mr. Peyré in a single line of code. The plot below shows the originally calculated Wavelet Coefficients, and then the filtered ones.
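That single line looks something like the following hard threshold, using the 0.5 cutoff mentioned above (fw is the coefficient vector from the forward transform).

% Hard thresholding: zero out every coefficient whose magnitude is below T.
T   = 0.5;
fwT = fw .* (abs(fw) > T);

Coefficients at or above the threshold pass through unchanged, which is why this is called hard (rather than soft) thresholding.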
As one might expect, keeping more coefficients produces a reconstruction that more closely resembles the original signal.
The Haar and Daubechies Wavelets clearly differ from each other. The Haar Wavelet is shorter in size, and discontinuous. The main difference between the two is the number of vanishing moments, described extremely well by the user "EMS" on Math Stack Exchange. I rephrase his words here. If a Wavelet's scaling function can generate polynomials up to degree p - 1, then the Wavelet is said to have p vanishing moments. This means that the scaling function can be used to represent functions of degree p - 1. As more vanishing moments are added, one can represent more complex functions with fewer Wavelet coefficients. Additionally, if one takes the Fourier Transform of the wavelet, then the first p derivatives of the Fourier Transform will be zero when evaluated at zero. These Fourier Transform zeros correspond to integrals in the time/space domain that must be zero for the Wavelet. Ultimately, this means that when one is trying to represent functions with Wavelets, the Wavelet can be expected to represent the function accurately up to the p + 1 derivative of the function, that is, if the p + 1 derivative has non-trivial characteristics. That is very fascinating, and I owe Mr. EMS a debt of gratitude!
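Stated as formulas (these are the standard definitions, not anything specific to this post), a wavelet \psi with p vanishing moments satisfies

\int_{-\infty}^{\infty} t^{k}\,\psi(t)\,dt = 0, \qquad k = 0, 1, \dots, p-1,

which is equivalent to its Fourier transform being flat at the origin:

\frac{d^{k}\hat{\psi}}{d\omega^{k}}(0) = 0, \qquad k = 0, 1, \dots, p-1.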
Next, I move to Daubechies Wavelets in 2-D.
Works Cited
(1) Peyré, Gabriel. "1-D Daubechies Wavelets." 1-D Daubechies Wavelets. N.p., 2010. Web. 20 June 2014.
(2) Peyré, Gabriel. "1-D Daubechies Wavelets." 1-D Daubechies Wavelets. N.p., 2010. Web. 20 June 2014.
(3) Peyré, Gabriel. "1-D Daubechies Wavelets." 1-D Daubechies Wavelets. N.p., 2010. Web. 27 June 2014.
G. Peyré, The Numerical Tours of Signal Processing - Advanced Computational Signal and Image Processing, IEEE Computing in Science and Engineering, vol. 13(4), pp. 94-97, 2011.