Ai, Q.Y.H., Chen, W., So, T.Y., Lam, W.K.J., Jiang, B., Poon, D.M.C., Qamar, S., Mo, F.K.F., Blu, T., Chan, Q., Ma, B.B.Y., Hui, E.P., Chan, K.C.A. & King, A.D.,"Quantitative T1ρ MRI of the Head and Neck Discriminates Carcinoma and Benign Hyperplasia in the Nasopharynx", American Journal of Neuroradiology, 2020. |
BACKGROUND AND PURPOSE: T1ρ imaging is a new quantitative MR imaging pulse sequence with the potential to discriminate between malignant and benign tissue. In this study, we evaluated the capability of T1ρ imaging to characterize tissue by applying T1ρ imaging to malignant and benign tissue in the nasopharynx and to normal tissue in the head and neck. MATERIALS AND METHODS: Participants with undifferentiated nasopharyngeal carcinoma and benign hyperplasia of the nasopharynx prospectively underwent T1ρ imaging. T1ρ measurements obtained from the histogram analysis for nasopharyngeal carcinoma in 43 participants were compared with those for benign hyperplasia and for normal tissue (brain, muscle, and parotid glands) in 41 participants using the Mann-Whitney U test. The area under the curve of significant T1ρ measurements was calculated and compared using receiver operating characteristic analysis and the DeLong test, respectively. A P < 0.05 indicated statistical significance. RESULTS: There were significant differences in T1ρ measurements between nasopharyngeal carcinoma and benign hyperplasia and between nasopharyngeal carcinoma and normal tissue (all, P < 0.05). Compared with benign hyperplasia, nasopharyngeal carcinoma showed a lower T1ρ mean (62.14 versus 65.45 ms), SD (12.60 versus 17.73 ms), and skewness (0.61 versus 0.76) (all P < 0.05), but no difference in kurtosis (P = 0.18). The T1ρ SD showed the highest area under the curve of 0.95 compared with the T1ρ mean (area under the curve = 0.72) and T1ρ skewness (area under the curve = 0.72) for discriminating nasopharyngeal carcinoma and benign hyperplasia (all, P < 0.05). CONCLUSIONS: Quantitative T1ρ imaging has the potential to discriminate malignant from benign and normal tissue in the head and neck. |
@article{blu2020g, author = "Ai, Q.Y.H. and Chen, W. and So, T.Y. and Lam, W.K.J. and Jiang, B. and Poon, D.M.C. and Qamar, S. and Mo, F.K.F. and Blu, T. and Chan, Q. and Ma, B.B.Y. and Hui, E.P. and Chan, K.C.A. and King, A.D.", title = "Quantitative {T1}\(\rho\) {MRI} of the Head and Neck Discriminates Carcinoma and Benign Hyperplasia in the Nasopharynx", journal = "American Journal of Neuroradiology", year = "2020", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020g", doi = "10.3174/ajnr.A6828" } |
Alexandru, R., Blu, T. & Dragotti, P.-L.,"D-SLAM: Diffusion Source Localization and Trajectory Mapping", Proceedings of the Forty-fifth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'20), Barcelona, Spain, pp. 5600-5604, May 4--8, 2020. |
We consider physical fields induced by a finite number of instantaneous diffusion sources, which we sample using a mobile sensor, along unknown trajectories composed of multiple linear segments. We address the problem of estimating the sources, as well as the trajectory of the mobile sensor. Within this framework, we propose a method for localizing sources of unknown amplitudes, and known activation times. The reconstruction method we propose maps the measurements obtained using the mobile sensor to a sequence of generalized field samples. From these generalized samples, we can then retrieve the locations of the sources as well as the trajectory of the sensor (up to a linear geometric transformation). |
@inproceedings{blu2020a, author = "Alexandru, R. and Blu, T. and Dragotti, P.-L.", title = "{D-SLAM}: Diffusion Source Localization and Trajectory Mapping", booktitle = "Proceedings of the Forty-fifth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'20})", month = "May 4--8,", year = "2020", pages = "5600--5604", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020a" } |
Alexandru, R., Blu, T. & Dragotti, P.-L.,"Diffusion SLAM: Localising diffusion sources from samples taken by location-unaware mobile sensors", IEEE Transactions on Signal Processing, Vol. 69, pp. 5539-5554, 2021. |
We consider diffusion fields induced by multiple localised and instantaneous sources. We assume a mobile sensor samples the field, uniformly along a piecewise linear trajectory, which is unknown. The problem we address is the estimation of the amplitudes and locations of the diffusion sources, as well as of the trajectory of the sensor. We first propose a method for diffusion source localisation and trajectory mapping (D-SLAM) in 2D, where we assume the activation times of the sources are known and the evolution of the diffusion field over time is negligible. The reconstruction method we propose maps the measurements obtained using the mobile sensor to a sequence of generalised field samples. From these generalised samples, we can then retrieve the locations of the sources as well as the trajectory of the sensor (up to a 2D orthogonal geometric transformation). We then relax these assumptions and show that we can perform D-SLAM also in the case of unknown activation times, from samples of a time-varying field, as well as in 3D spaces. Finally, simulation results on both synthetic and real data further validate the proposed framework. |
@article{blu2021c, author = "Alexandru, R. and Blu, T. and Dragotti, P.-L.", title = "Diffusion {SLAM}: Localising diffusion sources from samples taken by location-unaware mobile sensors", journal = "IEEE Transactions on Signal Processing", year = "2021", volume = "69", pages = "5539--5554", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2021c" } |
Alexandru, R., Blu, T. & Dragotti, P.-L.,"Localising Diffusion Sources from Samples Taken Along Unknown Parametric Trajectories", Proceedings of the Twenty-Ninth European Signal Processing Conference (EUSIPCO), Dublin, Ireland, pp. 2199-2203, August 23--27 2021. |
In a recent paper we showed that it is possible to localise diffusion sources observed with a mobile sensor whilst simultaneously estimating the piecewise linear trajectory of the sensor. Here we address the case in which the sensor moves along an arbitrary unknown parametric trajectory and we show that by solving a linear system of equations, we can retrieve the inner products between the parameters of the trajectory. From these inner products we then retrieve the curve parameters up to an orthogonal transformation, which allows us to also perfectly estimate the amplitudes of the sources and find their locations up to an orthogonal transformation. |
@inproceedings{blu2021e, author = "Alexandru, R. and Blu, T. and Dragotti, P.-L.", title = "Localising Diffusion Sources from Samples Taken Along Unknown Parametric Trajectories", booktitle = "Proceedings of the Twenty-Ninth European Signal Processing Conference ({EUSIPCO})", month = "August 23--27", year = "2021", pages = "2199--2203", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2021e" } |
Barbotin, Y., Van De Ville, D., Blu, T. & Unser, M.,"Fast Computation of Polyharmonic B-Spline Autocorrelation Filters", IEEE Signal Processing Letters, Vol. 15, pp. 773-776, 2008. |
A fast computational method is given for the Fourier transform of the polyharmonic B-spline autocorrelation sequence in d dimensions. The approximation error is exponentially decaying with the number of terms taken into account. The algorithm improves speed upon a simple truncated-sum approach. Moreover, it is virtually independent of the spline's order. The autocorrelation filter directly serves for various tasks related to polyharmonic splines, such as interpolation, orthonormalization, and wavelet basis design. |
@article{blu2008a, author = "Barbotin, Y. and Van De Ville, D. and Blu, T. and Unser, M.", title = "Fast Computation of Polyharmonic \mbox{B-Spline} Autocorrelation Filters", journal = "{IEEE} Signal Processing Letters", year = "2008", volume = "15", pages = "773--776", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008a" } |
Batenkov, D., Bhandari, A. & Blu, T.,"Rethinking Super-Resolution: The Bandwidth Selection Problem", Proceedings of the Forty-fourth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'19), Brighton, UK, pp. 5087-5091, March 12--17, 2019. |
Super-resolution is the art of recovering spikes from low-pass projections in the Fourier domain. Over the last decade specifically, several significant advancements linked with mathematical guarantees and recovery algorithms have been made. Most super-resolution algorithms rely on a two-step procedure: deconvolution followed by high-resolution frequency estimation. However, for this to work, the exact bandwidth of the low-pass filter must be known; an assumption that is central to the mathematical model of super-resolution. On the flip side, when it comes to practice, smoothness rather than bandlimitedness is a much more applicable property. Since smooth pulses decay quickly, one may still capitalize on the existing super-resolution algorithms provided that the essential bandwidth is known. This problem has not been discussed in the literature and is the theme of our work. In this paper, we start with an experiment to show that super-resolution is sensitive to bandwidth selection. This raises the question of how to select the optimal bandwidth. To this end, we propose a bandwidth selection criterion which works by minimizing a proxy of the estimation error that is dependent on the bandwidth. Our criterion is easy to compute, and gives reasonable results for experimentally acquired data, thus opening interesting avenues for further investigation, for instance the relationship to Cramér-Rao bounds. |
@inproceedings{blu2019a, author = "Batenkov, D. and Bhandari, A. and Blu, T.", title = "Rethinking Super-Resolution: The Bandwidth Selection Problem", booktitle = "Proceedings of the Forty-fourth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'19})", month = "March 12--17,", year = "2019", pages = "5087--5091", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2019a" } |
Bathellier, B., Van De Ville, D., Blu, T., Unser, M. & Carleton, A.,"Wavelet-Based Multi-Resolution Statistics for Optical Imaging Signals: Application to Automated Detection of Odour Activated Glomeruli in the Mouse Olfactory Bulb", NeuroImage, Vol. 34 (3), pp. 1020-1035, February 1, 2007. |
Optical imaging techniques offer powerful solutions to capture the processing of brain networks in animals, especially when activity is distributed in functionally distinct spatial domains. Despite the progress in imaging techniques, the standard analysis procedures and statistical assessments for this type of data are still limited. In this paper, we perform two in vivo non-invasive optical recording techniques in the mouse olfactory bulb, using a genetically expressed activity reporter fluorescent protein (synaptopHfluorin) and intrinsic signals of the brain. For both imaging techniques, we show that the odour-triggered signals can be accurately parameterized using linear models. Fitting the models allows us to extract odour-specific signals with a reduced level of noise compared to standard methods. In addition, the models serve to evaluate statistical significance, using a wavelet-based framework that exploits spatial correlation at different scales. We propose an extension of this framework to extract activation patterns at specific wavelet scales. This method is especially interesting for detecting the odour inputs that segregate on the olfactory bulb in small spherical structures called glomeruli. Interestingly, with proper selection of wavelet scales, we can isolate significantly activated glomeruli and thus determine the odour map in an automated manner. Comparison against manual detection of glomeruli shows the high accuracy of the proposed method. Therefore, beyond offering an advantageous alternative to existing treatments of optical imaging signals in general, our framework provides an interesting procedure to dissect brain activation patterns on multiple scales with statistical control. |
@article{blu2007a, author = "Bathellier, B. and Van De Ville, D. and Blu, T. and Unser, M. and Carleton, A.", title = "Wavelet-Based Multi-Resolution Statistics for Optical Imaging Signals: {A}pplication to Automated Detection of Odour Activated Glomeruli in the Mouse Olfactory Bulb", journal = "NeuroImage", month = "February 1,", year = "2007", volume = "34", number = "3", pages = "1020--1035", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007a" } |
Berent, J., Dragotti, P.-L. & Blu, T.,"Sampling Piecewise Sinusoidal Signals with Finite Rate of Innovation Methods", IEEE Transactions on Signal Processing, Vol. 58 (2), pp. 613-625, February 2010. |
We consider the problem of sampling piecewise sinusoidal signals. Classical sampling theory does not enable perfect reconstruction of such signals since they are not band-limited. However, they can be characterized by a finite number of parameters, namely, the frequency, amplitude, and phase of the sinusoids and the location of the discontinuities. In this paper, we show that under certain hypotheses on the sampling kernel, it is possible to perfectly recover the parameters that define the piecewise sinusoidal signal from its sampled version. In particular, we show that, at least theoretically, it is possible to recover piecewise sine waves with arbitrarily high frequencies and arbitrarily close switching points. Extensions of the method are also presented such as the recovery of combinations of piecewise sine waves and polynomials. Finally, we study the effect of noise and present a robust reconstruction algorithm that is stable down to SNR levels of 7 [dB]. |
@article{blu2010a, author = "Berent, J. and Dragotti, P.-L. and Blu, T.", title = "Sampling Piecewise Sinusoidal Signals with Finite Rate of Innovation Methods", journal = "{IEEE} Transactions on Signal Processing", month = "February", year = "2010", volume = "58", number = "2", pages = "613--625", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010a" } |
Bergner, S., Van De Ville, D., Blu, T. & Möller, T.,"On Sampling Lattices with Similarity Scaling Relationships", Proceedings of the Eighth International Workshop on Sampling Theory and Applications (SampTA'09), Marseille, France, May 18--22, 2009. |
We provide a method for constructing regular sampling lattices in arbitrary dimensions together with an integer dilation matrix. Subsampling using this dilation matrix leads to a similarity-transformed version of the lattice with a chosen density reduction. These lattices are interesting candidates for multidimensional wavelet constructions with a limited number of subbands. |
@inproceedings{blu2009a, author = "Bergner, S. and Van De Ville, D. and Blu, T. and M{\"{o}}ller, T.", title = "On Sampling Lattices with Similarity Scaling Relationships", booktitle = "Proceedings of the Eighth International Workshop on Sampling Theory and Applications ({SampTA'09})", month = "May 18--22,", year = "2009", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009a" } |
Bhandari, A. & Blu, T.,"FRI Sampling and Time-Varying Pulses: Some Theory and Four Short Stories", Proceedings of the Forty-second IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'17), New Orleans, LA, USA, pp. 3804-3808, March 5--9, 2017. |
The field of signal processing is replete with exemplary problems where the measurements amount to time-delayed and amplitude-scaled echoes of some template function or pulse. When the inter-pulse spacing is favorable, something as primitive as a matched filter serves the purpose of identifying time-delays and amplitudes. When the inter-pulse spacing poses an algorithmic challenge, high-resolution methods such as finite-rate-of-innovation (FRI) may be used. However, in many practical cases of interest, the template function may be distorted due to physical properties of propagation and transmission. Such cases cannot be handled well by existing signal models. Inspired by problems in spectroscopy, radar, photoacoustic imaging and ultra-wide band arrays, on which we base our case studies, in this work we take a step towards recovering spikes from time-varying pulses. To this end, we re-purpose the FRI method and extend its utility to the case of phase-distorted pulses. Application of our algorithm to the above-mentioned case studies results in substantial improvement in peak signal-to-noise ratio, thus promising interesting future directions. |
@inproceedings{blu2017a, author = "Bhandari, A. and Blu, T.", title = "{FRI} Sampling and Time-Varying Pulses: Some Theory and Four Short Stories", booktitle = "Proceedings of the Forty-second {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'17})", month = "March 5--9,", year = "2017", pages = "3804--3808", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017a" } |
Blanc, P., Blu, T., Ranchin, T., Wald, L. & Aloisi, R.,"Using Iterated Rational Filter Banks Within the ARSIS Concept for Producing 10 m Landsat Multispectral Images", International Journal of Remote Sensing, Vol. 19 (12), pp. 2331-2343, August 1998. |
The ARSIS concept is designed to increase the spatial resolution of an image without modification of its spectral contents, by merging structures extracted from a higher-resolution image of the same scene, but in a different spectral band. It makes use of wavelet transforms and multiresolution analysis. It is currently applied in an operational way with dyadic wavelet transforms, which restrict the merging to images whose resolution ratio is a power of 2. Rational discrete wavelet transforms can be approximated numerically by rational filter banks, which would enable a more general merging: in theory, the ratio of the resolutions of the images to merge can then be a power of a certain family of rational numbers. The aim of this paper is to examine whether the use of these approximations of rational wavelet transforms is efficient within the ARSIS concept. This work relies on a particular case: the merging of a 10 m SPOT Panchromatic image and a 30 m Landsat Thematic Mapper multispectral image to synthesize a 10 m multispectral image TM-HR. |
@article{blu1998a, author = "Blanc, P. and Blu, T. and Ranchin, T. and Wald, L. and Aloisi, R.", title = "Using Iterated Rational Filter Banks Within the {ARSIS} Concept for Producing 10 {m} {L}andsat Multispectral Images", journal = "International Journal of Remote Sensing", month = "August", year = "1998", volume = "19", number = "12", pages = "2331--2343", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1998a" } |
Blu, T.,"Iterated Rational Filter Banks---Underlying Limit Functions", Proceedings of the IEEE Signal Processing Society Digital Signal Processing Workshop, Utica, USA, pp. 1.8.1-1.8.2, September 13--16, 1992. |
The term “Rational Filter Bank” (RFB) stands for “Filter Bank with Rational Rate Changes”. A critically sampled two-band analysis RFB is shown with its synthesis counterpart in figure 1. G stands typically for a low-pass FIR filter, whereas H is high-pass FIR. In this paper, we are interested in the iteration of the sole low-pass branch, which leads, in the integer case (q = 1), to a wavelet decomposition. Kovacevic and Vetterli have wondered whether iterated RFBs could also involve a discrete wavelet transform. Actually, Daubechies proved that whenever p/q is not an integer and G is FIR, this could not be the case. We show here that, despite this discouraging feature, there still exists not just one function (and its shifts), as in the integer case, but an infinite set of compactly supported functions φs(t). More importantly, under certain conditions, these functions appear to be "almost" shifted versions of one sole function. These φs are constructed in the same way as in the dyadic case (p = 2, q = 1), that is to say by the iteration of the low-pass branch of a synthesis RFB, but in this case the initialization is meaningful. |
@inproceedings{blu1992a, author = "Blu, T.", title = "Iterated Rational Filter Banks---{U}nderlying Limit Functions", booktitle = "Proceedings of the {IEEE} Signal Processing Society Digital Signal Processing Workshop", month = "September 13--16,", year = "1992", pages = "1.8.1--1.8.2", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1992a" } |
Blu, T.,"Iterated Filter Banks with Rational Rate Changes---Connection with Discrete Wavelet Transforms", IEEE Transactions on Signal Processing, Vol. 41 (12), pp. 3232-3244, December 1993. |
Some properties of two-band filter banks with rational rate changes ("rational filter banks") are first reviewed. Focusing then on iterated rational filter banks, compactly supported limit functions are obtained, in the same manner as previously done for dyadic schemes, allowing a characterization of such filter banks. These functions are carefully studied and the properties they share with the dyadic case are highlighted. They are experimentally observed to verify a "shift property" (strictly verified in the dyadic case) up to an error which can be made arbitrarily small when their regularity increases. In this case, the high-pass outputs of an iterated filter bank can be very close to samples of a discrete wavelet transform with the same rational dilation factor. A straightforward extension of the formalism of multiresolution analysis is also made. Finally, it is shown that if one is ready to put up with the loss of the shift property, rational iterated filter banks can be used in the same manner as if they were dyadic filter banks, with the advantage that rational dilation factors can be chosen closer to 1. |
@article{blu1993a, author = "Blu, T.", title = "Iterated Filter Banks with Rational Rate Changes---{C}onnection with Discrete Wavelet Transforms", journal = "{IEEE} Transactions on Signal Processing", month = "December", year = "1993", volume = "41", number = "12", pages = "3232--3244", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1993a" } |
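In the dyadic case (p = 2, q = 1) that serves as the reference point above, the limit function of the iterated low-pass branch can be approximated with the classical cascade algorithm. A minimal sketch, assuming an orthonormal-style normalization (sum of the filter taps equal to √2); the Haar filter, whose limit function is the box function, makes a convenient sanity check. Rational (p/q) schemes iterate analogously but yield a whole family of limit functions rather than shifts of a single one, as the abstract explains:

```python
import numpy as np

def cascade(g, iterations=8):
    """Approximate the limit (scaling) function of an iterated dyadic
    filter bank by repeatedly applying the synthesis low-pass branch:
    upsample by 2, then filter by g.  The taps are normalized so that
    sum(g) = sqrt(2), which is required for the iteration to converge."""
    g = np.asarray(g, dtype=float)
    g = g * (np.sqrt(2) / g.sum())          # enforce sum(g) = sqrt(2)
    phi = np.array([1.0])                   # initial impulse
    for _ in range(iterations):
        up = np.zeros(2 * len(phi))
        up[::2] = phi                       # upsample by 2
        phi = np.convolve(up, g) * np.sqrt(2)   # filter; keeps unit area
    return phi

# Haar low-pass filter: the limit function is the indicator of [0, 1).
phi = cascade([1.0, 1.0], iterations=6)
```

After n iterations the array samples the limit function on a grid of step 2^-n, so `phi.sum() / 2**n` approximates its integral (equal to 1 here).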
Blu, T.,"Shift Error in Iterated Rational Filter Banks", Proceedings of the Eighth European Signal Processing Conference (EUSIPCO'96), Trieste, Italy, Vol. {II}, pp. 1199-1202, September 10--13, 1996. |
For FIR filters, limit functions generated in iterated rational schemes are not invariant under shift operations, unlike what happens in the dyadic case: this feature prevents an analysis iterated rational filter bank (AIRFB) from behaving exactly as a discrete wavelet transform, even though an adequate choice of the generating filter makes it possible to minimize its consequences. This paper indicates how to compute the error between an "average" shifted function and these limit functions, an open problem until now. Connections are also pointed out between this shift error and the selectivity of the AIRFB. |
@inproceedings{blu1996a, author = "Blu, T.", title = "Shift Error in Iterated Rational Filter Banks", booktitle = "Proceedings of the Eighth European Signal Processing Conference ({EUSIPCO'96})", month = "September 10--13,", year = "1996", volume = "{II}", pages = "1199--1202", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1996a" } |
Blu, T.,"An Iterated Rational Filter Bank for Audio Coding", Proceedings of the Third IEEE Signal Processing Society International Symposium on Time-Frequency and Time-Scale Analysis (IEEE-SP'96), Paris, France, pp. 81-84, June 18--21, 1996. |
This paper proposes a regular third-of-an-octave filter bank for high-fidelity audio coding. The originality here is twofold: first, the filter bank is an iterated orthonormal rational filter bank for which the generating filters have been designed so that its outputs closely approximate a wavelet transform. This is different from the known coding algorithms, which all use an integer filter bank, and most often a uniform one. Second, the masking procedure itself is modeled with the help of a wavelet transform, unlike the classical procedure in which a short-time spectrum is computed and which gives rise to unwanted preecho effects. The masking procedure is then made equivalent to a quantization procedure. A simple non-optimized algorithm has been worked out in order to show the benefits of such a structure, especially in terms of preecho (which is perceptually inaudible), and the disadvantages, especially as far as delay is concerned. |
@inproceedings{blu1996b, author = "Blu, T.", title = "An Iterated Rational Filter Bank for Audio Coding", booktitle = "Proceedings of the Third {IEEE} Signal Processing Society International Symposium on Time-Frequency and Time-Scale Analysis ({IEEE-SP'96})", month = "June 18--21,", year = "1996", pages = "81--84", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1996b" } |
Blu, T.,"Bancs de filtres itérés en fraction d'octave --- Application au codage de son (Iterated Rational Filter Banks with an Application to Audio Coding)", PhD thesis, ENST Paris, Nr. 96 E 009, 1996. In French. |
This thesis focuses on the iteration of discrete schemes which allow fractional sampling: this is a generalization of better-known time-frequency tools, namely the dyadic schemes. It is shown in particular how classical results can be extended to the "rational" case, and how the new problems that arise are solved. One of the most interesting results is the existence of limit functions associated with the iterated schemes, thus providing an interpretation of the iterated rational filter bank as the discrete form of a time-scale transform. However, the fact that shift invariance between these functions is not preserved prevents the time-scale transform from behaving exactly as a wavelet transform. The amount of shift error is quantified under the name "amnesia". The properties of the limit functions (regularity and amnesia, among others) are studied in detail, and the implications for the iterated filter bank are made explicit. On the other hand, the discrete features of the filter bank are studied as far as implementation (finite precision) and filter design are concerned: in the latter case, an algorithm which proved to be very efficient is described. Finally, an application to high-fidelity sound coding has been implemented: a new formulation of the psychoacoustic masking effect in the form of a wavelet transform, together with the theoretical considerations developed in the previous chapters, leads to a new coding algorithm whose main characteristic is the inaudibility of the classical preecho effect. |
@phdthesis{blu1996c, author = "Blu, T.", title = "Bancs de filtres it\'er\'es en fraction d'octave --- Application au codage de son (Iterated Rational Filter Banks with an Application to Audio Coding)", school = "ENST Paris, Nr. 96 E 009", year = "1996", note = "in French", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1996c" } |
Blu, T.,"A New Design Algorithm for Two-Band Orthonormal Rational Filter Banks and Orthonormal Rational Wavelets", IEEE Transactions on Signal Processing, Vol. 46 (6), pp. 1494-1504, June 1998. |
In this paper, we present a new algorithm for the design of orthonormal two-band rational filter banks. Owing to the connection between iterated rational filter banks and rational wavelets, this is also a design algorithm for orthonormal rational wavelets. It is basically a simple iterative procedure, which explains its exponential convergence and adaptability under various linear constraints (e.g., regularity). Although the filters obtained from this algorithm are suboptimally designed, they show excellent frequency selectivity. After an in-depth account of the algorithm, we discuss the properties of the rational wavelets generated by some designed filters. In particular, we stress the possibility of designing "almost" shift-error-free wavelets, which allows the implementation of a rational wavelet transform. |
@article{blu1998b, author = "Blu, T.", title = "A New Design Algorithm for Two-Band Orthonormal Rational Filter Banks and Orthonormal Rational Wavelets", journal = "{IEEE} Transactions on Signal Processing", month = "June", year = "1998", volume = "46", number = "6", pages = "1494--1504", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1998b" } |
Blu, T.,"The Generalized Annihilation Property---A Tool For Solving Finite Rate of Innovation Problems", Proceedings of the Eighth International Workshop on Sampling Theory and Applications (SampTA'09), Marseille, France, May 18--22, 2009. |
We describe a property satisfied by a class of nonlinear systems of equations that are of the form $F(\Omega)X = Y$. Here $F(\Omega)$ is a matrix that depends on an unknown $K$-dimensional vector $\Omega$, $X$ is an unknown $K$-dimensional vector and $Y$ is a vector of $N$ ($N \ge K$) given measurements. Such equations are encountered in superresolution or sparse signal recovery problems known as "Finite Rate of Innovation" signal reconstruction. We show how this property allows us to solve explicitly for the unknowns $\Omega$ and $X$ by a direct, non-iterative algorithm that involves the resolution of two linear systems of equations and the extraction of the roots of a polynomial, and we give examples of problems where this type of solution has been found useful. |
@inproceedings{blu2009b, author = "Blu, T.", title = "The Generalized Annihilation Property---A Tool For Solving Finite Rate of Innovation Problems", booktitle = "Proceedings of the Eighth International Workshop on Sampling Theory and Applications ({SampTA'09})", month = "May 18--22,", year = "2009", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009b" } |
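The "two linear systems plus polynomial rooting" recipe described in this abstract is, in its simplest noiseless form, the classical annihilating-filter (Prony-like) procedure. A minimal sketch in Python, assuming generalized samples y[n] = Σ_k x_k u_k^n and a known model order K; the function name and setup are illustrative, not from the paper:

```python
import numpy as np

def annihilating_filter_recovery(y, K):
    """Recover K spikes (locations u_k, amplitudes x_k) from N >= 2K
    uniform generalized samples y[n] = sum_k x_k * u_k**n.
    Step 1 (linear system): find a filter h of length K+1 annihilating y.
    Step 2 (rooting): the roots of h are the locations u_k.
    Step 3 (linear system): Vandermonde least squares gives amplitudes."""
    N = len(y)
    # Toeplitz convolution matrix: rows enforce sum_i h[i] * y[n-i] = 0.
    T = np.array([[y[n - i] for i in range(K + 1)] for n in range(K, N)])
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1].conj()            # nullspace vector = annihilating filter
    u = np.roots(h)              # roots encode the spike locations
    # Vandermonde system V[n, k] = u_k**n, solved in the least-squares sense.
    V = np.vander(u, N, increasing=True).T
    x = np.linalg.lstsq(V, y, rcond=None)[0]
    return u, x
```

With u_k on the unit circle (u_k = exp(2jπ t_k)), the angles of the recovered roots give the spike positions t_k exactly in the noiseless case.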
Blu, T.,"The SURE-LET Methodology---A Prior-Free Approach to Signal and Image Denoising", Plenary Presentation at the Eighth International Workshop on Sampling Theory and Applications (SampTA'09), Marseille, France, May 18--22, 2009. |
A novel methodology for restoring signals/images from noisy measurements will be presented. Contrary to the usual approaches (Bayesian, sparsity-based), there is no prior model of the noiseless signal. Instead, it is the reconstruction algorithm itself that is parametrized, or approximated (using a Linear Expansion of Thresholds: LET). These parameters are then optimized by minimizing an estimate of the MSE between the (unknown) noiseless signal and the one processed by the algorithm. Surprisingly, it is possible to build such an estimate --- Stein's Unbiased Risk Estimate (SURE) --- using the noisy signal only, and without making any hypothesis on the noiseless signal. The only hypothesis is on the statistics of the noise (additive, Gaussian). Examples of image denoising are shown to validate the efficiency of this methodology. |
@conference{blu2009c, author = "Blu, T.", title = "The {SURE-LET} Methodology---{A} Prior-Free Approach to Signal and Image Denoising", booktitle = "Plenary Presentation at the Eighth International Workshop on Sampling Theory and Applications ({SampTA'09})", month = "May 18--22,", year = "2009", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009c" } |
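The SURE-LET principle described above reduces, in its simplest form, to a small linear solve: write the estimator as a linear combination of fixed nonlinear thresholding functions and minimize SURE over the weights. A minimal pointwise sketch, assuming i.i.d. additive Gaussian noise of known variance and a two-term LET basis chosen here for illustration (not the wavelet-domain estimators of the actual SURE-LET papers):

```python
import numpy as np

def sure_let_denoise(y, sigma, T=None):
    """Pointwise SURE-LET sketch for y = x + Gaussian noise of std sigma.
    Estimator: x_hat = a1*f1(y) + a2*f2(y), with f1(y) = y and
    f2(y) = y*exp(-y^2/(2T^2)) (a smooth shrinkage term).
    SURE(a) = ||F.T a||^2 - 2 a.(F y) + 2 sigma^2 a.(sum of derivatives)
    + const is quadratic in a, so the optimal weights solve M a = c,
    using only the noisy data -- no prior on the clean signal x."""
    if T is None:
        T = 2.0 * sigma                      # heuristic threshold scale
    F = np.stack([y, y * np.exp(-y**2 / (2 * T**2))])          # basis f_k(y)
    dF = np.stack([np.ones_like(y),
                   np.exp(-y**2 / (2 * T**2)) * (1 - y**2 / T**2)])  # f_k'(y)
    M = F @ F.T                              # Gram matrix of the basis
    c = F @ y - sigma**2 * dF.sum(axis=1)    # data term with SURE correction
    a = np.linalg.solve(M, c)
    return a @ F
```

On a sparse signal the optimal weights come out close to a1 ≈ 1, a2 ≈ -1, i.e., large samples are kept and small (noise-dominated) samples are shrunk toward zero.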
Blu, T.,"Image Denoising and the SURE-LET Methodology", Tutorial Presentation at APSIPA Annual Summit and Conference 2010, Singapore, December 14--17, 2010. |
Image denoising consists in approximating the noiseless image by performing some, usually non-linear, processing of the noisy image. Most standard techniques involve assumptions on the result of this processing (sparsity, low high-frequency content, etc.); i.e., on the denoised image. Instead, the SURE-LET methodology that we promote consists in approximating the processing itself (seen as a function) by a linear combination of elementary non-linear processings (LET: Linear Expansion of Thresholds), and in optimizing the coefficients of this combination by minimizing a statistically unbiased estimate of the Mean Square Error (SURE: Stein's Unbiased Risk Estimate, for additive Gaussian noise). This tutorial will introduce the technique to the audience and outline its advantages (fast, noise-robust, flexible, image-adaptive). A very complete set of results will be shown and compared with the state of the art. Extensions of the approach to Poisson noise reduction, with application to microscopy imaging, will also be shown. |
@conference{blu2010b, author = "Blu, T.", title = "Image Denoising and the {SURE-LET} Methodology", booktitle = "Tutorial Presentation at APSIPA Annual Summit and Conference 2010", month = "December 14--17,", year = "2010", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010b" } |
Blu, T.,"Sparsity Through Annihilation: Algorithms and Applications", Keynote Presentation at the Tenth IEEE International Conference on Signal Processing (ICSP'10), Beijing, China, October 24--28, 2010. |
The problem of reconstructing or estimating partially observed or sampled signals is an old and important one, and finds application in many areas that involve data acquisition. Traditional sampling and reconstruction approaches are heavily influenced by the classical Shannon sampling theory which gives an exact sampling and interpolation formula for bandlimited signals. Recently, the classical Shannon sampling framework has been extended to classes of non-bandlimited structured signals, which we call signals with Finite Rate of Innovation. In these new sampling schemes, the prior that the signal is sparse in a basis or in a parametric space takes the form of a linear system of equations expressing the annihilation of signal-derived quantities. The coefficients of this annihilation system are then related in a non-linear way (e.g., polynomial roots) to the sparse signal parameters; i.e., its "innovations". This leads to new exact reconstruction formulas and fast algorithms that achieve such reconstructions. We will show how these algorithms are able to deal successfully with noise issues, leading to statistically optimal recovery, and we will exemplify this theory with a number of applications that benefit from these novel schemes. |
@conference{blu2010c, author = "Blu, T.", title = "Sparsity Through Annihilation: Algorithms and Applications", booktitle = "Keynote Presentation at the Tenth {IEEE} International Conference on Signal Processing ({ICSP'10})", month = "October 24--28,", year = "2010", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010c" } |
Blu, T.,"Linear Expansion of Thresholds: A Tool for Approximating Image Processing Algorithms", Keynote Presentation at the Ninth International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI'16), Datong, China, October 15--17, 2016. |
Contrary to the usual processing approaches which consist in approximating all the pixels of an image (often by optimizing some criterion), we propose to approximate the processing itself using a linear combination of a few basic non-linear processings --- "thresholds". Accordingly, we term this approach "Linear Expansion of Thresholds" (LET). |
@conference{blu2016f, author = "Blu, T.", title = "Linear Expansion of Thresholds: A Tool for Approximating Image Processing Algorithms", booktitle = "Keynote Presentation at the Ninth International Congress on Image and Signal Processing, BioMedical Engineering and Informatics ({CISP-BMEI'16})", month = "October 15--17,", year = "2016", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2016f" } |
Blu, T., Bay, H. & Unser, M.,"A New High-Resolution Processing Method for the Deconvolution of Optical Coherence Tomography Signals", Proceedings of the First IEEE International Symposium on Biomedical Imaging (ISBI'02), Washington, USA, Vol. {III}, pp. 777-780, July 7--10, 2002. |
We show the feasibility and the potential of a new signal processing algorithm for the high-resolution deconvolution of OCT signals. Our technique relies on a parametric description of the measurements, each set of four parameters describing the optical characteristics of a physical interface (e.g., complex refractive index, depth). Under the hypothesis of a Gaussian light source, we show that it is possible to recover the 4K parameters corresponding to K interfaces using as few as 4K uniform samples of the OCT signal. With noisy data, we can expect the robustness of our method to increase with the oversampling rate—or with the redundancy of the measurements. The validation results show that the quality of the estimation of the parameters (in particular the depth of the interfaces) is closely linked to the noise level of the OCT measurements—and not to the coherence length of the source light—and to their degree of redundancy. |
@inproceedings{blu2002a, author = "Blu, T. and Bay, H. and Unser, M.", title = "A New High-Resolution Processing Method for the Deconvolution of Optical Coherence Tomography Signals", booktitle = "Proceedings of the First {IEEE} International Symposium on Biomedical Imaging ({ISBI'02})", month = "July 7--10,", year = "2002", volume = "{III}", pages = "777--780", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002a" } |
Blu, T., Dragotti, P.-L., Vetterli, M., Marziliano, P. & Coulot, L.,"Sparse Sampling of Signal Innovations", IEEE Signal Processing Magazine, Vol. 25 (2), pp. 31-40, March 2008. |
Signal acquisition and reconstruction is at the heart of signal processing, and sampling theorems provide the bridge between the continuous and the discrete-time worlds. The most celebrated and widely used sampling theorem is often attributed to Shannon (and many others, from Whittaker to Kotel′nikov and Nyquist, to name a few) and gives a sufficient condition, namely bandlimitedness, for an exact sampling and interpolation formula. The sampling rate, at twice the maximum frequency present in the signal, is usually called the Nyquist rate. Bandlimitedness, however, is not necessary as is well known but only rarely taken advantage of [1]. In this broader, nonbandlimited view, the question is: when can we acquire a signal using a sampling kernel followed by uniform sampling and perfectly reconstruct it? The Shannon case is a particular example, where any signal from the subspace of bandlimited signals, denoted by BL, can be acquired through sampling and perfectly interpolated from the samples. Using the sinc kernel, or ideal low-pass filter, nonbandlimited signals will be projected onto the subspace BL. The question is: can we beat Shannon at this game, namely, acquire signals from outside of BL and still perfectly reconstruct? An obvious case is bandpass sampling and variations thereof. Less obvious are sampling schemes taking advantage of some sort of sparsity in the signal, and this is the central theme of this article. That is, instead of generic bandlimited signals, we consider the sampling of classes of nonbandlimited parametric signals. This allows us to circumvent Nyquist and perfectly sample and reconstruct signals using sparse sampling, at a rate characterized by how sparse they are per unit of time. 
In some sense, we sample at the rate of innovation of the signal by complying with Occam's razor principle [known as Lex Parcimoniæ or Law of Parsimony: Entia non svnt mvltiplicanda præter necessitatem, or, “Entities should not be multiplied beyond necessity” (from Wikipedia)]. Besides Shannon's sampling theorem, a second basic result that permeates signal processing is certainly Heisenberg's uncertainty principle, which suggests that a singular event in the frequency domain will be necessarily widely spread in the time domain. A superficial interpretation might lead one to believe that a perfect frequency localization requires a very long time observation. That this is not necessary is demonstrated by high resolution spectral analysis methods, which achieve very precise frequency localization using finite observation windows [2], [3]. The way around Heisenberg resides in a parametric approach, where the prior that the signal is a linear combination of sinusoids is brought to bear. If by now you feel uneasy about slaloming around Nyquist, Shannon, and Heisenberg, do not worry. Estimation of sparse data is a classic problem in signal processing and communications, from estimating sinusoids in noise, to locating errors in digital transmissions. Thus, there is a wide variety of available techniques and algorithms. Also, the best possible performance is given by the Cramér-Rao lower bounds for this parametric estimation problem, and one can thus check how close to optimal a solution actually is. We are thus ready to pose the basic questions of this article. Assume a sparse signal (be it in continuous or discrete time) observed through a sampling device that is a smoothing kernel followed by regular or uniform sampling. What is the minimum sampling rate (as opposed to Nyquist's rate, which is often infinite in cases of interest) that allows the signal to be recovered? What classes of sparse signals are possible? 
What are good observation kernels, and what are efficient and stable recovery algorithms? How does observation noise influence recovery, and what algorithms will approach optimal performance? How will these new techniques impact practical applications, from inverse problems to wideband communications? And finally, what is the relationship between the presented methods and classic methods as well as the recent advances in compressed sensing and sampling?
Erratum: There is an unfortunate discrepancy between (3) and (9)-(10). To be consistent with (3), the factor τ expressing the proportionality between ŷ_m and x̂_m has to be replaced by N/B in (10). |
@article{blu2008b, author = "Blu, T. and Dragotti, P.-L. and Vetterli, M. and Marziliano, P. and Coulot, L.", title = "Sparse Sampling of Signal Innovations", journal = "{IEEE} Signal Processing Magazine", month = "March", year = "2008", volume = "25", number = "2", pages = "31--40", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008b" } |
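The annihilating-filter step at the core of the sampling schemes described in this article can be sketched in a few lines of linear algebra. The sketch below is an illustrative reconstruction under simplifying assumptions (a noiseless sum-of-exponentials model, locations encoded as phases), not the article's reference implementation; all variable names are ours.

```python
import numpy as np

# Model: s[m] = sum_k a_k * u_k**m, with K unknown "innovations"
# u_k = exp(-2j*pi*t_k) encoding K unknown locations t_k in [0, 1).
rng = np.random.default_rng(0)
K = 3
t_k = np.sort(rng.uniform(0, 1, K))           # unknown locations
a_k = rng.uniform(1, 2, K)                    # unknown amplitudes
u_k = np.exp(-2j * np.pi * t_k)
M = 2 * K + 1                                 # just above the rate of innovation
m = np.arange(M)
s = (u_k[None, :] ** m[:, None]) @ a_k        # noiseless measurements

# Annihilating filter: h of length K+1 with sum_i h[i]*s[m-i] = 0 for m >= K.
# Stack these equations into a Toeplitz system and take its null vector.
T = np.array([[s[m0 - i] for i in range(K + 1)] for m0 in range(K, M)])
_, _, Vh = np.linalg.svd(T)
h = Vh[-1].conj()                             # null vector = filter coefficients

# The roots of the filter polynomial are the u_k; their phases give the t_k.
u_est = np.roots(h)
t_est = np.sort(np.mod(-np.angle(u_est) / (2 * np.pi), 1))
print(np.max(np.abs(t_est - t_k)))            # recovery error (tiny, noiseless)
```

With noise, the same system is typically solved in a total-least-squares sense (again via the SVD), which is where the statistically optimal recovery mentioned above comes into play.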
Blu, T. & Lebrun, J.,"Analyse temps-fréquence linéaire II: représentations de type ondelettes", Temps-fréquence, concepts et outils, Paris, France, pp. 101-138, Hermès, 2005. |
Wavelet theory was born in the mid-1980s in response to the time-frequency resolution problems of Fourier-type methods. Indeed, many non-stationary signals call for an analysis whose spectral (respectively temporal) resolution varies with the temporal (respectively spectral) localization. It is to allow this flexibility that wavelets, a new concept of so-called "multi-resolution" or "multi-scale" analysis, came into being. After a brief presentation of the continuous wavelet transform, we focus on its discrete version, notably the Mallat algorithm, which is to the wavelet transform what the FFT is to the Fourier transform. We also consider the important problem of designing wavelet-generating filters (Daubechies filters, for example). Furthermore, we study some recent generalizations or extensions (in particular, multi-wavelets, wavelet packets, and frames) made necessary by certain limitations of wavelet theory. Finally, we detail some applications behind the current success of wavelets and, more generally, of time-scale methods (compression and denoising, image registration, etc.). One of the aims of this chapter will thus have been to highlight the cross-fertilization between sometimes rather theoretical approaches, where mathematics and engineering sciences are happily united. |
@incollection{blu2005a, author = "Blu, T. and Lebrun, J.", title = "Analyse temps-fr{\'{e}}quence lin{\'{e}}aire {II}: repr{\'{e}}sentations de type ondelettes", booktitle = "Temps-fr{\'{e}}quence, concepts et outils", publisher = "Herm{\`{e}}s", year = "2005", pages = "101--138", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005a" } |
Blu, T. & Lebrun, J.,"Linear time-frequency analysis II: wavelet-type representations", Time-Frequency Analysis - Concepts and Methods, London, UK, pp. 93-130, Wiley-ISTE, 2008. |
Wavelet theory was born in the mid-1980s in response to the time-frequency resolution problems of Fourier-type methods. Indeed, many non-stationary signals call for an analysis whose spectral (resp. temporal) resolution varies with the temporal (resp. spectral) localization. It is to allow this flexibility that wavelets, a new analysis concept called "multi-resolution" or "multi-scale", were introduced. After a brief presentation of the continuous wavelet transform, we shall focus on its discrete version, notably the Mallat algorithm, which is to the wavelet transform what the FFT is to the Fourier transform. We shall also consider the important problem of the design of wavelet-generating filters (Daubechies filters, for example). Furthermore, we shall study some recent generalizations or extensions (in particular, multi-wavelets, wavelet packets, and frames) that were motivated by certain limitations of wavelet theory. Finally, we shall discuss some applications behind the present success of wavelets and, more generally, of time-scale methods (compression and denoising, image registration, etc.). One of the aims of this chapter will thus be to demonstrate the cross-fertilization between sometimes quite theoretical approaches, where mathematics and engineering sciences are happily united. |
@incollection{blu2008c, author = "Blu, T. and Lebrun, J.", title = "Linear time-frequency analysis {II}: wavelet-type representations", booktitle = "Time-Frequency Analysis - Concepts and Methods", publisher = "Wiley-ISTE", year = "2008", pages = "93--130", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008c" } |
Blu, T. & Luisier, F.,"The SURE-LET Approach to Image Denoising", IEEE Transactions on Image Processing, Vol. 16 (11), pp. 2778-2786, November 2007. |
We propose a new approach to image denoising, based on the image-domain minimization of an estimate of the mean squared error—Stein's unbiased risk estimate (SURE). Unlike most existing denoising algorithms, using the SURE makes it needless to hypothesize a statistical model for the noiseless image. A key point of our approach is that, although the (nonlinear) processing is performed in a transformed domain—typically, an undecimated discrete wavelet transform, but we also address nonorthonormal transforms—this minimization is performed in the image domain. Indeed, we demonstrate that, when the transform is a “tight” frame (an undecimated wavelet transform using orthonormal filters), separate subband minimization yields substantially worse results. In order for our approach to be viable, we add another principle, that the denoising process can be expressed as a linear combination of elementary denoising processes—linear expansion of thresholds (LET). Armed with the SURE and LET principles, we show that a denoising algorithm merely amounts to solving a linear system of equations which is obviously fast and efficient. Quite remarkably, the very competitive results obtained by performing a simple threshold (image-domain SURE optimized) on the undecimated Haar wavelet coefficients show that the SURE-LET principle has a huge potential. |
@article{blu2007b, author = "Blu, T. and Luisier, F.", title = "The {SURE-LET} Approach to Image Denoising", journal = "{IEEE} Transactions on Image Processing", month = "November", year = "2007", volume = "16", number = "11", pages = "2778--2786", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007b" } |
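Because the estimator is linear in the LET coefficients, minimizing the SURE reduces to solving a small linear system, as the abstract states. Here is a minimal 1-D sketch of that principle; the paper works in an undecimated wavelet domain of images, whereas, for brevity, the two elementary "thresholds" below act directly on the samples, and the particular shrinkage function and all names are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma = 4096, 0.5
x = np.sign(np.sin(2 * np.pi * np.arange(N) / 256.0))  # clean piecewise signal
y = x + sigma * rng.standard_normal(N)                 # noisy observation

T = 2.0 * sigma                                        # threshold scale
F = np.stack([y, y * np.exp(-((y / T) ** 2))])         # LET basis, K = 2
# Divergence (sum of pointwise derivatives) of each elementary processing:
divF = np.array([
    float(N),                                          # derivative of y is 1
    np.sum(np.exp(-((y / T) ** 2)) * (1 - 2 * (y / T) ** 2)),
])

# SURE is quadratic in the LET coefficients a, so its minimizer solves
# (F F^T) a = F y - sigma^2 * divF   (Stein's identity supplies the rhs).
M = F @ F.T
c = F @ y - sigma ** 2 * divF
a = np.linalg.solve(M, c)
x_hat = a @ F                                          # denoised estimate

mse_noisy = np.mean((y - x) ** 2)
mse_let = np.mean((x_hat - x) ** 2)
print(mse_let < mse_noisy)                             # SURE-optimized LET helps
```

Note that the clean signal x is used only to report the final MSE; the coefficients a are computed from the noisy data and sigma alone, which is the whole point of the methodology.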
Blu, T. & Luisier, F.,"Image Denoising and the SURE-LET Methodology", Tutorial Presentation at the Seventeenth International Conference on Image Processing (ICIP'2010), Hong Kong, China, September 26--29, 2010. |
Image denoising consists in approximating the noiseless image by performing some, usually non-linear, processing of the noisy image. Most standard techniques involve assumptions on the result of this processing (sparsity, low high-frequency content, etc.); i.e., on the denoised image. Instead, the SURE-LET methodology that we promote consists in approximating the processing itself (seen as a function) by a linear combination of elementary non-linear processings (LET: Linear Expansion of Thresholds), and in optimizing the coefficients of this combination by minimizing a statistically unbiased estimate of the Mean Square Error (SURE: Stein's Unbiased Risk Estimate, for additive Gaussian noise). This tutorial will introduce the technique to the audience and outline its advantages (fast, noise-robust, flexible, image-adaptive). A very complete set of results will be shown and compared with the state of the art. Extensions of the approach to Poisson noise reduction, with application to microscopy imaging, will also be shown. |
@conference{blu2010d, author = "Blu, T. and Luisier, F.", title = "Image Denoising and the {SURE-LET} Methodology", booktitle = "Tutorial Presentation at the Seventeenth International Conference on Image Processing ({ICIP'2010})", month = "September 26--29,", year = "2010", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010d" } |
Blu, T., Moulin, P. & Gilliam, C.,"Approximation order of the LAP optical flow algorithm", Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP'15), Québec City, Canada, pp. 48-52, September 27--30, 2015. |
Estimating the displacements between two images is often addressed using a small displacement assumption, which leads to what is known as the optical flow equation. We study the quality of the underlying approximation for the recently developed Local All-Pass (LAP) optical flow algorithm, which is based on another approach—displacements result from filtering. While the simplest version of LAP computes only first-order differences, we show that the order of LAP approximation is quadratic, unlike standard optical flow equation based algorithms for which this approximation is only linear. More generally, the order of approximation of the LAP algorithm is twice the differentiation order involved. The key step in the derivation is the use of Padé approximants. |
@inproceedings{blu2015b, author = "Blu, T. and Moulin, P. and Gilliam, C.", title = "Approximation order of the {LAP} optical flow algorithm", booktitle = "Proceedings of the 2015 {IEEE} International Conference on Image Processing ({ICIP'15})", month = "September 27--30,", year = "2015", pages = "48--52", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2015b" } |
Blu, T. & Rioul, O.,"Wavelet Regularity of Iterated Filter Banks with Rational Sampling Changes", Proceedings of the Eighteenth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'93), Minneapolis, USA, Vol. {III}, pp. 213-216, April 27--30, 1993. |
The regularity property was first introduced by wavelet theory for octave-band dyadic filter banks. In the present work, the authors provide a detailed theoretical analysis of the regularity property in the more flexible case of filter banks with rational sampling changes. Such filter banks provide a finer analysis of fractions of an octave, and regularity is as important as in the dyadic case. Sharp regularity estimates for any filter bank are given. The major difficulty of the rational case, as compared with the dyadic case, is that one obtains wavelets that are not shifted versions of each other at a given scale. It is shown, however, that, under regularity conditions, shift invariance can almost be obtained. This is a desirable property for, e.g., coding applications and for efficient filter bank implementation of a continuous wavelet transform. |
@inproceedings{blu1993b, author = "Blu, T. and Rioul, O.", title = "Wavelet Regularity of Iterated Filter Banks with Rational Sampling Changes", booktitle = "Proceedings of the Eighteenth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'93})", month = "April 27--30,", year = "1993", volume = "{III}", pages = "213--216", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1993b" } |
Blu, T., Sühling, M., Thévenaz, P. & Unser, M.,"Approximation Order: Why the Asymptotic Constant Matters", Second Pacific Rim Conference on Mathematics (PRCM'01), Taipei, Taiwan, pp. {II}.3-{II}.4, January 4--8, 2001. |
We consider the approximation (either interpolation, or least-squares) of L² functions in the shift-invariant space V_T = span_{k∈Z} {φ(t/T − k)} that is generated by the single shifted function φ. We measure the approximation error in an L² sense and evaluate the asymptotic equivalent of this error as the sampling step T tends to zero. Let f ∈ L² and f_T be its approximation in V_T. It is well-known that, if φ satisfies the Strang-Fix conditions of order L, and under mild technical constraints, ‖f − f_T‖_{L²} = O(T^L). In this presentation however, we want to be more accurate and concentrate on the constant C_φ which is such that ‖f − f_T‖_{L²} = C_φ ‖f^(L)‖_{L²} T^L + o(T^L). |
@inproceedings{blu2001a, author = "Blu, T. and S{\"{u}}hling, M. and Th{\'{e}}venaz, P. and Unser, M.", title = "Approximation Order: {W}hy the Asymptotic Constant Matters", booktitle = "Second Pacific Rim Conference on Mathematics ({PRCM'01})", month = "January 4--8,", year = "2001", pages = "{II}.3--{II}.4", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001a" } |
Blu, T., Thévenaz, P. & Unser, M.,"Minimum Support Interpolators with Optimum Approximation Properties", Proceedings of the 1998 IEEE International Conference on Image Processing (ICIP'98), Chicago, USA, Vol. {III}, pp. 242-245, October 4--7, 1998. |
We investigate the functions of given approximation order L that have the smallest support. Those are shown to be linear combinations of the B-spline of degree L-1 and its L-1 first derivatives. We then show how to find the functions that minimize the asymptotic approximation constant among this finite dimension space; in particular, a tractable induction relation is worked out. Using these functions instead of splines, we observe that the approximation error is dramatically reduced, not only in the limit when the sampling step tends to zero, but also for higher values up to the Shannon rate. Finally, we show that those optimal functions satisfy a scaling equation, although less simple than the usual two-scale difference equation. |
@inproceedings{blu1998c, author = "Blu, T. and Th{\'{e}}venaz, P. and Unser, M.", title = "Minimum Support Interpolators with Optimum Approximation Properties", booktitle = "Proceedings of the 1998 {IEEE} International Conference on Image Processing ({ICIP'98})", month = "October 4--7,", year = "1998", volume = "{III}", pages = "242--245", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1998c" } |
Blu, T., Thévenaz, P. & Unser, M.,"Generalized Interpolation: Higher Quality at no Additional Cost", Proceedings of the 1999 IEEE International Conference on Image Processing (ICIP'99), Kobe, Japan, Vol. {III}, pp. 667-671, October 25--28, 1999. |
We extend the classical interpolation method to generalized interpolation. This extension is done by replacing the interpolating function by a non-interpolating function that is applied to prefiltered data, in order to preserve the interpolation condition. We show, both theoretically and practically, that this approach performs much better than classical methods, for the same computational cost. |
@inproceedings{blu1999a, author = "Blu, T. and Th{\'{e}}venaz, P. and Unser, M.", title = "Generalized Interpolation: {H}igher Quality at no Additional Cost", booktitle = "Proceedings of the 1999 {IEEE} International Conference on Image Processing ({ICIP'99})", month = "October 25--28,", year = "1999", volume = "{III}", pages = "667--671", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1999a" } |
Blu, T., Thévenaz, P. & Unser, M.,"MOMS: Maximal-Order Interpolation of Minimal Support", IEEE Transactions on Image Processing, Vol. 10 (7), pp. 1069-1080, July 2001. |
We consider the problem of interpolating a signal using a linear combination of shifted versions of a compactly-supported basis function φ(x). We first give the expression of the φ's that have minimal support for a given accuracy (also known as "approximation order"). This class of functions, which we call maximal-order-minimal-support functions (MOMS), is made of linear combinations of the B-spline of the same order and of its derivatives. We provide the explicit form of the MOMS that maximize the approximation accuracy when the step-size is small enough. We compute the sampling gain obtained by using these optimal basis functions over the splines of the same order. We show that it is already substantial for small orders and that it further increases with the approximation order L. When L is large, this sampling gain becomes linear; more specifically, its exact asymptotic expression is 2L/(πe). Since the optimal functions are continuous, but not differentiable, for even orders, and even only piecewise continuous for odd orders, our result implies that regularity has little to do with approximating performance. These theoretical findings are corroborated by experimental evidence that involves compounded rotations of images. |
@article{blu2001b, author = "Blu, T. and Th{\'{e}}venaz, P. and Unser, M.", title = "{MOMS}: {M}aximal-Order Interpolation of Minimal Support", journal = "{IEEE} Transactions on Image Processing", month = "July", year = "2001", volume = "10", number = "7", pages = "1069--1080", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001b" } |
Blu, T., Thévenaz, P. & Unser, M.,"How a Simple Shift Can Significantly Improve the Performance of Linear Interpolation", Proceedings of the 2002 IEEE International Conference on Image Processing (ICIP'02), Rochester, USA, Vol. {III}, pp. 377-380, September 22--25, 2002. |
We present a simple, original method to improve piecewise linear interpolation with uniform knots: We shift the sampling knots by a fixed amount, while enforcing the interpolation property. Thanks to a theoretical analysis, we determine the optimal shift that maximizes the quality of our shifted linear interpolation. Surprisingly enough, this optimal value is nonzero and it is close to 1 ⁄ 5. We confirm our theoretical findings by performing a cumulative rotation experiment, which shows a significant increase of the quality of the shifted method with respect to the standard one. Most interesting is the fact that we get a quality similar to that of high-quality cubic convolution at the computational cost of linear interpolation. |
@inproceedings{blu2002b, author = "Blu, T. and Th{\'{e}}venaz, P. and Unser, M.", title = "How a Simple Shift Can Significantly Improve the Performance of Linear Interpolation", booktitle = "Proceedings of the 2002 {IEEE} International Conference on Image Processing ({ICIP'02})", month = "September 22--25,", year = "2002", volume = "{III}", pages = "377--380", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002b" } |
Blu, T., Thévenaz, P. & Unser, M.,"Complete Parameterization of Piecewise-Polynomial Interpolation Kernels", IEEE Transactions on Image Processing, Vol. 12 (11), pp. 1297-1309, November 2003. |
Every now and then, a new design of an interpolation kernel shows up in the literature. While interesting results have emerged, the traditional design methodology proves laborious and is riddled with very large systems of linear equations that must be solved analytically. In this paper, we propose to ease this burden by providing an explicit formula that will generate every possible piecewise-polynomial kernel given its degree, its support, its regularity, and its order of approximation. This formula contains a set of coefficients that can be chosen freely and do not interfere with the four main design parameters; it is thus easy to tune the design to achieve any additional constraints that the designer may care for. |
@article{blu2003a, author = "Blu, T. and Th{\'{e}}venaz, P. and Unser, M.", title = "Complete Parameterization of Piecewise-Polynomial Interpolation Kernels", journal = "{IEEE} Transactions on Image Processing", month = "November", year = "2003", volume = "12", number = "11", pages = "1297--1309", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003a" } |
Blu, T., Thévenaz, P. & Unser, M.,"Linear Interpolation Revitalized", IEEE Transactions on Image Processing, Vol. 13 (5), pp. 710-719, May 2004. |
We present a simple, original method to improve piecewise-linear interpolation with uniform knots: we shift the sampling knots by a fixed amount, while enforcing the interpolation property. We determine the theoretical optimal shift that maximizes the quality of our shifted linear interpolation. Surprisingly enough, this optimal value is nonzero and close to 1⁄5. We confirm our theoretical findings by performing several experiments: a cumulative rotation experiment and a zoom experiment. Both show a significant increase of the quality of the shifted method with respect to the standard one. We also observe that, in these results, we get a quality that is similar to that of the computationally more costly “high-quality” cubic convolution. |
@article{blu2004a, author = "Blu, T. and Th{\'{e}}venaz, P. and Unser, M.", title = "Linear Interpolation Revitalized", journal = "{IEEE} Transactions on Image Processing", month = "May", year = "2004", volume = "13", number = "5", pages = "710--719", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004a" } |
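The mechanism described in this abstract fits in a few lines: shift the triangular kernel by a fixed amount tau, and restore the interpolation property with a first-order recursive prefilter. The recursion below follows directly from the interpolation condition (1 − τ)c[n] + τc[n−1] = s[n]; the boundary handling and function names are our own illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

tau = 0.2113  # near-optimal shift, "close to 1/5" per the abstract

def prefilter(s, tau=tau):
    # Solve (1 - tau) c[n] + tau c[n-1] = s[n] with a causal recursion.
    c = np.empty(len(s))
    c[0] = s[0] / (1 - tau)          # simple boundary handling (assumption)
    for n in range(1, len(s)):
        c[n] = (s[n] - tau * c[n - 1]) / (1 - tau)
    return c

def interp(s, x):
    # Evaluate sum_k c[k] * beta1(x - k - tau), beta1 = triangle on [-1, 1].
    s = np.asarray(s, dtype=float)
    c = prefilter(s)
    k = np.arange(len(s))
    w = np.maximum(0.0, 1.0 - np.abs(x - k - tau))
    return float(w @ c)

s = np.sin(0.3 * np.arange(20))
print(abs(interp(s, 7.0) - s[7]))    # ~0: interpolation property at the knots
```

The recursion is stable for any shift tau < 1/2, so the prefiltering adds only one multiply-accumulate per sample, which is why the method keeps the computational cost of plain linear interpolation.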
Blu, T., Thévenaz, P. & Unser, M.,"High-Quality Causal Interpolation for Online Unidimensional Signal Processing", Proceedings of the Twelfth European Signal Processing Conference (EUSIPCO'04), Wien, Austria, pp. 1417-1420, September 6--10, 2004. |
We present a procedure for designing interpolation kernels that are adapted to time signals; i.e., they are causal, even though they do not have a finite support. The considered kernels are obtained by digital IIR filtering of a finite support function that has maximum approximation order. We show how to build these kernels starting from the all-pole digital filter and we give some practical design examples. |
@inproceedings{blu2004b, author = "Blu, T. and Th{\'{e}}venaz, P. and Unser, M.", title = "High-Quality Causal Interpolation for Online Unidimensional Signal Processing", booktitle = "Proceedings of the Twelfth European Signal Processing Conference ({EUSIPCO'04})", month = "September 6--10,", year = "2004", pages = "1417--1420", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004b" } |
Blu, T. & Unser, M.,"Quantitative L^2 Error Analysis for Interpolation Methods and Wavelet Expansions", Proceedings of the 1997 IEEE International Conference on Image Processing (ICIP'97), Santa Barbara, USA, Vol. {I}, pp. 663-666, October 26--29, 1997. |
Our goal in this paper is to set a theoretical basis for the comparison of re-sampling and interpolation methods. We consider the general problem of the approximation of an arbitrary continuously-defined function f(x)—not necessarily bandlimited—when we vary the sampling step T. We present an accurate L2 computation of the induced approximation error as a function of T for a general class of linear approximation operators including interpolation and other kinds of projectors. This new quantitative result provides exact expressions for the asymptotic development of the error as T→0, and also sharp (asymptotically exact) upper bounds. |
@inproceedings{blu1997a, author = "Blu, T. and Unser, M.", title = "Quantitative {$L^{2}$} Error Analysis for Interpolation Methods and Wavelet Expansions", booktitle = "Proceedings of the 1997 {IEEE} International Conference on Image Processing ({ICIP'97})", month = "October 26--29,", year = "1997", volume = "{I}", pages = "663--666", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1997a" } |
Blu, T. & Unser, M.,"A Quantitative Fourier Analysis of the Linear Approximation Error by Wavelets", Wavelet Applications Workshop, Monte Verità, Switzerland, September 28--October 2, 1998. |
We introduce a simple method—integration of the power spectrum against a Fourier kernel—for computing the approximation error by wavelets. This method is powerful enough to recover all classical L2 results in approximation theory (Strang-Fix theory), and also to provide new error estimates that are sharper and asymptotically exact. |
@inproceedings{blu1998d, author = "Blu, T. and Unser, M.", title = "A Quantitative {F}ourier Analysis of the Linear Approximation Error by Wavelets", booktitle = "Wavelet Applications Workshop", month = "September 28--October 2,", year = "1998", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1998d" } |
Blu, T. & Unser, M.,"Quantitative Fourier Analysis of Approximation Techniques: Part II---Wavelets", IEEE Transactions on Signal Processing, Vol. 47 (10), pp. 2796-2806, October 1999. |
In a previous paper, we proposed a general Fourier method which provides an accurate prediction of the approximation error, irrespective of the scaling properties of the approximating functions. Here, we apply our results when these functions satisfy the usual two-scale relation encountered in dyadic multiresolution analysis. As a consequence of this additional constraint, the quantities introduced in our previous paper can be computed explicitly as a function of the refinement filter. This is in particular true for the asymptotic expansion of the approximation error for biorthonormal wavelets, as the scale tends to zero. One of the contributions of this paper is the computation of sharp, asymptotically optimal upper bounds for the least-squares approximation error. Another contribution is the application of these results to B-splines and Daubechies scaling functions, which yields explicit asymptotic developments and upper bounds. Thanks to these explicit expressions, we can quantify the improvement that can be obtained by using B-splines instead of Daubechies wavelets. In other words, we can use a coarser spline sampling and achieve the same reconstruction accuracy as Daubechies: Specifically, we show that this sampling gain converges to pi as the order tends to infinity. Please consult also the companion paper by T. Blu, M. Unser, "Quantitative Fourier Analysis of Approximation Techniques: Part I—Interpolators and Projectors," IEEE Transactions on Signal Processing, vol. 47, no. 10, pp. 2783-2795, October 1999. |
@article{blu1999b, author = "Blu, T. and Unser, M.", title = "Quantitative {F}ourier Analysis of Approximation Techniques: {P}art {II}---{W}avelets", journal = "{IEEE} Transactions on Signal Processing", month = "October", year = "1999", volume = "47", number = "10", pages = "2796--2806", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1999b" } |
Blu, T. & Unser, M.,"Quantitative Fourier Analysis of Approximation Techniques: Part I---Interpolators and Projectors", IEEE Transactions on Signal Processing, Vol. 47 (10), pp. 2783-2795, October 1999. |
We present a general Fourier-based method that provides an accurate prediction of the approximation error as a function of the sampling step T. Our formalism applies to an extended class of convolution-based signal approximation techniques, which includes interpolation, generalized sampling with prefiltering, and the projectors encountered in wavelet theory. We claim that we can predict the L2-approximation error, by integrating the spectrum of the function to approximate—not necessarily bandlimited—against a frequency kernel E(ω) that characterizes the approximation operator. This prediction is easier, yet more precise than was previously available. Our approach has the remarkable property of providing a global error estimate that is the average of the true approximation error over all possible shifts of the input function. Our error prediction is exact for stationary processes, as well as for bandlimited signals. We apply this method to the comparison of standard interpolation and approximation techniques. Our method has interesting implications for approximation theory. In particular, we use our results to obtain some new asymptotic expansions of the error as T tends to 0, and also to derive improved upper bounds of the kind found in the Strang-Fix theory. We finally show how we can design quasi-interpolators that are near-optimal in the least-squares sense. Please consult also the companion paper by T. Blu, M. Unser, "Quantitative Fourier Analysis of Approximation Techniques: Part II—Wavelets," IEEE Transactions on Signal Processing, vol. 47, no. 10, pp. 2796-2806, October 1999. |
@article{blu1999c, author = "Blu, T. and Unser, M.", title = "Quantitative {F}ourier Analysis of Approximation Techniques: {P}art {I}---{I}nterpolators and Projectors", journal = "{IEEE} Transactions on Signal Processing", month = "October", year = "1999", volume = "47", number = "10", pages = "2783--2795", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1999c" } |
Blu, T. & Unser, M.,"Approximation Error for Quasi-Interpolators and (Multi-) Wavelet Expansions", Applied and Computational Harmonic Analysis, Vol. 6 (2), pp. 219-251, March 1999. |
We investigate the approximation properties of general polynomial preserving operators that approximate a function into some scaled subspace of L2 via an appropriate sequence of inner products. In particular, we consider integer shift-invariant approximations such as those provided by splines and wavelets, as well as finite elements and multi-wavelets which use multiple generators. We estimate the approximation error as a function of the scale parameter T when the function to approximate is sufficiently regular. We then present a generalized sampling theorem, a result that is rich enough to provide tight bounds as well as asymptotic expansions of the approximation error as a function of the sampling step T. Another more theoretical consequence is the proof of a conjecture by Strang and Fix, stating the equivalence between the order of a multi-wavelet space and the order of a particular subspace generated by a single function. Finally, we consider refinable generating functions and use the two-scale relation to obtain explicit formulae for the coefficients of the asymptotic development of the error. The leading constants are easily computable and can be the basis for the comparison of the approximation power of wavelet and multi-wavelet expansions of a given order L. |
@article{blu1999d, author = "Blu, T. and Unser, M.", title = "Approximation Error for Quasi-Interpolators and (Multi-) Wavelet Expansions", journal = "Applied and Computational Harmonic Analysis", month = "March", year = "1999", volume = "6", number = "2", pages = "219--251", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1999d" } |
Blu, T. & Unser, M.,"A Theoretical Analysis of the Projection Error onto Discrete Wavelet Subspaces", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing VII, Denver, USA, Vol. 3813, pp. 273-281, July 19--23, 1999. |
A filterbank decomposition can be seen as a series of projections onto several discrete wavelet subspaces. In this presentation, we analyze the projection onto one of them—the low-pass one, since many signals tend to be low-pass. We prove a general but simple formula that allows the computation of the l2-error made by approximating the signal by its projection. This result provides a norm for evaluating the accuracy of a complete decimation/interpolation branch for arbitrary analysis and synthesis filters; such a norm could be useful for the joint design of an analysis and synthesis filter, especially in the non-orthonormal case. As an example, we use our framework to compare the efficiency of different wavelet filters, such as Daubechies' or splines. In particular, we prove that the error made by using a Daubechies' filter downsampled by 2 is of the same order as the error using an orthonormal spline filter downsampled by 6. This proof is valid asymptotically as the number of regularity factors tends to infinity, and for a signal that is essentially low-pass. This implies that splines bring an additional compression gain of at least 3 over Daubechies' filters, asymptotically. |
@inproceedings{blu1999e, author = "Blu, T. and Unser, M.", title = "A Theoretical Analysis of the Projection Error onto Discrete Wavelet Subspaces", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {VII}", month = "July 19--23,", year = "1999", volume = "3813", pages = "273--281", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1999e" } |
Blu, T. & Unser, M.,"The Fractional Spline Wavelet Transform: Definition and Implementation", Proceedings of the Twenty-Fifth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'00), Istanbul, Turkey, Vol. {I}, pp. 512-515, June 5--9, 2000. |
We define a new wavelet transform that is based on a recently defined family of scaling functions: the fractional B-splines. The interest of this family is that they interpolate between the integer degrees of polynomial B-splines and that they allow a fractional order of approximation. The orthogonal fractional spline wavelets essentially behave as fractional differentiators. This property seems promising for the analysis of 1/f^α noise, which can be whitened by an appropriate choice of the degree of the spline transform. We present a practical FFT-based algorithm for the implementation of these fractional wavelet transforms, and give some examples of processing. |
@inproceedings{blu2000a, author = "Blu, T. and Unser, M.", title = "The Fractional Spline Wavelet Transform: {D}efinition and Implementation", booktitle = "Proceedings of the Twenty-Fifth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'00})", month = "June 5--9,", year = "2000", volume = "{I}", pages = "512--515", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000a" } |
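As an illustration of the Fourier-domain machinery behind such an FFT-based implementation, the sketch below numerically checks that an orthonormalized fractional-spline scaling filter satisfies the quadrature-mirror condition |H(ω)|² + |H(ω+π)|² = 2 for non-integer degrees. The symmetric fractional B-spline transform |sinc(ω/2π)|^(α+1), the truncated autocorrelation sum, and all function names are assumptions of this sketch, not the paper's exact algorithm:

```python
import numpy as np

def frac_bspline_ft(w, alpha):
    """Fourier transform of the symmetric fractional B-spline of degree
    alpha: |sin(w/2)/(w/2)|^(alpha+1), with the limit value 1 at w = 0."""
    return np.abs(np.sinc(np.asarray(w, dtype=float) / (2 * np.pi))) ** (alpha + 1)

def autocorr_ft(w, alpha, K=200):
    """A(w) = sum_n |B(w + 2*pi*n)|^2, truncated to |n| <= K
    (the terms decay like |n|^(-2(alpha+1)), so truncation is benign)."""
    w = np.atleast_1d(np.asarray(w, dtype=float))
    n = np.arange(-K, K + 1)
    return np.sum(frac_bspline_ft(w[:, None] + 2 * np.pi * n[None, :], alpha) ** 2, axis=1)

def scaling_filter(w, alpha):
    """Frequency response of the orthonormalized fractional-spline scaling
    filter: H(w) = sqrt(2) * B(2w)/B(w) * sqrt(A(w)/A(2w))."""
    w = np.atleast_1d(np.asarray(w, dtype=float))
    return (np.sqrt(2.0) * frac_bspline_ft(2 * w, alpha) / frac_bspline_ft(w, alpha)
            * np.sqrt(autocorr_ft(w, alpha) / autocorr_ft(2 * w, alpha)))
```

The QMF identity holds for any real degree α > −1/2, which is what makes an exact frequency-domain (FFT) implementation of the transform possible.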
Blu, T. & Unser, M.,"Wavelets, Fractals, and Radial Basis Functions", IEEE Transactions on Signal Processing, Vol. 50 (3), pp. 543-553, March 2002. IEEE Signal Processing Society's 2003 Best Paper Award. |
Wavelets and radial basis functions (RBFs) lead to two distinct ways of representing signals in terms of shifted basis functions. RBFs, unlike wavelets, are nonlocal and do not involve any scaling, which makes them applicable to nonuniform grids. Despite these fundamental differences, we show that the two types of representation are closely linked together through fractals. First, we identify and characterize the whole class of self-similar radial basis functions that can be localized to yield conventional multiresolution wavelet bases. Conversely, we prove that for any compactly supported scaling function φ(x), there exists a one-sided central basis function ρ+(x) that spans the same multiresolution subspaces. The central property is that the multiresolution bases are generated by simple translation of ρ+ without any dilation. We also present an explicit time-domain representation of a scaling function as a sum of harmonic splines. The leading term in the decomposition corresponds to the fractional splines: a recent, continuous-order generalization of the polynomial splines. |
@article{blu2002c, author = "Blu, T. and Unser, M.", title = "Wavelets, Fractals, and Radial Basis Functions", journal = "{IEEE} Transactions on Signal Processing", month = "March", year = "2002", volume = "50", number = "3", pages = "543--553", note = "IEEE Signal Processing Society's 2003 \textbf{Best Paper Award}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002c" } |
Blu, T. & Unser, M.,"Harmonic Spline Series Representation of Scaling Functions", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing X, San Diego, USA, Vol. 5207, pp. 120-124, August 3--8, 2003. Part I. |
We present here an explicit time-domain representation of any compactly supported dyadic scaling function as a sum of harmonic splines. The leading term in the decomposition corresponds to the fractional splines, a recent, continuous-order generalization of the polynomial splines. |
@inproceedings{blu2003b, author = "Blu, T. and Unser, M.", title = "Harmonic Spline Series Representation of Scaling Functions", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {X}", month = "August 3--8,", year = "2003", volume = "5207", pages = "120--124", note = "Part {I}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003b" } |
Blu, T. & Unser, M.,"A Complete Family of Scaling Functions: The (α,τ)-Fractional Splines", Proceedings of the Twenty-Eighth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'03), Hong Kong, China, Vol. {VI}, pp. 421-424, April 6--10, 2003. |
We describe a new family of scaling functions, the (α, τ)-fractional splines, which generate valid multiresolution analyses. These functions are characterized by two real parameters: α, which controls the width of the scaling functions; and τ, which specifies their position with respect to the grid (shift parameter). This new family is complete in the sense that it is closed under convolutions and correlations. We give the explicit time and Fourier domain expressions of these fractional splines. We prove that the family is closed under generalized fractional differentiations, and, in particular, under the Hilbert transformation. We also show that the associated wavelets are able to whiten 1/f^λ-type noise, by an adequate tuning of the spline parameters. A fast (and exact) FFT-based implementation of the fractional spline wavelet transform is already available. We show that fractional integration operators can be expressed as the composition of an analysis and a synthesis iterated filterbank. |
@inproceedings{blu2003c, author = "Blu, T. and Unser, M.", title = "A Complete Family of Scaling Functions: {T}he (${\alpha},{\tau}$)-Fractional Splines", booktitle = "Proceedings of the Twenty-Eighth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'03})", month = "April 6--10,", year = "2003", volume = "{VI}", pages = "421--424", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003c" } |
Blu, T. & Unser, M.,"Quantitative L^2 Approximation Error of a Probability Density Estimate Given by Its Samples", Proceedings of the Twenty-Ninth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'04), Montréal, Canada, Vol. {III}, pp. 952-955, May 17--21, 2004. |
We present a new result characterized by an exact integral expression for the approximation error between a probability density and an integer shift-invariant estimate obtained from its samples. Unlike the Parzen window estimate, this estimate avoids recomputing the complete probability density for each new sample: only a few coefficients are required, making it practical for real-time applications. We also show how to obtain the exact asymptotic behavior of the approximation error when the number of samples increases, and provide the trade-off between the number of samples and the sampling step size. |
@inproceedings{blu2004c, author = "Blu, T. and Unser, M.", title = "Quantitative $\mathbf{L}^{2}$ Approximation Error of a Probability Density Estimate Given by Its Samples", booktitle = "Proceedings of the Twenty-Ninth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'04})", month = "May 17--21,", year = "2004", volume = "{III}", pages = "952--955", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004c" } |
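The few-coefficients-per-sample update mentioned in the abstract can be illustrated with a toy estimator: each incoming sample updates only the two linear B-spline coefficients whose supports contain it, so the full density never needs to be recomputed. The kernel choice (the B-spline used directly as analysis function), the grid, and the names are assumptions of this sketch, not the paper's estimator:

```python
import numpy as np

def spline_density_estimate(samples, T=0.5, support=(-5.0, 5.0)):
    """Integer shift-invariant density estimate
        p_hat(x) = (1/T) * sum_k c[k] * B1(x/T - k),
    where each sample increments only the two coefficients whose linear
    B-splines overlap it (contrast with a Parzen window estimate)."""
    lo, hi = support
    K = int(np.ceil((hi - lo) / T)) + 1   # number of knots on the grid
    c = np.zeros(K)
    N = len(samples)
    for s in samples:
        u = (s - lo) / T
        k = int(np.floor(u))
        t = u - k                          # position between knots k, k+1
        if 0 <= k < K - 1:                 # skip samples outside the grid
            c[k] += (1.0 - t) / N
            c[k + 1] += t / N

    def p_hat(x):
        u = (np.asarray(x, dtype=float) - lo) / T
        k = np.clip(np.floor(u).astype(int), 0, K - 2)
        t = u - k
        return ((1.0 - t) * c[k] + t * c[k + 1]) / T

    return p_hat
```

Because the linear B-spline forms a partition of unity, the coefficients sum to one and the estimate integrates to one, up to boundary truncation.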
Blu, T. & Unser, M.,"Optimal Interpolation of Fractional Brownian Motion Given Its Noisy Samples", Proceedings of the Thirty-First IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'06), Toulouse, France, pp. {III}-860-{III}-863, May 14--19, 2006. |
We consider the problem of estimating a fractional Brownian motion known only from its noisy samples at the integers. We show that the optimal estimator can be expressed using a digital Wiener-like filter followed by a simple time-variant correction accounting for nonstationarity. Moreover, we prove that this estimate lives in a symmetric fractional spline space and give a practical implementation for optimal upsampling of noisy fBm samples by integer factors. |
@inproceedings{blu2006a, author = "Blu, T. and Unser, M.", title = "Optimal Interpolation of Fractional {B}rownian Motion Given Its Noisy Samples", booktitle = "Proceedings of the Thirty-First {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'06})", month = "May 14--19,", year = "2006", pages = "{III}-860--{III}-863", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006a" } |
Blu, T. & Unser, M.,"Self-Similarity: Part II---Optimal Estimation of Fractal Processes", IEEE Transactions on Signal Processing, Vol. 55 (4), pp. 1364-1378, April 2007. |
In a companion paper (see Self-Similarity: Part I—Splines and Operators), we characterized the class of scale-invariant convolution operators: the generalized fractional derivatives of order γ. We used these operators to specify regularization functionals for a series of Tikhonov-like least-squares data fitting problems and proved that the general solution is a fractional spline of twice the order. We investigated the deterministic properties of these smoothing splines and proposed a fast Fourier transform (FFT)-based implementation. Here, we present an alternative stochastic formulation to further justify these fractional spline estimators. As suggested by the title, the relevant processes are those that are statistically self-similar; that is, fractional Brownian motion (fBm) and its higher order extensions. To overcome the technical difficulties due to the nonstationary character of fBm, we adopt a distributional formulation due to Gel′fand. This allows us to rigorously specify an innovation model for these fractal processes, which rests on the property that they can be whitened by suitable fractional differentiation. Using the characteristic form of the fBm, we then derive the conditional probability density function (PDF) p(BH(t)|Y), where Y = {BH(k)+n[k]}k∈Z are the noisy samples of the fBm BH(t) with Hurst exponent H. We find that the conditional mean is a fractional spline of degree 2H, which proves that this class of functions is indeed optimal for the estimation of fractal-like processes. The result also yields the optimal [minimum mean-square error (MMSE)] parameters for the smoothing spline estimator, as well as the connection with kriging and Wiener filtering. Please consult also the companion paper by M. Unser, T. Blu, "Self-Similarity: Part I—Splines and Operators," IEEE Transactions on Signal Processing, vol. 55, no. 4, pp. 1352-1363, April 2007. |
@article{blu2007c, author = "Blu, T. and Unser, M.", title = "Self-Similarity: {P}art {II}---{O}ptimal Estimation of Fractal Processes", journal = "{IEEE} Transactions on Signal Processing", month = "April", year = "2007", volume = "55", number = "4", pages = "1364--1378", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007c" } |
Blu, T., Unser, M. & Thévenaz, P.,"Optimizing Basis Functions for Best Approximation", Fifth International Conference on Curves and Surfaces (ICCS'02), Saint Malo, France, June 27--July 3, 2002. |
By evaluating approximation theoretic quantities, we show how to compute explicitly the basis generators that minimize the approximation error for a full set of functions to approximate. We give several examples of this optimization, either to get the best generators that have maximal order for minimum support [1], or to design the best interpolation scheme with classical generators such as B-splines [2]. We present practical examples that visually confirm the validity of our approach. |
@inproceedings{blu2002d, author = "Blu, T. and Unser, M. and Th{\'{e}}venaz, P.", title = "Optimizing Basis Functions for Best Approximation", booktitle = "Fifth International Conference on Curves and Surfaces ({ICCS'02})", month = "June 27--July 3,", year = "2002", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002d" } |
Blu, T., Unser, M., Thévenaz, P. & Sühling, M.,"Interpolation Method and Apparatus", International Patent WO2003021474, 2003. |
Method of interpolating digital samples using interpolation functions that are shifted by an arbitrary shift value relative to said samples. It will be shown that there is a non-zero and non-trivial optimal value for this shift value for which the approximation error is minimized. |
@misc{blu2003d, author = "Blu, T. and Unser, M. and Th{\'{e}}venaz, P. and S{\"u}hling, M.", title = "Interpolation Method and Apparatus", year = "2003", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003d" } |
Chacko, N., Liebling, M. & Blu, T.,"Discretization of continuous convolution operators for accurate modeling of wave propagation in digital holography", Journal of the Optical Society of America A, Vol. 30 (10), pp. 2012-2020, October 2013. |
Discretization of continuous (analog) convolution operators by direct sampling of the convolution kernel and use of fast Fourier transforms (FFT) is highly efficient. However, it assumes the input and output signals are band-limited, a condition rarely met in practice, where signals have finite support or abrupt edges and sampling is non-ideal. Here, we propose to approximate signals in analog, shift-invariant function spaces, which do not need to be band-limited, resulting in discrete coefficients for which we derive discrete convolution kernels that accurately model the analog convolution operator while taking into account non-ideal sampling devices (such as finite fill-factor cameras). This approach retains the efficiency of direct sampling but not its limiting assumption. We propose fast forward and inverse algorithms that handle finite-length, periodic, and mirror-symmetric signals with rational sampling rates. We provide explicit convolution kernels for computing coherent wave propagation in the context of digital holography. When compared to band-limited methods in simulations, our method leads to fewer reconstruction artifacts when signals have sharp edges or when using non-ideal sampling devices. |
@article{blu2013a, author = "Chacko, N. and Liebling, M. and Blu, T.", title = "Discretization of continuous convolution operators for accurate modeling of wave propagation in digital holography", journal = "Journal of the Optical Society of America A", month = "October", year = "2013", volume = "30", number = "10", pages = "2012--2020", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013a" } |
Condat, L., Blu, T. & Unser, M.,"Beyond Interpolation: Optimal Reconstruction by Quasi-Interpolation", Proceedings of the 2005 IEEE International Conference on Image Processing (ICIP'05), Genova, Italy, Vol. {I}, pp. 33-36, September 11--14, 2005. Best Student Paper Award. |
We investigate the use of quasi-interpolating approximation schemes to construct an estimate of an unknown function from its given discrete samples. We show theoretically and with practical experiments that such methods perform better than classical interpolation, for the same computation cost. |
@inproceedings{blu2005b, author = "Condat, L. and Blu, T. and Unser, M.", title = "Beyond Interpolation: {O}ptimal Reconstruction by Quasi-Interpolation", booktitle = "Proceedings of the 2005 {IEEE} International Conference on Image Processing ({ICIP'05})", month = "September 11--14,", year = "2005", volume = "{I}", pages = "33--36", note = "\textbf{Best Student Paper Award}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005b" } |
Condat, L., Van De Ville, D. & Blu, T.,"Hexagonal Versus Orthogonal Lattices: A New Comparison Using Approximation Theory", Proceedings of the 2005 IEEE International Conference on Image Processing (ICIP'05), Genova, Italy, Vol. {III}, pp. 1116-1119, September 11--14, 2005. |
We provide a new comparison between hexagonal and orthogonal lattices, based on approximation theory. For each of the lattices, we select the “natural” spline basis function as generator for a shift-invariant function space; i.e., the tensor-product B-splines for the orthogonal lattice and the non-separable hex-splines for the hexagonal lattice. For a given order of approximation, we compare the asymptotic constants of the error kernels, which give a very good indication of the approximation quality. We find that the approximation quality on the hexagonal lattice is consistently better when choosing lattices with the same sampling density. The area sampling gain related to these asymptotic constants quickly converges when the order of approximation of the basis functions increases. Surprisingly, nearest-neighbor interpolation does not allow one to profit from the hexagonal grid. For practical purposes, the second-order hex-spline (i.e., constituted by linear patches) appears as a particularly useful candidate to exploit the advantages of hexagonal lattices when representing images on them. |
@inproceedings{blu2005c, author = "Condat, L. and Van De Ville, D. and Blu, T.", title = "Hexagonal Versus Orthogonal Lattices: {A} New Comparison Using Approximation Theory", booktitle = "Proceedings of the 2005 {IEEE} International Conference on Image Processing ({ICIP'05})", month = "September 11--14,", year = "2005", volume = "{III}", pages = "1116--1119", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005c" } |
Delpretti, S., Luisier, F., Ramani, S., Blu, T. & Unser, M.,"Multiframe SURE-LET Denoising of Timelapse Fluorescence Microscopy Images", Proceedings of the Fifth IEEE International Symposium on Biomedical Imaging (ISBI'08), Paris, France, pp. 149-152, May 14--17, 2008. |
Due to the random nature of photon emission and the various internal noise sources of the detectors, real timelapse fluorescence microscopy images are usually modeled as the sum of a Poisson process plus some Gaussian white noise. In this paper, we propose an adaptation of our SURE-LET denoising strategy to take advantage of the potentially strong similarities between adjacent frames of the observed image sequence. To stabilize the noise variance, we first apply the generalized Anscombe transform using suitable parameters automatically estimated from the observed data. With the proposed algorithm, we show that, in a reasonable computation time, real fluorescence timelapse microscopy images can be denoised with higher quality than conventional algorithms. |
@inproceedings{blu2008d, author = "Delpretti, S. and Luisier, F. and Ramani, S. and Blu, T. and Unser, M.", title = "Multiframe {SURE-LET} Denoising of Timelapse Fluorescence Microscopy Images", booktitle = "Proceedings of the Fifth {IEEE} International Symposium on Biomedical Imaging ({ISBI'08})", month = "May 14--17,", year = "2008", pages = "149--152", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008d" } |
Depeursinge, C., Cuche, É., Colomb, T., Massatch, P., Marian, A., Montfort, F., Liebling, M., Blu, T., Unser, M., Marquet, P. & Magistretti, P.J.,"Digital Holography Applied to Microscopy: A New Imaging Modality in the Sub-Wavelength Range", Hundertvierte Jahrestagung der Deutschen Gesellschaft für angewandte Optik (DGaO), Münster (Westfalen), Germany, June 10--14, 2003. |
Digital holographic microscopy appears as a new imaging technique with high resolution and real-time observation capabilities: longitudinal resolutions of a few nanometers in air and a few tenths of nanometers in liquids are achievable, provided that optical signals diffracted by the object can be rendered sufficiently large. Living biological cells in culture have been observed with a resolution of around 40 nanometers in height and half a micron in width. The originality of our approach is to provide both a slightly modified microscope design, yielding digital holograms of microscopic objects, and an interactive computer environment to easily reconstruct wavefronts from digital holograms. |
@inproceedings{blu2003e, author = "Depeursinge, C. and Cuche, {\'{E}}. and Colomb, T. and Massatch, P. and Marian, A. and Montfort, F. and Liebling, M. and Blu, T. and Unser, M. and Marquet, P. and Magistretti, P.J.", title = "Digital Holography Applied to Microscopy: {A} New Imaging Modality in the Sub-Wavelength Range", booktitle = "Hundertvierte Jahrestagung der Deutschen Gesellschaft f{\"{u}}r angewandte Optik ({DGaO})", month = "June 10--14,", year = "2003", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003e" } |
Doğan, Z., Blu, T. & Van De Ville, D.,"Eigensensing And Deconvolution For The Reconstruction Of Heat Absorption Profiles From Photoacoustic Tomography Data", Proceedings of the Tenth IEEE International Symposium on Biomedical Imaging (ISBI'13), San Francisco, USA, pp. 1142-1145, April 7--11, 2013. |
Photoacoustic tomography (PAT) is a relatively recent imaging modality that is promising for breast cancer detection and breast screening. It combines the high intrinsic contrast of optical radiation with acoustic imaging at submillimeter spatial resolution through the photoacoustic effect of absorption and thermal expansion. However, image reconstruction from boundary measurements of the propagating wave field is still a challenging inverse problem. Here we propose a new theoretical framework, for which we coin the term eigensensing, to recover the heat absorption profile of the tissue. One of the main features of our method is that there is no explicit forward model that needs to be used within a (usually) slow iterative scheme. Instead, the eigensensing principle allows us to computationally obtain several intermediate images that are blurred by known convolution kernels, which are chosen as the eigenfunctions of the spatial Laplace operator. The source image can then be reconstructed by a joint deconvolution algorithm that uses the intermediate images as input. Moreover, total variation regularization is added to make the inverse problem well-posed and to favor piecewise-smooth images. |
@inproceedings{blu2013b, author = "Do\u{g}an, Z. and Blu, T. and Van De Ville, D.", title = "Eigensensing And Deconvolution For The Reconstruction Of Heat Absorption Profiles From Photoacoustic Tomography Data", booktitle = "Proceedings of the Tenth {IEEE} International Symposium on Biomedical Imaging ({ISBI'13})", month = "April 7--11,", year = "2013", pages = "1142--1145", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013b" } |
Doğan, Z., Blu, T. & Van De Ville, D.,"Finite-rate-of-innovation for the inverse source problem of radiating fields", Sampling Theory in Signal and Image Processing, Vol. 13 (3), pp. 271-294, 2014. |
Finite-rate-of-innovation (FRI) is a framework that has been developed for the sampling and reconstruction of specific classes of signals, in particular non-bandlimited signals that are characterized by finitely many parameters. It has been shown that by using specific sampling kernels that reproduce polynomials or exponentials (i.e., satisfy the Strang-Fix conditions), it is possible to design non-iterative and fast reconstruction algorithms. In fact, the innovative part of the signal can be reconstructed perfectly using Prony's method (the annihilating filter). In this paper, we propose an adapted FRI framework to deal with the inverse source problem of radiating fields from boundary measurements. In particular, we consider the case where the source signals are modelled as a stream of Diracs in 3-D, and we assume that the induced field, governed by the Helmholtz equation, is measured on a boundary. First, we propose a technique, termed the "sensing principle" (also known as the reciprocity gap principle), to provide a link between the physical measurements and the source signal through a surface integral. We show that it is possible to design sensing schemes in the complex domain using holomorphic functions, such that the positions of the sources can be determined with a non-iterative algorithm using an adapted annihilating filter method. |
@article{blu2014g, author = "Do\u{g}an, Z. and Blu, T. and Van De Ville, D.", title = "Finite-rate-of-innovation for the inverse source problem of radiating fields", journal = "Sampling Theory in Signal and Image Processing", year = "2014", volume = "13", number = "3", pages = "271--294", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2014g" } |
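The annihilating-filter (Prony) step mentioned in the abstract above can be sketched numerically. This is a minimal illustration of the generic technique, not the paper's adapted algorithm; the moment model s[k] = Σ_m a_m·u_m^k and all names are assumptions for the example:

```python
import numpy as np

def prony(s, K):
    """Recover locations u_m and weights a_m of K Diracs from the
    2K (or more) power-sum moments s[k] = sum_m a_m * u_m**k."""
    s = np.asarray(s, dtype=complex)
    # Annihilating filter h (length K+1): solve the Toeplitz system S h = 0
    S = np.array([[s[i + K - j] for j in range(K + 1)]
                  for i in range(len(s) - K)])
    _, _, Vh = np.linalg.svd(S)
    h = Vh[-1].conj()                 # null vector = filter coefficients
    u = np.roots(h)                   # filter roots = Dirac locations
    V = np.vander(u, N=len(s), increasing=True).T   # Vandermonde: V a = s
    a, *_ = np.linalg.lstsq(V, s, rcond=None)
    return u, a
```

In the noiseless case the null vector is exact and the locations are recovered perfectly; with noise, the SVD gives a least-squares annihilating filter.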
Doğan, Z., Gilliam, C., Blu, T. & Van De Ville, D.,"Reconstruction of Finite Rate of Innovation Signals with Model-Fitting Approach", IEEE Transactions on Signal Processing, Vol. 63 (22), pp. 6024-6036, November 2015. |
Finite rate of innovation (FRI) is a recent framework for the sampling and reconstruction of a large class of parametric signals that are characterized by a finite number of innovations (parameters) per unit interval. In the absence of noise, exact recovery of FRI signals has been demonstrated. In the noisy scenario, there exist techniques to deal with non-ideal measurements. Yet, accuracy and resilience to noise and model mismatch are still challenging problems for real-world applications. We address the reconstruction of FRI signals, specifically a stream of Diracs, from a few signal samples degraded by noise, and we propose a new FRI reconstruction method that is based on a model-fitting approach related to the structured-TLS problem. The model-fitting method is based on minimizing the training error, that is, the error between the computed and the recovered moments (i.e., the FRI samples of the signal), subject to an annihilation system. We present our framework for three different constraints of the annihilation system. Moreover, we propose a model order selection framework to determine the innovation rate of the signal (i.e., the number of Diracs) by estimating the noise level through the training error curve. We compare the performance of the model-fitting approach with known FRI reconstruction algorithms and the Cramér-Rao lower bound (CRLB) to validate these contributions. |
@article{blu2015h, author = "Do\u{g}an, Z. and Gilliam, C. and Blu, T. and Van De Ville, D.", title = "Reconstruction of Finite Rate of Innovation Signals with Model-Fitting Approach", journal = "IEEE Transactions on Signal Processing", month = "November", year = "2015", volume = "63", number = "22", pages = "6024--6036", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2015h" } |
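As a rough companion to the model-fitting idea above: a closely related structured low-rank technique is Cadzow denoising, which alternates a rank-K truncation of the Toeplitz moment matrix with re-imposing the Toeplitz structure. This sketch is explicitly not the paper's structured-TLS algorithm; sizes and names are illustrative:

```python
import numpy as np

def cadzow_denoise(s, K, n_iter=30):
    """Alternate rank-K truncation of the Toeplitz moment matrix with
    diagonal averaging (which restores the Toeplitz structure)."""
    s = np.asarray(s, dtype=complex).copy()
    N = len(s)
    L = N // 2
    for _ in range(n_iter):
        # (N-L) x (L+1) Toeplitz matrix built from the current moments
        T = np.array([[s[L + i - j] for j in range(L + 1)]
                      for i in range(N - L)])
        U, sv, Vh = np.linalg.svd(T, full_matrices=False)
        T = (U[:, :K] * sv[:K]) @ Vh[:K]          # enforce rank K
        for d in range(N):                        # enforce Toeplitz structure
            vals = [T[i, j] for i in range(N - L) for j in range(L + 1)
                    if L + i - j == d]
            s[d] = np.mean(vals)
    return s
```

The denoised moments can then be passed to a standard annihilating-filter solver.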
Doğan, Z., Jovanovic, I., Blu, T. & Van De Ville, D.,"3D reconstruction of wave-propagated point sources from boundary measurements using joint sparsity and finite rate of innovation", Proceedings of the Ninth IEEE International Symposium on Biomedical Imaging (ISBI'12), Barcelona, Spain, pp. 1575-1578, May 2--5, 2012. |
Reconstruction of point sources from boundary measurements is a challenging problem in many applications. Recently, we proposed a new sensing and non-iterative reconstruction scheme for systems governed by the three-dimensional wave equation. The point sources are described by their magnitudes and positions. The core of the method relies on the principles of finite rate of innovation, and allows retrieving the parameters in the continuous domain without discretization. Here we extend the method to the case where the source configuration shows joint sparsity for different temporal frequencies; i.e., the sources have the same positions for different frequencies, though not necessarily the same magnitudes. We demonstrate that joint sparsity improves the robustness of the estimation results. In addition, we propose a modified multi-source version of Dijkstra's algorithm to recover the z parameters. We illustrate the feasibility of our method to reconstruct multiple sources in a 3-D spherical geometry. |
@inproceedings{blu2012a, author = "Do\u{g}an, Z. and Jovanovic, I. and Blu, T. and Van De Ville, D.", title = "{3D} reconstruction of wave-propagated point sources from boundary measurements using joint sparsity and finite rate of innovation", booktitle = "Proceedings of the Ninth {IEEE} International Symposium on Biomedical Imaging ({ISBI'12})", month = "May 2--5,", year = "2012", pages = "1575--1578", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012a" } |
Doğan, Z., Jovanovic, I., Blu, T. & Van De Ville, D.,"Localization of Point Sources in Wave Fields From Boundary Measurements Using New Sensing Principle", Proceedings of the Tenth International Workshop on Sampling Theory and Applications (SampTA'13), Bremen, Germany, pp. 321-324, July 1--5, 2013. |
We address the problem of localizing point sources in 3-D from boundary measurements of a wave field. Recently, we proposed the sensing principle, which allows extracting volumetric samples of the unknown source distribution from the boundary measurements. The extracted samples allow a non-iterative reconstruction algorithm that can recover the parameters of the source distribution projected on a 2-D plane in the continuous domain without any discretization. Here we extend the method to the 3-D localization of multiple point sources by combining multiple 2-D planar projections. In particular, we propose a three-step algorithm to retrieve the locations by means of a multiplanar application of the sensing principle. First, we find the projections of the locations onto several 2-D planes. Second, we propose a greedy algorithm to pair the solutions in each plane. Third, we retrieve the 3-D locations by least-squares regression. |
@inproceedings{blu2013c, author = "Do\u{g}an, Z. and Jovanovic, I. and Blu, T. and Van De Ville, D.", title = "Localization of Point Sources in Wave Fields From Boundary Measurements Using New Sensing Principle", booktitle = "Proceedings of the Tenth International Workshop on Sampling Theory and Applications ({SampTA'13})", month = "July 1--5,", year = "2013", pages = "321--324", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013c" } |
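A toy sketch of the three-step recipe above (plane-wise solutions, greedy pairing, least-squares merge), reduced here to two orthogonal planes; all names are hypothetical, and the actual algorithm works with several planes:

```python
import numpy as np

def locate_3d(proj_xy, proj_xz):
    """Greedily pair (x, y) and (x, z) projections by nearest shared
    x-coordinate, then merge the two x estimates by least squares."""
    proj_xz = list(proj_xz)
    points = []
    for (x1, y) in proj_xy:
        # greedy pairing: nearest x among the remaining (x, z) projections
        k = int(np.argmin([abs(x1 - x2) for (x2, _) in proj_xz]))
        x2, z = proj_xz.pop(k)
        # least-squares estimate of x from its two noisy observations
        points.append((0.5 * (x1 + x2), y, z))
    return points
```

With more planes, the averaging step becomes a proper least-squares regression over all projections of each source.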
Doğan, Z., Jovanovic, I., Blu, T. & Van De Ville, D.,"Application of a New Sensing Principle for Photoacoustic Imaging of Point Absorbers", SPIE BiOS Photons Plus Ultrasound: Imaging and Sensing 2013. Proceedings of the SPIE., San Francisco, USA, Vol. 8581, pp. 8581-144P1-7, February 2--7, 2013. |
Photoacoustic tomography (PAT) is a hybrid imaging method, which combines ultrasonic and optical imaging modalities, in order to overcome their respective weaknesses and to combine their strengths. It is based on the reconstruction of optical absorption properties of the tissue from the measurements of a photoacoustically-generated pressure field. Current methods consider laser excitation, under thermal and stress confinement assumptions, which leads to the generation of a propagating pressure field. Conventional reconstruction techniques then recover the initial pressure field based on the boundary measurements by iterative reconstruction algorithms in the time or Fourier domain. Here, we propose an application of a new sensing principle that allows for an efficient, non-iterative reconstruction algorithm for imaging point absorbers in PAT. We consider a closed volume surrounded by a measurement surface in an acoustically homogeneous medium, and we aim at recovering the positions of the point absorbers and the amount of heat they absorb. We propose a two-step algorithm based on a proper choice of so-called sensing functions. Specifically, in the first step, we extract the projected positions on the complex plane and the weights by a sensing function that is well-localized on the same plane. In the second step, we recover the remaining z-location by choosing a proper set of plane waves. We show that the proposed families of sensing functions are sufficient to recover the parameters of the unknown sources without any discretization of the domain. We extend the method to sources that have joint sparsity; i.e., the absorbers have the same positions for different frequencies. We evaluate the performance of the proposed algorithm using simulated and noisy sensor data, and we demonstrate the improvement obtained by exploiting joint sparsity. |
@inproceedings{blu2013i, author = "Do\u{g}an, Z. and Jovanovic, I. and Blu, T. and Van De Ville, D.", title = "Application of a New Sensing Principle for Photoacoustic Imaging of Point Absorbers", booktitle = "SPIE BiOS Photons Plus Ultrasound: Imaging and Sensing 2013. Proceedings of the SPIE.", month = "February 2--7,", year = "2013", volume = "8581", pages = "8581-144P1--7", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013i" } |
Doğan, Z., Tsiminaki, V., Jovanovic, I., Blu, T. & Van De Ville, D.,"Localization of point sources for systems governed by the wave equation", Wavelets and Sparsity XIV. Proceedings of the SPIE, San Diego, USA, Vol. 8138, pp. 81380P1-11, August 21--24, 2011. |
Analytic sensing has recently been proposed for source localization from boundary measurements using a generalization of the finite-rate-of-innovation framework. The method is tailored to the quasi-static electromagnetic approximation, which is commonly used in electroencephalography. In this work, we extend analytic sensing to physical systems that are governed by the wave equation; i.e., the sources emit signals that travel as waves through the volume and that are measured at the boundary over time. This source localization problem is highly ill-posed (i.e., the unicity of the source distribution is not guaranteed) and additional assumptions about the sources are needed. We assume that the sources can be described with a finite number of parameters; in particular, we consider point sources that are characterized by their position and strength. This assumption makes the solution unique and turns the problem into parametric estimation. Following the framework of analytic sensing, we propose a two-step method. In the first step, we extend the reciprocity gap functional concept to wave-equation based test functions; i.e., well-chosen test functions can relate the boundary measurements to generalized measures that contain volumetric information about the sources within the domain. In the second step, again owing to the choice of the test functions, we can apply the finite-rate-of-innovation principle; i.e., the generalized samples can be annihilated by a known filter, thus turning the non-linear source localization problem into an equivalent root-finding one. We demonstrate the feasibility of our technique for a 3-D spherical geometry. The performance of the reconstruction algorithm is evaluated in the presence of noise and compared with the theoretical limit given by Cramér-Rao lower bounds. |
@inproceedings{blu2011a, author = "Do\u{g}an, Z. and Tsiminaki, V. and Jovanovic, I. and Blu, T. and Van De Ville, D.", title = "Localization of point sources for systems governed by the wave equation", booktitle = "Wavelets and Sparsity XIV. Proceedings of the SPIE", month = "August 21--24,", year = "2011", volume = "8138", pages = "81380P1--11", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011a" } |
Dragotti, P.-L., Vetterli, M. & Blu, T.,"Sampling Moments and Reconstructing Signals of Finite Rate of Innovation: Shannon Meets Strang-Fix", IEEE Transactions on Signal Processing, Vol. 55 (5), pp. 1741-1757, May 2007. Part 1. |
Consider the problem of sampling signals which are not bandlimited, but still have a finite number of degrees of freedom per unit of time, such as, for example, nonuniform splines or piecewise polynomials, and call the number of degrees of freedom per unit of time the rate of innovation. Classical sampling theory does not enable a perfect reconstruction of such signals since they are not bandlimited. Recently, it was shown that, by using an adequate sampling kernel and a sampling rate greater than or equal to the rate of innovation, it is possible to reconstruct such signals uniquely [1]. These sampling schemes, however, use kernels with infinite support, and this leads to complex and potentially unstable reconstruction algorithms. In this paper, we show that many signals with a finite rate of innovation can be sampled and perfectly reconstructed using physically realizable kernels of compact support and a local reconstruction algorithm. The class of kernels that we can use is very rich and includes functions satisfying the Strang-Fix conditions, exponential splines, and functions with a rational Fourier transform. This last class of kernels is quite general and includes, for instance, any linear electric circuit. We thus show with an example how to estimate a signal of finite rate of innovation at the output of an RC circuit. The case of noisy measurements is also analyzed, and we present a novel algorithm that reduces the effect of noise by oversampling. |
@article{blu2007d, author = "Dragotti, P.-L. and Vetterli, M. and Blu, T.", title = "Sampling Moments and Reconstructing Signals of Finite Rate of Innovation: {S}hannon Meets {S}trang-{F}ix", journal = "{IEEE} Transactions on Signal Processing", month = "May", year = "2007", volume = "55", number = "5", pages = "1741--1757", note = "Part 1", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007d" } |
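The Strang-Fix mechanism described above can be illustrated in the simplest case: a compact-support kernel that reproduces the polynomials 1 and t (here the linear B-spline, chosen for the example) turns the samples of a single Dirac into its first two moments, from which amplitude and location follow directly. All names are assumptions; this is a sketch of the principle, not the paper's general algorithm:

```python
import numpy as np

def hat(t):
    """Centred linear B-spline (triangle): reproduces 1 and t on the integers."""
    return np.maximum(0.0, 1.0 - np.abs(t))

def recover_one_dirac(y, n):
    """One Dirac a*delta(t - t0) sampled as y_n = a * hat(t0 - n):
    polynomial reproduction turns weighted sums of samples into moments."""
    m0 = np.sum(y)          # = a      (reproduction of the constant 1)
    m1 = np.sum(n * y)      # = a * t0 (reproduction of t)
    return m0, m1 / m0
```

For K > 1 Diracs, higher-order moments are combined with an annihilating filter in the same spirit.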
Feilner, M., Blu, T. & Unser, M.,"Statistical Analysis of fMRI Data Using Orthogonal Filterbanks", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing VII, Denver, USA, Vol. 3813, pp. 551-560, July 19--23, 1999. |
Functional magnetic resonance imaging (fMRI) is a recent technique that allows the measurement of brain metabolism (local concentration of deoxyhemoglobin using BOLD contrast) while subjects are performing a specific task. A block paradigm produces alternating sequences of images (e.g., rest versus motor task). In order to detect and localize areas of cerebral activation, one analyzes the data using paired differences at the voxel level. As an alternative to the traditional approach, which uses Gaussian spatial filtering to reduce measurement noise, we propose to analyze the data using an orthogonal filterbank. This procedure is intended to simplify and eventually improve the statistical analysis. The system is designed to concentrate the signal into fewer components, thereby improving the signal-to-noise ratio. Thanks to the orthogonality property, we can test the filtered components independently on a voxel-by-voxel basis; this testing procedure is optimal for i.i.d. measurement noise. The number of components to test is also reduced because of down-sampling. This offers a straightforward approach to increasing the sensitivity of the analysis (lower detection threshold) while applying the standard Bonferroni correction for multiple statistical tests. We present experimental results to illustrate the procedure. In addition, we discuss filter design issues. In particular, we introduce a family of orthogonal filters such that any integer reduction m can be implemented as a succession of elementary reductions m1 to mp, where m = m1 ... mp is a prime-number factorization of m. |
@inproceedings{blu1999f, author = "Feilner, M. and Blu, T. and Unser, M.", title = "Statistical Analysis of {fMRI} Data Using Orthogonal Filterbanks", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {VII}", month = "July 19--23,", year = "1999", volume = "3813", pages = "551--560", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1999f" } |
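The voxel-by-voxel testing with Bonferroni correction described above can be sketched as follows. This is a simplified stand-in (a Gaussian z-approximation on paired differences, not the paper's filterbank pipeline); array shapes and names are assumptions:

```python
import numpy as np
from math import erfc, sqrt

def detect_activation(rest, task, alpha=0.05):
    """Voxel-wise paired tests on rest/task arrays of shape
    (n_pairs, n_voxels), at a Bonferroni-corrected significance level."""
    diff = task - rest
    n = diff.shape[0]
    # standardized mean difference per voxel (Gaussian approximation)
    z = diff.mean(axis=0) / (diff.std(axis=0, ddof=1) / np.sqrt(n))
    p = np.array([erfc(abs(v) / sqrt(2)) for v in z])  # two-sided p-value
    return p < alpha / diff.shape[1]                   # Bonferroni correction
```

Testing fewer, decorrelated filterbank components instead of raw voxels lowers the Bonferroni penalty, which is the sensitivity gain the abstract refers to.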
Feilner, M., Blu, T. & Unser, M.,"Analysis of fMRI Data Using Spline Wavelets", Proceedings of the Tenth European Signal Processing Conference (EUSIPCO'00), Tampere, Finland, Vol. {IV}, pp. 2013-2016, September 4--8, 2000. |
Our goal is to detect and localize areas of activation in the brain from sequences of fMRI images. The standard approach for reducing the noise contained in the fMRI images is to apply a spatial Gaussian filter which entails some loss of details. Here instead, we consider a wavelet solution to the problem, which has the advantage of retaining high-frequency information. We use fractional-spline orthogonal wavelets with a continuously-varying order parameter alpha; by adjusting alpha, we can balance spatial resolution against frequency localization. The activation pattern is detected by performing multiple (Bonferroni-corrected) t-tests in the wavelet domain. This pattern is then localized by inverse wavelet transform of a thresholded coefficient map. In order to compare transforms and to select the best alpha, we devise a simulation study for the detection of a known activation pattern. We also apply our methodology to the analysis of acquired fMRI data for a motor task. |
@inproceedings{blu2000b, author = "Feilner, M. and Blu, T. and Unser, M.", title = "Analysis of {fMRI} Data Using Spline Wavelets", booktitle = "Proceedings of the Tenth European Signal Processing Conference ({EUSIPCO'00})", month = "September 4--8,", year = "2000", volume = "{IV}", pages = "2013--2016", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000b" } |
Feilner, M., Blu, T. & Unser, M.,"Optimizing Wavelets for the Analysis of fMRI Data", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing VIII, San Diego, USA, Vol. 4119, pp. 626-637, July 31--August 4, 2000. |
Ruttimann et al. have proposed to use the wavelet transform for the detection and localization of activation patterns in functional magnetic resonance imaging (fMRI). Their main idea was to apply a statistical test in the wavelet domain to detect the coefficients that are significantly different from zero. Here, we improve the original method in the case of non-stationary Gaussian noise by replacing the original z-test by a t-test that takes into account the variability of each wavelet coefficient separately. The application of a threshold that is proportional to the residual noise level, after the reconstruction by an inverse wavelet transform, further improves the localization of the activation pattern in the spatial domain. A key issue is to find out which wavelet and which type of decomposition is best suited for the detection of a given activation pattern. In particular, we want to investigate the applicability of alternative wavelet bases that are not necessarily orthogonal. For this purpose, we consider the various brands of fractional spline wavelets (orthonormal, B-spline, and dual) which are indexed by a continuously-varying order parameter α. We perform an extensive series of tests using simulated data and compare the various transforms based on their false detection rate (type I + type II errors). In each case, we observe that there is a clearly optimal value of α and a best number of scales that minimizes the error. We also find that splines generally outperform Daubechies wavelets and that they are quite competitive with SPM (the standard analysis method used in the field), even though our approach uses much simpler statistics. An interesting practical finding is that performance is strongly correlated with the number of coefficients detected in the wavelet domain, at least in the orthonormal and B-spline cases. This suggests that it is possible to optimize the structural wavelet parameters simply by maximizing the number of wavelet counts, without any prior knowledge of the activation pattern. Some examples of the analysis of real data are also presented. |
@inproceedings{blu2000c, author = "Feilner, M. and Blu, T. and Unser, M.", title = "Optimizing Wavelets for the Analysis of {fMRI} Data", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {VIII}", month = "July 31--August 4,", year = "2000", volume = "4119", pages = "626--637", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000c" } |
Fong, M.C.M., Minett, J.W., Blu, T. & Wang, W.S.Y.,"Brain-computer interface (BCI): Is it strictly necessary to use random sequences in visual spellers?", Proceedings of the tenth Asia Pacific Conference on Computer Human Interaction (APCHI 2012), Matsue, Japan, pp. 109-118, August 28--31, 2012. |
The P300 speller is a standard paradigm for brain-computer interfacing (BCI) based on electroencephalography (EEG). It exploits the fact that the user's selective attention to a target stimulus among a random sequence of stimuli enhances the magnitude of the P300 evoked potential. The present study questions the necessity of using random sequences of stimulation. In two types of experimental runs, subjects attended to a target stimulus while the stimuli, four in total, were each intensified twelve times, in either random order or deterministic order. The 32-channel EEG data were analyzed offline using linear discriminant analysis (LDA). Similar classification accuracies of 95.3% and 93.2% were obtained for the random and deterministic runs, respectively, using the data associated with 3 sequences of stimulation. Furthermore, using a montage of 5 posterior electrodes, the two paradigms attained identical accuracy of 92.4%. These results suggest that: (a) the use of random sequences is not necessary for effective BCI performance; and (b) deterministic sequences can be used in some BCI speller applications. |
@inproceedings{blu2012b, author = "Fong, M. C. M. and Minett, J. W. and Blu, T. and Wang, W. S. Y.", title = "Brain-computer interface ({BCI}): Is it strictly necessary to use random sequences in visual spellers?", booktitle = "Proceedings of the tenth Asia Pacific Conference on Computer Human Interaction ({APCHI 2012})", month = "August 28--31,", year = "2012", pages = "109--118", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012b" } |
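The offline LDA analysis mentioned above can be sketched with a minimal two-class, pooled-covariance linear discriminant. This is not the authors' exact pipeline; the data layout (trials x features) and all names are assumptions:

```python
import numpy as np

def lda_fit(X0, X1):
    """Two-class linear discriminant with pooled covariance (minimal sketch)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    # small ridge term keeps the solve stable for near-singular covariances
    w = np.linalg.solve(S + 1e-6 * np.eye(S.shape[0]), m1 - m0)
    b = -0.5 * w @ (m0 + m1)
    return w, b

def lda_predict(w, b, X):
    return (X @ w + b > 0).astype(int)   # 1 = "target" class
```

In the BCI setting, X would hold epoched EEG features per stimulus, and class 1 would correspond to the attended (target) stimulus.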
Fong, M.C.M., Minett, J.W., Blu, T. & Wang, W.S.Y.,"Towards a Neural Measure of Perceptual Distance---Classification of Electroencephalographic Responses to Synthetic Vowels", Proceedings of the Fifteenth Annual Conference of the International Speech Communication Association (Interspeech 2014), Singapore, pp. 2595-2599, 14--18 September, 2014. |
How vowels are organized cortically has previously been studied using auditory evoked potentials (AEPs), one focus of which is to determine whether perceptual distance can be inferred from AEP components. The present study extends this line of research by adopting a machine-learning framework to classify evoked responses to four synthetic mid-vowels differing only in second formant frequency (F2 = 840, 1200, 1680, and 2280 Hz). Six subjects each attended four EEG sessions on separate days. Classifiers were trained using time-domain data in successive time windows of various sizes. Results were most accurate when a window of about 80 ms was used. By integrating the scores from individual classifiers, the maximum mean binary classification rates improved to 70% (10 trials) and 77% (20 trials). To assess how well perceptual distances among the vowels were reflected in our results, discriminability indices (d′) were computed using both the behavioral results in a screening test and the classification results. It was found that the two sets of indices were significantly correlated. The pair that was the most (least) discriminable behaviorally was also the most (least) classifiable neurally. Our results support the use of classification methodology for developing a neural measure of perceptual distance. |
@inproceedings{blu2014e, author = "Fong, M. C. M. and Minett, J. W. and Blu, T. and Wang, W. S. Y.", title = "Towards a Neural Measure of Perceptual Distance---Classification of Electroencephalographic Responses to Synthetic Vowels", booktitle = "Proceedings of the Fifteenth Annual Conference of the International Speech Communication Association (Interspeech 2014)", month = "14--18 September,", year = "2014", pages = "2595--2599", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2014e" } |
Forster, B., Blu, T. & Unser, M.,"A New Family of Complex Rotation-Covariant Multiresolution Bases in 2D", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing X, San Diego, USA, Vol. 5207, pp. 475-479, August 3--8, 2003. Part I. |
We present complex rotation-covariant multiresolution families aimed at image analysis. Since they are complex-valued functions, they provide the important phase information, which is missing in the discrete wavelet transform with real wavelets. Our basis elements have nice properties in Hilbert space, such as smoothness of fractional order α ∈ R+. The corresponding filters allow an FFT-based implementation and thus provide a fast algorithm for the wavelet transform. |
@inproceedings{blu2003f, author = "Forster, B. and Blu, T. and Unser, M.", title = "A New Family of Complex Rotation-Covariant Multiresolution Bases in {2D}", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {X}", month = "August 3--8,", year = "2003", volume = "5207", pages = "475--479", note = "Part {I}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003f" } |
Forster, B., Blu, T. & Unser, M.,"Complex {B}-Splines and Wavelets", Second International Conference on Computational Harmonic Analysis, Nineteenth Annual Shanks Lecture (CHA'04), Nashville, USA, May 24--30, 2004. |
B-spline multiresolution analyses have proven to be an adequate tool for signal analysis. But for some applications, e.g., in speech processing and digital holography, complex-valued scaling functions and wavelets are more favourable than real ones, since they allow one to deduce the crucial phase information. In this talk, we extend the classical and fractional B-spline approaches to complex B-splines. We do so by choosing a complex exponent, i.e., a complex order z of the B-spline, and show that this does not affect the basic properties such as smoothness and decay, recurrence relations, and others. Moreover, the resulting complex B-splines satisfy a two-scale relation and generate a multiresolution analysis of L2(R). We show that the complex B-splines, as well as the corresponding wavelets, converge to Gabor functions as ℜ(z) increases while ℑ(z) is fixed. Thus they are approximately optimally time-frequency localized. |
@inproceedings{blu2004d, author = "Forster, B. and Blu, T. and Unser, M.", title = "Complex \mbox{{B}-Splines} and Wavelets", booktitle = "Second International Conference on Computational Harmonic Analysis, Nineteenth Annual Shanks Lecture ({CHA'04})", month = "May 24--30,", year = "2004", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004d" } |
Forster, B., Blu, T. & Unser, M.,"Complex {B}-Splines", Applied and Computational Harmonic Analysis, Vol. 20 (2), pp. 261-282, March 2006. |
We propose a complex generalization of Schoenberg's cardinal splines. To this end, we go back to the Fourier-domain definition of the B-splines and extend it to complex-valued degrees. We show that the resulting complex B-splines are piecewise modulated polynomials, and that they retain most of the important properties of the classical ones: smoothness, recurrence and two-scale relations, Riesz-basis generation, explicit formulae for derivatives (including fractional orders), etc. We also show that they generate multiresolution analyses of L2(R) and that they can yield wavelet bases. We characterize the decay of these functions, which are no longer compactly supported when the degree is not an integer. Finally, we prove that the complex B-splines converge to modulated Gaussians as their degree increases, and that they are asymptotically optimally localized in the time-frequency plane in the sense of Heisenberg's uncertainty principle. |
@article{blu2006b, author = "Forster, B. and Blu, T. and Unser, M.", title = "Complex \mbox{{B}-Splines}", journal = "Applied and Computational Harmonic Analysis", month = "March", year = "2006", volume = "20", number = "2", pages = "261--282", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006b" } |
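For reference, a hedged sketch of the Fourier-domain definition alluded to above, written in the causal fractional-spline convention with the real degree replaced by a complex z (the paper's exact normalization and domain of validity may differ):

```latex
% Causal fractional B-spline, Fourier domain, extended to complex degree z
% (sketch only; see the paper for the precise convention and conditions):
\hat{\beta}_{+}^{\,z}(\omega)
  \;=\; \left( \frac{1 - e^{-i\omega}}{i\omega} \right)^{\!z+1},
  \qquad \Re(z) > -\tfrac{1}{2}.
```

Taking z real recovers the fractional B-splines, and integer z the classical Schoenberg B-splines.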
Forster, B., Blu, T., Van De Ville, D. & Unser, M.,"Shift-Invariant Spaces from Rotation-Covariant Functions", Applied and Computational Harmonic Analysis, Vol. 25 (2), pp. 240-265, September 2008. |
We consider shift-invariant multiresolution spaces generated by rotation-covariant functions ρ in ℝ2. To construct corresponding scaling and wavelet functions, ρ has to be localized with an appropriate multiplier, such that the localized version is an element of L2(ℝ2). We consider several classes of multipliers and show a new method to improve regularity and decay properties of the corresponding scaling functions and wavelets. The wavelets are complex-valued functions, which are approximately rotation-covariant and therefore behave as Wirtinger differential operators. Moreover, our class of multipliers gives a novel approach for the construction of polyharmonic B-splines with better polynomial reconstruction properties. |
@article{blu2008e, author = "Forster, B. and Blu, T. and Van De Ville, D. and Unser, M.", title = "Shift-Invariant Spaces from Rotation-Covariant Functions", journal = "Applied and Computational Harmonic Analysis", month = "September", year = "2008", volume = "25", number = "2", pages = "240--265", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008e" } |
Gilliam, C., Bingham, A., Blu, T. & Jelfs, B.,"Time-Varying Delay Estimation Using Common Local All-Pass Filters with Application to Surface Electromyography", Proceedings of the Forty-third IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'18), Calgary, AB, Canada, pp. 841-845, April 15--20, 2018. |
Estimation of conduction velocity (CV) is an important task in the analysis of surface electromyography (sEMG). The problem can be framed as estimation of a time-varying delay (TVD) between electrode recordings. In this paper we present an algorithm which incorporates information from multiple electrodes into a single TVD estimation. The algorithm uses a common all-pass filter to relate two groups of signals at a local level. We also address a current limitation of CV estimators by providing an automated way of identifying the innervation zone from a set of electrode recordings, thus allowing incorporation of the entire array into the estimation. We validate the algorithm on both synthetic and real sEMG data with results showing the proposed algorithm is both robust and accurate. |
@inproceedings{blu2018c, author = "Gilliam, C. and Bingham, A. and Blu, T. and Jelfs, B.", title = "Time-Varying Delay Estimation Using Common Local All-Pass Filters with Application to Surface Electromyography", booktitle = "Proceedings of the Forty-third {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'18})", month = "April 15--20,", year = "2018", pages = "841--845", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018c" } |
Gilliam, C. & Blu, T.,"Fitting Instead Of Annihilation: Improved Recovery Of Noisy FRI Signals", Proceedings of the Thirty-ninth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'14), Florence, Italy, pp. 51-55, May 4--9, 2014. |
Recently, classical sampling theory has been broadened to include a class of non-bandlimited signals that possess finite rate of innovation (FRI). In this paper we consider the reconstruction of a periodic stream of Diracs from noisy samples. We demonstrate that its noiseless FRI samples can be represented as a ratio of two polynomials. Using this structure as a model, we propose recovering the FRI signal using a model fitting approach rather than an annihilation method. We present an algorithm that fits this model to the noisy samples and demonstrate that it has low computation cost and is more reliable than two state-of-the-art methods. |
@inproceedings{blu2014b, author = "Gilliam, C. and Blu, T.", title = "Fitting Instead Of Annihilation: Improved Recovery Of Noisy {FRI} Signals", booktitle = "Proceedings of the Thirty-ninth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'14})", month = "May 4--9,", year = "2014", pages = "51--55", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2014b" } |
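For context, the annihilation baseline that this model-fitting approach improves upon can be sketched as follows (a minimal noiseless illustration of the classical annihilating-filter step of FRI recovery, not the paper's algorithm; all names are ours):

```python
import numpy as np

def annihilate(s, K):
    """Classical annihilating-filter step of FRI recovery: from noiseless
    moments s[m] = sum_k a_k u_k^m, recover the K exponential parameters
    u_k as roots of the annihilating filter, found in the null space of a
    Toeplitz matrix built from the moments."""
    M = len(s) - K
    # rows [s[m+K], s[m+K-1], ..., s[m]] so that T @ h is the convolution h*s
    T = np.array([s[m:m + K + 1][::-1] for m in range(M)])
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1].conj()          # right singular vector of the smallest singular value
    return np.roots(h)         # roots of the filter = the u_k

# two Diracs on the unit circle, encoded in 5 Fourier moments
t = np.array([0.8, 2.1])
a = np.array([1.0, 0.5])
m = np.arange(5)
s = (a * np.exp(1j * np.outer(m, t))).sum(axis=1)
u = annihilate(s, K=2)
print(np.sort(np.angle(u)))    # recovers the Dirac locations t
```

In the noisy setting the Toeplitz matrix no longer has an exact null vector, which is precisely the weakness that motivates the paper's model-fitting alternative.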
Gilliam, C. & Blu, T.,"Local All-Pass Filters for Optical Flow Estimation", Proceedings of the Fortieth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'15), Brisbane, Australia, pp. 1533-1537, April 19--24, 2015. |
The optical flow is a velocity field that describes the motion of pixels within a sequence (or set) of images. Its estimation plays an important role in areas such as motion compensation, object tracking and image registration. In this paper, we present a novel framework to estimate the optical flow using local all-pass filters. Instead of using the optical flow equation, the framework is based on relating one image to another, on a local level, using an all-pass filter and then extracting the optical flow from the filter. Using this framework, we present a fast novel algorithm for estimating a smoothly varying optical flow, which we term the Local All-Pass (LAP) algorithm. We demonstrate that this algorithm is consistent and accurate, and that it outperforms three state-of-the-art algorithms when estimating constant and smoothly varying flows. We also show initial competitive results for real images. |
@inproceedings{blu2015c, author = "Gilliam, C. and Blu, T.", title = "Local All-Pass Filters for Optical Flow Estimation", booktitle = "Proceedings of the Fortieth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'15})", month = "April 19--24,", year = "2015", pages = "1533--1537", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2015c" } |
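The central idea above — that a displacement acts as an all-pass filter between the two images — can be illustrated with a 1D toy example (a circular integer shift; this illustrates the principle only, not the LAP algorithm itself):

```python
import numpy as np

# Toy illustration of "motion = all-pass filtering" (1D, circular shift):
# the frequency response relating a signal to its shifted copy has unit
# modulus, and the shift is the slope of its phase.
rng = np.random.default_rng(0)
N = 128
x = rng.standard_normal(N)
d = 5                               # integer circular shift ("motion")
y = np.roll(x, d)                   # "second frame"

H = np.fft.fft(y) / np.fft.fft(x)   # relating filter, H(w) = exp(-j*w*d)
print(np.allclose(np.abs(H), 1))    # all-pass: |H| = 1
w = 2 * np.pi * np.arange(N) / N
d_est = -np.angle(H[1]) / w[1]      # read the shift off the phase slope
print(d_est)                        # recovers d
```

The LAP algorithm generalizes this idea to spatially varying motion by fitting a small all-pass filter locally around each pixel instead of globally.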
Gilliam, C. & Blu, T.,"Finding the minimum rate of innovation in the presence of noise", Proceedings of the Forty-first IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'16), Shanghai, China, pp. 4019-4023, March 20--25, 2016. |
Recently, sampling theory has been broadened to include a class of non-bandlimited signals that possess finite rate of innovation (FRI). In this paper, we consider the problem of determining the minimum rate of innovation (RI) in a noisy setting. First, we adapt a recent model-fitting algorithm for FRI recovery and demonstrate that it achieves the Cramér-Rao bounds. Using this algorithm, we then present a framework to estimate the minimum RI based on fitting the sparsest model to the noisy samples whilst satisfying a mean squared error (MSE) criterion - a signal is recovered if the output MSE is less than the input MSE. Specifically, given a RI, we use the MSE criterion to judge whether our model-fitting has been a success or a failure. Using this output, we present a Dichotomic algorithm that performs a binary search for the minimum RI and demonstrate that it obtains a sparser RI estimate than an existing information criterion approach. |
@inproceedings{blu2016a, author = "Gilliam, C. and Blu, T.", title = "Finding the minimum rate of innovation in the presence of noise", booktitle = "Proceedings of the Forty-first {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'16})", month = "March 20--25,", year = "2016", pages = "4019--4023", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2016a" } |
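The Dichotomic search described above reduces to a binary search over model orders, given a success/failure verdict from the model fit. A generic sketch, with a hypothetical `fit_succeeds` predicate standing in for the paper's MSE criterion (which we do not reimplement here):

```python
def minimum_rate_of_innovation(fit_succeeds, k_max):
    """Binary-search the smallest model order K in [1, k_max] such that
    fit_succeeds(K) is True, assuming success is monotone in K (a signal
    that can be fitted with K innovations can also be fitted with more).
    Returns None if even k_max fails."""
    if not fit_succeeds(k_max):
        return None
    lo, hi = 1, k_max
    while lo < hi:
        mid = (lo + hi) // 2
        if fit_succeeds(mid):
            hi = mid          # mid works: the minimum is at most mid
        else:
            lo = mid + 1      # mid fails: the minimum is above mid
    return lo

# Toy stand-in for the MSE criterion: pretend the true minimum order is 7.
print(minimum_rate_of_innovation(lambda k: k >= 7, 64))  # → 7
```

The binary search needs only O(log k_max) model fits, versus a linear sweep over candidate orders for information-criterion approaches.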
Gilliam, C. & Blu, T.,"Local All-Pass Geometric Deformations", IEEE Transactions on Image Processing, Vol. 27 (2), pp. 1010-1025, February 2018. |
This paper deals with the estimation of a deformation that describes the geometric transformation between two images. To solve this problem, we propose a novel framework that relies upon the brightness consistency hypothesis - a pixel's intensity is maintained throughout the transformation. Instead of assuming small distortion and linearising the problem (e.g. via Taylor Series expansion), we propose to interpret the brightness hypothesis as an all-pass filtering relation between the two images. The key advantages of this new interpretation are that no restrictions are placed on the amplitude of the deformation or on the spatial variations of the images. Moreover, by converting the all-pass filtering to a linear forward-backward filtering relation, our solution to the estimation problem equates to solving a linear system of equations, which leads to a highly efficient implementation. Using this framework, we develop a fast algorithm that relates one image to another, on a local level, using an all-pass filter and then extracts the deformation from the filter--hence the name "Local All-Pass" (LAP) algorithm. The effectiveness of this algorithm is demonstrated on a variety of synthetic and real deformations that are found in applications such as image registration and motion estimation. In particular, the LAP obtains very accurate results for significantly reduced computation time when compared to a selection of image registration algorithms and is very robust to noise corruption. |
@article{blu2018b, author = "Gilliam, C. and Blu, T.", title = "{L}ocal {A}ll-{P}ass Geometric Deformations", journal = "IEEE Transactions on Image Processing", month = "February", year = "2018", volume = "27", number = "2", pages = "1010--1025", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018b" } |
Gilliam, C., Küstner, T. & Blu, T.,"3D Motion Flow Estimation Using Local All-Pass Filters", Proceedings of the Thirteenth IEEE International Symposium on Biomedical Imaging (ISBI'16), Prague, Czech Republic, pp. 282-285, April 13--16, 2016. |
Fast and accurate motion estimation is an important tool in biomedical imaging applications such as motion compensation and image registration. In this paper, we present a novel algorithm to estimate motion in volumetric images based on the recently developed Local All-Pass (LAP) optical flow framework. The framework is built upon the idea that any motion can be regarded as a local rigid displacement and is hence equivalent to all-pass filtering. Accordingly, our algorithm aims to relate two images, on a local level, using a 3D all-pass filter and then extract the local motion flow from the filter. As this process is based on filtering, it can be efficiently repeated over the whole image volume allowing fast estimation of a dense 3D motion. We demonstrate the effectiveness of this algorithm on both synthetic motion flows and in-vivo MRI data involving respiratory motion. In particular, the algorithm obtains greater accuracy for significantly reduced computation time when compared to competing approaches. |
@inproceedings{blu2016c, author = "Gilliam, C. and K\"{u}stner, T. and Blu, T.", title = "{3D} Motion Flow Estimation Using Local All-Pass Filters", booktitle = "Proceedings of the Thirteenth {IEEE} International Symposium on Biomedical Imaging ({ISBI'16})", month = "April 13--16,", year = "2016", pages = "282--285", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2016c" } |
Guo, R. & Blu, T.,"FRI Sensing: Sampling Images along Unknown Curves", Proceedings of the Forty-fourth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'19), Brighton, UK, pp. 5132-5136, March 12--17 2019. |
While sensors have been widely used in various applications, a major current research trend consists of collecting and fusing the information that comes from many sensors. In this paper, on the contrary, we would like to concentrate on a unique mobile sensor; our goal is to unveil the multidimensional information entangled within a stream of one-dimensional data, a problem we call FRI Sensing. Our key finding is that, even if we don't have any position knowledge of the moving sensor, it is still possible to reconstruct the sampling trajectory (up to a linear transformation and a shift), and then reconstruct an image that represents the physical sampling field, under certain hypotheses. We further investigate the reconstruction hypotheses and propose novel algorithms that make this 1D-to-2D reconstruction feasible. Experiments show that the proposed approach retrieves the sampled image and trajectory accurately under the developed hypotheses. This method can be applied to geolocation applications, such as indoor localization and submarine navigation. Moreover, we show that the proposed algorithms have the potential to visualize a one-dimensional signal that may not have been sampled from a real 2D/3D physical field (e.g. speech and text signals) as a two- or three-dimensional image. |
@inproceedings{blu2019b, author = "Guo, R. and Blu, T.", title = "{FRI} Sensing: Sampling Images along Unknown Curves", booktitle = "Proceedings of the Forty-fourth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'19})", month = "March 12--17", year = "2019", pages = "5132--5136", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2019b" } |
Guo, R. & Blu, T.,"FRI Sensing: Retrieving the Trajectory of a Mobile Sensor from Its Temporal Samples", IEEE Transactions on Signal Processing, Vol. 68, pp. 5533-5545, 2020. |
In this paper, contrary to the current research trend, which consists of fusing (big) data from many different sensors, we focus on one-dimensional samples collected by a unique mobile sensor (e.g., temperature, pressure, magnetic field, etc.), without explicit positioning information (such as GPS). We demonstrate that this stream of 1D data contains valuable 2D geometric information that can be unveiled by adequate processing, using a high-accuracy Finite Rate of Innovation (FRI) algorithm: FRI Sensing. Our key finding is that, despite the absence of any position information, the basic sequence of 1D sensor samples makes it possible to reconstruct the sampling trajectory (up to an affine transformation), and then the image that represents the physical field that has been sampled. We state the FRI Sensing sampling theorem and the hypotheses needed for this trajectory and image reconstruction to be successful. The proof of our theorem is constructive and leads to a very efficient and robust algorithm, which we validate in various conditions. Moreover, although we essentially model the images as finite sums of 2D sinusoids, we also observe that our algorithm works accurately for real textured images. |
@article{blu2020f, author = "Guo, R. and Blu, T.", title = "{FRI} Sensing: Retrieving the Trajectory of a Mobile Sensor from Its Temporal Samples", journal = "IEEE Transactions on Signal Processing", year = "2020", volume = "68", pages = "5533--5545", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020f" } |
Guo, R. & Blu, T.,"FRI Sensing: 2D Localization from 1D Mobile Sensor Data", Proceedings of the 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Auckland, NZ, pp. 986-991, December 7--10 2020. |
Sensor localization is a basic and important problem in many areas. It often relies on transmission-communication equipment to obtain the sensor geolocation information. In this work, on the contrary, our goal is to retrieve the 2D sensor location from the 1D sensor data alone. We demonstrate that valuable 2D geometric information hidden within the 1D sampled signal can be unveiled. We investigate the necessary hypotheses and propose a very efficient and robust algorithm to perform this 2D localization. This method can potentially be applied to a range of biomedical applications, such as robotic endoscopic capsules, medicine tracking, and biological tissue detection. For example, tiny sensors about the size of a grain of sand can be injected to monitor human biometrics (such as blood pH), and accurate localization plays an essential role in pathological diagnosis. |
@inproceedings{blu2020h, author = "Guo, R. and Blu, T.", title = "{FRI} Sensing: {2D} Localization from {1D} Mobile Sensor Data", booktitle = "Proceedings of the 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)", month = "December 7--10", year = "2020", pages = "986--991", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020h" } |
Guo, R. & Blu, T.,"Exploring the Geometry of One-Dimensional Signals", IEEE Transactions on Signal Processing, Vol. 69, pp. 5299-5312, 2021. |
The wide availability of inexpensive sensors of all kinds (inertia, magnetic field, light, temperature, pressure, chemicals, etc.) makes it possible to empower a host of novel applications. We have shown in a previous paper that, if the field sensed can be expressed as a finite sum of 2D sinusoids, it is possible to reconstruct the sampling curve from the 1D sequence of image samples alone (up to a linear transformation), without extra positioning information. Here, we explore the validity of this result if, instead, we assume the image to be directional or, as an extreme case, laminar, and we simplify our previous approach to fitting a single sinusoid to segments of the 1D samples. We obtain predictive results that quantify the accuracy with which the frequencies found can be used to estimate the slope of the sampling trajectory. We also develop a robust algorithm to retrieve the sampling trajectory and estimate the laminar image that underlies the 1D samples. We finally demonstrate the validity of our approach on synthetic and well-chosen real images. |
@article{blu2021b, author = "Guo, R. and Blu, T.", title = "Exploring the Geometry of One-Dimensional Signals", journal = "IEEE Transactions on Signal Processing", year = "2021", volume = "69", pages = "5299--5312", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2021b", doi = "10.1109/TSP.2021.3112914" } |
Guo, R. & Blu, T.,"Super-Resolving a Frequency Band", IEEE Signal Processing Magazine, Vol. 40 (7), pp. 73-77, 2023. |
This note introduces a simple formula that provides the exact frequency of a pure sinusoid from just two samples of its Discrete-Time Fourier Transform (DTFT). Even when the signal is not a pure sinusoid, this formula still holds to a very good approximation (optimally after a single refinement), paving the way for high-resolution frequency tracking of fast-varying signals, or simply improving the frequency resolution of the peaks of a Discrete Fourier Transform (DFT). |
@article{blu2023c, author = "Guo, Ruiming and Blu, Thierry", title = "Super-Resolving a Frequency Band", journal = "IEEE Signal Processing Magazine", year = "2023", volume = "40", number = "7", pages = "73--77", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2023c", doi = "10.1109/MSP.2023.3311592" } |
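For a pure complex exponential, such a two-sample formula can be derived directly from the geometric-sum form of the DTFT: evaluating it at two frequencies spaced 2π/N apart cancels the common boundary term, leaving a closed-form solution for the frequency. A sketch of that derivation (our own reconstruction, which need not match the paper's exact formula):

```python
import numpy as np

def dtft(x, w):
    n = np.arange(len(x))
    return np.sum(x * np.exp(-1j * w * n))

def freq_from_two_dtft(x, w1):
    # For x[n] = a*exp(j*w0*n), X(w) = a*(1 - u^N e^{-jwN})/(1 - u e^{-jw})
    # with u = exp(j*w0). Choosing w2 = w1 + 2*pi/N makes the numerator
    # identical at both frequencies, so u can be solved for exactly:
    #   u = (X2 - X1) / (X2 e^{-j w2} - X1 e^{-j w1})
    N = len(x)
    w2 = w1 + 2 * np.pi / N
    X1, X2 = dtft(x, w1), dtft(x, w2)
    u = (X2 - X1) / (X2 * np.exp(-1j * w2) - X1 * np.exp(-1j * w1))
    return np.angle(u)

N = 64
w0 = 1.2345                          # off-grid frequency
x = np.exp(1j * w0 * np.arange(N))
print(freq_from_two_dtft(x, 1.2))    # recovers w0 up to numerical precision
```

Note that the estimate is exact regardless of where w1 lies relative to w0, which is what distinguishes this from ordinary DFT peak interpolation.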
Guo, R., Li, Y., Blu, T. & Zhao, H.,"Vector-FRI Recovery of Multi-Sensor Measurements", IEEE Transactions on Signal Processing, Vol. 70, pp. 4369-4380, 2022. |
Thanks to lowering costs, sensors of all kinds have increasingly been used in a wide variety of disciplines and fields, facilitating the rapid development of new technologies and applications. The information of interest (e.g. source location, refractive index, etc.) gets encoded in the measured sensor data, and the key problem is then to decode this information from the sensor measurements. In many cases, sensor data exhibit sparse features--"innovations"--that typically take the form of a finite sum of sinusoids. In practice, the robust retrieval of such encoded information from multi-sensor data (array or network) is difficult due to the non-uniformity of instrument precision and noise (i.e. different across sensors). This motivates the development of a joint sparse ("vector Finite Rate of Innovation") recovery strategy for multi-sensor data: by fitting the data to a joint parametric model, an accurate sparse recovery can be achieved, even if the noise of the sensors is non-homogeneous and correlated. Although developed for one-dimensional sensor data, we show that our method is easily extended to multi-dimensional sensor measurements, e.g. direction-of-arrival data of a 2D planar array and interference fringes in underwater acoustics, which provides a generic solution to these applications. A very robust and efficient algorithm is proposed, which we validate in various conditions (simulations, multiple types of real data). |
@article{blu2022d, author = "Guo, R. and Li, Y. and Blu, T. and Zhao, H.", title = "Vector-{FRI} Recovery of Multi-Sensor Measurements", journal = "IEEE Transactions on Signal Processing", year = "2022", volume = "70", pages = "4369--4380", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2022d", doi = "10.1109/TSP.2022.3204402" } |
Hao, Y., Marziliano, P., Vetterli, M. & Blu, T.,"Compression of ECG as a Signal with Finite Rate of Innovation", Proceedings of the Twenty-Seventh Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS'05), Shanghai, China, pp. 7564-7567, September 1--4, 2005. |
Compression of the ECG (electrocardiogram) as a signal with finite rate of innovation (FRI) is proposed in this paper. By modelling the ECG signal as the sum of a bandlimited signal and a nonuniform linear spline, which has a finite rate of innovation, sampling theory is applied to achieve effective compression and reconstruction of the ECG signal. The simulation results show that the performance of the algorithm is quite satisfactory in preserving the diagnostic information, as compared to the classical sampling scheme which uses sinc interpolation. |
@inproceedings{blu2005d, author = "Hao, Y. and Marziliano, P. and Vetterli, M. and Blu, T.", title = "Compression of {ECG} as a Signal with Finite Rate of Innovation", booktitle = "Proceedings of the Twenty-Seventh Annual International Conference of the {IEEE} Engineering in Medicine and Biology Society ({EMBS'05})", month = "September 1--4,", year = "2005", pages = "7564--7567", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005d" } |
Horbelt, S., Muñoz Barrutia, A., Blu, T. & Unser, M.,"Spline Kernels for Continuous-Space Image Processing", Proceedings of the Twenty-Fifth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'00), Istanbul, Turkey, Vol. {IV}, pp. 2191-2194, June 5--9, 2000. |
We present an explicit formula for spline kernels; these are defined as the convolution of several B-splines of variable widths h and degrees n. The spline kernels are useful for continuous signal processing algorithms that involve B-spline inner-products or the convolution of several spline basis functions. We apply our results to the derivation of spline-based algorithms for two classes of problems. The first is the resizing of images with arbitrary scaling factors. The second problem is the computation of the Radon transform and of its inverse; in particular, we present a new spline-based version of the filtered backprojection algorithm for tomographic reconstruction. In both cases, our explicit kernel formula allows for the use of high-degree splines; these offer better approximation and performance than conventional lower-order formulations (e.g., piecewise constant or piecewise linear models). |
@inproceedings{blu2000d, author = "Horbelt, S. and Mu{\~{n}}oz Barrutia, A. and Blu, T. and Unser, M.", title = "Spline Kernels for Continuous-Space Image Processing", booktitle = "Proceedings of the Twenty-Fifth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'00})", month = "June 5--9,", year = "2000", volume = "{IV}", pages = "2191--2194", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000d" } |
Ichige, K., Blu, T. & Unser, M.,"A Study on Spline Functions and Their Applications to Digital Signal and Image Processing", The Telecommunications Advancement Foundation, Vol. 18 (7(1)), pp. 358-365, January 2003. |
In this paper, we propose a generalized piecewise-linear interpolation method that interpolates signals using two different function bases, and we report on the usefulness of such function systems in signal and image processing. The proposed functions have approximation order 2, like linear approximation, and can exactly reconstruct piecewise-constant and piecewise-linear signals. The basis functions are characterized by two real parameters, τ and α: τ is a shift parameter corresponding to the location of the basis functions, while α describes their dissymmetry. We show that, by varying these parameters, the approximation accuracy can be optimized independently of the input signal or image. With τ = 0.21 and α = 1, this interpolation method reproduces shifted-linear interpolation. Beyond this combination, we highlight the setting τ = 0.21 and α = 0.58, which interpolates signals with almost the same accuracy as shifted-linear interpolation. Whereas shifted-linear interpolation requires an IIR filter in the decomposition process, this latter setting can be implemented with FIR filters only, and it greatly reduces the Gibbs (ringing) oscillations of shifted-linear interpolation. We evaluate the effectiveness of these parameter settings by measuring the peak SNR between original images and images rotated multiple times using the interpolation, as well as the maximum amplitude of the interpolated images. |
@article{blu2003g, author = "Ichige, K. and Blu, T. and Unser, M.", title = "A Study on Spline Functions and Their Applications to Digital Signal and Image Processing", journal = "The Telecommunications Advancement Foundation", month = "January", year = "2003", volume = "18", number = "7(1)", pages = "358--365", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003g" } |
Ichige, K., Blu, T. & Unser, M.,"Multiwavelet-Like Bases for High Quality Image Interpolation", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing X, San Diego, USA, Vol. 5207, pp. 153-161, August 3--8, 2003. Part I. |
We present a simple but generalized interpolation method for digital images that uses multiwavelet-like basis functions. Most interpolation methods use only one symmetric basis function; for example, standard and shifted piecewise-linear interpolations use the “hat” function only. The proposed method uses q different multiwavelet-like basis functions. The basis functions can be dissymmetric but should preserve the “partition of unity” property for high-quality signal interpolation. The scheme of decomposition and reconstruction of signals by the proposed basis functions can be implemented in filterbank form using a separable IIR implementation. An important property of the proposed scheme is that the prefilters for decomposition can be implemented by FIR filters. Recall that shifted-linear interpolation requires IIR prefiltering, but we find a new configuration which reaches almost the same quality as shifted-linear interpolation, while requiring FIR prefiltering only. Moreover, the present basis functions can be explicitly formulated in the time domain, although most (multi-)wavelets don't have a time-domain formula. We specify an optimum configuration of interpolation parameters for image interpolation, and validate the proposed method by computing the PSNR of the difference between multi-rotated images and their original version. |
@inproceedings{blu2003h, author = "Ichige, K. and Blu, T. and Unser, M.", title = "Multiwavelet-Like Bases for High Quality Image Interpolation", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {X}", month = "August 3--8,", year = "2003", volume = "5207", pages = "153--161", note = "Part {I}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003h" } |
Ichige, K., Blu, T. & Unser, M.,"Interpolation of Signals by Generalized Piecewise-Linear Multiple Generators", Proceedings of the Twenty-Eighth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'03), Hong Kong, China, Vol. {VI}, pp. 261-264, April 6--10, 2003. |
This paper presents an interpolation method based on shifted versions of two piecewise linear generators, which provides approximation order 2 like usual piecewise-linear interpolation; i.e., this method is able to represent the constant and the ramp exactly. Our interpolation is characterized by two real parameters: τ, the location of the generators, and α, related to their dissymmetry. By varying these parameters, we show that it is possible to optimize the quality of the approximation, independently of the function to interpolate. We recover the optimal value of shifted-linear interpolation (τ = 0.21 and α = 1) which requires IIR prefiltering, but we also find a new configuration (τ = 0.21 and α = 0.58) which reaches almost the same quality, while requiring FIR filtering only. This new solution is able to greatly reduce the amount of Gibbs oscillations generated in the shifted-linear interpolation scheme. We validate our finding by computing the PSNR of the difference between multi-rotated images and their original version. |
@inproceedings{blu2003i, author = "Ichige, K. and Blu, T. and Unser, M.", title = "Interpolation of Signals by Generalized Piecewise-Linear Multiple Generators", booktitle = "Proceedings of the Twenty-Eighth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'03})", month = "April 6--10,", year = "2003", volume = "{VI}", pages = "261--264", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003i" } |
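The τ = 0.21, α = 1 configuration mentioned above corresponds to shifted-linear interpolation, whose decomposition step is a simple first-order recursive (IIR) prefilter; a minimal sketch of that baseline scheme (our own illustration, not the paper's new FIR generators):

```python
import numpy as np

def shifted_linear_interp(s, x, tau=0.21):
    """Shifted-linear interpolation: represent samples s[n] as
    sum_k c[k] * hat(t - k - tau).  The interpolation condition
    s[n] = (1 - tau)*c[n] + tau*c[n-1] yields a causal first-order
    recursive (IIR) prefilter for the coefficients c."""
    c = np.empty(len(s), dtype=float)
    c[0] = s[0] / (1 - tau)            # boundary: assume c[-1] = 0
    for n in range(1, len(s)):
        c[n] = (s[n] - tau * c[n - 1]) / (1 - tau)
    # evaluate the shifted piecewise-linear model at positions x
    t = np.asarray(x, dtype=float) - tau
    k = np.clip(np.floor(t).astype(int), 0, len(s) - 2)
    frac = t - k
    return (1 - frac) * c[k] + frac * c[k + 1]

s = np.array([0.0, 1.0, 4.0, 9.0, 16.0])
print(shifted_linear_interp(s, [1.0, 2.0, 3.0]))  # reproduces s at the integers
```

By construction, evaluating at an integer n returns (1 - τ)c[n] + τc[n-1] = s[n], so the scheme is interpolating despite the shifted basis; the FIR configuration of the paper trades this IIR recursion for a short non-recursive prefilter.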
Qualcomm Inc., Blu, T., Vetterli, M. & Coulot, L.,"Sparse sampling of signal innovations", International patent WO/2009/096995, August 2009. |
Signals, including signals from outside of the subspace of bandlimited signals associated with the Shannon theorem, are acquired while still providing an acceptable reconstruction. In some aspects a denoising process is used in conjunction with sparse sampling techniques. For example, a denoising process utilizing a Cadzow algorithm may be used to reduce the amount of noise associated with sampled information. In some aspects the denoising process may be iterative such that the denoising process is repeated until the samples are denoised to a sufficient degree. In some aspects, the denoising process converts a set of received samples into another set corresponding to a signal with a Finite Rate of Innovation (FRI), or to an approximation of such a signal. The disclosure relates in some aspects to combination of a denoising process with annihilating filter methods to retrieve information from a noisy, sparse sampled signal. The disclosure relates in some aspects to determining a sampling kernel to be used to sample the signal based on noise associated with the signal. The disclosure relates in some aspects to determining the number of samples to obtain from a signal over a period of time based on noise associated with the signal. The disclosure relates in some aspects to determining the finite number of innovations of a received signal. |
@misc{blu2009g, author = "{Qualcomm Inc.} and Blu, T. and Vetterli, M. and Coulot, L.", title = "Sparse sampling of signal innovations", month = "August", year = "2009", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009g" } |
Jacob, M., Blu, T. & Unser, M.,"A Unifying Approach and Interface for Spline-Based Snakes", Progress in Biomedical Optics and Imaging, vol. 2, no. 27, San Diego, USA, Vol. 4322, pp. 340-347, February 19--22, 2001. Part I. |
In this paper, we present different solutions for improving spline-based snakes. First, we demonstrate their minimum curvature interpolation property, and use it as an argument to get rid of the explicit smoothness constraint. We also propose a new external energy obtained by integrating a non-linearly pre-processed image in the closed region bounded by the curve. We show that this energy, besides being efficiently computable, is sufficiently general to include the widely used gradient-based schemes, Bayesian schemes, their combinations and discriminant-based approaches. We also introduce two initialization modes and the appropriate constraint energies. We use these ideas to develop a general snake algorithm to track boundaries of closed objects, with a user-friendly interface. |
@inproceedings{blu2001c, author = "Jacob, M. and Blu, T. and Unser, M.", title = "A Unifying Approach and Interface for Spline-Based Snakes", booktitle = "Progress in Biomedical Optics and Imaging, vol. 2, no. 27", month = "February 19--22,", year = "2001", volume = "4322", pages = "340--347", note = "Part {I}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001c" } |
Jacob, M., Blu, T. & Unser, M.,"Exact Computation of Area Moments for Spline and Wavelet Curves", Proceedings of the Fifteenth International Conference on Pattern Recognition (ICPR'00), Barcelona, Spain, Vol. {III}, pp. 131-134, September 3--8, 2000. |
We present an exact algorithm for the computation of the moments of a region bounded by a curve represented in a scaling function or wavelet basis. Using Green's theorem, we show that the computation of the area moments is equivalent to applying a suitable multidimensional filter on the coefficients of the curve and thereafter computing a scalar product. We compare this algorithm with existing methods such as pixel-based approaches and approximation of the region by a polygon. |
@inproceedings{blu2000e, author = "Jacob, M. and Blu, T. and Unser, M.", title = "Exact Computation of Area Moments for Spline and Wavelet Curves", booktitle = "Proceedings of the Fifteenth International Conference on Pattern Recognition ({ICPR'00})", month = "September 3--8,", year = "2000", volume = "{III}", pages = "131--134", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000e" } |
Jacob, M., Blu, T. & Unser, M.,"An Exact Method for Computing the Area Moments of Wavelet and Spline Curves", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23 (6), pp. 633-642, June 2001. |
We present a method for the exact computation of the moments of a region bounded by a curve represented by a scaling function or wavelet basis. Using Green's Theorem, we show that the computation of the area moments is equivalent to applying a suitable multidimensional filter on the coefficients of the curve and thereafter computing a scalar product. The multidimensional filter coefficients are precomputed exactly as the solution of a two-scale relation. To demonstrate the performance improvement of the new method, we compare it with existing methods such as pixel-based approaches and approximation of the region by a polygon. We also propose an alternate scheme when the scaling function is sinc(x). |
@article{blu2001d, author = "Jacob, M. and Blu, T. and Unser, M.", title = "An Exact Method for Computing the Area Moments of Wavelet and Spline Curves", journal = "{IEEE} Transactions on Pattern Analysis and Machine Intelligence", month = "June", year = "2001", volume = "23", number = "6", pages = "633--642", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001d" } |
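The Green's-theorem identity at the heart of this method turns area moments into boundary integrals; for the polygon approximation that the paper compares against, those integrals reduce to shoelace-type sums (an illustrative sketch of that comparison method, not the paper's wavelet-domain filter):

```python
import numpy as np

def polygon_moments(xy):
    """Area and centroid of a region from its boundary alone, via Green's
    theorem specialized to a polygonal boundary (counter-clockwise
    vertices): the boundary integrals become shoelace-type sums."""
    x, y = xy[:, 0], xy[:, 1]
    x1, y1 = np.roll(x, -1), np.roll(y, -1)     # next vertex along the boundary
    cross = x * y1 - x1 * y                     # edge-wise cross products
    area = cross.sum() / 2                      # shoelace formula (moment m00)
    cx = ((x + x1) * cross).sum() / (6 * area)  # centroid = m10 / m00
    cy = ((y + y1) * cross).sum() / (6 * area)  # centroid = m01 / m00
    return area, cx, cy

# unit square: area 1, centroid (0.5, 0.5)
sq = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(polygon_moments(sq))
```

The paper's contribution is to evaluate the same boundary integrals exactly for spline and wavelet curves, via precomputed multidimensional filters, rather than through this polygonal discretization.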
Jacob, M., Blu, T. & Unser, M.,"An Error Analysis for the Sampling of Periodic Signals", Proceedings of the Fourth International Conference on Sampling Theory and Applications (SampTA'01), Orlando, USA, pp. 45-48, May 13--17, 2001. |
We analyze the representation of periodic signals in a scaling function basis. This representation is sufficiently general to include the widely used approximation schemes like wavelets, splines and Fourier series representation. We derive a closed form expression for the approximation error in the scaling function representation. The error formula takes the simple form of a Parseval-like sum, weighted by an appropriate error kernel. This formula may be useful in choosing the right representation for a class of signals. We also experimentally verify the theory in the particular case of the description of closed curves. |
@inproceedings{blu2001e, author = "Jacob, M. and Blu, T. and Unser, M.", title = "An Error Analysis for the Sampling of Periodic Signals", booktitle = "Proceedings of the Fourth International Conference on Sampling Theory and Applications ({SampTA'01})", month = "May 13--17,", year = "2001", pages = "45--48", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001e" } |
Jacob, M., Blu, T. & Unser, M.,"Sampling of Periodic Signals: A Quantitative Error Analysis", IEEE Transactions on Signal Processing, Vol. 50 (5), pp. 1153-1159, May 2002. |
We present an exact expression for the L2 error that occurs when one approximates a periodic signal in a basis of shifted and scaled versions of a generating function. This formulation is applicable to a wide variety of linear approximation schemes including wavelets, splines, and bandlimited signal expansions. The formula takes the simple form of a Parseval-like relation, where the Fourier coefficients of the signal are weighted against a frequency kernel that characterizes the approximation operator. We use this expression to analyze the behavior of the error as the sampling step approaches zero. We also experimentally verify the expression of the error in the context of the interpolation of closed curves. |
@article{blu2002e, author = "Jacob, M. and Blu, T. and Unser, M.", title = "Sampling of Periodic Signals: {A} Quantitative Error Analysis", journal = "{IEEE} Transactions on Signal Processing", month = "May", year = "2002", volume = "50", number = "5", pages = "1153--1159", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002e" } |
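In the least-squares setting, the Parseval-like relation mentioned in the abstract can be written as follows (our notation, a sketch of the structure; see the paper for the precise kernel and its dependence on the approximation operator):

```latex
\varepsilon^2(T) \;=\; \sum_{k\in\mathbb{Z}} \lvert c_k\rvert^2\, E(k\omega_0 T),
\qquad
E(\omega) \;=\; 1 \;-\;
\frac{\lvert\hat\varphi(\omega)\rvert^2}
     {\sum_{n\in\mathbb{Z}} \lvert\hat\varphi(\omega + 2\pi n)\rvert^2},
```

where the $c_k$ are the Fourier coefficients of the periodic signal, $\omega_0$ its fundamental frequency, $T$ the sampling step, and $\hat\varphi$ the Fourier transform of the generating function.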
Jacob, M., Blu, T. & Unser, M.,"3-D Reconstruction of DNA Filaments from Stereo Cryo-Electron Micrographs", Proceedings of the First IEEE International Symposium on Biomedical Imaging (ISBI'02), Washington, USA, Vol. {II}, pp. 597-600, July 7--10, 2002. |
We propose an algorithm for the 3-D reconstruction of DNA filaments from a pair of stereo cryo-electron micrographs. The underlying principle is to specify a 3-D model of a filament—described as a spline curve—and to fit it to the 2-D data using a snake-like algorithm. To drive the snake, we constructed a ridge-enhancing vector field for each of the images based on the maximum output of a bank of rotating matched filters. The magnitude of the field gives a confidence measure for the presence of a filament and the phase indicates its direction. We also propose a fast algorithm to perform the matched filtering. The snake algorithm starts with an initial curve (input by the user) and evolves it so that its projections on the viewing plane are in maximal agreement with the corresponding vector fields. |
@inproceedings{blu2002f, author = "Jacob, M. and Blu, T. and Unser, M.", title = "{3-D} Reconstruction of {DNA} Filaments from Stereo Cryo-Electron Micrographs", booktitle = "Proceedings of the First {IEEE} International Symposium on Biomedical Imaging ({ISBI'02})", month = "July 7--10,", year = "2002", volume = "{II}", pages = "597--600", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002f" } |
Jacob, M., Blu, T. & Unser, M.,"Efficient Energies and Algorithms for Parametric Snakes", IEEE Transactions on Image Processing, Vol. 13 (9), pp. 1231-1244, September 2004. |
Parametric active contour models are one of the preferred approaches for image segmentation because of their computational efficiency and simplicity. However, they have a few drawbacks which limit their performance. In this paper, we identify some of these problems and propose efficient solutions to get around them. The widely-used gradient magnitude-based energy is parameter dependent; its use will negatively affect the parameterization of the curve and, consequently, its stiffness. Hence, we introduce a new edge-based energy that is independent of the parameterization. It is also more robust since it takes into account the gradient direction as well. We express this energy term as a surface integral, thus unifying it naturally with the region-based schemes. The unified framework enables the user to tune the image energy to the application at hand. We show that parametric snakes can guarantee low-curvature curves, but only if they are described in the curvilinear abscissa. Since normal curve evolution does not ensure constant arc-length, we propose a new internal energy term that will force this configuration. The curve evolution can sometimes give rise to closed loops in the contour, which will adversely interfere with the optimization algorithm. We propose a curve evolution scheme that prevents this condition. |
@article{blu2004e, author = "Jacob, M. and Blu, T. and Unser, M.", title = "Efficient Energies and Algorithms for Parametric Snakes", journal = "{IEEE} Transactions on Image Processing", month = "September", year = "2004", volume = "13", number = "9", pages = "1231--1244", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004e" } |
Jacob, M., Blu, T. & Unser, M.,"Shape Estimation of 3-D DNA Molecules from Stereo Cryo-Electron Micrographs", Proceedings of the 2004 IEEE International Conference on Image Processing (ICIP'04), Singapore, Singapore, pp. 1883-1886, October 24--27, 2004. |
We introduce a 3-D parametric active contour algorithm for the shape estimation of DNA molecules from stereo cryo-electron micrographs. We consider a 3-D filament (consisting of a B-spline skeleton and a specified radial profile) and match its projections with the micrographs using an optimization algorithm. To accelerate the evaluation of the projections, we approximate the global model locally by an elongated blob-like template that is designed to be projection-steerable. This means that the 2-D projections of the template at any 3-D orientation can be expressed as a linear combination of a few basis functions. Thus, the matching of the template projections is reduced to evaluating a weighted sum of the inner-products between the basis functions and the micrographs. We choose an internal energy term that penalizes the total curvature magnitude of the curve. We also use a constraint energy term that forces the curve to have a specified length. The sum of these terms along with the image energy obtained from the matching process is minimized using a conjugate-gradient algorithm. We validate the algorithm using real as well as simulated data. |
@inproceedings{blu2004f, author = "Jacob, M. and Blu, T. and Unser, M.", title = "Shape Estimation of {3-D} {DNA} Molecules from Stereo Cryo-Electron Micrographs", booktitle = "Proceedings of the 2004 {IEEE} International Conference on Image Processing ({ICIP'04})", month = "October 24--27,", year = "2004", pages = "1883--1886", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004f" } |
Jacob, M., Blu, T., Vaillant, C., Maddocks, J.H. & Unser, M.,"3-D Shape Estimation of DNA Molecules from Stereo Cryo-Electron Micrographs Using a Projection-Steerable Snake", IEEE Transactions on Image Processing, Vol. 15 (1), pp. 214-227, January 2006. |
We introduce a three-dimensional (3-D) parametric active contour algorithm for the shape estimation of DNA molecules from stereo cryo-electron micrographs. We estimate the shape by matching the projections of a 3-D global shape model with the micrographs; we choose the global model as a 3-D filament with a B-spline skeleton and a specified radial profile. The active contour algorithm iteratively updates the B-spline coefficients, which requires us to evaluate the projections and match them with the micrographs at every iteration. Since the evaluation of the projections of the global model is computationally expensive, we propose a fast algorithm based on locally approximating it by elongated blob-like templates. We introduce the concept of projection-steerability and derive a projection-steerable elongated template. Since the two-dimensional projections of such a blob at any 3-D orientation can be expressed as a linear combination of a few basis functions, matching the projections of such a 3-D template involves evaluating a weighted sum of inner products between the basis functions and the micrographs. The weights are simple functions of the 3-D orientation and the inner-products are evaluated efficiently by separable filtering. We choose an internal energy term that penalizes the average curvature magnitude. Since the exact length of the DNA molecule is known a priori, we introduce a constraint energy term that forces the curve to have this specified length. The sum of these energies along with the image energy derived from the matching process is minimized using the conjugate gradients algorithm. We validate the algorithm using real, as well as simulated, data and show that it performs well. |
@article{blu2006c, author = "Jacob, M. and Blu, T. and Vaillant, C. and Maddocks, J.H. and Unser, M.", title = "{3-D} Shape Estimation of {DNA} Molecules from Stereo Cryo-Electron Micrographs Using a Projection-Steerable Snake", journal = "{IEEE} Transactions on Image Processing", month = "January", year = "2006", volume = "15", number = "1", pages = "214--227", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006c" } |
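The projection-steerability idea has a simpler, classical 2-D analogue that may help intuition (our example, not the paper's projection-steerable template): the first derivative of a Gaussian at any orientation is a linear combination of just two basis filters.

```python
# Classical 2-D steerability (a hedged analogue of the paper's
# projection-steerable template): the directional derivative of a
# Gaussian at angle theta equals cos(theta)*Gx + sin(theta)*Gy.
import numpy as np

x, y = np.meshgrid(np.linspace(-3, 3, 61), np.linspace(-3, 3, 61))
g = np.exp(-(x**2 + y**2) / 2)
gx, gy = -x * g, -y * g                      # the two basis filters

theta = 0.7
steered = np.cos(theta) * gx + np.sin(theta) * gy
# Direct construction of the derivative along direction theta:
u = x * np.cos(theta) + y * np.sin(theta)
direct = -u * g                              # identical to 'steered'
```

The same economy is what makes the matching step cheap: filtering with a few fixed basis functions covers all orientations.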
Jayashankar, T., Moulin, P., Blu, T. & Gilliam, C.,"LAP-Based Video Frame Interpolation", Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP'19), Taipei, Taiwan, pp. 4195-4199, September 22--25, 2019. |
High-quality video frame interpolation often necessitates accurate motion estimation, which can be obtained using modern optical flow methods. In this paper, we use the recently proposed Local All-Pass (LAP) algorithm to compute the optical flow between two consecutive frames. The resulting flow field is used to perform interpolation using cubic splines. We compare the interpolation results against a well-known optical flow estimation algorithm as well as against a recent convolutional neural network scheme for video frame interpolation. Qualitative and quantitative results show that the LAP algorithm performs fast, high-quality video frame interpolation, and perceptually outperforms the neural network and the Lucas-Kanade method on a variety of test sequences. |
@inproceedings{blu2019f, author = "Jayashankar, T. and Moulin, P. and Blu, T. and Gilliam, C.", title = "{LAP}-Based Video Frame Interpolation", booktitle = "Proceedings of the 2019 {IEEE} International Conference on Image Processing ({ICIP'19})", month = "September 22--25,", year = "2019", pages = "4195--4199", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2019f" } |
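A minimal sketch of the flow-based interpolation step, assuming a dense flow field (u, v) is already available (the LAP estimator itself is not reproduced here, and `midpoint_frame` is our name): the mid-frame is synthesized by warping frame 1 halfway along the flow with cubic-spline resampling.

```python
# Hedged sketch: synthesize the temporal mid-frame by backward-warping
# frame 1 halfway along a given dense displacement field (u, v), with
# cubic-spline resampling.  The flow itself would come from an
# estimator such as the Local All-Pass (LAP) algorithm, not shown.
import numpy as np
from scipy.ndimage import map_coordinates

def midpoint_frame(frame1, u, v):
    """Warp frame1 by half the displacement field (u, v)."""
    h, w = frame1.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Backward warping: the mid-frame pixel (x, y) is sampled from
    # frame1 at (x - u/2, y - v/2) using cubic-spline interpolation.
    coords = np.array([yy - 0.5 * v, xx - 0.5 * u])
    return map_coordinates(frame1, coords, order=3, mode='nearest')
```

In practice one would blend warps of both frames to handle occlusions; the single-frame warp keeps the sketch short.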
Jiang, B., Jin, T., Blu, T. & Chen, W.,"Probing chemical exchange using quantitative spin-lock R_1ρ asymmetry imaging with adiabatic RF pulses", Magnetic Resonance in Medicine, Vol. 82 (5), pp. 1767-1781, November 2019. |
Purpose: CEST is commonly used to probe the effects of chemical exchange. Although R1ρ asymmetry quantification has also been described as a promising option for detecting the effects of chemical exchange, the existing acquisition approaches are highly susceptible to B1 RF and B0 field inhomogeneities. To address this problem, we report a new R1ρ asymmetry imaging approach, AC-iTIP, which is based on the previously reported techniques of irradiation with toggling inversion preparation (iTIP) and adiabatic continuous wave constant amplitude spin-lock RF pulses (ACCSL). We also derived the optimal spin-lock RF pulse B1 amplitude that yielded the greatest R1ρ asymmetry. Methods: Bloch-McConnell simulations were used to verify the analytical formula derived for the optimal spin-lock RF pulse B1 amplitude. The performance of the AC-iTIP approach was compared to that of the iTIP approach based on hard RF pulses and the R1ρ-spectrum acquired using adiabatic RF pulses with the conventional fitting method. Comparisons were performed using Bloch-McConnell simulations, phantom, and in vivo experiments at 3.0T. Results: The analytical prediction of the optimal B1 was validated. Compared to the other two approaches, the AC-iTIP approach was more robust under the influences of B1 RF and B0 field inhomogeneities. A linear relationship was observed between the measured R1ρ asymmetry and the metabolite concentration. Conclusion: The AC-iTIP approach could probe the chemical exchange effect more robustly than the existing R1ρ asymmetry acquisition approaches. Therefore, AC-iTIP is a promising technique for metabolite imaging based on the chemical exchange effect. |
@article{blu2019g, author = "Jiang, B. and Jin, T. and Blu, T. and Chen, W.", title = "Probing chemical exchange using quantitative spin-lock ${R}_{1\rho}$ asymmetry imaging with adiabatic {RF} pulses", journal = "Magnetic Resonance in Medicine", month = "November", year = "2019", volume = "82", number = "5", pages = "1767--1781", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2019g" } |
Kandaswamy, D., Blu, T., Spinelli, L., Michel, C. & Van De Ville, D.,"EEG Source Localization by Multi-Planar Analytic Sensing", Proceedings of the Fifth IEEE International Symposium on Biomedical Imaging (ISBI'08), Paris, France, pp. 1075-1078, May 14--17, 2008. |
Source localization from EEG surface measurements is an important problem in neuro-imaging. We propose a new mathematical framework to estimate the parameters of a multidipole source model. To that aim, we perform 2-D analytic sensing in multiple planes. The estimation of the projection on each plane of the dipoles' positions, which is a non-linear problem, is reduced to polynomial root finding. The 3-D information is then recovered as a special case of tomographic reconstruction. The feasibility of the proposed approach is shown for both synthetic and experimental data. |
@inproceedings{blu2008f, author = "Kandaswamy, D. and Blu, T. and Spinelli, L. and Michel, C. and Van De Ville, D.", title = "{EEG} Source Localization by Multi-Planar Analytic Sensing", booktitle = "Proceedings of the Fifth {IEEE} International Symposium on Biomedical Imaging ({ISBI'08})", month = "May 14--17,", year = "2008", pages = "1075--1078", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008f" } |
Kandaswamy, D., Blu, T., Spinelli, L., Michel, C. & Van De Ville, D.,"Local Multilayer Analytic Sensing for EEG Source Localization: Performance Bounds and Experimental Results", Proceedings of the Eighth IEEE International Symposium on Biomedical Imaging (ISBI'11), Chicago, USA, pp. 479-483, March 30--April 2, 2011. |
Analytic sensing is a new mathematical framework to estimate the parameters of a multi-dipole source model from boundary measurements. The method deploys two working principles. First, the sensing principle relates the boundary measurements to the volumetric interactions of the sources with the so-called "analytic sensor," a test function that is concentrated around a singular point outside the domain of interest. Second, the annihilation principle allows retrieving the projection of the dipoles' positions in a single shot by polynomial root finding. Here, we propose to apply analytic sensing in a local way; i.e., the poles do not surround the complete domain. By combining two local projections of the (nearby) dipolar sources, we are able to reconstruct the full 3-D information. We demonstrate the feasibility of the proposed approach for both synthetic and experimental data, together with the theoretical lower bounds of the localization error. |
@inproceedings{blu2011b, author = "Kandaswamy, D. and Blu, T. and Spinelli, L. and Michel, C. and Van De Ville, D.", title = "Local Multilayer Analytic Sensing for {EEG} Source Localization: Performance Bounds and Experimental Results", booktitle = "Proceedings of the Eighth {IEEE} International Symposium on Biomedical Imaging ({ISBI'11})", month = "March 30--April 2,", year = "2011", pages = "479--483", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011b" } |
Kandaswamy, D., Blu, T. & Van De Ville, D.,"Analytic Sensing: Direct Recovery of Point Sources from Planar Cauchy Boundary Measurements", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet XII, San Diego, USA, Vol. 6701, pp. 67011Y-1-67011Y-6, August 26--29, 2007. |
Inverse problems play an important role in engineering. A problem that often occurs in electromagnetics (e.g. EEG) is the estimation of the locations and strengths of point sources from boundary data. We propose a new technique, for which we coin the term “analytic sensing”. First, generalized measures are obtained by applying Green's theorem to selected functions that are analytic in a given domain and at the same time localized to “sense” the sources. Second, we use the finite-rate-of-innovation framework to determine the locations of the sources. Hence, we construct a polynomial whose roots are the sources' locations. Finally, the strengths of the sources are found by solving a linear system of equations. Preliminary results, using synthetic data, demonstrate the feasibility of the proposed method. |
@inproceedings{blu2007e, author = "Kandaswamy, D. and Blu, T. and Van De Ville, D.", title = "Analytic Sensing: {D}irect Recovery of Point Sources from Planar {C}auchy Boundary Measurements", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet {XII}", month = "August 26--29,", year = "2007", volume = "6701", pages = "67011Y-1--67011Y-6", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007e" } |
Kandaswamy, D., Blu, T. & Van De Ville, D.,"Analytic Sensing: Noniterative Retrieval of Point Sources from Boundary Measurements", SIAM Journal on Scientific Computing, Vol. 31 (4), pp. 3179-3194, 2009. |
We consider the problem of locating point sources in the planar domain from overdetermined boundary measurements of solutions of Poisson's equation. In this paper, we propose a novel technique, termed "analytic sensing," which combines the application of Green's theorem to functions with vanishing Laplacian—known as the "reciprocity gap" principle—with the careful selection of analytic functions that "sense" the manifestation of the sources in order to determine their positions and intensities. Using this formalism we express the problem at hand as a generalized sampling problem, where the signal to be reconstructed is the source distribution. To determine the positions of the sources, which is a nonlinear problem, we extend the annihilating-filter method, which reduces the problem to solving a linear system of equations for a polynomial whose roots are the positions of the point sources. Once these positions are found, resolving the according intensities boils down to solving a linear system of equations. We demonstrate the performance of our technique in the presence of noise by comparing the achieved accuracy with the theoretical lower bound provided by Cramér-Rao theory. |
@article{blu2009d, author = "Kandaswamy, D. and Blu, T. and Van De Ville, D.", title = "Analytic Sensing: {N}oniterative Retrieval of Point Sources from Boundary Measurements", journal = "{SIAM} Journal on Scientific Computing", year = "2009", volume = "31", number = "4", pages = "3179--3194", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009d" } |
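The annihilating-filter step admits a compact numerical illustration (a sketch in our own notation; `recover_sources` and the moment convention are ours, not the paper's): from 2K generalized moments μ_n = Σ_k c_k z_k^n, the positions z_k are roots of a polynomial obtained from a K×K linear system, and the intensities follow from a Vandermonde system.

```python
# Hedged sketch of the annihilating-filter principle: point sources at
# complex positions z_k with intensities c_k produce the moments
# mu_n = sum_k c_k z_k^n.  A K-tap filter that annihilates this
# sequence has the z_k as its polynomial roots.
import numpy as np

def recover_sources(mu, K):
    """Recover K positions/intensities from moments mu_0..mu_{2K-1}."""
    # Solve for h such that mu_n + h_1 mu_{n-1} + ... + h_K mu_{n-K} = 0
    # for n = K..2K-1 (a K x K Toeplitz system).
    A = np.array([[mu[K - 1 + i - j] for j in range(K)] for i in range(K)])
    b = np.array([mu[K + i] for i in range(K)])
    h = np.linalg.solve(A, -b)
    # Positions: roots of z^K + h_1 z^(K-1) + ... + h_K.
    z = np.roots(np.concatenate(([1.0], h)))
    # Intensities: linear least-squares fit of the Vandermonde model.
    V = np.vander(z, N=len(mu), increasing=True).T
    c, *_ = np.linalg.lstsq(V, mu, rcond=None)
    return z, c
```

This mirrors the structure described in the abstract: root finding handles the nonlinear part (positions), and a linear solve resolves the intensities.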
Kandaswamy, D., Blu, T. & Van De Ville, D.,"Analytic Sensing for Multi-Layer Spherical Models with Application to EEG Source Imaging", Inverse Problems and Imaging, Vol. 7 (4), pp. 1251-1270, November 2013. |
Source imaging maps back boundary measurements to underlying generators within the domain; e.g., retrieving the parameters of the generating dipoles from electrical potential measurements on the scalp such as in electroencephalography (EEG). Fitting such a parametric source model is non-linear in the positions of the sources and renewed interest in mathematical imaging has led to several promising approaches. One important step in these methods is the application of a sensing principle that links the boundary measurements to volumetric information about the sources. This principle is based on the divergence theorem and a mathematical test function that needs to be a homogeneous solution of the governing equations (i.e., Poisson's equation). For a specific choice of the test function, we have devised an algebraic non-iterative source localization technique for which we have coined the term "analytic sensing". Until now, this sensing principle has been applied to homogeneous-conductivity spherical models only. Here, we extend it to the multi-layer spherical models that are commonly applied in EEG. We obtain a closed-form expression for the test function that can then be applied for subsequent localization. A simulation study shows the feasibility of the proposed approach. |
@article{blu2013d, author = "Kandaswamy, D. and Blu, T. and Van De Ville, D.", title = "Analytic Sensing for Multi-Layer Spherical Models with Application to {EEG} Source Imaging", journal = "Inverse Problems and Imaging", month = "November", year = "2013", volume = "7", number = "4", pages = "1251--1270", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013d" } |
Khalidov, I., Blu, T. & Unser, M.,"Generalized L-Spline Wavelet Bases", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet XI, San Diego, USA, Vol. 5914, pp. 59140F-1-59140F-8, July 31--August 3, 2005. |
We build wavelet-like functions based on a parametrized family of pseudo-differential operators Lv that satisfy some admissibility and scalability conditions. The shifts of the generalized B-splines, which are localized versions of the Green function of Lv, generate a family of L-spline spaces. These spaces have an approximation order equal to the order of the underlying operator. A sequence of embedded spaces is obtained by choosing a dyadic scale progression a = 2^i. The consecutive inclusion of the spaces yields the refinement equation, where the scaling filter depends on scale. The generalized L-wavelets are then constructed as basis functions for the orthogonal complements of spline spaces. The vanishing moment property of conventional wavelets is generalized to the vanishing null space element property. In spite of the scale dependence of the filters, the wavelet decomposition can be performed using an adapted version of Mallat's filterbank algorithm. |
@inproceedings{blu2005e, author = "Khalidov, I. and Blu, T. and Unser, M.", title = "Generalized ${\mathrm{L}}$-Spline Wavelet Bases", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet {XI}", month = "July 31--August 3,", year = "2005", volume = "5914", pages = "59140F-1--59140F-8", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005e" } |
Khalidov, I., Van De Ville, D., Blu, T. & Unser, M.,"Construction of Wavelet Bases That Mimic the Behaviour of Some Given Operator", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet XII, San Diego, USA, Vol. 6701, pp. 67010S-1-67010S-7, August 26--29, 2007. |
Probably the most important property of wavelets for signal processing is their multiscale derivative-like behavior when applied to functions. In order to extend the class of problems that can benefit from wavelet-based techniques, we propose to build new families of wavelets that behave like an arbitrary scale-covariant operator. Our extension is general and includes many known wavelet bases. At the same time, the method takes advantage of a fast filterbank decomposition-reconstruction algorithm. We give necessary conditions for the scale-covariant operator to admit our wavelet construction, and we provide examples of new wavelets that can be obtained with our method. |
@inproceedings{blu2007f, author = "Khalidov, I. and Van De Ville, D. and Blu, T. and Unser, M.", title = "Construction of Wavelet Bases That Mimic the Behaviour of Some Given Operator", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet {XII}", month = "August 26--29,", year = "2007", volume = "6701", pages = "67010S-1--67010S-7", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007f" } |
Küstner, T., Pan, J., Gilliam, C., Qi, H., Cruz, G., Hammernik, K., Yang, B., Blu, T., Rueckert, D., Botnar, R., Prieto, C. & Gatidis, S.,"Deep-Learning Based Motion-Corrected Image Reconstruction in 4D Magnetic Resonance Imaging of the Body Trunk", Proceedings of the 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Auckland, NZ, pp. 976-985, December 7--10, 2020. |
Respiratory and cardiac motion can cause artifacts in magnetic resonance imaging of the body trunk if patients cannot hold their breath or triggered acquisitions are not practical. Retrospective correction strategies usually cope with motion by means of fast imaging sequences with integrated motion tracking under free-movement conditions. These acquisitions perform sub-Nyquist sampling and retrospectively bin the data into the respective motion states, yielding subsampled and motion-resolved k-space data. The motion-resolved k-spaces are linked to each other by non-rigid deformation fields. The accurate estimation of such motion is thus an important task in the successful correction of respiratory and cardiac motion. Usually this problem is formulated in image space via diffusion, parametric-spline or optical flow methods. Image-based registration can however be impaired by aliasing artifacts or by estimation from low-resolution images. Subsequently, any motion-corrected reconstruction can be biased by errors in the deformation fields. In this work, we propose a novel deep-learning based motion-corrected 4D (3D spatial + time) image reconstruction which combines a non-rigid registration network and a (3+1)D reconstruction network. Non-rigid motion is estimated directly in k-space based on an optical flow idea and incorporated into the reconstruction network. The proposed method is evaluated on in-vivo 4D motion-resolved magnetic resonance images of patients with suspected liver or lung metastases and healthy subjects. |
@inproceedings{blu2020i, author = "K\"ustner, T. and Pan, J. and Gilliam, C. and Qi, H. and Cruz, G. and Hammernik, K. and Yang, B. and Blu, T. and Rueckert, D. and Botnar, R. and Prieto, C. and Gatidis, S.", title = "Deep-Learning Based Motion-Corrected Image Reconstruction in {4D} Magnetic Resonance Imaging of the Body Trunk", booktitle = "Proceedings of the 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)", month = "December 7--10", year = "2020", pages = "976--985", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020i" } |
Küstner, T., Pan, J., Qi, H., Cruz, G., Gilliam, C., Blu, T., Yang, B., Gatidis, S., Botnar, R. & Prieto, C.,"LAPNet: Non-rigid Registration derived in k-space for Magnetic Resonance Imaging", IEEE Transactions on Medical Imaging, Vol. 40 (12), pp. 3686-3697, December 2021. |
Physiological motion, such as cardiac and respiratory motion, during Magnetic Resonance (MR) image acquisition can cause image artifacts. Motion correction techniques have been proposed to compensate for these types of motion during thoracic scans, relying on accurate motion estimation from undersampled motion-resolved reconstruction. A particular interest and challenge lie in the derivation of reliable non-rigid motion fields from the undersampled motion-resolved data. Motion estimation is usually formulated in image space via diffusion, parametric-spline, or optical flow methods. However, image-based registration can be impaired by remaining aliasing artifacts due to the undersampled motion-resolved reconstruction. In this work, we describe a formalism to perform non-rigid registration directly in the sampled Fourier space, i.e. k-space. We propose a deep-learning based approach to perform fast and accurate non-rigid registration from the undersampled k-space data. The basic working principle originates from the Local All-Pass (LAP) technique, a recently introduced optical flow-based registration. The proposed LAPNet is compared against traditional and deep learning image-based registrations and tested on fully-sampled and highly-accelerated (with two undersampling strategies) 3D respiratory motion-resolved MR images in a cohort of 40 patients with suspected liver or lung metastases and 25 healthy subjects. The proposed LAPNet provided consistent and superior performance to image-based approaches throughout different sampling trajectories and acceleration factors. |
@article{blu2021d, author = "K{\"u}stner, T. and Pan, J. and Qi, H. and Cruz, G. and Gilliam, C. and Blu, T. and Yang, B. and Gatidis, S. and Botnar, R. and Prieto, C.", title = "{LAPNet}: Non-rigid Registration derived in {k}-space for Magnetic Resonance Imaging", journal = "IEEE Transactions on Medical Imaging", month = "December", year = "2021", volume = "40", number = "12", pages = "3686--3697", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2021d" } |
Küstner, T., Schwartz, M., Martirosian, P., Gatidis, S., Seith, F., Gilliam, C., Blu, T., Fayad, H., Visvikis, D., Schick, F., Yang, B., Schmidt, H. & Schwenzer, N.F.,"MR-based respiratory and cardiac motion correction for PET imaging", Medical Image Analysis, Vol. 42, pp. 129-144, December 2017. |
Purpose: To develop a motion correction for Positron-Emission-Tomography (PET) using simultaneously acquired magnetic-resonance (MR) images within 90 seconds. Methods: A 90-second MR acquisition allows the generation of a cardiac and respiratory motion model of the body trunk. Thereafter, further diagnostic MR sequences can be recorded during the PET examination without any limitation. To provide full PET scan time coverage, a sensor fusion approach maps external motion signals (respiratory belt, ECG-derived respiration signal) to a complete surrogate signal on which the retrospective data binning is performed. A joint Compressed Sensing reconstruction and motion estimation of the subsampled data provides motion-resolved MR images (respiratory + cardiac). A 1-point Dixon method is applied to these MR images to derive a motion-resolved attenuation map. The motion model and the attenuation map are fed to the Customizable and Advanced Software for Tomographic Reconstruction (CASToR) PET reconstruction system in which the motion correction is incorporated. All reconstruction steps are performed online on the scanner via Gadgetron to provide a clinically feasible setup for improved general applicability. The method was evaluated on 36 patients with suspected liver or lung metastasis in terms of lesion quantification (SUVmax, SNR, contrast), delineation (FWHM, slope steepness) and diagnostic confidence level (3-point Likert scale). Results: A motion correction could be conducted for all patients; however, moving lesions were observed in only 30 patients. For the examined 134 malignant lesions, an average improvement in lesion quantification of 22%, delineation of 64% and diagnostic confidence level of 23% was achieved. Conclusion: The proposed method provides a clinically feasible setup for respiratory and cardiac motion correction of PET data by simultaneous short-term MRI. The acquisition sequence and all reconstruction steps are publicly available to foster multi-center studies and various motion correction scenarios. |
@article{blu2017j, author = "K\"ustner, T. and Schwartz, M. and Martirosian, P. and Gatidis, S. and Seith, F. and Gilliam, C. and Blu, T. and Fayad, H. and Visvikis, D. and Schick, F. and Yang, B. and Schmidt, H. and Schwenzer, N.F.", title = "{MR}-based respiratory and cardiac motion correction for {PET} imaging", journal = "Medical Image Analysis", month = "December", year = "2017", volume = "42", pages = "129--144", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017j" } |
Kybic, J., Blu, T. & Unser, M.,"Variational Approach to Tomographic Reconstruction", Progress in Biomedical Optics and Imaging, vol. 2, no. 27, San Diego, USA, Vol. 4322, pp. 30-39, February 19--22, 2001. Part I. |
We formulate the tomographic reconstruction problem in a variational setting. The object to be reconstructed is considered as a continuous density function, unlike in the pixel-based approaches. The measurements are modeled as linear operators (Radon transform), integrating the density function along the ray path. The criterion that we minimize consists of a data term and a regularization term. The data term represents the inconsistency between applying the measurement model to the density function and the real measurements. The regularization term corresponds to the smoothness of the density function. We show that this leads to a solution lying in a finite-dimensional vector space which can be expressed as a linear combination of generating functions. The coefficients of this linear combination are determined from a linear equation set, solvable either directly or by using an iterative approach. Our experiments show that our new variational method gives results comparable to the classical filtered back-projection for a high number of measurements (projection angles and sensor resolution). The new method performs better for a medium number of measurements. Furthermore, the variational approach gives usable results even with very few measurements, when the filtered back-projection fails. Our method reproduces amplitudes more faithfully and can cope with high noise levels; it can be adapted to various characteristics of the acquisition device. |
@inproceedings{blu2001f, author = "Kybic, J. and Blu, T. and Unser, M.", title = "Variational Approach to Tomographic Reconstruction", booktitle = "Progress in Biomedical Optics and Imaging, vol. 2, no. 27", month = "February 19--22,", year = "2001", volume = "4322", pages = "30--39", note = "Part {I}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001f" } |
Kybic, J., Blu, T. & Unser, M.,"Generalized Sampling: A Variational Approach", Proceedings of the Fourth International Conference on Sampling Theory and Applications (SampTA'01), Orlando, USA, pp. 151-154, May 13--17, 2001. |
We consider the problem of reconstructing a multidimensional and multivariate function f: ℝ^m → ℝ^n from the discretely and irregularly sampled responses of q linear shift-invariant filters. Unlike traditional approaches which reconstruct the function in some signal space V, our reconstruction is optimal in the sense of a plausibility criterion J. The reconstruction is either consistent with the measurements, or minimizes the consistency error. There is no band-limiting restriction on the input signals. We show that important characteristics of the reconstruction process are induced by the properties of the criterion J. We give the reconstruction formula and apply it to several practical cases. |
@inproceedings{blu2001g, author = "Kybic, J. and Blu, T. and Unser, M.", title = "Generalized Sampling: {A} Variational Approach", booktitle = "Proceedings of the Fourth International Conference on Sampling Theory and Applications ({SampTA'01})", month = "May 13--17,", year = "2001", pages = "151--154", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001g" } |
Kybic, J., Blu, T. & Unser, M.,"Generalized Sampling: A Variational Approach---Part II: Applications", IEEE Transactions on Signal Processing, Vol. 50 (8), pp. 1977-1985, August 2002. |
The variational reconstruction theory from a companion paper finds a solution consistent with some linear constraints and minimizing a quadratic plausibility criterion. It is suitable for treating vector and multidimensional signals. Here, we apply the theory to a generalized sampling system consisting of a multichannel filterbank followed by a nonuniform sampling. We provide ready-made formulas, which should permit application of the technique directly to problems at hand. We comment on the practical aspects of the method, such as numerical stability and speed. We show the reconstruction formula and apply it to several practical examples, including new variational formulation of derivative sampling, landmark warping, and tomographic reconstruction. |
@article{blu2002g, author = "Kybic, J. and Blu, T. and Unser, M.", title = "Generalized Sampling: {A} Variational Approach---{P}art {II}: {A}pplications", journal = "{IEEE} Transactions on Signal Processing", month = "August", year = "2002", volume = "50", number = "8", pages = "1977--1985", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002g" } |
Kybic, J., Blu, T. & Unser, M.,"Generalized Sampling: A Variational Approach---Part I: Theory", IEEE Transactions on Signal Processing, Vol. 50 (8), pp. 1965-1976, August 2002. |
We consider the problem of reconstructing a multidimensional vector function f: ℝ^m → ℝ^n from a finite set of linear measurements. These can be irregularly sampled responses of several linear filters. Traditional approaches reconstruct in an a priori given space, e.g., the space of bandlimited functions. Instead, we have chosen to specify a reconstruction that is optimal in the sense of a quadratic plausibility criterion J. First, we present the solution of the generalized interpolation problem. Later, we also consider the approximation problem, and we show that both lead to the same class of solutions. Imposing generally desirable properties on the reconstruction largely limits the choice of the criterion J. Linearity leads to a quadratic criterion based on bilinear forms. Specifically, we show that the requirements of translation, rotation, and scale-invariance restrict the form of the criterion to essentially a one-parameter family. We show that the solution can be obtained as a linear combination of generating functions. We provide analytical techniques to find these functions and the solution itself. Practical implementation issues and examples of applications are treated in a companion paper. |
@article{blu2002h, author = "Kybic, J. and Blu, T. and Unser, M.", title = "Generalized Sampling: {A} Variational Approach---{P}art {I}: {T}heory", journal = "{IEEE} Transactions on Signal Processing", month = "August", year = "2002", volume = "50", number = "8", pages = "1965--1976", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002h" } |
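As a side note, the general recipe of this two-part work — a reconstruction written as a linear combination of generating functions, with coefficients obtained from a linear system enforcing consistency with the samples — can be sketched in a few lines of Python. The Gaussian radial basis function and the helper names below are purely illustrative stand-ins (the papers derive the admissible generating functions from the criterion J):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit(samples, phi):
    """Reconstruct f(x) = sum_k c_k * phi(x - x_k) from irregular samples
    {(x_k, f(x_k))}: consistency gives one linear equation per sample."""
    xs = [p[0] for p in samples]
    ys = [p[1] for p in samples]
    A = [[phi(xi - xj) for xj in xs] for xi in xs]
    c = solve(A, ys)
    return lambda x: sum(ck * phi(x - xk) for ck, xk in zip(c, xs))

phi = lambda t: math.exp(-t * t)   # illustrative generating function
f = fit([(0.0, 1.0), (1.3, 0.2), (2.1, -0.5)], phi)
```

Because the linear system enforces exact consistency, the reconstruction reproduces the sample values at the sample locations, whatever the (non-uniform) sampling grid.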
Lakshman, H., Schwartz, H., Blu, T. & Wiegand, T.,"Generalized interpolation for motion compensated prediction", Proceedings of the 2011 IEEE International Conference on Image Processing (ICIP'11), Brussels, Belgium, pp. 1213-1216, September 11--14, 2011. |
Fractional sample interpolation with FIR filters is commonly used for motion compensated prediction (MCP). The FIR filtering can be viewed as a signal decomposition using restricted basis functions. The concept of generalized interpolation provides a greater degree of freedom for selecting basis functions. We implemented generalized interpolation using a combination of short IIR and FIR filters. An efficient multiplication-free design of the algorithm that is suited for hardware implementation is shown. Compared to a 6-tap FIR interpolation filter, average rate savings of 3.1% are observed. A detailed analysis of the complexity and memory bandwidth cycles compared to existing interpolation techniques for MCP is provided. |
@inproceedings{blu2011c, author = "Lakshman, H. and Schwartz, H. and Blu, T. and Wiegand, T.", title = "Generalized interpolation for motion compensated prediction", booktitle = "Proceedings of the 2011 {IEEE} International Conference on Image Processing ({ICIP'11})", month = "September 11--14,", year = "2011", pages = "1213--1216", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011c" } |
Langoju, R.V.V.L., Blu, T. & Unser, M.,"Resolution Enhancement in Optical Coherence Tomography", 2004 Annual Meeting of the Swiss Society of Biomedical Engineering (SSBE'04), Zürich, Switzerland, September 2--3, 2004. poster 9. |
OCT performs high-resolution, cross-sectional tomographic imaging of the internal structure in materials and biological systems by measuring the coherent part of the reflected light. The physical depth resolution in OCT depends on the coherence length of the light source and lies around 10-15μm. The new parametric super-resolution method described in this paper does not depend on the coherence length of the light source, but rather on the noise level of the measurement. The key idea is to describe the OCT measure of a multi layer sample by a parametric model containing the location of the layer and its amplitude. We then find these parameters by minimizing the distance between the model and measure. |
@inproceedings{blu2004g, author = "Langoju, R.V.V.L. and Blu, T. and Unser, M.", title = "Resolution Enhancement in Optical Coherence Tomography", booktitle = "2004 Annual Meeting of the Swiss Society of Biomedical Engineering ({SSBE'04})", month = "September 2--3,", year = "2004", note = "poster 9", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004g" } |
Li, J., Chen, S., Ratner, D., Blu, T., Pianetta, P. & Liu, Y.,"Nanoscale chemical imaging with structured X-ray illumination", Proceedings of the National Academy of Sciences, Vol. 120 (49), pp. e2314542120, 2023. |
High-resolution imaging with compositional and chemical sensitivity is crucial for a wide range of scientific and engineering disciplines. Although synchrotron X-ray imaging through spectromicroscopy has been tremendously successful and broadly applied, it encounters challenges in achieving enhanced detection sensitivity, satisfactory spatial resolution, and high experimental throughput simultaneously. In this work, based on structured illumination, we develop a single-pixel X-ray imaging approach coupled with a generative image reconstruction model for mapping the compositional heterogeneity with nanoscale resolvability. This method integrates a full-field transmission X-ray microscope with an X-ray fluorescence detector and eliminates the need for nanoscale X-ray focusing and raster scanning. We experimentally demonstrate the effectiveness of our approach by imaging a battery sample composed of mixed cathode materials and successfully retrieving the compositional variations of the imaged cathode particles. Bridging the gap between structural and chemical characterizations using X-rays, this technique opens up vast opportunities in the fields of biology, environmental, and materials science, especially for radiation-sensitive samples. |
@article{blu2023f, author = "Li, Jizhou and Chen, Si and Ratner, Daniel and Blu, Thierry and Pianetta, Piero and Liu, Yijin", title = "Nanoscale chemical imaging with structured {X}-ray illumination", journal = "Proceedings of the National Academy of Sciences", year = "2023", volume = "120", number = "49", pages = "e2314542120", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2023f", doi = "10.1073/pnas.2314542120" } |
Li, J., Gilliam, C. & Blu, T.,"A multi-frame optical flow spot tracker", Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP'15), Québec City, Canada, pp. 3670-3674, September 27--30, 2015. |
Accurate and robust spot tracking is a necessary tool for quantitative motion analysis in fluorescence microscopy images. Few trackers, however, consider the underlying dynamics present in biological systems. For example, the collective motion of cells often exhibits both fast dynamics, i.e. Brownian motion, and slow dynamics, i.e. time-invariant stationary motion. In this paper, we propose a novel multi-frame tracker that exploits this stationary motion. More precisely, we first estimate the stationary motion and then use it to guide the spot tracker. We obtain the stationary motion by adapting a recent optical flow algorithm that relates one image to another locally using an all-pass filter. We perform this operation over all the image frames simultaneously and estimate a single, stationary optical flow. We compare the proposed tracker with two existing techniques and show that our approach is more robust to high noise and varying structure. In addition, we also show initial experiments on real microscopy images. |
@inproceedings{blu2015f, author = "Li, J. and Gilliam, C. and Blu, T.", title = "A multi-frame optical flow spot tracker", booktitle = "Proceedings of the 2015 {IEEE} International Conference on Image Processing ({ICIP'15})", month = "September 27--30,", year = "2015", pages = "3670--3674", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2015f" } |
Li, J., Luisier, F. & Blu, T.,"Deconvolution of Poissonian Images with the PURE-LET Approach", Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP'16), Phoenix, AZ, USA, pp. 2708-2712, September 25--28, 2016. Best Paper Runner-Up Award. |
We propose a non-iterative image deconvolution algorithm for data corrupted by Poisson noise. Many applications involve such a problem, ranging from astronomical to biological imaging. We parametrize the deconvolution process as a linear combination of elementary functions, termed linear expansion of thresholds (LET). This parametrization is then optimized by minimizing a robust estimate of the mean squared error, the Poisson unbiased risk estimate (PURE). Each elementary function consists of a Wiener filtering followed by a pointwise thresholding of undecimated Haar wavelet coefficients. In contrast to existing approaches, the proposed algorithm merely amounts to solving a linear system of equations, which has a fast and exact solution. Simulation experiments over various noise levels indicate that the proposed method outperforms current state-of-the-art techniques, in terms of both restoration quality and computational time. |
@inproceedings{blu2016d, author = "Li, J. and Luisier, F. and Blu, T.", title = "Deconvolution of {P}oissonian Images with the {PURE-LET} Approach", booktitle = "Proceedings of the 2016 {IEEE} International Conference on Image Processing ({ICIP'16})", month = "September 25--28,", year = "2016", pages = "2708--2712", note = "\textbf{Best Paper Runner-Up Award}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2016d" } |
Li, J., Luisier, F. & Blu, T.,"PURE-LET Deconvolution of 3D Fluorescence Microscopy Images", Proceedings of the Fourteenth IEEE International Symposium on Biomedical Imaging (ISBI'17), Melbourne, Australia, pp. 723-727, April 18--21, 2017. Best student paper award (2nd place). |
Three-dimensional (3D) deconvolution microscopy is very effective in improving the quality of fluorescence microscopy images. In this work, we present an efficient approach for the deconvolution of 3D fluorescence microscopy images based on the recently developed PURE-LET algorithm. By combining multiple Wiener filtering and wavelet denoising, we parametrize the deconvolution process as a linear combination of elementary functions. Then the Poisson unbiased risk estimate (PURE) is used to obtain the optimal coefficients. The proposed approach is non-iterative and outperforms existing techniques (usually, variants of the Richardson-Lucy algorithm) both in terms of computational efficiency and quality. We illustrate its effectiveness on both synthetic and real data. |
@inproceedings{blu2017b, author = "Li, J. and Luisier, F. and Blu, T.", title = "{PURE}-{LET} Deconvolution of {3D} Fluorescence Microscopy Images", booktitle = "Proceedings of the Fourteenth {IEEE} International Symposium on Biomedical Imaging ({ISBI'17})", month = "April 18--21,", year = "2017", pages = "723--727", note = "\textbf{Best student paper award (2nd place)}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017b" } |
Li, J., Luisier, F. & Blu, T.,"PURE-LET Image Deconvolution", IEEE Transactions on Image Processing, Vol. 27 (1), pp. 92-105, January 2018. |
We propose a non-iterative image deconvolution algorithm for data corrupted by Poisson or mixed Poisson-Gaussian noise. Many applications involve such a problem, ranging from astronomical to biological imaging. We parametrize the deconvolution process as a linear combination of elementary functions, termed linear expansion of thresholds (LET). This parametrization is then optimized by minimizing a robust estimate of the true mean squared error, the Poisson unbiased risk estimate (PURE). Each elementary function consists of a Wiener filtering followed by a pointwise thresholding of undecimated Haar wavelet coefficients. In contrast to existing approaches, the proposed algorithm merely amounts to solving a linear system of equations, which has a fast and exact solution. Simulation experiments over different types of convolution kernels and various noise levels indicate that the proposed method outperforms state-of-the-art techniques, in terms of both restoration quality and computational complexity. Finally, we present some results on real confocal fluorescence microscopy images, and demonstrate the potential applicability of the proposed method for improving the quality of these images. |
@article{blu2018a, author = "Li, J. and Luisier, F. and Blu, T.", title = "{PURE-LET} Image Deconvolution", journal = "IEEE Transactions on Image Processing", month = "January", year = "2018", volume = "27", number = "1", pages = "92--105", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018a" } |
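The LET mechanics shared by the PURE-LET entries above — an estimate formed as a linear combination of elementary denoisers, with coefficients obtained from a small linear system — can be sketched as follows. For illustration only, the coefficients here minimize the oracle MSE (clean signal assumed known); the papers' contribution is precisely to replace that oracle by PURE, an unbiased risk estimate computable from the noisy data alone. The elementary functions (identity and a 3-tap average) and all names are simplistic stand-ins for the papers' Wiener-plus-wavelet-threshold functions:

```python
import math

# LET principle: estimate = a1*F1(y) + a2*F2(y); minimizing a quadratic
# risk in (a1, a2) reduces to the linear system  M a = c  with
#   M[k][l] = <Fk(y), Fl(y)>   and   c[k] = <Fk(y), x>.
# Oracle illustration: x (clean) is used for c; PURE removes this need.

def moving_avg(y):
    n = len(y)
    return [(y[max(i - 1, 0)] + y[i] + y[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def let_coeffs(y, x, fs):      # fs: two elementary denoisers
    F = [f(y) for f in fs]
    M = [[dot(Fk, Fl) for Fl in F] for Fk in F]
    c = [dot(Fk, x) for Fk in F]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(c[0] * M[1][1] - c[1] * M[0][1]) / det,
            (M[0][0] * c[1] - M[1][0] * c[0]) / det]

clean = [math.sin(0.3 * i) for i in range(50)]
noise = [0.4 * ((7 * i % 11) / 11.0 - 0.5) for i in range(50)]  # deterministic stand-in
noisy = [v + e for v, e in zip(clean, noise)]
a = let_coeffs(noisy, clean, [lambda y: y, moving_avg])
denoised = [a[0] * u + a[1] * v for u, v in zip(noisy, moving_avg(noisy))]
```

Since the noisy signal itself lies in the span of the elementary functions (coefficients 1 and 0), the optimized combination can never do worse than no denoising at all.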
Li, J., Xue, F. & Blu, T.,"Fast and Accurate 3D PSF Computation for Fluorescence Microscopy", Journal of the Optical Society of America A, Vol. 34 (6), pp. 1029-1034, June 2017. |
The point-spread function (PSF) plays a fundamental role in fluorescence microscopy. A realistic and accurately calculated PSF model can significantly improve the performance in 3D deconvolution microscopy and also the localization accuracy in single-molecule microscopy. In this work, we propose a fast and accurate approximation of the Gibson-Lanni model, which has been shown to represent the PSF suitably under a variety of imaging conditions. We express Kirchhoff's integral in this model as a linear combination of rescaled Bessel functions, thus providing an integral-free way to perform the calculation. The explicit approximation error in terms of parameters is given numerically. Experiments demonstrate that the proposed approach results in a significantly smaller computational time compared with current state-of-the-art techniques to achieve the same accuracy. This approach can also be extended to other microscopy PSF models. |
@article{blu2017e, author = "Li, J. and Xue, F. and Blu, T.", title = "Fast and Accurate {3D} {PSF} Computation for Fluorescence Microscopy", journal = "Journal of the Optical Society of America A", month = "June", year = "2017", volume = "34", number = "6", pages = "1029--1034", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017e" } |
Li, J., Xue, F. & Blu, T.,"Gaussian Blur Estimation For Photon-Limited Images", Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP'17), Beijing, China, pp. 495-497, September 17--20, 2017. |
Blur estimation is critical to blind image deconvolution. In this work, taking a Gaussian kernel as an example, we propose an approach to estimate the blur size for photon-limited images. This estimation is based on the minimization of a novel criterion, blur-PURE (Poisson unbiased risk estimate), which makes use of the Poisson noise statistics of the measurement. Experimental results demonstrate the effectiveness of the proposed method in various scenarios. This approach can then be plugged into our recent PURE-LET deconvolution algorithm, and an example on real fluorescence microscopy data is presented. |
@inproceedings{blu2017f, author = "Li, J. and Xue, F. and Blu, T.", title = "Gaussian Blur Estimation For Photon-Limited Images", booktitle = "Proceedings of the 2017 {IEEE} International Conference on Image Processing ({ICIP'17})", month = "September 17--20,", year = "2017", pages = "495--497", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017f" } |
Li, J., Xue, F. & Blu, T.,"Accurate 3D PSF Estimation from a Wide-Field Microscopy Image", Proceedings of the Fifteenth IEEE International Symposium on Biomedical Imaging (ISBI'18), Washington, DC, USA, pp. 501-504, April 4--7, 2018. |
The 3D point-spread function (PSF) plays a fundamental role in wide-field fluorescence microscopy. An accurate PSF estimation can significantly improve the performance of deconvolution algorithms. In this work, we propose a calibration-free method to obtain the PSF directly from the image obtained. Specifically, we first parametrize the spherically aberrated PSF as a linear combination of a few basis functions. The coefficients of these basis functions are then obtained iteratively by minimizing a novel criterion, which is derived from the mixed Poisson-Gaussian noise statistics. Experiments demonstrate that the proposed approach results in highly accurate PSF estimations. |
@inproceedings{blu2018f, author = "Li, J. and Xue, F. and Blu, T.", title = "Accurate {3D} {PSF} Estimation from a Wide-Field Microscopy Image", booktitle = "Proceedings of the Fifteenth {IEEE} International Symposium on Biomedical Imaging ({ISBI'18})", month = "April 4--7,", year = "2018", pages = "501--504", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018f" } |
Li, J., Xue, F., Qu, F., Ho, Y.-P. & Blu, T.,"On-the-fly estimation of a microscopy point spread function", Optics Express, Vol. 26 (20), pp. 26120-26133, October 2018. |
A proper estimation of a realistic point-spread function (PSF) in optical microscopy can significantly improve the deconvolution performance and assist the microscope calibration process. In this work, by exemplifying 3D wide-field fluorescence microscopy, we propose an approach for estimating the spherically aberrated PSF of a microscope, directly from the observed samples. The PSF, expressed as a linear combination of 4 basis functions, is obtained directly from the acquired image by minimizing a novel criterion, which is derived from the noise statistics in the microscope. We demonstrate the effectiveness of the PSF approximation model and of our estimation method using both simulations and real experiments that were carried out on quantum dots. The principle of our PSF estimation approach is sufficiently flexible to be generalized to non-spherical aberrations and other microscope modalities. |
@article{blu2018h, author = "Li, J. and Xue, F. and Qu, F. and Ho, Y.-P. and Blu, T.", title = "On-the-fly estimation of a microscopy point spread function", journal = "Optics Express", month = "October", year = "2018", volume = "26", number = "20", pages = "26120--26133", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018h" } |
Li, Y., Guo, R., Blu, T. & Zhao, H.,"Generic FRI-Based DOA Estimation: A Model-Fitting Method", IEEE Transactions on Signal Processing, Vol. 69, pp. 4102-4115, 2021. |
Direction of arrival (DOA) estimation is a classical topic in source localization. Notably, a reliable grid-free sparse representation algorithm, called the FRI (finite rate of innovation) algorithm, was proposed to recover a finite number of Dirac pulses from a stream of 1D temporal samples, which also offers an efficient solution to the DOA estimation problem. Typically, the FRI method assumes uniform sampling with a single snapshot. However, the actual situation is richer and more diverse. Motivated by the requirements of practical applications (e.g. array deployment, algorithm run-speed, etc.), a generic FRI method is proposed to tackle the more general case in practice, i.e. non-uniform sampling with multiple snapshots. Instead of annihilating the measured sensor data, a model-fitting method is used to robustly retrieve the sparse representation (i.e. DOAs and associated amplitudes) of the 1D samples. We demonstrate that our algorithm can handle challenging DOA tasks with high resolution, which we validate in various conditions, such as multiple coherent sources, insufficient signal snapshots, low signal-to-noise ratio (SNR), etc. Moreover, we show that the computational complexity of our algorithm mainly scales with the number of sources and varies very slowly with the number of samples and snapshots, which meets the needs of a wider range of practical applications. |
@article{blu2021a, author = "Li, Y. and Guo, R. and Blu, T. and Zhao, H.", title = "Generic {FRI}-Based {DOA} Estimation: A Model-Fitting Method", journal = "IEEE Transactions on Signal Processing", year = "2021", volume = "69", pages = "4102--4115", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2021a" } |
Li, Y., Guo, R., Blu, T. & Zhao, H.,"Robust sparse reconstruction of attenuated acoustic field with unknown range of source", The Journal of the Acoustical Society of America, Vol. 152 (6), pp. 3523-3534, 2022. |
In this paper, we present a gridless algorithm to recover an attenuated acoustic field without knowing the range information of the source. This algorithm provides the joint estimation of horizontal wavenumbers, mode amplitudes, and acoustic attenuation. The key idea is to approximate the acoustic field in range as a finite sum of damped sinusoids, for which the sinusoidal parameters convey the ocean information of interest (e.g., wavenumber, attenuation, etc.). Using an efficient finite rate of innovation algorithm, an accurate recovery of the attenuated acoustic field can be achieved, even if the measurement noise is correlated and the range of the source is unknown. Moreover, the proposed method is able to perform joint recovery of multiple sensor data, which leads to a more robust field reconstruction. The data used here are acquired from a vertical line array at different depths measuring a moving source at several ranges. We demonstrate the performance of the proposed algorithm both in synthetic simulations and real shallow water evaluation cell experiment 1996 data. |
@article{blu2022g, author = "Li, Yongfei and Guo, Ruiming and Blu, Thierry and Zhao, Hangfang", title = "Robust sparse reconstruction of attenuated acoustic field with unknown range of source", journal = "The Journal of the Acoustical Society of America", year = "2022", volume = "152", number = "6", pages = "3523--3534", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2022g", doi = "10.1121/10.0016497" } |
Lie, J.P., Blu, T. & See, C.M.S.,"Single Antenna Power Measurements Based Direction Finding", IEEE Transactions on Signal Processing, Vol. 58 (11), pp. 5682-5692, November 2010. |
In this paper, the problem of estimating direction-of-arrival (DOA) of multiple uncorrelated sources from single antenna power measurements is addressed. Utilizing the fact that the antenna pattern is bandlimited and can be modeled as a finite sum of complex exponentials, we first show that the problem can be transformed into a frequency estimation problem. Then, we explain how the annihilating filter method can be used to solve for the DOA in the noiseless case. In the presence of noise, we propose to use Cadzow denoising that is formulated as an iterative algorithm derived from exploiting the matrix rank and linear structure properties. Furthermore, we have also derived the Cramér-Rao Bound (CRB) and reviewed several alternative approaches that can be used as a comparison to the proposed approach. From the simulation and experimental results, we demonstrate that the proposed approach significantly outperforms other approaches. It is also evident from the Monte Carlo analysis that the proposed approach converges to the CRB. |
@article{blu2010e, author = "Lie, J.P. and Blu, T. and See, C.M.S.", title = "Single Antenna Power Measurements Based Direction Finding", journal = "{IEEE} Transactions on Signal Processing", month = "November", year = "2010", volume = "58", number = "11", pages = "5682--5692", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010e" } |
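The annihilating filter method described in the abstract can be illustrated on a noiseless toy case with two complex exponentials; the function name and the choice of four samples are hypothetical, and the paper's Cadzow denoising step is what makes the approach robust once noise enters:

```python
import cmath

def annihilate2(x):
    """Noiseless annihilating-filter method for two complex exponentials
    x[n] = a1*u1**n + a2*u2**n, given 4 samples x[0..3].
    The filter h(z) = z^2 + h1*z + h2 with roots u1, u2 satisfies
    x[n+2] + h1*x[n+1] + h2*x[n] = 0 for all n."""
    # Solve the two annihilation equations (n = 0 and n = 1) for h1, h2
    det = x[1] * x[1] - x[0] * x[2]
    h1 = (x[0] * x[3] - x[1] * x[2]) / det
    h2 = (x[2] * x[2] - x[1] * x[3]) / det
    # The roots of h give back the exponentials (quadratic formula)
    d = cmath.sqrt(h1 * h1 - 4 * h2)
    return (-h1 + d) / 2, (-h1 - d) / 2

# Two "directions" encoded as frequencies 0.5 and 1.2 rad
u1, u2 = cmath.exp(0.5j), cmath.exp(1.2j)
x = [1.0 * u1 ** n + 0.7 * u2 ** n for n in range(4)]
r1, r2 = annihilate2(x)
```

`cmath.phase(r1)` and `cmath.phase(r2)` then return the two frequencies (in some order), which the paper maps back to directions of arrival through the bandlimited antenna-pattern model.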
Lie, J.P., Blu, T. & See, C.M.S.,"Azimuth-Elevation Direction Finding Using Power Measurements From Single Antenna", Proceedings of the Thirty-sixth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'11), Prague, Czech Republic, pp. 2608-2611, May 22--27, 2011. |
This paper considers the problem of extending the single antenna power measurements based direction finding to the two-dimensional (2D) case, and proposes a method to estimate the azimuth-elevation direction-of-arrival (DOA) from a matrix of received power. Exploiting the fact that the azimuth-elevation antenna pattern is 2D bandlimited, the problem can be transformed into a 2D spectral analysis problem. The proposed method first decomposes the 2D spectral analysis problem into two one-dimensional problems and then solves them independently. As this does not ensure that the estimated azimuths and elevations are correctly paired, the solution is subject to a permutation ambiguity, which can be resolved by finding the permutation that best matches the 2D spectral representation. Simulation results demonstrating the high-resolution capability of the proposed method in the two-source case and its effectiveness in the five-source case are also presented. |
@inproceedings{blu2011d, author = "Lie, J.P. and Blu, T. and See, C.M.S.", title = "Azimuth-Elevation Direction Finding Using Power Measurements From Single Antenna", booktitle = "Proceedings of the Thirty-sixth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'11})", month = "May 22--27,", year = "2011", pages = "2608--2611", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011d" } |
Lie, J.P., Blu, T. & See, C.M.S.,"Single Antenna Power Measurements Based Direction Finding With Incomplete Spatial Coverage", Proceedings of the Thirty-seventh IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'12), Kyoto, Japan, pp. 2641-2644, March 25--30, 2012. |
This paper considers the problem of relaxing the spatial coverage requirement on the mechanical rotation in the direction finding (DF) approach based on the received power measurements from a single antenna pointing to different directions. Under incomplete spatial coverage, we show that the least-squares (LS) solution used to transform the problem into its spectral form is no longer accurate, due to its ill-conditioned system matrix. To overcome this, we propose an approach based on remodeling the spatial power measurements such that their spatial periodicity can be adjusted according to the spatial coverage. The approach also incorporates Tikhonov regularization in calculating the LS solution based on the new system matrix. Upon arriving at the new spectral form, the Cadzow annihilating filter method can then be used to estimate the direction-of-arrival. Both simulation and experimental results are presented to show the efficacy of the proposed method. |
@inproceedings{blu2012c, author = "Lie, J.P. and Blu, T. and See, C.M.S.", title = "Single Antenna Power Measurements Based Direction Finding With Incomplete Spatial Coverage", booktitle = "Proceedings of the Thirty-seventh {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'12})", month = "March 25--30,", year = "2012", pages = "2641--2644", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012c" } |
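The Tikhonov-regularized least-squares step that the abstract above relies on can be sketched generically. This is an illustrative NumPy snippet, not the authors' code: the system matrix, noise level, and regularization weight are invented for demonstration, and only the generic damped normal equations are shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ill-conditioned system: the third column is nearly a copy of the first.
A = rng.standard_normal((50, 3))
A[:, 2] = A[:, 0] + 1e-8 * rng.standard_normal(50)
b = A @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(50)

# Plain LS normal equations are unstable here; Tikhonov damps them:
#   x = argmin ||A x - b||^2 + lam ||x||^2   =>   (A^T A + lam I) x = A^T b
lam = 1e-3
x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
```

The regularization trades a small bias for a well-conditioned system; as `lam` decreases toward zero the solution approaches the (unstable) plain LS solution.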
Liebling, M., Blu, T., Cuche, É., Marquet, P., Depeursinge, C.D. & Unser, M.,"Local Amplitude and Phase Retrieval Method for Digital Holography Applied to Microscopy", Proceedings of the SPIE European Conference on Biomedical Optics: Novel Optical Instrumentation for Biomedical Applications (ECBO'03), Münich, Germany, Vol. 5143, pp. 210-214, June 22--25, 2003. |
We present a numerical two-step reconstruction procedure for digital off-axis Fresnel holograms. First, we retrieve the amplitude and phase of the object wave in the CCD plane. For each point we solve a weighted linear set of equations in the least-squares sense. The algorithm has O(N) complexity and gives great flexibility. Second, we numerically propagate the obtained wave to achieve proper focus. We apply the method to microscopy and demonstrate its suitability for the real time imaging of biological samples. |
@inproceedings{blu2003j, author = "Liebling, M. and Blu, T. and Cuche, {\'{E}}. and Marquet, P. and Depeursinge, C.D. and Unser, M.", title = "Local Amplitude and Phase Retrieval Method for Digital Holography Applied to Microscopy", booktitle = "Proceedings of the {SPIE} European Conference on Biomedical Optics: {N}ovel Optical Instrumentation for Biomedical Applications ({ECBO'03})", month = "June 22--25,", year = "2003", volume = "5143", pages = "210--214", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003j" } |
Liebling, M., Blu, T., Cuche, É., Marquet, P., Depeursinge, C. & Unser, M.,"A Novel Non-Diffractive Reconstruction Method for Digital Holographic Microscopy", Proceedings of the First IEEE International Symposium on Biomedical Imaging (ISBI'02), Washington, USA, Vol. {II}, pp. 625-628, July 7--10, 2002. |
We present a new method for reconstructing digitally recorded off-axis Fresnel holograms. Currently-used reconstruction methods are based on the simulation and propagation of a reference wave that is diffracted by the hologram. This procedure introduces a twin-image and a zero-order term which are inherent to the diffraction phenomenon. These terms perturb the reconstruction and limit the field of view. Our new approach splits the reconstruction process into two parts. First, we recover the amplitude and the phase in the camera plane from the measured hologram intensity. Our algorithm is based on the hypothesis of a slowly varying object wave which interferes with a more rapidly varying reference wave. In a second step, we propagate this complex wave to refocus it using the Fresnel transform. We therefore avoid the presence of the twin-image and zero-order interference terms. This new approach is flexible and can be adapted easily to complicated experimental setups. We demonstrate its feasibility in the case of digital holographic microscopy and present results for the imaging of living neurons. |
@inproceedings{blu2002i, author = "Liebling, M. and Blu, T. and Cuche, {\'{E}}. and Marquet, P. and Depeursinge, C. and Unser, M.", title = "A Novel Non-Diffractive Reconstruction Method for Digital Holographic Microscopy", booktitle = "Proceedings of the First {IEEE} International Symposium on Biomedical Imaging ({ISBI'02})", month = "July 7--10,", year = "2002", volume = "{II}", pages = "625--628", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002i" } |
Liebling, M., Blu, T. & Unser, M.,"Fresnelets---A New Wavelet Basis for Digital Holography", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing IX, San Diego, USA, Vol. 4478, pp. 347-352, July 29--August 1, 2001. |
We present a new class of wavelet bases—Fresnelets—which is obtained by applying the Fresnel transform operator to a wavelet basis of L2. The thus constructed wavelet family exhibits properties that are particularly useful for analyzing and processing optically generated holograms recorded on CCD-arrays. We first investigate the multiresolution properties (translation, dilation) of the Fresnel transform that are needed to construct our new wavelet. We derive a Heisenberg-like uncertainty relation that links the localization of the Fresnelets with that of the original wavelet basis. We give the explicit expression of orthogonal and semi-orthogonal Fresnelet bases corresponding to polynomial spline wavelets. We conclude that the Fresnel B-splines are particularly well suited for processing holograms because they tend to be well localized in both domains. |
@inproceedings{blu2001h, author = "Liebling, M. and Blu, T. and Unser, M.", title = "Fresnelets---{A} New Wavelet Basis for Digital Holography", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {IX}", month = "July 29--August 1,", year = "2001", volume = "4478", pages = "347--352", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001h" } |
Liebling, M., Blu, T. & Unser, M.,"Fresnelets: New Multiresolution Wavelet Bases for Digital Holography", IEEE Transactions on Image Processing, Vol. 12 (1), pp. 29-43, January 2003. |
We propose a construction of new wavelet-like bases that are well suited for the reconstruction and processing of optically generated Fresnel holograms recorded on CCD-arrays. The starting point is a wavelet basis of L2 to which we apply a unitary Fresnel transform. The transformed basis functions are shift-invariant on a level-by-level basis but their multiresolution properties are governed by the special form that the dilation operator takes in the Fresnel domain. We derive a Heisenberg-like uncertainty relation that relates the localization of Fresnelets with that of their associated wavelet basis. According to this criterion, the optimal functions for digital hologram processing turn out to be Gabor functions, bringing together two separate aspects of the holography inventor's work. We give the explicit expression of orthogonal and semi-orthogonal Fresnelet bases corresponding to polynomial spline wavelets. This special choice of Fresnelets is motivated by their near-optimal localization properties and their approximation characteristics. We then present an efficient multiresolution Fresnel transform algorithm, the Fresnelet transform. This algorithm allows for the reconstruction (backpropagation) of complex scalar waves at several user-defined, wavelength-independent resolutions. Furthermore, when reconstructing numerical holograms, the subband decomposition of the Fresnelet transform naturally separates the image to reconstruct from the unwanted zero-order and twin image terms. This greatly facilitates their suppression. We show results of experiments carried out on both synthetic (simulated) data sets as well as on digitally acquired holograms. |
@article{blu2003k, author = "Liebling, M. and Blu, T. and Unser, M.", title = "Fresnelets: {N}ew Multiresolution Wavelet Bases for Digital Holography", journal = "{IEEE} Transactions on Image Processing", month = "January", year = "2003", volume = "12", number = "1", pages = "29--43", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003k" } |
Liebling, M., Blu, T. & Unser, M.,"Non-Linear Fresnelet Approximation for Interference Term Suppression in Digital Holography", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing X, San Diego, USA, Vol. 5207, pp. 553-559, August 3--8, 2003. Part II. |
We present a zero-order and twin-image elimination algorithm for digital Fresnel holograms that were acquired in an off-axis geometry. These interference terms arise when the digital hologram is reconstructed and corrupt the result. Our algorithm is based on the Fresnelet transform, a wavelet-like transform that uses basis functions tailor-made for digital holography. We show that, in the Fresnelet domain, the coefficients associated with the interference terms are separated both spatially and with respect to the frequency bands. We propose a method to suppress them by selectively thresholding the Fresnelet coefficients. Unlike other methods that operate in the Fourier domain and affect the whole spatial domain, our method operates locally in both space and frequency, allowing for more targeted processing. |
@inproceedings{blu2003l, author = "Liebling, M. and Blu, T. and Unser, M.", title = "Non-Linear {F}resnelet Approximation for Interference Term Suppression in Digital Holography", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {X}", month = "August 3--8,", year = "2003", volume = "5207", pages = "553--559", note = "Part {II}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003l" } |
Liebling, M., Blu, T. & Unser, M.,"Complex-Wave Retrieval from a Single Off-Axis Hologram", Journal of the Optical Society of America A, Vol. 21 (3), pp. 367-377, March 2004. |
We present a new digital two-step reconstruction method for off-axis holograms recorded on a CCD camera. First, we retrieve the complex object wave in the acquisition plane from the hologram's samples. In a second step, if required, we propagate the wave front by using a digital Fresnel transform to achieve proper focus. This algorithm is sufficiently general to be applied to sophisticated optical setups that include a microscope objective. We characterize and evaluate the algorithm by using simulated data sets and demonstrate its applicability to real-world experimental conditions by reconstructing optically acquired holograms. |
@article{blu2004h, author = "Liebling, M. and Blu, T. and Unser, M.", title = "Complex-Wave Retrieval from a Single Off-Axis Hologram", journal = "Journal of the Optical Society of {A}merica {A}", month = "March", year = "2004", volume = "21", number = "3", pages = "367--377", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004h" } |
Luisier, F. & Blu, T.,"SURE-LET Interscale-Intercolor Wavelet Thresholding for Color Image Denoising", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet XII, San Diego, USA, Vol. 6701, pp. 67011H-1-67011H-10, August 26--29, 2007. |
We propose a new orthonormal wavelet thresholding algorithm for denoising color images corrupted by additive white Gaussian noise with a known intercolor covariance matrix. The proposed wavelet denoiser consists of a linear expansion of thresholding (LET) functions that integrates both interscale and intercolor dependencies. The linear parameters of the combination are found by minimizing Stein's unbiased risk estimate (SURE), which is a robust unbiased estimate of the mean squared error (MSE) between the (unknown) noise-free data and the denoised data. Thanks to the quadratic form of this MSE estimate, optimizing the parameters amounts to solving a linear system of equations. Experiments over a wide range of noise levels and a representative set of standard color images show that our algorithm yields slightly better peak signal-to-noise ratios than most state-of-the-art wavelet thresholding procedures, even when the latter are executed in an undecimated wavelet representation. |
@inproceedings{blu2007g, author = "Luisier, F. and Blu, T.", title = "{SURE-LET} Interscale-Intercolor Wavelet Thresholding for Color Image Denoising", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet {XII}", month = "August 26--29,", year = "2007", volume = "6701", pages = "67011H-1--67011H-10", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007g" } |
Luisier, F. & Blu, T.,"Image Denoising by Pointwise Thresholding of the Undecimated Wavelet Coefficients: A Global SURE Optimum", Proceedings of the Thirty-Second IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'07), Honolulu, USA, pp. {I}-593-{I}-596, April 15--20, 2007. |
We devise a new undecimated wavelet thresholding algorithm for denoising images corrupted by additive white Gaussian noise. The first key point of our approach is the use of a linearly parameterized pointwise thresholding function. The second key point is that the parameters are optimized globally by minimizing Stein's unbiased MSE estimate (SURE) directly in the image domain, rather than separately in the wavelet subbands. Remarkably, our method gives results similar to the best state-of-the-art algorithms despite using only a simple pointwise thresholding function; we demonstrate this in simulations over a wide range of noise levels for a representative set of standard grayscale images. |
@inproceedings{blu2007h, author = "Luisier, F. and Blu, T.", title = "Image Denoising by Pointwise Thresholding of the Undecimated Wavelet Coefficients: {A} Global {SURE} Optimum", booktitle = "Proceedings of the Thirty-Second {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'07})", month = "April 15--20,", year = "2007", pages = "{I}-593--{I}-596", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007h" } |
Luisier, F. & Blu, T.,"SURE-LET Multichannel Image Denoising: Interscale Orthonormal Wavelet Thresholding", IEEE Transactions on Image Processing, Vol. 17 (4), pp. 482-492, April 2008. |
We propose a vector/matrix extension of our denoising algorithm initially developed for grayscale images, in order to efficiently process multichannel (e.g., color) images. This work follows our recently published SURE-LET approach where the denoising algorithm is parameterized as a linear expansion of thresholds (LET) and optimized using Stein's unbiased risk estimate (SURE). The proposed wavelet thresholding function is pointwise and depends on the coefficients of same location in the other channels, as well as on their parents in the coarser wavelet subband. A nonredundant, orthonormal, wavelet transform is first applied to the noisy data, followed by the (subband-dependent) vector-valued thresholding of individual multichannel wavelet coefficients which are finally brought back to the image domain by inverse wavelet transform. Extensive comparisons with the state-of-the-art multiresolution image denoising algorithms indicate that despite being nonredundant, our algorithm matches the quality of the best redundant approaches, while maintaining a high computational efficiency and a low CPU/memory consumption. An online Java demo illustrates these assertions. |
@article{blu2008g, author = "Luisier, F. and Blu, T.", title = "{SURE-LET} Multichannel Image Denoising: {I}nterscale Orthonormal Wavelet Thresholding", journal = "{IEEE} Transactions on Image Processing", month = "April", year = "2008", volume = "17", number = "4", pages = "482--492", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008g" } |
Luisier, F. & Blu, T.,"SURE-LET Multichannel Image Denoising: Undecimated Wavelet Thresholding", Proceedings of the Thirty-Third IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'08), Las Vegas, USA, pp. 769-772, March 30--April 4, 2008. |
We propose an extension of the recently devised SURE-LET grayscale denoising approach for multichannel images. Assuming additive Gaussian white noise, the unknown linear parameters of a transform-domain pointwise multichannel thresholding are globally optimized by minimizing Stein's unbiased MSE estimate (SURE) in the image-domain. Using the undecimated wavelet transform, we demonstrate the efficiency of this approach for denoising color images by comparing our results with two other state-of-the-art denoising algorithms. |
@inproceedings{blu2008h, author = "Luisier, F. and Blu, T.", title = "{SURE-LET} Multichannel Image Denoising: {U}ndecimated Wavelet Thresholding", booktitle = "Proceedings of the Thirty-Third {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'08})", month = "March 30--April 4,", year = "2008", pages = "769--772", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008h" } |
Luisier, F., Blu, T., Forster, B. & Unser, M.,"Which Wavelet Bases Are the Best for Image Denoising?", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet XI, San Diego, USA, Vol. 5914, pp. 59140E-1-59140E-12, July 31--August 3, 2005. |
We use a comprehensive set of non-redundant orthogonal wavelet transforms and apply a denoising method called SUREshrink in each individual wavelet subband to denoise images corrupted by additive Gaussian white noise. We show that, for various images and a wide range of input noise levels, the orthogonal fractional (α, τ)-B-splines give the best peak signal-to-noise ratio (PSNR), as compared to standard wavelet bases (Daubechies wavelets, symlets and coiflets). Moreover, the selection of the best set (α, τ) can be performed on the MSE estimate (SURE) itself, not on the actual MSE (Oracle). Finally, the use of complex-valued fractional B-splines leads to even more significant improvements; they also outperform the complex Daubechies wavelets. |
@inproceedings{blu2005f, author = "Luisier, F. and Blu, T. and Forster, B. and Unser, M.", title = "Which Wavelet Bases Are the Best for Image Denoising?", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet {XI}", month = "July 31--August 3,", year = "2005", volume = "5914", pages = "59140E-1--59140E-12", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005f" } |
Luisier, F., Blu, T. & Unser, M.,"SURE-Based Wavelet Thresholding Integrating Inter-Scale Dependencies", Proceedings of the 2006 IEEE International Conference on Image Processing (ICIP'06), Atlanta, USA, pp. 1457-1460, October 8--11, 2006. |
We propose here a new pointwise wavelet thresholding function that incorporates inter-scale dependencies. This non-linear function depends on a set of four linear parameters per subband which are set by minimizing Stein's unbiased MSE estimate (SURE). Our approach assumes additive Gaussian white noise. In order for the inter-scale dependencies to be faithfully taken into account, we also develop a rigorous feature alignment processing, that is adapted to arbitrary wavelet filters (e.g. non-symmetric filters). Finally, we demonstrate the efficiency of our denoising approach in simulations over a wide range of noise levels for a representative set of standard images. |
@inproceedings{blu2006d, author = "Luisier, F. and Blu, T. and Unser, M.", title = "{SURE}-Based Wavelet Thresholding Integrating Inter-Scale Dependencies", booktitle = "Proceedings of the 2006 {IEEE} International Conference on Image Processing ({ICIP'06})", month = "October 8--11,", year = "2006", pages = "1457--1460", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006d" } |
Luisier, F., Blu, T. & Unser, M.,"A New SURE Approach to Image Denoising: Interscale Orthonormal Wavelet Thresholding", IEEE Transactions on Image Processing, Vol. 16 (3), pp. 593-606, March 2007. IEEE Signal Processing Society's 2009 Young Author Best Paper Award. |
This paper introduces a new approach to orthonormal wavelet image denoising. Instead of postulating a statistical model for the wavelet coefficients, we directly parametrize the denoising process as a sum of elementary nonlinear processes with unknown weights. We then minimize an estimate of the mean square error between the clean image and the denoised one. The key point is that we have at our disposal a very accurate, statistically unbiased MSE estimate—Stein's unbiased risk estimate—that depends on the noisy image alone, not on the clean one. Like the MSE, this estimate is quadratic in the unknown weights, and its minimization amounts to solving a linear system of equations. The existence of this a priori estimate makes it unnecessary to devise a specific statistical model for the wavelet coefficients. Instead, and contrary to the custom in the literature, these coefficients are not considered random anymore. We describe an interscale orthonormal wavelet thresholding algorithm based on this new approach and show its near-optimal performance—in terms of both quality and CPU requirement—by comparing with the results of three state-of-the-art nonredundant denoising algorithms on a large set of test images. An interesting fallout of this study is the development of a new, group-delay-based, parent-child prediction in a wavelet dyadic tree. |
@article{blu2007i, author = "Luisier, F. and Blu, T. and Unser, M.", title = "A New {SURE} Approach to Image Denoising: {I}nterscale Orthonormal Wavelet Thresholding", journal = "{IEEE} Transactions on Image Processing", month = "March", year = "2007", volume = "16", number = "3", pages = "593--606", note = "IEEE Signal Processing Society's 2009 \textbf{Young Author Best Paper Award}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007i" } |
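The computational point made in the abstract above — SURE is quadratic in the unknown weights, so its minimization reduces to a linear system — can be illustrated with a toy one-dimensional sketch. This is my own illustrative code, applied directly in the signal domain rather than in a wavelet subband; the test signal, the threshold scale T, and the two basis functions are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5
# Hypothetical piecewise test signal; the papers operate on wavelet coefficients of images.
x = np.sign(np.sin(np.linspace(0, 20, 4096))) * np.linspace(0, 2, 4096)
y = x + sigma * rng.standard_normal(x.size)

# Two pointwise basis functions f_k(y) (a LET) and their derivatives (for the SURE divergence term).
T = 2 * sigma
f = [y, y * np.exp(-y**2 / (2 * T**2))]
df = [np.ones_like(y), (1 - y**2 / T**2) * np.exp(-y**2 / (2 * T**2))]

# SURE is quadratic in the weights a, so the minimizer solves M a = c with
#   M_kl = <f_k, f_l>   and   c_k = <f_k, y> - sigma^2 * sum(df_k)
M = np.array([[np.dot(fk, fl) for fl in f] for fk in f])
c = np.array([np.dot(fk, y) - sigma**2 * np.sum(dfk) for fk, dfk in zip(f, df)])
a = np.linalg.solve(M, c)
denoised = sum(ak * fk for ak, fk in zip(a, f))
```

No statistical model of the clean signal is needed: the weights come entirely from the noisy data and the known noise variance.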
Luisier, F., Blu, T. & Unser, M.,"SURE-LET for Orthonormal Wavelet-Domain Video Denoising", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 20 (6), pp. 913-919, June 2010. |
We propose an efficient orthonormal wavelet-domain video denoising algorithm based on an appropriate integration of motion compensation into an adapted version of our recently devised Stein's unbiased risk estimator-linear expansion of thresholds (SURE-LET) approach. To take full advantage of the strong spatio-temporal correlations of neighboring frames, a global motion compensation followed by a selective blockmatching is first applied to adjacent frames, which increases their temporal correlations without distorting the interframe noise statistics. Then, a multiframe interscale wavelet thresholding is performed to denoise the current central frame. The simulations we made on standard grayscale video sequences for various noise levels demonstrate the efficiency of the proposed solution in reducing additive white Gaussian noise. Obtained at a lighter computational load, our results are even competitive with most state-of-the-art redundant wavelet-based techniques. By using a cycle-spinning strategy, our algorithm is in fact able to outperform these methods. |
@article{blu2010f, author = "Luisier, F. and Blu, T. and Unser, M.", title = "{SURE-LET} for Orthonormal Wavelet-Domain Video Denoising", journal = "{IEEE} Transactions on Circuits and Systems for Video Technology", month = "June", year = "2010", volume = "20", number = "6", pages = "913--919", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010f" } |
Luisier, F., Blu, T. & Unser, M.,"Undecimated Haar thresholding for Poisson intensity estimation", Proceedings of the 2010 IEEE International Conference on Image Processing (ICIP'10), Hong Kong, China, pp. 1697-1700, September 26--29, 2010. |
We propose a novel algorithm for denoising Poisson-corrupted images that performs signal-adaptive thresholding of the undecimated Haar wavelet coefficients. A Poisson unbiased MSE estimate is devised and adapted to arbitrary transform-domain pointwise processing. This prior-free quadratic measure of quality is then used to globally optimize a linearly parameterized, subband-adaptive thresholding, which accounts for the signal-dependent noise variance. We demonstrate the qualitative and computational competitiveness of the resulting denoising algorithm through comprehensive comparisons with some state-of-the-art multiscale techniques specifically designed for Poisson intensity estimation. We also show promising denoising results obtained on low-count fluorescence microscopy images. |
@inproceedings{blu2010g, author = "Luisier, F. and Blu, T. and Unser, M.", title = "Undecimated {H}aar thresholding for {P}oisson intensity estimation", booktitle = "Proceedings of the 2010 {IEEE} International Conference on Image Processing ({ICIP'10})", month = "September 26--29,", year = "2010", pages = "1697--1700", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010g" } |
Luisier, F., Blu, T. & Unser, M.,"Image Denoising in Mixed Poisson-Gaussian Noise", IEEE Transactions on Image Processing, Vol. 20 (3), pp. 696-708, March 2011. |
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy. |
@article{blu2011e, author = "Luisier, F. and Blu, T. and Unser, M.", title = "Image Denoising in Mixed {P}oisson-{G}aussian Noise", journal = "{IEEE} Transactions on Image Processing", month = "March", year = "2011", volume = "20", number = "3", pages = "696--708", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011e" } |
Luisier, F., Blu, T. & Wolfe, P.J.,"A CURE for Noisy Magnetic Resonance Images: Chi-Square Unbiased Risk Estimation", IEEE Transactions on Image Processing, Vol. 21 (8), pp. 3454-3466, August 2012. |
In this paper, we derive an unbiased expression for the expected mean-squared error associated with continuously differentiable estimators of the noncentrality parameter of a chi-square random variable. We then consider the task of denoising squared-magnitude magnetic resonance (MR) image data, which are well modeled as independent noncentral chi-square random variables on two degrees of freedom. We consider two broad classes of linearly parameterized shrinkage estimators that can be optimized using our risk estimate, one in the general context of undecimated filterbank transforms, and the other in the specific case of the unnormalized Haar wavelet transform. The resultant algorithms are computationally tractable and improve upon most state-of-the-art methods for both simulated and actual MR image data. |
@article{blu2012d, author = "Luisier, F. and Blu, T. and Wolfe, P.J.", title = "A {CURE} for Noisy Magnetic Resonance Images: Chi-Square Unbiased Risk Estimation", journal = "{IEEE} Transactions on Image Processing", month = "August", year = "2012", volume = "21", number = "8", pages = "3454--3466", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012d" } |
Luisier, F., Vonesch, C., Blu, T. & Unser, M.,"Fast Haar-Wavelet Denoising Of Multidimensional Fluorescence Microscopy Data", Proceedings of the Sixth IEEE International Symposium on Biomedical Imaging (ISBI'09), Boston, USA, June 28--July 1, 2009. |
We propose a non-Bayesian denoising algorithm to reduce the Poisson noise that is typically dominant in fluorescence microscopy data. To process large datasets at a low computational cost, we use the unnormalized Haar wavelet transform. Thanks to some of its appealing properties, independent unbiased MSE estimates can be derived for each subband. Based on these Poisson unbiased MSE estimates, we then optimize linearly parameterized interscale thresholding. Correlations between adjacent images of the multidimensional data are accounted for through a sliding window approach. Experiments on simulated and real data show that the proposed solution is qualitatively similar to a state-of-the-art multiscale method, while being orders of magnitude faster. |
@inproceedings{blu2009e, author = "Luisier, F. and Vonesch, C. and Blu, T. and Unser, M.", title = "Fast Haar-Wavelet Denoising Of Multidimensional Fluorescence Microscopy Data", booktitle = "Proceedings of the Sixth {IEEE} International Symposium on Biomedical Imaging ({ISBI'09})", month = "June 28--July 1,", year = "2009", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009e" } |
Luisier, F., Vonesch, C., Blu, T. & Unser, M.,"Fast interscale wavelet denoising of Poisson-corrupted images", Signal Processing, Vol. 90 (2), pp. 415-427, February 2010. |
We present a fast algorithm for image restoration in the presence of Poisson noise. Our approach is based on (1) the minimization of an unbiased estimate of the MSE for Poisson noise, (2) a linear parametrization of the denoising process and (3) the preservation of Poisson statistics across scales within the Haar DWT. The minimization of the MSE estimate is performed independently in each wavelet subband, but this is equivalent to a global image-domain MSE minimization, thanks to the orthogonality of Haar wavelets. This is an important difference with standard Poisson noise-removal methods, in particular those that rely on a non-linear preprocessing of the data to stabilize the variance. Our non-redundant interscale wavelet thresholding outperforms standard variance-stabilizing schemes, even when the latter are applied in a translation-invariant setting (cycle-spinning). It also achieves a quality similar to a state-of-the-art multiscale method that was specially developed for Poisson data. Considering that the computational complexity of our method is orders of magnitude lower, it is a very competitive alternative. The proposed approach is particularly promising in the context of low signal intensities and/or large data sets. This is illustrated experimentally with the denoising of low-count fluorescence micrographs of a biological sample. |
@article{blu2010h, author = "Luisier, F. and Vonesch, C. and Blu, T. and Unser, M.", title = "Fast interscale wavelet denoising of {P}oisson-corrupted images", journal = "Signal Processing", month = "February", year = "2010", volume = "90", number = "2", pages = "415--427", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010h" } |
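The structural property that the abstract above exploits — the unnormalized Haar DWT preserves Poisson statistics across scales, because its scaling coefficients are sums of independent Poisson counts and are therefore again Poisson — can be sketched in a few lines. This is illustrative code, not the authors' implementation; the intensity profile is invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Poisson-distributed counts with a slowly varying intensity.
intensity = 5.0 + 3.0 * np.sin(np.linspace(0, 2 * np.pi, 1024))
y = rng.poisson(intensity).astype(float)

# One level of the *unnormalized* Haar DWT: scaling = pairwise sum, wavelet = pairwise difference.
s = y[0::2] + y[1::2]   # still Poisson (sum of independent Poisson variables)
d = y[0::2] - y[1::2]   # detail coefficients, which the denoiser thresholds

# Perfect reconstruction from (s, d).
y_rec = np.empty_like(y)
y_rec[0::2] = (s + d) / 2
y_rec[1::2] = (s - d) / 2
```

Because `s` stays Poisson at every level, the same Poisson unbiased MSE estimate can be applied subband by subband without any variance-stabilizing preprocessing.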
Luo, G., He, Y., Shu, X., Zhou, R. & Blu, T.,"Complex Wave and Phase Retrieval from A Single Off-Axis Interferogram", Journal of the Optical Society of America A, Vol. 40 (1), pp. 85-95, January 2023. |
Single-frame off-axis holographic reconstruction is promising for quantitative phase imaging. However, reconstruction accuracy and contrast are degraded by noise, frequency-spectrum overlap of the interferogram, severe phase distortion, etc. In this work, we propose an iterative single-frame complex wave retrieval that is based on an explicit model of the object and reference waves. We also develop a novel phase restoration algorithm that does not resort to phase unwrapping. Both simulations and real experiments demonstrate higher accuracy and robustness compared to state-of-the-art methods, for both the complex wave estimation and the phase reconstruction. Importantly, the allowed bandwidth for the object wave is significantly improved in realistic experimental conditions (similar amplitudes for the object and reference waves), which makes our approach attractive for large field-of-view and high-resolution imaging applications. |
@article{blu2023a, author = "Luo, Gang and He, Yanping and Shu, Xin and Zhou, Renjie and Blu, Thierry", title = "Complex Wave and Phase Retrieval from A Single Off-Axis Interferogram", journal = "Journal of the Optical Society of America A", month = "January", year = "2023", volume = "40", number = "1", pages = "85--95", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2023a", doi = "10.1364/JOSAA.473726" } |
Ma, L., Blu, T. & Wang, W.S.Y.,"An EEG Blind Source Separation Algorithm Based On A Weak Exclusion Principle", Proceedings of the 38th International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC'16), Orlando, FL, USA, pp. 859-862, August 16--20, 2016. |
The question of how to separate individual brain and non-brain signals, mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings, is a significant problem in contemporary neuroscience. This study proposes and evaluates a novel EEG Blind Source Separation (BSS) algorithm based on a weak exclusion principle (WEP). The chief point in which it differs from most previous EEG BSS algorithms is that the proposed algorithm is not based upon the hypothesis that the sources are statistically independent. Our first step was to investigate algorithm performance on simulated signals which have ground truth. The purpose of this simulation is to illustrate the proposed algorithm's efficacy. The results show that the proposed algorithm has good separation performance. Then, we used the proposed algorithm to separate real EEG signals from a memory study using a revised version of the Sternberg Task. The results show that the proposed algorithm can effectively separate the non-brain and brain sources. |
@inproceedings{blu2016e, author = "Ma, L. and Blu, T. and Wang, W.S.Y.", title = "An {EEG} Blind Source Separation Algorithm Based On A Weak Exclusion Principle", booktitle = "Proceedings of the 38th International Conference of the {IEEE} Engineering in Medicine and Biology Society ({EMBC'16})", month = "August 16--20,", year = "2016", pages = "859--862", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2016e" } |
Ma, L., Blu, T. & Wang, W.S.Y.,"Event-Related Potentials Source Separation Based on a Weak Exclusion Principle", Proceedings of the Fourteenth IEEE International Symposium on Biomedical Imaging (ISBI'17), Melbourne, Australia, pp. 1011-1014, April 18--21, 2017. |
Currently, the standard event-related potentials (ERP) technique consists in averaging many on-going electroencephalogram (EEG) trials using the same stimuli. Key questions are how to extract the ERP from on-going EEG with fewer averaged trials and how to further decompose the ERP into basic components related to cognitive processes. In this paper we introduce a novel Blind Source Separation (BSS) approach based on a weak exclusion principle (WEP) to solve these problems. The superior aspect of this algorithm is that it is based on a deterministic principle, which is more appropriate for analyzing non-stationary EEG signals than most other BSS methods based on statistical hypotheses. The results show that our BSS algorithm can quickly and effectively extract ERPs using fewer trials than traditional averaging methods. We show that, via BSS, we can isolate two main ERP components, which are respectively related to an exogenous process and a cognitive process, and can discriminate between the occipital lobe and the frontal lobe responses from the brain, agreeing with the classical component modeling in ERPs. Single-trial ERP separation results have demonstrated the consistency of these two main ERP components. Thus, BSS based on WEP can provide a window to better understand ERP, not only in averaging behavior, but in the complexities of moment-to-moment dynamics as well. |
@inproceedings{blu2017d, author = "Ma, L. and Blu, T. and Wang, W.S.Y.", title = "Event-Related Potentials Source Separation Based on a Weak Exclusion Principle", booktitle = "Proceedings of the Fourteenth {IEEE} International Symposium on Biomedical Imaging ({ISBI'17})", month = "April 18--21,", year = "2017", pages = "1011--1014", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017d" } |
Ma, L., Minett, J.W., Blu, T. & Wang, W.S.Y.,"Resting State EEG-Based Biometrics for Individual Identification Using Convolutional Neural Networks", Proceedings of the 37th International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC'15), Milano, Italy, pp. 2848-2851, August 25--29, 2015. |
Biometrics is a growing field, which permits identification of individuals by means of unique physical features. Electroencephalography (EEG)-based biometrics utilizes the small intra-personal differences and large inter-personal differences between individuals' brainwave patterns. In the past, such methods have used features derived from manually-designed procedures for this purpose. Another possibility is to use convolutional neural networks (CNN) to automatically extract an individual's best and most unique neural features and conduct classification, using EEG data derived from both Resting State with Open Eyes (REO) and Resting State with Closed Eyes (REC). Results indicate that this CNN-based joint-optimized EEG-based biometric system yields a high degree of identification accuracy (88%) for 10-class classification. Furthermore, rich inter-personal differences can be found using a very low frequency band (0-2 Hz). Additionally, results suggest that the temporal window over which subjects can be individualized is less than 200 ms. |
@inproceedings{blu2015g, author = "Ma, L. and Minett, J.W. and Blu, T. and Wang, W.S.Y.", title = "Resting State {EEG}-Based Biometrics for Individual Identification Using Convolutional Neural Networks", booktitle = "Proceedings of the 37th International Conference of the {IEEE} Engineering in Medicine and Biology Society ({EMBC'15})", month = "August 25--29,", year = "2015", pages = "2848--2851", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2015g" } |
Mari, J.-M., Blu, T., Bou Matar, O., Unser, M. & Cachard, C.,"A bulk modulus dependent linear model for acoustical imaging", Journal of the Acoustical Society of America, Vol. 125 (4), pp. 2413-2419, April 2009. |
Modeling the acoustical process of soft biological tissue imaging and understanding the consequences of the approximations required by such modeling are key steps for accurately simulating ultrasonic scanning as well as estimating the scattering coefficient of the imaged matter. In this document, a linear solution to the inhomogeneous ultrasonic wave equation is proposed. The classical assumptions required for linearization are applied; however, no approximation is made in the mathematical development regarding density and speed of sound. This leads to an expression of the scattering term that establishes a correspondence between the signal measured by an ultrasound transducer and an intrinsic mechanical property of the imaged tissues. This expression shows that considering the scattering as a function of small variations in the density and speed of sound around their mean values along with classical assumptions in this domain is equivalent to associating the acoustical acquisition with a measure of the relative longitudinal bulk modulus. Comparison of the model proposed to Jensen's earlier model shows that it is also appropriate to perform accurate simulations of the acoustical imaging process. |
@article{blu2009f, author = "Mari, J.-M. and Blu, T. and Bou Matar, O. and Unser, M. and Cachard, C.", title = "A bulk modulus dependent linear model for acoustical imaging", journal = "Journal of the Acoustical Society of America", month = "April", year = "2009", volume = "125", number = "4", pages = "2413--2419", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009f" } |
Marziliano, P., Vetterli, M. & Blu, T.,"Sampling and Exact Reconstruction of Bandlimited Signals with Additive Shot Noise", IEEE Transactions on Information Theory, Vol. 52 (5), pp. 2230-2233, May 2006. |
In this correspondence, we consider sampling continuous-time periodic bandlimited signals which contain additive shot noise. The classical sampling scheme does not perfectly recover these particular nonbandlimited signals but only reconstructs a lowpass filtered approximation. By modeling the shot noise as a stream of Dirac pulses, we first show that the sum of a bandlimited signal with a stream of Dirac pulses falls into the class of signals that contain a finite rate of innovation, that is, a finite number of degrees of freedom. Second, by taking into account the degrees of freedom of the bandlimited signal in the sampling and reconstruction scheme developed previously for streams of Dirac pulses, we derive a sampling and perfect reconstruction scheme for the bandlimited signal with additive shot noise. |
@article{blu2006e, author = "Marziliano, P. and Vetterli, M. and Blu, T.", title = "Sampling and Exact Reconstruction of Bandlimited Signals with Additive Shot Noise", journal = "{IEEE} Transactions on Information Theory", month = "May", year = "2006", volume = "52", number = "5", pages = "2230--2233", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006e" } |
Matusiak, S., Daoudi, M., Blu, T. & Avaro, O.,"Sketch-Based Images Database Retrieval", Proceedings of the Fourth International Workshop on Advances in Multimedia Information Systems (MIS'98), Istanbul, Turkey, pp. 185-191, September 24--26, 1998. |
This paper describes an application allowing content-based retrieval that can thus be considered as an MPEG-7 example application. The application may be called "sketch-based database retrieval" since the user interacts with the database by means of sketches. The user draws the request with a pencil: the request image is then a binary image comprising a contour on a uniform background. |
@inproceedings{blu1998e, author = "Matusiak, S. and Daoudi, M. and Blu, T. and Avaro, O.", title = "Sketch-Based Images Database Retrieval", booktitle = "Proceedings of the Fourth International Workshop on Advances in Multimedia Information Systems ({MIS'98})", month = "September 24--26,", year = "1998", pages = "185--191", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1998e" } |
Mayrargue, S. & Blu, T.,"Relationship Between High-Resolution Methods and Discrete Fourier Transform", Proceedings of the Sixteenth IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'91), Toronto, Canada, Vol. {V}, pp. 3321-3324, May 14--17, 1991. |
A link is established between the discrete Fourier transform (DFT) and two high-resolution methods, MUSIC and the Tufts-Kumaresan (1982) method (TK). The existence and location of the extraneous peaks of MUSIC and the noise zeros of TK are related to the minima of the DFT of the rectangular window filtering the data. Other properties of the noise zeros are given, in relation to polynomial theory. |
@inproceedings{blu1991a, author = "Mayrargue, S. and Blu, T.", title = "Relationship Between High-Resolution Methods and Discrete {F}ourier Transform", booktitle = "Proceedings of the Sixteenth {IEEE} International Conference on Acoustics, Speech and Signal Processing ({ICASSP'91})", month = "May 14--17,", year = "1991", volume = "{V}", pages = "3321--3324", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1991a" } |
Middlemiss, S., Blandenet, M., Roberts, D.M., McMahon, A., Grimshaw, J., Edwards, J.M., Sun, Z., Whitley, K.D., Blu, T., Strahl, H. & Holden, S.,"Molecular motor tug-of-war regulates elongasome cell wall synthesis dynamics in Bacillus subtilis", Nature Communications, Vol. 15, pp. 5411, 26 June 2024. |
Most rod-shaped bacteria elongate by inserting new cell wall material into the inner surface of the cell sidewall. This is performed by class A penicillin binding proteins (PBPs) and a highly conserved protein complex, the elongasome, which moves processively around the cell circumference and inserts long glycan strands that act as barrel-hoop-like reinforcing structures, thereby giving rise to a rod-shaped cell. However, it remains unclear how elongasome synthesis dynamics and termination events are regulated to determine the length of these critical cell-reinforcing structures. To address this, we developed a method to track individual elongasome complexes around the entire circumference of Bacillus subtilis cells for minutes-long periods using single-molecule fluorescence microscopy. We found that the B. subtilis elongasome is highly processive and that processive synthesis events are frequently terminated by rapid reversal or extended pauses. We found that cellular levels of RodA regulate elongasome processivity, reversal and pausing. Our single-molecule data, together with stochastic simulations, show that elongasome dynamics and processivity are regulated by molecular motor tug-of-war competition between several, likely two, oppositely oriented peptidoglycan synthesis complexes associated with the MreB filament. Altogether these results demonstrate that molecular motor tug-of-war is a key regulator of elongasome dynamics in B. subtilis, which likely also regulates the cell shape via modulation of elongasome processivity. |
@article{blu2024b, author = "Middlemiss, Stuart and Blandenet, Matthieu and Roberts, David M. and McMahon, Andrew and Grimshaw, James and Edwards, Joshua M. and Sun, Zikai and Whitley, Kevin D. and Blu, Thierry and Strahl, Henrik and Holden, S\'{e}amus", title = "Molecular motor tug-of-war regulates elongasome cell wall synthesis dynamics in Bacillus subtilis", journal = "Nature Communications", month = "26 June", year = "2024", volume = "15", pages = "5411", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2024b", doi = "10.1038/s41467-024-49785-x" } |
Muñoz Barrutia, A., Blu, T. & Unser, M.,"Efficient Image Resizing Using Finite Differences", Proceedings of the 1999 IEEE International Conference on Image Processing (ICIP'99), Kobe, Japan, Vol. {III}, pp. 662-666, October 25--28, 1999. |
We present an optimal spline-based algorithm for the enlargement or reduction of digital images with arbitrary scaling factors. A demonstration is available on the web at http://bigwww.epfl.ch/demo/jresize/. This projection-based approach is realizable thanks to a new finite difference method that allows the computation of inner products with analysis functions that are B-splines of any degree n. For a given choice of basis functions, the results of our method are consistently better than those of the standard interpolation procedure; the present scheme achieves a reduction of artifacts such as aliasing and blocking and a significant improvement of the signal-to-noise ratio. |
@inproceedings{blu1999g, author = "Mu{\~{n}}oz Barrutia, A. and Blu, T. and Unser, M.", title = "Efficient Image Resizing Using Finite Differences", booktitle = "Proceedings of the 1999 {IEEE} International Conference on Image Processing ({ICIP'99})", month = "October 25--28,", year = "1999", volume = "{III}", pages = "662--666", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1999g" } |
Muñoz Barrutia, A., Blu, T. & Unser, M.,"Non-Uniform to Uniform Grid Conversion Using Least-Squares Splines", Proceedings of the Tenth European Signal Processing Conference (EUSIPCO'00), Tampere, Finland, Vol. {IV}, pp. 1997-2000, September 4--8, 2000. |
We propose a new technique to perform nonuniform to uniform grid conversion: first, interpolate using nonuniform splines, then project the resulting function onto a uniform spline space and finally, resample. We derive a closed form solution to the least-squares approximation problem. Our implementation is computationally exact and works for arbitrary sampling rates. We present examples that illustrate advantages of our projection technique over direct interpolation and resampling. The main benefit is the suppression of aliasing. |
@inproceedings{blu2000f, author = "Mu{\~{n}}oz Barrutia, A. and Blu, T. and Unser, M.", title = "Non-Uniform to Uniform Grid Conversion Using Least-Squares Splines", booktitle = "Proceedings of the Tenth European Signal Processing Conference ({EUSIPCO'00})", month = "September 4--8,", year = "2000", volume = "{IV}", pages = "1997--2000", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000f" } |
Muñoz Barrutia, A., Blu, T. & Unser, M.,"Non-Euclidean Pyramids", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing VIII, San Diego, USA, Vol. 4119, pp. 710-720, July 31--August 4, 2000. |
We propose to design the reduction operator of an image pyramid so as to minimize the approximation error in the lp sense (not restricted to the usual p = 2), where p can take non-integer values. The underlying image model is specified using arbitrary shift-invariant basis functions such as splines. The solution is determined by an iterative optimization algorithm, based on digital filtering. Its convergence is accelerated by the use of first and second derivatives. For p = 1, our modified pyramid is robust to outliers; edges are preserved better than in the standard case where p = 2. For 1 < p < 2, the pyramid decomposition combines the qualities of l1 and l2 approximations. The method is applied to edge detection and its improved performance over the standard formulation is determined. |
@inproceedings{blu2000g, author = "Mu{\~{n}}oz Barrutia, A. and Blu, T. and Unser, M.", title = "Non-{E}uclidean Pyramids", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {VIII}", month = "July 31--August 4,", year = "2000", volume = "4119", pages = "710--720", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000g" } |
Muñoz Barrutia, A., Blu, T. & Unser, M.,"Least-Squares Image Resizing Using Finite Differences", IEEE Transactions on Image Processing, Vol. 10 (9), pp. 1365-1378, September 2001. |
We present an optimal spline-based algorithm for the enlargement or reduction of digital images with arbitrary (noninteger) scaling factors. This projection-based approach can be realized thanks to a new finite difference method that allows the computation of inner products with analysis functions that are B-splines of any degree n. A noteworthy property of the algorithm is that the computational complexity per pixel does not depend on the scaling factor a. For a given choice of basis functions, the results of our method are consistently better than those of the standard interpolation procedure; the present scheme achieves a reduction of artifacts such as aliasing and blocking and a significant improvement of the signal-to-noise ratio. The method can be generalized to include other classes of piecewise polynomial functions, expressed as linear combinations of B-splines and their derivatives. |
@article{blu2001i, author = "Mu{\~{n}}oz Barrutia, A. and Blu, T. and Unser, M.", title = "Least-Squares Image Resizing Using Finite Differences", journal = "{IEEE} Transactions on Image Processing", month = "September", year = "2001", volume = "10", number = "9", pages = "1365--1378", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001i" } |
Muñoz Barrutia, A., Blu, T. & Unser, M.,"ℓ_p-Multiresolution Analysis: How to Reduce Ringing and Sparsify the Error", IEEE Transactions on Image Processing, Vol. 11 (6), pp. 656-669, June 2002. |
We propose to design the reduction operator of an image pyramid so as to minimize the approximation error in the lp-sense (not restricted to the usual p = 2), where p can take non-integer values. The underlying image model is specified using shift-invariant basis functions, such as B-splines. The solution is well-defined and determined by an iterative optimization algorithm based on digital filtering. Its convergence is accelerated by the use of first and second order derivatives. For p close to 1, we show that the ringing is reduced and that the histogram of the detail image is sparse as compared with the standard case, where p = 2. |
@article{blu2002j, author = "Mu{\~{n}}oz Barrutia, A. and Blu, T. and Unser, M.", title = "${\ell}_{p}$-{M}ultiresolution Analysis: {H}ow to Reduce Ringing and Sparsify the Error", journal = "{IEEE} Transactions on Image Processing", month = "June", year = "2002", volume = "11", number = "6", pages = "656--669", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002j" } |
Ou, H., Stantchev, R.I., Chen, X., Blu, T., Semtsiv, M., Masselink, W.T., Hernandez Serrano, A.I., Costa, C., Young, J., Chopra, N., Lloyd-Hughes, J. & MacPherson, E.,"Simultaneous measurement of orthogonal terahertz fields via an emission multiplexing scheme", Optics Express, Vol. 32 (4), pp. 5567-5581, Optical Society of America, 2024. |
We propose a polarization sensitive terahertz time-domain spectrometer that can record orthogonally polarized terahertz fields simultaneously, using fibre-coupled photoconductive antennas and a scheme that modulates the emitter's polarization. The s and p channels of the multi-pixel terahertz emitter were modulated at different frequencies, thereby allowing orthogonal waveforms to be demultiplexed from the recorded signal in post-processing. The performance of the multi-pixel emitter used in this multiplexing scheme was comparable to that of a commercial single-polarization H-dipole antenna. The approach allowed two orthogonally polarized terahertz pulses to be recorded with a good signal-to-noise ratio (>1000:1) within half a second. We verified the capability of the spectrometer by characterizing a birefringent crystal and by imaging a polarization-sensitive metamaterial. This work has significant potential to improve the speed of terahertz polarization sensitive applications, such as ellipsometry and imaging. |
@article{blu2024a, author = "Ou, Huiliang and Stantchev, Rayko Ivanov and Chen, Xuequan and Blu, Thierry and Semtsiv, Mykhaylo and Masselink, William Ted and Hernandez Serrano, Arturo I. and Costa, Concalo and Young, Jacob and Chopra, Nishtha and Lloyd-Hughes, James and MacPherson, Emma", title = "Simultaneous measurement of orthogonal terahertz fields via an emission multiplexing scheme", journal = "Optics Express", publisher = "Optical Society of America", year = "2024", volume = "32", number = "4", pages = "5567--5581", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2024a" } |
Pan, H. & Blu, T.,"Sparse image restoration using iterated linear expansion of thresholds", Proceedings of the 2011 IEEE International Conference on Image Processing (ICIP'11), Brussels, Belgium, pp. 1905-1908, September 11--14, 2011. |
We focus on image restoration that consists in regularizing a quadratic data-fidelity term with the standard l1 sparsity-enforcing norm. We propose a novel algorithmic approach to solve this optimization problem. Our idea amounts to approximating the result of the restoration as a linear sum of basic thresholds (e.g. soft-thresholds) weighted by unknown coefficients. The few coefficients of this expansion are obtained by minimizing the equivalent low-dimensional l1-norm regularized objective function, which can be solved efficiently with standard convex optimization techniques, e.g. iterative reweighted least-squares (IRLS). By iterating this process, we claim that we reach the global minimum of the objective function. Experimentally, we find that very few iterations are required to reach convergence. |
@inproceedings{blu2011f, author = "Pan, H. and Blu, T.", title = "Sparse image restoration using iterated linear expansion of thresholds", booktitle = "Proceedings of the 2011 {IEEE} International Conference on Image Processing ({ICIP'11})", month = "September 11--14,", year = "2011", pages = "1905--1908", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011f" } |
Pan, H. & Blu, T.,"An Iterative Linear Expansion of Thresholds for ℓ_1-based Image Restoration", IEEE Transactions on Image Processing, Vol. 22 (9), pp. 3715-3728, September 2013. |
This paper proposes a novel algorithmic framework to solve image restoration problems under sparsity assumptions. As usual, the reconstructed image is the minimum of an objective functional that consists of a data fidelity term and an l1 regularization. However, instead of estimating the reconstructed image that minimizes the objective functional directly, we focus on the restoration process that maps the degraded measurements to the reconstruction. Our idea amounts to parameterizing the process as a linear combination of few elementary thresholding functions (LET) and solve for the linear weighting coefficients by minimizing the objective functional. It is then possible to update the thresholding functions and to iterate this process (i-LET). The key advantage of such a linear parametrization is that the problem size reduces dramatically: each time we only need to solve an optimization problem over the dimension of the linear coefficients (typically less than 10) instead of the whole image dimension. With the elementary thresholding functions satisfying certain constraints, global convergence of the iterated LET algorithm is guaranteed. Experiments on several test images over a wide range of noise levels and different types of convolution kernels clearly indicate that the proposed framework usually outperforms state-of-the-art algorithms in terms of both CPU time and number of iterations. |
@article{blu2013e, author = "Pan, H. and Blu, T.", title = "An Iterative Linear Expansion of Thresholds for ${\ell}_1$-based Image Restoration", journal = "{IEEE} Transactions on Image Processing", month = "September", year = "2013", volume = "22", number = "9", pages = "3715--3728", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013e" } |
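The computational benefit of the linear parametrization can be illustrated with a toy denoising problem (identity operator). The sketch below is a hedged illustration, not the authors' code: the threshold levels, the regularization weight, and the plain IRLS loop are arbitrary choices; the point is only that the optimization runs over K = 3 coefficients rather than over all n pixels.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft(x, t):
    """Soft-threshold x at level t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Sparse ground truth and its noisy observation (denoising case).
n, sigma = 2000, 1.0
x = np.zeros(n)
x[rng.choice(n, 40, replace=False)] = rng.normal(0.0, 5.0, 40)
y = x + rng.normal(0.0, sigma, n)

# LET dictionary: a few elementary soft-thresholds of the data.
levels = [0.5 * sigma, 1.5 * sigma, 3.0 * sigma]
T = np.stack([soft(y, t) for t in levels], axis=1)  # shape (n, K)

# Objective 0.5*||T a - y||^2 + lam*||T a||_1 is now a function of
# only the K coefficients a, solved here with a small IRLS loop.
lam = 2.0 * sigma
a = np.full(len(levels), 1.0 / len(levels))
for _ in range(30):
    w = 1.0 / (np.abs(T @ a) + 1e-3)            # reweighting of the l1 term
    A = T.T @ T + lam * T.T @ (w[:, None] * T)  # K-by-K normal equations
    a = np.linalg.solve(A, T.T @ y)

# MSE of the raw observation vs. MSE of the LET estimate.
print(np.mean((y - x) ** 2), np.mean((T @ a - x) ** 2))
```

Each IRLS step only solves a K-by-K linear system; re-building the thresholding dictionary around the current estimate and repeating is what the paper calls i-LET.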
Pan, H., Blu, T. & Dragotti, P.-L.,"Sampling Curves with Finite Rate of Innovation", Proceedings of the Ninth International Workshop on Sampling Theory and Applications (SampTA'11), Singapore, May 2--6, 2011. |
We focus on a specific class of curves that can be parametrized using a finite number of variables in two dimensions. The corresponding indicator plane, which is a binary image, has infinite bandwidth and cannot be sampled and perfectly reconstructed with classical sampling theory. In this paper, we illustrate that it is possible to recover the parameters from finite samples of the indicator plane and to obtain a perfect reconstruction of the indicator plane. The algorithm presented here extends the application of FRI signals to multi-dimensional cases and may find applications in fields like super-resolution. |
@inproceedings{blu2011g, author = "Pan, H. and Blu, T. and Dragotti, P.-L.", title = "Sampling Curves with Finite Rate of Innovation", booktitle = "Proceedings of the Ninth International Workshop on Sampling Theory and Applications ({SampTA'11})", month = "May 2--6,", year = "2011", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011g" } |
Pan, H., Blu, T. & Dragotti, P.-L.,"Sampling Curves with Finite Rate of Innovation", IEEE Transactions on Signal Processing, Vol. 62 (2), pp. 458-471, January 2014. |
In this paper, we extend the theory of sampling signals with finite rate of innovation (FRI) to a specific class of two-dimensional curves, which are defined implicitly as the zeros of a mask function. Here the mask function has a parametric representation as a weighted summation of a finite number of complex exponentials, and therefore, has finite rate of innovation [1]. An associated edge image, which is discontinuous on the predefined parametric curve, is proved to satisfy a set of linear annihilation equations. We show that it is possible to reconstruct the parameters of the curve (i.e. to detect the exact edge positions in the continuous domain) based on the annihilation equations. Robust reconstruction algorithms are also developed to cope with scenarios with model mismatch. Moreover, the annihilation equations that characterize the curve are linear constraints that can be easily exploited in optimization problems for further image processing (e.g., image up-sampling). We demonstrate one potential application of the annihilation algorithm with examples in edge-preserving interpolation. Experimental results with both synthetic curves as well as edges of natural images clearly show the effectiveness of the annihilation constraint in preserving sharp edges, and improving SNRs. |
@article{blu2014a, author = "Pan, H. and Blu, T. and Dragotti, P.-L.", title = "Sampling Curves with Finite Rate of Innovation", journal = "{IEEE} Transactions on Signal Processing", month = "January", year = "2014", volume = "62", number = "2", pages = "458--471", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2014a" } |
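The annihilation equations at the core of this line of work are easiest to see in one dimension: uniform Fourier samples of a stream of K Diracs are killed by convolution with a length-(K+1) filter whose roots encode the Dirac locations (the classical annihilating-filter, or Prony, step). A minimal noiseless numpy sketch with made-up locations and weights (not the paper's 2D curve algorithm):

```python
import numpy as np

# Two Diracs on [0, 1) at unknown locations t_k with weights a_k.
t_true = np.array([0.21, 0.58])
a_true = np.array([1.0, 0.7])

# Uniform Fourier samples X[m] = sum_k a_k * exp(-2j*pi*m*t_k).
M = 6
m = np.arange(M)
u_true = np.exp(-2j * np.pi * t_true)
X = (a_true[None, :] * u_true[None, :] ** m[:, None]).sum(axis=1)

# Annihilating filter h of length K+1: sum_l h[l] X[i-l] = 0 for all i.
# Stack these linear equations and take a null-space vector via the SVD.
K = 2
T = np.array([X[i + K - np.arange(K + 1)] for i in range(M - K)])
_, _, Vh = np.linalg.svd(T)
h = Vh[-1].conj()  # right singular vector of the zero singular value

# The Dirac locations are read off the roots of the filter polynomial.
roots = np.roots(h)
t_est = np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1.0))
print(t_est)  # close to [0.21, 0.58]
```

The 2D curve setting of the paper replaces the roots of this univariate polynomial by the zero set of a mask function with a finite Fourier expansion, but the annihilation structure is the same.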
Pan, H., Blu, T. & Vetterli, M.,"Annihilation-driven Localised Image Edge Models", Proceedings of the Fortieth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'15), Brisbane, Australia, pp. 5977-5981, April 19--24, 2015. |
We propose a novel edge detection algorithm with sub-pixel accuracy based on annihilation of signals with finite rate of innovation. We show that the Fourier domain annihilation equations can be interpreted as spatial domain multiplications. From this new perspective, we obtain an accurate estimation of the edge model by assuming a simple parametric form within each localised block. Further, we build a locally adaptive global mask function (i.e., our edge model) for the whole image. The mask function is then used as an edge-preserving constraint in further processing. Numerical experiments on both edge localisations and image up-sampling show the effectiveness of the proposed approach, which outperforms state-of-the-art methods. |
@inproceedings{blu2015d, author = "Pan, H. and Blu, T. and Vetterli, M.", title = "Annihilation-driven Localised Image Edge Models", booktitle = "Proceedings of the Fortieth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'15})", month = "April 19--24,", year = "2015", pages = "5977--5981", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2015d" } |
Pan, H., Blu, T. & Vetterli, M.,"Towards Generalized FRI Sampling with an Application to Source Resolution in Radioastronomy", IEEE Transactions on Signal Processing, Vol. 65 (4), pp. 821-835, February 2017. |
It is a classic problem to estimate continuous-time sparse signals, like point sources in a direction-of-arrival problem, or pulses in a time-of-flight measurement. The earliest occurrence is the estimation of sinusoids in time series using Prony's method. This is at the root of a substantial line of work on high resolution spectral estimation. The estimation of continuous-time sparse signals from discrete-time samples is the goal of the sampling theory for finite rate of innovation (FRI) signals. Both spectral estimation and FRI sampling usually assume uniform sampling. But not all measurements are obtained uniformly, as exemplified by a concrete radioastronomy problem we set out to solve. Thus, we develop the theory and algorithm to reconstruct sparse signals, typically sums of sinusoids, from non-uniform samples. We achieve this by identifying a linear transformation that relates the unknown uniform samples of sinusoids to the given measurements. These uniform samples are known to satisfy the annihilation equations. A valid solution is then obtained by solving a constrained minimization such that the reconstructed signal is consistent with the given measurements and satisfies the annihilation constraint. Thanks to this new approach, we unify a variety of FRI-based methods. We demonstrate the versatility and robustness of the proposed approach with five FRI reconstruction problems, namely Dirac reconstructions with irregular time or Fourier domain samples, FRI curve reconstructions, Dirac reconstructions on the sphere and point source reconstructions in radioastronomy. The proposed algorithm improves substantially over state-of-the-art methods and is able to reconstruct point sources accurately from irregularly sampled Fourier measurements under severe noise conditions. |
@article{blu2017c, author = "Pan, H. and Blu, T. and Vetterli, M.", title = "Towards Generalized {FRI} Sampling with an Application to Source Resolution in Radioastronomy", journal = "IEEE Transactions on Signal Processing", month = "February", year = "2017", volume = "65", number = "4", pages = "821--835", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017c" } |
Pan, H., Blu, T. & Vetterli, M.,"Efficient Multi-dimensional Diracs Estimation with Linear Sample Complexity", IEEE Transactions on Signal Processing, Vol. 66 (17), pp. 4642-4656, September 2018. |
Estimating Diracs in continuous two or higher dimensions is a fundamental problem in imaging. Previous approaches extended one-dimensional methods, like the ones based on finite rate of innovation (FRI) sampling, in a separable manner, e.g., along the horizontal and vertical dimensions separately in 2D. The separate estimation leads to a sample complexity of O(K^D) for K Diracs in D dimensions, even though the total number of degrees of freedom increases only linearly with D. We propose a new method that enforces the continuous-domain sparsity constraints simultaneously along all dimensions, leading to a reconstruction algorithm with linear sample complexity O(K), or a gain of O(K^(D-1)) over previous FRI-based methods. The multi-dimensional Dirac locations are subsequently determined by the intersections of hypersurfaces (e.g., curves in 2D), which can be computed algebraically from the common roots of polynomials. We first demonstrate the performance of the new multidimensional algorithm on simulated data: multi-dimensional Dirac location retrieval under noisy measurements. Then we show results on real data: radio astronomy point source reconstruction (from LOFAR telescope measurements) and the direction of arrival estimation of acoustic signals (using Pyramic microphone arrays). |
@article{blu2018g, author = "Pan, H. and Blu, T. and Vetterli, M.", title = "Efficient Multi-dimensional {D}iracs Estimation with Linear Sample Complexity", journal = "IEEE Transactions on Signal Processing", month = "September", year = "2018", volume = "66", number = "17", pages = "4642--4656", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018g" } |
Pan, H., Simeoni, M., Hurley, P., Blu, T. & Vetterli, M.,"LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources", Astronomy & Astrophysics, Vol. 608, pp. A136, 1-14, December 2017. |
Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance only dependent on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims. The aims were (i) to adapt FRI to radio astronomy, (ii) to verify that it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, possibly resolving sources that would otherwise be missed, (iii) to show that sources can be found using less data than would otherwise be required to find them, and (iv) to show that FRI does not lead to an increased rate of false positives. Methods. We implemented a continuous-domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results. We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities. In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a reconstruction quality comparable to a conventional method. The achieved angular resolution is higher than the perceived instrument resolution, and very close sources can be reliably distinguished. The proposed approach has cubic complexity in the total number (typically around a few thousand) of uniform Fourier data of the sky image estimated from the reconstruction. It is also demonstrated that the method is robust to the presence of extended sources, and that false positives can be addressed by choosing an adequate model order to match the noise level. |
@article{blu2017k, author = "Pan, H. and Simeoni, M. and Hurley, P. and Blu, T. and Vetterli, M.", title = "{LEAP}: {L}ooking beyond pixels with continuous-space {E}stim{A}tion of {P}oint sources", journal = "Astronomy \& Astrophysics", month = "December", year = "2017", volume = "608", pages = "A136, 1--14", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017k" } |
Panisetti, B.K., Blu, T. & Seelamantula, C.S.,"An Unbiased Risk Estimator for Multiplicative Noise---Application to 1-D Signal Denoising", Proceedings of the Nineteenth International Conference on Digital Signal Processing (DSP'14), Hong Kong, China, pp. 497-502, August 20--23, 2014. |
Multiplicative noise distorts a signal far more severely than additive noise. In this paper, we address the problem of suppressing multiplicative noise in one-dimensional signals. To deal with signals that are corrupted by multiplicative noise, we propose a denoising algorithm based on the minimization of an unbiased estimate (MURE) of the mean-square error (MSE). We derive an expression for this unbiased estimate of the MSE. The proposed denoising is carried out in the wavelet domain (soft thresholding) by considering the time-domain MURE. The parameters of the thresholding function are obtained by minimizing MURE. We show that the optimal MURE parameters are very close to the optimal parameters obtained with the oracle MSE. Experiments show that the SNR improvement of the proposed denoising algorithm is competitive with a state-of-the-art method. |
@inproceedings{blu2014d, author = "Panisetti, B.K. and Blu, T. and Seelamantula, C.S.", title = "An Unbiased Risk Estimator for Multiplicative Noise---Application to {1-D} Signal Denoising", booktitle = "Proceedings of the Nineteenth International Conference on Digital Signal Processing ({DSP'14})", month = "August 20--23,", year = "2014", pages = "497--502", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2014d" } |
Parrott, E.P.J., Sy, S.M.Y., Blu, T., Wallace, V.P. & Pickwell-MacPherson, E.,"Terahertz pulsed imaging in vivo: measurements and processing methods", Journal of Biomedical Optics, Vol. 16 (10), pp. 106010 1-8, SPIE, October 2011. |
This paper presents a number of data processing algorithms developed to improve the accuracy of results derived from datasets acquired by a recently designed terahertz handheld probe. These techniques include a baseline subtraction algorithm and a number of algorithms to extract the sample impulse response: double Gaussian inverse filtering, frequency-wavelet domain deconvolution, and sparse deconvolution. In vivo measurements of human skin are used as examples, and a comparison is made of the terahertz impulse response from a number of different skin positions. The algorithms presented enable both the spectroscopic and time domain properties of samples measured in reflection geometry to be better determined compared to previous calculation methods. |
@article{blu2011h, author = "Parrott, E.P.J. and Sy, S.M.Y. and Blu, T. and Wallace, V.P. and Pickwell-MacPherson, E.", title = "Terahertz pulsed imaging in vivo: measurements and processing methods", journal = "Journal of Biomedical Optics", publisher = "SPIE", month = "October", year = "2011", volume = "16", number = "10", pages = "106010 1--8", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011h" } |
Peyronny, L., Soligon, O., Roux, C., Avaro, O. & Blu, T.,"How to Construct an MPEG4 API: A Videoconference Application Example", Proceedings of the International Conference on Image and Multidimensional Digital Signal Processing (IMDSP'98), Alpbach, Austria, pp. 111-114, July 16, 1998. |
The construction and animation of face objects in MPEG-4/SNHC (synthetic natural hybrid coding) systems implies content-based and semantic analysis of the observed 3D scene. These processes require sophisticated image processing tools and algorithms, which are time-consuming and therefore not well suited to videoconferencing applications. With regard to the coding process following the MPEG-4 standard, it is shown that four possible levels of coding are ready to send in an MPEG-4 data stream. The main functionalities sought in the MPEG-4 standard are data scalability, user data interactivity and openness to various kinds of coding schemes. Even without a high-order semantic interpretation of the observed scene, it is well known that a facial 3D model can be built and animated. A transcoding system is then needed to fit the MPEG-4 data stream format. |
@inproceedings{blu1998f, author = "Peyronny, L. and Soligon, O. and Roux, C. and Avaro, O. and Blu, T.", title = "How to Construct an {MPEG4} {API}: {A} Videoconference Application Example", booktitle = "Proceedings of the International Conference on Image and Multidimensional Digital Signal Processing ({IMDSP'98})", month = "July 16,", year = "1998", pages = "111--114", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1998f" } |
Precioso, F., Barlaud, M., Blu, T. & Unser, M.,"Smoothing B-Spline Active Contour for Fast and Robust Image and Video Segmentation", Proceedings of the 2003 IEEE International Conference on Image Processing (ICIP'03), Barcelona, Spain, Vol. {I}, pp. 137-140, September 14--17, 2003. |
This paper deals with fast image and video segmentation using active contours. Region-based active contours using level-sets are powerful techniques for video segmentation, but they suffer from a large computational cost. A parametric active contour method based on B-Spline interpolation has been proposed in [1] to greatly reduce the computational cost, but this method is sensitive to noise. Here, we choose to relax the rigid interpolation constraint in order to make our method robust in the presence of noise: by using smoothing splines, we trade a tunable amount of interpolation error for a smoother spline curve. We show by experiments on natural sequences that this new flexibility yields segmentation results of higher quality at no additional computational cost. Hence, real-time processing for moving objects segmentation is preserved. |
@inproceedings{blu2003m, author = "Precioso, F. and Barlaud, M. and Blu, T. and Unser, M.", title = "Smoothing {B-Spline} Active Contour for Fast and Robust Image and Video Segmentation", booktitle = "Proceedings of the 2003 {IEEE} International Conference on Image Processing ({ICIP'03})", month = "September 14--17,", year = "2003", volume = "{I}", pages = "137--140", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003m" } |
Precioso, F., Barlaud, M., Blu, T. & Unser, M.,"Robust Real-Time Segmentation of Images and Videos Using a Smooth-Spline Snake-Based Algorithm", IEEE Transactions on Image Processing, Vol. 14 (7), pp. 910-924, July 2005. |
This paper deals with fast image and video segmentation using active contours. Region-based active contours using level sets are powerful techniques for video segmentation, but they suffer from a large computational cost. A parametric active contour method based on B-Spline interpolation has been proposed in [1] to greatly reduce the computational cost, but this method is sensitive to noise. Here, we choose to relax the rigid interpolation constraint in order to make our method robust in the presence of noise: by using smoothing splines, we trade a tunable amount of interpolation error for a smoother spline curve. We show by experiments on natural sequences that this new flexibility yields segmentation results of higher quality at no additional computational cost. Hence, real-time processing for moving objects segmentation is preserved. |
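The fidelity-versus-smoothness trade-off at the core of the smoothing-spline snake can be illustrated with a minimal discrete analogue (not the authors' B-spline snake: this sketch works on a closed polygon of contour samples, with the regularization weight `lam` playing the role of the tunable interpolation error):

```python
import numpy as np

def smooth_closed_contour(pts, lam):
    """Periodic discrete smoothing spline: minimize
    ||c - pts||^2 + lam * ||second difference of c||^2,
    solved exactly per frequency in the DFT domain."""
    N = len(pts)
    w = 2 * np.pi * np.fft.fftfreq(N)
    # Frequency response of the periodic second-difference penalty.
    gain = 1.0 / (1.0 + lam * (2 - 2 * np.cos(w)) ** 2)   # shrinks high frequencies
    return np.real(np.fft.ifft(np.fft.fft(pts, axis=0) * gain[:, None], axis=0))

# Noisy samples of a unit circle; smoothing pulls them back towards it.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
noisy = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((200, 2))
smooth = smooth_closed_contour(noisy, lam=10.0)
```

Setting `lam=0` gives exact interpolation of the noisy points; increasing it trades interpolation error for a smoother curve, which is the relaxation the abstract describes.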
@article{blu2005g, author = "Precioso, F. and Barlaud, M. and Blu, T. and Unser, M.", title = "Robust Real-Time Segmentation of Images and Videos Using a Smooth-Spline Snake-Based Algorithm", journal = "{IEEE} Transactions on Image Processing", month = "July", year = "2005", volume = "14", number = "7", pages = "910--924", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005g" } |
Ramani, S., Blu, T. & Unser, M.,"Monte-Carlo SURE: A Black-Box Optimization of Regularization Parameters for General Denoising Algorithms", IEEE Transactions on Image Processing, Vol. 17 (9), pp. 1540-1554, September 2008. |
We consider the problem of optimizing the parameters of a given denoising algorithm for restoration of a signal corrupted by white Gaussian noise. To achieve this, we propose to minimize Stein's unbiased risk estimate (SURE) which provides a means of assessing the true mean-squared error (MSE) purely from the measured data without the need for any knowledge about the noise-free signal. Specifically, we present a novel Monte-Carlo technique which enables the user to calculate SURE for an arbitrary denoising algorithm characterized by some specific parameter setting. Our method is a black-box approach which solely uses the response of the denoising operator to additional input noise and does not ask for any information about its functional form. This, therefore, permits the use of SURE for optimization of a wide variety of denoising algorithms. We justify our claims by presenting experimental results for SURE-based optimization of a series of popular image-denoising algorithms such as total-variation denoising, wavelet soft-thresholding, and Wiener filtering/smoothing splines. In the process, we also compare the performance of these methods. We demonstrate numerically that SURE computed using the new approach accurately predicts the true MSE for all the considered algorithms. We also show that SURE uncovers the optimal values of the parameters in all cases. |
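The black-box probe described in this abstract is compact enough to sketch. The snippet below is a minimal illustration of the Monte-Carlo idea, not the authors' implementation: all names are made up, the denoiser is plain soft-thresholding, and a single probe vector is used to estimate the divergence term of SURE.

```python
import numpy as np

def monte_carlo_sure(y, denoise, sigma, eps=1e-4, seed=None):
    """Black-box MSE estimate for `denoise` applied to y = x + N(0, sigma^2) noise,
    using one Monte-Carlo probe of the denoiser's divergence."""
    rng = np.random.default_rng(seed)
    N = y.size
    fy = denoise(y)
    b = rng.standard_normal(y.shape)                # random probe direction
    div = b @ (denoise(y + eps * b) - fy) / eps     # divergence, by finite difference
    return np.sum((y - fy) ** 2) / N - sigma ** 2 + 2 * sigma ** 2 * div / N

# Tune a soft-threshold level from the noisy data alone, without seeing x.
rng = np.random.default_rng(0)
x = np.sign(rng.standard_normal(4096)) * (rng.random(4096) < 0.1)  # sparse signal
sigma = 0.3
y = x + sigma * rng.standard_normal(4096)
soft = lambda t: (lambda z: np.sign(z) * np.maximum(np.abs(z) - t, 0.0))
ts = np.linspace(0.1, 1.0, 10)
sures = np.array([monte_carlo_sure(y, soft(t), sigma, seed=1) for t in ts])
mses = np.array([np.mean((soft(t)(y) - x) ** 2) for t in ts])  # oracle, for checking
```

Note that `denoise` is only ever evaluated, never inspected, which is what makes the approach applicable to arbitrary (including iterative) denoisers.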
@article{blu2008i, author = "Ramani, S. and Blu, T. and Unser, M.", title = "Monte-{C}arlo {SURE}: {A} Black-Box Optimization of Regularization Parameters for General Denoising Algorithms", journal = "{IEEE} Transactions on Image Processing", month = "September", year = "2008", volume = "17", number = "9", pages = "1540--1554", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008i" } |
Ramani, S., Blu, T. & Unser, M.,"Blind Optimization of Algorithm Parameters for Signal Denoising by Monte-Carlo SURE", Proceedings of the Thirty-Third IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'08), Las Vegas, USA, pp. 905-908, March 30--April 4, 2008. |
We consider the problem of optimizing the parameters of an arbitrary denoising algorithm by minimizing Stein's Unbiased Risk Estimate (SURE) which provides a means of assessing the true mean-squared-error (MSE) purely from the measured data assuming that it is corrupted by Gaussian noise. To accomplish this, we propose a novel Monte-Carlo technique based on a black-box approach which enables the user to compute SURE for an arbitrary denoising algorithm with some specific parameter setting. Our method only requires the response of the denoising algorithm to additional input noise and does not ask for any information about the functional form of the corresponding denoising operator. This, therefore, permits SURE-based optimization of a wide variety of denoising algorithms (global-iterative, pointwise, etc). We present experimental results to justify our claims. |
@inproceedings{blu2008j, author = "Ramani, S. and Blu, T. and Unser, M.", title = "Blind Optimization of Algorithm Parameters for Signal Denoising by {M}onte-{C}arlo {SURE}", booktitle = "Proceedings of the Thirty-Third {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'08})", month = "March 30--April 4,", year = "2008", pages = "905--908", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008j" } |
Ramani, S., Van De Ville, D., Blu, T. & Unser, M.,"Nonideal Sampling and Regularization Theory", IEEE Transactions on Signal Processing, Vol. 56 (3), pp. 1055-1070, March 2008. |
Shannon's sampling theory and its variants provide effective solutions to the problem of reconstructing a signal from its samples in some “shift-invariant” space, which may or may not be bandlimited. In this paper, we present some further justification for this type of representation, while addressing the issue of the specification of the best reconstruction space. We consider a realistic setting where a multidimensional signal is prefiltered prior to sampling, and the samples are corrupted by additive noise. We adopt a variational approach to the reconstruction problem and minimize a data fidelity term subject to a Tikhonov-like (continuous domain) L2-regularization to obtain the continuous-space solution. We present theoretical justification for the minimization of this cost functional and show that the globally minimal continuous-space solution belongs to a shift-invariant space generated by a function (generalized B-spline) that is generally not bandlimited. When the sampling is ideal, we recover some of the classical smoothing spline estimators. The optimal reconstruction space is characterized by a condition that links the generating function to the regularization operator and implies the existence of a B-spline-like basis. To make the scheme practical, we specify the generating functions corresponding to the most popular families of regularization operators (derivatives, iterated Laplacian), as well as a new, generalized one that leads to a new brand of Matérn splines. We conclude the paper by proposing a stochastic interpretation of the reconstruction algorithm and establishing an equivalence with the minimax and minimum mean square error (MMSE/Wiener) solutions of the generalized sampling problem. |
@article{blu2008k, author = "Ramani, S. and Van De Ville, D. and Blu, T. and Unser, M.", title = "Nonideal Sampling and Regularization Theory", journal = "{IEEE} Transactions on Signal Processing", month = "March", year = "2008", volume = "56", number = "3", pages = "1055--1070", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008k" } |
Sekhar, S.C., Leitgeb, R.A., Villiger, M.L., Bachmann, A.H., Blu, T. & Unser, M.,"Non-Iterative Exact Signal Recovery in Frequency Domain Optical Coherence Tomography", Proceedings of the Fourth IEEE International Symposium on Biomedical Imaging (ISBI'07), Arlington, USA, pp. 808-811, April 12--15, 2007. |
We address the problem of exact signal recovery in frequency domain optical coherence tomography (FDOCT) systems. Our technique relies on the fact that, in a spectral interferometry setup, the intensity of the total signal reflected from the object is smaller than that of the reference arm. We develop a novel algorithm to compute the reflected signal amplitude from the interferometric measurements. Our technique is non-iterative, non-linear and it leads to an exact solution in the absence of noise. The reconstructed signal is free from artifacts such as the autocorrelation noise that is normally encountered in the conventional inverse Fourier transform techniques. We present results on synthesized data where we have a benchmark for comparing the performance of the technique. We also report results on experimental FDOCT measurements of the retina of the human eye. |
@inproceedings{blu2007j, author = "Sekhar, S.C. and Leitgeb, R.A. and Villiger, M.L. and Bachmann, A.H. and Blu, T. and Unser, M.", title = "Non-Iterative Exact Signal Recovery in Frequency Domain Optical Coherence Tomography", booktitle = "Proceedings of the Fourth {IEEE} International Symposium on Biomedical Imaging ({ISBI'07})", month = "April 12--15,", year = "2007", pages = "808--811", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007j" } |
Sekhar, S.C., Nazkani, H., Blu, T. & Unser, M.,"A New Technique for High-Resolution Frequency Domain Optical Coherence Tomography", Proceedings of the Thirty-Second IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'07), Honolulu, USA, pp. {I}-425-{I}-428, April 15--20, 2007. |
Frequency domain optical coherence tomography (FDOCT) is a new technique that is well-suited for fast imaging of biological specimens, as well as non-biological objects. The measurements are in the frequency domain, and the objective is to retrieve an artifact-free spatial domain description of the specimen. In this paper, we develop a new technique for model-based retrieval of spatial domain data from the frequency domain data. We use a piecewise-constant model for the refractive index profile that is suitable for multi-layered specimens. We show that the estimation of the layered structure parameters can be mapped into a harmonic retrieval problem, which enables us to use high-resolution spectrum estimation techniques. The new technique that we propose is efficient and requires few measurements. We also analyze the effect of additive measurement noise on the algorithm performance. The experimental results show that the technique gives highly accurate parameter estimates. For example, at a 25 dB signal-to-noise ratio, the mean square error in the position estimate is about 0.01% of the actual value. |
@inproceedings{blu2007k, author = "Sekhar, S.C. and Nazkani, H. and Blu, T. and Unser, M.", title = "A New Technique for High-Resolution Frequency Domain Optical Coherence Tomography", booktitle = "Proceedings of the Thirty-Second {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'07})", month = "April 15--20,", year = "2007", pages = "{I}-425--{I}-428", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007k" } |
van Spaendonck, R., Blu, T., Baraniuk, R. & Vetterli, M.,"Orthogonal Hilbert Transform Filter Banks and Wavelets", Proceedings of the Twenty-Eighth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'03), Hong Kong, China, Vol. {VI}, pp. 505-508, April 6--10, 2003. |
Complex wavelet transforms offer the opportunity to perform directional and coherent processing based on the local magnitude and phase of signals and images. Although denoising, segmentation, and image enhancement are significantly improved using complex wavelets, the redundancy of most current transforms hinders their application in compression and related problems. In this paper we introduce a new orthonormal complex wavelet transform with no redundancy for both real- and complex-valued signals. The transform's filterbank features a real lowpass filter and two complex highpass filters arranged in a critically sampled, three-band structure. Placing symmetry and orthogonality constraints on these filters, we find that each highpass filter can be factored into a real highpass filter followed by an approximate Hilbert transform filter. |
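The magnitude/phase processing that such complex transforms enable can be illustrated with the ideal (FFT-based) Hilbert transformer. This is a sketch of the analytic-signal idea only, not the critically sampled FIR filter bank designed in the paper:

```python
import numpy as np

def analytic_signal(x):
    """Ideal discrete Hilbert transformer: suppress negative frequencies
    in the DFT so that angle(z) is the local phase and |z| the envelope."""
    N = len(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:(N + 1) // 2] = 2.0      # double the positive frequencies
    if N % 2 == 0:
        h[N // 2] = 1.0          # Nyquist bin is kept as-is
    return np.fft.ifft(np.fft.fft(x) * h)

# A pure cosine on the DFT grid becomes a unit-magnitude complex exponential.
n = np.arange(256)
w0 = 2 * np.pi * 12 / 256
z = analytic_signal(np.cos(w0 * n))
```

Here `|z|` is constant and the phase increment per sample equals `w0`: exactly the local amplitude/orientation information that the paper's approximate Hilbert filters extract with short, orthogonal FIR filters instead of a global FFT.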
@inproceedings{blu2003n, author = "van Spaendonck, R. and Blu, T. and Baraniuk, R. and Vetterli, M.", title = "Orthogonal {H}ilbert Transform Filter Banks and Wavelets", booktitle = "Proceedings of the Twenty-Eighth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'03})", month = "April 6--10,", year = "2003", volume = "{VI}", pages = "505--508", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003n" } |
Stalder, A.F., Melchior, T., Müller, M., Sage, D., Blu, T. & Unser, M.,"Low-bond axisymmetric drop shape analysis for surface tension and contact angle measurements of sessile drops", Colloids and Surfaces A: Physicochemical and Engineering Aspects, Vol. 364 (1-3), pp. 72-81, July 2010. |
A new method based on the Young-Laplace equation for measuring contact angles and surface tensions is presented. In this approach, a first-order perturbation technique helps to analytically solve the Young-Laplace equation according to photographic images of axisymmetric sessile drops. When appropriate, the calculated drop contour is extended by mirror symmetry, so that the reflection of the drop on the substrate allows the position of the contact points to be detected. To keep a wide range of applicability, the drop's profile is not discretised; instead, an optimisation of an advanced image-energy term fits an approximation of the Young-Laplace equation to the drop boundaries. In addition, cubic B-spline interpolation is applied to the image of the drop to reach subpixel resolution. To demonstrate the method's accuracy, simulated drops as well as images of liquid coal ash slags were analysed. Thanks to the high-quality image interpolation model and the image-energy term, the experiments demonstrated robust measurements over a wide variety of image types and qualities. The method was implemented in Java and is freely available [A.F. Stalder, LBADSA, Biomedical Imaging Group, EPFL]. |
@article{blu2010i, author = "Stalder, A.F. and Melchior, T. and M\"uller, M. and Sage, D. and Blu, T. and Unser, M.", title = "Low-bond axisymmetric drop shape analysis for surface tension and contact angle measurements of sessile drops", journal = "Colloids and Surfaces {A}: Physicochemical and Engineering Aspects", month = "July", year = "2010", volume = "364", number = "1--3", pages = "72--81", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010i" } |
Stantchev, R.I., Blu, T. & Pickwell-Macpherson, E.,"Total Internal Reflection THz Devices for High Speed Imaging", Proceedings of the 2018 43rd International Conference on Infrared, Millimeter, and Terahertz Waves (IRMMW-THz), Nagoya, Japan, September 9--14, 2018. |
Electron-hole pair photoexcitation switches a semiconductor's response from dielectric to conducting. We show that this process is most efficient in a total internal reflection (TIR) geometry, allowing the use of cheaper, less powerful light sources. Further, by employing a digital micromirror device to spatially pattern the photoexcitation area, we perform imaging with a single-element detector and present solutions to the optical problems of imaging in this geometry. We finally show that, by taking into account the carrier lifetimes in the signal processing, one can improve the acquisition rate by a factor of 5. |
@inproceedings{blu2018i, author = "Stantchev, R.I. and Blu, T. and Pickwell-Macpherson, E.", title = "Total Internal Reflection {THz} Devices for High Speed Imaging", booktitle = "Proceedings of the 2018 43rd International Conference on Infrared, Millimeter, and Terahertz Waves (IRMMW-THz)", month = "September 9--14,", year = "2018", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018i" } |
Stantchev, R.I., Yu, X., Blu, T. & Pickwell-MacPherson, E.,"Real-time terahertz imaging with a single-pixel detector", Nature Communications, Vol. 11 (1), pp. 2535-2542, 21 May 2020. |
Terahertz (THz) radiation is poised to have an essential role in many imaging applications, from industrial inspections to medical diagnosis. However, commercialization is prevented by impractical and expensive THz instrumentation. Single-pixel cameras have emerged as alternatives to multi-pixel cameras due to reduced costs and superior durability. Here, by optimizing the modulation geometry and post-processing algorithms, we demonstrate the acquisition of a THz-video (32×32 pixels at 6 frames-per-second), shown in real-time, using a single-pixel fiber-coupled photoconductive THz detector. A laser diode with a digital micromirror device shining visible light onto silicon acts as the spatial THz modulator. We mathematically account for the temporal response of the system, reduce noise with a lock-in free carrier-wave modulation and realize quick, noise-robust image undersampling. Since our modifications do not impose intricate manufacturing, require long post-processing, nor sacrifice the time-resolving capabilities of THz-spectrometers, their greatest asset, this work has the potential to serve as a foundation for all future single-pixel THz imaging systems. |
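The single-pixel measurement model underlying such systems is easy to sketch. The snippet below is a generic illustration with Hadamard masks, not the paper's photoexcited-silicon modulator, lock-in-free carrier-wave scheme, or temporal-response correction: it simulates sequential mask acquisition of a flattened 32×32 scene and its linear reconstruction.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# One detector reading per displayed mask: y[k] = <mask_k, scene>.
n = 32 * 32
H = hadamard(n)                      # each row is one +/-1 mask pattern
scene = np.zeros(n)
scene[100:140] = 1.0                 # toy 32x32 scene, flattened
y = H @ scene                        # sequential single-pixel readings
recovered = H.T @ y / n              # H @ H.T = n * I, so this inverts exactly
```

In practice a DMD displays binary 0/1 patterns, so each ±1 row is realized as two complementary masks (or the mean detector level is subtracted), and undersampling corresponds to keeping only a subset of the rows of H; both refinements are outside this sketch.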
@article{blu2020e, author = "Stantchev, R.I. and Yu, X. and Blu, T. and Pickwell-MacPherson, E.", title = "Real-time terahertz imaging with a single-pixel detector", journal = "Nature Communications", month = "21 May", year = "2020", volume = "11", number = "1", pages = "2535--2542", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020e" } |
Sun, Z. & Blu, T.,"A Nonlinear Steerable Complex Wavelet Decomposition of Images", Proceedings of the Forty-seventh IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'22), Singapore, pp. 1690-1694, May 22--27, 2022. |
Signal and image representations that are steerable are essential for capturing directional features efficiently. However, those that are successful at achieving directional selectivity usually use too many subbands, resulting in low computational efficiency. In this paper, we propose a two-dimensional nonlinear transform that uses only two subbands to achieve a rotation-invariance property, and enjoys a mirror reconstruction, making it similar to a "tight frame". The two-subband structure is merged into a unique, concise, complex-valued subband that approximates a Wirtinger gradient, which is naturally steerable. Complete steerability, though, is achieved by utilizing the Fourier-Argand representation, which provides a steerable filter able to estimate the amplitude and direction of image features, even in the presence of very high noise. We demonstrate the efficiency of the representation by comparing how it performs in wavelet-based denoising algorithms. |
@inproceedings{blu2022b, author = "Sun, Z. and Blu, T.", title = "A Nonlinear Steerable Complex Wavelet Decomposition of Images", booktitle = "Proceedings of the Forty-seventh {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'22})", month = "May 22--27,", year = "2022", pages = "1690--1694", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2022b", doi = "10.1109/ICASSP43922.2022.9747539" } |
Sun, Z. & Blu, T.,"Empowering Networks With Scale and Rotation Equivariance Using A Similarity Convolution", Proceedings of the Eleventh International Conference on Learning Representations (ICLR), Kigali, Rwanda, May 1--5 2023. arXiv:2303.00326 [cs.CV]
The translational equivariance of Convolutional Neural Networks (CNNs) is one reason for their great success in computer vision. However, these networks do not enjoy more general equivariance properties such as rotation or scaling, which ultimately limits their generalization performance. To address this limitation, we devise a method that endows CNNs with simultaneous equivariance with respect to translation, rotation, and scaling. Our approach defines a convolution-like operation and ensures equivariance based on our proposed scalable Fourier-Argand representation. The method maintains similar efficiency to a traditional network and hardly introduces any additional learnable parameters, since it does not face the computational issue that often occurs in group-convolution operators. We validate the efficacy of our approach in the image classification task, demonstrating its robustness and generalization ability to both scaled and rotated inputs. |
@inproceedings{blu2023b, author = "Sun, Z. and Blu, T.", title = "Empowering Networks With Scale and Rotation Equivariance Using A Similarity Convolution", booktitle = "Proceedings of the Eleventh International Conference on Learning Representations ({ICLR})", month = "May 1--5", year = "2023", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2023b" } |
Sun, Z., Zhang, Z. & Blu, T.,"An Algebraic Optimization Approach to Image Registration", Proceedings of the 2022 IEEE International Conference on Image Processing (ICIP'22), Bordeaux, France, pp. 2776-2780, October 16--19, 2022. |
High speed is an essential requirement for many applications of image registration. However, existing methods are usually time-consuming because of the difficulty of the task. In this paper, departing from the usual feature-based approaches, we convert the matching problem into an algebraic optimization task. By solving a series of quadratic optimization equations, the underlying deformation (rotation, scaling, and shift) between image pairs can be retrieved. This process is extremely fast and can be performed in real time. Experiments show that our method achieves good performance at a much lower computational cost. When used to initialize our earlier parametric Local All-Pass (LAP) registration algorithm, the results improve significantly over the state of the art. |
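The abstract above describes recovering deformation parameters by direct computation rather than feature matching. As a loose, hedged illustration of that idea (not the authors' algorithm, which also handles rotation and scaling), here is the classical algebraic identity that recovers a circular shift from the phase of a DFT ratio; all function names are my own:

```python
import cmath

def dft(x):
    # Naive O(n^2) DFT, adequate for a small illustrative signal.
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * m * k / n)
                for k in range(n)) for m in range(n)]

def estimate_circular_shift(x1, x2):
    """Recover an integer circular shift s (|s| < n/2) with x2[k] = x1[k - s],
    purely algebraically: X2[1] = X1[1] * exp(-2j*pi*s/n)."""
    n = len(x1)
    X1, X2 = dft(x1), dft(x2)
    phase = cmath.phase(X2[1] * X1[1].conjugate())
    return round(-phase * n / (2 * cmath.pi)) % n

x1 = [0, 1, 3, 2, 5, 4, 1, 0, 2, 6, 3, 1, 0, 2, 4, 1]
x2 = [x1[(k - 3) % 16] for k in range(16)]
print(estimate_circular_shift(x1, x2))  # prints 3
```

For an exact integer circular shift the phase relation holds exactly (provided the first harmonic is nonzero), so no iterative search is needed; the paper's contribution is a far more general quadratic-optimization formulation of this kind of direct parameter recovery.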
@inproceedings{blu2022c, author = "Sun, Z. and Zhang, Z. and Blu, T.", title = "An Algebraic Optimization Approach to Image Registration", booktitle = "Proceedings of the 2022 {IEEE} International Conference on Image Processing ({ICIP'22})", month = "October 16--19,", year = "2022", pages = "2776--2780", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2022c", doi = "10.1109/ICIP46576.2022.9897451" } |
Thévenaz, P., Blu, T. & Unser, M.,"Image Interpolation and Resampling", Handbook of Medical Imaging, Processing and Analysis, San Diego CA, USA, pp. 393-420, Academic Press, 2000. |
This chapter presents a survey of interpolation and resampling techniques in the context of exact, separable interpolation of regularly sampled data. In this context, the traditional view of interpolation is to represent an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions—in other words, a mixed convolution equation. An important issue is the choice of adequate synthesis functions that satisfy interpolation properties. Examples of finite-support ones are the square pulse (nearest-neighbor interpolation), the hat function (linear interpolation), the cubic Keys' function, and various truncated or windowed versions of the sinc function. On the other hand, splines provide examples of infinite-support interpolation functions that can be realized exactly at a finite, surprisingly small computational cost. We discuss implementation issues and illustrate the performance of each synthesis function. We also highlight several artifacts that may arise when performing interpolation, such as ringing, aliasing, blocking and blurring. We explain why the approximation order inherent in the synthesis function is important to limit these interpolation artifacts, which motivates the use of splines as a tunable way to keep them in check without any significant cost penalty. |
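As a concrete companion to the survey above, the classical finite-support synthesis functions can be sketched in a few lines of Python (function names and the border clamping are my own choices, not the chapter's code):

```python
def nearest(x):
    # Square pulse: nearest-neighbor interpolation
    return 1.0 if -0.5 <= x < 0.5 else 0.0

def linear(x):
    # Hat function: linear interpolation
    x = abs(x)
    return 1.0 - x if x < 1.0 else 0.0

def keys(x, a=-0.5):
    # Keys' cubic convolution kernel, support [-2, 2]
    x = abs(x)
    if x < 1.0:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2.0:
        return a * x**3 - 5*a * x**2 + 8*a * x - 4*a
    return 0.0

def resample(samples, x, kernel, support):
    # Mixed convolution equation: f(x) = sum_k s[k] * kernel(x - k),
    # with indices clamped at the borders
    n = len(samples)
    return sum(samples[min(max(k, 0), n - 1)] * kernel(x - k)
               for k in range(int(x) - support, int(x) + support + 1))
```

For example, `resample([0, 1, 4, 9, 16, 25], 2.5, keys, 2)` interpolates halfway between the samples 4 and 9; since the Keys kernel (with a = -1/2) reproduces quadratics, it returns 6.25 exactly for these samples of x².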
@incollection{blu2000h, author = "Th{\'{e}}venaz, P. and Blu, T. and Unser, M.", title = "Image Interpolation and Resampling", booktitle = "Handbook of Medical Imaging, Processing and Analysis", publisher = "Academic Press", year = "2000", pages = "393--420", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000h" } |
Thévenaz, P., Blu, T. & Unser, M.,"Short Basis Functions for Constant-variance Interpolation", Proceedings of the SPIE International Symposium on Medical Imaging: Image Processing (MI'08), San Diego, USA, Vol. 6914, pp. 69142L-1-69142L-8, February 16--21, 2008. |
An interpolation model is a necessary ingredient of intensity-based registration methods. The properties of such a model depend entirely on its basis function, which has been traditionally characterized by features such as its order of approximation and its support. However, as has been recently shown, these features are blind to the amount of registration bias created by the interpolation process alone; an additional requirement that has been named constant-variance interpolation is needed to remove this bias. In this paper, we present a theoretical investigation of the role of the interpolation basis in a registration context. Contrarily to published analyses, ours is deterministic; it nevertheless leads to the same conclusion, which is that constant-variance interpolation is beneficial to image registration. In addition, we propose a novel family of interpolation bases that can have any desired order of approximation while maintaining the constant-variance property. Our family includes every constant-variance basis we know of. It is described by an explicit formula that contains two free functional terms: an arbitrary 1-periodic binary function that takes values from {-1, 1}, and another arbitrary function that must satisfy the partition of unity. These degrees of freedom can be harnessed to build many family members for a given order of approximation and a fixed support. We provide the example of a symmetric basis with two orders of approximation that is supported over [-3 ⁄ 2, 3 ⁄ 2]; this support is one unit shorter than a basis of identical order that had been previously published. |
@inproceedings{blu2008l, author = "Th{\'{e}}venaz, P. and Blu, T. and Unser, M.", title = "Short Basis Functions for Constant-variance Interpolation", booktitle = "Proceedings of the {SPIE} International Symposium on Medical Imaging: {I}mage Processing ({MI'08})", month = "February 16--21,", year = "2008", volume = "6914", pages = "69142L-1--69142L-8", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008l" } |
Thévenaz, P., Blu, T. & Unser, M.,"Interpolation Revisited", IEEE Transactions on Medical Imaging, Vol. 19 (7), pp. 739-758, July 2000. |
Based on the theory of approximation, this paper presents a unified analysis of interpolation and resampling techniques. An important issue is the choice of adequate basis functions. We show that, contrary to the common belief, those that perform best are not interpolating. By opposition to traditional interpolation, we call their use generalized interpolation; they involve a prefiltering step when correctly applied. We explain why the approximation order inherent in any basis function is important to limit interpolation artifacts. The decomposition theorem states that any basis function endowed with approximation order can be expressed as the convolution of a B-spline of the same order with another function that has none. This motivates the use of splines and spline-based functions as a tunable way to keep artifacts in check without any significant cost penalty. We discuss implementation and performance issues, and we provide experimental evidence to support our claims. |
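The prefiltering step that distinguishes generalized interpolation from traditional interpolation can be sketched for the cubic B-spline, whose direct transform is a recursive filter with the single pole z = √3 − 2 (this is a simplified version of the standard algorithm; the boundary initialization is truncated and the names are mine):

```python
def bspline3(x):
    # Cubic B-spline, support (-2, 2)
    x = abs(x)
    if x < 1: return 2/3 - x*x + x**3 / 2
    if x < 2: return (2 - x)**3 / 6
    return 0.0

def cubic_spline_coeffs(s):
    # Recursive prefilter ("direct B-spline transform"): one causal and one
    # anticausal first-order recursion with pole z, overall gain (1-z)(1-1/z) = 6.
    z = 3**0.5 - 2
    n = len(s)
    c = [0.0] * n
    # causal initialization: truncated geometric sum (error ~ |z|^24)
    c[0] = sum(6.0 * s[min(k, n - 1)] * z**k for k in range(24))
    for k in range(1, n):                      # causal pass
        c[k] = 6.0 * s[k] + z * c[k - 1]
    c[n - 1] = (z / (z*z - 1)) * (c[n - 1] + z * c[n - 2])  # anticausal init
    for k in range(n - 2, -1, -1):             # anticausal pass
        c[k] = z * (c[k + 1] - c[k])
    return c

def interpolate(c, x):
    # Generalized interpolation: f(x) = sum_k c[k] * beta3(x - k)
    n = len(c)
    return sum(c[min(max(k, 0), n - 1)] * bspline3(x - k)
               for k in range(int(x) - 1, int(x) + 3))
```

Because the prefilter inverts the B-spline sampling operator, `interpolate(cubic_spline_coeffs(s), k)` reproduces `s[k]` at the knots even though the cubic B-spline itself is not an interpolating function; that is exactly the point of the abstract above.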
@article{blu2000i, author = "Th{\'{e}}venaz, P. and Blu, T. and Unser, M.", title = "Interpolation Revisited", journal = "{IEEE} Transactions on Medical Imaging", month = "July", year = "2000", volume = "19", number = "7", pages = "739--758", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000i" } |
Thévenaz, P., Blu, T. & Unser, M.,"Complete Parametrization of Piecewise-Polynomial Interpolators According to Degree, Support, Regularity, and Order", Proceedings of the 2000 IEEE International Conference on Image Processing (ICIP'00), Vancouver, Canada, Vol. II, pp. 335-338, September 10--13, 2000. |
The most essential ingredient of interpolation is its basis function. We have shown in previous papers that this basis need not be necessarily interpolating to achieve good results. On the contrary, several recent studies have confirmed that non-interpolating bases, such as B-splines and O-moms, perform best. This opens up a much wider choice of basis functions. In this paper, we give to the designer the tools that will allow him to characterize this enlarged space of functions. In particular, he will be able to specify up-front the four most important parameters for image processing: degree, support, regularity, and order. The theorems presented here will then allow him to refine his design by dealing with additional coefficients that can be selected freely, without interfering with the main design parameters. |
@inproceedings{blu2000j, author = "Th{\'{e}}venaz, P. and Blu, T. and Unser, M.", title = "Complete Parametrization of Piecewise-Polynomial Interpolators According to Degree, Support, Regularity, and Order", booktitle = "Proceedings of the 2000 {IEEE} International Conference on Image Processing ({ICIP'00})", month = "September 10--13,", year = "2000", volume = "{II}", pages = "335--338", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000j" } |
Unser, M. & Blu, T.,"Comparison of Wavelets from the Point of View of Their Approximation Error", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing VI, San Diego, USA, Vol. 3458, pp. 14-21, July 19--24, 1998. |
We present new quantitative results for the characterization of the L2-error of wavelet-like expansions as a function of the scale a. This yields an extension as well as a simplification of the asymptotic error formulas that have been published previously. We use our bound determinations to compare the approximation power of various families of wavelet transforms. We present explicit formulas for the leading asymptotic constant for both splines and Daubechies wavelets. For a specified approximation error, this allows us to predict the sampling rate reduction that can be obtained by using splines instead of Daubechies wavelets. In particular, we prove that the gain in sampling density (splines vs. Daubechies) converges to π as the order goes to infinity. |
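The asymptotic error formulas the abstract refers to have the general Strang-Fix form (a sketch of the generic statement, not the paper's exact refinement): for a sufficiently smooth f approximated in a shift-invariant space at scale a,

```latex
\| f - P_a f \|_{L_2} \;\sim\; C_\varphi \, a^{L} \, \| f^{(L)} \|_{L_2},
\qquad a \to 0,
```

where L is the approximation order of the generating function φ and C_φ is the leading asymptotic constant; the paper's comparison amounts to computing C_φ explicitly for spline and Daubechies generators.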
@inproceedings{blu1998g, author = "Unser, M. and Blu, T.", title = "Comparison of Wavelets from the Point of View of Their Approximation Error", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {VI}", month = "July 19--24,", year = "1998", volume = "3458", pages = "14--21", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1998g" } |
Unser, M. & Blu, T.,"Spline Wavelets with Fractional Order of Approximation", Wavelet Applications Workshop, Monte Verità, Switzerland, September 28--October 2, 1998. |
We extend Schoenberg's family of polynomial splines with uniform knots to all fractional degrees α > -1/2. These splines, which involve linear combinations of the one-sided power functions x_+^α = max(0, x)^α, are α-Hölder continuous for α > 0. We construct the corresponding B-splines by taking fractional finite differences and provide an explicit characterization in both time and frequency domains. We show that these functions satisfy most of the properties of the traditional B-splines, including the convolution property, and a generalized fractional differentiation rule that involves finite differences only. We characterize the decay of the fractional B-splines, which are not compactly supported for non-integral α's. |
@inproceedings{blu1998h, author = "Unser, M. and Blu, T.", title = "Spline Wavelets with Fractional Order of Approximation", booktitle = "Wavelet Applications Workshop", month = "September 28--October 2,", year = "1998", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1998h" } |
Unser, M. & Blu, T.,"Construction of Fractional Spline Wavelet Bases", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing VII, Denver, USA, Vol. 3813, pp. 422-431, July 19--23, 1999. |
We extend Schoenberg's B-splines to all fractional degrees α > -1/2. These splines are constructed using linear combinations of the integer shifts of the power functions x_+^α (one-sided) or |x|^α (symmetric); in each case, they are α-Hölder continuous for α > 0. They satisfy most of the properties of the traditional B-splines; in particular, the Riesz basis condition and the two-scale relation, which makes them suitable for the construction of new families of wavelet bases. What is especially interesting from a wavelet perspective is that the fractional B-splines have a fractional order of approximation (α+1), while they reproduce the polynomials of degree ⌈α⌉. We show how they yield continuous-order generalizations of the orthogonal Battle-Lemarié wavelets and of the semi-orthogonal B-spline wavelets. As α increases, these latter wavelets tend to be optimally localized in time and frequency in the sense specified by the uncertainty principle. The corresponding analysis wavelets also behave like fractional differentiators; they may therefore be used to whiten fractional Brownian motion processes. |
@inproceedings{blu1999h, author = "Unser, M. and Blu, T.", title = "Construction of Fractional Spline Wavelet Bases", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {VII}", month = "July 19--23,", year = "1999", volume = "3813", pages = "422--431", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1999h" } |
Unser, M. & Blu, T.,"Fractional Splines and Wavelets", SIAM Review, Vol. 42 (1), pp. 43-67, March 2000. |
We extend Schoenberg's family of polynomial splines with uniform knots to all fractional degrees α > -1. These splines, which involve linear combinations of the one-sided power functions x_+^α = max(0, x)^α, belong to L1 and are α-Hölder continuous for α > 0. We construct the corresponding B-splines by taking fractional finite differences and provide an explicit characterization in both time and frequency domains. We show that these functions satisfy most of the properties of the traditional B-splines, including the convolution property, and a generalized fractional differentiation rule that involves finite differences only. We characterize the decay of the B-splines, which are not compactly supported for non-integral α's. Their most astonishing feature (in reference to the Strang-Fix theory) is that they have a fractional order of approximation α + 1 while they reproduce the polynomials of degree ⌈α⌉. For α > 1/2, they satisfy all the requirements for a multiresolution analysis of L2 (Riesz bounds, two scale relation) and may therefore be used to build new families of wavelet bases with a continuously-varying order parameter. Our construction also yields symmetrized fractional B-splines which provide the connection with Duchon's general theory of radial (m,s)-splines (including thin-plate splines). In particular, we show that the symmetric version of our splines can be obtained as solution of a variational problem involving the norm of a fractional derivative. (Front cover). |
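For reference, the causal fractional B-spline discussed above is obtained by applying fractional finite differences to the one-sided power function (a sketch consistent with the abstract's definitions):

```latex
\beta_+^{\alpha}(x)
\;=\; \frac{\Delta_+^{\alpha+1}\, x_+^{\alpha}}{\Gamma(\alpha+1)}
\;=\; \frac{1}{\Gamma(\alpha+1)} \sum_{k \ge 0} (-1)^k \binom{\alpha+1}{k}\,(x-k)_+^{\alpha},
\qquad
\hat{\beta}_+^{\alpha}(\omega) \;=\; \left( \frac{1 - e^{-j\omega}}{j\omega} \right)^{\alpha+1}.
```

The frequency-domain expression makes the convolution property and the α + 1 order of approximation immediate, and reduces to the classical B-spline formula when α is a non-negative integer.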
@article{blu2000k, author = "Unser, M. and Blu, T.", title = "Fractional Splines and Wavelets", journal = "{SIAM} Review", month = "March", year = "2000", volume = "42", number = "1", pages = "43--67", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000k" } |
Unser, M. & Blu, T.,"Wavelets and Radial Basis Functions: A Unifying Perspective", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing VIII, San Diego, USA, Vol. 4119, pp. 487-493, July 31--August 4, 2000. |
Wavelets and radial basis functions (RBF) are two rather distinct ways of representing signals in terms of shifted basis functions. An essential aspect of RBF, which makes the method applicable to non-uniform grids, is that the basis functions, unlike wavelets, are non-local; in addition, they do not involve any scaling at all. Despite these fundamental differences, we show that the two types of representation are closely connected. We use linear splines as a motivating example. These can be constructed by using translates of the one-sided ramp function (which is not localized) or, more conventionally, by using the shifts of a linear B-spline. This latter function, which is the prototypical example of a scaling function, can be obtained by localizing the one-sided ramp function using finite differences. We then generalize the concept and identify the whole class of self-similar radial basis functions that can be localized to yield conventional multiresolution wavelet bases. Conversely, we prove that, for any compactly supported scaling function φ(x), there exists a one-sided central basis function ρ+(x) that spans the same multiresolution subspaces. The central property is that the multiresolution bases are generated by simple translation of ρ+, without any dilation. |
@inproceedings{blu2000l, author = "Unser, M. and Blu, T.", title = "Wavelets and Radial Basis Functions: {A} Unifying Perspective", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {VIII}", month = "July 31--August 4,", year = "2000", volume = "4119", pages = "487--493", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000l" } |
Unser, M. & Blu, T.,"Why Restrict Ourselves to Compactly Supported Basis Functions?", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing IX, San Diego, USA, Vol. 4478, pp. 311-314, July 29--August 1, 2001. |
Compact support is undoubtedly one of the wavelet properties that is given the greatest weight both in theory and applications. It is usually believed to be essential for two main reasons: (1) to have fast numerical algorithms, and (2) to have good time or space localization properties. Here, we argue that this constraint is unnecessarily restrictive and that fast algorithms and good localization can also be achieved with non-compactly supported basis functions. By dropping the compact support requirement, one gains in flexibility. This opens up new perspectives such as fractional wavelets whose key parameters (order, regularity, etc…) are tunable in a continuous fashion. To make our point, we draw an analogy with the closely related task of image interpolation. This is an area where it was believed until very recently that interpolators should be designed to be compactly supported for best results. Today, there is compelling evidence that non-compactly supported interpolators (such as splines, and others) provide the best cost/performance tradeoff. |
@inproceedings{blu2001j, author = "Unser, M. and Blu, T.", title = "Why Restrict Ourselves to Compactly Supported Basis Functions?", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {IX}", month = "July 29--August 1,", year = "2001", volume = "4478", pages = "311--314", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001j" } |
Unser, M. & Blu, T.,"Fractional Splines and Wavelets: From Theory to Applications", Joint IDR-IMA Workshop: Ideal Data Representation, Minneapolis, USA, April 9--13, 2001. |
In the first part, we present the theory of fractional splines, an extension of the polynomial splines for non-integer degrees. Their basic constituents are piecewise power functions of degree α. The corresponding B-splines are obtained through a localization process similar to the classical one, replacing finite differences by fractional differences. We show that the fractional B-splines share virtually all the properties of the classical B-splines, including the two-scale relation, and can therefore be used to define new wavelet bases with a continuously varying order parameter. We discuss some of their remarkable properties; in particular, the fact that the fractional spline wavelets behave like fractional derivatives of order α + 1. In the second part, we turn to applications. We first describe a fast implementation of the fractional wavelet transform, which is essential to make the method practical. We then present an application of fractional splines to tomographic reconstruction where we take advantage of explicit formulas for computing the fractional derivatives of splines. We also make the connection with ridgelets. Finally, we consider the use of fractional wavelets for the detection and localization of brain activation in fMRI sequences. Here, we take advantage of the continuously varying order parameter which allows us to fine-tune the localization properties of the basis functions. |
@inproceedings{blu2001k, author = "Unser, M. and Blu, T.", title = "Fractional Splines and Wavelets: {F}rom Theory to Applications", booktitle = "Joint {IDR-IMA} Workshop: {I}deal Data Representation", month = "April 9--13,", year = "2001", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001k" } |
Unser, M. & Blu, T.,"Fractional Wavelets: Properties and Applications", Proceedings of the First 2002 SIAM Conference on Imaging Science (SIAG-IS'02), Boston, USA, Vol. MS1, pp. 33, March 4--6, 2002. |
We introduce the concept of fractional wavelets which extends the conventional theory to non-integer orders. This allows for the construction of new wavelet bases that are indexed by a continuously-varying order parameter, as opposed to an integer. An essential feature of the method is to gain control over the key wavelet properties (regularity, time-frequency localization, etc…). Practically, this translates into the fact that all important wavelet parameters are adjustable in a continuous fashion so that the new basis functions can be fine-tuned for the application at hand. We present some specific examples of wavelets (fractional splines) and investigate the main implications of the fractional order property. In particular, we prove that these wavelets essentially behave like fractional derivative operators which makes them good candidates for the analysis and synthesis of fractal-like processes. We also consider non-separable extensions to quincunx lattices which are well suited for image processing. Finally, we deal with the practical aspect of the evaluation of these transforms and present a fast implementation based on the FFT. |
@inproceedings{blu2002k, author = "Unser, M. and Blu, T.", title = "Fractional Wavelets: Properties and Applications", booktitle = "Proceedings of the First 2002 {SIAM} Conference on Imaging Science ({SIAG-IS'02})", month = "March 4--6,", year = "2002", volume = "MS1", pages = "33", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002k" } |
Unser, M. & Blu, T.,"Mathematical Properties of the JPEG2000 Wavelet Filters", IEEE Transactions on Image Processing, Vol. 12 (9), pp. 1080-1090, September 2003. |
The LeGall 5⁄3 and Daubechies 9⁄7 filters have risen to special prominence because they were selected for inclusion in the JPEG2000 standard. Here, we determine their key mathematical features: Riesz bounds, order of approximation, and regularity (Hölder and Sobolev). We give approximation theoretic quantities such as the asymptotic constant for the L2 error and the angle between the analysis and synthesis spaces which characterizes the loss of performance with respect to an orthogonal projection. We also derive new asymptotic error formulæ that exhibit bound constants that are proportional to the magnitude of the first nonvanishing moment of the wavelet. The Daubechies 9⁄7 stands out because it is very close to orthonormal, but this turns out to be slightly detrimental to its asymptotic performance when compared to other wavelets with four vanishing moments. |
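The reversible integer form of the LeGall 5/3 filter analyzed above is usually implemented as a two-step lifting scheme (predict, then update). The following single-level sketch follows the standard's floor-division convention and symmetric boundary extension; the function names are mine:

```python
def legall53_forward(x):
    # One level of the reversible LeGall 5/3 lifting transform (as in JPEG2000);
    # even-length input assumed, symmetric extension at the borders.
    n = len(x)
    d = [x[2*i + 1] - (x[2*i] + x[min(2*i + 2, n - 2)]) // 2
         for i in range(n // 2)]                     # predict step (highpass)
    s = [x[2*i] + (d[max(i - 1, 0)] + d[i] + 2) // 4
         for i in range(n // 2)]                     # update step (lowpass)
    return s, d

def legall53_inverse(s, d):
    m = len(s)
    x = [0] * (2 * m)
    for i in range(m):                               # undo the update step
        x[2*i] = s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4
    for i in range(m):                               # undo the predict step
        x[2*i + 1] = d[i] + (x[2*i] + x[min(2*i + 2, 2*m - 2)]) // 2
    return x
```

Because each lifting step is undone exactly (same integer expression, opposite sign), the transform is perfectly reversible on integers, which is what makes the 5/3 filter suitable for lossless coding in JPEG2000.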
@article{blu2003o, author = "Unser, M. and Blu, T.", title = "Mathematical Properties of the {JPEG2000} Wavelet Filters", journal = "{IEEE} Transactions on Image Processing", month = "September", year = "2003", volume = "12", number = "9", pages = "1080--1090", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003o" } |
Unser, M. & Blu, T.,"Wavelet Theory Demystified", IEEE Transactions on Signal Processing, Vol. 51 (2), pp. 470-483, February 2003. |
In this paper, we revisit wavelet theory starting from the representation of a scaling function as the convolution of a B-spline (the regular part of it) and a distribution (the irregular or residual part). This formulation leads to some new insights on wavelets and makes it possible to rederive the main results of the classical theory—including some new extensions for fractional orders—in a self-contained, accessible fashion. In particular, we prove that the B-spline component is entirely responsible for five key wavelet properties: order of approximation, reproduction of polynomials, vanishing moments, multiscale differentiation property, and smoothness (regularity) of the basis functions. We also investigate the interaction of wavelets with differential operators giving explicit time domain formulas for the fractional derivatives of the basis functions. This allows us to specify a corresponding dual wavelet basis and helps us understand why the wavelet transform provides a stable characterization of the derivatives of a signal. Additional results include a new peeling theory of smoothness, leading to the extended notion of wavelet differentiability in the Lp-sense and a sharper theorem stating that smoothness implies order. |
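The factorization at the heart of the paper can be summarized in the Fourier domain (notation sketched from the abstract, not quoted from the paper):

```latex
\hat{\varphi}(\omega) \;=\; \hat{\beta}^{\alpha}(\omega)\,\hat{u}(\omega),
```

where φ is the scaling function, β^α is a B-spline (the regular part, which carries the approximation order, polynomial reproduction, vanishing moments, multiscale differentiation, and smoothness) and u is a distribution with bounded Fourier transform (the residual part, which contributes none of these properties).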
@article{blu2003p, author = "Unser, M. and Blu, T.", title = "Wavelet Theory Demystified", journal = "{IEEE} Transactions on Signal Processing", month = "February", year = "2003", volume = "51", number = "2", pages = "470--483", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003p" } |
Unser, M. & Blu, T.,"Fractional Wavelets, Derivatives, and Besov Spaces", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing X, San Diego, USA, Vol. 5207, pp. 147-152, August 3--8, 2003. Part I. |
We show that a multi-dimensional scaling function of order γ (possibly fractional) can always be represented as the convolution of a polyharmonic B-spline of order γ and a distribution with a bounded Fourier transform which has neither order nor smoothness. The presence of the B-spline convolution factor explains all key wavelet properties: order of approximation, reproduction of polynomials, vanishing moments, multi-scale differentiation property, and smoothness of the basis functions. The B-spline factorization also gives new insights on the stability of wavelet bases with respect to differentiation. Specifically, we show that there is a direct correspondence between the process of moving a B-spline factor from one side to another in a pair of biorthogonal scaling functions and the exchange of fractional integrals/derivatives on their wavelet counterparts. This result yields two “eigen-relations” for fractional differential operators that map biorthogonal wavelet bases into other stable wavelet bases. This formulation provides a better understanding as to why the Sobolev/Besov norm of a signal can be measured from the lp-norm of its rescaled wavelet coefficients. Indeed, the key condition for a wavelet basis to be an unconditional basis of the Besov space Bqs(Lp(Rd)) is that the s-order derivative of the wavelet be in Lp. |
@inproceedings{blu2003q, author = "Unser, M. and Blu, T.", title = "Fractional Wavelets, Derivatives, and {B}esov Spaces", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {X}", month = "August 3--8,", year = "2003", volume = "5207", pages = "147--152", note = "Part {I}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003q" } |
Unser, M. & Blu, T.,"The Spline Foundation of Wavelet Theory", International Conference on Wavelets and Splines (EIMI-WS'03), Saint Petersburg, Russia, pp. 98-99, July 3--8, 2003. |
Recently, we came up with two interesting generalizations of polynomial splines by extending the degree of the generating functions to both real and complex exponents. While these may qualify as exotic constructions at first sight, we show here that both types of splines (fractional and complex) play a truly fundamental role in wavelet theory and that they lead to a better understanding of what wavelets really are. To this end, we first revisit wavelet theory starting from the representation of a scaling function as the convolution of a B-spline (the regular part of it) and a distribution (the irregular or residual part). This formulation leads to some new insights on wavelets and makes it possible to re-derive the main results of the classical theory—including some new extensions for fractional orders—in a self-contained, accessible fashion. In particular, we prove that the B-spline component is entirely responsible for five key wavelet properties: order of approximation, reproduction of polynomials, vanishing moments, multi-scale differentiation, and smoothness (regularity) of the basis functions. Second, we show that any scaling function can be expanded as a sum of harmonic splines (a particular subset of the splines with complex exponents); these play essentially the same role here as the Fourier exponentials do for periodic signals. This harmonic expansion provides an explicit time-domain representation of scaling functions and wavelets; it also explains their fractal nature. Remarkably, truncating the expansion preserves the essential multiresolution property (two-scale relation). Keeping the first term alone yields a fractional-spline approximation that captures most of the important wavelet features; e.g., its general shape and smoothness. |
@inproceedings{blu2003r, author = "Unser, M. and Blu, T.", title = "The Spline Foundation of Wavelet Theory", booktitle = "International Conference on Wavelets and Splines ({EIMI-WS'03})", month = "July 3--8,", year = "2003", pages = "98--99", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003r" } |
Unser, M. & Blu, T.,"A Unifying Spline Formulation for Stochastic Signal Processing [Or How Schoenberg Meets Wiener, with the Help of Tikhonov]", Second International Conference on Computational Harmonic Analysis, Nineteenth Annual Shanks Lecture (CHA'04), Nashville, USA, May 24--30, 2004. Plenary talk. |
We introduce an extended class of cardinal L-splines where L is a pseudo-differential—but not necessarily local—operator satisfying some admissibility conditions. This family is quite general and includes a variety of standard constructions including the polynomial, elliptic, exponential, and fractional splines. In order to fit such splines to the noisy samples of a signal, we specify a corresponding smoothing spline problem which involves an L-semi-norm regularization term. We prove that the optimal solution, among all possible functions, is a cardinal L*L-spline which has a stable representation in a B-spline-like basis. We show that the coefficients of this spline estimator can be computed by digital filtering of the input samples; we also describe an efficient recursive filtering algorithm that is applicable whenever the transfer function of L is rational. We justify this procedure statistically by establishing an equivalence between L*L smoothing splines and the MMSE (minimum mean square error) estimation of a stationary signal corrupted by white Gaussian noise. In this model-based formulation, the optimum operator L is the whitening filter of the process, and the regularization parameter is proportional to the noise variance. Thus, the proposed formalism yields the optimal discretization of the classical Wiener filter, together with a fast recursive algorithm. It extends the standard Wiener solution by providing the optimal interpolation space. We also present a Bayesian interpretation of such spline estimators. |
@inproceedings{blu2004i, author = "Unser, M. and Blu, T.", title = "A Unifying Spline Formulation for Stochastic Signal Processing [{O}r How {S}choenberg Meets {W}iener, with the Help of {T}ikhonov]", booktitle = "Second International Conference on Computational Harmonic Analysis, Nineteenth Annual Shanks Lecture ({CHA'04})", month = "May 24--30,", year = "2004", note = "Plenary talk", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004i" } |
Unser, M. & Blu, T.,"Generalized Smoothing Splines and the Optimal Discretization of the Wiener Filter", IEEE Transactions on Signal Processing, Vol. 53 (6), pp. 2146-2159, June 2005. |
We introduce an extended class of cardinal L*L-splines, where L is a pseudo-differential operator satisfying some admissibility conditions. We show that the L*L-spline signal interpolation problem is well posed and that its solution is the unique minimizer of the spline energy functional ||Ls||²_{L2}, subject to the interpolation constraint. Next, we consider the corresponding regularized least squares estimation problem, which is more appropriate for dealing with noisy data. The criterion to be minimized is the sum of a quadratic data term, which forces the solution to be close to the input samples, and a “smoothness” term that privileges solutions with small spline energies. Here, too, we find that the optimal solution, among all possible functions, is a cardinal L*L-spline. We show that this smoothing spline estimator has a stable representation in a B-spline-like basis and that its coefficients can be computed by digital filtering of the input signal. We describe an efficient recursive filtering algorithm that is applicable whenever the transfer function of L is rational (which corresponds to the case of exponential splines). We justify these algorithms statistically by establishing an equivalence between L*L smoothing splines and the minimum mean square error (MMSE) estimation of a stationary signal corrupted by white Gaussian noise. In this model-based formulation, the optimum operator L is the whitening filter of the process, and the regularization parameter is proportional to the noise variance. Thus, the proposed formalism yields the optimal discretization of the classical Wiener filter, together with a fast recursive algorithm. It extends the standard Wiener solution by providing the optimal interpolation space. We also present a Bayesian interpretation of the algorithm. |
@article{blu2005h, author = "Unser, M. and Blu, T.", title = "Generalized Smoothing Splines and the Optimal Discretization of the {W}iener Filter", journal = "{IEEE} Transactions on Signal Processing", month = "June", year = "2005", volume = "53", number = "6", pages = "2146--2159", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005h" } |
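The regularized least-squares criterion described in this abstract can be illustrated with a small discrete sketch. This is not the authors' algorithm: purely for illustration, the operator L is replaced by a finite second-difference matrix D, and the function name is hypothetical.

```python
import numpy as np

def smoothing_spline_discrete(y, lam):
    """Discrete stand-in for the smoothing-spline estimator:
    minimize ||c - y||^2 + lam * ||D c||^2, where the second-difference
    matrix D plays the role of the operator L.  The minimizer solves
    the normal equations (I + lam * D^T D) c = y."""
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]   # discrete second derivative
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# Noisy samples of a smooth signal
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * t)
y = clean + 0.3 * rng.standard_normal(t.size)
c = smoothing_spline_discrete(y, lam=50.0)
```

As the regularization parameter tends to zero the estimate reproduces the samples exactly; increasing it penalizes the "spline energy" ||Dc||², mirroring the role of the noise variance in the MMSE/Wiener interpretation of the abstract.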
Unser, M. & Blu, T.,"Cardinal Exponential Splines: Part I---Theory and Filtering Algorithms", IEEE Transactions on Signal Processing, Vol. 53 (4), pp. 1425-1438, April 2005. |
Causal exponentials play a fundamental role in classical system theory. Starting from those elementary building blocks, we propose a complete and self-contained signal processing formulation of exponential splines defined on a uniform grid. We specify the corresponding B-spline basis functions and investigate their reproduction properties (Green function and exponential polynomials); we also characterize their stability (Riesz bounds). We show that the exponential B-spline framework allows an exact implementation of continuous-time signal processing operators including convolution, differential operators, and modulation, by simple processing in the discrete B-spline domain. We derive efficient filtering algorithms for multiresolution signal extrapolation and approximation, extending earlier results for polynomial splines. Finally, we present a new asymptotic error formula that predicts the magnitude and the Nth-order decay of the L2-approximation error as a function of the knot spacing T. Please consult also the companion paper by M. Unser, "Cardinal Exponential Splines: Part II—Think Analog, Act Digital," IEEE Transactions on Signal Processing, vol. 53, no. 4, pp. 1439-1449, April 2005. |
@article{blu2005i, author = "Unser, M. and Blu, T.", title = "Cardinal Exponential Splines: {P}art {I}---{T}heory and Filtering Algorithms", journal = "{IEEE} Transactions on Signal Processing", month = "April", year = "2005", volume = "53", number = "4", pages = "1425--1438", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005i" } |
Unser, M. & Blu, T.,"Self-Similarity: Part I---Splines and Operators", IEEE Transactions on Signal Processing, Vol. 55 (4), pp. 1352-1363, April 2007. |
The central theme of this pair of papers (Parts I and II in this issue) is self-similarity, which is used as a bridge for connecting splines and fractals. The first part of the investigation is deterministic, and the context is that of L-splines; these are defined in the following terms: s(t) is a cardinal L-spline iff L{s(t)} = ∑_{k∈ℤ} a[k] δ(t−k), where L is a suitable pseudodifferential operator. Our starting point for the construction of “self-similar” splines is the identification of the class of differential operators L that are both translation and scale invariant. This results in a two-parameter family of generalized fractional derivatives, ∂_τ^γ, where γ is the order of the derivative and τ is an additional phase factor. We specify the corresponding L-splines, which yield an extended class of fractional splines. The operator ∂_τ^γ is used to define a scale-invariant energy measure—the squared L2-norm of the γth derivative of the signal—which provides a regularization functional for interpolating or fitting the noisy samples of a signal. We prove that the corresponding variational (or smoothing) spline estimator is a cardinal fractional spline of order 2γ, which admits a stable representation in a B-spline basis. We characterize the equivalent frequency response of the estimator and show that it closely matches that of a classical Butterworth filter of order 2γ. We also establish a formal link between the regularization parameter λ and the cutoff frequency of the smoothing spline filter: ω₀ ≅ λ^{−1/(2γ)}. Finally, we present an efficient computational solution to the fractional smoothing spline problem: It uses the fast Fourier transform and takes advantage of the multiresolution properties of the underlying basis functions. Please consult also the companion paper by T. Blu, M. Unser, "Self-Similarity: Part II—Optimal Estimation of Fractal Processes," IEEE Transactions on Signal Processing, vol. 55, no. 4, pp. 1364-1378, April 2007. |
@article{blu2007l, author = "Unser, M. and Blu, T.", title = "Self-Similarity: {P}art {I}---{S}plines and Operators", journal = "{IEEE} Transactions on Signal Processing", month = "April", year = "2007", volume = "55", number = "4", pages = "1352--1363", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007l" } |
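The FFT-based solution mentioned at the end of the abstract can be sketched as follows: a periodic, discretized approximation in which the lowpass response 1/(1 + λ|ω|^{2γ}) plays the role of the Butterworth-like smoothing spline filter, with cutoff ω₀ = λ^(−1/(2γ)). Function name and parameter values are illustrative only.

```python
import numpy as np

def fractional_smoothing_fft(y, lam, gamma):
    """Periodic fractional smoothing computed with the FFT.
    The frequency response 1/(1 + lam * |w|^(2*gamma)) closely matches
    a Butterworth filter of order 2*gamma, with cutoff frequency
    w0 = lam ** (-1 / (2 * gamma))."""
    w = 2 * np.pi * np.fft.fftfreq(len(y))     # discrete frequencies
    H = 1.0 / (1.0 + lam * np.abs(w) ** (2.0 * gamma))
    return np.real(np.fft.ifft(H * np.fft.fft(y)))

# Smooth a step; the order gamma need not be an integer
y = np.concatenate([np.zeros(64), np.ones(64)])
smoothed = fractional_smoothing_fft(y, lam=10.0, gamma=0.8)
```

Since H(0) = 1, the filter preserves the mean of the signal while attenuating high frequencies, which is the qualitative behavior the abstract attributes to the smoothing spline estimator.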
Unser, M., Horbelt, S. & Blu, T.,"Fractional Derivatives, Splines and Tomography", Proceedings of the Tenth European Signal Processing Conference (EUSIPCO'00), Tampere, Finland, Vol. {IV}, pp. 2017-2020, September 4--8, 2000. |
We develop a spline calculus for dealing with fractional derivatives. After a brief review of fractional splines, we present the main formulas for computing the fractional derivatives of the underlying basis functions. In particular, we show that the γth fractional derivative of a B-spline of degree α (not necessarily integer) is given by the γth fractional difference of a B-spline of degree α−γ. We use these results to derive an improved version of the filtered backprojection algorithm for tomographic reconstruction. The projection data is first interpolated with splines; the continuous model is then used explicitly for an exact implementation of the filtering and backprojection steps. |
@inproceedings{blu2000m, author = "Unser, M. and Horbelt, S. and Blu, T.", title = "Fractional Derivatives, Splines and Tomography", booktitle = "Proceedings of the Tenth European Signal Processing Conference ({EUSIPCO'00})", month = "September 4--8,", year = "2000", volume = "{IV}", pages = "2017--2020", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000m" } |
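The central formula — a γth fractional derivative acts on splines as a γth fractional difference — reduces in practice to filtering with the generalized binomial weights (−1)^k C(γ, k). A minimal sketch, truncating the infinite filter; the function names are illustrative:

```python
import numpy as np

def frac_diff_weights(gamma, n_taps):
    """Weights w[k] = (-1)^k * binomial(gamma, k) of the gamma-th
    fractional difference operator, computed by the stable recursion
    w[k] = w[k-1] * (k - 1 - gamma) / k (valid for any real gamma)."""
    w = np.empty(n_taps)
    w[0] = 1.0
    for k in range(1, n_taps):
        w[k] = w[k - 1] * (k - 1 - gamma) / k
    return w

def frac_difference(x, gamma, n_taps=64):
    """Apply the (truncated) gamma-th fractional difference filter."""
    return np.convolve(x, frac_diff_weights(gamma, n_taps))[: len(x)]
```

A useful sanity check is the semigroup property: convolving the half-order weights with themselves reproduces the ordinary first-difference filter [1, −1], since the one-sided series multiply exactly term by term.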
Urigüen, J.A., Blu, T. & Dragotti, P.-L.,"FRI Sampling with Arbitrary Kernels", IEEE Transactions on Signal Processing, Vol. 61 (21), pp. 5310-5323, November 2013. |
This paper addresses the problem of sampling non-bandlimited signals within the Finite Rate of Innovation (FRI) setting. We had previously shown that, by using sampling kernels whose integer span contains specific exponentials (generalized Strang-Fix conditions), it is possible to devise non-iterative, fast reconstruction algorithms from very low-rate samples. Yet, the accuracy and sensitivity to noise of these algorithms are highly dependent on these exponential reproducing kernels --- actually, on the exponentials that they reproduce. Hence, our first contribution here is to provide clear guidelines on how to choose the sampling kernels optimally, in such a way that the reconstruction quality is maximized in the presence of noise. The optimality of these kernels is validated by comparing with the Cramér-Rao lower bounds (CRLB). Our second contribution is to relax the exact exponential reproduction requirement. Instead, we demonstrate that arbitrary sampling kernels can reproduce the "best" exponentials with quite high accuracy in general, and that applying the exact FRI algorithms in this approximate context results in near-optimal reconstruction accuracy for practical noise levels. Essentially, we propose a universal extension of the FRI approach to arbitrary sampling kernels. Numerical results checked against the CRLB validate the various contributions of the paper and in particular outline the ability of arbitrary sampling kernels to be used in FRI algorithms. |
@article{blu2013f, author = "Urig{\"u}en, J.A. and Blu, T. and Dragotti, P.-L.", title = "{FRI} Sampling with Arbitrary Kernels", journal = "{IEEE} Transactions on Signal Processing", month = "November", year = "2013", volume = "61", number = "21", pages = "5310--5323", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013f" } |
Urigüen, J.A., Dragotti, P.L. & Blu, T.,"Method and apparatus for sampling and reconstruction of signals", International patent WO/2014/191771, December 2014. |
A signal processing method for estimating a frequency domain representation of a signal from a series of samples distorted by an instrument function, the method comprising: obtaining the series of samples; obtaining a set of coefficients that fit a set of basis functions to a complex exponential function, wherein the set of basis functions comprises a plurality of basis functions each defined by a shifted version of the instrument function in a signal domain; and estimating the frequency domain representation of the signal based on the series of samples and the coefficients. The estimate of the instrument function is based on a characterisation of the instrument function in the frequency domain at frequencies associated with the complex exponential function. |
@misc{blu2014f, author = "Urig\"{u}en, J.A. and Dragotti, P.L. and Blu, T.", title = "Method and apparatus for sampling and reconstruction of signals", month = "December", year = "2014", note = "International patent {WO}/2014/191771", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2014f" } |
Urigüen, J.A., Dragotti, P.-L. & Blu, T.,"On The Exponential Reproducing Kernels for Sampling Signals with Finite Rate of Innovation", Proceedings of the Ninth International Workshop on Sampling Theory and Applications (SampTA'11), Singapore, May 2--6, 2011. |
The theory of Finite Rate of Innovation (FRI) broadened the traditional sampling paradigm to certain classes of parametric signals. In the presence of noise, the original procedures are not as stable, and a different treatment is needed. In this paper we review the ideal FRI sampling scheme and some of the existing techniques to combat noise. We then present alternative denoising methods for the case of exponential reproducing kernels. We first consider variations of existing subspace-based approaches. We also discuss how to design exponential reproducing kernels that are most robust to noise. |
@inproceedings{blu2011i, author = "Urig{\"u}en, J.A. and Dragotti, P.-L. and Blu, T.", title = "On The Exponential Reproducing Kernels for Sampling Signals with Finite Rate of Innovation", booktitle = "Proceedings of the Ninth International Workshop on Sampling Theory and Applications ({SampTA'11})", month = "May 2--6,", year = "2011", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011i" } |
Van De Ville, D., Bathellier, B., Accolla, R., Carleton, A., Blu, T. & Unser, M.,"Wavelet-Based Detection of Stimulus Responses in Time-Lapse Microscopy", Proceedings of the Thirty-First IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'06), Toulouse, France, pp. {V}-1161-{V}-1164, May 14--19, 2006. |
Many experimental paradigms in biology aim at studying the response to coordinated stimuli. In dynamic imaging experiments, the observed data is often not straightforward to interpret and not directly measurable in a quantitative fashion. Consequently, the data is typically preprocessed in an ad hoc fashion and the results subjected to a statistical inference at the level of a population. We propose a new framework for analyzing time-lapse images that exploits some a priori knowledge on the type of temporal response and takes advantage of the spatial correlation of the data. This is achieved by processing the data in the wavelet domain and expressing the time course of each wavelet coefficient by a linear model. We end up with a statistical map in the spatial domain for the contrast of interest (i.e., the stimulus response). The feasibility of the method is demonstrated by an example of intrinsic microscopy imaging of the mouse brain during coordinated sensory stimulation. |
@inproceedings{blu2006f, author = "Van De Ville, D. and Bathellier, B. and Accolla, R. and Carleton, A. and Blu, T. and Unser, M.", title = "Wavelet-Based Detection of Stimulus Responses in Time-Lapse Microscopy", booktitle = "Proceedings of the Thirty-First {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'06})", month = "May 14--19,", year = "2006", pages = "{V}-1161--{V}-1164", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006f" } |
Van De Ville, D., Bathellier, B., Carleton, A., Blu, T. & Unser, M.,"Wavelet-Based Statistical Analysis for Optical Imaging in Mouse Olfactory Bulb", Proceedings of the Fourth IEEE International Symposium on Biomedical Imaging (ISBI'07), Arlington, USA, pp. 448-451, April 12--15, 2007. |
Optical imaging is a powerful technique to map brain function in animals. In this study, we consider in vivo optical imaging of the murine olfactory bulb, using an intrinsic signal and a genetically expressed activity reporter fluorescent protein (synaptopHfluorin). The aim is to detect odor-evoked activations that occur in small spherical structures of the olfactory bulb called glomeruli. We propose a new way of analyzing this kind of data that combines a linear model (LM) fitting along the temporal dimension, together with a discrete wavelet transform (DWT) along the spatial dimensions. We show that relevant regressors for the LM are available for both types of optical signals. In addition, the spatial wavelet transform allows us to exploit spatial correlation at different scales, and in particular to extract activation patterns at the expected size of glomeruli. Our framework also provides a statistical significance for every pixel in the activation maps and it has strong type I error control. |
@inproceedings{blu2007m, author = "Van De Ville, D. and Bathellier, B. and Carleton, A. and Blu, T. and Unser, M.", title = "Wavelet-Based Statistical Analysis for Optical Imaging in Mouse Olfactory Bulb", booktitle = "Proceedings of the Fourth {IEEE} International Symposium on Biomedical Imaging ({ISBI'07})", month = "April 12--15,", year = "2007", pages = "448--451", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007m" } |
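The combination of a temporal linear model (LM) with a spatial wavelet transform described in the two abstracts above can be illustrated with a toy sketch: a one-level Haar transform followed by a per-coefficient t-statistic. This is not the authors' implementation (which uses other wavelets and a full SPM-style model with type I error control); all names and the synthetic data are illustrative.

```python
import numpy as np

def haar2d(img):
    """One-level orthonormal 2D Haar transform of an even-sized image."""
    lo = (img[0::2] + img[1::2]) / np.sqrt(2.0)    # row lowpass
    hi = (img[0::2] - img[1::2]) / np.sqrt(2.0)    # row highpass
    rows = np.vstack([lo, hi])
    lo2 = (rows[:, 0::2] + rows[:, 1::2]) / np.sqrt(2.0)
    hi2 = (rows[:, 0::2] - rows[:, 1::2]) / np.sqrt(2.0)
    return np.hstack([lo2, hi2])

def wavelet_glm_tmap(frames, regressor):
    """Fit y(t) = b0 + b1 * regressor(t) to the time course of every
    wavelet coefficient; return the map of t-values for b1."""
    W = np.stack([haar2d(f) for f in frames])          # (time, rows, cols)
    gc = regressor - regressor.mean()
    Wc = W - W.mean(axis=0)
    beta = np.tensordot(gc, Wc, axes=(0, 0)) / (gc @ gc)
    resid = Wc - beta[None] * gc[:, None, None]
    sigma2 = (resid ** 2).sum(axis=0) / (len(gc) - 2)  # intercept + slope
    return beta / np.sqrt(sigma2 / (gc @ gc))

# Synthetic time-lapse: a 2x2 "glomerulus" follows a boxcar stimulus
rng = np.random.default_rng(1)
stim = np.tile([0.0, 0.0, 1.0, 1.0], 10)               # 40 frames
frames = 0.1 * rng.standard_normal((40, 8, 8))
frames[:, 2:4, 2:4] += stim[:, None, None]             # activated block
t_map = wavelet_glm_tmap(frames, stim)
```

Because the activated block is constant over one Haar support, its energy concentrates in a single lowpass coefficient, where the t-value stands far above the noise floor; this is the sense in which the spatial transform "exploits spatial correlation at different scales".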
Van De Ville, D., Blu, T., Forster, B. & Unser, M.,"Isotropic-Polyharmonic {B}-Splines and Wavelets", Proceedings of the 2004 IEEE International Conference on Image Processing (ICIP'04), Singapore, Singapore, pp. 661-664, October 24--27, 2004. |
We propose the use of polyharmonic B-splines to build non-separable two-dimensional wavelet bases. The central idea is to base our design on the isotropic polyharmonic B-splines, a new type of polyharmonic B-splines that do converge to a Gaussian as the order increases. We opt for the quincunx subsampling scheme which allows us to characterize the wavelet spaces with a single wavelet: the isotropic-polyharmonic B-spline wavelet. Interestingly, this wavelet converges to a combination of four Gabor atoms, which are well separated in frequency domain. We also briefly discuss our Fourier-based implementation and present some experimental results. |
@inproceedings{blu2004j, author = "Van De Ville, D. and Blu, T. and Forster, B. and Unser, M.", title = "Isotropic-Polyharmonic \mbox{{B}-Splines} and Wavelets", booktitle = "Proceedings of the 2004 {IEEE} International Conference on Image Processing ({ICIP'04})", month = "October 24--27,", year = "2004", pages = "661--664", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004j" } |
Van De Ville, D., Blu, T., Forster, B. & Unser, M.,"Semi-Orthogonal Wavelets That Behave like Fractional Differentiators", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet XI, San Diego, USA, Vol. 5914, pp. 59140C-1-59140C-8, July 31--August 3, 2005. |
The approximate behavior of wavelets as differential operators is often considered as one of their most fundamental properties. In this paper, we investigate how we can further improve on the wavelet's behavior as a differentiator. In particular, we propose semi-orthogonal differential wavelets. The semi-orthogonality condition ensures that wavelet spaces are mutually orthogonal. The operator, hidden within the wavelet, can be chosen as a generalized differential operator ∂_τ^γ, for a γ-th order derivative with shift τ. Both the order of derivation and the shift can be fractional. Our design leads us naturally to select the fractional B-splines as scaling functions. By putting the differential wavelet in the perspective of a derivative of a smoothing function, we find that signal singularities are compactly characterized by at most two local extrema of the wavelet coefficients in each subband. This property could be beneficial for signal analysis using wavelet bases. We show that this wavelet transform can be efficiently implemented using FFTs. |
@inproceedings{blu2005j, author = "Van De Ville, D. and Blu, T. and Forster, B. and Unser, M.", title = "Semi-Orthogonal Wavelets That Behave like Fractional Differentiators", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet {XI}", month = "July 31--August 3,", year = "2005", volume = "5914", pages = "59140C-1--59140C-8", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005j" } |
Van De Ville, D., Blu, T., Forster, B. & Unser, M.,"Polyharmonic {B}-Spline Wavelets: From Isotropy to Directionality", Advanced Concepts for Intelligent Vision Systems (ACIVS'06), Antwerp, Belgium, September 18--21, 2006. Invited talk. |
Polyharmonic B-splines are excellent basis functions to build multidimensional wavelet bases. These functions are nonseparable, multidimensional generators that are localized versions of radial basis functions. We show that Rabut's elementary polyharmonic B-splines do not converge to a Gaussian as the order parameter increases, as opposed to their separable B-spline counterparts. Therefore, we introduce a more isotropic localization operator that guarantees this convergence, resulting in the isotropic polyharmonic B-splines. Next, we focus on the two-dimensional quincunx subsampling scheme. This configuration is of particular interest for image processing, because it yields a finer scale progression than the standard dyadic approach. However, up until now, the design of appropriate filters for the quincunx scheme has mainly been done using the McClellan transform. In our approach, we start from the scaling functions, which are the polyharmonic B-splines and, as such, explicitly known, and we derive a family of polyharmonic spline wavelets corresponding to different flavors of the semi-orthogonal wavelet transform; e.g., orthonormal, B-spline, and dual. The filters are automatically specified by the scaling relations satisfied by these functions. We prove that the isotropic polyharmonic B-spline wavelet converges to a combination of four Gabor atoms, which are well separated in the frequency domain. We also show that these wavelets are nearly isotropic and that they behave as an iterated Laplacian operator at low frequencies. We describe an efficient fast Fourier transform-based implementation of the discrete wavelet transform based on polyharmonic B-splines. Finally, we propose a new way to build directional wavelets using modified polyharmonic B-splines. This approach benefits from the previous results (construction of the wavelet filters, fast implementation,…) but allows one to recover directional information about the edges from the (complex-valued) wavelet coefficients. |
@inproceedings{blu2006g, author = "Van De Ville, D. and Blu, T. and Forster, B. and Unser, M.", title = "Polyharmonic \mbox{{B}-Spline} Wavelets: {F}rom Isotropy to Directionality", booktitle = "Advanced Concepts for Intelligent Vision Systems ({ACIVS'06})", month = "September 18--21,", year = "2006", note = "Invited talk", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006g" } |
Van De Ville, D., Blu, T. & Unser, M.,"Wavelets Versus Resels in the Context of fMRI: Establishing the Link with SPM", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing X, San Diego, USA, Vol. 5207, pp. 417-425, August 3--8, 2003. Part I. |
Statistical Parametric Mapping (SPM) is a widely deployed tool for detecting and analyzing brain activity from fMRI data. One of SPM's main features is smoothing the data by a Gaussian filter to increase the SNR. The subsequent statistical inference is based on the continuous Gaussian random field theory. Since the remaining spatial resolution has deteriorated due to smoothing, SPM introduces the concept of “resels” (resolution elements) or spatial information-containing cells. The number of resels turns out to be inversely proportional to the size of the Gaussian smoother. Detection of the activation signal in fMRI data can also be done by a wavelet approach: after computing the spatial wavelet transform, a straightforward coefficient-wise statistical test is applied to detect activated wavelet coefficients. In this paper, we establish the link between SPM and the wavelet approach based on two observations. First, the (iterated) lowpass analysis filter of the discrete wavelet transform can be chosen to closely resemble SPM's Gaussian filter. Second, the subsampling scheme provides us with a natural way to define the number of resels; i.e., the number of coefficients in the lowpass subband of the wavelet decomposition. Using this connection, we can obtain the degree of the splines of the wavelet transform that makes it equivalent to SPM's method. We show results for two particularly attractive biorthogonal wavelet transforms for this task; i.e., 3D fractional-spline wavelets and 2D+Z fractional quincunx wavelets. The activation patterns are comparable to SPM's. |
@inproceedings{blu2003s, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "Wavelets Versus Resels in the Context of {fMRI}: {E}stablishing the Link with {SPM}", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {X}", month = "August 3--8,", year = "2003", volume = "5207", pages = "417--425", note = "Part {I}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003s" } |
Van De Ville, D., Blu, T. & Unser, M.,"On the Approximation Power of Splines: Orthogonal Versus Hexagonal Lattices", Proceedings of the Fifth International Workshop on Sampling Theory and Applications (SampTA'03), Strobl, Austria, pp. 109-111, May 26--30, 2003. |
Recently, we have proposed a novel family of bivariate, non-separable splines. These splines, called "hex-splines," have been designed to deal with hexagonally sampled data. Incorporating the shape of the Voronoi cell of a hexagonal lattice, they preserve the twelve-fold symmetry of the hexagon tiling cell. Similar to B-splines, we can use them to provide a link between the discrete and the continuous domain, which is required for many fundamental operations such as interpolation and resampling. The question we answer in this paper is "How well do the hex-splines approximate a given function in the continuous domain?" and more specifically "How do they compare to separable B-splines deployed on a lattice with the same sampling density?" |
@inproceedings{blu2003t, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "On the Approximation Power of Splines: {O}rthogonal Versus Hexagonal Lattices", booktitle = "Proceedings of the Fifth International Workshop on Sampling Theory and Applications ({SampTA'03})", month = "May 26--30,", year = "2003", pages = "109--111", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003t" } |
Van De Ville, D., Blu, T. & Unser, M.,"Recursive Filtering for Splines on Hexagonal Lattices", Proceedings of the Twenty-Eighth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'03), Hong Kong, China, Vol. {III}, pp. 301-304, April 6--10, 2003. |
Hex-splines are a novel family of bivariate splines, which are well suited to handle hexagonally sampled data. Similar to classical 1D B-splines, the spline coefficients need to be computed by a prefilter. Unfortunately, the elegant implementation of this prefilter by causal and anti-causal recursive filtering is not applicable for the (non-separable) hex-splines. Therefore, in this paper we introduce a novel approach from the viewpoint of approximation theory. We propose three different recursive filters and optimize their parameters such that a desired order of approximation is obtained. The results for third and fourth order hex-splines are discussed. Although the proposed solutions provide only quasi-interpolation, they tend to be very close to the interpolation prefilter. |
@inproceedings{blu2003u, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "Recursive Filtering for Splines on Hexagonal Lattices", booktitle = "Proceedings of the Twenty-Eighth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'03})", month = "April 6--10,", year = "2003", volume = "{III}", pages = "301--304", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003u" } |
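For contrast, the "elegant implementation" that the abstract says is unavailable for hex-splines is the classical 1D causal/anti-causal recursive prefilter. A standard sketch for the cubic B-spline (single pole z₁ = √3 − 2, mirror boundary handling, overall gain 6); the function name is illustrative:

```python
import numpy as np

def cubic_bspline_prefilter(y):
    """Compute cubic B-spline interpolation coefficients by causal and
    anti-causal first-order recursive filtering (pole z1 = sqrt(3) - 2),
    assuming mirror boundary conditions."""
    z1 = np.sqrt(3.0) - 2.0
    c = 6.0 * np.asarray(y, dtype=float)       # overall gain for the cubic case
    n = len(c)
    # Causal initialization: truncated geometric sum (mirror boundary)
    c[0] = sum(c[k] * z1 ** k for k in range(n))
    for k in range(1, n):                      # causal pass
        c[k] += z1 * c[k - 1]
    # Anti-causal initialization (mirror boundary)
    c[n - 1] = (z1 / (z1 * z1 - 1.0)) * (c[n - 1] + z1 * c[n - 2])
    for k in range(n - 2, -1, -1):             # anti-causal pass
        c[k] = z1 * (c[k + 1] - c[k])
    return c
```

Reconvolving the coefficients with the discrete cubic B-spline mask [1, 4, 1]/6 recovers the input samples, which is precisely the interpolation condition that the prefilter inverts; it is this separable recursion that the paper replaces by optimized quasi-interpolation filters in the non-separable hex-spline case.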
Van De Ville, D., Blu, T. & Unser, M.,"Integrated Wavelet Processing and Spatial Statistical Testing of fMRI Data", NeuroImage, Vol. 23 (4), pp. 1472-1485, December 2004. |
We introduce an integrated framework for detecting brain activity from fMRI data, which is based on a spatial discrete wavelet transform. Unlike the standard wavelet-based approach for fMRI analysis, we apply the suitable statistical test procedure in the spatial domain. For a desired significance level, this scheme has one remaining degree of freedom, characterizing the wavelet processing, which is optimized according to the principle of minimal approximation error. This allows us to determine the threshold values in a way that does not depend on data. While developing our framework, we make only conservative assumptions. Consequently, the detection of activation is based on strong evidence. We have implemented this framework as a toolbox (WSPM) for the SPM2 software, taking advantage of multiple options and functions of SPM such as the setup of the linear model and the use of the hemodynamic response function. We show by experimental results that our method is able to detect activation patterns; the results are comparable to those obtained by SPM even though statistical assumptions are more conservative. The associated software is available online. |
@article{blu2004k, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "Integrated Wavelet Processing and Spatial Statistical Testing of {fMRI} Data", journal = "NeuroImage", month = "December", year = "2004", volume = "23", number = "4", pages = "1472--1485", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004k" } |
Van De Ville, D., Blu, T. & Unser, M.,"Wavelet-Based fMRI Statistical Analysis and Spatial Interpretation: A Unifying Approach", Proceedings of the Second IEEE International Symposium on Biomedical Imaging (ISBI'04), Arlington, USA, pp. 1167-1170, April 15--18, 2004. |
Wavelet-based statistical analysis methods for fMRI are able to detect brain activity without smoothing the data. Typically, the statistical inference is performed in the wavelet domain by testing the t-values of each wavelet coefficient; subsequently, an activity map is reconstructed from the significant coefficients. The limitation of this approach is that there is no direct statistical interpretation of the reconstructed map. In this paper, we propose a new methodology that takes advantage of wavelet processing but keeps the statistical meaning in the spatial domain. We derive a spatial threshold with a proper non-stationary component and determine optimal threshold values by minimizing an approximation error. The sensitivity of our method is comparable to SPM's (Statistical Parametric Mapping). |
@inproceedings{blu2004l, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "Wavelet-Based {fMRI} Statistical Analysis and Spatial Interpretation: {A} Unifying Approach", booktitle = "Proceedings of the Second {IEEE} International Symposium on Biomedical Imaging ({ISBI'04})", month = "April 15--18,", year = "2004", pages = "1167--1170", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004l" } |
Van De Ville, D., Blu, T. & Unser, M.,"WSPM: Wavelet Processing and the Analysis of fMRI Using Statistical Parametric Maps", Second International Conference on Computational Harmonic Analysis, Nineteenth Annual Shanks Lecture (CHA'04), Nashville, USA, May 24--30, 2004. Invited talk. |
Wavelet-based methods for the statistical analysis of functional magnetic resonance images (fMRI) are able to detect brain activity without smoothing the data (3D space + time). Up to now, the statistical inference was typically performed in the wavelet domain by testing the t-values of each wavelet coefficient; the activity map was reconstructed from the significant coefficients. The limitation of this approach is that there is no direct statistical interpretation of the reconstructed map. Here, we describe a new methodology that takes advantage of wavelet processing but keeps the statistical meaning in the spatial domain. We derive a spatial threshold with a proper non-stationary component and determine optimal threshold values by minimizing an approximation error. This framework was implemented as a toolbox (WSPM) for the widely used SPM2 software, taking advantage of the multiple options and functionality of SPM (Statistical Parametric Mapping) such as the specification of a linear model that may account for the hemodynamic response of the system. The sensitivity of our method is comparable to that of conventional SPM, which applies a spatial Gaussian prefilter to the data, even though our statistical assumptions are more conservative. |
@inproceedings{blu2004m, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "{WSPM}: {W}avelet Processing and the Analysis of {fMRI} Using Statistical Parametric Maps", booktitle = "Second International Conference on Computational Harmonic Analysis, Nineteenth Annual Shanks Lecture ({CHA'04})", month = "May 24--30,", year = "2004", note = "Invited talk", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004m" } |
Van De Ville, D., Blu, T. & Unser, M.,"Isotropic Polyharmonic {B}-Splines: Scaling Functions and Wavelets", IEEE Transactions on Image Processing, Vol. 14 (11), pp. 1798-1813, November 2005. |
In this paper, we use polyharmonic B-splines to build multidimensional wavelet bases. These functions are nonseparable, multidimensional basis functions that are localized versions of radial basis functions. We show that Rabut's elementary polyharmonic B-splines do not converge to a Gaussian as the order parameter increases, as opposed to their separable B-spline counterparts. Therefore, we introduce a more isotropic localization operator that guarantees this convergence, resulting in the isotropic polyharmonic B-splines. Next, we focus on the two-dimensional quincunx subsampling scheme. This configuration is of particular interest for image processing, because it yields a finer scale progression than the standard dyadic approach. However, up until now, the design of appropriate filters for the quincunx scheme has mainly been done using the McClellan transform. In our approach, we start from the scaling functions, which are the polyharmonic B-splines and, as such, explicitly known, and we derive a family of polyharmonic spline wavelets corresponding to different flavors of the semi-orthogonal wavelet transform; e.g., orthonormal, B-spline, and dual. The filters are automatically specified by the scaling relations satisfied by these functions. We prove that the isotropic polyharmonic B-spline wavelet converges to a combination of four Gabor atoms, which are well separated in the frequency domain. We also show that these wavelets are nearly isotropic and that they behave as an iterated Laplacian operator at low frequencies. We describe an efficient fast Fourier transform-based implementation of the discrete wavelet transform based on polyharmonic B-splines. |
@article{blu2005k, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "Isotropic Polyharmonic \mbox{{B}-Splines}: {S}caling Functions and Wavelets", journal = "{IEEE} Transactions on Image Processing", month = "November", year = "2005", volume = "14", number = "11", pages = "1798--1813", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005k" } |
Van De Ville, D., Blu, T. & Unser, M.,"On the Multidimensional Extension of the Quincunx Subsampling Matrix", IEEE Signal Processing Letters, Vol. 12 (2), pp. 112-115, February 2005. |
The dilation matrix associated with the three-dimensional (3-D) face-centered cubic (FCC) sublattice is often considered to be the natural 3-D extension of the two-dimensional (2-D) quincunx dilation matrix. However, we demonstrate that both dilation matrices are of a different nature: while the 2-D quincunx matrix is a similarity transform, the 3-D FCC matrix is not. More generally, we show that it is impossible to obtain a dilation matrix that is a similarity transform and performs downsampling of the Cartesian lattice by a factor of two in more than two dimensions. Furthermore, we observe that the popular 3-D FCC subsampling scheme alternates between three different lattices: Cartesian, FCC, and quincunx. The latter provides a less isotropic sampling density, a property that should be taken into account to properly orient 3-D data before processing with such a subsampling matrix. |
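The dichotomy described in this abstract is easy to check numerically; a minimal sketch (the specific FCC generator below is an assumption — one of several equivalent choices with |det| = 2, not necessarily the one used in the paper):

```python
import numpy as np

# 2-D quincunx dilation matrix: |det| = 2 and D^T D = 2 I,
# i.e., a similarity transform (rotation by 45 degrees plus scaling by sqrt(2))
D_quincunx = np.array([[1,  1],
                       [1, -1]])

# One commonly used 3-D FCC dilation matrix with |det| = 2 (illustrative choice)
D_fcc = np.array([[0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 0]])

def is_similarity(D):
    """A dilation matrix is a similarity transform iff D^T D = c * I."""
    G = D.T @ D
    return bool(np.allclose(G, G[0, 0] * np.eye(D.shape[0])))

# Both matrices subsample by a factor of two, but only the 2-D one
# is a similarity transform.
quincunx_ok = is_similarity(D_quincunx)   # True
fcc_ok = is_similarity(D_fcc)             # False: D^T D has off-diagonal 1s
```

The check makes the statement concrete: `D_quincunx.T @ D_quincunx` equals `2*I`, whereas the FCC Gram matrix is not a multiple of the identity.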
@article{blu2005l, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "On the Multidimensional Extension of the Quincunx Subsampling Matrix", journal = "{IEEE} Signal Processing Letters", month = "February", year = "2005", volume = "12", number = "2", pages = "112--115", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005l" } |
Van De Ville, D., Blu, T. & Unser, M.,"WSPM: A New Approach for Wavelet-Based Statistical Analysis of fMRI Data", Eleventh Annual Meeting of the Organization for Human Brain Mapping (HBM'05), Toronto, Canada, pp. S17, June 12--16, 2005. |
Recently, we have proposed a new framework for detecting brain activity from fMRI data, which is based on the spatial discrete wavelet transform. The standard wavelet-based approach performs a statistical test in the wavelet domain, and therefore fails to provide a rigorous statistical interpretation in the spatial domain. The new framework provides an “integrated” approach: the data is processed in the wavelet domain (e.g., by thresholding wavelet coefficients), and a suitable statistical testing procedure is applied afterwards in the spatial domain. This method is based on conservative assumptions only and has a strong type-I error control by construction. At the same time, it has a sensitivity comparable to that of SPM. |
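The two-stage idea — processing in the wavelet domain, testing in the spatial domain — can be illustrated with a toy sketch. This is not the WSPM spatial threshold derived in the paper; it assumes a one-level orthonormal Haar transform and a plain Bonferroni-corrected z-test on a synthetic parameter map:

```python
import numpy as np
from statistics import NormalDist

def haar_fwd(x):
    # one-level 2-D orthonormal Haar analysis (separable, axis by axis)
    for ax in (0, 1):
        x = np.moveaxis(x, ax, 0)
        a = (x[0::2] + x[1::2]) / np.sqrt(2)   # lowpass half
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # highpass half
        x = np.moveaxis(np.concatenate([a, d]), 0, ax)
    return x

def haar_inv(w):
    # exact inverse of haar_fwd (the transform is orthonormal)
    for ax in (0, 1):
        w = np.moveaxis(w, ax, 0)
        n = w.shape[0] // 2
        a, d = w[:n], w[n:]
        x = np.empty_like(w)
        x[0::2] = (a + d) / np.sqrt(2)
        x[1::2] = (a - d) / np.sqrt(2)
        w = np.moveaxis(x, 0, ax)
    return w

rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[10:16, 10:16] = 4.0                      # synthetic "activation"
data = truth + rng.standard_normal((32, 32))   # unit-variance noise

# stage 1: wavelet-domain processing (hard thresholding)
w = haar_fwd(data)
w[np.abs(w) < 3.0] = 0.0
denoised = haar_inv(w)

# stage 2: spatial-domain test, Bonferroni-corrected over all voxels
z = NormalDist().inv_cdf(1 - 0.05 / data.size)
detection_map = denoised > z
```

The detection map is declared in the spatial domain, after the non-linear wavelet step, which is the structural point the abstract makes; the actual framework replaces the naive z-threshold by a threshold that remains valid after the thresholding step.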
@inproceedings{blu2005m, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "{WSPM}: {A} New Approach for Wavelet-Based Statistical Analysis of {fMRI} Data", booktitle = "Eleventh Annual Meeting of the Organization for Human Brain Mapping ({HBM'05})", month = "June 12--16,", year = "2005", pages = "S17", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005m" } |
Van De Ville, D., Blu, T. & Unser, M.,"Surfing the Brain---An Overview of Wavelet-Based Techniques for fMRI Data Analysis", IEEE Engineering in Medicine and Biology Magazine, Vol. 25 (2), pp. 65-78, March-April 2006. |
The measurement of brain activity in a noninvasive way is an essential element in modern neurosciences. Modalities such as electroencephalography (EEG) and magnetoencephalography (MEG) recently gained interest, but two classical techniques remain predominant. One of them is positron emission tomography (PET), which is costly and lacks temporal resolution but allows the design of tracers for specific tasks; the other is functional magnetic resonance imaging (fMRI), which is more affordable than PET from a technical, financial, and ethical point of view, but which suffers from poor contrast and low signal-to-noise ratio (SNR). For this reason, advanced methods have been devised to perform the statistical analysis of fMRI data. The associated software is available online. |
@article{blu2006h, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "Surfing the Brain---{A}n Overview of Wavelet-Based Techniques for {fMRI} Data Analysis", journal = "{IEEE} Engineering in Medicine and Biology Magazine", month = "March-April", year = "2006", volume = "25", number = "2", pages = "65--78", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006h" } |
Van De Ville, D., Blu, T. & Unser, M.,"WSPM or How to Obtain Statistical Parametric Maps Using Shift-Invariant Wavelet Processing", Proceedings of the IEEE Thirty-First International Conference on Acoustics, Speech, and Signal Processing (ICASSP'06), Toulouse, France, pp. {V}-1101-{V}-1104, May 14--19, 2006. |
Recently, we have proposed a new framework for detecting brain activity from fMRI data, which is based on the spatial discrete wavelet transform. The standard wavelet-based approach performs a statistical test in the wavelet domain, and therefore fails to provide a rigorous statistical interpretation in the spatial domain. The new framework provides an “integrated” approach: the data is processed in the wavelet domain (by thresholding wavelet coefficients), and a suitable statistical testing procedure is applied afterwards in the spatial domain. This method is based on conservative assumptions only and has a strong type-I error control by construction. At the same time, it has a sensitivity comparable to that of SPM. Here, we discuss the extension of our algorithm to the redundant discrete wavelet transform, which provides a shift-invariant detection scheme. The key features of our technique are illustrated with experimental results. An implementation of our framework is available as a toolbox (WSPM) for the SPM2 software. |
@inproceedings{blu2006i, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "{WSPM} or How to Obtain Statistical Parametric Maps Using Shift-Invariant Wavelet Processing", booktitle = "Proceedings of the {IEEE} Thirty-First International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'06})", month = "May 14--19,", year = "2006", pages = "{V}-1101--{V}-1104", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006i" } |
Van De Ville, D., Blu, T., Unser, M., Philips, W., Lemahieu, I. & Van de Walle, R.,"Hex-Splines: A Novel Spline Family for Hexagonal Lattices", IEEE Transactions on Image Processing, Vol. 13 (6), pp. 758-772, June 2004. |
This paper proposes a new family of bivariate, non-separable splines, called hex-splines, especially designed for hexagonal lattices. The starting point of the construction is the indicator function of the Voronoi cell, which is used to define in a natural way the first-order hex-spline. Higher-order hex-splines are obtained by successive convolutions. A mathematical analysis of this new bivariate spline family is presented. In particular, we derive a closed form for a hex-spline of arbitrary order. We also discuss important properties, such as their Fourier transform and the fact that they form a Riesz basis, and we highlight their approximation order. For conventional rectangular lattices, hex-splines revert to classical separable tensor-product B-splines. Finally, some prototypical applications and experimental results demonstrate the usefulness of hex-splines for handling hexagonally sampled data. |
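The construction sketched in this abstract can be illustrated numerically: the first-order hex-spline is the indicator of the hexagonal Voronoi cell, and the second order follows by self-convolution. This is a discretized sketch (grid step, hexagon orientation, and apothem are illustrative choices), not the paper's closed-form expression:

```python
import numpy as np

def hex_indicator(x, y, a=1.0):
    # first-order hex-spline: indicator of a regular hexagon with apothem a
    # (Voronoi cell of a hexagonal lattice; horizontal top/bottom sides)
    return (np.maximum(np.abs(y),
            np.maximum(np.abs(0.5 * (np.sqrt(3) * x + y)),
                       np.abs(0.5 * (np.sqrt(3) * x - y)))) <= a).astype(float)

# rasterize on a fine grid, then build the second-order hex-spline by
# discrete self-convolution (via FFT), following the successive-convolution
# definition; the grid step h controls the discretization accuracy
h = 0.02
t = np.arange(-4.0, 4.0, h)
X, Y = np.meshgrid(t, t)
b1 = hex_indicator(X, Y)

B1 = np.fft.rfft2(b1)
b2 = np.fft.irfft2(B1 * B1, s=b1.shape) * h * h   # Riemann-sum convolution

area1 = b1.sum() * h * h   # approaches the hexagon area 2*sqrt(3)*a^2
area2 = b2.sum() * h * h   # integral of a convolution = product of integrals
```

The integral check (`area2` equals `area1` squared) mirrors the fact that convolution preserves the total mass of the spline, order by order.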
@article{blu2004n, author = "Van De Ville, D. and Blu, T. and Unser, M. and Philips, W. and Lemahieu, I. and Van de Walle, R.", title = "Hex-Splines: {A} Novel Spline Family for Hexagonal Lattices", journal = "{IEEE} Transactions on Image Processing", month = "June", year = "2004", volume = "13", number = "6", pages = "758--772", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004n" } |
Van De Ville, D., Forster-Heinlein, B., Unser, M. & Blu, T.,"Analytical Footprints: Compact Representation of Elementary Singularities in Wavelet Bases", IEEE Transactions on Signal Processing, Vol. 58 (12), pp. 6105-6118, December 2010. |
We introduce a family of elementary singularities that are point-Hölder α-regular. These singularities are self-similar and are the Green functions of fractional derivative operators; i.e., by suitable fractional differentiation, one retrieves a Dirac δ function at the exact location of the singularity. We propose to use fractional operator-like wavelets that act as a multiscale version of the derivative in order to characterize and localize singularities in the wavelet domain. We show that the characteristic signature when the wavelet interacts with an elementary singularity has an asymptotic closed-form expression, termed the analytical footprint. Practically, this means that the dictionary of wavelet footprints is embodied in a single analytical form. We show that the wavelet coefficients of the (nonredundant) decomposition can be fitted in a multiscale fashion to retrieve the parameters of the underlying singularity. We propose an algorithm based on stepwise parametric fitting and demonstrate the feasibility of the approach for recovering singular signal representations. |
@article{blu2010j, author = "Van De Ville, D. and Forster-Heinlein, B. and Unser, M. and Blu, T.", title = "Analytical Footprints: {C}ompact Representation of Elementary Singularities in Wavelet Bases", journal = "{IEEE} Transactions on Signal Processing", month = "December", year = "2010", volume = "58", number = "12", pages = "6105--6118", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010j" } |
Van De Ville, D., Seghier, M.L., Lazeyras, F., Blu, T. & Unser, M.,"WSPM: Wavelet-Based Statistical Parametric Mapping", NeuroImage, Vol. 37 (4), pp. 1205-1217, October 1, 2007. |
Recently, we have introduced an integrated framework that combines wavelet-based processing with statistical testing in the spatial domain. In this paper, we propose two important enhancements of the framework. First, we revisit the underlying paradigm; i.e., that the effect of the wavelet processing can be considered as an adaptive denoising step to “improve” the parameter map, followed by a statistical detection procedure that takes into account the non-linear processing of the data. With an appropriate modification of the framework, we show that it is possible to reduce the bias of the method with respect to the best linear estimate, providing conservative results that are closer to the original data. Second, we propose an extension of our earlier technique that compensates for the lack of shift-invariance of the wavelet transform. We demonstrate experimentally that both enhancements have a positive effect on performance. In particular, we present a reproducibility study for multi-session data that compares WSPM against SPM with different amounts of smoothing. The full approach is available as a toolbox, named WSPM, for the SPM2 software; it takes advantage of multiple options and features of SPM such as the general linear model. |
@article{blu2007n, author = "Van De Ville, D. and Seghier, M.L. and Lazeyras, F. and Blu, T. and Unser, M.", title = "{WSPM}: {W}avelet-Based Statistical Parametric Mapping", journal = "NeuroImage", month = "October 1,", year = "2007", volume = "37", number = "4", pages = "1205--1217", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007n" } |
Van De Ville, D., Seghier, M., Lazeyras, F., Blu, T. & Unser, M.,"Empirical Sensitivity, Specificity, and Bias of Wavelet-Based Statistical Parametric Mapping (WSPM)", Thirteenth Annual Meeting of the Organization for Human Brain Mapping (HBM'07), Chicago, USA, June 10--14, 2007. CD-ROM paper no. 336 TH PM. |
Analysis: standard preprocessing and the general linear model (GLM) setup were done using SPM2, including regressors from the realignment procedure and the autoregressive model for serial correlations. For each session, maps are obtained for a broad range of significance levels with SPM, WSPM, and a voxel-by-voxel test. SPM compensates for multiple testing using Gaussian random field (GRF) theory, while WSPM and the voxel-by-voxel test use simple Bonferroni correction. Reproducibility study: the consistency of the detection maps over the 4 sessions is assessed using a reproducibility study [2]: we estimate the empirical sensitivity and specificity from a binomial mixture model for the histogram of the cumulative detection maps. Additionally, we estimate the bias of the methods as the sum of the absolute differences between the contrast before thresholding and that of the voxel-by-voxel approach. Discussion: in Figure 4, we show the ROC curves obtained after estimating the parameters of the binomial mixture model for both methods. By construction, the voxel-by-voxel statistical test has no bias, but reaches a very low sensitivity. For SPM, smoothing increases the bias (as can be expected) as well as the empirical sensitivity and specificity. Finally, WSPM has a lower bias than SPM (4 mm), combined with a comparable or better compromise between sensitivity and specificity. The wiggly behavior of the curves is due to the non-linear thresholding operation in the wavelet domain. |
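The binomial mixture idea behind the reproducibility study can be sketched as follows. All numbers (active fraction, per-session detection probabilities, grid-search fit) are illustrative assumptions, not values from the poster:

```python
import math
import numpy as np

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1.0 - p)**(n - k)

def mixture_hist(n_sessions, lam, p_act, p_inact):
    # P(a voxel is detected in exactly k of n sessions) under the binomial
    # mixture model: a fraction lam of voxels is truly active (per-session
    # detection prob p_act); the rest gives false positives with prob p_inact
    return np.array([lam * binom_pmf(k, n_sessions, p_act)
                     + (1.0 - lam) * binom_pmf(k, n_sessions, p_inact)
                     for k in range(n_sessions + 1)])

# simulate cumulative detection maps over 4 sessions, then recover the
# per-session detection probabilities by a coarse grid search (lam known)
rng = np.random.default_rng(1)
K, lam, n_vox = 4, 0.1, 200_000
active = rng.random(n_vox) < lam
counts = rng.binomial(K, np.where(active, 0.8, 0.02))
emp = np.bincount(counts, minlength=K + 1) / n_vox

grid = np.linspace(0.01, 0.99, 50)
_, p_act_hat, p_inact_hat = min(
    (float(np.sum((mixture_hist(K, lam, pa, pi) - emp) ** 2)), pa, pi)
    for pa in grid for pi in grid)

# empirical sensitivity / specificity in the sense of the mixture model
sensitivity, specificity = p_act_hat, 1.0 - p_inact_hat
```

Sweeping the per-session significance level and re-fitting the mixture at each setting is what traces out the empirical ROC curves discussed above.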
@inproceedings{blu2007o, author = "Van De Ville, D. and Seghier, M. and Lazeyras, F. and Blu, T. and Unser, M.", title = "Empirical Sensitivity, Specificity, and Bias of Wavelet-Based Statistical Parametric Mapping ({WSPM})", booktitle = "Thirteenth Annual Meeting of the Organization for Human Brain Mapping ({HBM'07})", month = "June 10--14,", year = "2007", note = "CD-ROM paper no. 336 TH PM", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007o" } |
Van De Ville, D., Seghier, M., Lazeyras, F., Blu, T. & Unser, M.,"Wavelet-Based Statistical Analysis of fMRI Data with High Spatial Resolution", CHUV Research Day (CHUV'07), Lausanne, Switzerland, pp. 185, February 1, 2007. |
Wavelet-based statistical parametric mapping (WSPM) analyzes fMRI data using a combination of powerful denoising in the wavelet domain with statistical testing in the spatial domain. It also guarantees strong type I error (false positives) control and thus high confidence in the detections. In this poster, we show the various stages of this framework and we propose a comparison of WSPM and SPM2, which is the de-facto standard for statistical analysis of fMRI data. WSPM is available to the neuro-imaging community as a toolbox for SPM. One of the major advantages of WSPM is that it does not require pre-smoothing the data before statistical analysis, which is a prerequisite of the SPM approach. Therefore, potential high-spatial-resolution information available in the data is not lost and can be used to retrieve small and highly detailed activation patterns. As a typical result, we show the activation maps for SPM (6mm) and WSPM. The experimental paradigm was single-frequency acoustic stimulation (1.5T scanner; TR=1.2s; 1.8×1.8×3mm). For the same statistical significance (5% corrected), the activation patterns retrieved by WSPM are clearly more detailed than those by SPM2. In the poster, we also include the results of the empirically measured sensitivity and specificity using a reproducibility analysis for multi-session data with both WSPM and SPM2. From this evaluation, we see that WSPM obtains high spatial resolution without loss of sensitivity. |
@inproceedings{blu2007p, author = "Van De Ville, D. and Seghier, M. and Lazeyras, F. and Blu, T. and Unser, M.", title = "Wavelet-Based Statistical Analysis of {fMRI} Data with High Spatial Resolution", booktitle = "CHUV Research Day ({CHUV'07})", month = "February 1,", year = "2007", pages = "185", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007p" } |
Van De Ville, D., Seghier, M., Lazeyras, F., Pelizzone, M., Blu, T. & Unser, M.,"SPM versus WSPM: Sensitivity and Specificity for Multi-Session fMRI Data", Twelfth Annual Meeting of the Organization for Human Brain Mapping (HBM'06), Florence, Italy, pp. S94, June 11--15, 2006. Invited talk. |
Wavelet-based statistical parametric mapping (WSPM) combines powerful denoising in the wavelet domain with statistical testing in the spatial domain. It guarantees strong type I error control and thus high confidence in the detections. In this poster, we propose a comparison of WSPM and SPM2, based on the results of multi-session experimental data. The dataset comes from a carefully conducted experiment with auditory stimulation (Philips Gyroscan 1.5T; TR = 1.2s; spatial resolution 1.8×1.8×3mm). The subject's head is placed within a custom-designed headset, which isolates the person from most of the MR scanner's noise, and exposed to single-frequency acoustic stimulation during the activation condition of a block paradigm. Four different auditory frequencies (300Hz, 1126Hz, 2729Hz, 4690Hz) are used in each session. The complete experiment contains 4 sessions, each spanning 250 volumes. The general linear model (GLM) setup was done using SPM, including regressors from the realignment procedure and the autoregressive model for serial correlations. For each session, functional maps of the combined contrast (all frequencies-rest) were obtained for a broad range of significance levels with SPM (4mm smoothing) and WSPM (orthogonal B-spline wavelets slice-by-slice; degree 1.0; 2 iterations; combination of 4 spatial shifts). SPM compensates for multiple testing using the Gaussian Random Field theory, while WSPM uses simple Bonferroni correction. The results were then analyzed using two different criteria:
In Fig. 2, we show for each session the number of detections for SPM and WSPM, inside and outside the ROI, as a function of the significance level. Notice that the significance level is at the volume level; i.e., α = 1.0 corresponds to the expectation of a single false positive for the whole volume. We found that both methods are about equally calibrated for α = 1.0. However, the slope of the curve linking the number of detections to the significance level is lower for WSPM than for SPM, which means more detections with WSPM for the same type I error probability. Interestingly, the number of detections outside the ROI is not higher for WSPM, suggesting that its higher sensitivity does not lead to an increase in false positives. In Fig. 3, we show the ROCs after estimating the binomial mixture model for both methods; the mixture parameter is globally estimated. The higher performance of WSPM is confirmed: higher sensitivity (more true positives) combined with higher specificity (more true negatives). In Fig. 4, we show the area under the ROC (i.e., sensitivity times specificity), which is a good single measure for the performance of the detection technique. This figure again illustrates the excellent balance of sensitivity and specificity for WSPM. Finally, we note that WSPM guarantees its strong type I error control using conservative assumptions only. |
@inproceedings{blu2006j, author = "Van De Ville, D. and Seghier, M. and Lazeyras, F. and Pelizzone, M. and Blu, T. and Unser, M.", title = "{SPM} {\textit{versus}} {WSPM}: {S}ensitivity and Specificity for Multi-Session {fMRI} Data", booktitle = "Twelfth Annual Meeting of the Organization for Human Brain Mapping ({HBM'06})", month = "June 11--15,", year = "2006", pages = "S94", note = "Invited talk", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006j" } |
Vetterli, M., Marziliano, P. & Blu, T.,"Sampling Discrete-Time Piecewise Bandlimited Signals", Proceedings of the Fourth International Conference on Sampling Theory and Applications (SampTA'01), Orlando, USA, pp. 97-102, May 13--17, 2001. |
We consider sampling discrete-time periodic signals which are piecewise bandlimited, that is, signals that are the sum of a bandlimited signal and a piecewise polynomial signal containing a finite number of transitions. These signals are not bandlimited, and thus the Shannon sampling theorem (also due to Kotelnikov and Whittaker) for bandlimited signals cannot be applied. In this paper, we derive sampling and reconstruction schemes based on those developed in [1, 2, 3] for piecewise polynomial signals, which take into account the extra degrees of freedom due to the bandlimitedness. |
@inproceedings{blu2001l, author = "Vetterli, M. and Marziliano, P. and Blu, T.", title = "Sampling Discrete-Time Piecewise Bandlimited Signals", booktitle = "Proceedings of the Fourth International Conference on Sampling Theory and Applications ({SampTA'01})", month = "May 13--17,", year = "2001", pages = "97--102", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001l" } |
Vetterli, M., Marziliano, P. & Blu, T.,"A Sampling Theorem for Periodic Piecewise Polynomial Signals", Proceedings of the Twenty-Sixth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'01), Salt Lake City, USA, Vol. 6, pp. 3893-3896, May 7--11, 2001. |
We consider the problem of sampling signals which are not bandlimited, but still have a finite number of degrees of freedom per unit of time, such as, for example, piecewise polynomials. We demonstrate that by using an adequate sampling kernel and a sampling rate greater or equal to the number of degrees of freedom per unit of time, one can uniquely reconstruct such signals. This proves a sampling theorem for a wide class of signals beyond bandlimited signals. Applications of this sampling theorem can be found in signal processing, communication systems and biological systems. |
@inproceedings{blu2001m, author = "Vetterli, M. and Marziliano, P. and Blu, T.", title = "A Sampling Theorem for Periodic Piecewise Polynomial Signals", booktitle = "Proceedings of the Twenty-Sixth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'01})", month = "May 7--11,", year = "2001", volume = "6", pages = "3893--3896", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001m" } |
Vetterli, M., Marziliano, P. & Blu, T.,"Sampling Signals with Finite Rate of Innovation", IEEE Transactions on Signal Processing, Vol. 50 (6), pp. 1417-1428, June 2002. IEEE Signal Processing Society's 2006 Best Paper Award. |
Consider classes of signals that have a finite number of degrees of freedom per unit of time and call this number the rate of innovation. Examples of signals with a finite rate of innovation include streams of Diracs (e.g., the Poisson process), nonuniform splines, and piecewise polynomials. Even though these signals are not bandlimited, we show that they can be sampled uniformly at (or above) the rate of innovation using an appropriate kernel and then be perfectly reconstructed. Thus, we prove sampling theorems for classes of signals and kernels that generalize the classic "bandlimited and sinc kernel" case. In particular, we show how to sample and reconstruct periodic and finite-length streams of Diracs, nonuniform splines, and piecewise polynomials using sinc and Gaussian kernels. For infinite-length signals with finite local rate of innovation, we show local sampling and reconstruction based on spline kernels. The key in all constructions is to identify the innovative part of a signal (e.g., time instants and weights of Diracs) using an annihilating or locator filter: a device well known in spectral analysis and error-correction coding. This leads to standard computational procedures for solving the sampling problem, which we show through experimental results. Applications of these new sampling results can be found in signal processing, communications systems, and biological systems. |
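The annihilating-filter step described in this abstract can be sketched in a few lines for a periodic stream of Diracs (noiseless toy case; the locations and amplitudes below are arbitrary illustrative values):

```python
import numpy as np

# tau-periodic stream of K Diracs, recovered from 2K+1 Fourier-series
# coefficients via the annihilating (locator) filter
tau, K = 1.0, 3
t_true = np.array([0.12, 0.47, 0.81])   # Dirac locations in [0, tau)
a_true = np.array([1.0, -0.5, 2.0])     # Dirac amplitudes

m = np.arange(-K, K + 1)                # coefficient indices -K..K
X = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true) / tau)).sum(axis=1)

# The filter h (length K+1) annihilates X: sum_l h[l] X[m-l] = 0, because
# its roots are u_k = exp(-2j*pi*t_k/tau). Find h as the null vector of a
# K x (K+1) Toeplitz system (smallest right singular vector).
A = np.array([[X[i - l + K] for l in range(K + 1)] for i in range(K)])
h = np.linalg.svd(A)[2][-1].conj()

# roots of h give the locations; a linear (Vandermonde) least-squares
# problem then gives the amplitudes
t_est = np.sort((-np.angle(np.roots(h)) * tau / (2 * np.pi)) % tau)
E = np.exp(-2j * np.pi * np.outer(m, t_est) / tau)
a_est = np.linalg.lstsq(E, X, rcond=None)[0].real
```

With noiseless coefficients, the recovery is exact up to numerical precision, which is the "finite rate of innovation" counterpart of sinc interpolation for bandlimited signals.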
@article{blu2002l, author = "Vetterli, M. and Marziliano, P. and Blu, T.", title = "Sampling Signals with Finite Rate of Innovation", journal = "{IEEE} Transactions on Signal Processing", month = "June", year = "2002", volume = "50", number = "6", pages = "1417--1428", note = "IEEE Signal Processing Society's 2006 \textbf{Best Paper Award}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002l" } |
Vetterli, M., Marziliano, P. & Blu, T.,"Sampling methods, reconstruction methods and devices for sampling and/or reconstructing signals", International Patent WO200278197, 2002. This technology was transferred to Qualcomm Inc. in 2007. |
@misc{blu2002m, author = "Vetterli, M. and Marziliano, P. and Blu, T.", title = "Sampling methods, reconstruction methods and devices for sampling and/or reconstructing signals", year = "2002", note = "This technology was transferred to Qualcomm Inc. in 2007", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002m" } |
Vetterli, M., Marziliano, P., Blu, T. & Dragotti, P.-L.,"Sparse Sampling: Theory, Algorithms and Applications", Tutorial Presentation at the Thirty-Fourth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'09), Taipei, Taiwan, April 19--24, 2009. |
Signal acquisition and reconstruction is at the heart of signal processing and communications, and sampling theorems provide the bridge between the continuous and the discrete-time worlds. The most celebrated and widely used sampling theorem is often attributed to Shannon, and gives a sufficient condition, namely bandlimitedness, for an exact sampling and interpolation formula. Recently, this framework has been extended to classes of non-bandlimited signals. The way around Shannon's classical sampling theorem resides in a parametric approach, which exploits the prior that the signal is sparse in a basis or in a parametric space. This leads to new exact reconstruction formulas and fast algorithms that achieve such reconstructions. The aim of this tutorial is to give an overview of these recent exciting findings in sampling theory. The fundamental theoretical results will be reviewed and constructive algorithms will be presented. Finally, a diverse set of applications will be presented so as to demonstrate the tangibility of the theoretical concepts. |
@conference{blu2009h, author = "Vetterli, M. and Marziliano, P. and Blu, T. and Dragotti, P.-L.", title = "Sparse Sampling: Theory, Algorithms and Applications", booktitle = "Tutorial Presentation at the Thirty-Fourth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'09})", month = "April 19--24,", year = "2009", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009h" } |
Vetterli, M., Marziliano, P., Blu, T. & Dragotti, P.-L.,"Sparse Sampling of Structured Data", Tutorial Presentation at the Seventeenth European Signal Processing Conference (EUSIPCO'09), Glasgow, Scotland UK, August 24--28, 2009. |
The problem of reconstructing or estimating partially observed or sampled signals is an old and important one, and finds application in many areas of signal processing and communications. Traditional acquisition and reconstruction approaches are heavily influenced by the classical Shannon sampling theory, which gives an exact sampling and interpolation formula for bandlimited signals. Recently, the classical Shannon sampling framework has been extended to classes of non-bandlimited structured signals, which we call signals with Finite Rate of Innovation. These new sampling schemes exploit the prior that the signal is sparse in a basis or in a parametric space, and perfect reconstruction is possible based on a set of suitable measurements. This leads to new exact reconstruction formulas and fast algorithms that achieve such reconstructions. The main aim of this tutorial is to give an overview of these new exciting findings in sampling theory. The fundamental theoretical results will be reviewed and constructive algorithms will be presented, both for 1-D and 2-D signals. We also discuss the effect of noise on the sampling and reconstruction of structured signals. Finally, a diverse set of applications of these new concepts will be presented to emphasize the importance and far-reaching implications of these new theories. |
@conference{blu2009i, author = "Vetterli, M. and Marziliano, P. and Blu, T. and Dragotti, P.-L.", title = "Sparse Sampling of Structured Data", booktitle = "Tutorial Presentation at the Seventeenth {E}uropean Signal Processing Conference ({EUSIPCO'09})", month = "August 24--28,", year = "2009", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009i" } |
Vonesch, C., Blu, T. & Unser, M.,"Generalized Biorthogonal Daubechies Wavelets", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet XI, San Diego, USA, Vol. 5914, pp. 59141X-1-59141X-6, July 31--August 3, 2005. |
We propose a generalization of the Cohen-Daubechies-Feauveau (CDF) and 9⁄7 biorthogonal wavelet families. This is done within the framework of non-stationary multiresolution analysis, which involves a sequence of embedded approximation spaces generated by scaling functions that are not necessarily dilates of one another. We consider a dual pair of such multiresolutions, where the scaling functions at a given scale are mutually biorthogonal with respect to translation. Also, they must have the shortest-possible support while reproducing a given set of exponential polynomials. This constitutes a generalization of the standard polynomial reproduction property. The corresponding refinement filters are derived from the ones that were studied by Dyn et al. in the framework of non-stationary subdivision schemes. By using different factorizations of these filters, we obtain a general family of compactly supported dual wavelet bases of L2. In particular, if the exponential parameters are all zero, one retrieves the standard CDF B-spline wavelets and the 9⁄7 wavelets. Our generalized description yields equivalent constructions for E-spline wavelets. A fast filterbank implementation of the corresponding wavelet transform follows naturally; it is similar to Mallat's algorithm, except that the filters are now scale-dependent. This new scheme offers high flexibility and is tunable to the spectral characteristics of a wide class of signals. In particular, it is possible to obtain symmetric basis functions that are well-suited for image processing. |
@inproceedings{blu2005n, author = "Vonesch, C. and Blu, T. and Unser, M.", title = "Generalized Biorthogonal {D}aubechies Wavelets", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet {XI}", month = "July 31--August 3,", year = "2005", volume = "5914", pages = "59141X-1--59141X-6", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005n" } |
Vonesch, C., Blu, T. & Unser, M.,"Generalized Daubechies Wavelets", Proceedings of the Thirtieth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'05), Philadelphia, USA, Vol. IV, pp. 593-596, March 18--23, 2005. |
We present a generalization of the Daubechies wavelet family. The context is that of a non-stationary multiresolution analysis—i.e., a sequence of embedded approximation spaces generated by scaling functions that are not necessarily dilates of one another. The constraints that we impose on these scaling functions are: (1) orthogonality with respect to translation, (2) reproduction of a given set of exponential polynomials, and (3) minimal support. These design requirements lead to the construction of a general family of compactly-supported, orthonormal wavelet-like bases of L2. If the exponential parameters are all zero, then one recovers Daubechies wavelets, which are orthogonal to the polynomials of degree (N − 1) where N is the order (vanishing-moment property). A fast filterbank implementation of the generalized wavelet transform follows naturally; it is similar to Mallat's algorithm, except that the filters are now scale-dependent. The new transforms offer increased flexibility and are tunable to the spectral characteristics of a wide class of signals. |
@inproceedings{blu2005o, author = "Vonesch, C. and Blu, T. and Unser, M.", title = "Generalized Daubechies Wavelets", booktitle = "Proceedings of the Thirtieth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'05})", month = "March 18--23,", year = "2005", volume = "{IV}", pages = "593--596", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005o" } |
Vonesch, C., Blu, T. & Unser, M.,"Generalized Daubechies Wavelet Families", IEEE Transactions on Signal Processing, Vol. 55 (9), pp. 4415-4429, September 2007. |
We present a generalization of the orthonormal Daubechies wavelets and of their related biorthogonal flavors (Cohen-Daubechies-Feauveau, 9⁄7). Our fundamental constraint is that the scaling functions should reproduce a predefined set of exponential polynomials. This allows one to tune the corresponding wavelet transform to a specific class of signals, thereby ensuring good approximation and sparsity properties. The main difference with the classical construction of Daubechies et al. is that the multiresolution spaces are derived from scale-dependent generating functions. However, from an algorithmic standpoint, Mallat's Fast Wavelet Transform algorithm can still be applied; the only adaptation consists in using scale-dependent filter banks. Finite support ensures the same computational efficiency as in the classical case. We characterize the scaling and wavelet filters, construct them and show several examples of the associated functions. We prove that these functions are square-integrable and that they converge to their classical counterparts of the corresponding order. |
@article{blu2007q, author = "Vonesch, C. and Blu, T. and Unser, M.", title = "Generalized {D}aubechies Wavelet Families", journal = "{IEEE} Transactions on Signal Processing", month = "September", year = "2007", volume = "55", number = "9", pages = "4415--4429", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007q" } |
Wang, M. & Blu, T.,"Generalized YUV interpolation of CFA images", Proceedings of the 2010 IEEE International Conference on Image Processing (ICIP'10), Hong Kong, China, pp. 1909-1912, September 26--29, 2010. |
This paper presents a simple yet effective color filter array (CFA) interpolation algorithm. It is based on a linear interpolating kernel, but operates in YUV space, which results in a nontrivial boost in the peak signal-to-noise ratio (PSNR) of the red and blue channels. The algorithm can be implemented efficiently. At the end of the paper, we compare its performance with nonlinear interpolation methods and show that it is competitive even among state-of-the-art CFA demosaicing algorithms. |
@inproceedings{blu2010k, author = "Wang, M. and Blu, T.", title = "Generalized {YUV} interpolation of {CFA} images", booktitle = "Proceedings of the 2010 {IEEE} International Conference on Image Processing ({ICIP'10})", month = "September 26--29,", year = "2010", pages = "1909--1912", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010k" } |
Wang, R., Blu, T. & Dragotti, P.-L.,"Reconstruction of Images With Finite Rate of Innovation From Noisy Tomographic Projections", Proceedings of the Thirty-first European Signal Processing Conference (EUSIPCO), Helsinki, Finland, pp. 1953-1957, September 4--8, 2023. |
In this paper, we propose a robust 2D tomographic reconstruction algorithm for classes of piecewise constant images whose edges are defined by curves with finite rate of innovation (FRI). The curve is known to satisfy an annihilation relation associated with the derivative of the image. Given limited numbers of noisy projections and inaccurately known angles, unlike conventional methods that reconstruct images directly from projections, we first recover the curve by exploiting the frequency-domain annihilation relation. Once the curve is retrieved, we leverage the spatial-domain interpretation of the annihilation relation and formulate the image reconstruction as a regularized optimization problem. To enhance the reconstruction quality, we further refine the angles given the reconstructed image and iterate the process. Experiments show that, by doing so, we can robustly reconstruct the 2D image even from limited numbers of severely corrupted projections and highly inaccurate projection angles. |
@inproceedings{blu2023d, author = "Wang, R. and Blu, T. and Dragotti, P.-L.", title = "Reconstruction of Images With Finite Rate of Innovation From Noisy Tomographic Projections", booktitle = "Proceedings of the Thirty-first European Signal Processing Conference ({EUSIPCO})", month = "September 4--8", year = "2023", pages = "1953--1957", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2023d", doi = "10.23919/EUSIPCO58844.2023.10289882" } |
Wei, L. & Blu, T.,"A new non-redundant complex Hilbert wavelet transforms", Proceedings of the IEEE Statistical Signal Processing Workshop (SSP), Ann Arbor, USA, pp. 652-655, August 5--8, 2012. |
In this paper, a novel non-redundant complex wavelet transform (NRCWT) for real-valued signals is proposed. For this purpose, an orthogonal complex filter bank is developed to implement this NRCWT. We show how to choose the two complex filters from classical real-valued wavelet filters in such a way that the filterbank is always orthogonal. Using fractional B-spline filters, a pair of exact Hilbert wavelets are constructed, which can separate the positive frequencies from the negative frequencies. |
@inproceedings{blu2012e, author = "Wei, L. and Blu, T.", title = "A new non-redundant complex {H}ilbert wavelet transforms", booktitle = "Proceedings of the {IEEE} Statistical Signal Processing Workshop (SSP)", month = "August 5--8,", year = "2012", pages = "652--655", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012e" } |
Wei, L. & Blu, T.,"Construction of an Orthonormal Complex Multiresolution Analysis", Proceedings of the Thirty-eighth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'13), Vancouver, Canada, pp. 2381-2385, May 26--31, 2013. |
We design two complex filters {h[n], g[n]} for an orthogonal filter bank structure based on two atom functions {ρ₀(t), ρ₁/₂(t)}, such that: 1) they generate an orthonormal multiwavelet basis; 2) the two complex conjugate wavelets are Hilbert wavelets, i.e., their frequency responses are supported either on positive or on negative frequencies; and 3) the two scaling functions are real. The developed complex wavelet transform (CWT) is non-redundant, nearly shift-invariant, and able to distinguish diagonal features. This distinguishability of diagonal features is demonstrated by comparison with the real discrete wavelet transform. |
@inproceedings{blu2013g, author = "Wei, L. and Blu, T.", title = "Construction of an Orthonormal Complex Multiresolution Analysis", booktitle = "Proceedings of the Thirty-eighth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'13})", month = "May 26--31,", year = "2013", pages = "2381--2385", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013g" } |
Wei, X., Blu, T. & Dragotti, P.-L.,"Finite Rate of Innovation with Non-Uniform Samples", Proceedings of the IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC'12), Hong Kong, China, pp. 369-372, August 12--15, 2012. |
In this paper, we investigate the problem of retrieving the innovation parameters (time and amplitude) of a stream of Diracs from non-uniform samples taken with a novel kernel (a hyperbolic secant). We devise a non-iterative, exact algorithm that allows perfect reconstruction of 2K innovations from as few as 2K non-uniform samples. We also investigate noise issues and compute the Cramér-Rao lower bounds for this problem. A simple total least-squares extension of the algorithm proves to be efficient in reconstructing the location of a single Dirac from noisy measurements. |
@inproceedings{blu2012f, author = "Wei, X. and Blu, T. and Dragotti, P.-L.", title = "Finite Rate of Innovation with Non-Uniform Samples", booktitle = "Proceedings of the {IEEE} International Conference on Signal Processing, Communications and Computing {(ICSPCC'12)}", month = "August 12--15,", year = "2012", pages = "369--372", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012f" } |
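The annihilating-filter machinery underlying this and the other finite-rate-of-innovation entries can be sketched for the classical uniform-sampling case. This is the textbook Prony-like step in the noiseless setting, not the paper's non-uniform hyperbolic-secant algorithm; the function name and interface are illustrative:

```python
import numpy as np

def annihilating_filter_recovery(s, K, T=1.0):
    """Recover K Dirac locations/amplitudes from the 2K+1 Fourier-series
    coefficients s[m] = sum_k a_k exp(-2j*pi*m*t_k/T), m = -K..K
    (classical annihilating-filter / Prony step, noiseless setting)."""
    M = len(s)
    # Toeplitz system S h = 0 satisfied by the annihilating filter h (length K+1)
    S = np.array([[s[r + K - j] for j in range(K + 1)] for r in range(M - K)])
    _, _, Vh = np.linalg.svd(S)
    h = Vh[-1].conj()           # right singular vector of the smallest singular value
    # The roots u_k = exp(-2j*pi*t_k/T) of h encode the Dirac locations
    u = np.roots(h)
    t = np.mod(-np.angle(u) * T / (2 * np.pi), T)
    # Amplitudes: Vandermonde least-squares fit on the same coefficients
    m = np.arange(-K, K + 1)
    V = np.exp(-2j * np.pi * np.outer(m, t) / T)
    a = np.real(np.linalg.lstsq(V, s, rcond=None)[0])
    order = np.argsort(t)
    return t[order], a[order]
```

With noisy coefficients, the same SVD step becomes a total least-squares denoising of the Toeplitz matrix, which is the direction taken in the paper.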
Xue, F. & Blu, T.,"SURE-Based Motion estimation", Proceedings of the IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC'12), Hong Kong, China, pp. 373-377, August 12--15, 2012. |
We propose a novel approach to estimate the parameters of motion blur (blur length and orientation) from an observed image. |
@inproceedings{blu2012g, author = "Xue, F. and Blu, T.", title = "{SURE}-Based Motion estimation", booktitle = "Proceedings of the {IEEE} International Conference on Signal Processing, Communications and Computing {(ICSPCC'12)}", month = "August 12--15,", year = "2012", pages = "373--377", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012g" } |
Xue, F. & Blu, T.,"SURE-based blind Gaussian deconvolution", Proceedings of the IEEE Statistical Signal Processing Workshop (SSP), Ann Arbor, USA, pp. 452-455, August 5--8, 2012. |
We propose a novel blind deconvolution method that consists of first estimating the variance of the Gaussian blur, then performing non-blind deconvolution with the estimated PSF. The main contribution of this paper is the first step: estimating the variance of the Gaussian blur by minimizing a novel objective functional, an unbiased estimate of a blur MSE (SURE). The optimal parameter and blur variance are obtained by minimizing this criterion over linear processings that have the form of simple Wiener filterings. We then perform non-blind deconvolution using our recent high-quality SURE-based deconvolution algorithm. The very competitive results show the highly accurate estimation of the blur variance (compared to the ground-truth value) and the great potential of developing more powerful blind deconvolution algorithms based on the SURE-type principle. |
@inproceedings{blu2012h, author = "Xue, F. and Blu, T.", title = "{SURE}-based blind {G}aussian deconvolution", booktitle = "Proceedings of the {IEEE} Statistical Signal Processing Workshop (SSP)", month = "August 5--8,", year = "2012", pages = "452--455", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012h" } |
Xue, F. & Blu, T.,"A Novel SURE-Based Criterion for Parametric PSF Estimation", IEEE Transactions on Image Processing, Vol. 24 (2), pp. 595-607, February 2015. |
We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs, involving a scaling factor that controls the blur size. A typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality that is very similar to the one obtained with the exact PSF, when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates. |
@article{blu2015a, author = "Xue, F. and Blu, T.", title = "A Novel {SURE}-Based Criterion for Parametric {PSF} Estimation", journal = "IEEE Transactions on Image Processing", month = "February", year = "2015", volume = "24", number = "2", pages = "595--607", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2015a" } |
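For reference, the quantity underlying the SURE-based entries in this list is Stein's unbiased risk estimate: for an observation y = x + n with n ~ N(0, σ²I_N) and an estimator F(y), the standard denoising form (of which the blur-SURE above is a filtered variant) is

```latex
\mathrm{SURE}(F,\mathbf{y}) \;=\; \|F(\mathbf{y})-\mathbf{y}\|^{2}
\;+\; 2\sigma^{2}\,\operatorname{div}F(\mathbf{y}) \;-\; N\sigma^{2},
\qquad
\operatorname{div}F(\mathbf{y}) \;=\; \sum_{n=1}^{N}\frac{\partial F_{n}}{\partial y_{n}}(\mathbf{y}),
```

which satisfies E{SURE} = E{‖F(y) − x‖²}, so it can be minimized over the parameters of F without access to the clean signal x.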
Xue, F. & Blu, T.,"On The Degrees Of Freedom in Total Variation Minimization", Proceedings of the Forty-fifth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'20), Barcelona, Spain, pp. 5690-5694, May 4--8, 2020. |
In the theory of linear models, the degrees of freedom (DOF) of an estimator play a pivotal role in risk estimation, as it quantifies the complexity of a statistical modeling procedure. Considering the total-variation (TV) regularization, we present a theoretical study of the DOF in Stein's unbiased risk estimate (SURE), under a very mild assumption. First, from the duality perspective, we give an analytic expression of the exact TV solution, with identification of its support. The closed-form expression of the DOF is derived based on the Karush-Kuhn-Tucker (KKT) conditions. It is also shown that the DOF is upper bounded by the nullity of a sub-analysis-matrix. The theoretical analysis is finally validated by the numerical tests on image recovery. |
@inproceedings{blu2020b, author = "Xue, F. and Blu, T.", title = "On The Degrees Of Freedom in Total Variation Minimization", booktitle = "Proceedings of the Forty-fifth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'20})", month = "May 4--8,", year = "2020", pages = "5690--5694", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020b" } |
Xue, F., Blu, T., Du, R. & Liu, J.,"An iterative SURE-LET approach to sparse reconstruction", Proceedings of the Forty-first IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'16), Shanghai, China, pp. 4493-4497, March 20--25, 2016. |
Sparsity-promoting regularization is often formulated as ℓν-penalized minimization (0<ν<1), which can be efficiently solved by iteratively reweighted least squares (IRLS). The reconstruction quality is generally sensitive to the value of the regularization parameter. In this work, for accurate recovery, we develop two data-driven optimization schemes based on minimization of Stein's unbiased risk estimate (SURE). First, we propose a recursive method for computing SURE for a given IRLS iterate, which enables us to unbiasedly evaluate the reconstruction error and select the optimal value of the regularization parameter. Second, for fast optimization, we parametrize each IRLS iterate as a linear combination of a few elementary functions (LET) and solve for the linear weights by minimizing SURE. Numerical experiments show that iterating this process leads to higher reconstruction accuracy with remarkably faster computational speed than standard IRLS. |
@inproceedings{blu2016b, author = "Xue, F. and Blu, T. and Du, R. and Liu, J.", title = "An iterative {SURE-LET} approach to sparse reconstruction", booktitle = "Proceedings of the Forty-first {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'16})", month = "March 20--25,", year = "2016", pages = "4493--4497", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2016b" } |
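As a point of reference for the abstract above, a minimal IRLS baseline for the ℓν-penalized problem can be sketched as follows; the SURE-driven selection of the regularization parameter, which is the paper's actual contribution, is not included, and all names are illustrative:

```python
import numpy as np

def irls_lp(A, y, lam=0.1, nu=0.5, n_iter=100, eps=1e-8):
    """Minimal IRLS sketch for min_x ||y - A x||^2 + lam * sum_i |x_i|^nu
    with 0 < nu <= 1.  Each iterate solves a reweighted ridge system;
    eps guards the reweighting against division by zero."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]      # least-squares initialization
    AtA, Aty = A.T @ A, A.T @ y
    for _ in range(n_iter):
        # Quadratic majorizer of |x_i|^nu: weight w_i = (nu/2) |x_i|^(nu-2)
        w = (nu / 2.0) * np.maximum(np.abs(x), eps) ** (nu - 2.0)
        x = np.linalg.solve(AtA + lam * np.diag(w), Aty)
    return x
```

Each pass is a majorize-minimize step, so the penalized objective is (up to the eps smoothing) non-increasing across iterations.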
Xue, F., Blu, T., Liu, J. & Xia, A.,"Recursive Evaluation of SURE for Total Variation Denoising", Proceedings of the Forty-third IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'18), Calgary, AB, Canada, pp. 1338-1342, April 15--20, 2018. |
Recently, total variation (TV)-based regularization has become a standard technique for signal denoising. The reconstruction quality is generally sensitive to the value of the regularization parameter. In this work, building on Chambolle's algorithm, we develop two data-driven optimization schemes based on minimization of Stein's unbiased risk estimate (SURE), which is statistically equivalent to the mean squared error (MSE). First, we propose a recursive evaluation of SURE to monitor the estimation error during Chambolle's iteration; the optimal value is then identified by the minimum SURE. Second, for fast optimization, we perform alternating updates between the regularization parameter and the solution within Chambolle's iteration. We exemplify the proposed methods with both 1-D and 2-D signal denoising. Numerical experiments show that the proposed methods lead to a highly accurate estimate of the regularization parameter and nearly optimal denoising performance. |
@inproceedings{blu2018d, author = "Xue, F. and Blu, T. and Liu, J. and Xia, A.", title = "Recursive Evaluation of {SURE} for Total Variation Denoising", booktitle = "Proceedings of the Forty-third {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'18})", month = "April 15--20,", year = "2018", pages = "1338--1342", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018d" } |
Xue, F., Blu, T., Liu, J. & Xia, A.,"A Novel GCV-Based Criterion for Parameter Selection in Image Deconvolution", Proceedings of the Forty-third IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'18), Calgary, AB, Canada, pp. 1403-1407, April 15--20, 2018. |
A proper selection of the regularization parameter is essential for regularization-based image deconvolution. The main contribution of this paper is to propose a new form of generalized cross validation (GCV) as a criterion for this optimal selection. Incorporating a nil-trace non-linear estimate, we develop this new GCV based on Stein's unbiased risk estimate (SURE), an unbiased estimate of the mean squared error (MSE). The key advantage of this GCV over SURE is that it does not require knowledge of the noise variance. We exemplify this criterion with both Tikhonov regularization and ℓ1-based sparse deconvolution. In particular, we develop a recursive evaluation of GCV for the ℓ1-estimate based on the iterative soft-thresholding (IST) algorithm. Numerical experiments demonstrate nearly optimal parameter selection with negligible loss in the resulting deconvolution quality. |
@inproceedings{blu2018e, author = "Xue, F. and Blu, T. and Liu, J. and Xia, A.", title = "A Novel {GCV}-Based Criterion for Parameter Selection in Image Deconvolution", booktitle = "Proceedings of the Forty-third {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'18})", month = "April 15--20,", year = "2018", pages = "1403--1407", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018e" } |
Xue, F., Luisier, F. & Blu, T.,"SURE-LET image deconvolution using multiple Wiener filters", Proceedings of the 2012 IEEE International Conference on Image Processing (ICIP'12), Orlando, USA, pp. 3037-3040, September 30--October 3, 2012. |
We propose a novel deconvolution algorithm based on the minimization of Stein's unbiased risk estimate (SURE). We linearly parametrize the deconvolution process by using multiple Wiener filterings as elementary functions, followed by undecimated Haar-wavelet thresholding. The key contributions of our approach are: 1) the linear combination of several Wiener filters with different (but fixed) regularization parameters, which avoids the manual adjustment of a single nonlinear parameter; 2) the use of linear parameterization, which makes the SURE minimization finally boil down to solving a linear system of equations, leading to a very fast and exact optimization of the whole deconvolution process. The results obtained on standard test images show that our algorithm favorably compares with the other state-of-the-art deconvolution methods in both speed and quality. |
@inproceedings{blu2012i, author = "Xue, F. and Luisier, F. and Blu, T.", title = "{SURE-LET} image deconvolution using multiple {W}iener filters", booktitle = "Proceedings of the 2012 {IEEE} International Conference on Image Processing ({ICIP'12})", month = "September 30--October 3,", year = "2012", pages = "3037--3040", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012i" } |
Xue, F., Luisier, F. & Blu, T.,"Multi-Wiener SURE-LET Deconvolution", IEEE Transactions on Image Processing, Vol. 22 (5), pp. 1954-1968, May 2013. |
In this paper, we propose a novel deconvolution algorithm based on the minimization of a regularized Stein's unbiased risk estimate (SURE), which is a good estimate of the mean squared error (MSE). We linearly parametrize the deconvolution process by using multiple Wiener filters as elementary functions, followed by undecimated Haar-wavelet thresholding. Due to the quadratic nature of SURE and the linear parametrization, the deconvolution problem finally boils down to solving a linear system of equations, which is very fast and exact. The linear coefficients, i.e., the solution of the linear system of equations, constitute the best approximation of the optimal processing on the Wiener-Haar-threshold basis that we consider. In addition, the proposed multi-Wiener SURE-LET approach is applicable for both periodic and symmetric boundary conditions, and can thus be used in various practical scenarios. The very competitive (both in computation time and quality) results show that the proposed algorithm, which can be interpreted as a kind of non-linear Wiener processing, can be used as a basic tool for building more sophisticated deconvolution algorithms. |
@article{blu2013h, author = "Xue, F. and Luisier, F. and Blu, T.", title = "Multi-{W}iener {SURE-LET} Deconvolution", journal = "{IEEE} Transactions on Image Processing", month = "May", year = "2013", volume = "22", number = "5", pages = "1954--1968", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013h" } |
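The key reduction exploited in both SURE-LET papers above — a linear parametrization turns SURE minimization into a linear system of equations — can be sketched in the simpler denoising setting (no blur, no Haar thresholding). The two elementary processings below are illustrative linear filters, not the papers' Wiener/threshold basis:

```python
import numpy as np

def sure_let_weights(y, Fs, divs, sigma):
    """LET weights by SURE: for F(y) = sum_k a_k F_k(y), minimizing Stein's
    unbiased risk estimate over the weights a reduces to the linear system
    M a = c, with M[k, l] = <F_k(y), F_l(y)> and
    c[k] = <F_k(y), y> - sigma^2 * div F_k(y)."""
    K = len(Fs)
    M = np.array([[np.dot(Fs[k], Fs[l]) for l in range(K)] for k in range(K)])
    c = np.array([np.dot(Fs[k], y) - sigma**2 * divs[k] for k in range(K)])
    return np.linalg.solve(M, c)

def moving_average(y, w=5):
    """Centered circular moving average; a linear map with divergence N/w."""
    N = len(y)
    h = np.zeros(N)
    h[:w] = 1.0 / w
    h = np.roll(h, -(w // 2))        # center the kernel so h[0] is the middle tap
    return np.real(np.fft.ifft(np.fft.fft(y) * np.fft.fft(h)))

# Usage sketch: linearly combine the identity and a smoother, with the
# weights set by SURE (no access to the clean signal x is needed).
rng = np.random.default_rng(1)
N, sigma = 1024, 0.5
x = np.sin(2 * np.pi * np.arange(N) / 128)       # clean test signal
y = x + sigma * rng.standard_normal(N)           # noisy observation
F = [y, moving_average(y)]
a = sure_let_weights(y, F, divs=[N, N / 5], sigma=sigma)
denoised = a[0] * F[0] + a[1] * F[1]
```

Because both elementary processings are linear, their divergences are exact (N times the zeroth filter tap); for nonlinear processings they would have to be computed analytically or by Monte-Carlo, as in the later BM3D entry.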
Xue, X., Li, J. & Blu, T.,"An Iterative SURE-LET Deconvolution Algorithm Based on BM3D Denoiser", Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP'19), Taipei, Taiwan, pp. 1795-1799, September 22--25, 2019. |
Recently, plug-and-play priors (PPP) have become a popular technique for image reconstruction. Based on the basic iterative thresholding scheme, in this paper we propose a new iterative SURE-LET deconvolution algorithm with a plug-in BM3D denoiser. To optimize the deconvolution process, we linearly parametrize the thresholding function by using multiple BM3D denoisers as elementary functions. The key contributions of our approach are: (1) the linear combination of several BM3D denoisers with different (but fixed) parameters, which avoids the manual adjustment of a single non-linear parameter; (2) the linear parametrization makes the minimization of Stein's unbiased risk estimate (SURE) finally boil down to solving a linear system of equations, leading to a very fast and exact optimization during each iteration. In particular, the SURE of the BM3D denoiser is approximately evaluated by a finite-difference Monte-Carlo technique. Experiments show that the proposed algorithm, on average, achieves better deconvolution performance than other state-of-the-art methods, both numerically and visually. |
@inproceedings{blu2019c, author = "Xue, X. and Li, J. and Blu, T.", title = "An Iterative SURE-LET Deconvolution Algorithm Based on BM3D Denoiser", booktitle = "Proceedings of the 2019 {IEEE} International Conference on Image Processing ({ICIP'19})", month = "September 22--25,", year = "2019", pages = "1795--1799", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2019c" } |
Zhang, X., Gilliam, C. & Blu, T.,"Iterative Fitting After Elastic Registration: An Efficient Strategy For Accurate Estimation Of Parametric Deformations", Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP'17), Beijing, China, pp. 1492-1496, September 17--20, 2017. |
We propose an efficient method for image registration based on iteratively fitting a parametric model to the output of an elastic registration. It combines the flexibility of elastic registration - able to estimate complex deformations - with the robustness of parametric registration - able to estimate very large displacements. Our approach is made feasible by using the recent Local All-Pass (LAP) algorithm, a fast and accurate filter-based method for estimating the local deformation between two images. Moreover, at each iteration we fit a linear parametric model to the local deformation, which is equivalent to solving a linear system of equations (very fast and efficient). We use a quadratic polynomial model; however, the framework can easily be extended to more complicated models. A significant advantage of the proposed method is its robustness to model mismatch (e.g., noise and blurring). Experimental results on synthetic images and real images demonstrate that the proposed algorithm is highly accurate and outperforms a selection of image registration approaches. |
@inproceedings{blu2017g, author = "Zhang, X. and Gilliam, C. and Blu, T.", title = "Iterative Fitting After Elastic Registration: An Efficient Strategy For Accurate Estimation Of Parametric Deformations", booktitle = "Proceedings of the 2017 {IEEE} International Conference on Image Processing ({ICIP'17})", month = "September 17--20,", year = "2017", pages = "1492--1496", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017g" } |
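The model-fitting step described in this and the two following entries reduces to linear least squares on the displacement field. A sketch for an affine model follows (the papers use richer parametrisations, e.g. quadratic polynomials, and obtain the dense field from the LAP algorithm; the function below is illustrative):

```python
import numpy as np

def fit_affine_displacement(u, v):
    """Least-squares fit of an affine parametric model to a dense
    displacement field (u, v) on an H x W pixel grid:
        u(x, y) ~ a0 + a1*x + a2*y,    v(x, y) ~ b0 + b1*x + b2*y.
    The fitting step is one small linear system per displacement component."""
    H, W = u.shape
    yy, xx = np.mgrid[0:H, 0:W]
    # Design matrix: one row [1, x, y] per pixel
    B = np.column_stack([np.ones(H * W), xx.ravel(), yy.ravel()])
    a, *_ = np.linalg.lstsq(B, u.ravel(), rcond=None)
    b, *_ = np.linalg.lstsq(B, v.ravel(), rcond=None)
    return a, b
```

Swapping in a quadratic (or any other linear-in-the-parameters) model only changes the columns of the design matrix, which is why the papers can change the fitting model so easily.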
Zhang, X., Gilliam, C. & Blu, T.,"Parametric Registration for Mobile Phone Images", Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP'19), Taipei, Taiwan, pp. 1312-1316, September 22--25, 2019. |
Image registration is a significant step in a wide range of practical applications and a fundamental problem in various computer vision tasks. In this paper, we propose a highly accurate and fast parametric registration method for mobile phone photos. The proposed algorithm is based on a fast and accurate elastic registration algorithm, the Local All-Pass (LAP) algorithm, which proceeds in a coarse-to-fine manner. At each iteration, the LAP displacement field is fitted by a parametric model, so the image registration problem reduces to finding a few parameters that describe the displacement field. The fitting step can be performed very efficiently by solving a linear system of equations. Moreover, the fitting model can easily be changed to suit specific applications. Experimental results on both synthetic and real images demonstrate the high accuracy and computational efficiency of the proposed algorithm. |
@inproceedings{blu2019d, author = "Zhang, X. and Gilliam, C. and Blu, T.", title = "Parametric Registration for Mobile Phone Images", booktitle = "Proceedings of the 2019 {IEEE} International Conference on Image Processing ({ICIP'19})", month = "September 22--25,", year = "2019", pages = "1312--1316", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2019d" } |
Zhang, X., Gilliam, C. & Blu, T.,"All-Pass Parametric Image Registration", IEEE Transactions on Image Processing, Vol. 29 (1), pp. 5625-5640, April 2020. |
Image registration is a required step in many practical applications that involve the acquisition of multiple related images. In this paper, we propose a methodology to deal with both the geometric and intensity transformations in the image registration problem. The main idea is to modify an accurate and fast elastic registration algorithm (Local All-Pass—LAP) so that it returns a parametric displacement field, and to estimate the intensity changes by fitting another parametric expression. Although we demonstrate the methodology using a low-order parametric model, our approach is highly flexible and easily allows substantially richer parametrisations, while requiring only limited extra computation cost. In addition, we propose two novel quantitative criteria to evaluate the accuracy of the alignment of two images ("salience correlation") and the number of degrees of freedom ("parsimony") of a displacement field, respectively. Experimental results on both synthetic and real images demonstrate the high accuracy and computational efficiency of our methodology. Furthermore, we demonstrate that the resulting displacement fields are more parsimonious than the ones obtained in other state-of-the-art image registration approaches. |
@article{blu2020c, author = "Zhang, X. and Gilliam, C. and Blu, T.", title = "All-Pass Parametric Image Registration", journal = "IEEE Transactions on Image Processing", month = "April", year = "2020", volume = "29", number = "1", pages = "5625--5640", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020c", doi = "10.1109/TIP.2020.2984897" } |
Zhang, Z. & Blu, T.,"Blind Source Separation via a Weak Exclusion Principle", Proceedings of the Forty-seventh IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'22), Singapore, pp. 2699-2703, May 22--27, 2022. |
In this paper, we propose a generalized Blind Source Separation (BSS) method based on a novel assumption that we call the "weak exclusion" principle (WEP). We first give the mathematical definition of the exclusion criterion and propose an iterative algorithm to minimize it. Experiments on both synthetic and real datasets, comparing WEP with four other methods, demonstrate that WEP outperforms them both in terms of accuracy and in terms of speed. |
@inproceedings{blu2022a, author = "Zhang, Z. and Blu, T.", title = "Blind Source Separation via a Weak Exclusion Principle", booktitle = "Proceedings of the Forty-seventh {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'22})", month = "May 22--27,", year = "2022", pages = "2699--2703", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2022a", doi = "10.1109/ICASSP43922.2022.9747709" } |
Zhao, S., Cahill, D.G., Li, S., Xiao, F., Blu, T., Griffith, J.F. & Chen, W.,"Denoising of three-dimensional fast spin echo magnetic resonance images of knee joints using spatial-variant noise-relevant residual learning of convolution neural network", Computers in Biology and Medicine, Vol. 151, pp. 106295, December 2022. |
Purpose: Two-dimensional (2D) fast spin echo (FSE) techniques play a central role in the clinical magnetic resonance imaging (MRI) of knee joints. Moreover, three-dimensional (3D) FSE provides high-isotropic-resolution magnetic resonance (MR) images of knee joints, but it has a reduced signal-to-noise ratio compared to 2D FSE. Deep-learning denoising methods are a promising approach for denoising MR images, but they are often trained using synthetic noise due to challenges in obtaining true noise distributions for MR images. In this study, inherent true noise information from two number of excitations (2-NEX) acquisition was used to develop a deep-learning model based on residual learning of a convolutional neural network (CNN), and this model was used to suppress the noise in 3D FSE MR images of knee joints. Methods: A deep learning-based denoising method was developed. The proposed CNN used two-step residual learning over parallel transporting and residual blocks and was designed to comprehensively learn real noise features from 2-NEX training data. Results: The results of an ablation study validated the network design. The new method achieved improved denoising performance on 3D FSE knee MR images compared with current state-of-the-art methods, based on the peak signal-to-noise ratio and structural similarity index measure. The improved image quality after denoising using the new method was verified by radiological evaluation. Conclusion: A deep CNN using the inherent spatially varying noise information in 2-NEX acquisitions was developed. This method showed promise for clinical MRI assessments of the knee, and has potential applications for the assessment of other anatomical structures. |
@article{blu2022f, author = "Zhao, Shutian and Cahill, D{\'o}nal G. and Li, Siyue and Xiao, Fan and Blu, Thierry and Griffith, James F. and Chen, Weitian", title = "Denoising of three-dimensional fast spin echo magnetic resonance images of knee joints using spatial-variant noise-relevant residual learning of convolution neural network", journal = "Computers in Biology and Medicine", month = "December", year = "2022", volume = "151", pages = "106295", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2022f", doi = "10.1016/j.compbiomed.2022.106295" } |
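The key ingredient of the entry above is that a 2-NEX acquisition contains a genuine noise realization for free: averaging the two excitations yields the usual image, while half their difference cancels the anatomy and leaves pure noise that a network can be trained on instead of synthetic noise. A minimal numpy sketch of this decomposition (the 1D toy signal and Gaussian noise are illustrative assumptions, not the paper's data or network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-NEX pair: the same "anatomy" acquired twice,
# each excitation corrupted by independent noise.
signal = np.sin(np.linspace(0, 4 * np.pi, 256))   # stand-in for an MR image profile
sigma = 0.3
nex1 = signal + rng.normal(0.0, sigma, signal.shape)
nex2 = signal + rng.normal(0.0, sigma, signal.shape)

# Averaging the two excitations gives the standard 2-NEX image
# (noise std reduced by sqrt(2))...
image = 0.5 * (nex1 + nex2)

# ...while their half-difference cancels the anatomy, leaving a pure
# noise realization with per-acquisition statistics scaled by 1/sqrt(2).
noise = 0.5 * (nex1 - nex2)

sigma_est = np.std(noise) * np.sqrt(2)   # estimate of the per-acquisition noise std
```

The paper trains a residual CNN on exactly this kind of paired data; the sketch only shows where the "inherent true noise" comes from.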
Zhao, T. & Blu, T.,"Detecting Curves in Very Noisy Images Using Fourier-Argand Moments", Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP'19), Taipei, Taiwan, pp. 3011-3015, September 22--25, 2019. |
Detection of curves (e.g., ridges) from very noisy images is an important yet challenging task in photon-starved imaging applications (e.g., nuclear imaging modalities, fluorescence/electron microscopy, radioastronomy, first photon/light-in-flight imaging). In this paper, we exploit the consistency of the image along the curve, i.e., the fact that the image changes slowly when we move along the curve---a "locally" laminar image. We compute a sequence of complex scalars that we call Fourier-Argand moments, and show that the direction of variation of a laminar image is purely encoded in the phase of these moments. In particular, focusing on ridges located at the center of the image, we show that using these moments together in a frequency estimation algorithm provides a very accurate and highly robust estimate of the direction of the ridge: we demonstrate this accuracy for noise levels as high as -10 dB. We then show how to detect curves---i.e., local ridges---by computing the Fourier-Argand moments within a sliding window across the image, and design a consistency map whose thresholding keeps only the pixels on the curve. Numerical experiments on both synthetic images and real images (low-light photography) demonstrate the accuracy and robustness to noise of the proposed method, compared to a state-of-the-art method. |
@inproceedings{blu2019e, author = "Zhao, T. and Blu, T.", title = "Detecting Curves in Very Noisy Images Using {F}ourier-{A}rgand Moments", booktitle = "Proceedings of the 2019 {IEEE} International Conference on Image Processing ({ICIP'19})", month = "September 22--25,", year = "2019", pages = "3011--3015", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2019e" } |
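The central claim of the entry above, that the orientation of a laminar image is purely encoded in the phase of angular moments, can be checked on a toy example. The sketch below builds a Gaussian ridge through the origin (my choice of profile, not the paper's data), computes the order-2 angular moment on a polar grid, and recovers the ridge angle from its phase alone; the full method combines several moment orders in a frequency estimation algorithm, which is not reproduced here:

```python
import numpy as np

# Toy "locally laminar" image: a straight Gaussian ridge through the
# centre, oriented at angle phi (the direction of slow variation).
phi = np.deg2rad(30.0)   # ground-truth ridge orientation (assumed)
width = 0.1

# Sampling on a polar grid makes the angular Fourier sum direct.
r = np.linspace(0.05, 1.0, 64)[:, None]
theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)[None, :]
# signed distance of the point (r, theta) to the line at angle phi
dist = r * np.sin(theta - phi)
image = np.exp(-dist**2 / (2.0 * width**2))

# Order-2 angular moment: for an even ridge profile the magnitude sits
# in a real positive factor, so the orientation lives purely in the phase,
# F2 = c2 * exp(-2j * phi) with c2 > 0.
F2 = np.sum(image * np.exp(-2j * theta))
phi_est = (-np.angle(F2) / 2.0) % np.pi
```

The modulo-pi at the end reflects the fact that a ridge direction is only defined up to a 180-degree flip.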
Zhao, T. & Blu, T.,"The Fourier-Argand Representation: An Optimal Basis of Steerable Patterns", IEEE Transactions on Image Processing, Vol. 29 (1), pp. 6357-6371, December 2020. |
Computing the convolution between a 2D signal and a corresponding filter with variable orientations is a basic problem that arises in various tasks ranging from low-level image processing (e.g. ridge/edge detection) to high-level computer vision (e.g. pattern recognition). Despite decades of research, an efficient method for solving this problem is still lacking. In this paper, we investigate this problem from the perspective of approximation by considering the following question: what is the optimal basis for approximating all rotated versions of a given bivariate function? Surprisingly, solely minimising the L2-approximation error leads to a rotation-covariant linear expansion, which we name the Fourier-Argand representation. This representation presents two major advantages: 1) rotation-covariance of the basis, which implies "strong steerability": rotating by an angle α corresponds to multiplying each basis function by a complex scalar exp(-ikα); 2) optimality of the Fourier-Argand basis, which ensures that a small number of basis functions suffices to accurately approximate complicated patterns and highly direction-selective filters. We show the relation between the Fourier-Argand representation and the Radon transform, leading to an efficient implementation of the decomposition for digital filters. We also show how to retrieve the accurate orientation of local structures/patterns using a fast frequency estimation algorithm. |
@article{blu2020d, author = "Zhao, T. and Blu, T.", title = "The {F}ourier-{A}rgand Representation: An Optimal Basis of Steerable Patterns", journal = "IEEE Transactions on Image Processing", month = "December", year = "2020", volume = "29", number = "1", pages = "6357--6371", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020d" } |
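The "strong steerability" claimed in the entry above is an exact identity for the angular part of such an expansion: rotating the pattern by α is the same as multiplying the k-th harmonic coefficient by exp(-ikα). A small numpy sketch verifying this numerically (a 1D angular profile is used as a simplification of the full bivariate basis):

```python
import numpy as np

rng = np.random.default_rng(1)

# A pattern on the circle built from a handful of angular harmonics
# (stand-in for the angular factor of a Fourier-Argand expansion).
K = 5
coeffs = rng.normal(size=K) + 1j * rng.normal(size=K)   # c_k, k = 0..K-1
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)

def synthesize(c, t):
    """p(t) = sum_k c_k exp(i k t)."""
    k = np.arange(len(c))[:, None]
    return (c[:, None] * np.exp(1j * k * t[None, :])).sum(axis=0)

alpha = np.deg2rad(25.0)

# Direct rotation: evaluate the pattern at theta - alpha...
rotated_direct = synthesize(coeffs, theta - alpha)

# ...versus "strong steering": scale each coefficient by exp(-i k alpha).
k = np.arange(K)
rotated_steered = synthesize(coeffs * np.exp(-1j * k * alpha), theta)

err = np.max(np.abs(rotated_direct - rotated_steered))
```

Because p(θ - α) = Σ c_k e^{ik(θ-α)} = Σ (c_k e^{-ikα}) e^{ikθ}, the two computations agree to floating-point precision; the paper's contribution is showing that this basis is also L2-optimal for approximating all rotations of a given pattern.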
Zheng, N., Li, X., Blu, T. & Lee, T.,"SURE-MSE speech enhancement for robust speech recognition", Proceedings of the 2010 International Symposium on Chinese Spoken Language Processing (ISCSLP'10), Tainan, Taiwan, pp. 271-274, November 29--December 3, 2010. |
This paper presents a new approach to enhancing noisy (white Gaussian noise) speech signals for robust speech recognition. It is based on the minimization of an estimate of the denoising MSE (known as SURE) and does not require any hypotheses on the original signal. The enhanced signal is obtained by thresholding coefficients in the DCT domain, with the parameters of the thresholding functions being specified through the minimization of the SURE. Thanks to a linear parametrization, this optimization is very cost-effective. The method also works well for non-white noise, with a noise-whitening step applied before the optimization. We have performed automatic speech recognition tests on a subset of the AURORA 2 database to compare our method with different denoising strategies. The results show that our method brings a substantial increase in recognition accuracy. |
@inproceedings{blu2010l, author = "Zheng, N. and Li, X. and Blu, T. and Lee, T.", title = "{SURE}-{MSE} speech enhancement for robust speech recognition", booktitle = "Proceedings of the 2010 International Symposium on Chinese Spoken Language Processing (ISCSLP'10)", month = "November 29--December 3,", year = "2010", pages = "271--274", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010l" } |
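The principle behind the entry above, choosing denoising parameters by minimizing SURE rather than the (unavailable) true MSE, can be sketched in a few lines. The snippet below swaps the paper's linearly parameterized thresholding functions for a plain soft threshold selected by grid search, and uses a toy sinusoidal "frame" instead of speech; only the SURE criterion itself is taken from the standard Stein formula:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(2)

# Toy clean signal and its white-Gaussian-noise corrupted version
# (a stand-in for one speech frame; the real method works frame-wise).
n = 1024
t_ax = np.arange(n)
clean = np.sin(2 * np.pi * 5 * t_ax / n) + 0.5 * np.sin(2 * np.pi * 12 * t_ax / n)
sigma = 0.4
noisy = clean + rng.normal(0.0, sigma, n)

# Orthonormal DCT, so the transformed noise stays white with variance sigma^2.
X = dct(noisy, norm="ortho")

def sure_soft(x, t, sigma):
    """Stein's unbiased estimate of the MSE after soft-thresholding at t."""
    return (-len(x) * sigma**2
            + np.sum(np.minimum(x**2, t**2))
            + 2 * sigma**2 * np.count_nonzero(np.abs(x) > t))

# Pick the threshold minimizing SURE -- no access to `clean` is needed.
ts = np.linspace(0.0, 4.0 * sigma, 200)
t_best = ts[np.argmin([sure_soft(X, t, sigma) for t in ts])]

X_den = np.sign(X) * np.maximum(np.abs(X) - t_best, 0.0)
denoised = idct(X_den, norm="ortho")

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

Because SURE is an unbiased estimate of the true MSE, minimizing it over the threshold tracks the oracle choice closely; the paper's linear parametrization makes the same minimization essentially closed-form.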
Created by JabRef on 27/06/2024.