Ai, Q.Y.H., Chen, W., So, T.Y., Lam, W.K.J., Jiang, B., Poon, D.M.C., Qamar, S., Mo, F.K.F., Blu, T., Chan, Q., Ma, B.B.Y., Hui, E.P., Chan, K.C.A. & King, A.D., "Quantitative T1ρ MRI of the Head and Neck Discriminates Carcinoma and Benign Hyperplasia in the Nasopharynx", American Journal of Neuroradiology, 2020. 
BACKGROUND AND PURPOSE: T1ρ imaging is a new quantitative MR imaging pulse sequence with the potential to discriminate between malignant and benign tissue. In this study, we evaluated the capability of T1ρ imaging to characterize tissue by applying T1ρ imaging to malignant and benign tissue in the nasopharynx and to normal tissue in the head and neck. MATERIALS AND METHODS: Participants with undifferentiated nasopharyngeal carcinoma and benign hyperplasia of the nasopharynx prospectively underwent T1ρ imaging. T1ρ measurements obtained from the histogram analysis for nasopharyngeal carcinoma in 43 participants were compared with those for benign hyperplasia and for normal tissue (brain, muscle, and parotid glands) in 41 participants using the Mann–Whitney U test. The area under the curve of significant T1ρ measurements was calculated and compared using receiver operating characteristic analysis and the DeLong test, respectively. A P < 0.05 indicated statistical significance. RESULTS: There were significant differences in T1ρ measurements between nasopharyngeal carcinoma and benign hyperplasia and between nasopharyngeal carcinoma and normal tissue (all, P < 0.05). Compared with benign hyperplasia, nasopharyngeal carcinoma showed a lower T1ρ mean (62.14 versus 65.45 ms), SD (12.60 versus 17.73 ms), and skewness (0.61 versus 0.76) (all P < 0.05), but no difference in kurtosis (P = 0.18). The T1ρ SD showed the highest area under the curve of 0.95 compared with the T1ρ mean (area under the curve = 0.72) and T1ρ skewness (area under the curve = 0.72) for discriminating nasopharyngeal carcinoma and benign hyperplasia (all, P < 0.05). CONCLUSIONS: Quantitative T1ρ imaging has the potential to discriminate malignant from benign and normal tissue in the head and neck. 
@article{blu2020g, author = "Ai, Q.Y.H. and Chen, W. and So, T.Y. and Lam, W.K.J. and Jiang, B. and Poon, D.M.C. and Qamar, S. and Mo, F.K.F. and Blu, T. and Chan, Q. and Ma, B.B.Y. and Hui, E.P. and Chan, K.C.A. and King, A.D.", title = "Quantitative {T1}\(\rho\) {MRI} of the Head and Neck Discriminates Carcinoma and Benign Hyperplasia in the Nasopharynx", journal = "American Journal of Neuroradiology", year = "2020", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020g", doi = "10.3174/ajnr.A6828" } 
Alexandru, R., Blu, T. & Dragotti, P.L., "DSLAM: Diffusion Source Localization and Trajectory Mapping", Proceedings of the Forty-fifth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'20), Barcelona, Spain, pp. 5600–5604, May 4–8, 2020. 
We consider physical fields induced by a finite number of instantaneous diffusion sources, which we sample using a mobile sensor, along unknown trajectories composed of multiple linear segments. We address the problem of estimating the sources, as well as the trajectory of the mobile sensor. Within this framework, we propose a method for localizing sources of unknown amplitudes and known activation times. The reconstruction method we propose maps the measurements obtained using the mobile sensor to a sequence of generalized field samples. From these generalized samples, we can then retrieve the locations of the sources as well as the trajectory of the sensor (up to a linear geometric transformation). 
@inproceedings{blu2020a, author = "Alexandru, R. and Blu, T. and Dragotti, P.L.", title = "{DSLAM}: Diffusion Source Localization and Trajectory Mapping", booktitle = "Proceedings of the Forty-fifth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'20})", month = "May 4--8,", year = "2020", pages = "5600--5604", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020a" } 
Alexandru, R., Blu, T. & Dragotti, P.L., "Diffusion SLAM: Localising diffusion sources from samples taken by location-unaware mobile sensors", IEEE Transactions on Signal Processing, Vol. 69, pp. 5539–5554, 2021. 
We consider diffusion fields induced by multiple localised and instantaneous sources. We assume a mobile sensor samples the field, uniformly along a piecewise linear trajectory, which is unknown. The problem we address is the estimation of the amplitudes and locations of the diffusion sources, as well as of the trajectory of the sensor. We first propose a method for diffusion source localisation and trajectory mapping (DSLAM) in 2D, where we assume the activation times of the sources are known and the evolution of the diffusion field over time is negligible. The reconstruction method we propose maps the measurements obtained using the mobile sensor to a sequence of generalised field samples. From these generalised samples, we can then retrieve the locations of the sources as well as the trajectory of the sensor (up to a 2D orthogonal geometric transformation). We then relax these assumptions and show that we can perform DSLAM also in the case of unknown activation times, from samples of a time-varying field, as well as in 3D spaces. Finally, simulation results on both synthetic and real data further validate the proposed framework. 
@article{blu2021c, author = "Alexandru, R. and Blu, T. and Dragotti, P.L.", title = "Diffusion {SLAM}: Localising diffusion sources from samples taken by location-unaware mobile sensors", journal = "IEEE Transactions on Signal Processing", year = "2021", volume = "69", pages = "5539--5554", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2021c" } 
Alexandru, R., Blu, T. & Dragotti, P.L., "Localising Diffusion Sources from Samples Taken Along Unknown Parametric Trajectories", Proceedings of the Twenty-Ninth European Signal Processing Conference (EUSIPCO), Dublin, Ireland, pp. 2199–2203, August 23–27, 2021. 
In a recent paper we showed that it is possible to localise diffusion sources observed with a mobile sensor whilst simultaneously estimating the piecewise linear trajectory of the sensor. Here we address the case in which the sensor moves along an arbitrary unknown parametric trajectory and we show that by solving a linear system of equations, we can retrieve the inner products between the parameters of the trajectory. From these inner products we then retrieve the curve parameters up to an orthogonal transformation, which allows us to also perfectly estimate the amplitudes of the sources and find their locations up to an orthogonal transformation. 
@inproceedings{blu2021e, author = "Alexandru, R. and Blu, T. and Dragotti, P.L.", title = "Localising Diffusion Sources from Samples Taken Along Unknown Parametric Trajectories", booktitle = "Proceedings of the Twenty-Ninth European Signal Processing Conference ({EUSIPCO})", month = "August 23--27,", year = "2021", pages = "2199--2203", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2021e" } 
Barbotin, Y., Van De Ville, D., Blu, T. & Unser, M., "Fast Computation of Polyharmonic B-Spline Autocorrelation Filters", IEEE Signal Processing Letters, Vol. 15, pp. 773–776, 2008. 
A fast computational method is given for the Fourier transform of the polyharmonic B-spline autocorrelation sequence in d dimensions. The approximation error is exponentially decaying with the number of terms taken into account. The algorithm improves speed upon a simple truncated-sum approach. Moreover, it is virtually independent of the spline's order. The autocorrelation filter directly serves for various tasks related to polyharmonic splines, such as interpolation, orthonormalization, and wavelet basis design. 
@article{blu2008a, author = "Barbotin, Y. and Van De Ville, D. and Blu, T. and Unser, M.", title = "Fast Computation of Polyharmonic \mbox{B-Spline} Autocorrelation Filters", journal = "{IEEE} Signal Processing Letters", year = "2008", volume = "15", pages = "773--776", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008a" } 
Batenkov, D., Bhandari, A. & Blu, T., "Rethinking Super-Resolution: The Bandwidth Selection Problem", Proceedings of the Forty-fourth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'19), Brighton, UK, pp. 5087–5091, March 12–17, 2019. 
Super-resolution is the art of recovering spikes from lowpass projections in the Fourier domain. Over the last decade specifically, several significant advancements linked with mathematical guarantees and recovery algorithms have been made. Most super-resolution algorithms rely on a two-step procedure: deconvolution followed by high-resolution frequency estimation. However, for this to work, the exact bandwidth of the lowpass filter must be known; an assumption that is central to the mathematical model of super-resolution. On the flip side, when it comes to practice, smoothness rather than bandlimitedness is a much more applicable property. Since smooth pulses decay quickly, one may still capitalize on the existing super-resolution algorithms provided that the essential bandwidth is known. This problem has not been discussed in the literature and is the theme of our work. In this paper, we start with an experiment to show that super-resolution is sensitive to bandwidth selection. This raises the question of how to select the optimal bandwidth. To this end, we propose a bandwidth selection criterion which works by minimizing a proxy of the estimation error that depends on the bandwidth. Our criterion is easy to compute and gives reasonable results for experimentally acquired data, thus opening interesting avenues for further investigation, for instance the relationship to Cramér–Rao bounds. 
@inproceedings{blu2019a, author = "Batenkov, D. and Bhandari, A. and Blu, T.", title = "Rethinking Super-Resolution: The Bandwidth Selection Problem", booktitle = "Proceedings of the Forty-fourth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'19})", month = "March 12--17,", year = "2019", pages = "5087--5091", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2019a" } 
Bathellier, B., Van De Ville, D., Blu, T., Unser, M. & Carleton, A., "Wavelet-Based Multi-Resolution Statistics for Optical Imaging Signals: Application to Automated Detection of Odour Activated Glomeruli in the Mouse Olfactory Bulb", NeuroImage, Vol. 34 (3), pp. 1020–1035, February 1, 2007. 
Optical imaging techniques offer powerful solutions to capture brain network processing in animals, especially when activity is distributed in functionally distinct spatial domains. Despite the progress in imaging techniques, the standard analysis procedures and statistical assessments for this type of data are still limited. In this paper, we perform two in vivo non-invasive optical recording techniques in the mouse olfactory bulb, using a genetically expressed activity reporter fluorescent protein (synaptopHluorin) and intrinsic signals of the brain. For both imaging techniques, we show that the odour-triggered signals can be accurately parameterized using linear models. Fitting the models allows us to extract odour-specific signals with a reduced level of noise compared to standard methods. In addition, the models serve to evaluate statistical significance, using a wavelet-based framework that exploits spatial correlation at different scales. We propose an extension of this framework to extract activation patterns at specific wavelet scales. This method is especially interesting for detecting the odour inputs that segregate on the olfactory bulb in small spherical structures called glomeruli. Interestingly, with proper selection of wavelet scales, we can isolate significantly activated glomeruli and thus determine the odour map in an automated manner. Comparison against manual detection of glomeruli shows the high accuracy of the proposed method. Therefore, beyond offering an advantageous alternative to existing treatments of optical imaging signals in general, our framework proposes an interesting procedure to dissect brain activation patterns on multiple scales with statistical control. 
@article{blu2007a, author = "Bathellier, B. and Van De Ville, D. and Blu, T. and Unser, M. and Carleton, A.", title = "Wavelet-Based Multi-Resolution Statistics for Optical Imaging Signals: {A}pplication to Automated Detection of Odour Activated Glomeruli in the Mouse Olfactory Bulb", journal = "NeuroImage", month = "February 1,", year = "2007", volume = "34", number = "3", pages = "1020--1035", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007a" } 
Berent, J., Dragotti, P.L. & Blu, T., "Sampling Piecewise Sinusoidal Signals with Finite Rate of Innovation Methods", IEEE Transactions on Signal Processing, Vol. 58 (2), pp. 613–625, February 2010. 
We consider the problem of sampling piecewise sinusoidal signals. Classical sampling theory does not enable perfect reconstruction of such signals since they are not bandlimited. However, they can be characterized by a finite number of parameters, namely, the frequency, amplitude, and phase of the sinusoids and the location of the discontinuities. In this paper, we show that under certain hypotheses on the sampling kernel, it is possible to perfectly recover the parameters that define the piecewise sinusoidal signal from its sampled version. In particular, we show that, at least theoretically, it is possible to recover piecewise sine waves with arbitrarily high frequencies and arbitrarily close switching points. Extensions of the method are also presented such as the recovery of combinations of piecewise sine waves and polynomials. Finally, we study the effect of noise and present a robust reconstruction algorithm that is stable down to SNR levels of 7 [dB]. 
@article{blu2010a, author = "Berent, J. and Dragotti, P.L. and Blu, T.", title = "Sampling Piecewise Sinusoidal Signals with Finite Rate of Innovation Methods", journal = "{IEEE} Transactions on Signal Processing", month = "February", year = "2010", volume = "58", number = "2", pages = "613--625", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010a" } 
Bergner, S., Van De Ville, D., Blu, T. & Möller, T., "On Sampling Lattices with Similarity Scaling Relationships", Proceedings of the Eighth International Workshop on Sampling Theory and Applications (SampTA'09), Marseille, France, May 18–22, 2009. 
We provide a method for constructing regular sampling lattices in arbitrary dimensions together with an integer dilation matrix. Subsampling using this dilation matrix leads to a similaritytransformed version of the lattice with a chosen density reduction. These lattices are interesting candidates for multidimensional wavelet constructions with a limited number of subbands. 
@inproceedings{blu2009a, author = "Bergner, S. and Van De Ville, D. and Blu, T. and M{\"{o}}ller, T.", title = "On Sampling Lattices with Similarity Scaling Relationships", booktitle = "Proceedings of the Eighth International Workshop on Sampling Theory and Applications ({SampTA'09})", month = "May 18--22,", year = "2009", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009a" } 
Bhandari, A. & Blu, T., "FRI Sampling and Time-Varying Pulses: Some Theory and Four Short Stories", Proceedings of the Forty-second IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'17), New Orleans, LA, USA, pp. 3804–3808, March 5–9, 2017. 
The field of signal processing is replete with exemplary problems where the measurements amount to time-delayed and amplitude-scaled echoes of some template function or a pulse. When the inter-pulse spacing is favorable, something as primitive as a matched filter serves the purpose of identifying time-delays and amplitudes. When the inter-pulse spacing poses an algorithmic challenge, high-resolution methods such as finite-rate-of-innovation (FRI) may be used. However, in many practical cases of interest, the template function may be distorted due to physical properties of propagation and transmission. Such cases cannot be handled well by existing signal models. Inspired by problems in spectroscopy, radar, photoacoustic imaging and ultra-wideband arrays, on which we base our case studies, in this work we take a step towards recovering spikes from time-varying pulses. To this end, we repurpose the FRI method and extend its utility to the case of phase-distorted pulses. Application of our algorithm to the above-mentioned case studies results in substantial improvement in peak-signal-to-noise ratio, thus promising interesting future directions. 
@inproceedings{blu2017a, author = "Bhandari, A. and Blu, T.", title = "{FRI} Sampling and Time-Varying Pulses: Some Theory and Four Short Stories", booktitle = "Proceedings of the Forty-second {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'17})", month = "March 5--9,", year = "2017", pages = "3804--3808", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017a" } 
Blanc, P., Blu, T., Ranchin, T., Wald, L. & Aloisi, R., "Using Iterated Rational Filter Banks Within the ARSIS Concept for Producing 10 m Landsat Multispectral Images", International Journal of Remote Sensing, Vol. 19 (12), pp. 2331–2343, August 1998. 
The ARSIS concept is designed to increase the spatial resolution of an image without modification of its spectral contents, by merging structures extracted from a higher-resolution image of the same scene, but in a different spectral band. It makes use of wavelet transforms and multiresolution analysis. It is currently applied in an operational way with dyadic wavelet transforms, which limit the merging to images whose resolution ratio is a power of 2. Rational discrete wavelet transforms can be approximated numerically by rational filter banks, which would enable a more general merging. Indeed, in theory, the ratio of the resolutions of the images to merge is a power of a certain family of rational numbers. The aim of this paper is to examine whether the use of those approximations of rational wavelet transforms is efficient within the ARSIS concept. This work relies on a particular case: the merging of a 10 m SPOT Panchromatic image and a 30 m Landsat Thematic Mapper multispectral image to synthesize a 10 m multispectral image (TM-HR). 
@article{blu1998a, author = "Blanc, P. and Blu, T. and Ranchin, T. and Wald, L. and Aloisi, R.", title = "Using Iterated Rational Filter Banks Within the {ARSIS} Concept for Producing 10 {m} {L}andsat Multispectral Images", journal = "International Journal of Remote Sensing", month = "August", year = "1998", volume = "19", number = "12", pages = "2331--2343", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1998a" } 
Blu, T., "Iterated Rational Filter Banks – Underlying Limit Functions", Proceedings of the IEEE Signal Processing Society Digital Signal Processing Workshop, Utica, USA, pp. 1.8.1–1.8.2, September 13–16, 1992. 
The term “Rational Filter Bank” (RFB) stands for “Filter Bank with Rational Rate Changes”. A critically sampled two-band analysis RFB is shown with its synthesis counterpart in figure 1. G typically stands for a lowpass FIR filter, whereas H is highpass FIR. In this paper, we are interested in the iteration of the sole lowpass branch, which leads, in the integer case (q = 1), to a wavelet decomposition. Kovacevic and Vetterli have wondered whether iterated RFBs could also involve a discrete wavelet transform. Actually, Daubechies proved that whenever p/q is not an integer and G is FIR, this cannot be the case. We show here that, despite this discouraging feature, there still exists not only one function (and its shifts), as in the integer case, but an infinite set of compactly supported functions φ_{s}(t). More importantly, under certain conditions, these functions appear to be "almost" shifted versions of one single function. These φ_{s} are constructed in the same way as in the dyadic case (p = 2, q = 1), that is to say by iterating the lowpass branch of a synthesis RFB, but in this case the initialization is meaningful. 
@inproceedings{blu1992a, author = "Blu, T.", title = "Iterated Rational Filter Banks--{U}nderlying Limit Functions", booktitle = "Proceedings of the {IEEE} Signal Processing Society Digital Signal Processing Workshop", month = "September 13--16,", year = "1992", pages = "1.8.1--1.8.2", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1992a" } 
Blu, T., "Iterated Filter Banks with Rational Rate Changes – Connection with Discrete Wavelet Transforms", IEEE Transactions on Signal Processing, Vol. 41 (12), pp. 3232–3244, December 1993. 
Some properties of two-band filter banks with rational rate changes ("rational filter banks") are first reviewed. Focusing then on iterated rational filter banks, compactly supported limit functions are obtained, in the same manner as previously done for dyadic schemes, allowing a characterization of such filter banks. These functions are carefully studied and the properties they share with the dyadic case are highlighted. They are experimentally observed to verify a "shift property" (strictly verified in the dyadic case) up to an error which can be made arbitrarily small when their regularity increases. In this case, the highpass outputs of an iterated filter bank can be very close to samples of a discrete wavelet transform with the same rational dilation factor. A straightforward extension of the formalism of multiresolution analysis is also made. Finally, it is shown that if one is ready to put up with the loss of the shift property, rational iterated filter banks can be used in the same manner as if they were dyadic filter banks, with the advantage that rational dilation factors can be chosen closer to 1. 
@article{blu1993a, author = "Blu, T.", title = "Iterated Filter Banks with Rational Rate Changes--{C}onnection with Discrete Wavelet Transforms", journal = "{IEEE} Transactions on Signal Processing", month = "December", year = "1993", volume = "41", number = "12", pages = "3232--3244", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1993a" } 
Blu, T., "Shift Error in Iterated Rational Filter Banks", Proceedings of the Eighth European Signal Processing Conference (EUSIPCO'96), Trieste, Italy, Vol. II, pp. 1199–1202, September 10–13, 1996. 
For FIR filters, limit functions generated in iterated rational schemes are not invariant under shift operations, unlike what happens in the dyadic case: this feature prevents an analysis iterated rational filter bank (AIRFB) from behaving exactly like a discrete wavelet transform, even though an adequate choice of the generating filter makes it possible to minimize its consequences. This paper indicates how to compute the error between an "average" shifted function and these limit functions, an open problem until now. Connections are also pointed out between this shift error and the selectivity of the AIRFB. 
@inproceedings{blu1996a, author = "Blu, T.", title = "Shift Error in Iterated Rational Filter Banks", booktitle = "Proceedings of the Eighth European Signal Processing Conference ({EUSIPCO'96})", month = "September 10--13,", year = "1996", volume = "{II}", pages = "1199--1202", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1996a" } 
Blu, T., "An Iterated Rational Filter Bank for Audio Coding", Proceedings of the Third IEEE Signal Processing Society International Symposium on Time-Frequency and Time-Scale Analysis (IEEE-SP'96), Paris, France, pp. 81–84, June 18–21, 1996. 
This paper proposes a regular third-of-an-octave filter bank for high-fidelity audio coding. The originality here is twofold: first, the filter bank is an iterated orthonormal rational filter bank for which the generating filters have been designed so that its outputs closely approximate a wavelet transform. This is different from the known coding algorithms, which all use an integer filter bank, and most often a uniform one. Second, the masking procedure itself is modeled with the help of a wavelet transform, unlike the classical procedure in which a short-time spectrum is computed and which gives rise to unwanted pre-echo effects. The masking procedure is then made equivalent to a quantization procedure. A simple non-optimized algorithm has been worked out in order to show the benefits of such a structure, especially in terms of pre-echo (which is perceptually inaudible), and the disadvantages, especially as far as delay is concerned. 
@inproceedings{blu1996b, author = "Blu, T.", title = "An Iterated Rational Filter Bank for Audio Coding", booktitle = "Proceedings of the Third {IEEE} Signal Processing Society International Symposium on Time-Frequency and Time-Scale Analysis ({IEEE-SP'96})", month = "June 18--21,", year = "1996", pages = "81--84", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1996b" } 
Blu, T., "Bancs de filtres itérés en fraction d'octave – Application au codage de son (Iterated Rational Filter Banks with an Application to Audio Coding)", School: ENST Paris, Nr. 96 E 009, 1996. In French. 
This thesis is mainly focused on the iteration of discrete schemes which allow fractional sampling: this is a generalization of better-known time-frequency tools, namely the dyadic schemes. It is shown in particular how classical results can be extended to the "rational" case, and how the new problems that arise are solved... One of the most interesting results is indeed the existence of limit functions associated with the iterated schemes, thus providing an interpretation of the iterated rational filter bank as the discrete form of a time-scale transform. However, the fact that shift invariance between these functions is not preserved prevents the time-scale transform from behaving exactly like a wavelet transform. The amount of shift error is quantified under the name "amnesia". The properties of the limit functions (regularity and amnesia, among others) are studied in detail, and the implications for the iterated filter bank are made explicit. On the other hand, the discrete features of the filter bank are studied as far as implementation (finite precision) and filter design are concerned: in the latter case, an algorithm which proved to be very efficient is described. Finally, an application to high-fidelity sound coding has been implemented: a new formulation of the psychoacoustic masking effect in the form of a wavelet transform, together with the theoretical considerations developed in the previous chapters, leads to a new coding algorithm whose main characteristic is the inaudibility of the classical pre-echo effect. 
@phdthesis{blu1996c, author = "Blu, T.", title = "Bancs de filtres it\'er\'es en fraction d'octave -- Application au codage de son (Iterated Rational Filter Banks with an Application to Audio Coding)", school = "ENST Paris, Nr. 96 E 009", year = "1996", note = "in French", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1996c" } 
Blu, T., "A New Design Algorithm for Two-Band Orthonormal Rational Filter Banks and Orthonormal Rational Wavelets", IEEE Transactions on Signal Processing, Vol. 46 (6), pp. 1494–1504, June 1998. 
In this paper, we present a new algorithm for the design of orthonormal two-band rational filter banks. Owing to the connection between iterated rational filter banks and rational wavelets, this is also a design algorithm for orthonormal rational wavelets. It is basically a simple iterative procedure, which explains its exponential convergence and adaptability under various linear constraints (e.g., regularity). Although the filters obtained from this algorithm are suboptimally designed, they show excellent frequency selectivity. After an in-depth account of the algorithm, we discuss the properties of the rational wavelets generated by some designed filters. In particular, we stress the possibility of designing "almost" shift-error-free wavelets, which allows the implementation of a rational wavelet transform. 
@article{blu1998b, author = "Blu, T.", title = "A New Design Algorithm for Two-Band Orthonormal Rational Filter Banks and Orthonormal Rational Wavelets", journal = "{IEEE} Transactions on Signal Processing", month = "June", year = "1998", volume = "46", number = "6", pages = "1494--1504", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1998b" } 
Blu, T., "The Generalized Annihilation Property – A Tool For Solving Finite Rate of Innovation Problems", Proceedings of the Eighth International Workshop on Sampling Theory and Applications (SampTA'09), Marseille, France, May 18–22, 2009. 
We describe a property satisfied by a class of nonlinear systems of equations that are of the form $F(\Omega)X=Y$. Here $F(\Omega)$ is a matrix that depends on an unknown $K$-dimensional vector $\Omega$, $X$ is an unknown $K$-dimensional vector, and $Y$ is a vector of $N$ ($\ge K$) given measurements. Such equations are encountered in super-resolution or sparse signal recovery problems known as "Finite Rate of Innovation" signal reconstruction. We show how this property makes it possible to solve explicitly for the unknowns $\Omega$ and $X$ by a direct, non-iterative algorithm that involves the resolution of two linear systems of equations and the extraction of the roots of a polynomial, and we give examples of problems where this type of solution has been found useful. 
@inproceedings{blu2009b, author = "Blu, T.", title = "The Generalized Annihilation Property--{A} Tool For Solving Finite Rate of Innovation Problems", booktitle = "Proceedings of the Eighth International Workshop on Sampling Theory and Applications ({SampTA'09})", month = "May 18--22,", year = "2009", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009b" } 
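The abstract above describes a "two linear systems plus polynomial rooting" recipe. As an illustration only (a textbook Prony-type sketch under simplified assumptions, not the paper's algorithm), the same structure appears when recovering a sum of exponentials from uniform samples: one linear system yields an annihilating filter, whose roots are the nonlinear unknowns, and a second (Vandermonde) system yields the amplitudes.

```python
import numpy as np

def annihilating_filter_recovery(y, K):
    """Recover (u_k, x_k) from samples y[n] = sum_k x_k * u_k**n.

    Illustrative sketch of the two-linear-systems + rooting structure;
    K is the number of exponential components, assumed known.
    """
    N = len(y)
    # 1) First linear system: monic filter h of length K+1 with (h * y)[n] = 0.
    A = np.array([[y[n - k] for k in range(1, K + 1)] for n in range(K, N)])
    b = -np.array([y[n] for n in range(K, N)])
    h_tail, *_ = np.linalg.lstsq(A, b, rcond=None)
    h = np.concatenate(([1.0], h_tail))
    # 2) Polynomial rooting: the roots of h are the nonlinear unknowns u_k.
    u = np.roots(h)
    # 3) Second linear system (Vandermonde): solve for the amplitudes x_k.
    V = np.vander(u, N, increasing=True).T   # V[n, k] = u_k**n
    x, *_ = np.linalg.lstsq(V, y, rcond=None)
    return u, x

# Synthetic check with two real exponentials.
u_true = np.array([0.9, -0.5])
x_true = np.array([2.0, 1.0])
n = np.arange(8)
y = (x_true[None, :] * u_true[None, :] ** n[:, None]).sum(axis=1)
u_est, x_est = annihilating_filter_recovery(y, K=2)
print(np.sort(np.real(u_est)))  # close to [-0.5, 0.9]
```

In the noiseless case the least-squares solves reduce to exact linear solves, which is the "direct, non-iterative" character emphasized in the abstract.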
Blu, T., "The SURE-LET Methodology – A Prior-Free Approach to Signal and Image Denoising", Plenary Presentation at the Eighth International Workshop on Sampling Theory and Applications (SampTA'09), Marseille, France, May 18–22, 2009. 
A novel methodology for restoring signals/images from noisy measurements will be presented. Contrary to the usual approaches (Bayesian, sparsity-based), there is no prior modeling of the noiseless signal. Instead, it is the reconstruction algorithm itself that is parametrized, or approximated (using a Linear Expansion of Thresholds: LET). These parameters are then optimized by minimizing an estimate of the MSE between the (unknown) noiseless signal and the one processed by the algorithm. Perhaps surprisingly, it is possible to build such an estimate – Stein's Unbiased Risk Estimate (SURE) – using the noisy signal only, and without making any hypothesis on the noiseless signal. The only hypothesis is on the statistics of the noise (additive, Gaussian). Examples of image denoising are shown to validate the efficiency of this methodology. 
@conference{blu2009c, author = "Blu, T.", title = "The {SURE-LET} Methodology--{A} Prior-Free Approach to Signal and Image Denoising", booktitle = "Plenary Presentation at the Eighth International Workshop on Sampling Theory and Applications ({SampTA'09})", month = "May 18--22,", year = "2009", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009c" } 
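The mechanics described in the abstract above can be sketched in one dimension: parametrize the denoiser as a linear combination of elementary thresholds, then minimize SURE over the coefficients, which reduces to a small linear system. The test signal, the two elementary processings, and the threshold value T below are illustrative choices of mine, not taken from the presentation; only the additive-Gaussian-noise assumption with known σ comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy 1-D test signal (sigma assumed known, per the additive Gaussian model).
N, sigma = 4096, 0.5
x = np.sign(np.sin(2 * np.pi * 3.0 * np.arange(N) / N))   # clean signal
y = x + sigma * rng.standard_normal(N)                    # noisy observation

# LET: the denoiser f(y) = a1*theta1(y) + a2*theta2(y) is a linear
# combination of two differentiable elementary processings.
T = 2 * sigma
theta = np.stack([y, y * np.exp(-y**2 / (2 * T**2))])                 # K x N
dtheta = np.stack([np.ones(N),
                   np.exp(-y**2 / (2 * T**2)) * (1 - y**2 / T**2)])   # f' terms

# SURE(a) = ||theta^T a - y||^2 + 2 sigma^2 sum_n f'(y_n) - N sigma^2
# is quadratic in a, so its minimizer solves a K x K linear system:
#   (theta theta^T) a = theta y - sigma^2 * sum_n dtheta
M = theta @ theta.T
c = theta @ y - sigma**2 * dtheta.sum(axis=1)
a = np.linalg.solve(M, c)
x_hat = theta.T @ a

mse_noisy = np.mean((y - x)**2)         # about sigma^2
mse_denoised = np.mean((x_hat - x)**2)  # smaller: SURE-optimized combination
print(mse_noisy, mse_denoised)
```

Note that the oracle MSE is never touched: only the noisy data and σ enter the linear system, which is exactly the prior-free aspect the abstract emphasizes.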
Blu, T., "Image Denoising and the SURE-LET Methodology", Tutorial Presentation at APSIPA Annual Summit and Conference 2010, Singapore, December 14–17, 2010. 
Image denoising consists in approximating the noiseless image by performing some, usually nonlinear, processing of the noisy image. Most standard techniques involve assumptions on the result of this processing (sparsity, low high-frequency content, etc.); i.e., the denoised image. Instead, the SURE-LET methodology that we promote consists in approximating the processing itself (seen as a function) as a linear combination of elementary nonlinear processings (LET: Linear Expansion of Thresholds), and in optimizing the coefficients of this combination by minimizing a statistically unbiased estimate of the Mean Square Error (SURE: Stein's Unbiased Risk Estimate, for additive Gaussian noise). This tutorial will introduce the technique to the audience and outline its advantages (fast, noise-robust, flexible, image-adaptive). A very complete set of results will be shown and compared with the state of the art. Extensions of the approach to Poisson noise reduction, with application to microscopy imaging, will also be shown. 
@conference{blu2010b, author = "Blu, T.", title = "Image Denoising and the {SURE-LET} Methodology", booktitle = "Tutorial Presentation at APSIPA Annual Summit and Conference 2010", month = "December 14-17,", year = "2010", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010b" } 
Blu, T.,"Sparsity Through Annihilation: Algorithms and Applications", Keynote Presentation at the Tenth IEEE International Conference on Signal Processing (ICSP'10), Beijing, China, October 24-28, 2010. 
The problem of reconstructing or estimating partially observed or sampled signals is an old and important one, and finds application in many areas that involve data acquisition. Traditional sampling and reconstruction approaches are heavily influenced by the classical Shannon sampling theory which gives an exact sampling and interpolation formula for bandlimited signals. Recently, the classical Shannon sampling framework has been extended to classes of nonbandlimited structured signals, which we call signals with Finite Rate of Innovation. In these new sampling schemes, the prior that the signal is sparse in a basis or in a parametric space takes the form of a linear system of equations expressing the annihilation of signalderived quantities. The coefficients of this annihilation system are then related in a nonlinear way (e.g., polynomial roots) to the sparse signal parameters; i.e., its "innovations". This leads to new exact reconstruction formulas and fast algorithms that achieve such reconstructions. We will show how these algorithms are able to deal succesfully with noise issues, leading to statistically optimal recovery, and we will exemplify this theory with a number of applications that benefit from these novel schemes. 
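The annihilation step described above can be illustrated with a toy Prony-type recovery (a sketch under idealized, noiseless assumptions; the two-exponential signal and the sample count are invented for the example): a sum of K complex exponentials is annihilated by a length-(K+1) filter, and the roots of that filter's polynomial return the signal's "innovations" nonlinearly.

```python
import numpy as np

# Unknown "innovations": frequencies and amplitudes of a K = 2 exponential sum
f_true = np.array([0.13, 0.31])
c_true = np.array([1.0, 0.5])
n = np.arange(8)                      # a few uniform samples (>= 2K) suffice
x = sum(c * np.exp(2j * np.pi * f * n) for c, f in zip(c_true, f_true))

K = 2
# Annihilation: find h (length K+1) with sum_j h[j] * x[i+K-j] = 0 for all i.
# Stack these linear equations and take a null vector via the SVD.
T = np.array([[x[i + K - j] for j in range(K + 1)] for i in range(len(x) - K)])
_, _, Vh = np.linalg.svd(T)
h = Vh[-1].conj()                     # right singular vector of the ~0 singular value
# Nonlinear step: the polynomial roots of h encode the frequencies
roots = np.roots(h)
f_est = np.sort(np.mod(np.angle(roots) / (2 * np.pi), 1.0))
```

With exact samples the recovery is exact; the cited talk addresses how the same scheme is made robust to noise.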
@conference{blu2010c, author = "Blu, T.", title = "Sparsity Through Annihilation: Algorithms and Applications", booktitle = "Keynote Presentation at the Tenth {IEEE} International Conference on Signal Processing ({ICSP'10})", month = "October 24-28,", year = "2010", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010c" } 
Blu, T.,"Linear Expansion of Thresholds: A Tool for Approximating Image Processing Algorithms", Keynote Presentation at the Ninth International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI'16), Datong, China, October 15-17, 2016. 
Contrary to the usual processing approaches, which consist in approximating all the pixels of an image (often by optimizing some criterion), we propose to approximate the processing itself using a linear combination of a few basic nonlinear processings, or "thresholds". Accordingly, we term this approach "Linear Expansion of Thresholds" (LET). 
@conference{blu2016f, author = "Blu, T.", title = "Linear Expansion of Thresholds: A Tool for Approximating Image Processing Algorithms", booktitle = "Keynote Presentation at the Ninth International Congress on Image and Signal Processing, BioMedical Engineering and Informatics ({CISP-BMEI'16})", month = "October 15-17,", year = "2016", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2016f" } 
Blu, T., Bay, H. & Unser, M.,"A New High-Resolution Processing Method for the Deconvolution of Optical Coherence Tomography Signals", Proceedings of the First IEEE International Symposium on Biomedical Imaging (ISBI'02), Washington, USA, Vol. {III}, pp. 777-780, July 7-10, 2002. 
We show the feasibility and the potential of a new signal processing algorithm for the high-resolution deconvolution of OCT signals. Our technique relies on the description of the measurements in a parametric form, each set of four parameters describing the optical characteristics of a physical interface (e.g., complex refractive index, depth). Under the hypothesis of a Gaussian source light, we show that it is possible to recover the 4K parameters corresponding to K interfaces using as few as 4K uniform samples of the OCT signal. With noisy data, we can expect the robustness of our method to increase with the oversampling rate, i.e., with the redundancy of the measurements. The validation results show that the quality of the estimation of the parameters (in particular the depth of the interfaces) is closely linked to the noise level of the OCT measurements, and not to the coherence length of the source light, and to their degree of redundancy. 
@inproceedings{blu2002a, author = "Blu, T. and Bay, H. and Unser, M.", title = "A New High-Resolution Processing Method for the Deconvolution of Optical Coherence Tomography Signals", booktitle = "Proceedings of the First {IEEE} International Symposium on Biomedical Imaging ({ISBI'02})", month = "July 7-10,", year = "2002", volume = "{III}", pages = "777-780", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002a" } 
Blu, T., Dragotti, P.L., Vetterli, M., Marziliano, P. & Coulot, L.,"Sparse Sampling of Signal Innovations", IEEE Signal Processing Magazine, Vol. 25 (2), pp. 31-40, March 2008. 
Signal acquisition and reconstruction is at the heart of signal processing, and sampling theorems provide the bridge between the continuous and the discrete-time worlds. The most celebrated and widely used sampling theorem is often attributed to Shannon (and many others, from Whittaker to Kotel′nikov and Nyquist, to name a few) and gives a sufficient condition, namely bandlimitedness, for an exact sampling and interpolation formula. The sampling rate, at twice the maximum frequency present in the signal, is usually called the Nyquist rate. Bandlimitedness, however, is not necessary as is well known but only rarely taken advantage of [1]. In this broader, nonbandlimited view, the question is: when can we acquire a signal using a sampling kernel followed by uniform sampling and perfectly reconstruct it? The Shannon case is a particular example, where any signal from the subspace of bandlimited signals, denoted by BL, can be acquired through sampling and perfectly interpolated from the samples. Using the sinc kernel, or ideal lowpass filter, nonbandlimited signals will be projected onto the subspace BL. The question is: can we beat Shannon at this game, namely, acquire signals from outside of BL and still perfectly reconstruct? An obvious case is bandpass sampling and variations thereof. Less obvious are sampling schemes taking advantage of some sort of sparsity in the signal, and this is the central theme of this article. That is, instead of generic bandlimited signals, we consider the sampling of classes of nonbandlimited parametric signals. This allows us to circumvent Nyquist and perfectly sample and reconstruct signals using sparse sampling, at a rate characterized by how sparse they are per unit of time. 
In some sense, we sample at the rate of innovation of the signal by complying with Occam's razor principle [known as Lex Parcimoniæ or Law of Parsimony: Entia non svnt mvltiplicanda præter necessitatem, or, “Entities should not be multiplied beyond necessity” (from Wikipedia)]. Besides Shannon's sampling theorem, a second basic result that permeates signal processing is certainly Heisenberg's uncertainty principle, which suggests that a singular event in the frequency domain will be necessarily widely spread in the time domain. A superficial interpretation might lead one to believe that a perfect frequency localization requires a very long time observation. That this is not necessary is demonstrated by high-resolution spectral analysis methods, which achieve very precise frequency localization using finite observation windows [2], [3]. The way around Heisenberg resides in a parametric approach, where the prior that the signal is a linear combination of sinusoids is put to contribution. If by now you feel uneasy about slaloming around Nyquist, Shannon, and Heisenberg, do not worry. Estimation of sparse data is a classic problem in signal processing and communications, from estimating sinusoids in noise, to locating errors in digital transmissions. Thus, there is a wide variety of available techniques and algorithms. Also, the best possible performance is given by the Cramér-Rao lower bounds for this parametric estimation problem, and one can thus check how close to optimal a solution actually is. We are thus ready to pose the basic questions of this article. Assume a sparse signal (be it in continuous or discrete time) observed through a sampling device that is a smoothing kernel followed by regular or uniform sampling. What is the minimum sampling rate (as opposed to Nyquist's rate, which is often infinite in cases of interest) that allows us to recover the signal? What classes of sparse signals are possible? 
What are good observation kernels, and what are efficient and stable recovery algorithms? How does observation noise influence recovery, and what algorithms will approach optimal performance? How will these new techniques impact practical applications, from inverse problems to wideband communications? And finally, what is the relationship between the presented methods and classic methods as well as the recent advances in compressed sensing and sampling? 
Erratum: There is an unfortunate discrepancy between (3) and (9)-(10). To be consistent with (3), the factor τ expressing the proportionality between ŷ_{m} and x̂_{m} has to be replaced by N/B in (10). 
@article{blu2008b, author = "Blu, T. and Dragotti, P.L. and Vetterli, M. and Marziliano, P. and Coulot, L.", title = "Sparse Sampling of Signal Innovations", journal = "{IEEE} Signal Processing Magazine", month = "March", year = "2008", volume = "25", number = "2", pages = "31-40", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008b" } 
Blu, T. & Lebrun, J.,"Analyse temps-fréquence linéaire II: représentations de type ondelettes", Temps-fréquence, concepts et outils, Paris, France, pp. 101-138, Hermès, 2005. 
Wavelet theory was born in the mid-1980s in response to the time-frequency resolution problems of Fourier-type methods. Indeed, many non-stationary signals call for an analysis whose frequency (respectively, time) resolution varies with the time (respectively, frequency) localization. It is to allow this flexibility that wavelets, a new analysis concept known as "multiresolution" or "multiscale", came to light. After a brief presentation of the continuous wavelet transform, we focus on its discrete version, notably the Mallat algorithm, which is to the wavelet transform what the FFT is to the Fourier transform. We also consider the important problem of designing wavelet-generating filters (Daubechies filters, for example). Furthermore, we study some recent generalizations or extensions (in particular, multiwavelets, wavelet packets, and frames) made necessary by certain limitations of wavelet theory. Finally, we detail a few applications behind the current success of wavelets and, more generally, of time-scale methods (compression and denoising, image registration, etc.). One of the aims of this chapter will thus have been to highlight the cross-fertilization between sometimes rather theoretical approaches, where mathematics and engineering sciences are happily united. 
@incollection{blu2005a, author = "Blu, T. and Lebrun, J.", title = "Analyse temps-fr{\'{e}}quence lin{\'{e}}aire {II}: repr{\'{e}}sentations de type ondelettes", booktitle = "Temps-fr{\'{e}}quence, concepts et outils", publisher = "Herm{\`{e}}s", year = "2005", pages = "101-138", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005a" } 
Blu, T. & Lebrun, J.,"Linear time-frequency analysis II: wavelet-type representations", Time-Frequency Analysis - Concepts and Methods, London, UK, pp. 93-130, Wiley-ISTE, 2008. 
Wavelet theory was born in the mid-1980s in response to the time-frequency resolution problems of Fourier-type methods. Indeed, many nonstationary signals call for an analysis whose spectral (resp. temporal) resolution varies with the temporal (resp. spectral) localization. It is to allow this flexibility that wavelets, a new analysis concept called "multiresolution" or "multiscale," have been brought to light. After a brief presentation of the continuous wavelet transform, we shall focus on its discrete version, notably the Mallat algorithm, which is for the wavelet transform what the FFT is for the Fourier transform. We shall also consider the important problem of the design of wavelet generator filters (Daubechies filters, for example). Furthermore, we shall study some recent generalizations or extensions (in particular, multiwavelets, wavelet packets, and frames) that were motivated by certain limitations of wavelet theory. Finally, we shall discuss some applications that caused the present success of wavelets and, more generally, of time-scale methods (compression and denoising, aligning images, etc.). One of the aims of this chapter will thus be to demonstrate the cross-fertilization between sometimes quite theoretical approaches, where mathematics and engineering sciences are happily united. 
@incollection{blu2008c, author = "Blu, T. and Lebrun, J.", title = "Linear time-frequency analysis {II}: wavelet-type representations", booktitle = "Time-Frequency Analysis - Concepts and Methods", publisher = "Wiley-ISTE", year = "2008", pages = "93-130", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008c" } 
Blu, T. & Luisier, F.,"The SURE-LET Approach to Image Denoising", IEEE Transactions on Image Processing, Vol. 16 (11), pp. 2778-2786, November 2007. 
We propose a new approach to image denoising, based on the image-domain minimization of an estimate of the mean squared error—Stein's unbiased risk estimate (SURE). Unlike most existing denoising algorithms, using the SURE makes it needless to hypothesize a statistical model for the noiseless image. A key point of our approach is that, although the (nonlinear) processing is performed in a transformed domain—typically, an undecimated discrete wavelet transform, but we also address nonorthonormal transforms—this minimization is performed in the image domain. Indeed, we demonstrate that, when the transform is a “tight” frame (an undecimated wavelet transform using orthonormal filters), separate subband minimization yields substantially worse results. In order for our approach to be viable, we add another principle, that the denoising process can be expressed as a linear combination of elementary denoising processes—linear expansion of thresholds (LET). Armed with the SURE and LET principles, we show that a denoising algorithm merely amounts to solving a linear system of equations which is obviously fast and efficient. Quite remarkably, the very competitive results obtained by performing a simple threshold (image-domain SURE optimized) on the undecimated Haar wavelet coefficients show that the SURE-LET principle has a huge potential. 
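The "linear system of equations" mentioned above can be sketched in a few lines of NumPy (a point-wise toy with an identity + soft-threshold basis, not the paper's wavelet-domain algorithm; signal model and threshold are invented for the example): since the denoiser is a linear combination of elementary denoisers, SURE is quadratic in the combination coefficients, and its minimizer solves a small normal system.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n = 1.0, 10_000
x = np.where(rng.random(n) < 0.1, rng.normal(0, 8, n), 0.0)  # toy sparse signal
y = x + rng.normal(0, sigma, n)

t = 2.0 * sigma
# Two elementary "thresholds" (the LET basis): the identity and a soft threshold
F = np.stack([y, np.sign(y) * np.maximum(np.abs(y) - t, 0.0)])
# Divergence (sum of partial derivatives w.r.t. y) of each basis element,
# needed by the SURE formula: d(y_i)/dy_i = 1; d(soft)/dy_i = 1 iff |y_i| > t
div = np.array([n, np.count_nonzero(np.abs(y) > t)], dtype=float)

# SURE(a) is quadratic in the coefficients a, so its minimizer solves M a = b
M = F @ F.T
b = F @ y - sigma**2 * div
a = np.linalg.solve(M, b)
x_hat = a @ F                          # the SURE-optimized LET denoiser output
```

Note that the optimal coefficients are obtained without ever using `x`; yet the combined estimator improves markedly on the noisy input.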
@article{blu2007b, author = "Blu, T. and Luisier, F.", title = "The {SURE-LET} Approach to Image Denoising", journal = "{IEEE} Transactions on Image Processing", month = "November", year = "2007", volume = "16", number = "11", pages = "2778-2786", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007b" } 
Blu, T. & Luisier, F.,"Image Denoising and the SURE-LET Methodology", Tutorial Presentation at the Seventeenth International Conference on Image Processing (ICIP'2010), Hong Kong, China, September 26-29, 2010. 
Image denoising consists in approximating the noiseless image by performing some, usually nonlinear, processing of the noisy image. Most standard techniques involve assumptions on the result of this processing (sparsity, low high-frequency content, etc.); i.e., on the denoised image. Instead, the SURE-LET methodology that we promote consists in approximating the processing itself (seen as a function) by a linear combination of elementary nonlinear processings (LET: Linear Expansion of Thresholds), and in optimizing the coefficients of this combination by minimizing a statistically unbiased estimate of the Mean Square Error (SURE: Stein's Unbiased Risk Estimate, for additive Gaussian noise). This tutorial will introduce the technique to the audience and outline its advantages (fast, noise-robust, flexible, image-adaptive). A very complete set of results will be shown and compared with the state of the art. Extensions of the approach to Poisson noise reduction, with application to microscopy imaging, will also be shown. 
@conference{blu2010d, author = "Blu, T. and Luisier, F.", title = "Image Denoising and the {SURE-LET} Methodology", booktitle = "Tutorial Presentation at the Seventeenth International Conference on Image Processing ({ICIP'2010})", month = "September 26-29,", year = "2010", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010d" } 
Blu, T., Moulin, P. & Gilliam, C.,"Approximation order of the LAP optical flow algorithm", Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP'15), Québec City, Canada, pp. 48-52, September 27-30, 2015. 
Estimating the displacements between two images is often addressed using a small displacement assumption, which leads to what is known as the optical flow equation. We study the quality of the underlying approximation for the recently developed Local All-Pass (LAP) optical flow algorithm, which is based on another approach—displacements result from filtering. While the simplest version of LAP computes only first-order differences, we show that the order of LAP approximation is quadratic, unlike standard optical flow equation based algorithms for which this approximation is only linear. More generally, the order of approximation of the LAP algorithm is twice the differentiation order involved. The key step in the derivation is the use of Padé approximants. 
@inproceedings{blu2015b, author = "Blu, T. and Moulin, P. and Gilliam, C.", title = "Approximation order of the {LAP} optical flow algorithm", booktitle = "Proceedings of the 2015 {IEEE} International Conference on Image Processing ({ICIP'15})", month = "September 27-30,", year = "2015", pages = "48-52", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2015b" } 
Blu, T. & Rioul, O.,"Wavelet Regularity of Iterated Filter Banks with Rational Sampling Changes", Proceedings of the Eighteenth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'93), Minneapolis, USA, Vol. {III}, pp. 213-216, April 27-30, 1993. 
The regularity property was first introduced by wavelet theory for octaveband dyadic filter banks. In the present work, the authors provide a detailed theoretical analysis of the regularity property in the more flexible case of filter banks with rational sampling changes. Such filter banks provide a finer analysis of fractions of an octave, and regularity is as important as in the dyadic case. Sharp regularity estimates for any filter bank are given. The major difficulty of the rational case, as compared with the dyadic case, is that one obtains wavelets that are not shifted versions of each other at a given scale. It is shown, however, that, under regularity conditions, shift invariance can almost be obtained. This is a desirable property for, e.g., coding applications and for efficient filter bank implementation of a continuous wavelet transform. 
@inproceedings{blu1993b, author = "Blu, T. and Rioul, O.", title = "Wavelet Regularity of Iterated Filter Banks with Rational Sampling Changes", booktitle = "Proceedings of the Eighteenth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'93})", month = "April 27-30,", year = "1993", volume = "{III}", pages = "213-216", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1993b" } 
Blu, T., Sühling, M., Thévenaz, P. & Unser, M.,"Approximation Order: Why the Asymptotic Constant Matters", Second Pacific Rim Conference on Mathematics (PRCM'01), Taipei, Taiwan, pp. {II}.3-{II}.4, January 4-8, 2001. 
We consider the approximation (either interpolation, or least-squares) of L^{2} functions in the shift-invariant space V_{T} = span_{k∈Z} { φ(t ⁄ T − k) } that is generated by the single shifted function φ. We measure the approximation error in an L^{2} sense and evaluate the asymptotic equivalent of this error as the sampling step T tends to zero. Let ƒ ∈ L^{2} and ƒ_{T} be its approximation in V_{T}. It is well-known that, if φ satisfies the Strang-Fix conditions of order L, and under mild technical constraints, ‖ƒ − ƒ_{T}‖_{L^2} = O(T^{L}). In this presentation, however, we want to be more accurate and concentrate on the constant C_{φ} which is such that ‖ƒ − ƒ_{T}‖_{L^2} = C_{φ} ‖ƒ^{(L)}‖_{L^2} T^{L} + o(T^{L}). 
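The O(T^L) behavior is easy to check numerically (a toy experiment with piecewise-linear interpolation, for which L = 2; the test function, domain, and step sizes are arbitrary choices for illustration): halving the sampling step T should divide the L^2 error by about 2^L = 4.

```python
import numpy as np

f = np.sin
t = np.linspace(0, 2 * np.pi, 20001)   # fine grid standing in for the L2 norm

def interp_error(T):
    # Piecewise-linear interpolation on a grid of step T (phi = linear B-spline)
    knots = np.arange(0, 2 * np.pi + T, T)
    fT = np.interp(t, knots, f(knots))
    return np.sqrt(np.mean((f(t) - fT) ** 2))   # RMS ~ (scaled) L2 error

e1, e2 = interp_error(0.1), interp_error(0.05)
ratio = e1 / e2        # should approach 2**L = 4 for L = 2
```

The cited talk goes one step further than this rate check: it quantifies the constant C_φ in front of T^L, which governs how two kernels of the same order actually compare.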
@inproceedings{blu2001a, author = "Blu, T. and S{\"{u}}hling, M. and Th{\'{e}}venaz, P. and Unser, M.", title = "Approximation Order: {W}hy the Asymptotic Constant Matters", booktitle = "Second Pacific Rim Conference on Mathematics ({PRCM'01})", month = "January 4-8,", year = "2001", pages = "{II}.3-{II}.4", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001a" } 
Blu, T., Thévenaz, P. & Unser, M.,"Minimum Support Interpolators with Optimum Approximation Properties", Proceedings of the 1998 IEEE International Conference on Image Processing (ICIP'98), Chicago, USA, Vol. {III}, pp. 242-245, October 4-7, 1998. 
We investigate the functions of given approximation order L that have the smallest support. Those are shown to be linear combinations of the B-spline of degree L−1 and its L−1 first derivatives. We then show how to find the functions that minimize the asymptotic approximation constant within this finite-dimensional space; in particular, a tractable induction relation is worked out. Using these functions instead of splines, we observe that the approximation error is dramatically reduced, not only in the limit when the sampling step tends to zero, but also for higher values up to the Shannon rate. Finally, we show that those optimal functions satisfy a scaling equation, although less simple than the usual two-scale difference equation. 
@inproceedings{blu1998c, author = "Blu, T. and Th{\'{e}}venaz, P. and Unser, M.", title = "Minimum Support Interpolators with Optimum Approximation Properties", booktitle = "Proceedings of the 1998 {IEEE} International Conference on Image Processing ({ICIP'98})", month = "October 4-7,", year = "1998", volume = "{III}", pages = "242-245", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1998c" } 
Blu, T., Thévenaz, P. & Unser, M.,"Generalized Interpolation: Higher Quality at no Additional Cost", Proceedings of the 1999 IEEE International Conference on Image Processing (ICIP'99), Kobe, Japan, Vol. {III}, pp. 667-671, October 25-28, 1999. 
We extend the classical interpolation method to generalized interpolation. This extension is done by replacing the interpolating function by a noninterpolating function that is applied to prefiltered data, in order to preserve the interpolation condition. We show, both theoretically and practically, that this approach performs much better than classical methods, for the same computational cost. 
@inproceedings{blu1999a, author = "Blu, T. and Th{\'{e}}venaz, P. and Unser, M.", title = "Generalized Interpolation: {H}igher Quality at no Additional Cost", booktitle = "Proceedings of the 1999 {IEEE} International Conference on Image Processing ({ICIP'99})", month = "October 25-28,", year = "1999", volume = "{III}", pages = "667-671", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1999a" } 
Blu, T., Thévenaz, P. & Unser, M.,"MOMS: Maximal-Order Interpolation of Minimal Support", IEEE Transactions on Image Processing, Vol. 10 (7), pp. 1069-1080, July 2001. 
We consider the problem of interpolating a signal using a linear combination of shifted versions of a compactly supported basis function φ(x). We first give the expression of the φ's that have minimal support for a given accuracy (also known as "approximation order"). This class of functions, which we call maximal-order-minimal-support functions (MOMS), is made of linear combinations of the B-spline of same order and of its derivatives. We provide the explicit form of the MOMS that maximize the approximation accuracy when the step size is small enough. We compute the sampling gain obtained by using these optimal basis functions over the splines of same order. We show that it is already substantial for small orders and that it further increases with the approximation order L. When L is large, this sampling gain becomes linear; more specifically, its exact asymptotic expression is 2L ⁄ (π e). Since the optimal functions are continuous, but not differentiable, for even orders, and even only piecewise continuous for odd orders, our result implies that regularity has little to do with approximating performance. These theoretical findings are corroborated by experimental evidence that involves compounded rotations of images. 
@article{blu2001b, author = "Blu, T. and Th{\'{e}}venaz, P. and Unser, M.", title = "{MOMS}: {M}aximal-Order Interpolation of Minimal Support", journal = "{IEEE} Transactions on Image Processing", month = "July", year = "2001", volume = "10", number = "7", pages = "1069-1080", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001b" } 
Blu, T., Thévenaz, P. & Unser, M.,"How a Simple Shift Can Significantly Improve the Performance of Linear Interpolation", Proceedings of the 2002 IEEE International Conference on Image Processing (ICIP'02), Rochester, USA, Vol. {III}, pp. 377-380, September 22-25, 2002. 
We present a simple, original method to improve piecewise linear interpolation with uniform knots: We shift the sampling knots by a fixed amount, while enforcing the interpolation property. Thanks to a theoretical analysis, we determine the optimal shift that maximizes the quality of our shifted linear interpolation. Surprisingly enough, this optimal value is nonzero and it is close to 1 ⁄ 5. We confirm our theoretical findings by performing a cumulative rotation experiment, which shows a significant increase of the quality of the shifted method with respect to the standard one. Most interesting is the fact that we get a quality similar to that of high-quality cubic convolution at the computational cost of linear interpolation. 
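A minimal NumPy sketch of the method (the initialization of the recursive prefilter at the boundary is an ad-hoc assumption here, and the test signal is invented): shifting the knots by τ = (1 − √3⁄3)⁄2 ≈ 0.21 and prefiltering the samples with a first-order recursion preserves the interpolation condition f[k] = (1−τ)·c[k] + τ·c[k−1].

```python
import numpy as np

tau = 0.5 * (1.0 - np.sqrt(3.0) / 3.0)   # optimal shift, about 0.2113

def shifted_linear_interp(f, x):
    # Causal recursive prefilter enforcing f[k] = (1-tau)*c[k] + tau*c[k-1];
    # c[0] = f[0] is an ad-hoc boundary initialization (its transient decays fast)
    c = np.empty(len(f))
    c[0] = f[0]
    for k in range(1, len(f)):
        c[k] = (f[k] - tau * c[k - 1]) / (1.0 - tau)
    # Ordinary linear interpolation of the c[k], placed at the SHIFTED knots k+tau
    return np.interp(x, np.arange(len(f)) + tau, c)

k = np.arange(64)
signal = np.sin(0.5 * k)
x = np.linspace(8, 56, 2000)             # stay away from the boundary transient
err_shift = np.max(np.abs(shifted_linear_interp(signal, x) - np.sin(0.5 * x)))
err_std = np.max(np.abs(np.interp(x, k, signal) - np.sin(0.5 * x)))
```

The cost is that of linear interpolation plus one first-order recursive filter, which is the point of the paper's comparison with cubic convolution.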
@inproceedings{blu2002b, author = "Blu, T. and Th{\'{e}}venaz, P. and Unser, M.", title = "How a Simple Shift Can Significantly Improve the Performance of Linear Interpolation", booktitle = "Proceedings of the 2002 {IEEE} International Conference on Image Processing ({ICIP'02})", month = "September 22-25,", year = "2002", volume = "{III}", pages = "377-380", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002b" } 
Blu, T., Thévenaz, P. & Unser, M.,"Complete Parameterization of Piecewise-Polynomial Interpolation Kernels", IEEE Transactions on Image Processing, Vol. 12 (11), pp. 1297-1309, November 2003. 
Every now and then, a new design of an interpolation kernel shows up in the literature. While interesting results have emerged, the traditional design methodology proves laborious and is riddled with very large systems of linear equations that must be solved analytically. In this paper, we propose to ease this burden by providing an explicit formula that will generate every possible piecewise-polynomial kernel given its degree, its support, its regularity, and its order of approximation. This formula contains a set of coefficients that can be chosen freely and do not interfere with the four main design parameters; it is thus easy to tune the design to achieve any additional constraints that the designer may care for. 
@article{blu2003a, author = "Blu, T. and Th{\'{e}}venaz, P. and Unser, M.", title = "Complete Parameterization of Piecewise-Polynomial Interpolation Kernels", journal = "{IEEE} Transactions on Image Processing", month = "November", year = "2003", volume = "12", number = "11", pages = "1297-1309", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003a" } 
Blu, T., Thévenaz, P. & Unser, M.,"Linear Interpolation Revitalized", IEEE Transactions on Image Processing, Vol. 13 (5), pp. 710-719, May 2004. 
We present a simple, original method to improve piecewise-linear interpolation with uniform knots: we shift the sampling knots by a fixed amount, while enforcing the interpolation property. We determine the theoretical optimal shift that maximizes the quality of our shifted linear interpolation. Surprisingly enough, this optimal value is nonzero and close to 1⁄5. We confirm our theoretical findings by performing several experiments: a cumulative rotation experiment and a zoom experiment. Both show a significant increase of the quality of the shifted method with respect to the standard one. We also observe that, in these results, we get a quality that is similar to that of the computationally more costly “high-quality” cubic convolution. 
@article{blu2004a, author = "Blu, T. and Th{\'{e}}venaz, P. and Unser, M.", title = "Linear Interpolation Revitalized", journal = "{IEEE} Transactions on Image Processing", month = "May", year = "2004", volume = "13", number = "5", pages = "710-719", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004a" } 
Blu, T., Thévenaz, P. & Unser, M.,"High-Quality Causal Interpolation for Online Unidimensional Signal Processing", Proceedings of the Twelfth European Signal Processing Conference (EUSIPCO'04), Wien, Austria, pp. 1417-1420, September 6-10, 2004. 
We present a procedure for designing interpolation kernels that are adapted to time signals; i.e., they are causal, even though they do not have finite support. The considered kernels are obtained by digital IIR filtering of a finite-support function that has maximum approximation order. We show how to build these kernels starting from the all-pole digital filter and we give some practical design examples. 
@inproceedings{blu2004b, author = "Blu, T. and Th{\'{e}}venaz, P. and Unser, M.", title = "High-Quality Causal Interpolation for Online Unidimensional Signal Processing", booktitle = "Proceedings of the Twelfth European Signal Processing Conference ({EUSIPCO'04})", month = "September 6-10,", year = "2004", pages = "1417-1420", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004b" } 
Blu, T. & Unser, M.,"Quantitative L^2 Error Analysis for Interpolation Methods and Wavelet Expansions", Proceedings of the 1997 IEEE International Conference on Image Processing (ICIP'97), Santa Barbara, USA, Vol. {I}, pp. 663-666, October 26-29, 1997. 
Our goal in this paper is to set a theoretical basis for the comparison of resampling and interpolation methods. We consider the general problem of the approximation of an arbitrary continuously defined function f(x)—not necessarily bandlimited—when we vary the sampling step T. We present an accurate L^{2} computation of the induced approximation error as a function of T for a general class of linear approximation operators including interpolation and other kinds of projectors. This new quantitative result provides exact expressions for the asymptotic development of the error as T→0, and also sharp (asymptotically exact) upper bounds. 
@inproceedings{blu1997a, author = "Blu, T. and Unser, M.", title = "Quantitative {$L^{2}$} Error Analysis for Interpolation Methods and Wavelet Expansions", booktitle = "Proceedings of the 1997 {IEEE} International Conference on Image Processing ({ICIP'97})", month = "October 26-29,", year = "1997", volume = "{I}", pages = "663-666", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1997a" } 
Blu, T. & Unser, M.,"A Quantitative Fourier Analysis of the Linear Approximation Error by Wavelets", Wavelet Applications Workshop, Monte Verità, Switzerland, September 28-October 2, 1998. 
We introduce a simple method—integration of the power spectrum against a Fourier kernel—for computing the approximation error by wavelets. This method is powerful enough to recover all classical L^{2} results in approximation theory (Strang-Fix theory), and also to provide new error estimates that are sharper and asymptotically exact. 
@inproceedings{blu1998d, author = "Blu, T. and Unser, M.", title = "A Quantitative {F}ourier Analysis of the Linear Approximation Error by Wavelets", booktitle = "Wavelet Applications Workshop", month = "September 28--October 2,", year = "1998", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1998d" } 
Blu, T. & Unser, M.,"Quantitative Fourier Analysis of Approximation Techniques: Part II—Wavelets", IEEE Transactions on Signal Processing, Vol. 47 (10), pp. 2796-2806, October 1999. 
In a previous paper, we proposed a general Fourier method which provides an accurate prediction of the approximation error, irrespective of the scaling properties of the approximating functions. Here, we apply our results when these functions satisfy the usual two-scale relation encountered in dyadic multiresolution analysis. As a consequence of this additional constraint, the quantities introduced in our previous paper can be computed explicitly as a function of the refinement filter. This is in particular true for the asymptotic expansion of the approximation error for biorthonormal wavelets, as the scale tends to zero. One of the contributions of this paper is the computation of sharp, asymptotically optimal upper bounds for the least-squares approximation error. Another contribution is the application of these results to B-splines and Daubechies scaling functions, which yields explicit asymptotic developments and upper bounds. Thanks to these explicit expressions, we can quantify the improvement that can be obtained by using B-splines instead of Daubechies wavelets. In other words, we can use a coarser spline sampling and achieve the same reconstruction accuracy as Daubechies: Specifically, we show that this sampling gain converges to π as the order tends to infinity. Please consult also the companion paper by T. Blu, M. Unser, "Quantitative Fourier Analysis of Approximation Techniques: Part I—Interpolators and Projectors," IEEE Transactions on Signal Processing, vol. 47, no. 10, pp. 2783-2795, October 1999. 
@article{blu1999b, author = "Blu, T. and Unser, M.", title = "Quantitative {F}ourier Analysis of Approximation Techniques: {P}art {II}---{W}avelets", journal = "{IEEE} Transactions on Signal Processing", month = "October", year = "1999", volume = "47", number = "10", pages = "2796--2806", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1999b" } 
Blu, T. & Unser, M.,"Quantitative Fourier Analysis of Approximation Techniques: Part I—Interpolators and Projectors", IEEE Transactions on Signal Processing, Vol. 47 (10), pp. 2783-2795, October 1999. 
We present a general Fourier-based method that provides an accurate prediction of the approximation error as a function of the sampling step T. Our formalism applies to an extended class of convolution-based signal approximation techniques, which includes interpolation, generalized sampling with prefiltering, and the projectors encountered in wavelet theory. We claim that we can predict the L^{2}-approximation error by integrating the spectrum of the function to approximate—not necessarily bandlimited—against a frequency kernel E(ω) that characterizes the approximation operator. This prediction is easier, yet more precise than was previously available. Our approach has the remarkable property of providing a global error estimate that is the average of the true approximation error over all possible shifts of the input function. Our error prediction is exact for stationary processes, as well as for bandlimited signals. We apply this method to the comparison of standard interpolation and approximation techniques. Our method has interesting implications for approximation theory. In particular, we use our results to obtain some new asymptotic expansions of the error as T tends to 0, and also to derive improved upper bounds of the kind found in the Strang-Fix theory. We finally show how we can design quasi-interpolators that are near-optimal in the least-squares sense. Please consult also the companion paper by T. Blu, M. Unser, "Quantitative Fourier Analysis of Approximation Techniques: Part II—Wavelets," IEEE Transactions on Signal Processing, vol. 47, no. 10, pp. 2796-2806, October 1999. 
@article{blu1999c, author = "Blu, T. and Unser, M.", title = "Quantitative {F}ourier Analysis of Approximation Techniques: {P}art {I}---{I}nterpolators and Projectors", journal = "{IEEE} Transactions on Signal Processing", month = "October", year = "1999", volume = "47", number = "10", pages = "2783--2795", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1999c" } 
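Illustrative sketch (not part of the publications above): the error-kernel formalism of this pair of papers lends itself to direct numerical evaluation. The Python fragment below, a sketch under the assumption of the least-squares (orthogonal-projection) kernel E(ω) = 1 − |φ̂(ω)|²/Σ_k |φ̂(ω+2πk)|² with φ̂(ω) = sinc(ω/2π)^{n+1} for the B-spline of degree n, predicts the L² error for a Gaussian test spectrum; all function names are invented for this example.

```python
import numpy as np

def bspline_ft(w, n):
    """Fourier transform of the B-spline of degree n: sinc(w/2pi)^(n+1)."""
    return np.sinc(w / (2 * np.pi)) ** (n + 1)

def error_kernel(w, n, K=50):
    """Least-squares error kernel E(w) = 1 - |phi^|^2 / sum_k |phi^(w+2*pi*k)|^2."""
    num = bspline_ft(w, n) ** 2
    den = sum(bspline_ft(w + 2 * np.pi * k, n) ** 2 for k in range(-K, K + 1))
    return 1.0 - num / den

def predicted_l2_error(spectrum, w, n, T):
    """eps(T) ~ sqrt((1/2pi) * integral |f^(w)|^2 E(T*w) dw), Riemann sum."""
    dw = w[1] - w[0]
    return np.sqrt(np.sum(spectrum(w) ** 2 * error_kernel(T * w, n)) * dw / (2 * np.pi))

# Gaussian test spectrum: a higher-order spline predicts a smaller error at fixed T
w = np.linspace(-20.0, 20.0, 4001)
gauss = lambda om: np.exp(-om ** 2 / 2)
err_linear = predicted_l2_error(gauss, w, 1, 0.5)
err_cubic = predicted_l2_error(gauss, w, 3, 0.5)
```

E(0) = 0 reflects the partition-of-unity property, and the faster decay of the cubic kernel near ω = 0 is exactly the higher approximation order discussed in the papers.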
Blu, T. & Unser, M.,"Approximation Error for Quasi-Interpolators and (Multi) Wavelet Expansions", Applied and Computational Harmonic Analysis, Vol. 6 (2), pp. 219-251, March 1999. 
We investigate the approximation properties of general polynomial preserving operators that approximate a function into some scaled subspace of L_{2} via an appropriate sequence of inner products. In particular, we consider integer shift-invariant approximations such as those provided by splines and wavelets, as well as finite elements and multiwavelets which use multiple generators. We estimate the approximation error as a function of the scale parameter T when the function to approximate is sufficiently regular. We then present a generalized sampling theorem, a result that is rich enough to provide tight bounds as well as asymptotic expansions of the approximation error as a function of the sampling step T. Another more theoretical consequence is the proof of a conjecture by Strang and Fix, stating the equivalence between the order of a multiwavelet space and the order of a particular subspace generated by a single function. Finally, we consider refinable generating functions and use the two-scale relation to obtain explicit formulae for the coefficients of the asymptotic development of the error. The leading constants are easily computable and can be the basis for the comparison of the approximation power of wavelet and multiwavelet expansions of a given order L. 
@article{blu1999d, author = "Blu, T. and Unser, M.", title = "Approximation Error for Quasi-Interpolators and (Multi) Wavelet Expansions", journal = "Applied and Computational Harmonic Analysis", month = "March", year = "1999", volume = "6", number = "2", pages = "219--251", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1999d" } 
Blu, T. & Unser, M.,"A Theoretical Analysis of the Projection Error onto Discrete Wavelet Subspaces", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing VII, Denver, USA, Vol. 3813, pp. 273-281, July 19-23, 1999. 
A filterbank decomposition can be seen as a series of projections onto several discrete wavelet subspaces. In this presentation, we analyze the projection onto one of them—the lowpass one, since many signals tend to be lowpass. We prove a general but simple formula that allows the computation of the l_{2}-error made by approximating the signal by its projection. This result provides a norm for evaluating the accuracy of a complete decimation/interpolation branch for arbitrary analysis and synthesis filters; such a norm could be useful for the joint design of an analysis and synthesis filter, especially in the nonorthonormal case. As an example, we use our framework to compare the efficiency of different wavelet filters, such as Daubechies' or splines. In particular, we prove that the error made by using a Daubechies' filter downsampled by 2 is of the same order as the error using an orthonormal spline filter downsampled by 6. This proof is valid asymptotically as the number of regularity factors tends to infinity, and for a signal that is essentially lowpass. This implies that splines bring an additional compression gain of at least 3 over Daubechies' filters, asymptotically. 
@inproceedings{blu1999e, author = "Blu, T. and Unser, M.", title = "A Theoretical Analysis of the Projection Error onto Discrete Wavelet Subspaces", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {VII}", month = "July 19--23,", year = "1999", volume = "3813", pages = "273--281", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1999e" } 
Blu, T. & Unser, M.,"The Fractional Spline Wavelet Transform: Definition and Implementation", Proceedings of the Twenty-Fifth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'00), Istanbul, Turkey, Vol. I, pp. 512-515, June 5-9, 2000. 
We define a new wavelet transform that is based on a recently defined family of scaling functions: the fractional B-splines. The interest of this family is that it interpolates between the integer degrees of polynomial B-splines and that it allows a fractional order of approximation. The orthogonal fractional spline wavelets essentially behave as fractional differentiators. This property seems promising for the analysis of 1/f^{α} noise that can be whitened by an appropriate choice of the degree of the spline transform. We present a practical FFT-based algorithm for the implementation of these fractional wavelet transforms, and give some examples of processing. 
@inproceedings{blu2000a, author = "Blu, T. and Unser, M.", title = "The Fractional Spline Wavelet Transform: {D}efinition and Implementation", booktitle = "Proceedings of the Twenty-Fifth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'00})", month = "June 5--9,", year = "2000", volume = "{I}", pages = "512--515", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000a" } 
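Illustrative sketch (not the authors' algorithm): the whitening idea in the abstract rests on the fractional spline wavelets acting as fractional differentiators of order γ. In the idealized FFT setting below, synthesized 1/f^{α} noise is whitened exactly by multiplying its spectrum by |ω|^{γ} with γ = α/2; variable names and the crude DC regularization are choices made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096
w = 2 * np.pi * np.fft.rfftfreq(N)   # frequencies in [0, pi]
w[0] = w[1]                          # crude DC regularization to avoid 1/0

# Synthesize 1/f^alpha noise by spectral shaping of white Gaussian noise
alpha = 1.0
white_spec = np.fft.rfft(rng.standard_normal(N))
colored = np.fft.irfft(white_spec / w ** (alpha / 2), n=N)

# Fractional differentiation of order gamma = alpha/2: multiply by |w|^gamma.
# This undoes the 1/f^(alpha/2) shaping, i.e., it whitens the noise.
gamma = alpha / 2
whitened = np.fft.irfft(np.fft.rfft(colored) * w ** gamma, n=N)
```

Here the whitened signal coincides with the original white noise, which is the spectral mechanism the fractional spline transform exploits through an appropriate choice of the spline degree.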
Blu, T. & Unser, M.,"Wavelets, Fractals, and Radial Basis Functions", IEEE Transactions on Signal Processing, Vol. 50 (3), pp. 543-553, March 2002. IEEE Signal Processing Society's 2003 Best Paper Award. 
Wavelets and radial basis functions (RBFs) lead to two distinct ways of representing signals in terms of shifted basis functions. RBFs, unlike wavelets, are nonlocal and do not involve any scaling, which makes them applicable to nonuniform grids. Despite these fundamental differences, we show that the two types of representation are closely linked together through fractals. First, we identify and characterize the whole class of self-similar radial basis functions that can be localized to yield conventional multiresolution wavelet bases. Conversely, we prove that for any compactly supported scaling function φ(x), there exists a one-sided central basis function ρ_{+}(x) that spans the same multiresolution subspaces. The central property is that the multiresolution bases are generated by simple translation of ρ_{+} without any dilation. We also present an explicit time-domain representation of a scaling function as a sum of harmonic splines. The leading term in the decomposition corresponds to the fractional splines: a recent, continuous-order generalization of the polynomial splines. 
@article{blu2002c, author = "Blu, T. and Unser, M.", title = "Wavelets, Fractals, and Radial Basis Functions", journal = "{IEEE} Transactions on Signal Processing", month = "March", year = "2002", volume = "50", number = "3", pages = "543--553", note = "IEEE Signal Processing Society's 2003 \textbf{Best Paper Award}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002c" } 
Blu, T. & Unser, M.,"Harmonic Spline Series Representation of Scaling Functions", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing X, San Diego, USA, Vol. 5207, pp. 120-124, August 3-8, 2003. Part I. 
We present here an explicit time-domain representation of any compactly supported dyadic scaling function as a sum of harmonic splines. The leading term in the decomposition corresponds to the fractional splines, a recent, continuous-order generalization of the polynomial splines. 
@inproceedings{blu2003b, author = "Blu, T. and Unser, M.", title = "Harmonic Spline Series Representation of Scaling Functions", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {X}", month = "August 3--8,", year = "2003", volume = "5207", pages = "120--124", note = "Part {I}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003b" } 
Blu, T. & Unser, M.,"A Complete Family of Scaling Functions: The (α,τ)-Fractional Splines", Proceedings of the Twenty-Eighth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'03), Hong Kong, China, Vol. VI, pp. 421-424, April 6-10, 2003. 
We describe a new family of scaling functions, the (α, τ)-fractional splines, which generate valid multiresolution analyses. These functions are characterized by two real parameters: α, which controls the width of the scaling functions; and τ, which specifies their position with respect to the grid (shift parameter). This new family is complete in the sense that it is closed under convolutions and correlations. We give the explicit time and Fourier domain expressions of these fractional splines. We prove that the family is closed under generalized fractional differentiations, and, in particular, under the Hilbert transformation. We also show that the associated wavelets are able to whiten 1/f^{λ}-type noise, by an adequate tuning of the spline parameters. A fast (and exact) FFT-based implementation of the fractional spline wavelet transform is already available. We show that fractional integration operators can be expressed as the composition of an analysis and a synthesis iterated filterbank. 
@inproceedings{blu2003c, author = "Blu, T. and Unser, M.", title = "A Complete Family of Scaling Functions: {T}he (${\alpha},{\tau}$)-Fractional Splines", booktitle = "Proceedings of the Twenty-Eighth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'03})", month = "April 6--10,", year = "2003", volume = "{VI}", pages = "421--424", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003c" } 
Blu, T. & Unser, M.,"Quantitative L^2 Approximation Error of a Probability Density Estimate Given by Its Samples", Proceedings of the Twenty-Ninth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'04), Montréal, Canada, Vol. III, pp. 952-955, May 17-21, 2004. 
We present a new result characterized by an exact integral expression for the approximation error between a probability density and an integer shift-invariant estimate obtained from its samples. Unlike the Parzen window estimate, this estimate avoids recomputing the complete probability density for each new sample: only a few coefficients are required, making it practical for real-time applications. We also show how to obtain the exact asymptotic behavior of the approximation error when the number of samples increases and provide the trade-off between the number of samples and the sampling step size. 
@inproceedings{blu2004c, author = "Blu, T. and Unser, M.", title = "Quantitative $\mathbf{L}^{2}$ Approximation Error of a Probability Density Estimate Given by Its Samples", booktitle = "Proceedings of the Twenty-Ninth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'04})", month = "May 17--21,", year = "2004", volume = "{III}", pages = "952--955", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004c" } 
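Illustrative sketch (our own, with invented function names): the estimator family analyzed above lives in an integer shift-invariant space spanned by scaled B-splines, and each new sample updates only the handful of coefficients whose basis functions overlap it, unlike a Parzen window that must be re-evaluated over all data.

```python
import numpy as np

def cubic_bspline(x):
    """Centered cubic B-spline (support (-2, 2)); expects an array."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    m = x < 1
    out[m] = 2.0 / 3 - x[m] ** 2 + x[m] ** 3 / 2
    m = (x >= 1) & (x < 2)
    out[m] = (2 - x[m]) ** 3 / 6
    return out

def density_coeffs(samples, T, kmin, kmax):
    """One pass over the data: each sample touches only ~4 coefficients."""
    c = np.zeros(kmax - kmin + 1)
    for x in samples:
        k0 = int(np.floor(x / T))
        ks = np.arange(k0 - 1, k0 + 3)        # the basis functions overlapping x
        valid = (ks >= kmin) & (ks <= kmax)
        c[ks[valid] - kmin] += cubic_bspline(x / T - ks[valid])
    return c / (len(samples) * T)

def density_at(x, c, T, kmin):
    """Evaluate the estimate f(x) = sum_k c[k] * beta3(x/T - k)."""
    ks = np.arange(kmin, kmin + len(c))
    return float(np.sum(c * cubic_bspline(x / T - ks)))

# Uniform density on [0, 1]: the estimate should be close to 1 in the interior
rng = np.random.default_rng(1)
samples = rng.uniform(0.0, 1.0, 50000)
c = density_coeffs(samples, T=0.1, kmin=-5, kmax=15)
est = density_at(0.5, c, T=0.1, kmin=-5)
```

By the partition-of-unity property of the cubic B-spline, the coefficients sum to 1/T exactly, and the trade-off between the number of samples and the step T is precisely what the paper quantifies.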
Blu, T. & Unser, M.,"Optimal Interpolation of Fractional Brownian Motion Given Its Noisy Samples", Proceedings of the Thirty-First IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'06), Toulouse, France, pp. III-860-III-863, May 14-19, 2006. 
We consider the problem of estimating a fractional Brownian motion known only from its noisy samples at the integers. We show that the optimal estimator can be expressed using a digital Wiener-like filter followed by a simple time-variant correction accounting for nonstationarity. Moreover, we prove that this estimate lives in a symmetric fractional spline space and give a practical implementation for optimal upsampling of noisy fBm samples by integer factors. 
@inproceedings{blu2006a, author = "Blu, T. and Unser, M.", title = "Optimal Interpolation of Fractional {B}rownian Motion Given Its Noisy Samples", booktitle = "Proceedings of the Thirty-First {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'06})", month = "May 14--19,", year = "2006", pages = "{III}-860--{III}-863", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006a" } 
Blu, T. & Unser, M.,"Self-Similarity: Part II—Optimal Estimation of Fractal Processes", IEEE Transactions on Signal Processing, Vol. 55 (4), pp. 1364-1378, April 2007. 
In a companion paper (see Self-Similarity: Part I—Splines and Operators), we characterized the class of scale-invariant convolution operators: the generalized fractional derivatives of order γ. We used these operators to specify regularization functionals for a series of Tikhonov-like least-squares data fitting problems and proved that the general solution is a fractional spline of twice the order. We investigated the deterministic properties of these smoothing splines and proposed a fast Fourier transform (FFT)-based implementation. Here, we present an alternative stochastic formulation to further justify these fractional spline estimators. As suggested by the title, the relevant processes are those that are statistically self-similar; that is, fractional Brownian motion (fBm) and its higher order extensions. To overcome the technical difficulties due to the nonstationary character of fBm, we adopt a distributional formulation due to Gel'fand. This allows us to rigorously specify an innovation model for these fractal processes, which rests on the property that they can be whitened by suitable fractional differentiation. Using the characteristic form of the fBm, we then derive the conditional probability density function (PDF) p(B_{H}(t) | Y), where Y = {B_{H}(k)+n[k]}_{k∈Z} are the noisy samples of the fBm B_{H}(t) with Hurst exponent H. We find that the conditional mean is a fractional spline of degree 2H, which proves that this class of functions is indeed optimal for the estimation of fractal-like processes. The result also yields the optimal [minimum mean-square error (MMSE)] parameters for the smoothing spline estimator, as well as the connection with kriging and Wiener filtering. Please consult also the companion paper by M. Unser, T. Blu, "Self-Similarity: Part I—Splines and Operators," IEEE Transactions on Signal Processing, vol. 55, no. 4, pp. 1352-1363, April 2007. 
@article{blu2007c, author = "Blu, T. and Unser, M.", title = "Self-Similarity: {P}art {II}---{O}ptimal Estimation of Fractal Processes", journal = "{IEEE} Transactions on Signal Processing", month = "April", year = "2007", volume = "55", number = "4", pages = "1364--1378", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007c" } 
Blu, T., Unser, M. & Thévenaz, P.,"Optimizing Basis Functions for Best Approximation", Fifth International Conference on Curves and Surfaces (ICCS'02), Saint Malo, France, June 27-July 3, 2002. 
By evaluating approximation-theoretic quantities we show how to compute explicitly the basis generators that minimize the approximation error for a full set of functions to approximate. We give several examples of this optimization, either to get the best generators that have maximal order for minimum support [1], or to design the best interpolation scheme with classical generators, such as B-splines [2]. We present practical examples that visually confirm the validity of our approach. 
@inproceedings{blu2002d, author = "Blu, T. and Unser, M. and Th{\'{e}}venaz, P.", title = "Optimizing Basis Functions for Best Approximation", booktitle = "Fifth International Conference on Curves and Surfaces ({ICCS'02})", month = "June 27--July 3,", year = "2002", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002d" } 
Blu, T., Unser, M., Thévenaz, P. & Sühling, M.,"Interpolation Method and Apparatus", International Patent WO2003021474, 2003. 
Method of interpolating digital samples using interpolation functions that are shifted by an arbitrary shift value relative to said samples. It is shown that there is a nonzero and nontrivial optimal value of this shift for which the approximation error is minimized. 
@misc{blu2003d, author = "Blu, T. and Unser, M. and Th{\'{e}}venaz, P. and S{\"u}hling, M.", title = "Interpolation Method and Apparatus", year = "2003", note = "International Patent {WO2003021474}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003d" } 
Chacko, N., Liebling, M. & Blu, T.,"Discretization of continuous convolution operators for accurate modeling of wave propagation in digital holography", Journal of the Optical Society of America A, Vol. 30 (10), pp. 2012-2020, October 2013. 
Discretization of continuous (analog) convolution operators by direct sampling of the convolution kernel and use of fast Fourier transforms (FFT) is highly efficient. However, it assumes the input and output signals are bandlimited, a condition rarely met in practice, where signals have finite support or abrupt edges and sampling is nonideal. Here, we propose to approximate signals in analog, shift-invariant function spaces, which do not need to be bandlimited, resulting in discrete coefficients for which we derive discrete convolution kernels that accurately model the analog convolution operator while taking into account nonideal sampling devices (such as finite fill-factor cameras). This approach retains the efficiency of direct sampling but not its limiting assumption. We propose fast forward and inverse algorithms that handle finite-length, periodic, and mirror-symmetric signals with rational sampling rates. We provide explicit convolution kernels for computing coherent wave propagation in the context of digital holography. When compared to bandlimited methods in simulations, our method leads to fewer reconstruction artifacts when signals have sharp edges or when using nonideal sampling devices. 
@article{blu2013a, author = "Chacko, N. and Liebling, M. and Blu, T.", title = "Discretization of continuous convolution operators for accurate modeling of wave propagation in digital holography", journal = "Journal of the Optical Society of America A", month = "October", year = "2013", volume = "30", number = "10", pages = "2012--2020", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013a" } 
Condat, L., Blu, T. & Unser, M.,"Beyond Interpolation: Optimal Reconstruction by Quasi-Interpolation", Proceedings of the 2005 IEEE International Conference on Image Processing (ICIP'05), Genova, Italy, Vol. I, pp. 33-36, September 11-14, 2005. Best Student Paper Award. 
We investigate the use of quasi-interpolating approximation schemes to construct an estimate of an unknown function from its given discrete samples. We show theoretically and with practical experiments that such methods perform better than classical interpolation, for the same computation cost. 
@inproceedings{blu2005b, author = "Condat, L. and Blu, T. and Unser, M.", title = "Beyond Interpolation: {O}ptimal Reconstruction by Quasi-Interpolation", booktitle = "Proceedings of the 2005 {IEEE} International Conference on Image Processing ({ICIP'05})", month = "September 11--14,", year = "2005", volume = "{I}", pages = "33--36", note = "\textbf{Best Student Paper Award}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005b" } 
Condat, L., Van De Ville, D. & Blu, T.,"Hexagonal Versus Orthogonal Lattices: A New Comparison Using Approximation Theory", Proceedings of the 2005 IEEE International Conference on Image Processing (ICIP'05), Genova, Italy, Vol. III, pp. 1116-1119, September 11-14, 2005. 
We provide a new comparison between hexagonal and orthogonal lattices, based on approximation theory. For each of the lattices, we select the “natural” spline basis function as generator for a shift-invariant function space; i.e., the tensor-product B-splines for the orthogonal lattice and the non-separable hex-splines for the hexagonal lattice. For a given order of approximation, we compare the asymptotic constants of the error kernels, which give a very good indication of the approximation quality. We find that the approximation quality on the hexagonal lattice is consistently better, when choosing lattices with the same sampling density. The area sampling gain related to these asymptotic constants quickly converges when the order of approximation of the basis functions increases. Surprisingly, nearest-neighbor interpolation does not allow one to profit from the hexagonal grid. For practical purposes, the second-order hex-spline (i.e., constituted by linear patches) appears as a particularly useful candidate to exploit the advantages of hexagonal lattices when representing images on them. 
@inproceedings{blu2005c, author = "Condat, L. and Van De Ville, D. and Blu, T.", title = "Hexagonal Versus Orthogonal Lattices: {A} New Comparison Using Approximation Theory", booktitle = "Proceedings of the 2005 {IEEE} International Conference on Image Processing ({ICIP'05})", month = "September 11--14,", year = "2005", volume = "{III}", pages = "1116--1119", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005c" } 
Delpretti, S., Luisier, F., Ramani, S., Blu, T. & Unser, M.,"Multiframe SURE-LET Denoising of Time-lapse Fluorescence Microscopy Images", Proceedings of the Fifth IEEE International Symposium on Biomedical Imaging (ISBI'08), Paris, France, pp. 149-152, May 14-17, 2008. 
Due to the random nature of photon emission and the various internal noise sources of the detectors, real time-lapse fluorescence microscopy images are usually modeled as the sum of a Poisson process plus some Gaussian white noise. In this paper, we propose an adaptation of our SURE-LET denoising strategy to take advantage of the potentially strong similarities between adjacent frames of the observed image sequence. To stabilize the noise variance, we first apply the generalized Anscombe transform using suitable parameters automatically estimated from the observed data. With the proposed algorithm, we show that, in a reasonable computation time, real time-lapse fluorescence microscopy images can be denoised with higher quality than conventional algorithms. 
@inproceedings{blu2008d, author = "Delpretti, S. and Luisier, F. and Ramani, S. and Blu, T. and Unser, M.", title = "Multiframe {SURE-LET} Denoising of Time-lapse Fluorescence Microscopy Images", booktitle = "Proceedings of the Fifth {IEEE} International Symposium on Biomedical Imaging ({ISBI'08})", month = "May 14--17,", year = "2008", pages = "149--152", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008d" } 
Depeursinge, C., Cuche, É., Colomb, T., Massatch, P., Marian, A., Montfort, F., Liebling, M., Blu, T., Unser, M., Marquet, P. & Magistretti, P.J.,"Digital Holography Applied to Microscopy: A New Imaging Modality in the Sub-Wavelength Range", Hundertvierte Jahrestagung der Deutschen Gesellschaft für angewandte Optik (DGaO), Münster (Westfalen), Germany, June 10-14, 2003. 
Digital holographic microscopy appears as a new imaging technique with high resolution and real-time observation capabilities: longitudinal resolutions of a few nanometers in air and a few tenths of nanometers in liquids are achievable, provided that optical signals diffracted by the object can be rendered sufficiently large. Living biological cells in culture have been observed with around 40 nanometers in height and half of a micron in width. The originality of our approach is to provide both a slightly modified microscope design, yielding digital holograms of microscopic objects, and an interactive computer environment to easily reconstruct wavefronts from digital holograms. 
@inproceedings{blu2003e, author = "Depeursinge, C. and Cuche, {\'{E}}. and Colomb, T. and Massatch, P. and Marian, A. and Montfort, F. and Liebling, M. and Blu, T. and Unser, M. and Marquet, P. and Magistretti, P.J.", title = "Digital Holography Applied to Microscopy: {A} New Imaging Modality in the Sub-Wavelength Range", booktitle = "Hundertvierte Jahrestagung der Deutschen Gesellschaft f{\"{u}}r angewandte Optik ({DGaO})", month = "June 10--14,", year = "2003", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003e" } 
Doğan, Z., Blu, T. & Van De Ville, D.,"Eigensensing and Deconvolution for the Reconstruction of Heat Absorption Profiles from Photoacoustic Tomography Data", Proceedings of the Tenth IEEE International Symposium on Biomedical Imaging (ISBI'13), San Francisco, USA, pp. 1142-1145, April 7-11, 2013. 
Photoacoustic tomography (PAT) is a relatively recent imaging modality that is promising for breast cancer detection and breast screening. It combines the high intrinsic contrast of optical radiation with acoustic imaging at submillimeter spatial resolution through the photoacoustic effect of absorption and thermal expansion. However, image reconstruction from boundary measurements of the propagating wave field is still a challenging inverse problem. Here we propose a new theoretical framework, for which we coin the term eigensensing, to recover the heat absorption profile of the tissue. One of the main features of our method is that there is no explicit forward model that needs to be used within a (usually) slow iterative scheme. Instead, the eigensensing principle allows us to computationally obtain several intermediate images that are blurred by known convolution kernels, which are chosen as the eigenfunctions of the spatial Laplace operator. The source image can then be reconstructed by a joint deconvolution algorithm that uses the intermediate images as input. Moreover, total variation regularization is added to make the inverse problem well-posed and to favor piecewise-smooth images. 
@inproceedings{blu2013b, author = "Do\u{g}an, Z. and Blu, T. and Van De Ville, D.", title = "Eigensensing and Deconvolution for the Reconstruction of Heat Absorption Profiles from Photoacoustic Tomography Data", booktitle = "Proceedings of the Tenth {IEEE} International Symposium on Biomedical Imaging ({ISBI'13})", month = "April 7--11,", year = "2013", pages = "1142--1145", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013b" } 
Doğan, Z., Blu, T. & Van De Ville, D.,"Finite-rate-of-innovation for the inverse source problem of radiating fields", Sampling Theory in Signal and Image Processing, Vol. 13 (3), pp. 271-294, 2014. 
Finite-rate-of-innovation (FRI) is a framework that has been developed for the sampling and reconstruction of specific classes of signals, in particular nonbandlimited signals that are characterized by finitely many parameters. It has been shown that by using specific sampling kernels that reproduce polynomials or exponentials (i.e., satisfy the Strang-Fix condition), it is possible to design non-iterative and fast reconstruction algorithms. In fact, the innovative part of the signal can be reconstructed perfectly using Prony's method (the annihilating filter). In this paper, we propose an adapted FRI framework to deal with the inverse source problem of radiating fields from boundary measurements. In particular, we consider the case where the source signals are modelled as a stream of Diracs in 3D and we assume that the induced field, governed by the Helmholtz equation, is measured on a boundary. First, we propose a technique, termed the "sensing principle"—also known as the reciprocity gap principle—to provide a link between the physical measurements and the source signal through a surface integral. We show that it is possible to design sensing schemes in the complex domain using holomorphic functions such that they allow one to determine the positions of the sources with a non-iterative algorithm using an adapted annihilating filter method. 
@article{blu2014g, author = "Do\u{g}an, Z. and Blu, T. and Van De Ville, D.", title = "Finite-rate-of-innovation for the inverse source problem of radiating fields", journal = "Sampling Theory in Signal and Image Processing", year = "2014", volume = "13", number = "3", pages = "271--294", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2014g" } 
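A minimal numerical sketch of the annihilating-filter (Prony) step mentioned in the abstract above; the source positions and weights here are invented for illustration, and this is not the paper's full boundary-measurement method:

```python
import numpy as np

# Hypothetical example: two Diracs at (complex) locations u_k with weights a_k.
# The "moments" tau_n = sum_k a_k * u_k^n play the role of the FRI samples.
u_true = np.array([0.3 + 0.4j, -0.5 + 0.2j])   # invented source positions
a_true = np.array([1.0, 2.0])                   # invented source weights
K = len(u_true)

n = np.arange(2 * K)
tau = (a_true[None, :] * u_true[None, :] ** n[:, None]).sum(axis=1)

# Annihilating filter h (length K+1, h[0] = 1): sum_i h[i] * tau[n-i] = 0.
# Build the Toeplitz system and solve for the unknown filter taps.
T = np.array([[tau[n0 - i] for i in range(1, K + 1)] for n0 in range(K, 2 * K)])
h = np.concatenate(([1.0], np.linalg.solve(T, -tau[K:2 * K])))

# The roots of the filter polynomial are the source locations (Prony's method).
u_est = np.roots(h)

# The weights follow from a linear (Vandermonde) system.
V = np.vander(u_est, N=K, increasing=True).T
a_est = np.linalg.solve(V, tau[:K])
```

In the noiseless case the recovery is exact; the papers below address what happens when the samples are degraded by noise.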
Doğan, Z., Gilliam, C., Blu, T. & Van De Ville, D.,"Reconstruction of Finite Rate of Innovation Signals with Model-Fitting Approach", IEEE Transactions on Signal Processing, Vol. 63 (22), pp. 6024–6036, November 2015. 
Finite rate of innovation (FRI) is a recent framework for the sampling and reconstruction of a large class of parametric signals that are characterized by a finite number of innovations (parameters) per unit interval. In the absence of noise, exact recovery of FRI signals has been demonstrated. In the noisy scenario, there exist techniques to deal with non-ideal measurements. Yet, accuracy and resilience to noise and model mismatch are still challenging problems for real-world applications. We address the reconstruction of FRI signals, specifically a stream of Diracs, from a few signal samples degraded by noise, and we propose a new FRI reconstruction method that is based on a model-fitting approach related to the structured-TLS problem. The model-fitting method is based on minimizing the training error, that is, the error between the computed and the recovered moments (i.e., the FRI samples of the signal), subject to an annihilation system. We present our framework for three different constraints of the annihilation system. Moreover, we propose a model order selection framework to determine the innovation rate of the signal, i.e., the number of Diracs, by estimating the noise level through the training error curve. We compare the performance of the model-fitting approach with known FRI reconstruction algorithms and the Cramér-Rao lower bound (CRLB) to validate these contributions. 
@article{blu2015h, author = "Do\u{g}an, Z. and Gilliam, C. and Blu, T. and Van De Ville, D.", title = "Reconstruction of Finite Rate of Innovation Signals with Model-Fitting Approach", journal = "IEEE Transactions on Signal Processing", month = "November", year = "2015", volume = "63", number = "22", pages = "6024--6036", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2015h" } 
Doğan, Z., Jovanovic, I., Blu, T. & Van De Ville, D.,"3D reconstruction of wave-propagated point sources from boundary measurements using joint sparsity and finite rate of innovation", Proceedings of the Ninth IEEE International Symposium on Biomedical Imaging (ISBI'12), Barcelona, Spain, pp. 1575–1578, May 2–5, 2012. 
Reconstruction of point sources from boundary measurements is a challenging problem in many applications. Recently, we proposed a new sensing and non-iterative reconstruction scheme for systems governed by the three-dimensional wave equation. The point sources are described by their magnitudes and positions. The core of the method relies on the principles of finite rate of innovation, and allows retrieving the parameters in the continuous domain without discretization. Here we extend the method to the case where the source configuration shows joint sparsity across different temporal frequencies; i.e., the sources have the same positions for different frequencies, though not necessarily the same magnitudes. We demonstrate that joint sparsity improves the robustness of the estimation results. In addition, we propose a modified multi-source version of Dijkstra's algorithm to recover the z parameters. We illustrate the feasibility of our method to reconstruct multiple sources in a 3D spherical geometry. 
@inproceedings{blu2012a, author = "Do\u{g}an, Z. and Jovanovic, I. and Blu, T. and Van De Ville, D.", title = "{3D} reconstruction of wave-propagated point sources from boundary measurements using joint sparsity and finite rate of innovation", booktitle = "Proceedings of the Ninth {IEEE} International Symposium on Biomedical Imaging ({ISBI'12})", month = "May 2--5,", year = "2012", pages = "1575--1578", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012a" } 
Doğan, Z., Jovanovic, I., Blu, T. & Van De Ville, D.,"Localization of Point Sources in Wave Fields From Boundary Measurements Using New Sensing Principle", Proceedings of the Tenth International Workshop on Sampling Theory and Applications (SampTA'13), Bremen, Germany, pp. 321–324, July 1–5, 2013. 
We address the problem of localizing point sources in 3D from boundary measurements of a wave field. Recently, we proposed the sensing principle, which allows extracting volumetric samples of the unknown source distribution from the boundary measurements. The extracted samples allow a non-iterative reconstruction algorithm that can recover the parameters of the source distribution projected on a 2D plane in the continuous domain without any discretization. Here we extend the method to the 3D localization of multiple point sources by combining multiple 2D planar projections. In particular, we propose a three-step algorithm to retrieve the locations by means of a multi-planar application of the sensing principle. First, we find the projections of the locations onto several 2D planes. Second, we propose a greedy algorithm to pair the solutions in each plane. Third, we retrieve the 3D locations by least squares regression. 
@inproceedings{blu2013c, author = "Do\u{g}an, Z. and Jovanovic, I. and Blu, T. and Van De Ville, D.", title = "Localization of Point Sources in Wave Fields From Boundary Measurements Using New Sensing Principle", booktitle = "Proceedings of the Tenth International Workshop on Sampling Theory and Applications ({SampTA'13})", month = "July 1--5,", year = "2013", pages = "321--324", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013c" } 
Doğan, Z., Jovanovic, I., Blu, T. & Van De Ville, D.,"Application of a New Sensing Principle for Photoacoustic Imaging of Point Absorbers", SPIE BiOS Photons Plus Ultrasound: Imaging and Sensing 2013, Proceedings of the SPIE, San Francisco, USA, Vol. 8581, pp. 8581144P17, February 2–7, 2013. 
Photoacoustic tomography (PAT) is a hybrid imaging method which combines the ultrasonic and optical imaging modalities, in order to overcome their respective weaknesses and to combine their strengths. It is based on the reconstruction of the optical absorption properties of the tissue from measurements of a photoacoustically-generated pressure field. Current methods consider laser excitation, under thermal and stress confinement assumptions, which leads to the generation of a propagating pressure field. Conventional reconstruction techniques then recover the initial pressure field based on the boundary measurements using iterative reconstruction algorithms in the time or Fourier domain. Here, we propose an application of a new sensing principle that allows for an efficient and non-iterative reconstruction algorithm for imaging point absorbers in PAT. We consider a closed volume surrounded by a measurement surface in an acoustically homogeneous medium, and we aim at recovering the positions of the point absorbers and the amount of heat they absorb. We propose a two-step algorithm based on a proper choice of so-called sensing functions. Specifically, in the first step, we extract the projected positions on the complex plane and the weights using a sensing function that is well-localized on the same plane. In the second step, we recover the remaining z-location by choosing a proper set of plane waves. We show that the proposed families of sensing functions are sufficient to recover the parameters of the unknown sources without any discretization of the domain. We extend the method to sources that have joint sparsity; i.e., the absorbers have the same positions for different frequencies. We evaluate the performance of the proposed algorithm using simulated and noisy sensor data, and we demonstrate the improvement obtained by exploiting joint sparsity. 
@inproceedings{blu2013i, author = "Do\u{g}an, Z. and Jovanovic, I. and Blu, T. and Van De Ville, D.", title = "Application of a New Sensing Principle for Photoacoustic Imaging of Point Absorbers", booktitle = "SPIE BiOS Photons Plus Ultrasound: Imaging and Sensing 2013. Proceedings of the SPIE.", month = "February 2--7,", year = "2013", volume = "8581", pages = "8581144P17", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013i" } 
Doğan, Z., Tsiminaki, V., Jovanovic, I., Blu, T. & Van De Ville, D.,"Localization of point sources for systems governed by the wave equation", Wavelets and Sparsity XIV. Proceedings of the SPIE, San Diego, USA, Vol. 8138, pp. 81380P-1–11, August 21–24, 2011. 
Analytic sensing has recently been proposed for source localization from boundary measurements using a generalization of the finite-rate-of-innovation framework. The method is tailored to the quasi-static electromagnetic approximation, which is commonly used in electroencephalography. In this work, we extend analytic sensing to physical systems that are governed by the wave equation; i.e., the sources emit signals that travel as waves through the volume and are measured at the boundary over time. This source localization problem is highly ill-posed (i.e., the unicity of the source distribution is not guaranteed), and additional assumptions about the sources are needed. We assume that the sources can be described with a finite number of parameters; in particular, we consider point sources that are characterized by their position and strength. This assumption makes the solution unique and turns the problem into one of parametric estimation. Following the framework of analytic sensing, we propose a two-step method. In the first step, we extend the reciprocity gap functional concept to wave-equation-based test functions; i.e., well-chosen test functions can relate the boundary measurements to generalized measures that contain volumetric information about the sources within the domain. In the second step, again owing to the choice of the test functions, we can apply the finite-rate-of-innovation principle; i.e., the generalized samples can be annihilated by a known filter, thus turning the nonlinear source localization problem into an equivalent root-finding one. We demonstrate the feasibility of our technique for a 3D spherical geometry. The performance of the reconstruction algorithm is evaluated in the presence of noise and compared with the theoretical limit given by Cramér-Rao lower bounds. 
@inproceedings{blu2011a, author = "Do\u{g}an, Z. and Tsiminaki, V. and Jovanovic, I. and Blu, T. and Van De Ville, D.", title = "Localization of point sources for systems governed by the wave equation", booktitle = "Wavelets and Sparsity XIV. Proceedings of the SPIE", month = "August 21--24,", year = "2011", volume = "8138", pages = "81380P-1--11", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011a" } 
Dragotti, P.L., Vetterli, M. & Blu, T.,"Sampling Moments and Reconstructing Signals of Finite Rate of Innovation: Shannon Meets Strang-Fix", IEEE Transactions on Signal Processing, Vol. 55 (5), pp. 1741–1757, May 2007. Part 1. 
Consider the problem of sampling signals which are not bandlimited, but still have a finite number of degrees of freedom per unit of time, such as, for example, nonuniform splines or piecewise polynomials, and call the number of degrees of freedom per unit of time the rate of innovation. Classical sampling theory does not enable a perfect reconstruction of such signals since they are not bandlimited. Recently, it was shown that, by using an adequate sampling kernel and a sampling rate greater than or equal to the rate of innovation, it is possible to reconstruct such signals uniquely [1]. These sampling schemes, however, use kernels with infinite support, and this leads to complex and potentially unstable reconstruction algorithms. In this paper, we show that many signals with a finite rate of innovation can be sampled and perfectly reconstructed using physically realizable kernels of compact support and a local reconstruction algorithm. The class of kernels that we can use is very rich and includes functions satisfying Strang-Fix conditions, exponential splines, and functions with rational Fourier transform. This last class of kernels is quite general and includes, for instance, any linear electric circuit. We thus show with an example how to estimate a signal of finite rate of innovation at the output of an RC circuit. The case of noisy measurements is also analyzed, and we present a novel algorithm that reduces the effect of noise by oversampling.
@article{blu2007d, author = "Dragotti, P.L. and Vetterli, M. and Blu, T.", title = "Sampling Moments and Reconstructing Signals of Finite Rate of Innovation: {S}hannon Meets {S}trang-{F}ix", journal = "{IEEE} Transactions on Signal Processing", month = "May", year = "2007", volume = "55", number = "5", pages = "1741--1757", note = "Part 1", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007d" } 
Feilner, M., Blu, T. & Unser, M.,"Statistical Analysis of fMRI Data Using Orthogonal Filterbanks", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing VII, Denver, USA, Vol. 3813, pp. 551–560, July 19–23, 1999. 
Functional magnetic resonance imaging (fMRI) is a recent technique that allows the measurement of brain metabolism (local concentration of deoxyhemoglobin using BOLD contrast) while subjects are performing a specific task. A block paradigm produces alternating sequences of images (e.g., rest versus motor task). In order to detect and localize areas of cerebral activation, one analyzes the data using paired differences at the voxel level. As an alternative to the traditional approach, which uses Gaussian spatial filtering to reduce measurement noise, we propose to analyze the data using an orthogonal filterbank. This procedure is intended to simplify and eventually improve the statistical analysis. The system is designed to concentrate the signal into a smaller number of components, thereby improving the signal-to-noise ratio. Thanks to the orthogonality property, we can test the filtered components independently on a voxel-by-voxel basis; this testing procedure is optimal for i.i.d. measurement noise. The number of components to test is also reduced because of downsampling. This offers a straightforward approach to increasing the sensitivity of the analysis (lower detection threshold) while applying the standard Bonferroni correction for multiple statistical tests. We present experimental results to illustrate the procedure. In addition, we discuss filter design issues. In particular, we introduce a family of orthogonal filters such that any integer reduction m can be implemented as a succession of elementary reductions m_{1} to m_{p}, where m = m_{1} ... m_{p} is a prime number factorization of m. 
@inproceedings{blu1999f, author = "Feilner, M. and Blu, T. and Unser, M.", title = "Statistical Analysis of {fMRI} Data Using Orthogonal Filterbanks", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {VII}", month = "July 19--23,", year = "1999", volume = "3813", pages = "551--560", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1999f" } 
Feilner, M., Blu, T. & Unser, M.,"Analysis of fMRI Data Using Spline Wavelets", Proceedings of the Tenth European Signal Processing Conference (EUSIPCO'00), Tampere, Finland, Vol. IV, pp. 2013–2016, September 4–8, 2000. 
Our goal is to detect and localize areas of activation in the brain from sequences of fMRI images. The standard approach for reducing the noise contained in the fMRI images is to apply a spatial Gaussian filter, which entails some loss of details. Here instead, we consider a wavelet solution to the problem, which has the advantage of retaining high-frequency information. We use fractional-spline orthogonal wavelets with a continuously-varying order parameter alpha; by adjusting alpha, we can balance spatial resolution against frequency localization. The activation pattern is detected by performing multiple (Bonferroni-corrected) t-tests in the wavelet domain. This pattern is then localized by inverse wavelet transform of a thresholded coefficient map. In order to compare transforms and to select the best alpha, we devise a simulation study for the detection of a known activation pattern. We also apply our methodology to the analysis of acquired fMRI data for a motor task. 
@inproceedings{blu2000b, author = "Feilner, M. and Blu, T. and Unser, M.", title = "Analysis of {fMRI} Data Using Spline Wavelets", booktitle = "Proceedings of the Tenth European Signal Processing Conference ({EUSIPCO'00})", month = "September 4--8,", year = "2000", volume = "{IV}", pages = "2013--2016", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000b" } 
Feilner, M., Blu, T. & Unser, M.,"Optimizing Wavelets for the Analysis of fMRI Data", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing VIII, San Diego, USA, Vol. 4119, pp. 626–637, July 31–August 4, 2000. 
Ruttimann et al. have proposed to use the wavelet transform for the detection and localization of activation patterns in functional magnetic resonance imaging (fMRI). Their main idea was to apply a statistical test in the wavelet domain to detect the coefficients that are significantly different from zero. Here, we improve the original method in the case of non-stationary Gaussian noise by replacing the original z-test by a t-test that takes into account the variability of each wavelet coefficient separately. The application of a threshold that is proportional to the residual noise level, after the reconstruction by an inverse wavelet transform, further improves the localization of the activation pattern in the spatial domain. A key issue is to find out which wavelet and which type of decomposition is best suited for the detection of a given activation pattern. In particular, we want to investigate the applicability of alternative wavelet bases that are not necessarily orthogonal. For this purpose, we consider the various brands of fractional spline wavelets (orthonormal, B-spline, and dual), which are indexed by a continuously-varying order parameter α. We perform an extensive series of tests using simulated data and compare the various transforms based on their false detection rate (type I + type II errors). In each case, we observe that there is a strongly optimal value of α and a best number of scales that minimizes the error. We also find that splines generally outperform Daubechies wavelets and that they are quite competitive with SPM (the standard analysis method used in the field), although it uses much simpler statistics. An interesting practical finding is that performance is strongly correlated with the number of coefficients detected in the wavelet domain, at least in the orthonormal and B-spline cases. This suggests that it is possible to optimize the structural wavelet parameters simply by maximizing the number of wavelet counts, without any prior knowledge of the activation pattern. Some examples of analysis of real data are also presented. 
@inproceedings{blu2000c, author = "Feilner, M. and Blu, T. and Unser, M.", title = "Optimizing Wavelets for the Analysis of {fMRI} Data", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {VIII}", month = "July 31--August 4,", year = "2000", volume = "4119", pages = "626--637", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000c" } 
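The testing procedure running through the three fMRI abstracts above (orthogonal transform, coefficient-wise t-tests, Bonferroni-style threshold, inverse transform) can be sketched as follows. This is a toy illustration with a one-level 2D Haar transform, synthetic data, and an approximate hard-coded threshold, not the authors' fractional-spline implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar2d(x):
    """One-level orthonormal 2D Haar transform, returning the four subbands."""
    s = np.sqrt(2.0)
    lo = (x[0::2, :] + x[1::2, :]) / s
    hi = (x[0::2, :] - x[1::2, :]) / s
    return ((lo[:, 0::2] + lo[:, 1::2]) / s, (lo[:, 0::2] - lo[:, 1::2]) / s,
            (hi[:, 0::2] + hi[:, 1::2]) / s, (hi[:, 0::2] - hi[:, 1::2]) / s)

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d."""
    s = np.sqrt(2.0)
    lo = np.empty((ll.shape[0], 2 * ll.shape[1]))
    hi = np.empty_like(lo)
    lo[:, 0::2], lo[:, 1::2] = (ll + lh) / s, (ll - lh) / s
    hi[:, 0::2], hi[:, 1::2] = (hl + hh) / s, (hl - hh) / s
    x = np.empty((2 * lo.shape[0], lo.shape[1]))
    x[0::2, :], x[1::2, :] = (lo + hi) / s, (lo - hi) / s
    return x

# Synthetic paired-difference images: a 4x4 "activation" block in unit noise.
reps, size = 20, 16
truth = np.zeros((size, size))
truth[4:8, 4:8] = 5.0
diffs = truth + rng.normal(size=(reps, size, size))

# Transform every repetition and stack the four subbands into one array.
coefs = np.array([np.block([[a, b], [c, d]])
                  for a, b, c, d in (haar2d(im) for im in diffs)])

# Per-coefficient one-sample t-test across repetitions (orthogonality makes
# the coefficient noise i.i.d., so the tests can be applied independently).
t = coefs.mean(axis=0) / (coefs.std(axis=0, ddof=1) / np.sqrt(reps))

# Keep only coefficients passing an (approximate) Bonferroni threshold.
t_crit = 5.0   # roughly the two-sided t(19) cutoff at alpha = 0.05 / 256
kept = np.where(np.abs(t) > t_crit, coefs.mean(axis=0), 0.0)

# Back to the image domain: the detected activation map.
n = size // 2
recon = ihaar2d(kept[:n, :n], kept[:n, n:], kept[n:, :n], kept[n:, n:])
```

The reconstructed map concentrates on the simulated activation block while the unactivated background is suppressed by the thresholding.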
Fong, M.C.M., Minett, J.W., Blu, T. & Wang, W.S.Y.,"Brain-computer interface (BCI): Is it strictly necessary to use random sequences in visual spellers?", Proceedings of the tenth Asia Pacific Conference on Computer Human Interaction (APCHI 2012), Matsue, Japan, pp. 109–118, August 28–31, 2012. 
The P300 speller is a standard paradigm for brain-computer interfacing (BCI) based on electroencephalography (EEG). It exploits the fact that the user's selective attention to a target stimulus among a random sequence of stimuli enhances the magnitude of the P300 evoked potential. The present study questions the necessity of using random sequences of stimulation. In two types of experimental runs, subjects attended to a target stimulus while the stimuli, four in total, were each intensified twelve times, in either random order or deterministic order. The 32-channel EEG data were analyzed offline using linear discriminant analysis (LDA). Similar classification accuracies of 95.3% and 93.2% were obtained for the random and deterministic runs, respectively, using the data associated with 3 sequences of stimulation. Furthermore, using a montage of 5 posterior electrodes, the two paradigms attained an identical accuracy of 92.4%. These results suggest that: (a) the use of random sequences is not necessary for effective BCI performance; and (b) deterministic sequences can be used in some BCI speller applications. 
@inproceedings{blu2012b, author = "Fong, M. C. M. and Minett, J. W. and Blu, T. and Wang, W. S. Y.", title = "Brain-computer interface ({BCI}): Is it strictly necessary to use random sequences in visual spellers?", booktitle = "Proceedings of the tenth Asia Pacific Conference on Computer Human Interaction ({APCHI 2012})", month = "August 28--31,", year = "2012", pages = "109--118", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012b" } 
Fong, M.C.M., Minett, J.W., Blu, T. & Wang, W.S.Y.,"Towards a Neural Measure of Perceptual Distance: Classification of Electroencephalographic Responses to Synthetic Vowels", Proceedings of the Fifteenth Annual Conference of the International Speech Communication Association (Interspeech 2014), Singapore, pp. 2595–2599, 14–18 September, 2014. 
How vowels are organized cortically has previously been studied using auditory evoked potentials (AEPs), one focus of which is to determine whether perceptual distance can be inferred from AEP components. The present study extends this line of research by adopting a machine-learning framework to classify evoked responses to four synthetic mid-vowels differing only in second formant frequency (F2 = 840, 1200, 1680, and 2280 Hz). Six subjects each attended 4 EEG sessions on separate days. Classifiers were trained using time-domain data in successive time windows of various sizes. Results were the most accurate when a window of about 80 ms was used. By integrating the scores from individual classifiers, the maximum mean binary classification rates improved to 70% (10 trials) and 77% (20 trials). To assess how well perceptual distances among the vowels were reflected in our results, discriminability indices (d′) were computed using both the behavioral results in a screening test and the classification results. It was found that the two sets of indices were significantly correlated. The pair that was the most (least) discriminable behaviorally was also the most (least) classifiable neurally. Our results support the use of classification methodology for developing a neural measure of perceptual distance. 
@inproceedings{blu2014e, author = "Fong, M. C. M. and Minett, J. W. and Blu, T. and Wang, W. S. Y.", title = "Towards a Neural Measure of Perceptual Distance: Classification of Electroencephalographic Responses to Synthetic Vowels", booktitle = "Proceedings of the Fifteenth Annual Conference of the International Speech Communication Association (Interspeech 2014)", month = "14--18 September,", year = "2014", pages = "2595--2599", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2014e" } 
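A rough sketch of the kind of binary linear discriminant classification used in the two EEG studies above; the trial data, feature dimensions, and class separation below are entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented dimensions: 200 trials per class, 40 features
# (e.g., a flattened channels-by-samples window of an evoked response).
n, d = 200, 40
mu = np.zeros(d)
mu[:5] = 1.0                             # class-mean difference on a few features
X0 = rng.normal(size=(n, d))             # class 0 trials
X1 = rng.normal(size=(n, d)) + mu        # class 1 trials

# Fisher's linear discriminant: w = Sigma_pooled^{-1} (mu1 - mu0).
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
S = (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)) / 2
w = np.linalg.solve(S + 1e-6 * np.eye(d), m1 - m0)   # small ridge for stability
b = -w @ (m0 + m1) / 2                               # threshold at the midpoint

predict = lambda X: (X @ w + b > 0).astype(int)
acc = np.r_[predict(X0) == 0, predict(X1) == 1].mean()
```

In the actual studies, per-window classifier scores are further combined across trials, which is what raises the reported accuracies.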
Forster, B., Blu, T. & Unser, M.,"A New Family of Complex Rotation-Covariant Multiresolution Bases in 2D", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing X, San Diego, USA, Vol. 5207, pp. 475–479, August 3–8, 2003. Part I. 
We present complex rotation-covariant multiresolution families aimed at image analysis. Since they are complex-valued functions, they provide the important phase information, which is missing in the discrete wavelet transform with real wavelets. Our basis elements have nice properties in Hilbert space, such as smoothness of fractional order α ∈ R^{+}. The corresponding filters allow an FFT-based implementation and thus provide a fast algorithm for the wavelet transform. 
@inproceedings{blu2003f, author = "Forster, B. and Blu, T. and Unser, M.", title = "A New Family of Complex Rotation-Covariant Multiresolution Bases in {2D}", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {X}", month = "August 3--8,", year = "2003", volume = "5207", pages = "475--479", note = "Part {I}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003f" } 
Forster, B., Blu, T. & Unser, M.,"Complex B-Splines and Wavelets", Second International Conference on Computational Harmonic Analysis, Nineteenth Annual Shanks Lecture (CHA'04), Nashville, USA, May 24–30, 2004. 
B-spline multiresolution analyses have proven to be an adequate tool for signal analysis. But for some applications, e.g., in speech processing and digital holography, complex-valued scaling functions and wavelets are more favourable than real ones, since they allow the crucial phase information to be deduced. In this talk, we extend the classical and fractional B-spline approaches to complex B-splines. We do this by choosing a complex exponent, i.e., a complex order z of the B-spline, and show that this does not affect the basic properties such as smoothness and decay, recurrence relations, and others. Moreover, the resulting complex B-splines satisfy a two-scale relation and generate a multiresolution analysis of L^{2}(R). We show that the complex B-splines, as well as the corresponding wavelets, converge to Gabor functions as ℜ(z) increases while ℑ(z) is fixed. Thus they are approximately optimally time-frequency localized. 
@inproceedings{blu2004d, author = "Forster, B. and Blu, T. and Unser, M.", title = "Complex \mbox{{B}-Splines} and Wavelets", booktitle = "Second International Conference on Computational Harmonic Analysis, Nineteenth Annual Shanks Lecture ({CHA'04})", month = "May 24--30,", year = "2004", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004d" } 
Forster, B., Blu, T. & Unser, M.,"Complex B-Splines", Applied and Computational Harmonic Analysis, Vol. 20 (2), pp. 261–282, March 2006. 
We propose a complex generalization of Schoenberg's cardinal splines. To this end, we go back to the Fourier domain definition of the B-splines and extend it to complex-valued degrees. We show that the resulting complex B-splines are piecewise modulated polynomials, and that they retain most of the important properties of the classical ones: smoothness, recurrence, and two-scale relations, Riesz basis generator, explicit formulae for derivatives, including fractional orders, etc. We also show that they generate multiresolution analyses of L^{2}(R) and that they can yield wavelet bases. We characterize the decay of these functions, which are no longer compactly supported when the degree is not an integer. Finally, we prove that the complex B-splines converge to modulated Gaussians as their degree increases, and that they are asymptotically optimally localized in the time-frequency plane in the sense of Heisenberg's uncertainty principle. 
@article{blu2006b, author = "Forster, B. and Blu, T. and Unser, M.", title = "Complex \mbox{{B}-Splines}", journal = "Applied and Computational Harmonic Analysis", month = "March", year = "2006", volume = "20", number = "2", pages = "261--282", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006b" } 
Forster, B., Blu, T., Van De Ville, D. & Unser, M.,"Shift-Invariant Spaces from Rotation-Covariant Functions", Applied and Computational Harmonic Analysis, Vol. 25 (2), pp. 240–265, September 2008. 
We consider shift-invariant multiresolution spaces generated by rotation-covariant functions ρ in ℝ^{2}. To construct corresponding scaling and wavelet functions, ρ has to be localized with an appropriate multiplier, such that the localized version is an element of L^{2}(ℝ^{2}). We consider several classes of multipliers and show a new method to improve the regularity and decay properties of the corresponding scaling functions and wavelets. The wavelets are complex-valued functions, which are approximately rotation-covariant and therefore behave as Wirtinger differential operators. Moreover, our class of multipliers gives a novel approach for the construction of polyharmonic B-splines with better polynomial reconstruction properties. 
@article{blu2008e, author = "Forster, B. and Blu, T. and Van De Ville, D. and Unser, M.", title = "Shift-Invariant Spaces from Rotation-Covariant Functions", journal = "Applied and Computational Harmonic Analysis", month = "September", year = "2008", volume = "25", number = "2", pages = "240--265", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008e" } 
Gilliam, C., Bingham, A., Blu, T. & Jelfs, B.,"Time-Varying Delay Estimation Using Common Local All-Pass Filters with Application to Surface Electromyography", Proceedings of the Forty-third IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'18), Calgary, AB, Canada, pp. 841–845, April 15–20, 2018. 
Estimation of conduction velocity (CV) is an important task in the analysis of surface electromyography (sEMG). The problem can be framed as estimation of a time-varying delay (TVD) between electrode recordings. In this paper we present an algorithm which incorporates information from multiple electrodes into a single TVD estimation. The algorithm uses a common all-pass filter to relate two groups of signals at a local level. We also address a current limitation of CV estimators by providing an automated way of identifying the innervation zone from a set of electrode recordings, thus allowing incorporation of the entire array into the estimation. We validate the algorithm on both synthetic and real sEMG data, with results showing the proposed algorithm is both robust and accurate. 
@inproceedings{blu2018c, author = "Gilliam, C. and Bingham, A. and Blu, T. and Jelfs, B.", title = "Time-Varying Delay Estimation Using Common Local All-Pass Filters with Application to Surface Electromyography", booktitle = "Proceedings of the Forty-third {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'18})", month = "April 15--20,", year = "2018", pages = "841--845", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018c" } 
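For illustration, a constant delay between two recordings can be estimated by plain cross-correlation; this is a much cruder baseline than the common local all-pass approach of the paper above (which handles time-varying, sub-sample delays), and the signals here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "propagating" signal: channel 2 is channel 1 delayed by 7 samples.
true_delay = 7
s = rng.normal(size=300)
x1 = s
x2 = np.roll(s, true_delay)

# Full cross-correlation; the lag of its peak estimates the delay.
lags = np.arange(-len(s) + 1, len(s))
xcorr = np.correlate(x2, x1, mode="full")
est_delay = lags[np.argmax(xcorr)]
```

With the inter-electrode distance known, a delay estimate of this kind converts directly into a conduction velocity estimate.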
Gilliam, C. & Blu, T.,"Fitting Instead Of Annihilation: Improved Recovery Of Noisy FRI Signals", Proceedings of the Thirty-ninth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'14), Florence, Italy, pp. 51–55, May 4–9, 2014. 
Recently, classical sampling theory has been broadened to include a class of non-bandlimited signals that possess a finite rate of innovation (FRI). In this paper we consider the reconstruction of a periodic stream of Diracs from noisy samples. We demonstrate that its noiseless FRI samples can be represented as a ratio of two polynomials. Using this structure as a model, we propose recovering the FRI signal using a model-fitting approach rather than an annihilation method. We present an algorithm that fits this model to the noisy samples and demonstrate that it has low computation cost and is more reliable than two state-of-the-art methods. 
@inproceedings{blu2014b, author = "Gilliam, C. and Blu, T.", title = "Fitting Instead Of Annihilation: Improved Recovery Of Noisy {FRI} Signals", booktitle = "Proceedings of the Thirty-ninth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'14})", month = "May 4--9,", year = "2014", pages = "51--55", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2014b" } 
Gilliam, C. & Blu, T.,"Local All-Pass Filters for Optical Flow Estimation", Proceedings of the Fortieth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'15), Brisbane, Australia, pp. 1533-1537, April 19-24, 2015. 
The optical flow is a velocity field that describes the motion of pixels within a sequence (or set) of images. Its estimation plays an important role in areas such as motion compensation, object tracking and image registration. In this paper, we present a novel framework to estimate the optical flow using local all-pass filters. Instead of using the optical flow equation, the framework is based on relating one image to another, on a local level, using an all-pass filter and then extracting the optical flow from the filter. Using this framework, we present a fast novel algorithm for estimating a smoothly varying optical flow, which we term the Local All-Pass (LAP) algorithm. We demonstrate that this algorithm is consistent and accurate, and that it outperforms three state-of-the-art algorithms when estimating constant and smoothly varying flows. We also show initial competitive results for real images. 
@inproceedings{blu2015c, author = "Gilliam, C. and Blu, T.", title = "Local All-Pass Filters for Optical Flow Estimation", booktitle = "Proceedings of the Fortieth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'15})", month = "April 19-24,", year = "2015", pages = "1533-1537", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2015c" } 
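The core idea, reduced to one dimension, can be sketched numerically: a small shift δ between two signals satisfies a forward-backward all-pass relation p(-x) * I2 = p(x) * I1 with p = g + c·g' (g a Gaussian, g' its derivative), which is linear in the single unknown c and gives δ ≈ -2c. This toy sketch is ours, under the stated small-shift assumption, not the authors' 2D implementation; the signal and filter-scale choices are arbitrary.

```python
import numpy as np

# 1D toy: estimate a constant sub-sample shift delta between two signals
# from the forward-backward all-pass relation  p(-x)*I2 = p(x)*I1,
# with p = g + c*g' (g: Gaussian, g': its derivative), linear in c.
x = np.arange(-50, 51, dtype=float)
f = lambda t: np.exp(-(t / 12.0) ** 2) + 0.5 * np.exp(-((t - 20) / 8.0) ** 2)
delta = 0.7                              # true (sub-sample) shift
I1, I2 = f(x), f(x - delta)

s = 2.0                                  # filter scale
t = np.arange(-8, 9, dtype=float)
g = np.exp(-(t / s) ** 2)
dg = -2 * t / s ** 2 * g                 # derivative of the Gaussian

conv = lambda h, u: np.convolve(u, h, mode="same")
# Since g is even and g' odd, p(-x) = g - c*g', so the relation becomes
#   g*I2 - g*I1 = c * (g'*I2 + g'*I1),  solved for c by least squares.
lhs = conv(g, I2) - conv(g, I1)
rhs = conv(dg, I2) + conv(dg, I1)
c = np.dot(rhs, lhs) / np.dot(rhs, rhs)
delta_est = -2 * c
print(delta_est)                         # ~ 0.7
```

The full LAP algorithm repeats this filter fit in a window around every pixel, which is why a dense flow field comes at essentially the cost of a few filtering passes.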
Gilliam, C. & Blu, T.,"Finding the minimum rate of innovation in the presence of noise", Proceedings of the Forty-first IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'16), Shanghai, China, pp. 4019-4023, March 20-25, 2016. 
Recently, sampling theory has been broadened to include a class of non-bandlimited signals that possess a finite rate of innovation (FRI). In this paper, we consider the problem of determining the minimum rate of innovation (RI) in a noisy setting. First, we adapt a recent model-fitting algorithm for FRI recovery and demonstrate that it achieves the Cramér-Rao bounds. Using this algorithm, we then present a framework to estimate the minimum RI based on fitting the sparsest model to the noisy samples whilst satisfying a mean squared error (MSE) criterion: a signal is recovered if the output MSE is less than the input MSE. Specifically, given an RI, we use the MSE criterion to judge whether our model fitting has been a success or a failure. Using this output, we present a dichotomic algorithm that performs a binary search for the minimum RI and demonstrate that it obtains a sparser RI estimate than an existing information criterion approach. 
@inproceedings{blu2016a, author = "Gilliam, C. and Blu, T.", title = "Finding the minimum rate of innovation in the presence of noise", booktitle = "Proceedings of the Forty-first {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'16})", month = "March 20-25,", year = "2016", pages = "4019-4023", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2016a" } 
Gilliam, C. & Blu, T.,"Local All-Pass Geometric Deformations", IEEE Transactions on Image Processing, Vol. 27 (2), pp. 1010-1025, February 2018. 
This paper deals with the estimation of a deformation that describes the geometric transformation between two images. To solve this problem, we propose a novel framework that relies upon the brightness consistency hypothesis: a pixel's intensity is maintained throughout the transformation. Instead of assuming small distortion and linearising the problem (e.g. via Taylor series expansion), we propose to interpret the brightness hypothesis as an all-pass filtering relation between the two images. The key advantages of this new interpretation are that no restrictions are placed on the amplitude of the deformation or on the spatial variations of the images. Moreover, by converting the all-pass filtering to a linear forward-backward filtering relation, our solution to the estimation problem equates to solving a linear system of equations, which leads to a highly efficient implementation. Using this framework, we develop a fast algorithm that relates one image to another, on a local level, using an all-pass filter and then extracts the deformation from the filter, hence the name "Local All-Pass" (LAP) algorithm. The effectiveness of this algorithm is demonstrated on a variety of synthetic and real deformations that are found in applications such as image registration and motion estimation. In particular, the LAP obtains very accurate results for significantly reduced computation time when compared to a selection of image registration algorithms, and is very robust to noise corruption. 
@article{blu2018b, author = "Gilliam, C. and Blu, T.", title = "{L}ocal {A}ll-{P}ass Geometric Deformations", journal = "IEEE Transactions on Image Processing", month = "February", year = "2018", volume = "27", number = "2", pages = "1010-1025", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018b" } 
Gilliam, C., Küstner, T. & Blu, T.,"3D Motion Flow Estimation Using Local All-Pass Filters", Proceedings of the Thirteenth IEEE International Symposium on Biomedical Imaging (ISBI'16), Prague, Czech Republic, pp. 282-285, April 13-16, 2016. 
Fast and accurate motion estimation is an important tool in biomedical imaging applications such as motion compensation and image registration. In this paper, we present a novel algorithm to estimate motion in volumetric images based on the recently developed Local All-Pass (LAP) optical flow framework. The framework is built upon the idea that any motion can be regarded as a local rigid displacement and is hence equivalent to all-pass filtering. Accordingly, our algorithm aims to relate two images, on a local level, using a 3D all-pass filter and then extract the local motion flow from the filter. As this process is based on filtering, it can be efficiently repeated over the whole image volume, allowing fast estimation of a dense 3D motion. We demonstrate the effectiveness of this algorithm on both synthetic motion flows and in vivo MRI data involving respiratory motion. In particular, the algorithm obtains greater accuracy for significantly reduced computation time when compared to competing approaches. 
@inproceedings{blu2016c, author = "Gilliam, C. and K\"{u}stner, T. and Blu, T.", title = "{3D} Motion Flow Estimation Using Local All-Pass Filters", booktitle = "Proceedings of the Thirteenth {IEEE} International Symposium on Biomedical Imaging ({ISBI'16})", month = "April 13-16,", year = "2016", pages = "282-285", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2016c" } 
Guo, R. & Blu, T.,"FRI Sensing: Sampling Images along Unknown Curves", Proceedings of the Forty-fourth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'19), Brighton, UK, pp. 5132-5136, March 12-17, 2019. 
While sensors have been widely used in various applications, an essential current trend of research consists of collecting and fusing the information that comes from many sensors. In this paper, on the contrary, we would like to concentrate on a unique mobile sensor; our goal is to unveil the multidimensional information entangled within a stream of one-dimensional data, a problem we call FRI Sensing. Our key finding is that, even if we do not have any position knowledge of the moving sensor, it is still possible to reconstruct the sampling trajectory (up to a linear transformation and a shift), and then reconstruct an image that represents the physical sampling field, under certain hypotheses. We further investigate the reconstruction hypotheses and propose novel algorithms that make this 1D-to-2D reconstruction feasible. Experiments show that the proposed approach retrieves the sampled image and trajectory accurately under the developed hypotheses. This method can be applied to geolocation applications, such as indoor localization and submarine navigation. Moreover, we show that the proposed algorithms have the potential to visualize a one-dimensional signal that may not have been sampled from a real 2D/3D physical field (e.g. speech and text signals) as a two- or three-dimensional image. 
@inproceedings{blu2019b, author = "Guo, R. and Blu, T.", title = "{FRI} Sensing: Sampling Images along Unknown Curves", booktitle = "Proceedings of the Forty-fourth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'19})", month = "March 12-17", year = "2019", pages = "5132-5136", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2019b" } 
Guo, R. & Blu, T.,"FRI Sensing: Retrieving the Trajectory of a Mobile Sensor from Its Temporal Samples", IEEE Transactions on Signal Processing, Vol. 68, pp. 5533-5545, 2020. 
In this paper, contrary to the current research trend which consists of fusing (big) data from many different sensors, we focus on one-dimensional samples collected by a unique mobile sensor (e.g., temperature, pressure, magnetic field, etc.), without explicit positioning information (such as GPS). We demonstrate that this stream of 1D data contains valuable 2D geometric information that can be unveiled by adequate processing, using a high-accuracy Finite Rate of Innovation (FRI) algorithm: FRI Sensing. Our key finding is that, despite the absence of any position information, the basic sequence of 1D sensor samples makes it possible to reconstruct the sampling trajectory (up to an affine transformation), and then the image that represents the physical field that has been sampled. We state the FRI Sensing sampling theorem and the hypotheses needed for this trajectory and image reconstruction to be successful. The proof of our theorem is constructive and leads to a very efficient and robust algorithm, which we validate in various conditions. Moreover, although we essentially model the images as finite sums of 2D sinusoids, we also observe that our algorithm works accurately for real textured images. 
@article{blu2020f, author = "Guo, R. and Blu, T.", title = "{FRI} Sensing: Retrieving the Trajectory of a Mobile Sensor from Its Temporal Samples", journal = "IEEE Transactions on Signal Processing", year = "2020", volume = "68", pages = "5533-5545", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020f" } 
Guo, R. & Blu, T.,"FRI Sensing: 2D Localization from 1D Mobile Sensor Data", Proceedings of the 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Auckland, NZ, pp. 986-991, December 7-10, 2020. 
Sensor localization is a basic and important problem in many areas. It often relies on transmission/communication equipment to obtain the sensor geolocation information. In this work, on the contrary, our goal is to retrieve the 2D sensor location from the 1D sensor data alone. We demonstrate that valuable 2D geometric information lies hidden within the 1D sampled signal and can be unveiled by adequate processing. We investigate the hypotheses needed and propose a very efficient and robust algorithm to realize this 2D localization. This method can potentially be applied to a range of biomedical applications, such as robotic endoscopic capsules, medicine tracking, and biological tissue detection. For example, tiny sensors, about the size of a grain of sand, can be injected to monitor human biometrics (like blood pH), and accurate localization plays an essential role in pathological diagnosis. 
@inproceedings{blu2020h, author = "Guo, R. and Blu, T.", title = "{FRI} Sensing: {2D} Localization from {1D} Mobile Sensor Data", booktitle = "Proceedings of the 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)", month = "December 7-10", year = "2020", pages = "986-991", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020h" } 
Guo, R. & Blu, T.,"Exploring the Geometry of One-Dimensional Signals", IEEE Transactions on Signal Processing, Vol. 69, pp. 5299-5312, 2021. 
The wide availability of inexpensive sensors of all kinds (inertia, magnetic field, light, temperature, pressure, chemicals, etc.) makes it possible to empower a host of novel applications. We have shown in a previous paper that, if the field sensed can be expressed as a finite sum of 2D sinusoids, it is possible to reconstruct the sampling curve from the 1D sequence of image samples alone (up to a linear transformation), without extra positioning information. Here, we explore the validity of this result if, instead, we assume the image to be directional or, as an extreme case, laminar, and we simplify our previous approach to the single-sinusoid fitting of segments of the 1D samples. We obtain predictive results that quantify the accuracy with which the frequencies found can be used to estimate the slope of the sampling trajectory. We also develop a robust algorithm to retrieve the sampling trajectory and estimate the laminar image that underlies the 1D samples. We finally demonstrate the validity of our approach on synthetic and well-chosen real images. 
@article{blu2021b, author = "Guo, R. and Blu, T.", title = "Exploring the Geometry of One-Dimensional Signals", journal = "IEEE Transactions on Signal Processing", year = "2021", volume = "69", pages = "5299-5312", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2021b", doi = "10.1109/TSP.2021.3112914" } 
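The relation that ties the 1D spectrum to the trajectory slope can be illustrated numerically: sampling a 2D sinusoid cos(2π(fx·x + fy·y)) along a straight line of direction (cos θ, sin θ) yields a 1D sinusoid at the projected frequency fx·cos θ + fy·sin θ. The frequencies and angle below are arbitrary illustrative values, not from the paper.

```python
import numpy as np

# A 2D sinusoid sampled along a straight trajectory of angle theta
# produces a 1D sinusoid at frequency fx*cos(theta) + fy*sin(theta).
fx, fy = 0.12, 0.05              # 2D image frequencies (cycles per unit length)
theta = np.deg2rad(30.0)         # trajectory direction
N = 512
t = np.arange(N)                 # unit-speed trajectory (t*cos(theta), t*sin(theta))
sig = np.cos(2 * np.pi * (fx * t * np.cos(theta) + fy * t * np.sin(theta)))

f_pred = fx * np.cos(theta) + fy * np.sin(theta)
f_est = np.fft.rfftfreq(N)[np.argmax(np.abs(np.fft.rfft(sig)))]
print(f_est, f_pred)             # both ~ 0.129
```

Conversely, measuring the 1D frequency of a segment of samples constrains the direction of that segment, which is the mechanism the paper exploits to recover the trajectory slope from laminar (directional) images.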
Guo, R., Li, Y., Blu, T. & Zhao, H.,"Vector-FRI Recovery of Multi-Sensor Measurements", IEEE Transactions on Signal Processing, Vol. 70, pp. 4369-4380, 2022. 
Thanks to lowering costs, sensors of all kinds have increasingly been used in a wide variety of disciplines and fields, facilitating the rapid development of new technologies and applications. The information of interest (e.g. source location, refractive index, etc.) gets encoded in the measured sensor data, and the key problem is then to decode this information from the sensor measurements. In many cases, sensor data exhibit sparse features ("innovations") that typically take the form of a finite sum of sinusoids. In practice, the robust retrieval of such encoded information from multi-sensor data (array or network) is difficult due to the non-uniformity of instrument precision and noise (i.e. different across sensors). This motivates the development of a joint sparse ("vector Finite Rate of Innovation") recovery strategy for multi-sensor data: by fitting the data to a joint parametric model, an accurate sparse recovery can be achieved, even if the noise of the sensors is non-homogeneous and correlated. Although developed for one-dimensional sensor data, we show that our method is easily extended to multidimensional sensor measurements, e.g. direction-of-arrival data from a 2D planar array and interference fringes in underwater acoustics, which provides a generic solution to these applications. A very robust and efficient algorithm is proposed, which we validate in various conditions (simulations, multiple types of real data). 
@article{blu2022d, author = "Guo, R. and Li, Y. and Blu, T. and Zhao, H.", title = "Vector-{FRI} Recovery of Multi-Sensor Measurements", journal = "IEEE Transactions on Signal Processing", year = "2022", volume = "70", pages = "4369-4380", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2022d", doi = "10.1109/TSP.2022.3204402" } 
Hao, Y., Marziliano, P., Vetterli, M. & Blu, T.,"Compression of ECG as a Signal with Finite Rate of Innovation", Proceedings of the Twenty-Seventh Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS'05), Shanghai, China, pp. 7564-7567, September 1-4, 2005. 
Compression of the ECG (electrocardiogram) as a signal with finite rate of innovation (FRI) is proposed in this paper. By modelling the ECG signal as the sum of a bandlimited signal and a nonuniform linear spline, which has a finite rate of innovation, sampling theory is applied to achieve effective compression and reconstruction of the ECG signal. The simulation results show that the performance of the algorithm is quite satisfactory in preserving the diagnostic information, as compared to the classical sampling scheme which uses sinc interpolation. 
@inproceedings{blu2005d, author = "Hao, Y. and Marziliano, P. and Vetterli, M. and Blu, T.", title = "Compression of {ECG} as a Signal with Finite Rate of Innovation", booktitle = "Proceedings of the Twenty-Seventh Annual International Conference of the {IEEE} Engineering in Medicine and Biology Society ({EMBS'05})", month = "September 1-4,", year = "2005", pages = "7564-7567", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005d" } 
Horbelt, S., Muñoz Barrutia, A., Blu, T. & Unser, M.,"Spline Kernels for Continuous-Space Image Processing", Proceedings of the Twenty-Fifth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'00), Istanbul, Turkey, Vol. {IV}, pp. 2191-2194, June 5-9, 2000. 
We present an explicit formula for spline kernels; these are defined as the convolution of several B-splines of variable widths h and degrees n. The spline kernels are useful for continuous signal processing algorithms that involve B-spline inner products or the convolution of several spline basis functions. We apply our results to the derivation of spline-based algorithms for two classes of problems. The first is the resizing of images with arbitrary scaling factors. The second problem is the computation of the Radon transform and of its inverse; in particular, we present a new spline-based version of the filtered back-projection algorithm for tomographic reconstruction. In both cases, our explicit kernel formula allows for the use of high-degree splines; these offer better approximation and performance than the conventional lower-order formulations (e.g., piecewise constant or piecewise linear models). 
@inproceedings{blu2000d, author = "Horbelt, S. and Mu{\~{n}}oz Barrutia, A. and Blu, T. and Unser, M.", title = "Spline Kernels for Continuous-Space Image Processing", booktitle = "Proceedings of the Twenty-Fifth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'00})", month = "June 5-9,", year = "2000", volume = "{IV}", pages = "2191-2194", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000d" } 
Ichige, K., Blu, T. & Unser, M.,"A Study on Spline Functions and Their Applications to Digital Signal and Image Processing", The Telecommunications Advancement Foundation, Vol. 18 (7(1)), pp. 358-365, January 2003. 
In this paper, we propose a generalized piecewise-linear interpolation method that interpolates signals using two different basis functions, and we report results validating the usefulness of this family of functions in signal and image processing. Like linear approximation, the proposed functions have approximation order 2 and can reproduce staircase functions and polygonal lines exactly. The basis functions are characterized by two real parameters, τ and α: τ is a shift parameter corresponding to the location of the basis functions, and α expresses their dissymmetry. We show that, by varying these parameters, it is possible to improve and optimize the approximation accuracy independently of the input signal or image. Setting the two parameters to τ = 0.21 and α = 1 reproduces shifted-linear interpolation. Beyond this combination, we focus on the setting τ = 0.21 and α = 0.58, which interpolates signals with almost the same accuracy as shifted-linear interpolation. Whereas shifted-linear interpolation requires an IIR filter in the decomposition process, the latter setting can be implemented with FIR filters only, and it greatly reduces the Gibbs (oscillation) phenomenon of shifted-linear interpolation. We evaluate the effectiveness of these parameter settings by measuring the peak SNR between original and multi-rotated digital images, as well as the maximum amplitude of the interpolated images. 
@article{blu2003g, author = "Ichige, K. and Blu, T. and Unser, M.", title = "A Study on Spline Functions and Their Applications to Digital Signal and Image Processing", journal = "The Telecommunications Advancement Foundation", month = "January", year = "2003", volume = "18", number = "7(1)", pages = "358-365", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003g" } 
Ichige, K., Blu, T. & Unser, M.,"Multiwavelet-Like Bases for High Quality Image Interpolation", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing X, San Diego, USA, Vol. 5207, pp. 153-161, August 3-8, 2003. Part I. 
We present a simple but generalized interpolation method for digital images that uses multiwavelet-like basis functions. Most interpolation methods use only one symmetric basis function; for example, standard and shifted piecewise-linear interpolations use the "hat" function only. The proposed method uses q different multiwavelet-like basis functions. The basis functions can be dissymmetric but should preserve the "partition of unity" property for high-quality signal interpolation. The scheme of decomposition and reconstruction of signals by the proposed basis functions can be implemented in a filter-bank form using a separable IIR implementation. An important property of the proposed scheme is that the prefilters for decomposition can be implemented by FIR filters. Recall that shifted-linear interpolation requires IIR prefiltering; here we find a new configuration which reaches almost the same quality as shifted-linear interpolation, while requiring FIR prefiltering only. Moreover, the present basis functions can be explicitly formulated in the time domain, although most (multi)wavelets have no time-domain formula. We specify an optimum configuration of interpolation parameters for image interpolation, and validate the proposed method by computing the PSNR of the difference between multi-rotated images and their original version. 
@inproceedings{blu2003h, author = "Ichige, K. and Blu, T. and Unser, M.", title = "Multiwavelet-Like Bases for High Quality Image Interpolation", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {X}", month = "August 3-8,", year = "2003", volume = "5207", pages = "153-161", note = "Part {I}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003h" } 
Ichige, K., Blu, T. & Unser, M.,"Interpolation of Signals by Generalized Piecewise-Linear Multiple Generators", Proceedings of the Twenty-Eighth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'03), Hong Kong, China, Vol. {VI}, pp. 261-264, April 6-10, 2003. 
This paper presents an interpolation method based on shifted versions of two piecewise-linear generators, which provides approximation order 2 like usual piecewise-linear interpolation; i.e., this method is able to represent the constant and the ramp exactly. Our interpolation is characterized by two real parameters: τ, the location of the generators, and α, related to their dissymmetry. By varying these parameters, we show that it is possible to optimize the quality of the approximation, independently of the function to interpolate. We recover the optimal value of shifted-linear interpolation (τ = 0.21 and α = 1), which requires IIR prefiltering, but we also find a new configuration (τ = 0.21 and α = 0.58) which reaches almost the same quality, while requiring FIR filtering only. This new solution is able to greatly reduce the amount of Gibbs oscillations generated in the shifted-linear interpolation scheme. We validate our finding by computing the PSNR of the difference between multi-rotated images and their original version. 
@inproceedings{blu2003i, author = "Ichige, K. and Blu, T. and Unser, M.", title = "Interpolation of Signals by Generalized Piecewise-Linear Multiple Generators", booktitle = "Proceedings of the Twenty-Eighth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'03})", month = "April 6-10,", year = "2003", volume = "{VI}", pages = "261-264", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003i" } 
Qualcomm Inc., Blu, T., Vetterli, M. & Coulot, L.,"Sparse sampling of signal innovations", International patent WO/2009/096995, August 2009. 
Signals, including signals from outside of the subspace of bandlimited signals associated with the Shannon theorem, are acquired while still providing an acceptable reconstruction. In some aspects, a denoising process is used in conjunction with sparse sampling techniques. For example, a denoising process utilizing a Cadzow algorithm may be used to reduce the amount of noise associated with sampled information. In some aspects, the denoising process may be iterative, such that the denoising process is repeated until the samples are denoised to a sufficient degree. In some aspects, the denoising process converts a set of received samples into another set corresponding to a signal with a Finite Rate of Innovation (FRI), or to an approximation of such a signal. The disclosure relates in some aspects to the combination of a denoising process with annihilating-filter methods to retrieve information from a noisy, sparsely sampled signal. The disclosure relates in some aspects to determining a sampling kernel to be used to sample the signal based on noise associated with the signal. The disclosure relates in some aspects to determining the number of samples to obtain from a signal over a period of time based on noise associated with the signal. The disclosure relates in some aspects to determining the finite number of innovations of a received signal. 
@misc{blu2009g, author = "{Qualcomm Inc.} and Blu, T. and Vetterli, M. and Coulot, L.", title = "Sparse sampling of signal innovations", howpublished = "International patent WO/2009/096995", month = "August", year = "2009", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009g" } 
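A minimal sketch of the Cadzow-style denoising step mentioned in the abstract: alternate between enforcing low rank on a Hankel matrix built from the samples and averaging its anti-diagonals back into a sequence. The function name and parameter choices are ours; the relevant property is that an FRI-type signal with K innovations yields a Hankel matrix of rank K, so a noiseless such signal is a fixed point of the iteration.

```python
import numpy as np

def cadzow_denoise(y, K, n_iter=20):
    """Alternate rank-K Hankel truncation with anti-diagonal averaging."""
    y = np.asarray(y, dtype=complex)
    N = len(y)
    L = N // 2 + 1                     # Hankel matrix height
    M = N - L + 1
    for _ in range(n_iter):
        H = np.array([y[i:i + M] for i in range(L)])     # H[i, j] = y[i + j]
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :K] * s[:K]) @ Vh[:K]                  # best rank-K approximation
        Hf = H[::-1, :]                                  # anti-diagonals -> diagonals
        y = np.array([np.diag(Hf, k).mean() for k in range(-(L - 1), M)])
    return y

# A noiseless sum of K = 2 exponentials is left unchanged (rank-2 Hankel).
n = np.arange(16)
clean = np.exp(2j * np.pi * 0.11 * n) + 0.5 * np.exp(2j * np.pi * 0.37 * n)
out = cadzow_denoise(clean, K=2, n_iter=5)
print(np.max(np.abs(out - clean)))     # ~ 0 (up to numerical precision)
```

On noisy samples the same iteration pushes the sequence toward the nearest rank-K Hankel structure, which is the denoising role it plays ahead of an annihilating-filter step.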
Jacob, M., Blu, T. & Unser, M.,"A Unifying Approach and Interface for Spline-Based Snakes", Progress in Biomedical Optics and Imaging, vol. 2, no. 27, San Diego, USA, Vol. 4322, pp. 340-347, February 19-22, 2001. Part I. 
In this paper, we present different solutions for improving spline-based snakes. First, we demonstrate their minimum-curvature interpolation property, and use it as an argument to get rid of the explicit smoothness constraint. We also propose a new external energy obtained by integrating a nonlinearly preprocessed image in the closed region bounded by the curve. We show that this energy, besides being efficiently computable, is sufficiently general to include the widely used gradient-based schemes, Bayesian schemes, their combinations and discriminant-based approaches. We also introduce two initialization modes and the appropriate constraint energies. We use these ideas to develop a general snake algorithm to track boundaries of closed objects, with a user-friendly interface. 
@inproceedings{blu2001c, author = "Jacob, M. and Blu, T. and Unser, M.", title = "A Unifying Approach and Interface for Spline-Based Snakes", booktitle = "Progress in Biomedical Optics and Imaging, vol. 2, no. 27", month = "February 19-22,", year = "2001", volume = "4322", pages = "340-347", note = "Part {I}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001c" } 
Jacob, M., Blu, T. & Unser, M.,"Exact Computation of Area Moments for Spline and Wavelet Curves", Proceedings of the Fifteenth International Conference on Pattern Recognition (ICPR'00), Barcelona, Spain, Vol. {III}, pp. 131-134, September 3-8, 2000. 
We present an exact algorithm for the computation of the moments of a region bounded by a curve represented in a scaling function or wavelet basis. Using Green's theorem, we show that the computation of the area moments is equivalent to applying a suitable multidimensional filter on the coefficients of the curve and thereafter computing a scalar product. We compare this algorithm with existing methods such as pixel-based approaches and approximation of the region by a polygon. 
@inproceedings{blu2000e, author = "Jacob, M. and Blu, T. and Unser, M.", title = "Exact Computation of Area Moments for Spline and Wavelet Curves", booktitle = "Proceedings of the Fifteenth International Conference on Pattern Recognition ({ICPR'00})", month = "September 3-8,", year = "2000", volume = "{III}", pages = "131-134", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000e" } 
Jacob, M., Blu, T. & Unser, M.,"An Exact Method for Computing the Area Moments of Wavelet and Spline Curves", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23 (6), pp. 633-642, June 2001. 
We present a method for the exact computation of the moments of a region bounded by a curve represented by a scaling function or wavelet basis. Using Green's Theorem, we show that the computation of the area moments is equivalent to applying a suitable multidimensional filter on the coefficients of the curve and thereafter computing a scalar product. The multidimensional filter coefficients are precomputed exactly as the solution of a two-scale relation. To demonstrate the performance improvement of the new method, we compare it with existing methods such as pixel-based approaches and approximation of the region by a polygon. We also propose an alternate scheme when the scaling function is sinc(x). 
@article{blu2001d, author = "Jacob, M. and Blu, T. and Unser, M.", title = "An Exact Method for Computing the Area Moments of Wavelet and Spline Curves", journal = "{IEEE} Transactions on Pattern Analysis and Machine Intelligence", month = "June", year = "2001", volume = "23", number = "6", pages = "633-642", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001d" } 
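In the piecewise-linear special case (a polygon, i.e. a degree-1 spline curve), the Green's-theorem reduction used in this work collapses to the classical shoelace-type contour sums. The sketch below is ours, illustrating only the principle of turning an area integral into a contour computation, not the paper's filter-based algorithm for general scaling-function bases.

```python
# Area and centroid of a closed polygon via Green's theorem:
# A = (1/2) sum (x_i y_{i+1} - x_{i+1} y_i), and the first moments
# follow from similar contour sums (the "shoelace" formulas).
def polygon_area_centroid(xs, ys):
    n = len(xs)
    A = cx = cy = 0.0
    for i in range(n):
        j = (i + 1) % n                      # wrap around to close the curve
        cross = xs[i] * ys[j] - xs[j] * ys[i]
        A += cross
        cx += (xs[i] + xs[j]) * cross
        cy += (ys[i] + ys[j]) * cross
    A *= 0.5
    return A, cx / (6 * A), cy / (6 * A)

# Unit square: area 1, centroid (0.5, 0.5).
print(polygon_area_centroid([0, 1, 1, 0], [0, 0, 1, 1]))
```

The paper's contribution is the analogue of these contour sums for curves in arbitrary wavelet/scaling-function bases, where the per-vertex products are replaced by a precomputed multidimensional filter applied to the curve coefficients.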
Jacob, M., Blu, T. & Unser, M.,"An Error Analysis for the Sampling of Periodic Signals", Proceedings of the Fourth International Conference on Sampling Theory and Applications (SampTA'01), Orlando, USA, pp. 45-48, May 13-17, 2001. 
We analyze the representation of periodic signals in a scaling function basis. This representation is sufficiently general to include the widely used approximation schemes like wavelets, splines and Fourier series representation. We derive a closed-form expression for the approximation error in the scaling function representation. The error formula takes the simple form of a Parseval-like sum, weighted by an appropriate error kernel. This formula may be useful in choosing the right representation for a class of signals. We also experimentally verify the theory in the particular case of description of closed curves. 
@inproceedings{blu2001e, author = "Jacob, M. and Blu, T. and Unser, M.", title = "An Error Analysis for the Sampling of Periodic Signals", booktitle = "Proceedings of the Fourth International Conference on Sampling Theory and Applications ({SampTA'01})", month = "May 13-17,", year = "2001", pages = "45-48", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001e" } 
Jacob, M., Blu, T. & Unser, M.,"Sampling of Periodic Signals: A Quantitative Error Analysis", IEEE Transactions on Signal Processing, Vol. 50 (5), pp. 1153-1159, May 2002. 
We present an exact expression for the L_{2} error that occurs when one approximates a periodic signal in a basis of shifted and scaled versions of a generating function. This formulation is applicable to a wide variety of linear approximation schemes including wavelets, splines, and bandlimited signal expansions. The formula takes the simple form of a Parseval-like relation, where the Fourier coefficients of the signal are weighted against a frequency kernel that characterizes the approximation operator. We use this expression to analyze the behavior of the error as the sampling step approaches zero. We also experimentally verify the expression of the error in the context of the interpolation of closed curves. 
@article{blu2002e, author = "Jacob, M. and Blu, T. and Unser, M.", title = "Sampling of Periodic Signals: {A} Quantitative Error Analysis", journal = "{IEEE} Transactions on Signal Processing", month = "May", year = "2002", volume = "50", number = "5", pages = "1153-1159", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002e" } 
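The Parseval-like relation described in the abstract has the following general shape, written here as a sketch in our own notation (normalizations may differ from the paper): for a 1-periodic signal with Fourier coefficients c_k, approximated at sampling step h in the space generated by a function φ,

```latex
% Squared approximation error as a Parseval-like sum over the Fourier
% coefficients c_k, weighted by a frequency kernel E (notation ours):
\varepsilon^2(h) \;=\; \sum_{k \in \mathbb{Z}} |c_k|^2 \, E(2\pi k h),
% where, for least-squares approximation in the space generated by
% \varphi, the error kernel takes the form
E(\omega) \;=\; 1 \;-\; \frac{|\hat{\varphi}(\omega)|^2}
  {\sum_{n \in \mathbb{Z}} |\hat{\varphi}(\omega + 2\pi n)|^2}.
```

The kernel E depends only on the generating function, which is what makes the formula usable for comparing representations (wavelets, splines, bandlimited expansions) independently of the particular signal.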
Jacob, M., Blu, T. & Unser, M.,"3D Reconstruction of DNA Filaments from Stereo Cryo-Electron Micrographs", Proceedings of the First IEEE International Symposium on Biomedical Imaging (ISBI'02), Washington, USA, Vol. {II}, pp. 597-600, July 7-10, 2002. 
We propose an algorithm for the 3D reconstruction of DNA filaments from a pair of stereo cryo-electron micrographs. The underlying principle is to specify a 3D model of a filament (described as a spline curve) and to fit it to the 2D data using a snake-like algorithm. To drive the snake, we constructed a ridge-enhancing vector field for each of the images based on the maximum output of a bank of rotating matched filters. The magnitude of the field gives a confidence measure for the presence of a filament and the phase indicates its direction. We also propose a fast algorithm to perform the matched filtering. The snake algorithm starts with an initial curve (input by the user) and evolves it so that its projections on the viewing plane are in maximal agreement with the corresponding vector fields. 
@inproceedings{blu2002f, author = "Jacob, M. and Blu, T. and Unser, M.", title = "{3D} Reconstruction of {DNA} Filaments from Stereo Cryo-Electron Micrographs", booktitle = "Proceedings of the First {IEEE} International Symposium on Biomedical Imaging ({ISBI'02})", month = "July 7--10,", year = "2002", volume = "{II}", pages = "597--600", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002f" } 
Jacob, M., Blu, T. & Unser, M.,"Efficient Energies and Algorithms for Parametric Snakes", IEEE Transactions on Image Processing, Vol. 13 (9), pp. 1231-1244, September 2004. 
Parametric active contour models are one of the preferred approaches for image segmentation because of their computational efficiency and simplicity. However, they have a few drawbacks that limit their performance. In this paper, we identify some of these problems and propose efficient solutions to get around them. The widely used gradient-magnitude-based energy is parameter-dependent; its use will negatively affect the parametrization of the curve and, consequently, its stiffness. Hence, we introduce a new edge-based energy that is independent of the parameterization. It is also more robust, since it takes the gradient direction into account as well. We express this energy term as a surface integral, thus unifying it naturally with the region-based schemes. The unified framework enables the user to tune the image energy to the application at hand. We show that parametric snakes can guarantee low-curvature curves, but only if they are described in the curvilinear abscissa. Since normal curve evolution does not ensure constant arc length, we propose a new internal energy term that will force this configuration. The curve evolution can sometimes give rise to closed loops in the contour, which adversely interfere with the optimization algorithm. We propose a curve-evolution scheme that prevents this condition. 
@article{blu2004e, author = "Jacob, M. and Blu, T. and Unser, M.", title = "Efficient Energies and Algorithms for Parametric Snakes", journal = "{IEEE} Transactions on Image Processing", month = "September", year = "2004", volume = "13", number = "9", pages = "1231--1244", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004e" } 
Jacob, M., Blu, T. & Unser, M.,"Shape Estimation of 3D DNA Molecules from Stereo Cryo-Electron Micro-Graphs", Proceedings of the 2004 IEEE International Conference on Image Processing (ICIP'04), Singapore, pp. 1883-1886, October 24-27, 2004. 
We introduce a 3D parametric active contour algorithm for the shape estimation of DNA molecules from stereo cryo-electron micrographs. We consider a 3D filament (consisting of a B-spline skeleton and a specified radial profile) and match its projections with the micrographs using an optimization algorithm. To accelerate the evaluation of the projections, we approximate the global model locally by an elongated blob-like template that is designed to be projection-steerable. This means that the 2D projections of the template at any 3D orientation can be expressed as a linear combination of a few basis functions. Thus, the matching of the template projections is reduced to evaluating a weighted sum of the inner products between the basis functions and the micrographs. We choose an internal energy term that penalizes the total curvature magnitude of the curve. We also use a constraint energy term that forces the curve to have a specified length. The sum of these terms along with the image energy obtained from the matching process is minimized using a conjugate-gradient algorithm. We validate the algorithm using real as well as simulated data. 
@inproceedings{blu2004f, author = "Jacob, M. and Blu, T. and Unser, M.", title = "Shape Estimation of {3D} {DNA} Molecules from Stereo Cryo-Electron Micro-Graphs", booktitle = "Proceedings of the 2004 {IEEE} International Conference on Image Processing ({ICIP'04})", month = "October 24--27,", year = "2004", pages = "1883--1886", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004f" } 
Jacob, M., Blu, T., Vaillant, C., Maddocks, J.H. & Unser, M.,"3D Shape Estimation of DNA Molecules from Stereo Cryo-Electron Micro-Graphs Using a Projection-Steerable Snake", IEEE Transactions on Image Processing, Vol. 15 (1), pp. 214-227, January 2006. 
We introduce a three-dimensional (3D) parametric active contour algorithm for the shape estimation of DNA molecules from stereo cryo-electron micrographs. We estimate the shape by matching the projections of a 3D global shape model with the micrographs; we choose the global model as a 3D filament with a B-spline skeleton and a specified radial profile. The active contour algorithm iteratively updates the B-spline coefficients, which requires us to evaluate the projections and match them with the micrographs at every iteration. Since the evaluation of the projections of the global model is computationally expensive, we propose a fast algorithm based on locally approximating it by elongated blob-like templates. We introduce the concept of projection-steerability and derive a projection-steerable elongated template. Since the two-dimensional projections of such a blob at any 3D orientation can be expressed as a linear combination of a few basis functions, matching the projections of such a 3D template involves evaluating a weighted sum of inner products between the basis functions and the micrographs. The weights are simple functions of the 3D orientation, and the inner products are evaluated efficiently by separable filtering. We choose an internal energy term that penalizes the average curvature magnitude. Since the exact length of the DNA molecule is known a priori, we introduce a constraint energy term that forces the curve to have this specified length. The sum of these energies along with the image energy derived from the matching process is minimized using the conjugate gradients algorithm. We validate the algorithm using real, as well as simulated, data and show that it performs well. 
@article{blu2006c, author = "Jacob, M. and Blu, T. and Vaillant, C. and Maddocks, J.H. and Unser, M.", title = "{3D} Shape Estimation of {DNA} Molecules from Stereo Cryo-Electron Micro-Graphs Using a Projection-Steerable Snake", journal = "{IEEE} Transactions on Image Processing", month = "January", year = "2006", volume = "15", number = "1", pages = "214--227", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006c" } 
Jayashankar, T., Moulin, P., Blu, T. & Gilliam, C.,"LAP-Based Video Frame Interpolation", Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP'19), Taipei, Taiwan, pp. 4195-4199, September 22-25, 2019. 
High-quality video frame interpolation often necessitates accurate motion estimation, which can be obtained using modern optical flow methods. In this paper, we use the recently proposed Local All-Pass (LAP) algorithm to compute the optical flow between two consecutive frames. The resulting flow field is used to perform interpolation using cubic splines. We compare the interpolation results against a well-known optical flow estimation algorithm as well as against a recent convolutional neural network scheme for video frame interpolation. Qualitative and quantitative results show that the LAP algorithm performs fast, high-quality video frame interpolation, and perceptually outperforms the neural network and the Lucas-Kanade method on a variety of test sequences. 
@inproceedings{blu2019f, author = "Jayashankar, T. and Moulin, P. and Blu, T. and Gilliam, C.", title = "{LAP}-Based Video Frame Interpolation", booktitle = "Proceedings of the 2019 {IEEE} International Conference on Image Processing ({ICIP'19})", month = "September 22--25,", year = "2019", pages = "4195--4199", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2019f" } 
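To illustrate the interpolation step of the entry above (a sketch, not the authors' code: the LAP flow estimation itself is omitted, and a known constant flow on a synthetic blob is assumed), the snippet below synthesizes an intermediate frame by warping with cubic-spline resampling:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def interpolate_frame(frame0, flow, t, grid):
    """Synthesize the frame at time t in [0, 1] by warping frame0 along
    the motion field `flow` with cubic-spline (order-3) resampling."""
    Y, X = grid
    dy, dx = flow
    # For a scene translating with velocity u, I_t(x) = I_0(x - t*u)
    return map_coordinates(frame0, [Y - t * dy, X - t * dx],
                           order=3, mode='nearest')

# Toy example: a Gaussian blob translating by (0, 4) pixels between frames
H = W = 64
grid = np.mgrid[0:H, 0:W].astype(float)
blob = lambda cy, cx: np.exp(-((grid[0] - cy)**2 + (grid[1] - cx)**2) / 72.0)
frame0 = blob(32.0, 28.0)
mid = interpolate_frame(frame0, (0.0, 4.0), 0.5, grid)  # close to blob(32, 30)
```

In the paper the flow field comes from the LAP algorithm and generally varies per pixel; the same warping applies with per-pixel `dy`, `dx` arrays.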
Jiang, B., Jin, T., Blu, T. & Chen, W.,"Probing chemical exchange using quantitative spin-lock R_1ρ asymmetry imaging with adiabatic RF pulses", Magnetic Resonance in Medicine, Vol. 82 (5), pp. 1767-1781, November 2019. 
Purpose: CEST is commonly used to probe the effects of chemical exchange. Although R1ρ asymmetry quantification has also been described as a promising option for detecting chemical exchange effects, the existing acquisition approaches are highly susceptible to B1 RF and B0 field inhomogeneities. To address this problem, we report a new R1ρ asymmetry imaging approach, ACiTIP, which is based on the previously reported techniques of irradiation with toggling inversion preparation (iTIP) and adiabatic continuous-wave constant-amplitude spin-lock RF pulses (ACCSL). We also derived the optimal spin-lock RF pulse B1 amplitude that yields the greatest R1ρ asymmetry. Methods: Bloch-McConnell simulations were used to verify the analytical formula derived for the optimal spin-lock RF pulse B1 amplitude. The performance of the ACiTIP approach was compared to that of the iTIP approach based on hard RF pulses and to the R1ρ-spectrum acquired using adiabatic RF pulses with the conventional fitting method. Comparisons were performed using Bloch-McConnell simulations, phantom, and in vivo experiments at 3.0T. Results: The analytical prediction of the optimal B1 was validated. Compared to the other two approaches, the ACiTIP approach was more robust under the influence of B1 RF and B0 field inhomogeneities. A linear relationship was observed between the measured R1ρ asymmetry and the metabolite concentration. Conclusion: The ACiTIP approach can probe the chemical exchange effect more robustly than the existing R1ρ asymmetry acquisition approaches. Therefore, ACiTIP is a promising technique for metabolite imaging based on the chemical exchange effect. 
@article{blu2019g, author = "Jiang, B. and Jin, T. and Blu, T. and Chen, W.", title = "Probing chemical exchange using quantitative spin-lock ${R}_{1\rho}$ asymmetry imaging with adiabatic {RF} pulses", journal = "Magnetic Resonance in Medicine", month = "November", year = "2019", volume = "82", number = "5", pages = "1767--1781", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2019g" } 
Kandaswamy, D., Blu, T., Spinelli, L., Michel, C. & Van De Ville, D.,"EEG Source Localization by Multi-Planar Analytic Sensing", Proceedings of the Fifth IEEE International Symposium on Biomedical Imaging (ISBI'08), Paris, France, pp. 1075-1078, May 14-17, 2008. 
Source localization from EEG surface measurements is an important problem in neuroimaging. We propose a new mathematical framework to estimate the parameters of a multidipole source model. To that aim, we perform 2D analytic sensing in multiple planes. The estimation of the projection on each plane of the dipoles' positions, which is a nonlinear problem, is reduced to polynomial root finding. The 3D information is then recovered as a special case of tomographic reconstruction. The feasibility of the proposed approach is shown for both synthetic and experimental data. 
@inproceedings{blu2008f, author = "Kandaswamy, D. and Blu, T. and Spinelli, L. and Michel, C. and Van De Ville, D.", title = "{EEG} Source Localization by Multi-Planar Analytic Sensing", booktitle = "Proceedings of the Fifth {IEEE} International Symposium on Biomedical Imaging ({ISBI'08})", month = "May 14--17,", year = "2008", pages = "1075--1078", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008f" } 
Kandaswamy, D., Blu, T., Spinelli, L., Michel, C. & Van De Ville, D.,"Local Multilayer Analytic Sensing for EEG Source Localization: Performance Bounds and Experimental Results", Proceedings of the Eighth IEEE International Symposium on Biomedical Imaging (ISBI'11), Chicago, USA, pp. 479-483, March 30-April 2, 2011. 
Analytic sensing is a new mathematical framework to estimate the parameters of a multidipole source model from boundary measurements. The method deploys two working principles. First, the sensing principle relates the boundary measurements to the volumetric interactions of the sources with the so-called "analytic sensor," a test function that is concentrated around a singular point outside the domain of interest. Second, the annihilation principle allows retrieving the projection of the dipoles' positions in a single shot by polynomial root finding. Here, we propose to apply analytic sensing in a local way; i.e., the poles do not surround the complete domain. By combining two local projections of the (nearby) dipolar sources, we are able to reconstruct the full 3D information. We demonstrate the feasibility of the proposed approach for both synthetic and experimental data, together with the theoretical lower bounds of the localization error. 
@inproceedings{blu2011b, author = "Kandaswamy, D. and Blu, T. and Spinelli, L. and Michel, C. and Van De Ville, D.", title = "Local Multilayer Analytic Sensing for {EEG} Source Localization: Performance Bounds and Experimental Results", booktitle = "Proceedings of the Eighth {IEEE} International Symposium on Biomedical Imaging ({ISBI'11})", month = "March 30--April 2,", year = "2011", pages = "479--483", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011b" } 
Kandaswamy, D., Blu, T. & Van De Ville, D.,"Analytic Sensing: Direct Recovery of Point Sources from Planar Cauchy Boundary Measurements", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet XII, San Diego, USA, Vol. 6701, pp. 67011Y-1-67011Y-6, August 26-29, 2007. 
Inverse problems play an important role in engineering. A problem that often occurs in electromagnetics (e.g. EEG) is the estimation of the locations and strengths of point sources from boundary data. We propose a new technique, for which we coin the term “analytic sensing”. First, generalized measures are obtained by applying Green's theorem to selected functions that are analytic in a given domain and at the same time localized to “sense” the sources. Second, we use the finite-rate-of-innovation framework to determine the locations of the sources. Hence, we construct a polynomial whose roots are the sources' locations. Finally, the strengths of the sources are found by solving a linear system of equations. Preliminary results, using synthetic data, demonstrate the feasibility of the proposed method. 
@inproceedings{blu2007e, author = "Kandaswamy, D. and Blu, T. and Van De Ville, D.", title = "Analytic Sensing: {D}irect Recovery of Point Sources from Planar {C}auchy Boundary Measurements", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet {XII}", month = "August 26--29,", year = "2007", volume = "6701", pages = "67011Y-1--67011Y-6", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007e" } 
Kandaswamy, D., Blu, T. & Van De Ville, D.,"Analytic Sensing: Noniterative Retrieval of Point Sources from Boundary Measurements", SIAM Journal on Scientific Computing, Vol. 31 (4), pp. 3179-3194, 2009. 
We consider the problem of locating point sources in the planar domain from overdetermined boundary measurements of solutions of Poisson's equation. In this paper, we propose a novel technique, termed "analytic sensing," which combines the application of Green's theorem to functions with vanishing Laplacian—known as the "reciprocity gap" principle—with the careful selection of analytic functions that "sense" the manifestation of the sources in order to determine their positions and intensities. Using this formalism we express the problem at hand as a generalized sampling problem, where the signal to be reconstructed is the source distribution. To determine the positions of the sources, which is a nonlinear problem, we extend the annihilating-filter method, which reduces the problem to solving a linear system of equations for a polynomial whose roots are the positions of the point sources. Once these positions are found, resolving the according intensities boils down to solving a linear system of equations. We demonstrate the performance of our technique in the presence of noise by comparing the achieved accuracy with the theoretical lower bound provided by Cramér-Rao theory. 
@article{blu2009d, author = "Kandaswamy, D. and Blu, T. and Van De Ville, D.", title = "Analytic Sensing: {N}oniterative Retrieval of Point Sources from Boundary Measurements", journal = "{SIAM} Journal on Scientific Computing", year = "2009", volume = "31", number = "4", pages = "3179--3194", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009d" } 
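To make the annihilation step of the entry above concrete (an illustrative sketch, not the paper's code: the generalized measurements are simulated directly as power sums rather than computed from boundary integrals, and the source positions and intensities are made-up values), the following recovers point sources from their first 2M moments:

```python
import numpy as np

# Hypothetical sources: complex positions z_k and intensities c_k
z_true = np.array([0.3 + 0.2j, -0.1 + 0.5j])
c_true = np.array([1.0, 2.0])
M = len(z_true)

# Generalized measurements mu_n = sum_k c_k z_k^n (in analytic sensing these
# come from boundary integrals against analytic test functions)
mu = np.array([np.sum(c_true * z_true**n) for n in range(2 * M)])

# Annihilation: p(z) = z^M + a_{M-1} z^{M-1} + ... + a_0 with roots at the
# positions satisfies mu_{n+M} + sum_i a_i mu_{n+i} = 0 for n = 0..M-1
A = np.array([[mu[n + i] for i in range(M)] for n in range(M)])
a = np.linalg.solve(A, -mu[M:2 * M])
z_est = np.roots(np.concatenate(([1.0], a[::-1])))  # positions = roots

# Intensities: solve the Vandermonde system mu_n = sum_k c_k z_k^n
V = np.vander(z_est, N=M, increasing=True).T  # V[n, k] = z_k^n
c_est = np.linalg.solve(V, mu[:M])
```

The same pattern scales to M sources given 2M measurements; in the paper, noise is handled through overdetermined boundary measurements and the accuracy is assessed against the Cramér-Rao bound.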
Kandaswamy, D., Blu, T. & Van De Ville, D.,"Analytic Sensing for Multi-Layer Spherical Models with Application to EEG Source Imaging", Inverse Problems and Imaging, Vol. 7 (4), pp. 1251-1270, November 2013. 
Source imaging maps boundary measurements back to the underlying generators within the domain; e.g., retrieving the parameters of the generating dipoles from electrical potential measurements on the scalp, as in electroencephalography (EEG). Fitting such a parametric source model is nonlinear in the positions of the sources, and renewed interest in mathematical imaging has led to several promising approaches. One important step in these methods is the application of a sensing principle that links the boundary measurements to volumetric information about the sources. This principle is based on the divergence theorem and a mathematical test function that needs to be a homogeneous solution of the governing equations (i.e., Poisson's equation). For a specific choice of the test function, we have devised an algebraic noniterative source localization technique for which we have coined the term "analytic sensing". Until now, this sensing principle has been applied to homogeneous-conductivity spherical models only. Here, we extend it to multi-layer spherical models that are commonly applied in EEG. We obtain a closed-form expression for the test function that can then be applied for subsequent localization. A simulation study shows the feasibility of the proposed approach. 
@article{blu2013d, author = "Kandaswamy, D. and Blu, T. and Van De Ville, D.", title = "Analytic Sensing for Multi-Layer Spherical Models with Application to {EEG} Source Imaging", journal = "Inverse Problems and Imaging", month = "November", year = "2013", volume = "7", number = "4", pages = "1251--1270", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013d" } 
Khalidov, I., Blu, T. & Unser, M.,"Generalized L-Spline Wavelet Bases", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet XI, San Diego, USA, Vol. 5914, pp. 59140F-1-59140F-8, July 31-August 3, 2005. 
We build wavelet-like functions based on a parametrized family of pseudo-differential operators L_{v} that satisfy some admissibility and scalability conditions. The shifts of the generalized B-splines, which are localized versions of the Green function of L_{v}, generate a family of L-spline spaces. These spaces have an approximation order equal to the order of the underlying operator. A sequence of embedded spaces is obtained by choosing a dyadic scale progression a = 2^{i}. The consecutive inclusion of the spaces yields the refinement equation, where the scaling filter depends on the scale. The generalized L-wavelets are then constructed as basis functions for the orthogonal complements of the spline spaces. The vanishing-moment property of conventional wavelets is generalized to a vanishing null-space-element property. In spite of the scale dependence of the filters, the wavelet decomposition can be performed using an adapted version of Mallat's filterbank algorithm. 
@inproceedings{blu2005e, author = "Khalidov, I. and Blu, T. and Unser, M.", title = "Generalized ${\mathrm{L}}$-Spline Wavelet Bases", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet {XI}", month = "July 31--August 3,", year = "2005", volume = "5914", pages = "59140F-1--59140F-8", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005e" } 
Khalidov, I., Van De Ville, D., Blu, T. & Unser, M.,"Construction of Wavelet Bases That Mimic the Behaviour of Some Given Operator", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet XII, San Diego, USA, Vol. 6701, pp. 67010S-1-67010S-7, August 26-29, 2007. 
Probably the most important property of wavelets for signal processing is their multiscale derivative-like behavior when applied to functions. In order to extend the class of problems that can profit from wavelet-based techniques, we propose to build new families of wavelets that behave like an arbitrary scale-covariant operator. Our extension is general and includes many known wavelet bases. At the same time, the method takes advantage of a fast filterbank decomposition-reconstruction algorithm. We give necessary conditions for the scale-covariant operator to admit our wavelet construction, and we provide examples of new wavelets that can be obtained with our method. 
@inproceedings{blu2007f, author = "Khalidov, I. and Van De Ville, D. and Blu, T. and Unser, M.", title = "Construction of Wavelet Bases That Mimic the Behaviour of Some Given Operator", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet {XII}", month = "August 26--29,", year = "2007", volume = "6701", pages = "67010S-1--67010S-7", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007f" } 
Küstner, T., Pan, J., Gilliam, C., Qi, H., Cruz, G., Hammernik, K., Yang, B., Blu, T., Rueckert, D., Botnar, R., Prieto, C. & Gatidis, S.,"Deep-Learning Based Motion-Corrected Image Reconstruction in 4D Magnetic Resonance Imaging of the Body Trunk", Proceedings of the 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Auckland, NZ, pp. 976-985, December 7-10, 2020. 
Respiratory and cardiac motion can cause artifacts in magnetic resonance imaging of the body trunk if patients cannot hold their breath or triggered acquisitions are not practical. Retrospective correction strategies usually cope with motion by using fast imaging sequences with integrated motion tracking under free-movement conditions. These acquisitions perform sub-Nyquist sampling and retrospectively bin the data into the respective motion states, yielding subsampled and motion-resolved k-space data. The motion-resolved k-spaces are linked to each other by nonrigid deformation fields. The accurate estimation of such motion is thus an important task in the successful correction of respiratory and cardiac motion. Usually this problem is formulated in image space via diffusion, parametric-spline, or optical-flow methods. Image-based registration can, however, be impaired by aliasing artifacts or by estimation from low-resolution images. Subsequently, any motion-corrected reconstruction can be biased by errors in the deformation fields. In this work, we propose a novel deep-learning based motion-corrected 4D (3D spatial + time) image reconstruction which combines a nonrigid registration network and a (3+1)D reconstruction network. Nonrigid motion is estimated directly in k-space based on an optical-flow idea and incorporated into the reconstruction network. The proposed method is evaluated on in vivo 4D motion-resolved magnetic resonance images of patients with suspected liver or lung metastases and of healthy subjects. 
@inproceedings{blu2020i, author = "K\"ustner, T. and Pan, J. and Gilliam, C. and Qi, H. and Cruz, G. and Hammernik, K. and Yang, B. and Blu, T. and Rueckert, D. and Botnar, R. and Prieto, C. and Gatidis, S.", title = "Deep-Learning Based Motion-Corrected Image Reconstruction in {4D} Magnetic Resonance Imaging of the Body Trunk", booktitle = "Proceedings of the 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)", month = "December 7--10,", year = "2020", pages = "976--985", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020i" } 
Küstner, T., Pan, J., Qi, H., Cruz, G., Gilliam, C., Blu, T., Yang, B., Gatidis, S., Botnar, R. & Prieto, C.,"LAPNet: Nonrigid Registration derived in k-space for Magnetic Resonance Imaging", IEEE Transactions on Medical Imaging, Vol. 40 (12), pp. 3686-3697, December 2021. 
Physiological motion, such as cardiac and respiratory motion, during Magnetic Resonance (MR) image acquisition can cause image artifacts. Motion correction techniques have been proposed to compensate for these types of motion during thoracic scans, relying on accurate motion estimation from undersampled motion-resolved reconstruction. A particular interest and challenge lie in the derivation of reliable nonrigid motion fields from the undersampled motion-resolved data. Motion estimation is usually formulated in image space via diffusion, parametric-spline, or optical-flow methods. However, image-based registration can be impaired by remaining aliasing artifacts due to the undersampled motion-resolved reconstruction. In this work, we describe a formalism to perform nonrigid registration directly in the sampled Fourier space, i.e., k-space. We propose a deep-learning based approach to perform fast and accurate nonrigid registration from the undersampled k-space data. The basic working principle originates from the Local All-Pass (LAP) technique, a recently introduced optical-flow-based registration. The proposed LAPNet is compared against traditional and deep-learning image-based registrations and tested on fully sampled and highly accelerated (with two undersampling strategies) 3D respiratory motion-resolved MR images in a cohort of 40 patients with suspected liver or lung metastases and 25 healthy subjects. The proposed LAPNet provided consistent and superior performance to image-based approaches throughout different sampling trajectories and acceleration factors. 
@article{blu2021d, author = "K{\"u}stner, T. and Pan, J. and Qi, H. and Cruz, G. and Gilliam, C. and Blu, T. and Yang, B. and Gatidis, S. and Botnar, R. and Prieto, C.", title = "{LAPNet}: Nonrigid Registration derived in {k}-space for Magnetic Resonance Imaging", journal = "IEEE Transactions on Medical Imaging", month = "December", year = "2021", volume = "40", number = "12", pages = "3686--3697", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2021d" } 
Küstner, T., Schwartz, M., Martirosian, P., Gatidis, S., Seith, F., Gilliam, C., Blu, T., Fayad, H., Visvikis, D., Schick, F., Yang, B., Schmidt, H. & Schwenzer, N.F.,"MR-based respiratory and cardiac motion correction for PET imaging", Medical Image Analysis, Vol. 42, pp. 129-144, December 2017. 
Purpose: To develop a motion correction for Positron Emission Tomography (PET) using simultaneously acquired magnetic resonance (MR) images within 90 seconds. Methods: A 90-second MR acquisition allows the generation of a cardiac and respiratory motion model of the body trunk. Thereafter, further diagnostic MR sequences can be recorded during the PET examination without any limitation. To provide full PET scan-time coverage, a sensor fusion approach maps external motion signals (respiratory belt, ECG-derived respiration signal) to a complete surrogate signal on which the retrospective data binning is performed. A joint Compressed Sensing reconstruction and motion estimation of the subsampled data provides motion-resolved MR images (respiratory + cardiac). A 1-point Dixon method is applied to these MR images to derive a motion-resolved attenuation map. The motion model and the attenuation map are fed to the Customizable and Advanced Software for Tomographic Reconstruction (CASToR) PET reconstruction system, in which the motion correction is incorporated. All reconstruction steps are performed online on the scanner via Gadgetron to provide a clinically feasible setup for improved general applicability. The method was evaluated on 36 patients with suspected liver or lung metastasis in terms of lesion quantification (SUVmax, SNR, contrast), delineation (FWHM, slope steepness), and diagnostic confidence level (3-point Likert scale). Results: A motion correction could be conducted for all patients; however, moving lesions were observed in only 30 patients. For the 134 malignant lesions examined, average improvements of 22% in lesion quantification, 64% in delineation, and 23% in diagnostic confidence level were achieved. Conclusion: The proposed method provides a clinically feasible setup for respiratory and cardiac motion correction of PET data by simultaneous short-term MRI. The acquisition sequence and all reconstruction steps are publicly available to foster multicenter studies and various motion-correction scenarios. 
@article{blu2017j, author = "K\"ustner, T. and Schwartz, M. and Martirosian, P. and Gatidis, S. and Seith, F. and Gilliam, C. and Blu, T. and Fayad, H. and Visvikis, D. and Schick, F. and Yang, B. and Schmidt, H. and Schwenzer, N.F.", title = "{MR}-based respiratory and cardiac motion correction for {PET} imaging", journal = "Medical Image Analysis", month = "December", year = "2017", volume = "42", pages = "129--144", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017j" } 
Kybic, J., Blu, T. & Unser, M.,"Variational Approach to Tomographic Reconstruction", Progress in Biomedical Optics and Imaging, vol. 2, no. 27, San Diego, USA, Vol. 4322, pp. 30-39, February 19-22, 2001. Part I. 
We formulate the tomographic reconstruction problem in a variational setting. The object to be reconstructed is considered as a continuous density function, unlike in pixel-based approaches. The measurements are modeled as linear operators (Radon transform), integrating the density function along the ray path. The criterion that we minimize consists of a data term and a regularization term. The data term represents the inconsistency between applying the measurement model to the density function and the real measurements. The regularization term corresponds to the smoothness of the density function. We show that this leads to a solution lying in a finite-dimensional vector space which can be expressed as a linear combination of generating functions. The coefficients of this linear combination are determined from a linear system of equations, solvable either directly or by using an iterative approach. Our experiments show that our new variational method gives results comparable to the classical filtered backprojection for a high number of measurements (projection angles and sensor resolution). The new method performs better for a medium number of measurements. Furthermore, the variational approach gives usable results even with very few measurements, where filtered backprojection fails. Our method reproduces amplitudes more faithfully and can cope with high noise levels; it can be adapted to various characteristics of the acquisition device. 
@inproceedings{blu2001f, author = "Kybic, J. and Blu, T. and Unser, M.", title = "Variational Approach to Tomographic Reconstruction", booktitle = "Progress in Biomedical Optics and Imaging, vol. 2, no. 27", month = "February 19--22,", year = "2001", volume = "4322", pages = "30--39", note = "Part {I}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001f" } 
Kybic, J., Blu, T. & Unser, M.,"Generalized Sampling: A Variational Approach", Proceedings of the Fourth International Conference on Sampling Theory and Applications (SampTA'01), Orlando, USA, pp. 151-154, May 13-17, 2001. 
We consider the problem of reconstructing a multidimensional and multivariate function ƒ: ℜ^{m} → ℜ^{n} from the discretely and irregularly sampled responses of q linear shift-invariant filters. Unlike traditional approaches which reconstruct the function in some signal space V, our reconstruction is optimal in the sense of a plausibility criterion J. The reconstruction is either consistent with the measurements, or minimizes the consistency error. There is no band-limiting restriction for the input signals. We show that important characteristics of the reconstruction process are induced by the properties of the criterion J. We give the reconstruction formula and apply it to several practical cases. 
@inproceedings{blu2001g, author = "Kybic, J. and Blu, T. and Unser, M.", title = "Generalized Sampling: {A} Variational Approach", booktitle = "Proceedings of the Fourth International Conference on Sampling Theory and Applications ({SampTA'01})", month = "May 13--17,", year = "2001", pages = "151--154", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001g" } 
Kybic, J., Blu, T. & Unser, M.,"Generalized Sampling: A Variational Approach—Part II: Applications", IEEE Transactions on Signal Processing, Vol. 50 (8), pp. 1977-1985, August 2002. 
The variational reconstruction theory from a companion paper finds a solution that is consistent with some linear constraints and minimizes a quadratic plausibility criterion. It is suitable for treating vector and multidimensional signals. Here, we apply the theory to a generalized sampling system consisting of a multichannel filterbank followed by nonuniform sampling. We provide ready-made formulas, which should permit application of the technique directly to problems at hand. We comment on the practical aspects of the method, such as numerical stability and speed. We show the reconstruction formula and apply it to several practical examples, including a new variational formulation of derivative sampling, landmark warping, and tomographic reconstruction. 
@article{blu2002g, author = "Kybic, J. and Blu, T. and Unser, M.", title = "Generalized Sampling: {A} Variational Approach---{P}art {II}: {A}pplications", journal = "{IEEE} Transactions on Signal Processing", month = "August", year = "2002", volume = "50", number = "8", pages = "1977--1985", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002g" } 
Kybic, J., Blu, T. & Unser, M.,"Generalized Sampling: A Variational Approach—Part I: Theory", IEEE Transactions on Signal Processing, Vol. 50 (8), pp. 1965-1976, August 2002. 
We consider the problem of reconstructing a multidimensional vector function f_{in}: ℜ^{m} → ℜ^{n} from a finite set of linear measures. These can be irregularly sampled responses of several linear filters. Traditional approaches reconstruct in an a priori given space, e.g., the space of bandlimited functions. Instead, we have chosen to specify a reconstruction that is optimal in the sense of a quadratic plausibility criterion J. First, we present the solution of the generalized interpolation problem. Later, we also consider the approximation problem, and we show that both lead to the same class of solutions. Imposing generally desirable properties on the reconstruction largely limits the choice of the criterion J. Linearity leads to a quadratic criterion based on bilinear forms. Specifically, we show that the requirements of translation-, rotation-, and scale-invariance restrict the form of the criterion to essentially a one-parameter family. We show that the solution can be obtained as a linear combination of generating functions. We provide analytical techniques to find these functions and the solution itself. Practical implementation issues and examples of applications are treated in a companion paper. 
@article{blu2002h, author = "Kybic, J. and Blu, T. and Unser, M.", title = "Generalized Sampling: {A} Variational Approach---{P}art {I}: {T}heory", journal = "{IEEE} Transactions on Signal Processing", month = "August", year = "2002", volume = "50", number = "8", pages = "1965--1976", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002h" } 
Lakshman, H., Schwartz, H., Blu, T. & Wiegand, T.,"Generalized interpolation for motion compensated prediction", Proceedings of the 2011 IEEE International Conference on Image Processing (ICIP'11), Brussels, Belgium, pp. 1213-1216, September 11-14, 2011. 
Fractional sample interpolation with FIR filters is commonly used for motion compensated prediction (MCP). The FIR filtering can be viewed as a signal decomposition using restricted basis functions. The concept of generalized interpolation provides a greater degree of freedom for selecting basis functions. We implemented generalized interpolation using a combination of short IIR and FIR filters. An efficient multiplication-free design of the algorithm that is suited for hardware implementation is shown. Compared to a 6-tap FIR interpolation filter, average rate savings of 3.1% are observed. A detailed analysis of the complexity and memory bandwidth cycles compared to existing interpolation techniques for MCP is provided. 
@inproceedings{blu2011c, author = "Lakshman, H. and Schwartz, H. and Blu, T. and Wiegand, T.", title = "Generalized interpolation for motion compensated prediction", booktitle = "Proceedings of the 2011 {IEEE} International Conference on Image Processing ({ICIP'11})", month = "September 11--14,", year = "2011", pages = "1213--1216", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011c" } 
Langoju, R.V.V.L., Blu, T. & Unser, M.,"Resolution Enhancement in Optical Coherence Tomography", 2004 Annual Meeting of the Swiss Society of Biomedical Engineering (SSBE'04), Zürich, Switzerland, September 2-3, 2004. poster 9. 
OCT performs high-resolution, cross-sectional tomographic imaging of the internal structure in materials and biological systems by measuring the coherent part of the reflected light. The physical depth resolution in OCT depends on the coherence length of the light source and lies around 10-15 μm. The new parametric super-resolution method described in this paper does not depend on the coherence length of the light source, but rather on the noise level of the measurement. The key idea is to describe the OCT measurement of a multilayer sample by a parametric model containing the location of each layer and its amplitude. We then find these parameters by minimizing the distance between the model and the measurement. 
@inproceedings{blu2004g, author = "Langoju, R.V.V.L. and Blu, T. and Unser, M.", title = "Resolution Enhancement in Optical Coherence Tomography", booktitle = "2004 Annual Meeting of the Swiss Society of Biomedical Engineering ({SSBE'04})", month = "September 2--3,", year = "2004", note = "poster 9", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004g" } 
Li, J., Gilliam, C. & Blu, T.,"A multi-frame optical flow spot tracker", Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP'15), Québec City, Canada, pp. 3670-3674, September 27-30, 2015. 
Accurate and robust spot tracking is a necessary tool for quantitative motion analysis in fluorescence microscopy images. Few trackers, however, consider the underlying dynamics present in biological systems. For example, the collective motion of cells often exhibits both fast dynamics, i.e. Brownian motion, and slow dynamics, i.e. time-invariant stationary motion. In this paper, we propose a novel multi-frame tracker that exploits this stationary motion. More precisely, we first estimate the stationary motion and then use it to guide the spot tracker. We obtain the stationary motion by adapting a recent optical flow algorithm that relates one image to another locally using an all-pass filter. We perform this operation over all the image frames simultaneously and estimate a single, stationary optical flow. We compare the proposed tracker with two existing techniques and show that our approach is more robust to high noise and varying structure. In addition, we also show initial experiments on real microscopy images. 
@inproceedings{blu2015f, author = "Li, J. and Gilliam, C. and Blu, T.", title = "A multi-frame optical flow spot tracker", booktitle = "Proceedings of the 2015 {IEEE} International Conference on Image Processing ({ICIP'15})", month = "September 27--30,", year = "2015", pages = "3670--3674", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2015f" } 
Li, J., Luisier, F. & Blu, T.,"Deconvolution of Poissonian Images with the PURE-LET Approach", Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP'16), Phoenix, AZ, USA, pp. 2708-2712, September 25-28, 2016. Best Paper Runner-Up Award. 
We propose a non-iterative image deconvolution algorithm for data corrupted by Poisson noise. Many applications involve such a problem, ranging from astronomical to biological imaging. We parametrize the deconvolution process as a linear combination of elementary functions, termed a linear expansion of thresholds (LET). This parametrization is then optimized by minimizing a robust estimate of the mean squared error, the "Poisson unbiased risk estimate" (PURE). Each elementary function consists of a Wiener filtering followed by a pointwise thresholding of undecimated Haar wavelet coefficients. In contrast to existing approaches, the proposed algorithm merely amounts to solving a linear system of equations, which has a fast and exact solution. Simulation experiments over various noise levels indicate that the proposed method outperforms current state-of-the-art techniques, in terms of both restoration quality and computational time. 
@inproceedings{blu2016d, author = "Li, J. and Luisier, F. and Blu, T.", title = "Deconvolution of {P}oissonian Images with the {PURE-LET} Approach", booktitle = "Proceedings of the 2016 {IEEE} International Conference on Image Processing ({ICIP'16})", month = "September 25--28,", year = "2016", pages = "2708--2712", note = "\textbf{Best Paper Runner-Up Award}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2016d" } 
Li, J., Luisier, F. & Blu, T.,"PURE-LET Deconvolution of 3D Fluorescence Microscopy Images", Proceedings of the Fourteenth IEEE International Symposium on Biomedical Imaging (ISBI'17), Melbourne, Australia, pp. 723-727, April 18-21, 2017. Best student paper award (2nd place). 
Three-dimensional (3D) deconvolution microscopy is very effective in improving the quality of fluorescence microscopy images. In this work, we present an efficient approach for the deconvolution of 3D fluorescence microscopy images based on the recently developed PURE-LET algorithm. By combining multiple Wiener filtering and wavelet denoising, we parametrize the deconvolution process as a linear combination of elementary functions. Then the Poisson unbiased risk estimate (PURE) is used to obtain the optimal coefficients. The proposed approach is non-iterative and outperforms existing techniques (usually, variants of the Richardson-Lucy algorithm) in terms of both computational efficiency and quality. We illustrate its effectiveness on both synthetic and real data. 
@inproceedings{blu2017b, author = "Li, J. and Luisier, F. and Blu, T.", title = "{PURE-LET} Deconvolution of {3D} Fluorescence Microscopy Images", booktitle = "Proceedings of the Fourteenth {IEEE} International Symposium on Biomedical Imaging ({ISBI'17})", month = "April 18--21,", year = "2017", pages = "723--727", note = "\textbf{Best student paper award (2nd place)}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017b" } 
Li, J., Luisier, F. & Blu, T.,"PURE-LET Image Deconvolution", IEEE Transactions on Image Processing, Vol. 27 (1), pp. 92-105, January 2018. 
We propose a non-iterative image deconvolution algorithm for data corrupted by Poisson or mixed Poisson-Gaussian noise. Many applications involve such a problem, ranging from astronomical to biological imaging. We parametrize the deconvolution process as a linear combination of elementary functions, termed a linear expansion of thresholds (LET). This parametrization is then optimized by minimizing a robust estimate of the true mean squared error, the Poisson unbiased risk estimate (PURE). Each elementary function consists of a Wiener filtering followed by a pointwise thresholding of undecimated Haar wavelet coefficients. In contrast to existing approaches, the proposed algorithm merely amounts to solving a linear system of equations, which has a fast and exact solution. Simulation experiments over different types of convolution kernels and various noise levels indicate that the proposed method outperforms state-of-the-art techniques, in terms of both restoration quality and computational complexity. Finally, we present some results on real confocal fluorescence microscopy images, and demonstrate the potential applicability of the proposed method for improving the quality of these images. 
@article{blu2018a, author = "Li, J. and Luisier, F. and Blu, T.", title = "{PURE-LET} Image Deconvolution", journal = "IEEE Transactions on Image Processing", month = "January", year = "2018", volume = "27", number = "1", pages = "92--105", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018a" } 
Li, J., Xue, F. & Blu, T.,"Fast and Accurate 3D PSF Computation for Fluorescence Microscopy", Journal of the Optical Society of America A, Vol. 34 (6), pp. 1029-1034, June 2017. 
The point-spread function (PSF) plays a fundamental role in fluorescence microscopy. A realistic and accurately calculated PSF model can significantly improve the performance in 3D deconvolution microscopy and also the localization accuracy in single-molecule microscopy. In this work, we propose a fast and accurate approximation of the Gibson-Lanni model, which has been shown to represent the PSF suitably under a variety of imaging conditions. We express the Kirchhoff integral in this model as a linear combination of rescaled Bessel functions, thus providing an integral-free way of performing the calculation. The explicit approximation error in terms of the parameters is given numerically. Experiments demonstrate that the proposed approach results in a significantly smaller computational time compared with current state-of-the-art techniques to achieve the same accuracy. This approach can also be extended to other microscopy PSF models. 
@article{blu2017e, author = "Li, J. and Xue, F. and Blu, T.", title = "Fast and Accurate {3D} {PSF} Computation for Fluorescence Microscopy", journal = "Journal of the Optical Society of America A", month = "June", year = "2017", volume = "34", number = "6", pages = "1029--1034", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017e" } 
Li, J., Xue, F. & Blu, T.,"Gaussian Blur Estimation For Photon-Limited Images", Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP'17), Beijing, China, pp. 495-497, September 17-20, 2017. 
Blur estimation is critical to blind image deconvolution. In this work, taking a Gaussian kernel as an example, we propose an approach to estimate the blur size for photon-limited images. The estimation is based on the minimization of a novel criterion, blur-PURE (Poisson unbiased risk estimate), which makes use of the Poisson noise statistics of the measurement. Experimental results demonstrate the effectiveness of the proposed method in various scenarios. This approach can then be plugged into our recent PURE-LET deconvolution algorithm, and an example on real fluorescence microscopy data is presented. 
@inproceedings{blu2017f, author = "Li, J. and Xue, F. and Blu, T.", title = "Gaussian Blur Estimation For Photon-Limited Images", booktitle = "Proceedings of the 2017 {IEEE} International Conference on Image Processing ({ICIP'17})", month = "September 17--20,", year = "2017", pages = "495--497", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017f" } 
Li, J., Xue, F. & Blu, T.,"Accurate 3D PSF Estimation from a Wide-Field Microscopy Image", Proceedings of the Fifteenth IEEE International Symposium on Biomedical Imaging (ISBI'18), Washington, DC, USA, pp. 501-504, April 4-7, 2018. 
The 3D point-spread function (PSF) plays a fundamental role in wide-field fluorescence microscopy. An accurate PSF estimation can significantly improve the performance of deconvolution algorithms. In this work, we propose a calibration-free method to obtain the PSF directly from the acquired image. Specifically, we first parametrize the spherically aberrated PSF as a linear combination of a few basis functions. The coefficients of these basis functions are then obtained iteratively by minimizing a novel criterion, which is derived from the mixed Poisson-Gaussian noise statistics. Experiments demonstrate that the proposed approach results in highly accurate PSF estimates. 
@inproceedings{blu2018f, author = "Li, J. and Xue, F. and Blu, T.", title = "Accurate {3D} {PSF} Estimation from a Wide-Field Microscopy Image", booktitle = "Proceedings of the Fifteenth {IEEE} International Symposium on Biomedical Imaging ({ISBI'18})", month = "April 4--7,", year = "2018", pages = "501--504", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018f" } 
Li, J., Xue, F., Qu, F., Ho, Y.P. & Blu, T.,"On-the-fly estimation of a microscopy point spread function", Optics Express, Vol. 26 (20), pp. 26120-26133, October 2018. 
A proper estimation of the realistic point-spread function (PSF) in optical microscopy can significantly improve the deconvolution performance and assist the microscope calibration process. In this work, by exemplifying 3D wide-field fluorescence microscopy, we propose an approach for estimating the spherically aberrated PSF of a microscope, directly from the observed samples. The PSF, expressed as a linear combination of 4 basis functions, is obtained directly from the acquired image by minimizing a novel criterion, which is derived from the noise statistics of the microscope. We demonstrate the effectiveness of the PSF approximation model and of our estimation method using both simulations and real experiments carried out on quantum dots. The principle of our PSF estimation approach is sufficiently flexible to be generalized to non-spherical aberrations and other microscope modalities. 
@article{blu2018h, author = "Li, J. and Xue, F. and Qu, F. and Ho, Y.P. and Blu, T.", title = "On-the-fly estimation of a microscopy point spread function", journal = "Optics Express", month = "October", year = "2018", volume = "26", number = "20", pages = "26120--26133", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018h" } 
Li, Y., Guo, R., Blu, T. & Zhao, H.,"Generic FRI-Based DOA Estimation: A Model-Fitting Method", IEEE Transactions on Signal Processing, Vol. 69, pp. 4102-4115, 2021. 
Direction of arrival (DOA) estimation is a classical topic in source localization. Notably, a reliable grid-free sparse representation algorithm, the FRI (finite rate of innovation) algorithm, was proposed to recover a finite number of Dirac pulses from a stream of 1D temporal samples, which also offers an efficient solution to the DOA estimation problem. Typically, the FRI method assumes uniform sampling with a single snapshot. However, the actual situation is richer and more diverse. Motivated by the requirements of practical applications (e.g. array deployment, algorithm speed, etc.), a generic FRI method is proposed to tackle the more general case in practice, i.e. nonuniform sampling with multiple snapshots. Instead of annihilating the measured sensor data, a model-fitting method is used to robustly retrieve the sparse representation (i.e. DOAs and associated amplitudes) of the 1D samples. We demonstrate that our algorithm can handle challenging DOA tasks with high resolution, which we validate in various conditions, such as multiple coherent sources, insufficient signal snapshots, low signal-to-noise ratio (SNR), etc. Moreover, we show that the computational complexity of our algorithm mainly scales with the number of sources and varies very slowly with the number of samples and snapshots, which meets the needs of a wider range of practical applications. 
@article{blu2021a, author = "Li, Y. and Guo, R. and Blu, T. and Zhao, H.", title = "Generic {FRI}-Based {DOA} Estimation: A Model-Fitting Method", journal = "IEEE Transactions on Signal Processing", year = "2021", volume = "69", pages = "4102--4115", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2021a" } 
Li, Y., Guo, R., Blu, T. & Zhao, H.,"Robust sparse reconstruction of attenuated acoustic field with unknown range of source", The Journal of the Acoustical Society of America, Vol. 152 (6), pp. 3523-3534, 2022. 
In this paper, we present a gridless algorithm to recover an attenuated acoustic field without knowing the range information of the source. This algorithm provides the joint estimation of horizontal wavenumbers, mode amplitudes, and acoustic attenuation. The key idea is to approximate the acoustic field in range as a finite sum of damped sinusoids, for which the sinusoidal parameters convey the ocean information of interest (e.g., wavenumber, attenuation, etc.). Using an efficient finite rate of innovation algorithm, an accurate recovery of the attenuated acoustic field can be achieved, even if the measurement noise is correlated and the range of the source is unknown. Moreover, the proposed method is able to perform joint recovery of multiple sensor data, which leads to a more robust field reconstruction. The data used here are acquired from a vertical line array at different depths measuring a moving source at several ranges. We demonstrate the performance of the proposed algorithm both in synthetic simulations and real shallow water evaluation cell experiment 1996 data. 
@article{blu2022g, author = "Li, Yongfei and Guo, Ruiming and Blu, Thierry and Zhao, Hangfang", title = "Robust sparse reconstruction of attenuated acoustic field with unknown range of source", journal = "The Journal of the Acoustical Society of America", year = "2022", volume = "152", number = "6", pages = "3523--3534", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2022g", doi = "10.1121/10.0016497" } 
Lie, J.P., Blu, T. & See, C.M.S.,"Single Antenna Power Measurements Based Direction Finding", IEEE Transactions on Signal Processing, Vol. 58 (11), pp. 5682-5692, November 2010. 
In this paper, the problem of estimating the direction-of-arrival (DOA) of multiple uncorrelated sources from single antenna power measurements is addressed. Utilizing the fact that the antenna pattern is bandlimited and can be modeled as a finite sum of complex exponentials, we first show that the problem can be transformed into a frequency estimation problem. Then, we explain how the annihilating filter method can be used to solve for the DOA in the noiseless case. In the presence of noise, we propose to use Cadzow denoising, formulated as an iterative algorithm derived from exploiting the rank and linear structure properties of the matrix. Furthermore, we have also derived the Cramér-Rao Bound (CRB) and reviewed several alternative approaches that can be used as a comparison to the proposed approach. From the simulation and experimental results, we demonstrate that the proposed approach significantly outperforms the other approaches. It is also evident from the Monte Carlo analysis that the proposed approach converges to the CRB. 
@article{blu2010e, author = "Lie, J.P. and Blu, T. and See, C.M.S.", title = "Single Antenna Power Measurements Based Direction Finding", journal = "{IEEE} Transactions on Signal Processing", month = "November", year = "2010", volume = "58", number = "11", pages = "5682--5692", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010e" } 
Lie, J.P., Blu, T. & See, C.M.S.,"Azimuth-Elevation Direction Finding Using Power Measurements From Single Antenna", Proceedings of the Thirty-sixth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'11), Prague, Czech Republic, pp. 2608-2611, May 22-27, 2011. 
This paper considers the problem of extending single antenna power measurements based direction finding to the two-dimensional (2D) case, and proposes a method to estimate the azimuth-elevation direction-of-arrival (DOA) from a matrix of received power. Exploiting the fact that the azimuth-elevation antenna pattern is 2D bandlimited, the problem can be transformed into a 2D spectral analysis problem. The proposed method first decomposes the 2D spectral analysis problem into one-dimensional problems, which are then solved independently. As this does not ensure that the estimated azimuths and elevations are paired in the correct order, the solution is subject to a permutation ambiguity. This can be resolved by finding the permutation that best matches the 2D spectral representation. Simulation results demonstrating the high-resolution capability of the proposed method in the two-source case and its effectiveness in the five-source case are also presented. 
@inproceedings{blu2011d, author = "Lie, J.P. and Blu, T. and See, C.M.S.", title = "Azimuth-Elevation Direction Finding Using Power Measurements From Single Antenna", booktitle = "Proceedings of the Thirty-sixth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'11})", month = "May 22--27,", year = "2011", pages = "2608--2611", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011d" } 
Lie, J.P., Blu, T. & See, C.M.S.,"Single Antenna Power Measurements Based Direction Finding With Incomplete Spatial Coverage", Proceedings of the Thirty-seventh IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'12), Kyoto, Japan, pp. 2641-2644, March 25-30, 2012. 
This paper considers the problem of relaxing the spatial coverage requirement on the mechanical rotation in the direction finding (DF) approach based on received power measurements from a single antenna pointing in different directions. Under incomplete spatial coverage, we show that the least-squares (LS) solution used to transform the problem into its spectral form is no longer accurate, due to its ill-conditioned system matrix. To overcome this, we propose an approach based on remodeling the spatial power measurements such that their spatial periodicity can be adjusted according to the spatial coverage. The approach also incorporates Tikhonov regularization in calculating the LS solution based on the new system matrix. Upon arriving at the new spectral form, the Cadzow annihilating filter method can then be used to estimate the direction-of-arrival. Both simulation and experimental results are presented to show the efficacy of the proposed method. 
@inproceedings{blu2012c, author = "Lie, J.P. and Blu, T. and See, C.M.S.", title = "Single Antenna Power Measurements Based Direction Finding With Incomplete Spatial Coverage", booktitle = "Proceedings of the Thirty-seventh {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'12})", month = "March 25--30,", year = "2012", pages = "2641--2644", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012c" } 
Liebling, M., Blu, T., Cuche, É., Marquet, P., Depeursinge, C.D. & Unser, M.,"Local Amplitude and Phase Retrieval Method for Digital Holography Applied to Microscopy", Proceedings of the SPIE European Conference on Biomedical Optics: Novel Optical Instrumentation for Biomedical Applications (ECBO'03), Munich, Germany, Vol. 5143, pp. 210-214, June 22-25, 2003. 
We present a numerical two-step reconstruction procedure for digital off-axis Fresnel holograms. First, we retrieve the amplitude and phase of the object wave in the CCD plane. For each point, we solve a weighted linear set of equations in the least-squares sense. The algorithm has O(N) complexity and gives great flexibility. Second, we numerically propagate the obtained wave to achieve proper focus. We apply the method to microscopy and demonstrate its suitability for the real-time imaging of biological samples. 
@inproceedings{blu2003j, author = "Liebling, M. and Blu, T. and Cuche, {\'{E}}. and Marquet, P. and Depeursinge, C.D. and Unser, M.", title = "Local Amplitude and Phase Retrieval Method for Digital Holography Applied to Microscopy", booktitle = "Proceedings of the {SPIE} European Conference on Biomedical Optics: {N}ovel Optical Instrumentation for Biomedical Applications ({ECBO'03})", month = "June 22--25,", year = "2003", volume = "5143", pages = "210--214", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003j" } 
Liebling, M., Blu, T., Cuche, É., Marquet, P., Depeursinge, C. & Unser, M.,"A Novel Non-Diffractive Reconstruction Method for Digital Holographic Microscopy", Proceedings of the First IEEE International Symposium on Biomedical Imaging (ISBI'02), Washington, USA, Vol. II, pp. 625-628, July 7-10, 2002. 
We present a new method for reconstructing digitally recorded off-axis Fresnel holograms. Currently used reconstruction methods are based on the simulation and propagation of a reference wave that is diffracted by the hologram. This procedure introduces a twin-image and a zero-order term which are inherent to the diffraction phenomenon. These terms perturb the reconstruction and limit the field of view. Our new approach splits the reconstruction process into two parts. First, we recover the amplitude and the phase in the camera plane from the measured hologram intensity. Our algorithm is based on the hypothesis of a slowly varying object wave which interferes with a more rapidly varying reference wave. In a second step, we propagate this complex wave to refocus it using the Fresnel transform. We therefore avoid the presence of the twin-image and zero-order interference terms. This new approach is flexible and can be adapted easily to complicated experimental setups. We demonstrate its feasibility in the case of digital holographic microscopy and present results for the imaging of living neurons. 
@inproceedings{blu2002i, author = "Liebling, M. and Blu, T. and Cuche, {\'{E}}. and Marquet, P. and Depeursinge, C. and Unser, M.", title = "A Novel Non-Diffractive Reconstruction Method for Digital Holographic Microscopy", booktitle = "Proceedings of the First {IEEE} International Symposium on Biomedical Imaging ({ISBI'02})", month = "July 7--10,", year = "2002", volume = "{II}", pages = "625--628", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002i" } 
Liebling, M., Blu, T. & Unser, M.,"Fresnelets—A New Wavelet Basis for Digital Holography", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing IX, San Diego, USA, Vol. 4478, pp. 347-352, July 29-August 1, 2001. 
We present a new class of wavelet bases—Fresnelets—which is obtained by applying the Fresnel transform operator to a wavelet basis of L_{2}. The wavelet family thus constructed exhibits properties that are particularly useful for analyzing and processing optically generated holograms recorded on CCD arrays. We first investigate the multiresolution properties (translation, dilation) of the Fresnel transform that are needed to construct our new wavelets. We derive a Heisenberg-like uncertainty relation that links the localization of the Fresnelets with that of the original wavelet basis. We give the explicit expression of orthogonal and semi-orthogonal Fresnelet bases corresponding to polynomial spline wavelets. We conclude that the Fresnel B-splines are particularly well suited for processing holograms because they tend to be well localized in both domains. 
@inproceedings{blu2001h, author = "Liebling, M. and Blu, T. and Unser, M.", title = "Fresnelets---{A} New Wavelet Basis for Digital Holography", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {IX}", month = "July 29--August 1,", year = "2001", volume = "4478", pages = "347--352", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001h" } 
Liebling, M., Blu, T. & Unser, M.,"Fresnelets: New Multiresolution Wavelet Bases for Digital Holography", IEEE Transactions on Image Processing, Vol. 12 (1), pp. 29-43, January 2003. 
We propose a construction of new wavelet-like bases that are well suited for the reconstruction and processing of optically generated Fresnel holograms recorded on CCD arrays. The starting point is a wavelet basis of L_{2} to which we apply a unitary Fresnel transform. The transformed basis functions are shift-invariant on a level-by-level basis but their multiresolution properties are governed by the special form that the dilation operator takes in the Fresnel domain. We derive a Heisenberg-like uncertainty relation that relates the localization of Fresnelets with that of their associated wavelet basis. According to this criterion, the optimal functions for digital hologram processing turn out to be Gabor functions, bringing together two separate aspects of the holography inventor's work. We give the explicit expression of orthogonal and semi-orthogonal Fresnelet bases corresponding to polynomial spline wavelets. This special choice of Fresnelets is motivated by their near-optimal localization properties and their approximation characteristics. We then present an efficient multiresolution Fresnel transform algorithm, the Fresnelet transform. This algorithm allows for the reconstruction (backpropagation) of complex scalar waves at several user-defined, wavelength-independent resolutions. Furthermore, when reconstructing numerical holograms, the subband decomposition of the Fresnelet transform naturally separates the image to reconstruct from the unwanted zero-order and twin image terms. This greatly facilitates their suppression. We show results of experiments carried out on both synthetic (simulated) data sets as well as on digitally acquired holograms. 
@article{blu2003k, author = "Liebling, M. and Blu, T. and Unser, M.", title = "Fresnelets: {N}ew Multiresolution Wavelet Bases for Digital Holography", journal = "{IEEE} Transactions on Image Processing", month = "January", year = "2003", volume = "12", number = "1", pages = "29--43", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003k" } 
Liebling, M., Blu, T. & Unser, M.,"Non-Linear Fresnelet Approximation for Interference Term Suppression in Digital Holography", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing X, San Diego, USA, Vol. 5207, pp. 553-559, August 3-8, 2003. Part II. 
We present a zero-order and twin image elimination algorithm for digital Fresnel holograms that were acquired in an off-axis geometry. These interference terms arise when the digital hologram is reconstructed and corrupt the result. Our algorithm is based on the Fresnelet transform, a wavelet-like transform that uses basis functions tailor-made for digital holography. We show that in the Fresnelet domain, the coefficients associated to the interference terms are separated both spatially and with respect to the frequency bands. We propose a method to suppress them by selectively thresholding the Fresnelet coefficients. Unlike other methods that operate in the Fourier domain and affect the whole spatial domain, our method operates locally in both space and frequency, allowing for a more targeted processing. 
@inproceedings{blu2003l, author = "Liebling, M. and Blu, T. and Unser, M.", title = "Non-Linear {F}resnelet Approximation for Interference Term Suppression in Digital Holography", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {X}", month = "August 3-8,", year = "2003", volume = "5207", pages = "553--559", note = "Part {II}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003l" } 
Liebling, M., Blu, T. & Unser, M.,"Complex-Wave Retrieval from a Single Off-Axis Hologram", Journal of the Optical Society of America A, Vol. 21 (3), pp. 367-377, March 2004. 
We present a new digital two-step reconstruction method for off-axis holograms recorded on a CCD camera. First, we retrieve the complex object wave in the acquisition plane from the hologram's samples. In a second step, if required, we propagate the wave front by using a digital Fresnel transform to achieve proper focus. This algorithm is sufficiently general to be applied to sophisticated optical setups that include a microscope objective. We characterize and evaluate the algorithm by using simulated data sets and demonstrate its applicability to real-world experimental conditions by reconstructing optically acquired holograms. 
@article{blu2004h, author = "Liebling, M. and Blu, T. and Unser, M.", title = "Complex-Wave Retrieval from a Single Off-Axis Hologram", journal = "Journal of the Optical Society of {A}merica {A}", month = "March", year = "2004", volume = "21", number = "3", pages = "367--377", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004h" } 
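The second step of the two-step reconstruction described above, digital Fresnel propagation of the retrieved complex wave, can be sketched with a standard FFT-based transfer-function implementation. This is a minimal illustration under the paraxial (Fresnel) approximation, not the authors' code; the grid size, wavelength, and pixel pitch below are arbitrary placeholder values.

```python
import numpy as np

def fresnel_propagate(u0, wavelength, z, dx):
    """Propagate the complex field u0 over a distance z using the
    FFT-based Fresnel transfer function (paraxial approximation).
    The transfer function has unit modulus, so the operation is unitary."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=dx)   # spatial frequencies along x
    fy = np.fft.fftfreq(ny, d=dx)   # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    # quadratic-phase Fresnel transfer function
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# toy complex wave, standing in for the field retrieved in the camera plane
rng = np.random.default_rng(0)
u0 = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
u_z = fresnel_propagate(u0, 633e-9, 0.05, 10e-6)       # refocus
u_back = fresnel_propagate(u_z, 633e-9, -0.05, 10e-6)  # back-propagate
```

Because the transfer function is a pure phase factor, propagation conserves energy (Parseval) and is exactly inverted by propagating over -z.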
Luisier, F. & Blu, T.,"SURE-LET Interscale-Intercolor Wavelet Thresholding for Color Image Denoising", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet XII, San Diego, USA, Vol. 6701, pp. 67011H-1-67011H-10, August 26-29, 2007. 
We propose a new orthonormal wavelet thresholding algorithm for denoising color images that are assumed to be corrupted by additive Gaussian white noise of known intercolor covariance matrix. The proposed wavelet denoiser consists of a linear expansion of thresholding (LET) functions, integrating both the interscale and intercolor dependencies. The linear parameters of the combination are then solved for by minimizing Stein's unbiased risk estimate (SURE), which is nothing but a robust unbiased estimate of the mean squared error (MSE) between the (unknown) noise-free data and the denoised data. Thanks to the quadratic form of this MSE estimate, the optimization of the parameters simply amounts to solving a linear system of equations. The experiments we carried out over a wide range of noise levels and for a representative set of standard color images show that our algorithm yields even slightly better peak signal-to-noise ratios than most state-of-the-art wavelet thresholding procedures, even when the latter are executed in an undecimated wavelet representation. 
@inproceedings{blu2007g, author = "Luisier, F. and Blu, T.", title = "{SURE-LET} Interscale-Intercolor Wavelet Thresholding for Color Image Denoising", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet {XII}", month = "August 26-29,", year = "2007", volume = "6701", pages = "67011H-1--67011H-10", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007g" } 
Luisier, F. & Blu, T.,"Image Denoising by Pointwise Thresholding of the Undecimated Wavelet Coefficients: A Global SURE Optimum", Proceedings of the Thirty-Second IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'07), Honolulu, USA, pp. I-593-I-596, April 15-20, 2007. 
We devise a new undecimated wavelet thresholding algorithm for denoising images corrupted by additive Gaussian white noise. The first key point of our approach is the use of a linearly parameterized pointwise thresholding function. The second key point consists in optimizing the parameters globally by minimizing Stein's unbiased MSE estimate (SURE) directly in the image domain, and not separately in the wavelet subbands. Remarkably, our method gives results similar to the best state-of-the-art algorithms, despite using only a simple pointwise thresholding function; we demonstrate this in simulations over a wide range of noise levels for a representative set of standard grayscale images. 
@inproceedings{blu2007h, author = "Luisier, F. and Blu, T.", title = "Image Denoising by Pointwise Thresholding of the Undecimated Wavelet Coefficients: {A} Global {SURE} Optimum", booktitle = "Proceedings of the Thirty-Second {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'07})", month = "April 15-20,", year = "2007", pages = "{I}-593--{I}-596", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007h" } 
Luisier, F. & Blu, T.,"SURE-LET Multichannel Image Denoising: Interscale Orthonormal Wavelet Thresholding", IEEE Transactions on Image Processing, Vol. 17 (4), pp. 482-492, April 2008. 
We propose a vector/matrix extension of our denoising algorithm initially developed for grayscale images, in order to efficiently process multichannel (e.g., color) images. This work follows our recently published SURE-LET approach where the denoising algorithm is parameterized as a linear expansion of thresholds (LET) and optimized using Stein's unbiased risk estimate (SURE). The proposed wavelet thresholding function is pointwise and depends on the coefficients of same location in the other channels, as well as on their parents in the coarser wavelet subband. A nonredundant, orthonormal, wavelet transform is first applied to the noisy data, followed by the (subband-dependent) vector-valued thresholding of individual multichannel wavelet coefficients which are finally brought back to the image domain by inverse wavelet transform. Extensive comparisons with the state-of-the-art multiresolution image denoising algorithms indicate that despite being nonredundant, our algorithm matches the quality of the best redundant approaches, while maintaining a high computational efficiency and a low CPU/memory consumption. An online Java demo illustrates these assertions. 
@article{blu2008g, author = "Luisier, F. and Blu, T.", title = "{SURE-LET} Multichannel Image Denoising: {I}nterscale Orthonormal Wavelet Thresholding", journal = "{IEEE} Transactions on Image Processing", month = "April", year = "2008", volume = "17", number = "4", pages = "482--492", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008g" } 
Luisier, F. & Blu, T.,"SURE-LET Multichannel Image Denoising: Undecimated Wavelet Thresholding", Proceedings of the Thirty-Third IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'08), Las Vegas, USA, pp. 769-772, March 30-April 4, 2008. 
We propose an extension of the recently devised SURE-LET grayscale denoising approach for multichannel images. Assuming additive Gaussian white noise, the unknown linear parameters of a transform-domain pointwise multichannel thresholding are globally optimized by minimizing Stein's unbiased MSE estimate (SURE) in the image domain. Using the undecimated wavelet transform, we demonstrate the efficiency of this approach for denoising color images by comparing our results with two other state-of-the-art denoising algorithms. 
@inproceedings{blu2008h, author = "Luisier, F. and Blu, T.", title = "{SURE-LET} Multichannel Image Denoising: {U}ndecimated Wavelet Thresholding", booktitle = "Proceedings of the Thirty-Third {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'08})", month = "March 30-April 4,", year = "2008", pages = "769--772", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008h" } 
Luisier, F., Blu, T., Forster, B. & Unser, M.,"Which Wavelet Bases Are the Best for Image Denoising?", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet XI, San Diego, USA, Vol. 5914, pp. 59140E-1-59140E-12, July 31-August 3, 2005. 
We use a comprehensive set of nonredundant orthogonal wavelet transforms and apply a denoising method called SureShrink in each individual wavelet subband to denoise images corrupted by additive Gaussian white noise. We show that, for various images and a wide range of input noise levels, the orthogonal fractional (α, τ) B-splines give the best peak signal-to-noise ratio (PSNR), as compared to standard wavelet bases (Daubechies wavelets, symlets and coiflets). Moreover, the selection of the best set (α, τ) can be performed on the MSE estimate (SURE) itself, not on the actual MSE (Oracle). Finally, the use of complex-valued fractional B-splines leads to even more significant improvements; they also outperform the complex Daubechies wavelets. 
@inproceedings{blu2005f, author = "Luisier, F. and Blu, T. and Forster, B. and Unser, M.", title = "Which Wavelet Bases Are the Best for Image Denoising?", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet {XI}", month = "July 31-August 3,", year = "2005", volume = "5914", pages = "59140E-1--59140E-12", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005f" } 
Luisier, F., Blu, T. & Unser, M.,"SURE-Based Wavelet Thresholding Integrating Inter-Scale Dependencies", Proceedings of the 2006 IEEE International Conference on Image Processing (ICIP'06), Atlanta, USA, pp. 1457-1460, October 8-11, 2006. 
We propose here a new pointwise wavelet thresholding function that incorporates interscale dependencies. This nonlinear function depends on a set of four linear parameters per subband, which are set by minimizing Stein's unbiased MSE estimate (SURE). Our approach assumes additive Gaussian white noise. In order for the interscale dependencies to be faithfully taken into account, we also develop a rigorous feature-alignment procedure that is adapted to arbitrary wavelet filters (e.g., nonsymmetric filters). Finally, we demonstrate the efficiency of our denoising approach in simulations over a wide range of noise levels for a representative set of standard images. 
@inproceedings{blu2006d, author = "Luisier, F. and Blu, T. and Unser, M.", title = "{SURE}-Based Wavelet Thresholding Integrating Inter-Scale Dependencies", booktitle = "Proceedings of the 2006 {IEEE} International Conference on Image Processing ({ICIP'06})", month = "October 8-11,", year = "2006", pages = "1457--1460", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006d" } 
Luisier, F., Blu, T. & Unser, M.,"A New SURE Approach to Image Denoising: Interscale Orthonormal Wavelet Thresholding", IEEE Transactions on Image Processing, Vol. 16 (3), pp. 593-606, March 2007. IEEE Signal Processing Society's 2009 Young Author Best Paper Award. 
This paper introduces a new approach to orthonormal wavelet image denoising. Instead of postulating a statistical model for the wavelet coefficients, we directly parametrize the denoising process as a sum of elementary nonlinear processes with unknown weights. We then minimize an estimate of the mean square error between the clean image and the denoised one. The key point is that we have at our disposal a very accurate, statistically unbiased, MSE estimate—Stein's unbiased risk estimate—that depends on the noisy image alone, not on the clean one. Like the MSE, this estimate is quadratic in the unknown weights, and its minimization amounts to solving a linear system of equations. The existence of this a priori estimate makes it unnecessary to devise a specific statistical model for the wavelet coefficients. Instead, and contrary to the custom in the literature, these coefficients are not considered random anymore. We describe an interscale orthonormal wavelet thresholding algorithm based on this new approach and show its near-optimal performance—both regarding quality and CPU requirement—by comparing with the results of three state-of-the-art nonredundant denoising algorithms on a large set of test images. An interesting fallout of this study is the development of a new, group-delay-based, parent-child prediction in a wavelet dyadic tree. 
@article{blu2007i, author = "Luisier, F. and Blu, T. and Unser, M.", title = "A New {SURE} Approach to Image Denoising: {I}nterscale Orthonormal Wavelet Thresholding", journal = "{IEEE} Transactions on Image Processing", month = "March", year = "2007", volume = "16", number = "3", pages = "593--606", note = "IEEE Signal Processing Society's 2009 \textbf{Young Author Best Paper Award}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007i" } 
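The core mechanism recurring in these SURE-LET abstracts, parameterizing the denoiser as a linear combination of elementary thresholding functions and minimizing SURE, which is quadratic in the weights and therefore reduces to a linear system, can be illustrated in a few lines. This is a hedged one-dimensional toy sketch, not the interscale algorithm of the paper: the elementary functions (soft thresholds at arbitrary multiples of the noise level) and all numeric values are illustrative assumptions.

```python
import numpy as np

def sure_let_denoise(y, sigma, thresholds=(1.0, 2.0)):
    """Toy SURE-LET denoiser: the estimate is a linear combination of
    soft-thresholding functions f_k, with weights chosen by minimizing
    Stein's unbiased risk estimate, SURE(a) = ||F a - y||^2
    + 2 sigma^2 a.div - N sigma^2, which is quadratic in a."""
    F, div = [], []
    for t in thresholds:
        T = t * sigma
        fk = np.sign(y) * np.maximum(np.abs(y) - T, 0.0)  # soft threshold
        F.append(fk)
        div.append(np.sum(np.abs(y) > T))  # divergence: f_k' = 1 where |y| > T
    F = np.stack(F, axis=1)
    div = np.array(div, dtype=float)
    # minimizing SURE => (F^T F) a = F^T y - sigma^2 * div
    a = np.linalg.solve(F.T @ F, F.T @ y - sigma ** 2 * div)
    return F @ a

# demo: sparse spikes in additive white Gaussian noise
rng = np.random.default_rng(1)
x = np.zeros(4096)
x[::64] = 5.0
sigma = 1.0
y = x + sigma * rng.standard_normal(x.size)
xhat = sure_let_denoise(y, sigma)
mse_noisy = np.mean((y - x) ** 2)
mse_denoised = np.mean((xhat - x) ** 2)
```

Note that the weights are found without access to the clean signal x; only the noisy data y and the noise level enter the linear system.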
Luisier, F., Blu, T. & Unser, M.,"SURE-LET for Orthonormal Wavelet-Domain Video Denoising", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 20 (6), pp. 913-919, June 2010. 
We propose an efficient orthonormal wavelet-domain video denoising algorithm based on an appropriate integration of motion compensation into an adapted version of our recently devised Stein's unbiased risk estimator-linear expansion of thresholds (SURE-LET) approach. To take full advantage of the strong spatiotemporal correlations of neighboring frames, a global motion compensation followed by a selective block-matching is first applied to adjacent frames, which increases their temporal correlations without distorting the interframe noise statistics. Then, a multiframe interscale wavelet thresholding is performed to denoise the current central frame. The simulations we made on standard grayscale video sequences for various noise levels demonstrate the efficiency of the proposed solution in reducing additive white Gaussian noise. Obtained at a lighter computational load, our results are even competitive with most state-of-the-art redundant wavelet-based techniques. By using a cycle-spinning strategy, our algorithm is in fact able to outperform these methods. 
@article{blu2010f, author = "Luisier, F. and Blu, T. and Unser, M.", title = "{SURE-LET} for Orthonormal Wavelet-Domain Video Denoising", journal = "{IEEE} Transactions on Circuits and Systems for Video Technology", month = "June", year = "2010", volume = "20", number = "6", pages = "913--919", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010f" } 
Luisier, F., Blu, T. & Unser, M.,"Undecimated Haar thresholding for Poisson intensity estimation", Proceedings of the 2010 IEEE International Conference on Image Processing (ICIP'10), Hong Kong, China, pp. 1697-1700, September 26-29, 2010. 
We propose a novel algorithm for denoising Poisson-corrupted images that performs a signal-adaptive thresholding of the undecimated Haar wavelet coefficients. A Poisson unbiased MSE estimate is devised and adapted to arbitrary transform-domain pointwise processing. This prior-free quadratic measure of quality is then used to globally optimize a linearly parameterized subband-adaptive thresholding, which accounts for the signal-dependent noise variance. We demonstrate the qualitative and computational competitiveness of the resulting denoising algorithm through comprehensive comparisons with some state-of-the-art multiscale techniques specifically designed for Poisson intensity estimation. We also show promising denoising results obtained on low-count fluorescence microscopy images. 
@inproceedings{blu2010g, author = "Luisier, F. and Blu, T. and Unser, M.", title = "Undecimated {H}aar thresholding for {P}oisson intensity estimation", booktitle = "Proceedings of the 2010 {IEEE} International Conference on Image Processing ({ICIP'10})", month = "September 26-29,", year = "2010", pages = "1697--1700", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010g" } 
Luisier, F., Blu, T. & Unser, M.,"Image Denoising in Mixed Poisson-Gaussian Noise", IEEE Transactions on Image Processing, Vol. 20 (3), pp. 696-708, March 2011. 
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy. 
@article{blu2011e, author = "Luisier, F. and Blu, T. and Unser, M.", title = "Image Denoising in Mixed {P}oisson-{G}aussian Noise", journal = "{IEEE} Transactions on Image Processing", month = "March", year = "2011", volume = "20", number = "3", pages = "696--708", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011e" } 
Luisier, F., Blu, T. & Wolfe, P.J.,"A CURE for Noisy Magnetic Resonance Images: Chi-Square Unbiased Risk Estimation", IEEE Transactions on Image Processing, Vol. 21 (8), pp. 3454-3466, August 2012. 
In this paper, we derive an unbiased expression for the expected mean-squared error associated with continuously differentiable estimators of the noncentrality parameter of a chi-square random variable. We then consider the task of denoising squared-magnitude magnetic resonance (MR) image data, which are well modeled as independent noncentral chi-square random variables on two degrees of freedom. We consider two broad classes of linearly parameterized shrinkage estimators that can be optimized using our risk estimate, one in the general context of undecimated filterbank transforms, and the other in the specific case of the unnormalized Haar wavelet transform. The resultant algorithms are computationally tractable and improve upon most state-of-the-art methods for both simulated and actual MR image data. 
@article{blu2012d, author = "Luisier, F. and Blu, T. and Wolfe, P.J.", title = "A {CURE} for Noisy Magnetic Resonance Images: Chi-Square Unbiased Risk Estimation", journal = "{IEEE} Transactions on Image Processing", month = "August", year = "2012", volume = "21", number = "8", pages = "3454--3466", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012d" } 
Luisier, F., Vonesch, C., Blu, T. & Unser, M.,"Fast Haar-Wavelet Denoising of Multidimensional Fluorescence Microscopy Data", Proceedings of the Sixth IEEE International Symposium on Biomedical Imaging (ISBI'09), Boston, USA, June 28-July 1, 2009. 
We propose a non-Bayesian denoising algorithm to reduce the Poisson noise that is typically dominant in fluorescence microscopy data. To process large datasets at a low computational cost, we use the unnormalized Haar wavelet transform. Thanks to some of its appealing properties, independent unbiased MSE estimates can be derived for each subband. Based on these Poisson unbiased MSE estimates, we then optimize a linearly parameterized interscale thresholding. Correlations between adjacent images of the multidimensional data are accounted for through a sliding-window approach. Experiments on simulated and real data show that the proposed solution is qualitatively similar to a state-of-the-art multiscale method, while being orders of magnitude faster. 
@inproceedings{blu2009e, author = "Luisier, F. and Vonesch, C. and Blu, T. and Unser, M.", title = "Fast Haar-Wavelet Denoising of Multidimensional Fluorescence Microscopy Data", booktitle = "Proceedings of the Sixth {IEEE} International Symposium on Biomedical Imaging ({ISBI'09})", month = "June 28-July 1,", year = "2009", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009e" } 
Luisier, F., Vonesch, C., Blu, T. & Unser, M.,"Fast interscale wavelet denoising of Poisson-corrupted images", Signal Processing, Vol. 90 (2), pp. 415-427, February 2010. 
We present a fast algorithm for image restoration in the presence of Poisson noise. Our approach is based on (1) the minimization of an unbiased estimate of the MSE for Poisson noise, (2) a linear parametrization of the denoising process and (3) the preservation of Poisson statistics across scales within the Haar DWT. The minimization of the MSE estimate is performed independently in each wavelet subband, but this is equivalent to a global image-domain MSE minimization, thanks to the orthogonality of Haar wavelets. This is an important difference with standard Poisson noise-removal methods, in particular those that rely on a nonlinear preprocessing of the data to stabilize the variance. Our nonredundant interscale wavelet thresholding outperforms standard variance-stabilizing schemes, even when the latter are applied in a translation-invariant setting (cycle-spinning). It also achieves a quality similar to a state-of-the-art multiscale method that was specially developed for Poisson data. Considering that the computational complexity of our method is orders of magnitude lower, it is a very competitive alternative. The proposed approach is particularly promising in the context of low signal intensities and/or large data sets. This is illustrated experimentally with the denoising of low-count fluorescence micrographs of a biological sample. 
@article{blu2010h, author = "Luisier, F. and Vonesch, C. and Blu, T. and Unser, M.", title = "Fast interscale wavelet denoising of {P}oisson-corrupted images", journal = "Signal Processing", month = "February", year = "2010", volume = "90", number = "2", pages = "415--427", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010h" } 
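The Poisson unbiased MSE estimates used throughout this line of work rest on a simple identity for a Poisson variable y of intensity x: E[y g(y-1)] = x E[g(y)] for any function g, which lets expectations involving the unknown intensity be estimated from the counts alone (for instance, y(y-1) is an unbiased estimate of x^2). Below is a minimal numerical check of this identity, not the authors' subband algorithm; the intensity, sample size, and test function are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
x = 3.0                           # (normally unknown) Poisson intensity
y = rng.poisson(x, size=200_000)  # observed counts

def g(t):
    """Arbitrary test function; the identity holds for any g."""
    return np.sqrt(np.maximum(t, 0.0))

# Poisson identity: E[y * g(y-1)] = x * E[g(y)].
# Only the right-hand side involves the unknown intensity x;
# the left-hand side is computable from the data alone.
lhs = np.mean(y * g(y - 1))
rhs = x * np.mean(g(y))

# Special case g(t) = t: y*(y-1) is an unbiased estimate of x^2.
x2_est = np.mean(y * (y - 1.0))
```

This is the mechanism that makes a prior-free, data-only MSE estimate possible for Poisson noise, in the same way that Stein's lemma does for the Gaussian case.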
Luo, G., He, Y., Shu, X., Zhou, R. & Blu, T.,"Complex Wave and Phase Retrieval from a Single Off-Axis Interferogram", Journal of the Optical Society of America A, Vol. 40 (1), pp. 85-95, January 2023. 
Single-frame off-axis holographic reconstruction is promising for quantitative phase imaging. However, reconstruction accuracy and contrast are degraded by noise, frequency-spectrum overlap of the interferogram, severe phase distortion, etc. In this work, we propose an iterative single-frame complex wave retrieval that is based on an explicit model of the object and reference waves. We also develop a novel phase restoration algorithm which does not resort to phase unwrapping. Both simulation and real experiments demonstrate higher accuracy and robustness compared to state-of-the-art methods, both for the complex wave estimation and the phase reconstruction. Importantly, the allowed bandwidth for the object wave is significantly improved in realistic experimental conditions (similar amplitude for the object and reference waves), which makes it attractive for large field-of-view and high-resolution imaging applications. 
@article{blu2023a, author = "Luo, Gang and He, Yanping and Shu, Xin and Zhou, Renjie and Blu, Thierry", title = "Complex Wave and Phase Retrieval from a Single Off-Axis Interferogram", journal = "Journal of the Optical Society of America A", month = "January", year = "2023", volume = "40", number = "1", pages = "85--95", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2023a", doi = "10.1364/JOSAA.473726" } 
Ma, L., Blu, T. & Wang, W.S.Y.,"An EEG Blind Source Separation Algorithm Based On A Weak Exclusion Principle", Proceedings of the 38th International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC'16), Orlando, FL, USA, pp. 859-862, August 16-20, 2016. 
The question of how to separate individual brain and non-brain signals, mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings, is a significant problem in contemporary neuroscience. This study proposes and evaluates a novel EEG Blind Source Separation (BSS) algorithm based on a weak exclusion principle (WEP). The chief point in which it differs from most previous EEG BSS algorithms is that the proposed algorithm is not based upon the hypothesis that the sources are statistically independent. Our first step was to investigate algorithm performance on simulated signals for which the ground truth is known; the purpose of this simulation is to illustrate the proposed algorithm's efficacy. The results show that the proposed algorithm has good separation performance. Then, we used the proposed algorithm to separate real EEG signals from a memory study using a revised version of the Sternberg task. The results show that the proposed algorithm can effectively separate the non-brain and brain sources. 
@inproceedings{blu2016e, author = "Ma, L. and Blu, T. and Wang, W.S.Y.", title = "An {EEG} Blind Source Separation Algorithm Based On A Weak Exclusion Principle", booktitle = "Proceedings of the 38th International Conference of the {IEEE} Engineering in Medicine and Biology Society ({EMBC'16})", month = "August 16-20,", year = "2016", pages = "859--862", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2016e" } 
Ma, L., Blu, T. & Wang, W.S.Y.,"Event-Related Potentials Source Separation Based on a Weak Exclusion Principle", Proceedings of the Fourteenth IEEE International Symposium on Biomedical Imaging (ISBI'17), Melbourne, Australia, pp. 1011-1014, April 18-21, 2017. 
Currently, the standard event-related potentials (ERP) technique consists in averaging many ongoing electroencephalogram (EEG) trials using the same stimuli. Key questions are how to extract the ERP from ongoing EEG with fewer trials to average and how to further decompose the ERP into basic components related to cognitive processes. In this paper we introduce a novel Blind Source Separation (BSS) approach based on a weak exclusion principle (WEP) to solve these problems. The superior aspect of this algorithm is that it is based on a deterministic principle, which is more appropriate for analyzing nonstationary EEG signals than most other BSS methods based on statistical hypotheses. The results show that our BSS algorithm can quickly and effectively extract ERPs using fewer trials than traditional averaging methods. We show that, via BSS, we can isolate two main ERP components, which are respectively related to an exogenous process and a cognitive process, and can discriminate between the occipital lobe and the frontal lobe responses from the brain, agreeing with the classical component modeling of ERPs. Single-trial ERP separation results have demonstrated the consistency of these two main ERP components. Thus, BSS based on the WEP can provide a window to better understand ERPs, not only in averaging behavior, but in the complexities of moment-to-moment dynamics as well. 
@inproceedings{blu2017d, author = "Ma, L. and Blu, T. and Wang, W.S.Y.", title = "Event-Related Potentials Source Separation Based on a Weak Exclusion Principle", booktitle = "Proceedings of the Fourteenth {IEEE} International Symposium on Biomedical Imaging ({ISBI'17})", month = "April 18-21,", year = "2017", pages = "1011--1014", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017d" } 
Ma, L., Minett, J.W., Blu, T. & Wang, W.S.Y.,"Resting State EEGBased Biometrics for Individual Identification Using Convolutional Neural Networks", Proceedings of the 37th International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC'15), Milano, Italy, pp. 28482851, August 2529, 2015. 
Biometrics is a growing field, which permits identification of individuals by means of unique physical features. Electroencephalography (EEG)based biometrics utilizes the small intrapersonal differences and large interpersonal differences between individuals' brainwave patterns. In the past, such methods have used features derived from manuallydesigned procedures for this purpose. Another possibility is to use convolutional neural networks (CNN) to automatically extract an individual's best and most unique neural features and conduct classification, using EEG data derived from both Resting State with Open Eyes (REO) and Resting State with Closed Eyes (REC). Results indicate that this CNNbased jointoptimized EEGbased Biometric System yields a high degree of accuracy of identification (88%) for 10class classification. Furthermore, rich interpersonal difference can be found using a very low frequency band (02Hz). Additionally, results suggest that the temporal portions over which subjects can be individualized is less than 200 ms. 
@inproceedings{blu2015g, author = "Ma, L. and Minett, J.W. and Blu, T. and Wang, W.S.Y.", title = "Resting State {EEG}-Based Biometrics for Individual Identification Using Convolutional Neural Networks", booktitle = "Proceedings of the 37th International Conference of the {IEEE} Engineering in Medicine and Biology Society ({EMBC'15})", month = "August 25-29,", year = "2015", pages = "2848--2851", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2015g" } 
Mari, J.M., Blu, T., Bou Matar, O., Unser, M. & Cachard, C.,"A bulk modulus dependent linear model for acoustical imaging", Journal of the Acoustical Society of America, Vol. 125 (4), pp. 2413-2419, April 2009. 
Modeling the acoustical process of soft biological tissue imaging and understanding the consequences of the approximations required by such modeling are key steps for accurately simulating ultrasonic scanning, as well as for estimating the scattering coefficient of the imaged matter. In this document, a linear solution to the inhomogeneous ultrasonic wave equation is proposed. The classical assumptions required for linearization are applied; however, no approximation is made in the mathematical development regarding density and speed of sound. This leads to an expression of the scattering term that establishes a correspondence between the signal measured by an ultrasound transducer and an intrinsic mechanical property of the imaged tissues. This expression shows that considering the scattering as a function of small variations in the density and speed of sound around their mean values, along with the classical assumptions in this domain, is equivalent to associating the acoustical acquisition with a measure of the relative longitudinal bulk modulus. Comparison of the proposed model with Jensen's earlier model shows that it is also appropriate for performing accurate simulations of the acoustical imaging process. 
@article{blu2009f, author = "Mari, J.M. and Blu, T. and Bou Matar, O. and Unser, M. and Cachard, C.", title = "A bulk modulus dependent linear model for acoustical imaging", journal = "Journal of the Acoustical Society of America", month = "April", year = "2009", volume = "125", number = "4", pages = "2413--2419", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009f" } 
Marziliano, P., Vetterli, M. & Blu, T.,"Sampling and Exact Reconstruction of Bandlimited Signals with Additive Shot Noise", IEEE Transactions on Information Theory, Vol. 52 (5), pp. 2230-2233, May 2006. 
In this correspondence, we consider sampling continuous-time periodic bandlimited signals which contain additive shot noise. The classical sampling scheme does not perfectly recover these particular non-bandlimited signals, but only reconstructs a low-pass filtered approximation. By modeling the shot noise as a stream of Dirac pulses, we first show that the sum of a bandlimited signal and a stream of Dirac pulses falls into the class of signals with a finite rate of innovation, that is, a finite number of degrees of freedom. Second, by taking into account the degrees of freedom of the bandlimited signal in the sampling and reconstruction scheme developed previously for streams of Dirac pulses, we derive a sampling and perfect reconstruction scheme for the bandlimited signal with additive shot noise. 
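The Dirac-stream recovery that this scheme builds on can be sketched with the classical annihilating-filter step. This is a generic illustration of the finite-rate-of-innovation principle, not the paper's exact bandlimited-plus-shot-noise algorithm; the function name, signal and values below are hypothetical:

```python
import numpy as np

def recover_diracs(s_hat, K):
    """Recover K Dirac locations in [0, 1) from 2K+1 consecutive Fourier
    coefficients s_hat[m] = sum_k a_k exp(-2j*pi*m*t_k), via the
    annihilating-filter (Prony) method."""
    M = len(s_hat)
    # Toeplitz system: the annihilating filter h lies in its null space
    T = np.array([[s_hat[i + K - j] for j in range(K + 1)]
                  for i in range(M - K)])
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1].conj()                      # null vector = filter coefficients
    u = np.roots(h)                        # roots u_k = exp(-2j*pi*t_k)
    return np.sort((-np.angle(u) / (2 * np.pi)) % 1)

# Hypothetical stream of K = 2 Diracs at t = 0.2 and 0.55
t_k, a_k = np.array([0.2, 0.55]), np.array([1.0, 0.5])
m = np.arange(-2, 3)                       # 2K+1 = 5 Fourier coefficients
s_hat = (a_k * np.exp(-2j * np.pi * np.outer(m, t_k))).sum(axis=1)
t_rec = recover_diracs(s_hat, 2)
print(t_rec)                               # ≈ [0.2, 0.55]
```

The key property exploited is that exponentials are annihilated by a filter whose roots encode their frequencies, so the Dirac positions fall out of a small eigenproblem.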
@article{blu2006e, author = "Marziliano, P. and Vetterli, M. and Blu, T.", title = "Sampling and Exact Reconstruction of Bandlimited Signals with Additive Shot Noise", journal = "{IEEE} Transactions on Information Theory", month = "May", year = "2006", volume = "52", number = "5", pages = "2230--2233", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006e" } 
Matusiak, S., Daoudi, M., Blu, T. & Avaro, O.,"Sketch-Based Images Database Retrieval", Proceedings of the Fourth International Workshop on Advances in Multimedia Information Systems (MIS'98), Istanbul, Turkey, pp. 185-191, September 24-26, 1998. 
This paper describes an application allowing content-based retrieval, which can thus be considered an example MPEG-7 application. The application may be called "sketch-based database retrieval", since the user interacts with the database by means of sketches. The user draws his or her request with a pencil: the request image is then a binary image that consists of a contour on a uniform background. 
@inproceedings{blu1998e, author = "Matusiak, S. and Daoudi, M. and Blu, T. and Avaro, O.", title = "Sketch-Based Images Database Retrieval", booktitle = "Proceedings of the Fourth International Workshop on Advances in Multimedia Information Systems ({MIS'98})", month = "September 24-26,", year = "1998", pages = "185--191", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1998e" } 
Mayrargue, S. & Blu, T.,"Relationship Between High-Resolution Methods and Discrete Fourier Transform", Proceedings of the Sixteenth IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'91), Toronto, Canada, Vol. V, pp. 3321-3324, May 14-17, 1991. 
A link is established between the discrete Fourier transform (DFT) and two high-resolution methods: MUSIC and the Tufts-Kumaresan (1982) method (TK). The existence and location of the extraneous peaks of MUSIC and of the noise zeros of TK are related to the minima of the DFT of the rectangular window filtering the data. Other properties of the noise zeros are given, in relation to polynomial theory. 
@inproceedings{blu1991a, author = "Mayrargue, S. and Blu, T.", title = "Relationship Between High-Resolution Methods and Discrete {F}ourier Transform", booktitle = "Proceedings of the Sixteenth {IEEE} International Conference on Acoustics, Speech and Signal Processing ({ICASSP'91})", month = "May 14-17,", year = "1991", volume = "{V}", pages = "3321--3324", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1991a" } 
Muñoz Barrutia, A., Blu, T. & Unser, M.,"Efficient Image Resizing Using Finite Differences", Proceedings of the 1999 IEEE International Conference on Image Processing (ICIP'99), Kobe, Japan, Vol. {III}, pp. 662666, October 2528, 1999. 
We present an optimal spline-based algorithm for the enlargement or reduction of digital images with arbitrary scaling factors. A demonstration is available on the web at http://bigwww.epfl.ch/demo/jresize/. This projection-based approach is made possible by a new finite-difference method that allows the computation of inner products with analysis functions that are B-splines of any degree n. For a given choice of basis functions, the results of our method are consistently better than those of the standard interpolation procedure; the present scheme achieves a reduction of artifacts such as aliasing and blocking and a significant improvement of the signal-to-noise ratio. 
@inproceedings{blu1999g, author = "Mu{\~{n}}oz Barrutia, A. and Blu, T. and Unser, M.", title = "Efficient Image Resizing Using Finite Differences", booktitle = "Proceedings of the 1999 {IEEE} International Conference on Image Processing ({ICIP'99})", month = "October 25-28,", year = "1999", volume = "{III}", pages = "662--666", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1999g" } 
Muñoz Barrutia, A., Blu, T. & Unser, M.,"NonUniform to Uniform Grid Conversion Using LeastSquares Splines", Proceedings of the Tenth European Signal Processing Conference (EUSIPCO'00), Tampere, Finland, Vol. {IV}, pp. 19972000, September 48, 2000. 
We propose a new technique to perform non-uniform to uniform grid conversion: first, interpolate using non-uniform splines; then, project the resulting function onto a uniform spline space; and finally, resample. We derive a closed-form solution to the least-squares approximation problem. Our implementation is computationally exact and works for arbitrary sampling rates. We present examples that illustrate the advantages of our projection technique over direct interpolation and resampling. The main benefit is the suppression of aliasing. 
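A minimal sketch of the least-squares fitting idea, assuming degree-1 (linear) B-splines and a plain numpy least-squares solve in place of the paper's exact closed-form, filter-based implementation; the data and function names are made up for illustration:

```python
import numpy as np

def hat(x):
    """Degree-1 (linear) B-spline centred at 0, support (-1, 1)."""
    return np.maximum(1.0 - np.abs(x), 0.0)

def nonuniform_to_uniform(x, y, n_knots):
    """Least-squares fit of uniform linear-spline coefficients c_k to
    non-uniform samples (x, y); for degree-1 splines the values on the
    uniform grid equal the coefficients themselves."""
    A = hat(x[:, None] - np.arange(n_knots)[None, :])   # design matrix
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return c

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 9.0, 200))    # non-uniform abscissae
y = 2.0 + 0.5 * x                          # signal exactly in the spline space
c = nonuniform_to_uniform(x, y, 10)
print(np.round(c, 3))                      # ≈ 2.0 + 0.5 * k for k = 0..9
```

Because the test signal lies exactly in the uniform spline space, the least-squares projection recovers the grid values exactly; for general signals it returns the closest uniform-spline approximation, which is what suppresses aliasing relative to naive resampling.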
@inproceedings{blu2000f, author = "Mu{\~{n}}oz Barrutia, A. and Blu, T. and Unser, M.", title = "Non-Uniform to Uniform Grid Conversion Using Least-Squares Splines", booktitle = "Proceedings of the Tenth European Signal Processing Conference ({EUSIPCO'00})", month = "September 4-8,", year = "2000", volume = "{IV}", pages = "1997--2000", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000f" } 
Muñoz Barrutia, A., Blu, T. & Unser, M.,"NonEuclidean Pyramids", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing VIII, San Diego, USA, Vol. 4119, pp. 710720, July 31August 4, 2000. 
We propose to design the reduction operator of an image pyramid so as to minimize the approximation error in the l_{p} sense (not restricted to the usual p = 2), where p can take non-integer values. The underlying image model is specified using arbitrary shift-invariant basis functions such as splines. The solution is determined by an iterative optimization algorithm, based on digital filtering. Its convergence is accelerated by the use of first and second derivatives. For p = 1, our modified pyramid is robust to outliers; edges are preserved better than in the standard case where p = 2. For 1 < p < 2, the pyramid decomposition combines the qualities of the l_{1} and l_{2} approximations. The method is applied to edge detection, and its improved performance over the standard formulation is demonstrated. 
@inproceedings{blu2000g, author = "Mu{\~{n}}oz Barrutia, A. and Blu, T. and Unser, M.", title = "Non-{E}uclidean Pyramids", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {VIII}", month = "July 31-August 4,", year = "2000", volume = "4119", pages = "710--720", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000g" } 
Muñoz Barrutia, A., Blu, T. & Unser, M.,"LeastSquares Image Resizing Using Finite Differences", IEEE Transactions on Image Processing, Vol. 10 (9), pp. 13651378, September 2001. 
We present an optimal spline-based algorithm for the enlargement or reduction of digital images with arbitrary (non-integer) scaling factors. This projection-based approach can be realized thanks to a new finite-difference method that allows the computation of inner products with analysis functions that are B-splines of any degree n. A noteworthy property of the algorithm is that the computational complexity per pixel does not depend on the scaling factor a. For a given choice of basis functions, the results of our method are consistently better than those of the standard interpolation procedure; the present scheme achieves a reduction of artifacts such as aliasing and blocking and a significant improvement of the signal-to-noise ratio. The method can be generalized to include other classes of piecewise polynomial functions, expressed as linear combinations of B-splines and their derivatives. 
@article{blu2001i, author = "Mu{\~{n}}oz Barrutia, A. and Blu, T. and Unser, M.", title = "Least-Squares Image Resizing Using Finite Differences", journal = "{IEEE} Transactions on Image Processing", month = "September", year = "2001", volume = "10", number = "9", pages = "1365--1378", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001i" } 
Muñoz Barrutia, A., Blu, T. & Unser, M.,"ℓ_pMultiresolution Analysis: How to Reduce Ringing and Sparsify the Error", IEEE Transactions on Image Processing, Vol. 11 (6), pp. 656669, June 2002. 
We propose to design the reduction operator of an image pyramid so as to minimize the approximation error in the l_{p} sense (not restricted to the usual p = 2), where p can take non-integer values. The underlying image model is specified using shift-invariant basis functions, such as B-splines. The solution is well-defined and determined by an iterative optimization algorithm based on digital filtering. Its convergence is accelerated by the use of first- and second-order derivatives. For p close to 1, we show that the ringing is reduced and that the histogram of the detail image is sparse, as compared with the standard case where p = 2. 
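The l_p minimization at the heart of these two pyramid papers can be illustrated with a generic iteratively reweighted least-squares (IRLS) loop. This is a simplified stand-in for the paper's filter-based optimization (and the toy fitting problem below is made up); it only shows why p close to 1 is robust to outliers:

```python
import numpy as np

def lp_lstsq(A, b, p, n_iter=100, eps=1e-8):
    """Minimise ||A x - b||_p by iteratively reweighted least squares:
    with weights w_i = |r_i|^(p-2), the l_p problem becomes weighted l_2."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)     # l_2 initialisation
    for _ in range(n_iter):
        r = A @ x - b
        w = np.maximum(np.abs(r), eps) ** (p - 2) # IRLS weights
        Aw = A * w[:, None]                       # = diag(w) @ A
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)   # weighted normal equations
    return x

# Fitting a constant to data with one outlier: p close to 1 is robust
A = np.ones((5, 1))
b = np.array([1.0, 1.0, 1.0, 1.0, 10.0])
x12 = lp_lstsq(A, b, p=1.2)
x2 = lp_lstsq(A, b, p=2.0)
print(float(x12[0]), float(x2[0]))   # robust estimate near 1 vs the mean 2.8
```

For p = 2 the weights are constant and the loop reproduces the ordinary least-squares mean; for p = 1.2 the outlier is strongly down-weighted, which mirrors the edge-preserving behavior described in the abstract.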
@article{blu2002j, author = "Mu{\~{n}}oz Barrutia, A. and Blu, T. and Unser, M.", title = "${\ell}_{p}$-{M}ultiresolution Analysis: {H}ow to Reduce Ringing and Sparsify the Error", journal = "{IEEE} Transactions on Image Processing", month = "June", year = "2002", volume = "11", number = "6", pages = "656--669", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002j" } 
Pan, H. & Blu, T.,"Sparse image restoration using iterated linear expansion of thresholds", Proceedings of the 2011 IEEE International Conference on Image Processing (ICIP'11), Brussels, Belgium, pp. 19051908, September 1114, 2011. 
We focus on image restoration that consists of regularizing a quadratic data-fidelity term with the standard l1 sparsity-enforcing norm. We propose a novel algorithmic approach to solve this optimization problem. Our idea amounts to approximating the result of the restoration as a linear sum of basic thresholding functions (e.g., soft-thresholds) weighted by unknown coefficients. The few coefficients of this expansion are obtained by minimizing the equivalent low-dimensional l1-norm regularized objective function, which can be solved efficiently with standard convex optimization techniques, e.g., iteratively reweighted least-squares (IRLS). By iterating this process, we claim that we reach the global minimum of the objective function. Experimentally, we find that very few iterations are required before convergence is reached. 
@inproceedings{blu2011f, author = "Pan, H. and Blu, T.", title = "Sparse image restoration using iterated linear expansion of thresholds", booktitle = "Proceedings of the 2011 {IEEE} International Conference on Image Processing ({ICIP'11})", month = "September 11-14,", year = "2011", pages = "1905--1908", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011f" } 
Pan, H. & Blu, T.,"An Iterative Linear Expansion of Thresholds for ℓ_1based Image Restoration", IEEE Transactions on Image Processing, Vol. 22 (9), pp. 37153728, September 2013. 
This paper proposes a novel algorithmic framework to solve image restoration problems under sparsity assumptions. As usual, the reconstructed image is the minimum of an objective functional that consists of a data fidelity term and an l_{1} regularization. However, instead of estimating the reconstructed image that minimizes the objective functional directly, we focus on the restoration process that maps the degraded measurements to the reconstruction. Our idea amounts to parameterizing the process as a linear combination of few elementary thresholding functions (LET) and solving for the linear weighting coefficients by minimizing the objective functional. It is then possible to update the thresholding functions and to iterate this process (iLET). The key advantage of such a linear parametrization is that the problem size reduces dramatically: each time, we only need to solve an optimization problem over the dimension of the linear coefficients (typically less than 10) instead of the whole image dimension. With the elementary thresholding functions satisfying certain constraints, global convergence of the iterated LET algorithm is guaranteed. Experiments on several test images over a wide range of noise levels and different types of convolution kernels clearly indicate that the proposed framework usually outperforms state-of-the-art algorithms in terms of both CPU time and number of iterations. 
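The LET parametrization itself is easy to illustrate. The sketch below forms an estimate as a linear combination of soft-thresholds of the noisy data; for simplicity the weights are fitted against an oracle (the clean signal), whereas the paper obtains them by minimizing the actual objective functional. All data and threshold values are invented for the demonstration:

```python
import numpy as np

def soft(y, T):
    """Soft-threshold: the elementary denoising function in the expansion."""
    return np.sign(y) * np.maximum(np.abs(y) - T, 0.0)

rng = np.random.default_rng(1)
x = np.zeros(1000)
x[rng.choice(1000, 20, replace=False)] = 5.0      # sparse clean signal
y = x + 0.5 * rng.standard_normal(1000)           # noisy measurements

# LET: the estimate is a linear combination of a few elementary estimates
F = np.column_stack([y, soft(y, 1.0), soft(y, 2.0)])
a, *_ = np.linalg.lstsq(F, x, rcond=None)         # oracle weights (illustration)
x_hat = F @ a
mse_let = np.mean((x_hat - x) ** 2)
mse_raw = np.mean((y - x) ** 2)
print(mse_let < mse_raw)                          # True: the expansion helps
```

The point is dimensionality: whatever criterion is used to pick the weights (the l_{1} objective in the paper, SURE in related denoising work), the optimization runs over 3 coefficients here rather than 1000 pixels.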
@article{blu2013e, author = "Pan, H. and Blu, T.", title = "An Iterative Linear Expansion of Thresholds for ${\ell}_1$-based Image Restoration", journal = "{IEEE} Transactions on Image Processing", month = "September", year = "2013", volume = "22", number = "9", pages = "3715--3728", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013e" } 
Pan, H., Blu, T. & Dragotti, P.L.,"Sampling Curves with Finite Rate of Innovation", Proceedings of the Ninth International Workshop on Sampling Theory and Applications (SampTA'11), Singapore, May 26, 2011. 
We focus on a specific class of curves that can be parametrized using a finite number of variables in two dimensions. The corresponding indicator plane, which is a binary image, has infinite bandwidth and cannot be sampled and perfectly reconstructed with classical sampling theory. In this paper, we illustrate that it is possible to recover the parameters from finite samples of the indicator plane and to obtain a perfect reconstruction of the indicator plane. The algorithm presented here extends the application of FRI signals to multidimensional cases and may find application in fields like super-resolution. 
@inproceedings{blu2011g, author = "Pan, H. and Blu, T. and Dragotti, P.L.", title = "Sampling Curves with Finite Rate of Innovation", booktitle = "Proceedings of the Ninth International Workshop on Sampling Theory and Applications ({SampTA'11})", month = "May 2-6,", year = "2011", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011g" } 
Pan, H., Blu, T. & Dragotti, P.L.,"Sampling Curves with Finite Rate of Innovation", IEEE Transactions on Signal Processing, Vol. 62 (2), pp. 458471, January 2014. 
In this paper, we extend the theory of sampling signals with finite rate of innovation (FRI) to a specific class of two-dimensional curves, which are defined implicitly as the zeros of a mask function. Here the mask function has a parametric representation as a weighted summation of a finite number of complex exponentials and, therefore, has finite rate of innovation [1]. An associated edge image, which is discontinuous on the predefined parametric curve, is proved to satisfy a set of linear annihilation equations. We show that it is possible to reconstruct the parameters of the curve (i.e., to detect the exact edge positions in the continuous domain) based on the annihilation equations. Robust reconstruction algorithms are also developed to cope with scenarios with model mismatch. Moreover, the annihilation equations that characterize the curve are linear constraints that can be easily exploited in optimization problems for further image processing (e.g., image upsampling). We demonstrate one potential application of the annihilation algorithm with examples in edge-preserving interpolation. Experimental results with both synthetic curves and edges of natural images clearly show the effectiveness of the annihilation constraint in preserving sharp edges and improving SNRs. 
@article{blu2014a, author = "Pan, H. and Blu, T. and Dragotti, P.L.", title = "Sampling Curves with Finite Rate of Innovation", journal = "{IEEE} Transactions on Signal Processing", month = "January", year = "2014", volume = "62", number = "2", pages = "458--471", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2014a" } 
Pan, H., Blu, T. & Vetterli, M.,"Annihilationdriven Localised Image Edge Models", Proceedings of the Fortieth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'15), Brisbane, Australia, pp. 59775981, April 1924, 2015. 
We propose a novel edge detection algorithm with sub-pixel accuracy based on the annihilation of signals with finite rate of innovation. We show that the Fourier domain annihilation equations can be interpreted as spatial domain multiplications. From this new perspective, we obtain an accurate estimation of the edge model by assuming a simple parametric form within each localised block. Further, we build a locally adaptive global mask function (i.e., our edge model) for the whole image. The mask function is then used as an edge-preserving constraint in further processing. Numerical experiments on both edge localisation and image upsampling show the effectiveness of the proposed approach, which outperforms the state-of-the-art method. 
@inproceedings{blu2015d, author = "Pan, H. and Blu, T. and Vetterli, M.", title = "Annihilation-driven Localised Image Edge Models", booktitle = "Proceedings of the Fortieth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'15})", month = "April 19-24,", year = "2015", pages = "5977--5981", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2015d" } 
Pan, H., Blu, T. & Vetterli, M.,"Towards Generalized FRI Sampling with an Application to Source Resolution in Radioastronomy", IEEE Transactions on Signal Processing, Vol. 65 (4), pp. 821835, February 2017. 
It is a classic problem to estimate continuous-time sparse signals, like point sources in a direction-of-arrival problem, or pulses in a time-of-flight measurement. The earliest occurrence is the estimation of sinusoids in time series using Prony's method, which is at the root of a substantial line of work on high-resolution spectral estimation. The estimation of continuous-time sparse signals from discrete-time samples is the goal of the sampling theory for finite rate of innovation (FRI) signals. Both spectral estimation and FRI sampling usually assume uniform sampling. But not all measurements are obtained uniformly, as exemplified by a concrete radioastronomy problem we set out to solve. Thus, we develop the theory and algorithm to reconstruct sparse signals, typically sums of sinusoids, from non-uniform samples. We achieve this by identifying a linear transformation that relates the unknown uniform samples of sinusoids to the given measurements. These uniform samples are known to satisfy the annihilation equations. A valid solution is then obtained by solving a constrained minimization such that the reconstructed signal is consistent with the given measurements and satisfies the annihilation constraint. Thanks to this new approach, we unify a variety of FRI-based methods. We demonstrate the versatility and robustness of the proposed approach with five FRI reconstruction problems, namely Dirac reconstructions with irregular time or Fourier domain samples, FRI curve reconstructions, Dirac reconstructions on the sphere, and point source reconstructions in radioastronomy. The proposed algorithm improves substantially over state-of-the-art methods and is able to reconstruct point sources accurately from irregularly sampled Fourier measurements under severe noise conditions. 
@article{blu2017c, author = "Pan, H. and Blu, T. and Vetterli, M.", title = "Towards Generalized {FRI} Sampling with an Application to Source Resolution in Radioastronomy", journal = "IEEE Transactions on Signal Processing", month = "February", year = "2017", volume = "65", number = "4", pages = "821--835", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017c" } 
Pan, H., Blu, T. & Vetterli, M.,"Efficient Multidimensional Diracs Estimation with Linear Sample Complexity", IEEE Transactions on Signal Processing, Vol. 66 (17), pp. 46424656, September 2018. 
Estimating Diracs in two or more continuous dimensions is a fundamental problem in imaging. Previous approaches extended one-dimensional methods, like the ones based on finite rate of innovation (FRI) sampling, in a separable manner, e.g., along the horizontal and vertical dimensions separately in 2D. The separate estimation leads to a sample complexity of O(K^{D}) for K Diracs in D dimensions, even though the total number of degrees of freedom only increases linearly with D. We propose a new method that enforces the continuous-domain sparsity constraints simultaneously along all dimensions, leading to a reconstruction algorithm with linear sample complexity O(K), or a gain of O(K^{D-1}) over previous FRI-based methods. The multidimensional Dirac locations are subsequently determined by the intersections of hypersurfaces (e.g., curves in 2D), which can be computed algebraically from the common roots of polynomials. We first demonstrate the performance of the new multidimensional algorithm on simulated data: multidimensional Dirac location retrieval under noisy measurements. Then we show results on real data: radio astronomy point source reconstruction (from LOFAR telescope measurements) and direction-of-arrival estimation of acoustic signals (using Pyramic microphone arrays). 
@article{blu2018g, author = "Pan, H. and Blu, T. and Vetterli, M.", title = "Efficient Multidimensional {D}iracs Estimation with Linear Sample Complexity", journal = "IEEE Transactions on Signal Processing", month = "September", year = "2018", volume = "66", number = "17", pages = "4642--4656", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018g" } 
Pan, H., Simeoni, M., Hurley, P., Blu, T. & Vetterli, M.,"LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources", Astronomy & Astrophysics, Vol. 608, pp. A136, 1-14, December 2017. 
Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance dependent only on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims. The aims were (i) to adapt FRI to radio astronomy, (ii) to verify that it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) to show that sources can be found using less data than would otherwise be required to find them, and (iv) to show that FRI does not lead to an augmented rate of false positives. Methods. We implemented a continuous-domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results. We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities. In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a reconstruction quality comparable to a conventional method. The achieved angular resolution is higher than the perceived instrument resolution, and very close sources can be reliably distinguished. The proposed approach has cubic complexity in the total number (typically around a few thousand) of uniform Fourier data of the sky image estimated from the reconstruction. It is also demonstrated that the method is robust to the presence of extended sources, and that false positives can be addressed by choosing an adequate model order to match the noise level. 
@article{blu2017k, author = "Pan, H. and Simeoni, M. and Hurley, P. and Blu, T. and Vetterli, M.", title = "{LEAP}: {L}ooking beyond pixels with continuous-space {E}stim{A}tion of {P}oint sources", journal = "Astronomy \& Astrophysics", month = "December", year = "2017", volume = "608", pages = "A136, 1-14", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017k" } 
Panisetti, B.K., Blu, T. & Seelamantula, C.S.,"An Unbiased Risk Estimator for Multiplicative Noise - Application to 1D Signal Denoising", Proceedings of the Nineteenth International Conference on Digital Signal Processing (DSP'14), Hong Kong, China, pp. 497-502, August 20-23, 2014. 
Compared with additive noise, the effect of multiplicative noise on a signal is very large. In this paper, we address the problem of suppressing multiplicative noise in one-dimensional signals. To deal with signals that are corrupted with multiplicative noise, we propose a denoising algorithm based on minimization of an unbiased estimator (MURE) of the mean-square error (MSE). We derive an expression for an unbiased estimate of the MSE. The proposed denoising is carried out in the wavelet domain (soft thresholding) by considering the time-domain MURE. The parameters of the thresholding function are obtained by minimizing the unbiased estimator MURE. We show that the parameters for the optimal MURE are very close to the optimal parameters considering the oracle MSE. Experiments show that the SNR improvement of the proposed denoising algorithm is competitive with a state-of-the-art method. 
@inproceedings{blu2014d, author = "Panisetti, B.K. and Blu, T. and Seelamantula, C.S.", title = "An Unbiased Risk Estimator for Multiplicative Noise - Application to {1D} Signal Denoising", booktitle = "Proceedings of the Nineteenth International Conference on Digital Signal Processing ({DSP'14})", month = "August 20-23,", year = "2014", pages = "497--502", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2014d" } 
Parrott, E.P.J., Sy, S.M.Y., Blu, T., Wallace, V.P. & PickwellMacPherson, E.,"Terahertz pulsed imaging in vivo: measurements and processing methods", Journal of Biomedical Optics, Vol. 16 (10), pp. 106010, 1-8, SPIE, October 2011. 
This paper presents a number of data processing algorithms developed to improve the accuracy of results derived from datasets acquired by a recently designed terahertz handheld probe. These techniques include a baseline subtraction algorithm and a number of algorithms to extract the sample impulse response: double Gaussian inverse filtering, frequency-wavelet domain deconvolution, and sparse deconvolution. In vivo measurements of human skin are used as examples, and a comparison is made of the terahertz impulse response from a number of different skin positions. The algorithms presented enable both the spectroscopic and time-domain properties of samples measured in reflection geometry to be better determined compared with previous calculation methods. 
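The impulse-response extraction step can be sketched with a generic regularized (Wiener-like) inverse filter. This is a simplified stand-in, not the paper's double-Gaussian or sparse deconvolution schemes, and the signals below are synthetic:

```python
import numpy as np

def regularised_deconvolve(y, h, eps=1e-2):
    """Frequency-domain deconvolution with a regularised inverse filter:
    X = Y conj(H) / (|H|^2 + eps). The conj(H) factor makes the overall
    filter zero-phase, so no delay is introduced."""
    Y = np.fft.fft(y)
    H = np.fft.fft(h, len(y))
    return np.real(np.fft.ifft(Y * np.conj(H) / (np.abs(H) ** 2 + eps)))

# Synthetic sample impulse response blurred by a short reference pulse
x = np.zeros(256)
x[60], x[100] = 1.0, -0.4                             # two reflections
h = np.exp(-0.5 * ((np.arange(21) - 10) / 2.0) ** 2)  # Gaussian-like pulse
y = np.convolve(x, h)[:256]                           # measured waveform
x_hat = regularised_deconvolve(y, h)
print(np.argmax(np.abs(x_hat)))                       # main reflection at 60
```

The regularization constant eps prevents noise amplification at frequencies where the reference pulse has little energy; choosing it trades sharpness against noise, which is exactly the trade-off the paper's more sophisticated filters are designed to manage.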
@article{blu2011h, author = "Parrott, E.P.J. and Sy, S.M.Y. and Blu, T. and Wallace, V.P. and PickwellMacPherson, E.", title = "Terahertz pulsed imaging in vivo: measurements and processing methods", journal = "Journal of Biomedical Optics", publisher = "SPIE", month = "October", year = "2011", volume = "16", number = "10", pages = "106010, 1-8", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011h" } 
Peyronny, L., Soligon, O., Roux, C., Avaro, O. & Blu, T.,"How to Construct an MPEG-4 API: A Videoconference Application Example", Proceedings of the International Conference on Image and Multidimensional Digital Signal Processing (IMDSP'98), Alpbach, Austria, pp. 111-114, July 16, 1998. 
The construction and animation of face objects in MPEG-4/SNHC (synthetic natural hybrid coding) systems implies content-based and semantic analysis of the observed 3D scene. These processes require sophisticated image processing tools and algorithms, which are time-consuming and not well suited to videoconferencing applications. With regard to the coding process following the MPEG-4 standard, it is shown that four possible levels of coding are ready to send in an MPEG-4 data stream. The main functionalities sought in the MPEG-4 standard are data scalability, user-data interactivity and openness to various kinds of coder schemes. Without high-order semantic interpretation of the observed scene, it is well known that it is possible to build and animate a facial 3D model. A transcoding system is then needed to fit the MPEG-4 data stream format. 
@inproceedings{blu1998f, author = "Peyronny, L. and Soligon, O. and Roux, C. and Avaro, O. and Blu, T.", title = "How to Construct an {MPEG-4} {API}: {A} Videoconference Application Example", booktitle = "Proceedings of the International Conference on Image and Multidimensional Digital Signal Processing ({IMDSP'98})", month = "July 16,", year = "1998", pages = "111--114", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1998f" } 
Precioso, F., Barlaud, M., Blu, T. & Unser, M.,"Smoothing B-Spline Active Contour for Fast and Robust Image and Video Segmentation", Proceedings of the 2003 IEEE International Conference on Image Processing (ICIP'03), Barcelona, Spain, Vol. {I}, pp. 137-140, September 14-17, 2003. 
This paper deals with fast image and video segmentation using active contours. Region-based active contours using level sets are powerful techniques for video segmentation, but they suffer from a large computational cost. A parametric active contour method based on B-spline interpolation has been proposed in [1] to greatly reduce the computational cost, but this method is sensitive to noise. Here, we choose to relax the rigid interpolation constraint in order to robustify our method in the presence of noise: by using smoothing splines, we trade a tunable amount of interpolation error for a smoother spline curve. We show by experiments on natural sequences that this new flexibility yields segmentation results of higher quality at no additional computational cost. Hence, real-time processing for moving-object segmentation is preserved. 
@inproceedings{blu2003m, author = "Precioso, F. and Barlaud, M. and Blu, T. and Unser, M.", title = "Smoothing {B-Spline} Active Contour for Fast and Robust Image and Video Segmentation", booktitle = "Proceedings of the 2003 {IEEE} International Conference on Image Processing ({ICIP'03})", month = "September 14-17,", year = "2003", volume = "{I}", pages = "137-140", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003m" } 
Precioso, F., Barlaud, M., Blu, T. & Unser, M.,"Robust Real-Time Segmentation of Images and Videos Using a Smooth-Spline Snake-Based Algorithm", IEEE Transactions on Image Processing, Vol. 14 (7), pp. 910-924, July 2005. 
This paper deals with fast image and video segmentation using active contours. Region-based active contours using level sets are powerful techniques for video segmentation, but they suffer from a large computational cost. A parametric active contour method based on B-spline interpolation has been proposed in [1] to greatly reduce the computational cost, but this method is sensitive to noise. Here, we choose to relax the rigid interpolation constraint in order to robustify our method in the presence of noise: by using smoothing splines, we trade a tunable amount of interpolation error for a smoother spline curve. We show by experiments on natural sequences that this new flexibility yields segmentation results of higher quality at no additional computational cost. Hence, real-time processing for moving-object segmentation is preserved. 
@article{blu2005g, author = "Precioso, F. and Barlaud, M. and Blu, T. and Unser, M.", title = "Robust Real-Time Segmentation of Images and Videos Using a Smooth-Spline Snake-Based Algorithm", journal = "{IEEE} Transactions on Image Processing", month = "July", year = "2005", volume = "14", number = "7", pages = "910-924", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005g" } 
Ramani, S., Blu, T. & Unser, M.,"Monte-Carlo SURE: A Black-Box Optimization of Regularization Parameters for General Denoising Algorithms", IEEE Transactions on Image Processing, Vol. 17 (9), pp. 1540-1554, September 2008. 
We consider the problem of optimizing the parameters of a given denoising algorithm for the restoration of a signal corrupted by white Gaussian noise. To achieve this, we propose to minimize Stein's unbiased risk estimate (SURE), which provides a means of assessing the true mean-squared error (MSE) purely from the measured data, without the need for any knowledge about the noise-free signal. Specifically, we present a novel Monte-Carlo technique which enables the user to calculate SURE for an arbitrary denoising algorithm characterized by some specific parameter setting. Our method is a black-box approach which solely uses the response of the denoising operator to additional input noise and does not ask for any information about its functional form. This, therefore, permits the use of SURE for the optimization of a wide variety of denoising algorithms. We justify our claims by presenting experimental results for SURE-based optimization of a series of popular image-denoising algorithms such as total-variation denoising, wavelet soft-thresholding, and Wiener filtering/smoothing splines. In the process, we also compare the performance of these methods. We demonstrate numerically that SURE computed using the new approach accurately predicts the true MSE for all the considered algorithms. We also show that SURE uncovers the optimal values of the parameters in all cases. 
@article{blu2008i, author = "Ramani, S. and Blu, T. and Unser, M.", title = "{M}onte-{C}arlo {SURE}: {A} Black-Box Optimization of Regularization Parameters for General Denoising Algorithms", journal = "{IEEE} Transactions on Image Processing", month = "September", year = "2008", volume = "17", number = "9", pages = "1540-1554", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008i" } 
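The black-box recipe summarized in this abstract can be sketched in a few lines of code: besides the residual, SURE only needs the divergence of the denoiser, which is estimated from its response to a small random perturbation of the input. A minimal sketch, assuming a generic denoiser; the soft-thresholding example and all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def soft_threshold(y, t):
    # A simple denoiser used as a black box: wavelet-domain soft-thresholding
    # reduced to its pointwise shrinkage rule.
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def monte_carlo_sure(denoise, y, sigma, eps=1e-3, rng=None):
    # Estimate the MSE risk of `denoise` at data y (noise std sigma) without
    # the clean signal, via the Monte-Carlo divergence trick: probe the
    # denoiser with one extra random perturbation eps*b of the input.
    rng = np.random.default_rng(rng)
    n = y.size
    b = rng.standard_normal(y.shape)                  # random probing vector
    fy = denoise(y)
    div = np.sum(b * (denoise(y + eps * b) - fy)) / eps
    return np.sum((fy - y) ** 2) / n - sigma ** 2 + 2 * sigma ** 2 * div / n
```

Sweeping the parameter (here, the threshold) and keeping the value that minimizes the returned estimate approximates the minimum-MSE setting without ever accessing the clean signal.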
Ramani, S., Blu, T. & Unser, M.,"Blind Optimization of Algorithm Parameters for Signal Denoising by Monte-Carlo SURE", Proceedings of the Thirty-Third IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'08), Las Vegas, USA, pp. 905-908, March 30-April 4, 2008. 
We consider the problem of optimizing the parameters of an arbitrary denoising algorithm by minimizing Stein's Unbiased Risk Estimate (SURE), which provides a means of assessing the true mean-squared error (MSE) purely from the measured data, assuming that it is corrupted by Gaussian noise. To accomplish this, we propose a novel Monte-Carlo technique based on a black-box approach which enables the user to compute SURE for an arbitrary denoising algorithm with some specific parameter setting. Our method only requires the response of the denoising algorithm to additional input noise and does not ask for any information about the functional form of the corresponding denoising operator. This, therefore, permits SURE-based optimization of a wide variety of denoising algorithms (global-iterative, pointwise, etc.). We present experimental results to justify our claims. 
@inproceedings{blu2008j, author = "Ramani, S. and Blu, T. and Unser, M.", title = "Blind Optimization of Algorithm Parameters for Signal Denoising by {M}onte-{C}arlo {SURE}", booktitle = "Proceedings of the Thirty-Third {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'08})", month = "March 30-April 4,", year = "2008", pages = "905-908", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008j" } 
Ramani, S., Van De Ville, D., Blu, T. & Unser, M.,"Nonideal Sampling and Regularization Theory", IEEE Transactions on Signal Processing, Vol. 56 (3), pp. 1055-1070, March 2008. 
Shannon's sampling theory and its variants provide effective solutions to the problem of reconstructing a signal from its samples in some “shift-invariant” space, which may or may not be bandlimited. In this paper, we present some further justification for this type of representation, while addressing the issue of the specification of the best reconstruction space. We consider a realistic setting where a multidimensional signal is prefiltered prior to sampling, and the samples are corrupted by additive noise. We adopt a variational approach to the reconstruction problem and minimize a data fidelity term subject to a Tikhonov-like (continuous-domain) L_{2}-regularization to obtain the continuous-space solution. We present theoretical justification for the minimization of this cost functional and show that the globally minimal continuous-space solution belongs to a shift-invariant space generated by a function (generalized B-spline) that is generally not bandlimited. When the sampling is ideal, we recover some of the classical smoothing spline estimators. The optimal reconstruction space is characterized by a condition that links the generating function to the regularization operator and implies the existence of a B-spline-like basis. To make the scheme practical, we specify the generating functions corresponding to the most popular families of regularization operators (derivatives, iterated Laplacian), as well as a new, generalized one that leads to a new brand of Matérn splines. We conclude the paper by proposing a stochastic interpretation of the reconstruction algorithm and establishing an equivalence with the minimax and minimum mean-square error (MMSE/Wiener) solutions of the generalized sampling problem. 
@article{blu2008k, author = "Ramani, S. and Van De Ville, D. and Blu, T. and Unser, M.", title = "Nonideal Sampling and Regularization Theory", journal = "{IEEE} Transactions on Signal Processing", month = "March", year = "2008", volume = "56", number = "3", pages = "1055-1070", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008k" } 
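A minimal discrete, periodic analogue of the Tikhonov-like L2 regularization discussed in this abstract (a sketch only, not the paper's continuous-domain formulation) minimizes ||y − x||² + λ||Dx||² with D a first-order finite difference; under periodic boundary conditions the solution is a single diagonal division in the DFT domain, much like a smoothing spline or Wiener filter:

```python
import numpy as np

def smooth_tikhonov(y, lam):
    # Minimize ||y - x||^2 + lam * ||Dx||^2, where D is the periodic first
    # difference; solved exactly in the DFT domain, since D is diagonalized
    # by the Fourier basis (a discrete analogue of smoothing splines).
    n = y.size
    w = 2 * np.pi * np.fft.fftfreq(n)            # angular frequencies
    d2 = np.abs(1 - np.exp(-1j * w)) ** 2        # |D(omega)|^2 for (1 - z^-1)
    return np.real(np.fft.ifft(np.fft.fft(y) / (1 + lam * d2)))
```

With λ = 0 the data are reproduced exactly; as λ grows, the high frequencies are attenuated and the result tends to the mean of the samples.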
Sekhar, S.C., Leitgeb, R.A., Villiger, M.L., Bachmann, A.H., Blu, T. & Unser, M.,"Non-Iterative Exact Signal Recovery in Frequency Domain Optical Coherence Tomography", Proceedings of the Fourth IEEE International Symposium on Biomedical Imaging (ISBI'07), Arlington, USA, pp. 808-811, April 12-15, 2007. 
We address the problem of exact signal recovery in frequency domain optical coherence tomography (FDOCT) systems. Our technique relies on the fact that, in a spectral interferometry setup, the intensity of the total signal reflected from the object is smaller than that of the reference arm. We develop a novel algorithm to compute the reflected signal amplitude from the interferometric measurements. Our technique is non-iterative and nonlinear, and it leads to an exact solution in the absence of noise. The reconstructed signal is free from artifacts such as the autocorrelation noise that is normally encountered in conventional inverse Fourier transform techniques. We present results on synthesized data, where we have a benchmark for comparing the performance of the technique. We also report results on experimental FDOCT measurements of the retina of the human eye. 
@inproceedings{blu2007j, author = "Sekhar, S.C. and Leitgeb, R.A. and Villiger, M.L. and Bachmann, A.H. and Blu, T. and Unser, M.", title = "Non-Iterative Exact Signal Recovery in Frequency Domain Optical Coherence Tomography", booktitle = "Proceedings of the Fourth {IEEE} International Symposium on Biomedical Imaging ({ISBI'07})", month = "April 12-15,", year = "2007", pages = "808-811", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007j" } 
Sekhar, S.C., Nazkani, H., Blu, T. & Unser, M.,"A New Technique for High-Resolution Frequency Domain Optical Coherence Tomography", Proceedings of the Thirty-Second IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'07), Honolulu, USA, pp. {I}-425-{I}-428, April 15-20, 2007. 
Frequency domain optical coherence tomography (FDOCT) is a new technique that is well-suited for fast imaging of biological specimens, as well as non-biological objects. The measurements are in the frequency domain, and the objective is to retrieve an artifact-free spatial domain description of the specimen. In this paper, we develop a new technique for model-based retrieval of spatial domain data from the frequency domain data. We use a piecewise-constant model for the refractive index profile that is suitable for multilayered specimens. We show that the estimation of the layered-structure parameters can be mapped into a harmonic retrieval problem, which enables us to use high-resolution spectrum estimation techniques. The new technique that we propose is efficient and requires few measurements. We also analyze the effect of additive measurement noise on the algorithm performance. The experimental results show that the technique gives highly accurate parameter estimates. For example, at a 25-dB signal-to-noise ratio, the mean square error in the position estimate is about 0.01% of the actual value. 
@inproceedings{blu2007k, author = "Sekhar, S.C. and Nazkani, H. and Blu, T. and Unser, M.", title = "A New Technique for High-Resolution Frequency Domain Optical Coherence Tomography", booktitle = "Proceedings of the Thirty-Second {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'07})", month = "April 15-20,", year = "2007", pages = "{I}-425-{I}-428", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007k" } 
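For context, the conventional inverse-Fourier FDOCT reconstruction that both OCT entries above set out to improve can be simulated in a few lines: each reflector contributes a spectral oscillation whose frequency is proportional to its depth, so an inverse FFT of the spectrum reveals the depth profile. All numbers are illustrative, and the autocorrelation terms between reflectors (one of the artifacts the papers address) are omitted from this toy spectrum:

```python
import numpy as np

# Toy FDOCT spectrum: reference beam plus interference from two weak reflectors.
k = np.linspace(0, 2 * np.pi, 1024, endpoint=False)   # wavenumber samples (a.u.)
depths = [60, 150]                                     # reflector depths (index units)
refl = [0.05, 0.03]                                    # reflectivities << reference
spectrum = 1.0 + sum(2 * np.sqrt(r) * np.cos(k * z) for r, z in zip(refl, depths))

# Conventional reconstruction: inverse FFT of the measured spectrum.
profile = np.abs(np.fft.ifft(spectrum))
profile[0] = 0.0                                       # suppress the DC (reference) term
peaks = np.sort(np.argsort(profile[:512])[-2:])        # two strongest positive-depth peaks
```

The model-based and non-iterative methods of the two papers replace this direct transform to remove autocorrelation artifacts and to sharpen the depth resolution.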
van Spaendonck, R., Blu, T., Baraniuk, R. & Vetterli, M.,"Orthogonal Hilbert Transform Filter Banks and Wavelets", Proceedings of the Twenty-Eighth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'03), Hong Kong, China, Vol. {VI}, pp. 505-508, April 6-10, 2003. 
Complex wavelet transforms offer the opportunity to perform directional and coherent processing based on the local magnitude and phase of signals and images. Although denoising, segmentation, and image enhancement are significantly improved using complex wavelets, the redundancy of most current transforms hinders their application in compression and related problems. In this paper we introduce a new orthonormal complex wavelet transform with no redundancy for both real- and complex-valued signals. The transform's filter bank features a real lowpass filter and two complex highpass filters arranged in a critically sampled, three-band structure. Placing symmetry and orthogonality constraints on these filters, we find that each highpass filter can be factored into a real highpass filter followed by an approximate Hilbert transform filter. 
@inproceedings{blu2003n, author = "van Spaendonck, R. and Blu, T. and Baraniuk, R. and Vetterli, M.", title = "Orthogonal {H}ilbert Transform Filter Banks and Wavelets", booktitle = "Proceedings of the Twenty-Eighth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'03})", month = "April 6-10,", year = "2003", volume = "{VI}", pages = "505-508", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003n" } 
Stalder, A.F., Melchior, T., Müller, M., Sage, D., Blu, T. & Unser, M.,"Low-bond axisymmetric drop shape analysis for surface tension and contact angle measurements of sessile drops", Colloids and Surfaces A: Physicochemical and Engineering Aspects, Vol. 364 (1-3), pp. 72-81, July 2010. 
A new method based on the Young-Laplace equation for measuring contact angles and surface tensions is presented. In this approach, a first-order perturbation technique helps to analytically solve the Young-Laplace equation according to photographic images of axisymmetric sessile drops. When appropriate, the calculated drop contour is extended by mirror symmetry, so that the reflection of the drop into the substrate allows the detection of the position of the contact points. To keep a wide range of applicability, a discretisation of the drop's profile is not realised; instead, an optimisation of an advanced image-energy term fits an approximation of the Young-Laplace equation to the drop boundaries. In addition, cubic B-spline interpolation is applied to the image of the drop to reach subpixel resolution. To demonstrate the method's accuracy, simulated drops as well as images of liquid coal ash slags were analysed. Thanks to the high-quality image interpolation model and the image-energy term, the experiments demonstrated robust measurements over a wide variety of image types and qualities. The method was implemented in Java and is freely available [A.F. Stalder, LBADSA, Biomedical Imaging Group, EPFL: download link]. 
@article{blu2010i, author = "Stalder, A.F. and Melchior, T. and M\"uller, M. and Sage, D. and Blu, T. and Unser, M.", title = "Low-bond axisymmetric drop shape analysis for surface tension and contact angle measurements of sessile drops", journal = "Colloids and Surfaces {A}: Physicochemical and Engineering Aspects", month = "July", year = "2010", volume = "364", number = "1-3", pages = "72-81", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010i" } 
Stantchev, R.I., Blu, T. & Pickwell-MacPherson, E.,"Total Internal Reflection THz Devices for High Speed Imaging", Proceedings of the 2018 43rd International Conference on Infrared, Millimeter, and Terahertz Waves (IRMMW-THz), Nagoya, Japan, September 9-14, 2018. 
Electron-hole pair photoexcitation switches a semiconductor's response from dielectric to conducting. We show that this process is most efficient in a total internal reflection (TIR) geometry, allowing the use of cheaper, less powerful light sources. Further, by employing a digital micromirror device to spatially pattern the photoexcitation area, we perform imaging with a single-element detector and present solutions to the optical problems of imaging in this geometry. We finally show that, by taking into account the carrier lifetimes in the signal processing, one can improve the acquisition rate by a factor of 5. 
@inproceedings{blu2018i, author = "Stantchev, R.I. and Blu, T. and Pickwell-MacPherson, E.", title = "Total Internal Reflection {THz} Devices for High Speed Imaging", booktitle = "Proceedings of the 2018 43rd International Conference on Infrared, Millimeter, and Terahertz Waves (IRMMW-THz)", month = "September 9-14,", year = "2018", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018i" } 
Stantchev, R.I., Yu, X., Blu, T. & Pickwell-MacPherson, E.,"Real-time terahertz imaging with a single-pixel detector", Nature Communications, Vol. 11 (1), pp. 2535-2542, 21 May 2020. 
Terahertz (THz) radiation is poised to have an essential role in many imaging applications, from industrial inspections to medical diagnosis. However, commercialization is prevented by impractical and expensive THz instrumentation. Single-pixel cameras have emerged as alternatives to multi-pixel cameras due to reduced costs and superior durability. Here, by optimizing the modulation geometry and post-processing algorithms, we demonstrate the acquisition of a THz video (32×32 pixels at 6 frames per second), shown in real time, using a single-pixel fiber-coupled photoconductive THz detector. A laser diode with a digital micromirror device shining visible light onto silicon acts as the spatial THz modulator. We mathematically account for the temporal response of the system, reduce noise with a lock-in free carrier-wave modulation, and realize quick, noise-robust image undersampling. Since our modifications do not impose intricate manufacturing, require long post-processing, nor sacrifice the time-resolving capabilities of THz spectrometers, their greatest asset, this work has the potential to serve as a foundation for all future single-pixel THz imaging systems. 
@article{blu2020e, author = "Stantchev, R.I. and Yu, X. and Blu, T. and Pickwell-MacPherson, E.", title = "Real-time terahertz imaging with a single-pixel detector", journal = "Nature Communications", month = "21 May", year = "2020", volume = "11", number = "1", pages = "2535-2542", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020e" } 
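The single-pixel measurement principle underlying this paper can be illustrated with a toy example: each spatial mask displayed on the modulator yields one detector reading, and the image is recovered by inverting the mask matrix. This sketch uses orthogonal Hadamard masks and a direct inverse; the paper's THz modulator, temporal deconvolution and undersampling scheme are not modeled:

```python
import numpy as np
from scipy.linalg import hadamard

# Toy single-pixel camera: each row of H is one +/-1 mask; the detector
# records the inner product of the mask with the scene, one number per mask.
n = 16                                    # 4x4 image, flattened
H = hadamard(n)                           # orthogonal masks: H @ H.T == n * I
image = np.zeros((4, 4))
image[1:3, 1:3] = 1.0                     # a small bright square
x = image.ravel()

y = H @ x                                 # one detector reading per mask
x_rec = (H.T @ y) / n                     # exact reconstruction by orthogonality
```

With fewer masks than pixels, exact inversion is no longer possible, which is where the compressed-sensing style undersampling mentioned in the abstract comes in.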
Sun, Z. & Blu, T.,"A Nonlinear Steerable Complex Wavelet Decomposition of Images", Proceedings of the Forty-seventh IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'22), Singapore, pp. 1690-1694, May 22-27, 2022. 
Signal and image representations that are steerable are essential to efficiently capture directional features. However, those that are successful at achieving directional selectivity usually use too many subbands, resulting in low computational efficiency. In this paper, we propose a two-dimensional nonlinear transform that uses only two subbands to achieve a rotation-invariance property, and enjoys a mirror reconstruction making it similar to a "tight frame". The two-subband structure is merged into a unique, concise, complex-valued subband that approximates a Wirtinger gradient, which is naturally steerable. Complete steerability, though, is achieved by utilizing the Fourier-Argand representation, which provides a steerable filter able to estimate the amplitude and direction of image features, even in the presence of very high noise. We demonstrate the efficiency of the representation by comparing how it performs in wavelet-based denoising algorithms. 
@inproceedings{blu2022b, author = "Sun, Z. and Blu, T.", title = "A Nonlinear Steerable Complex Wavelet Decomposition of Images", booktitle = "Proceedings of the Forty-seventh {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'22})", month = "May 22-27,", year = "2022", pages = "1690-1694", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2022b", doi = "10.1109/ICASSP43922.2022.9747539" } 
Sun, Z. & Blu, T.,"Empowering Networks With Scale and Rotation Equivariance Using A Similarity Convolution", Proceedings of the Eleventh International Conference on Learning Representations (ICLR), Kigali, Rwanda, May 1-5, 2023. To appear. 
The translation-equivariant nature of Convolutional Neural Networks (CNNs) is a reason for their great success in computer vision. However, networks do not enjoy more general equivariance properties, such as to rotation or scaling, ultimately limiting their generalization performance. To address this limitation, we devise a method that endows CNNs with simultaneous equivariance with respect to translation, rotation, and scaling. Our approach defines a convolution-like operation and ensures equivariance based on our proposed scalable Fourier-Argand representation. The method maintains an efficiency similar to that of a traditional network and hardly introduces any additional learnable parameters, since it does not face the computational issue that often occurs in group-convolution operators. We validate the efficacy of our approach in the image classification task, demonstrating its robustness and its ability to generalize to both scaled and rotated inputs. 
@inproceedings{blu2023b, author = "Sun, Z. and Blu, T.", title = "Empowering Networks With Scale and Rotation Equivariance Using A Similarity Convolution", booktitle = "Proceedings of the Eleventh International Conference on Learning Representations ({ICLR})", month = "May 1-5,", year = "2023", note = "To appear.", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2023b" } 
Sun, Z., Zhang, Z. & Blu, T.,"An Algebraic Optimization Approach to Image Registration", Proceedings of the 2022 IEEE International Conference on Image Processing (ICIP'22), Bordeaux, France, pp. 2776-2780, October 16-19, 2022. 
High speed is an essential requirement for many applications of image registration. However, existing methods are usually time-consuming due to the difficulty of the task. In this paper, departing from the usual feature-based approaches, we convert the matching problem into an algebraic optimization task. By solving a series of quadratic optimization equations, the underlying deformation (rotation, scaling and shift) between image pairs can be retrieved. This process is extremely fast and can be performed in real time. Experiments show that our method can achieve good performance at a much lower computation cost. When used to initialize our earlier parametric Local All-Pass (LAP) registration algorithm, the results obtained improve significantly over the state of the art. 
@inproceedings{blu2022c, author = "Sun, Z. and Zhang, Z. and Blu, T.", title = "An Algebraic Optimization Approach to Image Registration", booktitle = "Proceedings of the 2022 {IEEE} International Conference on Image Processing ({ICIP'22})", month = "October 16-19,", year = "2022", pages = "2776-2780", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2022c", doi = "10.1109/ICIP46576.2022.9897451" } 
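The paper's quadratic solver is not reproduced in this listing, but the flavor of closed-form, non-iterative registration can be illustrated with classical phase correlation, which recovers a pure translation between two images from a single FFT. This is a standard baseline, not the authors' algebraic method (which also handles rotation and scaling):

```python
import numpy as np

def phase_correlation_shift(a, b):
    # Closed-form estimate of the integer shift taking image a to image b,
    # via classical phase correlation: normalize the cross-power spectrum
    # to keep only phase, then locate the resulting delta peak.
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    r = B * np.conj(A)
    r /= np.maximum(np.abs(r), 1e-12)          # whiten: phase-only spectrum
    corr = np.real(np.fft.ifft2(r))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks beyond the midpoint to negative shifts (circular wrap-around).
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx
```

Subpixel refinements of the peak location exist, but the integer-shift version already shows why closed-form registration is so much cheaper than iterative matching.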
Thévenaz, P., Blu, T. & Unser, M.,"Image Interpolation and Resampling", Handbook of Medical Imaging, Processing and Analysis, San Diego CA, USA, pp. 393-420, Academic Press, 2000. 
This chapter presents a survey of interpolation and resampling techniques in the context of exact, separable interpolation of regularly sampled data. In this context, the traditional view of interpolation is to represent an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions—in other words, a mixed convolution equation. An important issue is the choice of adequate synthesis functions that satisfy interpolation properties. Examples of finite-support ones are the square pulse (nearest-neighbor interpolation), the hat function (linear interpolation), the cubic Keys' function, and various truncated or windowed versions of the sinc function. On the other hand, splines provide examples of infinite-support interpolation functions that can be realized exactly at a finite, surprisingly small computational cost. We discuss implementation issues and illustrate the performance of each synthesis function. We also highlight several artifacts that may arise when performing interpolation, such as ringing, aliasing, blocking and blurring. We explain why the approximation order inherent in the synthesis function is important to limit these interpolation artifacts, which motivates the use of splines as a tunable way to keep them in check without any significant cost penalty. 
@incollection{blu2000h, author = "Th{\'{e}}venaz, P. and Blu, T. and Unser, M.", title = "Image Interpolation and Resampling", booktitle = "Handbook of Medical Imaging, Processing and Analysis", publisher = "Academic Press", year = "2000", pages = "393-420", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000h" } 
Thévenaz, P., Blu, T. & Unser, M.,"Short Basis Functions for Constant-Variance Interpolation", Proceedings of the SPIE International Symposium on Medical Imaging: Image Processing (MI'08), San Diego, USA, Vol. 6914, pp. 69142L-1-69142L-8, February 16-21, 2008. 
An interpolation model is a necessary ingredient of intensity-based registration methods. The properties of such a model depend entirely on its basis function, which has been traditionally characterized by features such as its order of approximation and its support. However, as has been recently shown, these features are blind to the amount of registration bias created by the interpolation process alone; an additional requirement that has been named constant-variance interpolation is needed to remove this bias. In this paper, we present a theoretical investigation of the role of the interpolation basis in a registration context. Contrary to published analyses, ours is deterministic; it nevertheless leads to the same conclusion, which is that constant-variance interpolation is beneficial to image registration. In addition, we propose a novel family of interpolation bases that can have any desired order of approximation while maintaining the constant-variance property. Our family includes every constant-variance basis we know of. It is described by an explicit formula that contains two free functional terms: an arbitrary 1-periodic binary function that takes values from {-1, 1}, and another arbitrary function that must satisfy the partition of unity. These degrees of freedom can be harnessed to build many family members for a given order of approximation and a fixed support. We provide the example of a symmetric basis with two orders of approximation that is supported over [-3/2, 3/2]; this support is one unit shorter than that of a basis of identical order that had been previously published. 
@inproceedings{blu2008l, author = "Th{\'{e}}venaz, P. and Blu, T. and Unser, M.", title = "Short Basis Functions for Constant-Variance Interpolation", booktitle = "Proceedings of the {SPIE} International Symposium on Medical Imaging: {I}mage Processing ({MI'08})", month = "February 16-21,", year = "2008", volume = "6914", pages = "69142L-1-69142L-8", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2008l" } 
Thévenaz, P., Blu, T. & Unser, M.,"Interpolation Revisited", IEEE Transactions on Medical Imaging, Vol. 19 (7), pp. 739-758, July 2000. 
Based on the theory of approximation, this paper presents a unified analysis of interpolation and resampling techniques. An important issue is the choice of adequate basis functions. We show that, contrary to the common belief, those that perform best are not interpolating. In opposition to traditional interpolation, we call their use generalized interpolation; they involve a prefiltering step when correctly applied. We explain why the approximation order inherent in any basis function is important to limit interpolation artifacts. The decomposition theorem states that any basis function endowed with approximation order can be expressed as the convolution of a B-spline of the same order with another function that has none. This motivates the use of splines and spline-based functions as a tunable way to keep artifacts in check without any significant cost penalty. We discuss implementation and performance issues, and we provide experimental evidence to support our claims.

@article{blu2000i, author = "Th{\'{e}}venaz, P. and Blu, T. and Unser, M.", title = "Interpolation Revisited", journal = "{IEEE} Transactions on Medical Imaging", month = "July", year = "2000", volume = "19", number = "7", pages = "739-758", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000i" } 
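The "generalized interpolation" advocated in "Interpolation Revisited" amounts to a recursive prefilter followed by B-spline reconstruction. A sketch for the cubic case, using the standard recursive filter with pole √3 − 2 from the B-spline literature (the boundary handling here is a simplified truncated-mirror initialization):

```python
import numpy as np

def cubic_bspline(x):
    # Cubic B-spline beta^3, supported on [-2, 2].
    x = np.abs(np.asarray(x, dtype=float))
    return np.where(x < 1, 2/3 - x**2 + x**3 / 2,
                    np.where(x < 2, (2 - x)**3 / 6, 0.0))

def cubic_prefilter(f):
    # Turn samples f[k] into coefficients c[k] such that
    # sum_k c[k] * beta^3(x - k) interpolates f at the integers.
    z1 = np.sqrt(3.0) - 2.0                     # pole of the cubic spline filter
    f = np.asarray(f, dtype=float)
    n = len(f)
    cp = np.empty(n)
    cp[0] = np.sum(f * z1 ** np.arange(n))      # truncated causal initialization
    for k in range(1, n):
        cp[k] = f[k] + z1 * cp[k - 1]           # causal pass
    cm = np.empty(n)
    cm[-1] = (z1 / (z1 * z1 - 1.0)) * (cp[-1] + z1 * cp[-2])
    for k in range(n - 2, -1, -1):
        cm[k] = z1 * (cm[k + 1] - cp[k])        # anti-causal pass
    return 6.0 * cm
```

Evaluating sum_k c[k] * cubic_bspline(x − k) then gives the interpolating spline at any real x; omitting the prefilter (treating the samples themselves as coefficients) produces the blurring the paper warns against.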
Thévenaz, P., Blu, T. & Unser, M.,"Complete Parametrization of Piecewise-Polynomial Interpolators According to Degree, Support, Regularity, and Order", Proceedings of the 2000 IEEE International Conference on Image Processing (ICIP'00), Vancouver, Canada, Vol. {II}, pp. 335-338, September 10-13, 2000. 
The most essential ingredient of interpolation is its basis function. We have shown in previous papers that this basis need not necessarily be interpolating to achieve good results. On the contrary, several recent studies have confirmed that non-interpolating bases, such as B-splines and O-Moms, perform best. This opens up a much wider choice of basis functions. In this paper, we give the designer the tools to characterize this enlarged space of functions. In particular, the four most important parameters for image processing can be specified upfront: degree, support, regularity, and order. The theorems presented here then allow the design to be refined by dealing with additional coefficients that can be selected freely, without interfering with the main design parameters.

@inproceedings{blu2000j, author = "Th{\'{e}}venaz, P. and Blu, T. and Unser, M.", title = "Complete Parametrization of Piecewise-Polynomial Interpolators According to Degree, Support, Regularity, and Order", booktitle = "Proceedings of the 2000 {IEEE} International Conference on Image Processing ({ICIP'00})", month = "September 10-13,", year = "2000", volume = "{II}", pages = "335-338", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000j" } 
Unser, M. & Blu, T.,"Comparison of Wavelets from the Point of View of Their Approximation Error", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing VI, San Diego, USA, Vol. 3458, pp. 14-21, July 19-24, 1998. 
We present new quantitative results for the characterization of the L_{2}-error of wavelet-like expansions as a function of the scale a. This yields an extension as well as a simplification of the asymptotic error formulas that have been published previously. We use our bound determinations to compare the approximation power of various families of wavelet transforms. We present explicit formulas for the leading asymptotic constant for both splines and Daubechies wavelets. For a specified approximation error, this allows us to predict the sampling rate reduction that can be obtained by using splines instead of Daubechies wavelets. In particular, we prove that the gain in sampling density (splines vs. Daubechies) converges to π as the order goes to infinity. 
@inproceedings{blu1998g, author = "Unser, M. and Blu, T.", title = "Comparison of Wavelets from the Point of View of Their Approximation Error", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {VI}", month = "July 19--24,", year = "1998", volume = "3458", pages = "14--21", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1998g" } 
Unser, M. & Blu, T.,"Spline Wavelets with Fractional Order of Approximation", Wavelet Applications Workshop, Monte Verità, Switzerland, September 28–October 2, 1998. 
We extend Schoenberg's family of polynomial splines with uniform knots to all fractional degrees α > −1/2. These splines, which involve linear combinations of the one-sided power functions x_{+}^{α} = max(0, x)^{α}, are α-Hölder continuous for α ≥ 0. We construct the corresponding B-splines by taking fractional finite differences and provide an explicit characterization in both time and frequency domains. We show that these functions satisfy most of the properties of the traditional B-splines, including the convolution property, and a generalized fractional differentiation rule that involves finite differences only. We characterize the decay of the fractional B-splines, which are not compactly supported for nonintegral α's. 
@inproceedings{blu1998h, author = "Unser, M. and Blu, T.", title = "Spline Wavelets with Fractional Order of Approximation", booktitle = "Wavelet Applications Workshop", month = "September 28--October 2,", year = "1998", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1998h" } 
Unser, M. & Blu, T.,"Construction of Fractional Spline Wavelet Bases", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing VII, Denver, USA, Vol. 3813, pp. 422–431, July 19–23, 1999. 
We extend Schoenberg's B-splines to all fractional degrees α > −1/2. These splines are constructed using linear combinations of the integer shifts of the power functions x_{+}^{α} (one-sided) or x_{*}^{α} (symmetric); in each case, they are α-Hölder continuous for α > 0. They satisfy most of the properties of the traditional B-splines; in particular, the Riesz basis condition and the two-scale relation, which makes them suitable for the construction of new families of wavelet bases. What is especially interesting from a wavelet perspective is that the fractional B-splines have a fractional order of approximation (α+1), while they reproduce the polynomials of degree [α]. We show how they yield continuous-order generalizations of the orthogonal Battle–Lemarié wavelets and of the semi-orthogonal B-spline wavelets. As α increases, these latter wavelets tend to be optimally localized in time and frequency in the sense specified by the uncertainty principle. The corresponding analysis wavelets also behave like fractional differentiators; they may therefore be used to whiten fractional Brownian motion processes. 
@inproceedings{blu1999h, author = "Unser, M. and Blu, T.", title = "Construction of Fractional Spline Wavelet Bases", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {VII}", month = "July 19--23,", year = "1999", volume = "3813", pages = "422--431", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu1999h" } 
Unser, M. & Blu, T.,"Fractional Splines and Wavelets", SIAM Review, Vol. 42 (1), pp. 43–67, March 2000. 
We extend Schoenberg's family of polynomial splines with uniform knots to all fractional degrees α > −1. These splines, which involve linear combinations of the one-sided power functions x_{+}^{α} = max(0, x)^{α}, belong to L^{1} and are α-Hölder continuous for α > 0. We construct the corresponding B-splines by taking fractional finite differences and provide an explicit characterization in both time and frequency domains. We show that these functions satisfy most of the properties of the traditional B-splines, including the convolution property, and a generalized fractional differentiation rule that involves finite differences only. We characterize the decay of the B-splines, which are not compactly supported for nonintegral α's. Their most astonishing feature (in reference to the Strang–Fix theory) is that they have a fractional order of approximation α + 1 while they reproduce the polynomials of degree [α]. For α > −1/2, they satisfy all the requirements for a multiresolution analysis of L^{2} (Riesz bounds, two-scale relation) and may therefore be used to build new families of wavelet bases with a continuously-varying order parameter. Our construction also yields symmetrized fractional B-splines which provide the connection with Duchon's general theory of radial (m,s)-splines (including thin-plate splines). In particular, we show that the symmetric version of our splines can be obtained as the solution of a variational problem involving the norm of a fractional derivative. (Front cover.) 
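The construction described in this abstract lends itself to a direct numerical illustration. The sketch below is our own code (the function name is ours, not the authors'): it evaluates the causal fractional B-spline through its one-sided power-function expansion, β_{+}^{α}(x) = (1/Γ(α+1)) ∑_{k≥0} (−1)^{k} C(α+1, k) (x−k)_{+}^{α}, with the signed binomial coefficients computed by a simple recursion; for integer α it reproduces the classical B-splines.

```python
from math import gamma

def frac_bspline(x, alpha):
    """Causal fractional B-spline of degree alpha > 0, evaluated at x.

    Sums the one-sided power functions (x-k)_+^alpha weighted by the
    signed generalized binomial coefficients (-1)^k * C(alpha+1, k);
    only terms with k <= x contribute since (x-k)_+ vanishes otherwise.
    """
    if x <= 0:
        return 0.0
    s, b, k = 0.0, 1.0, 0  # b tracks (-1)^k * C(alpha+1, k)
    while k <= x:
        s += b * (x - k) ** alpha
        b *= -(alpha + 1 - k) / (k + 1)
        k += 1
    return s / gamma(alpha + 1)
```

For α = 1 this returns the causal triangle (linear B-spline) supported on [0, 2]; for fractional α the support becomes infinite, with the algebraic decay discussed in the abstract.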
@article{blu2000k, author = "Unser, M. and Blu, T.", title = "Fractional Splines and Wavelets", journal = "{SIAM} Review", month = "March", year = "2000", volume = "42", number = "1", pages = "43--67", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000k" } 
Unser, M. & Blu, T.,"Wavelets and Radial Basis Functions: A Unifying Perspective", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing VIII, San Diego, USA, Vol. 4119, pp. 487–493, July 31–August 4, 2000. 
Wavelets and radial basis functions (RBF) are two rather distinct ways of representing signals in terms of shifted basis functions. An essential aspect of RBF, which makes the method applicable to nonuniform grids, is that the basis functions, unlike wavelets, are nonlocal—in addition, they do not involve any scaling at all. Despite these fundamental differences, we show that the two types of representation are closely connected. We use the linear splines as a motivating example. These can be constructed by using translates of the one-sided ramp function (which is not localized), or, more conventionally, by using the shifts of a linear B-spline. This latter function, which is the prototypical example of a scaling function, can be obtained by localizing the one-sided ramp function using finite differences. We then generalize the concept and identify the whole class of self-similar radial basis functions that can be localized to yield conventional multiresolution wavelet bases. Conversely, we prove that, for any compactly supported scaling function φ(x), there exists a one-sided central basis function ρ_{+}(x) that spans the same multiresolution subspaces. The central property is that the multiresolution bases are generated by simple translation of ρ_{+}, without any dilation. 
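The linear-spline example of this abstract can be made concrete in a few lines. The following sketch (illustrative code with our own naming) localizes the non-local one-sided ramp by a second-order finite difference, which yields exactly the compactly supported linear B-spline, a triangle on [0, 2]:

```python
def ramp(x):
    # one-sided ramp: a non-local basis function, with no scaling involved
    return x if x > 0 else 0.0

def linear_bspline(x):
    # second-order finite difference of the ramp -> compactly supported triangle
    return ramp(x) - 2.0 * ramp(x - 1.0) + ramp(x - 2.0)
```

Outside [0, 2] the three ramp terms cancel identically, which is the localization phenomenon the abstract generalizes to the whole class of self-similar radial basis functions.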
@inproceedings{blu2000l, author = "Unser, M. and Blu, T.", title = "Wavelets and Radial Basis Functions: {A} Unifying Perspective", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {VIII}", month = "July 31--August 4,", year = "2000", volume = "4119", pages = "487--493", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000l" } 
Unser, M. & Blu, T.,"Why Restrict Ourselves to Compactly Supported Basis Functions?", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing IX, San Diego, USA, Vol. 4478, pp. 311–314, July 29–August 1, 2001. 
Compact support is undoubtedly one of the wavelet properties that is given the greatest weight both in theory and applications. It is usually believed to be essential for two main reasons: (1) to have fast numerical algorithms, and (2) to have good time or space localization properties. Here, we argue that this constraint is unnecessarily restrictive and that fast algorithms and good localization can also be achieved with noncompactly supported basis functions. By dropping the compact support requirement, one gains in flexibility. This opens up new perspectives such as fractional wavelets whose key parameters (order, regularity, etc…) are tunable in a continuous fashion. To make our point, we draw an analogy with the closely related task of image interpolation. This is an area where it was believed until very recently that interpolators should be designed to be compactly supported for best results. Today, there is compelling evidence that noncompactly supported interpolators (such as splines, and others) provide the best cost/performance tradeoff. 
@inproceedings{blu2001j, author = "Unser, M. and Blu, T.", title = "Why Restrict Ourselves to Compactly Supported Basis Functions?", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {IX}", month = "July 29--August 1,", year = "2001", volume = "4478", pages = "311--314", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001j" } 
Unser, M. & Blu, T.,"Fractional Splines and Wavelets: From Theory to Applications", Joint IDR-IMA Workshop: Ideal Data Representation, Minneapolis, USA, April 9–13, 2001. 
In the first part, we present the theory of fractional splines, an extension of the polynomial splines to non-integer degrees. Their basic constituents are piecewise power functions of degree α. The corresponding B-splines are obtained through a localization process similar to the classical one, replacing finite differences by fractional differences. We show that the fractional B-splines share virtually all the properties of the classical B-splines, including the two-scale relation, and can therefore be used to define new wavelet bases with a continuously varying order parameter. We discuss some of their remarkable properties; in particular, the fact that the fractional spline wavelets behave like fractional derivatives of order α + 1. In the second part, we turn to applications. We first describe a fast implementation of the fractional wavelet transform, which is essential to make the method practical. We then present an application of fractional splines to tomographic reconstruction, where we take advantage of explicit formulas for computing the fractional derivatives of splines. We also make the connection with ridgelets. Finally, we consider the use of fractional wavelets for the detection and localization of brain activation in fMRI sequences. Here, we take advantage of the continuously varying order parameter, which allows us to fine-tune the localization properties of the basis functions. 
@inproceedings{blu2001k, author = "Unser, M. and Blu, T.", title = "Fractional Splines and Wavelets: {F}rom Theory to Applications", booktitle = "Joint {IDR-IMA} Workshop: {I}deal Data Representation", month = "April 9--13,", year = "2001", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001k" } 
Unser, M. & Blu, T.,"Fractional Wavelets: Properties and Applications", Proceedings of the First 2002 SIAM Conference on Imaging Science (SIAGIS'02), Boston, USA, Vol. MS1, pp. 33, March 4–6, 2002. 
We introduce the concept of fractional wavelets which extends the conventional theory to non-integer orders. This allows for the construction of new wavelet bases that are indexed by a continuously-varying order parameter, as opposed to an integer. An essential feature of the method is to gain control over the key wavelet properties (regularity, time-frequency localization, etc…). Practically, this translates into the fact that all important wavelet parameters are adjustable in a continuous fashion so that the new basis functions can be fine-tuned for the application at hand. We present some specific examples of wavelets (fractional splines) and investigate the main implications of the fractional order property. In particular, we prove that these wavelets essentially behave like fractional derivative operators, which makes them good candidates for the analysis and synthesis of fractal-like processes. We also consider nonseparable extensions to quincunx lattices, which are well suited for image processing. Finally, we deal with the practical aspect of the evaluation of these transforms and present a fast implementation based on the FFT. 
@inproceedings{blu2002k, author = "Unser, M. and Blu, T.", title = "Fractional Wavelets: Properties and Applications", booktitle = "Proceedings of the First 2002 {SIAM} Conference on Imaging Science ({SIAGIS'02})", month = "March 4--6,", year = "2002", volume = "MS1", pages = "33", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002k" } 
Unser, M. & Blu, T.,"Mathematical Properties of the JPEG2000 Wavelet Filters", IEEE Transactions on Image Processing, Vol. 12 (9), pp. 1080–1090, September 2003. 
The LeGall 5⁄3 and Daubechies 9⁄7 filters have risen to special prominence because they were selected for inclusion in the JPEG2000 standard. Here, we determine their key mathematical features: Riesz bounds, order of approximation, and regularity (Hölder and Sobolev). We give approximation theoretic quantities such as the asymptotic constant for the L^{2} error and the angle between the analysis and synthesis spaces which characterizes the loss of performance with respect to an orthogonal projection. We also derive new asymptotic error formulæ that exhibit bound constants that are proportional to the magnitude of the first nonvanishing moment of the wavelet. The Daubechies 9⁄7 stands out because it is very close to orthonormal, but this turns out to be slightly detrimental to its asymptotic performance when compared to other wavelets with four vanishing moments. 
@article{blu2003o, author = "Unser, M. and Blu, T.", title = "Mathematical Properties of the {JPEG2000} Wavelet Filters", journal = "{IEEE} Transactions on Image Processing", month = "September", year = "2003", volume = "12", number = "9", pages = "1080--1090", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003o" } 
Unser, M. & Blu, T.,"Wavelet Theory Demystified", IEEE Transactions on Signal Processing, Vol. 51 (2), pp. 470–483, February 2003. 
In this paper, we revisit wavelet theory starting from the representation of a scaling function as the convolution of a B-spline (the regular part of it) and a distribution (the irregular or residual part). This formulation leads to some new insights on wavelets and makes it possible to rederive the main results of the classical theory—including some new extensions for fractional orders—in a self-contained, accessible fashion. In particular, we prove that the B-spline component is entirely responsible for five key wavelet properties: order of approximation, reproduction of polynomials, vanishing moments, multiscale differentiation property, and smoothness (regularity) of the basis functions. We also investigate the interaction of wavelets with differential operators, giving explicit time-domain formulas for the fractional derivatives of the basis functions. This allows us to specify a corresponding dual wavelet basis and helps us understand why the wavelet transform provides a stable characterization of the derivatives of a signal. Additional results include a new peeling theory of smoothness, leading to the extended notion of wavelet differentiability in the L_{p}-sense and a sharper theorem stating that smoothness implies order. 
@article{blu2003p, author = "Unser, M. and Blu, T.", title = "Wavelet Theory Demystified", journal = "{IEEE} Transactions on Signal Processing", month = "February", year = "2003", volume = "51", number = "2", pages = "470--483", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003p" } 
Unser, M. & Blu, T.,"Fractional Wavelets, Derivatives, and Besov Spaces", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing X, San Diego, USA, Vol. 5207, pp. 147–152, August 3–8, 2003. Part I. 
We show that a multidimensional scaling function of order γ (possibly fractional) can always be represented as the convolution of a polyharmonic B-spline of order γ and a distribution with a bounded Fourier transform which has neither order nor smoothness. The presence of the B-spline convolution factor explains all key wavelet properties: order of approximation, reproduction of polynomials, vanishing moments, multiscale differentiation property, and smoothness of the basis functions. The B-spline factorization also gives new insights on the stability of wavelet bases with respect to differentiation. Specifically, we show that there is a direct correspondence between the process of moving a B-spline factor from one side to another in a pair of biorthogonal scaling functions and the exchange of fractional integrals/derivatives on their wavelet counterparts. This result yields two “eigenrelations” for fractional differential operators that map biorthogonal wavelet bases into other stable wavelet bases. This formulation provides a better understanding as to why the Sobolev/Besov norm of a signal can be measured from the l_{p}-norm of its rescaled wavelet coefficients. Indeed, the key condition for a wavelet basis to be an unconditional basis of the Besov space B_{q}^{s}(L_{p}(R^{d})) is that the s-order derivative of the wavelet be in L_{p}. 
@inproceedings{blu2003q, author = "Unser, M. and Blu, T.", title = "Fractional Wavelets, Derivatives, and {B}esov Spaces", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {X}", month = "August 3--8,", year = "2003", volume = "5207", pages = "147--152", note = "Part {I}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003q" } 
Unser, M. & Blu, T.,"The Spline Foundation of Wavelet Theory", International Conference on Wavelets and Splines (EIMIWS'03), Saint Petersburg, Russia, pp. 98–99, July 3–8, 2003. 
Recently, we came up with two interesting generalizations of polynomial splines by extending the degree of the generating functions to both real and complex exponents. While these may qualify as exotic constructions at first sight, we show here that both types of splines (fractional and complex) play a truly fundamental role in wavelet theory and that they lead to a better understanding of what wavelets really are. To this end, we first revisit wavelet theory starting from the representation of a scaling function as the convolution of a B-spline (the regular part of it) and a distribution (the irregular or residual part). This formulation leads to some new insights on wavelets and makes it possible to rederive the main results of the classical theory—including some new extensions for fractional orders—in a self-contained, accessible fashion. In particular, we prove that the B-spline component is entirely responsible for five key wavelet properties: order of approximation, reproduction of polynomials, vanishing moments, multiscale differentiation, and smoothness (regularity) of the basis functions. Second, we show that any scaling function can be expanded as a sum of harmonic splines (a particular subset of the splines with complex exponents); these play essentially the same role here as the Fourier exponentials do for periodic signals. This harmonic expansion provides an explicit time-domain representation of scaling functions and wavelets; it also explains their fractal nature. Remarkably, truncating the expansion preserves the essential multiresolution property (two-scale relation). Keeping the first term alone yields a fractional-spline approximation that captures most of the important wavelet features; e.g., its general shape and smoothness.

@inproceedings{blu2003r, author = "Unser, M. and Blu, T.", title = "The Spline Foundation of Wavelet Theory", booktitle = "International Conference on Wavelets and Splines ({EIMIWS'03})", month = "July 3--8,", year = "2003", pages = "98--99", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003r" } 
Unser, M. & Blu, T.,"A Unifying Spline Formulation for Stochastic Signal Processing [Or How Schoenberg Meets Wiener, with the Help of Tikhonov]", Second International Conference on Computational Harmonic Analysis, Nineteenth Annual Shanks Lecture (CHA'04), Nashville, USA, May 24–30, 2004. Plenary talk. 
We introduce an extended class of cardinal L-splines where L is a pseudodifferential—but not necessarily local—operator satisfying some admissibility conditions. This family is quite general and includes a variety of standard constructions including the polynomial, elliptic, exponential, and fractional splines. In order to fit such splines to the noisy samples of a signal, we specify a corresponding smoothing spline problem which involves an L-seminorm regularization term. We prove that the optimal solution, among all possible functions, is a cardinal L^{*}L-spline which has a stable representation in a B-spline-like basis. We show that the coefficients of this spline estimator can be computed by digital filtering of the input samples; we also describe an efficient recursive filtering algorithm that is applicable whenever the transfer function of L is rational. We justify this procedure statistically by establishing an equivalence between L^{*}L smoothing splines and the MMSE (minimum mean square error) estimation of a stationary signal corrupted by white Gaussian noise. In this model-based formulation, the optimum operator L is the whitening filter of the process, and the regularization parameter is proportional to the noise variance. Thus, the proposed formalism yields the optimal discretization of the classical Wiener filter, together with a fast recursive algorithm. It extends the standard Wiener solution by providing the optimal interpolation space. We also present a Bayesian interpretation of such spline estimators.

@inproceedings{blu2004i, author = "Unser, M. and Blu, T.", title = "A Unifying Spline Formulation for Stochastic Signal Processing [{O}r How {S}choenberg Meets {W}iener, with the Help of {T}ikhonov]", booktitle = "Second International Conference on Computational Harmonic Analysis, Nineteenth Annual Shanks Lecture ({CHA'04})", month = "May 24--30,", year = "2004", note = "Plenary talk", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004i" } 
Unser, M. & Blu, T.,"Generalized Smoothing Splines and the Optimal Discretization of the Wiener Filter", IEEE Transactions on Signal Processing, Vol. 53 (6), pp. 2146–2159, June 2005. 
We introduce an extended class of cardinal L^{*}L-splines, where L is a pseudodifferential operator satisfying some admissibility conditions. We show that the L^{*}L-spline signal interpolation problem is well posed and that its solution is the unique minimizer of the spline energy functional ‖Ls‖_{L_{2}}^{2}, subject to the interpolation constraint. Next, we consider the corresponding regularized least squares estimation problem, which is more appropriate for dealing with noisy data. The criterion to be minimized is the sum of a quadratic data term, which forces the solution to be close to the input samples, and a “smoothness” term that privileges solutions with small spline energies. Here, too, we find that the optimal solution, among all possible functions, is a cardinal L^{*}L-spline. We show that this smoothing spline estimator has a stable representation in a B-spline-like basis and that its coefficients can be computed by digital filtering of the input signal. We describe an efficient recursive filtering algorithm that is applicable whenever the transfer function of L is rational (which corresponds to the case of exponential splines). We justify these algorithms statistically by establishing an equivalence between L^{*}L smoothing splines and the minimum mean square error (MMSE) estimation of a stationary signal corrupted by white Gaussian noise. In this model-based formulation, the optimum operator L is the whitening filter of the process, and the regularization parameter is proportional to the noise variance. Thus, the proposed formalism yields the optimal discretization of the classical Wiener filter, together with a fast recursive algorithm. It extends the standard Wiener solution by providing the optimal interpolation space. We also present a Bayesian interpretation of the algorithm. 
@article{blu2005h, author = "Unser, M. and Blu, T.", title = "Generalized Smoothing Splines and the Optimal Discretization of the {W}iener Filter", journal = "{IEEE} Transactions on Signal Processing", month = "June", year = "2005", volume = "53", number = "6", pages = "2146--2159", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005h" } 
Unser, M. & Blu, T.,"Cardinal Exponential Splines: Part I—Theory and Filtering Algorithms", IEEE Transactions on Signal Processing, Vol. 53 (4), pp. 1425–1438, April 2005. 
Causal exponentials play a fundamental role in classical system theory. Starting from those elementary building blocks, we propose a complete and self-contained signal processing formulation of exponential splines defined on a uniform grid. We specify the corresponding B-spline basis functions and investigate their reproduction properties (Green function and exponential polynomials); we also characterize their stability (Riesz bounds). We show that the exponential B-spline framework allows an exact implementation of continuous-time signal processing operators, including convolution, differential operators, and modulation, by simple processing in the discrete B-spline domain. We derive efficient filtering algorithms for multiresolution signal extrapolation and approximation, extending earlier results for polynomial splines. Finally, we present a new asymptotic error formula that predicts the magnitude and the Nth-order decay of the L_{2}-approximation error as a function of the knot spacing T. Please consult also the companion paper by M. Unser, "Cardinal Exponential Splines: Part II—Think Analog, Act Digital," IEEE Transactions on Signal Processing, vol. 53, no. 4, pp. 1439–1449, April 2005. 
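As a minimal illustration of the localization idea behind exponential B-splines (our own sketch with hypothetical names, not the paper's implementation): the causal Green's function e^{ax}u(x) of the operator D − aI is not compactly supported, but a single "exponential finite difference" cancels its tail and yields a first-order exponential B-spline supported on [0, 1); for a = 0 it reduces to the ordinary box function β^{0}.

```python
import math

def causal_exp(x, a):
    # causal Green's function of D - aI: e^{ax} for x >= 0, zero otherwise
    return math.exp(a * x) if x >= 0 else 0.0

def exp_bspline1(x, a):
    # exponential finite difference localizes the Green's function:
    # for x >= 1 the two terms cancel exactly, so the support is [0, 1)
    return causal_exp(x, a) - math.exp(a) * causal_exp(x - 1.0, a)
```

Higher-order exponential B-splines follow by convolving such first-order atoms, mirroring the construction the abstract describes.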
@article{blu2005i, author = "Unser, M. and Blu, T.", title = "Cardinal Exponential Splines: {P}art {I}---{T}heory and Filtering Algorithms", journal = "{IEEE} Transactions on Signal Processing", month = "April", year = "2005", volume = "53", number = "4", pages = "1425--1438", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005i" } 
Unser, M. & Blu, T.,"Self-Similarity: Part I—Splines and Operators", IEEE Transactions on Signal Processing, Vol. 55 (4), pp. 1352–1363, April 2007. 
The central theme of this pair of papers (Parts I and II in this issue) is self-similarity, which is used as a bridge for connecting splines and fractals. The first part of the investigation is deterministic, and the context is that of L-splines; these are defined in the following terms: s(t) is a cardinal L-spline iff L{s(t)} = ∑_{k∈Z} a[k] δ(t−k), where L is a suitable pseudodifferential operator. Our starting point for the construction of “self-similar” splines is the identification of the class of differential operators L that are both translation- and scale-invariant. This results into a two-parameter family of generalized fractional derivatives, ∂_{τ}^{γ}, where γ is the order of the derivative and τ is an additional phase factor. We specify the corresponding L-splines, which yield an extended class of fractional splines. The operator ∂_{τ}^{γ} is used to define a scale-invariant energy measure—the squared L_{2}-norm of the γth derivative of the signal—which provides a regularization functional for interpolating or fitting the noisy samples of a signal. We prove that the corresponding variational (or smoothing) spline estimator is a cardinal fractional spline of order 2γ, which admits a stable representation in a B-spline basis. We characterize the equivalent frequency response of the estimator and show that it closely matches that of a classical Butterworth filter of order 2γ. We also establish a formal link between the regularization parameter λ and the cutoff frequency of the smoothing spline filter: ω_{0} ≅ λ^{−1/(2γ)}. Finally, we present an efficient computational solution to the fractional smoothing spline problem: it uses the fast Fourier transform and takes advantage of the multiresolution properties of the underlying basis functions. Please consult also the companion paper by T. Blu, M. Unser, "Self-Similarity: Part II—Optimal Estimation of Fractal Processes," IEEE Transactions on Signal Processing, vol. 55, no. 4, pp. 1364–1378, April 2007. 
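The FFT-based solution mentioned at the end of the abstract can be sketched in a few lines. This is a simplified periodic stand-in, not the authors' exact algorithm: we assume the discrete symbol |2 sin(ω/2)|^{γ} for the γth derivative on the sample grid, so that the smoothing spline acts on the samples like the Butterworth-type attenuation 1/(1 + λ|·|^{2γ}).

```python
import numpy as np

def fractional_smoothing(y, lam, gam):
    """Periodic sketch of a fractional smoothing-spline filter.

    Assumption: the gamma-th derivative is represented on the grid by the
    fractional-difference symbol |2 sin(w/2)|^gam (our simplification).
    """
    y = np.asarray(y, dtype=float)
    w = 2.0 * np.pi * np.fft.fftfreq(y.size)  # normalized frequencies
    H = 1.0 / (1.0 + lam * np.abs(2.0 * np.sin(w / 2.0)) ** (2.0 * gam))
    return np.real(np.fft.ifft(np.fft.fft(y) * H))
```

Since H(0) = 1, the mean of the samples is preserved; increasing λ lowers the cutoff frequency and yields a smoother estimate.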
@article{blu2007l, author = "Unser, M. and Blu, T.", title = "Self-{S}imilarity: {P}art {I}---{S}plines and Operators", journal = "{IEEE} Transactions on Signal Processing", month = "April", year = "2007", volume = "55", number = "4", pages = "1352--1363", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007l" } 
Unser, M., Horbelt, S. & Blu, T.,"Fractional Derivatives, Splines and Tomography", Proceedings of the Tenth European Signal Processing Conference (EUSIPCO'00), Tampere, Finland, Vol. IV, pp. 2017–2020, September 4–8, 2000. 
We develop a spline calculus for dealing with fractional derivatives. After a brief review of fractional splines, we present the main formulas for computing the fractional derivatives of the underlying basis functions. In particular, we show that the γ^{th} fractional derivative of a B-spline of degree α (not necessarily integer) is given by the γ^{th} fractional difference of a B-spline of degree α−γ. We use these results to derive an improved version of the filtered backprojection algorithm for tomographic reconstruction. The projection data is first interpolated with splines; the continuous model is then used explicitly for an exact implementation of the filtering and backprojection steps. 
@inproceedings{blu2000m, author = "Unser, M. and Horbelt, S. and Blu, T.", title = "Fractional Derivatives, Splines and Tomography", booktitle = "Proceedings of the Tenth European Signal Processing Conference ({EUSIPCO'00})", month = "September 4--8,", year = "2000", volume = "{IV}", pages = "2017--2020", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2000m" } 
Urigüen, J.A., Blu, T. & Dragotti, P.L.,"FRI Sampling with Arbitrary Kernels", IEEE Transactions on Signal Processing, Vol. 61 (21), pp. 5310–5323, November 2013. 
This paper addresses the problem of sampling nonbandlimited signals within the Finite Rate of Innovation (FRI) setting. We had previously shown that, by using sampling kernels whose integer span contains specific exponentials (generalized Strang–Fix conditions), it is possible to devise non-iterative, fast reconstruction algorithms from very low-rate samples. Yet, the accuracy and sensitivity to noise of these algorithms is highly dependent on these exponential reproducing kernels—actually, on the exponentials that they reproduce. Hence, our first contribution here is to provide clear guidelines on how to choose the sampling kernels optimally, in such a way that the reconstruction quality is maximized in the presence of noise. The optimality of these kernels is validated by comparing with Cramér–Rao lower bounds (CRLB). Our second contribution is to relax the exact exponential reproduction requirement. Instead, we demonstrate that arbitrary sampling kernels can reproduce the "best" exponentials within quite a high accuracy in general, and that applying the exact FRI algorithms in this approximate context results in near-optimal reconstruction accuracy for practical noise levels. Essentially, we propose a universal extension of the FRI approach to arbitrary sampling kernels. Numerical results checked against the CRLB validate the various contributions of the paper and, in particular, outline the ability of arbitrary sampling kernels to be used in FRI algorithms. 
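The non-iterative reconstruction this abstract refers to hinges on the annihilating-filter (Prony) step: once the kernel has mapped the samples to a sum of exponentials s[m] = ∑_k a_k u_k^m, the parameters u_k are recovered as roots of a filter that annihilates s. A minimal noiseless sketch (our own code and naming, not the paper's full pipeline):

```python
import numpy as np

def annihilating_locations(s, K):
    """Recover the K exponential parameters u_k from noiseless samples
    s[m] = sum_k a_k * u_k**m via the annihilating-filter (Prony) method."""
    s = np.asarray(s, dtype=complex)
    M = len(s)
    # Annihilation equations sum_i h[i] * s[m-i] = 0 for m = K..M-1,
    # with the filter normalized so that h[0] = 1
    A = np.array([[s[m - i] for i in range(1, K + 1)] for m in range(K, M)])
    rhs = -np.array([s[m] for m in range(K, M)])
    h_rest = np.linalg.lstsq(A, rhs, rcond=None)[0]
    h = np.concatenate(([1.0], h_rest))
    # roots of z^K + h[1] z^{K-1} + ... + h[K] are the u_k
    return np.roots(h)
```

With noisy samples one would replace the plain least-squares step by a more robust variant (e.g., total least squares or Cadzow-type denoising), which is precisely where the choice of sampling kernel studied in the paper matters.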
@article{blu2013f, author = "Urig{\"u}en, J.A. and Blu, T. and Dragotti, P.L.", title = "{FRI} Sampling with Arbitrary Kernels", journal = "{IEEE} Transactions on Signal Processing", month = "November", year = "2013", volume = "61", number = "21", pages = "5310--5323", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013f" } 
Urigüen, J.A., Dragotti, P.L. & Blu, T.,"Method and apparatus for sampling and reconstruction of signals", International patent WO/2014/191771, December 2014. 
A signal processing method for estimating a frequency domain representation of a signal from a series of samples distorted by an instrument function, the method comprising: obtaining the series of samples; obtaining a set of coefficients that fit a set of basis functions to a complex exponential function, wherein the set of basis functions comprises a plurality of basis functions each defined by a shifted version of the instrument function in a signal domain; and estimating the frequency domain representation of the signal based on the series of samples and the coefficients. The estimate of the instrument function is based on a characterisation of the instrument function in the frequency domain at frequencies associated with the complex exponential function. 
@misc{blu2014f, author = "Urig\"{u}en, J.A. and Dragotti, P.L. and Blu, T.", title = "Method and apparatus for sampling and reconstruction of signals", month = "December", year = "2014", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2014f" } 
Urigüen, J.A., Dragotti, P.L. & Blu, T.,"On The Exponential Reproducing Kernels for Sampling Signals with Finite Rate of Innovation", Proceedings of the Ninth International Workshop on Sampling Theory and Applications (SampTA'11), Singapore, May 2-6, 2011. 
The theory of Finite Rate of Innovation (FRI) broadened the traditional sampling paradigm to certain classes of parametric signals. In the presence of noise, the original procedures are not as stable, and a different treatment is needed. In this paper we review the ideal FRI sampling scheme and some of the existing techniques to combat noise. We then present alternative denoising methods for the case of exponential reproducing kernels. We first vary existing subspace-based approaches. We also discuss how to design exponential reproducing kernels that are most robust to noise. 
@inproceedings{blu2011i, author = "Urig{\"u}en, J.A. and Dragotti, P.L. and Blu, T.", title = "On The Exponential Reproducing Kernels for Sampling Signals with Finite Rate of Innovation", booktitle = "Proceedings of the Ninth International Workshop on Sampling Theory and Applications ({SampTA'11})", month = "May 2-6,", year = "2011", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2011i" } 
Van De Ville, D., Bathellier, B., Accolla, R., Carleton, A., Blu, T. & Unser, M.,"Wavelet-Based Detection of Stimulus Responses in Time-Lapse Microscopy", Proceedings of the Thirty-First IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'06), Toulouse, France, pp. V-1161-V-1164, May 14-19, 2006. 
Many experimental paradigms in biology aim at studying the response to coordinated stimuli. In dynamic imaging experiments, the observed data is often not straightforward to interpret and not directly measurable in a quantitative fashion. Consequently, the data is typically preprocessed in an ad hoc fashion and the results subjected to a statistical inference at the level of a population. We propose a new framework for analyzing time-lapse images that exploits some a priori knowledge on the type of temporal response and takes advantage of the spatial correlation of the data. This is achieved by processing the data in the wavelet domain and expressing the time course of each wavelet coefficient by a linear model. We end up with a statistical map in the spatial domain for the contrast of interest (i.e., the stimulus response). The feasibility of the method is demonstrated by an example of intrinsic microscopy imaging of mice's brains during coordinated sensory stimulation. 
@inproceedings{blu2006f, author = "Van De Ville, D. and Bathellier, B. and Accolla, R. and Carleton, A. and Blu, T. and Unser, M.", title = "Wavelet-Based Detection of Stimulus Responses in Time-Lapse Microscopy", booktitle = "Proceedings of the Thirty-First {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'06})", month = "May 14-19,", year = "2006", pages = "{V}-1161-{V}-1164", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006f" } 
Van De Ville, D., Bathellier, B., Carleton, A., Blu, T. & Unser, M.,"Wavelet-Based Statistical Analysis for Optical Imaging in Mouse Olfactory Bulb", Proceedings of the Fourth IEEE International Symposium on Biomedical Imaging (ISBI'07), Arlington, USA, pp. 448-451, April 12-15, 2007. 
Optical imaging is a powerful technique to map brain function in animals. In this study, we consider in vivo optical imaging of the murine olfactory bulb, using an intrinsic signal and a genetically expressed activity reporter fluorescent protein (synaptopHluorin). The aim is to detect odor-evoked activations that occur in small spherical structures of the olfactory bulb called glomeruli. We propose a new way of analyzing this kind of data that combines a linear model (LM) fitting along the temporal dimension, together with a discrete wavelet transform (DWT) along the spatial dimensions. We show that relevant regressors for the LM are available for both types of optical signals. In addition, the spatial wavelet transform allows us to exploit spatial correlation at different scales, and in particular to extract activation patterns at the expected size of glomeruli. Our framework also provides a statistical significance for every pixel in the activation maps and it has strong type I error control. 
@inproceedings{blu2007m, author = "Van De Ville, D. and Bathellier, B. and Carleton, A. and Blu, T. and Unser, M.", title = "Wavelet-Based Statistical Analysis for Optical Imaging in Mouse Olfactory Bulb", booktitle = "Proceedings of the Fourth {IEEE} International Symposium on Biomedical Imaging ({ISBI'07})", month = "April 12-15,", year = "2007", pages = "448-451", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007m" } 
Van De Ville, D., Blu, T., Forster, B. & Unser, M.,"Isotropic-Polyharmonic {B}-Splines and Wavelets", Proceedings of the 2004 IEEE International Conference on Image Processing (ICIP'04), Singapore, Singapore, pp. 661-664, October 24-27, 2004. 
We propose the use of polyharmonic B-splines to build nonseparable two-dimensional wavelet bases. The central idea is to base our design on the isotropic polyharmonic B-splines, a new type of polyharmonic B-splines that do converge to a Gaussian as the order increases. We opt for the quincunx subsampling scheme, which allows us to characterize the wavelet spaces with a single wavelet: the isotropic-polyharmonic B-spline wavelet. Interestingly, this wavelet converges to a combination of four Gabor atoms, which are well separated in the frequency domain. We also briefly discuss our Fourier-based implementation and present some experimental results. 
@inproceedings{blu2004j, author = "Van De Ville, D. and Blu, T. and Forster, B. and Unser, M.", title = "Isotropic-Polyharmonic \mbox{{B}-Splines} and Wavelets", booktitle = "Proceedings of the 2004 {IEEE} International Conference on Image Processing ({ICIP'04})", month = "October 24-27,", year = "2004", pages = "661-664", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004j" } 
Van De Ville, D., Blu, T., Forster, B. & Unser, M.,"Semi-Orthogonal Wavelets That Behave like Fractional Differentiators", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet XI, San Diego, USA, Vol. 5914, pp. 59140C-1-59140C-8, July 31-August 3, 2005. 
The approximate behavior of wavelets as differential operators is often considered as one of their most fundamental properties. In this paper, we investigate how we can further improve on the wavelet's behavior as differentiator. In particular, we propose semi-orthogonal differential wavelets. The semi-orthogonality condition ensures that wavelet spaces are mutually orthogonal. The operator, hidden within the wavelet, can be chosen as a generalized differential operator ∂_{τ}^{γ}, for a γth order derivative with shift τ. Both order of derivation and shift can be chosen fractional. Our design leads us naturally to select the fractional B-splines as scaling functions. By putting the differential wavelet in the perspective of a derivative of a smoothing function, we find that signal singularities are compactly characterized by at most two local extrema of the wavelet coefficients in each subband. This property could be beneficial for signal analysis using wavelet bases. We show that this wavelet transform can be efficiently implemented using FFTs. 
@inproceedings{blu2005j, author = "Van De Ville, D. and Blu, T. and Forster, B. and Unser, M.", title = "Semi-Orthogonal Wavelets That Behave like Fractional Differentiators", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet {XI}", month = "July 31-August 3,", year = "2005", volume = "5914", pages = "59140C-1-59140C-8", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005j" } 
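The FFT-based implementation mentioned in the abstract amounts, in essence, to multiplying the spectrum by (jω)^γ (with an extra exp(-jωτ) factor for the shift τ). A minimal sketch, with illustrative function name and test signal, checked against the ordinary derivative for γ = 1:

```python
import numpy as np

# Fourier-domain fractional derivative: multiply the spectrum by (j*omega)^gamma.
# Function name and test signal are illustrative only.
def fractional_derivative(x, gamma, dt=1.0):
    n = len(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    mult = (1j * omega) ** gamma
    mult[0] = 0.0                    # suppress the (singular) DC term
    return np.real(np.fft.ifft(np.fft.fft(x) * mult))

t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 4 * t)
dx = fractional_derivative(x, 1.0, dt=t[1] - t[0])
# For gamma = 1, this must agree with the exact derivative of the sine:
print(np.allclose(dx, 2 * np.pi * 4 * np.cos(2 * np.pi * 4 * t)))
```

A shifted version ∂_{τ}^{γ} would simply include the additional factor exp(-1j * omega * tau) in the multiplier.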
Van De Ville, D., Blu, T., Forster, B. & Unser, M.,"Polyharmonic {B}-Spline Wavelets: From Isotropy to Directionality", Advanced Concepts for Intelligent Vision Systems (ACIVS'06), Antwerp, Belgium, September 18-21, 2006. Invited talk. 
Polyharmonic B-splines are excellent basis functions to build multidimensional wavelet bases. These functions are nonseparable, multidimensional generators that are localized versions of radial basis functions. We show that Rabut's elementary polyharmonic B-splines do not converge to a Gaussian as the order parameter increases, as opposed to their separable B-spline counterparts. Therefore, we introduce a more isotropic localization operator that guarantees this convergence, resulting in the isotropic polyharmonic B-splines. Next, we focus on the two-dimensional quincunx subsampling scheme. This configuration is of particular interest for image processing, because it yields a finer scale progression than the standard dyadic approach. However, up until now, the design of appropriate filters for the quincunx scheme has mainly been done using the McClellan transform. In our approach, we start from the scaling functions, which are the polyharmonic B-splines and, as such, explicitly known, and we derive a family of polyharmonic spline wavelets corresponding to different flavors of the semi-orthogonal wavelet transform; e.g., orthonormal, B-spline, and dual. The filters are automatically specified by the scaling relations satisfied by these functions. We prove that the isotropic polyharmonic B-spline wavelet converges to a combination of four Gabor atoms, which are well separated in the frequency domain. We also show that these wavelets are nearly isotropic and that they behave as an iterated Laplacian operator at low frequencies. We describe an efficient fast Fourier transform-based implementation of the discrete wavelet transform based on polyharmonic B-splines. Finally, we propose a new way to build directional wavelets using modified polyharmonic B-splines. This approach benefits from the previous results (construction of the wavelet filters, fast implementation,…) but allows one to recover directional information about the edges from the (complex-valued) wavelet coefficients. 
@inproceedings{blu2006g, author = "Van De Ville, D. and Blu, T. and Forster, B. and Unser, M.", title = "Polyharmonic \mbox{{B}-Spline} Wavelets: {F}rom Isotropy to Directionality", booktitle = "Advanced Concepts for Intelligent Vision Systems ({ACIVS'06})", month = "September 18-21,", year = "2006", note = "Invited talk", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006g" } 
Van De Ville, D., Blu, T. & Unser, M.,"Wavelets Versus Resels in the Context of fMRI: Establishing the Link with SPM", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet Applications in Signal and Image Processing X, San Diego, USA, Vol. 5207, pp. 417-425, August 3-8, 2003. Part I. 
Statistical Parametric Mapping (SPM) is a widely deployed tool for detecting and analyzing brain activity from fMRI data. One of SPM's main features is smoothing the data by a Gaussian filter to increase the SNR. The subsequent statistical inference is based on the continuous Gaussian random field theory. Since the remaining spatial resolution has deteriorated due to smoothing, SPM introduces the concept of “resels” (resolution elements), or spatial information-containing cells. The number of resels turns out to be inversely proportional to the size of the Gaussian smoother. Detection of the activation signal in fMRI data can also be done by a wavelet approach: after computing the spatial wavelet transform, a straightforward coefficient-wise statistical test is applied to detect activated wavelet coefficients. In this paper, we establish the link between SPM and the wavelet approach based on two observations. First, the (iterated) lowpass analysis filter of the discrete wavelet transform can be chosen to closely resemble SPM's Gaussian filter. Second, the subsampling scheme provides us with a natural way to define the number of resels; i.e., the number of coefficients in the lowpass subband of the wavelet decomposition. Using this connection, we can obtain the degree of the splines of the wavelet transform that makes it equivalent to SPM's method. We show results for two particularly attractive biorthogonal wavelet transforms for this task; i.e., 3D fractional-spline wavelets and 2D+Z fractional quincunx wavelets. The activation patterns are comparable to SPM's. 
@inproceedings{blu2003s, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "Wavelets Versus Resels in the Context of {fMRI}: {E}stablishing the Link with {SPM}", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet Applications in Signal and Image Processing {X}", month = "August 3-8,", year = "2003", volume = "5207", pages = "417-425", note = "Part {I}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003s" } 
Van De Ville, D., Blu, T. & Unser, M.,"On the Approximation Power of Splines: Orthogonal Versus Hexagonal Lattices", Proceedings of the Fifth International Workshop on Sampling Theory and Applications (SampTA'03), Strobl, Austria, pp. 109-111, May 26-30, 2003. 
Recently, we have proposed a novel family of bivariate, nonseparable splines. These splines, called "hex-splines", have been designed to deal with hexagonally sampled data. Incorporating the shape of the Voronoi cell of a hexagonal lattice, they preserve the twelvefold symmetry of the hexagon tiling cell. Similar to B-splines, we can use them to provide a link between the discrete and the continuous domain, which is required for many fundamental operations such as interpolation and resampling. The question we answer in this paper is "How well do the hex-splines approximate a given function in the continuous domain?" and, more specifically, "How do they compare to separable B-splines deployed on a lattice with the same sampling density?" 
@inproceedings{blu2003t, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "On the Approximation Power of Splines: {O}rthogonal Versus Hexagonal Lattices", booktitle = "Proceedings of the Fifth International Workshop on Sampling Theory and Applications ({SampTA'03})", month = "May 26-30,", year = "2003", pages = "109-111", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003t" } 
Van De Ville, D., Blu, T. & Unser, M.,"Recursive Filtering for Splines on Hexagonal Lattices", Proceedings of the Twenty-Eighth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'03), Hong Kong, China, Vol. III, pp. 301-304, April 6-10, 2003. 
Hex-splines are a novel family of bivariate splines, which are well suited to handle hexagonally sampled data. Similar to classical 1D B-splines, the spline coefficients need to be computed by a prefilter. Unfortunately, the elegant implementation of this prefilter by causal and anticausal recursive filtering is not applicable for the (nonseparable) hex-splines. Therefore, in this paper we introduce a novel approach from the viewpoint of approximation theory. We propose three different recursive filters and optimize their parameters such that a desired order of approximation is obtained. The results for third and fourth order hex-splines are discussed. Although the proposed solutions provide only quasi-interpolation, they tend to be very close to the interpolation prefilter. 
@inproceedings{blu2003u, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "Recursive Filtering for Splines on Hexagonal Lattices", booktitle = "Proceedings of the Twenty-Eighth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'03})", month = "April 6-10,", year = "2003", volume = "{III}", pages = "301-304", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2003u" } 
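For contrast with the hexagonal case, the classical 1D B-spline prefilter by causal and anticausal recursive filtering (the scheme that, as the abstract notes, does not carry over to the nonseparable hex-splines) can be sketched as follows. This is an illustrative implementation for the cubic case with mirror boundaries and a truncated initialization, not code from the paper:

```python
import numpy as np

# Classical 1-D cubic B-spline prefilter: one causal and one anticausal
# first-order recursion (mirror boundaries, truncated initialization).
def cubic_spline_coeffs(x):
    z1 = np.sqrt(3.0) - 2.0                      # pole of the cubic B-spline
    n = len(x)
    c = np.empty(n)
    k0 = min(n, 30)                              # |z1|^30 ~ 1e-17: safe cutoff
    c[0] = np.dot(x[:k0], z1 ** np.arange(k0))   # causal initialization
    for k in range(1, n):                        # causal pass
        c[k] = x[k] + z1 * c[k - 1]
    c[n - 1] = (z1 / (z1 * z1 - 1.0)) * (c[n - 1] + z1 * c[n - 2])
    for k in range(n - 2, -1, -1):               # anticausal pass
        c[k] = z1 * (c[k + 1] - c[k])
    return 6.0 * c                               # overall gain (1-z1)(1-1/z1)

# Sanity check: refiltering the coefficients with the sampled cubic
# B-spline [1, 4, 1]/6 must reproduce the input samples (interior points).
x = np.cos(np.linspace(0, 3, 64))
c = cubic_spline_coeffs(x)
recon = (c[:-2] + 4 * c[1:-1] + c[2:]) / 6.0
print(np.allclose(recon, x[1:-1]))
```

The hexagonal prefilter has no such factorization into stable first-order recursions, which is precisely what motivates the approximation-theoretic filters proposed in the paper.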
Van De Ville, D., Blu, T. & Unser, M.,"Integrated Wavelet Processing and Spatial Statistical Testing of fMRI Data", NeuroImage, Vol. 23 (4), pp. 1472-1485, December 2004. 
We introduce an integrated framework for detecting brain activity from fMRI data, which is based on a spatial discrete wavelet transform. Unlike the standard wavelet-based approach for fMRI analysis, we apply the suitable statistical test procedure in the spatial domain. For a desired significance level, this scheme has one remaining degree of freedom, characterizing the wavelet processing, which is optimized according to the principle of minimal approximation error. This allows us to determine the threshold values in a way that does not depend on data. While developing our framework, we make only conservative assumptions. Consequently, the detection of activation is based on strong evidence. We have implemented this framework as a toolbox (WSPM) for the SPM2 software, taking advantage of multiple options and functions of SPM such as the setup of the linear model and the use of the hemodynamic response function. We show by experimental results that our method is able to detect activation patterns; the results are comparable to those obtained by SPM even though statistical assumptions are more conservative. 
@article{blu2004k, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "Integrated Wavelet Processing and Spatial Statistical Testing of {fMRI} Data", journal = "NeuroImage", month = "December", year = "2004", volume = "23", number = "4", pages = "1472-1485", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004k" } 
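The generic idea behind this framework (process in the wavelet domain, then take the statistical decision in the spatial domain) can be illustrated with a toy single-level Haar example. The signal, threshold rule, and decision below are illustrative placeholders, not the WSPM statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = np.zeros(256)                      # toy 1-D "parameter map"
beta[96:128] = 8.0                        # activated region
noisy = beta + rng.standard_normal(256)

# single-level Haar analysis of the noisy map
approx = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)
detail = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)
detail[np.abs(detail) < 3.0] = 0.0        # hard-threshold the detail band

# Haar synthesis back to the spatial domain
den = np.empty(256)
den[0::2] = (approx + detail) / np.sqrt(2)
den[1::2] = (approx - detail) / np.sqrt(2)

# the decision is then taken on the *spatial* map, not on the coefficients
detected = den > 4.0
print(detected[96:128].all(), detected[:96].any())
```

The point of the actual framework is that the spatial threshold is derived so that the significance level survives the nonlinear wavelet-domain processing; here the value 4.0 is just a hand-picked cut for the toy data.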
Van De Ville, D., Blu, T. & Unser, M.,"WaveletBased fMRI Statistical Analysis and Spatial Interpretation: A Unifying Approach", Proceedings of the Second IEEE International Symposium on Biomedical Imaging (ISBI'04), Arlington, USA, pp. 11671170, April 1518, 2004. 
Wavelet-based statistical analysis methods for fMRI are able to detect brain activity without smoothing the data. Typically, the statistical inference is performed in the wavelet domain by testing the t-values of each wavelet coefficient; subsequently, an activity map is reconstructed from the significant coefficients. The limitation of this approach is that there is no direct statistical interpretation of the reconstructed map. In this paper, we propose a new methodology that takes advantage of wavelet processing but keeps the statistical meaning in the spatial domain. We derive a spatial threshold with a proper nonstationary component and determine optimal threshold values by minimizing an approximation error. The sensitivity of our method is comparable to SPM's (Statistical Parametric Mapping). 
@inproceedings{blu2004l, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "Wavelet-Based {fMRI} Statistical Analysis and Spatial Interpretation: {A} Unifying Approach", booktitle = "Proceedings of the Second {IEEE} International Symposium on Biomedical Imaging ({ISBI'04})", month = "April 15-18,", year = "2004", pages = "1167-1170", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004l" } 
Van De Ville, D., Blu, T. & Unser, M.,"WSPM: Wavelet Processing and the Analysis of fMRI Using Statistical Parametric Maps", Second International Conference on Computational Harmonic Analysis, Nineteenth Annual Shanks Lecture (CHA'04), Nashville, USA, May 24-30, 2004. Invited talk. 
Wavelet-based methods for the statistical analysis of functional magnetic resonance images (fMRI) are able to detect brain activity without smoothing the data (3D space + time). Up to now, the statistical inference was typically performed in the wavelet domain by testing the t-values of each wavelet coefficient; the activity map was reconstructed from the significant coefficients. The limitation of this approach is that there is no direct statistical interpretation of the reconstructed map. Here, we describe a new methodology that takes advantage of wavelet processing but keeps the statistical meaning in the spatial domain. We derive a spatial threshold with a proper nonstationary component and determine optimal threshold values by minimizing an approximation error. This framework was implemented as a toolbox (WSPM) for the widely used SPM2 software, taking advantage of the multiple options and functionality of SPM (Statistical Parametric Mapping) such as the specification of a linear model that may account for the hemodynamic response of the system. The sensitivity of our method is comparable to that of conventional SPM, which applies a spatial Gaussian prefilter to the data, even though our statistical assumptions are more conservative. 
@inproceedings{blu2004m, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "{WSPM}: {W}avelet Processing and the Analysis of {fMRI} Using Statistical Parametric Maps", booktitle = "Second International Conference on Computational Harmonic Analysis, Nineteenth Annual Shanks Lecture ({CHA'04})", month = "May 24-30,", year = "2004", note = "Invited talk", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004m" } 
Van De Ville, D., Blu, T. & Unser, M.,"Isotropic Polyharmonic {B}-Splines: Scaling Functions and Wavelets", IEEE Transactions on Image Processing, Vol. 14 (11), pp. 1798-1813, November 2005. 
In this paper, we use polyharmonic B-splines to build multidimensional wavelet bases. These functions are nonseparable, multidimensional basis functions that are localized versions of radial basis functions. We show that Rabut's elementary polyharmonic B-splines do not converge to a Gaussian as the order parameter increases, as opposed to their separable B-spline counterparts. Therefore, we introduce a more isotropic localization operator that guarantees this convergence, resulting in the isotropic polyharmonic B-splines. Next, we focus on the two-dimensional quincunx subsampling scheme. This configuration is of particular interest for image processing, because it yields a finer scale progression than the standard dyadic approach. However, up until now, the design of appropriate filters for the quincunx scheme has mainly been done using the McClellan transform. In our approach, we start from the scaling functions, which are the polyharmonic B-splines and, as such, explicitly known, and we derive a family of polyharmonic spline wavelets corresponding to different flavors of the semi-orthogonal wavelet transform; e.g., orthonormal, B-spline, and dual. The filters are automatically specified by the scaling relations satisfied by these functions. We prove that the isotropic polyharmonic B-spline wavelet converges to a combination of four Gabor atoms, which are well separated in the frequency domain. We also show that these wavelets are nearly isotropic and that they behave as an iterated Laplacian operator at low frequencies. We describe an efficient fast Fourier transform-based implementation of the discrete wavelet transform based on polyharmonic B-splines. 
@article{blu2005k, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "Isotropic Polyharmonic \mbox{{B}-Splines}: {S}caling Functions and Wavelets", journal = "{IEEE} Transactions on Image Processing", month = "November", year = "2005", volume = "14", number = "11", pages = "1798-1813", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005k" } 
Van De Ville, D., Blu, T. & Unser, M.,"On the Multidimensional Extension of the Quincunx Subsampling Matrix", IEEE Signal Processing Letters, Vol. 12 (2), pp. 112-115, February 2005. 
The dilation matrix associated with the three-dimensional (3D) face-centered cubic (FCC) sublattice is often considered to be the natural 3D extension of the two-dimensional (2D) quincunx dilation matrix. However, we demonstrate that both dilation matrices are of a different nature: while the 2D quincunx matrix is a similarity transform, the 3D FCC matrix is not. More generally, we show that it is impossible to obtain a dilation matrix that is a similarity transform and performs downsampling of the Cartesian lattice by a factor of two in more than two dimensions. Furthermore, we observe that the popular 3D FCC subsampling scheme alternates between three different lattices: Cartesian, FCC, and quincunx. The latter one provides a less isotropic sampling density, a property that should be taken into account to properly orient 3D data before processing using such a subsampling matrix. 
@article{blu2005l, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "On the Multidimensional Extension of the Quincunx Subsampling Matrix", journal = "{IEEE} Signal Processing Letters", month = "February", year = "2005", volume = "12", number = "2", pages = "112-115", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005l" } 
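The similarity-transform distinction can be checked numerically: a dilation matrix is a similarity transform exactly when all of its singular values are equal. The FCC generator below is one standard (illustrative) choice for the points of Z³ with even coordinate sum; it is not necessarily the matrix used in the paper:

```python
import numpy as np

# 2-D quincunx dilation matrix: a rotation by 45 degrees times sqrt(2).
D_quincunx = np.array([[1.0, 1.0],
                       [1.0, -1.0]])
# One (illustrative) generator of the 3-D FCC sublattice; |det| = 2
# in both cases, i.e. downsampling by a factor of two.
D_fcc = np.array([[1.0, 1.0, 0.0],
                  [1.0, -1.0, 1.0],
                  [0.0, 0.0, 1.0]])

sv_q = np.linalg.svd(D_quincunx, compute_uv=False)
sv_f = np.linalg.svd(D_fcc, compute_uv=False)
print(sv_q)      # all equal to sqrt(2) -> similarity transform
print(sv_f)      # unequal -> anisotropic, not a similarity transform
```

Equal singular values mean the matrix scales every direction identically (orthogonal matrix times a scalar), which is what fails in three dimensions for any integer matrix with |det| = 2.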
Van De Ville, D., Blu, T. & Unser, M.,"WSPM: A New Approach for Wavelet-Based Statistical Analysis of fMRI Data", Eleventh Annual Meeting of the Organization for Human Brain Mapping (HBM'05), Toronto, Canada, pp. S17, June 12-16, 2005. 
Recently, we have proposed a new framework for detecting brain activity from fMRI data, which is based on the spatial discrete wavelet transform. The standard wavelet-based approach performs a statistical test in the wavelet domain, and therefore fails to provide a rigorous statistical interpretation in the spatial domain. The new framework provides an “integrated” approach: the data is processed in the wavelet domain (e.g., by thresholding wavelet coefficients), and a suitable statistical testing procedure is applied afterwards in the spatial domain. This method is based on conservative assumptions only and has a strong type-I error control by construction. At the same time, it has a sensitivity comparable to that of SPM. 
@inproceedings{blu2005m, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "{WSPM}: {A} New Approach for Wavelet-Based Statistical Analysis of {fMRI} Data", booktitle = "Eleventh Annual Meeting of the Organization for Human Brain Mapping ({HBM'05})", month = "June 12-16,", year = "2005", pages = "S17", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005m" } 
Van De Ville, D., Blu, T. & Unser, M.,"Surfing the Brain–An Overview of Wavelet-Based Techniques for fMRI Data Analysis", IEEE Engineering in Medicine and Biology Magazine, Vol. 25 (2), pp. 65-78, March-April 2006. 
The measurement of brain activity in a noninvasive way is an essential element in modern neurosciences. Modalities such as electroencephalography (EEG) and magnetoencephalography (MEG) recently gained interest, but two classical techniques remain predominant. One of them is positron emission tomography (PET), which is costly and lacks temporal resolution but allows the design of tracers for specific tasks; the other main one is functional magnetic resonance imaging (fMRI), which is more affordable than PET from a technical, financial, and ethical point of view, but which suffers from poor contrast and low signal-to-noise ratio (SNR). For this reason, advanced methods have been devised to perform the statistical analysis of fMRI data. 
@article{blu2006h, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "Surfing the Brain--{A}n Overview of Wavelet-Based Techniques for {fMRI} Data Analysis", journal = "{IEEE} Engineering in Medicine and Biology Magazine", month = "March-April", year = "2006", volume = "25", number = "2", pages = "65-78", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006h" } 
Van De Ville, D., Blu, T. & Unser, M.,"WSPM or How to Obtain Statistical Parametric Maps Using Shift-Invariant Wavelet Processing", Proceedings of the IEEE Thirty-First International Conference on Acoustics, Speech, and Signal Processing (ICASSP'06), Toulouse, France, pp. V-1101-V-1104, May 14-19, 2006. 
Recently, we have proposed a new framework for detecting brain activity from fMRI data, which is based on the spatial discrete wavelet transform. The standard wavelet-based approach performs a statistical test in the wavelet domain, and therefore fails to provide a rigorous statistical interpretation in the spatial domain. The new framework provides an “integrated” approach: the data is processed in the wavelet domain (by thresholding wavelet coefficients), and a suitable statistical testing procedure is applied afterwards in the spatial domain. This method is based on conservative assumptions only and has a strong type-I error control by construction. At the same time, it has a sensitivity comparable to that of SPM. Here, we discuss the extension of our algorithm to the redundant discrete wavelet transform, which provides a shift-invariant detection scheme. The key features of our technique are illustrated with experimental results. An implementation of our framework is available as a toolbox (WSPM) for the SPM2 software. 
@inproceedings{blu2006i, author = "Van De Ville, D. and Blu, T. and Unser, M.", title = "{WSPM} or How to Obtain Statistical Parametric Maps Using Shift-Invariant Wavelet Processing", booktitle = "Proceedings of the {IEEE} Thirty-First International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'06})", month = "May 14-19,", year = "2006", pages = "{V}-1101-{V}-1104", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006i" } 
Van De Ville, D., Blu, T., Unser, M., Philips, W., Lemahieu, I. & Van de Walle, R.,"Hex-Splines: A Novel Spline Family for Hexagonal Lattices", IEEE Transactions on Image Processing, Vol. 13 (6), pp. 758-772, June 2004. 
This paper proposes a new family of bivariate, nonseparable splines, called hex-splines, especially designed for hexagonal lattices. The starting point of the construction is the indicator function of the Voronoi cell, which is used to define in a natural way the first-order hex-spline. Higher order hex-splines are obtained by successive convolutions. A mathematical analysis of this new bivariate spline family is presented. In particular, we derive a closed form for a hex-spline of arbitrary order. We also discuss important properties, such as their Fourier transform and the fact that they form a Riesz basis, and we highlight the approximation order. For conventional rectangular lattices, hex-splines revert to classical separable tensor-product B-splines. Finally, some prototypical applications and experimental results demonstrate the usefulness of hex-splines for handling hexagonally sampled data. 
@article{blu2004n, author = "Van De Ville, D. and Blu, T. and Unser, M. and Philips, W. and Lemahieu, I. and Van de Walle, R.", title = "Hex-Splines: {A} Novel Spline Family for Hexagonal Lattices", journal = "{IEEE} Transactions on Image Processing", month = "June", year = "2004", volume = "13", number = "6", pages = "758-772", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2004n" } 
Van De Ville, D., Forster-Heinlein, B., Unser, M. & Blu, T.,"Analytical Footprints: Compact Representation of Elementary Singularities in Wavelet Bases", IEEE Transactions on Signal Processing, Vol. 58 (12), pp. 6105-6118, December 2010. 
We introduce a family of elementary singularities that are point-Hölder α-regular. These singularities are self-similar and are the Green functions of fractional derivative operators; i.e., by suitable fractional differentiation, one retrieves a Dirac δ function at the exact location of the singularity. We propose to use fractional operator-like wavelets that act as a multiscale version of the derivative in order to characterize and localize singularities in the wavelet domain. We show that the characteristic signature when the wavelet interacts with an elementary singularity has an asymptotic closed-form expression, termed the analytical footprint. Practically, this means that the dictionary of wavelet footprints is embodied in a single analytical form. We show that the wavelet coefficients of the (nonredundant) decomposition can be fitted in a multiscale fashion to retrieve the parameters of the underlying singularity. We propose an algorithm based on stepwise parametric fitting and demonstrate the feasibility of the approach for recovering singular signal representations. 
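The Green-function property described in the abstract can be written compactly. This is our sketch in standard fractional-calculus notation; the location t_0 and the normalization are our choices, not necessarily the paper's:

```latex
% Elementary singularity of Hölder exponent \alpha at location t_0,
% and its Green-function property under fractional differentiation
% (normalization and notation are ours):
s_\alpha(t) = \frac{(t-t_0)_+^{\alpha}}{\Gamma(\alpha+1)},
\qquad
\mathrm{D}^{\alpha+1}\, s_\alpha(t) = \delta(t-t_0).
```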
@article{blu2010j, author = "Van De Ville, D. and Forster-Heinlein, B. and Unser, M. and Blu, T.", title = "Analytical Footprints: {C}ompact Representation of Elementary Singularities in Wavelet Bases", journal = "{IEEE} Transactions on Signal Processing", month = "December", year = "2010", volume = "58", number = "12", pages = "6105--6118", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010j" } 
Van De Ville, D., Seghier, M.L., Lazeyras, F., Blu, T. & Unser, M.,"WSPM: Wavelet-Based Statistical Parametric Mapping", NeuroImage, Vol. 37 (4), pp. 1205-1217, October 1, 2007. 
Recently, we have introduced an integrated framework that combines wavelet-based processing with statistical testing in the spatial domain. In this paper, we propose two important enhancements of the framework. First, we revisit the underlying paradigm; i.e., that the effect of the wavelet processing can be considered as an adaptive denoising step to “improve” the parameter map, followed by a statistical detection procedure that takes into account the nonlinear processing of the data. With an appropriate modification of the framework, we show that it is possible to reduce the bias of the method with respect to the best linear estimate, providing conservative results that are closer to the original data. Second, we propose an extension of our earlier technique that compensates for the lack of shift-invariance of the wavelet transform. We demonstrate experimentally that both enhancements have a positive effect on performance. In particular, we present a reproducibility study for multi-session data that compares WSPM against SPM with different amounts of smoothing. The full approach is available as a toolbox, named WSPM, for the SPM2 software; it takes advantage of multiple options and features of SPM such as the general linear model. 
@article{blu2007n, author = "Van De Ville, D. and Seghier, M.L. and Lazeyras, F. and Blu, T. and Unser, M.", title = "{WSPM}: {W}avelet-Based Statistical Parametric Mapping", journal = "NeuroImage", month = "October 1,", year = "2007", volume = "37", number = "4", pages = "1205--1217", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007n" } 
Van De Ville, D., Seghier, M., Lazeyras, F., Blu, T. & Unser, M.,"Empirical Sensitivity, Specificity, and Bias of Wavelet-Based Statistical Parametric Mapping (WSPM)", Thirteenth Annual Meeting of the Organization for Human Brain Mapping (HBM'07), Chicago, USA, June 10-14, 2007. CD-ROM paper no. 336 TH PM. 
Analysis: standard preprocessing and the general linear model (GLM) setup were done using SPM2, including regressors from the realignment procedure and the autoregressive model for serial correlations. For each session, detection maps are obtained for a broad range of significance levels.
SPM compensates for multiple testing using GRF theory, while WSPM and the voxel-by-voxel test use a simple Bonferroni correction. Reproducibility study: the consistency of the detection maps over the 4 sessions is assessed using a reproducibility study [2]: we estimate the empirical sensitivity and specificity from a binomial mixture model for the histogram of the cumulative detection maps. Additionally, we estimate the bias of the methods as the sum of the absolute differences between the contrast before thresholding and that of the voxel-by-voxel approach. Discussion: in Figure 4, we show the ROC curves obtained after estimating the parameters of the binomial mixture model for both methods. By construction, the voxel-by-voxel statistical test has no bias, but reaches a very low sensitivity. For SPM, smoothing increases the bias (as can be expected) as well as the empirical sensitivity-specificity. Finally, WSPM has a lower bias than SPM 4mm, combined with a comparable or better compromise between sensitivity and specificity. The wiggly behavior of the curve is due to the nonlinear thresholding operation in the wavelet domain.

@inproceedings{blu2007o, author = "Van De Ville, D. and Seghier, M. and Lazeyras, F. and Blu, T. and Unser, M.", title = "Empirical Sensitivity, Specificity, and Bias of Wavelet-Based Statistical Parametric Mapping ({WSPM})", booktitle = "Thirteenth Annual Meeting of the Organization for Human Brain Mapping ({HBM'07})", month = "June 10-14,", year = "2007", note = "CD-ROM paper no. 336 TH PM", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007o" } 
Van De Ville, D., Seghier, M., Lazeyras, F., Blu, T. & Unser, M.,"Wavelet-Based Statistical Analysis of fMRI Data with High Spatial Resolution", CHUV Research Day (CHUV'07), Lausanne, Switzerland, pp. 185, February 1, 2007. 
Wavelet-based statistical parametric mapping (WSPM) analyzes fMRI data using a combination of powerful denoising in the wavelet domain with statistical testing in the spatial domain. It also guarantees strong type I error (false positives) control and thus high confidence in the detections. In this poster, we show the various stages of this framework and we propose a comparison of WSPM and SPM2, which is the de facto standard for statistical analysis of fMRI data. WSPM is available to the neuroimaging community as a toolbox for SPM. One of the major advantages of WSPM is that it does not require presmoothing the data before statistical analysis, which is a prerequisite of the SPM approach. Therefore, potential high-spatial-resolution information available in the data is not lost and can be used to retrieve small and highly detailed activation patterns. As a typical result, we show the activation maps for SPM (6mm) and WSPM. The experimental paradigm was single-frequency acoustic stimulation (1.5T scanner; TR=1.2s; 1.8×1.8×3mm). For the same statistical significance (5% corrected), the activation patterns retrieved by WSPM are clearly more detailed than those by SPM2. In the poster, we also include the results of the empirically measured sensitivity and specificity using a reproducibility analysis for multi-session data using both WSPM and SPM2. From this evaluation, we see that with WSPM we are able to obtain high spatial resolution without loss of sensitivity. 
@inproceedings{blu2007p, author = "Van De Ville, D. and Seghier, M. and Lazeyras, F. and Blu, T. and Unser, M.", title = "Wavelet-Based Statistical Analysis of {fMRI} Data with High Spatial Resolution", booktitle = "CHUV Research Day ({CHUV'07})", month = "February 1,", year = "2007", pages = "185", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007p" } 
Van De Ville, D., Seghier, M., Lazeyras, F., Pelizzone, M., Blu, T. & Unser, M.,"SPM versus WSPM: Sensitivity and Specificity for Multi-Session fMRI Data", Twelfth Annual Meeting of the Organization for Human Brain Mapping (HBM'06), Florence, Italy, pp. S94, June 11-15, 2006. Invited talk. 
Wavelet-based statistical parametric mapping (WSPM) combines powerful denoising in the wavelet domain with statistical testing in the spatial domain. It guarantees strong type I error control and thus high confidence in the detections. In this poster, we propose a comparison of WSPM and SPM2, based on the results of multi-session experimental data. The dataset comes from a carefully conducted experiment with auditory stimulation (Philips Gyroscan 1.5T; TR = 1.2s; spatial resolution 1.8×1.8×3mm). The subject's head is placed within a custom-designed headset, which isolates the person from most of the MR scanner's noise, and exposed to single-frequency acoustic stimulation during the activation condition of a block paradigm. Four different auditory frequencies (300Hz, 1126Hz, 2729Hz, 4690Hz) are used in each session. The complete experiment contains 4 sessions, each spanning 250 volumes. The general linear model (GLM) setup was done using SPM, including regressors from the realignment procedure and the autoregressive model for serial correlations. For each session, functional maps of the combined contrast (all frequencies - rest) were obtained for a broad range of significance levels with SPM (4mm smoothing) and WSPM (orthogonal B-spline wavelets slice-by-slice; degree 1.0; 2 iterations; combination of 4 spatial shifts). SPM compensates for multiple testing using the Gaussian Random Field theory, while WSPM uses a simple Bonferroni correction. The results were then analyzed using two different criteria:
Figure 1. In Fig. 2, we show for each session the number of detections for SPM and WSPM, inside and outside the ROI, as a function of the significance level. Notice that the significance level is at the volume level; i.e., α = 1.0 corresponds to the expectation of a single false positive for the whole volume. We found that both methods are about equally calibrated for α = 1.0. However, the slope of the curve that links the number of detections to the significance level is lower for WSPM than for SPM, which means more detections with WSPM for the same type I error probability. Interestingly, the number of detections outside the ROI is not higher for WSPM, suggesting that its higher sensitivity does not lead to an increase in false positives. Figure 2. In Fig. 3, we show the ROCs after estimating the binomial mixture model for both methods. The mixture parameter is globally estimated. The higher performance of WSPM is confirmed: higher sensitivity (more true positives) combined with higher specificity (more true negatives). Figure 3. In Fig. 4, we show the area under the ROC (i.e., sensitivity times specificity), which is a good single measure of the performance of the detection technique. This figure again illustrates the excellent balance of sensitivity and specificity for WSPM. Figure 4. Finally, we note that WSPM guarantees its strong type I error control using conservative assumptions only.

@inproceedings{blu2006j, author = "Van De Ville, D. and Seghier, M. and Lazeyras, F. and Pelizzone, M. and Blu, T. and Unser, M.", title = "{SPM} {\textit{versus}} {WSPM}: {S}ensitivity and Specificity for Multi-Session {fMRI} Data", booktitle = "Twelfth Annual Meeting of the Organization for Human Brain Mapping ({HBM'06})", month = "June 11-15,", year = "2006", pages = "S94", note = "Invited talk", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2006j" } 
Vetterli, M., Marziliano, P. & Blu, T.,"Sampling Discrete-Time Piecewise Bandlimited Signals", Proceedings of the Fourth International Conference on Sampling Theory and Applications (SampTA'01), Orlando, USA, pp. 97-102, May 13-17, 2001. 
We consider sampling discrete-time periodic signals which are piecewise bandlimited; that is, a signal that is the sum of a bandlimited signal and a piecewise polynomial signal containing a finite number of transitions. These signals are not bandlimited, and thus the Shannon sampling theorem (also due to Kotelnikov and Whittaker) for bandlimited signals cannot be applied. In this paper, we derive sampling and reconstruction schemes based on those developed in [1, 2, 3] for piecewise polynomial signals, which take into account the extra degrees of freedom due to the bandlimitedness.

@inproceedings{blu2001l, author = "Vetterli, M. and Marziliano, P. and Blu, T.", title = "Sampling Discrete-Time Piecewise Bandlimited Signals", booktitle = "Proceedings of the Fourth International Conference on Sampling Theory and Applications ({SampTA'01})", month = "May 13-17,", year = "2001", pages = "97--102", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001l" } 
Vetterli, M., Marziliano, P. & Blu, T.,"A Sampling Theorem for Periodic Piecewise Polynomial Signals", Proceedings of the Twenty-Sixth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'01), Salt Lake City, USA, Vol. 6, pp. 3893-3896, May 7-11, 2001. 
We consider the problem of sampling signals which are not bandlimited, but still have a finite number of degrees of freedom per unit of time, such as, for example, piecewise polynomials. We demonstrate that by using an adequate sampling kernel and a sampling rate greater than or equal to the number of degrees of freedom per unit of time, one can uniquely reconstruct such signals. This proves a sampling theorem for a wide class of signals beyond bandlimited signals. Applications of this sampling theorem can be found in signal processing, communication systems and biological systems. 
@inproceedings{blu2001m, author = "Vetterli, M. and Marziliano, P. and Blu, T.", title = "A Sampling Theorem for Periodic Piecewise Polynomial Signals", booktitle = "Proceedings of the Twenty-Sixth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'01})", month = "May 7-11,", year = "2001", volume = "6", pages = "3893--3896", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2001m" } 
Vetterli, M., Marziliano, P. & Blu, T.,"Sampling Signals with Finite Rate of Innovation", IEEE Transactions on Signal Processing, Vol. 50 (6), pp. 1417-1428, June 2002. IEEE Signal Processing Society's 2006 Best Paper Award. 
Consider classes of signals that have a finite number of degrees of freedom per unit of time and call this number the rate of innovation. Examples of signals with a finite rate of innovation include streams of Diracs (e.g., the Poisson process), nonuniform splines, and piecewise polynomials. Even though these signals are not bandlimited, we show that they can be sampled uniformly at (or above) the rate of innovation using an appropriate kernel and then be perfectly reconstructed. Thus, we prove sampling theorems for classes of signals and kernels that generalize the classic "bandlimited and sinc kernel" case. In particular, we show how to sample and reconstruct periodic and finite-length streams of Diracs, nonuniform splines, and piecewise polynomials using sinc and Gaussian kernels. For infinite-length signals with finite local rate of innovation, we show local sampling and reconstruction based on spline kernels. The key in all constructions is to identify the innovative part of a signal (e.g., time instants and weights of Diracs) using an annihilating or locator filter: a device well known in spectral analysis and error-correction coding. This leads to standard computational procedures for solving the sampling problem, which we show through experimental results. Applications of these new sampling results can be found in signal processing, communications systems, and biological systems. 
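The annihilating-filter step described in the abstract can be sketched numerically. The following is a minimal illustration (our own code, not the authors'), assuming noiseless Fourier-series coefficients of a τ-periodic stream of K Diracs:

```python
import numpy as np

def fri_dirac_recovery(s, K, tau):
    """Recover K Dirac locations/amplitudes from Fourier coefficients
    s[m] = sum_k a_k exp(-2j*pi*m*t_k/tau), m = 0, ..., len(s)-1
    (needs len(s) >= 2K+1), via the annihilating-filter method."""
    M = len(s)
    # Toeplitz system: sum_l h[l] s[m-l] = 0 for m = K, ..., M-1
    A = np.array([[s[m - l] for l in range(K + 1)] for m in range(K, M)])
    h = np.linalg.svd(A)[2][-1].conj()   # null vector of A
    u = np.roots(h)                      # roots u_k = exp(-2j*pi*t_k/tau)
    t = np.mod(-np.angle(u) * tau / (2 * np.pi), tau)
    # Amplitudes from a Vandermonde least-squares fit
    V = np.exp(-2j * np.pi * np.outer(np.arange(M), t) / tau)
    a = np.linalg.lstsq(V, s, rcond=None)[0].real
    order = np.argsort(t)
    return t[order], a[order]

# Noiseless example: two Diracs in a period tau = 1
tau = 1.0
t_true, a_true = np.array([0.2, 0.7]), np.array([1.0, -2.0])
m = np.arange(5)                         # 2K+1 = 5 coefficients
s = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true) / tau)).sum(axis=1)
t_est, a_est = fri_dirac_recovery(s, K=2, tau=tau)
```

The null vector of the Toeplitz matrix plays the role of the annihilating (locator) filter; its roots encode the Dirac locations, exactly as in Prony-type spectral estimation.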
@article{blu2002l, author = "Vetterli, M. and Marziliano, P. and Blu, T.", title = "Sampling Signals with Finite Rate of Innovation", journal = "{IEEE} Transactions on Signal Processing", month = "June", year = "2002", volume = "50", number = "6", pages = "1417--1428", note = "IEEE Signal Processing Society's 2006 \textbf{Best Paper Award}", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002l" } 
Vetterli, M., Marziliano, P. & Blu, T.,"Sampling methods, reconstruction methods and devices for sampling and/or reconstructing signals", International Patent WO200278197, 2002. This technology was transferred to Qualcomm Inc. in 2007. 
@misc{blu2002m, author = "Vetterli, M. and Marziliano, P. and Blu, T.", title = "Sampling methods, reconstruction methods and devices for sampling and/or reconstructing signals", year = "2002", note = "This technology was transferred to Qualcomm Inc. in 2007", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2002m" } 
Vetterli, M., Marziliano, P., Blu, T. & Dragotti, P.L.,"Sparse Sampling: Theory, Algorithms and Applications", Tutorial Presentation at the Thirty-Fourth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'09), Taipei, Taiwan, April 19-24, 2009. 
Signal acquisition and reconstruction is at the heart of signal processing and communications, and sampling theorems provide the bridge between the continuous and the discrete-time worlds. The most celebrated and widely used sampling theorem is often attributed to Shannon, and gives a sufficient condition, namely bandlimitedness, for an exact sampling and interpolation formula. Recently, this framework has been extended to classes of nonbandlimited signals. The way around Shannon's classical sampling theorem resides in a parametric approach, where the prior that the signal is sparse in a basis or in a parametric space is exploited. This leads to new exact reconstruction formulas and fast algorithms that achieve such reconstructions. The aim of this tutorial is to give an overview of these recent exciting findings in sampling theory. The fundamental theoretical results will be reviewed and constructive algorithms will be presented. Finally, a diverse set of applications will be presented so as to demonstrate the tangibility of the theoretical concepts. 
@conference{blu2009h, author = "Vetterli, M. and Marziliano, P. and Blu, T. and Dragotti, P.L.", title = "Sparse Sampling: Theory, Algorithms and Applications", booktitle = "Tutorial Presentation at the Thirty-Fourth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'09})", month = "April 19-24,", year = "2009", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009h" } 
Vetterli, M., Marziliano, P., Blu, T. & Dragotti, P.L.,"Sparse Sampling of Structured Data", Tutorial Presentation at the Seventeenth European Signal Processing Conference (EUSIPCO'09), Glasgow, Scotland, UK, August 24-28, 2009. 
The problem of reconstructing or estimating partially observed or sampled signals is an old and important one, and finds application in many areas of signal processing and communications. Traditional acquisition and reconstruction approaches are heavily influenced by the classical Shannon sampling theory, which gives an exact sampling and interpolation formula for bandlimited signals. Recently, the classical Shannon sampling framework has been extended to classes of nonbandlimited structured signals, which we call signals with Finite Rate of Innovation. In these new sampling schemes, the prior that the signal is sparse in a basis or in a parametric space is exploited, and perfect reconstruction is possible based on a set of suitable measurements. This leads to new exact reconstruction formulas and fast algorithms that achieve such reconstructions. The main aim of this tutorial is to give an overview of these new exciting findings in sampling theory. The fundamental theoretical results will be reviewed and constructive algorithms will be presented, both for 1D and 2D signals. We also discuss the effect of noise on the sampling and reconstruction of structured signals. Finally, a diverse set of applications of these new concepts will be presented to emphasize the importance and far-reaching implications of these new theories. 
@conference{blu2009i, author = "Vetterli, M. and Marziliano, P. and Blu, T. and Dragotti, P.L.", title = "Sparse Sampling of Structured Data", booktitle = "Tutorial Presentation at the Seventeenth {E}uropean Signal Processing Conference ({EUSIPCO'09})", month = "August 24-28,", year = "2009", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2009i" } 
Vonesch, C., Blu, T. & Unser, M.,"Generalized Biorthogonal Daubechies Wavelets", Proceedings of the SPIE Conference on Mathematical Imaging: Wavelet XI, San Diego, USA, Vol. 5914, pp. 59141X-1-59141X-6, July 31-August 3, 2005. 
We propose a generalization of the Cohen-Daubechies-Feauveau (CDF) and 9⁄7 biorthogonal wavelet families. This is done within the framework of nonstationary multiresolution analysis, which involves a sequence of embedded approximation spaces generated by scaling functions that are not necessarily dilates of one another. We consider a dual pair of such multiresolutions, where the scaling functions at a given scale are mutually biorthogonal with respect to translation. Also, they must have the shortest-possible support while reproducing a given set of exponential polynomials. This constitutes a generalization of the standard polynomial reproduction property. The corresponding refinement filters are derived from the ones that were studied by Dyn et al. in the framework of nonstationary subdivision schemes. By using different factorizations of these filters, we obtain a general family of compactly supported dual wavelet bases of L_{2}. In particular, if the exponential parameters are all zero, one retrieves the standard CDF B-spline wavelets and the 9⁄7 wavelets. Our generalized description yields equivalent constructions for E-spline wavelets. A fast filterbank implementation of the corresponding wavelet transform follows naturally; it is similar to Mallat's algorithm, except that the filters are now scale-dependent. This new scheme offers high flexibility and is tunable to the spectral characteristics of a wide class of signals. In particular, it is possible to obtain symmetric basis functions that are well-suited for image processing. 
@inproceedings{blu2005n, author = "Vonesch, C. and Blu, T. and Unser, M.", title = "Generalized Biorthogonal {D}aubechies Wavelets", booktitle = "Proceedings of the {SPIE} Conference on Mathematical Imaging: {W}avelet {XI}", month = "July 31--August 3,", year = "2005", volume = "5914", pages = "59141X-1--59141X-6", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005n" } 
Vonesch, C., Blu, T. & Unser, M.,"Generalized Daubechies Wavelets", Proceedings of the Thirtieth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'05), Philadelphia, USA, Vol. IV, pp. 593-596, March 18-23, 2005. 
We present a generalization of the Daubechies wavelet family. The context is that of a nonstationary multiresolution analysis, i.e., a sequence of embedded approximation spaces generated by scaling functions that are not necessarily dilates of one another. The constraints that we impose on these scaling functions are: (1) orthogonality with respect to translation, (2) reproduction of a given set of exponential polynomials, and (3) minimal support. These design requirements lead to the construction of a general family of compactly supported, orthonormal wavelet-like bases of L_{2}. If the exponential parameters are all zero, then one recovers Daubechies wavelets, which are orthogonal to the polynomials of degree (N − 1), where N is the order (vanishing-moment property). A fast filterbank implementation of the generalized wavelet transform follows naturally; it is similar to Mallat's algorithm, except that the filters are now scale-dependent. The new transforms offer increased flexibility and are tunable to the spectral characteristics of a wide class of signals. 
@inproceedings{blu2005o, author = "Vonesch, C. and Blu, T. and Unser, M.", title = "Generalized Daubechies Wavelets", booktitle = "Proceedings of the Thirtieth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'05})", month = "March 18-23,", year = "2005", volume = "{IV}", pages = "593--596", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2005o" } 
Vonesch, C., Blu, T. & Unser, M.,"Generalized Daubechies Wavelet Families", IEEE Transactions on Signal Processing, Vol. 55 (9), pp. 4415-4429, September 2007. 
We present a generalization of the orthonormal Daubechies wavelets and of their related biorthogonal flavors (Cohen-Daubechies-Feauveau, 9⁄7). Our fundamental constraint is that the scaling functions should reproduce a predefined set of exponential polynomials. This allows one to tune the corresponding wavelet transform to a specific class of signals, thereby ensuring good approximation and sparsity properties. The main difference with the classical construction of Daubechies et al. is that the multiresolution spaces are derived from scale-dependent generating functions. However, from an algorithmic standpoint, Mallat's Fast Wavelet Transform algorithm can still be applied; the only adaptation consists in using scale-dependent filter banks. Finite support ensures the same computational efficiency as in the classical case. We characterize the scaling and wavelet filters, construct them, and show several examples of the associated functions. We prove that these functions are square-integrable and that they converge to their classical counterparts of the corresponding order. 
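The scale-dependent filter-bank adaptation of Mallat's algorithm mentioned in the abstract is easy to sketch. In the toy code below (ours, not the authors'), every scale uses the Haar pair for simplicity, since the actual generalized Daubechies filters require the paper's exponential-reproduction construction; the point is that the transform machinery only needs a list of (possibly different) filter pairs, one per scale:

```python
import numpy as np

SQRT2 = np.sqrt(2.0)
HAAR = (np.array([1.0, 1.0]) / SQRT2, np.array([1.0, -1.0]) / SQRT2)

def analysis_step(x, h, g):
    """One analysis level: periodized filtering + downsampling by 2."""
    n = len(x)
    dot = lambda f, i: sum(f[k] * x[(2 * i + k) % n] for k in range(len(f)))
    lo = np.array([dot(h, i) for i in range(n // 2)])
    hi = np.array([dot(g, i) for i in range(n // 2)])
    return lo, hi

def synthesis_step(lo, hi, h, g):
    """Adjoint of analysis_step; inverts it when (h, g) is orthonormal."""
    n = 2 * len(lo)
    x = np.zeros(n)
    for i in range(len(lo)):
        for k in range(len(h)):
            x[(2 * i + k) % n] += h[k] * lo[i] + g[k] * hi[i]
    return x

def fwt(x, filters):
    """Mallat's fast wavelet transform with one filter pair *per scale*."""
    detail = []
    for h, g in filters:
        x, d_j = analysis_step(x, h, g)
        detail.append(d_j)
    return x, detail

def ifwt(approx, detail, filters):
    for (h, g), d_j in zip(reversed(filters), reversed(detail)):
        approx = synthesis_step(approx, d_j, h, g)
    return approx

# Perfect reconstruction over 3 scales (all-Haar stand-in filters)
x = np.arange(8.0)
filters = [HAAR] * 3
a, d = fwt(x, filters)
xr = ifwt(a, d, filters)
```

Swapping a different orthonormal pair into each entry of `filters` changes nothing in the transform code itself, which is exactly the algorithmic claim of the paper.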
@article{blu2007q, author = "Vonesch, C. and Blu, T. and Unser, M.", title = "Generalized {D}aubechies Wavelet Families", journal = "{IEEE} Transactions on Signal Processing", month = "September", year = "2007", volume = "55", number = "9", pages = "4415--4429", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2007q" } 
Wang, M. & Blu, T.,"Generalized YUV interpolation of CFA images", Proceedings of the 2010 IEEE International Conference on Image Processing (ICIP'10), Hong Kong, China, pp. 1909-1912, September 26-29, 2010. 
This paper presents a simple yet effective color filter array (CFA) interpolation algorithm. It is based on a linear interpolating kernel, but operates in YUV space, which results in a nontrivial boost in the peak signal-to-noise ratio (PSNR) of the red and blue channels. The algorithm can be implemented efficiently. At the end of the paper, we compare its performance with nonlinear interpolation methods and show that it is competitive even among state-of-the-art CFA demosaicing algorithms. 
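For context, the linear-kernel baseline that such demosaicing methods start from can be sketched as follows. This is plain bilinear RGB-space interpolation of an RGGB Bayer mosaic via normalized convolution (our sketch; the paper's actual contribution, applying the linear interpolation in YUV space, is not reproduced here):

```python
import numpy as np

def conv2_same(a, k):
    """2-D correlation with zero padding (k is symmetric, so this is
    also a convolution); plain loops keep the sketch dependency-free."""
    H, W = a.shape
    kh, kw = k.shape
    pad = np.pad(a, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros((H, W))
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * pad[i:i + H, j:j + W]
    return out

def bilinear_demosaic(bayer):
    """Baseline linear CFA interpolation for an RGGB Bayer mosaic:
    each sparsely sampled channel is filled in by normalized
    convolution with a bilinear kernel."""
    H, W = bayer.shape
    yy, xx = np.mgrid[0:H, 0:W]
    masks = {"R": (yy % 2 == 0) & (xx % 2 == 0),
             "G": (yy + xx) % 2 == 1,
             "B": (yy % 2 == 1) & (xx % 2 == 1)}
    k = np.array([[1.0, 2.0, 1.0], [2.0, 4.0, 2.0], [1.0, 2.0, 1.0]])
    planes = []
    for c in "RGB":
        m = masks[c].astype(float)
        planes.append(conv2_same(bayer * m, k) / conv2_same(m, k))
    return np.dstack(planes)

# Sanity check: a flat gray mosaic demosaics to a flat gray image
rgb = bilinear_demosaic(np.full((8, 8), 0.5))
```

The normalized convolution (numerator filtered with the sample mask applied, denominator the filtered mask itself) handles image borders gracefully and keeps known sample sites consistent.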
@inproceedings{blu2010k, author = "Wang, M. and Blu, T.", title = "Generalized {YUV} interpolation of {CFA} images", booktitle = "Proceedings of the 2010 {IEEE} International Conference on Image Processing ({ICIP'10})", month = "September 26-29,", year = "2010", pages = "1909--1912", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010k" } 
Wei, L. & Blu, T.,"A new nonredundant complex Hilbert wavelet transforms", Proceedings of the IEEE Statistical Signal Processing Workshop (SSP), Ann Arbor, USA, pp. 652-655, August 5-8, 2012. 
In this paper, a novel nonredundant complex wavelet transform (NRCWT) for real-valued signals is proposed. For this purpose, an orthogonal complex filter bank is developed to implement this NRCWT. We show how to choose the two complex filters from classical real-valued wavelet filters in such a way that the filterbank is always orthogonal. Using fractional B-spline filters, a pair of exact Hilbert wavelets is constructed, which can separate the positive frequencies from the negative frequencies. 
@inproceedings{blu2012e, author = "Wei, L. and Blu, T.", title = "A new nonredundant complex {H}ilbert wavelet transforms", booktitle = "Proceedings of the {IEEE} Statistical Signal Processing Workshop (SSP)", month = "August 5-8,", year = "2012", pages = "652--655", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012e" } 
Wei, L. & Blu, T.,"Construction of an Orthonormal Complex Multiresolution Analysis", Proceedings of the Thirty-Eighth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'13), Vancouver, Canada, pp. 2381-2385, May 26-31, 2013. 
We design two complex filters {h[n], g[n]} for an orthogonal filter bank structure based on two atom functions {ρ0(t), ρ1/2(t)}, such that: 1) they generate an orthonormal multiwavelet basis; 2) the two complex conjugate wavelets are Hilbert wavelets, i.e., their frequency responses are supported either on positive or on negative frequencies; and 3) the two scaling functions are real. The developed complex wavelet transform (CWT) is nonredundant, nearly shift-invariant, and able to distinguish diagonal features. This distinguishability of diagonal features is demonstrated by comparison with the real discrete wavelet transform. 
@inproceedings{blu2013g, author = "Wei, L. and Blu, T.", title = "Construction of an Orthonormal Complex Multiresolution Analysis", booktitle = "Proceedings of the Thirty-Eighth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'13})", month = "May 26-31,", year = "2013", pages = "2381--2385", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013g" } 
Wei, X., Blu, T. & Dragotti, P.L.,"Finite Rate of Innovation with Non-Uniform Samples", Proceedings of the IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC'12), Hong Kong, China, pp. 369-372, August 12-15, 2012. 
In this paper, we investigate the problem of retrieving the innovation parameters (time and amplitude) of a stream of Diracs from nonuniform samples taken with a novel kernel (a hyperbolic secant). We devise a noniterative, exact algorithm that allows perfect reconstruction of 2K innovations from as few as 2K nonuniform samples. We also investigate noise issues and compute the Cramér-Rao lower bounds for this problem. A simple total least-squares extension of the algorithm proves to be efficient in reconstructing the location of a single Dirac from noisy measurements. 
@inproceedings{blu2012f, author = "Wei, X. and Blu, T. and Dragotti, P.L.", title = "Finite Rate of Innovation with Non-Uniform Samples", booktitle = "Proceedings of the {IEEE} International Conference on Signal Processing, Communications and Computing {(ICSPCC'12)}", month = "August 12-15,", year = "2012", pages = "369--372", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012f" } 
Xue, F. & Blu, T.,"SURE-Based Motion Estimation", Proceedings of the IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC'12), Hong Kong, China, pp. 373-377, August 12-15, 2012. 
We propose a novel approach to estimate the parameters of motion blur (blur length and orientation) from an observed image. 
@inproceedings{blu2012g, author = "Xue, F. and Blu, T.", title = "{SURE}-Based Motion Estimation", booktitle = "Proceedings of the {IEEE} International Conference on Signal Processing, Communications and Computing {(ICSPCC'12)}", month = "August 12-15,", year = "2012", pages = "373--377", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012g" } 
Xue, F. & Blu, T.,"SURE-based blind Gaussian deconvolution", Proceedings of the IEEE Statistical Signal Processing Workshop (SSP), Ann Arbor, USA, pp. 452-455, August 5-8, 2012. 
We propose a novel blind deconvolution method that consists of first estimating the variance of the Gaussian blur, then performing nonblind deconvolution with the estimated PSF. The main contribution of this paper is the first step: estimating the variance of the Gaussian blur by minimizing a novel objective functional, an unbiased estimate of a blur MSE (SURE). The optimal parameter and blur variance are obtained by minimizing this criterion over linear processings that have the form of simple Wiener filterings. We then perform nonblind deconvolution using our recent high-quality SURE-based deconvolution algorithm. The very competitive results show the highly accurate estimation of the blur variance (compared to the ground-truth value) and the great potential of developing more powerful blind deconvolution algorithms based on the SURE-type principle. 
@inproceedings{blu2012h, author = "Xue, F. and Blu, T.", title = "{SURE}-based blind {G}aussian deconvolution", booktitle = "Proceedings of the {IEEE} Statistical Signal Processing Workshop (SSP)", month = "August 5-8,", year = "2012", pages = "452-455", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012h" } 
Xue, F. & Blu, T.,"A Novel SURE-Based Criterion for Parametric PSF Estimation", IEEE Transactions on Image Processing, Vol. 24 (2), pp. 595-607, February 2015. 
We propose an unbiased estimate of a filtered version of the mean squared error, the blur-SURE (Stein's unbiased risk estimate), as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform non-blind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs involving a scaling factor that controls the blur size. A typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality that is very similar to the one obtained with the exact PSF, when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates. 
@article{blu2015a, author = "Xue, F. and Blu, T.", title = "A Novel {SURE}-Based Criterion for Parametric {PSF} Estimation", journal = "IEEE Transactions on Image Processing", month = "February", year = "2015", volume = "24", number = "2", pages = "595-607", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2015a" } 
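The blur-SURE criterion above rests on the basic SURE identity for a linear filter: if y = x + Gaussian noise of variance σ² and F is a fixed circulant filter, then ||Fy - y||²/N + 2σ²·tr(F)/N - σ² is an unbiased estimate of the MSE ||Fy - x||²/N, computable without the clean signal x. A minimal numerical check of this identity (the 1-D test signal and the low-pass filter are hypothetical choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma = 65536, 0.1

# Hypothetical smooth ground truth and its noisy observation y = x + n
t = np.arange(N) / N
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)
y = x + sigma * rng.standard_normal(N)

# A fixed circulant low-pass filter, specified by its real frequency response H
f = np.fft.fftfreq(N)
H = 1.0 / (1.0 + (f / 0.01) ** 2)

xhat = np.real(np.fft.ifft(H * np.fft.fft(y)))

# SURE for the linear estimator F y; for a circulant F, tr(F) = sum(H)
sure = np.mean((xhat - y) ** 2) + 2 * sigma ** 2 * np.sum(H) / N - sigma ** 2
mse = np.mean((xhat - x) ** 2)   # oracle MSE, for comparison only
```

Minimizing such a criterion over a family of filters (e.g. Wiener filters indexed by a blur variance) is the principle behind the blur-SURE approach.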
Xue, F. & Blu, T.,"On The Degrees Of Freedom in Total Variation Minimization", Proceedings of the Forty-fifth IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'20), Barcelona, Spain, pp. 5690-5694, May 4-8, 2020. 
In the theory of linear models, the degrees of freedom (DOF) of an estimator play a pivotal role in risk estimation, as they quantify the complexity of a statistical modeling procedure. Considering total-variation (TV) regularization, we present a theoretical study of the DOF in Stein's unbiased risk estimate (SURE), under a very mild assumption. First, from the duality perspective, we give an analytic expression of the exact TV solution, with identification of its support. The closed-form expression of the DOF is derived based on the Karush-Kuhn-Tucker (KKT) conditions. It is also shown that the DOF is upper bounded by the nullity of a sub-analysis-matrix. The theoretical analysis is finally validated by numerical tests on image recovery. 
@inproceedings{blu2020b, author = "Xue, F. and Blu, T.", title = "On The Degrees Of Freedom in Total Variation Minimization", booktitle = "Proceedings of the Forty-fifth {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'20})", month = "May 4-8,", year = "2020", pages = "5690-5694", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020b" } 
Xue, F., Blu, T., Du, R. & Liu, J.,"An iterative SURE-LET approach to sparse reconstruction", Proceedings of the Forty-first IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'16), Shanghai, China, pp. 4493-4497, March 20-25, 2016. 
Sparsity-promoting regularization is often formulated as ℓ_{ν}-penalized minimization (0 < ν < 1), which can be efficiently solved by iteratively reweighted least squares (IRLS). The reconstruction quality is generally sensitive to the value of the regularization parameter. In this work, for accurate recovery, we develop two data-driven optimization schemes based on the minimization of Stein's unbiased risk estimate (SURE). First, we propose a recursive method for computing SURE for a given IRLS iterate, which enables us to evaluate the reconstruction error without bias and to select the optimal value of the regularization parameter. Second, for fast optimization, we parametrize each IRLS iterate as a linear combination of a few elementary functions (LET), and solve for the linear weights by minimizing SURE. Numerical experiments show that iterating this process leads to higher reconstruction accuracy with remarkably faster computation than standard IRLS. 
@inproceedings{blu2016b, author = "Xue, F. and Blu, T. and Du, R. and Liu, J.", title = "An iterative {SURE-LET} approach to sparse reconstruction", booktitle = "Proceedings of the Forty-first {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'16})", month = "March 20-25,", year = "2016", pages = "4493-4497", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2016b" } 
Xue, F., Blu, T., Liu, J. & Xia, A.,"Recursive Evaluation of SURE for Total Variation Denoising", Proceedings of the Forty-third IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'18), Calgary, AB, Canada, pp. 1338-1342, April 15-20, 2018. 
Recently, total variation (TV)-based regularization has become a standard technique for signal denoising. The reconstruction quality is generally sensitive to the value of the regularization parameter. In this work, based on Chambolle's algorithm, we develop two data-driven optimization schemes built on the minimization of Stein's unbiased risk estimate (SURE), which is statistically equivalent to the mean squared error (MSE). First, we propose a recursive evaluation of SURE to monitor the estimation error during Chambolle's iteration; the optimal value is then identified by the minimum SURE. Second, for fast optimization, we perform alternating updates between the regularization parameter and the solution within Chambolle's iteration. We exemplify the proposed methods with both 1D and 2D signal denoising. Numerical experiments show that the proposed methods lead to a highly accurate estimate of the regularization parameter and nearly optimal denoising performance. 
@inproceedings{blu2018d, author = "Xue, F. and Blu, T. and Liu, J. and Xia, A.", title = "Recursive Evaluation of {SURE} for Total Variation Denoising", booktitle = "Proceedings of the Forty-third {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'18})", month = "April 15-20,", year = "2018", pages = "1338-1342", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018d" } 
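Chambolle's algorithm, which the recursive SURE evaluation above is built around, is a projected-gradient iteration on the dual of the TV problem min_w (1/2)||y - w||² + λ||Dw||₁. A minimal 1-D sketch (the step size 0.25 is valid because ||DDᵀ|| ≤ 4 for the finite-difference operator; the signal, noise level, and λ are illustrative choices, and the paper's SURE-based tuning of λ is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma, lam = 1000, 0.3, 0.5

# Hypothetical piecewise-constant ground truth with additive noise
x = np.where(np.arange(N) < N // 2, 0.0, 1.0)
y = x + sigma * rng.standard_normal(N)

def Dt(p):
    """Adjoint of the forward-difference operator D w = np.diff(w)."""
    return np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))

# Projected gradient on the dual: w = y - D^T p with constraint |p_i| <= lam
p = np.zeros(N - 1)
for _ in range(500):
    p = np.clip(p - 0.25 * np.diff(Dt(p) - y), -lam, lam)
w = y - Dt(p)
```

The primal solution is recovered at every iteration as w = y - Dᵀp, which is what makes a per-iteration (recursive) SURE evaluation natural in this setting.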
Xue, F., Blu, T., Liu, J. & Xia, A.,"A Novel GCV-Based Criterion for Parameter Selection in Image Deconvolution", Proceedings of the Forty-third IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'18), Calgary, AB, Canada, pp. 1403-1407, April 15-20, 2018. 
A proper selection of the regularization parameter is essential for regularization-based image deconvolution. The main contribution of this paper is to propose a new form of generalized cross validation (GCV) as a criterion for this optimal selection. Incorporating a nil-trace nonlinear estimate, we develop this new GCV based on Stein's unbiased risk estimate (SURE), an unbiased estimate of the mean squared error (MSE). The key advantage of this GCV over SURE is that it does not require knowledge of the noise variance. We exemplify this criterion with both Tikhonov regularization and ℓ1-based sparse deconvolution. In particular, we develop a recursive evaluation of GCV for the ℓ1 estimate based on the iterative soft-thresholding (IST) algorithm. Numerical experiments demonstrate nearly optimal parameter selection with negligible loss of deconvolution quality. 
@inproceedings{blu2018e, author = "Xue, F. and Blu, T. and Liu, J. and Xia, A.", title = "A Novel {GCV}-Based Criterion for Parameter Selection in Image Deconvolution", booktitle = "Proceedings of the Forty-third {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'18})", month = "April 15-20,", year = "2018", pages = "1403-1407", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2018e" } 
Xue, F., Luisier, F. & Blu, T.,"SURE-LET image deconvolution using multiple Wiener filters", Proceedings of the 2012 IEEE International Conference on Image Processing (ICIP'12), Orlando, USA, pp. 3037-3040, September 30-October 3, 2012. 
We propose a novel deconvolution algorithm based on the minimization of Stein's unbiased risk estimate (SURE). We linearly parametrize the deconvolution process by using multiple Wiener filterings as elementary functions, followed by undecimated Haar-wavelet thresholding. The key contributions of our approach are: 1) the linear combination of several Wiener filters with different (but fixed) regularization parameters, which avoids the manual adjustment of a single nonlinear parameter; 2) the use of linear parametrization, which makes the SURE minimization finally boil down to solving a linear system of equations, leading to a very fast and exact optimization of the whole deconvolution process. The results obtained on standard test images show that our algorithm compares favorably with other state-of-the-art deconvolution methods in both speed and quality. 
@inproceedings{blu2012i, author = "Xue, F. and Luisier, F. and Blu, T.", title = "{SURE-LET} image deconvolution using multiple {W}iener filters", booktitle = "Proceedings of the 2012 {IEEE} International Conference on Image Processing ({ICIP'12})", month = "September 30-October 3,", year = "2012", pages = "3037-3040", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2012i" } 
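Contribution 2) above can be illustrated in the simpler denoising case: if the estimate is a linear combination Σ_k a_k F_k y of fixed circulant filters, SURE is quadratic in the weights, so the optimal weights solve a small linear system M a = c with M_kl = ⟨F_k y, F_l y⟩ and c_k = ⟨F_k y, y⟩ - σ²·tr(F_k). A hedged sketch with three hypothetical low-pass filters (the paper's actual method combines Wiener filters for deconvolution and adds wavelet thresholding, omitted here):

```python
import numpy as np

rng = np.random.default_rng(3)
N, sigma = 4096, 0.2
t = np.arange(N) / N
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)
y = x + sigma * rng.standard_normal(N)

# Three fixed circulant smoothers, given by real frequency responses H_k
f = np.fft.fftfreq(N)
Hs = [np.exp(-(f / fc) ** 2) for fc in (0.005, 0.02, 0.08)]
Y = np.fft.fft(y)
Fy = [np.real(np.fft.ifft(H * Y)) for H in Hs]   # filtered versions F_k y

# SURE is quadratic in the weights a -> solve the normal equations M a = c
K = len(Hs)
M = np.array([[Fy[k] @ Fy[l] for l in range(K)] for k in range(K)])
c = np.array([Fy[k] @ y - sigma ** 2 * np.sum(Hs[k]) for k in range(K)])
a = np.linalg.solve(M, c)
xhat = sum(ak * Fyk for ak, Fyk in zip(a, Fy))
```

Because the optimization is an exact linear solve rather than an iterative search, the whole parameter adjustment is fast and deterministic, which is the point made in the abstract.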
Xue, F., Luisier, F. & Blu, T.,"Multi-Wiener SURE-LET Deconvolution", IEEE Transactions on Image Processing, Vol. 22 (5), pp. 1954-1968, May 2013. 
In this paper, we propose a novel deconvolution algorithm based on the minimization of a regularized Stein's unbiased risk estimate (SURE), which is a good estimate of the mean squared error (MSE). We linearly parametrize the deconvolution process by using multiple Wiener filters as elementary functions, followed by undecimated Haar-wavelet thresholding. Due to the quadratic nature of SURE and the linear parametrization, the deconvolution problem finally boils down to solving a linear system of equations, which is very fast and exact. The linear coefficients, i.e., the solution of the linear system of equations, constitute the best approximation of the optimal processing on the Wiener-Haar-threshold basis that we consider. In addition, the proposed multi-Wiener SURE-LET approach is applicable to both periodic and symmetric boundary conditions, and can thus be used in various practical scenarios. The very competitive (both in computation time and quality) results show that the proposed algorithm, which can be interpreted as a kind of nonlinear Wiener processing, can be used as a basic tool for building more sophisticated deconvolution algorithms. 
@article{blu2013h, author = "Xue, F. and Luisier, F. and Blu, T.", title = "Multi-{W}iener {SURE-LET} Deconvolution", journal = "{IEEE} Transactions on Image Processing", month = "May", year = "2013", volume = "22", number = "5", pages = "1954-1968", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2013h" } 
Xue, X., Li, J. & Blu, T.,"An Iterative SURE-LET Deconvolution Algorithm Based on BM3D Denoiser", Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP'19), Taipei, Taiwan, pp. 1795-1799, September 22-25, 2019. 
Recently, plug-and-play priors (PPP) have become a popular technique for image reconstruction. Building on the basic iterative thresholding scheme, in this paper we propose a new iterative SURE-LET deconvolution algorithm with a plug-in BM3D denoiser. To optimize the deconvolution process, we linearly parametrize the thresholding function by using multiple BM3D denoisers as elementary functions. The key contributions of our approach are: (1) the linear combination of several BM3D denoisers with different (but fixed) parameters, which avoids the manual adjustment of a single nonlinear parameter; (2) linear parametrization makes the minimization of Stein's unbiased risk estimate (SURE) finally boil down to solving a linear system of equations, leading to a very fast and exact optimization during each iteration. In particular, the SURE of the BM3D denoiser is approximately evaluated by a finite-difference Monte-Carlo technique. Experiments show that the proposed algorithm, on average, achieves better deconvolution performance than other state-of-the-art methods, both numerically and visually. 
@inproceedings{blu2019c, author = "Xue, X. and Li, J. and Blu, T.", title = "An Iterative SURE-LET Deconvolution Algorithm Based on BM3D Denoiser", booktitle = "Proceedings of the 2019 {IEEE} International Conference on Image Processing ({ICIP'19})", month = "September 22-25,", year = "2019", pages = "1795-1799", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2019c" } 
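The finite-difference Monte-Carlo evaluation of SURE mentioned above estimates the divergence (trace of the Jacobian) of a black-box denoiser f from a random probe vector b: div f(y) ≈ bᵀ(f(y + εb) - f(y))/ε. A sketch using a soft-threshold denoiser as a stand-in, because its exact divergence (the number of surviving coefficients) is known and the estimate can be checked; BM3D itself is not used here:

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, eps = 4096, 1.0, 1e-3
y = 1.5 * rng.standard_normal(N)   # hypothetical noisy observation

def denoise(v, thr=T):
    """Soft-thresholding: a simple stand-in for a black-box denoiser."""
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

# Finite-difference Monte-Carlo estimate of the divergence tr(df/dy)
b = rng.standard_normal(N)
div_mc = b @ (denoise(y + eps * b) - denoise(y)) / eps

# For soft-thresholding the divergence is exactly the surviving count
div_exact = np.sum(np.abs(y) > T)
```

Plugging such a divergence estimate into the SURE formula is what makes SURE usable for denoisers, like BM3D, that have no closed-form Jacobian.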
Zhang, X., Gilliam, C. & Blu, T.,"Iterative Fitting After Elastic Registration: An Efficient Strategy For Accurate Estimation Of Parametric Deformations", Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP'17), Beijing, China, pp. 1492-1496, September 17-20, 2017. 
We propose an efficient method for image registration based on iteratively fitting a parametric model to the output of an elastic registration. It combines the flexibility of elastic registration (able to estimate complex deformations) with the robustness of parametric registration (able to estimate very large displacements). Our approach is made feasible by using the recent Local All-Pass (LAP) algorithm, a fast and accurate filter-based method for estimating the local deformation between two images. Moreover, at each iteration we fit a linear parametric model to the local deformation, which is equivalent to solving a linear system of equations (very fast and efficient). We use a quadratic polynomial model; however, the framework can easily be extended to more complicated models. The significant advantage of the proposed method is its robustness to model mismatch (e.g. noise and blurring). Experimental results on synthetic and real images demonstrate that the proposed algorithm is highly accurate and outperforms a selection of image registration approaches. 
@inproceedings{blu2017g, author = "Zhang, X. and Gilliam, C. and Blu, T.", title = "Iterative Fitting After Elastic Registration: An Efficient Strategy For Accurate Estimation Of Parametric Deformations", booktitle = "Proceedings of the 2017 {IEEE} International Conference on Image Processing ({ICIP'17})", month = "September 17-20,", year = "2017", pages = "1492-1496", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2017g" } 
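The per-iteration fitting step described above, projecting a dense displacement field onto a linear parametric model, reduces to ordinary least squares. A self-contained sketch with a synthetic affine field and a quadratic polynomial basis (the LAP estimation itself is not reproduced; the displacement field here is simulated, and all sizes and coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
H, W = 64, 64
yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
xf, yf = xx.ravel() / W, yy.ravel() / H   # normalized pixel coordinates

# Synthetic horizontal displacement: an affine field plus small noise,
# mimicking the noisy output of an elastic (e.g. LAP-style) registration
u_true = 0.5 + 1.2 * xf - 0.8 * yf
u_noisy = u_true + 0.05 * rng.standard_normal(u_true.shape)

# Quadratic polynomial basis; least-squares fit of the model to the field
B = np.stack([np.ones_like(xf), xf, yf, xf**2, xf * yf, yf**2], axis=1)
coef, *_ = np.linalg.lstsq(B, u_noisy, rcond=None)
u_fit = B @ coef
```

Averaging over thousands of pixels is what gives the fitted model its robustness to noise in the dense field, the property the abstract emphasizes.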
Zhang, X., Gilliam, C. & Blu, T.,"Parametric Registration for Mobile Phone Images", Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP'19), Taipei, Taiwan, pp. 1312-1316, September 22-25, 2019. 
Image registration is a significant step in a wide range of practical applications and a fundamental problem in various computer vision tasks. In this paper, we propose a highly accurate and fast parametric registration method for mobile phone photos. The proposed algorithm is based on a fast and accurate elastic registration algorithm, the Local All-Pass (LAP) algorithm, which operates in a coarse-to-fine manner. At each iteration, the LAP displacement field is fitted by a parametric model, so the image registration problem reduces to finding a few parameters that describe the displacement field. The fitting step can be performed very efficiently by solving a linear system of equations, and the fitting model can easily be changed to suit specific applications. Experimental results on both synthetic and real images demonstrate the high accuracy and computational efficiency of the proposed algorithm. 
@inproceedings{blu2019d, author = "Zhang, X. and Gilliam, C. and Blu, T.", title = "Parametric Registration for Mobile Phone Images", booktitle = "Proceedings of the 2019 {IEEE} International Conference on Image Processing ({ICIP'19})", month = "September 22-25,", year = "2019", pages = "1312-1316", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2019d" } 
Zhang, X., Gilliam, C. & Blu, T.,"All-Pass Parametric Image Registration", IEEE Transactions on Image Processing, Vol. 29 (1), pp. 5625-5640, December 2020. 
Image registration is a required step in many practical applications that involve the acquisition of multiple related images. In this paper, we propose a methodology to deal with both the geometric and intensity transformations in the image registration problem. The main idea is to modify an accurate and fast elastic registration algorithm (Local All-Pass, LAP) so that it returns a parametric displacement field, and to estimate the intensity changes by fitting another parametric expression. Although we demonstrate the methodology using a low-order parametric model, our approach is highly flexible and easily allows substantially richer parametrisations, while requiring only limited extra computation cost. In addition, we propose two novel quantitative criteria to evaluate the accuracy of the alignment of two images ("salience correlation") and the number of degrees of freedom ("parsimony") of a displacement field, respectively. Experimental results on both synthetic and real images demonstrate the high accuracy and computational efficiency of our methodology. Furthermore, we demonstrate that the resulting displacement fields are more parsimonious than the ones obtained with other state-of-the-art image registration approaches. 
@article{blu2020c, author = "Zhang, X. and Gilliam, C. and Blu, T.", title = "All-Pass Parametric Image Registration", journal = "IEEE Transactions on Image Processing", month = "December", year = "2020", volume = "29", number = "1", pages = "5625-5640", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020c" } 
Zhang, Z. & Blu, T.,"Blind Source Separation via a Weak Exclusion Principle", Proceedings of the Forty-seventh IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'22), Singapore, pp. 2699-2703, May 22-27, 2022. 
In this paper, we propose a generalized Blind Source Separation (BSS) method using a novel assumption that we call the "weak exclusion" principle (WEP). We first give the mathematical definition of the exclusion criterion and propose an iterative algorithm to minimize it. We then test WEP on synthetic and real datasets against four other methods; the experiments demonstrate that WEP outperforms them, both in terms of accuracy and in terms of speed. 
@inproceedings{blu2022a, author = "Zhang, Z. and Blu, T.", title = "Blind Source Separation via a Weak Exclusion Principle", booktitle = "Proceedings of the Forty-seventh {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP'22})", month = "May 22-27,", year = "2022", pages = "2699-2703", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2022a", doi = "10.1109/ICASSP43922.2022.9747709" } 
Zhao, S., Cahill, D.G., Li, S., Xiao, F., Blu, T., Griffith, J.F. & Chen, W.,"Denoising of three-dimensional fast spin echo magnetic resonance images of knee joints using spatial-variant noise-relevant residual learning of convolution neural network", Computers in Biology and Medicine, Vol. 151, pp. 106295, December 2022. 
Purpose: Two-dimensional (2D) fast spin echo (FSE) techniques play a central role in the clinical magnetic resonance imaging (MRI) of knee joints. Moreover, three-dimensional (3D) FSE provides high-isotropic-resolution magnetic resonance (MR) images of knee joints, but it has a reduced signal-to-noise ratio compared to 2D FSE. Deep-learning denoising methods are a promising approach for denoising MR images, but they are often trained using synthetic noise due to challenges in obtaining true noise distributions for MR images. In this study, inherent true noise information from a two-number-of-excitations (2NEX) acquisition was used to develop a deep-learning model based on residual learning of a convolutional neural network (CNN), and this model was used to suppress the noise in 3D FSE MR images of knee joints. Methods: A deep learning-based denoising method was developed. The proposed CNN used two-step residual learning over parallel transporting and residual blocks and was designed to comprehensively learn real noise features from 2NEX training data. Results: The results of an ablation study validated the network design. The new method achieved improved denoising performance on 3D FSE knee MR images compared with current state-of-the-art methods, based on the peak signal-to-noise ratio and structural similarity index measure. The improved image quality after denoising using the new method was verified by radiological evaluation. Conclusion: A deep CNN using the inherent spatially varying noise information in 2NEX acquisitions was developed. This method showed promise for clinical MRI assessments of the knee, and has potential applications for the assessment of other anatomical structures. 
@article{blu2022f, author = "Zhao, Shutian and Cahill, D{\'o}nal G. and Li, Siyue and Xiao, Fan and Blu, Thierry and Griffith, James F. and Chen, Weitian", title = "Denoising of three-dimensional fast spin echo magnetic resonance images of knee joints using spatial-variant noise-relevant residual learning of convolution neural network", journal = "Computers in Biology and Medicine", month = "December", year = "2022", volume = "151", pages = "106295", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2022f", doi = "10.1016/j.compbiomed.2022.106295" } 
Zhao, T. & Blu, T.,"Detecting Curves in Very Noisy Images Using Fourier-Argand Moments", Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP'19), Taipei, Taiwan, pp. 3011-3015, September 22-25, 2019. 
Detection of curves (e.g., ridges) in very noisy images is an important yet challenging task in photon-starved imaging applications (e.g., nuclear imaging modalities, fluorescence/electron microscopy, radioastronomy, first-photon/light-in-flight imaging). In this paper, we exploit the consistency of the image along the curve, i.e., the fact that the image changes slowly when we move along the curve (a "locally" laminar image). We compute a sequence of complex scalars that we call Fourier-Argand moments, and show that the direction of variation of a laminar image is purely encoded in the phase of these moments. In particular, focusing on ridges located at the center of the image, we show that using these moments altogether in a frequency estimation algorithm provides a very accurate and highly robust estimate of the direction of the ridge: we demonstrate this accuracy for noise levels as high as -10 dB. We then show how to detect curves, i.e., local ridges, by computing the Fourier-Argand moments within a sliding window across the image, and design a consistency map whose thresholding keeps only the pixels on the curve. Numerical experiments on both synthetic images and real images (low-light photography) demonstrate the accuracy and robustness to noise of the proposed method, compared to a state-of-the-art method. 
@inproceedings{blu2019e, author = "Zhao, T. and Blu, T.", title = "Detecting Curves in Very Noisy Images Using {F}ourier-{A}rgand Moments", booktitle = "Proceedings of the 2019 {IEEE} International Conference on Image Processing ({ICIP'19})", month = "September 22-25,", year = "2019", pages = "3011-3015", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2019e" } 
Zhao, T. & Blu, T.,"The Fourier-Argand Representation: An Optimal Basis of Steerable Patterns", IEEE Transactions on Image Processing, Vol. 29 (1), pp. 6357-6371, December 2020. 
Computing the convolution between a 2D signal and a corresponding filter with variable orientations is a basic problem that arises in various tasks ranging from low-level image processing (e.g. ridge/edge detection) to high-level computer vision (e.g. pattern recognition). Despite decades of research, an efficient method for solving this problem is still lacking. In this paper, we investigate this problem from the perspective of approximation by considering the following question: what is the optimal basis for approximating all rotated versions of a given bivariate function? Surprisingly, solely minimising the L2 approximation error leads to a rotation-covariant linear expansion, which we name the Fourier-Argand representation. This representation presents two major advantages: 1) rotation-covariance of the basis, which implies a "strong steerability": rotating by an angle α corresponds to multiplying each basis function by a complex scalar exp(ikα); 2) optimality of the Fourier-Argand basis, which ensures that a small number of basis functions suffices to accurately approximate complicated patterns and highly direction-selective filters. We show the relation between the Fourier-Argand representation and the Radon transform, leading to an efficient implementation of the decomposition for digital filters. We also show how to retrieve the accurate orientation of local structures/patterns using a fast frequency estimation algorithm. 
@article{blu2020d, author = "Zhao, T. and Blu, T.", title = "The {F}ourier-{A}rgand Representation: An Optimal Basis of Steerable Patterns", journal = "IEEE Transactions on Image Processing", month = "December", year = "2020", volume = "29", number = "1", pages = "6357-6371", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2020d" } 
Zheng, N., Li, X., Blu, T. & Lee, T.,"SURE-MSE speech enhancement for robust speech recognition", Proceedings of the 2010 International Symposium on Chinese Spoken Language Processing (ISCSLP'10), Tainan, Taiwan, pp. 271-274, November 29-December 3, 2010. 
This paper presents a new approach to enhancing noisy (white Gaussian noise) speech signals for robust speech recognition. It is based on the minimization of an estimate of the denoising MSE (known as SURE) and does not require any hypotheses on the original signal. The enhanced signal is obtained by thresholding coefficients in the DCT domain, with the parameters of the thresholding functions specified through the minimization of the SURE. Thanks to a linear parametrization, this optimization is very cost-effective. The method also works well for non-white noise, with a noise-whitening step before the optimization. We have performed automatic speech recognition tests on a subset of the AURORA 2 database to compare our method with different denoising strategies. The results show that our method brings a substantial increase in recognition accuracy. 
@inproceedings{blu2010l, author = "Zheng, N. and Li, X. and Blu, T. and Lee, T.", title = "{SURE}-{MSE} speech enhancement for robust speech recognition", booktitle = "Proceedings of the 2010 International Symposium on Chinese Spoken Language Processing (ISCSLP'10)", month = "November 29-December 3,", year = "2010", pages = "271-274", url = "http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/publications.php?paper=blu2010l" } 
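The SURE-driven threshold tuning used above has a closed form in the orthonormal-transform setting: for soft-thresholding of coefficients c of a signal observed in Gaussian noise, SURE(T) = -Nσ² + Σ_i min(c_i², T²) + 2σ²·#{|c_i| > T}, so the best threshold can be found by a simple scan, without the clean signal. A sketch in the identity basis (the paper works on DCT coefficients of speech frames with a richer linear parametrization, not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(6)
N, sigma = 10000, 1.0
# Hypothetical sparse clean coefficients; observation c = x + noise
x = np.zeros(N)
x[:500] = 5.0 * rng.standard_normal(500)
c = x + sigma * rng.standard_normal(N)

def soft(v, T):
    return np.sign(v) * np.maximum(np.abs(v) - T, 0.0)

def sure_soft(c, T, sigma):
    """Closed-form SURE of soft-thresholding at threshold T."""
    return (-N * sigma ** 2 + np.sum(np.minimum(c ** 2, T ** 2))
            + 2 * sigma ** 2 * np.sum(np.abs(c) > T))

Ts = np.linspace(0.1, 5.0, 50)
T_sure = Ts[np.argmin([sure_soft(c, T, sigma) for T in Ts])]
# Oracle threshold (uses the clean x), for comparison only
T_oracle = Ts[np.argmin([np.sum((soft(c, T) - x) ** 2) for T in Ts])]
```

With many coefficients, the SURE-selected threshold is essentially as good as the oracle one, which is why the criterion can replace a clean reference in practice.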
Created by JabRef on 18/04/2023.