Electronic Engineering Department, The Chinese University of Hong Kong - Prof LEE, Tan 李丹


Professor
BSc, MPhil, PhD (CUHK)
Rm 324, Ho Sin Hang Engineering Building
Tel: +852 3943 8267

Research Interests:

Speech signal processing, spoken language technologies, pattern recognition, multimedia information retrieval, hearing and speaking assistive and rehabilitation technologies, music information processing


Homepage: http://www.ee.cuhk.edu.hk/~tanlee/

Resume of Career

Tan Lee received his BSc and MPhil degrees in Electronics and his PhD degree in Electronic Engineering, all from the Chinese University of Hong Kong (CUHK), in 1988, 1990 and 1996, respectively. He is currently a Professor in the Department of Electronic Engineering and Associate Dean (Education) of the Faculty of Engineering at CUHK.

Tan Lee has been working on speech and language related research since the early 1990s. His research interests cover speech and audio signal processing, spoken language technology, deep learning models of speech and language, paralinguistics in speech, and the neurological basis of speech and language. He led the effort to develop Cantonese-focused spoken language technologies that have been widely licensed for industrial applications. His work has addressed many challenging problems in multi-lingual and cross-lingual speech processing, e.g., code-switching and low- or zero-resource languages. His recent research is characterized by in-depth, substantive collaboration across a wide spectrum of disciplines. He is committed to applying signal processing and machine learning methods to atypical speech and language resulting from different kinds of human communication and cognitive disorders across a wide age range.

At CUHK, Tan Lee teaches courses on signals and systems, digital signal processing, speech processing and automatic speech recognition. He was invited to teach an intensive course on spoken language technology, tailored for elite students of the Yao Class at Tsinghua University, in 2013, 2015 and 2017. Since 2019, he has been teaching artificial intelligence-related topics to gifted students aged 10 to 16.

Tan Lee is a member of the IEEE and a member of the International Speech Communication Association (ISCA). He is an Associate Editor of the IEEE/ACM Transactions on Audio, Speech, and Language Processing, and the Vice Chair of the ISCA Special Interest Group on Chinese Spoken Language Processing. He served as an Area Chair on the Technical Programme Committees of INTERSPEECH 2014, 2016 and 2018, and as the General Co-Chair of the 11th and 12th International Symposium on Chinese Spoken Language Processing (ISCSLP).

Research Interests

Speech and audio signal processing: speech analysis, pitch estimation, speech enhancement/separation
Spoken language technology: automatic speech/speaker/language recognition, text-to-speech
Deep learning for speech: acoustic model adaptation, deep factorization, information disentanglement
Speech and healthcare: hearing assistive devices, assessment of speech/language disorders
Paralinguistics in speech: emotion, expressiveness, speaking style, attitude, empathy
Cantonese: linguistic properties, speech and language corpora, code-mixing, written vs. spoken

Honors, Awards and Achievements

  • Co-author of Best Student Papers in ICASSP 2005 and 2019
  • Co-inventor of ACEHearing, winner of the Bronze Award in Asian Innovation Awards 2011
  • First place, Spoken Web Search (SWS) Task, MediaEval Benchmarking Initiative for Multimedia Evaluation 2012
  • Distinguished Lecturer, Asia-Pacific Signal and Information Processing Association (APSIPA), 2014-2015
  • CUHK Vice-Chancellor's Exemplary Teaching Award in 2004
  • CUHK Faculty of Engineering's Exemplary Teaching Awards (12 times in 2001 – 2021)

Recent Funded Projects

  • RGC General Research Fund: Objective Assessment of Physical Competence and Wellness Based on Voice and Speech Analytics, Jan. 2021 - Dec. 2023.
  • CUHK Sustainable Research Fund: Quantifying Effectiveness of Psychotherapy with Deep Learning Based Speech Analytics, Aug. 2019 - Jul. 2022.
  • Innovation and Technology Fund: Personalized Storytelling System Based on Expressive Voice Creation by Deep Learning, May 2019 - Apr. 2021.
  • RGC General Research Fund: Unsupervised Speech Modeling for Low-Resource Languages, Jan. 2017 - Dec. 2019.
  • Innovation and Technology Fund: Development of Computer-Based Tools for Clinical Assessment of Speech, Hearing and Language Disabilities, Jan. 2015 - Jun. 2016.
  • RGC General Research Fund: Objective Assessment of Pathological Voices Based on Acoustic Signal Analysis and Classification, Dec. 2014 - Nov. 2017.

Taught Courses

Undergraduate

  • Innovations in Electronic Engineering
  • Signals and Systems
  • Microprocessors and Computer Systems
  • Principles of Communication Systems
  • Advanced Digital Signal Processing and Applications
  • Introduction to Digital Signal Processing
  • Technology, Society and Engineering Practice
  • Demystifying AI

Postgraduate

  • Digital Processing of Speech Signals
  • Pattern Recognition
  • Automatic Speech Recognition
  • History of Machine Translation

External Service

  • Associate Editor, IEEE/ACM Transactions on Audio, Speech and Language Processing, 2017 – present
  • Associate Editor, EURASIP Journal on Advances in Signal Processing, 2005 – 2019
  • Guest Editor of Special Issue, IEEE Journal on Selected Topics in Signal Processing, 04/2019 – 02/2020
  • General Co-Chair, the 12th International Symposium on Chinese Spoken Language Processing, Hong Kong, 24-27/01/2021
  • General Co-Chair, the 11th International Symposium on Chinese Spoken Language Processing, Taipei, 26-29/11/2018
  • Area Chair, Technical Program Committee, INTERSPEECH 2018, Hyderabad, India, 02-06/09/2018
  • Area Chair, Technical Program Committee, INTERSPEECH 2016, San Francisco, USA, 08-12/09/2016
  • Area Chair, Technical Program Committee, INTERSPEECH 2014, Singapore, 14/09 – 18/09/2014

Recent Publications

  1. Xurong Xie, Xunying Liu, Tan Lee and Lan Wang, “Bayesian learning for deep neural network adaptation,” in IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 2096-2110, 2021.
  2. Ying Qin, Tan Lee and Anthony P. H. Kong, “Automatic assessment of speech impairment in Cantonese-speaking people with aphasia,” IEEE Journal of Selected Topics in Signal Processing, Vol.14, No.2, pp.331-345, February 2020.
  3. Ying Qin, Yuzhong Wu, Tan Lee and Anthony P. H. Kong, “An end-to-end approach to automatic speech assessment for Cantonese-speaking people with aphasia,” Journal of Signal Processing Systems, Vol.8, pp.819-830, 2020.
  4. Siyuan Feng and Tan Lee, “Exploiting cross-lingual speaker and phonetic diversity for unsupervised subword modeling,” IEEE/ACM Transactions on Audio, Speech and Language Processing, Vol.27, No.12, pp.2000-2011, December 2019.
  5. Yuanyuan Liu, Tan Lee, Thomas K.T. Law and Kathy Y.S. Lee, “Acoustical assessment of voice disorder with continuous speech using ASR posterior features,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, Vol.27, No.6, pp.1047-1059, June 2019.
  6. Zhiyuan Peng, Xu Li and Tan Lee, “Pairing weak with strong: twin models for defending against adversarial attack on speaker verification,” Proceedings of INTERSPEECH 2021, Brno, August 2021.
  7. Si-Ioi Ng, Cymie Ng, Jingyu Li and Tan Lee, “Detection of consonant errors in disordered speech based on consonant-vowel segment embedding,” Proceedings of INTERSPEECH 2021, Brno, August 2021.
  8. Daxin Tan and Tan Lee, “Fine-grained style modeling, transfer and prediction in text-to-speech synthesis via phone-level content-style disentanglement,” Proceedings of INTERSPEECH 2021, Brno, August 2021.
  9. Guangyan Zhang, Ying Qin, Daxin Tan and Tan Lee, “Applying the information bottleneck principle to prosodic representation learning,” Proceedings of INTERSPEECH 2021, Brno, August 2021.
  10. Guangyan Zhang, Ying Qin and Tan Lee, “Learning syllable-level discrete prosodic representation for expressive speech generation,” Proceedings of INTERSPEECH 2020, Shanghai, October 2020.
  11. Jingyu Li and Tan Lee, “Text-independent speaker verification with dual attention network,” Proceedings of INTERSPEECH 2020, Shanghai, October 2020.
  12. Shuiyang Mao, P. C. Ching and Tan Lee, “EigenEmo: Spectral utterance representation using dynamic mode decomposition for speech emotion classification,” Proceedings of INTERSPEECH 2020, Shanghai, October 2020.
  13. Shuiyang Mao, P. C. Ching and Tan Lee, “Emotion profile refinery for speech emotion classification,” Proceedings of INTERSPEECH 2020, Shanghai, October 2020.
  14. Shuiyang Mao, P. C. Ching, C.-C. Jay Kuo and Tan Lee, “Advancing multiple instance learning with attention modeling for categorical speech emotion classification,” Proceedings of INTERSPEECH 2020, Shanghai, October 2020.
  15. Si Ioi Ng and Tan Lee, “Automatic detection of phonological errors in child speech using Siamese recurrent autoencoder,” Proceedings of INTERSPEECH 2020, Shanghai, October 2020.
  16. Si Ioi Ng, Cymie Wing-Yee Ng, Jiarui Wang, Tan Lee, Kathy Yuet-Sheung Lee and Michael Chi-Fai Tong, “CUCHILD: A large-scale Cantonese corpus of child speech for phonology and articulation assessment,” Proceedings of INTERSPEECH 2020, Shanghai, October 2020.
  17. Yuzhong Wu and Tan Lee, “Time-frequency feature decomposition based on sound duration for acoustic scene classification,” Proceedings of ICASSP 2020, pp.716-720, Virtual Barcelona, Spain, May 2020.
  18. Matthew K.-H. Ma, Tan Lee, Manson C.-M. Fong and William S.Y. Wang, “Resting-state EEG-based biometrics with signals features extracted by multivariate empirical mode decomposition,” Proceedings of ICASSP 2020, pp.991-995, Virtual Barcelona, Spain, May 2020.
  19. Zhiyuan Peng, Siyuan Feng and Tan Lee, “Mixture factorized auto-encoder for unsupervised hierarchical deep factorization of speech signal,” Proceedings of ICASSP 2020, pp.6769-6773, Virtual Barcelona, Spain, May 2020.
  20. Shuiyang Mao, P.C. Ching and Tan Lee, “Deep learning of segment-level feature representation with multiple instance learning for utterance-level speech emotion recognition,” Proceedings of INTERSPEECH 2019, pp.1686-1690, September 2019.
  21. Jiarui Wang, Ying Qin, Zhiyuan Peng and Tan Lee, “Child speech disorder detection with Siamese recurrent network using speech attribute features,” Proceedings of INTERSPEECH 2019, pp.3885-3889, September 2019.
  22. Xurong Xie, Xunying Liu, Tan Lee and Lan Wang, “Fast DNN acoustic model speaker adaptation by learning hidden unit contribution features,” Proceedings of INTERSPEECH 2019, pp.759-763, September 2019.
  23. Ying Qin, Tan Lee and Anthony Pak-Hin Kong, “Automatic assessment of language impairment based on raw ASR output,” Proceedings of INTERSPEECH 2019, pp.3078-3082, Graz, September 2019.
  24. Siyuan Feng, Tan Lee and Zhiyuan Peng, “Combining adversarial training and disentangled speech representation for robust zero-resource subword modeling,” Proceedings of INTERSPEECH 2019, pp.1093-1097, Graz, September 2019.
  25. Siyuan Feng and Tan Lee, “Improving unsupervised subword modeling via disentangled speech representation learning and transformation,” Proceedings of INTERSPEECH 2019, pp. 281-285, Graz, September 2019.
  26. Zhiyuan Peng, Siyuan Feng and Tan Lee, “Adversarial multi-task deep features and unsupervised back-end adaptation for language recognition,” Proceedings of ICASSP 2019, pp.5961-5965, Brighton, May 2019.
  27. Xurong Xie, Xunying Liu, Tan Lee, Shoukang Hu and Lan Wang, “BLHUC: Bayesian learning of hidden unit contributions for deep neural network speaker adaptation,” Proceedings of ICASSP 2019, pp.5711-5715, Brighton, May 2019. (Winner of Best Student Paper Award)
  28. Yuzhong Wu and Tan Lee, “Enhancing sound texture in CNN-based acoustic scene classification,” Proceedings of ICASSP 2019, pp.815-819, Brighton, May 2019.
  29. Ying Qin, Tan Lee and Anthony Pak Hin Kong, “Combining phone posteriorgrams from strong and weak recognizers for automatic speech assessment of people with aphasia,” Proceedings of ICASSP 2019, pp.6420-6424, Brighton, May 2019.
  30. Shuiyang Mao, Dehua Tao, Guangyan Zhang, Pak-Chung Ching and Tan Lee, “Revisiting hidden Markov models for speech emotion recognition,” Proceedings of ICASSP 2019, pp.6715-6719, Brighton, May 2019.