July - December, 2018
Clarifai Research Intern, San Francisco
High-level vision research with deep learning. Mentor: David Eigen
Sep 2015 - present
Ph.D. Student, Chinese Univ. of Hong Kong
Focus on deep learning and computer vision. Supervisor: Xiaogang Wang

Occasional consultant at Accelerate Inc.

Summer 2012
Mitacs Research Internship at University of Victoria
Devised an algorithm to reduce server bandwidth consumption in a P2P video-on-demand (VoD) system.
2009-2014
Bachelor's Degree at Dalian University of Technology
Major in Electronic and Information Engineering.
How to pronounce my name Hongyang? It's Home + Young :)

Short bio.
I am currently a third-year (2017.8 - 2018.7) Ph.D. student at The Chinese University of Hong Kong. My research covers a wide span of deep learning, computer vision (high-level tasks), and machine learning. In particular, I am interested in object detection, generic deep learning algorithms, and capsule research. I used to do some research on salient object detection.


Publications

A neat first-author version; for a full list, check the Google Scholar page.

Neural Network Encapsulation

The computational complexity of capsule routing becomes a bottleneck when scaling up to larger networks, since lower capsules need to correspond to each and every higher capsule. To resolve this limitation, we approximate the routing process with two branches: a master branch that collects primary information from its direct contact in the lower layer, and an aide branch that replenishes the master based on pattern variants encoded in other lower capsules. Compared with previous iterative and unsupervised routing schemes, these two branches communicate in a fast, supervised, one-time-pass fashion. Motivated by routing's aim of making higher capsules agree with lower capsules, we extend the mechanism to compensate for the rapid loss of information in nearby layers: a feedback agreement unit sends higher capsules back as feedback, which can be regarded as an additional regularization on the network. The feedback agreement is achieved by comparing the optimal transport divergence between two distributions. (A toy sketch of the two-branch routing follows below.)
Hongyang Li, Xiaoyang Guo, Bo Dai, Wanli Ouyang, Xiaogang Wang
ECCV 2018
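
A minimal PyTorch sketch of the two-branch idea; the layer shapes, the sigmoid gate, and all names here are illustrative assumptions, not the released implementation:

    import torch
    import torch.nn as nn

    class TwoBranchRouting(nn.Module):
        # Hypothetical sketch: replace iterative routing with one supervised
        # pass. The master branch maps lower capsules directly to higher
        # capsules; the aide branch adds a gated correction drawn from the
        # pattern variants encoded in the lower layer.
        def __init__(self, n_lower, d_lower, n_higher, d_higher):
            super().__init__()
            in_dim, out_dim = n_lower * d_lower, n_higher * d_higher
            self.master = nn.Linear(in_dim, out_dim)  # primary information
            self.aide = nn.Linear(in_dim, out_dim)    # replenishing branch
            self.gate = nn.Linear(in_dim, out_dim)    # which variants to keep
            self.n_higher, self.d_higher = n_higher, d_higher

        def forward(self, lower):                 # lower: (B, n_lower, d_lower)
            flat = lower.flatten(1)
            # one fast, supervised pass: master plus gated aide, no iterations
            out = self.master(flat) + torch.sigmoid(self.gate(flat)) * self.aide(flat)
            return out.view(-1, self.n_higher, self.d_higher)
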
Zoom Out-and-In Network with Map Attention Decision for Region Proposal and Object Detection

We propose a zoom-out-and-in network for generating object proposals. 
A key observation is that it is difficult to classify anchors of different sizes 
with the same set of features. A map attention decision (MAD) unit is further proposed to aggressively search for neuron activations among the two streams and attend to the ones that contribute most to the feature learning of the final loss. The unit serves as a decision maker that adaptively activates maps along certain channels with the sole purpose of optimizing the overall training loss. One advantage of MAD is that the learned weights enforced on each feature channel are predicted on-the-fly based on the input context, which is more suitable than the fixed enforcement of a convolutional kernel. (A toy sketch of such a gate follows below.)
Hongyang Li, Yu Liu, Wanli Ouyang, Xiaogang Wang
International Journal of Computer Vision (IJCV) 2018
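
A small sketch of what such an input-dependent channel gate could look like in PyTorch; the global pooling, sigmoid, and single linear layer are assumptions for illustration, not the paper's exact MAD unit:

    import torch
    import torch.nn as nn

    class MapAttentionDecision(nn.Module):
        # Predict one weight per channel from global context and gate the
        # concatenated zoom-out/zoom-in maps, so channel selection depends on
        # the input rather than on a fixed convolutional kernel.
        def __init__(self, channels):
            super().__init__()
            self.fc = nn.Linear(2 * channels, 2 * channels)

        def forward(self, zoom_out, zoom_in):          # each: (B, C, H, W)
            x = torch.cat([zoom_out, zoom_in], dim=1)  # (B, 2C, H, W)
            context = x.mean(dim=(2, 3))               # global average pooling
            weights = torch.sigmoid(self.fc(context))  # on-the-fly, per channel
            return x * weights[:, :, None, None]       # attend contributive maps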

Rethinking Feature Discrimination and Polymerization for Large-scale Recognition

Feature matters. How to train a deep network to acquire discriminative features across categories and polymerized features within classes has always been at the core of many computer vision tasks. In this paper, we address this problem based on the simple intuition that the cosine distance of features in high-dimensional space should be close enough within one class and far away across categories. To this end, we propose the congenerous cosine (COCO) algorithm to simultaneously optimize the cosine similarity among data. It inherits the softmax property to make inter-class features discriminative and shares the idea of class centroids from metric learning. (A toy sketch of the loss follows below.)
Yu Liu*, Hongyang Li*, Xiaogang Wang (* equal contribution)
NIPS 2017 Deep Learning Workshop
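
The intuition translates into a compact objective; the sketch below is a congenerous-cosine-style loss where the scale alpha and the handling of centroids are assumptions, not the paper's exact formulation:

    import torch
    import torch.nn.functional as F

    def coco_loss(features, labels, centroids, alpha=10.0):
        # features: (B, D), labels: (B,), centroids: (K, D), one per class.
        f = F.normalize(features, dim=1)    # unit-length features
        c = F.normalize(centroids, dim=1)   # unit-length class centroids
        logits = alpha * f @ c.t()          # scaled cosine similarity (B, K)
        # softmax cross-entropy pulls features toward their own centroid
        # (polymerization) and pushes them from the others (discrimination)
        return F.cross_entropy(logits, labels)
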
Do We Really Need More Training Data for Object Localization
New dataset proposed! ImageNet-3k, where the total number of classes is around 2,700, including the original 1,000 classes from the standard ILSVRC CLS challenge. Source images are provided from the official website and we annotate each category with instance-level bounding boxes.

As more datasets become available nowadays, one may wonder whether the success of deep learning descends from data augmentation only. In this paper, we propose a new dataset, namely the Extended ImageNet Classification (EIC) dataset, to investigate whether more training data is crucial. We find that more data can enhance average recall, but a more balanced data distribution among categories obtains better results, even at the cost of fewer training samples.
Multi-Bias Non-linear Activation in Deep Neural Networks

In this paper, we propose a multi-bias non-linear activation (MBA) layer 
to explore the information hidden in the magnitudes of responses. 
It is placed after the convolution layer to decouple the responses 
to a convolution kernel into multiple maps by multi-thresholding magnitudes, 
thus generating more patterns in the feature space at a low computational cost. 
It provides great flexibility in selecting responses to different visual patterns in different magnitude ranges to form rich representations in higher layers. (A toy sketch of the layer follows below.)
Hongyang Li, Wanli Ouyang, Xiaogang Wang
ICML 2016
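
A toy PyTorch sketch of the layer; the number of biases k and their initialization are assumed here for illustration, not taken from the paper:

    import torch
    import torch.nn as nn

    class MultiBiasActivation(nn.Module):
        # After a convolution, add k learnable biases to every response map
        # and apply the non-linearity to each shifted copy, decoupling one
        # map into k maps that fire in different magnitude ranges.
        def __init__(self, channels, k=4):
            super().__init__()
            self.bias = nn.Parameter(
                torch.linspace(-1.0, 1.0, k).repeat(channels, 1))  # (C, k)

        def forward(self, x):                   # x: (B, C, H, W), post-conv
            b, c, h, w = x.shape
            shifted = x.unsqueeze(2) + self.bias.view(1, c, -1, 1, 1)
            return torch.relu(shifted).reshape(b, -1, h, w)  # (B, C*k, H, W)
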
Inner and Inter Label Propagation: Salient Object Detection in the Wild

We propose a novel label propagation based method for saliency detection. 
A key observation is that saliency in an image can be estimated by 
propagating the labels extracted from the most certain background and object regions. 
A co-transduction algorithm is devised to fuse both boundary and objectness labels based on an inter-propagation scheme to effectively improve saliency accuracy. (A toy sketch of the propagation step follows below.)
Hongyang Li, Huchuan Lu, Zhe Lin, Xiaohui Shen, Brian Price
IEEE Trans. on Image Processing (TIP) 2015
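
As a rough illustration of the propagation step (not the exact co-transduction algorithm), a standard label-propagation iteration over a region affinity graph looks like this, with alpha and the row normalization as assumed choices:

    import numpy as np

    def propagate(W, seeds, alpha=0.99, iters=100):
        # W: (N, N) non-negative affinities between regions; seeds: (N,)
        # confidence of the most certain background/object regions.
        d = W.sum(axis=1, keepdims=True)
        d[d == 0] = 1.0
        S = W / d                           # row-stochastic transitions
        f = seeds.astype(float).copy()
        for _ in range(iters):
            # diffuse confidence to neighbors, retaining the initial seeds
            f = alpha * S @ f + (1 - alpha) * seeds
        return f                            # per-region saliency score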

Missioner on the Road

As a Junior Researcher, reviewing for the following conferences:
2019: ICLR.
2018: NIPS, ICLR, ECCV, AAAI, BMVC.
and many others in earlier years ...
As a Graduate Student, taking the following courses:
Foundations of Optimization. Big Data Analytics.

Pattern Recognition. Computer Vision. Advanced Machine Learning.

As a Teaching Assistant, co-teaching the following courses:
Digital Circuits and Systems. Introduction to Probability. Introduction to Deep Learning.