Home
Openings
I am looking for self-motivated students to work with me on cutting-edge research projects. I accept PhD students, research interns, and CSC visiting students supported by grants or scholarships (e.g., A*STAR SINGA Scholarship, A*STAR SIPGA Scholarship, and ARAP).
My Research
My research interests lie primarily in the areas of statistical machine learning and optimization. I am particularly interested in black-box optimization, kernel methods, and approximation theory. I mainly focus on the following topics:
Black-box Optimization and Reinforcement Learning (RL)
Black-box optimization is a subarea of optimization that handles cases where only function queries (evaluations) are available, without gradient information. Potential applications include reinforcement learning (RL), scientific design, and LLM target generation. The goal is to design efficient and theoretically sound black-box optimization algorithms that improve query efficiency. Such algorithms can be used for model-free RL, and the methodology of black-box optimization is also helpful for designing efficient RL methods.
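To give a concrete sense of the query-only setting, here is a minimal sketch of a zeroth-order (evolution-strategy-style) optimizer that estimates a descent direction purely from function evaluations. The objective `f`, step sizes, and sample counts below are illustrative placeholders, not a specific method from my papers.

```python
# Minimal sketch of a black-box optimizer (Gaussian-smoothing evolution
# strategy). Only zeroth-order queries f(x) are assumed to be available;
# all hyperparameters are illustrative.
import numpy as np

def es_minimize(f, x0, sigma=0.1, lr=0.05, n_samples=20, n_iters=200, seed=0):
    """Minimize f using only function queries (no gradients)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        eps = rng.standard_normal((n_samples, x.size))       # random probe directions
        vals = np.array([f(x + sigma * e) for e in eps])     # function queries only
        grad = (vals[:, None] * eps).mean(axis=0) / sigma    # smoothed-gradient estimate
        x -= lr * grad                                       # gradient-style update
    return x

if __name__ == "__main__":
    # Toy quadratic objective, queried as a black box
    f = lambda x: float(np.sum((x - 1.0) ** 2))
    print(es_minimize(f, x0=np.zeros(5)))
```

Query efficiency here means getting a useful update from as few evaluations of `f` as possible, which is the central design goal when each query is expensive (e.g., an RL rollout).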
Quasi-Monte Carlo (QMC) and Kernel Methods
QMC theory is a fundamental part of approximation theory. The goal of QMC methods is to design good point sets for integral approximation. They can be used in Bayesian inference, kernel approximation, and beyond. QMC-based feature maps are a promising direction within random feature methods, which reduce the time and space complexity of kernel methods (e.g., Gaussian processes and SVMs). Interestingly, there is a close relationship between random feature maps and neural networks. QMC methods also have applications in sampling techniques for RL and generative models.
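The sketch below illustrates the idea of a QMC-based feature map: low-discrepancy points are pushed through the Gaussian inverse CDF to obtain frequencies for random Fourier features approximating an RBF kernel. The bandwidth `gamma`, feature count `D`, and use of a Halton sequence are illustrative assumptions, not a specific published construction.

```python
# Minimal sketch of QMC-based random Fourier features for an RBF kernel.
# A scrambled Halton sequence replaces i.i.d. Gaussian frequencies;
# gamma and D are illustrative choices.
import numpy as np
from scipy.stats import norm, qmc

def qmc_rff(X, D=256, gamma=0.5, seed=0):
    """Map X (n, d) to D features whose inner products approximate
    the RBF kernel exp(-gamma * ||x - y||^2)."""
    n, d = X.shape
    # Low-discrepancy points in [0, 1)^d -> (quasi-)Gaussian frequencies
    u = qmc.Halton(d=d, scramble=True, seed=seed).random(D)
    W = norm.ppf(u) * np.sqrt(2.0 * gamma)                     # frequencies (D, d)
    b = np.random.default_rng(seed).uniform(0, 2 * np.pi, D)   # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((5, 3))
    Z = qmc_rff(X)
    K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
    print(np.max(np.abs(Z @ Z.T - K_exact)))   # approximation error should be small
```

The point of the QMC construction is that better-spread frequencies can yield a lower integration error than i.i.d. sampling for the same number of features, which is what makes such feature maps attractive for scaling kernel methods.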
Robust Learning and Weakly-supervised Learning
Selected Recent Publications