Postdoc at Johns Hopkins University, working with Raman Arora. Previously: PhD at the Applied AI Institute, Deakin University (Australia), 2022.
Statistical learning theory, reinforcement learning (including contextual bandits), and collaborative (multi-task) learning.
Austin Watkins, Enayat Ullah, Thanh Nguyen-Tang, Raman Arora. Optimistic Rates for Multi-Task Representation Learning. NeurIPS, 2023.
Thanh Nguyen-Tang, Raman Arora. On Sample-Efficient Offline Reinforcement Learning: Data Diversity, Posterior Sampling and Beyond. NeurIPS, 2023.
Thanh Nguyen-Tang, Raman Arora. VIPeR: Provably Efficient Algorithm for Offline RL with Neural Function Approximation. ICLR, 2023 (notable-top-25%).
Thanh Nguyen-Tang, Ming Yin, Sunil Gupta, Svetha Venkatesh, Raman Arora. On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation. AAAI, 2023.
Thanh Nguyen-Tang, Sunil Gupta, A. Tuan Nguyen, Svetha Venkatesh. Offline Neural Contextual Bandits: Pessimism, Optimization and Generalization. ICLR, 2022.
Thanh Nguyen-Tang, Sunil Gupta, Hung Tran-The, Svetha Venkatesh. On Sample Complexity of Offline Reinforcement Learning with Deep ReLU Networks in Besov Spaces. TMLR, 2022.
Thanh Nguyen-Tang, Sunil Gupta, Svetha Venkatesh. Distributional Reinforcement Learning via Moment Matching. AAAI, 2021.
Thanh Nguyen-Tang, Sunil Gupta, Huong Ha, Santu Rana, Svetha Venkatesh. Distributionally Robust Bayesian Quadrature Optimization. AISTATS, 2020.
Alfred Deakin Medal for Doctoral Theses (for the most outstanding theses), 2022.
Instructor: RL Theory, JHU ML reading group, Summer/Fall 2023. [notes]
Guest lecturer (bandits and reinforcement learning): Machine Learning (CS 475/675), Spring 2023, JHU. [notes]
Teaching Assistant: Statistical Machine Learning, Fall 2017, UNIST; Engineering Programming I/II, Spring 2016, UNIST; various advanced mathematics and engineering courses, 2012-2016, Vietnam.