I am a Postdoctoral Research Fellow in the Department of Computer Science at Johns Hopkins University, working with Raman Arora. I was an Associate Research Fellow at the Applied AI Institute, Deakin University from July 2021 to June 2022, where I also completed my PhD in February 2022. I completed my Master's in Computer Science and Engineering at the Ulsan National Institute of Science and Technology (UNIST) in 2018.
I am building toward data-efficient, runtime-efficient, and robust AI by studying three foundational pillars of modern machine learning: provable statistical efficiency, computational efficiency, and robustness. My current focus includes:
Reinforcement Learning
(Deep) Learning Theory
In addition, I study algorithmic designs for:
Learning under Distributional Shifts
Robust Adversarial Learning
Probabilistic Deep Learning
Representation Learning
I am always open to research collaborations and happy to chat!
Jan. 20, 2023: One paper accepted to ICLR, 2023 (Spotlight, acceptance rate: 31.8%).
Dec. 9, 2022: One paper accepted to TMLR.
Nov. 19, 2022: One paper accepted to AAAI, 2023 (acceptance rate: 19.6%).
Oct. 30, 2022: I was acknowledged in Francis Bach's book “Learning Theory from First Principles”.
Sep. 14, 2022: One paper accepted to NeurIPS, 2022 (acceptance rate: 25.6%).
Aug. 8, 2022: I was acknowledged in Mengyan Zhang's PhD thesis.
Jan. 21, 2022: One paper accepted to ICLR, 2022 (acceptance rate: 32.26%).
May 20, 2021: Accepted to the Deep Learning Theory Summer School at Princeton (acceptance rate: 180/500 = 36%).
Provably Efficient Neural Offline Reinforcement Learning via Perturbed Rewards
Thanh Nguyen-Tang, Raman Arora
International Conference on Learning Representations (ICLR), 2023 (Spotlight, top 25%)
[slides]
On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation
AAAI Conference on Artificial Intelligence (AAAI), 2023
On Sample Complexity of Offline Reinforcement Learning with Deep ReLU Networks in Besov Spaces
Thanh Nguyen-Tang, Sunil Gupta, Hung Tran-The, Svetha Venkatesh
Transactions on Machine Learning Research (TMLR), 2022
[cite]
TIPI: Test Time Adaptation with Transformation Invariance
A. Tuan Nguyen, Thanh Nguyen-Tang, Ser-Nam Lim, Philip Torr
Under review, 2022
Improving Domain Generalization with Interpolation Robustness
Ragja Palakkadavath, Thanh Nguyen-Tang, Sunil Gupta, Svetha Venkatesh
Distribution Shifts Workshop @ NeurIPS 2022; INTERPOLATE @ NeurIPS 2022 (Spotlight)
Under review, 2022
Two-Stage Neural Contextual Bandits for Adaptive Personalised Recommendation
Mengyan Zhang, Thanh Nguyen-Tang, Fangzhao Wu, Zhenyu He, Xing Xie, Cheng Soon Ong
Under review, 2022
[arXiv]
Contextual Bandits with Reduced Explorations via Logged Data
Hung Tran-The, Thanh Nguyen-Tang, Sunil Gupta, Santu Rana, Svetha Venkatesh
Under review, 2022
[arXiv]
On Practical Reinforcement Learning: Provable Robustness, Scalability and Statistical Efficiency
Ph.D. dissertation, Deakin University, Australia, July 2021
Senior Program Committee: AAAI (2023)
Reviewer/Program Committee:
NeurIPS (2022, 2021, 2020)
ICML (2023, 2022, 2021)
ICLR (2023, 2022, 2021; Outstanding Reviewer Award in 2021)
AAAI (2022, 2021, 2020; Top 25% PC in 2021)
TPAMI (2023)
AISTATS (2021)
EWRL (2022)
L4DC (2022)
NeurIPS Workshop on OfflineRL (2022, 2021)
Volunteer: ICML (2022), AutoML (2022)
Neural Offline Reinforcement Learning [post]
VinAI, Vietnam, Jan. 13, 2023
Neural Offline Reinforcement Learning [record]
FPT AI, Vietnam, Dec. 21, 2022
Neural Offline Reinforcement Learning [slides]
UC San Diego, USA, Dec. 8, 2022 (Host: Rose Yu)
Offline Reinforcement Learning: Assurance in High-Stakes AI Applications [slides]
IAA Research Summit, Johns Hopkins University, USA, Nov. 2022
Offline Neural Contextual Bandits: Pessimism, Optimization, and Generalization
Ohio State University, USA, Jan. 2022 (Host: Yingbin Liang and Ness Shroff)
Offline Neural Contextual Bandits: Pessimism, Optimization, and Generalization
Arizona State University, USA, Dec. 2021 (Host: Kwang-Sung Jun)
Generalization and Optimization in Deep Learning: Over-parameterization and Interpolation [slides]
Deakin University, Australia, Aug. 2021
On Finite-Sample Analysis of Batch Reinforcement Learning with Deep ReLU Networks
Viet Operator Theorists Group, Vietnam and USA, Apr. 2021