*I'm on the 2024-2025 job market [research statement].
I am currently a postdoc at Johns Hopkins University (with Raman Arora). Prior to that, I did my PhD in Computer Science at The Applied AI Institute, Deakin University, Australia (Alfred Deakin Medal for Doctoral Theses). I did my M.Sc. in Computer Science at Ulsan National Institute of Science and Technology, South Korea. In my previous life, I studied Electronic and Communication Engineering (Talented Engineering Program) at Danang University of Science and Technology, Vietnam.
— Make the world an \(\epsilon\)-better place
An overarching goal of my research is to establish Algorithmic Foundations of Learning for modern AI (AFLAI Lab), with the vision of enabling next-generation AI with better scalability, explainability, and transferability. My approach emphasizes understanding learning through the lens of critical resources (e.g., data and computation) and designing optimal algorithms that use these resources efficiently. My research agenda for the AFLAI Lab spans four main thrusts:
Transfer learning (e.g., offline learning, multi-task/representation learning, federated learning, domain adaptation)
Multi-agent learning (e.g., policy regret minimization, equilibrium computation, mechanism design for learning agents)
Trustworthy AI (e.g., distributional/adversarial robustness, distributional learning, differential privacy)
Large language models (e.g., representation, optimization, and generalization aspects of transformers; emergent abilities such as in-context learning and reasoning)
Keywords: learning, representation, optimization, computation.
I welcome and appreciate anonymous feedback from anyone on anything.
21. Ragja Palakkadavath, Hung Le, Thanh Nguyen-Tang, Svetha Venkatesh, Sunil Gupta. Fair Domain Generalization with Heterogeneous Sensitive Attributes Across Domains. WACV’25.
20. Thanh Nguyen-Tang, Raman Arora. Learning in Markov Games with Adaptive Adversaries: Policy Regret, Fundamental Barriers, and Efficient Algorithms. NeurIPS’24.
19. Austin Watkins, Thanh Nguyen-Tang, Enayat Ullah, Raman Arora. Adversarially Robust Multi-task Representation Learning. NeurIPS’24.
18. Haque Ishfaq, Thanh Nguyen-Tang, Songtao Feng, Raman Arora, Mengdi Wang, Ming Yin, Doina Precup. Offline Multitask Representation Learning for Reinforcement Learning. NeurIPS’24.
17. Thanh Nguyen-Tang, Raman Arora. On The Statistical Complexity of Offline Decision-Making. International Conference on Machine Learning (ICML), 2024.
16. Anh Tong, Thanh Nguyen-Tang, Dongeun Lee, Toan Tran, Jaesik Choi. SigFormer: Signature Transformers for Deep Hedging. 4th ACM International Conference on AI in Finance (ICAIF), 2023 (Oral).
15. Anh Do, Thanh Nguyen-Tang, Raman Arora. Multi-Agent Learning with Heterogeneous Linear Contextual Bandits. Advances in Neural Information Processing Systems (NeurIPS), 2023.
14. Austin Watkins, Enayat Ullah, Thanh Nguyen-Tang, Raman Arora. Optimistic Rates for Multi-Task Representation Learning. Advances in Neural Information Processing Systems (NeurIPS), 2023.
13. Thanh Nguyen-Tang, Raman Arora. On Sample-Efficient Offline Reinforcement Learning: Data Diversity, Posterior Sampling and Beyond. Advances in Neural Information Processing Systems (NeurIPS), 2023.
12. Ragja Palakkadavath, Thanh Nguyen-Tang, Hung Le, Svetha Venkatesh, Sunil Gupta. Domain Generalization with Interpolation Robustness. Asian Conference on Machine Learning (ACML), 2023.
11. Thong Bach, Anh Tong, Truong Son Hy, Vu Nguyen, Thanh Nguyen-Tang. Global Contrastive Learning for Long-Tailed Classification. Transactions on Machine Learning Research (TMLR), 2023.
10. A. Tuan Nguyen, Thanh Nguyen-Tang, Ser-Nam Lim, Philip Torr. TIPI: Test Time Adaptation with Transformation Invariance. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
9. Thanh Nguyen-Tang, Raman Arora. VIPeR: Provably Efficient Algorithm for Offline RL with Neural Function Approximation. International Conference on Learning Representations (ICLR), 2023 (notable top 25%). [talk] [slides] [code] [ERRATUM]
8. Thanh Nguyen-Tang, Ming Yin, Sunil Gupta, Svetha Venkatesh, Raman Arora. On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation. AAAI Conference on Artificial Intelligence (AAAI), 2023. [arXiv] [poster] [slides] [video]
7. Anh Tong, Thanh Nguyen-Tang, Toan Tran, Jaesik Choi. Learning Fractional White Noises in Neural Stochastic Differential Equations. Advances in Neural Information Processing Systems (NeurIPS), 2022. [code]
6. Thanh Nguyen-Tang, Sunil Gupta, A. Tuan Nguyen, Svetha Venkatesh. Offline Neural Contextual Bandits: Pessimism, Optimization and Generalization. International Conference on Learning Representations (ICLR), 2022. [arXiv] [poster] [slides] [code]
5. Thanh Nguyen-Tang, Sunil Gupta, Hung Tran-The, Svetha Venkatesh. On Sample Complexity of Offline Reinforcement Learning with Deep ReLU Networks in Besov Spaces. Transactions on Machine Learning Research (TMLR), 2022; also presented at the Workshop on RL Theory, ICML, 2021. [arXiv] [slides] [talk]
4. Thanh Nguyen-Tang, Sunil Gupta, Svetha Venkatesh. Distributional Reinforcement Learning via Moment Matching. AAAI Conference on Artificial Intelligence (AAAI), 2021. [arXiv] [code] [slides] [poster] [talk].
3. Thanh Nguyen-Tang, Sunil Gupta, Huong Ha, Santu Rana, Svetha Venkatesh. Distributionally Robust Bayesian Quadrature Optimization. International Conference on Artificial Intelligence and Statistics (AISTATS), 2020. [arXiv] [code] [slides] [talk].
2. Huong Ha, Santu Rana, Sunil Gupta, Thanh Nguyen-Tang, Hung Tran-The, Svetha Venkatesh. Bayesian Optimization with Unknown Search Space. Advances in Neural Information Processing Systems (NeurIPS), 2019. [code] [poster]
1. Thanh Nguyen-Tang, Jaesik Choi. Markov Information Bottleneck to Improve Information Flow in Stochastic Neural Networks. Entropy, 2019 (Special Issue on Information Bottleneck: Theory and Applications in Deep Learning).
Thanh Nguyen-Tang, Ming Yin, Masatoshi Uehara, Yu-Xiang Wang, Mengdi Wang, Raman Arora. Posterior Sampling via Langevin Monte Carlo for Offline Reinforcement Learning. OpenReview, 2023.
Nguyen Hung-Quang, Ngoc-Hieu Nguyen, The-Anh Ta, Thanh Nguyen-Tang, Kok-Seng Wong, Hoang Thanh-Tung, Khoa D Doan. Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks. NeurIPS 2023 Workshop on Backdoors in Deep Learning: The Good, the Bad, and the Ugly, 2023.
Nguyen Ngoc-Hieu, Nguyen Hung-Quang, The-Anh Ta, Thanh Nguyen-Tang, Khoa D Doan, Hoang Thanh-Tung. A Cosine Similarity-based Method for Out-of-Distribution Detection. arXiv, 2023.
Mengyan Zhang, Thanh Nguyen-Tang, Fangzhao Wu, Zhenyu He, Xing Xie, Cheng Soon Ong. Two-Stage Neural Contextual Bandits for Adaptive Personalised Recommendation. arXiv, 2022.
Hung Tran-The, Thanh Nguyen-Tang, Sunil Gupta, Santu Rana, Svetha Venkatesh. Combining Online Learning and Offline Learning for Contextual Bandits with Deficient Support. arXiv, 2021.
Ragja Palakkadavath (PhD student at Deakin University, out-of-distribution generalization)
Thong Bach (independent researcher, self-supervised learning and domain adaptation)
Anh Do (PhD student at JHU, bandit/reinforcement learning)
Austin Watkins (PhD student at JHU, transfer learning and robustness)
Co-instructor (with Raman Arora), Machine Learning: Advanced Topics: Foundations of Data-Driven Sequential Decision-Making Systems (CS 779), JHU, Spring 2024.
Teaching RL Theory in our JHU ML reading group, Summer/Fall 2023. [notes]
Guest lecturer (in bandits/reinforcement learning): Machine Learning (CS 475/675), Spring 2023, JHU. [notes]
Teaching Assistant: Statistical Machine Learning, Fall 2017, UNIST; Engineering Programming I/II, Spring 2016, UNIST; Various advanced mathematics and engineering courses, 2012-2016, Vietnam.
* I participated in (and obtained a certificate of) Justice, Equity, Diversity, and Inclusion (JEDI) Training in the Classroom in March 2024 at JHU, in an effort to improve diversity in my future classes and research group.
Alfred Deakin Medal for Doctoral Theses (for the most outstanding theses), 2022.
I am acknowledged in Francis Bach's book, “Learning Theory from First Principles”.
My AAAI’21 paper is featured as an exercise in Bellemare, Dabney, and Rowland's book, “Distributional Reinforcement Learning”.
Area Chair/Senior Program Committee
International Conference on Artificial Intelligence and Statistics (AISTATS) 2025
AAAI Conference on Artificial Intelligence (AAAI) 2025, 2024, 2023
Conference Reviewer/Program Committee
Neural Information Processing Systems (NeurIPS) 2024, 2023, 2022, 2021, 2020
International Conference on Machine Learning (ICML) 2023, 2022, 2021
International Conference on Learning Representations (ICLR) 2024, 2023, 2022, 2021 (outstanding reviewer award)
AAAI Conference on Artificial Intelligence (AAAI) 2022, 2021 (top 25% reviewer)
International Conference on Artificial Intelligence and Statistics (AISTATS) 2021
Annual Learning for Dynamics & Control Conference (L4DC) 2022
Coordinator
AAAI Conference on Artificial Intelligence (AAAI) 2023 (session chair for ML theory)
International Conference on Machine Learning (ICML) 2022
International Conference on Automated Machine Learning (AutoML) 2022
TrustML Young Scientist Seminars, RIKEN Japan, Aug. 01, 2023 [post] [slides] [video].
VinAI, Vietnam, Jan. 13, 2023 [post].
FPT AI, Vietnam, Dec. 21, 2022 [record].
UC San Diego, USA, Dec. 8, 2022 (Host: Prof. Rose Yu).
IAA Research Summit, Johns Hopkins University, USA, Nov. 2022 [slides].
Ohio State University, USA, Jan. 2022 (Host: Prof. Yingbin Liang and Prof. Ness Shroff).
University of Arizona, USA, Dec. 2021 (Host: Prof. Kwang-Sung Jun).
Virginia Tech, USA, Nov. 2021 (Host: Prof. Thinh T. Doan).