* I am on the 2024-2025 job market [research statement].
I am currently a postdoc at Johns Hopkins University (with Raman Arora). Prior to that, I did my PhD in Computer Science at the Applied AI Institute, Deakin University, Australia (Alfred Deakin Medal for Doctoral Theses). I did my M.Sc. in Computer Science at Ulsan National Institute of Science and Technology, South Korea. In my previous life, I studied Electronic and Communication Engineering (Talented Engineering Program) at Danang University of Science and Technology, Vietnam.
— Make the world an \(\epsilon\)-better place
My research is on the theoretical and algorithmic foundations of machine learning for modern data science and AI, with a current focus on the following topics:
Transfer decision-making (e.g., offline learning, multi-task/representation learning, federated learning, domain adaptation)
Multi-agent learning (e.g., policy regret minimization, equilibrium computation, mechanism design for learning agents)
Trustworthy AI (e.g., distributional/adversarial robustness, distributional learning, differential privacy)
Large language models (e.g., understanding inductive biases of transformers for emergent abilities such as in-context learning and reasoning)
Keywords: learning, representation, optimization, computation.
Note:
Highly motivated and self-driven students with a strong mathematical background are welcome to contact me for research.
I welcome and appreciate anonymous feedback from anyone on anything.
23. Anh Tong, Thanh Nguyen-Tang, Dongeun Lee, Duc Nguyen, Toan Tran, David Leo Wright Hall, Cheongwoong Kang, Jaesik Choi.
Neural ODE transformers: Analyzing internal dynamics and adaptive fine-tuning. ICLR, 2025.
22. Nguyen Hung-Quang, Ngoc-Hieu Nguyen, The-Anh Ta, Thanh Nguyen-Tang, Kok-Seng Wong, Hoang Thanh-Tung,
and Khoa D Doan. Wicked oddities: Selectively poisoning for effective clean-label backdoor attacks. ICLR, 2025 [pdf].
21. Ragja Palakkadavath, Hung Le, Thanh Nguyen-Tang, Svetha Venkatesh, Sunil Gupta.
Fair domain generalization with heterogeneous sensitive attributes across domains. WACV, 2025 [pdf].
20. Thanh Nguyen-Tang, Raman Arora. Learning in Markov games with adaptive adversaries: Policy regret, fundamental barriers, and efficient algorithms. NeurIPS, 2024 [pdf].
19. Austin Watkins, Thanh Nguyen-Tang, Enayat Ullah, Raman Arora. Adversarially robust multi-task representation learning. NeurIPS, 2024 [pdf].
18. Haque Ishfaq, Thanh Nguyen-Tang, Songtao Feng, Raman Arora, Mengdi Wang, Ming Yin, Doina Precup. Offline multitask representation learning for reinforcement learning. NeurIPS, 2024 [pdf].
17. Thanh Nguyen-Tang, Raman Arora. On the statistical complexity of offline decision-making. ICML, 2024 [pdf].
16. Anh Tong, Thanh Nguyen-Tang, Dongeun Lee, Toan Tran, Jaesik Choi. SigFormer: Signature transformers for deep hedging. ICAIF, 2023 (Oral)[pdf].
15. Anh Do, Thanh Nguyen-Tang, Raman Arora. Multi-agent learning with heterogeneous linear contextual bandits. NeurIPS, 2023 [pdf].
14. Austin Watkins, Enayat Ullah, Thanh Nguyen-Tang, Raman Arora. Optimistic rates for multi-task representation learning. NeurIPS, 2023 [pdf].
13. Thanh Nguyen-Tang, Raman Arora. On sample-efficient offline reinforcement learning: Data diversity, posterior sampling and beyond. NeurIPS, 2023 [pdf].
12. Ragja Palakkadavath, Thanh Nguyen-Tang, Hung Le, Svetha Venkatesh, Sunil Gupta. Domain generalization with interpolation robustness. ACML, 2023 [pdf].
11. Thong Bach, Anh Tong, Truong Son Hy, Vu Nguyen, Thanh Nguyen-Tang. Global contrastive learning for long-tailed classification. TMLR, 2023 [pdf].
10. A. Tuan Nguyen, Thanh Nguyen-Tang, Ser-Nam Lim, Philip Torr. TIPI: Test time adaptation with transformation invariance. CVPR, 2023 [html].
9. Thanh Nguyen-Tang, Raman Arora. VIPeR: Provably efficient algorithm for offline RL with neural function approximation. ICLR, 2023 (notable, top 25%) [talk] [slides] [code] [erratum].
8. Thanh Nguyen-Tang, Ming Yin, Sunil Gupta, Svetha Venkatesh, Raman Arora. On instance-dependent bounds for offline reinforcement learning with linear function approximation. AAAI, 2023 [arXiv] [poster] [slides] [video].
7. Anh Tong, Thanh Nguyen-Tang, Toan Tran, Jaesik Choi. Learning fractional white noises in neural stochastic differential equations. NeurIPS, 2022 [pdf] [code].
6. Thanh Nguyen-Tang, Sunil Gupta, A. Tuan Nguyen, and Svetha Venkatesh. Offline neural contextual bandits: Pessimism, optimization, and generalization. ICLR, 2022 [pdf] [poster] [slides] [code].
5. Thanh Nguyen-Tang, Sunil Gupta, Hung Tran-The, Svetha Venkatesh. On sample complexity of offline reinforcement learning with deep ReLU networks in Besov spaces. TMLR, 2022; also presented at the Workshop on RL Theory, ICML 2021 [arXiv] [slides] [talk].
4. Thanh Nguyen-Tang, Sunil Gupta, Svetha Venkatesh. Distributional reinforcement learning via moment matching. AAAI, 2021 [arXiv] [code] [slides] [poster] [talk].
3. Thanh Nguyen-Tang, Sunil Gupta, Huong Ha, Santu Rana, Svetha Venkatesh. Distributionally robust Bayesian quadrature optimization. AISTATS, 2020 [arXiv] [code] [slides] [talk].
2. Huong Ha, Santu Rana, Sunil Gupta, Thanh Nguyen-Tang, Hung Tran-The, Svetha Venkatesh. Bayesian optimization with unknown search space. NeurIPS, 2019 [pdf] [code] [poster].
1. Thanh Nguyen-Tang, Jaesik Choi. Markov information bottleneck to improve information flow in stochastic neural networks. Entropy, 2019 (Special Issue on Information Bottleneck: Theory and Applications in Deep Learning) [pdf].
Andrew Gilbert, sophomore in Computer Science and Applied Mathematics & Statistics at JHU (01/2025-present). Topic: Reinforcement learning
Le Duc Khai, Master's student in Biomedical Engineering at University of Toronto (12/2024-present). Topic: Medical AI and multimodal LLMs
Austin Watkins, PhD student at JHU (2022-present). Topic: Transfer learning and robustness
Thong Bach, independent researcher, now PhD student at Deakin University (2022-present). Topic: Self-supervised learning in LLMs
Anh Do, PhD student at JHU (2022-2024). Topic: Bandit/Reinforcement learning
Ragja Palakkadavath, PhD student at Deakin University (2022-2024). Topic: Out-of-distribution generalization
Guest lecturer, Machine Learning, JHU CS Spring 2025.
Guest lecturer, Learning Theory (EN.601.474.01) – 36 students, JHU CS Fall 2024
Co-lecturer, Machine Learning: Advanced Topics (EN.601.779.01.SP24) – 17 graduate students, JHU CS Spring 2024
Guest lecturer, Machine Learning (EN.601.675.01.SP23) – 77 undergraduate students, JHU CS Spring 2023
Teaching Assistant, Advanced Machine Learning (CSE 54401), UNIST CSE Fall 2016
Teaching Assistant, Engineering Programming (ITP117), UNIST CSE Spring 2016
Teaching Assistant, Linear Algebra, Calculus, Digital Signal Processing, Machine Learning, ECE, DUT, 2011 - 2015
* I participated in (and obtained a certificate of) Justice, Equity, Diversity, and Inclusion (JEDI) Training in the Classroom in March 2024 at JHU, in an effort to improve diversity in my future classes and research group.
Alfred Deakin Medal for Doctoral Theses (for the most outstanding theses), 2022.
I am acknowledged in Francis Bach's book, “Learning Theory from First Principles”.
My AAAI’21 paper is featured as an exercise in Bellemare, Dabney, and Rowland's book, “Distributional Reinforcement Learning”.
Area Chair/Senior Program Committee
International Conference on Artificial Intelligence and Statistics (AISTATS) 2025
AAAI Conference on Artificial Intelligence (AAAI) 2025, 2024, 2023
Conference Reviewer/Program Committee
Neural Information Processing Systems (NeurIPS) 2024, 2023, 2022, 2021, 2020
International Conference on Machine Learning (ICML) 2025, 2023, 2022, 2021
International Conference on Learning Representations (ICLR) 2024, 2023, 2022, 2021 (outstanding reviewer award)
AAAI Conference on Artificial Intelligence (AAAI) 2022, 2021 (top 25% reviewer)
International Conference on Artificial Intelligence and Statistics (AISTATS) 2021
Annual Learning for Dynamics & Control Conference (L4DC) 2022
Coordinator
AAAI Conference on Artificial Intelligence (AAAI) 2023 (session chair for ML theory)
International Conference on Machine Learning (ICML) 2022
International Conference on Automated Machine Learning (AutoML) 2022
TrustML Young Scientist Seminars, RIKEN Japan, Aug. 01, 2023 [post] [slides] [video].
VinAI, Vietnam, Jan. 13, 2023 [post].
FPT AI, Vietnam, Dec. 21, 2022 [record].
UC San Diego, USA, Dec. 8, 2022 (Host: Prof. Rose Yu).
IAA Research Summit, Johns Hopkins University, USA, Nov. 2022 [slides].
Ohio State University, USA, Jan. 2022 (Host: Prof. Yingbin Liang and Prof. Ness Shroff).
University of Arizona, USA, Dec. 2021 (Host: Prof. Kwang-Sung Jun).
Virginia Tech, USA, Nov. 2021 (Host: Prof. Thinh T. Doan).