Thanh Nguyen-Tang

Postdoctoral Research Fellow
Department of Computer Science
Whiting School of Engineering
Johns Hopkins University
Malone Hall 345, 3400 N Charles Street, Baltimore, MD 21218
nguyent at cs dot jhu dot edu / thnguyentang at gmail dot com
[Google Scholar] [Github] [blog]

*I'm on the 2024-2025 job market [research statement].

Background

I am currently a postdoc at Johns Hopkins University, working with Raman Arora. Prior to that, I received my PhD in Computer Science from the Applied AI Institute, Deakin University, Australia (Alfred Deakin Medal for Doctoral Theses), and my M.Sc. in Computer Science from Ulsan National Institute of Science and Technology, South Korea. In my previous life, I studied Electronic and Communication Engineering (Talented Engineering Program) at Danang University of Science and Technology, Vietnam.

Research interest

 — Make the world an \(\epsilon\)-better place

My research is on the theoretical and algorithmic foundations of machine learning for modern data science and AI, with a current focus on the following topics:

Keywords: learning, representation, optimization, computation.

Note:

Publications

2025

23. Anh Tong, Thanh Nguyen-Tang, Dongeun Lee, Duc Nguyen, Toan Tran, David Leo Wright Hall, Cheongwoong Kang, Jaesik Choi. Neural ODE transformers: Analyzing internal dynamics and adaptive fine-tuning. ICLR, 2025.
22. Nguyen Hung-Quang, Ngoc-Hieu Nguyen, The-Anh Ta, Thanh Nguyen-Tang, Kok-Seng Wong, Hoang Thanh-Tung, and Khoa D Doan. Wicked oddities: Selectively poisoning for effective clean-label backdoor attacks. ICLR, 2025 [pdf].
21. Ragja Palakkadavath, Hung Le, Thanh Nguyen-Tang, Svetha Venkatesh, Sunil Gupta. Fair domain generalization with heterogeneous sensitive attributes across domains. WACV, 2025 [pdf].

2024

20. Thanh Nguyen-Tang, Raman Arora. Learning in Markov games with adaptive adversaries: Policy regret, fundamental barriers, and efficient algorithms. NeurIPS, 2024 [pdf].
19. Austin Watkins, Thanh Nguyen-Tang, Enayat Ullah, Raman Arora. Adversarially robust multi-task representation learning. NeurIPS, 2024 [pdf].
18. Haque Ishfaq, Thanh Nguyen-Tang, Songtao Feng, Raman Arora, Mengdi Wang, Ming Yin, Doina Precup. Offline multitask representation learning for reinforcement learning. NeurIPS, 2024 [pdf].
17. Thanh Nguyen-Tang, Raman Arora. On the statistical complexity of offline decision-making. ICML, 2024 [pdf].

2023

16. Anh Tong, Thanh Nguyen-Tang, Dongeun Lee, Toan Tran, Jaesik Choi. SigFormer: Signature transformers for deep hedging. ICAIF, 2023 (Oral) [pdf].
15. Anh Do, Thanh Nguyen-Tang, Raman Arora. Multi-agent learning with heterogeneous linear contextual bandits. NeurIPS, 2023 [pdf].
14. Austin Watkins, Enayat Ullah, Thanh Nguyen-Tang, Raman Arora. Optimistic rates for multi-task representation learning. NeurIPS, 2023 [pdf].
13. Thanh Nguyen-Tang, Raman Arora. On sample-efficient offline reinforcement learning: Data diversity, posterior sampling and beyond. NeurIPS, 2023 [pdf].
12. Ragja Palakkadavath, Thanh Nguyen-Tang, Hung Le, Svetha Venkatesh, Sunil Gupta. Domain generalization with interpolation robustness. ACML, 2023 [pdf].
11. Thong Bach, Anh Tong, Truong Son Hy, Vu Nguyen, Thanh Nguyen-Tang. Global contrastive learning for long-tailed classification. TMLR, 2023 [pdf].
10. A. Tuan Nguyen, Thanh Nguyen-Tang, Ser-Nam Lim, Philip Torr. TIPI: Test time adaptation with transformation invariance. CVPR, 2023 [html].
9. Thanh Nguyen-Tang, Raman Arora. VIPeR: Provably efficient algorithm for offline RL with neural function approximation. ICLR, 2023 (notable, top 25%) [talk] [slides] [code] [ERRATUM].
8. Thanh Nguyen-Tang, Ming Yin, Sunil Gupta, Svetha Venkatesh, Raman Arora. On instance-dependent bounds for offline reinforcement learning with linear function approximation. AAAI, 2023 [arXiv] [poster] [slides] [video].

2022

7. Anh Tong, Thanh Nguyen-Tang, Toan Tran, Jaesik Choi. Learning fractional white noises in neural stochastic differential equations. NeurIPS, 2022 [pdf] [code].
6. Thanh Nguyen-Tang, Sunil Gupta, A. Tuan Nguyen, and Svetha Venkatesh. Offline neural contextual bandits: Pessimism, optimization, and generalization. ICLR, 2022 [pdf] [poster] [slides] [code].
5. Thanh Nguyen-Tang, Sunil Gupta, Hung Tran-The, Svetha Venkatesh. On sample complexity of offline reinforcement learning with deep ReLU networks in Besov spaces. TMLR, 2022; also presented at the Workshop on RL Theory, ICML, 2021 [arXiv] [slides] [talk].

2021

4. Thanh Nguyen-Tang, Sunil Gupta, Svetha Venkatesh. Distributional reinforcement learning via moment matching. AAAI, 2021 [arXiv] [code] [slides] [poster] [talk].

2020

3. Thanh Nguyen-Tang, Sunil Gupta, Huong Ha, Santu Rana, Svetha Venkatesh. Distributionally robust Bayesian quadrature optimization. AISTATS, 2020 [arXiv] [code] [slides] [talk].

2019

2. Huong Ha, Santu Rana, Sunil Gupta, Thanh Nguyen-Tang, Hung Tran-The, Svetha Venkatesh. Bayesian optimization with unknown search space. NeurIPS, 2019 [pdf] [code] [poster].
1. Thanh Nguyen-Tang, Jaesik Choi. Markov information bottleneck to improve information flow in stochastic neural networks. Entropy, 2019 (Special Issue on Information Bottleneck: Theory and Applications in Deep Learning) [pdf].

Mentoring

Teaching

* I participated in (and obtained a certificate from) the Justice, Equity, Diversity, and Inclusion (JEDI) Training in the Classroom at JHU in March 2024, as part of an effort to improve diversity in my future classes and research group.

Selected awards/honors

Independent recognition

Professional service

Area Chair/Senior Program Committee

Conference Reviewer/Program Committee

Coordinator

Invited talks

For students