Thanh Nguyen-Tang (TNT)


Assistant Professor
Department of Data Science
Ying Wu College of Computing
New Jersey Institute of Technology

218 Central Ave
GITC 2110
Newark, NJ 07102

thanh.nguyen at njit dot edu | thnguyentang at gmail dot com
Google Scholar

Bio. I am an assistant professor in the Department of Data Science, Ying Wu College of Computing, at New Jersey Institute of Technology (NJIT). Before that, I was a postdoc at Johns Hopkins University (with Raman Arora). I received my PhD in Computer Science from Deakin University, Australia, my M.Sc. in Computer Science and Engineering from Ulsan National Institute of Science and Technology, South Korea, and my B.Eng. in Electronic and Communication Engineering (Talented Engineering Program) from Danang University of Science and Technology, Vietnam. I was awarded the Alfred Deakin Medal for Doctoral Theses in 2022.

Research Interest. I am primarily interested in the theoretical and algorithmic aspects of machine learning motivated by real-world problem settings. I seek a mathematical understanding of the underlying algorithmic principles of learning, and use it to design efficient machine learning algorithms with strong theoretical guarantees. Current topics include sequential decision-making (RL, bandits, games), responsible AI, and reasoning.

I am seeking highly motivated and self-driven Ph.D. students with a strong mathematical background in machine learning to join my research group at the Ying Wu College of Computing at NJIT, starting in Fall 2025 or Spring 2026. If you are a prospective Ph.D. student, please email me your CV, transcript, and a brief paragraph describing your research experience and areas of interest. You can also apply directly to the DS Ph.D. program at NJIT and mention my name.

Service

Area Chair at NeurIPS (2025), AISTATS (2026, 2025), AAMAS (2026); Senior Program Committee at AAAI (2025, 2024, 2023).

Publications

(See Google Scholar for a possibly more up-to-date list.)

Preprints

  • Quan Nguyen, Thanh Nguyen-Tang. One-Layer Transformers are Provably Optimal for In-context Reasoning and Distributional Association Learning in Next-Token Prediction Tasks. [ArXiv]

2025

  • Ragja Palakkadavath, Hung Le, Thanh Nguyen-Tang, Svetha Venkatesh, and Sunil Gupta. Federated Domain Generalization with Latent Space Inversion. ICDM, 2025.

  • Khai Le-Duc, Tuyen Tran, Bach Phan Tat, Nguyen Kim Hai Bui, Quan Dang, Hung-Phong Tran, Thanh-Thuy Nguyen, Ly Nguyen, Tuan-Minh Phan, Thi Thu Phuong Tran, Chris Ngo, Nguyen X Khanh, Thanh Nguyen-Tang. MultiMed-ST: Large-scale Many-to-many Multilingual Medical Speech Translation. EMNLP, 2025. [arXiv]

  • Khai Le-Duc, Phuc Phan, Tan-Hanh Pham, Bach Phan Tat, Minh-Huong Ngo, Thanh Nguyen-Tang, Truong-Son Hy. MultiMed: Multilingual medical speech recognition via attention encoder decoder. ACL (Industry), 2025. [pdf]

  • Thanh Nguyen-Tang, Raman Arora. Policy regret minimization in Markov games with function approximation. ICML, 2025.

  • Anh Tong, Thanh Nguyen-Tang, Dongeun Lee, Duc Nguyen, Toan Tran, David Leo Wright Hall, Cheongwoong Kang, Jaesik Choi. Neural ODE transformers: Analyzing internal dynamics and adaptive fine-tuning. ICLR, 2025.

  • Nguyen Hung-Quang, Ngoc-Hieu Nguyen, The-Anh Ta, Thanh Nguyen-Tang, Kok-Seng Wong, Hoang Thanh-Tung, and Khoa D Doan. Wicked oddities: Selectively poisoning for effective clean-label backdoor attacks. ICLR, 2025 [pdf].

  • Ragja Palakkadavath, Hung Le, Thanh Nguyen-Tang, Svetha Venkatesh, Sunil Gupta. Fair domain generalization with heterogeneous sensitive attributes across domains. WACV, 2025 [pdf].

2024

  • Thanh Nguyen-Tang, Raman Arora. Learning in Markov games with adaptive adversaries: Policy regret, fundamental barriers, and efficient algorithms. NeurIPS, 2024 [pdf].

  • Austin Watkins, Thanh Nguyen-Tang, Enayat Ullah, Raman Arora. Adversarially robust multi-task representation learning. NeurIPS, 2024 [pdf].

  • Haque Ishfaq, Thanh Nguyen-Tang, Songtao Feng, Raman Arora, Mengdi Wang, Ming Yin, Doina Precup. Offline multitask representation learning for reinforcement learning. NeurIPS, 2024 [pdf].

  • Thanh Nguyen-Tang, Raman Arora. On the statistical complexity of offline decision-making. ICML, 2024 [pdf].

2023

  • Anh Tong, Thanh Nguyen-Tang, Dongeun Lee, Toan Tran, Jaesik Choi. SigFormer: Signature transformers for deep hedging. ICAIF, 2023 (Oral) [pdf].

  • Anh Do, Thanh Nguyen-Tang, Raman Arora. Multi-agent learning with heterogeneous linear contextual bandits. NeurIPS, 2023 [pdf].

  • Austin Watkins, Enayat Ullah, Thanh Nguyen-Tang, Raman Arora. Optimistic rates for multi-task representation learning. NeurIPS, 2023 [pdf].

  • Thanh Nguyen-Tang, Raman Arora. On sample-efficient offline reinforcement learning: Data diversity, posterior sampling and beyond. NeurIPS, 2023 [pdf].

  • Ragja Palakkadavath, Thanh Nguyen-Tang, Hung Le, Svetha Venkatesh, Sunil Gupta. Domain generalization with interpolation robustness. ACML, 2023 [pdf].

  • Thong Bach, Anh Tong, Truong Son Hy, Vu Nguyen, Thanh Nguyen-Tang. Global contrastive learning for long-tailed classification. TMLR, 2023 [pdf].

  • A. Tuan Nguyen, Thanh Nguyen-Tang, Ser-Nam Lim, Philip Torr. TIPI: Test time adaptation with transformation invariance. CVPR, 2023 [html].

  • Thanh Nguyen-Tang, Raman Arora. VIPeR: Provably efficient algorithm for offline RL with neural function approximation. ICLR, 2023 (notable, top 25%). [talk] [slides] [code] [ERRATUM]

  • Thanh Nguyen-Tang, Ming Yin, Sunil Gupta, Svetha Venkatesh, Raman Arora. On instance-dependent bounds for offline reinforcement learning with linear function approximation. AAAI, 2023 [arXiv] [poster] [slides] [video].

2022

  • Anh Tong, Thanh Nguyen-Tang, Toan Tran, Jaesik Choi. Learning fractional white noises in neural stochastic differential equations. NeurIPS, 2022 [pdf] [code].

  • Thanh Nguyen-Tang, Sunil Gupta, A. Tuan Nguyen, and Svetha Venkatesh. Offline neural contextual bandits: Pessimism, optimization, and generalization. ICLR, 2022 [pdf] [poster] [slides] [code].

  • Thanh Nguyen-Tang, Sunil Gupta, Hung Tran-The, Svetha Venkatesh. On sample complexity of offline reinforcement learning with deep ReLU networks in Besov spaces. TMLR, 2022; also presented at the Workshop on RL Theory, ICML, 2021 [arXiv] [slides] [talk].

2021

  • Thanh Nguyen-Tang, Sunil Gupta, Svetha Venkatesh. Distributional reinforcement learning via moment matching. AAAI, 2021 [arXiv] [code] [slides] [poster] [talk].

2020

  • Thanh Nguyen-Tang, Sunil Gupta, Huong Ha, Santu Rana, Svetha Venkatesh. Distributionally robust Bayesian quadrature optimization. AISTATS, 2020 [arXiv] [code] [slides] [talk].

2019

  • Huong Ha, Santu Rana, Sunil Gupta, Thanh Nguyen-Tang, Hung Tran-The, Svetha Venkatesh. Bayesian optimization with unknown search space. NeurIPS, 2019 [pdf] [code] [poster].

  • Thanh Nguyen-Tang, Jaesik Choi. Markov information bottleneck to improve information flow in stochastic neural networks. Entropy, 2019 (Invited, Special Issue on Information Bottleneck: Theory and Applications in Deep Learning) [pdf].