Publications (reverse chronological order)

2024

18. Thanh Nguyen-Tang, Raman Arora. The Statistical Complexity of Offline Decision-Making. International Conference on Machine Learning (ICML), 2024.

2023

17. Anh Tong, Thanh Nguyen-Tang, Dongeun Lee, Toan Tran, Jaesik Choi. SigFormer: Signature Transformers for Deep Hedging. 4th ACM International Conference on AI in Finance (ICAIF), 2023 (Oral).
16. Anh Do, Thanh Nguyen-Tang, Raman Arora. Multi-Agent Learning with Heterogeneous Linear Contextual Bandits. Advances in Neural Information Processing Systems (NeurIPS), 2023.
15. Austin Watkins, Enayat Ullah, Thanh Nguyen-Tang, Raman Arora. Optimistic Rates for Multi-Task Representation Learning. Advances in Neural Information Processing Systems (NeurIPS), 2023.
14. Thanh Nguyen-Tang, Raman Arora. On Sample-Efficient Offline Reinforcement Learning: Data Diversity, Posterior Sampling and Beyond. Advances in Neural Information Processing Systems (NeurIPS), 2023.
13. Ragja Palakkadavath, Thanh Nguyen-Tang, Hung Le, Svetha Venkatesh, Sunil Gupta. Domain Generalization with Interpolation Robustness. Asian Conference on Machine Learning (ACML), 2023.
12. Thong Bach, Anh Tong, Truong Son Hy, Vu Nguyen, Thanh Nguyen-Tang. Global Contrastive Learning for Long-Tailed Classification. Transactions on Machine Learning Research (TMLR), 2023.
11. A. Tuan Nguyen, Thanh Nguyen-Tang, Ser-Nam Lim, Philip Torr. TIPI: Test Time Adaptation with Transformation Invariance. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
10. Thanh Nguyen-Tang, Raman Arora. VIPeR: Provably Efficient Algorithm for Offline RL with Neural Function Approximation. International Conference on Learning Representations (ICLR), 2023 (notable-top-25%) [talk] [slides] [code].
9. Thanh Nguyen-Tang, Ming Yin, Sunil Gupta, Svetha Venkatesh, Raman Arora. On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation. AAAI Conference on Artificial Intelligence (AAAI), 2023 [arXiv] [poster] [slides] [video].

2022

8. Ragja Palakkadavath, Thanh Nguyen-Tang, Sunil Gupta, Svetha Venkatesh. Domain Generalization with Interpolation Robustness. Distribution Shifts Workshop@NeurIPS, INTERPOLATE@NeurIPS (Spotlight), 2022.
7. Anh Tong, Thanh Nguyen-Tang, Toan Tran, Jaesik Choi. Learning Fractional White Noises in Neural Stochastic Differential Equations. Advances in Neural Information Processing Systems (NeurIPS), 2022. [arXiv] [code].
6. Thanh Nguyen-Tang, Sunil Gupta, A. Tuan Nguyen, Svetha Venkatesh. Offline Neural Contextual Bandits: Pessimism, Optimization and Generalization. International Conference on Learning Representations (ICLR), 2022. [arXiv] [poster] [slides] [code].
5. Thanh Nguyen-Tang, Sunil Gupta, Hung Tran-The, Svetha Venkatesh. On Sample Complexity of Offline Reinforcement Learning with Deep ReLU Networks in Besov Spaces. Transactions on Machine Learning Research (TMLR), 2022. An earlier version appeared at the Workshop on RL Theory, ICML, 2021. [arXiv] [slides] [talk].

2021

4. Thanh Nguyen-Tang, Sunil Gupta, Svetha Venkatesh. Distributional Reinforcement Learning via Moment Matching. AAAI Conference on Artificial Intelligence (AAAI), 2021. [arXiv] [code] [slides] [poster] [talk].

2020

3. Thanh Nguyen-Tang, Sunil Gupta, Huong Ha, Santu Rana, Svetha Venkatesh. Distributionally Robust Bayesian Quadrature Optimization. International Conference on Artificial Intelligence and Statistics (AISTATS), 2020. [arXiv] [code] [slides] [talk].

2019

2. Thanh Nguyen-Tang, Jaesik Choi. Markov Information Bottleneck to Improve Information Flow in Stochastic Neural Networks. Entropy, 2019 (Special Issue on Information Bottleneck: Theory and Applications in Deep Learning).
1. Huong Ha, Santu Rana, Sunil Gupta, Thanh Nguyen-Tang, Hung Tran-The, Svetha Venkatesh. Bayesian Optimization with Unknown Search Space. Advances in Neural Information Processing Systems (NeurIPS), 2019. [code] [poster].

Preprints

Mengyan Zhang, Thanh Nguyen-Tang, Fangzhao Wu, Zhenyu He, Xing Xie, Cheng Soon Ong. Two-Stage Neural Contextual Bandits for Adaptive Personalised Recommendation. Preprint, 2022.