Thanh Nguyen-Tang

Postdoctoral Research Fellow
Department of Computer Science
Whiting School of Engineering
Johns Hopkins University
3400 N Charles Street, Malone Hall 331, Baltimore, MD 21218
Email: tnguy258 at jhu dot edu, or nguyent2792 at gmail dot com

Intro: I am a Postdoctoral Research Fellow at the Department of Computer Science, Whiting School of Engineering, Johns Hopkins University, working with Raman Arora. I was an Associate Research Fellow at the Applied AI Institute, Deakin University from July 2021 to June 2022, and I completed my PhD there in February 2022. I received my Master's degree in Computer Science and Engineering from Ulsan National University of Science and Technology (UNIST) in 2018.

Research interests:

  • Reinforcement Learning

  • Probabilistic Deep Learning

  • Representation Learning

  • Learning under Distributional Shifts

I am always open to research collaborations and chats!

Here are my Google Scholar, Semantic Scholar, GitHub, and Twitter.

Latest News

  • [Sep 14, 2022] One paper was accepted to NeurIPS 2022.

  • [Aug 8, 2022] I am acknowledged in Mengyan Zhang's PhD thesis.

  • [Jan 21, 2022] One paper was accepted to ICLR 2022.

  • [Oct 25, 2021] A short version of our work has been accepted to the NeurIPS’21 Workshop on Offline Reinforcement Learning.

  • [Jul 8, 2021] A short version of our work has been accepted to the ICML’21 Workshop on Reinforcement Learning Theory.

  • [Jul 1, 2021] I started my postdoc at A\(^2\)I\(^2\), Deakin University, after submitting my PhD thesis on June 24.

  • [May 20, 2021] I was accepted to the Deep Learning Theory Summer School at Princeton (acceptance rate: 180/500 = 36%).

Publications

2022

2021

2020

2019

Dissertations

Academic Service

  • Senior Program Committee: AAAI (2023)

  • Reviewer/Program Committee: NeurIPS (2022, 2021, 2020), ICML (2022, 2021), ICLR (2023, 2022, 2021 - Outstanding Reviewer Award), AISTATS (2021), AAAI (2022, 2021 - Top 25% of Program Committee, 2020), EWRL (2022), L4DC (2022), NeurIPS Workshop on Offline Reinforcement Learning (2022, 2021)

  • Volunteer: ICML (2022), AutoML (2022)