Global AI Frontier Lab at New York University

earlier this year, we officially launched the Global AI Frontier Lab at New York University (NYU), directly under the Office of the President. the Global AI Frontier Lab was created at NYU in collaboration with Korea’s Ministry of Science and ICT (specifically with its Institute of Information & Communications Technology Planning & Evaluation, IITP) to support research in artificial intelligence (AI) and to facilitate international collaboration between a growing body of AI researchers at NYU and those in Korea. together with Yann, i am co-directing this lab. the Global AI Frontier Lab focuses on three research themes: (1) Fundamental

Glen de Vries Professor of Health Statistics

i joined NYU as an assistant professor nearly ten years ago, in fall 2015. during that fall, Glen de Vries, the founder and president of Medidata, endowed a professorship in health statistics at the Courant Institute of Mathematical Sciences at NYU. this was celebrated with an event in the faculty lounge on the 13th floor of Warren Weaver Hall. because i had a weekly lab session for my very first course at NYU at almost the same time, i sadly could not attend it myself, but this was how i learned of Glen de Vries, Medidata and their

Softmax forever, or why I like softmax

[UPDATE: Feb 8 2025] my amazing colleague Max Shen noticed a sign mistake in my derivation of the partial derivative of the log-harmonic function below. i taught my first full-semester course, <Natural Language Processing with Distributed Representation>, in fall 2015 (whoa, a decade ago!) you can find the lecture notes from this course at https://arxiv.org/abs/1511.07916. in one of the lectures, David Rosenberg, who was teaching machine learning at NYU back then and had absolutely no reason other than kindness to sit in on my course, asked why we use softmax and whether this is the only way to turn
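
as a quick refresher on the function the post is about, here is a minimal sketch of softmax in numpy; the max-subtraction trick for numerical stability is my own addition, not something from the post:

import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    # turn an arbitrary vector of scores into a probability distribution.
    # subtracting the max leaves the result unchanged (softmax is
    # shift-invariant) but avoids overflow inside exp.
    shifted = logits - np.max(logits)
    exp_scores = np.exp(shifted)
    return exp_scores / exp_scores.sum()

print(softmax(np.array([1.0, 2.0, 3.0])))  # ~[0.090, 0.245, 0.665]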

Bye, Felix

Note: i wrote this on December 9 2024 but could not dare to post it then, because i did not want to, and could not, believe what had just happened. my heart still aches even to think about it, but i’m posting it on the last day of 2024 to remember Felix. It was sometime in early summer 2014. I was a postdoc in Montreal under the supervision of Yoshua Bengio, and Felix was a visiting student who had just arrived in Montreal. I was struggling with building a neural machine translation system that could handle long source/target sentences, and in

Amortized Mixture of Gaussians (AMoG): A Proof of Concept for “Learning to X”, or how I re-discovered simulation-based inference

here’s my final hackathon of the year (2024). there are a few concepts in deep learning that i simply love. they include (but are not limited to) autoregressive sequence modeling, mixture density networks, Boltzmann machines, variational autoencoders, stochastic gradient descent with adaptive learning rates and, more recently, set transformers. so, as the final hackathon of this year, i decided to see if i could put together a set transformer, an autoregressive transformer decoder and a mixture density network to learn to infer an underlying mixture of Gaussians. i got some help (and also some misleading guidance) from Google Gemini (gemini-exp-1206),
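
to make the setup concrete, here is a minimal sketch (in PyTorch) of two of the three pieces named above: a permutation-invariant encoder over observed samples and a mixture density head on top of it. all class names, shapes and hyperparameters are my own illustration; i stand in a vanilla transformer encoder with mean pooling for the set transformer and leave out the autoregressive decoder entirely:

import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    # permutation-invariant encoder over a set of observed points,
    # standing in for the set transformer from the post.
    def __init__(self, dim_in=2, dim_hidden=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(dim_in, dim_hidden)
        layer = nn.TransformerEncoderLayer(
            d_model=dim_hidden, nhead=n_heads, batch_first=True)
        # no positional encoding: self-attention alone is permutation-equivariant.
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x):            # x: (batch, n_points, dim_in)
        h = self.encoder(self.embed(x))
        return h.mean(dim=1)         # mean-pool -> permutation-invariant summary

class MDNHead(nn.Module):
    # mixture density head: maps the set summary to the parameters of a
    # K-component diagonal Gaussian mixture.
    def __init__(self, dim_hidden=64, n_components=3, dim_out=2):
        super().__init__()
        self.K, self.D = n_components, dim_out
        self.params = nn.Linear(dim_hidden, n_components * (1 + 2 * dim_out))

    def forward(self, h):
        p = self.params(h).view(-1, self.K, 1 + 2 * self.D)
        log_pi = torch.log_softmax(p[..., 0], dim=-1)  # mixing weights
        mu = p[..., 1:1 + self.D]                      # component means
        log_sigma = p[..., 1 + self.D:]                # component log-stds
        return log_pi, mu, log_sigma

# usage: summarize 100 samples from each simulated dataset, read off a guess.
enc, head = SetEncoder(), MDNHead()
x = torch.randn(8, 100, 2)                 # batch of 8 simulated datasets
log_pi, mu, log_sigma = head(enc(x))
print(log_pi.shape, mu.shape)              # (8, 3) and (8, 3, 2)

the key design point is permutation invariance: self-attention without positional encodings treats the observed samples as a set, and mean pooling collapses them into a single summary from which the mixture parameters are read off.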
