i joined NYU as an assistant professor nearly ten years ago, in Fall 2015. during that fall, Glen de Vries, who was the founder and president of Medidata, endowed a professorship in health statistics at the Courant Institute of Mathematical Sciences, NYU. this was celebrated at an event in the faculty lounge on the 13th floor of Warren Weaver Hall. because i had a weekly lab session for my very first course at NYU at almost the same time, i sadly could not attend it myself, but it was how i learned of Glen de Vries, Medidata, and their
Softmax forever, or why I like softmax
[UPDATE: Feb 8, 2025] my amazing colleague Max Shen noticed a sign mistake in my derivation of the partial derivative of the log-harmonic function below. i taught my first full-semester course on <Natural Language Processing with Distributed Representation> in fall 2015 (whoa, a decade ago!). you can find the lecture notes from this course at https://arxiv.org/abs/1511.07916. in one of the lectures, David Rosenberg, who was teaching machine learning at NYU back then and had absolutely no reason other than kindness to sit in on my course, asked why we use softmax and whether this is the only way to turn
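as a quick illustration of the question (this sketch is mine, not from the post): softmax is one way, but certainly not the only way, to map real-valued scores to a probability distribution. the normalized_relu below is just an assumed alternative for contrast.

```python
import numpy as np

def softmax(scores):
    # exponentiate (shifted for numerical stability) and normalize to sum to one
    z = np.exp(scores - np.max(scores))
    return z / z.sum()

def normalized_relu(scores):
    # an assumed alternative: clip negatives to zero and normalize
    # (only well-defined when at least one score is positive)
    z = np.maximum(scores, 0.0)
    return z / z.sum()

scores = np.array([2.0, 1.0, -1.0])
print(softmax(scores))          # roughly [0.70, 0.26, 0.04]
print(normalized_relu(scores))  # [0.667, 0.333, 0.0]
```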
Bye, Felix
Note: i wrote this on December 9, 2024 but could not dare to post it, because i did not want to and could not believe what had just happened. my heart still aches so much to even think about it, but i’m posting it on the last day of 2024 to remember Felix. It was sometime in early summer 2014. I was a postdoc in Montreal under the supervision of Yoshua Bengio, and Felix was a visiting student who had just arrived in Montreal. I was struggling with building a neural machine translation system that could handle long source/target sentences, and in
Amortized Mixture of Gaussians (AMoG): A Proof of Concept for “Learning to X”, or how I re-discovered simulation-based inference
here’s my final hackathon of the year (2024). there are a few concepts in deep learning that i simply love. they include (but are not limited to) autoregressive sequence modeling, mixture density networks, Boltzmann machines, variational autoencoders, stochastic gradient descent with adaptive learning rates and, more recently, set transformers. so, as the final hackathon of this year, i’ve decided to see if i can put together a set transformer, an autoregressive transformer decoder and a mixture density network to learn to infer an underlying mixture of Gaussians. i got some help (and also misleading guidance) from Google Gemini (gemini-exp-1206),
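for illustration only (this is a rough sketch of mine, not the post’s code, with assumed module names and shapes): one way the three pieces could be wired together is a per-point encoder standing in for a set transformer, a transformer decoder whose component slots cross-attend to the encoded set, and a mixture density head. for brevity the slots here are decoded in parallel rather than truly autoregressively.

```python
import torch
import torch.nn as nn

class AMoGSketch(nn.Module):
    # hypothetical sketch, not the post's architecture:
    # set encoder + slot decoder + mixture density head
    def __init__(self, dim=2, hidden=64, n_components=4):
        super().__init__()
        # per-point encoder as a stand-in for a set transformer
        self.point_enc = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # transformer decoder: component slots cross-attend to the encoded set
        dec_layer = nn.TransformerDecoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.queries = nn.Parameter(torch.randn(n_components, hidden))
        # mixture density head: mixing logit, mean and log-std per component
        self.head = nn.Linear(hidden, 1 + 2 * dim)

    def forward(self, x):                       # x: (batch, n_points, dim)
        memory = self.point_enc(x)              # (batch, n_points, hidden)
        slots = self.queries.unsqueeze(0).expand(x.size(0), -1, -1)
        h = self.decoder(slots, memory)         # (batch, n_components, hidden)
        logit_pi, mu, log_sigma = self.head(h).split(
            [1, x.size(-1), x.size(-1)], dim=-1)
        return logit_pi.softmax(dim=1), mu, log_sigma.exp()

# usage on random toy data: 8 sets of 100 two-dimensional points
pi, mu, sigma = AMoGSketch()(torch.randn(8, 100, 2))
```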
Stochastic variational inference for low-rank stochastic block models, or how i re-discovered SBM unnecessarily
Prologue: a few weeks ago, i listened to Sebastian Seung’s mini-lecture at the Flatiron Institute (CCM) about the recently completed fruit fly brain connectome. near the end of the mini-lecture, Sebastian talked about the necessity of clustering graph nodes based on type-level connectivity patterns instead of node-level connectivity patterns. i thought that would be obviously easy to solve with latent variable modeling and ChatGPT. i was so wrong, because ChatGPT misled me into every possible wrong corner of the solution space over the next two weeks or so. eventually, i implemented a simple variational inference approach to latent variable clustering,
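for illustration only (not the post’s code, and a plain full-rank SBM rather than the low-rank variant discussed there): a minimal numpy sketch of mean-field-style updates for clustering graph nodes by their block-level connectivity.

```python
import numpy as np

def sbm_mean_field(A, K=2, n_iters=50, seed=0):
    # A: (N, N) binary adjacency matrix; returns soft assignments q of shape (N, K)
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    q = rng.dirichlet(np.ones(K), size=N)           # q[i, k] ~ p(node i in block k)
    for _ in range(n_iters):
        # update block-to-block edge probabilities from current soft assignments
        counts = q.sum(0)
        B = (q.T @ A @ q) / (np.outer(counts, counts) + 1e-8)
        B = np.clip(B, 1e-6, 1 - 1e-6)
        # update assignments from each node's expected edge log-likelihood under each block
        logit = A @ q @ np.log(B).T + (1 - A) @ q @ np.log(1 - B).T
        logit += np.log(counts / N + 1e-8)          # block proportions act as a prior
        q = np.exp(logit - logit.max(1, keepdims=True))
        q /= q.sum(1, keepdims=True)
    return q

# toy usage: two planted blocks, dense within-block and sparse between-block edges
P = np.block([[np.full((20, 20), .5), np.full((20, 20), .05)],
              [np.full((20, 20), .05), np.full((20, 20), .5)]])
A = (np.random.default_rng(1).random((40, 40)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                      # symmetrize, drop self-loops
q = sbm_mean_field(A, K=2)
```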