NYU Center for Data Science: What is intelligence?

A few weeks ago there was an open house at the NYU Center for Data Science intended for NYU faculty members. As one of the early members of the Center (I know! Already!), I was given an opportunity to share with the audience why I joined the Center and what my experience there has been so far. Although I'm much more familiar with giving a research talk using a set of slides, I decided to try something new and give a talk without any slides. Of course, this was totally new to me, and I couldn't help but prepare a script in advance.

A short note on “Rebooting AI” by Marcus & Davis

Disclaimer: I received a hard copy of <Rebooting AI> from the publisher, although I had by then already purchased the Kindle version of the book on Amazon. I only took a quick look at the book on my flight between UIUC and NYC, and wrote this brief note on my flight back to NYC from Chicago. I also felt it would be good to have even a short note by a machine learning researcher to balance all the praise from “Noam Chomsky, Steven Pinker, Garry Kasparov” and others.

<Rebooting AI> is a well-written piece (somewhat hastily) summarizing the current state of

Discrepancy between GD-by-GD and GD-by-SGD

The ICLR deadline is approaching, and of course, it's time to write a short blog post that has absolutely nothing to do with any of my manuscripts in preparation. I'd like to thank Ed Grefenstette, Tim Rocktäschel and Phu Mon Htut for fruitful discussion. Let's consider the following meta-optimization objective function:
$$\mathcal{L}'(D'; \theta_0 - \eta \nabla_{\theta} \mathcal{L}(D; \theta_0)),$$
which we want to minimize w.r.t. $\theta_0$. Thanks to the success of MAML and its earlier and more recent variants, it has recently become popular to use gradient descent to minimize such a meta-optimization objective function. By the chain rule, the gradient can be written down as*
$$\nabla_{\theta_0} \mathcal{L}'(D'; \theta_0 - \eta \nabla_\theta \mathcal{L}(D; \theta_0)) = \left(I - \eta \nabla^2_\theta \mathcal{L}(D; \theta_0)\right) \nabla \mathcal{L}'(D'; \theta_0 - \eta \nabla_\theta \mathcal{L}(D; \theta_0)).
$$
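To make this concrete, here is a minimal sketch in JAX of the meta-objective and its exact gradient; the quadratic toy loss and the names `loss`, `meta_objective`, and `meta_grad` are my own illustrative choices, not from the post. Differentiating through the inner gradient step automatically accounts for the Hessian term $(I - \eta \nabla^2_\theta \mathcal{L})$ above.

```python
import jax
import jax.numpy as jnp

def loss(params, data):
    # Hypothetical quadratic loss; a stand-in for any differentiable model loss L.
    x, y = data
    return jnp.mean((x @ params - y) ** 2)

def meta_objective(theta0, D, D_prime, eta=0.1):
    # One inner gradient-descent step on D: theta_1 = theta_0 - eta * grad L(D; theta_0)
    theta1 = theta0 - eta * jax.grad(loss)(theta0, D)
    # Evaluate the outer loss L' on D' at the updated parameters.
    return loss(theta1, D_prime)

# Differentiating the composed function w.r.t. theta_0 gives the full
# meta-gradient, including the implicit (I - eta * Hessian) factor.
meta_grad = jax.grad(meta_objective)

# Example usage with synthetic data (shapes are arbitrary):
key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (32, 5))
theta_true = jnp.arange(5.0)
D = (X, X @ theta_true)
D_prime = (X[:16], X[:16] @ theta_true)
print(meta_grad(jnp.zeros(5), D, D_prime))
```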

BERT has a Mouth and must Speak, but it is not an MRF

Our colleagues at NYU, Chandel, Joseph and Ranganath, pointed out that there is an error in the recent technical report <BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model>, written by Alex Wang and me. The mistake was entirely mine, not Alex's. There is an upcoming paper by Chandel, Joseph and Ranganath (2019) with a much better and correct interpretation and analysis of BERT, which I will share and refer to in an updated version of our technical report as soon as it appears publicly. Here, I would like to briefly point

Are we ready for self-driving cars?

Last Monday (April 29), I had the awesome experience of being invited to participate in a debate event organized by the Review and Debates at NYU (http://www.thereviewatnyu.com/). Having been born and raised in South Korea, I can confidently tell you that I cannot remember a single moment when I participated in any kind of formal debate, nor a single occasion on which I was taught how to make an argument for or against any specific topic. My mom often tells me I paint far too gloomy a picture of the Korean K-12 education I had, but it is true that our