Lecture Note for “NLP with Distributed Representation” on arXiv Now

On the same day I moved to NYC at the end of August, I had coffee with Hal Daume III. Among the many things we talked about, I just had to ask Hal for advice on teaching, as my very first full-semester course was about to start. One of the first questions I asked was whether he had some lecture slides all ready, now that it's been some years since he started teaching. His response was that there were no slides! No slides? I was shocked for a moment. Though, now that I think about it, most of the

Lost in Interpretability

The Center for Data Science (CDS) at NYU has a weekly lunch seminar series. Each Monday, one speaker gives an (informal) presentation on any topic she/he wants to talk about, or at least so I thought. Anyway, I thought it would be a good chance to discuss with people (students and research fellows at CDS, as well as faculty members from various departments all over NYU) what the interpretability of machine learning models means. I prepared a set of slides based on an excellent article, "Statistical Modeling: The Two Cultures" by Leo Breiman. Instead of trying to write what I've talked

Brief Summary of the Panel Discussion at DL Workshop @ICML 2015

Overview The finale of the Deep Learning Workshop at ICML 2015 was the panel discussion on the future of deep learning. After a couple of weeks of extensive discussion and exchange of emails among the workshop organizers, we invited six panelists: Yoshua Bengio (University of Montreal), Neil Lawrence (University of Sheffield), Juergen Schmidhuber (IDSIA), Demis Hassabis (Google DeepMind), Yann LeCun (Facebook, NYU) and Kevin Murphy (Google). As the recent deep learning revolution has come from both academia and industry, we tried our best to balance the panel so that the audience could hear from experts on both sides. Before I say anything more, I would like to thank the panelists for having accepted