After visiting Google DeepMind, I continued on to the University of Oxford. I gave a talk at the Computational Linguistics Seminars series about neural machine translation and met many awesome people there. I had interesting discussions with Prof. Nando de Freitas, Prof. Phil Blunsom and their students (Misha Denil, Ziyu Yang and many others). The next stop was the University of Cambridge, thanks to Felix Hill and his supervisor, Anna Korhonen. I again presented our neural translation model and my (and Felix's) view/take on syntax to both machine learning researchers and NLP researchers. There were some mixed responses.
I visited Google DeepMind on Friday (17 Oct). I gave a talk on neural machine translation and met many researchers, including some of my former colleagues such as Razvan Pascanu and Guillaume Desjardins. As can be expected, I signed an NDA before my visit and cannot discuss much about any of the interesting things I heard and discussed at DeepMind. But I can tell you one thing: despite my constant attempts, none of the researchers there was able to tell me a single downside (even when they were quite drunk!). Incredible! Anyway, it was a very interesting and exciting day.
Today, I gave a talk about our latest results in neural machine translation at the NLP PIC Seminar organised at the IBM Watson Research Centre. After the talk, I enjoyed discussions with the researchers in the NLP group. The slides I used today can be found here.
Prof. Kee-Eung Kim invited me to give a tutorial on deep learning at KAIST. Although I realize every day how much I *don't* know about the field, I couldn't resist *preaching* deep learning. The slides I used at the tutorial are available here.
I am moving this month to the University of Montreal as a postdoctoral researcher at the Machine Learning Lab led by Prof. Yoshua Bengio.