Best paper runner-up at NAACL’16

A paper by Orhan Firat, me, and Yoshua Bengio on multi-way, multilingual neural machine translation is sadly but also happily a best paper runner-up at NAACL’16. You can find the paper online, and the code has also recently been made public by Orhan.

[Closed] A Post-Doctoral Researcher Position in Deep Learning for Medical Image Analysis

Update on March 15, 2016: Thanks for sending me your CVs! I have screened the applications and have made an offer. Prof. Kyunghyun Cho at the Computational Intelligence, Learning, Vision, and Robotics (CILVR) Group, Department of Computer Science, New York University invites applications for a postdoctoral position on deep learning for medical image analysis. Applicants are expected to have a strong background and experience in developing and investigating deep neural networks for computer vision, in addition to good knowledge of machine learning and excellent programming skills. Applicants should be able to implement deep neural networks, including multilayered convolutional…

Google Faculty Award: Fall 2015

I have been awarded a Google Faculty Award (Fall 2015) in the field of machine translation. I am honoured to be a recipient of this award and will use it to advance my machine translation research further.

to arXiv or not to arXiv

I believe it is a universal phenomenon: when you’re swamped with work, you suddenly feel the irresistible urge to do something else. This is one of those something elses. Back in January 2016, right after the submission deadline of NAACL’16, Chris Dyer famously (?) posted on his Facebook wall, “to arxiv or not to arxiv, that is the increasingly annoying question.” The question of whether to arXiv a conference submission that has not yet gone through peer review has indeed become a thorny issue in the field of machine learning and the wider research community around it, including…

DeepMind Q&A Data

One major issue with research in Q&A is that there is no controlled yet large-scale standard benchmark dataset available. There are a number of open-ended Q&A datasets, but they often require a system to have access to external resources. This makes it difficult for researchers to compare different models in a unified way. Recently, one such large-scale standard Q&A dataset was proposed by Hermann et al. (2015). In this dataset, a question comes along with a context, in which one of the words is the answer to the question. And.. wait.. I just realized that I don’t have to…
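To make the cloze-style setup concrete: in Hermann et al.'s construction, a question is formed by blanking out the answer word from a short summary of the context document. Here is a minimal, hypothetical sketch of building such a (question, context, answer) triple; the function name and dictionary format are illustrative, not the dataset's actual layout:

```python
def make_cloze_example(context: str, summary: str, answer: str) -> dict:
    """Build a cloze-style Q&A triple: the answer word is removed from
    the summary sentence and replaced with a placeholder token, so a
    model must recover it from the accompanying context."""
    assert answer in summary.split(), "answer must appear in the summary"
    assert answer in context.split(), "answer must appear in the context"
    # Replace the answer token in the summary with a placeholder.
    question = " ".join(
        "@placeholder" if tok == answer else tok
        for tok in summary.split()
    )
    return {"context": context, "question": question, "answer": answer}

example = make_cloze_example(
    context="the bank of england kept rates unchanged on thursday",
    summary="bank of england holds rates",
    answer="rates",
)
print(example["question"])  # bank of england holds @placeholder
```

In the real dataset the entities are additionally anonymized (replaced by markers) so that the answer cannot be guessed from world knowledge alone; the sketch above skips that step for brevity.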
