Best paper runner-up at NAACL’16

A paper by Orhan Firat, me, and Yoshua Bengio on multi-way, multilingual neural machine translation is, sadly but also happily, a best paper runner-up at NAACL’16. You can find the paper at https://arxiv.org/abs/1601.01073. The code has also been made public recently by Orhan at https://github.com/nyu-dl/dl4mt-multi.

[Closed] A Post-Doctoral Researcher Position in Deep Learning for Medical Image Analysis

Update on March 15, 2016: Thanks for sending me your CV! I have screened the applications and have made an offer. Prof. Kyunghyun Cho (https://www.kyunghyuncho.me/) at the Computational Intelligence, Learning, Vision, and Robotics (CILVR) Group (http://cilvr.cs.nyu.edu/), Department of Computer Science (https://cs.nyu.edu/), New York University invites applications for a postdoctoral position on deep learning for medical image analysis. Applicants are expected to have a strong background and experience in developing and investigating deep neural networks for computer vision, in addition to good knowledge of machine learning and excellent programming skills. Applicants should be able to implement deep neural networks, including multilayered convolutional

Google Faculty Award: Fall 2015

I have been awarded a Google Faculty Award (Fall 2015) in the field of machine translation. I am honoured to be a recipient of this award and will use it toward advancing my machine translation research further. For more details, see http://googleresearch.blogspot.com/2016/02/google-research-awards-fall-2015.html.

to arXiv or not to arXiv

I believe it is a universal phenomenon: when you’re swamped with work, you suddenly feel an irresistible urge to do something else. This post is one of those something elses. Back in January (2016), right after the submission deadline of NAACL’16, Chris Dyer famously (?) posted on his Facebook wall, “to arxiv or not to arxiv, that is the increasingly annoying question.” This question of whether to arxiv a conference submission that has not yet gone through peer review has indeed become a thorny issue in the field of machine learning and the wider research community around it, including

DeepMind Q&A Data

One major issue with research in Q&A is that there is no controlled yet large-scale standard benchmark dataset available. There are a number of open-ended Q&A datasets, but they often require a system to have access to external resources, which makes it difficult for researchers to compare different models in a unified way. Recently, one such large-scale standard Q&A dataset was proposed by Hermann et al. (2015). In this dataset, a question comes along with a context in which one of the words is the answer to the question. And.. wait.. I just realized that I don’t have to
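For concreteness, here is a minimal sketch of what a single cloze-style example in the spirit of Hermann et al. (2015) might look like. The field names, entity markers, and example text below are illustrative assumptions, not the exact released file format.

```python
# A hedged, illustrative sketch of one cloze-style Q&A example in the spirit
# of Hermann et al. (2015). Field names and entity markers are assumptions
# for illustration, not the exact on-disk format of the released data.

example = {
    # The news article, with named entities anonymized as @entity<N>
    "context": (
        "@entity0 , the @entity1 striker , scored twice as @entity2 "
        "beat @entity3 3-1 on saturday ."
    ),
    # A cloze-style question: one entity is replaced by @placeholder
    "question": "@placeholder scored twice in the 3-1 win over @entity3",
    # The answer is a single entity token that appears in the context
    "answer": "@entity0",
}


def answer_in_context(ex):
    """Sanity check: the answer token must occur somewhere in the context."""
    return ex["answer"] in ex["context"].split()


if __name__ == "__main__":
    print(answer_in_context(example))  # True
```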
