My research proposal on “A Trainable Decoding Algorithm for Neural Machine Translation” has been selected for a Google Research Award 2016 (it’s a bit confusing whether it’s 2016 or 2017; the deadline was in 2016 but the decision came in 2017). I’d like to thank Google for this award, which will greatly help my research. Gotta go buy a few more GPUs! For more info, see https://research.googleblog.com/2017/02/google-research-awards-2016.html.
A paper by Orhan Firat, me and Yoshua Bengio on multi-way, multilingual neural machine translation is sadly but also happily a best-paper runner-up at NAACL’16. You can find the paper at https://arxiv.org/abs/1601.01073, and the code has also been made public recently by Orhan at https://github.com/nyu-dl/dl4mt-multi
Update on March 15, 2016: Thanks for sending me your CVs! I have screened the applications and have made an offer. Prof. Kyunghyun Cho (https://www.kyunghyuncho.me/) at the Computational Intelligence, Learning, Vision, and Robotics (CILVR) Group (http://cilvr.cs.nyu.edu/), Department of Computer Science (https://cs.nyu.edu/), New York University invites applications for a postdoctoral position on deep learning for medical image analysis. Applicants are expected to have a strong background and experience in developing and investigating deep neural networks for computer vision, in addition to good knowledge of machine learning and excellent programming skills. Applicants should be able to implement deep neural networks, including multilayered convolutional
I have been awarded a Google Faculty Award (Fall 2015) in the field of machine translation. I am honoured to be a recipient of this award and will use it toward advancing my machine translation research further. For more details, see http://googleresearch.blogspot.com/2016/02/google-research-awards-fall-2015.html.
One major issue with research in Q&A is that there is no controlled yet large-scale standard benchmark dataset available. There are a number of open-ended Q&A datasets, but they often require a system to have access to external resources. This makes it difficult for researchers to compare different models in a unified way. Recently, one such large-scale standard Q&A dataset was proposed by Hermann et al. (2015). In this dataset, a question comes along with a context, and one of the words in that context is the answer to the question. And.. wait.. I just realized that I don’t have to
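To make the structure of such a dataset concrete, here is a minimal sketch of a cloze-style Q&A example in the spirit of Hermann et al. (2015): the question is a sentence with one word masked by a placeholder, and the answer is a word that appears in the accompanying context. The function and field names below are my own invention for illustration, not the dataset’s actual format.

```python
def make_cloze_example(context, sentence, answer):
    """Turn a sentence into a cloze-style question by masking the answer word.

    The answer must literally occur in the context, mirroring the key
    property of the dataset: the model can find the answer by reading
    the context rather than consulting external resources.
    """
    assert answer in context.split(), "answer must occur in the context"
    question = sentence.replace(answer, "@placeholder", 1)
    return {"context": context, "question": question, "answer": answer}


example = make_cloze_example(
    context="the bank of england cut interest rates to a record low",
    sentence="interest rates were cut by the bank of england",
    answer="england",
)
print(example["question"])  # interest rates were cut by the bank of @placeholder
```

A system is then scored on how often it recovers the masked word from the context, which keeps the benchmark self-contained and comparable across models.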