I have been awarded a Google Faculty Research Award (Fall 2015) in the field of machine translation. I am honoured to be a recipient of this award and will use it to advance my machine translation research further. For more details, see http://googleresearch.blogspot.com/2016/02/google-research-awards-fall-2015.html.
Author: kyunghyuncho
to arXiv or not to arXiv
I believe it is a universal phenomenon: when you’re swamped with work, you suddenly feel the irresistible urge to do something else. This post is one of those something elses. Back in January (2016), right after the submission deadline of NAACL’16, Chris Dyer famously (?) posted on his Facebook wall, “to arxiv or not to arxiv, that is the increasingly annoying question.” This question of whether “to arxiv or not to arxiv” a conference submission that has not yet gone through peer review has indeed become a thorny issue in the field of machine learning and the wider research community around it, including
DeepMind Q&A Data
One major issue with research in Q&A is that there is no controlled yet large-scale standard benchmark dataset available. There are a number of open-ended Q&A datasets, but they often require a system to have access to external resources. This makes it difficult for researchers to compare different models in a unified way. Recently, one such large-scale standard Q&A dataset was proposed by Hermann et al. (2015). In this dataset, a question comes along with a context in which one of the words is the answer to the question. And.. wait.. I just realized that I don’t have to
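As a rough illustration of this cloze-style setup (the field names and text here are hypothetical, not the actual format of the released data), a single example pairs an entity-anonymized context with a question containing a placeholder, and the answer is one entity token from the context:

```python
# Hypothetical cloze-style Q&A example, in the spirit of Hermann et al. (2015):
# entities in the context are anonymized, the question has a @placeholder slot,
# and the answer is a single entity token appearing in the context.
example = {
    "context": "@entity2 will step down as chief executive of @entity1 , the company said .",
    "question": "@placeholder will step down as chief executive of @entity1",
    "answer": "@entity2",
}

def exclusive_entity_baseline(example):
    """Trivial baseline: guess the context entity not mentioned in the question."""
    context_entities = {w for w in example["context"].split() if w.startswith("@entity")}
    question_entities = {w for w in example["question"].split() if w.startswith("@entity")}
    candidates = context_entities - question_entities
    return candidates.pop() if len(candidates) == 1 else None

print(exclusive_entity_baseline(example))  # -> @entity2
```

The entity anonymization is what makes the benchmark controlled: a system cannot answer from world knowledge alone and must actually read the context.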
Lecture Note for “NLP with Distributed Representation” on arXiv Now
On the same day I moved to NYC at the end of August, I had coffee with Hal Daume III. Among the many things we talked about, I just had to ask Hal for advice on teaching, as my very first full-semester course was about to start then. One of the first questions I asked was whether he had some lecture slides all ready, now that it has been some years since he started teaching. His response was that there were no slides! No slides? I was shocked for a moment. Though, now that I think about it, most of the
Lost in Interpretability
The Center for Data Science (CDS) at NYU has a weekly lunch seminar series. Each Monday, one speaker gives an (informal) presentation on any topic she/he wants to talk about, or at least so I thought. Anyway, I thought it would be a good chance to discuss with people (students and research fellows at CDS, as well as faculty members from various departments all over NYU) what the interpretability of machine learning models means. I prepared a set of slides based on an excellent article, <Statistical Modeling: The Two Cultures> by Leo Breiman. Instead of trying to write what I’ve talked