The extended abstract version of <Deep Neural Networks Improve Radiologists’ Performance in Breast Cancer Screening> received the Best Paper Award at the AI for Social Good Workshop, co-located with ICML’19 last week in Long Beach, CA. Congratulations to the first author, Nan, who is a PhD student at the NYU Center for Data Science, to the project lead, Krzysztof, who is an assistant professor at NYU Radiology, and to all the other members of this project!
Author: kyunghyuncho
BERT has a Mouth and must Speak, but it is not an MRF
Update on June 9, 2021: I still don’t know the fate of the hypothetical manuscript by Chandel et al., but I’ve noticed that Kartik Goyal, Chris Dyer & Taylor Berg-Kirkpatrick have fixed the issue raised in this blog post (https://arxiv.org/abs/2106.02736): they use BERT’s single-token conditional as a proposal distribution in Metropolis-Hastings, in order to sample from the distribution defined by the potentials built from the logits of those single-token conditionals. It was pointed out by our colleagues at NYU, Chandel, Joseph and Ranganath, that there is an error in the recent technical report <BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model> written
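To make the correction concrete, here is a minimal numpy sketch of the sampling scheme described above, on a toy pairwise MRF rather than actual BERT: one position is "masked" and resampled from a conditional proposal, and a Metropolis-Hastings accept/reject step corrects for the proposal not matching the target's true conditional (a temperature parameter stands in for that mismatch). The vocabulary, sequence length, and potentials are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
V, L, TAU = 3, 5, 2.0   # toy vocab size, sequence length, proposal temperature

# Toy pairwise log-potentials between adjacent positions; a stand-in for
# potentials built from a masked LM's single-token logits.
log_phi = rng.normal(size=(V, V))

def log_score(x):
    # Unnormalized log-probability of the whole sequence under the MRF.
    return sum(log_phi[x[i], x[i + 1]] for i in range(L - 1))

def proposal(x, i):
    # Distribution over values at position i given its neighbours, smoothed
    # by a temperature so it only *approximates* the target's conditional
    # (mimicking BERT's conditionals not agreeing with one consistent joint).
    logits = np.zeros(V)
    for v in range(V):
        if i > 0:
            logits[v] += log_phi[x[i - 1], v]
        if i < L - 1:
            logits[v] += log_phi[v, x[i + 1]]
    z = logits / TAU
    p = np.exp(z - z.max())
    return p / p.sum()

def mh_step(x):
    i = int(rng.integers(L))
    # Position i is "masked" when proposing, so the proposal does not
    # depend on the current x[i]: forward and reverse proposals share q.
    q = proposal(x, i)
    v = int(rng.choice(V, p=q))
    y = x.copy()
    y[i] = v
    # MH acceptance: target ratio times reverse/forward proposal ratio.
    log_a = log_score(y) - log_score(x) + np.log(q[x[i]]) - np.log(q[v])
    return y if np.log(rng.random()) < log_a else x

x = rng.integers(V, size=L)
for _ in range(1000):
    x = mh_step(x)
```

With TAU = 1 the proposal equals the exact conditional and every move is accepted (plain Gibbs sampling); the accept/reject step only matters, as in the BERT case, when the proposal and the target disagree.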
Are we ready for self-driving cars?
Last Monday (April 29), I had the awesome experience of being invited to participate in a debate event organized by the Review and Debates at NYU (http://www.thereviewatnyu.com/). Having been born and raised in South Korea, I can confidently tell you that I cannot remember a single moment in which I participated in any kind of formal debate, nor a single occasion on which I was taught how to make an argument for or against a specific topic. My mom often tells me I paint way too gloomy a picture of the Korean K-12 education I had, but it is true that our
On the causality view of “Context-Aware Learning for Neural Machine Translation”
[Notice: what unfortunate timing! This post is definitely NOT an April Fools’ joke.] Sebastien Jean and I had a paper titled <Context-Aware Learning for Neural Machine Translation> rejected from NAACL’19, which is perhaps understandable because we did not report any substantial gain in the BLEU score. As I finally found some time to read Pearl’s <Book of Why> for a personal reason (yes, personal reasons can sometimes help), I thought I would write a short note on how the idea in this paper was originally motivated. As I was never educated in causal inference or learning, I was scared of using a term
Best Paper Award at ISMIR 2017
Keunwoo Choi, George Fazekas, Mark Sandler and I have received the Best Paper Award at the 18th Annual International Society for Music Information Retrieval Conference (ISMIR). The paper, <Transfer Learning for Music Classification and Regression Tasks>, investigates different ways to exploit the knowledge captured within a deep convolutional network trained to tag songs for other relevant tasks. The main idea is to use not only the final hidden activation vector (as has been usual in computer vision) but the activations from all the layers, since some target tasks may require low-level details. Check it out
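The all-layers idea can be sketched in a few lines: run the input through a pretrained network, pool each layer's activation, and concatenate the pooled vectors into one feature vector for the downstream task. This is a minimal numpy sketch, not the paper's actual music-tagging ConvNet; the layer sizes and random weights below are hypothetical stand-ins for pretrained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained tagging network: a stack of ReLU layers with
# made-up sizes (the real model is a pretrained music-tagging ConvNet).
layer_dims = [128, 64, 32, 16]
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip([256] + layer_dims[:-1], layer_dims)]

def all_layer_features(x):
    """Pool the activation of EVERY layer and concatenate, instead of
    keeping only the last hidden vector: early layers contribute
    low-level details, later layers contribute abstract features."""
    feats = []
    h = x
    for W in weights:
        h = np.maximum(h @ W, 0.0)     # one ReLU layer of the frozen network
        feats.append(h.mean(axis=0))   # average-pool over the time axis
    return np.concatenate(feats)

x = rng.normal(size=(100, 256))        # e.g. 100 frames of input features
phi = all_layer_features(x)            # 128+64+32+16 = 240-d feature vector
```

The concatenated vector `phi` would then feed a simple downstream classifier or regressor, which can weight low- and high-level features as the target task requires.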