[Initial posting on Nov 29 2020] [Updated on Nov 30 2020] Added a section about the scaling law w.r.t. the model size, per request from Felix Hill. [Updated on Dec 1 2020] Added a paragraph referring to Dauphin & Bengio's "Big Neural Networks Waste Capacity". This is a short post on why I thought (or more like imagined) the scaling laws from <Scaling Laws for Autoregressive Generative Modeling> by Henighan et al. "[are] inevitable from using log loss (the reducible part of KL(p||q))" when "the log loss [was used] with a max entropy model", which was my response to Tim Dettmers's
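The decomposition behind that parenthetical can be written out explicitly (a standard identity, not quoted from the post itself): the expected log loss of a model q under the true data distribution p splits into an irreducible entropy term and a reducible KL term.

```latex
\mathbb{E}_{x \sim p}\left[-\log q(x)\right]
= \underbrace{H(p)}_{\text{irreducible entropy}}
\;+\; \underbrace{\mathrm{KL}(p \,\|\, q)}_{\text{reducible part}}
```

Minimizing the log loss can at best drive the KL term to zero; the entropy of the data itself sets the floor, which is why only the KL(p||q) term is called "reducible".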
Earlier this month (Nov 2020), at the Samsung AI Forum 2020, I was one of the five recipients of the inaugural Samsung AI Researcher of the Year Award from the Samsung Advanced Institute of Technology (SAIT). Samsung has been supporting my research ever since I was a postdoc at Mila in Montreal, and without their support I wouldn't have been able to support all my PhD students (NSF, I'm looking at you!). Because of this prolonged support, I had already been grateful to Samsung even before this award, and now I am even more thankful. It was also a humbling experience
Click here to jump to my foreword and skip the background. If you want to read the foreword in PDF, click here. If you're interested in the tables of contents from the series, click here. Here's my video message for their publication celebration: https://youtu.be/O78XdDYRZfc. Background: Right before COVID-19 struck NY heavily this past spring, K-12 teachers from Busan, Korea, stopped by NYC on their trip to the US to study various AI education strategies, and asked me for a short meeting. Frankly, I was quite skeptical about this meeting, assuming it was their vacation in disguise.
[WARNING: there is nothing "WOW" nor technical about this post, just a piece of thought I had about GPT-3 and few-shot learning.] Many aspects of OpenAI's GPT-3 have fascinated and continue to fascinate people, including myself. These aspects include the sheer scale, in terms of the number of parameters, the amount of compute, and the size of data; the amazing infrastructure technology that has enabled training this massive model; etc. Of course, among all these fascinating aspects, meta-learning, or few-shot learning, seems to be the one that fascinates people most. The idea behind this observation of GPT-3 as a
Update on October 23 2020: After I wrote this post, I was invited to give a talk on this topic of the social impact & bias of AI in the course <Ethics in AI> by Prof. Alice Oh at KAIST. I'm sharing the slide set here: Unreasonably shallow deep learning [slides]. There has been a series of news articles in Korea about AI and its applications that have been worrying me for some time. I've often ranted about them on social media, but I was told that my rant alone is not enough, because it does not tell others why I ranted about