Manifold mixup: degeneracy?

i’ve been thinking about mixup quite a bit over the past few years since it was proposed in [1710.09412] mixup: Beyond Empirical Risk Minimization (arxiv.org). what a fascinatingly simple and yet intuitively correct idea! we want our model to behave linearly between any pair of training examples, which helps our model generalize better to an unseen example that is likely to be close to an interpolated point between some pair of training examples. if we consider the case of regression (oh i hate this name “regression” so much..) we can write this down as minimizing $$\frac{1}{2} \left\| \alpha y + (1-\alpha) y' - f(\alpha x + (1-\alpha) x') \right\|^2$$ over pairs of training examples $(x, y)$ and $(x', y')$ and a mixing coefficient $\alpha \in [0, 1]$.
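the interpolation above fits in a few lines of numpy. a minimal sketch follows, assuming the standard mixup recipe of sampling the mixing coefficient from a Beta(α, α) distribution; the function name `mixup_batch` is my own, not from the paper, and this is an illustration rather than the authors’ reference implementation:

```python
import numpy as np

def mixup_batch(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Convexly combine two (input, target) batches with a Beta(alpha, alpha) coefficient.

    Returns the mixed inputs, mixed targets, and the sampled coefficient lam,
    so that x = lam * x1 + (1 - lam) * x2 and likewise for y.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam
```

training then amounts to minimizing the usual squared error between $f(x)$ and the mixed target $y$, which is exactly the objective written above.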

The 8th SW Welcomes Girls

i was invited to give a short talk at the 8th SW Welcomes Girls event recently. it’s not often (in fact it’s almost never) that i’m invited to (and accept to) give a talk on a non-scientific topic. this event, however, i couldn’t say no to.. you can watch the whole event (1.5hr long) at SW WELCOMES GIRLS 8TH – YouTube, and i’m sharing the script i used to record my talk below. it was originally in Korean; it begins: “Hello, everyone. Thank you for inviting me to such a wonderful event. First, let me briefly introduce myself.”

Supporting female researchers and researchers from under-represented groups, together with CIFAR

if i had to pick organizations that have impacted my current career path most, CIFAR would be very near (if not at) the top of this list. there are a few reasons behind this. first, CIFAR started a program named “Neural Computation & Adaptive Perception” (NCAP) in 2004, supporting research in artificial neural networks, which has become a dominant paradigm in machine learning as well as, more broadly, artificial intelligence and all adjacent areas, including natural language processing and computer vision. i started my graduate study in 2009 with a focus on restricted Boltzmann machines and graduated in 2014 with a

Restricted Boltzmann machines or contrastive learning?

my inbox started to overflow with emails that urgently require my attention, and my TODO list (which doesn’t exist outside my own brain) started to randomly remove entries to avoid overflowing. of course, this is the perfect time for me to think of some random stuff. this time, this random stuff is contrastive learning. my thought on this was sparked by Lerrel Pinto’s message on #random in our group’s Slack responding to the question “What is wrong with contrastive learning?” thrown by Andrew Gordon Wilson. Lerrel said, “My understanding is that getting negatives for contrastive learning is difficult.”

Ho-Am Prize & Lim Mi-Sook Scholarship (임미숙 장학금) at KAIST

NOTE: this post is the third part of a three-post series. see here to learn about the Ho-Am Prize in Engineering i was just awarded.

Ho-Am Prize & Scholarship for Macademia at Aalto University
Ho-Am Prize & 백규고전학술상 (Baek-Gyu Scholarly Award for Classics)
Ho-Am Prize & Lim Mi-Sook Scholarship (임미숙 장학금) at KAIST

Lim Mi-Sook Scholarship (임미숙 장학금)

i graduated from Korea Advanced Institute of Science and Technology (KAIST) with a Bachelor of Science (B.Sc.) degree. i majored in computer science, which is the subject i’ve never left so far, having become a professor of computer science (and