# Blog

## [NeurIPS 2022] How to request a reduced load for reviewers

NeurIPS 2022 is striving to recruit as many qualified senior area chairs, area chairs, and reviewers as we can, in order to ensure high-quality, timely, and relaxed reviewing of the incredible number of submissions we anticipate this year. In doing so, we have already invited more than 110 senior area chairs, more than 930 area chairs, and more than 13,000 reviewers. Furthermore, senior area chairs who accept our invitations are encouraged to nominate anyone for the roles of area chair and reviewer, and area chairs who accept our invitations are encouraged to nominate anyone for the role of reviewer.

## How to think of uncertainty and calibration … (2)

in the previous post (How to think of uncertainty and calibration …), i described a high-level function $U(y, p, \tau)$ that can be used for various purposes, such as (1) retrieving all predictions above some level of certainty and (2) calibrating the predictive distribution. of course, one thing that was swept under the rug was what this predictive distribution $p$ actually was. in this short follow-up post, i'd like to share some thoughts on what this $p$ is. to be specific, i will write $p(y|x)$ to indicate that this is a distribution over all possible answers $\mathcal{Y}$ returned by a machine
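to make the two uses of $U(y, p, \tau)$ concrete, here is a small sketch. the post does not commit to any particular form for $U$, so treating $\tau$ as a softmax temperature and adding a `confident_predictions` helper are my assumptions, not the author's definitions:

```python
import numpy as np

def softmax(z, tau=1.0):
    # temperature-scaled softmax: tau > 1 flattens, tau < 1 sharpens
    z = z / tau
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def U(y, p, tau=1.0):
    # probability assigned to answer y under the tau-calibrated distribution p
    return softmax(np.log(p), tau)[..., y]

def confident_predictions(p, tau, threshold):
    # (1) retrieve all predictions whose calibrated certainty clears the threshold
    q = softmax(np.log(p), tau)
    y_hat = q.argmax(axis=-1)
    keep = q.max(axis=-1) >= threshold
    return y_hat[keep]

# toy predictive distributions over |Y| = 3 answers
p = np.array([[0.7, 0.2, 0.1],
              [0.4, 0.35, 0.25]])
print(confident_predictions(p, tau=1.0, threshold=0.6))  # only the first row survives
```

with $\tau = 1$ the distribution is left untouched, so only the first example (certainty 0.7) clears the 0.6 bar; raising $\tau$ would flatten both rows and filter out more predictions.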

## How to think of uncertainty and calibration …

since i started Prescient Design almost exactly a year ago and Prescient Design joined Genentech about 4 months ago, i’ve begun thinking about (but not taking any action on) uncertainty and what it means. as our goal is to research and develop a new framework for de novo protein design that includes not only a computational component but also a wet-lab component, we want to ensure that we balance exploration and exploitation carefully. in doing so, one way that feels natural is to use the level of uncertainty in a design (a novel protein proposed by our algorithm) by our
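one standard way to turn design uncertainty into an exploration-exploitation trade-off (not necessarily the one meant above) is an upper-confidence-bound score; the `ucb_score` function and the `kappa` knob below are hypothetical names i introduce purely for illustration:

```python
import numpy as np

def ucb_score(mean, std, kappa=1.0):
    # mean: predicted fitness of each candidate design
    # std:  the model's uncertainty about that prediction
    # kappa trades off exploitation (small kappa) vs exploration (large kappa)
    return mean + kappa * std

means = np.array([0.9, 0.5, 0.4])   # predicted fitness of three candidate designs
stds = np.array([0.05, 0.50, 0.80]) # model uncertainty per design
order = np.argsort(-ucb_score(means, stds, kappa=1.0))
print(order)  # candidates ranked for the next wet-lab round
```

with `kappa=1.0` the most uncertain design wins the ranking despite its lower predicted fitness; shrinking `kappa` toward zero recovers pure exploitation.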

## Manifold mixup: degeneracy?

i’ve been thinking about mixup quite a bit over the past few years, ever since it was proposed in [1710.09412] mixup: Beyond Empirical Risk Minimization (arxiv.org). what a fascinatingly simple and yet intuitively correct idea! we want our model to behave linearly between any pair of training examples, which helps it generalize better to an unseen example that is likely to lie close to an interpolated point between some pair of training examples. if we consider the case of regression (oh, i hate the name “regression” so much…), we can write this down as minimizing $-\frac{1}{2} \| \alpha y$
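the (truncated) objective above can be sketched in code. i am assuming the standard mixup recipe for regression, with a Beta-distributed mixing coefficient and a squared-error loss on the mixed targets, since the formula is cut off:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_batch(x1, y1, x2, y2, beta=1.0):
    # mixup: train on convex combinations of example pairs so the model
    # is encouraged to behave linearly between training examples
    alpha = rng.beta(beta, beta)
    x_mix = alpha * x1 + (1.0 - alpha) * x2
    y_mix = alpha * y1 + (1.0 - alpha) * y2
    return x_mix, y_mix, alpha

def squared_loss(y_mix, y_pred):
    # squared error on the mixed target, matching the (truncated) objective
    return 0.5 * np.sum((y_mix - y_pred) ** 2)

x1, y1 = np.array([0.0, 1.0]), np.array([1.0])
x2, y2 = np.array([2.0, 3.0]), np.array([3.0])
x_mix, y_mix, alpha = mixup_batch(x1, y1, x2, y2)

# an affine model that fits both endpoints incurs (numerically) zero mixup loss,
# illustrating why linear-in-between behavior is the optimum of this objective
f = lambda x: np.array([x.mean() + 0.5])
print(squared_loss(y_mix, f(x_mix)))
```

the point of the toy check at the end: because `f` is affine and fits both endpoints exactly, its prediction at the interpolated input coincides with the interpolated target for every `alpha`, which is exactly the "linear between training examples" behavior mixup rewards.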

## The 8th SW Welcomes Girls

I was invited to give a short talk at the 8th SW Welcomes Girls event recently. it’s not often (in fact, it’s almost never) that i’m invited to (and have accepted to) give a talk on a non-scientific topic. this event, however, i couldn’t say no to. you can watch the whole event (1.5 hours long) at SW WELCOMES GIRLS 8TH – YouTube, and i’m sharing the script i used to record my talk below. sorry, it’s in Korean, and it’s way too long for me to translate it myself: 안녕하세요? 이런 좋은 행사에 초대해주셔서 감사합니다. 일단 간단히 제 소개부터 하겠습니다. ("Hello! Thank you for inviting me to such a wonderful event. First, let me briefly introduce myself.")