Only now have I visited my (maternal) grandmother's grave. One day last year (2021), I suddenly felt like calling my mother, so I made a video call. She was hurriedly walking somewhere and hung up, saying she would call back shortly. Well, I had a slightly uneasy feeling, but my mother soon called back, and I had a very short conversation with my grandmother, who was being admitted to the hospital in a wheelchair on the first floor. My heart was so heavy that I couldn't really hold a proper conversation, and that call turned out to be my last with her. My grandmother, always hearty and cheerful, would without fail bring up two stories whenever we talked on the phone. One was how, when I was little, I cried so bitterly, day and night without rest. I was a baby then, so I don't remember, but I must have cried so much that, even as more recent memories faded with time, this story never did. Now, Grandfather

[NeurIPS 2022] How to request a reduced load for reviewers

NeurIPS 2022 is striving to recruit as many qualified senior area chairs, area chairs and reviewers as we can, in order to ensure quality, timely and relaxed reviewing of the incredible number of submissions we anticipate this year. In doing so, we've already invited more than 110 senior area chairs, more than 930 area chairs and more than 13,000 reviewers. Furthermore, senior area chairs who accept our invitations are encouraged to nominate anyone for the roles of area chair and reviewer, and area chairs who accept our invitations are encouraged to nominate anyone for the role of reviewer.

How to think of uncertainty and calibration … (2)

in the previous post (How to think of uncertainty and calibration …), i described a high-level function $U(y, p, \tau)$ that can be used for various purposes, such as (1) retrieving all predictions above some level of certainty and (2) calibrating the predictive distribution. of course, one thing that was swept under the rug was what this predictive distribution $p$ was. in this short follow-up post, i'd like to share some thoughts on what this $p$ is. to be specific, i will use $p(y|x)$ to indicate that this is a distribution over all possible answers $\mathcal{Y}$ returned by a machine
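as a rough illustration of use (1), here is a minimal sketch that assumes a purely hypothetical concrete form for $U(y, p, \tau)$, namely the indicator of whether $p(y|x) \geq \tau$; the actual $U$ described in the previous post may well be defined differently.

```python
import numpy as np

# hypothetical concrete choice of U(y, p, tau): the indicator of whether
# the predictive probability assigned to answer y clears the certainty
# level tau. this is an assumption for illustration, not the post's U.
def U(y, p, tau):
    return p[y] >= tau

# toy predictive distribution p(y|x) over three possible answers
p = np.array([0.7, 0.2, 0.1])

# use (1): retrieve all predictions above the certainty level tau = 0.5
confident = [y for y in range(len(p)) if U(y, p, tau=0.5)]
```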

How to think of uncertainty and calibration …

since i started Prescient Design almost exactly a year ago and Prescient Design joined Genentech about 4 months ago, i’ve begun thinking about (but not taking any action on) uncertainty and what it means. as our goal is to research and develop a new framework for de novo protein design that includes not only a computational component but also a wet-lab component, we want to ensure that we balance exploration and exploitation carefully. in doing so, one way that feels natural is to use the level of uncertainty in a design (a novel protein proposed by our algorithm) by our
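to make the trade-off above slightly more concrete: one standard way (not necessarily the one we use) to fold a design's uncertainty into exploration vs. exploitation is an upper-confidence-bound style acquisition score, sketched below with ensemble disagreement standing in for uncertainty. all names and numbers here are illustrative.

```python
import numpy as np

# a minimal sketch, assuming uncertainty is estimated as the disagreement
# of an ensemble of property predictors over candidate designs. UCB-style
# acquisition is one standard way to balance exploration and exploitation;
# the post does not commit to any particular rule.
def ucb_scores(preds, beta=1.0):
    # preds: (n_models, n_designs) array of per-model predicted fitness
    mean = preds.mean(axis=0)  # exploitation: predicted quality
    std = preds.std(axis=0)    # exploration: epistemic disagreement
    return mean + beta * std

# two toy "models" scoring three candidate designs
preds = np.array([[0.9, 0.2, 0.5],
                  [0.7, 0.8, 0.5]])
best = int(np.argmax(ucb_scores(preds, beta=1.0)))
```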

Manifold mixup: degeneracy?

i’ve been thinking about mixup quite a bit over the past few years since it was proposed in [1710.09412] mixup: Beyond Empirical Risk Minimization. what a fascinatingly simple and yet intuitively correct idea! we want our model to behave linearly between any pair of training examples, which helps it generalize better to an unseen example that is likely to lie close to an interpolated point between some pair of training examples. if we consider the case of regression (oh, i hate this name “regression” so much…) we can write this down as minimizing $$-\frac{1}{2} \| \alpha y
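for reference, input mixup itself fits in a few lines. the sketch below assumes a regression setup with numpy arrays; the function name and defaults are illustrative, not taken from the paper's code.

```python
import numpy as np

# a minimal sketch of input mixup for regression, assuming numpy arrays;
# names and defaults here are illustrative, not from the original paper.
def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2     # interpolated input
    y = lam * y1 + (1.0 - lam) * y2     # interpolated target
    return x, y

# the model is then trained on pairs interpolated between examples
x, y = mixup(np.zeros(3), 0.0, np.ones(3), 1.0)
```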
