[NeurIPS’22] Chasing reviewers

as some of you may have noticed, i was one of the program chairs of NeurIPS’22, which just ended last Friday (December 9, 2022). it was a two-week-long conference, with the first week held in person in New Orleans, followed by a virtual week. program chairs were mostly tasked with running the review process for the main track of the conference and inviting keynote speakers, while other organizing committee members took care of various other aspects of the conference, including expos, workshops, tutorials, the datasets and benchmarks track, social events, affinity workshops and many more.

My opening statement at the ICML 2022 Debate

i was honoured to participate in the ICML Debate 2022 on the topic of “Progress towards achieving AI will be mostly driven by engineering, not science”. the debate was in the British Parliamentary style, which i was not familiar with at all but found interesting. i was assigned to the opposition party and was designated as the “leader”, which meant i had to open the debate from the opposition side following the opening from the proposition. the proposition party consisted of Sella Nevo, Maya R. Gupta and François Charton. Been Kim was unfortunately unable to participate, although she would’ve been

Reading others’ reviews

it’s typically not part of the formal training of PhD students to learn how to write a review. certainly there are materials online that aim to address this issue by providing various tips & tricks for writing a review, such as Reviewing Advice – ACL-IJCNLP 2021 (aclweb.org), but it’s not easy to learn to write something from a bullet-point list of what should be written. it’s thus often left to student authors to learn to review by reading the reviews of their own papers. this learning-to-review-by-reading-one’s-own-reviews strategy has some downsides. a major one is that people are often

How to think of uncertainty and calibration … (2)

in the previous post (How to think of uncertainty and calibration …), i described a high-level function $U(y, p, \tau)$ that can be used for various purposes, such as (1) retrieving all predictions above some level of certainty and (2) calibrating the predictive distribution. of course, one thing that was swept under the rug was what this predictive distribution $p$ was. in this short follow-up post, i’d like to share some thoughts about what this $p$ is. to be specific, i will use $p(y|x)$ to indicate that this is a distribution over all possible answers $\mathcal{Y}$ returned by a machine
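
to make the two uses above concrete, here’s a minimal sketch in Python. the function names, the dictionary representation of $p(y|x)$ and the temperature-scaling recipe are illustrative assumptions for this excerpt, not the definitions from the original post:

```python
import numpy as np

# a minimal sketch (assumptions, not the post's actual definition): one way
# to read U(y, p, tau) is as a thresholding rule on the predictive
# distribution p(y|x). here p is a dict mapping each candidate answer y in Y
# to its probability, and tau is the required level of certainty.

def U(y, p, tau):
    """Return True if the answer y is certain enough under p."""
    return p.get(y, 0.0) >= tau

def retrieve_certain(p, tau):
    """Use (1): retrieve all predictions above the certainty level tau."""
    return [y for y in p if U(y, p, tau)]

def calibrate(logits, temperature):
    """Use (2): one common calibration recipe (assumed here) --
    temperature scaling of the logits before the softmax."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# usage: a toy predictive distribution over three candidate answers
p = dict(zip(["a", "b", "c"], calibrate([2.0, 1.0, 0.1], temperature=1.5)))
print(retrieve_certain(p, tau=0.5))  # only answers with p(y|x) >= 0.5
```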

How to think of uncertainty and calibration …

since i started Prescient Design almost exactly a year ago, and since Prescient Design joined Genentech about 4 months ago, i’ve begun thinking about (but not taking any action on) uncertainty and what it means. as our goal is to research and develop a new framework for de novo protein design that includes not only a computational component but also a wet-lab component, we want to ensure that we balance exploration and exploitation carefully. one way that feels natural in doing so is to use the level of uncertainty in a design (a novel protein proposed by our algorithm) by our