i often find myself extremely embarrassed, because i learn of concepts in machine learning that i should’ve known as a professor of machine learning but had never even heard of before. one recent example was expectile regression; i ran into this concept while studying Kostrikov et al. (2021) on implicit Q-learning for offline reinforcement learning together with Daekyu, who is visiting me from Samsung. in their paper, Kostrikov et al. present the following loss function to estimate the $\tau$-th expectile of a random variable $X$: $$\arg\min_{m_{\tau}} \mathbb{E}_{x \sim X}\left[ L_2^\tau (x - m_{\tau}) \right],$$ where $L_2^\tau(u) = |\tau - \mathbb{1}(u < 0)|\, u^2$.
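to get a feel for it, the $\tau$-th expectile can be estimated directly from samples by gradient descent on this asymmetric squared loss. here is a minimal numpy sketch (the function names and hyperparameters are my own, not from the paper); at $\tau = 0.5$ the weights are symmetric and we recover the plain mean:

```python
import numpy as np

def expectile_loss_grad(m, x, tau):
    # gradient of E[ |tau - 1(u < 0)| u^2 ] with respect to m, where u = x - m
    u = x - m
    w = np.where(u < 0, 1.0 - tau, tau)   # the asymmetric weight |tau - 1(u < 0)|
    return -2.0 * np.mean(w * u)

def expectile(x, tau, lr=0.1, n_steps=2000):
    # plain gradient descent; the loss is convex in m, so this converges
    m = float(np.mean(x))  # initialize at the mean, which is the 0.5-expectile
    for _ in range(n_steps):
        m -= lr * expectile_loss_grad(m, x, tau)
    return m
```

for $\tau > 0.5$ the positive residuals are weighted more heavily, so the estimate is pulled above the mean, which is exactly the property Kostrikov et al. exploit to approximate a maximum without an explicit max.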

## Defining emergence

so, apparently, emergence became a hot topic on twitter while i was away in Kigali attending ICLR, riding moto-taxis, injuring myself (and breaking my phone) while running and tracking it, seeing a group of majestic mountain gorillas and being back at AIMS Rwanda after 4 years. i do not want to discuss any particular paper/tweet/blog, because this topic seems to attract a weird set of people arguing for weird things, when in fact there are just a couple of different views of a single phenomenon, which is only natural in science and engineering. that said, if

## When do duplicates/frequencies matter in classification?

an interesting piece of urban legend, or wisdom, is that a classifier we train will work better on examples that appear more frequently in the training set than on those that are rare. that is, the existence of duplicates or near-duplicates in the training set affects the decision boundary learned by a classifier. for instance, imagine training a face detector for your phone’s camera in order to determine which filter to apply (one optimized for portraits and the other for other types of pictures). if most of the training examples for building such a face detector were taken in bright daylight, one often without
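this piece of wisdom is easy to probe in a toy setting: duplicating examples of one class is equivalent to upweighting them, and the learned decision boundary shifts accordingly. below is a small sketch with a hand-rolled 1-d logistic regression; all the numbers and the two-Gaussian setup are made up purely for illustration:

```python
import numpy as np

def fit_logreg_1d(x, y, lr=0.5, n_steps=3000):
    # plain gradient descent on the logistic loss with a bias term
    w, b = 0.0, 0.0
    for _ in range(n_steps):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))   # predicted P(y = 1 | x)
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return w, b

rng = np.random.default_rng(0)
x0 = rng.normal(-1.0, 1.0, 200)   # class 0 examples
x1 = rng.normal(+1.0, 1.0, 200)   # class 1 examples

# a balanced training set vs. one where every class-1 example appears 5 times
x_bal = np.concatenate([x0, x1])
y_bal = np.concatenate([np.zeros(200), np.ones(200)])
x_dup = np.concatenate([x0, np.tile(x1, 5)])
y_dup = np.concatenate([np.zeros(200), np.ones(1000)])

w_b, b_b = fit_logreg_1d(x_bal, y_bal)
w_d, b_d = fit_logreg_1d(x_dup, y_dup)

# the boundary (where p = 0.5) sits at -b/w; with class 1 duplicated,
# it moves toward class 0, i.e., the classifier predicts class 1 more eagerly
print(-b_b / w_b, -b_d / w_d)
```

nothing surprising here: the duplicates act as a larger effective prior for their class, which is precisely why near-duplicates in real training sets can silently tilt the decision boundary.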

## Three faces of sparsity: nonlinear sparse coding

it’s always puzzled me what sparsity means when computation is nonlinear, i.e., when the observation is decoded from a sparse code using nonlinear computation, because the sparse code can very well be turned into a dense code along the nonlinear path from the original sparse code to the observation. this made me write a short note a few years back, and i thought i’d share my thoughts on sparsity here with you: in my mind, there are three ways to define sparse coding. these are equivalent if we constrain the decoder to be linear (i.e., $x = \sum_{i=1}^{d’} z_i w_i$, where the $w_i$’s are the dictionary vectors)
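for concreteness, the linear case is the classic sparse coding setup: infer a code $z$ that reconstructs $x$ through a linear dictionary while paying an $\ell_1$ penalty for density. a standard way to do this inference is ISTA; the sketch below is generic textbook code (not from my note), with a random dictionary standing in for a learned one:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of the l1 norm: shrink toward zero by t
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(x, W, lam=0.1, n_steps=200):
    # minimize 0.5 * ||x - W z||^2 + lam * ||z||_1 over the code z
    L = np.linalg.norm(W, 2) ** 2          # Lipschitz constant of the smooth part
    z = np.zeros(W.shape[1])
    for _ in range(n_steps):
        # gradient step on the reconstruction term, then soft-threshold
        z = soft_threshold(z + W.T @ (x - W @ z) / L, lam / L)
    return z
```

the soft-thresholding step produces exact zeros, so the inferred code is literally sparse; it is exactly this guarantee that becomes murky once the decoder $W z$ is replaced by a nonlinear map.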

## Are JPEG and LM similar to each other? If so, in what sense, and is this the real question to ask?

last night, Douwe Kiela sent me a link to this article by Ted Chiang. i was already quite drunk back then, but i quickly read the whole column and posted the following tweet: Delip Rao then retweeted it and said that he does not “buy his lossy compression analogy for LMs”, in particular in the context of JPEG compression. Delip and i exchanged a few tweets earlier today, and i thought i’d lay out here in a blog post why, as i described in the following tweet, i think LM and JPEG share the same conceptual background: one way in which I