Google Faculty Award: 2020

I am happy to share the news that Cristina Savin and I have been selected to receive the Google Faculty Research Award this year in the area of computational neuroscience, on the topic of "Online Meta-Learning". See https://research.google/outreach/past-programs/faculty-research-awards/ for the list of awardees.

NYU Center for Data Science: What is intelligence?

A few weeks ago there was an open house at the NYU Center for Data Science intended for NYU faculty members. As one of the early members of the Center (I know! already!), I was given an opportunity to share with the audience why I joined the Center and what my experience there has been so far. Although I'm much more familiar with giving a research talk using a set of slides, I decided to try something new and give a talk without any slides. Of course, this was totally new to me, and I couldn't help but prepare a script in advance.

A short note on "Rebooting AI" by Marcus & Davis

Disclaimer: I received a hard copy of "Rebooting AI" from the publisher, although I had by then already purchased the Kindle version of the book on Amazon. I only gave the book a quick look on my flight between UIUC and NYC and wrote this brief note on my flight back to NYC from Chicago. I also felt it would be good to have even a short note by a machine learning researcher to balance all the praise from "Noam Chomsky, Steven Pinker, Garry Kasparov" and others. "Rebooting AI" is a well-written piece (somewhat hastily) summarizing the current state of …

Discrepancy between GD-by-GD and GD-by-SGD

The ICLR deadline is approaching, and of course, it's time to write a short blog post that has absolutely nothing to do with any of my manuscripts in preparation. I'd like to thank Ed Grefenstette, Tim Rocktäschel and Phu Mon Htut for fruitful discussion. Let's consider the following meta-optimization objective function: $$\mathcal{L}'(D'; \theta_0 - \eta \nabla_{\theta} \mathcal{L}(D; \theta_0))$$ which we want to minimize w.r.t. $\theta_0$. It has become popular recently, thanks to the success of MAML and its earlier and more recent variants, to use gradient descent to minimize such a meta-optimization objective function. The gradient can be written down as* $$\nabla_{\theta_0} \mathcal{L}'(D'; \theta_0 - \eta \nabla_\theta \mathcal{L}(D; \theta_0)) = \ldots$$
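The excerpt above cuts off mid-equation. For reference (this is the standard chain-rule expansion of the meta-gradient, not text quoted from the rest of the post), differentiating through the single inner gradient step gives $$\nabla_{\theta_0} \mathcal{L}'(D'; \theta_0 - \eta \nabla_\theta \mathcal{L}(D; \theta_0)) = \left(I - \eta \nabla^2_\theta \mathcal{L}(D; \theta_0)\right) \nabla_{\theta'} \mathcal{L}'(D'; \theta')\Big|_{\theta' = \theta_0 - \eta \nabla_\theta \mathcal{L}(D; \theta_0)},$$ where the second-order term involving the Hessian of $\mathcal{L}$ is precisely what first-order approximations drop.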

Sharing some good news and some bad news

I have some news, both good and bad, to share with everyone around me, because I've always been a big fan of transparency and also because I've recently realized that it can easily become awkward when those who know of this news and those who don't are in the same place with me. Let me begin. The story, which contains all of this news, starts sometime in mid-2017, when I finally decided to apply for permanent residence (a green card) after spending three years here in the US. As I'm already in the US, the process consists of two stages. In the first stage, I, …
