Disclaimer: I received a hard copy of <Rebooting AI> from the publisher, although I had by then already purchased the Kindle version of the book myself on Amazon. I only gave the book a quick look on my flight between UIUC and NYC and wrote this brief note on my flight back from Chicago to NYC. I also felt it would be good to have even a short note by a machine learning researcher to balance all the praise from “Noam Chomsky, Steven Pinker, Garry Kasparov” and others.
<Rebooting AI> is a well-written (if somewhat hasty) piece summarizing the current state of artificial intelligence (or perhaps more accurately machine learning) in terms of both research and deployment. Those who have not been in the field themselves will appreciate the effort of the authors in gathering various recent (and old) findings that succinctly describe what we can and should expect from the current technology, and what we cannot. To me, and perhaps to some of my colleagues in the field of deep learning (and, slightly more broadly, machine learning), which is often the target of the authors’ skepticism (to be fair, the authors demonstrate healthy skepticism toward every other existing technology in machine learning and artificial intelligence as well), the book feels relatively light despite its grand reception by various folks on social media.
Why do I feel this way? Perhaps it’s because I could classify the failure modes of the current technology, which are presented in this book as surprising findings, into two categories. The first category consists almost exclusively of failure cases that have been reported by machine learning researchers themselves. That is, contrary to what I felt the book was implying (either implicitly or explicitly), it is machine learning researchers who are at the frontier of discovering, investigating, and trying their best to address these weaknesses of the current technology. The second category consists of failures that were found largely by the authors themselves manually playing around with (or more seriously testing) some of the products or demos that boast of employing the latest technology. Whether this limited interaction (everyone has only 24 hours a day, without exception) is enough depends on what kind of argument these failure cases are used to support, and some of the cases in this book are refreshing, as they do clearly demonstrate weak aspects of those systems. The empirical side of me, however, finds it a bit less satisfying to see a scientific argument made on the basis of a few manually selected examples. In summary, contrary to the authors’ implication, these problems are known and are being actively discovered by AI researchers (in particular ML researchers), and we are actively seeking to tackle them, although it’s rare for journalists or pundits to talk about this work compared to fancier news, e.g., Silicon Valley acquisitions/mergers/funding of supposedly-AI companies.
Yet another reason might be that the book does not really provide a clear, verified (or even verifiable) way to “reboot” AI, or even a way to think about approaching the problem of AI. In short, there were too many instances of “seems to”, “will need to”, “should”, “is pretty clear”, and other uncertain, perhaps risk-avoiding terms whenever the authors tried to argue for the importance (or rather the necessity) of a certain direction or method they “pretty clearly” believe a general AI system “seems to” require. The empirical side of me was struck again and again whenever I ran into these statements; that is, if we can neither prove a claim somewhat rigorously nor demonstrate it empirically and convincingly, my scientific trust in it tends to go down. In the latter case (empirical demonstration) especially, how convincing the demonstration is correlates almost directly with my trust, and sadly I could not find much of that in this book. For instance, I was much more convinced of the importance of common sense, which the authors emphasized over and over, by Yejin Choi of UW, who showed me, over beer in Chicago two days ago, her latest work on natural-language-based learning of common sense, than by the arguments in this book. This is of course not to say that the authors’ proposals/arguments are incorrect or entirely unconvincing. It is just that, as I mentioned earlier, they feel lighter than what I would’ve expected from the title <Rebooting AI> and the stature of the authors, Gary Marcus and Ernest Davis, both of whom I know in person.
This brief note on what I thought of <Rebooting AI> has concentrated mostly on the first part (which arguably takes up most of the book), which is mainly about the technological side of AI. For me, it was more enjoyable to read the second part (or the last part), which discusses the true dangers/consequences of AI, as perceived by the authors, beyond the usual straw-man argument about humanity’s extinction by superintelligence. I wonder what researchers in AI safety or the ethical use of AI/ML think of this second part. Would they also find it too light, as I found the first part, though without sacrificing correctness? If so, that would ironically imply that the authors have done a commendable job of summarizing various recent developments (and non-developments) in AI/ML, while nicely blending in their own views and research, so as to pique the interest of bystanders and educate them to the point of being aware of these developments and their potential consequences/concerns.
<Rebooting AI> reads a bit too light for my taste, but that is almost certainly due to my own involvement in the field of AI as a researcher and educator. Taking a small step back from my current position, I believe it was necessary and perhaps timely for some book to succinctly summarize both the upsides and downsides of the current state of AI for laypersons (as in anyone who is not following the non-stop flood of academic papers in the field), and it is not easy to imagine a better person (or a better team of people) to do so than Gary and Ernie.
In short, I would recommend that my parents read <Rebooting AI> (once a Korean translation becomes available), although they might feel sad that my name wasn’t mentioned even once when the improvement in Google Translate was described ;). If your parents are not AI researchers, I’d suggest you recommend it to them as well. I would not, however, find it necessary for AI researchers themselves to read this book, unless they want a short but interesting discussion of trustworthy AI toward the end. Of course, if you want to have a Twitter or Facebook debate with Gary, I guess it wouldn’t hurt to give the book a quick look (although I don’t find it too necessary).