<The Atomic Human> by Neil Lawrence

i can’t recall exactly, but it was sometime in 2013 when Neil Lawrence visited Aalto University (it was january, apparently!). he gave a talk in a pretty small lecture room which was completely packed (and i was there as well.) he talked about his years-long effort to introduce a probabilistic interpretation (and thereby extensions) to (hierarchical) unsupervised learning, which was back then being consumed by deep learning based approaches. that’s when i first clearly learned the intuition and motivation behind the so-called GP-LVM (Gaussian process latent variable model). that was beautiful, or to be precise, how neil delivered his inspiration, motivation and intuition was beautiful; on the technical side, a lot of these algorithms in retrospect look pretty similar to each other, and no one particular method is more beautiful than the others (i mean .. if i use dropout, stochastic gradient descent and ensembling to train a deep autoencoder, is it really that different from a GP-LVM? ;)) it perhaps sounded more appealing to me back then because his motivation was not to build an intelligent system he had dreamt of building ever since he was a teenager reading Asimov or whatever else.
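
as a side note, here is purely my own toy sketch (not anything from the book or from neil’s talk) of the core idea behind a GP-LVM: instead of learning an encoder the way an autoencoder does, you treat the latent coordinates themselves as parameters and optimize them to maximize the GP marginal likelihood of the observed data. the kernel, hyperparameters and dimensions below are arbitrary assumptions for illustration only.

```python
# a minimal GP-LVM sketch: learn latent coordinates X for observed data Y
# by maximizing the marginal likelihood of D independent GPs sharing one kernel.
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, variance=1.0, lengthscale=1.0):
    # squared-exponential kernel: K[i, j] = variance * exp(-||x_i - x_j||^2 / (2 * lengthscale^2))
    sqdist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sqdist / lengthscale ** 2)

def neg_log_marginal(x_flat, Y, latent_dim, noise_var=0.1):
    # -log p(Y | X): D independent GP regressions over the same latent inputs X
    N, D = Y.shape
    X = x_flat.reshape(N, latent_dim)
    K = rbf_kernel(X) + noise_var * np.eye(N)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))  # K^{-1} Y
    return 0.5 * np.sum(Y * alpha) + D * np.sum(np.log(np.diag(L)))

# toy data: 20 five-dimensional observations, compressed into a 2-d latent space
rng = np.random.default_rng(0)
Y = rng.normal(size=(20, 5))
x0 = 0.1 * rng.normal(size=20 * 2)
res = minimize(neg_log_marginal, x0, args=(Y, 2), method="L-BFGS-B")
X_latent = res.x.reshape(20, 2)  # the learned latent codes, much like an autoencoder's bottleneck
print(X_latent[:3])
```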

it was only later that i learned neil started off as some kind of software engineer working on oil rigs and only came back to academia for research in artificial intelligence after he realized the need for some kind of machine intelligence to solve that problem better (and perhaps more problems.) this was and continues to be refreshing, because a lot of, if not most, scientific leaders in artificial intelligence (AI) were motivated and inspired to pursue their careers as well as their research directions by reading sci-fi novels that depict AI (and often intelligent robots.) indeed, quite a lot of more junior scientists and engineers in the field of AI are similarly motivated to work on AI by similar experiences in their earlier years. but not all of us are, and as part of this “not all of us”, neil’s take and thoughts on AI and its impacts and consequences had already resonated with me quite well.

it’s therefore not a surprise that i pre-ordered neil’s book <The Atomic Human> as soon as i could. yes, this is not a paid endorsement; it’s me who paid neil by buying his wonderful book.

this is a masterfully written popular science book on AI. neil seamlessly connects temporally and spatially separated topics into a single thread. he is able to go from (hypothetical) purely reflexive organisms, to higher-level organisms that can both reflex and reflect, all the way to bigger organizations consisting of many such organisms, such as the allies and the axis during the second world war and how eisenhower made the perhaps-reckless or perhaps-ill-informed but ultimately-successful call on D-day. along the way, he seamlessly talks about the industrial revolution and the more recent information revolution, drawing analogies between steam engines, reflexive organisms and the instinctive maneuvers of fighter-jet pilots, cyclists and drivers. this connects very naturally to the idea of dual process theory (or system 1 vs. system 2, by kahneman.) he covers a series of philosophers, from ancient greek as well as ancient chinese philosophers to bertrand russell and wittgenstein, explaining humanity’s (successful and failed) efforts at formalizing thinking. he skillfully connects the latter with another thread that starts from george boole and runs through the second world war, bletchley park, turing, shannon and ultimately wiener, rosenblatt and the Dartmouth summer workshop in 1956, to touch upon the beginning (and perhaps fall) of artificial intelligence. he even connects all these seemingly distant ideas with the rise (and perhaps the fall) of machine-learning-driven web services, including social media and e-commerce, and (partly) explains how this perspective helps us understand how the russian interference in the 2016 US election worked.

but this book is not only a historical account of modern AI (or, more broadly, computational approaches to statistics, logic and generalization) but is also about modern AI itself, which includes neil himself. starting from george boole of boolean fame, he walks us all the way to geoff hinton (yes, that nobel prize winning hinton!) via the resurrection of rosenblatt’s dream in the legendary PDP (parallel distributed processing) and yann lecun’s way-ahead-of-its-time video demo of convolutional networks in action from the late 90s. in doing so, neil strikes pretty much the perfect balance between praising and paying respect to these visionaries, who persisted through decades of neglect, and telling us that this is but a small step in a long line of thought on computation, logic and probability, and will give way to future scientists, innovations and steps (or so i read.)

there is one theme that underlies all these threads of thought, presented extremely well by neil in this book. that theme is the inevitability of uncertainty, or in neil’s words, laplace’s goblins (as opposed to the famous laplace’s demon.) intelligence emerges as a way for us to cope with these ever-present laplace’s goblins, and in its emergence, we end up with intricate social (planners vs. doers) and intellectual (reflex vs. reflect) structures. these laplace’s goblins are the driving force behind the necessity of trust among intelligent beings, like us, as well as everything that composes the world in which we exist. the rises and falls of computational paradigms, inclusive of mathematics, logic, probability and more, are often due to our inability to properly appreciate/acknowledge these goblins. of course, the rises and falls of countries are also often due to these laplace’s goblins, as explained by neil with the example of D-day, when Rommel (and neil’s grandfather) decided to take a day off, failing to anticipate the allied landing.

toward the end of the book, neil skillfully ties all these threads of thought together to provide us with a much-needed, nuanced perspective on modern AI and its impact on society. it is not about an absolute level of intelligence (whatever that is, and it is probably not defined well enough to mean anything) of AI, but rather about the relationship between humans and machines. how should we as humans interact with these human-analogue machines, when we are tempted to anthropomorphize (oh i hate this word, and i agree with neil that we should just use “anthrox”) these machines to the extreme, but these machines do not look back at us in the same way? to what degree do we as humans want to give up our own decision-making autonomy to these human-analogue machines, when these machines do not share the level of embodiment we have and thereby do not account for consequences in the ways that we would? neil poses many questions but does not impose his own answers on us readers (unlike many others.)

i personally never liked sci-fi much when i was growing up. that’s still true today. i got to know AI as something i could work on myself only when i started my master’s program. in this sense, AI has always been the best tool for solving challenging problems to me, and only after starting to work on AI did i begin to ask myself the question of “what intelligence is”. perhaps this is why i could read and enjoy this book much more than any other pop-sci book on AI. it gives me a narrative of AI that i can follow, empathize with and agree with. for this, i thank neil for this excellent book.

finally, a fair warning. this is not an easy book to read. neil’s threading of topics that span thousands if not millions of years and multiple continents is so masterful that even a moment of daydreaming means you will have to trace back several paragraphs, if not pages, to figure out where you were. if you however pay full attention to the book (i didn’t, and i will have to revisit this book a few more times in the future to grasp it fully,) you will find it delightful and also extremely enlightening, especially in this dizzying new era of AI.
