In Artificial Unintelligence: How Computers Misunderstand the World, Meredith Broussard adds to the growing literature exploring the limits of artificial intelligence (AI) and techno-solutionism, showing how AI's socially constructed nature replicates existing structural inequalities. Calling for greater racial and gender diversity in tech, the book offers a timely, accessible and often entertaining account that sets the record straight on what current approaches to AI are and are not capable of delivering, writes Nikita Aggarwal.
This post originally appeared on LSE Review of Books. If you would like to contribute to the series, please contact the managing editor of LSE Review of Books, Dr Rosemary Deller, at email@example.com.
Take a look at the following translation from Google Translate:
You don’t need to speak Italian to know that something’s not quite right. In English, no-one ever talks of a ‘dead bathroom’. It makes no literal sense—bathrooms are not natural creatures that live and die. Figuratively, we might say that a place is dead, in the sense of being empty or deserted, but it would be highly unusual to describe a bathroom in this way. Adjusting for the obvious error, our brains infer that the phrase probably means the bathroom is broken, or closed. Yet, whilst most English-speaking humans can understand these distinctions and infer the correct meaning, computers cannot.
Google Translate essentially treats translation as a prediction task based on observed patterns between the source and target languages. It is built using machine learning—notably, ‘deep learning’ with artificial neural networks, which relies on large quantities of data (sample translations between different languages) and a lot of maths (learning algorithms and statistical models) to build a computational system that can perform the translation task without being explicitly programmed to do so. The more translated examples the neural net is exposed to, the better it gets at predicting a new translation. Ultimately, however, Google Translate cannot actually understand the meaning of the words or phrases it is translating, which is why it can neither identify the nonsense in ‘dead bathroom’ nor infer a more suitable translation.
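The gap between prediction and understanding can be caricatured in a few lines of Python. This toy word-for-word lookup is nothing like Google's actual neural architecture, which learns continuous representations from millions of sentence pairs; the vocabulary below is an invented illustration. But it shows the core limitation Broussard describes: output is predicted from observed pairings, so a literal yet nonsensical translation sails through unflagged.

```python
# Toy illustration: translation as pattern-matching, not understanding.
# The "training data" here is a hypothetical set of observed
# Italian -> English word pairings.
observed_pairs = {
    "il": "the",
    "bagno": "bathroom",
    "è": "is",
    "morto": "dead",  # learned from contexts like "è morto" = "is dead"
}

def translate(sentence: str) -> str:
    """Map each word to its most frequently observed pairing."""
    return " ".join(observed_pairs.get(word, word)
                    for word in sentence.lower().split())

print(translate("Il bagno è morto"))
# -> "the bathroom is dead": every word is mapped "correctly",
# yet nothing in the system can notice the result is nonsense.
```

A neural model replaces the lookup table with learned statistical associations, which makes its predictions vastly more fluent, but the underlying relationship to meaning is the same: there is none.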
It is precisely these computational limits that are exposed in Artificial Unintelligence: How Computers Misunderstand the World, a timely and informative book by Meredith Broussard that adds to the growing literature on the limits of artificial intelligence (AI) and techno-solutionism. In a similar vein to Weapons of Math Destruction by Cathy O’Neil, Automating Inequality by Virginia Eubanks and The Whale and the Reactor, the classic by Langdon Winner, the author is at pains to emphasise the socially constructed nature of technologies, which are inevitably embedded with the cultural and political values of their designers and developers. Tools like machine learning will simply perpetuate structural social inequalities, such as racial and gender-based discrimination, if these biases are not consciously designed out of the system—for example, through responsible data practices and rigorous model feedback testing.
Broussard adds to this narrative with examples from her own experience as a programmer and data journalist, calling for greater racial and gender diversity in tech, particularly in Silicon Valley. Lamenting the failure of self-governance within the tech community, she further argues for increased ethical and legal education of developers. In this respect, the book adds to a recently resurgent critique of the masculinised tech ‘bro’ culture—one that seems to valorise reckless experimentation and the cult of genius, with little regard for safety or social, ethical and legal values.
This book makes three important contributions to the existing literature. First, it grounds the sociological analysis in an accessible technical account of the key computational processes involved in machine learning. In doing so, it enables non-technical readers to better appreciate the computational limits of machine learning and the fallacy of trying to program ‘intelligence’ into machines. ‘Artificial intelligence’ is a misnomer—and the ‘magical thinking’ around AI is accordingly misplaced. Machines will never actually be intelligent, in the sense of having consciousness, sentience, common sense or imagination. Rather, they can be instructed to perform specific tasks that could be considered ‘intelligent’ if performed by a human (so-called ‘narrow AI’). (The soon-to-be-published Rebooting AI: Building Artificial Intelligence We Can Trust, by Gary Marcus and Ernest Davis of NYU, is another excellent effort at demystifying and level-setting our expectations of AI.)
Building on this technical account and a series of case studies, the book’s second important contribution is to offer a convincing case against ‘techno-chauvinism’—the unrealistic belief that technology, and in particular AI, can solve everything. The author’s assessment of the state of the art in ‘self-driving’ cars is especially sobering. The clear message of this book is that human beings and real-world social problems are too complex—too random and unpredictable—to be solved exclusively through computation. Maths works best for well-defined problems in well-defined situations with well-defined parameters. It works well for specific tasks—for example, programming a car to parallel park automatically—but cannot scale effectively to more complex, multi-domain tasks—such as actually driving a car.
Indeed, as Broussard argues, there may be better and more obvious low-tech solutions for some of these problems—for example, greater investment in public transport or, in the educational context, simply supplying schools with more textbooks. Moreover, the inability of computational systems to understand meaning makes them a dangerous tool for decisions that involve qualitative and often sensitive social and value judgments—such as who should get parole or be admitted to college. As such, the book homes in on the importance of human-centric design and human-machine collaboration as a counterbalance to technological chauvinism. In particular, the author emphasises the need for ‘human-in-the-loop’ systems where computers work in sync with, and augment the capabilities of, humans. Human judgement, she argues, remains essential for dealing with the edge cases that computers are fundamentally incapable of resolving.
The third noteworthy contribution of this book is to lay down a marker for the emerging field of ‘algorithmic accountability reporting’: the use of investigative code to check how decision-making algorithms work. This stands to become an important area of data journalism as decisions, especially in the public sector, are increasingly made through computational methods. However, the message of this book is not limited to journalists. Rather, it aims to make all readers think like data journalists: to challenge false claims about tech and to remove the injustice and inequality embedded in today’s computational systems.
If this book is to be criticised, it is for its sometimes moralistic tone, attributing perhaps excessive blame to Silicon Valley culture and the ‘tech elite’ for broader social injustices. For instance, Broussard’s assertion that the Valley drug culture is to blame for fuelling the nationwide opioid crisis belies a much more nuanced reality. Additionally, more scientific minds might find the technical explanations overly simplistic (for example, different approaches to deep learning, such as transfer learning and self-supervised learning, are not explicitly addressed). That said, the book is clearly targeted at a non-technical audience: simplicity and accessibility are part of its objective.
Overall, this book deserves praise as a timely, accessible and often entertaining account that sets the record straight on what current approaches to AI are and are not capable of delivering. As Broussard notes, the true artificial ‘unintelligence’ is in humans relying too heavily on computation for answers to complex social issues. This book will be of particular value to social scientists interested in the political, economic and social dynamics of AI and data-driven technology. It will also be of interest to investigative and data journalists seeking to leverage computational tools.
Nikita Aggarwal is a lawyer and research fellow in law and technology at the University of Oxford, Faculty of Law, as well as a research associate at the Oxford Internet Institute Digital Ethics Lab. Her research focuses on shifting regulatory and legal paradigms in an increasingly data-driven world, with a particular interest in machine learning systems. She previously practised law at the International Monetary Fund and Clifford Chance LLP. [These views are her own and do not represent those of her employer.]