Event organised by the Department of Digital Humanities, King’s College London
How did automatic speech recognition lay the ground for contemporary computational knowledge practices? Join us for a public talk with Xiaochang Li (Max Planck Institute for the History of Science, Berlin).
In the 1970s, a team of researchers at IBM began to reorient the field of automatic speech recognition from the scientific study of human perception and language towards a startling new mandate: to find “the natural way for the machine to do it.” In what is recognizable today as a data-driven, “black box” approach to language processing, IBM’s Continuous Speech Recognition group set out to meticulously uncouple computational modelling from the demands of explanation and interpretability. Automatic speech recognition was refashioned as a problem of large-scale data acquisition, storage, and classification, one that was distinct from—if not antithetical to—human perception, expertise, and understanding. These efforts were pivotal in bringing language under the purview of data processing, and in doing so helped carry a narrow form of data-driven computational modelling across diverse domains and into the sphere of everyday life, spurring the development of algorithmic techniques that now appear in applications for everything from machine translation to protein sequencing. The history of automatic speech recognition offers a glimpse into how making language into data made data into an imperative, and thus shaped the conceptual and technical groundwork for what is now one of our most wide-reaching modes of computational knowledge.