Challenges of Automating Fact-Checking: A Technographic Case Study
Journal article, Peer reviewed
Published version

Date: 2024
Abstract
The prevalence of disinformation in media ecosystems has spurred researchers from various disciplines, together with media professionals, to seek effective methods for verifying information at scale. Automated fact-checking has emerged as a promising solution, yet fully automated tools have not materialized. This technographic case study of a start-up company, “X,” investigated the challenges behind this gap. By conceptualizing automated fact-checking as a technological innovation within journalistic knowledge production, the article uncovered the reasons for the disparity between “X's” initial enthusiasm about AI's capabilities in verifying information and the actual performance of such tools. These reasons cross disciplinary boundaries, spanning both the technological aspects of automated fact-checking and the requirement that such tools be epistemically authoritative. The study revealed significant hurdles faced by the start-up, including problems with the accuracy of the AI editor and with its adoption by the industry. Key obstacles included the elusive nature of truth claims, the rigidity of so-called binary epistemology (ascribing true/false values to information claims), data scarcity, algorithmic deficiencies, limited transparency of results, and industry-tool compatibility. While focused on a single company's experience, the study offers valuable insights for researchers and practitioners navigating the evolving landscape of automated fact-checking.