dc.contributor.author | Dierickx, Laurence | |
dc.contributor.author | Linden, Carl-Gustav | |
dc.contributor.author | Opdahl, Andreas Lothe | |
dc.date.accessioned | 2024-09-11T09:41:16Z | |
dc.date.available | 2024-09-11T09:41:16Z | |
dc.date.created | 2023-12-04T14:31:54Z | |
dc.date.issued | 2023 | |
dc.identifier.issn | 0302-9743 | |
dc.identifier.uri | https://hdl.handle.net/11250/3151405 | |
dc.description.abstract | Large language models have enabled the rapid production of misleading or fake narratives, presenting a challenge for direct detection methods. Given that generative artificial intelligence tools are likely to be used either to inform or to disinform, evaluating the (non)human nature of machine-generated content comes into question, especially regarding the ‘hallucination’ phenomenon, in which generated content does not correspond to real-world input. In this study, we argue that assessing machine-generated content is most reliable when done by humans, because doing so involves critical consideration of the meaning of the information and of its informative, misinformative or disinformative value, which relates to the accuracy and reliability of the news. To explore human-based judgement methods, we developed the Information Disorder Level (IDL) index, a language-independent metric for evaluating the factuality of machine-generated content. It has been tested on a corpus of forty made-up and actual news stories generated with ChatGPT. For newsrooms using generative AI, the results suggest that every piece of machine-generated content should be vetted and post-edited by humans before publication. From a digital media literacy perspective, the IDL index is a valuable tool for understanding the limits of generative AI and triggering reflection on what constitutes the factuality of a reported event. | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Springer | en_US |
dc.relation.uri | https://nordishub.eu/about/ | |
dc.rights | Attribution 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/deed.no | * |
dc.title | The Information Disorder Level (IDL) Index: A Human-Based Metric to Assess the Factuality of Machine-Generated Content | en_US |
dc.type | Journal article | en_US |
dc.type | Peer reviewed | en_US |
dc.description.version | publishedVersion | en_US |
dc.rights.holder | Copyright 2023 The Author(s) | en_US |
cristin.ispublished | true | |
cristin.fulltext | original | |
cristin.qualitycode | 1 | |
dc.identifier.doi | https://doi.org/10.1007/978-3-031-47896-3_5 | |
dc.identifier.cristin | 2208599 | |
dc.source.journal | Lecture Notes in Computer Science (LNCS) | en_US |
dc.source.pagenumber | 60-71 | en_US |
dc.identifier.citation | Lecture Notes in Computer Science (LNCS). 2023, 14397, 60-71. | en_US |
dc.source.volume | 14397 | en_US |