Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing
Research output: Contribution to journal › Journal article › Research › peer-review
Documents: Fulltext (PDF, 402 KB)
Fact-checking systems have become important tools for detecting false and misleading news. These systems become more trustworthy when human-readable explanations accompany their veracity labels. However, manually collecting such explanations is expensive and time-consuming. Recent work has used extractive summarization to select a sufficient subset of the most important facts from the ruling comments (RCs) of professional journalists to obtain fact-checking explanations. However, these explanations lack fluency and sentence coherence. In this work, we present an iterative edit-based algorithm that uses only phrase-level edits to perform unsupervised post-editing of disconnected RCs. To guide our editing algorithm, we use a scoring function with components including fluency and semantic preservation. In addition, we show the applicability of our approach in a completely unsupervised setting. We experiment with two benchmark datasets, namely LIAR-PLUS and PubHealth. We show that our model generates explanations that are fluent, readable, non-redundant, and cover the information important for the fact check.
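The iterative edit-based procedure described above can be illustrated with a minimal sketch. This is not the authors' implementation: the scoring components here are toy stand-ins (a repeated-word penalty in place of a language-model fluency score, unigram overlap in place of embedding-based semantic similarity), and the edit operations are simplified to span deletion and reordering. Function names (`fluency_score`, `semantic_preservation`, `post_edit`) and the hill-climbing acceptance rule are illustrative assumptions.

```python
import random

def fluency_score(words):
    # Toy fluency proxy: penalize repeated adjacent words.
    # (The paper uses model-based fluency scoring; this is a stand-in.)
    repeats = sum(1 for a, b in zip(words, words[1:]) if a == b)
    return 1.0 / (1.0 + repeats)

def semantic_preservation(candidate, source):
    # Toy semantic proxy: unigram overlap with the source RC text.
    # (A real system would use embedding similarity; this is a stand-in.)
    src = set(source)
    return len(set(candidate) & src) / len(src) if src else 0.0

def score(candidate, source):
    # Weighted combination of the two scoring components (weights assumed).
    return 0.5 * fluency_score(candidate) + 0.5 * semantic_preservation(candidate, source)

def phrase_edit(words, rng):
    # Apply one random phrase-level edit: delete a short span,
    # or reorder it by moving it to the end of the sentence.
    if len(words) < 2:
        return words[:]
    out = words[:]
    i = rng.randrange(len(out) - 1)
    span = rng.choice([1, 2])
    chunk = out[i:i + span]
    del out[i:i + span]
    if rng.random() >= 0.5:
        out.extend(chunk)  # reordering instead of deletion
    return out

def post_edit(source_words, steps=200, seed=0):
    # Iteratively propose phrase-level edits and keep those that do not
    # lower the overall score (simple hill climbing).
    rng = random.Random(seed)
    best = source_words[:]
    best_score = score(best, source_words)
    for _ in range(steps):
        cand = phrase_edit(best, rng)
        s = score(cand, source_words)
        if s >= best_score:
            best, best_score = cand, s
    return best
```

Because edits are only accepted when the combined score does not decrease, the output is guaranteed to score at least as well as the input under this (toy) objective; swapping in real fluency and similarity models would recover the structure of the approach described in the abstract.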
| Original language | English |
|---|---|
| Article number | 500 |
| Journal | Information (Switzerland) |
| Volume | 13 |
| Issue number | 10 |
| Pages (from-to) | 1-18 |
| ISSN | 2078-2489 |
| DOIs | |
| Publication status | Published - 2022 |
Bibliographical note
Publisher Copyright:
© 2022 by the authors.
Research areas
- explainable AI, fact-checking, natural language generation