Dynamic Forecasting of Conversation Derailment
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Standard
Dynamic Forecasting of Conversation Derailment. / Kementchedjhieva, Yova Radoslavova; Søgaard, Anders.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2021. p. 7915–7919.
Bibtex
@inproceedings{kementchedjhieva-sogaard-2021-dynamic,
  title     = {Dynamic Forecasting of Conversation Derailment},
  author    = {Kementchedjhieva, Yova Radoslavova and S{\o}gaard, Anders},
  booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
  publisher = {Association for Computational Linguistics},
  year      = {2021},
  pages     = {7915--7919},
  doi       = {10.18653/v1/2021.emnlp-main.624},
}
RIS
TY - GEN
T1 - Dynamic Forecasting of Conversation Derailment
AU - Kementchedjhieva, Yova Radoslavova
AU - Søgaard, Anders
PY - 2021
Y1 - 2021
N2 - Online conversations can sometimes take a turn for the worse, either due to systematic cultural differences, accidental misunderstandings, or mere malice. Automatically forecasting derailment in public online conversations provides an opportunity to take early action to moderate it. Previous work in this space is limited, and we extend it in several ways. We apply a pretrained language encoder to the task, which outperforms earlier approaches. We further experiment with shifting the training paradigm for the task from a static to a dynamic one to increase the forecast horizon. This approach shows mixed results: in a high-quality data setting, a longer average forecast horizon can be achieved at the cost of a small drop in F1; in a low-quality data setting, however, dynamic training propagates the noise and is highly detrimental to performance.
AB - Online conversations can sometimes take a turn for the worse, either due to systematic cultural differences, accidental misunderstandings, or mere malice. Automatically forecasting derailment in public online conversations provides an opportunity to take early action to moderate it. Previous work in this space is limited, and we extend it in several ways. We apply a pretrained language encoder to the task, which outperforms earlier approaches. We further experiment with shifting the training paradigm for the task from a static to a dynamic one to increase the forecast horizon. This approach shows mixed results: in a high-quality data setting, a longer average forecast horizon can be achieved at the cost of a small drop in F1; in a low-quality data setting, however, dynamic training propagates the noise and is highly detrimental to performance.
U2 - 10.18653/v1/2021.emnlp-main.624
DO - 10.18653/v1/2021.emnlp-main.624
M3 - Article in proceedings
SP - 7915
EP - 7919
BT - Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
PB - Association for Computational Linguistics
T2 - 2021 Conference on Empirical Methods in Natural Language Processing
Y2 - 7 November 2021 through 11 November 2021
ER -
ID: 299821897