Spurious Correlations in Cross-Topic Argument Mining

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Spurious Correlations in Cross-Topic Argument Mining. / Thorn Jakobsen, Terne Sasha; Barrett, Maria; Søgaard, Anders.

Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics, 2021. p. 263-277.

Harvard

Thorn Jakobsen, TS, Barrett, M & Søgaard, A 2021, Spurious Correlations in Cross-Topic Argument Mining. in Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics, pp. 263-277, Tenth Joint Conference on Lexical and Computational Semantics - *SEM 2021, Online, 05/08/2021. https://doi.org/10.18653/v1/2021.starsem-1.25

APA

Thorn Jakobsen, T. S., Barrett, M., & Søgaard, A. (2021). Spurious Correlations in Cross-Topic Argument Mining. In Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics (pp. 263-277). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.starsem-1.25

Vancouver

Thorn Jakobsen TS, Barrett M, Søgaard A. Spurious Correlations in Cross-Topic Argument Mining. In Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics. 2021. p. 263-277. https://doi.org/10.18653/v1/2021.starsem-1.25

Author

Thorn Jakobsen, Terne Sasha ; Barrett, Maria ; Søgaard, Anders. / Spurious Correlations in Cross-Topic Argument Mining. Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics, 2021. pp. 263-277

Bibtex

@inproceedings{95458fc0bc324c1a8bc9224068252985,
title = "Spurious Correlations in Cross-Topic Argument Mining",
abstract = "Recent work in cross-topic argument mining attempts to learn models that generalise across topics rather than merely relying on within-topic spurious correlations. We examine the effectiveness of this approach by analysing the output of single-task and multi-task models for cross-topic argument mining, through a combination of linear approximations of their decision boundaries, manual feature grouping, challenge examples, and ablations across the input vocabulary. Surprisingly, we show that cross-topic models still rely mostly on spurious correlations and only generalise within closely related topics, e.g., a model trained only on closed-class words and a few common open-class words outperforms a state-of-the-art cross-topic model on distant target topics.",
author = "{Thorn Jakobsen}, {Terne Sasha} and Maria Barrett and Anders S{\o}gaard",
year = "2021",
doi = "10.18653/v1/2021.starsem-1.25",
language = "English",
pages = "263--277",
booktitle = "Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics",
publisher = "Association for Computational Linguistics",
note = "Tenth Joint Conference on Lexical and Computational Semantics - *SEM 2021 ; Conference date: 05-08-2021 Through 06-08-2021",

}

RIS

TY - GEN

T1 - Spurious Correlations in Cross-Topic Argument Mining

AU - Thorn Jakobsen, Terne Sasha

AU - Barrett, Maria

AU - Søgaard, Anders

PY - 2021

Y1 - 2021

N2 - Recent work in cross-topic argument mining attempts to learn models that generalise across topics rather than merely relying on within-topic spurious correlations. We examine the effectiveness of this approach by analysing the output of single-task and multi-task models for cross-topic argument mining, through a combination of linear approximations of their decision boundaries, manual feature grouping, challenge examples, and ablations across the input vocabulary. Surprisingly, we show that cross-topic models still rely mostly on spurious correlations and only generalise within closely related topics, e.g., a model trained only on closed-class words and a few common open-class words outperforms a state-of-the-art cross-topic model on distant target topics.

AB - Recent work in cross-topic argument mining attempts to learn models that generalise across topics rather than merely relying on within-topic spurious correlations. We examine the effectiveness of this approach by analysing the output of single-task and multi-task models for cross-topic argument mining, through a combination of linear approximations of their decision boundaries, manual feature grouping, challenge examples, and ablations across the input vocabulary. Surprisingly, we show that cross-topic models still rely mostly on spurious correlations and only generalise within closely related topics, e.g., a model trained only on closed-class words and a few common open-class words outperforms a state-of-the-art cross-topic model on distant target topics.

U2 - 10.18653/v1/2021.starsem-1.25

DO - 10.18653/v1/2021.starsem-1.25

M3 - Article in proceedings

SP - 263

EP - 277

BT - Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics

PB - Association for Computational Linguistics

T2 - Tenth Joint Conference on Lexical and Computational Semantics - *SEM 2021

Y2 - 5 August 2021 through 6 August 2021

ER -
