We Need to Talk About Random Splits

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

We Need to Talk About Random Splits. / Søgaard, Anders; Ebert, Sebastian Elgaard; Bastings, Jasmijn; Filippova, Katja.

Proceedings of the 2021 Conference of the European Chapter of the Association for Computational Linguistics (EACL). Association for Computational Linguistics, 2021. p. 1823–1832.

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Harvard

Søgaard, A, Ebert, SE, Bastings, J & Filippova, K 2021, We Need to Talk About Random Splits. in Proceedings of the 2021 Conference of the European Chapter of the Association for Computational Linguistics (EACL). Association for Computational Linguistics, pp. 1823–1832. https://doi.org/10.18653/v1/2021.eacl-main.156

APA

Søgaard, A., Ebert, S. E., Bastings, J., & Filippova, K. (2021). We Need to Talk About Random Splits. In Proceedings of the 2021 Conference of the European Chapter of the Association for Computational Linguistics (EACL) (pp. 1823–1832). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.eacl-main.156

Vancouver

Søgaard A, Ebert SE, Bastings J, Filippova K. We Need to Talk About Random Splits. In Proceedings of the 2021 Conference of the European Chapter of the Association for Computational Linguistics (EACL). Association for Computational Linguistics. 2021. p. 1823–1832. https://doi.org/10.18653/v1/2021.eacl-main.156

Author

Søgaard, Anders ; Ebert, Sebastian Elgaard ; Bastings, Jasmijn ; Filippova, Katja. / We Need to Talk About Random Splits. Proceedings of the 2021 Conference of the European Chapter of the Association for Computational Linguistics (EACL). Association for Computational Linguistics, 2021. pp. 1823–1832

BibTeX

@inproceedings{3b0aa166f0a64a1abbf4e0bfdea53e22,
title = "We Need to Talk About Random Splits",
abstract = "Gorman and Bedrick (2019) argued for using random splits rather than standard splits in NLP experiments. We argue that random splits, like standard splits, lead to overly optimistic performance estimates. We can also split data in biased or adversarial ways, e.g., training on short sentences and evaluating on long ones. Biased sampling has been used in domain adaptation to simulate real-world drift; this is known as the covariate shift assumption. In NLP, however, even worst-case splits, maximizing bias, often under-estimate the error observed on new samples of in-domain data, i.e., the data that models should minimally generalize to at test time. This invalidates the covariate shift assumption. Instead of using multiple random splits, future benchmarks should ideally include multiple, independent test sets; if infeasible, we argue that multiple biased splits lead to more realistic performance estimates than multiple random splits.",
author = "S{\o}gaard, Anders and Ebert, {Sebastian Elgaard} and Bastings, Jasmijn and Filippova, Katja",
year = "2021",
doi = "10.18653/v1/2021.eacl-main.156",
language = "English",
pages = "1823–1832",
booktitle = "Proceedings of the 2021 Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
publisher = "Association for Computational Linguistics",
}

RIS

TY - GEN

T1 - We Need to Talk About Random Splits

AU - Søgaard, Anders

AU - Ebert, Sebastian Elgaard

AU - Bastings, Jasmijn

AU - Filippova, Katja

PY - 2021

Y1 - 2021

N2 - Gorman and Bedrick (2019) argued for using random splits rather than standard splits in NLP experiments. We argue that random splits, like standard splits, lead to overly optimistic performance estimates. We can also split data in biased or adversarial ways, e.g., training on short sentences and evaluating on long ones. Biased sampling has been used in domain adaptation to simulate real-world drift; this is known as the covariate shift assumption. In NLP, however, even worst-case splits, maximizing bias, often under-estimate the error observed on new samples of in-domain data, i.e., the data that models should minimally generalize to at test time. This invalidates the covariate shift assumption. Instead of using multiple random splits, future benchmarks should ideally include multiple, independent test sets; if infeasible, we argue that multiple biased splits lead to more realistic performance estimates than multiple random splits.

AB - Gorman and Bedrick (2019) argued for using random splits rather than standard splits in NLP experiments. We argue that random splits, like standard splits, lead to overly optimistic performance estimates. We can also split data in biased or adversarial ways, e.g., training on short sentences and evaluating on long ones. Biased sampling has been used in domain adaptation to simulate real-world drift; this is known as the covariate shift assumption. In NLP, however, even worst-case splits, maximizing bias, often under-estimate the error observed on new samples of in-domain data, i.e., the data that models should minimally generalize to at test time. This invalidates the covariate shift assumption. Instead of using multiple random splits, future benchmarks should ideally include multiple, independent test sets; if infeasible, we argue that multiple biased splits lead to more realistic performance estimates than multiple random splits.

U2 - 10.18653/v1/2021.eacl-main.156

DO - 10.18653/v1/2021.eacl-main.156

M3 - Article in proceedings

SP - 1823

EP - 1832

BT - Proceedings of the 2021 Conference of the European Chapter of the Association for Computational Linguistics (EACL)

PB - Association for Computational Linguistics

ER -

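The splitting strategies described in the abstract are easy to sketch in code. The following is a minimal, hypothetical illustration (not the authors' implementation) contrasting a standard random split with a length-biased, worst-case split of the kind the paper discusses: training on short sentences and evaluating on long ones. The corpus, function names, and 80/20 split fraction are illustrative assumptions.

import random

def random_split(sentences, test_fraction=0.2, seed=0):
    # Standard random split: shuffle, then hold out a fraction for testing.
    rng = random.Random(seed)
    data = list(sentences)
    rng.shuffle(data)
    cut = int(len(data) * (1 - test_fraction))
    return data[:cut], data[cut:]

def length_biased_split(sentences, test_fraction=0.2):
    # Worst-case split in the spirit of the abstract: sort by sentence length
    # so training gets the shortest sentences and testing the longest,
    # maximizing the train/test gap.
    data = sorted(sentences, key=len)
    cut = int(len(data) * (1 - test_fraction))
    return data[:cut], data[cut:]

# Toy corpus of tokenized sentences (hypothetical data).
corpus = [s.split() for s in [
    "we need to talk",
    "random splits are optimistic",
    "standard splits reuse one fixed test set",
    "worst case splits often still underestimate error on fresh samples",
    "biased sampling simulates drift",
]]

train, test = length_biased_split(corpus)
print("train lengths:", sorted(len(s) for s in train))
print("test lengths:", sorted(len(s) for s in test))

Comparing a model's score under random_split with its score under length_biased_split gives a rough sense of how much a single random split may overstate performance; the paper's stronger point is that even such worst-case splits can underestimate the error observed on genuinely new in-domain samples.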