Copyright Violations and Large Language Models

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Copyright Violations and Large Language Models. / Karamolegkou, Antonia; Li, Jiaang; Zhou, Li; Søgaard, Anders.

Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL), 2023. p. 7403-7412.


Harvard

Karamolegkou, A, Li, J, Zhou, L & Søgaard, A 2023, Copyright Violations and Large Language Models. in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL), pp. 7403-7412, 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, 06/12/2023. https://doi.org/10.18653/v1/2023.emnlp-main.458

APA

Karamolegkou, A., Li, J., Zhou, L., & Søgaard, A. (2023). Copyright Violations and Large Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 7403-7412). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.458

Vancouver

Karamolegkou A, Li J, Zhou L, Søgaard A. Copyright Violations and Large Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL). 2023. p. 7403-7412. https://doi.org/10.18653/v1/2023.emnlp-main.458

Author

Karamolegkou, Antonia ; Li, Jiaang ; Zhou, Li ; Søgaard, Anders. / Copyright Violations and Large Language Models. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL), 2023. pp. 7403-7412

Bibtex

@inproceedings{3b1fb4f0e10e47d0bf8ee8a416ac8b8c,
title = "Copyright Violations and Large Language Models",
abstract = "Language models may memorize more than just facts, including entire chunks of texts seen during training. Fair use exemptions to copyright laws typically allow for limited use of copyrighted material without permission from the copyright holder, but typically for extraction of information from copyrighted materials, rather than verbatim reproduction. This work explores the issue of copyright violations and large language models through the lens of verbatim memorization, focusing on possible redistribution of copyrighted text. We present experiments with a range of language models over a collection of popular books and coding problems, providing a conservative characterization of the extent to which language models can redistribute these materials. Overall, this research highlights the need for further examination and the potential impact on future developments in natural language processing to ensure adherence to copyright regulations. Code is at https://github.com/coastalcph/CopyrightLLMs.",
author = "Antonia Karamolegkou and Jiaang Li and Li Zhou and Anders S{\o}gaard",
year = "2023",
doi = "10.18653/v1/2023.emnlp-main.458",
language = "English",
isbn = "979-8-89176-060-8",
pages = "7403--7412",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
publisher = "Association for Computational Linguistics (ACL)",
address = "United States",
note = "2023 Conference on Empirical Methods in Natural Language Processing ; Conference date: 06-12-2023 Through 10-12-2023",

}

RIS

TY  - GEN
T1  - Copyright Violations and Large Language Models
AU  - Karamolegkou, Antonia
AU  - Li, Jiaang
AU  - Zhou, Li
AU  - Søgaard, Anders
PY  - 2023
Y1  - 2023
N2  - Language models may memorize more than just facts, including entire chunks of texts seen during training. Fair use exemptions to copyright laws typically allow for limited use of copyrighted material without permission from the copyright holder, but typically for extraction of information from copyrighted materials, rather than verbatim reproduction. This work explores the issue of copyright violations and large language models through the lens of verbatim memorization, focusing on possible redistribution of copyrighted text. We present experiments with a range of language models over a collection of popular books and coding problems, providing a conservative characterization of the extent to which language models can redistribute these materials. Overall, this research highlights the need for further examination and the potential impact on future developments in natural language processing to ensure adherence to copyright regulations. Code is at https://github.com/coastalcph/CopyrightLLMs.
AB  - Language models may memorize more than just facts, including entire chunks of texts seen during training. Fair use exemptions to copyright laws typically allow for limited use of copyrighted material without permission from the copyright holder, but typically for extraction of information from copyrighted materials, rather than verbatim reproduction. This work explores the issue of copyright violations and large language models through the lens of verbatim memorization, focusing on possible redistribution of copyrighted text. We present experiments with a range of language models over a collection of popular books and coding problems, providing a conservative characterization of the extent to which language models can redistribute these materials. Overall, this research highlights the need for further examination and the potential impact on future developments in natural language processing to ensure adherence to copyright regulations. Code is at https://github.com/coastalcph/CopyrightLLMs.
U2  - 10.18653/v1/2023.emnlp-main.458
DO  - 10.18653/v1/2023.emnlp-main.458
M3  - Article in proceedings
SN  - 979-8-89176-060-8
SP  - 7403
EP  - 7412
BT  - Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
PB  - Association for Computational Linguistics (ACL)
T2  - 2023 Conference on Empirical Methods in Natural Language Processing
Y2  - 6 December 2023 through 10 December 2023
ER  -
