D 2023

Resources and Few-shot Learners for In-context Learning in Slavic Languages

ŠTEFÁNIK, Michal; Marek KADLČÍK; Piotr GRAMACKI and Petr SOJKA

Basic information

Original name

Resources and Few-shot Learners for In-context Learning in Slavic Languages

Authors

ŠTEFÁNIK, Michal (703 Slovakia, guarantor, affiliated); Marek KADLČÍK (203 Czech Republic, affiliated); Piotr GRAMACKI (616 Poland) and Petr SOJKA (203 Czech Republic, affiliated)

Edition

Dubrovnik, Croatia, Proceedings of the 9th Workshop on Slavic Natural Language Processing 2023 (SlavicNLP 2023), pp. 94-105, 12 pages, 2023

Publisher

Association for Computational Linguistics

Other information

Language

English

Type of result

Proceedings paper

Publisher's country

United States of America

Confidentiality

is not subject to a state or trade secret

Form of publication

electronic version "online"

Links

RIV code

RIV/00216224:14330/23:00130911

Organization

Faculty of Informatics – Masaryk University – Repository

ISBN

978-1-959429-57-9

EID Scopus

2-s2.0-85161966608

Keywords in English

in-context learning; low-resource; Slavic languages; datasets

Related projects

MUNI/A/1339/2022, internal Repository code. Czech-BioImaging II, large research infrastructure.
Changed: 27. 4. 2024 04:19, RNDr. Daniel Jakubík

Abstract

In the original

Despite the rapid recent progress in creating accurate and compact in-context learners, most recent work focuses on in-context learning (ICL) for tasks in English. However, the ability to interact with users in languages other than English holds great potential for broadening the applicability of language technologies to non-English speakers. In this work, we collect the infrastructure necessary for training and evaluation of ICL in a selection of Slavic languages: Czech, Polish, and Russian. We link a diverse set of datasets and cast these into a unified instructional format through a set of transformations and newly-crafted templates written purely in the target languages. Using the newly-curated dataset, we evaluate a set of the most recent in-context learners and compare their results to supervised baselines. Finally, we train, evaluate, and publish a set of in-context learning models trained on the collected resources and compare their performance to previous work. We find that ICL models tuned in English are also able to learn some tasks from non-English contexts, but multilingual instruction fine-tuning consistently improves the ICL ability. We also find that massive multitask training can be outperformed by single-task training in the target language, uncovering the potential for specializing in-context learners to the language(s) of their application.
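
As a rough illustration of the "unified instructional format" described in the abstract (not the authors' actual templates, datasets, or code), the Python sketch below shows how a labeled example might be verbalized through a template written purely in the target language (Czech here) and how several such verbalizations could be concatenated into a few-shot in-context learning prompt; the template wording, data, and function names are hypothetical.

# A minimal, hypothetical sketch: a dataset example is verbalized through a
# target-language (Czech) template, and several labeled demonstrations are
# concatenated with an unlabeled query into a few-shot ICL prompt.
# Template text and example data are illustrative, not from the paper.

from dataclasses import dataclass

@dataclass
class Example:
    text: str
    label: str

# Hypothetical Czech template for a sentiment-classification task.
TEMPLATE = "Recenze: {text}\nJaký je sentiment této recenze? Odpověď: {label}"

def verbalize(example: Example, with_label: bool = True) -> str:
    """Cast a raw example into the instructional (prompt) format."""
    label = example.label if with_label else ""
    return TEMPLATE.format(text=example.text, label=label).rstrip()

def build_icl_prompt(demonstrations: list[Example], query: Example) -> str:
    """Concatenate labeled demonstrations with the unlabeled query."""
    shots = [verbalize(d) for d in demonstrations]
    shots.append(verbalize(query, with_label=False))
    return "\n\n".join(shots)

demos = [
    Example("Skvělý film, bavil jsem se celou dobu.", "pozitivní"),
    Example("Nuda, vůbec nedoporučuji.", "negativní"),
]
print(build_icl_prompt(demos, Example("Herecké výkony byly výborné.", "")))

In this sketch, the resulting prompt string would be passed directly to an in-context learner, which is expected to complete the final "Odpověď:" line with the predicted label.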

Attached files