Ing. Herbert Ullrich
All publications
Pipeline and dataset generation for automated fact-checking in almost any language
- Authors: Ing. Jan Drchal, Ph.D., Ing. Herbert Ullrich, Mlynář, T., Moravec, V.
- Published in: Neural Computing and Applications. 2024, ISSN 1433-3058.
- Year: 2024
- DOI: 10.1007/s00521-024-10113-5
- Link: https://doi.org/10.1007/s00521-024-10113-5
- Affiliation: Department of Computer Science, Artificial Intelligence Center
- Abstract:
This article presents a pipeline for automated fact-checking leveraging publicly available language models and data. The objective is to assess the accuracy of textual claims using evidence from a ground-truth evidence corpus. The pipeline consists of two main modules: evidence retrieval and claim veracity evaluation. Our primary focus is on the ease of deployment in various languages that remain unexplored in the field of automated fact-checking. Unlike most similar pipelines, which work with evidence sentences, our pipeline processes data on the paragraph level, simplifying the overall architecture and data requirements. Given the high cost of annotating language-specific fact-checking training data, our solution builds on the Question Answering for Claim Generation (QACG) method, which we adapt and use to generate the data for all models of the pipeline. Our strategy enables the introduction of new languages through machine translation of only two fixed datasets of moderate size. Subsequently, any number of training samples can be generated based on an evidence corpus in the target language. We provide open access to all data and fine-tuned models for the Czech, English, Polish, and Slovak pipelines, as well as to our codebase, which may be used to reproduce the results. We comprehensively evaluate the pipelines for all four languages, including human annotations and per-sample difficulty assessment using Pointwise V-information. The presented experiments are based on full Wikipedia snapshots to promote reproducibility. To facilitate implementation and user interaction, we develop the FactSearch application featuring the proposed pipeline and report preliminary feedback on its performance.
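To make the two-module design concrete, below is a minimal sketch of such a retrieval-plus-NLI pipeline, assuming off-the-shelf multilingual checkpoints from the sentence-transformers and Hugging Face transformers libraries. The model names, label mapping, and function shape are illustrative placeholders, not the fine-tuned models released with the paper.

```python
# Minimal sketch of the two-stage architecture described in the abstract:
# paragraph-level evidence retrieval followed by three-way claim veracity
# classification via natural language inference (NLI). Checkpoints and the
# label mapping are illustrative substitutes, not the authors' released models.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

# Assumed mapping from generic NLI labels to fact-checking verdicts.
VERDICTS = {
    "entailment": "SUPPORTS",
    "contradiction": "REFUTES",
    "neutral": "NOT ENOUGH INFO",
}

retriever = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")
nli = pipeline("text-classification", model="joeddav/xlm-roberta-large-xnli")

def check_claim(claim: str, paragraphs: list[str], k: int = 5):
    # Stage 1: evidence retrieval -- rank whole paragraphs (not sentences)
    # of the ground-truth corpus by embedding similarity to the claim.
    claim_emb = retriever.encode(claim, convert_to_tensor=True)
    para_embs = retriever.encode(paragraphs, convert_to_tensor=True)
    hits = util.semantic_search(claim_emb, para_embs, top_k=k)[0]
    evidence = [paragraphs[h["corpus_id"]] for h in hits]

    # Stage 2: claim veracity evaluation against the retrieved evidence.
    result = nli({"text": " ".join(evidence), "text_pair": claim})
    return VERDICTS.get(result["label"].lower(), result["label"]), evidence
```

Note how the paragraph-level granularity the abstract mentions shows up directly in the sketch: retrieval operates on whole paragraphs, so no separate sentence-selection stage is needed.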
CsFEVER and CTKFacts: Acquiring Czech Data for Fact Verification
- Authors: Ing. Herbert Ullrich, Ing. Jan Drchal, Ph.D., Rýpar, M., Vincourová, H., Moravec, V.
- Published in: Language Resources and Evaluation. 2023, 57(4), 1571-1605. ISSN 1574-020X.
- Year: 2023
- DOI: 10.1007/s10579-023-09654-3
- Link: https://doi.org/10.1007/s10579-023-09654-3
- Affiliation: Department of Computer Science, Artificial Intelligence Center
- Abstract:
In this paper, we examine several methods of acquiring Czech data for automated fact-checking, a task commonly modeled as classification of textual claim veracity with respect to a corpus of trusted ground truths. We attempt to collect sets of data in the form of a factual claim, evidence within the ground-truth corpus, and a veracity label (supported, refuted, or not enough info). As a first attempt, we generate a Czech version of the large-scale FEVER dataset built on top of the Wikipedia corpus. We take a hybrid approach of machine translation and document alignment; the approach and the tools we provide can be easily applied to other languages. We discuss its weaknesses, propose a future strategy for their mitigation, and publish the 127k resulting translations, as well as a version of the dataset reliably applicable to the Natural Language Inference task, CsFEVER-NLI. Furthermore, we collect a novel dataset of 3,097 claims, annotated using the corpus of 2.2 million articles of the Czech News Agency. We present an extended dataset annotation methodology based on the FEVER approach, and, as the underlying corpus is proprietary, we also publish a standalone version of the dataset for the Natural Language Inference task, which we call CTKFactsNLI. We analyze both acquired datasets for spurious cues, i.e., annotation patterns leading to model overfitting. CTKFacts is further examined for inter-annotator agreement and thoroughly cleaned, and a typology of common annotator errors is extracted. Finally, we provide baseline models for all stages of the fact-checking pipeline and publish the NLI datasets, as well as our annotation platform and other experimental data.
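For illustration, here is a hedged sketch of the claim-evidence-label triples the abstract describes. The field names, label strings, and example record are assumptions made for exposition, not the published CsFEVER or CTKFacts schema.

```python
# Hedged illustration of a claim-evidence-label record as described in the
# abstract; field names and the example are assumptions, not the released schema.
from dataclasses import dataclass
from typing import Literal

# Three-way veracity label set used throughout FEVER-style fact-checking.
Label = Literal["SUPPORTS", "REFUTES", "NOT ENOUGH INFO"]

@dataclass
class FactCheckSample:
    claim: str      # the factual claim under verification
    evidence: str   # paragraph(s) drawn from the trusted ground-truth corpus
    label: Label    # annotated veracity verdict

sample = FactCheckSample(
    claim="Prague is the capital of the Czech Republic.",
    evidence="Prague is the capital and largest city of the Czech Republic.",
    label="SUPPORTS",
)
```

Records of this shape serve both pipeline stages: the (claim, evidence) pair trains retrieval, while the full triple trains the NLI veracity classifier.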