Property | Value
?:abstract
  • Neural sequence models can generate highly fluent sentences, but recent studies have shown that they are also prone to hallucinate additional content not supported by the input, which can cause a lack of trust in the model. To better assess the faithfulness of the machine outputs, we propose a new task to predict whether each token in the output sequence is hallucinated, conditioned on the source input, and collect new manually annotated evaluation sets for this task. We also introduce a novel method for learning to detect hallucinations, based on pretrained language models fine-tuned on synthetic data that includes automatically inserted hallucinations. Experiments on machine translation and abstractive text summarization demonstrate the effectiveness of our proposed approach -- we obtain an average F1 of around 60 across all the benchmark datasets. Furthermore, we demonstrate how to use the token-level hallucination labels to define a fine-grained loss over the target sequence in low-resource machine translation and achieve significant improvements over strong baseline methods. We will release our annotated data and code to support future research.
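The abstract mentions using token-level hallucination labels to define a fine-grained loss over the target sequence. Below is a minimal PyTorch-style sketch of one way such a loss could look: per-token cross entropy in which positions flagged as hallucinated are down-weighted or dropped. The function name, tensor shapes, and 0/1 label convention are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def hallucination_weighted_nll(logits, target, hallucination_labels,
                               pad_id=0, down_weight=0.0):
    """Sketch of a token-weighted NLL loss (assumed interface, not the paper's code).

    logits:               (batch, tgt_len, vocab) decoder outputs
    target:               (batch, tgt_len) reference token ids
    hallucination_labels: (batch, tgt_len) 1 = hallucinated token, 0 = faithful token
    """
    # Per-token cross entropy, no reduction, so each position can be weighted.
    nll = F.cross_entropy(
        logits.transpose(1, 2), target, ignore_index=pad_id, reduction="none"
    )  # (batch, tgt_len)

    # Faithful tokens keep weight 1.0; hallucinated tokens get `down_weight`.
    weights = torch.where(
        hallucination_labels.bool(),
        torch.full_like(nll, down_weight),
        torch.ones_like(nll),
    )
    # Exclude padding positions from the normalization.
    weights = weights * (target != pad_id).float()

    return (nll * weights).sum() / weights.sum().clamp(min=1.0)
```

With `down_weight=0.0` the hallucinated tokens are simply masked out of the loss; a small positive value instead keeps a weak training signal from them.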
is ?:annotates of
?:arxiv_id
  • 2011.02593
?:creator
?:externalLink
?:license
  • arxiv
?:publication_isRelatedTo_Disease
?:source
  • ArXiv
?:title
  • Detecting Hallucinated Content in Conditional Neural Sequence Generation
?:type
?:year
  • 2020-11-05

