Property / Value
?:abstract
  • This paper focuses on the use of ‘black box’ AI in medicine and asks whether the physician needs to disclose to patients that even the best AI comes with the risks of cyberattacks, systematic bias, and a particular type of mismatch between AI’s implicit assumptions and an individual patient’s background situation. Pace current clinical practice, I argue that, under certain circumstances, these risks do need to be disclosed. Otherwise, the physician either vitiates a patient’s informed consent or violates a more general obligation to warn him about potentially harmful consequences. To support this view, I argue, first, that the already widely accepted conditions in the evaluation of risks, i.e. the ‘nature’ and ‘likelihood’ of risks, speak in favour of disclosure and, second, that principled objections against the disclosure of these risks do not withstand scrutiny. Moreover, I also explain that these risks are exacerbated by pandemics like the COVID-19 crisis, which further emphasises their significance.
?:creator
?:doi
  • 10.1007/s00146-020-01085-w
?:journal
  • AI & Society
?:license
  • cc-by
?:pdf_json_files
  • document_parses/pdf_json/939813eb4445de10978eb5c1ee2a6675d6f82905.json
?:pmc_json_files
  • document_parses/pmc_json/PMC7580986.xml.json
?:pmcid
  • PMC7580986
?:pmid
  • 33110296
?:publication_isRelatedTo_Disease
?:sha_id
?:source
  • Medline; PMC
?:title
  • Artificial intelligence in medicine and the disclosure of risks
?:type
?:year
  • 2020-10-22
