Property / Value
?:abstract
  • The COVID-19 pandemic has resulted in a rapidly growing quantity of scientific publications from journal articles, preprints, and other sources. The TREC-COVID Challenge was created to evaluate information retrieval methods and systems for this quickly expanding corpus. Based on the COVID-19 Open Research Dataset (CORD-19), several dozen research teams participated in over 5 rounds of the TREC-COVID Challenge. While previous work has compared IR techniques used on other test collections, there are no studies that have analyzed the methods used by participants in the TREC-COVID Challenge. We manually reviewed team run reports from Rounds 2 and 5, extracted features from the documented methodologies, and used a univariate and multivariate regression-based analysis to identify features associated with higher retrieval performance. We observed that fine-tuning datasets with relevance judgments, MS-MARCO, and CORD-19 document vectors was associated with improved performance in Round 2 but not in Round 5. Though the relatively decreased heterogeneity of runs in Round 5 may explain the lack of significance in that round, fine-tuning has been found to improve search performance in previous challenge evaluations by improving a system's ability to map relevant queries and phrases to documents. Furthermore, term expansion was associated with improvement in system performance, and the use of the narrative field in the TREC-COVID topics was associated with decreased system performance in both rounds. These findings emphasize the need for clear queries in search. While our study has some limitations in its generalizability and scope of techniques analyzed, we identified some IR techniques that may be useful in building search systems for COVID-19 using the TREC-COVID test collections.
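  • The regression-based feature analysis described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the binary run features (fine_tuning, term_expansion, used_narrative) and the NDCG-style metric values are hypothetical placeholders, and the paper's specific feature set and regression setup are not reproduced here.

    ```python
    # Hedged sketch: univariate and multivariate OLS relating binary run
    # features to a retrieval metric, in the spirit of the analysis the
    # abstract describes. All feature names and metric values below are
    # illustrative assumptions, not data from the TREC-COVID run reports.
    import numpy as np

    FEATURES = ["fine_tuning", "term_expansion", "used_narrative"]

    # Rows: runs. Columns: 1 if the run used the feature, else 0.
    X = np.array([
        [1, 1, 0],
        [1, 0, 0],
        [0, 1, 1],
        [0, 0, 1],
        [1, 1, 1],
        [0, 0, 0],
    ], dtype=float)
    y = np.array([0.62, 0.55, 0.48, 0.35, 0.58, 0.40])  # illustrative metric

    def univariate_slope(X, y, j):
        """OLS slope of the metric on feature column j alone (with intercept).

        For a binary predictor this equals mean(y | feature=1) - mean(y | feature=0).
        """
        A = np.column_stack([np.ones(len(y)), X[:, j]])
        return np.linalg.lstsq(A, y, rcond=None)[0][1]

    def multivariate_coefs(X, y):
        """OLS coefficients with all features fitted jointly (intercept dropped)."""
        A = np.column_stack([np.ones(len(y)), X])
        return np.linalg.lstsq(A, y, rcond=None)[0][1:]

    for j, name in enumerate(FEATURES):
        print(f"{name}: univariate slope = {univariate_slope(X, y, j):+.3f}")
    print("multivariate coefficients:", np.round(multivariate_coefs(X, y), 3))
    ```

    With this toy data, fine-tuning and term expansion get positive slopes and use of the narrative field a negative one, mirroring the direction of the associations the abstract reports; the multivariate fit shows how each feature's coefficient is estimated while controlling for the others.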
is ?:annotates of
?:creator
?:doi
  • 10.1101/2020.10.15.20213645
?:license
  • medrxiv
?:pdf_json_files
  • document_parses/pdf_json/f76f36e9274de7a935cfcdd44b326ffcfe95b056.json
?:publication_isRelatedTo_Disease
?:sha_id
?:source
  • MedRxiv; WHO
?:title
  • A Comparative Analysis of System Features Used in the TREC-COVID Information Retrieval Challenge
?:type
?:year
  • 2020-10-20