Overview of the ImageCLEF 2020: Multimedia Retrieval in Medical, Lifelogging, Nature, and Internet Applications
Ionescu, Bogdan; Müller, Henning; Péteri, Renaud; Abacha, Asma Ben; Datla, Vivek; Hasan, Sadid A.; Demner-Fushman, Dina; Kozlovski, Serge; Liauchuk, Vitali; Cid, Yashin Dicente; Kovalev, Vassili; Pelka, Obioma; Friedrich, Christoph M.; García Seco de Herrera, Alba; Ninh, Van-Tu; Le, Tu-Khiem; Zhou, Liting; Piras, Luca; Riegler, Michael; Halvorsen, Pål; Tran, Minh-Triet; Lux, Mathias; Gurrin, Cathal; Dang Nguyen, Duc Tien; Chamberlain, Jon; Clark, Adrian; Campello, Antonio; Fichou, Dimitri; Berari, Raul; Brie, Paul; Dogariu, Mihai; Ştefan, Liviu Daniel; Constantin, Mihai Gabriel
Journal article, Peer reviewed
Original version: Lecture Notes in Computer Science (LNCS), 2020, 12260, 311-341. DOI: 10.1007/978-3-030-58219-7_22
This paper presents an overview of the ImageCLEF 2020 lab, organized as part of the Conference and Labs of the Evaluation Forum - CLEF Labs 2020. ImageCLEF is an ongoing evaluation initiative (first run in 2003) that promotes the evaluation of technologies for the annotation, indexing, and retrieval of visual data, with the aim of providing information access to large collections of images across various usage scenarios and domains. In 2020, the 18th edition of ImageCLEF ran four main tasks: (i) a medical task that groups three previous tasks, i.e., caption analysis, tuberculosis prediction, and medical visual question answering and question generation; (ii) a lifelog task (videos, images, and other sources) on daily activity understanding, retrieval, and summarization; (iii) a coral task on segmenting and labeling collections of coral reef images; and (iv) a new Internet task addressing the problem of identifying hand-drawn user interface components. Despite the pandemic, the benchmark campaign attracted strong participation, with over 40 groups submitting more than 295 runs.