
Video Recommendations Based on Visual Features Extracted with Deep Learning

Kvifte, Tord
Master thesis
master thesis (Locked)
Permanent link
https://hdl.handle.net/11250/2760300
Publication date
2021-06-01
Collections
  • Department of Information Science and Media Studies [727]
Abstract
When a movie is uploaded to a movie recommender system (e.g., YouTube), the system can exploit various forms of descriptive features (e.g., tags and genre) in order to generate personalized recommendations for users. However, there are situations where the descriptive features are missing or very limited, and the system may fail to include such a movie in the recommendation list; this is known as the cold-start problem. This thesis investigates recommendation based on a novel form of content features, extracted from movies, in order to generate recommendations for users. Such features represent the visual aspects of movies, are based on deep learning models, and hence do not require any human annotation when extracted. The proposed technique has been evaluated in both offline and online evaluations using a large dataset of movies. The online evaluation has been carried out in an evaluation framework developed for this thesis. Results from the offline and online evaluations (N=150) show that automatically extracted visual features can mitigate the cold-start problem by generating recommendations of superior quality compared to different baselines, including recommendations based on human-annotated features. The results also point to subtitles as a high-quality future source of automatically extracted features. The visual feature dataset, named DeepCineProp13K, the subtitle dataset, CineSub3K, and the proposed evaluation framework are all made openly available online in a designated GitHub repository.
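To illustrate the general idea of recommending from automatically extracted visual features, a minimal sketch follows. It assumes a pretrained ResNet-50 backbone, mean pooling over sampled frames, and cosine-similarity ranking; these are assumptions chosen for the example, not necessarily the exact models, feature set, or aggregation used in the thesis.

import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

# Illustrative sketch (assumed pipeline): represent each movie by average-pooling
# per-frame CNN embeddings, then rank catalogue items for a user by cosine similarity.

weights = ResNet50_Weights.DEFAULT
backbone = resnet50(weights=weights)
backbone.fc = torch.nn.Identity()        # drop the classifier, keep 2048-d pooled features
backbone.eval()
preprocess = weights.transforms()        # resizing/normalization expected by the backbone

@torch.no_grad()
def movie_embedding(frames):
    # frames: a list of PIL images sampled from the movie (or its trailer)
    batch = torch.stack([preprocess(f) for f in frames])
    feats = backbone(batch)              # shape: (num_frames, 2048)
    return F.normalize(feats.mean(dim=0), dim=0)   # one L2-normalized vector per movie

def recommend(user_profile, catalogue, k=10):
    # user_profile: mean embedding of the movies a user liked (L2-normalized)
    # catalogue: {movie_id: embedding}, including cold-start items that have no tags
    scores = {mid: float(torch.dot(user_profile, emb)) for mid, emb in catalogue.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

Because the embeddings come from the video content itself, a newly uploaded movie with no tags or genre metadata can still be scored and ranked, which is precisely the cold-start scenario the abstract describes.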
Description
Postponed access: the file will be accessible after 2022-06-01
Publisher
The University of Bergen
Copyright
Copyright the Author. All rights reserved
