Show simple item record

dc.contributor.author: Nordberg, Øystein Helle
dc.date.accessioned: 2021-12-14T00:41:43Z
dc.date.available: 2021-12-14T00:41:43Z
dc.date.issued: 2021-11-22
dc.date.submitted: 2021-12-13T23:00:04Z
dc.identifier.uri: https://hdl.handle.net/11250/2834056
dc.description.abstract: Subtitle: Multispectral-to-panchromatic single-image super-resolution of GeoEye-1 satellite images using an ESRGAN deep learning model trained exclusively on WorldView-2 images.

Abstract: Today, easy and abundant access to high-resolution satellite imagery is taken for granted by consumers and businesses. Many remote sensing applications require optical images with a spatial resolution of 0.5 meters ground sampling distance (GSD) or less, but satellites that capture such high-resolution images require heavy optical instruments and are thus expensive to manufacture and launch. Consequently, there are only a handful of such commercial satellites in orbit. WorldView-2 and GeoEye-1 are two of them. Both capture multispectral (MS) bands with a GSD of approximately 2 meters, as well as a matching panchromatic (PAN) band with a 4x higher resolution, a GSD of about 0.5 meters. Miniaturization has enabled cheaper satellites and has made it commercially viable to launch and maintain large constellations of nanosatellites. While plentiful, their sensors are not as capable as those of their larger counterparts: their MS bands typically have a GSD of around 3-5 meters, and they do not capture a PAN band at all. This limits their applications. The question then arises: can we increase the spatial resolution of the nanosatellites through post-processing of the images? Single-image super-resolution (SISR) models, tasked with recovering a high-resolution (HR) image from a single lower-resolution (LR) image, are designed to do exactly this. We modify and apply one of the highest-performing deep learning SISR models, ESRGAN, to estimate an HR PAN band from a set of LR MS bands (a 4x increase in resolution). The model is trained on images taken by WorldView-2 and evaluated on images taken by both WorldView-2 and, most interestingly, GeoEye-1, a different satellite.

We thus demonstrate an ability to construct an artificial HR PAN band from the MS bands of a satellite without training on images from that particular satellite, i.e., a cross-sensor application of SISR. This opens up the possibility of constructing an artificial HR PAN band for the aforementioned nanosatellites, and we suggest this topic as an area for further research. An added benefit of the MS-to-PAN design is that we avoid having to downsample (degrade) HR images into LR images as a preprocessing step, since the MS/PAN image pair is already an LR/HR image pair. Consequently, our model's performance is not reliant on any particular downsampling method.
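The MS-to-PAN training setup described in the abstract can be illustrated with a minimal sketch: because each MS tile already has a co-registered PAN tile at 4x the resolution, LR/HR training pairs can be cut directly from the sensor data with no synthetic downsampling. The function name, patch size, and array shapes below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def extract_patch_pairs(ms, pan, patch=32, scale=4):
    """Yield aligned (LR, HR) training pairs from a co-registered
    MS/PAN image pair, with no synthetic downsampling step.

    ms  : (bands, H, W) multispectral array (~2 m GSD)
    pan : (H*scale, W*scale) panchromatic array (~0.5 m GSD)

    Illustrative sketch only; shapes and names are assumptions.
    """
    bands, h, w = ms.shape
    # The PAN band must cover the same footprint at `scale`x resolution.
    assert pan.shape == (h * scale, w * scale)
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            lr = ms[:, i:i + patch, j:j + patch]
            hr = pan[i * scale:(i + patch) * scale,
                     j * scale:(j + patch) * scale]
            yield lr, hr
```

Pairs produced this way reflect the sensor's true LR/HR relationship, which is why the abstract notes that model performance does not depend on any particular downsampling method.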
dc.language.iso: eng
dc.publisher: The University of Bergen
dc.rights: Copyright the Author. All rights reserved
dc.subject: WorldView-2
dc.subject: panchromatic
dc.subject: deep learning
dc.subject: multispectral
dc.subject: super-resolution
dc.subject: ESRGAN
dc.subject: GAN
dc.subject: SISR
dc.subject: GeoEye-1
dc.subject: satellite imagery
dc.title: Deep learning-based cross-sensor super resolution of satellite images
dc.type: Master thesis
dc.date.updated: 2021-12-13T23:00:04Z
dc.rights.holder: Copyright the Author. All rights reserved
dc.description.degree: Masteroppgave i statistikk (Master's thesis in statistics)
dc.description.localcode: STAT399
dc.description.localcode: MAMN-STAT
dc.subject.nus: 753299
fs.subjectcode: STAT399
fs.unitcode: 12-11-0

