Fusing data to improve spatial, spectral and temporal resolution
Mapping small features (e.g., hedges, streams, urban vegetation) requires higher spatial resolution than the 10 m offered natively by Sentinel-2, as well as regular revisits, which commercial VHR satellites cannot offer at a global scale. Boosted by deep Convolutional Neural Network (CNN) architectures and Generative Adversarial Networks (GANs), Single Image Super Resolution (SISR) has recently shown great promise in the context of EO. Deep SISR has the advantage of concentrating processing power and data consumption in the training phase while remaining resource-friendly at inference time. On the other hand, it requires a large training dataset with imagery at the target high resolution, and it cannot compensate for cloud cover.
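To make the resolution gap concrete, the sketch below upsamples a toy 10 m band to a 2.5 m grid (factor 4) with plain bilinear interpolation in NumPy. This is the naive baseline that deep SISR methods aim to outperform: interpolation can only smooth existing pixels, whereas a trained network hallucinates plausible high-frequency detail. The function name and the 4x factor are illustrative, not from the source.

```python
import numpy as np

def bilinear_upsample(band: np.ndarray, factor: int) -> np.ndarray:
    """Upsample a single-band image by an integer factor using
    bilinear interpolation (a toy baseline, not a SISR method)."""
    h, w = band.shape
    # Map target pixel centres back into source pixel coordinates.
    ys = (np.arange(h * factor) + 0.5) / factor - 0.5
    xs = (np.arange(w * factor) + 0.5) / factor - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = (ys - y0)[:, None]  # vertical interpolation weights
    wx = (xs - x0)[None, :]  # horizontal interpolation weights
    tl = band[np.ix_(y0, x0)]          # top-left neighbours
    tr = band[np.ix_(y0, x0 + 1)]      # top-right neighbours
    bl = band[np.ix_(y0 + 1, x0)]      # bottom-left neighbours
    br = band[np.ix_(y0 + 1, x0 + 1)]  # bottom-right neighbours
    top = tl * (1 - wx) + tr * wx
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy

# Toy 4x4 "10 m" band upsampled to a 16x16 "2.5 m" grid.
lr = np.arange(16, dtype=float).reshape(4, 4)
hr = bilinear_upsample(lr, 4)
print(hr.shape)  # (16, 16)
```

No new information is created here: every output pixel is a weighted average (or edge extrapolation) of its four nearest input pixels, which is exactly why learned SISR, trained on paired low/high-resolution imagery, can do better.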
Within the EvoLand project we will therefore develop a generic multi-modal architecture for the fusion of time series, building on existing work by consortium partners and on the outcomes of method no 1, Weakly Supervised Learning.