How to increase the resolution of Sentinel-2 images from 10 m to 1 m

First of all, let’s give credit where credit is due! I discovered this technique thanks to Franz’s blog, which I invite you to explore if you speak Spanish.

To process Sentinel-2 images, we will use two products:

  • Google Colab, and
  • the S2DR3 AI model.



Google Colab

Google Colab (or Google Colaboratory) is a free service offered by Google that allows you to write and run Python code directly in your browser, without installing anything on your computer.

Here are the key points:


What it is

It is a Jupyter Notebook environment hosted in the cloud.

In other words, it’s an interface where you can:

  • write text (such as explanations or instructions);
  • execute Python code cell by cell;
  • display graphs, tables, or images;
  • and all this in a simple web browser.


What it lets you do

  • Test Python code easily (without local configuration).
  • Perform scientific calculations or data processing (with NumPy, Pandas, Matplotlib, etc.).
  • Run machine learning (TensorFlow, PyTorch, Scikit-learn, etc.).
  • Use a free GPU or TPU to speed up calculations (useful for AI and deep learning).
  • Collaborate online, much like with Google Docs: multiple people can work on the same notebook.
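To illustrate the GPU point above, here is a minimal sketch (standard library only, so it runs anywhere) that checks whether the NVIDIA driver tools are visible; in a Colab session this indicates that a GPU runtime is attached:

```python
import shutil

def gpu_runtime_attached():
    """Rough check: `nvidia-smi` is on PATH when a GPU runtime is active."""
    return shutil.which("nvidia-smi") is not None

print("GPU runtime:", gpu_runtime_attached())
```

In Colab you would enable the GPU first via Runtime → Change runtime type before running this check.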


Example of use

Open a new Colab page:
https://colab.research.google.com

And you can type, for example:

Example

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)
y = np.sin(x)

plt.plot(x, y)
plt.title("Sine curve")
plt.show()

The graph will be displayed directly below the code.


Backup

  • Your notebooks are automatically backed up to your Google Drive.
  • You can also import/export .ipynb or .py files.
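In practice, persisting outputs means mounting Google Drive inside the notebook. A minimal sketch (the `google.colab` module only exists inside a Colab runtime, so the fallback keeps it runnable elsewhere):

```python
# Mount Google Drive so notebooks and outputs persist across sessions.
try:
    from google.colab import drive  # available only inside a Colab runtime
    drive.mount("/content/drive")
    IN_COLAB = True
except ImportError:
    IN_COLAB = False  # not running in Colab: nothing to mount

print("Running in Colab:", IN_COLAB)
```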


S2DR3 (Sentinel-2 Deep Resolution 3.0)


Overview

  • S2DR3 is an artificial intelligence super-resolution model applied to images from the Sentinel-2 satellite (Copernicus program mission). (Medium)
  • Its goal is to take Sentinel-2 multispectral bands (native resolutions of 10 m, 20 m, and sometimes 60 m) and “enhance” them to ≈ 1 m/pixel for all bands. (LinkedIn)
  • This provides access to much finer spatial details (roads, buildings, small objects, crop heterogeneity, etc.) than the standard Sentinel-2 resolution allows. (MalaGIS)


Why it’s interesting

Some key points:

  • Better spatial resolution: going from 10 m to 1 m is a leap in visible detail. This opens up uses that previously required very expensive commercial data. (data jungle adventures)
  • Complete multispectral coverage: unlike some models that only focus on a few bands (e.g., RGB), S2DR3 processes all Sentinel-2 bands, which preserves spectral richness (useful for agriculture, vegetation, water, etc.). (Medium)
  • Various applications: precision agriculture, detailed urban mapping, environmental monitoring, infrastructure. For example, a study shows that vegetation indices derived from S2DR3 correlate better with crop yields than those from the standard 10 m version. (MDPI)


Simplified operation

  • We start with a Sentinel-2 scene (L1C or L2A level) containing bands with resolutions of 10 m, 20 m, and 60 m. (MDPI)
  • The model (deep neural network) learns to “super-resolve” each band: that is, to predict a finer version of the image from the original, while trying to preserve the correct spectral signatures. (data jungle adventures)
  • The result: a georeferenced multispectral image at ~1 m/pixel, including all bands.
  • The script below uses this model via the Python package s2dr3.inferutils, which automates the model call, scene retrieval, processing, and export to Google Drive.
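To make the geometry concrete, here is a toy sketch: nearest-neighbour upsampling of a 10 m band by a factor of 10. S2DR3 replaces this naive interpolation with a learned deep network, but the input and output pixel grids are the same:

```python
import numpy as np

# A toy 4x4 "band" at 10 m/pixel, upsampled to ~1 m/pixel by repeating
# each pixel in a 10x10 block (nearest-neighbour). A super-resolution
# model predicts genuinely finer detail instead of copying pixels.
band_10m = np.arange(16, dtype=float).reshape(4, 4)
band_1m = np.kron(band_10m, np.ones((10, 10)))
print(band_10m.shape, "->", band_1m.shape)  # (4, 4) -> (40, 40)
```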


Practical uses in a script

Let’s see how the code fits into this framework:

Example S2DR3 code

# Link the output folder to Google Drive
!ln -s /content/drive/MyDrive/Sentinel2_1m /content/output

# Install the S2DR3 package
!pip -q install https://storage.googleapis.com/.../s2dr3-20250905.1-cp312-cp312-linux_x86_64.whl

import s2dr3.inferutils

# Coordinates (Loja, Ecuador)
lonlat = (-79.203, -4.008)

# Image date
date = '2024-08-23'

# Processing
s2dr3.inferutils.test(lonlat, date)

  • You specify an area of interest (longitude/latitude): this triggers the retrieval of the Sentinel-2 scene closest to the date, covering this area.
  • The model then generates the 1 m version of the band set from the original image.
  • The result is saved in /content/output (which is linked to your Google Drive) for access/backup.
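A small sketch for the last step: listing whatever the model wrote to /content/output. The directory and file pattern here are illustrative; the actual output names depend on the s2dr3 version you installed:

```python
import glob
import os

# List the GeoTIFFs produced by the model (path and pattern are illustrative).
output_dir = "/content/output"
tifs = sorted(glob.glob(os.path.join(output_dir, "*.tif")))
for path in tifs:
    size_mb = os.path.getsize(path) / 1e6
    print(f"{path}: {size_mb:.1f} MB")
print(len(tifs), "file(s) found")
```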


Limitations & points to note

It is also important to be aware of the limitations:

  • Variable quality depending on area/date: the model assumes the image contains few clouds, little haze or shadow, and well-represented surface types. Under certain conditions (heavy shade, clouds, very unusual areas), super-resolution can produce artifacts.
  • Generated data, not “true” 1 m commercial data: Even if the resolution is ~1 m, this is still an estimate by a model, not a native 1 m acquisition. Care should therefore be taken with highly critical uses (e.g., legal or engineering).
  • Computing resources: Processing can be resource-intensive (GPU, memory). Free notebooks (e.g., Colab) have limitations. (See the following article on how to configure processing with a T4 GPU.)
  • Validations required: As with all AI, results must be validated according to the application. One study shows that for certain crops, the advantage is real but not automatic. (MDPI)
  • Cost/licensing: Some commercial or large-scale versions may involve a cost or conditions of use. (data jungle adventures)


Recommendations for use

Here are some tips for getting the most out of S2DR3 in your scripts:

  • Check that the chosen date has good coverage (few clouds) for the area of interest.
  • After processing, visually check the output (e.g., in QGIS or a notebook) for any artifacts.
  • Compare (if possible) with the original standard resolution version from Sentinel-2 to see what the model brings.
  • If you are performing analysis (e.g., vegetation, classification), test on a small batch before processing large areas.
  • Document the source clearly: “super-resolved image via S2DR3 on XX/XX/XXXX” for transparency.
  • For production or commercial use, check the license terms for the model or generated products.
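For the comparison tip above, a common check is to compute the same vegetation index on both versions. A minimal NDVI sketch on synthetic values; real bands would be read from the 10 m product and the S2DR3 output (e.g., with rasterio):

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), per pixel, guarding against division by zero."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)

# Synthetic reflectances standing in for Sentinel-2 band 4 (red) and band 8 (NIR).
red = np.array([[0.05, 0.10]])
nir = np.array([[0.40, 0.30]])
print(ndvi(red, nir))
```

Comparing the NDVI maps side by side (and their correlation over the same area) shows whether the super-resolved bands remain spectrally consistent with the original.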


In the next article, TUTORIAL: Using S2DR3 in Google Colab to study corals in Mauritius, we will look at a concrete example of how this procedure can be used.


If you found this article interesting and think it could benefit others, feel free to share it on your social networks using the buttons below. Your shares are appreciated!
