Spatial Data Science

Mapping from high-resolution images and height data

Publication date: 14-02-2024, Read time: 2 min

Where do our maps come from?

Most of the maps we use in daily life are based on information captured in Earth Observation data, such as aerial photographs or satellite images. Elements that you find in those images include buildings, roads, water bodies, and trees. The more detailed the map, the higher the resolution of the images needs to be. Unfortunately, it is difficult to find and map all those detailed objects automatically.

How can we automate this process? 

In this article, we focus on the procedure for producing a map of the objects that are visible in high-resolution images and height data captured by cameras and laser scanners mounted on airplanes. A human operator applies implicit knowledge of how the objects should be generalized in a map. The question is whether we can automate this process: how do we teach the computer that a group of pixels in the images should be labelled as, for example, “building”, “bare ground”, “cycle lane” or “bridge”? And how do we teach the algorithm to draw boundaries between objects?
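The two questions above can be illustrated on a toy grid. The 4x4 label array and class codes below are invented for illustration (a real map tile has millions of pixels, and a trained network would produce the labels); the boundary rule simply marks a pixel where its class differs from its right or lower neighbour.

```python
import numpy as np

# Invented 4x4 per-pixel class grid: 0 = bare ground, 1 = building, 2 = road.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 1, 1],
                   [2, 2, 2, 2]])

# A pixel lies on an object boundary when its class differs from the
# pixel to its right or the pixel below it.
right = np.zeros_like(labels, dtype=bool)
down = np.zeros_like(labels, dtype=bool)
right[:, :-1] = labels[:, :-1] != labels[:, 1:]
down[:-1, :] = labels[:-1, :] != labels[1:, :]
boundary = right | down

print(boundary.astype(int))  # 1 where two map objects meet
```

In a production pipeline these raster boundaries would then be generalized into the clean vector outlines you see on a map.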

We make use of big geodata to train deep learning networks on how the maps should be produced. To be precise, we use existing maps, together with aerial images and height data, to train the network. For this, we rely on nationwide open data: a huge dataset containing billions of polygons and even more image pixels and height measurements.
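The training idea can be sketched in a few lines. In this toy version, all numbers are made up: fake per-pixel features (a colour value and a height value) stand in for the fused image and laser-scanner data, the labels play the role of classes read off an existing map, and a one-layer logistic model stands in for the deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake per-pixel features [colour, height in metres]: low pixels are
# "bare ground" (class 0), high pixels are "building" (class 1), exactly
# as an existing map would tell us during training.
X = np.vstack([rng.normal([0.5, 0.0], 0.1, (100, 2)),   # ground pixels
               rng.normal([0.5, 8.0], 0.1, (100, 2))])  # building pixels
y = np.repeat([0.0, 1.0], 100)                          # labels from the map

# Tiny logistic model trained by gradient descent on cross-entropy;
# a deep segmentation network replaces this in the real pipeline.
w, b = np.zeros(2), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(building)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

accuracy = np.mean((p > 0.5) == y)
print(accuracy)
```

The point of the sketch is the supervision setup, not the model: the existing map supplies the answers, so no human has to label billions of pixels by hand.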

In our education at ITC, we cover topics on cartographic rules in map production, image analysis, point cloud processing, data fusion, deep learning, and quality analyses of the produced results. 

Last edited: 07-05-2024
