
LiDAR data processing supercomputing project

[Image: Dielmo supercomputing control panel]

At Dielmo, we are aware that many of our customers need to increase the speed of their LiDAR data processing: the data collected for analysis and presentation of results keep growing in size, while the required delivery times keep getting shorter.

As a result, processing these data is sometimes too slow and delivery deadlines become difficult to meet.

For example, when analysing hazardous vegetation near power lines from LiDAR data, delivery times are very short: after the point cloud has been classified, we usually have only two or three days to run the calculations and quality-control the results, so computation time must be reduced as much as possible.

That is why, in an industry where the data to be processed keep growing and turnaround times keep tightening, applying new supercomputing techniques to LiDAR data processing is a necessity.

In response, Dielmo launched its internal R&D project:

Development of parallelized computing algorithms to increase computational speed in BIG DATA processing on its local network.

Main objectives

Understanding the challenge
The technological challenge was, and still is, one of the most important in the area of geospatial BIG DATA processing.

With the appearance of new sensors for massive LiDAR data capture, such as the Geiger-mode LiDAR developed by Harris or the single-photon LiDAR from Leica, which allow data capture that is 10 times faster and 10 times denser, the natural cyclical evolution of our industry goes through three phases:

  • Develop data collection systems that allow a higher volume of capture.
  • Work on the variety of collected information assets that results from the increase in data volume.
  • Work on improving the speed of processing this data.

Looking for solutions

With this project, Dielmo aimed to provide a solution to the third phase, finding a scalable improvement in the speed at which we process our data.

It is a necessary advance worldwide, since there is currently no API for the processing, management and presentation of LiDAR-specific results with a BackEnd that controls the prioritization of the work queues and allows end-to-end management of the data processing.
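
To illustrate the kind of BackEnd behaviour the project targets (queuing jobs, changing their priority, removing them from the queue, and handing them to workers), here is a minimal Python sketch of a priority job queue. Every name in it (LidarJobQueue, submit, reprioritize, cancel, next_job) is a hypothetical illustration, not Dielmo's actual API.

```python
import heapq
import itertools


class LidarJobQueue:
    """Hypothetical priority queue for LiDAR processing jobs.

    Lower priority numbers are served first. Jobs waiting in the
    queue can be reprioritized or cancelled at any time.
    """

    _REMOVED = object()  # sentinel marking cancelled entries

    def __init__(self):
        self._heap = []                    # (priority, order, job_id, payload)
        self._entries = {}                 # job_id -> heap entry
        self._counter = itertools.count()  # tie-breaker for equal priorities

    def submit(self, job_id, payload, priority=10):
        """Queue a job (e.g. one LiDAR tile to classify or analyse)."""
        if job_id in self._entries:
            self.cancel(job_id)
        entry = [priority, next(self._counter), job_id, payload]
        self._entries[job_id] = entry
        heapq.heappush(self._heap, entry)

    def reprioritize(self, job_id, priority):
        """Change a waiting job's priority by re-inserting it."""
        entry = self._entries[job_id]
        self.cancel(job_id)
        self.submit(job_id, entry[3], priority)

    def cancel(self, job_id):
        """Remove a job from the queue without executing it."""
        entry = self._entries.pop(job_id)
        entry[2] = self._REMOVED

    def next_job(self):
        """Pop the highest-priority pending job, or None if empty."""
        while self._heap:
            priority, _, job_id, payload = heapq.heappop(self._heap)
            if job_id is not self._REMOVED:
                del self._entries[job_id]
                return job_id, payload
        return None


if __name__ == "__main__":
    # Illustrative usage with made-up tile names and task types.
    queue = LidarJobQueue()
    queue.submit("tile_042", {"task": "vegetation_analysis"}, priority=5)
    queue.submit("tile_043", {"task": "dtm_generation"}, priority=20)
    queue.reprioritize("tile_043", 1)  # urgent delivery: jump the queue
    print(queue.next_job())            # ('tile_043', {'task': 'dtm_generation'})
```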

With this in mind, the project’s objectives have focused on finding answers to the following questions:

  • How to improve task queuing.
  • How to change priorities or remove tasks from the running queue.
  • How to communicate to consumers the tasks to be performed.
  • How to improve the monitoring of running tasks.
  • How to develop calculation algorithms under the SPMD (Single Program, Multiple Data) paradigm to increase the calculation speed in the processing of BIG DATA within the company’s local network (a minimal sketch follows this list).
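
The last question is the computational core of the project: under SPMD, every worker runs the same program but receives a different slice of the data, for example a different LiDAR tile. Below is a minimal sketch of that idea using Python's standard multiprocessing module; the tile names and the per-tile function are placeholders, not Dielmo's production algorithms.

```python
from multiprocessing import Pool, cpu_count


def process_tile(tile_path):
    """Same program for every worker: load one LiDAR tile,
    run the analysis, and return a small summary.

    The body is a placeholder; a real job would read the point
    cloud (e.g. a LAS/LAZ tile) and run the vegetation analysis.
    """
    # ... load points, classify/analyse, write results ...
    return tile_path, "ok"


if __name__ == "__main__":
    # Different data for each worker: one entry per LiDAR tile.
    tiles = [f"tiles/block_{i:04d}.laz" for i in range(64)]

    # One worker per CPU core; adding hardware adds workers,
    # which is where the scalability of the approach comes from.
    with Pool(processes=cpu_count()) as pool:
        for tile, status in pool.imap_unordered(process_tile, tiles):
            print(f"{tile}: {status}")
```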

Unlimited scalability

This project also aims to achieve unlimited scalability thanks to the in-house development of the algorithms. With no software license limitations, we can simply add more hardware whenever we need more processing power.
