3D Sensors

3D sensors capture depth data of objects and environments, making them ideal for object recognition, detection, and localization in three-dimensional space. Because different 3D sensors differ in characteristics such as price, underlying technology, maximum range, and quality of the generated data, a calibrated and synchronized data set was recorded at Fraunhofer IML for logistics scenarios and for evaluation purposes. The sensors used are listed below.

Sensors from the paper

Intel RealSense D455
P+F SmartRunner Explorer 3-D
Azure Kinect DK
Intel RealSense L515
Sick T-mini
Sick Visionary-S
Zivid Two

The released data sets were created primarily for training AI algorithms and comparing different sensors. In total, three data sets are provided, each of which contains several scenarios. All our data sets have the following characteristics:

  • Raw depth data in point cloud format in a ROS bag file
  • RGB data from all sensors that provide RGB information
  • Camera matrices and IMU data, if available
  • Time synchronization data
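The camera matrices included with the recordings allow depth pixels to be back-projected into 3D points via the standard pinhole model. The sketch below illustrates this; the intrinsic values are illustrative placeholders, not calibration values from the data set.

```python
# Sketch: back-projecting a depth pixel to a 3D point with the pinhole model.
# The intrinsics (fx, fy, cx, cy) below are placeholder values, not taken
# from the data set's camera matrices.

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Convert pixel (u, v) with depth in metres to a camera-frame XYZ point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example: a pixel at the principal point of a 640x480 image maps straight ahead.
point = deproject(u=320.0, v=240.0, depth_m=1.02,
                  fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(point)  # (0.0, 0.0, 1.02)
```

In practice, fx, fy, cx, and cy are read from the camera matrix K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] provided for each sensor.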

A detailed description of the data sets below, as well as a qualitative comparison of the sensors used, can be found in this paper:

Hoose, S., Finke, J., Rest, C., Stenzel, J., & Jost, J. (2024). Evaluating 3D Depth Sensors: A Study and an Open Source Data Set of Logistic Scenarios.

This paper can also be used to cite the data set described here [ dataset.iml.fraunhofer.de/2024/logistic_depth_data] in scientific publications.

Data Sets

The evaluation data sets, consisting of recordings of a 3D-printed cube and a box with external dimensions of 100 x 100 mm, can be used to assess the quality of the sensors. The structure of the scenes is static for each data set, with the sensors mounted above a table at a height of 102 cm, giving a bird's-eye view of one of the objects. A CAD model is also available for each object.
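Because the reference objects have known geometry, sensor quality can be quantified, for instance, as the deviation of measured points from a nominal surface. The sketch below computes an RMS plane error; the sample points are synthetic stand-ins, not values from the recordings.

```python
# Sketch: quantifying depth noise on a planar face of a reference object by
# the RMS distance of measured points from the nominal plane.
# The sample points below are synthetic, not taken from the data set.
import math

def rms_plane_error(points, plane_z):
    """RMS distance (metres) of (x, y, z) points from the plane z = plane_z."""
    squared = [(z - plane_z) ** 2 for _, _, z in points]
    return math.sqrt(sum(squared) / len(squared))

# Simulated top face of the cube at 1.02 m range with slight sensor noise.
samples = [(0.00, 0.00, 1.021), (0.01, 0.00, 1.019),
           (0.00, 0.01, 1.020), (0.01, 0.01, 1.022)]
print(rms_plane_error(samples, plane_z=1.02))
```

A full evaluation would first register the point cloud to the CAD model; this sketch assumes the plane is already axis-aligned.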

The second group of data sets can be used, for example, to evaluate machine learning and segmentation algorithms in logistics scenarios. In all scenes, a static camera position was chosen, with the sensors aimed at the area of interest from a distance of 102 cm in a bird's-eye view. The first scenario includes a pallet with various packaging schemes of different boxes. The second scenario shows moving and stationary small load carriers (SLC) on a conveyor belt, where the SLCs are filled with different types of goods, including tools, retail items, boxes, and bags with different contents.

The third group of data sets is intended for evaluating the performance of, e.g., machine learning algorithms with moving cameras, especially in segmentation and mapping applications. The sensors are rigidly mounted on a shared frame, so their relative positions remain fixed. Two logistics scenarios were selected for the evaluation. In the first scenario, recordings of a pallet loaded with different parcels were taken while the sensors were moved in a circle around the pallet. In the second scenario, the sensor frame rotates in a logistics hall equipped with various objects typically found in warehouse and logistics facilities, such as pallet cages, forklifts, pallets, SLCs, fire extinguishers, and people walking through the scene.
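Since the sensors share a rigid frame, a point measured in one sensor's coordinate system can be mapped into another's with a fixed extrinsic transform. The sketch below applies such a rigid transform; the rotation and translation values are illustrative placeholders, not the data set's calibration.

```python
# Sketch: mapping a point between two rigidly mounted sensors using a fixed
# extrinsic transform (3x3 rotation R, translation t).
# The extrinsics below are placeholders, not calibration from the data set.

def transform_point(R, t, p):
    """Apply the rigid transform p' = R @ p + t to a 3D point p."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))

# Placeholder extrinsics: second sensor offset 25 cm along x, no rotation.
R_identity = [[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]]
t_offset = (0.25, 0.0, 0.0)
print(transform_point(R_identity, t_offset, (0.25, 0.2, 1.0)))  # (0.5, 0.2, 1.0)
```

With a moving frame, the same per-sensor extrinsics stay valid throughout a recording, which is what makes multi-sensor comparison on these sequences straightforward.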