Extrinsic Calibration Between a 3D Laser Scanner and a Camera Under Interval Uncertainty

Abstract

For navigation in the absence of GPS, mobile robots often fuse information from both laser scanner and camera to take advantage of both sensor modalities. While the laser scanner provides accurate distance measurements of the environment, camera images can be employed to re-identify salient features over time and space. However, to fuse data from both sensors, the extrinsic transformation, i.e. the rotation and translation between the sensor coordinate systems, has to be known. Generally, approaches for the extrinsic calibration between camera and laser scanner can be divided into two categories: target-based (e.g. using a checkerboard) and target-less (relying on natural scene features). Since target-less extrinsic calibration is usually less accurate, owing to the difficulty of identifying the same features in both the laser scan and the camera image, we omit it and focus on target-based calibration. Unnikrishnan and Hebert employ a checkerboard for which they extract the plane parameters from the data of both sensors. Since their approach requires corresponding plane parameters from at least three different checkerboard poses, Zhou et al. aim to reduce this number by incorporating additional features into their non-linear optimization. In addition to the checkerboard's plane, they also identify its border lines in both the camera image and the laser scan. Integrating these line features as additional constraints reduces the number of required checkerboard poses while also increasing the calibration accuracy.
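
To make the role of the extrinsic transformation concrete, here is a minimal sketch (not taken from the paper) of how a laser point is mapped into the camera image once the calibration is known. It assumes a pinhole camera with intrinsic matrix K; R and t are the extrinsic rotation and translation, and all names are illustrative.

```python
import numpy as np

def project_laser_point(p_laser, R, t, K):
    """Project a 3D point from the laser frame into the camera image.

    p_laser: (3,) point in laser coordinates
    R, t:    extrinsic rotation (3x3) and translation (3,)
    K:       pinhole intrinsic matrix (3x3)
    """
    p_cam = R @ p_laser + t          # laser frame -> camera frame
    u, v, w = K @ p_cam              # homogeneous image coordinates
    return np.array([u / w, v / w])  # pixel coordinates
```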
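
The plane-correspondence idea behind the target-based approach can likewise be sketched in a few lines. The version below is a simplified point-estimate variant, not the interval-based method of the paper: a plane n^T x = d observed as (n_l, d_l) in the laser frame and (n_c, d_c) in the camera frame satisfies n_c = R n_l and d_c = d_l + n_c^T t, so poses with at least three linearly independent normals determine R (via SVD/Kabsch) and t (via least squares). All function and variable names are hypothetical.

```python
import numpy as np

def calibrate_from_planes(normals_cam, dists_cam, normals_laser, dists_laser):
    """Estimate (R, t) with x_cam = R @ x_laser + t from plane correspondences.

    Inputs are consistently oriented unit normals, shape (k, 3) with k >= 3,
    and the corresponding plane distances, shape (k,).
    """
    Nc = np.asarray(normals_cam, dtype=float)
    Nl = np.asarray(normals_laser, dtype=float)
    # Rotation: align laser normals with camera normals (Kabsch algorithm).
    U, _, Vt = np.linalg.svd(Nl.T @ Nc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    # Translation: each pose gives one linear equation n_c^T t = d_c - d_l.
    b = np.asarray(dists_cam, dtype=float) - np.asarray(dists_laser, dtype=float)
    t, *_ = np.linalg.lstsq(Nc, b, rcond=None)
    return R, t
```

This makes the minimum of three checkerboard poses explicit: fewer than three (or parallel) normals leave R and t underdetermined, which is exactly the limitation that the additional line constraints of Zhou et al. relax.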

Publication
Book of Abstracts of the 12th Summer Workshop on Interval Methods (SWIM 2019)
Raphael Voges
Doctor in Robotics

My research interests include error modeling, sensor fusion, SLAM, state estimation and AI in the context of autonomous driving.
