Since cameras and Light Detection and Ranging (LiDAR) sensors provide complementary information about the environment, it is beneficial for mobile robot localization to fuse their information by assigning distances measured by the LiDAR to visual features detected in the image. However, existing approaches neglect the uncertainty of the fused information or model it optimistically (e.g., without taking extrinsic calibration errors into account). Since the actual distribution of errors during sensor fusion is often unknown, we assume that we only know bounds (or intervals) enclosing the errors. Consequently, we propose to use interval analysis to propagate the error from the input sources to the fused information in a straightforward way. To show the applicability of our approach, we use the fused information for dead reckoning. Since interval analysis is used, the results of our approach are intervals that are guaranteed to enclose the robot's true pose. An evaluation using real data shows that we are indeed able to localize the robot in a guaranteed way. This enables us to detect faults of an established approach, which neglects the uncertainty of the fused information, in three out of ten cases.
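To give a rough feel for how interval analysis propagates bounded errors through dead reckoning, here is a minimal Python sketch. It is not the authors' implementation: the `Interval` class, the unicycle motion model, and all numerical bounds are illustrative assumptions chosen for this example.

```python
import math

class Interval:
    """A closed interval [lo, hi] enclosing an uncertain real value."""
    def __init__(self, lo, hi=None):
        if hi is None:
            hi = lo
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        other = _as_interval(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        other = _as_interval(other)
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    __radd__, __rmul__ = __add__, __mul__

    def __repr__(self):
        return f"[{self.lo:.4f}, {self.hi:.4f}]"

def _as_interval(x):
    return x if isinstance(x, Interval) else Interval(x)

def icos(iv):
    """Interval extension of cos: endpoint values plus the interior
    extrema of cos, which occur at integer multiples of pi."""
    vals = [math.cos(iv.lo), math.cos(iv.hi)]
    k = math.ceil(iv.lo / math.pi)
    while k * math.pi <= iv.hi:
        vals.append(math.cos(k * math.pi))  # +1 or -1 inside the interval
        k += 1
    return Interval(min(vals), max(vals))

def isin(iv):
    """sin(x) = cos(x - pi/2)."""
    return icos(Interval(iv.lo - math.pi / 2, iv.hi - math.pi / 2))

def dead_reckoning_step(x, y, theta, v, omega, dt):
    """One unicycle-model step with all inputs given as intervals.

    If the input intervals contain the true values, the returned
    pose box is guaranteed to contain the true pose."""
    theta_new = theta + omega * dt
    x_new = x + v * dt * icos(theta_new)
    y_new = y + v * dt * isin(theta_new)
    return x_new, y_new, theta_new

# Hypothetical bounds: speed from fused camera/LiDAR information,
# heading rate from odometry, each enclosed by an interval.
x, y, theta = Interval(0.0), Interval(0.0), Interval(-0.02, 0.02)
v = Interval(0.95, 1.05)      # m/s
omega = Interval(0.08, 0.12)  # rad/s
for _ in range(10):
    x, y, theta = dead_reckoning_step(x, y, theta, v, omega, Interval(0.1))
print("x:", x, "y:", y, "theta:", theta)
```

Because every interval operation returns an enclosure of all possible outcomes, the final pose box is guaranteed to contain the true pose as long as the input bounds hold; the trade-off is that such boxes can grow conservatively over time.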
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The following video shows the idea of the approach presented in this paper: