My passion for robotics and autonomous driving dates back to my childhood, when it was ignited by a fischertechnik robotics kit. This inspired me to study computer science at Leibniz University Hannover. Recently, I successfully defended my PhD thesis on the topic of “Bounded-Error Visual-LiDAR Odometry on Mobile Robots Under Consideration of Spatiotemporal Uncertainties” (summa cum laude).
Currently, I am a sensor and calibration expert at Volkswagen Commercial Vehicles, working on the autonomous vehicle ID.BUZZ AD. My main focus is on defining the sensor set, calibrating it extrinsically, and assessing its movements during driving.
Doctor of Engineering (Dr.-Ing.) in Mobile Robotics, 2020
Leibniz University Hannover
M. Sc. in Computer Science, 2016
Leibniz University Hannover
B. Sc. in Computer Science, 2014
Leibniz University Hannover
Development of the autonomous vehicle ID.BUZZ AD, including:
Responsibilities included:
Successfully prepared and defended my PhD thesis on the topic of “Bounded-Error Visual-LiDAR Odometry on Mobile Robots Under Consideration of Spatiotemporal Uncertainties” (summa cum laude). For more details, see here.
Date of thesis submission: 26 Feb 2020
Date of thesis defense: 17 Jun 2020
Responsibilities included:
Two separate internships for one month each:
Responsibilities included:
Since cameras and Light Detection and Ranging (LiDAR) sensors provide complementary information about the environment, it is beneficial for mobile robot localization to fuse their information by assigning distances measured by the LiDAR to visual features detected in the image. However, existing approaches neglect the uncertainty of the fused information or model it in an optimistic way (e.g., without taking extrinsic calibration errors into account). Since the actual distribution of errors during sensor fusion is often unknown, we assume that only bounds (or intervals) enclosing the errors are known. Consequently, we propose to use interval analysis to propagate the error from the input sources to the fused information in a straightforward way. To show the applicability of our approach, we use the fused information for dead reckoning. Since interval analysis is used, the results of our approach are intervals that are guaranteed to enclose the robot’s true pose. An evaluation using real data shows that we are indeed able to localize the robot in a guaranteed way. This enables us to detect faults of an established approach, which neglects the uncertainty of the fused information, in three out of ten cases.
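The core idea of propagating bounded errors can be illustrated with a few lines of interval arithmetic. The sketch below is not the paper's implementation; the interval class, the LiDAR range and the error bounds are illustrative assumptions. It merely shows how a bounded measurement error and a bounded calibration error combine into an interval that is guaranteed to enclose the true depth of a fused feature.

```python
# Minimal sketch of bounded-error propagation with interval arithmetic.
# All names and numeric bounds are illustrative assumptions, not values from the paper.

class Interval:
    """Closed interval [lo, hi]."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        # The sum of two intervals encloses every sum of points drawn from them.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __repr__(self):
        return f"[{self.lo:.3f}, {self.hi:.3f}]"


# LiDAR range measurement with an assumed bounded error of +/- 3 cm.
measured_range = 12.40
range_iv = Interval(measured_range - 0.03, measured_range + 0.03)

# Assumed bounded extrinsic calibration error along the ray of +/- 1 cm.
# Adding it (instead of neglecting it) yields a depth interval that is
# guaranteed to contain the feature's true depth as long as the bounds hold.
calib_iv = Interval(-0.01, 0.01)
fused_depth = range_iv + calib_iv
print("guaranteed depth enclosure:", fused_depth)
```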
To fuse information from a 3D Light Detection and Ranging (LiDAR) sensor and a camera, the extrinsic transformation between the sensor coordinate systems needs to be known. Therefore, an extrinsic calibration must be performed, which is usually based on features extracted from sensor data. Naturally, sensor errors can affect the feature extraction process, and thus distort the calibration result. Unlike previous works, which do not consider the uncertainties of the sensors, we propose a set-membership approach that takes all sensor errors into account. Since the actual error distribution of off-the-shelf sensors is often unknown, we assume that only bounds (or intervals) enclosing the sensor errors are known and accordingly introduce novel error models for both sensors. Next, we introduce interval-based approaches to extract corresponding features from images and point clouds. Due to the unknown but bounded sensor errors, we cannot determine the features exactly, but compute intervals guaranteed to enclose them. Subsequently, these feature intervals enable us to formulate a Constraint Satisfaction Problem (CSP). Finally, the CSP is solved to find a set of solutions that is guaranteed to contain the true solution and simultaneously reflects the accuracy of the calibration. Experiments using simulated and real data validate our approach and show its advantages over existing methods.
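At the heart of solving such a CSP lies interval constraint propagation: each constraint is used to contract the intervals of the involved variables without ever discarding the true solution. The following sketch shows a forward-backward contraction for the toy constraint z = x + y; the variables, bounds and the constraint itself are illustrative assumptions and much simpler than the actual calibration constraints.

```python
# Minimal sketch of interval constraint propagation, the mechanism behind solving
# a CSP over intervals. Constraint and bounds are illustrative assumptions; the
# paper's constraints relate image features, point-cloud features and the
# extrinsic calibration parameters.

def intersect(a, b):
    """Intersection of two intervals given as (lo, hi) tuples."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("empty intersection: bounds are inconsistent")
    return (lo, hi)

def contract_sum(x, y, z):
    """Forward-backward contraction for the constraint z = x + y."""
    # Forward: z must be consistent with x + y.
    z = intersect(z, (x[0] + y[0], x[1] + y[1]))
    # Backward: x = z - y and y = z - x.
    x = intersect(x, (z[0] - y[1], z[1] - y[0]))
    y = intersect(y, (z[0] - x[1], z[1] - x[0]))
    return x, y, z

# Example: an unknown calibration offset x, a feature position y measured by one
# sensor, and the same feature z observed (with bounded error) by the other sensor.
x = (-0.5, 0.5)      # prior bounds on the offset
y = (2.00, 2.04)     # feature interval from sensor A
z = (2.10, 2.16)     # feature interval from sensor B
print(contract_sum(x, y, z))   # x shrinks to roughly (0.06, 0.16)
```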
With the advent of autonomous driving, the localization of mobile robots, especially without GNSS information, is becoming increasingly important. It must be ensured that the localization works robustly and that timely warnings are provided if the pose estimates are too uncertain to ensure safe operation of the system. To meet these requirements, autonomous systems require reliable and trustworthy information about their environment. To improve the reliability and the integrity of information, and to be robust with respect to sensor failures, information from multiple sensors should be fused. However, this requires inter-sensor properties (e.g. the transformation between sensor coordinate systems) to be known. Naturally, neither the actual sensor measurements nor the inter-sensor properties can be determined without errors, and thus must be modeled accordingly during sensor fusion. To localize autonomous vehicles without GNSS information in 3D, this work introduces a dead reckoning approach relying on information from a camera, a laser scanner and an IMU. First, novel error models for the individual sensors are introduced. Here, the errors are assumed to be unknown but bounded, which requires knowing bounds (i.e. intervals) that the actual sensor errors do not exceed. However, no further assumptions are required. In particular, the error distribution within the bounds does not need to be known, which is a frequently overlooked assumption of established approaches. Furthermore, interval-based error models are compatible with unknown systematic errors and can be used to guarantee results. Second, to determine the inter-sensor properties and the corresponding uncertainties, this thesis presents new approaches for the spatiotemporal calibration between camera, laser scanner and IMU that employ the proposed error models. Third, an innovative method that considers both sensor and inter-sensor errors for guaranteed sensor fusion is proposed. The fused information is subsequently used to perform interval-based dead reckoning of a mobile robot. To evaluate the developed methods, both simulated and real data are analyzed. It becomes evident that all proposed approaches are guaranteed to enclose the true solution if the sensor error bounds are correct. Moreover, although interval-based approaches consider the “worst case”, i.e. the maximum sensor errors, the results are reasonably accurate. In particular, it can be determined in which instances a state-of-the-art method computes a result that deviates significantly from the actual solution.
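As a small illustration of interval-based dead reckoning (not the thesis implementation), the sketch below accumulates translational odometry increments whose errors are only known to lie within bounds; the resulting box is guaranteed to enclose the true position, and its growing width makes the accumulated uncertainty explicit. The increments and error bounds are synthetic assumptions.

```python
# Minimal sketch of interval-based dead reckoning with unknown-but-bounded
# per-step odometry errors. Values are illustrative assumptions.

def add_interval(a, b):
    # Interval addition: lower bounds and upper bounds add separately.
    return (a[0] + b[0], a[1] + b[1])

# Measured per-step displacements (dx, dy) in metres and an assumed error bound.
increments = [(0.50, 0.02), (0.49, 0.01), (0.51, -0.01)]
err = 0.02   # each displacement component is trusted only up to +/- 2 cm

x = (0.0, 0.0)
y = (0.0, 0.0)
for dx, dy in increments:
    # Inflate each increment by its error bound before accumulating it, so the
    # position box keeps enclosing the true position (given correct bounds).
    x = add_interval(x, (dx - err, dx + err))
    y = add_interval(y, (dy - err, dy + err))
    print(f"x in [{x[0]:.2f}, {x[1]:.2f}], y in [{y[0]:.2f}, {y[1]:.2f}]")
```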
To properly fuse IMU and camera information for robotics applications, the relative timestamp offset between both sensors' data streams has to be considered. However, finding the exact timestamp offset is often impossible. Thus, it is necessary to additionally consider the offset’s uncertainty if we want to produce reliable results. In order to find the offset and its uncertainty, we determine orientation estimates from IMU and camera under interval uncertainty. Subsequently, these intervals are used as a common representation for our bounded-error approach that finds an interval enclosing the true offset while also modeling the uncertainty. Calibration data can be acquired in a few seconds using a simple setup of IMU, camera and camera target. Results using both simulated and real data demonstrate that we are able to determine the offset to an accuracy of 20 ms with a computation time that is suitable for future online applications. Here, our approach could be used to monitor the timestamp offset in a guaranteed way. Additionally, our method can be adapted to determine an interval for the rotation between both sensors. While this increases the computation time drastically, it also enhances the accuracy of the timestamp offset to less than 10 ms.
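The principle of enclosing the timestamp offset can be sketched as a feasibility test over candidate offsets. The following code is illustrative, not the paper's algorithm: a synthetic yaw signal stands in for the orientation estimates, and the sampling rates and error radii are assumed values. Every candidate offset that keeps the camera's interval estimates consistent with the IMU's interval estimates survives, and the surviving set forms an interval that encloses the true offset as long as the error bounds are correct.

```python
# Minimal sketch of a bounded-error search for an IMU/camera timestamp offset.
# Signal, rates and error radii are synthetic assumptions.

import math

def overlap(a, b):
    return max(a[0], b[0]) <= min(a[1], b[1])

def yaw_interval(t, radius):
    """Synthetic yaw trajectory with an unknown-but-bounded error of +/- radius [rad]."""
    yaw = 0.5 * math.sin(t)
    return (yaw - radius, yaw + radius)

true_offset = 0.015                                       # 15 ms, to be recovered
cam_times = [0.05 * i + true_offset for i in range(40)]   # 20 Hz camera timestamps

feasible = []
for candidate in (0.001 * k for k in range(-50, 51)):     # test -50 ms ... +50 ms
    consistent = all(
        overlap(yaw_interval(tc - candidate, 0.01),        # IMU estimate, +/- 0.01 rad
                yaw_interval(tc - true_offset, 0.005))     # camera estimate, +/- 0.005 rad
        for tc in cam_times)
    if consistent:
        feasible.append(candidate)

print(f"offset enclosed by [{min(feasible) * 1e3:.0f} ms, {max(feasible) * 1e3:.0f} ms]")
```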
Multi-sensor systems are widely used for robotics applications. While additional sensors can increase the accuracy and robustness of the solution, it is essential to synchronize them in order to rely on the results. For our multi-sensor system consisting of an actuated laser scanner, its motor and a camera, we assume that the timestamps are only delayed by a constant offset. We propose two different approaches to calculate timestamp offsets from laser scanner to motor, one of which is additionally capable of determining the timestamp offset between laser scanner and camera. Both approaches use parts of a SLAM algorithm but apply different criteria to find an appropriate solution. Our experiments show that we are able to determine timestamp offsets with reasonable accuracy. Furthermore, our experiments demonstrate the importance of proper synchronization for a multi-sensor system.
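In spirit, such an offset calibration can be sketched as a grid search over candidate offsets that minimizes a quality criterion. The code below is illustrative only: a synthetic delayed signal and a mean-squared mismatch stand in for the SLAM-based criteria used in the paper.

```python
# Minimal sketch of a grid search over candidate timestamp offsets. The criterion
# below (mismatch of a synthetic delayed signal) is a stand-in assumption for the
# SLAM-based criteria used in the paper.

import math

true_offset = 0.02    # 20 ms delay between the two data streams

def stream_a(t):
    return math.sin(3.0 * t)

def stream_b(t):
    return math.sin(3.0 * (t - true_offset))   # same signal, delayed

times = [0.01 * i for i in range(300)]

def misalignment(candidate):
    """Mean squared mismatch after shifting stream B by the candidate offset."""
    return sum((stream_a(t) - stream_b(t + candidate)) ** 2 for t in times) / len(times)

candidates = [0.001 * k for k in range(-50, 51)]            # -50 ms ... +50 ms
best = min(candidates, key=misalignment)
print(f"estimated offset: {best * 1e3:.0f} ms")
```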