For Drones, Combining Vision Sensor and IMU Data Leads to More Robust Pose Estimation

Enhancing Situational Awareness for Drones by Combining IMU and Vision Sensor Data

When a drone (e.g. a quadcopter) flies, it needs to know where it is in three-dimensional space at all times, across all six degrees of freedom for translation and rotation. Such pose estimation is crucial for flying without crashes or other errors. Drone developers are heavily challenged when attempting to use a single IMU or vision sensor to measure both orientation and translation in space. A hybrid approach that combines IMU and vision data, by contrast, improves the precision of pose estimation for drones by pairing the strengths of both measuring methods.

The IMU (Inertial Measurement Unit) measures acceleration and angular rate, and orientation can be derived from this raw output data. In theory, the acceleration measurements could also be used to derive translation. However, to calculate such results, developers need to integrate twice, a process in which measurement errors accumulate rapidly. Therefore, the IMU alone is not an accurate source of precise location information.
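To make the double-integration problem concrete, the short Python sketch below (with assumed sample rate, bias and noise figures, not values from any specific IMU) integrates a purely noisy accelerometer signal twice and shows how the position error grows even though the true motion is zero.

```python
# A minimal sketch illustrating why double-integrating accelerometer data
# drifts: a small, constant bias plus white noise grows roughly quadratically
# in the position estimate. All noise figures are assumed values.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.005                      # 200 Hz IMU, assumed rate
t = np.arange(0.0, 10.0, dt)    # 10 s of hovering (true acceleration = 0)

bias = 0.02                     # m/s^2, assumed accelerometer bias
noise = rng.normal(0.0, 0.05, t.size)   # m/s^2, assumed white noise
accel_meas = bias + noise       # true acceleration is zero while hovering

vel = np.cumsum(accel_meas) * dt        # first integration  -> velocity error
pos = np.cumsum(vel) * dt               # second integration -> position error

print(f"position error after 1 s : {pos[int(1.0 / dt)]:.3f} m")
print(f"position error after 10 s: {pos[-1]:.3f} m")
```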

In contrast, the vision sensor is quite good at measuring location, but it is sub-optimal at determining orientation. Particularly with wide viewing angles and long-distance observation, it is difficult for the vision system alone to measure orientation with adequate precision. A hybrid system that pairs IMU and vision data can provide a more precise measurement of the full six degrees of freedom of pose, delivering better results than either the IMU or the vision sensor individually.

The most challenging issues in such a sensor fusion configuration are determining a common coordinate frame of reference for both orientation and translation data and minimizing the noise produced by the sensors. A common approach leverages linear Kalman filters, which can merge IMU and vision data for hybrid pose estimation. For a vision system mounted on or embedded in the drone, SLAM (simultaneous localization and mapping) provides spatial awareness by mapping the drone's environment, ensuring that it does not collide with trees, buildings, other drones or other objects.
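As an illustration of the filtering idea, here is a minimal one-dimensional Kalman filter sketch: the accelerometer drives the high-rate prediction step, while a slower vision position fix drives the correction step. All rates, noise levels and the simulated trajectory are assumed values, not a definitive implementation.

```python
# A minimal 1-D sketch of the Kalman-filter fusion described above: predict
# with the IMU at high rate, correct with a (slower) vision position fix.
import numpy as np

dt = 0.005                                  # 200 Hz IMU, assumed
F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity state transition
B = np.array([[0.5 * dt * dt], [dt]])       # accelerometer as control input
H = np.array([[1.0, 0.0]])                  # vision measures position only
Q = np.diag([1e-6, 1e-4])                   # assumed process noise
R = np.array([[0.01]])                      # assumed vision noise (m^2)

x = np.zeros((2, 1))                        # state: [position, velocity]
P = np.eye(2)

rng = np.random.default_rng(1)
for k in range(2000):                       # 10 s of flight
    t = k * dt
    true_pos = np.sin(t)                    # assumed ground-truth trajectory
    true_acc = -np.sin(t)

    # --- predict with the IMU sample (bias + noise are assumed values) ---
    acc_meas = true_acc + 0.02 + rng.normal(0.0, 0.05)
    x = F @ x + B * acc_meas
    P = F @ P @ F.T + Q

    # --- correct with a vision fix every 20th IMU sample (10 Hz) ---
    if k % 20 == 0:
        z = np.array([[true_pos + rng.normal(0.0, 0.1)]])
        y = z - H @ x                       # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

print(f"estimated position: {x[0, 0]:.3f} m, true: {true_pos:.3f} m")
```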

 

Factors to Consider When Building a Hybrid Sensor-based Drone System

Several key factors influence measurement quality. First, the quality of an IMU's measurements depends heavily on the quality of the IMU selected for the design. Inexpensive IMUs tend to generate high noise levels, which can lead to various errors and other deviations. More generally, proper calibration is necessary to characterize the sensor, e.g. to determine the noise model used by the filter. Individual sensors, even of the same model from the same manufacturer, will have slightly different noise patterns that require consideration.
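As a simple illustration of such a characterization step, the sketch below estimates the per-axis bias and white-noise level of an IMU from a stationary recording. A production workflow would typically go further (e.g. Allan variance analysis), and all numbers here are synthetic.

```python
# A minimal sketch (synthetic data, not a full Allan-variance analysis) of
# characterizing an individual IMU from a stationary capture: the per-axis
# mean gives the static bias, the standard deviation the white-noise level.
import numpy as np

def characterize_static_imu(accel_samples, gyro_samples):
    """accel_samples, gyro_samples: (N, 3) arrays recorded with the sensor at rest."""
    return {
        "accel_bias": accel_samples.mean(axis=0),   # includes gravity on one axis
        "accel_std":  accel_samples.std(axis=0),    # white-noise sigma per axis
        "gyro_bias":  gyro_samples.mean(axis=0),
        "gyro_std":   gyro_samples.std(axis=0),
    }

# Example with synthetic "recorded" data (all numbers assumed):
rng = np.random.default_rng(2)
accel = rng.normal([0.0, 0.0, 9.81], 0.05, size=(20000, 3))            # m/s^2
gyro = rng.normal([0.001, -0.002, 0.0005], 0.002, size=(20000, 3))     # rad/s
print(characterize_static_imu(accel, gyro))
```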

On the vision side, the implementation specifics fundamentally depend on whether a global-shutter or rolling-shutter image sensor is used. With a global-shutter sensor, every pixel is exposed at the same time, so object motion causes no read-out distortion. With a more economical rolling-shutter sensor, by contrast, distortion can occur because of read-out time differences between pixel rows. IMU information can correct for these rolling-shutter artifacts, historically by means of various filtering methods; nowadays, deep learning-based processing can also be used to reduce the IMU sensor's noise.
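The following sketch illustrates one simple form of gyro-aided rolling-shutter compensation: each image row gets its own read-out timestamp, and a feature's pixel position is shifted by the first-order image motion that the measured rotation causes over that time offset. The sensor parameters and axis conventions are assumptions, and roll and perspective terms are deliberately omitted for brevity.

```python
# A minimal sketch (assumed camera parameters) of using gyroscope data to
# compensate rolling-shutter skew: each image row is read out at a slightly
# different time, so a feature's pixel position is corrected by the small
# rotation the camera undergoes between that row and the reference row.
import numpy as np

IMG_HEIGHT = 480            # rows, assumed
ROW_READOUT = 30e-6         # seconds per row, assumed sensor spec
FOCAL_PX = 600.0            # focal length in pixels, assumed

def row_time_offset(row, ref_row=IMG_HEIGHT // 2):
    """Read-out time of `row` relative to the reference (middle) row."""
    return (row - ref_row) * ROW_READOUT

def correct_feature(u, v, gyro_rate_xy):
    """Shift feature (u, v) as if the whole frame had been exposed at the
    reference row's time. First-order pan/tilt approximation only; roll and
    perspective terms are omitted, axis conventions are assumed."""
    dt = row_time_offset(v)
    wx, wy = gyro_rate_xy                       # rad/s, camera pitch / yaw rates
    du = FOCAL_PX * wy * dt                     # yaw during dt shifts columns
    dv = FOCAL_PX * wx * dt                     # pitch during dt shifts rows
    return u - du, v - dv

# Example: a feature near the bottom of the frame while the drone yaws fast.
print(correct_feature(320.0, 470.0, gyro_rate_xy=(0.1, 2.0)))
```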

 

Combining IMU and Vision Data

One challenge with hybrid systems is that the captured vision data arrives at a comparatively low frame rate, usually well below 100 Hz, while the IMU data arrives at a high frequency, sometimes well over 1 kHz. The resulting implementation problem lies in aligning the information from both systems at the same points in time. SLAM techniques such as continuous trajectory estimation can approximate the drone's movement by assuming that its motion is continuous.
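A minimal way to query such a continuous trajectory is to bracket any IMU timestamp between the two nearest camera-rate pose estimates and interpolate: linearly for position and with quaternion slerp for orientation. The poses and timestamps in the sketch below are assumed placeholder values.

```python
# A minimal sketch of the continuous-trajectory idea: approximate the pose at
# an arbitrary IMU timestamp by interpolating between two camera-rate poses.
import numpy as np

def slerp(q0, q1, alpha):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z)."""
    dot = np.dot(q0, q1)
    if dot < 0.0:                       # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                    # nearly parallel: fall back to lerp
        q = q0 + alpha * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - alpha) * theta) * q0 + np.sin(alpha * theta) * q1) / np.sin(theta)

def interpolate_pose(t, t0, p0, q0, t1, p1, q1):
    """Pose at time t, with t0 <= t <= t1 bracketed by two camera keyframes."""
    alpha = (t - t0) / (t1 - t0)
    return (1 - alpha) * p0 + alpha * p1, slerp(q0, q1, alpha)

# Two 10 Hz camera poses and an IMU sample timestamp in between (all assumed):
p0, q0 = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0, 0.0])
p1, q1 = np.array([0.1, 0.0, 1.0]), np.array([0.999, 0.0, 0.0447, 0.0])
print(interpolate_pose(0.105, 0.1, p0, q0, 0.2, p1, q1))
```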

Developers can integrate both the IMU and vision data into an environment with a common reference frame, allowing them to assign each measurement to a specific point on the continuous trajectory. Between any two image acquisitions, multiple IMU measurements provide additional reference points along this trajectory. While in the air, the drone's state is constantly time-synchronized and updated with IMU data, and every time a vision image is received, it corrects the accumulated IMU error.

 

Hardware Requirements, Implementation and Testing

Considering drones' limited integration space and generally resource-constrained embedded nature, implementing a robust setup for synchronizing the IMU and vision data is not straightforward. Light and lean components with sufficient processing power are necessary, within a limited memory footprint and a tight power budget. Stabilized sensors and software filtering are also essential for SoC developers, because the extreme jitter caused by propeller movement affects the vision components of the design.
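As one example of such software filtering, the sketch below low-pass filters a vibrating accelerometer signal with a Butterworth filter. The sample rate, cut-off frequency and vibration frequency are assumed values, and the zero-phase filtfilt call is an offline convenience; an onboard system would use a causal filter instead.

```python
# A minimal sketch (assumed rates and cut-off) of software filtering against
# propeller-induced vibration: a Butterworth low-pass applied to raw
# accelerometer data before it enters the fusion pipeline.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0          # IMU sample rate in Hz, assumed
CUTOFF = 40.0        # Hz; assumed to sit below the propeller blade-pass band

def suppress_vibration(accel_axis):
    """Low-pass one accelerometer axis (offline, zero-phase for illustration)."""
    b, a = butter(N=4, Wn=CUTOFF / (FS / 2.0), btype="low")
    return filtfilt(b, a, accel_axis)

# Example: hover signal plus a 170 Hz vibration component (values assumed).
t = np.arange(0.0, 2.0, 1.0 / FS)
raw = 9.81 + 0.5 * np.sin(2 * np.pi * 170.0 * t)
print(suppress_vibration(raw)[:5])
```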

Both the vision sensor and the IMU have their own local reference systems in which they measure their pose, and these need to be calibrated so that a hybrid SLAM system has a common reference frame. Each sensor must first be calibrated individually and then co-calibrated with respect to the other in order to obtain positions in the common reference frame. Multiple datasets are available for testing aerial vehicle pose estimation pipelines, including hybrid ones. The most commonly used is EuRoC, which provides raw vision and IMU data for testing algorithms and comparing them against other methods.
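To illustrate what the co-calibration provides, the sketch below maps a pose estimated in the IMU (body) frame into the camera frame using a 4x4 homogeneous camera-IMU extrinsic transform. The calibration numbers are placeholders, not values from EuRoC or any real sensor rig.

```python
# A minimal sketch of putting both sensors into a common reference frame:
# a pose estimated in the IMU (body) frame is expressed for the camera using
# the camera-IMU extrinsic calibration T_cam_imu (placeholder values).
import numpy as np

# Extrinsics from a calibration step; rotation block gives the camera axes
# w.r.t. the IMU axes, last column is the lever arm in metres (placeholders):
T_cam_imu = np.array([
    [0.0, -1.0,  0.0,  0.05],
    [0.0,  0.0, -1.0,  0.00],
    [1.0,  0.0,  0.0, -0.02],
    [0.0,  0.0,  0.0,  1.00],
])

def pose_to_matrix(R, p):
    """Build a 4x4 homogeneous pose from a 3x3 rotation and a 3-vector position."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

# World pose of the IMU body frame (placeholder values):
T_world_imu = pose_to_matrix(np.eye(3), np.array([1.0, 2.0, 1.5]))

# World pose of the camera, obtained via the calibrated extrinsics:
T_world_cam = T_world_imu @ np.linalg.inv(T_cam_imu)
print(T_world_cam)
```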

 

This article is part of a shared content initiative with the Embedded Vision Alliance.
The article is written by FRAMOS and will be published in cooperation with the Embedded Vision Alliance and further companies – please see the full initiative “Multi-sensor Fusion for Robust Device Autonomy” at the Embedded Vision Alliance site.
