Direct visual odometry and dense large-scale environment mapping from panoramic RGB-D images

Renato Martins 1 
1 Lagadic - Visual servoing in robotics, computer vision, and augmented reality
CRISAM - Inria Sophia Antipolis - Méditerranée, Inria Rennes – Bretagne Atlantique, IRISA-D5 - SIGNAUX ET IMAGES NUMÉRIQUES, ROBOTIQUE
Abstract: This thesis addresses self-localization and 3D mapping from RGB-D cameras for mobile robots and autonomous systems. We present image alignment and mapping techniques to perform camera localization (tracking), notably for large camera motions or low frame rates. Possible domains of application include virtual and augmented reality, localization of autonomous vehicles, and 3D reconstruction of environments. We propose a consistent localization and dense 3D mapping framework that takes as input a sequence of RGB-D images acquired from a mobile platform. The core of this framework explores and extends the domain of applicability of direct/dense appearance-based image registration methods. Compared with feature-based techniques, direct/dense image registration (or image alignment) techniques are more accurate and yield a more consistent dense representation of the scene. However, these techniques have a smaller domain of convergence and rely on the assumption that the camera motion is small. In the first part of the thesis, we propose two formulations to relax this assumption. First, we describe a fast pose estimation strategy that computes a rough estimate of large motions from the normal vectors of the scene surfaces and the geometric properties between the RGB-D images; this rough estimate can then be used to initialize direct registration methods for refinement. Second, we propose a direct RGB-D camera tracking method that adaptively exploits the properties of the photometric and geometric errors to improve the convergence of the image alignment. In the second part of the thesis, we propose regularization and fusion techniques to create compact and accurate representations of large-scale environments.
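The direct registration idea summarized above can be illustrated with a minimal sketch: Gauss-Newton minimization of the photometric error between two images. The toy example below estimates only a 2D image translation (the thesis setting is 6-DoF pose estimation on spherical RGB-D frames), and all function names are illustrative, not taken from the thesis.

```python
import numpy as np

def bilinear(img, x, y):
    """Sample img at float coordinates (x, y) with bilinear interpolation."""
    x0 = np.clip(np.floor(x).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, img.shape[0] - 2)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def align_translation(ref, cur, iters=50):
    """Estimate t such that cur(p + t) ~= ref(p) by Gauss-Newton on the
    photometric error -- a 2-DoF stand-in for the 6-DoF direct problem."""
    h, w = ref.shape
    ys, xs = np.mgrid[4:h - 4, 4:w - 4]          # interior pixels only
    xs = xs.ravel().astype(float)
    ys = ys.ravel().astype(float)
    t = np.zeros(2)
    for _ in range(iters):
        # photometric residual at the current estimate
        r = bilinear(cur, xs + t[0], ys + t[1]) - bilinear(ref, xs, ys)
        # image gradient of cur at the warped points (central differences)
        gx = bilinear(cur, xs + t[0] + 0.5, ys + t[1]) - bilinear(cur, xs + t[0] - 0.5, ys + t[1])
        gy = bilinear(cur, xs + t[0], ys + t[1] + 0.5) - bilinear(cur, xs + t[0], ys + t[1] - 0.5)
        J = np.stack([gx, gy], axis=1)
        dt, *_ = np.linalg.lstsq(J, -r, rcond=None)  # Gauss-Newton step
        t += dt
        if np.linalg.norm(dt) < 1e-8:
            break
    return t

# Synthetic check: a smooth image and a copy shifted by a known translation.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
ref = np.sin(0.2 * xx) + np.cos(0.15 * yy)
true_t = np.array([1.5, -0.8])
cur = np.sin(0.2 * (xx - true_t[0])) + np.cos(0.15 * (yy - true_t[1]))
t_est = align_translation(ref, cur)
```

The small-motion assumption discussed in the abstract is visible here: the linearization is only valid while the true displacement stays within the basin of attraction of the photometric cost, which is why a rough large-motion initialization (the thesis's first contribution) matters.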
The regularization is performed by segmenting the spherical frames into piecewise patches, using the photometric and geometric information simultaneously to improve the accuracy and consistency of the 3D scene reconstruction. This segmentation is also adapted to handle the non-uniform resolution of panoramic images. Finally, the regularized frames are combined to build a compact keyframe-based map composed of spherical RGB-D panoramas optimally distributed in the environment. These representations are useful for autonomous navigation and guidance tasks, as they allow constant-time access with limited storage that does not depend on the size of the environment.
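The constant-time access property of such a keyframe-based map can be sketched with a spatial hash over keyframe positions. The class below is a hypothetical toy stand-in for the thesis's spherical-panorama map (all names are illustrative): a new keyframe is spawned only when the camera has moved far enough from every existing one, so storage tracks the keyframe count rather than the raw frame count, and the nearest reference frame is found by probing a fixed number of hash cells.

```python
import numpy as np

class KeyframeMap:
    """Sketch of a keyframe-based map: positions in a spatial hash so the
    closest keyframe is retrieved in constant time. Illustrative only."""

    def __init__(self, cell=2.0):
        self.cell = cell      # hash-grid cell size (assumed metres)
        self.grid = {}        # cell index -> list of keyframe ids
        self.poses = []       # keyframe positions

    def _key(self, p):
        return tuple(np.floor(np.asarray(p, float) / self.cell).astype(int))

    def nearest(self, p):
        """Scan only the 3x3x3 cell neighbourhood around p: O(1) per query."""
        k = np.array(self._key(p))
        best, best_d = None, np.inf
        for off in np.ndindex(3, 3, 3):
            for kid in self.grid.get(tuple(k + np.array(off) - 1), []):
                d = np.linalg.norm(self.poses[kid] - np.asarray(p, float))
                if d < best_d:
                    best, best_d = kid, d
        return best

    def maybe_add(self, p, min_dist=1.0):
        """Spawn a new keyframe only when no existing one is close enough."""
        near = self.nearest(p)
        if near is not None and np.linalg.norm(self.poses[near] - np.asarray(p, float)) < min_dist:
            return near       # keep using the existing keyframe
        self.poses.append(np.asarray(p, float))
        kid = len(self.poses) - 1
        self.grid.setdefault(self._key(p), []).append(kid)
        return kid

# A 10 m straight-line traverse sampled every 25 cm ends up summarized
# by one keyframe per metre.
m = KeyframeMap(cell=2.0)
for i in range(40):
    m.maybe_add([i * 0.25, 0.0, 0.0])
```

The design choice this illustrates is the one stated in the abstract: queries touch a bounded number of cells regardless of map size, and the map stores only the optimally spaced keyframes, not every input frame.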

Cited literature: 183 references
Contributor: Eric Marchand
Submitted on: Wednesday, December 6, 2017 - 4:33:25 PM
Last modification on: Friday, August 5, 2022 - 2:54:52 PM




  • HAL Id: tel-01770256, version 1


Renato Martins. Direct visual odometry and dense large-scale environment mapping from panoramic RGB-D images. Robotics [cs.RO]. Université de recherche Paris Sciences et Lettres; Mines Paristech, 2017. English. ⟨NNT : 2017PSLEM004⟩. ⟨tel-01770256v1⟩


