Relating images and 3D models with convolutional neural networks

Abstract: The recent availability of large catalogs of 3D models opens new possibilities for 3D reasoning about photographs. This thesis investigates the use of convolutional neural networks (CNNs) for relating 3D objects to 2D images.

We first introduce two contributions that are used throughout this thesis: an automatic memory-reduction library for deep CNNs, and a study of CNN features for cross-domain matching. For the first, we develop a library built on top of Torch7 that automatically reduces the memory required to deploy a deep CNN by up to 91%. For the second, we study the effectiveness of various CNN features extracted from a pre-trained network on images from different modalities (real or synthetic). We show that despite the large cross-domain gap between rendered views and photographs, some of these features can be used for instance retrieval, with possible applications to image-based rendering.

CNNs have recently been applied to the task of object viewpoint estimation, sometimes with very different design choices. We present these approaches in a unified framework and analyse the key factors that affect performance. We propose a joint training method that combines detection and viewpoint estimation, which performs better than treating viewpoint estimation separately. We also study the impact of formulating viewpoint estimation as a discrete or a continuous task, quantify the benefits of deeper architectures, and demonstrate that using synthetic data is beneficial. With all these elements combined, we improve over previous state-of-the-art results on the Pascal3D+ dataset by approximately 5% in mean average viewpoint precision.

In the instance retrieval study, the image of an object is given and the goal is to identify, among a number of 3D models, which object it is.
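The instance-retrieval setting described above can be sketched as a nearest-neighbour search in feature space. This is a minimal illustration, not the thesis's actual pipeline: feature extraction from a pre-trained CNN is assumed to have happened already, and random vectors stand in for real features.

```python
# Hypothetical sketch of cross-domain instance retrieval: given a feature
# vector for a photograph, rank the features of rendered views of candidate
# 3D models by cosine similarity. Random vectors stand in for CNN features.
import numpy as np

def cosine_retrieve(query, gallery):
    """Return gallery indices sorted by decreasing cosine similarity."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = g @ q
    return np.argsort(-scores), scores

rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 128))               # features of 5 rendered views
query = gallery[2] + 0.1 * rng.normal(size=128)   # noisy "photograph" feature
ranking, scores = cosine_retrieve(query, gallery)
print(ranking[0])  # → 2: the view the query was derived from ranks first
```

In practice the gap between the two modalities means that not every CNN feature works this way, which is exactly what the cross-domain study in the thesis quantifies.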
We extend this work to object detection, where instead we are given a 3D model (or a set of 3D models) and are asked to locate and align the model in the image. We show that simply using CNN features is not enough for this task, and we propose to learn a transformation that brings the features of real images close to those of rendered views. We evaluate our approach both qualitatively and quantitatively on two standard datasets, the IKEA object dataset and the chair subset of the Pascal VOC 2012 dataset, and show state-of-the-art results on both.
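The feature-adaptation idea above can be illustrated with a toy least-squares fit. This is only a sketch under stated assumptions: the thesis learns its transformation with a CNN, whereas here a regularized linear map and synthetic paired features stand in.

```python
# Hypothetical sketch of feature adaptation: learn a linear map W that
# brings features of real images close to features of rendered views of
# the same objects, given paired feature matrices (assumed available).
import numpy as np

def fit_linear_map(real_feats, rendered_feats, reg=1e-3):
    """Solve W = argmin ||real @ W - rendered||^2 + reg * ||W||^2."""
    d = real_feats.shape[1]
    a = real_feats.T @ real_feats + reg * np.eye(d)
    b = real_feats.T @ rendered_feats
    return np.linalg.solve(a, b)

rng = np.random.default_rng(1)
true_w = rng.normal(size=(64, 64))
real = rng.normal(size=(200, 64))                      # "photograph" features
rendered = real @ true_w + 0.01 * rng.normal(size=(200, 64))
w = fit_linear_map(real, rendered)
err = np.linalg.norm(real @ w - rendered) / np.linalg.norm(rendered)
print(err)  # small relative residual: adapted features match rendered ones
```

Once such a map is learned, real-image features can be projected into the rendered-view feature space before matching, which is the mechanism the detection-and-alignment results rely on.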
Document type:
Theses

Cited literature: 122 references

https://pastel.archives-ouvertes.fr/tel-01762533
Contributor: Abes Star
Submitted on : Tuesday, April 10, 2018 - 11:09:08 AM
Last modification on : Thursday, May 16, 2019 - 6:11:21 PM

File

TH2017PESC1198.pdf
Version validated by the jury (STAR)

Identifiers

  • HAL Id : tel-01762533, version 1

Citation

Francisco Vitor Suzano Massa. Relating images and 3D models with convolutional neural networks. Signal and Image Processing. Université Paris-Est, 2017. English. ⟨NNT : 2017PESC1198⟩. ⟨tel-01762533⟩
