
Video inpainting and semi-supervised object removal

Abstract: The rapid growth of video content creates a massive demand for video-based editing applications. In this dissertation, we address several problems related to video post-processing, focusing on object removal in videos. We divide this task into two problems: (1) a video object segmentation step to select which objects to remove, and (2) a video inpainting step to fill in the damaged regions.

For the video segmentation problem, we design a system suitable for object removal applications with different requirements in terms of accuracy and efficiency. Our approach relies on the combination of Convolutional Neural Networks (CNNs) for segmentation and a classical mask tracking method. In particular, we adopt segmentation networks designed for images and apply them to video by performing frame-by-frame segmentation. By exploiting both offline and online training with first-frame annotation only, the networks produce highly accurate video object segmentation. In addition, we propose a mask tracking module to ensure temporal continuity and a mask linking module to ensure identity coherence across frames. Moreover, we introduce a simple way to learn a dilation layer in the mask, which helps us create masks suitable for video object removal.

For the video inpainting problem, we divide our work into two categories based on the type of background. For static backgrounds, we present a simple motion-guided pixel propagation method, and we show that object removal with a static background can be solved efficiently using this simple motion-based technique. For dynamic backgrounds, we introduce a video inpainting method based on optimizing a global patch-based energy function. To increase the speed of the algorithm, we propose a parallel extension of the 3D PatchMatch algorithm.
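The motion-guided pixel propagation idea for static backgrounds can be illustrated with a minimal sketch. This is not the thesis implementation: it assumes precomputed flows rounded to integer displacements, single-channel frames, and a hypothetical `propagate_pixels` helper, and it performs only one backward sweep (later frames filling earlier ones).

```python
import numpy as np

def propagate_pixels(frames, masks, flows):
    """Illustrative sketch of motion-guided pixel propagation.

    frames: list of (H, W) arrays; masks: list of (H, W) bool arrays,
    True where the removed object left a hole; flows[t]: (H, W, 2)
    optical flow (dy, dx) mapping frame t to frame t+1, rounded to
    integers here for simplicity.  A hole pixel in frame t is filled
    from frame t+1 at the flow-displaced location when that location
    is known there (a reasonable model only for a static background).
    """
    out = [f.astype(float).copy() for f in frames]
    holes = [m.copy() for m in masks]
    H, W = frames[0].shape
    # Backward sweep: later (already-known) frames fill earlier ones.
    for t in range(len(frames) - 2, -1, -1):
        ys, xs = np.nonzero(holes[t])
        for y, x in zip(ys, xs):
            dy, dx = np.round(flows[t][y, x]).astype(int)
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < H and 0 <= x2 < W and not holes[t + 1][y2, x2]:
                out[t][y, x] = out[t + 1][y2, x2]
                holes[t][y, x] = False
    return out, holes
```

In the actual method, forward and backward sweeps would be combined and residual holes handed to the patch-based inpainting stage; this sketch only shows the core copy-along-flow step.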
To improve accuracy, we systematically incorporate optical flow into the overall process. The result is a video inpainting method that can reconstruct moving objects and reproduce dynamic textures while running in a reasonable time.

Finally, we combine the video object segmentation and video inpainting methods into a unified system that removes undesired objects from videos. To the best of our knowledge, this is the first system of this kind. In our system, the user only needs to approximately delimit, in the first frame, the objects to be edited. This annotation process is facilitated by superpixels. These annotations are then refined and propagated through the video by the video object segmentation method. One or several objects can then be removed automatically using our video inpainting methods. The result is a flexible computational video editing tool, with numerous potential applications ranging from crowd suppression to unphysical scene correction.
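The end-to-end system described above can be sketched as a three-stage pipeline. The function and parameter names below are illustrative, not the thesis API: the three stages are passed in as callables, standing in for the segmentation network, the learned mask dilation, and the chosen inpainting method.

```python
def remove_objects(video, first_frame_annotation, segment, dilate, inpaint):
    """High-level sketch of the object-removal pipeline (names are
    hypothetical placeholders for the components described above):
      1) propagate the first-frame annotation into per-frame masks,
      2) dilate each mask so it fully covers the object to remove,
      3) inpaint the masked regions.
    """
    masks = segment(video, first_frame_annotation)  # video object segmentation
    masks = [dilate(m) for m in masks]              # mask dilation step
    return inpaint(video, masks)                    # static- or dynamic-background inpainting
```

The design choice here mirrors the dissertation's structure: segmentation and inpainting are developed as independent modules, so either stage can be swapped (e.g. the static-background propagation versus the patch-based optimizer) without changing the overall flow.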

Cited literature [316 references]

https://pastel.archives-ouvertes.fr/tel-02382805
Contributor : Abes Star
Submitted on : Wednesday, November 27, 2019 - 1:24:09 PM
Last modification on : Wednesday, October 14, 2020 - 4:14:36 AM

File

75719_LE_2019_archivage.pdf
Version validated by the jury (STAR)

Identifiers

  • HAL Id : tel-02382805, version 1

Citation

Thuc Trinh Le. Video inpainting and semi-supervised object removal. Image Processing [eess.IV]. Université Paris-Saclay, 2019. English. ⟨NNT : 2019SACLT026⟩. ⟨tel-02382805⟩

Metrics

Record views

228

Files downloads

379