Source: ISMAR
Date of publication: 11/03/2021
Authors: Ilya Makarov, Gleb Borisenko

Depth Inpainting via Vision Transformer

Abstract

Depth inpainting is a crucial task for augmented reality. In previous works, missing depth values were completed by convolutional encoder-decoder networks, which constitutes a bottleneck. Recently, however, vision transformers have shown strong results on a variety of computer vision tasks, and several have become state of the art. In this study, we present a supervised method for depth inpainting from RGB images and sparse depth maps via vision transformers. The proposed model was trained and evaluated on the NYUv2 dataset. Experiments showed that a vision transformer with a restrictive convolutional tokenization model can improve the quality of the inpainted depth map.
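The pipeline the abstract describes can be sketched in miniature: an RGB image and a sparse depth map are stacked into a multi-channel input, convolutionally tokenized into patch embeddings (a stride-P convolution over non-overlapping patches), and processed by transformer self-attention. This is a minimal NumPy illustration with random, untrained weights, not the authors' architecture; all patch sizes, dimensions, and helper names are illustrative assumptions.

```python
import numpy as np


def conv_tokenize(x, patch=8, dim=16, rng=None):
    """Convolutional tokenization: split a (C, H, W) input into
    non-overlapping patch x patch blocks and linearly project each
    to a `dim`-d token (equivalent to a stride-`patch` convolution).
    Weights are random here purely for illustration."""
    rng = rng or np.random.default_rng(0)
    c, h, w = x.shape
    proj = rng.standard_normal((c * patch * patch, dim)) * 0.02
    tokens = [
        x[:, i:i + patch, j:j + patch].ravel() @ proj
        for i in range(0, h, patch)
        for j in range(0, w, patch)
    ]
    return np.stack(tokens)  # (num_patches, dim)


def self_attention(tokens):
    """Single-head scaled dot-product self-attention over the tokens
    (query/key/value projections omitted for brevity)."""
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # rows sum to 1
    return attn @ tokens


# 4-channel input: RGB plus a sparse depth map (zeros = missing values).
rng = np.random.default_rng(1)
rgb = rng.random((3, 32, 32))
sparse_depth = np.zeros((1, 32, 32))
sparse_depth[0, ::4, ::4] = rng.random((8, 8))   # simulated sparse samples
x = np.concatenate([rgb, sparse_depth])          # (4, 32, 32)

tokens = conv_tokenize(x)      # (16, 16): 4x4 grid of 8x8 patches
out = self_attention(tokens)   # contextualized tokens, same shape
print(tokens.shape, out.shape)
```

In the full model, a decoder would map the contextualized tokens back to a dense depth map; here the sketch stops at the token level to show only the tokenization-plus-attention idea.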
