Semantic See-Through: Using CNNs For Users Segmentation In Video See-Through Augmented Virtuality

AWARDS NOMINEES

Semantic See-Through addresses the problem of visualizing the user in a virtual reality experience. The system relies on a video see-through approach, turning the experience into augmented virtuality. We use deep learning techniques to integrate the user's own body and other participants into a head-mounted video see-through augmented virtuality scenario. It has previously been shown that seeing users' bodies in such simulations can improve the sense of both self-presence and social presence in the virtual environment, as well as user performance. We propose a convolutional neural network for real-time semantic segmentation of users' bodies in stereoscopic RGB video streams captured from the user's perspective. The segmented video feeds are then composited into the visual rendering of the virtual environment. This work demonstrates the feasibility of using such neural networks to merge users' bodies into an augmented virtuality simulation.
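The compositing step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the segmentation CNN has already produced a per-pixel body-probability map for one eye's camera frame, and the function name and threshold value are illustrative. In a stereoscopic setup, the same operation would be applied independently to each eye's video stream.

```python
import numpy as np

def composite_see_through(camera_rgb, virtual_rgb, person_prob, threshold=0.5):
    """Composite segmented camera pixels over a rendered virtual frame.

    camera_rgb, virtual_rgb: (H, W, 3) float arrays in [0, 1]
    person_prob: (H, W) per-pixel body probability, e.g. from a
        semantic segmentation CNN (hypothetical upstream model)
    """
    # Binarize the probability map into a body mask, broadcast over RGB.
    mask = (person_prob > threshold).astype(camera_rgb.dtype)[..., None]
    # Body pixels come from the camera feed; the rest from the virtual render.
    return mask * camera_rgb + (1.0 - mask) * virtual_rgb

# Toy 2x2 example: the left column is "body", the right column background.
camera = np.ones((2, 2, 3))    # white camera pixels (stand-in for video feed)
virtual = np.zeros((2, 2, 3))  # black virtual render
prob = np.array([[0.9, 0.1],
                 [0.8, 0.2]])
out = composite_see_through(camera, virtual, prob)
```

A soft blend (using `person_prob` directly as an alpha channel instead of thresholding) would reduce hard silhouette edges at the cost of some transparency at the body boundary.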
