Modern virtual reality HMDs integrate front-facing cameras to enable interaction with the outside world, and inside-out tracking now provides a 360-degree, full-body-tracked virtual reality game environment in which users can roam a room-scale area freely without trailing cables. Compared with the progress of the devices themselves, however, the physical space available for operating them remains limited, and this is the biggest obstacle to freely exploring large virtual reality environments. Redirected Walking, which lets users explore a large virtual space by exploiting the limits of human perception, has been studied as a solution to this problem, but unresolved issues remain and virtual reality carries a persistent safety risk. In this paper, we propose an enhanced redirected walking technique that uses a ToF camera to control the variables that steer users in another direction before they can no longer move forward in the real world, and that visualizes real spatial information inside the virtual reality game environment so that users can cope with walls, obstacles, and sudden hazards.
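As a minimal sketch of the kind of manipulation redirected walking relies on, the snippet below applies a rotation gain to the user's real head rotation so the virtual camera turns further (or less) than the body does, gradually steering the walking path away from a wall. The gain bounds and function names here are illustrative assumptions, loosely based on commonly reported perceptual detection thresholds; they are not values from this paper.

```python
# Illustrative rotation-gain bounds (assumed, not from this paper):
# gains inside this range are often considered hard for users to notice.
MIN_GAIN, MAX_GAIN = 0.67, 1.24

def redirected_rotation(real_delta_deg: float, desired_gain: float) -> float:
    """Map a real head rotation (degrees) to a virtual rotation,
    clamping the gain so the manipulation stays within the assumed
    perceptual thresholds."""
    gain = max(MIN_GAIN, min(MAX_GAIN, desired_gain))
    return real_delta_deg * gain

# Example: the user turns 10 degrees in the real room; an amplified
# (clamped) gain makes the virtual camera turn further, so walking
# "straight" in VR curves the real-world path away from an obstacle.
virtual_delta = redirected_rotation(10.0, 1.5)
```

In a full system the desired gain would be chosen continuously from the ToF camera's map of nearby walls and obstacles; this sketch only shows the per-frame gain application.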
3D depth perception has played an important role in robotics, and many sensing methods have been proposed for it. The single-photon avalanche diode (SPAD) is a promising photodetector for 3D sensing because of its sensitivity and accuracy. We have been researching how to apply a SPAD chip in our fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD depth map using an RGB stereo camera. Our current SPAD ToF sensor has a resolution of only 64 x 32, well below higher-resolution depth sensors such as the Kinect V2 and Cube-Eye. Although this may be a weak point of our system, we exploit the gap through a shift in approach: a convolutional neural network (CNN) is designed to upsample the low-resolution depth map, using the higher-resolution depth data as labels. The upsampled CNN depth and the stereo-camera depth are then fused using a semi-global matching (SGM) algorithm. We propose a simplified fusion method designed for the embedded system.
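To make the upsampling step concrete, the sketch below stands in for the CNN with a deliberately simple NumPy pipeline: nearest-neighbour upsampling of the 64 x 32 SPAD depth map followed by a 3x3 box-filter refinement. The function name, upsampling factor, and filter are assumptions for illustration only; the actual system would use the learned CNN described above.

```python
import numpy as np

def upsample_depth(depth: np.ndarray, factor: int = 2) -> np.ndarray:
    """Naive stand-in for the learned upsampler: replicate each depth
    pixel `factor` times in both axes, then smooth with a 3x3 box
    filter (edge-padded) to soften the blocky replication artifacts."""
    up = np.kron(depth, np.ones((factor, factor)))
    pad = np.pad(up, 1, mode="edge")
    h, w = up.shape
    out = np.zeros_like(up)
    for dy in range(3):          # accumulate the 3x3 neighbourhood
        for dx in range(3):
            out += pad[dy:dy + h, dx:dx + w]
    return out / 9.0

# 64 x 32 SPAD depth map (stored here as 32 rows x 64 columns).
low = np.random.rand(32, 64)
high = upsample_depth(low, factor=2)   # 64 x 128 depth map
```

A real fusion stage would then combine `high` with the stereo-camera depth, e.g. by confidence weighting inside the SGM cost volume; that step is omitted here.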