This paper presents a robotic system that provides telepresence to visually impaired users by combining real-time haptic rendering with multi-modal interaction. A virtual-proxy-based haptic rendering process using an RGB-D sensor is developed and integrated into a unified control-and-feedback framework for the telepresence robot. We discuss the challenging problem of conveying environmental perception to a user with visual impairments and present our multi-modal interaction solution. We also describe the experimental design and protocols, and report results from human-subject studies with participants with and without visual impairments. A discussion of the system's performance and our future goals concludes the paper.
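To make the virtual-proxy idea concrete, the following C++ sketch shows one possible update step in which the proxy is constrained to the surface sensed by the depth camera while the haptic interface point (HIP) may penetrate it. This is only an illustrative sketch, not the paper's implementation: the point-cloud representation, the brute-force nearest-neighbor search, and the stiffness value are all assumptions.

```cpp
// Minimal sketch (assumed, not the authors' code) of a point-cloud virtual-proxy step:
// the proxy stays on the sensed surface while the HIP may penetrate it, and the
// rendered force pulls the HIP back toward the proxy.
#include <cmath>
#include <limits>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(const Vec3& a, double s)      { return {a.x * s, a.y * s, a.z * s}; }
static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static double norm(const Vec3& a)               { return std::sqrt(dot(a, a)); }

// One sensed surface sample from the RGB-D stream: position plus estimated normal.
struct SurfacePoint { Vec3 p; Vec3 n; };

// Returns the penalty force on the haptic device for the current HIP position.
// 'proxy' is updated in place: it snaps onto the surface when contact occurs.
Vec3 proxyForce(const std::vector<SurfacePoint>& cloud,
                const Vec3& hip, Vec3& proxy,
                double stiffness = 500.0 /* N/m, illustrative value */) {
    if (cloud.empty()) { proxy = hip; return {0, 0, 0}; }

    // Brute-force nearest neighbor (a k-d tree would be used in practice).
    const SurfacePoint* nearest = &cloud.front();
    double best = std::numeric_limits<double>::max();
    for (const auto& s : cloud) {
        double d = norm(sub(hip, s.p));
        if (d < best) { best = d; nearest = &s; }
    }

    // Signed distance of the HIP along the outward normal: negative means penetration.
    double depth = dot(sub(hip, nearest->p), nearest->n);
    if (depth >= 0.0) {          // Free space: proxy tracks the HIP, no force.
        proxy = hip;
        return {0, 0, 0};
    }
    // In contact: constrain the proxy to the local surface plane and render a spring force.
    proxy = sub(hip, scale(nearest->n, depth));    // project the HIP back onto the surface
    return scale(nearest->n, -stiffness * depth);  // pushes the HIP out of the surface
}
```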
We present six-degree-of-freedom (6DoF) haptic rendering algorithms using translational (PDt) and generalized (PDg) penetration depth. Our rendering algorithm can handle any type of object/object haptic interaction using a penalty-based response and makes no assumptions about the underlying geometry or topology. Moreover, it can effectively handle multiple simultaneous contacts. Our penetration depth algorithms for PDt and PDg are based on a contact-space projection technique combined with iterative local optimization on the contact space. We circumvent the local-minima problem inherent to local optimization by exploiting the motion coherence present in haptic simulation. Our experimental results show that our methods can produce high-fidelity force feedback for general polygonal models consisting of tens of thousands of triangles at near-haptic rates, and they have been successfully integrated with an off-the-shelf 6DoF haptic device. We also discuss the benefits of using different formulations of penetration depth in the context of 6DoF haptics.
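The sketch below illustrates the general shape of a penalty-based 6DoF response driven by per-contact penetration depth: each contact contributes a spring force along its contact normal and a torque about the held object's center of mass, and the contributions from multiple contacts are summed into one wrench. This is a minimal sketch under assumed data structures and stiffness values, not the paper's rendering algorithm, and it omits damping and the PDt/PDg query itself.

```cpp
// Minimal sketch (assumed) of a multi-contact penalty-based 6DoF wrench.
#include <vector>

struct Vec3 { double x, y, z; };
static Vec3 add(const Vec3& a, const Vec3& b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(const Vec3& a, const Vec3& b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(const Vec3& a, double s)      { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

// One contact reported by a penetration-depth query: a point on the held object,
// an outward contact normal, and the (translational) penetration depth.
struct Contact { Vec3 point; Vec3 normal; double depth; };

struct Wrench { Vec3 force; Vec3 torque; };

// Sum penalty forces and torques over all contacts on the haptically held object.
// 'kp' is an illustrative contact stiffness; damping terms are omitted for brevity.
Wrench penaltyWrench(const std::vector<Contact>& contacts,
                     const Vec3& centerOfMass,
                     double kp = 800.0 /* N/m, illustrative value */) {
    Wrench w{{0, 0, 0}, {0, 0, 0}};
    for (const auto& c : contacts) {
        Vec3 f = scale(c.normal, kp * c.depth);  // spring force along the contact normal
        Vec3 r = sub(c.point, centerOfMass);     // lever arm from the center of mass
        w.force  = add(w.force, f);
        w.torque = add(w.torque, cross(r, f));   // torque contribution r x f
    }
    return w;
}
```

In this formulation the choice of penetration-depth measure (PDt versus PDg) changes the depth and direction fed into each contact's spring term, which is why the two formulations can yield noticeably different force and torque feedback.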