Navigating the world and interfacing with “analog” signs, items, and surroundings is challenging for people with low vision. While technologies like VoiceOver, display zoom, and other accessibility features support navigation of the digital landscape, translating these features into the real world with low friction remains under-explored. Apple Vision Pro and the spatial computing paradigm present a transformative mechanism to address these gaps. We propose Spatial Continuity: the use of spatial computing, particularly camera-based augmentation and AI-based annotation of the user’s surroundings, to improve accessibility for individuals with low vision.
The concept of Spatial Continuity aims to create virtual representations of real-world objects, making them more accessible to low-vision users. As a proof-of-concept use case, we envision users comfortably reading printed books while wearing Vision Pro. The system detects objects of interest in the user’s field of view, creates virtual representations of these objects, and provides further processing to enhance accessibility, such as text enlargement, image explanation, or text-to-speech functionality. These virtual representations can be dynamic, reacting to changes in their physical counterparts, or static, remaining fixed regardless of such changes.
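The detect → represent → enhance loop described above could be sketched as follows. This is a minimal illustrative model, not the project's implementation; all names (`VirtualRepresentation`, `enhance`, `on_frame`) are hypothetical, and a real system would use the platform's vision and rendering frameworks.

```python
from dataclasses import dataclass

@dataclass
class VirtualRepresentation:
    """Virtual counterpart of a detected real-world object (hypothetical model)."""
    object_id: str
    text: str          # recognized text content, e.g. from OCR
    dynamic: bool      # True: updates when the physical counterpart changes

def enhance(rep: VirtualRepresentation, scale: float = 2.0) -> str:
    # Placeholder for accessibility processing such as text enlargement;
    # a real system would re-render the text at `scale` times its size.
    return f"[x{scale}] {rep.text}"

def on_frame(detected: dict[str, str], reps: dict[str, VirtualRepresentation]) -> None:
    # Dynamic representations react to changes in their physical counterparts;
    # static ones keep the content captured at creation time.
    for object_id, new_text in detected.items():
        rep = reps.get(object_id)
        if rep is not None and rep.dynamic:
            rep.text = new_text

# Example: a book page tracked dynamically, a cover captured statically.
page = VirtualRepresentation("book-page-1", "Chapter 1", dynamic=True)
cover = VirtualRepresentation("book-cover", "My Book", dynamic=False)
reps = {r.object_id: r for r in (page, cover)}
on_frame({"book-page-1": "Chapter 2", "book-cover": "Scuffed Cover"}, reps)
```

After the frame update, only the dynamic page representation reflects the new text, while the static cover keeps its original content.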
We are looking for talented and motivated contributors to assist us in developing the Spatial Continuity system and in conducting a first usability study.
<aside> 📧 Our Choose Your Research Project page provides a great overview of how to engage with your supervisors. The following steps will help you familiarize yourself with the project and demonstrate your skillset before you reach out.
</aside>