Depth of field, both a focus and a depth cue, is not implemented in today's wearable virtual reality displays, even though evidence suggests it could significantly improve the immersive experience. In this project I investigated different depth of field rendering algorithms that accurately reproduce depth of field (retinal blur) in scenes displayed in virtual reality.
Depth of field, also known as the effective focus range, is the distance between the nearest and farthest objects in a scene that appear to be in focus. Although a lens can focus precisely at only one distance, sharpness decreases gradually around the focal plane, and within a certain range the sensor (the retina, in the case of the human visual system) cannot register these small changes. Depth of field is a property of optics in the physical world, and the human visual system has evolved to use this information to better understand scenes. Current virtual reality implementations do not render depth of field and have effectively removed an entire source of information: entire scenes appear in focus, which is perceptually inaccurate. An example of a scene rendered without and with depth of field can be found in Figure 1.
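The gradual falloff in sharpness around the focal plane can be quantified with the thin-lens circle-of-confusion model. The sketch below is illustrative and not taken from the report; the function name and the example parameter values are assumptions.

```python
def circle_of_confusion(obj_dist, focus_dist, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter.

    All arguments and the return value are in the same length unit
    (e.g. meters). The diameter is 0 exactly at the focus distance
    and grows as the object moves away from the focal plane.
    """
    return (aperture
            * abs(obj_dist - focus_dist) / obj_dist
            * focal_len / (focus_dist - focal_len))

# Hypothetical example: 50 mm lens at f/2 (25 mm aperture), focused at 2 m.
in_focus = circle_of_confusion(2.0, 2.0, 0.05, 0.025)   # 0.0: perfectly sharp
behind = circle_of_confusion(4.0, 2.0, 0.05, 0.025)     # small nonzero blur
```

An object whose circle of confusion stays below the sensor's resolving limit (or the retina's, for the eye) still appears sharp, which is exactly why the focus range is a range rather than a single plane.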
I explored different depth of field rendering algorithms for VR applications and observed their accuracy when compared to a reference image, as well as their computation time. It has been shown that adding depth of field rendering to stereoscopic displays improves depth perception, reduces image fusion time, and reduces visual fatigue. At the same time, the overall latency of the system must be kept to a minimum to reduce sickness and maximize immersion. I compared three depth of field algorithms to a reference image using two different distance metrics, along with their per-pixel computation time. An interesting direction for future work would be to investigate which of the artifacts created by image-space methods have the largest impact on our visual system. Perhaps certain artifacts are insignificant to our perception of depth or to fusing images, and could be allowed to appear in the result, yielding a less complex and perhaps faster algorithm.
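The report does not specify which two distance metrics were used to compare each algorithm's output against the reference image. As an illustration only, two common per-pixel choices are mean squared error and peak signal-to-noise ratio; the sketch below assumes images flattened to lists of floats in [0, 1].

```python
import math

def mse(img_a, img_b):
    """Mean squared error between two equal-size images (flat float lists)."""
    assert len(img_a) == len(img_b), "images must have the same size"
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)

def psnr(img_a, img_b, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(img_a, img_b)
    return float("inf") if err == 0 else 10.0 * math.log10(peak ** 2 / err)

# Hypothetical 4-pixel images: identical images give MSE 0 / infinite PSNR.
reference = [0.2, 0.4, 0.6, 0.8]
rendered = [0.2, 0.5, 0.6, 0.8]
error = mse(reference, rendered)
quality = psnr(reference, rendered)
```

Metrics like these capture raw per-pixel deviation but weigh all artifacts equally, which is precisely why the proposed future work, asking which artifacts actually matter perceptually, is worth pursuing.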