Rarely are VR and AR associated with anything but gaming and escapes to foreign worlds. The technology allows people to slip away into other realities far from their physical location, and this merger of worlds has brought VR into the limelight. However, VR has capabilities far beyond recreational amusement: it can help with real-life challenges.

VR is proving particularly helpful with what it does best: vision. Most people don't think twice about their ability to read, perceive depth or even use VR headsets. The truth is, low vision impacts millions of people. The World Health Organization estimates that 246 million people have low vision, which ranges from blurriness and tunnel vision to blind spots. Even more troubling, the WHO estimates that around 90 percent of people with low vision have lower incomes.

Vision-aid technology is typically very expensive, especially considering that those who need it tend to earn less. Frank Werblin, a professor of neuroscience at the University of California, Berkeley, is working toward a cheaper and easier solution by taking advantage of VR technology.

“There’s a huge price gap between a magnifying glass, which you could buy for $25 or $50, and what these people could really use, which is a wearable portable device, which is many thousands of dollars,” he said.

Werblin is developing IrisVision, an app that runs on the Samsung Gear VR headset. The technology responds to the user's movements and magnifies the user's line of sight. The app is intended as a real-world solution, and it can even be used for looking at screens.
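The basic idea of magnifying the line of sight can be illustrated with a toy sketch: crop the center of a camera frame and upscale it to fill the display. This is an illustrative simplification only, not IrisVision's actual implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def magnify_center(frame, zoom=2):
    """Toy 'magnify the line of sight': crop the central 1/zoom region
    of the frame and upscale it back to full size by pixel repetition.
    (Hypothetical sketch, not IrisVision's code.)"""
    h, w = frame.shape
    ch, cw = h // zoom, w // zoom            # size of the central crop
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = frame[top:top + ch, left:left + cw]
    # Nearest-neighbor upscaling: repeat each pixel zoom times per axis
    return np.repeat(np.repeat(crop, zoom, axis=0), zoom, axis=1)

# A tiny 4x4 "frame"; the 2x2 center is blown up to fill the output
frame = np.arange(16.0).reshape(4, 4)
out = magnify_center(frame, zoom=2)
```

A real headset app would do this per eye, every frame, steered by head tracking, but the crop-and-upscale core is the same idea.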

IrisVision can be purchased online for $2,500 and is available experimentally at specific clinics in the US. This may seem expensive at first, but it is a complete package including the software, the headset and the phone that powers it. If it still seems expensive, consider that typical wearable vision aids cost up to $15,000.

Away with Lazy Eye

Werblin's app isn't the only low-vision solution on the market. James Blaha, the founder and CEO of Vivid Vision, struggled his entire life with amblyopia, otherwise known as lazy eye. Blaha got the idea for the technology while researching his condition and toying with an early-stage Oculus Rift, which he found could strengthen his weaker eye.

Amblyopia occurs when one eye works less effectively, so the brain suppresses its input and relies primarily on the good eye. One good eye alone is less effective than two, even when one of the two is partially impaired. The condition impairs depth perception, making everyday transit and life a dangerous affair. Adults have seen little help with the condition, since the medical world considers it virtually incurable beyond the age of eight.

Blaha and his technology are fighting to disprove this assumption. By increasing the headset brightness on the side of the weaker eye, he was able to force his brain to stop ignoring it. Using this protocol, he learned to see in 3D and now has 90 percent of normal depth perception. Blaha's company is now linked to over 50 clinics across the US and the world.
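The brightness trick the article describes can be sketched in a few lines: render the weak eye's frame brighter (and optionally the strong eye's dimmer) so the brain is nudged into using both. This is a toy illustration under assumed gain values; Vivid Vision's actual clinical protocol is not public, and the names here are hypothetical.

```python
def rebalance(left, right, weak="left", gain=1.6, suppress=0.6):
    """Toy per-eye luminance rebalance for a stereo headset.
    Brightens the weak eye's frame and dims the strong eye's, clamping
    pixel values to 1.0. (Illustrative only; not Vivid Vision's protocol.)"""
    def scale(frame, g):
        return [min(p * g, 1.0) for p in frame]
    if weak == "left":
        return scale(left, gain), scale(right, suppress)
    return scale(left, suppress), scale(right, gain)

# Identical input frames come out unbalanced in favor of the weak eye
l, r = rebalance([0.5, 0.2], [0.5, 0.2], weak="left")
```

In practice such a rebalance would be tuned per patient and tapered over sessions, but the core mechanism is just an asymmetric luminance gain.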

“We’re sort of outside the context of VR, particularly for the patients who use it, and definitely for the doctors who are not playing any of the VR games, typically,” Blaha said of Vivid Vision.

Escaping the Darkness

Helping impaired vision seems like a feasible task, but what about helping those who can barely see at all?

OxSight, a startup from the University of Oxford, is making apps for the legally blind. The company has developed SmartSpecs, a portable headset designed to help these users be self-sufficient and confident in any environment.

“We have the potential,” says cofounder Oxford researcher Stephen Hicks, “to make a dent in this feeling of isolation and helplessness that many visually impaired individuals experience.”

SmartSpecs work by highlighting the edges of an object or face against a darkened background, making the images more pronounced. Real-time algorithms pick out the important parts of an image, such as faces, expressions and text, and the technology even works in the dark, helping users navigate through and around obstacles.

“I think the real trick is coming up with ways of delivering relevant information to the user without bombarding them with irrelevant info,” said Hicks.

OxSight has yet to set a price for SmartSpecs but plans to launch in mid-2017.
