By Maureen A. Duffy, M.S., CVRT

Dr. Robert Massof head shot

Robert Massof, Ph.D., received his B.A. in Experimental Psychology at Hamline University and his Ph.D. in Physiological Optics from Indiana University, where he studied the psychophysics of color vision, including color vision at very low light levels. [Editor’s note: Psychophysics is the scientific study of the relationship between a stimulus and the sensations and perceptions that are evoked by that stimulus.]

During his postdoctoral fellowship, he trained in clinical vision science at the Johns Hopkins Wilmer Eye Institute and in theoretical psychophysics at the Johns Hopkins Applied Physics Laboratory.

After his postdoctoral fellowship, he was appointed to the ophthalmology faculty at the Johns Hopkins University School of Medicine and named Head of the Laboratory of Physiological Optics at the Wilmer Eye Institute. At Johns Hopkins, Dr. Massof was named Professor of Ophthalmology in 1991 and Professor of Neuroscience in 1994.

It was during this time at Hopkins that he also began to study how patients who were visually impaired or had low vision could improve their abilities to function more independently in everyday life. He began working with Lions Clubs in Maryland, Delaware, and the District of Columbia to help form the Multiple District 22 Lions Vision Research Foundation and raise an endowment to create and support the Lions Vision Research and Rehabilitation Center at the Johns Hopkins Wilmer Eye Institute. In 1991, Dr. Massof was named the Director.

As he worked to develop the Lions Vision Research and Rehabilitation Center, Dr. Massof also entered a collaborative project with the National Aeronautics and Space Administration (NASA), the Veterans Administration Rehabilitation Research and Development Service, and private companies and investors to develop, test, and commercialize the Low Vision Enhancement System (LVES). The LVES (pronounced “Elvis”) was the first head-mounted video system designed to enhance and compensate for low vision. “LVES does not fix vision or restore vision,” he explained during the development process. “Instead, it alters images to make them easier for people to see with the vision they still have.”

As a result of the LVES project, it became clear to Dr. Massof that it was necessary to emphasize comprehensive rehabilitation – and not only eye care – as an essential component of low vision services. Dr. Massof’s contributions to low vision rehabilitation include

the Visionize logo

Now, continuing his tradition of innovative low vision research, Dr. Massof, in collaboration with Dr. Frank Werblin, a former professor of neurobiology and visual neuroscience at the University of California at Berkeley, has embarked on the development of the second-generation version of the LVES, called Visionize.

First, a Brief History of the First Low Vision Enhancement System (LVES)

The first Low Vision Enhancement System (LVES), a video headset/visor and portable vision enhancement device, was introduced to the commercial marketplace in 1994 after almost a decade of development by a research team that included NASA, the Johns Hopkins Wilmer Eye Institute, the Department of Veterans Affairs, and the Visionics Corporation, which manufactured the system.

It consisted of a head-mounted video display worn like goggles, a set of three miniature video cameras, and a belt-mounted battery pack. Two of the cameras provided a regular three-dimensional view, while a third camera was used for seeing facial features, fine details of objects, close-up work, such as reading, and distant objects.

Controls built into the battery pack allowed the wearer to select and control the cameras, adjust the contrast, and magnify images from 1.5 to 10 times. The cameras fed the images to a computer that corrected for the particular vision condition of the user and then sent the images to the video display in the goggles.
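For readers who want a concrete sense of what such real-time processing involves, the two wearer-adjustable operations described above (magnification and contrast) can be sketched in a few lines of Python using the OpenCV library. This is only an illustrative sketch, not the actual LVES software; the camera index, default magnification factor, and contrast gain are assumptions chosen for the example.

    import cv2

    def enhance_frame(frame, magnification=2.0, contrast_gain=1.5):
        # Illustrative only: magnify the central region and boost contrast,
        # roughly mirroring the LVES wearer controls described above
        # (1.5x to 10x magnification plus contrast adjustment).
        h, w = frame.shape[:2]

        # Magnify by cropping the center and scaling it back to full size,
        # so the output keeps the display's resolution.
        crop_w, crop_h = int(w / magnification), int(h / magnification)
        x0, y0 = (w - crop_w) // 2, (h - crop_h) // 2
        crop = frame[y0:y0 + crop_h, x0:x0 + crop_w]
        magnified = cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)

        # Simple linear contrast stretch centered on mid-gray.
        return cv2.convertScaleAbs(magnified, alpha=contrast_gain,
                                   beta=128 * (1 - contrast_gain))

    # Read frames from a camera, enhance them, and show them on a display.
    cap = cv2.VideoCapture(0)  # camera index 0 is an assumption
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("enhanced view", enhance_frame(frame))
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break
    cap.release()
    cv2.destroyAllWindows()

The actual LVES went further, correcting each image for the user's particular vision condition, which this sketch does not attempt to model.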

Dr. Massof with the LVES

LVES development began in 1985, when Dr. Massof met with NASA officials to determine if there were emerging aerospace technologies that could be adapted to enhance the vision of patients with low vision. NASA and Wilmer then began developing a laboratory-based prototype system for real-time image processing.

In 1992, the Department of Veterans Affairs conducted clinical trials of the prototype LVES system. Final design modifications were incorporated into the LVES design based on the results of these clinical trials. The Visionics Corporation was founded in July 1992 to manufacture and market the LVES, including future improvements to the technology developed by Johns Hopkins. Although the LVES was manufactured and sold by Visionics on an exclusive basis worldwide, it was commercially available for a very short time.

The LVES is still available, however, at the Johns Hopkins Wilmer Eye Institute, where it is described as follows:

Wilmer faculty, in collaboration with NASA and the Department of Veterans’ Affairs, have developed a powerful, portable vision enhancement device called LVES (Low Vision Enhancement System), a video visor tailored to each patient. The LVES has autofocus and zoom capability to magnify and clarify images. It can highlight low-contrast images such as faces, permitting wearers to recognize people more easily. The device displays images in front of the eyes at a size equivalent to watching a 60-inch television screen four feet away. Already available by prescription, the LVES continues to be refined to improve its vision-enhancing capabilities.

Fast-Forward to the Present: Visionize

Over the past several months, I had been hearing about a head-mounted vision enhancement system that sounded very much like a second-generation LVES:

The Reader’s Digest Partners for Sight Foundation

In June 2015, the Reader’s Digest Partners for Sight Foundation announced the winners of their Special Grant Initiative, which honored “three innovative projects designed to help the blind and visually impaired navigate their communities and self-assess vision problems.” One of the awardees was Johns Hopkins University and Dr. Massof, for the “Next Generation of Low Vision Enhancement” project:

The goal of the project is to produce a binocular-style head-mounted display system that enhances a user’s low vision with programmable real-time image-processing that has been optimized for the individual patient. It is intended for full-time wear and would be the first versatile low-vision device that uses eye-tracking to enhance vision in the region of interest. It has a large binocular field of view so it can be used while walking. It automatically controls illumination, thereby eliminating glare. And with an auto-focus feature, it can be used at any viewing distance.

The Los Angeles Times

In July, the Los Angeles Times published a “Cutting Edge” technology story entitled “Visionize uses virtual reality headsets to help people with low vision,” featuring the work of Dr. Frank Werblin and Visionize, “a piece of software designed to help [people with] low vision that uses the kind of virtual reality headsets popular in video gaming”:

The idea is relatively simple: Patients don the headset with a smartphone in it. The smartphone’s camera takes real-time images from the patients’ surroundings and magnifies them in front of their eyes. They can target the magnification and adjust its strength according to their needs.

“Traditional treatments use magnification, but they magnify everything,” Werblin said. Most people with low vision experience blurriness only in the center of their eye, he said, and don’t need the whole-eye magnification that existing treatments offer. “If you wear a telescope, then you lose peripheral vision,” he said. “So our challenge was to find a way for people to see the world in context, but to create a kind of telescope in the middle of that view.”

Using head-mounted displays to treat low vision isn’t new. In the 1990s, Robert Massof, a professor of ophthalmology and neuroscience at Johns Hopkins University School of Medicine, partnered with NASA and Polaroid Corp. to turn a head-mounted device that NASA engineers had originally intended for space use into a device that could help low-vision patients see. Named the Low-Vision Enhancement System, or LVES (pronounced “Elvis”), the device magnified objects as much as 10 times and made images brighter.

Werblin wanted to develop a tool accessible for the wider low-vision community. Having spent many years working on retinal chips that can be implanted in eyeballs to help the blind see, he sought a solution that was less invasive and more affordable. Seeking advice from Massof, now 67 and still at Johns Hopkins, he began developing hardware.

He quickly realized that he and the virtual reality community were operating in parallel. The Samsungs and Oculuses of the world were designing the tools for gaming, but the increasingly powerful processors they were squeezing into smartphones and the improvement of virtual reality headset comfort were exactly what Werblin needed. He saw an opportunity to repurpose the technology.

The (California) Point Reyes Light

And in August, I read this article about Dr. Werblin in the Point Reyes Light, a weekly newspaper serving Marin County in California:

The headset is part ski goggles, part baseball catcher’s mask, and it’s coated with sleek white plastic. The wearer views the world through the medium of a mobile app, activated via a Samsung smart phone clipped lengthwise across binocular-style lenses. Once the phone’s built-in camera captures images, an internal processor projects a virtual-reality replica of sights magnified and clarified.

“A year ago, this technology didn’t exist,” said Mr. Werblin, a neurobiology professor for 40 years who specializes in retina research. “In a few years, instead of ski goggles, it’ll be reading glasses.” Mr. Werblin recruits Visionize employees from among the best [ophthalmologists] and engineers; his main collaborator, Bob Massof, is the founder and director of the Lions Vision Research and Rehabilitation Center at Johns Hopkins University.

The goal, Mr. Werblin said, is to shrink the technology to a convenient size and offer it at an affordable price. Though Mr. Werblin has to wait for FDA-approval for the app software before he can market the headset, he plans to sell it for a couple thousand dollars—much less than the competition, eSight, which sells a similar headset for $15,000. And by utilizing smartphone app technology, Mr. Werblin foresees being able to tweak and update the headset’s system automatically, from anywhere across the globe.

Clearly, most roads pointed to Johns Hopkins and the Wilmer Eye Institute; thus, I decided to go to the source and speak with Dr. Massof about this apparent second-generation LVES.

My Conversation with Dr. Robert Massof

In September, I journeyed to Johns Hopkins in Baltimore, Maryland to spend the day with a gracious and welcoming Dr. Massof and James Deremeik, CLVT, RT, the Education/Rehabilitation Program Manager, Low Vision and Vision Rehabilitation Service at the Wilmer Eye Institute. Following is a condensed and edited version of our conversation.

Maureen Duffy: To begin, can you tell me how your association with Dr. Frank Werblin came to be?

Robert Massof: Frank Werblin was a superstar in neurophysiology and vision. He’s probably one of the most famous vision scientists of our generation, studying the structure of the retina, how the cells talk to each other in the retina, and how they process information in the photoreceptors before sending it back to the brain. He developed a complete model of the retina – how it functions and how retinal cells interact – and was able to build an electronic retina. He was an electrical engineer by training. He actually created a set of circuits that behave exactly like the retina does. He was able to use that to process visual images in real time.

The Navy became very interested in what he was doing. He was at UC Berkeley and they asked if he could build an image processing system for their SEALs’ night vision system. And he said, ‘Yes, I think I can do that.’ But since the Navy doesn’t fund universities, they told him he’d have to form a company. So Frank went out and formed a company called Imagize that was based around building these image processing circuits, which then expanded way beyond the original concept. They developed all kinds of expertise in real-time processing of images primarily for military customers.

By the time I started working with Frank on this project, their company was about 15 years old. He continued being a professor at Berkeley, along with his position as the president and founder of Imagize. And you may have heard of the Second Sight program, the retinal chip.

MD: Indeed I have. I’ve written about it frequently for VisionAware.

RM: Well, they called Frank in to help design that. They needed a lot of processing on that chip, so Frank has a couple of patents covering the retinal chip he developed. A lot of the success of Second Sight can be attributed to Frank; as a result, he developed a good reputation in the vision field.

So the Foundation Fighting Blindness, which has a strong interest in this technology, invited Frank to speak at a symposium they sponsored for scientists and also for consumers with retinal diseases. An audience member asked Frank, ‘Do you have to do it in the retina? Do you have to do it in the chip?’ He said, ‘No. You can do it to any image.’

And when he said that, a light bulb went off in his head. He realized that he could pre-process an image before displaying it to someone with low vision and thus compensate for the visual impairment.

The Visionize prototype

A member of the Board of Directors of the Foundation then asked, ‘Well, do you think you could build that?’ Frank said, ‘Sure. That’s what I do,’ and the board member responded, ‘OK. If you start a company, I’ll fund it.’ Frank said, ‘I already have a company,’ and the board member said, ‘I think there should be a separate company that just does this project.’ And Frank said, ‘We can do that. I can set up a company that is a subsidiary to Imagize – called Visionize – for the purpose of funding this project.’

So Frank decided to start looking around to determine what the current state of the art was. When he discovered our work on LVES, he gave me a call, saying ‘I think we need to talk.’ I said fine and we agreed to meet. When we got together, he explained what he wanted to do. He had read all of my work and said, ‘This sounds like the LVES and I think we can do it now with current technology. We can pull this off. Do you want to do it?’

Frank said he had a source of funding that could help develop the technology. I told him that if we have the technology, I am pretty confident we could raise the money to fund research and development – but the technology has to be there. Frank agreed. He developed several prototypes and brought them to us at Hopkins.

MD: When did this occur?

RM: About 2½ years ago. And Frank totally understood. He was the first one who grasped the difficulty in taking it from concept to reality. He understood the limitations. He got right down to cosmetics (appearance), delivery system, costs, and trained personnel. I sat there thinking that this guy knows what he’s talking about. He has the technology. We’ve got the clinical capability. If we can combine the two, we can have a pretty good product.

I told him I was interested, but I wasn’t going to do another technology project. I have done two in my career and got it out of my system, but that’s not what I like to do for a living. I had no desire to do that again. But … I ended up doing it again. (laughs) Frank said he would deal with the technology if we dealt with the science and the patients. I said, ‘We’ve got a deal’ and we started our collaboration.

MD: What were some of the challenges you faced initially?

RM: Frank said we needed some people who really understood head-mounted displays, so I got introduced to the people at Sensics [a developer and manufacturer of high-performance virtual reality products]. Sensics has been creating the hardware, as well as a lot of the programming, to our specifications. Frank is the one who is making it all happen. We’ve had a very rapid development of the technology.

To develop eye-tracking capabilities, we eventually started using one of Sensics’ head-mounted displays. They’re off-the-shelf head-mounted displays that the military uses to train tank drivers and other personnel. We built the eye-tracking into that, which it didn’t have previously.

And then suddenly, Oculus came out with a virtual reality system that we thought we might be able to use. So we started working with them. And you can get an Oculus for $300 and then for another $15,000 you can have the eye-tracking capability. There is a lot of interesting technology in the pipeline and a lot of rapid change happening. We are not wedded to any particular type of technology because we need as large a field of view and as high a resolution as we can get – along with eye-tracking and head-tracking capabilities.

MD: How did the smartphone enter the technology mix?

RM: The next thing that happened is that Oculus entered into a deal with Samsung to build a headset using a Samsung smartphone. What that means is that it’s now possible to go to Best Buy, buy the Oculus virtual reality headset and the Samsung phone, plug them in, and have a complete system. The good thing about the Samsung phone is that it uses Android software, which Sensics also uses. So Sensics developed all of the processing we needed to run it on a Samsung phone.

MD: What do you want to accomplish next with Visionize for people with low vision? And what are the challenges you’re facing?

Dr. Massof wearing Visionize

RM: Right now, we can’t correct for refractive error and it’s not possible to wear corrective eyeglasses inside the device. We need to develop the ability to insert an eyeglass prescription, which we’re working on now.

Another problem is that when the device runs in virtual reality mode, it overheats within one hour and then shuts itself off. So we have to solve that problem. Also, these phones overheat if they are running for too long – and any phone will shut itself off when it reaches certain temperatures. But when it shuts itself off, you can’t start it up again until it cools down.

That’s the thing we are worried about. What if it shuts itself off when someone is using it and depending on it? We’ve been running tests to determine how long it will last before the battery dies, which right now is about three hours. But you can plug it in using a USB charger and keep going that way.

We can’t put eye tracking into the current system. It is way too expensive. Still, we can do a lot at this stage, more than what’s being done just with current technology, without the eye tracking.

MD: What are the next steps you’ll have to take to bring Visionize to the market?

RM: We really need market partners – people who understand the unique vision market. We need people who understand the concept of channel partners [a company that partners with a manufacturer or producer to market and sell the manufacturer’s products, services, or technologies]. We want to integrate it into existing programs that work with eye clinics, and we need marketing partners who know how to do that. Frank is exploring that right now. We also need to grow Visionize and turn it into an operating company.

We’ve already presented our work to our scientific and medical colleagues, but I think things are developing more rapidly than we expected, simply because of the Samsung technology. We didn’t think we would be this close to having a consumer product, but since Samsung has already created the product, now the question is how much can we truly sell it for? We are not AT&T, so we can’t go and buy Samsung phones for heavily discounted prices.

If you buy an over-the-counter phone, it’s $800 and the headset is another $250 – so you’ve paid more than $1,000 just for the two main components. You have to put custom software on it. You have to disable a lot of features. You can’t use it as a phone. You don’t want this thing hooking up to every Wi-Fi network it finds. And you don’t want it to ring.

The downside of the current version is that it looks like a ski mask. But there are other people coming into the process who are scaling it down and their goal is to make it look like eyeglasses for the consumer market.

Right now, we are focusing primarily on how we process the images, as opposed to what the platform is going to be. Our main requirement is a wide horizontal field of view. What we are using right now is 70 degrees, and we think we can get it up to 110 degrees. The vertical field is fixed right now at 50 degrees. As the camera technology evolves, we’ll be able to do better.

MD: This has been a fascinating glimpse into the scientific, creative, research, and marketing processes. I thank you very much, Dr. Massof, for spending this time with me and I know our readers will appreciate this insight into your lifelong work in vision and low vision research. I hope to publish periodic updates of your progress as you move closer to your goal.

Watch Dr. Frank Werblin Explain How Visionize Works

At the 2015 Association for Research in Vision and Ophthalmology (ARVO) conference, Dr. Werblin was interviewed about Visionize. Of particular interest are his explanations of the small “region of interest” that can be magnified (also called “the bubble”) to help with seeing facial features, fine details of objects, close-up work, such as reading, and even distant objects.

You can view Dr. Werblin’s video at YouTube.
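For readers who would like a concrete picture of that “bubble,” the following is a minimal sketch, in Python with OpenCV, of magnifying a region of interest while leaving the surrounding view untouched. It is illustrative only and is not Visionize’s actual software; the bubble radius, zoom factor, input file name, and target point are assumptions made for the example.

    import cv2
    import numpy as np

    def bubble_magnify(frame, center, radius=120, zoom=3.0):
        # Illustrative only: enlarge a circular region of interest ("the
        # bubble") in place, so the viewer sees a magnified detail inside an
        # otherwise unmagnified scene that preserves peripheral context.
        h, w = frame.shape[:2]

        # Keep the bubble fully inside the frame.
        x = min(max(center[0], radius), w - radius)
        y = min(max(center[1], radius), h - radius)

        # Take a smaller source patch around the target point...
        src_r = max(int(radius / zoom), 1)
        patch = frame[y - src_r:y + src_r, x - src_r:x + src_r]

        # ...and scale it up to fill the bubble.
        zoomed = cv2.resize(patch, (2 * radius, 2 * radius),
                            interpolation=cv2.INTER_LINEAR)

        # Paste the zoomed patch back through a circular mask: a "telescope
        # in the middle" of an otherwise unchanged view.
        out = frame.copy()
        mask = np.zeros((2 * radius, 2 * radius), dtype=np.uint8)
        cv2.circle(mask, (radius, radius), radius, 255, thickness=-1)
        roi = out[y - radius:y + radius, x - radius:x + radius]
        roi[mask > 0] = zoomed[mask > 0]
        return out

    # Example: magnify a bubble at the center of a single image.
    image = cv2.imread("scene.jpg")  # hypothetical input file
    result = bubble_magnify(image, center=(image.shape[1] // 2,
                                           image.shape[0] // 2))
    cv2.imwrite("scene_with_bubble.jpg", result)

In the device itself, the wearer targets the magnification and adjusts its strength, and the planned eye-tracking capability is intended to steer that region of interest toward wherever the wearer is looking.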