Building the Bionic Eye…With Car Tech?
Over the years, cyberpunk tales and sci-fi series have featured characters with cybernetic vision—most recently Star Trek Discovery’s Lieutenant Keyla Detmer and her ocular implants. In the real world, restoring “natural” vision is still a complex puzzle, though researchers at UC Santa Barbara are developing a smart prosthesis that provides cues to the visually impaired, much like a computer vision system talks to a self-driving car.
Today, over 10 million people worldwide are living with profound visual impairment, many due to retinal degeneration diseases. Ahead of this week’s Augmented Humans International Conference, we spoke with Dr. Michael Beyeler, Assistant Professor in Computer Science and Psychological & Brain Sciences at UCSB, who is forging ahead with synthetic sight trials at his Bionic Vision Lab and will be presenting a paper at the conference.
Dr. Beyeler, we spoke in 2019, and you’re about to present an update at Augmented Humans 2021. Your new paper, Deep Learning-Based Scene Simplification for Bionic Vision, argues that artificial vision, rather than sight restoration, is the way forward, right?[MB] One appeal of bionic eye technologies is that these devices are being designed for people who have been blinded by degenerative disease of the eye, as well as by injury or trauma to the visual cortex. In other words, they have been able to see for the better part of their lives, but due to an accident or a hereditary disease they have lost their vision, and perhaps want it back.
This is why researchers are talking about the goal of “restoring” vision. However, as we learn more about how the brain distributes its computations across different brain areas, it becomes clear that in order to truly restore “natural” vision, we would need to develop technologies that can interact with tens or hundreds of thousands of individual neurons across different brain areas. This might be possible one day, but at present seems out of reach. In fact, current retinal implants have been shown to provide only “finger-counting” levels of vision. People can differentiate light from dark backgrounds and see motion, but their vision is blurry and often hard to interpret.
Which is where your research comes in?[MB] Right. Instead of focusing on one day restoring “natural” vision (which is a noble but perhaps close-to-impossible task), we might be better off thinking about how to create “practical” and “useful” artificial vision now. We have a real opportunity here to tap into the existing neural circuitry of the blind and augment their visual senses, much like Google Glass or the Microsoft HoloLens. In this new work that we are presenting, we are taking the first step. We can make things appear brighter the closer they get or use computer vision to highlight important objects in the scene.
In the future, these visual augmentations could be combined with GPS to give directions, warn users of impending dangers in their immediate surroundings, or even extend the range of “visible” light with the use of an infrared sensor (think bionic night-time vision). Once the quality of the generated artificial vision reaches a certain threshold, there are a lot of exciting avenues to pursue.
Does this take your work further—or away from—the field of neuroengineering and into a combination of software development, hardware, biomimicry, and AR?[MB] It is taking us further into a cross-disciplinary endeavor that will probably require skills from neuroscience, engineering, and computer science. But to be honest, this is exactly where I think the field should go. Why not take advantage of all the recent breakthroughs in machine learning and computer vision? We have the opportunity to build a smart prosthesis that provides real-time augmentations, much like people currently think about HMD-based AR.
In your new paper, you describe deploying state-of-the-art computer vision algorithms for image processing and building out computational models to simulate prosthetic vision.[MB] This work is really a first step in the direction of developing a smart prosthesis. The problem boils down to the fact that the vision provided by current (and near-future) devices is very limited. We might soon have devices with thousands of electrodes, but some of my previous research has shown that more electrodes do not necessarily mean more “pixels.” When we turn on a single electrode in the implant, patients do not report seeing pixels. Rather, they see blurry shapes, such as streaks, blobs, and wedges.
Doesn’t sound optimal.[MB] No, it’s fair to say that patients are not going to see in 4K any time soon. It’s going to be more like playing Pong on the Atari while your reading glasses are fogging up.
So, what’s the solution?[MB] What we can do is simplify the scene for the patient using computer vision. Rather than worrying about how to paint a hyper-realistic picture of the world in the mind of the patient, we want to provide visual cues that support real-world tasks.
Give us an example of these visual cues and the tech used. [MB] Sure. If you are trying to find your way around town, you need to know where important landmarks are in the scene, and whether there are any obstacles in your immediate vicinity. So we experimented with state-of-the-art computer vision techniques to highlight visually salient information (using DeepGaze II), to segment objects of interest from background clutter (using detectron2), and to blend out objects that are far away from the observer (using monodepth2).
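To give a rough feel for the kind of scene simplification Dr. Beyeler describes, here is a minimal sketch in Python. It assumes the object masks and depth map have already been produced upstream by models in the spirit of detectron2 and monodepth2; the function name simplify_scene, the max_depth cutoff, and the proximity-to-brightness rule are illustrative placeholders, not the lab’s actual pipeline.

```python
import numpy as np

def simplify_scene(image, object_masks, depth_map, max_depth=5.0):
    """Keep only nearby objects of interest and scale brightness by proximity.

    image        : (H, W) grayscale frame, values in [0, 1]
    object_masks : list of (H, W) boolean masks, e.g. from an instance
                   segmentation model such as detectron2
    depth_map    : (H, W) per-pixel depth in meters, e.g. from a monocular
                   depth estimator such as monodepth2
    max_depth    : objects farther than this are blended out entirely
    """
    # Combine all object masks into a single foreground mask.
    foreground = np.zeros_like(image, dtype=bool)
    for mask in object_masks:
        foreground |= mask

    # Closer pixels get a higher weight; beyond max_depth the weight drops to zero.
    proximity = np.clip(1.0 - depth_map / max_depth, 0.0, 1.0)

    # Suppress background clutter and make nearby objects appear brighter.
    return np.where(foreground, image * proximity, 0.0)
```

In a setup like this, the simplified, object-only frame (rather than the raw camera image) would be what gets encoded into electrode stimulation patterns.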
Importantly, we combined these strategies with a psychophysically validated computational model of the retina to generate realistic predictions of simulated prosthetic vision. This is important because I think we need to move away from thinking in “pixels” and consider how the neural code influences the quality of the generated visual experience. There’s a quick scientific presentation of this work on our YouTube channel.
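To make the “blobs rather than pixels” point concrete, here is a toy stand-in for a simulated-prosthetic-vision renderer that reduces each electrode to an isotropic Gaussian phosphene. The grid layout, spacing, and blur width are made-up values, and render_phosphenes is a hypothetical helper; the psychophysically validated model used in the paper produces far richer percepts, including the elongated, axon-shaped distortions patients actually report.

```python
import numpy as np

def render_phosphenes(amplitudes, grid_shape=(6, 10), img_size=200, sigma=8.0):
    """Render a crude simulated-prosthetic-vision frame.

    amplitudes : (rows * cols,) stimulation amplitude per electrode in [0, 1]
    grid_shape : electrode grid layout, e.g. 6 x 10 like an Argus II array
    img_size   : output image is img_size x img_size pixels
    sigma      : Gaussian blur width of each phosphene, in pixels
    """
    rows, cols = grid_shape
    ys = np.linspace(0.2, 0.8, rows) * img_size   # electrode centers (vertical)
    xs = np.linspace(0.2, 0.8, cols) * img_size   # electrode centers (horizontal)
    yy, xx = np.mgrid[0:img_size, 0:img_size]

    frame = np.zeros((img_size, img_size))
    for i, cy in enumerate(ys):
        for j, cx in enumerate(xs):
            amp = amplitudes[i * cols + j]
            # Each active electrode evokes a blurry blob, not a crisp pixel.
            frame += amp * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
    return np.clip(frame, 0.0, 1.0)
```

Replacing this toy renderer with a biologically grounded model of the retina is precisely the step that separates thinking in pixels from predicting what a patient would actually perceive; Beyeler’s group maintains the open-source pulse2percept Python package for that purpose.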
How did you undertake the trials for SPV (Simulated Prosthetic Vision), and were these done with sighted, or visually impaired, volunteers? What were you hoping to achieve in this instance?[MB] The ultimate goal is, post-COVID-19, to test this on bionic eye users as soon as possible. For now, this work should be understood as a proof of concept—a first step towards our vision of a smart prosthesis. In fact, experimenting with different stimulation strategies and implant designs is very expensive for the companies and tedious for the participants, so in our lab we are building towards “virtual patients.” These are sighted subjects who are viewing simulated prosthetic vision through a virtual reality headset. This allows sighted subjects to “see” through the eyes of a retinal prosthesis patient, taking into account their head and (in future work) eye movements as they explore an immersive virtual environment.
It sounds almost as if you’re building computer vision of the type seen in automation/self-driving cars or vehicle-to-vehicle systems, but for humans. Is that an oversimplification?[MB] Not at all, that’s exactly where we’re going. We are very much inspired by the computer vision literature, and are looking for ways to adapt these state-of-the-art algorithms for the purpose of providing meaningful artificial vision. These V2V solutions are becoming more portable year after year. Think about the power of a smartphone—there is a lot of computation that could be packed into a small wearable device to provide real-time solutions at the edge. An alternative is to provide a cloud-based solution, like what Google is doing with their Cloud Vision API. Of course, the service would have to be fast and secure. We actually have experts, most notably Professor Rich Wolski and Professor Chandra Krintz here at UCSB, who have been working on IoT solutions for agriculture and other application domains.
Who funded your research, and to what end?[MB] We are truly fortunate to have received continued funding by the National Eye Institute at NIH. The R00 grant by which this research was made possible is invaluable especially in times like these, where COVID restrictions further complicate a research agenda that was already ambitious to begin with. It’s been an unpredictable year, but being able to rely on federal support assures me that I can pay my students, and that there is a way forward for this important research.
At the time of writing, we’re still under the occupation of COVID-19. Are you riding out lockdown in sunny Santa Barbara, or elsewhere? And have you adapted to remote teaching/supervision and research well?[MB] Remote teaching and supervision has been a challenge for sure, but I feel worse for the students who are missing out on a great campus experience. It’s weird that next month is March again (or still?), but we’re all trying to make the best of the situation as is. It is nice to work from home, though, so I am wondering if I’ll be getting what’s now known as “graduation goggles” as soon as we’re expected back on campus.
Finally, this year’s Augmented Humans conference is, like everything else, taking place online, but has been hosted in previous years by institutions in Japan, Korea, and across Europe. Once travel restrictions are lifted, and you’ve got your shot in the arm (and possibly a COVID passport in your hand), where will you go and why?[MB] First stop is Switzerland. I really miss family and friends, and I really want my son (who was born in Seattle) to see the other half of his heritage. We’ve been talking about a trip for what seems like ages, but that’s all we can do for now. Talk. After that, it’s all fair game. I can’t wait!
Dr. Michael Beyeler will co-present his research at the virtual Augmented Humans 2021 conference on Feb. 23.