Challenge Accepted
How SBU scientists are collaborating to enhance accessibility
By Rob Emproto
To someone not savvy in technology, navigating its complexities can be a daunting task. However, those with the know-how often face the inverse challenge: how to leverage their technical expertise to solve basic, real-world problems. That is exactly the challenge researchers in Stony Brook’s Department of Computer Science — both individually and together — are taking on.
For Aruna Balasubramanian, an associate professor in the department, it’s all part of the “problem-solving” philosophy that comes with being a scientist. Guided by that philosophy, Balasubramanian and two of her colleagues are each working on different projects that overlap with and support one another’s work — all with a single goal: to improve accessibility for people with impairments.
“We didn’t get into this with a grandiose plan to save the world,” said Balasubramanian. “We are all scientists first. Our goal is to say, ‘here’s a problem, can we use our expertise to figure it out and explain it?’ But sometimes we’re interacting with the real world where real things are happening. When the two worlds collide, you want to know if you can solve the real-world problems using computing.”

A few of the Computer Science students who have worked on EyeCanDo.
One such question arose when one of Balasubramanian’s students had an idea for improving smartphone functionality for people who are blind or have low vision. She immediately approached I.V. Ramakrishnan, professor of computer science and associate dean for strategic initiatives, who was exploring accessibility at the time.
“Our department is very collaborative,” said Balasubramanian. “If I have an idea and I know somebody here knows something about it, I can just walk up to them and say, ‘Hey, what do you think of this?’ Professor Ramakrishnan was working with Lighthouse Guild, an organization in New York City that supports people who have low vision and blindness. He suggested I talk to them and see whether this initiative was something that could be useful.”
Balasubramanian talked to some of the users there and found that they were extremely adept at using their smartphones despite their sight limitations. However, because they used their phones so much, they always worried about battery life.
“Most of them actually carry a battery pack with them because they don’t want to lose power,” she said. To address this, Balasubramanian and her student researchers developed a technology called DarkReader — a screen reader that bridges the gap between users’ perception of power consumption and reality — so these users wouldn’t need to constantly switch on their phone or screen to interact with it.
She also soon realized there were other challenges to address.
“Even without the battery power issue, we noticed then that another difficulty they had was interacting with their phones,” she said. “They needed to use both hands. So, for example, if you’re holding a kid, you can’t do this. Another thing we didn’t consider was that they have to take the phone out of their pocket to interact with it. On city streets they were worried about theft. We thought, ‘can we do something about this?’”
That’s how the idea for AccessWear was born.
AccessWear is a technology aimed at improving the accessibility of smartphone applications so they are more usable for people with motor disabilities and vision impairments. In 2021, Balasubramanian received a Google Research Scholar Award to help advance this research. As a potential solution, Balasubramanian and her research team considered leveraging contactless gestures instead of a keyboard to operate the phone. The solution, however, requires more than writing code: it needs gesture-recognition capabilities, a camera to detect gaze, and software to process that input and interact with other devices.
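The article doesn’t include AccessWear’s code, but the pipeline it describes, reading a wearable’s motion sensor and translating a gesture into a phone command, can be conveyed with a short sketch. Everything below is a hypothetical illustration: the sampling window, threshold and command name are assumptions, not AccessWear’s actual design.

```python
from collections import deque

# Hypothetical illustration of the contactless-gesture idea behind
# AccessWear: a smartwatch streams accelerometer samples, and a simple
# detector maps a wrist flick to a screen-reader navigation command.
# The window size, threshold and command name are all assumptions.

WINDOW = 20              # samples to keep (~0.4 s at an assumed 50 Hz)
FLICK_THRESHOLD = 15.0   # peak acceleration (m/s^2) that counts as a flick

class FlickDetector:
    def __init__(self):
        self.samples = deque(maxlen=WINDOW)

    def feed(self, ax: float, ay: float, az: float) -> str | None:
        """Consume one accelerometer sample; return a command on a flick."""
        magnitude = (ax**2 + ay**2 + az**2) ** 0.5
        self.samples.append(magnitude)
        if len(self.samples) == WINDOW and max(self.samples) > FLICK_THRESHOLD:
            self.samples.clear()      # debounce: start fresh after firing
            return "NEXT_ITEM"        # e.g., advance the screen-reader focus
        return None
```

In a deployed system, a detector along these lines would presumably run on the watch and relay recognized commands to the phone’s screen reader, so the phone could stay in the user’s pocket.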
“If we could enable them to interact by moving their head or arms, then they can leave their phone in their pocket,” she said. “This could work very well with blind users because they usually use a technology called TalkBack to interact with phones.”
TalkBack uses spoken words, vibration and other feedback to let users know what is happening on the screen, enabling them to better interact with their devices. As it turned out, two of Balasubramanian’s department colleagues — fellow associate professors Fusheng Wang and Xiaojun Bi — had already done similar work on EyeCanDo, an application that helps patients with ALS (amyotrophic lateral sclerosis) communicate in their daily lives. ALS progressively limits patients’ mobility and ability to speak.
“As the motor and speech capabilities of a patient evolve at different stages, assistive communication is essential,” said Wang. “However, the capabilities available to an ALS patient differ from person to person and keep evolving as the disease progresses. While there is a large space of assistive technologies, they come with major limitations such as a high price tag, limited availability, or bulky, cumbersome designs.”
EyeCanDo began as a project by a group of students in Wang’s lab at the 2018 Mount Sinai Health Hackathon for rare diseases. Motivated by this effort, Wang decided to continue the project, partnering with Bi’s lab, which brought expertise in human-computer interaction research, and in particular text input for mobile devices. In 2021, Wang and Bi were awarded a two-year, $200,000 grant by the ALS Association to develop the technology. In May 2022, they received a second award, for $777,000, from the Department of Defense’s Congressionally Directed Medical Research Programs to further advance their research.
“From the hackathon project we learned a lot about ALS and how it impacts people’s lives, including patients, caregivers and family members,” said Wang. “Later I joined several social media groups where patients and caregivers exchanged tips on daily living and shared inspiring stories of battling the disease with hope. We also had the opportunity to engage with caregivers and patients at the Stony Brook ALS Clinic with the help of our clinicians. Our increased awareness of the struggles faced by patients and their families has reinforced our commitment to helping them in any way we can.”

Graduate student Rui Liu demonstrates EyeCanDo to a practice patient.
Patients look at an iPad and use eye gaze to control the EyeCanDo app for a wide variety of needs, ranging from food and personal hygiene to web browsing, social media and entertainment. The app is free to download from the Apple App Store.
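The article doesn’t describe how EyeCanDo turns gaze into a selection, but dwell-based activation, where a target fires once the gaze rests on it long enough, is a common pattern in gaze-controlled interfaces. Here is a minimal sketch of that pattern; the dwell time and button layout are invented for illustration and are not EyeCanDo’s code.

```python
import time

# Illustrative dwell-based selection, a common pattern in gaze-controlled
# interfaces like the one described: fixate on a button long enough and
# it activates. The dwell time and button representation are assumptions.

DWELL_SECONDS = 1.0  # how long gaze must rest on a target to select it

class DwellSelector:
    def __init__(self, buttons):
        self.buttons = buttons    # list of (name, x0, y0, x1, y1) rectangles
        self.current = None       # button the gaze is currently resting on
        self.since = 0.0          # when the gaze entered it

    def update(self, gx: float, gy: float, now: float | None = None):
        """Feed one gaze point (screen coords); return a button name on dwell."""
        now = time.monotonic() if now is None else now
        hit = next((name for name, x0, y0, x1, y1 in self.buttons
                    if x0 <= gx <= x1 and y0 <= gy <= y1), None)
        if hit != self.current:   # gaze moved to a new target (or to none)
            self.current, self.since = hit, now
            return None
        if hit is not None and now - self.since >= DWELL_SECONDS:
            self.since = now      # restart the timer so it doesn't fire every frame
            return hit
        return None
```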
That experience with EyeCanDo led Bi into Balasubramanian’s AccessWear project.
“She had an idea of using contactless gestures and interaction for AccessWear,” said Bi. “She’s working on input technologies and I’m working on interaction techniques on mobile devices. Since this work is complementary, she invited me to join the project.”
Bi’s research in human-computer interaction focuses on input modeling and artificial intelligence (AI)-powered input technologies. His work includes probability modeling for touch, gaze and voice-based input; model-based intelligent text and command input technologies; and accessible input technologies for people with disabilities such as blindness and motor impairments. For AccessWear, he contributes expertise on the interaction techniques and runs studies to evaluate them.
Shubham Jain, an assistant professor in the Department of Computer Science, and Ramakrishnan provide expertise in mobile sensing and accessibility, respectively. Balasubramanian and her students conduct the research on the mobile technology and write the code to build and test the system.
“I meet with Professor Balasubramanian every week to discuss and critique each other’s ideas,” said Bi. “Many things emerge naturally from group-based discussion like this.”
“We are all focused on computer science,” added Balasubramanian. “It’s very hard to do interdisciplinary work, but we’re interdisciplinary even within computer science. I am used to working with machines, which involve some nondeterminism, but when you work at the intersection of humans and machines, the number of ways that results can vary starts to approach the infinite. This kind of work is very collaborative at Stony Brook, and you’re able to talk to people with different expertise. It might take a while to speak the same language, but once we do, it helps us do great things.”
And though not every project in her lab has a societal angle to it, Balasubramanian said, it’s a good motivator when that happens.
“It’s something beyond you and your research,” she said. “When I see that, it means that I’m doing something that is also having a societal impact, and it makes sense.”
Looking Ahead:
Improving Input
Xiaojun Bi’s current research focuses on creating technology to help users input information to computers.
“Case-based instruction, which involves scenarios that resemble real-world examples, is one of the areas I’m working in,” said Bi. “I’m also working on introducing AI (artificial intelligence) into input technologies.”
Bi is also conducting research aimed at resolving ambiguity challenges to better help people with impairments. “To address ambiguity, I’m working on combining multiple modalities such as gaze, voice and finger touch to help people input text and make edits and corrections more easily,” he said.
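A toy example suggests how such multimodal combination can work in probabilistic terms: each modality assigns a likelihood to every candidate interpretation, and those are fused with a language-model prior to pick the most probable one. The candidates and numbers below are made up; this is a sketch of the general idea, not Bi’s actual models.

```python
# Toy sketch of probabilistic fusion across input modalities, in the
# spirit of the approach Bi describes: each candidate gets a likelihood
# from touch and from gaze, combined with a language-model prior.
# The candidate words and probabilities are invented for illustration.

def fuse(prior, touch_likelihood, gaze_likelihood):
    """Posterior over candidates: P(c) * P(touch|c) * P(gaze|c), normalized."""
    scores = {c: prior[c] * touch_likelihood[c] * gaze_likelihood[c]
              for c in prior}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

# An ambiguous touch lands between two keys; gaze slightly favors one
# word, and the language model also expects "point" over "paint" here.
prior            = {"point": 0.6, "paint": 0.4}   # language-model prior
touch_likelihood = {"point": 0.5, "paint": 0.5}   # touch alone is ambiguous
gaze_likelihood  = {"point": 0.7, "paint": 0.3}   # gaze tips the balance

posterior = fuse(prior, touch_likelihood, gaze_likelihood)
print(posterior)   # {'point': ~0.78, 'paint': ~0.22} -> "point" wins
```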
Bi’s earlier work on keyboard correction and completion algorithms, bimanual gesture typing and personalized language models for text entry has been integrated into the stock Google keyboard used by Android users worldwide.
Looking Ahead:
Connectivity for All
Besides her work on AccessWear, Aruna Balasubramanian is working on improving the performance of internet applications in parts of the world that lack good internet access and where most people can’t afford the latest and most powerful phones. Her interest began with her PhD research, in which she worked with vehicles equipped with computing capabilities that deliver cached information to designated shops in villages.
“Think of it as dumping part of the internet in a shop and every time you pass by you keep updating so that anybody in there can easily access information,” she said. “The internet now is like electricity; there’s power to information. I feel like it should be a basic commodity. If it’s not, it just blocks out some people and causes more inequality.”
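The system she describes is a store-and-forward design: a vehicle and a shop each hold a cache of content, and whenever they meet, each side keeps the newer copy of every item. A minimal sketch, with invented item names and version numbers, conveys the merge step.

```python
# Illustrative store-and-forward sync in the spirit of the village-shop
# system Balasubramanian describes: a passing vehicle and a shop kiosk
# each hold a cache, and the newer version of each item wins.
# Item names, versions and payloads are invented for this sketch.

def sync(vehicle_cache: dict, shop_cache: dict) -> None:
    """Merge two caches in place, keeping the newer version of each item."""
    for store_a, store_b in ((vehicle_cache, shop_cache),
                             (shop_cache, vehicle_cache)):
        for item, (version, data) in store_a.items():
            if item not in store_b or store_b[item][0] < version:
                store_b[item] = (version, data)

vehicle = {"news/today": (3, "..."), "crop-prices": (7, "...")}
shop    = {"news/today": (2, "..."), "health-tips": (1, "...")}
sync(vehicle, shop)
# Both caches now hold news/today v3, crop-prices v7 and health-tips v1.
```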
Looking Ahead:
AI and the Opioid Crisis
In addition to his work on EyeCanDo, Fusheng Wang is currently leading research on the use of big data and AI to address the opioid epidemic. Opioid overdose claims more than 130 lives daily and has become a national crisis, with approximately 2 million individuals struggling with opioid use disorder.
“To combat this issue, we are developing advanced machine-learning-based predictive models capable of identifying early signs of opioid risk in patients,” said Wang. “A significant part of our effort involves ensuring that these tools are interpretable to both clinicians and patients, thereby building trust in the adoption of clinical decision support systems. We hope to see an interactive and interpretable opioid risk prediction dashboard piloted in Stony Brook Hospital in the next few years.”
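The team’s models aren’t shown in the article, but one common way to make a risk predictor interpretable is to use a model whose per-feature weights can be audited directly, such as logistic regression. The sketch below uses synthetic data and invented feature names purely to illustrate that idea; it is not the group’s actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch of an interpretable risk model of the kind Wang
# describes: a logistic regression whose coefficients can be read as the
# contribution of each factor. Features, data and weights are synthetic.

FEATURES = ["prior_opioid_rx", "days_supplied", "benzo_co_rx", "age"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(FEATURES)))   # synthetic patient features
true_w = np.array([1.2, 0.8, 0.6, -0.1])    # synthetic "ground truth" effects
y = (X @ true_w + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Interpretability: each coefficient is the change in log-odds of risk
# per unit of that feature, something a clinician can inspect and audit.
for name, coef in zip(FEATURES, model.coef_[0]):
    print(f"{name:18s} {coef:+.2f}")
```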