The mobility of individuals with visual impairments is a significant challenge as cities become increasingly crowded. Technology is developing rapidly, offering novel high-tech smart white canes to aid the mobility of individuals with partial or total blindness. However, such devices are often unaffordable due to their high prices. Moreover, they are impractical for real-world use because they depend on third-party technologies and services, which require an Internet connection for data transfer and for data processing on cloud services. In this paper, we propose a novel methodology that aids the transportation of blind individuals and is integrated entirely on the chip, thus avoiding the need for an Internet connection. Our methodology embeds three intelligent deep learning models on a single smart mobile device: one model localizes the position of the line number on the bus approaching the individual, a second model recognizes the bus number, and a third, a text-to-speech model, synthesizes speech to notify the individual of the number of the approaching bus in a pleasant, human-like manner. Our work is one step closer to completely independent embedded intelligent models that simplify the transportation of visually impaired persons using cutting-edge AI tools.
Intelligent embedded systems, Deep learning, Disabilities, Assistive technology, Computer vision, Image processing, Speech technologies, Mobile technologies
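To make the three-stage on-device pipeline concrete, the following is a minimal sketch in Python. All class names, method signatures, and return values are illustrative placeholders, not the actual models described in the paper; a real deployment would load quantized on-device networks in place of these stubs.

```python
# Sketch of the three-stage on-device pipeline: localize the bus line
# number, recognize it, and speak it aloud -- all without network access.
# Every class here is a hypothetical stand-in for an embedded model.
from dataclasses import dataclass
from typing import Optional


@dataclass
class BoundingBox:
    x: int
    y: int
    w: int
    h: int


class BusNumberLocalizer:
    """Stage 1: detect the frame region containing the bus line number."""

    def locate(self, frame) -> Optional[BoundingBox]:
        # Placeholder: an on-device detection model would run here.
        return BoundingBox(120, 40, 64, 32)


class BusNumberRecognizer:
    """Stage 2: read the digits inside the localized region."""

    def recognize(self, frame, box: BoundingBox) -> str:
        # Placeholder: an on-device recognition model would run here.
        return "42"


class TextToSpeech:
    """Stage 3: synthesize a spoken notification, also on-device."""

    def say(self, text: str) -> None:
        # Placeholder: an embedded TTS model would synthesize audio here.
        print(f"[TTS] {text}")


def notify_approaching_bus(frame) -> None:
    """Run the full pipeline on one camera frame; no Internet needed."""
    localizer = BusNumberLocalizer()
    recognizer = BusNumberRecognizer()
    tts = TextToSpeech()

    box = localizer.locate(frame)
    if box is not None:
        number = recognizer.recognize(frame, box)
        tts.say(f"Bus number {number} is approaching.")


notify_approaching_bus(frame=None)  # stand-in for a real camera frame
```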