Blind-Vision

Assisting blind people through image captioning in a smartphone app. The application uses two neural networks: a CNN-based image feature extractor and an LSTM-based sentence generator. The user submits an image to the app, which feeds it to the CNN feature extractor. The extracted features are then passed to the LSTM network, which generates a sentence describing the image; the sentence is read aloud to the user. A sketch of this pipeline is shown below.
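
A minimal sketch of the described encoder-decoder pipeline, assuming PyTorch and a pretrained ResNet-50 as the CNN feature extractor. The class names, layer sizes, and vocabulary size here are illustrative assumptions, not the app's actual architecture or weights.

```python
# Hypothetical sketch of the CNN -> LSTM captioning pipeline (not the
# app's actual implementation). Assumes PyTorch and torchvision.
import torch
import torch.nn as nn
import torchvision.models as models


class EncoderCNN(nn.Module):
    """Extracts a fixed-length feature vector from an image (CNN stage)."""

    def __init__(self, embed_size: int):
        super().__init__()
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        # Drop the classification head; keep the convolutional backbone.
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])
        self.fc = nn.Linear(resnet.fc.in_features, embed_size)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():  # backbone frozen; only self.fc would be trained
            features = self.backbone(images).flatten(1)
        return self.fc(features)


class DecoderLSTM(nn.Module):
    """Generates a caption token-by-token from the image features (LSTM stage)."""

    def __init__(self, embed_size: int, hidden_size: int, vocab_size: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)

    def generate(self, features: torch.Tensor, max_len: int = 20) -> list[int]:
        """Greedy decoding: the image features act as the first LSTM input."""
        inputs = features.unsqueeze(1)  # (batch=1, seq=1, embed_size)
        states, token_ids = None, []
        for _ in range(max_len):
            hidden, states = self.lstm(inputs, states)
            logits = self.fc(hidden.squeeze(1))
            predicted = logits.argmax(dim=1)
            token_ids.append(predicted.item())
            inputs = self.embed(predicted).unsqueeze(1)
        return token_ids


if __name__ == "__main__":
    encoder = EncoderCNN(embed_size=256)
    decoder = DecoderLSTM(embed_size=256, hidden_size=512, vocab_size=5000)
    image = torch.randn(1, 3, 224, 224)  # stand-in for a user-submitted photo
    caption_ids = decoder.generate(encoder(image))
    print(caption_ids)  # token ids; the app would map these to words, then TTS
```

In a real deployment, the decoded token ids would be mapped back to words with the training vocabulary and handed to a text-to-speech engine so the caption can be read aloud.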
