1 Affiliation : Information Technology, Bharati Vidyapeeth's College of Engg, New Delhi, India
Email : email@example.com
2 Affiliation : Information Technology, Bharati Vidyapeeth's College of Engg, New Delhi, India
Email : firstname.lastname@example.org
3 Affiliation : Information Technology, Bharati Vidyapeeth's College of Engg, New Delhi, India
Email : email@example.com
4 Affiliation : Information Technology, Bharati Vidyapeeth's College of Engg, New Delhi, India
Email : firstname.lastname@example.org
Sixth Sense is a multi-platform app for aiding people with disabilities: people who cannot speak (mute), cannot hear (deaf), cannot see (blind), cannot visually distinguish between objects (visual agnosia), and people on the autism spectrum (characterized by great difficulty in communicating, in forming relationships with other people, and in using language and abstract concepts). The current product is implemented on two platforms: a mobile app and a web app. The mobile app also supports object detection in offline mode. Our goal is to improve daily life for people with these disabilities and, additionally, to serve as an educational aid for people with cognitive disabilities. The current implementation provides object recognition, a text-to-speech converter, and a speech-to-text converter. On the website, the speech-to-text and text-to-speech converters use the Web Speech API (Application Program Interface); on mobile, they use the platform's native text-to-speech and speech-to-text libraries. Because object recognition would see little use on a website, it is implemented only in the mobile app, using the Firebase ML Kit and several pre-trained models, available both offline and online.
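The website's use of the Web Speech API can be sketched as below. This is a minimal illustration, not the app's actual code: the function names `speak` and `listen` are hypothetical, and the sketch assumes a browser that exposes `speechSynthesis` and `SpeechRecognition` (or the `webkit`-prefixed variant); outside a browser both functions simply report the feature as unavailable.

```javascript
// Hypothetical sketch of the two Web Speech API converters.
// speak(): text-to-speech via the SpeechSynthesis interface.
// listen(): speech-to-text via the SpeechRecognition interface.

function speak(text) {
  // Feature-detect: speechSynthesis exists only in supporting browsers.
  if (typeof speechSynthesis === "undefined") return false;
  const utterance = new SpeechSynthesisUtterance(text);
  speechSynthesis.speak(utterance); // spoken aloud by the browser
  return true;
}

function listen(onTranscript) {
  // SpeechRecognition is prefixed in some browsers (e.g. Chrome).
  const Recognition =
    typeof window !== "undefined" &&
    (window.SpeechRecognition || window.webkitSpeechRecognition);
  if (!Recognition) return false;
  const recognizer = new Recognition();
  recognizer.onresult = (event) =>
    onTranscript(event.results[0][0].transcript);
  recognizer.start(); // prompts for microphone access, then recognizes
  return true;
}
```

A page would typically wire these to buttons, e.g. calling `speak(...)` on recognized text to round-trip speech through the two converters.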
Keywords : Sixth Sense; disabilities; Web Speech API; Firebase ML Kit; cognitive disabilities