AIR CANVAS APPLICATION USING OPENCV AND NUMPY IN PYTHON
DOI:
https://doi.org/10.59367/77an2c28
Keywords:
Object Tracking, Computer Vision, Object Detection
Abstract
In recent years, air writing has emerged as an attractive and challenging research area in image processing and pattern recognition. It has the potential to significantly advance automation and improve human-machine interfaces in a variety of applications, and numerous studies have sought new techniques that reduce processing time while providing higher detection accuracy. Object tracking is an important task in computer vision, and its popularity is growing due to the availability of faster computers, cheaper and better video cameras, and the demand for automated video analysis. In general, video analysis involves three main steps: detecting objects, tracking their movement from frame to frame, and analyzing their behavior. Object tracking itself involves four main concerns: choosing a suitable representation of the object, selecting the features to track, detecting the object, and following it across frames. Object tracking algorithms are widely used in applications such as real-world automated surveillance, video profiling, and vehicle navigation. The goal of this project is to develop a motion-to-text converter that can serve as software for smart wearable devices. It allows users to write in the air by tracking the path of a fingertip with computer vision, and the generated text can be used for purposes such as sending messages and emails. By recognizing hand gestures, the system can also serve as a powerful means of communication for the deaf. It offers an effective communication method that reduces reliance on mobile devices and laptops by eliminating the need to type on a physical keyboard.
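The detect-track-draw pipeline described above can be sketched with OpenCV and NumPy alone. The example below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the fingertip carries a blue-colored marker, isolates it with placeholder HSV thresholds, tracks the largest detected contour from frame to frame, and draws the resulting path onto a blank canvas overlaid on the live video. The HSV bounds, contour-area threshold, and camera index are illustrative values that would need tuning for a real setup.

# Minimal air-canvas sketch with OpenCV and NumPy.
# Assumption: the fingertip carries a blue marker; HSV bounds are placeholders.
import cv2
import numpy as np

LOWER_HSV = np.array([100, 150, 50])    # assumed lower HSV bound for a blue marker
UPPER_HSV = np.array([130, 255, 255])   # assumed upper HSV bound for a blue marker

cap = cv2.VideoCapture(0)               # default webcam
canvas = None                           # drawing surface, created on the first frame
prev_point = None                       # last tracked marker position

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)          # mirror view feels more natural for writing
    if canvas is None:
        canvas = np.zeros_like(frame)

    # 1. Detect: threshold the frame in HSV space to isolate the marker color.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)

    # 2. Track: take the largest contour as the marker and follow its center
    #    from frame to frame, drawing the path on the canvas.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        if cv2.contourArea(c) > 300:    # ignore small noise blobs
            (x, y), _ = cv2.minEnclosingCircle(c)
            point = (int(x), int(y))
            if prev_point is not None:
                cv2.line(canvas, prev_point, point, (255, 0, 0), 4)
            prev_point = point
        else:
            prev_point = None
    else:
        prev_point = None

    # 3. Display: overlay the drawn strokes on the live frame.
    output = cv2.add(frame, canvas)
    cv2.imshow("Air Canvas", output)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

A complete motion-to-text converter would additionally feed the recorded stroke points into a character-recognition stage; that step is outside the scope of this sketch.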
License
Copyright (c) 2024 International Journal of Futuristic Innovation in Arts, Humanities and Management (IJFIAHM)
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.