Before getting into further details, let us understand how we could possibly figure out the hand region. First, we need an efficient method to separate the foreground from the background. To do this, we use the concept of running averages. We make our system look over a particular scene for 30 frames, and during this period we compute the running average over the current frame and the previous frames.
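As a rough sketch of this calibration phase (assuming OpenCV and a default webcam; the aWeight of 0.5 and the Gaussian blur are illustrative choices, not fixed by the method):

```python
import cv2

aWeight = 0.5    # weight for the running average; higher adapts faster
num_frames = 30  # calibration period described above
bg = None        # background model, built up over the first 30 frames

camera = cv2.VideoCapture(0)
for i in range(num_frames):
    grabbed, frame = camera.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (7, 7), 0)
    if bg is None:
        # the first frame initializes the model
        bg = gray.copy().astype("float")
    else:
        # fold the new frame into the running average
        cv2.accumulateWeighted(gray, bg, aWeight)
camera.release()
```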
By doing this, we essentially tell our system: "Ok robot! The video sequence that you stared at, or rather the running average of those 30 frames, is the background." After figuring out the background, we bring in our hand and make the system understand that our hand is a new entry into the scene, which means it becomes the foreground object. But how are we going to take out this foreground alone? The answer is Background Subtraction. Look at the image below, which describes how Background Subtraction works.
If you want to code this in Python, read on. After figuring out the background model using running averages, we use the current frame, which holds the foreground object (our hand, in this case) in addition to the background. We calculate the absolute difference between the background model (updated over time) and the current frame (which has our hand) to obtain a difference image that holds the newly added foreground object, which is our hand. This is what Background Subtraction is all about.
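In OpenCV this difference image is a single call; bg and gray here are the background model and the blurred grayscale frame from the calibration sketch above:

```python
import cv2

# bg:   background model built during calibration (float image)
# gray: blurred grayscale version of the current frame
diff = cv2.absdiff(bg.astype("uint8"), gray)
```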
To detect the hand region from this difference image, we need to threshold the difference image so that only our hand region becomes visible and all the other unwanted regions are painted black.
This is what Motion Detection is all about. Note: Thresholding is the assignment of pixel intensities to 0s and 1s based on a particular threshold level, so that our object of interest alone is captured from an image. After thresholding the difference image, we find contours in the resulting image. The contour with the largest area is assumed to be our hand.
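Continuing the sketch, thresholding and contour extraction might look like this (the threshold value of 25 is an assumption to be tuned for your lighting):

```python
import cv2

# threshold the difference image so the hand shows up white on black
thresholded = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]

# grab the external contours (OpenCV 4.x return signature)
contours, _ = cv2.findContours(thresholded.copy(),
                               cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
if contours:
    # the largest contour is assumed to be the hand
    hand = max(contours, key=cv2.contourArea)
```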
First, we import all the essential packages to work with and initialize the background model. Next, we have our function that is used to compute the running average between the background model and the current frame. This function takes in two arguments: the current frame and aWeight, which is like a threshold to perform the running average over images. If the background model is None (i.e., this is the very first frame), we initialize it with the current frame. Then, we compute the running average over the background model and the current frame using cv2.accumulateWeighted().
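A minimal sketch of such a function, assuming the background model bg is kept as a module-level global:

```python
import cv2

bg = None  # global background model

def run_avg(image, aWeight):
    global bg
    # first call: initialize the background model with the current frame
    if bg is None:
        bg = image.copy().astype("float")
        return
    # otherwise update the running average in place:
    # bg = (1 - aWeight) * bg + aWeight * image
    cv2.accumulateWeighted(image, bg, aWeight)
```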
Running average is calculated using the formula given below:

bg = (1 - aWeight) * bg + aWeight * frame

where bg is the background model and frame is the current frame. To learn more about what is happening behind this function, visit this link. Our next function is used to segment the hand region from the video sequence. This function takes in two parameters: the current frame and the threshold used for thresholding the difference image. First, we find the absolute difference between the background model and the current frame using cv2.absdiff(). Next, we threshold the difference image to reveal only the hand region.
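Pulling these pieces together, a sketch of the segmentation function might look like this (the default threshold of 25 is again an assumption):

```python
import cv2

def segment(image, threshold=25):
    global bg
    # absolute difference between background model and current frame
    diff = cv2.absdiff(bg.astype("uint8"), image)

    # threshold the difference image to isolate the hand region
    thresholded = cv2.threshold(diff, threshold, 255,
                                cv2.THRESH_BINARY)[1]

    # find external contours (OpenCV 4.x return signature)
    contours, _ = cv2.findContours(thresholded.copy(),
                                   cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # the contour with the largest area is assumed to be the hand
    segmented = max(contours, key=cv2.contourArea)
    return (thresholded, segmented)
```

A caller would typically check the return value for None (no motion detected) before drawing the contour.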
Control media in the background: Zesture lets you control music and videos playing in the background using just your hand gestures, so you can continue using other apps without any interruptions.
Control your screen from a distance: Zesture allows you to control your screen from up to 5 feet away. Enjoy exercising while listening to music or watching videos: Zesture makes it really easy for you to move to the next workout video without having to stop your exercise, so you can focus on losing weight. Control your presentation while staying away from your laptop: if you deliver presentations often, Zesture is for you. Privacy and security: using your webcam is totally safe.
Defining custom areas can be beneficial in reducing CPU usage or in preventing movements in the background from being processed. In the gesture window you also have a popup menu from which you can either add a new gesture pattern or set the aspect ratio for that window.
On minimization, the program will dock in the taskbar icon area. Right-click on the icon to access the menu, from which you will also be able to stop or start the tracking. With the application minimized, the CPU usage will be reduced.
There, in the first tab (Action), you can configure the action parameters. Under the Mouse tab the user can adjust the default mouse parameters as these are defined in Windows. Note that this will only apply to the uMouse control and will not impact your desktop mouse settings. In the Camera panel you can set your camera id, in case your system has more than one camera.
The second entry refers to how many frames per second the camera should handle. Note, however, that this parameter represents the maximum allowed value rather than the actual one, and the value you set could well exceed your hardware's capabilities. The last parameter specifies whether or not the system should draw the camera stream on the screen; disabling this can be useful if your system is low on resources.
Under the Application tab you can set whether or not you want uMouse to run on Windows start-up. If you also enable that option, the mouse control will be started on start-up as well. In the Gesture panel you can fine-tune all the parameters related to gesture recognition and their mappings to controls.
The minimum score and distance parameters are for advanced users, so you should handle them with care. For the moment, the method used for gesture recognition is the default one, which is based on a neural network approach.