While it is true that a better picture will obviously give better results, I can vouch for the methods I mention because I use them (for quite different end purposes, but with the same methodology nevertheless).

http://www.memo.tv/webcam_piano uses a very basic implementation of the first method I explain. http://www.memo.tv/webcam_piano_with_processing_v0_1 does a much more accurate implementation of the first method, but in Processing. I later did a version using optical flow which I can't find right now, but you can easily adapt the optical flow sample I mention.

The key thing is NOT to do any motion averaging over the whole frame - that's when any subtle motion gets absorbed by the noise, and no mathematical tricks can save it. If your images have noise (as mine do when using a webcam), that noise is homogeneous and evenly spread across the whole frame. Any genuine subtle movement (such as a mouse cursor moving, or a person blinking) will be insignificant compared to the total amount of movement created across the whole frame by noise - that's why you need to do local checks. A person blinking, or a subtle mouse move, is still quite a considerable movement if you look at a small region localized to that area.

I.e. if you take the webcam piano example, create quite a large grid, and lower the threshold as far as it will go while staying just high enough not to be triggered by the noise. Any movement beyond that (a person blinking, a subtle mouse movement) will trigger one of the boxes. Your check should then be: if ANY of the boxes triggers, there is genuine movement in the frame...
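To make the grid idea concrete, here is a minimal sketch in plain Python (not QC) of the local check described above: divide the difference frame into cells, sum each cell separately, and flag motion if ANY single cell exceeds the threshold. The function name, frame sizes, and threshold value are all illustrative assumptions.

```python
def motion_in_any_cell(diff, grid_x, grid_y, threshold):
    """diff: 2D list of per-pixel absolute frame differences (0-255).

    Returns True if any grid cell's summed difference exceeds threshold,
    i.e. the check is local - noise spread evenly over the frame cannot
    push a single small cell over the line, but a concentrated subtle
    movement can.
    """
    h, w = len(diff), len(diff[0])
    cell_h, cell_w = h // grid_y, w // grid_x
    for gy in range(grid_y):
        for gx in range(grid_x):
            # Sum the difference inside this cell only (local check).
            total = sum(
                diff[y][x]
                for y in range(gy * cell_h, (gy + 1) * cell_h)
                for x in range(gx * cell_w, (gx + 1) * cell_w)
            )
            if total > threshold:
                return True  # one triggered cell is enough
    return False
```

For example, a small 2x2 patch of strong difference trips one cell even though, averaged over the whole frame, it would vanish into the noise floor - which is exactly the point of doing the check per cell.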

I've attached a cut-down version of the Quartz Composer webcam_piano (without any of the OSC or note stuff). It simply scales down the video image a bit (that does the localized averaging), then iterates through all pixels; if any one pixel is greater than a threshold, there is movement in the frame. The iterator is what goes through every pixel in the differenced image and displays a box if there is movement.

Since you don't care where the movement is, you can do something simpler: just apply a CI threshold to the scaled-down differenced image, then see if any pixel is white. You can do that with an image histogram on the thresholded image, checking whether the value at [255] is non-zero (again, no averaging). I've added that in as well - just a few patches. And since you don't need to do the iteration, you can get away with keeping the differenced frame at a much higher resolution (the numX and numY) - make sure you turn off the boxes though, it will be very slow otherwise! You will probably need to play with the threshold - bring it down as far as it goes before it is triggered by noise.
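A rough Python analogue of that threshold-plus-histogram trick, for reference (QC does this with a CI filter and the Image Histogram patch; the function name and 0-255 values here are illustrative):

```python
def any_pixel_white(diff, threshold):
    """Return True if any pixel in the difference image exceeds threshold.

    Mirrors the QC patch chain: binarize the differenced image, then
    check whether the white bin of its histogram is non-zero. No
    averaging anywhere, so a single bright pixel is enough to register.
    """
    # Binarize: 255 where the difference exceeds the threshold, else 0.
    binary = [[255 if px > threshold else 0 for px in row] for row in diff]
    # Two-bin histogram over the only possible values.
    histogram = {0: 0, 255: 0}
    for row in binary:
        for px in row:
            histogram[px] += 1
    # Movement anywhere in the frame shows up as a non-zero white bin.
    return histogram[255] > 0
```

The design point is the same as in the patch version: the histogram lookup replaces the per-pixel iteration, so the differenced frame can stay at a much higher resolution without the cost of iterating boxes.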


Attachment: Webcam Piano just grid.qtz
Description: application/quartzcomposer




Memo (Mehmet S. Akten)

www.memo.tv

[EMAIL PROTECTED]


On 8 Jul 2008, at 04:41, aram wrote:

If the motion that is occurring is too small to be distinguished from 'noise' (the delta on the difference), then I'm not sure how any amount of averaging or other tricks of mathematical elan will help. It sounds like you need to enhance the capture. A zoom lens or a carefully arranged set of mirrors might help. The mirrors might allow some redundant cross-feed to both cameras, which would certainly help you distinguish the signal from the noise. Sometimes getting a new (or additional) perspective on a problem can be illuminating and enlightening. ILLUMINATING - gee, if you can control the lighting, that might help in a really big way!



On Monday, July 07, 2008, at 07:21PM, "Memo Akten" <[EMAIL PROTECTED]> wrote:

If the amount of motion you are waiting for is quite small, it might be hard to detect it by frame-differencing the whole frame and comparing against a threshold. If that's the case, I would suggest one of two options:

- do a frame difference as mentioned, but instead of summing the total difference over the whole frame and comparing to a threshold, divide the screen into a grid and sum the difference in each grid cell. If any cell has more motion than a threshold, switch to that feed.
- similar to the above (more complex, but easier to do since one of the samples has something very similar): do an optical flow motion estimation on the feeds, then check whether any velocity vector is greater than a threshold.

Normally, if developing from scratch, the first option is much easier. But there is already a QC optical flow implementation in the samples (/Developer/Examples/Quartz Composer/Plugins/Optical Flow). There you will find two QTZ files; either one will do, but OpticalFlow.qtz is probably easier to start with - you just need the branch up to and including the CI Optical Flow. This Core Image kernel takes two frames (current and previous) and outputs a new image containing velocity vectors (R is horizontal speed, G is vertical speed). What you want to do is loop through this image and check whether any velocity (i.e. any pixel) is greater than a threshold (you DON'T want to average all velocities and compare to a threshold, because you will have the same problem as with whole-frame differencing: the subtle movements will be absorbed by the noise).
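As a sketch of that per-pixel velocity check, in plain Python: here the flow field is represented as (vx, vy) tuples, which is an assumption for illustration - the CI kernel actually packs the components into the R and G channels of an image.

```python
import math

def max_velocity(flow):
    """flow: 2D list of (vx, vy) velocity vectors from an optical flow pass."""
    return max(
        math.hypot(vx, vy)  # magnitude of each velocity vector
        for row in flow
        for vx, vy in row
    )

def motion_detected(flow, threshold):
    # Compare the single largest velocity, NOT the average, so a subtle
    # local movement is not diluted by the still parts of the frame.
    return max_velocity(flow) > threshold
```

Taking the maximum rather than the mean is the whole trick: one pixel moving decisively is enough to call it genuine motion.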

P.S. those optical flow QTZ samples will give an error about a missing plugin. The OpticalFlowDownloader QCPlugin is not that important; all it does is take the RG vector field image and convert it into an array of numbers. If it's easier, you could just compile the plugin and iterate through the array instead of the image... or better still, just modify the QCPlugin to do the looping (as it already does that) and return the maximum velocity instead of (or as well as) the array of numbers....

hope this helps,


Memo (Mehmet S. Akten)

www.memo.tv

[EMAIL PROTECTED]



On 7 Jul 2008, at 20:13, Jeffrey Weekley wrote:

Thanks to Chris W. for his reply using a QC Histogram approach, but since neither the screen capture nor the speaker video source changes all that much, we weren't able to generate a delta wide enough to elicit a switch from one source to the other.

I _know_ I saw such a composition at WWDC last year ('07), but I can't find that example online or in the developer directory with Xcode.

Does anyone have this? I think it's just what we're looking for.

In a nutshell, here's the problem again:

One video screen capture + one camera video (plus audio). We want to be able to post both videos to a directory, bring them into QC, and switch between the video sources based on whether or not something is happening in the screen capture. Most of the time it's subtle, like a mouse move. The audio from the camera plays throughout. We then want a QuickTime movie out of the QC.

Any ideas?

Thanks in advance.

-Jeff W.
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Quartzcomposer-dev mailing list      ([email protected])
Help/Unsubscribe/Update your Subscription:
http://lists.apple.com/mailman/options/quartzcomposer-dev/memo%40memo.tv

This email sent to [EMAIL PROTECTED]



