No, it's fine. I've just seen motion trackers that analyze 2 frames and use the difference for something; I haven't quite found a 'single point' tracker yet. I'm already using the OSC Receiver and Sender to communicate... but I'm missing this detail: the coordinates of a single point, and how to follow them.
There are 2 general ways to do this that I've seen: use CVTools (freely available at kineme.net), and hope some kind soul helps fill in the blanks, or use Apple's CoreImage method.
CVTools will provide you with accurate point tracking, and provide numeric outputs in a structure. It's also slow, buggy, and requires a zillion support patches.
Apple's CI method does some fancy CI filter work, then uses Area Average (or something similar?) to condense the image-difference's motion vectors (converted to colors) into a single pixel value -- you then use the Image Pixel patch to extract its red/green/blue components, which represent the motion of the area that's been averaged. For a single point this works quite well (and requires no fancy patches), but for many points it gets very expensive and slow :/
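For illustration only, here's a rough sketch of the "condense a region into one averaged value" idea in Python/NumPy (not Quartz Composer -- QC does this with the Area Average and Image Pixel patches, no code involved). The function name and the x/y/radius parameters are hypothetical; real motion vectors would be color-encoded per channel, but a signed grayscale difference shows the same averaging step:

```python
import numpy as np

def area_average_motion(prev_frame, curr_frame, x, y, radius):
    """Crude analogue of the QC approach: difference two frames,
    then average a small region down to a single value -- the role
    Area Average plays before Image Pixel reads the result.
    prev_frame/curr_frame are 2-D grayscale arrays; x, y, radius
    pick the tracked region (hypothetical parameters)."""
    region_prev = prev_frame[y - radius:y + radius, x - radius:x + radius]
    region_curr = curr_frame[y - radius:y + radius, x - radius:x + radius]
    # The signed difference stands in for the color-encoded motion
    # vectors; its mean is the "single pixel" you'd extract in QC.
    diff = region_curr.astype(float) - region_prev.astype(float)
    return float(np.mean(diff))
```

Tracking many points this way means repeating the crop-and-average per point, which is where the per-point cost mentioned above comes from.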
Hope that helps point you vaguely where you're wanting to go :) -- [ christopher wright ] [email protected] http://kineme.net/
Quartzcomposer-dev mailing list ([email protected])

