I'm starting a project that involves buffering a large number of video
frames and blending between them based on a depth map (i.e. depth 0 =
current frame; previous frames are blended in according to the values in
the depth map). I'm currently using the QTCoreVideo201 sample code as a
guide for buffering the frames, but I'm wondering what the best strategy
would be for blending many of these frames together into the final image.
I've worked in OpenGL before, but this is somewhat new territory, and I'm
wondering what the pitfalls and bottlenecks might be.
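
To make it concrete, this is roughly the fragment shader I have in mind.
It's just a sketch: frameHistory, depthMap, numFrames, and texCoord are
placeholder names, and it assumes the frame history is kept in a texture
array (EXT_texture_array) with layer 0 holding the current frame.

#version 120
#extension GL_EXT_texture_array : enable

uniform sampler2DArray frameHistory; // ring buffer of past frames (assumed layout)
uniform sampler2D      depthMap;     // per-pixel depth, 0.0 = current frame
uniform float          numFrames;    // how many frames the history holds

varying vec2 texCoord;

void main()
{
    // Map the 0..1 depth value onto an index into the frame history.
    float d     = texture2D(depthMap, texCoord).r;
    float index = d * (numFrames - 1.0);

    // Fetch the two nearest frames and cross-fade between them, so a
    // fractional depth gives a smooth blend rather than a hard step.
    float lower = floor(index);
    float upper = min(lower + 1.0, numFrames - 1.0);
    float frac  = index - lower;

    vec4 a = texture2DArray(frameHistory, vec3(texCoord, lower));
    vec4 b = texture2DArray(frameHistory, vec3(texCoord, upper));

    gl_FragColor = mix(a, b, frac);
}

One thing I like about this approach is that only the two nearest frames
are sampled per fragment, so the per-pixel cost stays constant no matter
how many frames are buffered -- but I don't know if that's the right way
to structure it, or whether keeping that many frames resident on the GPU
is itself the bottleneck.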

cheers,
evan