Carmen,

I can't speak to reading pixels back from VRAM or the CPU analysis you refer to. Why do you need to do that?

You might want to look into using Core Animation for what you are trying to do. For that matter, you could create a simple Quartz Composition, load it into a QCCompositionLayer (Core Animation), and then apply filters on top of that. If all you are doing is obtaining filter values set by the user (e.g. the input radius for a blur), this is quite easy to do.
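As a rough sketch of that approach (the composition file, the hostView outlet, and the radiusSlider are placeholders standing in for whatever your settings window provides):

#import <Quartz/Quartz.h>          // QCCompositionLayer
#import <QuartzCore/QuartzCore.h>  // CALayer, CIFilter

// Load a Quartz Composition into a Core Animation layer.
// "Camera.qtz", hostView, and radiusSlider are placeholders.
NSString *path = [[NSBundle mainBundle] pathForResource:@"Camera" ofType:@"qtz"];
QCCompositionLayer *compLayer = [QCCompositionLayer compositionLayerWithFile:path];
[compLayer setFrame:NSRectToCGRect([hostView bounds])];

// Put a Core Image filter on top of whatever the composition renders.
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setDefaults];
[blur setName:@"blur"];
[compLayer setFilters:[NSArray arrayWithObject:blur]];

[hostView setWantsLayer:YES];
[[hostView layer] addSublayer:compLayer];

// When the user moves the slider, push the new value into the layer's filter.
[compLayer setValue:[NSNumber numberWithFloat:[radiusSlider floatValue]]
         forKeyPath:@"filters.blur.inputRadius"];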

Take a look at this blog post to see a QCCompositionLayer in action: http://www.cimgf.com/2008/09/24/core-animation-tutorial-core-animation-and-quartz-composer-qccompositionlayer/

In fact, better yet, look into using a QTCaptureLayer (also Core Animation). It is simply a layer that displays a camera capture session's output, and it can just as easily be overlaid with any kind of Core Image filter.
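Something along these lines, assuming a layer-backed host view (hostView) and leaving out error handling:

#import <QTKit/QTKit.h>            // QTCaptureSession, QTCaptureDeviceInput, QTCaptureLayer
#import <QuartzCore/QuartzCore.h>  // CIFilter

// Open the default camera and wire it into a capture session.
NSError *error = nil;
QTCaptureDevice *camera = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];
[camera open:&error];

QTCaptureSession *session = [[QTCaptureSession alloc] init];
QTCaptureDeviceInput *input = [[QTCaptureDeviceInput alloc] initWithDevice:camera];
[session addInput:input error:&error];

// QTCaptureLayer draws the live feed; the filters property overlays Core Image filters.
QTCaptureLayer *previewLayer = [QTCaptureLayer layerWithSession:session];
[previewLayer setFrame:NSRectToCGRect([hostView bounds])];

CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
[sepia setDefaults];
[previewLayer setFilters:[NSArray arrayWithObject:sepia]];

[hostView setWantsLayer:YES];
[[hostView layer] addSublayer:previewLayer];
[session startRunning];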

Asking such a broad design question (i.e. "should I stick with this design?") may not be the most efficient way of getting what you need. I suggest you try a few things with your own design and post questions here about what goes wrong; people will certainly point out any design decisions that need correcting.

Best regards,

-Matt



On Sep 29, 2008, at 8:03 PM, Carmen Cerino Jr. wrote:

When my application starts up, the user is presented with a settings window. It contains a view that will be attached to a web camera, plus some widgets to control various filter settings. Once the settings are tweaked to the user's liking, the window will be closed, but the camera will still be processing images. In addition to standard CIFilters, I will also need to read the pixels back in from VRAM to perform an analysis on the CPU that I have not yet turned into a CIFilter. The way I plan on designing this application is to have an NSOpenGLView subclass to display my camera feed, and another class to control the camera and all of the image processing.

Questions:

1. Should I stick with this design path? Some of the sample code I have seen (e.g. CIVideoDemoGL) puts what I have broken down into two classes all in the NSOpenGLView subclass.

2. If I leave the code in two separate files, do I need two OpenGL contexts, one for the view and one to link to a CIContext for the image filters, or can I just use the one from the NSOpenGLView?

3. When I bring the images back in from the GPU they will already be rendered with the CIFilters, so is it worth it to push them back out to OpenGL for drawing?
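On question 2, one context is generally enough: a CIContext can be built directly from the view's existing OpenGL context. A rough sketch of that arrangement, assuming an NSOpenGLView subclass that declares a ciContext instance variable (this uses the CGL-based CIContext constructor available on 10.5):

#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>  // CIContext, CIImage
#import <OpenGL/OpenGL.h>

// Inside the NSOpenGLView subclass: build the CIContext once, from the
// same OpenGL context the view already uses for drawing.
- (void)prepareOpenGL
{
    CGLContextObj cglContext = (CGLContextObj)[[self openGLContext] CGLContextObj];
    CGLPixelFormatObj cglPixelFormat = (CGLPixelFormatObj)[[self pixelFormat] CGLPixelFormatObj];

    // ciContext is an instance variable (CIContext *) declared in the header.
    ciContext = [[CIContext contextWithCGLContext:cglContext
                                      pixelFormat:cglPixelFormat
                                          options:nil] retain];
}

// Later, draw a filtered frame straight into the view's context.
- (void)drawFrame:(CIImage *)filteredImage
{
    [[self openGLContext] makeCurrentContext];
    [ciContext drawImage:filteredImage
                 atPoint:CGPointZero
                fromRect:[filteredImage extent]];
    [[self openGLContext] flushBuffer];
}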