>
> 4k*4k*16bytes/pixel = 256MB/image. That's nothing. glReadPixels will let
> you snag all of that in one go (and if it's too large, you can snag
> subregions, and assemble them manually -- I can't think of any reason why
> that'd be useful though). The reason I'm a fan of not using QC/NSImage
> stuff is because all you honestly really care about is getting the raw pixel
> data to save to disk, and all the junk on top doesn't help you much at all
> -- NSImage hides the underlying pixels and doesn't let you save them,
> CIImage hides the underlying pixels and doesn't let you save them, and does
> filtering to boot. CGImageRef is your best bet, so the remainder of my
> reply assumes that I've converted you to The Light Side ;)
Yep, I'm going to have to figure out some way of limiting the image size
depending on vram at some point, but for now I'll just let the poor end
users ask for 4k x 4k x 32bit and suffer the consequences :D
Actually, you just made a very useful point re. regions - CI seems to glitch
with 32bit colour + high resolution, so tiling the image might help. It seems
that if I use an image > 1024 pixels and run a CI filter that samples at an
offset, it can't sample to the right of the 1024-pixel boundary for a
destination to the left. I might have to implement some kind of tiling to
get around that..
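A rough sketch of what I have in mind for the tiling (the helper name and the tile size are mine, and it assumes a current GL context plus a buffer big enough for the full width x height RGBA float image):

```objc
/* Read a large framebuffer in fixed-size tiles, packing each tile
   straight into its place in the full-size buffer via PACK_ROW_LENGTH
   and PACK_SKIP_*.  buf must hold width * height * 4 floats. */
static void readPixelsTiled(GLint width, GLint height, GLint tile, GLfloat *buf)
{
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glPixelStorei(GL_PACK_ROW_LENGTH, width);
    for (GLint y = 0; y < height; y += tile) {
        for (GLint x = 0; x < width; x += tile) {
            GLint w = (x + tile > width)  ? width  - x : tile;
            GLint h = (y + tile > height) ? height - y : tile;
            glPixelStorei(GL_PACK_SKIP_PIXELS, x);
            glPixelStorei(GL_PACK_SKIP_ROWS,   y);
            glReadPixels(x, y, w, h, GL_RGBA, GL_FLOAT, buf);
        }
    }
    glPixelStorei(GL_PACK_ROW_LENGTH, 0);    /* restore pack defaults */
    glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_PACK_SKIP_ROWS, 0);
}
```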
Speaking of which, am I right in thinking >8bit modes only work with
power-of-two textures? If so, tiling might help performance too.
Anyway, I'm not fixated on NSImage or anything.. that's purely what the
valueForOutputKey: method gives me by default. There's also
valueForOutputKey:ofType:, so perhaps that will give me a convenient
CGImage? CGImage is one of the image types available, but I don't know if I
can specify the type I want or if it'll just tell me to reconsider my
request :). I'll try it later.
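If the ofType: variant does accept @"CGImage" (that's an assumption on my part - the type strings listed in the QCRenderer docs are worth checking), the whole pipeline would collapse to something like:

```objc
// Hypothetical: ask the renderer for the published port's value as a
// CGImage directly, skipping NSImage entirely.  "outputImage" stands in
// for whatever the published output port is actually named.
CGImageRef img = (CGImageRef)[renderer valueForOutputKey:@"outputImage"
                                                  ofType:@"CGImage"];
if (img)
    saveCGImage(@"/tmp/frame.exr", img,
                CFSTR("com.ilm.openexr-image"), props);
```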
Btw, I'm using a published output port because I want the original data, not
the 'resized to fit on screen' data that the QC snapshot methods will give
me, so I suspect glReadPixels etc. won't quite cut it.
> If you're hell-bent on using NSImage, TIFFRepresentation will dump out TIFF
> data. This will consume ungodly amounts of time and memory though. From
> the TIFF data, you can init a CGImageRef, which is completely ridiculous
> (you're encoding, duplicating, and decoding, for what really only needs to
> be a single "write" operation). You can also iterate through the NSImage's
> representations ([theImage representations]), and find an NSBitmapImageRep
> -- that'll give you raw pixel data too. However, I think using this will be
> unsatisfactory as well, since it's still not helping you accomplish your
> goal. You can make a CGImageRef from an NSBitmapImageRep though, I _think_
> (it's been a while since I've mucked about bridging all the various image
> formats on OS X...)
Yep, high speed + preserving quality are the goals here, so conversion to
TIFF and back would be against both of those :D
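On the NSBitmapImageRep route you half-remembered: since 10.5 the rep has a -CGImage accessor, so the bridge works without any TIFF round-trip. A sketch (the helper name is mine):

```objc
// Find a bitmap rep inside an NSImage and hand back its CGImage (10.5+).
// Returns NULL if the image carries no NSBitmapImageRep.
static CGImageRef cgImageFromNSImage(NSImage *theImage)
{
    for (NSImageRep *rep in [theImage representations])
        if ([rep isKindOfClass:[NSBitmapImageRep class]])
            return [(NSBitmapImageRep *)rep CGImage];
    return NULL;
}
```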
> Don't use NSImage -- it's too abstract. Use CGImageRef, which has support
> from ImageIO (we use that in QuartzCrystal when writing out image
> sequences).
> The pertinent code is this:
>
> static int saveCGImage(NSString *filename, CGImageRef img,
>                        CFStringRef codec, CFMutableDictionaryRef props)
> {
>     NSURL *url = [NSURL fileURLWithPath:filename isDirectory:NO];
>     CGImageDestinationRef dest =
>         CGImageDestinationCreateWithURL((CFURLRef)url, codec, 1, props);
>     CGImageDestinationAddImage(dest, img, NULL);
>     CGImageDestinationFinalize(dest);
>     CFRelease(dest);
>     return 0;
> }
>
> takes an NSString for a file name (obviously predates Snow Leopard, as URLs
> are now the blessed way to access files), a CGImageRef, the codec as a
> CFStringRef ("com.ilm.openexr-image") -- (remember that CFStringRef is
> toll-free bridged with NSString, so @"com.ilm.openexr-image" will work), and
> props, which is some settings dictionary -- we initialize it like this:
>
> props = CFDictionaryCreateMutable(NULL, 1, NULL, NULL);
> CFDictionaryAddValue(props, kCGImageDestinationBackgroundColor,
>                      CGColorGetConstantColor(kCGColorBlack));
>
> That's about it, regarding writing CGImageRefs.
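Just to check I've understood the calling side, a complete write would then look like this (the file name and img are placeholders):

```objc
// Assemble the props dictionary and write img out as OpenEXR.
CFMutableDictionaryRef props = CFDictionaryCreateMutable(NULL, 1, NULL, NULL);
CFDictionaryAddValue(props, kCGImageDestinationBackgroundColor,
                     CGColorGetConstantColor(kCGColorBlack));
saveCGImage(@"/tmp/capture.exr", img, CFSTR("com.ilm.openexr-image"), props);
CFRelease(props);
```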
That pretty much covers it - I'll have a good poke around and see how it
goes now. Lots more things to figure out - I'll be digging into what I can
store in the metadata next I think - exposure time etc. is pretty important,
but it'd be great to also store the original image + processing settings in
there so that editing is non-destructive and loading is transparent.. plenty
to think about :)
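Re. the metadata: it looks like CGImageDestinationAddImage takes a per-image properties dictionary as its third argument, with EXIF fields like exposure time living in a nested EXIF sub-dictionary - something like this, I think (the 0.5s value is obviously made up):

```objc
// Attach EXIF exposure time via ImageIO's per-image properties.
NSDictionary *exif = [NSDictionary dictionaryWithObject:
                          [NSNumber numberWithDouble:0.5]
                      forKey:(id)kCGImagePropertyExifExposureTime];
NSDictionary *meta = [NSDictionary dictionaryWithObject:exif
                      forKey:(id)kCGImagePropertyExifDictionary];
CGImageDestinationAddImage(dest, img, (CFDictionaryRef)meta);
```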
I really need a decent webcam though - this is supposed to be an
astro-photography app, and so far all I'm doing is seeing how far I can push
a built-in iSight in a dark room (unbelievably far, as it happens!)
Thanks for all your help on this Chris, you're a real hero on this list!
Chris
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Quartzcomposer-dev mailing list ([email protected])
Help/Unsubscribe/Update your Subscription:
http://lists.apple.com/mailman/options/quartzcomposer-dev/archive%40mail-archive.com
This email sent to [email protected]