Hello. I have noticed that when you pass an input image (a CVOpenGLTextureRef, or any CVImageBuffer) to a published input image port, QC returns the image dimensions based only on the clean aperture key, rather than the production aperture (which is the full extent of the image and the actual number of pixels).
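For context, CoreVideo exposes both sets of dimensions directly on the buffer, so the discrepancy is easy to see. A minimal sketch, assuming you have a valid CVImageBufferRef in hand:

```objc
#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>

// Sketch: compare the clean-aperture rect against the encoded (full-pixel)
// extent of a CVImageBufferRef. When a clean aperture attachment is present,
// the clean rect will be smaller than the encoded size (e.g. 1888x1062 inside
// a 1920x1080 frame).
static void LogApertures(CVImageBufferRef image)
{
    CGSize encoded = CVImageBufferGetEncodedSize(image); // full pixel extent
    CGRect clean   = CVImageBufferGetCleanRect(image);   // title-safe sub-rect
    CGSize display = CVImageBufferGetDisplaySize(image); // nominal display size

    NSLog(@"encoded: %g x %g", encoded.width, encoded.height);
    NSLog(@"clean:   %g x %g at (%g, %g)",
          clean.size.width, clean.size.height,
          clean.origin.x, clean.origin.y);
    NSLog(@"display: %g x %g", display.width, display.height);
}
```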
If you pass CVImageBuffers output from a QTCaptureSession into QC, some cards will add title-safe "clean aperture" metadata as an attachment key on the input CVImageBufferRef. This means the Image Dimensions patch now reports 1888x1062 rather than 1920x1080 for your image size, even though all the pixels are actually in the image. This becomes a serious issue when editing the image (calculating image bounds, center, etc.), because the computed bounds are wrong for the number of pixels actually there.

To reproduce:

1. Pass in an input image from, say, a QTCaptureSession. In the delegate callback for, say, a QTCaptureVideoPreviewOutput, log the sample buffer's encoded, clean, and production aperture keys.
2. In the QC composition, attach an Image Dimensions patch to the input image you are passing in. Note that the bounds reported are the clean aperture.
3. Now do some work on the image, such as cutting it in half via transforms or image texturing properties, and note that you are not actually at the halfway point of the image, due to the difference between the bounds reported by the clean aperture and the physical pixels actually there (the production aperture).

My workaround is to remove the clean aperture key, which makes Image Dimensions and the internal mechanism report the correct value, but that makes me feel dirty inside.

Can anyone comment on this? Should I file a bug?
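For what it's worth, the workaround can be sketched as follows: strip the attachment in the capture delegate before the buffer reaches the composition. CVBufferRemoveAttachment() is the documented CoreVideo call for this; whether QC should be honoring the key in the first place is the open question.

```objc
#import <QTKit/QTKit.h>
#import <CoreVideo/CoreVideo.h>

// Workaround sketch: remove the clean aperture attachment so QC falls back to
// the full encoded dimensions. This is the standard delegate callback of
// QTCaptureVideoPreviewOutput (and QTCaptureDecompressedVideoOutput).
- (void)captureOutput:(QTCaptureOutput *)captureOutput
  didOutputVideoFrame:(CVImageBufferRef)videoFrame
     withSampleBuffer:(QTSampleBuffer *)sampleBuffer
       fromConnection:(QTCaptureConnection *)connection
{
    CVBufferRemoveAttachment(videoFrame, kCVImageBufferCleanApertureKey);
    // ... hand videoFrame to the published input port as before
}
```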
_______________________________________________
Quartzcomposer-dev mailing list ([email protected])

