I'm thinking it's more arbitrary/conservative -- the GMA950 is comfortable with images up to 2048x2048, but it subdivides them anyway (at unusual locations, no less). I'm sure it's trying to do the right thing, but it appears to be more limited than the hardware's capabilities alone would suggest.
My card (an ATI 2600) supports 4k x 4k, so I really doubt it's a hardware limit. When stretching up to 4k x 4k I see odd divisions too, with the bottom-right tile split in two (and it varies randomly between a horizontal and a vertical split as I change the resolution - odd!).

It would be interesting to learn where the edges are, and why, just out of curiosity :) Does your ATI split at widths similar to my GMA950?

In your case, you need to return the region passed in to the ROI function expanded by 'scale' pixels each way.
I added this ROI function:
function myROIFunction(samplerIndex, dstRect, info)
{
    // dstRect is [x, y, width, height]; info is the 'scale' value
    // passed in via apply(). Expand by 'info' pixels on each side:
    // move the origin back, and grow the size by twice that amount.
    dstRect[0] -= info * 1;
    dstRect[1] -= info * 1;
    dstRect[2] += info * 2;
    dstRect[3] += info * 2;
    return dstRect;
}
(info is the 'scale' value in the apply function call.)
However, I still get occasional seams when the viewer is resized to various sizes.

I've also tried clamping the sampler coordinates to within the image dimensions, without luck. It gets rid of the lines around the edge, but not the ones inside the image.

The internal seams come from the subdivisions (I'm assuming you know that now), which happen at arbitrary locations, so clamping won't handle those. The affine transform trick posted earlier on this list (regarding Gaussian blur) would probably remove the edges without needing to clamp.

If I expand it a bit more (by *4 / *8 in place of *1 / *2, for example) it seems OK. Why does it need to be enlarged so much? (Honestly curious.)
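Concretely, the version that seems seam-free is the same callback as above, just with a wider inset -- the factor of 4 here is purely empirical, not something documented:

```javascript
function myROIFunction(samplerIndex, dstRect, info)
{
    // Empirically, expanding by 4x the scale on each side avoids the
    // internal seams; the exact factor required isn't documented.
    dstRect[0] -= info * 4;
    dstRect[1] -= info * 4;
    dstRect[2] += info * 8;
    dstRect[3] += info * 8;
    return dstRect;
}
```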

I'm not going to 'scale up' the image; the incoming video will already be high resolution.

The scaling I was referring to was of the ROI rectangle, not the image; the image size remains unchanged. I was curious why the ROI needed to be set so much larger than Tom's simpler (and intuitive) explanation suggests -- all the examples use CGRectInset(r, -radius, -radius), but we're effectively needing CGRectInset(r, -radius*4, -radius*4). Anyway, the ROI just tells Core Image how many pixels "over the edge" each region needs, so that it will correctly sample them instead of black. It's applied for each subdivision, so each subdivided region gets a bit of overlap.
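To make the arithmetic concrete, here's a plain-JavaScript sketch; rectInset is a hypothetical stand-in for CGRectInset operating on the [x, y, width, height] arrays the ROI callback uses, where negative insets grow the rect:

```javascript
// Stand-in for CGRectInset on [x, y, width, height] arrays.
// Negative dx/dy grow the rectangle outward on each side.
function rectInset(r, dx, dy)
{
    return [r[0] + dx, r[1] + dy, r[2] - 2 * dx, r[3] - 2 * dy];
}

var radius = 3;
var roi = [0, 0, 100, 100];

// What the documented examples suggest: CGRectInset(r, -radius, -radius)
var documented = rectInset(roi, -radius, -radius);

// What we effectively needed: CGRectInset(r, -radius*4, -radius*4)
var needed = rectInset(roi, -radius * 4, -radius * 4);
```

Each subdivided tile gets its own copy of this expansion, which is what gives adjacent tiles their overlap.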

--
[ christopher wright ]
[email protected]
http://kineme.net/


Quartzcomposer-dev mailing list ([email protected])