> I am doing things the way I am doing them as this QC patch is literally my
> first contact with QC, and also the first Objective-C program I have ever
> written.
No worries - sorry if I come off with hostility, I just do this a lot so it's
kinda impersonal and terse :) The first time is always the most difficult.
> I know that this is suboptimal and that I am leaking memory by the bucket,
> which is why I am striving to make it better.
> I already added an autorelease to the NSBitmapImageRep. Although I am not
> using it directly, I need it to set up the struct for the vImage_Buffer that
> will hold my final image.
Rather than autorelease, just release objects when you know they no longer
need to stick around. This is often faster than autorelease, and much easier
to reason about.
> But yeah:
>> why not use malloc() to get a buffer of the right size (width * height * 3),
> OK, sounds good, but how do I then go about my scaling?
> The image resizer patch before sending it out is your way then? One patch for
> every task, right?
> This sounds ok, yet the size of the LED wall displaying the image can be
> variable. The width and height needs to be set in the protocol frame, which
> is why I added the scaling in my patch.
If you're going to have a lot of different sizes, it might be worth doing the
scaling yourself inside the patch. I'm pretty rusty on how to do that with
the GPU in terms of QC's plugin API, but I'm sure someone on the list can
assist with that. In the meantime, having an extra scale patch in the
composition shouldn't be a huge problem.
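If you do eventually move the scaling onto the CPU inside the patch, a
nearest-neighbor resample over a tightly-packed RGB buffer is only a few
lines. This is just a sketch of the general technique, not QC API; the
function name and the 3-bytes-per-pixel layout are my assumptions:

```c
#include <stdlib.h>

/* Nearest-neighbor resample of a tightly-packed width*height*3 RGB buffer.
   Caller owns the returned buffer and must free() it. */
static unsigned char *scale_rgb(const unsigned char *src,
                                size_t srcW, size_t srcH,
                                size_t dstW, size_t dstH)
{
    unsigned char *dst = malloc(dstW * dstH * 3);
    if (!dst) return NULL;
    for (size_t y = 0; y < dstH; y++) {
        size_t sy = y * srcH / dstH;          /* nearest source row */
        for (size_t x = 0; x < dstW; x++) {
            size_t sx = x * srcW / dstW;      /* nearest source column */
            for (int c = 0; c < 3; c++)
                dst[(y * dstW + x) * 3 + c] = src[(sy * srcW + sx) * 3 + c];
        }
    }
    return dst;
}
```

Quality is nothing special, but for an LED wall it's usually good enough, and
it lets the patch accept any source size while emitting exactly the
width/height the protocol frame declares.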
>> write into the buffer,
> Second problem: the buffer is now set up to be width * height * 3, thus has
> no alpha.
> How do I write the RGB pixel data in it without the dreaded alpha value?
At the end, you have a loop like this (pseudocode):

for y in height:
    for x in width:
        [imgData appendStuff:blah];

Just make it like this instead:

unsigned char *srcData = vDestBuffer.data;               // ARGB pixels from vImage
unsigned char *imageBuffer = malloc(width * height * 3); // tightly-packed RGB
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        imageBuffer[(y * width + x) * 3 + 0] = srcData[y * vDestBuffer.rowBytes + x * 4 + 1]; // red
        imageBuffer[(y * width + x) * 3 + 1] = srcData[y * vDestBuffer.rowBytes + x * 4 + 2]; // green
        imageBuffer[(y * width + x) * 3 + 2] = srcData[y * vDestBuffer.rowBytes + x * 4 + 3]; // blue
    }
}
NSData *imageData = [NSData dataWithBytesNoCopy:imageBuffer
                                         length:width * height * 3
                                   freeWhenDone:YES];
// imageData is autoreleased and now owns the buffer, so no retain/release
// stuff, and no free(), is necessary.

Note the destination index is (y * width + x) * 3: the x has to be scaled by
3 as well, or adjacent pixels will overwrite each other.
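One detail worth calling out: the source is indexed by rowBytes rather than
width * 4, because vImage rows can be padded for alignment. Here's a tiny
standalone C sketch of that ARGB-with-stride to packed-RGB copy, with plain
arrays and made-up values standing in for the vImage_Buffer:

```c
#include <stdlib.h>

/* Copy ARGB pixels (with a possibly padded row stride) into a
   tightly-packed width*height*3 RGB buffer, dropping the alpha byte. */
static unsigned char *argb_to_rgb(const unsigned char *src, size_t rowBytes,
                                  size_t width, size_t height)
{
    unsigned char *rgb = malloc(width * height * 3);
    if (!rgb) return NULL;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            const unsigned char *p = src + y * rowBytes + x * 4;
            rgb[(y * width + x) * 3 + 0] = p[1]; /* red   (p[0] is alpha) */
            rgb[(y * width + x) * 3 + 1] = p[2]; /* green */
            rgb[(y * width + x) * 3 + 2] = p[3]; /* blue  */
        }
    }
    return rgb;
}
```

If you index the source with width * 4 instead of rowBytes, the image will
come out sheared whenever vImage pads the rows, so it's worth getting right
even though it often works by accident on "nice" widths.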
Note that this isn't particularly well-optimized, but it should be
significantly faster than what you're currently doing. I wouldn't bother
optimizing it more than that unless it's really a problem (and then you'll have
to do vector stuff).
If you have some time (and the ability to do so -- I'm unsure of what
agreements you may have signed etc) it wouldn't hurt to post the latest project
to have us give it a once-over for correctness and simple wins.
--
Christopher Wright
[email protected]
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Quartzcomposer-dev mailing list ([email protected])