Hi, I missed some of the IRC chats on this subject, and there are some definition problems: what tiles are from the perspective of a user/artist versus what tiles are from the perspective of parallelization. Both definitions are valid, but developers and users mix up the definitions and their meaning. Sorry for that. By 'tile' I meant a part of a larger entity, to be used as Ton mentioned.
On the IRC there was an example of 'normalizing' an image. My question back: how will this work in a movie pipeline, where every frame can have different high and low values? Don't you want/need to control the min and max values? Please clarify.

If we need a normalize node, this node has two kernels. The first kernel finds the lowest and highest value of the complete image. These values are passed to the second kernel, a pixel processor that changes the colors based on the values from kernel 1 and the colors in the input image. The highest/lowest values are calculated once (not parallelized); the pixel processor is parallelized.

Jeroen

On 01/22/2011 08:50 AM, Jeroen Bakker wrote:
> On 01/21/2011 04:14 PM, Aurel W. wrote:
>> You are talking about things such as convolution with a defined kernel
>> size. There are other operations, and a compositor truly transforms an
>> image to another image, not pixels to pixels etc. If it's
>> implemented in such a naive way, the compositor will be very limited.
>> I got a very bad feeling about this....
>>
>> Ok, let's normalize an image with a tile based approach,... uh damn it....
> Aurel, don't worry about that. Tile based means that the output is part of a
> tile, but the input data can be the whole image or a part of it. On the
> technical side there will be some issues to overcome (mostly device
> memory related). Btw, when you need every image pixel as input, there are
> possibilities to use an intermediate to reduce memory needs. I did this
> already in the defocus node.
>
> Please help me to determine the cases where a whole output image is
> needed. IMO input is read-only and output is write-only. I don't see the
> need atm to support whole output images in a 'per output pixel'
> approach. And every 'per input pixel' approach can be rewritten as a 'per
> output pixel' approach. In the current nodes the two approaches are mixed.
>
> Jeroen
>
> _______________________________________________
> Bf-committers mailing list
> Bf-committers@blender.org
> http://lists.blender.org/mailman/listinfo/bf-committers

--
Kind regards,

Jeroen Bakker
*At Mind BV*
Phone: 06 50 611 262
E-mail: j.bak...@atmind.nl <mailto:j.bak...@atmind.nl>