Hi Hendrik,

You also need to look at the texture caching that goes on. Decoders bring the image into a RAM representation, and that can sometimes be used directly, so its format might matter, but we typically upload image data into textures in VRAM, and it is that format which is critical for typical performance. The format of the RAM data mainly relates to how expensive it is to upload the data to VRAM in most cases.
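
(For comparison, the app-side version of "match the format to the display" is a
one-time copy into a compatible image.  A minimal sketch using only public API;
the helper and class names are made up:)

import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Image;
import java.awt.Transparency;
import java.awt.image.BufferedImage;

class CompatibleCopy {
    // Copy a fully loaded Image once into the layout the display prefers,
    // so the conversion cost is paid once instead of on every upload.
    static BufferedImage toCompatible(Image img, int w, int h) {
        GraphicsConfiguration gc = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice()
                .getDefaultConfiguration();
        BufferedImage compat =
                gc.createCompatibleImage(w, h, Transparency.TRANSLUCENT);
        Graphics2D g = compat.createGraphics();
        g.drawImage(img, 0, 0, null);
        g.dispose();
        return compat;
    }
}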

Also, the RAM data produced by the Decoders may be used to feed ImageConsumer objects, which may be written to operate optimally on data in the "default RGB format", which is ARGB_nonPRE. If we suddenly started producing and supplying PRE data to those objects, many of them may have to punt to method calls on the ColorModel to do their work (hopefully they do check the incoming ColorModel so that they would at least notice the difference)...
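
(To make that concrete: a consumer written along the lines below, which is
purely illustrative and not taken from the JDK, only has a cheap path when the
incoming ColorModel is exactly the default one, and has to punt to per-pixel
ColorModel calls otherwise.)

import java.awt.image.ColorModel;
import java.awt.image.ImageConsumer;
import java.util.Hashtable;

class RgbCollector implements ImageConsumer {
    private int[] argb;   // non-premultiplied ARGB, one int per pixel
    private int width;

    public void setDimensions(int w, int h) { width = w; argb = new int[w * h]; }
    public void setProperties(Hashtable<?,?> props) { }
    public void setColorModel(ColorModel model) { }
    public void setHints(int hints) { }
    public void imageComplete(int status) { }

    public void setPixels(int x, int y, int w, int h, ColorModel model,
                          int[] pixels, int off, int scansize) {
        boolean fast = (model == ColorModel.getRGBdefault());
        for (int row = 0; row < h; row++) {
            for (int col = 0; col < w; col++) {
                int p = pixels[off + row * scansize + col];
                // fast path: pixel is already packed 0xAARRGGBB, non-PRE;
                // slow path: let the ColorModel convert whatever we were given
                argb[(y + row) * width + (x + col)] = fast ? p : model.getRGB(p);
            }
        }
    }

    public void setPixels(int x, int y, int w, int h, ColorModel model,
                          byte[] pixels, int off, int scansize) {
        for (int row = 0; row < h; row++) {
            for (int col = 0; col < w; col++) {
                argb[(y + row) * width + (x + col)] =
                        model.getRGB(pixels[off + row * scansize + col] & 0xFF);
            }
        }
    }
}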

                        ...jim

On 3/27/15 3:48 AM, Hendrik Schreiber wrote:

On Mar 26, 2015, at 22:52, Jim Graham <james.gra...@oracle.com> wrote:
On 3/26/15 9:21 AM, Hendrik Schreiber wrote:
Nevertheless, I wouldn't mind some feedback regarding converting ToolKitImages 
easily to something that can be drawn faster (TYPE_INT_ARGB_PRE). Don't we all 
want that?
Or asked the other way around: Why isn't TYPE_INT_ARGB_PRE the default? To be 
more flexible?

Toolkit images manage their own internal storage formats.  We shouldn't be 
requiring applications to adapt them for a display.  If we are managing the 
internal formats wrong then that is a bug to be fixed, not a reason for a new 
API or a new mechanism...

Agreed and thanks for the comment.

I dug a little deeper to get a better understanding of what's going on.

As Jim pointed out, the Toolkit uses ImageDecoders to decode formats like GIF,
PNG, JPEG, and XBM.
For this issue the only one that's really relevant is PNG, as it supports a full
alpha channel (not sure about XBM).

Looking at the PNGImageDecoder's code, it uses ColorModel.getRGBdefault() as 
its color model with isAlphaPreMultiplied() == false for RGB, RGBA and 
GrayscaleA.

If my understanding of the current drawing pipeline is correct, RGBA without
premultiplication is slow, as premultiplication is done on the fly when
drawing, at least for OS X and OpenGL, as pointed out in
https://bugs.openjdk.java.net/browse/JDK-8059943
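
(As an aside: for a BufferedImage, that per-draw cost can at least be paid once
up front via coerceData(); a tiny sketch, unrelated to toolkit images:)

import java.awt.image.BufferedImage;

class CoerceDemo {
    public static void main(String[] args) {
        // Premultiply an existing ARGB image once, in place; afterwards the
        // raster holds premultiplied samples and the ColorModel reports it.
        BufferedImage bi = new BufferedImage(16, 16, BufferedImage.TYPE_INT_ARGB);
        bi.coerceData(true);
        System.out.println(bi.isAlphaPremultiplied()); // prints "true"
    }
}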

So to me it would make sense if we changed PNGImageDecoder's code so that for
RGBA and GrayscaleA we use the premultiplied ColorModel. I believe the
necessary code changes would be small. E.g., in PNGImageDecoder.produceImage():

case COLOR|ALPHA|(8<<3):
     wPixels[col+rowOffset] =
          ((rowByteBuffer[spos  ]&0xFF)<<16)
        | ((rowByteBuffer[spos+1]&0xFF)<< 8)
        | ((rowByteBuffer[spos+2]&0xFF)    )
        | ((rowByteBuffer[spos+3]&0xFF)<<24);
     spos+=4;
     break;

would change into something like this:

case COLOR|ALPHA|(8<<3): {
     // premultiply each color component by its alpha before packing,
     // to match a ColorModel with isAlphaPreMultiplied() == true
     final int alpha = rowByteBuffer[spos+3]&0xFF;
     wPixels[col+rowOffset] =
          (((rowByteBuffer[spos  ]&0xFF)*alpha/0xFF)<<16)
        | (((rowByteBuffer[spos+1]&0xFF)*alpha/0xFF)<< 8)
        | (((rowByteBuffer[spos+2]&0xFF)*alpha/0xFF)    )
        | (alpha<<24);
     spos+=4;
     break;
}

Of course the color model would also need to be changed to a premultiplied one
(isAlphaPreMultiplied() == true).
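
For reference, the premultiplied counterpart to ColorModel.getRGBdefault()
could be constructed roughly like this (just a sketch, the class and field
names are made up):

import java.awt.color.ColorSpace;
import java.awt.image.DataBuffer;
import java.awt.image.DirectColorModel;

class PremultipliedRgb {
    // same 0xAARRGGBB layout as getRGBdefault(), but isAlphaPremultiplied() == true
    static final DirectColorModel RGB_PRE = new DirectColorModel(
            ColorSpace.getInstance(ColorSpace.CS_sRGB),
            32,
            0x00ff0000,   // red
            0x0000ff00,   // green
            0x000000ff,   // blue
            0xff000000,   // alpha
            true,         // isAlphaPremultiplied
            DataBuffer.TYPE_INT);
}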

Assuming that most folks use PNGs with transparency for their buttons and other
UI graphics, this should make drawing of those items faster. And when it comes
to MultiResolutionImages, one could use the ones produced by the Toolkit and
wouldn't have to create premultiplied versions manually (which is what sparked
my interest in this).
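
(For the record, the manual conversion today looks more or less like this,
assuming the image is already fully loaded and its size is known; the helper
name is made up:)

import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.image.BufferedImage;

class ManualPremultiply {
    // Draw the loaded image once into a premultiplied BufferedImage and use
    // that copy for all subsequent painting.
    static BufferedImage toPremultiplied(Image img, int w, int h) {
        BufferedImage pre =
                new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB_PRE);
        Graphics2D g = pre.createGraphics();
        g.drawImage(img, 0, 0, null);
        g.dispose();
        return pre;
    }
}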

What is beyond me is whether premultiplying has any disadvantages. I am aware
of none, but that does not mean a lot.

Cheers,

-hendrik
