Hello, AWT Team.

Please review the fix for the issue:
https://bugs.openjdk.java.net/browse/JDK-8027148
The fix is available at:
http://cr.openjdk.java.net/~pchelko/9/8027148/webrev/

Sorry for the long text, but the issue is quite tangled.

The problem:
The flavor map contains predefined mappings which are stored in the 
flavormap.properties file. These mappings can be extended with the 
addUnencodedNativeForFlavor method. The javadoc states that such new mappings 
have lower priority than the standard mappings, but in the current 
implementation this is not the case. The getNativesForFlavor method relied on 
the fact that standard text mappings were stored as FlavorBaseType<->Native 
while newly added mappings were stored as DataFlavor<->Native; after a fix in 
Java 8 this no longer holds, and every mapping is stored with a DataFlavor as 
the key. This matters only for text flavors, because we support different text 
charsets and can re-encode text on the fly, so each native text format can be 
represented by many different DataFlavors with different encodings and 
representation classes. When we generate the set of DataFlavors that a text 
native can be translated to, we can no longer distinguish the standard 
mappings from the additional ones, and they get shuffled when we generate the 
missing mappings for text formats.
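
For reference, a minimal snippet of the documented behaviour the fix restores 
(the "MY_TEXT" native name is made up purely for this example):

import java.awt.datatransfer.DataFlavor;
import java.awt.datatransfer.SystemFlavorMap;
import java.util.List;

public class FlavorPriorityExample {
    public static void main(String[] args) {
        SystemFlavorMap map =
            (SystemFlavorMap) SystemFlavorMap.getDefaultFlavorMap();

        // Register an extra (non-standard) native for a text flavor.
        map.addUnencodedNativeForFlavor(DataFlavor.stringFlavor, "MY_TEXT");

        // Per the javadoc, the custom native must appear after the
        // standard text natives from flavormap.properties.
        List<String> natives = map.getNativesForFlavor(DataFlavor.stringFlavor);
        System.out.println(natives);
    }
}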

The solution:
I’ve added an additional map for the standard text mappings. With this map we 
can now look up the natives for a text MIME type directly, instead of finding 
all flavors for the MIME type and then all natives for each flavor. This is 
not only faster, it also lets us distinguish the standard text mappings from 
the custom ones and return the list in the correct order. The new hash map 
contains only a few elements.
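
A rough sketch of the idea, not the actual patch (the class, field and method 
names here are hypothetical):

import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;

class TextNativeIndex {
    // base MIME type of a text flavor -> standard natives, in priority order
    private final Map<String, LinkedHashSet<String>> textTypeToNativeMap =
            new HashMap<>();

    void addStandardMapping(String baseMimeType, String nativeFormat) {
        textTypeToNativeMap
                .computeIfAbsent(baseMimeType, k -> new LinkedHashSet<>())
                .add(nativeFormat);
    }

    // Look up the standard natives for a text MIME type directly, instead of
    // expanding it into all charset/representation-class flavors first and
    // collecting the natives for each of them.
    LinkedHashSet<String> getTextNatives(String baseMimeType) {
        return textTypeToNativeMap.getOrDefault(baseMimeType,
                                                new LinkedHashSet<>());
    }
}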

I’ve also replaced the ArrayList used as the collection of natives for a 
particular flavor with a LinkedHashSet, because this collection must not 
contain duplicates, which we previously had to enforce ourselves. Now 
deduplication works out of the box and some code could be removed.
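
A tiny illustration of why LinkedHashSet fits here (insertion order, i.e. 
mapping priority, is preserved and duplicates are dropped automatically):

import java.util.LinkedHashSet;
import java.util.Set;

public class DedupExample {
    public static void main(String[] args) {
        // No manual "if (!list.contains(n)) list.add(n)" check is needed.
        Set<String> natives = new LinkedHashSet<>();
        natives.add("TEXT");
        natives.add("UTF8_STRING");
        natives.add("TEXT");          // duplicate is silently ignored
        System.out.println(natives);  // prints [TEXT, UTF8_STRING]
    }
}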

I’ve measured the performance of a couple of the hottest methods; on average 
the new implementation is 1.7 times faster.

The test is being open sourced.

I’ve tested this with the JCK and our regression tests, and everything looks 
good. I’ve also tested with a couple of hand-written test applications.

Thank you.
With best regards, Petr.
