Hi,
When re-running the benchmark [1] with different lengths of serialized
arrays of records, I found that, compared to classical classes, the
lookup into the cache of adapted method handles starts to show when the
array length is larger (i.e. when more instances of the same record type
are deserialized in a single stream). Each deserialized record must look
up its method handle in a ConcurrentHashMap:
Benchmark                                    (length)  Mode  Cnt    Score   Error  Units
RecordSerializationBench.deserializeClasses        10  avgt   10    8.088 ± 0.013  us/op
RecordSerializationBench.deserializeClasses       100  avgt   10   32.171 ± 0.324  us/op
RecordSerializationBench.deserializeClasses      1000  avgt   10  279.762 ± 3.072  us/op
RecordSerializationBench.deserializeRecords        10  avgt   10    9.011 ± 0.027  us/op
RecordSerializationBench.deserializeRecords       100  avgt   10   33.206 ± 0.514  us/op
RecordSerializationBench.deserializeRecords      1000  avgt   10  325.137 ± 0.969  us/op
So keeping the correctly shaped adapted method handle in the
per-serialization-session ObjectStreamClass instance [2] starts to make
sense:
Benchmark                                    (length)  Mode  Cnt    Score   Error  Units
RecordSerializationBench.deserializeClasses        10  avgt   10    8.681 ± 0.155  us/op
RecordSerializationBench.deserializeClasses       100  avgt   10   32.496 ± 0.087  us/op
RecordSerializationBench.deserializeClasses      1000  avgt   10  279.014 ± 1.189  us/op
RecordSerializationBench.deserializeRecords        10  avgt   10    8.537 ± 0.032  us/op
RecordSerializationBench.deserializeRecords       100  avgt   10   31.451 ± 0.083  us/op
RecordSerializationBench.deserializeRecords      1000  avgt   10  250.854 ± 2.772  us/op
With that change, a larger number of deserialized record instances becomes
an advantage over classical classes instead of a disadvantage.
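For clarity, here is a rough sketch of what I mean by keeping the handle
on the per-session descriptor. Again just an illustration, not the actual
ObjectStreamClass changes in the webrev; SessionRecordDescriptor is a
made-up name, it reuses the RecordHandleCache sketch above for the
one-time lookup, and it assumes the descriptor instance is confined to a
single deserialization stream, so a plain field is enough:

import java.lang.invoke.MethodHandle;

final class SessionRecordDescriptor {

    private final Class<?> recordClass;
    private MethodHandle adaptedCtor;   // lazily resolved, confined to one stream

    SessionRecordDescriptor(Class<?> recordClass) {
        this.recordClass = recordClass;
    }

    Object newRecordInstance(Object[] fieldValues) throws Throwable {
        MethodHandle mh = adaptedCtor;
        if (mh == null) {
            // first instance of this record type seen in the stream:
            // do the one-time lookup and remember the result
            mh = RecordHandleCache.adaptedConstructor(recordClass);
            adaptedCtor = mh;
        }
        // all subsequent instances invoke the cached handle directly
        return mh.invokeExact(fieldValues);
    }
}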
[1] http://cr.openjdk.java.net/~plevart/jdk-dev/RecordsDeserialization/RecordSerializationBench.java
[2] http://cr.openjdk.java.net/~plevart/jdk-dev/RecordsDeserialization/webrev.06/
Regards, Peter