[jira] [Comment Edited] (IGNITE-10920) Optimize HistoryAffinityAssignment heap usage.

2019-01-31 Thread Alexei Scherbakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757648#comment-16757648
 ] 

Alexei Scherbakov edited comment on IGNITE-10920 at 1/31/19 7:20 PM:
-

[~kbolyandra],

I've played a bit with a JOL benchmark and got the following results for a single
history assignment with 32 and 128 nodes in topology; the results are impressive:

 
{noformat}
Heap usage [optimized=false, parts=32768, nodeCnt=32, backups=2, footprint:
COUNT AVG SUM DESCRIPTION
33714 39 1340960 [Ljava.lang.Object;
33714 24 809136 java.util.ArrayList
2 24 48 java.util.Collections$UnmodifiableRandomAccessList
1 24 24 org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion
1 40 40 org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignment
67432 2150208 (total)

]
Heap usage [optimized=true, parts=32768, nodeCnt=32, backups=2, footprint:
COUNT AVG SUM DESCRIPTION
945 232 219280 [C
1 8208 8208 [Ljava.util.HashMap$Node;
1 144 144 [Lorg.apache.ignite.cluster.ClusterNode;
944 16 15104 java.lang.Integer
1 48 48 java.util.HashMap
944 32 30208 java.util.HashMap$Node
1 24 24 org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion
1 40 40 org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignment
1 32 32 org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignment$1
1 32 32 org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignment$2
2840 273120 (total)

]
Optimization: optimized=273120, deoptimized=2150208 rate: 7.872{noformat}
 
{noformat}
Heap usage [optimized=false, parts=32768, nodeCnt=128, backups=2, footprint:
COUNT AVG SUM DESCRIPTION
33066 39 1320224 [Ljava.lang.Object;
33066 24 793584 java.util.ArrayList
2 24 48 java.util.Collections$UnmodifiableRandomAccessList
1 24 24 org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion
1 40 40 org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignment
66136 2113920 (total)

]
Heap usage [optimized=true, parts=32768, nodeCnt=128, backups=2, footprint:
COUNT AVG SUM DESCRIPTION
297 685 203728 [C
1 2064 2064 [Ljava.util.HashMap$Node;
1 528 528 [Lorg.apache.ignite.cluster.ClusterNode;
296 16 4736 java.lang.Integer
1 48 48 java.util.HashMap
296 32 9472 java.util.HashMap$Node
1 24 24 org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion
1 40 40 org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignment
1 32 32 org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignment$1
1 32 32 org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignment$2
896 220704 (total)

Optimization: optimized=220704, deoptimized=2113920 rate: 9.578{noformat}
No objections from my side.

I think one of the committers will pass by shortly and finish your
contribution.



[jira] [Comment Edited] (IGNITE-10920) Optimize HistoryAffinityAssignment heap usage.

2019-01-28 Thread Konstantin Bolyandra (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754348#comment-16754348
 ] 

Konstantin Bolyandra edited comment on IGNITE-10920 at 1/28/19 8:59 PM:


[~ascherbakov], thanks for the review.
 # Done.
 # Studying the JOL benchmark revealed that the proposed optimization was almost useless,
because the main heap consumer was object overhead from the many
ArrayList instances. I removed those instances by storing each partition's
primary-backup node order in a char[] array, since the values fit in 2 bytes. It also
seems beneficial because of the strong memory locality. The optimization has diminishing
returns with many backups, so I disabled it for replicated caches. Please
check the PR for details. A TC run is in progress. Below are JOL affinity cache heap size
calculation results for a configuration with 32768 partitions and 32 nodes added
to the topology; they show almost a 4x heap size reduction:

 
{noformat}
Heap usage [optimized=false, parts=32768, nodeCnt=32, backups=2, footprint:
COUNT AVG SUM DESCRIPTION
1115802 42 47500664 [Ljava.lang.Object;
1 16 16 java.lang.Object
1115802 24 26779248 java.util.ArrayList
93 24 2232 java.util.Collections$UnmodifiableRandomAccessList
1 48 48 java.util.concurrent.ConcurrentSkipListMap
3 32 96 java.util.concurrent.ConcurrentSkipListMap$HeadIndex
24 24 576 java.util.concurrent.ConcurrentSkipListMap$Index
63 24 1512 java.util.concurrent.ConcurrentSkipListMap$Node
62 24 1488 org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion
62 40 2480 org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignment
2231913 74288360 (total)

Heap usage [optimized=true, parts=32768, nodeCnt=32, backups=2, footprint:
COUNT AVG SUM DESCRIPTION
99963 144 14457240 [C
31 24327 754160 [Ljava.util.HashMap$Node;
62 85 5328 [Lorg.apache.ignite.cluster.ClusterNode;
99664 16 1594624 java.lang.Integer
1 16 16 java.lang.Object
62 48 2976 java.util.HashMap
99901 32 3196832 java.util.HashMap$Node
1 48 48 java.util.concurrent.ConcurrentSkipListMap
4 32 128 java.util.concurrent.ConcurrentSkipListMap$HeadIndex
32 24 768 java.util.concurrent.ConcurrentSkipListMap$Index
63 24 1512 java.util.concurrent.ConcurrentSkipListMap$Node
62 24 1488 org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion
62 40 2480 org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignment
62 32 1984 org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignment$1
31 32 992 org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignment$2
31 20020576 (total)

Optimization: optimized=20020576, deoptimized=74288360 rate: 3.71

{noformat}
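To illustrate the char[] packing idea described above, here is a minimal, hypothetical sketch (class and method names are mine, not the actual PR code): each partition's primary-backup order is flattened into one char[] of node indexes, 2 bytes per entry, so the per-partition ArrayList plus its backing Object[] disappear and per-partition order is rebuilt on demand.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the memory layout, not the actual Ignite code.
// One char[] replaces one ArrayList + Object[] pair per partition.
public class PackedAssignment {
    private final char[] data; // node indexes, 2 bytes each
    private final int copies;  // primary + backups per partition

    public PackedAssignment(List<List<Integer>> assignment) {
        this.copies = assignment.get(0).size();
        this.data = new char[assignment.size() * copies];

        // Flatten: entry for (partition p, copy i) lives at p * copies + i.
        for (int p = 0; p < assignment.size(); p++)
            for (int i = 0; i < copies; i++)
                data[p * copies + i] = (char)(int)assignment.get(p).get(i);
    }

    /** Reconstructs a partition's primary-backup node order on demand. */
    public List<Integer> get(int part) {
        List<Integer> res = new ArrayList<>(copies);

        for (int i = 0; i < copies; i++)
            res.add((int)data[part * copies + i]);

        return res;
    }
}
```

The on-demand reconstruction trades a small amount of CPU for the heap savings shown in the JOL dumps above, which is why it pays off less with many backups per partition.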

