[jira] [Comment Edited] (IGNITE-10877) GridAffinityAssignment.initPrimaryBackupMaps memory pressure
[ https://issues.apache.org/jira/browse/IGNITE-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16747703#comment-16747703 ] Pavel Voronkin edited comment on IGNITE-10877 at 1/21/19 6:59 AM:
--
I don't think it breaks compatibility, because we have an Ignite property to roll back to the original behaviour for mixed environments. Moreover, GridAffinityAssignment serialization is broken right now. See IGNITE-10925; we need to fix all issues there.

was (Author: voropava):
I don't think it breaks compatibility, because we have an Ignite property to roll back to the original behaviour for mixed environments. Moreover, GridAffinityAssignment serialization is broken right now. See IGNITE-10925; we need to fix the issue there.

> GridAffinityAssignment.initPrimaryBackupMaps memory pressure
>
> Key: IGNITE-10877
> URL: https://issues.apache.org/jira/browse/IGNITE-10877
> Project: Ignite
> Issue Type: Improvement
> Reporter: Pavel Voronkin
> Assignee: Pavel Voronkin
> Priority: Major
> Fix For: 2.8
>
> Attachments: grid.srv.node.1.0-29.12.2018-12.50.15.jfr,
> image-2019-01-17-12-58-07-382.png, image-2019-01-17-12-59-52-137.png,
> image-2019-01-17-15-45-49-561.png, image-2019-01-17-15-45-53-043.png,
> image-2019-01-17-15-46-32-872.png, image-2019-01-18-11-36-57-451.png,
> image-2019-01-18-11-38-39-410.png, image-2019-01-18-11-55-39-496.png,
> image-2019-01-18-11-56-10-339.png, image-2019-01-18-11-56-18-040.png,
> image-2019-01-18-12-09-04-835.png, image-2019-01-18-12-09-32-876.png
>
> Time Spent: 1h 40m
> Remaining Estimate: 0h
>
> 1) While running tests with JFR we observe huge memory allocation pressure
> produced by:
>
> Stack Trace                                                        TLABs   Total TLAB Size (bytes)   Pressure (%)
> java.util.HashMap.newNode(int, Object, Object, HashMap$Node)       481     298 044 784               100
> java.util.HashMap.putVal(int, Object, Object, boolean, boolean)    481     298 044 784               100
> java.util.HashMap.put(Object, Object)                              481     298 044 784               100
> java.util.HashSet.add(Object)                                      480     297 221 040               99,724
> org.apache.ignite.internal.processors.affinity.GridAffinityAssignment.initPrimaryBackupMaps()   1   823 744   0,276
> org.apache.ignite.internal.processors.affinity.GridAffinityAssignment.<init>(AffinityTopologyVersion, List, List)   1   823 744   0,276
>
> Allocation stats
> Class                      Average Object Size (bytes)   Total Object Size (bytes)   TLABs   Average TLAB Size (bytes)   Total TLAB Size (bytes)   Pressure (%)
> java.util.HashMap$Node     32                            15 392                      481     619 635,726                 298 044 784               32,876
> java.lang.Object[]         1 470,115                     461 616                     314     655 019,236                 205 676 040               22,687
> java.util.HashMap$Node[]   41 268,617                    6 149 024                   149     690 046,067                 102 816 864               11,341
> java.lang.Integer          16                            1 456                       91      662 911,385                 60 324 936                6,654
> java.util.ArrayList        24                            1 608                       67      703 389,97                  47 127 128                5,198
>
> 2) Another hot spot was also found:
>
> Stack Trace                                        TLABs   Total TLAB Size (bytes)   Pressure (%)
> java.util.ArrayList.grow(int)                      7       5 766 448                 9,554
> java.util.ArrayList.ensureExplicitCapacity(int)    7       5 766 448                 9,554
> java.util.ArrayList.ensureCapacityInternal(int)    7       5 766 448                 9,554
> java.util.ArrayList.add(Object)                    7       5 766 448                 9,554
> org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopologyImpl.nodes(int, AffinityTopologyVersion, GridDhtPartitionState, GridDhtPartitionState[])   7   5 766 448   9,554
>
> The reason of that is defail
>
> I think we need to improve memory efficiency by switching from Sets to
> BitSets.
>
> JFR attached, see Allocations in 12:50:28 - 12:50:29

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
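The proposed switch from Sets to BitSets can be sketched as follows. This is a minimal illustration of the allocation argument, not the actual Ignite patch; the class and method names here are hypothetical:

```java
import java.util.BitSet;
import java.util.HashSet;
import java.util.Set;

/**
 * Sketch: storing the owning nodes of a partition as a BitSet of node
 * indices instead of a HashSet. Each HashSet.add() allocates a
 * HashMap$Node (~32 bytes) plus a boxed Integer, matching the JFR
 * profile above, while BitSet.set() only flips a bit in a long[].
 */
public class PartitionNodesSketch {
    /** HashSet-based membership: per-element allocations. */
    static Set<Integer> hashSetOwners(int[] nodeIdxs) {
        Set<Integer> owners = new HashSet<>();
        for (int idx : nodeIdxs)
            owners.add(idx); // allocates HashMap$Node + boxed Integer
        return owners;
    }

    /** BitSet-based membership: one long word covers 64 node indices. */
    static BitSet bitSetOwners(int[] nodeIdxs) {
        BitSet owners = new BitSet();
        for (int idx : nodeIdxs)
            owners.set(idx); // no per-element allocation
        return owners;
    }

    public static void main(String[] args) {
        int[] idxs = {0, 3, 7};
        System.out.println(hashSetOwners(idxs).contains(3)); // true
        System.out.println(bitSetOwners(idxs).get(7));       // true
        System.out.println(bitSetOwners(idxs).get(5));       // false
    }
}
```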
[jira] [Comment Edited] (IGNITE-10877) GridAffinityAssignment.initPrimaryBackupMaps memory pressure
[ https://issues.apache.org/jira/browse/IGNITE-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16746083#comment-16746083 ] Pavel Voronkin edited comment on IGNITE-10877 at 1/18/19 9:34 AM:
--
16 partitions, 16 nodes

HashSet !image-2019-01-18-12-09-32-876.png!

BitSet !image-2019-01-18-12-09-04-835.png!

In total we have (N - number of nodes, P - number of partitions):
low P, low N - BitSet better
high P, low N - BitSet better
low P, high N - BitSet slightly better
high P, high N - HashSet is better

At more than 500 nodes we need a compacted BitSet.

was (Author: voropava):
16 partitions, 16 nodes

HashSet !image-2019-01-18-12-09-32-876.png!

BitSet !image-2019-01-18-12-09-04-835.png!

In total we have (N - number of nodes, P - number of partitions):
low P, low N - BitSet better
high P, low N - BitSet better
low P, high N - BitSet slightly better
high P, high N - HashSet is better

I suggest a threshold of 500.
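The regime comparison above suggests picking the representation from the cluster size. A hedged sketch of that idea, with the 500-node threshold taken from the comment and all names hypothetical (the real patch may instead use a compacted BitSet for large clusters):

```java
import java.util.BitSet;
import java.util.HashSet;
import java.util.Set;

/**
 * Sketch: up to ~500 nodes a BitSet of node indices wins on memory;
 * past that (high P, high N) fall back to a HashSet, per the
 * regime table in the comment above. Not the Ignite implementation.
 */
public class NodeMembership {
    static final int NODES_THRESHOLD = 500;

    private final BitSet bits;      // used when the cluster is small
    private final Set<Integer> set; // fallback for large clusters

    NodeMembership(int totalNodes, int[] ownerIdxs) {
        if (totalNodes <= NODES_THRESHOLD) {
            bits = new BitSet(totalNodes);
            set = null;
            for (int idx : ownerIdxs)
                bits.set(idx);      // no per-element allocation
        }
        else {
            bits = null;
            set = new HashSet<>();
            for (int idx : ownerIdxs)
                set.add(idx);
        }
    }

    boolean contains(int nodeIdx) {
        return bits != null ? bits.get(nodeIdx) : set.contains(nodeIdx);
    }

    public static void main(String[] args) {
        NodeMembership small = new NodeMembership(16, new int[] {1, 5});
        NodeMembership large = new NodeMembership(1000, new int[] {1, 5});
        System.out.println(small.contains(5)); // true
        System.out.println(large.contains(5)); // true
        System.out.println(large.contains(2)); // false
    }
}
```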
[jira] [Comment Edited] (IGNITE-10877) GridAffinityAssignment.initPrimaryBackupMaps memory pressure
[ https://issues.apache.org/jira/browse/IGNITE-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16745009#comment-16745009 ] Pavel Voronkin edited comment on IGNITE-10877 at 1/17/19 12:47 PM:
---
65k partitions, 160 nodes, 3 backups

!image-2019-01-17-15-45-53-043.png!
!image-2019-01-17-15-46-32-872.png!

was (Author: voropava):
65k partitions, 160 nodes

!image-2019-01-17-15-45-53-043.png!
!image-2019-01-17-15-46-32-872.png!
[jira] [Comment Edited] (IGNITE-10877) GridAffinityAssignment.initPrimaryBackupMaps memory pressure
[ https://issues.apache.org/jira/browse/IGNITE-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16743205#comment-16743205 ] Pavel Voronkin edited comment on IGNITE-10877 at 1/15/19 4:41 PM:
--
Benchmark                                                         Mode    Cnt   Score          Error          Units
SmallHashSetsVsReadOnlyViewBenchmark.hashSetContainsRandom        thrpt   20    25690717,349   ± 200741,979   ops/s
SmallHashSetsVsReadOnlyViewBenchmark.hashSetIteratorRandom        thrpt   20    12836581,770   ± 248020,906   ops/s
SmallHashSetsVsReadOnlyViewBenchmark.readOnlyViewContainsRandom   thrpt   20    22278517,368   ± 339376,502   ops/s
SmallHashSetsVsReadOnlyViewBenchmark.readOnlyViewIteratorRandom   thrpt   20    19959598,363   ± 709696,316   ops/s

We see a ~1.6x improvement in iteration and a slight reduction in contains(), in addition to reduced allocations.

was (Author: voropava):
Benchmark                                                         Mode    Cnt   Score          Error           Units
SmallHashSetsVsReadOnlyViewBenchmark.hashSetContainsRandom        thrpt   20    26221395,193   ± 240929,392    ops/s
SmallHashSetsVsReadOnlyViewBenchmark.hashSetIteratorRandom        thrpt   20    12626598,194   ± 1742223,886   ops/s
SmallHashSetsVsReadOnlyViewBenchmark.readOnlyViewContainsRandom   thrpt   20    23301229,681   ± 534549,170    ops/s
SmallHashSetsVsReadOnlyViewBenchmark.readOnlyViewIteratorRandom   thrpt   20    21134614,093   ± 666488,488    ops/s

We see a ~1.7x improvement in iteration and a slight reduction in contains(), in addition to reduced allocations.
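The readOnlyView* benchmarks compare a plain HashSet against a read-only Set view over the underlying bits. A minimal sketch of what such a view could look like, backed by a BitSet; this is an illustration under assumed semantics, not the class that was actually benchmarked:

```java
import java.util.AbstractSet;
import java.util.BitSet;
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.Set;

/** Sketch: an unmodifiable Set<Integer> view over a BitSet of indices. */
public class ReadOnlyViewSketch {
    static Set<Integer> asSet(final BitSet bits) {
        return new AbstractSet<Integer>() {
            @Override public int size() {
                return bits.cardinality();
            }

            @Override public boolean contains(Object o) {
                return o instanceof Integer && bits.get((Integer)o);
            }

            /** Walks set bits directly - no HashMap$Node chasing. */
            @Override public Iterator<Integer> iterator() {
                return new Iterator<Integer>() {
                    private int next = bits.nextSetBit(0);

                    @Override public boolean hasNext() {
                        return next >= 0;
                    }

                    @Override public Integer next() {
                        if (next < 0)
                            throw new NoSuchElementException();
                        int cur = next;
                        next = bits.nextSetBit(cur + 1);
                        return cur;
                    }
                };
            }
        };
    }

    public static void main(String[] args) {
        BitSet bits = new BitSet();
        bits.set(2);
        bits.set(64);
        Set<Integer> view = asSet(bits);
        System.out.println(view.size());       // 2
        System.out.println(view.contains(64)); // true
        System.out.println(view);              // [2, 64]
    }
}
```

Iteration via nextSetBit() touches contiguous long words, which is consistent with the iterator speedup measured above.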