[jira] [Updated] (IGNITE-8299) Optimize allocations and CPU consumption in active page replacement scenario

2020-12-29 Thread Maxim Muzafarov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov updated IGNITE-8299:

Fix Version/s: (was: 2.10)

> Optimize allocations and CPU consumption in active page replacement scenario
> 
>
> Key: IGNITE-8299
> URL: https://issues.apache.org/jira/browse/IGNITE-8299
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Ivan Rakov
>Priority: Major
> Attachments: loader-2018-04-17T12-12-21.jfr, 
> loader-2018-04-17T12-12-21.jfr, loader-2018-04-17T15-10-52.jfr, 
> loader-2018-04-17T15-10-52.jfr
>
>
> Ignite performance decreases significantly when the total size of local data is
> much greater than the amount of available RAM. Part of this is explained by the
> change in disk access pattern (mixed random reads and random writes are hard even
> for SSDs), but analysis of the persistence code and of the attached JFRs shows
> there is still room for optimization.
> The following possible optimizations should be investigated (illustrative sketches
> of each idea follow below):
> 1) PageMemoryImpl.Segment#partGeneration allocates a GroupPartitionId for every
> HashMap.get call - this allocation can be avoided.
> 2) LoadedPagesMap#getNearestAt is invoked at least 5 times in
> PageMemoryImpl.Segment#removePageForReplacement, and each call performs two
> allocations - these can be avoided as well.
> 3) If one of the 5 eviction candidates turns out to be unusable, we currently find
> 5 new ones - we could reuse the remaining 4 instead.
> JFRs that highlight excessive CPU usage in the page replacement code are attached.
> See the 1st and 3rd entries in the "Hot Methods" section:
> Stack Trace                                    | Sample Count | Percentage (%)
> PageMemoryImpl.acquirePage(int, long, boolean) | 4 963        | 19,73
> scala.Some.equals(Object)                      | 4 932        | 19,606
> java.util.HashMap.getNode(int, Object)         | 3 236        | 12,864
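The first idea can be illustrated with a small sketch. This is not the actual Ignite code: PartGenerationMap and everything inside it are hypothetical. The point is only that if the per-partition generation table is keyed by a primitive long packing (cacheGroupId, partitionId), a lookup like Segment#partGeneration no longer has to allocate a GroupPartitionId wrapper for every HashMap.get:

{code:java}
import java.util.Arrays;

/**
 * Toy open-addressed long -> int table. The key packs (cacheGroupId, partitionId),
 * so reading a partition generation allocates no objects at all.
 * Fixed capacity, linear probing, no resize - illustration only.
 */
public class PartGenerationMap {
    /** Sentinel for unused slots (assumes packed keys never take this value). */
    private static final long EMPTY = Long.MIN_VALUE;

    private final long[] keys;
    private final int[] gens;
    private final int mask;

    public PartGenerationMap(int capacityPowerOfTwo) {
        keys = new long[capacityPowerOfTwo];
        gens = new int[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;

        Arrays.fill(keys, EMPTY);
    }

    /** Packs cache group id and partition id into one primitive key. */
    public static long key(int grpId, int partId) {
        return ((long)grpId << 32) | (partId & 0xFFFF_FFFFL);
    }

    /** Allocation-free lookup; returns a default generation (assumed 1 here) if absent. */
    public int generation(int grpId, int partId) {
        long k = key(grpId, partId);

        for (int i = slot(k); ; i = (i + 1) & mask) {
            if (keys[i] == k)
                return gens[i];

            if (keys[i] == EMPTY)
                return 1;
        }
    }

    /** Stores or updates a generation (the toy table must never be completely full). */
    public void put(int grpId, int partId, int gen) {
        long k = key(grpId, partId);

        for (int i = slot(k); ; i = (i + 1) & mask) {
            if (keys[i] == EMPTY || keys[i] == k) {
                keys[i] = k;
                gens[i] = gen;

                return;
            }
        }
    }

    /** Cheap mixing so packed keys spread evenly over the table. */
    private int slot(long k) {
        k ^= k >>> 33;
        k *= 0xff51afd7ed558ccdL;

        return (int)(k ^ (k >>> 33)) & mask;
    }
}
{code}

A production variant would need resizing, removal and a safer empty-slot sentinel; the sketch only shows that the read path can be made allocation-free.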
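For the second idea, one allocation-free shape (again a sketch with invented names, not the real LoadedPagesMap API) is to let the caller pass a mutable holder that the nearest-entry probe fills in place, so scanning several replacement candidates does not allocate per call:

{code:java}
/** Mutable result holder reused across candidate probes (fields are hypothetical). */
final class ReplacementCandidate {
    long fullPageId;
    long timestamp;

    void fill(long fullPageId, long timestamp) {
        this.fullPageId = fullPageId;
        this.timestamp = timestamp;
    }
}

/** Sketch of a lookup that writes into the caller's holder instead of returning new objects. */
interface LoadedPages {
    /** Fills {@code out} with the entry nearest to {@code idx}; returns false if none exists. */
    boolean nearestAt(int idx, ReplacementCandidate out);
}

final class CandidateScan {
    /** Probes the given positions while reusing a single holder object. */
    static long leastRecentlyUsedPage(LoadedPages pages, int[] positions) {
        ReplacementCandidate c = new ReplacementCandidate(); // the only allocation in the scan

        long bestPage = -1;
        long bestTs = Long.MAX_VALUE;

        for (int pos : positions) {
            if (!pages.nearestAt(pos, c))
                continue;

            if (c.timestamp < bestTs) {
                bestTs = c.timestamp;
                bestPage = c.fullPageId;
            }
        }

        return bestPage;
    }
}
{code}

Whether this shape fits the real LoadedPagesMap depends on its concurrency constraints; the sketch only marks where the two allocations per getNearestAt call would disappear.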
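The third idea, refilling only the rejected candidate slot instead of sampling five fresh candidates, might look roughly like this (PageSampler and its methods are invented for the sketch):

{code:java}
/** Sketch of candidate reuse during page replacement; all names are hypothetical. */
final class VictimSelection {
    interface PageSampler {
        long samplePage();                  // picks a random loaded page id
        long timestamp(long pageId);        // last-access time used to rank candidates
        boolean canBeReplaced(long pageId); // false for pinned, dirty or otherwise unusable pages
    }

    /** Returns a replaceable page id, or -1 if none was found within the attempt budget. */
    static long chooseVictim(PageSampler sampler, int maxAttempts) {
        long[] cand = new long[5];

        for (int i = 0; i < cand.length; i++)
            cand[i] = sampler.samplePage();

        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            // Pick the least recently used of the current five candidates.
            int best = 0;

            for (int i = 1; i < cand.length; i++) {
                if (sampler.timestamp(cand[i]) < sampler.timestamp(cand[best]))
                    best = i;
            }

            if (sampler.canBeReplaced(cand[best]))
                return cand[best];

            // Only the rejected slot is re-sampled; the other four candidates are kept.
            cand[best] = sampler.samplePage();
        }

        return -1;
    }
}
{code}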



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-8299) Optimize allocations and CPU consumption in active page replacement scenario

2020-06-26 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-8299:
--
Fix Version/s: (was: 2.9)
   2.10

> Optimize allocations and CPU consumption in active page replacement scenario
> 
>
> Key: IGNITE-8299
> URL: https://issues.apache.org/jira/browse/IGNITE-8299
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Ivan Rakov
>Priority: Major
> Fix For: 2.10
>
> Attachments: loader-2018-04-17T12-12-21.jfr, 
> loader-2018-04-17T12-12-21.jfr, loader-2018-04-17T15-10-52.jfr, 
> loader-2018-04-17T15-10-52.jfr
>
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-8299) Optimize allocations and CPU consumption in active page replacement scenario

2019-10-03 Thread Maxim Muzafarov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov updated IGNITE-8299:

Fix Version/s: (was: 2.8)
   2.9

> Optimize allocations and CPU consumption in active page replacement scenario
> 
>
> Key: IGNITE-8299
> URL: https://issues.apache.org/jira/browse/IGNITE-8299
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Ivan Rakov
>Priority: Major
> Fix For: 2.9
>
> Attachments: loader-2018-04-17T12-12-21.jfr, 
> loader-2018-04-17T12-12-21.jfr, loader-2018-04-17T15-10-52.jfr, 
> loader-2018-04-17T15-10-52.jfr
>
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-8299) Optimize allocations and CPU consumption in active page replacement scenario

2018-08-29 Thread Ivan Rakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8299:
---
Fix Version/s: (was: 2.7)
   2.8

> Optimize allocations and CPU consumption in active page replacement scenario
> 
>
> Key: IGNITE-8299
> URL: https://issues.apache.org/jira/browse/IGNITE-8299
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Ivan Rakov
>Priority: Major
> Fix For: 2.8
>
> Attachments: loader-2018-04-17T12-12-21.jfr, 
> loader-2018-04-17T12-12-21.jfr, loader-2018-04-17T15-10-52.jfr, 
> loader-2018-04-17T15-10-52.jfr
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8299) Optimize allocations and CPU consumption in active page replacement scenario

2018-06-26 Thread Dmitriy Pavlov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-8299:
---
Fix Version/s: (was: 2.6)
   2.7

> Optimize allocations and CPU consumption in active page replacement scenario
> 
>
> Key: IGNITE-8299
> URL: https://issues.apache.org/jira/browse/IGNITE-8299
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Ivan Rakov
>Priority: Major
> Fix For: 2.7
>
> Attachments: loader-2018-04-17T12-12-21.jfr, 
> loader-2018-04-17T12-12-21.jfr, loader-2018-04-17T15-10-52.jfr, 
> loader-2018-04-17T15-10-52.jfr
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8299) Optimize allocations and CPU consumption in active page replacement scenario

2018-04-17 Thread Ivan Rakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8299:
---
Component/s: persistence

> Optimize allocations and CPU consumption in active page replacement scenario
> 
>
> Key: IGNITE-8299
> URL: https://issues.apache.org/jira/browse/IGNITE-8299
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Major
> Fix For: 2.6
>
> Attachments: loader-2018-04-17T12-12-21.jfr, 
> loader-2018-04-17T12-12-21.jfr, loader-2018-04-17T15-10-52.jfr, 
> loader-2018-04-17T15-10-52.jfr
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8299) Optimize allocations and CPU consumption in active page replacement scenario

2018-04-17 Thread Ivan Rakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8299:
---
Fix Version/s: 2.6

> Optimize allocations and CPU consumption in active page replacement scenario
> 
>
> Key: IGNITE-8299
> URL: https://issues.apache.org/jira/browse/IGNITE-8299
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Major
> Fix For: 2.6
>
> Attachments: loader-2018-04-17T12-12-21.jfr, 
> loader-2018-04-17T12-12-21.jfr, loader-2018-04-17T15-10-52.jfr, 
> loader-2018-04-17T15-10-52.jfr
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8299) Optimize allocations and CPU consumption in active page replacement scenario

2018-04-17 Thread Ivan Rakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8299:
---
Description: 
Ignite performance significantly decreases when total size of local data is 
much greater than size of RAM. It can be explained by change of disk access 
pattern (random reads + random writes is complex even for SSDs), but after 
analysis of persistence code and JFRs it's clear that there's still room for 
optimization.
The following possible optimizations should be investigated:
1) PageMemoryImpl.Segment#partGeneration performs allocation of 
GroupPartitionId during HashMap.get - we can get rid of it.
2) LoadedPagesMap#getNearestAt is invoked at least 5 times in 
PageMemoryImpl.Segment#removePageForReplacement. It performs two allocations - 
we can get rid of it.
3) If one of 5 evict candidates was erroneous, we'll find 5 new ones - we can 
reuse remaining 4 instead.

JFRs that highlights excessive CPU usage by page replacement code is attached. 
See 1st and 3rd positions in "Hot Methods" section:
Stack Trace                                    | Sample Count | Percentage (%)
PageMemoryImpl.acquirePage(int, long, boolean) | 4 963        | 19,73
scala.Some.equals(Object)                      | 4 932        | 19,606
java.util.HashMap.getNode(int, Object)         | 3 236        | 12,864


  was:
Ignite performance significantly decreases when total size of local data is 
much greater than size of RAM. It can be explained by change of disk access 
pattern (random reads + random writes is complex even for SSDs), but after 
analysis of persistence code and JFRs it's clear that there's still room for 
optimization.
The following possible optimizations should be investigated:
1) PageMemoryImpl.Segment#partGeneration performs allocation of 
GroupPartitionId during HashMap.get - we can get rid of it.
2) LoadedPagesMap#getNearestAt is invoked at least 5 times in 
PageMemoryImpl.Segment#removePageForReplacement. It performs two allocations - 
we can get rid of it.
3) If one of 5 evict candidates was erroneous, we'll find 5 new ones - we can 
reuse remaining 4 instead.

JFR that highlights excessive CPU usage by page replacement code is attached. 
See 1st and 3rd positions in "Hot Methods" section:
Stack Trace                                    | Sample Count | Percentage (%)
PageMemoryImpl.acquirePage(int, long, boolean) | 4 963        | 19,73
scala.Some.equals(Object)                      | 4 932        | 19,606
java.util.HashMap.getNode(int, Object)         | 3 236        | 12,864



> Optimize allocations and CPU consumption in active page replacement scenario
> 
>
> Key: IGNITE-8299
> URL: https://issues.apache.org/jira/browse/IGNITE-8299
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Major
> Attachments: loader-2018-04-17T12-12-21.jfr, 
> loader-2018-04-17T12-12-21.jfr, loader-2018-04-17T15-10-52.jfr, 
> loader-2018-04-17T15-10-52.jfr
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8299) Optimize allocations and CPU consumption in active page replacement scenario

2018-04-17 Thread Ivan Rakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8299:
---
Attachment: loader-2018-04-17T15-10-52.jfr
loader-2018-04-17T12-12-21.jfr

> Optimize allocations and CPU consumption in active page replacement scenario
> 
>
> Key: IGNITE-8299
> URL: https://issues.apache.org/jira/browse/IGNITE-8299
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Major
> Attachments: loader-2018-04-17T12-12-21.jfr, 
> loader-2018-04-17T12-12-21.jfr, loader-2018-04-17T15-10-52.jfr, 
> loader-2018-04-17T15-10-52.jfr
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8299) Optimize allocations and CPU consumption in active page replacement scenario

2018-04-17 Thread Ivan Rakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8299:
---
Description: 
Ignite performance significantly decreases when total size of local data is 
much greater than size of RAM. It can be explained by change of disk access 
pattern (random reads + random writes is complex even for SSDs), but after 
analysis of persistence code and JFRs it's clear that there's still room for 
optimization.
The following possible optimizations should be investigated:
1) PageMemoryImpl.Segment#partGeneration performs allocation of 
GroupPartitionId during HashMap.get - we can get rid of it.
2) LoadedPagesMap#getNearestAt is invoked at least 5 times in 
PageMemoryImpl.Segment#removePageForReplacement. It performs two allocations - 
we can get rid of it.
3) If one of 5 evict candidates was erroneous, we'll find 5 new ones - we can 
reuse remaining 4 instead.

JFRs that highlight excessive CPU usage by page replacement code is attached. 
See 1st and 3rd positions in "Hot Methods" section:
Stack Trace                                    | Sample Count | Percentage (%)
PageMemoryImpl.acquirePage(int, long, boolean) | 4 963        | 19,73
scala.Some.equals(Object)                      | 4 932        | 19,606
java.util.HashMap.getNode(int, Object)         | 3 236        | 12,864


  was:
Ignite performance significantly decreases when total size of local data is 
much greater than size of RAM. It can be explained by change of disk access 
pattern (random reads + random writes is complex even for SSDs), but after 
analysis of persistence code and JFRs it's clear that there's still room for 
optimization.
The following possible optimizations should be investigated:
1) PageMemoryImpl.Segment#partGeneration performs allocation of 
GroupPartitionId during HashMap.get - we can get rid of it.
2) LoadedPagesMap#getNearestAt is invoked at least 5 times in 
PageMemoryImpl.Segment#removePageForReplacement. It performs two allocations - 
we can get rid of it.
3) If one of 5 evict candidates was erroneous, we'll find 5 new ones - we can 
reuse remaining 4 instead.

JFRs that highlights excessive CPU usage by page replacement code is attached. 
See 1st and 3rd positions in "Hot Methods" section:
Stack Trace                                    | Sample Count | Percentage (%)
PageMemoryImpl.acquirePage(int, long, boolean) | 4 963        | 19,73
scala.Some.equals(Object)                      | 4 932        | 19,606
java.util.HashMap.getNode(int, Object)         | 3 236        | 12,864



> Optimize allocations and CPU consumption in active page replacement scenario
> 
>
> Key: IGNITE-8299
> URL: https://issues.apache.org/jira/browse/IGNITE-8299
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Major
> Attachments: loader-2018-04-17T12-12-21.jfr, 
> loader-2018-04-17T12-12-21.jfr, loader-2018-04-17T15-10-52.jfr, 
> loader-2018-04-17T15-10-52.jfr
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8299) Optimize allocations and CPU consumption in active page replacement scenario

2018-04-17 Thread Ivan Rakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8299:
---
Attachment: loader-2018-04-17T12-12-21.jfr
loader-2018-04-17T15-10-52.jfr

> Optimize allocations and CPU consumption in active page replacement scenario
> 
>
> Key: IGNITE-8299
> URL: https://issues.apache.org/jira/browse/IGNITE-8299
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Major
> Attachments: loader-2018-04-17T12-12-21.jfr, 
> loader-2018-04-17T12-12-21.jfr, loader-2018-04-17T15-10-52.jfr, 
> loader-2018-04-17T15-10-52.jfr
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8299) Optimize allocations and CPU consumption in active page replacement scenario

2018-04-17 Thread Ivan Rakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8299:
---
Description: 
Ignite performance significantly decreases when total size of local data is 
much greater than size of RAM. It can be explained by change of disk access 
pattern (random reads + random writes is complex even for SSDs), but after 
analysis of persistence code and JFRs it's clear that there's still room for 
optimization.
The following possible optimizations should be investigated:
1) PageMemoryImpl.Segment#partGeneration performs allocation of 
GroupPartitionId during HashMap.get - we can get rid of it
2) LoadedPagesMap#getNearestAt is invoked at least 5 times in 
PageMemoryImpl.Segment#removePageForReplacement. It performs two allocations - 
we can get rid of it
3) If one of 5 evict candidates was erroneous, we'll find 5 new ones - we can 
reuse remaining 4 instead
JFR that highlights excessive CPU usage by page replacement code is attached. 
See 1st and 3rd positions in "Hot Methods" section:
Stack Trace                                    | Sample Count | Percentage (%)
PageMemoryImpl.acquirePage(int, long, boolean) | 4 963        | 19,73
scala.Some.equals(Object)                      | 4 932        | 19,606
java.util.HashMap.getNode(int, Object)         | 3 236        | 12,864


  was:
Ignite performance significantly decreases when total size of local data is 
much greater than size of RAM. It can be explained by change of disk access 
pattern (random reads + random writes is complex even for SSDs), but after 
analysis of persistence code and JFRs it's clear that there's still room for 
optimization.
The following possible optimizations should be investigated:
1) PageMemoryImpl.Segment#partGeneration performs allocation of 
GroupPartitionId during HashMap.get - we can get rid of it
2) LoadedPagesMap#getNearestAt is invoked at least 5 times in 
PageMemoryImpl.Segment#removePageForReplacement. It performs two allocations - 
we can get rid of it
3) If one of 5 evict candidates was erroneous, we'll find 5 new ones - we can 
reuse remaining 4 instead
JFR that highlights excessive CPU usage by page replacement code is attached. 
See 1st and 3rd positions in "Hot Methods" section:
Stack Trace | Sample Count | Percentage (%)
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(int, long, boolean) | 4 963 | 19,73
scala.Some.equals(Object) | 4 932 | 19,606
java.util.HashMap.getNode(int, Object) | 3 236 | 12,864



> Optimize allocations and CPU consumption in active page replacement scenario
> 
>
> Key: IGNITE-8299
> URL: https://issues.apache.org/jira/browse/IGNITE-8299
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Major
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8299) Optimize allocations and CPU consumption in active page replacement scenario

2018-04-17 Thread Ivan Rakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8299:
---
Description: 
Ignite performance significantly decreases when total size of local data is 
much greater than size of RAM. It can be explained by change of disk access 
pattern (random reads + random writes is complex even for SSDs), but after 
analysis of persistence code and JFRs it's clear that there's still room for 
optimization.
The following possible optimizations should be investigated:
1) PageMemoryImpl.Segment#partGeneration performs allocation of 
GroupPartitionId during HashMap.get - we can get rid of it
2) LoadedPagesMap#getNearestAt is invoked at least 5 times in 
PageMemoryImpl.Segment#removePageForReplacement. It performs two allocations - 
we can get rid of it
3) If one of 5 evict candidates was erroneous, we'll find 5 new ones - we can 
reuse remaining 4 instead

JFR that highlights excessive CPU usage by page replacement code is attached. 
See 1st and 3rd positions in "Hot Methods" section:
Stack Trace                                    | Sample Count | Percentage (%)
PageMemoryImpl.acquirePage(int, long, boolean) | 4 963        | 19,73
scala.Some.equals(Object)                      | 4 932        | 19,606
java.util.HashMap.getNode(int, Object)         | 3 236        | 12,864


  was:
Ignite performance significantly decreases when total size of local data is 
much greater than size of RAM. It can be explained by change of disk access 
pattern (random reads + random writes is complex even for SSDs), but after 
analysis of persistence code and JFRs it's clear that there's still room for 
optimization.
The following possible optimizations should be investigated:
1) PageMemoryImpl.Segment#partGeneration performs allocation of 
GroupPartitionId during HashMap.get - we can get rid of it
2) LoadedPagesMap#getNearestAt is invoked at least 5 times in 
PageMemoryImpl.Segment#removePageForReplacement. It performs two allocations - 
we can get rid of it
3) If one of 5 evict candidates was erroneous, we'll find 5 new ones - we can 
reuse remaining 4 instead
JFR that highlights excessive CPU usage by page replacement code is attached. 
See 1st and 3rd positions in "Hot Methods" section:
Stack Trace                                    | Sample Count | Percentage (%)
PageMemoryImpl.acquirePage(int, long, boolean) | 4 963        | 19,73
scala.Some.equals(Object)                      | 4 932        | 19,606
java.util.HashMap.getNode(int, Object)         | 3 236        | 12,864



> Optimize allocations and CPU consumption in active page replacement scenario
> 
>
> Key: IGNITE-8299
> URL: https://issues.apache.org/jira/browse/IGNITE-8299
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Major
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8299) Optimize allocations and CPU consumption in active page replacement scenario

2018-04-17 Thread Ivan Rakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8299:
---
Description: 
Ignite performance significantly decreases when total size of local data is 
much greater than size of RAM. It can be explained by change of disk access 
pattern (random reads + random writes is complex even for SSDs), but after 
analysis of persistence code and JFRs it's clear that there's still room for 
optimization.
The following possible optimizations should be investigated:
1) PageMemoryImpl.Segment#partGeneration performs allocation of 
GroupPartitionId during HashMap.get - we can get rid of it.
2) LoadedPagesMap#getNearestAt is invoked at least 5 times in 
PageMemoryImpl.Segment#removePageForReplacement. It performs two allocations - 
we can get rid of it.
3) If one of 5 evict candidates was erroneous, we'll find 5 new ones - we can 
reuse remaining 4 instead.

JFR that highlights excessive CPU usage by page replacement code is attached. 
See 1st and 3rd positions in "Hot Methods" section:
Stack Trace                                    | Sample Count | Percentage (%)
PageMemoryImpl.acquirePage(int, long, boolean) | 4 963        | 19,73
scala.Some.equals(Object)                      | 4 932        | 19,606
java.util.HashMap.getNode(int, Object)         | 3 236        | 12,864


  was:
Ignite performance significantly decreases when total size of local data is 
much greater than size of RAM. It can be explained by change of disk access 
pattern (random reads + random writes is complex even for SSDs), but after 
analysis of persistence code and JFRs it's clear that there's still room for 
optimization.
The following possible optimizations should be investigated:
1) PageMemoryImpl.Segment#partGeneration performs allocation of 
GroupPartitionId during HashMap.get - we can get rid of it
2) LoadedPagesMap#getNearestAt is invoked at least 5 times in 
PageMemoryImpl.Segment#removePageForReplacement. It performs two allocations - 
we can get rid of it
3) If one of 5 evict candidates was erroneous, we'll find 5 new ones - we can 
reuse remaining 4 instead

JFR that highlights excessive CPU usage by page replacement code is attached. 
See 1st and 3rd positions in "Hot Methods" section:
Stack Trace                                    | Sample Count | Percentage (%)
PageMemoryImpl.acquirePage(int, long, boolean) | 4 963        | 19,73
scala.Some.equals(Object)                      | 4 932        | 19,606
java.util.HashMap.getNode(int, Object)         | 3 236        | 12,864



> Optimize allocations and CPU consumption in active page replacement scenario
> 
>
> Key: IGNITE-8299
> URL: https://issues.apache.org/jira/browse/IGNITE-8299
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Major
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8299) Optimize allocations and CPU consumption in active page replacement scenario

2018-04-17 Thread Ivan Rakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8299:
---
Description: 
Ignite performance significantly decreases when total size of local data is 
much greater than size of RAM. It can be explained by change of disk access 
pattern (random reads + random writes is complex even for SSDs), but after 
analysis of persistence code and JFRs it's clear that there's still room for 
optimization.
The following possible optimizations should be investigated:
1) PageMemoryImpl.Segment#partGeneration performs allocation of 
GroupPartitionId during HashMap.get - we can get rid of it
2) LoadedPagesMap#getNearestAt is invoked at least 5 times in 
PageMemoryImpl.Segment#removePageForReplacement. It performs two allocations - 
we can get rid of it
3) If one of 5 evict candidates was erroneous, we'll find 5 new ones - we can 
reuse remaining 4 instead
JFR that highlights excessive CPU usage by page replacement code is attached. 
See 1st and 3rd positions in "Hot Methods" section:
Stack Trace | Sample Count | Percentage (%)
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(int, long, boolean) | 4 963 | 19,73
scala.Some.equals(Object) | 4 932 | 19,606
java.util.HashMap.getNode(int, Object) | 3 236 | 12,864


  was:
Ignite performance significantly decreases when total size of local data is 
much greater than size of RAM. It can be explained by change of disk access 
pattern (random reads + random writes is complex even for SSDs), but after 
analysis of persistence code and JFRs it's clear that there's still room for 
optimization.
The following possible optimizations should be investigated:
1) PageMemoryImpl.Segment#partGeneration performs allocation of 
GroupPartitionId during HashMap.get - we can get rid of it
2) LoadedPagesMap#getNearestAt is invoked at least 5 times in 
PageMemoryImpl.Segment#removePageForReplacement. It performs two allocations - 
we can get rid of it
3) If one of 5 evict candidates was erroneous, we'll find 5 new ones - we can 
reuse remaining 4 instead


> Optimize allocations and CPU consumption in active page replacement scenario
> 
>
> Key: IGNITE-8299
> URL: https://issues.apache.org/jira/browse/IGNITE-8299
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Major
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)