[ https://issues.apache.org/jira/browse/GEODE-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17094897#comment-17094897 ]

ASF GitHub Bot commented on GEODE-8035:
---------------------------------------

gesterzhou commented on a change in pull request #5014:
URL: https://github.com/apache/geode/pull/5014#discussion_r416956303



##########
File path: 
geode-core/src/main/java/org/apache/geode/internal/cache/xmlcache/CacheCreation.java
##########
@@ -521,12 +521,12 @@ void create(InternalCache cache)
 
     cache.initializePdxRegistry();
 
-    for (DiskStore diskStore : diskStores.values()) {
+    diskStores.values().parallelStream().forEach(diskStore -> {

Review comment:
       In the Geode use case, there won't be a large number of disk stores; in the worst case, the count is bounded by the number of regions.
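The diff above replaces a serial for loop over `diskStores.values()` with `parallelStream().forEach(...)`, so each disk store is recovered on the common ForkJoinPool instead of one after another. A minimal stand-alone sketch of that pattern (the `DiskStoreStub` type and `recover` method are hypothetical stand-ins, not Geode API):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the change in the PR diff: fan per-disk-store recovery work
// out over parallelStream() instead of iterating serially.
public class ParallelRecoverySketch {
    // Hypothetical stand-in for a Geode DiskStore.
    static class DiskStoreStub {
        final String name;
        DiskStoreStub(String name) { this.name = name; }
    }

    // Thread-safe sink, since recover() may run on several pool threads.
    static final Map<String, String> recovered = new ConcurrentHashMap<>();

    // Stand-in for the recovery work done per disk store inside create().
    static void recover(DiskStoreStub store) {
        recovered.put(store.name, Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        List<DiskStoreStub> stores = List.of(
            new DiskStoreStub("ds1"),
            new DiskStoreStub("ds2"),
            new DiskStoreStub("ds3"));

        // Before the PR: a plain serial loop.
        //   for (DiskStoreStub s : stores) { recover(s); }
        // After, as in the diff:
        stores.parallelStream().forEach(ParallelRecoverySketch::recover);

        System.out.println(recovered.size());
    }
}
```

Note that `parallelStream()` shares the JVM-wide common ForkJoinPool, which is the trade-off the reviewer is weighing: acceptable here because, as the comment says, the number of disk stores stays small.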




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


> Parallel Disk Store Recovery when Cluster Restarts
> --------------------------------------------------
>
>                 Key: GEODE-8035
>                 URL: https://issues.apache.org/jira/browse/GEODE-8035
>             Project: Geode
>          Issue Type: Improvement
>            Reporter: Jianxia Chen
>            Assignee: Jianxia Chen
>            Priority: Major
>              Labels: GeodeCommons
>
> Currently, when a Geode cluster restarts, disk store recovery is 
> serialized. When all regions share the same disk store, the restart process 
> is time consuming. To improve performance, different regions can use 
> different disk stores backed by different disk controllers, and disk store 
> recovery can then run in parallel. This is expected to significantly reduce 
> the time to restart a Geode cluster.
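The description's prerequisite is that regions be spread across separate disk stores whose directories sit on different controllers, so that parallel recovery actually overlaps I/O. A hedged `cache.xml` sketch of that layout (the region names and mount paths are assumptions, not from the issue):

```xml
<!-- Sketch: two persistent regions on two disk stores whose disk-dirs
     live on different controllers; paths and names are hypothetical. -->
<cache xmlns="http://geode.apache.org/schema/cache"
       version="1.0">
  <disk-store name="store-a">
    <disk-dirs>
      <disk-dir>/data/controller1/store-a</disk-dir>
    </disk-dirs>
  </disk-store>
  <disk-store name="store-b">
    <disk-dirs>
      <disk-dir>/data/controller2/store-b</disk-dir>
    </disk-dirs>
  </disk-store>
  <region name="orders">
    <region-attributes data-policy="persistent-replicate"
                       disk-store-name="store-a"/>
  </region>
  <region name="customers">
    <region-attributes data-policy="persistent-replicate"
                       disk-store-name="store-b"/>
  </region>
</cache>
```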



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
