-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23340/#review47478
-----------------------------------------------------------


I did not look at the semaphore part carefully.
@majakabiljo, could you please also take a look? Thanks.


giraph-core/src/main/java/org/apache/giraph/comm/messages/queue/AsyncMessageStoreWrapper.java
<https://reviews.apache.org/r/23340/#comment83340>

    So at the end of every superstep, threadCount threads are being created. 
    Why not have a pool and reuse it, possibly verifying its integrity, e.g. 
    checking that the expected number of threads are still alive?
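    
    Something along these lines, for example (just a sketch; the class and 
    method names are illustrative, not from the patch):
    
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        
        /** Sketch: one fixed pool for the whole job, reused every superstep. */
        public class ReusableMessageWorkers {
          private final int threadCount;
          private final ExecutorService pool;
        
          public ReusableMessageWorkers(int threadCount) {
            this.threadCount = threadCount;
            // Created once, instead of threadCount new threads per superstep.
            this.pool = Executors.newFixedThreadPool(threadCount);
          }
        
          /** Submit one drain task per worker thread for this superstep. */
          public void startSuperstep(Runnable drainTask) {
            for (int i = 0; i < threadCount; i++) {
              pool.submit(drainTask);
            }
          }
        
          /** Shut the pool down once, at the end of the job. */
          public void shutdown() throws InterruptedException {
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
          }
        }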



giraph-core/src/main/java/org/apache/giraph/conf/GiraphConstants.java
<https://reviews.apache.org/r/23340/#comment83322>

    Could we keep the default thread count at 0 and infer the intention to use 
    the async message store from the number of threads?
    
    Not sure that is very helpful, though!
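    
    Roughly what I mean (a sketch using a plain Hadoop Configuration; the 
    patch would presumably keep using the conf-option helpers in GiraphConstants):
    
        import org.apache.hadoop.conf.Configuration;
        
        /** Sketch: derive the intention to use the async store from the thread count. */
        public class AsyncStoreConfigSketch {
          /** Option name from the patch; a default of 0 would mean "disabled". */
          public static final String ASYNC_THREADS = "giraph.async.message.store.threads";
        
          public static int asyncThreads(Configuration conf) {
            return conf.getInt(ASYNC_THREADS, 0);
          }
        
          /** No separate boolean option needed: threads > 0 means "use async store". */
          public static boolean useAsyncMessageStore(Configuration conf) {
            return asyncThreads(conf) > 0;
          }
        }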



giraph-core/src/main/java/org/apache/giraph/graph/GraphTaskManager.java
<https://reviews.apache.org/r/23340/#comment83341>

    Instead of defining a new method on serverData, which is only used on the 
    incoming message store, can't you just do
    getServerData().getOutGoingMessageStore().waitToComplete()
    
    Also, this has to be part of BspServiceWorker#finishSuperstep, because you 
    cannot finish the superstep without properly flushing messages.
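    
    To illustrate what I have in mind (a sketch only; MessageStoreWithWait is 
    an illustrative stand-in, and the existing finishSuperstep coordination is 
    elided):
    
        /** Sketch: drain pending messages before the normal finish logic. */
        public class FinishSuperstepSketch {
          /** Illustrative interface; stands in for the async message store. */
          interface MessageStoreWithWait {
            /** Blocks until all queued messages have been processed. */
            void waitToComplete() throws InterruptedException;
          }
        
          private final MessageStoreWithWait incomingMessageStore;
        
          FinishSuperstepSketch(MessageStoreWithWait store) {
            this.incomingMessageStore = store;
          }
        
          void finishSuperstep() throws InterruptedException {
            // Flush first: the superstep must not finish while messages are
            // still sitting in the queue.
            incomingMessageStore.waitToComplete();
            // ... existing BspServiceWorker#finishSuperstep coordination here ...
          }
        }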
    


- Pavan Kumar Athivarapu


On July 8, 2014, 4:55 p.m., Sergey Edunov wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23340/
> -----------------------------------------------------------
> 
> (Updated July 8, 2014, 4:55 p.m.)
> 
> 
> Review request for giraph.
> 
> 
> Repository: giraph-git
> 
> 
> Description
> -------
> 
> Our profiling shows that a lot of apps are neither CPU, memory, nor network 
> bound. Instead they waste a lot of time waiting for the lock in MessageStore. 
> That happens in netty threads. 
> We should be able to put messages into a queue and then process them in 
> another set of threads. 
> It has to be configurable because adding another thread level introduces 
> additional overhead.
> 
> I introduced two new options: 
> giraph.async.message.store (false by default), which enables async messaging, and 
> giraph.async.message.store.threads (8 by default), which configures the number 
> of background threads used to process messages.
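> 
> A rough sketch of the queue-and-drain idea (illustrative only; the class and 
> method names below are not the ones used in the patch):
> 
>     import java.util.concurrent.BlockingQueue;
>     import java.util.concurrent.ExecutorService;
>     import java.util.concurrent.Executors;
>     import java.util.concurrent.LinkedBlockingQueue;
> 
>     /** Netty threads only enqueue; a small pool of background threads drains. */
>     class AsyncStoreSketch<M> {
>       private final BlockingQueue<M> queue = new LinkedBlockingQueue<M>();
>       private final ExecutorService drainers;
> 
>       AsyncStoreSketch(int threads) {
>         drainers = Executors.newFixedThreadPool(threads);
>         for (int i = 0; i < threads; i++) {
>           drainers.submit(new Runnable() {
>             @Override
>             public void run() {
>               try {
>                 while (!Thread.currentThread().isInterrupted()) {
>                   // Blocks until a message arrives; the store lock is only
>                   // taken here, off the netty threads.
>                   storeMessage(queue.take());
>                 }
>               } catch (InterruptedException e) {
>                 Thread.currentThread().interrupt();
>               }
>             }
>           });
>         }
>       }
> 
>       /** Called from netty threads: just enqueue, no lock contention. */
>       void addMessage(M message) throws InterruptedException {
>         queue.put(message);
>       }
> 
>       void storeMessage(M message) {
>         // synchronized insertion into the underlying message store
>       }
>     }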
> 
> 
> Diffs
> -----
> 
>   giraph-core/src/main/java/org/apache/giraph/comm/ServerData.java b3f8733 
>   
> giraph-core/src/main/java/org/apache/giraph/comm/messages/InMemoryMessageStoreFactory.java
>  f691d3e 
>   
> giraph-core/src/main/java/org/apache/giraph/comm/messages/NonBlockingMessageStore.java
>  PRE-CREATION 
>   
> giraph-core/src/main/java/org/apache/giraph/comm/messages/primitives/IntByteArrayMessageStore.java
>  dbc1ce8 
>   
> giraph-core/src/main/java/org/apache/giraph/comm/messages/primitives/IntFloatMessageStore.java
>  be75ee8 
>   
> giraph-core/src/main/java/org/apache/giraph/comm/messages/primitives/LongByteArrayMessageStore.java
>  3110864 
>   
> giraph-core/src/main/java/org/apache/giraph/comm/messages/primitives/LongDoubleMessageStore.java
>  264e65a 
>   
> giraph-core/src/main/java/org/apache/giraph/comm/messages/queue/AsyncMessageStoreWrapper.java
>  PRE-CREATION 
>   
> giraph-core/src/main/java/org/apache/giraph/comm/messages/queue/PartitionMessage.java
>  PRE-CREATION 
>   
> giraph-core/src/main/java/org/apache/giraph/comm/messages/queue/package-info.java
>  PRE-CREATION 
>   giraph-core/src/main/java/org/apache/giraph/conf/GiraphConstants.java 
> 7d7ceb2 
>   giraph-core/src/main/java/org/apache/giraph/graph/GraphTaskManager.java 
> e13eedd 
>   
> giraph-core/src/test/java/org/apache/giraph/comm/messages/queue/AsyncMessageStoreWrapperTest.java
>  PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/23340/diff/
> 
> 
> Testing
> -------
> 
> I ran PageRank and it gives a ~7% improvement; along with G1 GC it 
> gives a ~15% improvement. CPU usage is now close to 90%.
> 
> 
> Thanks,
> 
> Sergey Edunov
> 
>
