It should be 24 per worker then. We cannot replicate your problem. Can
you give us your exact setup, the command you run, and a log from one
worker?
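
A quick sketch of where that number comes from, assuming the default hash
partitioning and no explicit partition count override (as far as I remember,
the total partition count then defaults to roughly the square of the worker
count times a multiplier of 1):

  public class PartitionsPerWorker {
    public static void main(String[] args) {
      // Assumption: with default hash partitioning and no explicit partition
      // count, total partitions ~= multiplier * workers * workers, with the
      // multiplier defaulting to 1. An estimate, not the exact master logic.
      int workers = 24;
      double multiplier = 1.0;
      int totalPartitions = (int) (multiplier * workers * workers); // 576
      int perWorker = totalPartitions / workers;                    // 24
      System.out.println(totalPartitions + " partitions total, " + perWorker + " per worker");
    }
  }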

On Thursday, February 13, 2014, Sebastian Schelter <s...@apache.org> wrote:

> I'm using 24 workers to process a dataset of 50M vertices. Where can I see
> the number of partitions assigned to each worker?
>
> Best,
> Sebastian
>
> On 02/12/2014 04:34 PM, Claudio Martella wrote:
>
> If you set giraph.maxPartitionsInMemory to a number larger than that max(),
> and you're not setting stickyPartitions, then my question is whether there
> are at least that many partitions in the worker. Can you validate that each
> worker has enough partitions? In other terms, assuming you're using the hash
> partitioner, how many workers are you using?
>
>
> On Wed, Feb 12, 2014 at 2:59 PM, Sebastian Schelter <s...@apache.org>
> wrote:
>
>  No. Should I have done that?
>
>
> On 02/12/2014 02:57 PM, Claudio Martella wrote:
>
> did you also set stickyPartitions to some number?
>
>
> On Wed, Feb 12, 2014 at 1:00 PM, Sebastian Schelter <s...@apache.org>
> wrote:
>
>   Updating documentation is never a bad idea :)
>
>
> I reran my test with giraph.maxPartitionsInMemory >
> max(giraph.numComputeThreads, giraph.numInputThreads, giraph.numOutputThreads)
> and still got the same behavior. I'll wait for the updated patch.
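>
> For concreteness, this is roughly how I read that condition (just a sketch
> of the check, not the actual code from my run; the thread options are the
> ones from the configuration further down):
>
>   import org.apache.giraph.conf.GiraphConfiguration;
>
>   public class OutOfCoreConfigSketch {
>     public static void main(String[] args) {
>       GiraphConfiguration conf = new GiraphConfiguration();
>       conf.setInt("giraph.numComputeThreads", 15);
>       conf.setInt("giraph.numInputThreads", 15);
>       conf.setInt("giraph.numOutputThreads", 15);
>       conf.setBoolean("giraph.useOutOfCoreGraph", true);
>       // Keep more partitions in memory than the largest thread count, so
>       // that every thread can hold on to a partition at the same time.
>       int maxThreads = Math.max(conf.getInt("giraph.numComputeThreads", 1),
>           Math.max(conf.getInt("giraph.numInputThreads", 1),
>                    conf.getInt("giraph.numOutputThreads", 1)));
>       conf.setInt("giraph.maxPartitionsInMemory", maxThreads + 1);
>     }
>   }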
>
> Get well Armando!
>
>
> On 02/12/2014 12:53 PM, Armando Miraglia wrote:
>
>
> btw: I was also thinking of updating the documentation page on the Giraph
> website to better explain the sticky partition logic. What do you think?
>
> Cheers,
> Armando
>
>
> On Wed, Feb 12, 2014 at 12:50:25PM +0100, Armando Miraglia wrote:
>
> Indeed, yesterday I was fixing a couple of things and I think I missed a
> case that I have to exclude. Sorry for this, I have a fever at the moment,
> so it could be that yesterday I was under its effect :D
>
> I checked that the tests were passing, but I think I missed something.
>
> I'll come back to you very soon.
>
> On Wed, Feb 12, 2014 at 10:26:12AM +0100, Claudio Martella wrote:
>
> The problem is that you're running with more threads than in-memory
> partitions. Increase the number of partitions in memory to be at least the
> number of threads. I have no time right now to check the latest code, but
> you should not set the number of stickyPartitions by hand.
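>
> For example, with the 15 compute/input/output threads from the run you
> posted (the default for giraph.maxPartitionsInMemory is lower than that, if
> I remember correctly), something along these lines should do; 16 is just an
> example value, anything at least as large as the thread counts should work:
>
> giraph.numComputeThreads=15
> giraph.numInputThreads=15
> giraph.numOutputThreads=15
> giraph.useOutOfCoreGraph=true
> giraph.maxPartitionsInMemory=16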
>
>
> On Wed, Feb 12, 2014 at 10:03 AM, Sebastian Schelter <s...@apache.org>
> wrote:
>
> I ran a first test with the new DiskBackedPartitionStore and it didn't
> work for me, unfortunately. The job never leaves the input phase (superstep
> -1). I ssh'd onto one of the workers and it seems to wait forever on
> DiskBackedPartitionStore.getOrCreatePartition:
>
>   java.lang.Thread.State: BLOCKED (on object monitor)
>        at org.apache.giraph.partition.DiskBackedPartitionStore.getOrCreatePartition(DiskBackedPartitionStore.java:226)
>        - waiting to lock <0x00000000aeb757c8> (a org.apache.giraph.partition.DiskBackedPartitionStore$MetaPartition)
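>
> If it helps to picture the hang: the shape of it looks like the classic
> "more threads than slots" situation. A toy sketch of that pattern (this is
> not the actual DiskBackedPartitionStore code, just an illustration using
> the 15 threads and 5 sticky partitions from the configuration below):
>
>   import java.util.concurrent.Semaphore;
>
>   public class InMemorySlotsSketch {
>     public static void main(String[] args) {
>       final int threads = 15;              // e.g. giraph.numInputThreads
>       final int slots = 5;                 // fewer in-memory partitions than threads
>       final Semaphore inMemory = new Semaphore(slots);
>       for (int i = 0; i < threads; i++) {
>         final int id = i;
>         new Thread(() -> {
>           try {
>             inMemory.acquire();            // threads beyond the 5 slots block here forever
>             System.out.println("thread " + id + " got an in-memory partition slot");
>             Thread.sleep(Long.MAX_VALUE);  // the slot is never given back during the phase
>           } catch (InterruptedException ignored) {
>           }
>         }).start();
>       }
>     }
>   }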
>
> Here are the custom arguments for my run, let me know if I should do
> another run with a different config.
>
> giraph.oneToAllMsgSending=true
> giraph.isStaticGraph=true
> giraph.numComputeThreads=15
> giraph.numInputThreads=15
> giraph.numOutputThreads=15
> giraph.maxNumberOfSupersteps=30
> giraph.useOutOfCoreGraph=true
> giraph.stickyPartitions=5
>
> I also ran the job without using oneToAllMsgSending and saw the same
> behavior.
>
> Best,
> Sebastian
>
>
>
> On 02/12/2014 12:44 AM, Claudio Martella wrote:
>
> Please give it a test. I've been working on this with Armando. I'll give a
> review, but we have been testing it for a while. We'd really appreciate it
> if somebody else could run some additional tests as well. Thanks!
>
>
> On Wed, Feb 12, 2014 at 12:39 AM, Sebastian Schelter <s...@apache.org>
> wrote:
>
>     I'll test the patch from GIRAPH-825 this week.
>
>
>
>
>

-- 
   Claudio Martella
