Hi Manu,

Partition distribution is determined by the claim algorithm. In this case
it distributes partitions more evenly when a cluster is built from scratch
than when nodes are added to an existing cluster. There has been work to
improve the algorithm, which you can find here:
https://github.com/basho/riak_core/pull/183
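
To make the arithmetic concrete, here is a minimal sketch in Python (a
toy model, not the actual riak_core claim code) of what a from-scratch
claim can achieve for a ring size of 64:

RING_SIZE = 64

def from_scratch(num_nodes):
    # Toy model: every node gets floor(RING_SIZE / num_nodes)
    # partitions, and the remainder is handed out one extra partition
    # at a time. This matches the from-scratch output quoted below;
    # it is not riak_core's actual claim logic.
    base, rem = divmod(RING_SIZE, num_nodes)
    return [base + 1] * rem + [base] * (num_nodes - rem)

for n in (4, 5):
    print(n, [f"{c}/{RING_SIZE} = {c / RING_SIZE:.1%}" for c in from_scratch(n)])
# 4 ['16/64 = 25.0%', '16/64 = 25.0%', '16/64 = 25.0%', '16/64 = 25.0%']
# 5 ['13/64 = 20.3%', '13/64 = 20.3%', '13/64 = 20.3%', '13/64 = 20.3%',
#    '12/64 = 18.8%']

Adding a node to an existing ring does not rebalance this evenly: as the
member-status output quoted below shows, the transfers all targeted the
new node, leaving one existing node untouched.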

--
Luke Bakken
CSE
lbak...@basho.com


On Mon, Jun 2, 2014 at 11:51 PM, Manu Mäki - Compare Group <m.m...@comparegroup.eu> wrote:

>  Hi Luke,
>
>  Do you have any idea why creating the cluster from scratch produces a
> “more balanced” cluster? Is this because the actual partitions are not of
> equal size?
>
>
>  Manu
>
>   From: Luke Bakken <lbak...@basho.com>
> Date: Monday 2 June 2014 19:34
> To: Manu Maki <m.m...@comparegroup.eu>
> Cc: "riak-users@lists.basho.com" <riak-users@lists.basho.com>
> Subject: Re: Partition distribution between nodes
>
>   Hi Manu,
>
>  I see a similar vnode distribution in my local dev cluster. This is
> because 64 is not evenly divisible by 5.
>
>  4 nodes:
>
>  $ dev1/bin/riak-admin member-status
> ================================= Membership ==================================
> Status     Ring    Pending    Node
> -------------------------------------------------------------------------------
> valid      25.0%      --      'dev1@127.0.0.1'
> valid      25.0%      --      'dev2@127.0.0.1'
> valid      25.0%      --      'dev3@127.0.0.1'
> valid      25.0%      --      'dev4@127.0.0.1'
> -------------------------------------------------------------------------------
>
>  5th node added:
>
>  $ dev1/bin/riak-admin member-status
> ================================= Membership ==================================
> Status     Ring    Pending    Node
> -------------------------------------------------------------------------------
> valid      18.8%      --      'dev1@127.0.0.1'
> valid      18.8%      --      'dev2@127.0.0.1'
> valid      18.8%      --      'dev3@127.0.0.1'
> valid      25.0%      --      'dev4@127.0.0.1'
> valid      18.8%      --      'dev5@127.0.0.1'
> -------------------------------------------------------------------------------
>
>  Cluster *from scratch* with 5 nodes:
>
>  $ dev1/bin/riak-admin member-status
> ================================= Membership ==================================
> Status     Ring    Pending    Node
> -------------------------------------------------------------------------------
> valid      20.3%      --      'dev1@127.0.0.1'
> valid      20.3%      --      'dev2@127.0.0.1'
> valid      20.3%      --      'dev3@127.0.0.1'
> valid      20.3%      --      'dev4@127.0.0.1'
> valid      18.8%      --      'dev5@127.0.0.1'
> -------------------------------------------------------------------------------
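>
>  To spell out the arithmetic: 64 = 4*13 + 12, so the closest possible
> split across 5 nodes is four nodes at 13/64 = 20.3% and one at
> 12/64 = 18.8%. In the add-a-node case above, the 12 transferred
> partitions all went to dev5, which is why dev4 kept 16/64 = 25.0%.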
>
>  --
> Luke Bakken
> CSE
> lbak...@basho.com
>
>
> On Mon, Jun 2, 2014 at 6:52 AM, Manu Mäki - Compare Group <m.m...@comparegroup.eu> wrote:
>
>>  Hi all,
>>
>>  In the beginning we were running four nodes with an n-value of 2. The
>> partitions were distributed 25% to each node. Now that we have added a
>> fifth node (still with an n-value of 2), the partitions are distributed as
>> follows: 25%, 19%, 19%, 19% and 19%. The ring size in use is 64. Is this
>> normal behavior? The cluster seems to be working correctly; however, I was
>> expecting each node to have 20% of the partitions.
>>
>>
>>  Best regards,
>> Manu Mäki
>>
>
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
