Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-09 Thread Xavier Hernandez

Hi Alastair,

the numbers I'm giving correspond to an Intel Xeon E5-2630L 2 GHz CPU.

On 08/05/17 22:44, Alastair Neil wrote:

so the bottleneck is that computations with 16x20 matrix require  ~4
times the cycles?


This is only part of the problem. A 16x16 matrix can be processed at a 
rate of 400 MB/s, so a single fragment on a brick would be processed at 
400/16 = 25 MB/s, which is not what we actually see in practice.


Note that the fragment on a brick is only part of a whole file, so 25 
MB/s on a brick means that the real file is being processed at 400 MB/s.
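
To make the relation explicit, here is a rough back-of-the-envelope sketch 
using the figures above (16 data fragments, ~400 MB/s matrix throughput):

    # Per-brick fragment rate vs. whole-file rate for a 16+4 layout
    MATRIX_RATE=400      # MB/s of reconstructed file data per core (figure above)
    DATA_FRAGMENTS=16
    echo "$(( MATRIX_RATE / DATA_FRAGMENTS )) MB/s per brick fragment"   # 25 MB/s
    echo "$(( MATRIX_RATE )) MB/s for the whole file"                    # 400 MB/s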



It seems then that there is ample room for
improvement, as there are many linear algebra packages out there that
scale better than O(nxm).


That's true for much bigger matrices where synchronization time between 
threads is negligible compared to the computation time. In this case the 
algorithm is highly optimized and any attempt to distribute the 
computation would be worse.


Note that the current algorithm can rebuild the original data at a rate 
of ~5 CPU cycles per byte with a 16x16 configuration without any SIMD 
extension. With SSE or AVX this goes down to near 1 cycle per byte.
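
For reference, this is how those cycle counts translate into per-core 
throughput on the 2 GHz CPU mentioned above (a rough estimate; real numbers 
also depend on memory bandwidth and cache behaviour):

    # Decode throughput per core from cycles-per-byte, E5-2630L at ~2 GHz
    CLOCK_HZ=2000000000
    echo "$(( CLOCK_HZ / 5 / 1000000 )) MB/s without SIMD (~5 cycles/byte)"   # ~400 MB/s
    echo "$(( CLOCK_HZ / 1 / 1000000 )) MB/s with SSE/AVX (~1 cycle/byte)"    # ~2000 MB/s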


In this case the best we can do is run more than one heal in parallel. 
This will use more than one core to compute the matrices, giving better 
overall performance.
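
If I remember correctly, the number of parallel heals for a disperse volume 
is controlled by the disperse.shd-max-threads option added around 3.9 
(please check the exact option name on your release); for example:

    # Hypothetical example - verify the option on your version first
    gluster volume set myvolume disperse.shd-max-threads 4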



Is the healing time dominated by the EC
compute time?  If Serkan saw a hard 2x scaling then it seems likely.


Partially. The computation speed is doubled in an 8+2 configuration, but 
the number of IOPS is also halved, and each operation is twice the size of 
a 16+4 operation. This means we only incur half the latencies when using 
8+2, and bandwidth is better utilized.


The theoretical speed of matrix processing is 25 MB/s per brick, but the 
real speed seen is considerably smaller, so network latencies and other 
factors also contribute to the heal time.


Xavi



-Alastair




On 8 May 2017 at 03:02, Xavier Hernandez wrote:

On 05/05/17 13:49, Pranith Kumar Karampuri wrote:



On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban

>>
wrote:

It is the over all time, 8TB data disk healed 2x faster in 8+2
configuration.


Wow, that is counter intuitive for me. I will need to explore
about this
to find out why that could be. Thanks a lot for this feedback!


Matrix multiplication for encoding/decoding of 8+2 is 4 times faster
than 16+4 (one matrix of 16x16 is composed by 4 submatrices of 8x8),
however each matrix operation on a 16+4 configuration takes twice
the amount of data of a 8+2, so net effect is that 8+2 is twice as
fast as 16+4.

An 8+2 also uses bigger blocks on each brick, processing the same
amount of data in less I/O operations and bigger network packets.

Probably these are the reasons why 16+4 is slower than 8+2.

See my other email for more detailed description.

Xavi




On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri

>> wrote:
>
>
> On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban

>>
wrote:
>>
>> Healing gets slower as you increase m in m+n configuration.
>> We are using 16+4 configuration without any problems
other then heal
>> speed.
>> I tested heal speed with 8+2 and 16+4 on 3.9.0 and see
that heals on
>> 8+2 is faster by 2x.
>
>
> As you increase number of nodes that are participating in
an EC
set number
> of parallel heals increase. Is the heal speed you saw
improved per
file or
> the over all time it took to heal the data?
>
>>
>>
>>
>> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey

>> wrote:
>> >
>> > 8+2 and 8+3 configurations are not the limitation but just
suggestions.
>> > You can create 16+3 volume without any issue.
>> >
>> > Ashish
>> >
>> > 
>> > From: "Alastair Neil" 
>>
>> > To: "gluster-users" 

Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-08 Thread Alastair Neil
so the bottleneck is that computations with a 16x20 matrix require ~4 times
the cycles? It seems then that there is ample room for improvement, as
there are many linear algebra packages out there that scale better than
O(nxm). Is the healing time dominated by the EC compute time? If Serkan
saw a hard 2x scaling then it seems likely.

-Alastair




On 8 May 2017 at 03:02, Xavier Hernandez  wrote:

> On 05/05/17 13:49, Pranith Kumar Karampuri wrote:
>
>>
>>
>> On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban > > wrote:
>>
>> It is the over all time, 8TB data disk healed 2x faster in 8+2
>> configuration.
>>
>>
>> Wow, that is counter intuitive for me. I will need to explore about this
>> to find out why that could be. Thanks a lot for this feedback!
>>
>
> Matrix multiplication for encoding/decoding of 8+2 is 4 times faster than
> 16+4 (one matrix of 16x16 is composed by 4 submatrices of 8x8), however
> each matrix operation on a 16+4 configuration takes twice the amount of
> data of a 8+2, so net effect is that 8+2 is twice as fast as 16+4.
>
> An 8+2 also uses bigger blocks on each brick, processing the same amount
> of data in less I/O operations and bigger network packets.
>
> Probably these are the reasons why 16+4 is slower than 8+2.
>
> See my other email for more detailed description.
>
> Xavi
>
>
>>
>>
>> On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
>> > wrote:
>> >
>> >
>> > On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban
>> > wrote:
>> >>
>> >> Healing gets slower as you increase m in m+n configuration.
>> >> We are using 16+4 configuration without any problems other then
>> heal
>> >> speed.
>> >> I tested heal speed with 8+2 and 16+4 on 3.9.0 and see that heals
>> on
>> >> 8+2 is faster by 2x.
>> >
>> >
>> > As you increase number of nodes that are participating in an EC
>> set number
>> > of parallel heals increase. Is the heal speed you saw improved per
>> file or
>> > the over all time it took to heal the data?
>> >
>> >>
>> >>
>> >>
>> >> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey
>> > wrote:
>> >> >
>> >> > 8+2 and 8+3 configurations are not the limitation but just
>> suggestions.
>> >> > You can create 16+3 volume without any issue.
>> >> >
>> >> > Ashish
>> >> >
>> >> > 
>> >> > From: "Alastair Neil" > >
>> >> > To: "gluster-users" > >
>> >> > Sent: Friday, May 5, 2017 2:23:32 AM
>> >> > Subject: [Gluster-users] disperse volume brick counts limits in
>> RHES
>> >> >
>> >> >
>> >> > Hi
>> >> >
>> >> > we are deploying a large (24node/45brick) cluster and noted
>> that the
>> >> > RHES
>> >> > guidelines limit the number of data bricks in a disperse set to
>> 8.  Is
>> >> > there
>> >> > any reason for this.  I am aware that you want this to be a
>> power of 2,
>> >> > but
>> >> > as we have a large number of nodes we were planning on going
>> with 16+3.
>> >> > Dropping to 8+2 or 8+3 will be a real waste for us.
>> >> >
>> >> > Thanks,
>> >> >
>> >> >
>> >> > Alastair
>> >> >
>> >> >
>> >> > ___
>> >> > Gluster-users mailing list
>> >> > Gluster-users@gluster.org 
>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
>> 
>> >> >
>> >> >
>> >> > ___
>> >> > Gluster-users mailing list
>> >> > Gluster-users@gluster.org 
>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
>> 
>> >> ___
>> >> Gluster-users mailing list
>> >> Gluster-users@gluster.org 
>> >> http://lists.gluster.org/mailman/listinfo/gluster-users
>> 
>> >
>> >
>> >
>> >
>> > --
>> > Pranith
>>
>>
>>
>>
>> --
>> Pranith
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>

Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-08 Thread Serkan Çoban
> What network do you have?
We have 2x10G bonded interfaces on each server.

Thanks to Xavier for the detailed explanation of EC.

On Sat, May 6, 2017 at 2:20 AM, Alastair Neil  wrote:
> What network do you have?
>
>
> On 5 May 2017 at 09:51, Serkan Çoban  wrote:
>>
>> In our use case every node has 26 bricks. I am using 60 nodes, one 9PB
>> volume with 16+4 EC configuration, each brick in a sub-volume is on
>> different host.
>> We put 15-20k 2GB files every day into 10-15 folders. So it is 1500K
>> files/folder. Our gluster version is 3.7.11.
>> Heal speed in this environment is 8-10MB/sec/brick.
>>
>> I did some tests for parallel self heal feature with version 3.9, two
>> servers 26 bricks each, 8+2 and 16+4 EC configuration.
>> This was a small test environment and the results are as I said 8+2 is
>> 2x faster then 16+4 with parallel self heal threads set to 2/4.
>> In 1-2 months our new servers arriving, I will do detailed tests for
>> heal performance for 8+2 and 16+4 and inform you the results.
>>
>>
>> On Fri, May 5, 2017 at 2:54 PM, Pranith Kumar Karampuri
>>  wrote:
>> >
>> >
>> > On Fri, May 5, 2017 at 5:19 PM, Pranith Kumar Karampuri
>> >  wrote:
>> >>
>> >>
>> >>
>> >> On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban 
>> >> wrote:
>> >>>
>> >>> It is the over all time, 8TB data disk healed 2x faster in 8+2
>> >>> configuration.
>> >>
>> >>
>> >> Wow, that is counter intuitive for me. I will need to explore about
>> >> this
>> >> to find out why that could be. Thanks a lot for this feedback!
>> >
>> >
>> > From memory I remember you said you have a lot of small files hosted on
>> > the
>> > volume, right? It could be because of the bug
>> > https://review.gluster.org/17151 is fixing. That is the only reason I
>> > could
>> > guess right now. We will try to test this kind of case if you could give
>> > us
>> > a bit more details about average file-size/depth of directories etc to
>> > simulate similar looking directory structure.
>> >
>> >>
>> >>
>> >>>
>> >>>
>> >>> On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
>> >>>  wrote:
>> >>> >
>> >>> >
>> >>> > On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban
>> >>> > 
>> >>> > wrote:
>> >>> >>
>> >>> >> Healing gets slower as you increase m in m+n configuration.
>> >>> >> We are using 16+4 configuration without any problems other then
>> >>> >> heal
>> >>> >> speed.
>> >>> >> I tested heal speed with 8+2 and 16+4 on 3.9.0 and see that heals
>> >>> >> on
>> >>> >> 8+2 is faster by 2x.
>> >>> >
>> >>> >
>> >>> > As you increase number of nodes that are participating in an EC set
>> >>> > number
>> >>> > of parallel heals increase. Is the heal speed you saw improved per
>> >>> > file
>> >>> > or
>> >>> > the over all time it took to heal the data?
>> >>> >
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey 
>> >>> >> wrote:
>> >>> >> >
>> >>> >> > 8+2 and 8+3 configurations are not the limitation but just
>> >>> >> > suggestions.
>> >>> >> > You can create 16+3 volume without any issue.
>> >>> >> >
>> >>> >> > Ashish
>> >>> >> >
>> >>> >> > 
>> >>> >> > From: "Alastair Neil" 
>> >>> >> > To: "gluster-users" 
>> >>> >> > Sent: Friday, May 5, 2017 2:23:32 AM
>> >>> >> > Subject: [Gluster-users] disperse volume brick counts limits in
>> >>> >> > RHES
>> >>> >> >
>> >>> >> >
>> >>> >> > Hi
>> >>> >> >
>> >>> >> > we are deploying a large (24node/45brick) cluster and noted that
>> >>> >> > the
>> >>> >> > RHES
>> >>> >> > guidelines limit the number of data bricks in a disperse set to
>> >>> >> > 8.
>> >>> >> > Is
>> >>> >> > there
>> >>> >> > any reason for this.  I am aware that you want this to be a power
>> >>> >> > of
>> >>> >> > 2,
>> >>> >> > but
>> >>> >> > as we have a large number of nodes we were planning on going with
>> >>> >> > 16+3.
>> >>> >> > Dropping to 8+2 or 8+3 will be a real waste for us.
>> >>> >> >
>> >>> >> > Thanks,
>> >>> >> >
>> >>> >> >
>> >>> >> > Alastair
>> >>> >> >
>> >>> >> >
>> >>> >> > ___
>> >>> >> > Gluster-users mailing list
>> >>> >> > Gluster-users@gluster.org
>> >>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
>> >>> >> >
>> >>> >> >
>> >>> >> > ___
>> >>> >> > Gluster-users mailing list
>> >>> >> > Gluster-users@gluster.org
>> >>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
>> >>> >> ___
>> >>> >> Gluster-users mailing list
>> >>> >> Gluster-users@gluster.org
>> >>> >> http://lists.gluster.org/mailman/listinfo/gluster-users
>> >>> >
>> >>> >
>> >>> >
>> >>> >
>> >>> > --
>> >>> > Pranith
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> Pranith
>> >
>> >
>> >
>> >
>> > 

Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-08 Thread Xavier Hernandez

On 05/05/17 13:49, Pranith Kumar Karampuri wrote:



On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban wrote:

It is the over all time, 8TB data disk healed 2x faster in 8+2
configuration.


Wow, that is counter intuitive for me. I will need to explore about this
to find out why that could be. Thanks a lot for this feedback!


Matrix multiplication for encoding/decoding of 8+2 is 4 times faster 
than 16+4 (one 16x16 matrix is composed of 4 submatrices of 8x8); 
however, each matrix operation on a 16+4 configuration processes twice the 
amount of data of an 8+2, so the net effect is that 8+2 is twice as fast as 16+4.
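
To put rough numbers on it (a simplified cost model that only counts 
multiply-accumulates and ignores constants):

    # k data fragments -> k x k decode matrix -> k*k ops per stripe column,
    # spread over k decoded bytes
    for K in 8 16; do
        echo "k=$K: $(( K * K )) ops/column, $(( K )) ops per decoded byte"
    done
    # k=8 gives 8 ops/byte, k=16 gives 16 ops/byte: 4x the ops per column,
    # 2x the data per column, so roughly 2x the cost per byte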


An 8+2 also uses bigger blocks on each brick, processing the same amount 
of data in fewer I/O operations and bigger network packets.


Probably these are the reasons why 16+4 is slower than 8+2.

See my other email for a more detailed description.

Xavi





On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
> wrote:
>
>
> On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban
> wrote:
>>
>> Healing gets slower as you increase m in m+n configuration.
>> We are using 16+4 configuration without any problems other then heal
>> speed.
>> I tested heal speed with 8+2 and 16+4 on 3.9.0 and see that heals on
>> 8+2 is faster by 2x.
>
>
> As you increase number of nodes that are participating in an EC
set number
> of parallel heals increase. Is the heal speed you saw improved per
file or
> the over all time it took to heal the data?
>
>>
>>
>>
>> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey
> wrote:
>> >
>> > 8+2 and 8+3 configurations are not the limitation but just
suggestions.
>> > You can create 16+3 volume without any issue.
>> >
>> > Ashish
>> >
>> > 
>> > From: "Alastair Neil" >
>> > To: "gluster-users" >
>> > Sent: Friday, May 5, 2017 2:23:32 AM
>> > Subject: [Gluster-users] disperse volume brick counts limits in
RHES
>> >
>> >
>> > Hi
>> >
>> > we are deploying a large (24node/45brick) cluster and noted
that the
>> > RHES
>> > guidelines limit the number of data bricks in a disperse set to
8.  Is
>> > there
>> > any reason for this.  I am aware that you want this to be a
power of 2,
>> > but
>> > as we have a large number of nodes we were planning on going
with 16+3.
>> > Dropping to 8+2 or 8+3 will be a real waste for us.
>> >
>> > Thanks,
>> >
>> >
>> > Alastair
>> >
>> >
>> > ___
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org 
>> > http://lists.gluster.org/mailman/listinfo/gluster-users

>> >
>> >
>> > ___
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org 
>> > http://lists.gluster.org/mailman/listinfo/gluster-users

>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org 
>> http://lists.gluster.org/mailman/listinfo/gluster-users

>
>
>
>
> --
> Pranith




--
Pranith


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-05 Thread Alastair Neil
What network do you have?


On 5 May 2017 at 09:51, Serkan Çoban  wrote:

> In our use case every node has 26 bricks. I am using 60 nodes, one 9PB
> volume with 16+4 EC configuration, each brick in a sub-volume is on
> different host.
> We put 15-20k 2GB files every day into 10-15 folders. So it is 1500K
> files/folder. Our gluster version is 3.7.11.
> Heal speed in this environment is 8-10MB/sec/brick.
>
> I did some tests for parallel self heal feature with version 3.9, two
> servers 26 bricks each, 8+2 and 16+4 EC configuration.
> This was a small test environment and the results are as I said 8+2 is
> 2x faster then 16+4 with parallel self heal threads set to 2/4.
> In 1-2 months our new servers arriving, I will do detailed tests for
> heal performance for 8+2 and 16+4 and inform you the results.
>
>
> On Fri, May 5, 2017 at 2:54 PM, Pranith Kumar Karampuri
>  wrote:
> >
> >
> > On Fri, May 5, 2017 at 5:19 PM, Pranith Kumar Karampuri
> >  wrote:
> >>
> >>
> >>
> >> On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban 
> >> wrote:
> >>>
> >>> It is the over all time, 8TB data disk healed 2x faster in 8+2
> >>> configuration.
> >>
> >>
> >> Wow, that is counter intuitive for me. I will need to explore about this
> >> to find out why that could be. Thanks a lot for this feedback!
> >
> >
> > From memory I remember you said you have a lot of small files hosted on
> the
> > volume, right? It could be because of the bug
> > https://review.gluster.org/17151 is fixing. That is the only reason I
> could
> > guess right now. We will try to test this kind of case if you could give
> us
> > a bit more details about average file-size/depth of directories etc to
> > simulate similar looking directory structure.
> >
> >>
> >>
> >>>
> >>>
> >>> On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
> >>>  wrote:
> >>> >
> >>> >
> >>> > On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban  >
> >>> > wrote:
> >>> >>
> >>> >> Healing gets slower as you increase m in m+n configuration.
> >>> >> We are using 16+4 configuration without any problems other then heal
> >>> >> speed.
> >>> >> I tested heal speed with 8+2 and 16+4 on 3.9.0 and see that heals on
> >>> >> 8+2 is faster by 2x.
> >>> >
> >>> >
> >>> > As you increase number of nodes that are participating in an EC set
> >>> > number
> >>> > of parallel heals increase. Is the heal speed you saw improved per
> file
> >>> > or
> >>> > the over all time it took to heal the data?
> >>> >
> >>> >>
> >>> >>
> >>> >>
> >>> >> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey 
> >>> >> wrote:
> >>> >> >
> >>> >> > 8+2 and 8+3 configurations are not the limitation but just
> >>> >> > suggestions.
> >>> >> > You can create 16+3 volume without any issue.
> >>> >> >
> >>> >> > Ashish
> >>> >> >
> >>> >> > 
> >>> >> > From: "Alastair Neil" 
> >>> >> > To: "gluster-users" 
> >>> >> > Sent: Friday, May 5, 2017 2:23:32 AM
> >>> >> > Subject: [Gluster-users] disperse volume brick counts limits in
> RHES
> >>> >> >
> >>> >> >
> >>> >> > Hi
> >>> >> >
> >>> >> > we are deploying a large (24node/45brick) cluster and noted that
> the
> >>> >> > RHES
> >>> >> > guidelines limit the number of data bricks in a disperse set to 8.
> >>> >> > Is
> >>> >> > there
> >>> >> > any reason for this.  I am aware that you want this to be a power
> of
> >>> >> > 2,
> >>> >> > but
> >>> >> > as we have a large number of nodes we were planning on going with
> >>> >> > 16+3.
> >>> >> > Dropping to 8+2 or 8+3 will be a real waste for us.
> >>> >> >
> >>> >> > Thanks,
> >>> >> >
> >>> >> >
> >>> >> > Alastair
> >>> >> >
> >>> >> >
> >>> >> > ___
> >>> >> > Gluster-users mailing list
> >>> >> > Gluster-users@gluster.org
> >>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
> >>> >> >
> >>> >> >
> >>> >> > ___
> >>> >> > Gluster-users mailing list
> >>> >> > Gluster-users@gluster.org
> >>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
> >>> >> ___
> >>> >> Gluster-users mailing list
> >>> >> Gluster-users@gluster.org
> >>> >> http://lists.gluster.org/mailman/listinfo/gluster-users
> >>> >
> >>> >
> >>> >
> >>> >
> >>> > --
> >>> > Pranith
> >>
> >>
> >>
> >>
> >> --
> >> Pranith
> >
> >
> >
> >
> > --
> > Pranith
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-05 Thread Pranith Kumar Karampuri
Wondering if Xavi knows something.

On Fri, May 5, 2017 at 7:24 PM, Pranith Kumar Karampuri  wrote:

>
>
> On Fri, May 5, 2017 at 7:21 PM, Serkan Çoban 
> wrote:
>
>> In our use case every node has 26 bricks. I am using 60 nodes, one 9PB
>> volume with 16+4 EC configuration, each brick in a sub-volume is on
>> different host.
>> We put 15-20k 2GB files every day into 10-15 folders. So it is 1500K
>> files/folder. Our gluster version is 3.7.11.
>> Heal speed in this environment is 8-10MB/sec/brick.
>>
>> I did some tests for parallel self heal feature with version 3.9, two
>> servers 26 bricks each, 8+2 and 16+4 EC configuration.
>> This was a small test environment and the results are as I said 8+2 is
>> 2x faster then 16+4 with parallel self heal threads set to 2/4.
>> In 1-2 months our new servers arriving, I will do detailed tests for
>> heal performance for 8+2 and 16+4 and inform you the results.
>>
>
> In that case I still don't know why this is the case. Thanks for the
> inputs. I will also try to find out how long a 2GB file takes in 8+2 vs
> 16+4 and see if there is something I need to look closely.
>
>
>>
>>
>> On Fri, May 5, 2017 at 2:54 PM, Pranith Kumar Karampuri
>>  wrote:
>> >
>> >
>> > On Fri, May 5, 2017 at 5:19 PM, Pranith Kumar Karampuri
>> >  wrote:
>> >>
>> >>
>> >>
>> >> On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban 
>> >> wrote:
>> >>>
>> >>> It is the over all time, 8TB data disk healed 2x faster in 8+2
>> >>> configuration.
>> >>
>> >>
>> >> Wow, that is counter intuitive for me. I will need to explore about
>> this
>> >> to find out why that could be. Thanks a lot for this feedback!
>> >
>> >
>> > From memory I remember you said you have a lot of small files hosted on
>> the
>> > volume, right? It could be because of the bug
>> > https://review.gluster.org/17151 is fixing. That is the only reason I
>> could
>> > guess right now. We will try to test this kind of case if you could
>> give us
>> > a bit more details about average file-size/depth of directories etc to
>> > simulate similar looking directory structure.
>> >
>> >>
>> >>
>> >>>
>> >>>
>> >>> On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
>> >>>  wrote:
>> >>> >
>> >>> >
>> >>> > On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban <
>> cobanser...@gmail.com>
>> >>> > wrote:
>> >>> >>
>> >>> >> Healing gets slower as you increase m in m+n configuration.
>> >>> >> We are using 16+4 configuration without any problems other then
>> heal
>> >>> >> speed.
>> >>> >> I tested heal speed with 8+2 and 16+4 on 3.9.0 and see that heals
>> on
>> >>> >> 8+2 is faster by 2x.
>> >>> >
>> >>> >
>> >>> > As you increase number of nodes that are participating in an EC set
>> >>> > number
>> >>> > of parallel heals increase. Is the heal speed you saw improved per
>> file
>> >>> > or
>> >>> > the over all time it took to heal the data?
>> >>> >
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey > >
>> >>> >> wrote:
>> >>> >> >
>> >>> >> > 8+2 and 8+3 configurations are not the limitation but just
>> >>> >> > suggestions.
>> >>> >> > You can create 16+3 volume without any issue.
>> >>> >> >
>> >>> >> > Ashish
>> >>> >> >
>> >>> >> > 
>> >>> >> > From: "Alastair Neil" 
>> >>> >> > To: "gluster-users" 
>> >>> >> > Sent: Friday, May 5, 2017 2:23:32 AM
>> >>> >> > Subject: [Gluster-users] disperse volume brick counts limits in
>> RHES
>> >>> >> >
>> >>> >> >
>> >>> >> > Hi
>> >>> >> >
>> >>> >> > we are deploying a large (24node/45brick) cluster and noted that
>> the
>> >>> >> > RHES
>> >>> >> > guidelines limit the number of data bricks in a disperse set to
>> 8.
>> >>> >> > Is
>> >>> >> > there
>> >>> >> > any reason for this.  I am aware that you want this to be a
>> power of
>> >>> >> > 2,
>> >>> >> > but
>> >>> >> > as we have a large number of nodes we were planning on going with
>> >>> >> > 16+3.
>> >>> >> > Dropping to 8+2 or 8+3 will be a real waste for us.
>> >>> >> >
>> >>> >> > Thanks,
>> >>> >> >
>> >>> >> >
>> >>> >> > Alastair
>> >>> >> >
>> >>> >> >
>> >>> >> > ___
>> >>> >> > Gluster-users mailing list
>> >>> >> > Gluster-users@gluster.org
>> >>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
>> >>> >> >
>> >>> >> >
>> >>> >> > ___
>> >>> >> > Gluster-users mailing list
>> >>> >> > Gluster-users@gluster.org
>> >>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
>> >>> >> ___
>> >>> >> Gluster-users mailing list
>> >>> >> Gluster-users@gluster.org
>> >>> >> http://lists.gluster.org/mailman/listinfo/gluster-users
>> >>> >
>> >>> >
>> >>> >
>> >>> >
>> >>> > --
>> >>> > Pranith
>> >>
>> >>
>> >>
>> >>
>> >> 

Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-05 Thread Pranith Kumar Karampuri
On Fri, May 5, 2017 at 7:21 PM, Serkan Çoban  wrote:

> In our use case every node has 26 bricks. I am using 60 nodes, one 9PB
> volume with 16+4 EC configuration, each brick in a sub-volume is on
> different host.
> We put 15-20k 2GB files every day into 10-15 folders. So it is 1500K
> files/folder. Our gluster version is 3.7.11.
> Heal speed in this environment is 8-10MB/sec/brick.
>
> I did some tests for parallel self heal feature with version 3.9, two
> servers 26 bricks each, 8+2 and 16+4 EC configuration.
> This was a small test environment and the results are as I said 8+2 is
> 2x faster then 16+4 with parallel self heal threads set to 2/4.
> In 1-2 months our new servers arriving, I will do detailed tests for
> heal performance for 8+2 and 16+4 and inform you the results.
>

In that case I still don't know why this is the case. Thanks for the
inputs. I will also try to find out how long a 2GB file takes in 8+2 vs
16+4 and see if there is something I need to look at closely.


>
>
> On Fri, May 5, 2017 at 2:54 PM, Pranith Kumar Karampuri
>  wrote:
> >
> >
> > On Fri, May 5, 2017 at 5:19 PM, Pranith Kumar Karampuri
> >  wrote:
> >>
> >>
> >>
> >> On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban 
> >> wrote:
> >>>
> >>> It is the over all time, 8TB data disk healed 2x faster in 8+2
> >>> configuration.
> >>
> >>
> >> Wow, that is counter intuitive for me. I will need to explore about this
> >> to find out why that could be. Thanks a lot for this feedback!
> >
> >
> > From memory I remember you said you have a lot of small files hosted on
> the
> > volume, right? It could be because of the bug
> > https://review.gluster.org/17151 is fixing. That is the only reason I
> could
> > guess right now. We will try to test this kind of case if you could give
> us
> > a bit more details about average file-size/depth of directories etc to
> > simulate similar looking directory structure.
> >
> >>
> >>
> >>>
> >>>
> >>> On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
> >>>  wrote:
> >>> >
> >>> >
> >>> > On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban  >
> >>> > wrote:
> >>> >>
> >>> >> Healing gets slower as you increase m in m+n configuration.
> >>> >> We are using 16+4 configuration without any problems other then heal
> >>> >> speed.
> >>> >> I tested heal speed with 8+2 and 16+4 on 3.9.0 and see that heals on
> >>> >> 8+2 is faster by 2x.
> >>> >
> >>> >
> >>> > As you increase number of nodes that are participating in an EC set
> >>> > number
> >>> > of parallel heals increase. Is the heal speed you saw improved per
> file
> >>> > or
> >>> > the over all time it took to heal the data?
> >>> >
> >>> >>
> >>> >>
> >>> >>
> >>> >> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey 
> >>> >> wrote:
> >>> >> >
> >>> >> > 8+2 and 8+3 configurations are not the limitation but just
> >>> >> > suggestions.
> >>> >> > You can create 16+3 volume without any issue.
> >>> >> >
> >>> >> > Ashish
> >>> >> >
> >>> >> > 
> >>> >> > From: "Alastair Neil" 
> >>> >> > To: "gluster-users" 
> >>> >> > Sent: Friday, May 5, 2017 2:23:32 AM
> >>> >> > Subject: [Gluster-users] disperse volume brick counts limits in
> RHES
> >>> >> >
> >>> >> >
> >>> >> > Hi
> >>> >> >
> >>> >> > we are deploying a large (24node/45brick) cluster and noted that
> the
> >>> >> > RHES
> >>> >> > guidelines limit the number of data bricks in a disperse set to 8.
> >>> >> > Is
> >>> >> > there
> >>> >> > any reason for this.  I am aware that you want this to be a power
> of
> >>> >> > 2,
> >>> >> > but
> >>> >> > as we have a large number of nodes we were planning on going with
> >>> >> > 16+3.
> >>> >> > Dropping to 8+2 or 8+3 will be a real waste for us.
> >>> >> >
> >>> >> > Thanks,
> >>> >> >
> >>> >> >
> >>> >> > Alastair
> >>> >> >
> >>> >> >
> >>> >> > ___
> >>> >> > Gluster-users mailing list
> >>> >> > Gluster-users@gluster.org
> >>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
> >>> >> >
> >>> >> >
> >>> >> > ___
> >>> >> > Gluster-users mailing list
> >>> >> > Gluster-users@gluster.org
> >>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
> >>> >> ___
> >>> >> Gluster-users mailing list
> >>> >> Gluster-users@gluster.org
> >>> >> http://lists.gluster.org/mailman/listinfo/gluster-users
> >>> >
> >>> >
> >>> >
> >>> >
> >>> > --
> >>> > Pranith
> >>
> >>
> >>
> >>
> >> --
> >> Pranith
> >
> >
> >
> >
> > --
> > Pranith
>



-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-05 Thread Serkan Çoban
In our use case every node has 26 bricks. I am using 60 nodes and one 9PB
volume with a 16+4 EC configuration; each brick in a sub-volume is on a
different host.
We put 15-20k 2GB files every day into 10-15 folders, so it is roughly 1500K
files/folder. Our gluster version is 3.7.11.
Heal speed in this environment is 8-10MB/sec/brick.

I did some tests of the parallel self-heal feature with version 3.9, two
servers with 26 bricks each, in 8+2 and 16+4 EC configurations.
This was a small test environment and the results are, as I said, that 8+2 is
2x faster than 16+4 with parallel self-heal threads set to 2/4.
In 1-2 months our new servers are arriving; I will do detailed tests of
heal performance for 8+2 and 16+4 and inform you of the results.
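
For context, at that heal speed a full brick takes on the order of ten days
to rebuild (rough arithmetic, assuming ~9 MB/s sustained):

    # ~8 TB of data at ~9 MB/s per brick
    echo "$(( 8 * 1000 * 1000 / 9 / 3600 / 24 )) days"   # ~10 days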


On Fri, May 5, 2017 at 2:54 PM, Pranith Kumar Karampuri
 wrote:
>
>
> On Fri, May 5, 2017 at 5:19 PM, Pranith Kumar Karampuri
>  wrote:
>>
>>
>>
>> On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban 
>> wrote:
>>>
>>> It is the over all time, 8TB data disk healed 2x faster in 8+2
>>> configuration.
>>
>>
>> Wow, that is counter intuitive for me. I will need to explore about this
>> to find out why that could be. Thanks a lot for this feedback!
>
>
> From memory I remember you said you have a lot of small files hosted on the
> volume, right? It could be because of the bug
> https://review.gluster.org/17151 is fixing. That is the only reason I could
> guess right now. We will try to test this kind of case if you could give us
> a bit more details about average file-size/depth of directories etc to
> simulate similar looking directory structure.
>
>>
>>
>>>
>>>
>>> On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
>>>  wrote:
>>> >
>>> >
>>> > On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban 
>>> > wrote:
>>> >>
>>> >> Healing gets slower as you increase m in m+n configuration.
>>> >> We are using 16+4 configuration without any problems other then heal
>>> >> speed.
>>> >> I tested heal speed with 8+2 and 16+4 on 3.9.0 and see that heals on
>>> >> 8+2 is faster by 2x.
>>> >
>>> >
>>> > As you increase number of nodes that are participating in an EC set
>>> > number
>>> > of parallel heals increase. Is the heal speed you saw improved per file
>>> > or
>>> > the over all time it took to heal the data?
>>> >
>>> >>
>>> >>
>>> >>
>>> >> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey 
>>> >> wrote:
>>> >> >
>>> >> > 8+2 and 8+3 configurations are not the limitation but just
>>> >> > suggestions.
>>> >> > You can create 16+3 volume without any issue.
>>> >> >
>>> >> > Ashish
>>> >> >
>>> >> > 
>>> >> > From: "Alastair Neil" 
>>> >> > To: "gluster-users" 
>>> >> > Sent: Friday, May 5, 2017 2:23:32 AM
>>> >> > Subject: [Gluster-users] disperse volume brick counts limits in RHES
>>> >> >
>>> >> >
>>> >> > Hi
>>> >> >
>>> >> > we are deploying a large (24node/45brick) cluster and noted that the
>>> >> > RHES
>>> >> > guidelines limit the number of data bricks in a disperse set to 8.
>>> >> > Is
>>> >> > there
>>> >> > any reason for this.  I am aware that you want this to be a power of
>>> >> > 2,
>>> >> > but
>>> >> > as we have a large number of nodes we were planning on going with
>>> >> > 16+3.
>>> >> > Dropping to 8+2 or 8+3 will be a real waste for us.
>>> >> >
>>> >> > Thanks,
>>> >> >
>>> >> >
>>> >> > Alastair
>>> >> >
>>> >> >
>>> >> > ___
>>> >> > Gluster-users mailing list
>>> >> > Gluster-users@gluster.org
>>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
>>> >> >
>>> >> >
>>> >> > ___
>>> >> > Gluster-users mailing list
>>> >> > Gluster-users@gluster.org
>>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
>>> >> ___
>>> >> Gluster-users mailing list
>>> >> Gluster-users@gluster.org
>>> >> http://lists.gluster.org/mailman/listinfo/gluster-users
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > Pranith
>>
>>
>>
>>
>> --
>> Pranith
>
>
>
>
> --
> Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-05 Thread Pranith Kumar Karampuri
On Fri, May 5, 2017 at 5:19 PM, Pranith Kumar Karampuri  wrote:

>
>
> On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban 
> wrote:
>
>> It is the over all time, 8TB data disk healed 2x faster in 8+2
>> configuration.
>>
>
> Wow, that is counter intuitive for me. I will need to explore about this
> to find out why that could be. Thanks a lot for this feedback!
>

From memory I remember you said you have a lot of small files hosted on the
volume, right? It could be because of the bug that
https://review.gluster.org/17151 is fixing. That is the only reason I could
guess right now. We will try to test this kind of case if you could give us
a bit more details about average file-size/depth of directories etc. to
simulate a similar-looking directory structure.


>
>
>>
>> On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
>>  wrote:
>> >
>> >
>> > On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban 
>> wrote:
>> >>
>> >> Healing gets slower as you increase m in m+n configuration.
>> >> We are using 16+4 configuration without any problems other then heal
>> >> speed.
>> >> I tested heal speed with 8+2 and 16+4 on 3.9.0 and see that heals on
>> >> 8+2 is faster by 2x.
>> >
>> >
>> > As you increase number of nodes that are participating in an EC set
>> number
>> > of parallel heals increase. Is the heal speed you saw improved per file
>> or
>> > the over all time it took to heal the data?
>> >
>> >>
>> >>
>> >>
>> >> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey 
>> wrote:
>> >> >
>> >> > 8+2 and 8+3 configurations are not the limitation but just
>> suggestions.
>> >> > You can create 16+3 volume without any issue.
>> >> >
>> >> > Ashish
>> >> >
>> >> > 
>> >> > From: "Alastair Neil" 
>> >> > To: "gluster-users" 
>> >> > Sent: Friday, May 5, 2017 2:23:32 AM
>> >> > Subject: [Gluster-users] disperse volume brick counts limits in RHES
>> >> >
>> >> >
>> >> > Hi
>> >> >
>> >> > we are deploying a large (24node/45brick) cluster and noted that the
>> >> > RHES
>> >> > guidelines limit the number of data bricks in a disperse set to 8.
>> Is
>> >> > there
>> >> > any reason for this.  I am aware that you want this to be a power of
>> 2,
>> >> > but
>> >> > as we have a large number of nodes we were planning on going with
>> 16+3.
>> >> > Dropping to 8+2 or 8+3 will be a real waste for us.
>> >> >
>> >> > Thanks,
>> >> >
>> >> >
>> >> > Alastair
>> >> >
>> >> >
>> >> > ___
>> >> > Gluster-users mailing list
>> >> > Gluster-users@gluster.org
>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
>> >> >
>> >> >
>> >> > ___
>> >> > Gluster-users mailing list
>> >> > Gluster-users@gluster.org
>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
>> >> ___
>> >> Gluster-users mailing list
>> >> Gluster-users@gluster.org
>> >> http://lists.gluster.org/mailman/listinfo/gluster-users
>> >
>> >
>> >
>> >
>> > --
>> > Pranith
>>
>
>
>
> --
> Pranith
>



-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-05 Thread Pranith Kumar Karampuri
On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban  wrote:

> It is the over all time, 8TB data disk healed 2x faster in 8+2
> configuration.
>

Wow, that is counter-intuitive for me. I will need to explore this to
find out why that could be. Thanks a lot for this feedback!


>
> On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
>  wrote:
> >
> >
> > On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban 
> wrote:
> >>
> >> Healing gets slower as you increase m in m+n configuration.
> >> We are using 16+4 configuration without any problems other then heal
> >> speed.
> >> I tested heal speed with 8+2 and 16+4 on 3.9.0 and see that heals on
> >> 8+2 is faster by 2x.
> >
> >
> > As you increase number of nodes that are participating in an EC set
> number
> > of parallel heals increase. Is the heal speed you saw improved per file
> or
> > the over all time it took to heal the data?
> >
> >>
> >>
> >>
> >> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey 
> wrote:
> >> >
> >> > 8+2 and 8+3 configurations are not the limitation but just
> suggestions.
> >> > You can create 16+3 volume without any issue.
> >> >
> >> > Ashish
> >> >
> >> > 
> >> > From: "Alastair Neil" 
> >> > To: "gluster-users" 
> >> > Sent: Friday, May 5, 2017 2:23:32 AM
> >> > Subject: [Gluster-users] disperse volume brick counts limits in RHES
> >> >
> >> >
> >> > Hi
> >> >
> >> > we are deploying a large (24node/45brick) cluster and noted that the
> >> > RHES
> >> > guidelines limit the number of data bricks in a disperse set to 8.  Is
> >> > there
> >> > any reason for this.  I am aware that you want this to be a power of
> 2,
> >> > but
> >> > as we have a large number of nodes we were planning on going with
> 16+3.
> >> > Dropping to 8+2 or 8+3 will be a real waste for us.
> >> >
> >> > Thanks,
> >> >
> >> >
> >> > Alastair
> >> >
> >> >
> >> > ___
> >> > Gluster-users mailing list
> >> > Gluster-users@gluster.org
> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
> >> >
> >> >
> >> > ___
> >> > Gluster-users mailing list
> >> > Gluster-users@gluster.org
> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
> >> ___
> >> Gluster-users mailing list
> >> Gluster-users@gluster.org
> >> http://lists.gluster.org/mailman/listinfo/gluster-users
> >
> >
> >
> >
> > --
> > Pranith
>



-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-05 Thread Serkan Çoban
It is the overall time; an 8TB data disk healed 2x faster in the 8+2 configuration.

On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
 wrote:
>
>
> On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban  wrote:
>>
>> Healing gets slower as you increase m in m+n configuration.
>> We are using 16+4 configuration without any problems other then heal
>> speed.
>> I tested heal speed with 8+2 and 16+4 on 3.9.0 and see that heals on
>> 8+2 is faster by 2x.
>
>
> As you increase number of nodes that are participating in an EC set number
> of parallel heals increase. Is the heal speed you saw improved per file or
> the over all time it took to heal the data?
>
>>
>>
>>
>> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey  wrote:
>> >
>> > 8+2 and 8+3 configurations are not the limitation but just suggestions.
>> > You can create 16+3 volume without any issue.
>> >
>> > Ashish
>> >
>> > 
>> > From: "Alastair Neil" 
>> > To: "gluster-users" 
>> > Sent: Friday, May 5, 2017 2:23:32 AM
>> > Subject: [Gluster-users] disperse volume brick counts limits in RHES
>> >
>> >
>> > Hi
>> >
>> > we are deploying a large (24node/45brick) cluster and noted that the
>> > RHES
>> > guidelines limit the number of data bricks in a disperse set to 8.  Is
>> > there
>> > any reason for this.  I am aware that you want this to be a power of 2,
>> > but
>> > as we have a large number of nodes we were planning on going with 16+3.
>> > Dropping to 8+2 or 8+3 will be a real waste for us.
>> >
>> > Thanks,
>> >
>> >
>> > Alastair
>> >
>> >
>> > ___
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org
>> > http://lists.gluster.org/mailman/listinfo/gluster-users
>> >
>> >
>> > ___
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org
>> > http://lists.gluster.org/mailman/listinfo/gluster-users
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
>
> --
> Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-05 Thread Pranith Kumar Karampuri
On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban  wrote:

> Healing gets slower as you increase m in m+n configuration.
> We are using 16+4 configuration without any problems other then heal speed.
> I tested heal speed with 8+2 and 16+4 on 3.9.0 and see that heals on
> 8+2 is faster by 2x.
>

As you increase the number of nodes participating in an EC set, the number
of parallel heals increases. Is the heal speed improvement you saw per file,
or in the overall time it took to heal the data?


>
>
> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey  wrote:
> >
> > 8+2 and 8+3 configurations are not the limitation but just suggestions.
> > You can create 16+3 volume without any issue.
> >
> > Ashish
> >
> > 
> > From: "Alastair Neil" 
> > To: "gluster-users" 
> > Sent: Friday, May 5, 2017 2:23:32 AM
> > Subject: [Gluster-users] disperse volume brick counts limits in RHES
> >
> >
> > Hi
> >
> > we are deploying a large (24node/45brick) cluster and noted that the RHES
> > guidelines limit the number of data bricks in a disperse set to 8.  Is
> there
> > any reason for this.  I am aware that you want this to be a power of 2,
> but
> > as we have a large number of nodes we were planning on going with 16+3.
> > Dropping to 8+2 or 8+3 will be a real waste for us.
> >
> > Thanks,
> >
> >
> > Alastair
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-05 Thread Serkan Çoban
Healing gets slower as you increase m in an m+n configuration.
We are using a 16+4 configuration without any problems other than heal speed.
I tested heal speed with 8+2 and 16+4 on 3.9.0 and saw that heals on
8+2 are faster by 2x.


On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey  wrote:
>
> 8+2 and 8+3 configurations are not the limitation but just suggestions.
> You can create 16+3 volume without any issue.
>
> Ashish
>
> 
> From: "Alastair Neil" 
> To: "gluster-users" 
> Sent: Friday, May 5, 2017 2:23:32 AM
> Subject: [Gluster-users] disperse volume brick counts limits in RHES
>
>
> Hi
>
> we are deploying a large (24node/45brick) cluster and noted that the RHES
> guidelines limit the number of data bricks in a disperse set to 8.  Is there
> any reason for this.  I am aware that you want this to be a power of 2, but
> as we have a large number of nodes we were planning on going with 16+3.
> Dropping to 8+2 or 8+3 will be a real waste for us.
>
> Thanks,
>
>
> Alastair
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-05 Thread Ashish Pandey

8+2 and 8+3 configurations are not a limitation, just suggestions. 
You can create a 16+3 volume without any issue. 
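
For example, something like this should work (host names and brick paths are 
only illustrative):

    # 16 data + 3 redundancy bricks, each on a different server
    gluster volume create testvol disperse-data 16 redundancy 3 \
        server{1..19}:/bricks/brick1/testvol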

Ashish 

- Original Message -

From: "Alastair Neil"  
To: "gluster-users"  
Sent: Friday, May 5, 2017 2:23:32 AM 
Subject: [Gluster-users] disperse volume brick counts limits in RHES 

Hi 

we are deploying a large (24node/45brick) cluster and noted that the RHES 
guidelines limit the number of data bricks in a disperse set to 8. Is there any 
reason for this? I am aware that you want this to be a power of 2, but as we 
have a large number of nodes we were planning on going with 16+3. Dropping to 
8+2 or 8+3 will be a real waste for us. 

Thanks, 


Alastair 


___ 
Gluster-users mailing list 
Gluster-users@gluster.org 
http://lists.gluster.org/mailman/listinfo/gluster-users 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users