Re: [Gluster-users] EC clarification

2016-09-21 Thread Jeff Darcy
> 2016-09-21 20:56 GMT+02:00 Serkan Çoban:
> > Then you can use 8+3 with 11 servers.
> 
> Stripe size won't be good: 512*(8-3) = 2560, which is not 2048 (or a multiple of it)

It's not really 512*(8+3) though.  Even though there are 11 fragments,
they only contain 8 fragments' worth of data.  They just encode it with
enough redundancy that *any* 8 contains the whole.
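Using the same 512-byte figure from the quoted message, a quick sanity
check in shell (just illustrative arithmetic):

  echo $((512 * 8))    # 4096 -> usable stripe for 8+3, a multiple of 2048
  echo $((512 * 11))   # 5632 -> counting all 11 fragments, which is not
                       #         the number that matters for alignment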

Re: [Gluster-users] EC clarification

2016-09-21 Thread Gandalf Corvotempesta
2016-09-21 20:56 GMT+02:00 Serkan Çoban:
> Then you can use 8+3 with 11 servers.

Stripe size won't be good: 512*(8-3) = 2560, which is not 2048 (or a multiple of it)

Re: [Gluster-users] EC clarification

2016-09-21 Thread Serkan Çoban
Then you can use 8+3 with 11 servers.

On Wed, Sep 21, 2016 at 9:17 PM, Gandalf Corvotempesta wrote:
> 2016-09-21 16:13 GMT+02:00 Serkan Çoban:
>> 8+2 is recommended for 10 servers. For an n+k setup it is best to
>> choose n as a power of 2 (4, 8, 16, etc.)
>> You need to add 10 bricks if you want to extend the volume.
>
> 8+2 means tolerating up to 2 failed bricks, right?
> That's too low. I need to tolerate at least 3 failed bricks.

Re: [Gluster-users] EC clarification

2016-09-21 Thread Gandalf Corvotempesta
2016-09-21 16:13 GMT+02:00 Serkan Çoban:
> 8+2 is recommended for 10 servers. For an n+k setup it is best to
> choose n as a power of 2 (4, 8, 16, etc.)
> You need to add 10 bricks if you want to extend the volume.

8+2 means tolerating up to 2 failed bricks, right?
That's too low. I need to tolerate at least 3 failed bricks.

Re: [Gluster-users] EC clarification

2016-09-21 Thread Serkan Çoban
8+2 is recommended for 10 servers. For an n+k setup it is best to
choose n as a power of 2 (4, 8, 16, etc.)
You need to add 10 bricks if you want to extend the volume.
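For example, extending an 8+2 volume laid out over 10 servers would mean
adding another full set of 10 bricks, one per server, roughly like this
(volume name and brick paths are only placeholders):

  gluster volume add-brick myvol server{1..10}:/bricks/brick2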


On Wed, Sep 21, 2016 at 4:12 PM, Gandalf Corvotempesta wrote:
> 2016-09-21 14:42 GMT+02:00 Xavier Hernandez:
>> You *must* ensure that *all* bricks forming a single disperse set are placed
>> in a different server. There are no 4 special fragments. All fragments have
>> the same importance. The way to do that is ordering them when the volume is
>> created:
>>
>> gluster volume create test disperse 16 redundancy 4
>> server{1..20}:/bricks/test1 server{1..20}:/bricks/test2
>> server{1..20}:/bricks/test3
>>
>> This way all 20 fragments from each disperse set will be placed in a
>> different server. However each server will have 3 bricks and no fragment
>> from a single file will be stored in more than one brick of each server.
>
> Now it's clear.
> So, at the very minimum, EC is good starting from 7+3 (10 servers with 1
> brick each) because: 512*(7-3) = 2048
> Any smaller combination would mean less redundancy (6+2) or a
> non-optimal stripe size like 512*(5-3)=1024
>
> Is this correct? And what if I have to add some bricks to the current
> servers, or add new servers?
> Can I add them freely, or do I have to follow some rules?


Re: [Gluster-users] EC clarification

2016-09-21 Thread Gandalf Corvotempesta
2016-09-21 14:42 GMT+02:00 Xavier Hernandez:
> You *must* ensure that *all* bricks forming a single disperse set are placed
> in a different server. There are no 4 special fragments. All fragments have
> the same importance. The way to do that is ordering them when the volume is
> created:
>
> gluster volume create test disperse 16 redundancy 4
> server{1..20}:/bricks/test1 server{1..20}:/bricks/test2
> server{1..20}:/bricks/test3
>
> This way all 20 fragments from each disperse set will be placed in a
> different server. However each server will have 3 bricks and no fragment
> from a single file will be stored in more than one brick of each server.

Now it's clear.
So, at the very minimum, EC is good starting from 7+3 (10 servers with 1
brick each) because: 512*(7-3) = 2048
Any smaller combination would mean less redundancy (6+2) or a
non-optimal stripe size like 512*(5-3)=1024

Is this correct? And what if I have to add some bricks to the current
servers, or add new servers?
Can I add them freely, or do I have to follow some rules?


Re: [Gluster-users] EC clarification

2016-09-21 Thread Xavier Hernandez

On 21/09/16 14:36, Gandalf Corvotempesta wrote:

> On 01 Sep 2016 10:18 AM, "Xavier Hernandez" <xhernan...@datalab.es> wrote:
>> If you put more than one fragment into the same server, you will lose
>> all the fragments if the server goes down. If there are more than 4
>> fragments on that server, the file will be unrecoverable until the
>> server is brought up again.
>>
>> Putting more than one fragment into a single server only makes sense
>> to account for disk failures, since the protection against server
>> failures is lower.
>
> Exactly, what I would like to ensure is that the 4 segments needed for
> recovery are placed automatically on at least 4 servers (and not on 4
> different bricks that could be on the same server)


You *must* ensure that *all* bricks forming a single disperse set are 
placed in a different server. There are no 4 special fragments. All 
fragments have the same importance. The way to do that is ordering them 
when the volume is created:


gluster volume create test disperse 16 redundancy 4 
server{1..20}:/bricks/test1 server{1..20}:/bricks/test2 
server{1..20}:/bricks/test3


This way all 20 fragments from each disperse set will be placed in a 
different server. However each server will have 3 bricks and no fragment 
from a single file will be stored in more than one brick of each server.
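Spelled out, the brace expansion above produces the following brick order
(trimmed for brevity); consecutive bricks on the command line form each
disperse set:

  gluster volume create test disperse 16 redundancy 4 \
      server1:/bricks/test1 server2:/bricks/test1 ... server20:/bricks/test1 \
      server1:/bricks/test2 server2:/bricks/test2 ... server20:/bricks/test2 \
      server1:/bricks/test3 server2:/bricks/test3 ... server20:/bricks/test3

  # bricks  1-20 -> first disperse set  (one brick per server)
  # bricks 21-40 -> second disperse set (one brick per server)
  # bricks 41-60 -> third disperse set  (one brick per server)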


Xavi


Re: [Gluster-users] EC clarification

2016-09-21 Thread Gandalf Corvotempesta
On 01 Sep 2016 10:18 AM, "Xavier Hernandez" wrote:
> If you put more than one fragment into the same server, you will lose all
> the fragments if the server goes down. If there are more than 4 fragments
> on that server, the file will be unrecoverable until the server is brought
> up again.
>
> Putting more than one fragment into a single server only makes sense to
> account for disk failures, since the protection against server failures is
> lower.
>

Exactly, what I would like to ensure is that the 4 segments needed for
recovery are placed automatically on at least 4 servers (and not on 4
different bricks that could be on the same server)

Re: [Gluster-users] EC clarification

2016-09-01 Thread Xavier Hernandez

Hi,

On 27/08/16 10:57, Gandalf Corvotempesta wrote:

> In short: how can I set the node hosting the erasure codes? In a 16+4 EC
> (or bigger) I would like to put the 4 bricks hosting the ECs on 4
> different servers so that I can lose 4 servers and still be able to
> access/recover data


EC builds several fragments of data for each file. In the case of a 16+4 
configuration, a file of size 1MB is transformed into 20 smaller files 
(fragments) of 64KB.
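As a quick check of those numbers (plain shell arithmetic, purely
illustrative):

  echo $((1024 * 1024 / 16))   # 65536 bytes = 64KB per fragment
  echo $((20 * 64))            # 1280KB stored in total for that 1MB file
                               # (25% overhead for 16+4)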


To recover the original file *any* subset of 16 fragments is enough. 
There aren't special fragments with more information or importance.


If you put more than one fragment into the same server, you will lose 
all the fragments if the server goes down. If there are more than 4 
fragments on that server, the file will be unrecoverable until the 
server is brought up again.


Putting more than one fragment into a single server only makes sense to 
account for disk failures, since the protection against server failures 
is lower.
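A hypothetical layout that illustrates the trade-off (server names and
brick paths are made up):

  # 16+4 spread over 10 servers with 2 bricks each: every server ends up
  # holding 2 fragments of each file
  gluster volume create test2 disperse 16 redundancy 4 \
      server{1..10}:/bricks/disk1 server{1..10}:/bricks/disk2
  # any 4 individual disks can fail, but only 2 whole servers
  # (2 servers x 2 fragments = 4 missing fragments, the maximum tolerated)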


Xavi







[Gluster-users] EC clarification

2016-08-27 Thread Gandalf Corvotempesta
In short: how can I set the node hosting the erasure codes? In a 16+4 EC
(or bigger) I would like to put the 4 bricks hosting the ECs on 4 different
servers so that I can lose 4 servers and still be able to access/recover
data