Re: [Gluster-users] To RAID or not to RAID...

2020-01-14 Thread Hu Bert
Hi,

our old setup is not really comparable, but I thought I'd drop some
lines... We once had a Distributed-Replicate setup with 4 x 3 = 12
disks (10 TB HDDs), simple JBOD, every disk == brick. It was running
pretty well until one of the disks died. The restore (reset-brick)
took about a month, because the application generates quite high I/O,
which slows down the volume and the disks.
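For reference, the reset-brick procedure is roughly the following sketch (volume name, host, and brick path are placeholders; check the exact syntax for your Gluster version):

```shell
# Take the failed brick offline (myvol, server1 and the path are placeholders).
gluster volume reset-brick myvol server1:/bricks/disk7 start
# ...swap the physical disk, recreate the filesystem, and remount it...
gluster volume reset-brick myvol server1:/bricks/disk7 server1:/bricks/disk7 commit force
# Self-heal then copies the data back - the slow, month-long part on busy HDDs:
gluster volume heal myvol info
```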

Next step: take servers with 10x10TB disks and build a RAID 10; RAID
array == brick, replicate volume (1 x 3 = 3). When a disk fails, you
only have to rebuild the SW RAID, which takes about 3-4 days, plus the
periodic redundancy checks. This was way better than the
JBOD/reset-brick scenario before, but still not optimal. Upcoming step:
build a distribute-replicate with lots of SSDs (maybe again with a
RAID underneath).

tl;dr what I wanted to say: we waste a lot of disks. It simply depends
on which setup you have and how you handle the situation when one of
the disks fails - and it will! ;-(
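To put rough numbers on the "wasted disks" (disk counts and sizes as described above; this is just back-of-the-envelope arithmetic, not a recommendation):

```shell
# Old setup: 4 x 3 = 12 bricks, one 10 TB disk each, replica 3:
# every file is stored three times, so usable = raw / 3.
JBOD_USABLE=$(( 12 * 10 / 3 ))        # 40 TB usable out of 120 TB raw

# New setup: 3 nodes, each one brick on a 10x10TB RAID 10 (raw halved),
# replica 3 across the nodes: usable = one node's RAID capacity.
RAID10_USABLE=$(( 10 * 10 / 2 ))      # 50 TB usable out of 300 TB raw

echo "JBOD replica-3:   ${JBOD_USABLE} TB usable"
echo "RAID10 replica-3: ${RAID10_USABLE} TB usable"
```

Either way, well over half the raw capacity goes to redundancy.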


regards
Hubert

On Tue, 14 Jan 2020 at 12:36, Markus Kern wrote:
>
>
> Greetings again!
>
> After reading Red Hat's documentation on optimizing Gluster storage,
> another question comes to mind:
>
> Let's presume that I want to go the distributed dispersed volume way:
> three nodes with two bricks each.
> According to Red Hat's recommendation, I should use RAID 6 as the underlying
> RAID for my planned workload.
> I am frightened by the "waste" of disks in such a case:
> when each brick is a RAID 6, I would "lose" two disks per brick - 12
> lost disks in total.
> On top of this, a distributed dispersed volume adds another layer of
> lost disk space.
>
> Am I wrong here? Or did I misunderstand the recommendations?
>
> Markus
> 
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/441850968
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users




Re: [Gluster-users] To RAID or not to RAID...

2020-01-14 Thread Strahil
Hi Markus,


You are right. I think that the 3-node setup matches a distributed volume.

According to 
https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/

Dispersed volumes use erasure codes to keep 'parity' on a separate brick.
In that case you can afford to lose a brick without losing data, but you
will need more bricks.

Yet, I don't see anything about RAID6 being required.

Use Gluster's official documentation (if possible), as it has the most recent
info.

Maybe you can share the amount of disks, RAID controllers and servers you
have, and your tolerance to data loss. Then I can share my thoughts on the
possible volume types.

Best Regards,
Strahil Nikolov

On Jan 14, 2020 20:33, Markus Kern  wrote:
>
> Hi Strahil, 
>
> thanks for your answer - but now I am completely lost :)
>
> From this documentation: 
> https://docs.oracle.com/cd/E52668_01/F10040/html/gluster-312-volume-distr-disp.html
>  
>
> "As a dispersed volume must have a minimum of three bricks, a 
> distributed dispersed volume must have at least six bricks. For example, 
> six nodes with one brick, or three nodes with two bricks on each node 
> are needed for this volume type." 
>
> So for a distributed dispersed volume I need at least six bricks. If 
> each brick is a RAID6, I have 6 x 2 Parity disks = 12 disks for parity. 
>
> In your example you only have one brick per node in a three node setup. 
> This is no distributed dispersed volume then, right? 
>
> A confused Markus 
>
>
> On 14.01.2020 16:29, Strahil wrote:
> > Hi Markus,
> >
> > A distributed dispersed volume is just like LVM's linear LV -> so in case of
> > brick failure, you lose the data on it.
> >
> > RAID 6 requires 2 disks for parity, so you can make a large RAID6
> > and use that as a single brick - so the disks that hold the parity
> > data are only 6 (3 nodes x 2 disks).
> >
> > Of course, if you have too many disks for a single RAID controller,
> > you can consider a replica volume with an arbiter.
> > 
> > 
> > Best Regards, 
> > Strahil Nikolov 




Re: [Gluster-users] To RAID or not to RAID...

2020-01-14 Thread Markus Kern

Hi Strahil,

thanks for your answer - but now I am completely lost :)

From this documentation:
https://docs.oracle.com/cd/E52668_01/F10040/html/gluster-312-volume-distr-disp.html

"As a dispersed volume must have a minimum of three bricks, a 
distributed dispersed volume must have at least six bricks. For example, 
six nodes with one brick, or three nodes with two bricks on each node 
are needed for this volume type."


So for a distributed dispersed volume I need at least six bricks. If 
each brick is a RAID6, I have 6 x 2 Parity disks = 12 disks for parity.


In your example you only have one brick per node in a three node setup. 
This is no distributed dispersed volume then, right?


A confused Markus


On 14.01.2020 16:29, Strahil wrote:

Hi Markus,

A distributed dispersed volume is just like LVM's linear LV -> so in case of
brick failure, you lose the data on it.

RAID 6 requires 2 disks for parity, so you can make a large RAID6
and use that as a single brick - so the disks that hold the parity
data are only 6 (3 nodes x 2 disks).

Of course, if you have too many disks for a single RAID controller,
you can consider a replica volume with an arbiter.


Best Regards,
Strahil Nikolov






Re: [Gluster-users] to RAID or not?

2016-07-04 Thread Dmitry Melekhov

04.07.2016 19:01, Matt Robinson wrote:

With mdadm any raid6 (especially with 12 disks) will be rubbish.


Well, this may be off-topic, but could you please explain why? (I've never
used md RAID other than RAID 1...)


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] to RAID or not?

2016-07-04 Thread tom
If you go the ZFS route - be absolutely sure you set xattr=sa on all
filesystems that will hold bricks BEFORE you create bricks on them. Not doing
so will cause major problems: data that should be deleted is not reclaimed
until after a forced unmount or reboot (which can take hours to days if there
are several terabytes of data to reclaim).

Setting it also vastly improves directory and stat() performance.

Setting it after the bricks had been created led to data inconsistencies and 
eventual data loss on a cluster we used to operate.
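For reference, a minimal sketch of the setting (pool and dataset names are hypothetical; set it before any brick data lands on the dataset):

```shell
# tank/brick1 is a placeholder dataset that will hold a Gluster brick.
zfs set xattr=sa tank/brick1
zfs get xattr tank/brick1        # verify it now reads "sa"
# Commonly paired with it for Gluster workloads:
zfs set acltype=posixacl tank/brick1
```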

-t


Re: [Gluster-users] to RAID or not?

2016-07-04 Thread Lindsay Mathieson

On 5/07/2016 12:54 AM, Gandalf Corvotempesta wrote:


No suggestions ?

On 14 Jun 2016 at 10:01 AM, "Gandalf Corvotempesta" wrote:


Let's assume a small cluster made by 3 servers, 12 disks/bricks each.
This cluster would be expanded to a maximum of 15 servers in near
future.

What do you suggest, a JBOD or a RAID? Which RAID level?




I setup my much smaller cluster with ZFS RAID10 on each node.

- Greatly increased the iops per node

- auto bitrot detection and repair

- SSD caches

- compression clawed back 30% of the disk space I lost to RAID10.

--
Lindsay Mathieson


Re: [Gluster-users] to RAID or not?

2016-07-04 Thread Russell Purinton
Agreed… It took me almost 2 years of tweaking and testing to get the 
performance I wanted.   

Different workloads require different configurations. Test different
configurations and find what works best for you!


Re: [Gluster-users] to RAID or not?

2016-07-04 Thread Russell Purinton
Sorry, example of 5 servers should read

> server1 A & B   replica to server 2 C & D
> server2 A & B   replica to server 3 C & D
> server3 A & B   replica to server 4 C & D
> server4 A & B   replica to server 5 C & D
> server5 A & B   replica to server 1 C & D


Adding each server should be as simple as using the replace-brick command to
move bricks C and D from server1 onto bricks C and D of the new server.

Then you can add-brick to create 2 new brick replicas from new server A and B 
to server1 C and D.
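A hedged sketch of what those two steps might look like with the Gluster CLI (volume name, hostnames, and brick paths are all placeholders; verify the exact syntax and replica count for your version):

```shell
# Move server1's C and D bricks onto the new server (data heals to the new bricks):
gluster volume replace-brick myvol server1:/bricks/C newserver:/bricks/C commit force
gluster volume replace-brick myvol server1:/bricks/D newserver:/bricks/D commit force

# Then grow the ring with new replica pairs; brick order defines the replica sets.
# The old server1 brick directories must be wiped before reusing the paths.
gluster volume add-brick myvol \
  newserver:/bricks/A server1:/bricks/C \
  newserver:/bricks/B server1:/bricks/D
```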



Re: [Gluster-users] to RAID or not?

2016-07-04 Thread tom
I would highly stress, regardless of whatever solution you choose - make sure 
you test actual workload performance before going all-in.

In my testing, performance (esp. iops and latency) decreased as I added bricks 
and additional nodes.  Since you have many spindles now, I would encourage you 
to test your workload up to and including the total brick count you ultimately 
expect.  RAID level and whether it’s md, zfs, or hardware isn’t likely to make 
as significant of a performance impact as Gluster and its various clients will. 
 Test failure scenarios and performance characteristics during impairment 
events thoroughly.  Make sure heals happen as you expect, including final 
contents of files modified during an impairment.  If you have many small files 
or directories that will be accessed concurrently, make sure to stress that 
behavior in your testing.

Gluster can be great for targeting availability and distribution at low 
software cost, and I would say as of today at the expense of performance, but 
as with any scale-out NAS there are limitations and some surprises along the 
path.

Good hunting,
-t


Re: [Gluster-users] to RAID or not?

2016-07-04 Thread Russell Purinton
The fault tolerance is provided by Gluster replica translator.

RAID0 to me is preferable to JBOD because you get 3x read performance and 3x 
write performance.   If performance is not a concern, or if you only have 1GbE, 
then it may not matter, and you could just do JBOD with a ton of bricks.

The same method scales to however many servers you need… imagine them in a
ring…

server1 A & B   replica to server 2 C & D
server2 A & B   replica to server 3 C & D
server3 A & B   replica to server 1 C & D

Adding a 4th server?  No problem… you can reconfigure the bricks to do
server1 A & B   replica to server 2 C & D
server2 A & B   replica to server 3 C & D
server3 A & B   replica to server 4 C & D
server4 A & B   replica to server 1 C & D

or 5 servers
server1 A & B   replica to server 2 C & D
server2 A & B   replica to server 3 C & D
server3 A & B   replica to server 4 C & D
server4 A & B   replica to server 5 C & D
server5 A & B   replica to server 6 C & D

I guess my recommendation is not the best for redundancy and data protection…
because I'm concerned with performance and space; as long as I have 2 copies
of the data on different servers, I'm happy.

If you care more about performance than space, and want extra data redundancy 
(more than 2 copies), then use RAID 10 on the nodes, and use gluster replica.  
This means you have every byte of data on 4 disks.

If you care more about space than performance and want extra redundancy use 
RAID 6, and gluster replica.

I always recommend a Gluster replica, because several times I have lost entire
servers… and it's nice to have the data on more than one server.


Re: [Gluster-users] to RAID or not?

2016-07-04 Thread Gandalf Corvotempesta
2016-07-04 19:44 GMT+02:00 Gandalf Corvotempesta
:
> So, any disk failure would mean at least 6TB to be recovered via the
> network. This means high network utilization, and as long as Gluster
> doesn't have a dedicated network for replication,
> this can slow down client access.

Additionally, using RAID-0 doesn't give any fault tolerance.
My question was about achieving the best redundancy and data protection
available. If I have to use RAID-0, which doesn't protect data, why not
drop RAID entirely?


Re: [Gluster-users] to RAID or not?

2016-07-04 Thread Gandalf Corvotempesta
2016-07-04 19:35 GMT+02:00 Russell Purinton :
> For 3 servers with 12 disks each, I would do Hardware RAID0 (or mdadm if you
> don’t have a RAID card) of 3 disks.  So four 3-disk RAID0’s per server.

3 servers is just the start. We plan to use 5 servers in the short term
and up to 15 in production.

> I would set them up as Replica 3 Arbiter 1
>
> server1:/brickA server2:/brickC server3:/brickA
> server1:/brickB server2:/brickD server3:/brickB
> server2:/brickA server3:/brickC server1:/brickA
> server2:/brickB server3:/brickD server1:/brickB
> server3:/brickA server1:/brickC server2:/brickA
> server3:/brickB server1:/brickD server2:/brickB
>
> The benefit of this is that you can lose an entire server node (12 disks) and 
> all of your data is still accessible.   And you get the same space as if they 
> were all in a RAID10.
>
> If you lose any disk, the entire 3 disk brick will need to be healed from the 
> replica.   I have 20GbE on each server so it doesn’t take long.   It copied 
> 20TB in about 18 hours once.

So, any disk failure would mean at least 6TB to be recovered via the
network. This means high network utilization, and as long as Gluster
doesn't have a dedicated network for replication,
this can slow down client access.

Re: [Gluster-users] to RAID or not?

2016-07-04 Thread Russell Purinton
For 3 servers with 12 disks each, I would do Hardware RAID0 (or mdadm if you
don’t have a RAID card) of 3 disks.  So four 3-disk RAID0’s per server.

I would set them up as Replica 3 Arbiter 1

server1:/brickA server2:/brickC server3:/brickA
server1:/brickB server2:/brickD server3:/brickB
server2:/brickA server3:/brickC server1:/brickA
server2:/brickB server3:/brickD server1:/brickB
server3:/brickA server1:/brickC server2:/brickA
server3:/brickB server1:/brickD server2:/brickB

The benefit of this is that you can lose an entire server node (12 disks) and 
all of your data is still accessible.   And you get the same space as if they 
were all in a RAID10.

If you lose any disk, the entire 3 disk brick will need to be healed from the 
replica.   I have 20GbE on each server so it doesn’t take long.   It copied 
20TB in about 18 hours once.
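One way such a layout could be created (volume name and brick paths are assumptions; with "replica 3 arbiter 1", every third brick of each set stores only metadata):

```shell
# First two replica sets from the listing above; the remaining sets follow
# the same pattern, one triple per line.
gluster volume create myvol replica 3 arbiter 1 \
  server1:/bricks/A server2:/bricks/C server3:/bricks/arbA \
  server1:/bricks/B server2:/bricks/D server3:/bricks/arbB
gluster volume start myvol
```

The arbiter bricks hold no file data, so they can live on much smaller disks.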

Re: [Gluster-users] to RAID or not?

2016-07-04 Thread Gandalf Corvotempesta
2016-07-04 19:25 GMT+02:00 Matt Robinson :
> If you don't trust the hardware raid, then steer clear of raid-6 as mdadm 
> raid 6 is stupidly slow.
> I don't completely trust hardware raid either, but rebuild times should be 
> under a day and in order to lose a raid-6 array you have to lose 3 disks.
> My own systems are hardware raid-6.
> If you're not terribly worried about maximising usable storage, then mdadm 
> raid-10 is your friend.

All of my servers are hardware RAID-6 with 8x300GB SAS 15K (some
servers with 600GB).

A rebuild of a single disk in a 6x600GB SAS RAID-6 takes exactly 22 hours.

That is with 15K SAS disks. Now try it with 2TB SATA 7200 disks (more than
twice the size, less than half the speed).


Re: [Gluster-users] to RAID or not?

2016-07-04 Thread Matt Robinson
If you don't trust the hardware raid, then steer clear of raid-6 as mdadm raid 
6 is stupidly slow.
I don't completely trust hardware raid either, but rebuild times should be 
under a day and in order to lose a raid-6 array you have to lose 3 disks.
My own systems are hardware raid-6.
If you're not terribly worried about maximising usable storage, then mdadm 
raid-10 is your friend.
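A sketch of the mdadm RAID-10 suggestion (array and device names are placeholders for a 12-disk node):

```shell
# 12-disk RAID10; the far2 layout often improves sequential reads on HDDs.
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=12 /dev/sd[b-m]
cat /proc/mdstat            # watch the initial sync / later rebuilds
mkfs.xfs /dev/md0           # XFS is the usual filesystem for Gluster bricks
```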


> On 4 Jul 2016, at 18:15:26, Gandalf Corvotempesta 
>  wrote:
> 
> 2016-07-04 17:01 GMT+02:00 Matt Robinson :
>> Hi Gandalf,
>> 
>> Are you using hardware raid or mdadm?
>> On high quality hardware raid, a 12 disk raid-6 is pretty solid.  With mdadm 
>> any raid6 (especially with 12 disks) will be rubbish.
> 
> I can use both.
> I don't like hardware RAID very much, even high quality. Recently I've
> been having too many issues with hardware RAID (like multiple disks kicked
> out for no apparent reason, and virtual disks failing with data loss).
> 
> A RAID-6 with 12x2TB SATA disks would take days to rebuild; in the
> meanwhile, multiple disks could fail, resulting in data loss.
> Yes, Gluster is able to recover from this, but I prefer to avoid having
> to resync 24TB of data via the network.
> 
> What about software RAID-1? 6 arrays for each Gluster node and 6
> disks wasted, but SATA disks are cheaper.



Re: [Gluster-users] to RAID or not?

2016-07-04 Thread Joe Julian
IMHO you use raid for performance reasons and gluster for fault tolerance and 
scale.


-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: [Gluster-users] to RAID or not?

2016-07-04 Thread ML mail
Hi Gandalf,

Not really suggesting here, just mentioning what I am using: an HBA adapter
with 12 disks, so basically JBOD, but with ZFS on top and an array of 12
disks in RAIDZ2 (sort of RAID6, but ZFS-style). I am pretty happy with that
setup so far.

Cheers,
ML
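For the record, such a pool could be created along these lines (pool and device names are assumptions):

```shell
zpool create tank raidz2 /dev/sd[b-m]   # 12-disk RAIDZ2: any two disks may fail
zfs create tank/brick1                  # dataset to hold the Gluster brick
zfs set xattr=sa tank/brick1            # per the xattr=sa advice elsewhere in this thread
```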
 


Re: [Gluster-users] to RAID or not?

2016-07-04 Thread Matt Robinson
Hi Gandalf,

Are you using hardware raid or mdadm?
On high quality hardware raid, a 12 disk raid-6 is pretty solid.  With mdadm 
any raid6 (especially with 12 disks) will be rubbish.

Matt.



Re: [Gluster-users] to RAID or not?

2016-07-04 Thread Gandalf Corvotempesta
No suggestions ?
On 14 Jun 2016, 10:01 AM, "Gandalf Corvotempesta" <
gandalf.corvotempe...@gmail.com> wrote:

> Let's assume a small cluster made up of 3 servers with 12 disks/bricks each.
> This cluster would be expanded to a maximum of 15 servers in the near future.
>
> What do you suggest, JBOD or RAID? Which RAID level?
>
> 15 servers with 12 disks/bricks each in JBOD is 180 bricks. Is this an
> acceptable number?
> Multiple RAID-6 arrays per server? For example, one RAID-6 with 6 disks and
> another RAID-6 with the other 6 disks. I'd lose 4 disks on each
> server, performance would be affected, and rebuild times would be huge
> (with 2TB/4TB disks).
>
> Any suggestions?
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Is RAID necessary/recommended?

2014-10-28 Thread Lindsay Mathieson
On 29 October 2014 00:43, Juan José Pavlik Salles jjpav...@gmail.com wrote:
 It could be both necessary and recommended, depending on what you want to
 achieve. I've gone through a few awkward moments because of not having RAID
 in our distribute-replicated volume, but nothing you can't solve shutting
 down the node and replacing the drive. RAID will give you transparency and
 tolerance to drive failures and is even better if we talk about a good HW
 RAID. What about RAID 5, is that possible?

I thought RAID5 was no longer considered a good option these days,
with RAID10 being preferred?


 --
 Pavlik Salles Juan José
 Blog - http://viviendolared.blogspot.com



-- 
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Is RAID necessary/recommended?

2014-10-28 Thread James
On Tue, Oct 28, 2014 at 7:53 PM, Lindsay Mathieson
lindsay.mathie...@gmail.com wrote:
 I thought RAID5 was no longer considered a good option these days,
 with RAID10 being preferred?


RAID6 preferred
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Is RAID necessary/recommended?

2014-10-28 Thread James
On Tue, Oct 28, 2014 at 9:53 PM, Lindsay Mathieson
lindsay.mathie...@gmail.com wrote:
 Ok, thanks James, Juan.

 Given my budget, I think I'll switch to using a single 3TB drive in
 each node, but add an extra 1GB Intel network card to each node and
 bond them for better network performance.
Did you test your workload to find your bottlenecks, or is this all
just conjecture? Test!



 Also I will be adding a third proxmox node for quorum purposes - it
 will just be a Intel NUC, won't be used for running VM's (though it
 could manage a couple).
Sweet... I almost bought a NUC to replace my Pentium 4 home server,
but they were kind of pricey. How is it?



 On 29 October 2014 11:10, Juan José Pavlik Salles jjpav...@gmail.com wrote:
 RAID6 is the best choice when working with arrays with many disks. RAID10
 doesn't make sense to me since you already have replication with gluster.

Keep in mind that if you've got an array of 24 disks, you'd probably
want to split that up into multiple RAID 6's. IOW, you'll have a few
bricks per host, each composed of a RAID 6. I think the magic number
of disks for a set is probably at least six, but not much more than
eight. I got this number from my imagination. Test your workloads
(also under failure scenarios) to decide for yourself.
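Splitting a server's disks into several RAID-6 sets, as suggested above, might look like this with mdadm. This is a sketch only; the device names and mount points are invented, and array sizing should be tested against your own workload.

```shell
# Two 6-disk RAID-6 arrays, each formatted and mounted as one brick.
# Each array survives two drive failures at the cost of two disks' capacity.
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[h-m]

mkfs.xfs /dev/md0
mkfs.xfs /dev/md1
mount /dev/md0 /bricks/b1
mount /dev/md1 /bricks/b2

# Watch the initial resync progress
cat /proc/mdstat
```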

Cheers,
James
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Is RAID necessary/recommended?

2014-10-28 Thread Lindsay Mathieson
On 29 October 2014 12:46, Dan Mons dm...@cuttingedge.com.au wrote:

 RAID10 provided no practical benefit.  All of Gluster's performance
 bottlenecks are related to DHT lookups and clustering over Ethernet.
 Speaking specifically for Gluster and in my use case, the disk has
 never been the bottleneck.

That's what I suspected. Given I'm using a whitebox setup without
hot-pluggable bays, I feel I'm better off improving my network performance
than adding extra drives for RAID.

What I'm looking for here is shared storage with redundancy and
reasonable performance. I imagine I'm running a smaller, lower-spec'd
environment with fewer requirements than most here :)

Thanks,
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Is RAID necessary/recommended?

2014-10-28 Thread Lindsay Mathieson
On 29 October 2014 12:47, James purplei...@gmail.com wrote:
 On Tue, Oct 28, 2014 at 9:53 PM, Lindsay Mathieson
 lindsay.mathie...@gmail.com wrote:
 Ok, thanks James, Juan.

 Given my budget, I think I'll switch to using a single 3TB drive in
 each node, but add an extra 1GB Intel network card to each node and
 bond them for better network performance.
 Did you test your workload to find your bottlenecks, or is this all
 just conjecture? Test!


Very true :)

I'm not committing to anything as of yet, and I have no urgency in
getting things set up. I know what my current bottleneck is: the NAS.
Once I get my new hardware installed I'll be testing and experimenting
with configurations before choosing a setup and migrating the VMs.

-- 
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster = RAID 10 over the network?

2014-09-21 Thread Alexey Zilber
Hi Ryan,

   I think it would help if you could provide more info on the storage
systems: things like total drives per RAID set and the size of each drive. This
is a complicated question, but a simple Google search brings up this
interesting article:
http://wolfcrow.com/blog/which-is-the-best-raid-level-for-video-editing-and-post-production-part-three-number-soup-for-the-soul/

  IMHO, without knowing any of these details, my personal preference,
unless you're running a database, is to do multiple RAID-1 sets, stripe them
with LVM, and drop XFS on top.
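The RAID-1 + LVM + XFS layering suggested above could be assembled roughly like this. A sketch only: the device names, volume group name, and stripe count are invented.

```shell
# Three mdadm RAID-1 mirrors...
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdf /dev/sdg

# ...striped together with LVM...
pvcreate /dev/md0 /dev/md1 /dev/md2
vgcreate vg_gluster /dev/md0 /dev/md1 /dev/md2
lvcreate -i 3 -l 100%FREE -n lv_brick vg_gluster

# ...with XFS dropped on top.
mkfs.xfs /dev/vg_gluster/lv_brick
```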

  I would like to add that if your storage provider only offers raid-5 or
raid-10 it might behoove you to look for another storage provider.  :)

-Alex
On Sep 21, 2014 8:24 PM, Ryan Nix ryan@gmail.com wrote:

 Hi All,

 So my boss and I decided to make a good size investment in a Gluster
 cluster.  I'm super excited and I will be taking a Redhat Storage class
 soon.

 However, we're debating the hardware configuration we intend to purchase.
 We agree that configuring each brick/node (we're buying four) as
 RAID 10 will help us sleep at night, but to me it seems like such an
 unfortunate waste of disk space.  Our graduate and PhD students work with
 lots of video, and they filled up our proof-of-concept 4 TB ownCloud/Gluster
 setup in 2 months.

 I stumbled upon Howtoforge's Gluster setup guide from two years ago and
 I'm wondering if this is correct and or still relevant:

 http://bit.ly/1qkLoVe

 *This tutorial shows how to combine four single storage servers (running
 Ubuntu 12.10) to a distributed replicated storage with GlusterFS
 http://www.gluster.org/. Nodes 1 and 2 (replication1) as well as 3 and 4
 (replication2) will mirror each other,
 and replication1 and replication2 will be combined to one larger storage
 server (distribution). Basically, this is RAID10 over network. If you lose
 one server from replication1 and one from replication2, the distributed
 volume continues to work. The client system (Ubuntu 12.10 as well) will be
 able to access the storage as if it was a local filesystem*

 The vendor we have chosen, System 76, offers either RAID 5 or RAID 10 in
 each server.  Does anyone have insights or opinions on this?  It would seem
 that RAID 5 would be okay and that some kind of drive monitoring
 (opinions also welcome, please) would be sufficient given the inherent
 nature of Gluster's distributed/replicated setup.  RAID 5 at System 76
 allows us to max out at 42 TB of usable space.  RAID 10 makes it 24 TB
 usable.

 I'd love to hear any insights or opinions on this.  To me, RAID 5 with
 Gluster in a distributed replicated setup should be sufficient and help us
 sleep well each night.  :)

 Thanks in advance!

 Ryan

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Rebuilt RAID array, now heal is failing

2012-11-24 Thread Gerald Brandt
Interestingly enough, a couple of reboots later, things started syncing again.

Gerald
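For anyone hitting the same problem: the usual sequence after swapping in an empty brick on a replicated volume looks roughly like the following. This is a sketch; exact command names and behaviour vary across GlusterFS versions, so check the docs for your release. The volume name is taken from this thread.

```shell
# Trigger a full self-heal so the good replica repopulates the empty brick
gluster volume heal NFS_RAID6_FO full

# Watch progress: entries still pending heal
gluster volume heal NFS_RAID6_FO info

# If the crawl appears stuck, restarting the gluster daemons (as the
# reboots in this thread effectively did) can kick it off again
service glusterd restart
```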


- Original Message -
 From: Bryan Whitehead dri...@megahappy.net
 To: Gerald Brandt g...@majentis.com
 Cc: gluster-users gluster-users@gluster.org
 Sent: Friday, November 23, 2012 8:59:05 PM
 Subject: Re: [Gluster-users] Rebuilt RAID array, now heal is failing
 
 You'll need to share more information. gluster volume info to start
 would be helpful.
 
 So far I have no clue how your setup is.
 
 Example: if you have a distributed setup with no replication then all
 your files on that volume are just lost.
 
 On Fri, Nov 23, 2012 at 9:55 AM, Gerald Brandt g...@majentis.com
 wrote:
  Any ideas?  I'm going in tomorrow to try and fix things, so any
  help is appreciated.
 
  Gerald
 
 
  - Original Message -
  From: Gerald Brandt g...@majentis.com
  To: gluster-users gluster-users@gluster.org
  Sent: Thursday, November 22, 2012 9:17:49 AM
  Subject: Re: [Gluster-users] Rebuilt RAID array, now heal is
  failing
 
  Hi,
 
  Any ideas on this?  I'm currently running non-replicated, and I'm
  not
  comfortable with that.
 
  Gerald
 
 
  - Original Message -
   From: Gerald Brandt g...@majentis.com
   To: gluster-users gluster-users@gluster.org
   Sent: Wednesday, November 21, 2012 12:34:12 PM
   Subject: [Gluster-users] Rebuilt RAID array, now heal is failing
  
   Hi,
  
   I had a RAID-6 array fail on me, so I got some new HDD and
   rebuilt
   it.  The glusterfs config didn't change at all.
  
   When the array was rebuilt and mounted, it (naturally) had no
   files
   on it.  GlusterFS seems to have created the .gluster directory.
  
   However, self heal isn't working.  I tried to start it with
   'gluster
   volume heal NFS_RAID6_FO full', and no go.  A 'gluster volume
   heal
    NFS_RAID_6 heal-failed' listed all the files that were on the
   array.
  
   How can I get all the files on the good replica over to the
   newly
   created RAID6 array?
  
   Gerald
   ___
   Gluster-users mailing list
   Gluster-users@gluster.org
   http://supercolony.gluster.org/mailman/listinfo/gluster-users
  
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-users
 
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-users
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Rebuilt RAID array, now heal is failing

2012-11-23 Thread Gerald Brandt
Any ideas?  I'm going in tomorrow to try and fix things, so any help is 
appreciated.

Gerald


- Original Message -
 From: Gerald Brandt g...@majentis.com
 To: gluster-users gluster-users@gluster.org
 Sent: Thursday, November 22, 2012 9:17:49 AM
 Subject: Re: [Gluster-users] Rebuilt RAID array, now heal is failing
 
 Hi,
 
 Any ideas on this?  I'm currently running non-replicated, and I'm not
 comfortable with that.
 
 Gerald
 
 
 - Original Message -
  From: Gerald Brandt g...@majentis.com
  To: gluster-users gluster-users@gluster.org
  Sent: Wednesday, November 21, 2012 12:34:12 PM
  Subject: [Gluster-users] Rebuilt RAID array, now heal is failing
  
  Hi,
  
  I had a RAID-6 array fail on me, so I got some new HDD and rebuilt
  it.  The glusterfs config didn't change at all.
  
  When the array was rebuilt and mounted, it (naturally) had no files
  on it.  GlusterFS seems to have created the .gluster directory.
  
  However, self heal isn't working.  I tried to start it with
  'gluster
  volume heal NFS_RAID6_FO full', and no go.  A 'gluster volume heal
   NFS_RAID_6 heal-failed' listed all the files that were on the array.
  
  How can I get all the files on the good replica over to the newly
  created RAID6 array?
  
  Gerald
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-users
  
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Rebuilt RAID array, now heal is failing

2012-11-23 Thread Bryan Whitehead
You'll need to share more information. gluster volume info to start
would be helpful.

So far I have no clue how your setup is.

Example: if you have a distributed setup with no replication then all
your files on that volume are just lost.

On Fri, Nov 23, 2012 at 9:55 AM, Gerald Brandt g...@majentis.com wrote:
 Any ideas?  I'm going in tomorrow to try and fix things, so any help is 
 appreciated.

 Gerald


 - Original Message -
 From: Gerald Brandt g...@majentis.com
 To: gluster-users gluster-users@gluster.org
 Sent: Thursday, November 22, 2012 9:17:49 AM
 Subject: Re: [Gluster-users] Rebuilt RAID array, now heal is failing

 Hi,

 Any ideas on this?  I'm currently running non-replicated, and I'm not
 comfortable with that.

 Gerald


 - Original Message -
  From: Gerald Brandt g...@majentis.com
  To: gluster-users gluster-users@gluster.org
  Sent: Wednesday, November 21, 2012 12:34:12 PM
  Subject: [Gluster-users] Rebuilt RAID array, now heal is failing
 
  Hi,
 
  I had a RAID-6 array fail on me, so I got some new HDD and rebuilt
  it.  The glusterfs config didn't change at all.
 
  When the array was rebuilt and mounted, it (naturally) had no files
  on it.  GlusterFS seems to have created the .gluster directory.
 
  However, self heal isn't working.  I tried to start it with
  'gluster
  volume heal NFS_RAID6_FO full', and no go.  A 'gluster volume heal
   NFS_RAID_6 heal-failed' listed all the files that were on the array.
 
  How can I get all the files on the good replica over to the newly
  created RAID6 array?
 
  Gerald
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-users
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Rebuilt RAID array, now heal is failing

2012-11-22 Thread Gerald Brandt
Hi,

Any ideas on this?  I'm currently running non-replicated, and I'm not 
comfortable with that.

Gerald


- Original Message -
 From: Gerald Brandt g...@majentis.com
 To: gluster-users gluster-users@gluster.org
 Sent: Wednesday, November 21, 2012 12:34:12 PM
 Subject: [Gluster-users] Rebuilt RAID array, now heal is failing
 
 Hi,
 
 I had a RAID-6 array fail on me, so I got some new HDD and rebuilt
 it.  The glusterfs config didn't change at all.
 
 When the array was rebuilt and mounted, it (naturally) had no files
 on it.  GlusterFS seems to have created the .gluster directory.
 
 However, self heal isn't working.  I tried to start it with 'gluster
 volume heal NFS_RAID6_FO full', and no go.  A 'gluster volume heal
  NFS_RAID_6 heal-failed' listed all the files that were on the array.
 
 How can I get all the files on the good replica over to the newly
 created RAID6 array?
 
 Gerald
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] HW raid or not

2011-08-08 Thread Daniel Müller
I am using RAID 5 with 1 spare disk without any problems on CentOS 5.6.


EDV Daniel Müller

Head of IT (Leitung EDV)
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On behalf of Uwe Kastens
Sent: Monday, 8 August 2011 08:55
To: Gluster-users@gluster.org
Subject: [Gluster-users] HW raid or not

Hi,

I know, that there is no general answer to this question :)

Is it better to use HW Raid or LVM as gluster backend or raw disks?

Regards

Uwe

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] HW raid or not

2011-08-08 Thread Gabriel-Adrian Samfira
We use raw disks with our setup. Gluster takes care of the replication
part, so RAID would be useless for us. Performance wise, you are
better off just adding a new brick and let gluster do the rest.

Best regards,
Gabriel

On Mon, Aug 8, 2011 at 9:54 AM, Uwe Kastens kiste...@googlemail.com wrote:
 Hi,

 I know, that there is no general answer to this question :)

 Is it better to use HW Raid or LVM as gluster backend or raw disks?

 Regards

 Uwe


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] HW raid or not

2011-08-08 Thread Nathan Stratton

On Mon, 8 Aug 2011, Uwe Kastens wrote:


Hi,

I know, that there is no general answer to this question :)

Is it better to use HW Raid or LVM as gluster backend or raw disks?


HW Raid.





Nathan Stratton               CTO, BlinkMind, Inc.
nathan at robotics.net        nathan at blinkmind.com
http://www.robotics.net       http://www.blinkmind.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] HW raid or not

2011-08-08 Thread Burnash, James
I would agree with this.

While GlusterFS mirroring is good on a server-to-server level, it's not as
robust (provably, on my installation) as HW RAID, due to continuing GlusterFS
issues with replication and extended attributes (through version 3.2.2).
That's not to say that the server replication is rubbish; it's just that there
are edge cases, for which bugs have already been submitted, that affect file
integrity from a mounted GlusterFS point of view.

James Burnash
Unix Engineer
Knight Capital Group


-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Nathan Stratton
Sent: Monday, August 08, 2011 11:04 AM
To: Uwe Kastens
Cc: Gluster-users@gluster.org
Subject: Re: [Gluster-users] HW raid or not

On Mon, 8 Aug 2011, Uwe Kastens wrote:

 Hi,

 I know, that there is no general answer to this question :)

 Is it better to use HW Raid or LVM as gluster backend or raw disks?

HW Raid.



Nathan Stratton               CTO, BlinkMind, Inc.
nathan at robotics.net        nathan at blinkmind.com
http://www.robotics.net       http://www.blinkmind.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] HW raid or not

2011-08-08 Thread Liam Slusser
I'm in the HW raid camp.  Mostly because gluster is not block level,
so with large quantities of files replication can take days or weeks.
In my case a rebuild/resync can take weeks because of how many
files/directories I have in my cluster.

With hardware RAID I can just replace the disk and a rebuild happens
automatically and very quickly.

liam

On Mon, Aug 8, 2011 at 4:12 AM, Gabriel-Adrian Samfira
samfiragabr...@gmail.com wrote:
 We use raw disks with our setup. Gluster takes care of the replication
 part, so RAID would be useless for us. Performance wise, you are
 better off just adding a new brick and let gluster do the rest.

 Best regards,
 Gabriel

 On Mon, Aug 8, 2011 at 9:54 AM, Uwe Kastens kiste...@googlemail.com wrote:
 Hi,

 I know, that there is no general answer to this question :)

 Is it better to use HW Raid or LVM as gluster backend or raw disks?

 Regards

 Uwe


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] hardware raid controller

2011-07-17 Thread Daniel Müller
Hello again,

I think driving your RAID controller is the job of your OS;
Gluster serves on top of your file system, nothing else.
Gluster 3.2 is working with my RAID controller (RAID 5 with 1 spare disk)
without any problems.


On Fri, 15 Jul 2011 10:55:11 +0200, Derk Roesink derkroes...@viditech.nl
wrote:
 Hello!
 
 Im trying to install my first Gluster Storage Platform server.
 
 It has a Jetway JNF99FL-525-LF motherboard with an internal raid
 controller (based on a Intel ICH9R chipset) which has 4x 1tb drives for
 data that i would like to run in a RAID5 configuration
 
 It seems Gluster doesnt support the raid controller.. Because i still
see
 the 4 disks as 'servers' in the WebUI.
 
 Any ideas?!
 
 Kind Regards,
 
 Derk
  
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] hardware raid controller

2011-07-15 Thread Liam Slusser
That's not a hardware RAID controller.  The RAID is done in software
via the RAID driver.  You can probably find a Linux driver, however it's
a really, really crappy RAID card.  I'd recommend getting something
else, like a 3ware/LSI card.

liam

On Fri, Jul 15, 2011 at 1:55 AM, Derk Roesink derkroes...@viditech.nl wrote:
 Hello!

 I'm trying to install my first Gluster Storage Platform server.

 It has a Jetway JNF99FL-525-LF motherboard with an internal RAID
 controller (based on an Intel ICH9R chipset) which has 4x 1 TB drives for
 data that I would like to run in a RAID 5 configuration.

 It seems Gluster doesn't support the RAID controller, because I still see
 the 4 disks as 'servers' in the WebUI.

 Any ideas?!

 Kind Regards,

 Derk



 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] hardware raid controller

2011-07-15 Thread Mohit Anchlia
You should also be able to use the server console at boot time to set up RAID.
Check the hardware docs.

Sent from my iPad

On Jul 15, 2011, at 2:05 AM, Liam Slusser lslus...@gmail.com wrote:

 That's not a hardware RAID controller.  The RAID is done in software
 via the RAID driver.  You can probably find a Linux driver, however it's
 a really, really crappy RAID card.  I'd recommend getting something
 else, like a 3ware/LSI card.
 
 liam
 
 On Fri, Jul 15, 2011 at 1:55 AM, Derk Roesink derkroes...@viditech.nl wrote:
 Hello!
 
  I'm trying to install my first Gluster Storage Platform server.
  
  It has a Jetway JNF99FL-525-LF motherboard with an internal RAID
  controller (based on an Intel ICH9R chipset) which has 4x 1 TB drives for
  data that I would like to run in a RAID 5 configuration.
  
  It seems Gluster doesn't support the RAID controller, because I still see
  the 4 disks as 'servers' in the WebUI.
 
 Any ideas?!
 
 Kind Regards,
 
 Derk
 
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Mount RAID 1 volume of a couple of servers

2010-04-08 Thread Kelvin Westlake
I didn't explain this very well, basically I've got several glusterfs
clients and I'd like to connect them all to the same replicated
glusterfs volume. Has anybody else tried this? If so, are there any
problems I need to be aware of?



Thanks

K




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Mount RAID 1 volume of a couple of

2010-04-08 Thread Larry Bates
Kelvin,

This works as you would expect.  To NOT have a single point of failure you need
a minimum of two (2) GlusterFS servers, and you need to use the replicate
translator to mirror between volumes on each server.  Replicating between two
volumes on a single GFS server would give you a RAID1-like setup, but the server
would be a single point of failure.  In addition, you may want to use the
distribute translator if you want to distribute across multiple volumes on each
server.  For lack of a better term, that gives you a RAID10-like setup.  You can
then mount the GFS volume on as many clients as you wish.

-Larry 
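Larry's replicate-plus-distribute layout can be sketched as follows. The hostnames, paths, and volume name are invented; bricks are paired into replica sets in the order they are listed.

```shell
# "RAID10-like": replica 2 mirrors, distributed over two replica pairs.
gluster volume create gv0 replica 2 \
  server1:/export/brick1 server2:/export/brick1 \
  server1:/export/brick2 server2:/export/brick2
gluster volume start gv0

# Any number of clients can then mount the same volume:
mount -t glusterfs server1:/gv0 /mnt/storage
```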


Message: 8
Date: Thu, 8 Apr 2010 10:45:11 +0100
From: Kelvin Westlake kelvin.westl...@netbasic.co.uk
Subject: Re: [Gluster-users] Mount RAID 1 volume of a couple of
servers
To: gluster-users@gluster.org
Message-ID:
9ecac59dbf16a744bacd207c9bdafa34ca2...@zippy.rainbow.local
Content-Type: text/plain; charset=us-ascii

I didn't explain this very well, basically I've got several glusterfs
clients and I'd like to connect them all to the same replicated
glusterfs volume. Has anybody else tried this? If so, are there any
problems I need to be aware of?

 

Thanks

K


 
 

--

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


End of Gluster-users Digest, Vol 24, Issue 11
*



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users