Re: [Gluster-users] Bricks suggestions

2012-05-03 Thread Gandalf Corvotempesta
2012/5/3 Arnold Krille 
>
> Either wait for 3.3 or do the distributed-replicated-volume way I outlined
> earlier.
>

Absolutely, I'm waiting for 3.3.
We don't have to start right now; we will start with 3.3.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-05-03 Thread Arnold Krille
On Thursday 03 May 2012 11:52:13 you wrote:
> 2012/5/2 Arnold Krille 
> > As far as I understand it (and for the current stable 3.2) there are two
> > ways
> > depending on what you want:
> >  When you want a third copy of the data, it seems you can't simply
> >  increase
> > the replication-level and add a brick. Instead you have to stop usage,
> > delete
> > the volume (without deleting the underlying bricks of course) and then
> > rebuild
> > the volume with the new number of replication and bricks. Then self-heal
> > should do the trick and copy the data onto the third machine.
> I can't stop a production volume used by many customers.

Yes you can. There are scheduled maintenance windows. Unless your SLA promises 
100% availability (in which case you are f***ed, as every outage will make you 
miss your SLA).

And the outage I outlined should be something like 30s when done by a (tested) 
script, 1-2min when done by hand. Could be that you can even keep it mounted 
(as glusterfs) on the clients. Though I haven't tested this.
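
A rough sketch of such a script, with "myvol" and the brick paths as
placeholders (untested; stop/delete also ask for interactive confirmation):

  # stop and delete the volume definition; bricks and their data stay on disk
  gluster volume stop myvol
  gluster volume delete myvol
  # recreate with replica 3, reusing the old bricks plus the new one
  gluster volume create myvol replica 3 \
      server1:/export/brick server2:/export/brick server3:/export/brick
  gluster volume start myvol
  # 3.2-era way to kick off self-heal: stat every file from a client mount
  find /mnt/myvol -print0 | xargs -0 stat >/dev/null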

> I think the best way would be to start with the correct number of replica
> nodes, even if one of these nodes is not present yet.

Good luck with that. While you can do that with software-raid and with drbd, 
you can't do that with gluster. When you create a volume with replication set, 
it only accepts a brick count that is a multiple of the replica count.
I am told that 3.3 will have the ability to change the replication-level while 
the volume is up. And I think I was even promised dynamic replication of the 
kind "here are three bricks, give me two replicas of everything".

> In this way the volume is created properly, and when needed I just have to
> add the new machine and trigger the self-healing.

Either wait for 3.3 or do the distributed-replicated-volume way I outlined 
earlier.

Have fun,

Arnold

signature.asc
Description: This is a digitally signed message part.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-05-03 Thread Gandalf Corvotempesta
2012/5/2 Arnold Krille 

> As far as I understand it (and for the current stable 3.2) there are two
> ways
> depending on what you want:
>  When you want a third copy of the data, it seems you can't simply increase
> the replication-level and add a brick. Instead you have to stop usage,
> delete
> the volume (without deleting the underlying bricks of course) and then
> rebuild
> the volume with the new number of replication and bricks. Then self-heal
> should do the trick and copy the data onto the third machine.
>

I can't stop a production volume used by many customers.
I think the best way would be to start with the correct number of replica
nodes, even if one of these nodes is not present yet.

In this way the volume is created properly, and when needed I just have to
add the new machine and trigger the self-healing.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-05-02 Thread Arnold Krille
On Monday 30 April 2012 18:17:02 Gandalf Corvotempesta wrote:
> Is it possible to add a third server for replication in a working cluster?
> If yes, how can I do this?

As far as I understand it (and for the current stable 3.2) there are two ways 
depending on what you want:
 When you want a third copy of the data, it seems you can't simply increase 
the replication-level and add a brick. Instead you have to stop usage, delete 
the volume (without deleting the underlying bricks of course) and then rebuild 
the volume with the new number of replication and bricks. Then self-heal 
should do the trick and copy the data onto the third machine.
 When you are fine with two-out-of-three replication, you would use two bricks 
per server and combine these three pairs of bricks into a distributed-
replicated volume, spreading the bricks across the servers so that each server 
has bricks of two different pairs. If you plan for this when building the 
two-node setup, you would first build a distributed-replicated volume with two 
bricks per server (thus four bricks in two pairs). When the third node is 
needed, you first move one brick (with replace-brick), then build the third 
pair of bricks.
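
With the 3.2 CLI, that migration step might look roughly like this (a sketch;
volume and brick names are placeholders):

  # move one brick of the second pair from server2 to the new server3
  gluster volume replace-brick myvol server2:/export/b2 server3:/export/b2 start
  gluster volume replace-brick myvol server2:/export/b2 server3:/export/b2 status
  gluster volume replace-brick myvol server2:/export/b2 server3:/export/b2 commit
  # then add the third pair, so each server holds bricks of two different pairs
  gluster volume add-brick myvol server2:/export/b3 server3:/export/b3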

Have fun,

Arnold

signature.asc
Description: This is a digitally signed message part.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-30 Thread Gandalf Corvotempesta
2012/4/30 Brian Candler 

> Of course the same is true of software RAID10 (md RAID):
>
> - no dedicated hardware
> - inexpensive disks
> - fast healing
>
> Plus there is no need to build a new filesystem, since the healing is done
> at block level.
>

You should use disks with TLER enabled.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-30 Thread Brian Candler
On Mon, Apr 30, 2012 at 06:17:02PM +0200, Gandalf Corvotempesta wrote:
>2012/4/30 Brian Candler <b.cand...@pobox.com>
> 
>  - separate filesystem on each disk, lots of tiny bricks,
>   distributed+replicated volume
> 
>This should be the best choice, as other guys explained before.
> 
>- no dedicated hardware for raid
> 
>- inexpensive disks
> 
>- fast healing (only one disk at a time)

Of course the same is true of software RAID10 (md RAID):

- no dedicated hardware
- inexpensive disks
- fast healing

Plus there is no need to build a new filesystem, since the healing is done
at block level.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-30 Thread Gandalf Corvotempesta
2012/4/30 Brian Candler 

> - separate filesystem on each disk, lots of tiny bricks,
>  distributed+replicated volume
>
>
This should be the best choice, as other guys explained before.
- no dedicated hardware for raid
- inexpensive disks
- fast healing (only one disk at a time)

Is it possible to add a third server for replication in a working cluster?
If yes, how can I do this?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-30 Thread Brian Candler
On Mon, Apr 30, 2012 at 05:10:40PM +0200, Gandalf Corvotempesta wrote:
>2012/4/30 Brian Candler <b.cand...@pobox.com>
> 
>  However it doesn't give me resilience against server failure, unless I
>  have a distributed volume across two servers, which with RAID10 as well
>  means each block is written to 4 separate disks.
> 
>Distribution across two servers is implied as we are talking about
>gluster and not about a standard raid.

Sorry, my mistake, I meant to say "replicated volume".

>So, we have to create a DISTRIBUTED AND REPLICATED gluster volume with
>at least 2 servers.
>On each server, which is the best way to manage disks? RAID10? No RAID?
>RAID0?

I definitely wouldn't go RAID0: one failed disk will toast the whole volume,
and you'll have to heal the whole lot (i.e. copy from the other server). And
if a disk fails in the other server in the meantime, your entire dataset
is lost.

Your main options are:

- RAID10 on each server, a few large bricks, distributed volume

- separate filesystem on each disk, lots of tiny bricks,
  distributed+replicated volume

- RAID10 on each server PLUS replicated between servers

There are pros and cons to each, so your choice may not be the same as my
choice.
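
For the RAID10 variants, the array underneath might be built something like
this (a sketch; device names and paths are examples):

  # software RAID10 across four disks, one XFS on top
  mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde
  mkfs.xfs /dev/md0
  mount /dev/md0 /data
  # third option: one big brick per server, replicated between the servers
  gluster volume create myvol replica 2 server1:/data/brick server2:/data/brick
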
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-30 Thread Gandalf Corvotempesta
2012/4/30 Larry Bates 
>
> Gluster v3.0.0 on two SuperMicro servers, each with 8x2TB hard drives
> configured as JBOD. I use Gluster to replicate each drive between
> servers and then distribute across the drives, giving me approx 16TB as
> a single volume.  I can pull a single drive and replace it, then use
> self-heal to rebuild. I can shutdown or reboot a server and traffic
> continues to the other server (good for kernel updates).  I use logdog
> to alert me via email/text if a drive fails.
>
That's OK for me, but my question is:
how does gluster detect a damaged cluster (bad sector) on a disk?
Will it detect it properly and start serving data from the other server?

In case of a whole-disk failure it's easy: the disk is not working and gluster
will use the other server. But what if the disk is OK and only a single sector
is not readable?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-30 Thread Gandalf Corvotempesta
2012/4/30 Brian Candler 
>
> However it doesn't give me resilience against server failure, unless I have
> a distributed volume across two servers, which with RAID10 as well means
> each block is written to 4 separate disks.
>

Distribution across two servers is implied as we are talking about gluster
and not about a standard raid.

So, we have to create a DISTRIBUTED AND REPLICATED gluster volume with at
least 2 servers.
On each server, which is the best way to manage disks? RAID10? No RAID?
RAID0?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-30 Thread Larry Bates

On Mon, 30 Apr 2012 10:53:42 +0200, Gandalf Corvotempesta wrote:

> 2012/4/30 Brian Candler 
>
> > KO or OK? With a RAID controller (or software RAID), the RAID subsystem
> > should quietly mark the failed drive as unusable and redirect all
> > operations to the working drive.  And you will have a way to detect this
> > situation, e.g. /proc/mdstat for Linux software RAID.
>
> KO.
> As you wrote, in a raid environment, the controller will detect a failed
> disk and redirect I/O to the working drive.
>
> With no RAID, is gluster smart enough to detect a disk failure and
> redirect all I/O to the other server?
>
> A disk can have a damaged cluster, so only a portion of it becomes
> unusable.
> A raid controller is able to detect this; will gluster do the same, or
> still try to reply with broken data?
>
> So, do you suggest to use a RAID10 on each server?
> - disk1+disk2 raid1
> - disk3+disk4 raid1
>
> raid0 over these raid1 and then replicate it with gluster?

I have been running the following configuration for over 16 months
with no issues:

Gluster v3.0.0 on two SuperMicro servers, each with 8x2TB hard drives
configured as JBOD. I use Gluster to replicate each drive between
servers and then distribute across the drives, giving me approx 16TB as
a single volume.  I can pull a single drive and replace it, then use
self-heal to rebuild. I can shutdown or reboot a server and traffic
continues to the other server (good for kernel updates).  I use logdog
to alert me via email/text if a drive fails.

I chose this config because it was 1) simplest, 2) maximized my disk
storage, 3) effectively resulted in a shared-nothing RAID10 SAN-like
storage system, 4) minimized the amount of data movement during a
rebuild, and 5) it didn't require any hardware RAID controllers, which
would increase my cost.  This config has worked for me exactly as
planned.

I'm currently building a new server with 8x4TB drives and will be
replacing one of the existing servers in a couple of weeks.  I will
force a self-heal to populate it with files from the primary server.
When done I'll repeat the process for the other server.

Larry Bates
vitalEsafe, Inc.

--


I should have added the other reasons for choosing this configuration:

6) Hardware RAID REQUIRES the use of hard drives that support TLER, which
forces you to use Enterprise drives that are much more expensive than
desktop drives. Lastly, 7) I'm an old-timer with over 30 years of
experience doing this, and I've seen almost every RAID5 array that was
ever set up fail due to some "glitch" where the controller just decides
that multiple drives have failed simultaneously. Sometimes it takes a
couple of years, but I've seen a LOT of arrays fail this way, so I don't
trust RAID5/RAID6 with vital data. Hardware RAID10 is OK, but that would
have more than doubled my storage cost. My goal was highly available
mid-performance (30MB/sec) storage that is immune to single device
failures and that can be rebuilt quickly after a failure.

Larry Bates
vitalEsafe, Inc.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-30 Thread Brian Candler
On Mon, Apr 30, 2012 at 10:53:42AM +0200, Gandalf Corvotempesta wrote:
>So,  do you suggest to use a RAID10 on each server?

In my opinion (and let me make it clear, I'm a pretty new user myself too),
for me that's the lowest risk option and the easiest for day-to-day
management.

However it doesn't give me resilience against server failure, unless I have
a distributed volume across two servers, which with RAID10 as well means
each block is written to 4 separate disks.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-30 Thread Gandalf Corvotempesta
2012/4/30 Brian Candler 

> KO or OK? With a RAID controller (or software RAID), the RAID subsystem
> should quietly mark the failed drive as unusable and redirect all
> operations
> to the working drive.  And you will have a way to detect this situation,
> e.g. /proc/mdstat for Linux software RAID.
>

KO.
As you wrote, in a raid environment, the controller will detect a failed
disk and redirect I/O to the working drive.

With no RAID, is gluster smart enough to detect a disk failure and redirect
all I/O to the other server?

A disk can have a damaged cluster, so only a portion of it becomes
unusable.
A raid controller is able to detect this; will gluster do the same, or
still try to reply with broken data?

So, do you suggest to use a RAID10 on each server?
- disk1+disk2 raid1
- disk3+disk4 raid1

raid0 over these raid1 and then replicate it with gluster?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-30 Thread Brian Candler
On Sun, Apr 29, 2012 at 11:22:20PM +0200, Gandalf Corvotempesta wrote:
>So, what will you do? RAID1? No raid?

RAID10 for write-active filesystems, and RAID6 for archive filesystems.

>How does gluster detect a failed disk with no raid? What I don't
>understand is how gluster will detect a failure on a disk and then reply
>with data from the other server.

I'm not sure - that's what the risk is. One would hope that gluster would
detect the failed disk and take it out of service, but I see a lot of posts
on this list from people who have problems in various failure scenarios
(failures to heal and the like).  I'm not sure that glusterfs has really got
these situations nailed.

Indeed, in my experience the gluster client won't even reconnect to a
glusterfsd (brick) if the brick has gone away and come back up.  You have to
manually unmount and remount. That's about the simplest failure scenario you
can imagine.

>With a raid controller, if the controller detects a failure it will reply
>with KO to the operating system

KO or OK? With a RAID controller (or software RAID), the RAID subsystem
should quietly mark the failed drive as unusable and redirect all operations
to the working drive.  And you will have a way to detect this situation,
e.g. /proc/mdstat for Linux software RAID.
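
For instance (exact output formats vary by kernel and mdadm version):

  # a degraded mirror is visible at a glance: "[UU]" healthy, "[U_]" degraded
  cat /proc/mdstat
  # per-device detail, including which member is marked faulty
  mdadm --detail /dev/md0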

>Is it safer to use a 24-disk server with no raid and with 24 replicated
>and distributed bricks (24 on one server and 24 on the other server)?

In theory they should be the same, and with replicated/distributed you also
get the benefit that if an entire server dies, the data remains available.
In practice I am not convinced that glusterfs will work well this way.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-29 Thread Larry Bates
That is correct.  I have 2 servers set up that way and it works well.

Larry Bates
vitalEsafe, Inc.

Sent from my iPhone, 
please forgive my typos ;-)

On Apr 29, 2012, at 2:00 PM, gluster-users-requ...@gluster.org wrote:

> 
> 2012/4/27 Gandalf Corvotempesta 
>> 
>> So, what do you suggest? A simple RAID10?
>> 
>> I have servers with 8 SATA disks; what do you suggest to 'merge' these
>> disks into a bigger volume?
>> 
>> I think that having the "du" output wrong is not a good solution.
>> 
>> I'm also considering no raid at all.
> For example, with 2 servers and 8 SATA disks each, I can create a single
> XFS filesystem for every disk and then create a replicated brick for each.
> 
> For example:
> 
> server1:brick1 => server2:brick1
> server1:brick2 => server2:brick2
> 
> and so on.
> After that, I can use these bricks to create a distributed volume.
> In case of a disk failure, I have to heal only one disk at a time and not
> the whole volume, right?
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-29 Thread John Jolet
I'm doing software raid-0 with a gluster volume at replica 2 across 2 nodes 
(essentially getting raid 10, I hope).  The OS will monitor the software raid 
and email root when it becomes degraded.  Then I'll take the whole NODE out of 
the volume, fix the software raid, and bring it back in.  That's the plan; 
I haven't tested it yet.
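
The monitoring half of that plan could be as simple as (a sketch, assuming
mdadm's monitor daemon is used):

  # have the md monitor daemon mail root when an array degrades
  echo "MAILADDR root" >> /etc/mdadm.conf
  mdadm --monitor --scan --daemonise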

On Apr 29, 2012, at 4:18 PM, Brian Candler wrote:

> On Sat, Apr 28, 2012 at 11:25:30PM +0200, Gandalf Corvotempesta wrote:
>>   I'm also considering no raid at all.
>> 
>>   For example, with 2 servers and 8 SATA disks each, I can create a
>>   single XFS filesystem for every disk and then create a replicated
>>   brick for each.
>> 
>>   For example:
>> 
>>   server1:brick1 => server2:brick1
>> 
>>   server1:brick2 => server2:brick2
>> 
>>   and so on.
>> 
>>   After that, I can use these bricks to create a distributed volume.
>> 
>>   In case of a disk failure, I have to heal only one disk at a time and
>>   not the whole volume, right?
> 
> Yes. I considered that too. What you have to weigh it against is the
> management overhead:
> 
> - recognising a failed disk
> - replacing a failed disk (which involves creating a new XFS filesystem
>  and mounting it at the right place)
> - forcing a self-heal
> 
> Whereas detecting a failed RAID disk is straightforward, and so is swapping
> it out.
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-29 Thread Gandalf Corvotempesta
2012/4/29 Brian Candler 
>
> Yes. I considered that too. What you have to weigh it against is the
> management overhead:
>
> - recognising a failed disk
> - replacing a failed disk (which involves creating a new XFS filesystem
>  and mounting it at the right place)
> - forcing a self-heal
>
> Whereas detecting a failed RAID disk is straightforward, and so is swapping
> it out.
>

So, what will you do? RAID1? No raid?
How does gluster detect a failed disk with no raid? What I don't understand
is how gluster will detect a failure on a disk and then reply with data from
the other server.

With a raid controller, if the controller detects a failure it will reply
with KO to the operating system; but with no raid, what will happen?

Is it safer to use a 24-disk server with no raid and with 24 replicated and
distributed bricks (24 on one server and 24 on the other server)?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-29 Thread Brian Candler
On Sat, Apr 28, 2012 at 11:25:30PM +0200, Gandalf Corvotempesta wrote:
>I'm also considering no raid at all.
> 
>For example, with 2 servers and 8 SATA disks each, I can create a single
>XFS filesystem for every disk and then create a replicated brick for
>each.
> 
>For example:
> 
>server1:brick1 => server2:brick1
> 
>server1:brick2 => server2:brick2
> 
>and so on.
> 
>After that, I can use these bricks to create a distributed volume.
> 
>In case of a disk failure, I have to heal only one disk at a time and
>not the whole volume, right?

Yes. I considered that too. What you have to weigh it against is the
management overhead:

- recognising a failed disk
- replacing a failed disk (which involves creating a new XFS filesystem
  and mounting it at the right place)
- forcing a self-heal

Whereas detecting a failed RAID disk is straightforward, and so is swapping
it out.
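
For comparison, the no-RAID replacement could be scripted roughly like this
(an untested sketch; device name, brick path and mount point are examples):

  # new filesystem on the replacement disk, mounted at the old brick path
  mkfs.xfs /dev/sdc
  mount /dev/sdc /bricks/disk2
  # force a self-heal from a client mount so the replica repopulates the brick
  find /mnt/myvol -print0 | xargs -0 stat >/dev/null
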
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-28 Thread Gandalf Corvotempesta
2012/4/27 Gandalf Corvotempesta 
>
> So, what do you suggest? A simple RAID10?
>
> I have servers with 8 SATA disks; what do you suggest to 'merge' these
> disks into a bigger volume?
>
> I think that having the "du" output wrong is not a good solution.
>
> I'm also considering no raid at all.
For example, with 2 servers and 8 SATA disks each, I can create a single XFS
filesystem for every disk and then create a replicated brick for each.

For example:

server1:brick1 => server2:brick1
server1:brick2 => server2:brick2

and so on.
After that, I can use these bricks to create a distributed volume.
In case of a disk failure, I have to heal only one disk at a time and not the
whole volume, right?
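
The volume creation for that layout might look like this (a sketch; "bigvol"
and the brick paths are placeholders):

  # one replicated pair per physical disk, distributed across the pairs
  gluster volume create bigvol replica 2 \
      server1:/bricks/d1 server2:/bricks/d1 \
      server1:/bricks/d2 server2:/bricks/d2 \
      server1:/bricks/d3 server2:/bricks/d3
  # ...and so on up to d8; each dN is its own XFS mount on a single disk
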
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-27 Thread Gandalf Corvotempesta
2012/4/25 Brian Candler 
>
> Personally I wouldn't, but it's your choice. In my opinion LVM adds another
> complexity layer, more stuff to manage, and you will need to keep the sizes
> of the various logical volumes in sync (whether you are distributed or
> replicated), and grow your XFS filesystems whenever you grow the logical
> volumes.


You are right.


> But if you need that level of control, it's fine.
>

So, what do you suggest? A simple RAID10?

I have servers with 8 SATA disks; what do you suggest to 'merge' these
disks into a bigger volume?

I think that having the "du" output wrong is not a good solution.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-25 Thread Brian Candler
On Wed, Apr 25, 2012 at 10:51:08AM +0200, Gandalf Corvotempesta wrote:
>What do you think about starting with 2 distinct logical volumes, one for
>web and one for mail, and then making a single XFS filesystem on each?

Personally I wouldn't, but it's your choice. In my opinion LVM adds another
complexity layer, more stuff to manage, and you will need to keep the sizes
of the various logical volumes in sync (whether you are distributed or
replicated), and grow your XFS filesystems whenever you grow the logical
volumes.  But if you need that level of control, it's fine.
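
The growing itself is only two commands per brick (a sketch; VG/LV names and
the mount point are examples):

  # extend the logical volume, then grow the XFS sitting on it
  lvextend -L +100G /dev/vg0/web
  xfs_growfs /srv/web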

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-25 Thread Gandalf Corvotempesta
2012/4/25 Brian Candler 
>
> Of course, if two bricks are on the same mountpoint, then "df" will report
> the same amount of free space for both bricks. That is, adding together
> the "df" output from all the volumes which contain these bricks will give
> more than the actual free space available.
>

This is no good.
What do you think about starting with 2 distinct logical volumes, one for web
and one for mail, and then making a single XFS filesystem on each?

In this way, I'll be able to see the real disk usage for each brick and
I'll be able to move single bricks around.

I'll start with a single RAID10 and two LVs on it, with one brick on each
LV. Is this good?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-25 Thread Brian Candler
On Tue, Apr 24, 2012 at 11:48:49PM +0200, Gandalf Corvotempesta wrote:
>just to clarify:
>are bricks like server1:/data/mail and server1:/data/web standard
>directories on the server?
>Can I create a brick from a single directory and not from the whole
>mountpoint?

Yes.

Of course, if two bricks are on the same mountpoint, then "df" will report
the same amount of free space for both bricks. That is, adding together
the "df" output from all the volumes which contain these bricks will give
more than the actual free space available.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-24 Thread Gandalf Corvotempesta
2012/4/19 Brian Candler 
>
> As for partitioning: note that you can create separate bricks on the same
> disk.  e.g. if the RAID arrays are mounted on server1:/data, server2:/data
> etc then you can create your web volume out of
>
>  server1:/data/web server2:/data/web ...etc
>
> and your mail volume out of
>
>  server1:/data/mail server2:/data/mail ...etc
>
> This doesn't gain you any performance, but gives you a little more
> flexibility and manageability (e.g. you can more easily look at I/O usage
> patterns separately for the two volumes, or migrate your web volume onto
> other bricks without moving your mail volume)
>
>
just to clarify:
are bricks like server1:/data/mail and server1:/data/web standard directories
on the server?
Can I create a brick from a single directory and not from the whole
mountpoint?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-19 Thread Gandalf Corvotempesta
2012/4/19 Brian Candler 
>
> Much less than the performance degradation you'll get from using RAID-5!
>
>
I think I'll go with RAID1 plus LVM.
This will give me much more flexibility.

Just a question:
let's assume a brick made of one RAID1+LVM volume of 100GB.
What happens when I add another RAID1 to the same logical volume and resize
the FS?
Will gluster detect the change automatically?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-04-19 Thread Brian Candler
On Wed, Apr 18, 2012 at 09:54:58PM +0200, Gandalf Corvotempesta wrote:
>we are planning a new infrastructure based on gluster to be used by
>some mail servers and some web servers.
> 
>We plan 4 servers, with 6x 2TB SATA disks in hardware RAID-5 each.
> 
>In a replicate-distribute volume we will have 20TB of available space.
> 
>What do you suggest: a single XFS volume, splitting webstorage and
>mailstorage by directory, or creating two different mount points with
>different replicate-distribute volumes?
> 
>Any performance degradation from making 2 or more volumes instead of a
>single one?

Much less than the performance degradation you'll get from using RAID-5!

If you use 6 x 3TB disks in RAID10 you'll have almost the same amount of
storage space, but far better performance, at almost the same price.  I'd
say this is very important for a mail server where you are constantly
writing and deleting small files.

RAID10 with a large stripe size should localise file I/O from one user to
one disk (read) or two disks (write), so that there is spare seek capacity
on the other 4 disks for concurrent accesses from other users.

As for partitioning: note that you can create separate bricks on the same
disk.  e.g. if the RAID arrays are mounted on server1:/data, server2:/data
etc then you can create your web volume out of

  server1:/data/web server2:/data/web ...etc

and your mail volume out of

  server1:/data/mail server2:/data/mail ...etc

This doesn't gain you any performance, but gives you a little more
flexibility and manageability (e.g. you can more easily look at I/O usage
patterns separately for the two volumes, or migrate your web volume onto
other bricks without moving your mail volume)
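
A sketch of that, assuming two servers and replica 2 (names are examples):

  # two volumes whose bricks are directories on the same mounted array
  gluster volume create web replica 2 server1:/data/web server2:/data/web
  gluster volume create mail replica 2 server1:/data/mail server2:/data/mail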

Regards,

Brian.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Bricks suggestions

2012-04-18 Thread Gandalf Corvotempesta
Hi all,
we are planning a new infrastructure based on gluster to be used by some
mail servers and some web servers.
We plan 4 servers, with 6x 2TB SATA disks in hardware RAID-5 each.
In a replicate-distribute volume we will have 20TB of available space.

What do you suggest: a single XFS volume, splitting webstorage and
mailstorage by directory, or creating two different mount points with
different replicate-distribute volumes?

Any performance degradation from making 2 or more volumes instead of a single one?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users