Re: [Gluster-users] Moving external storage between bricks

2010-12-01 Thread Craig Carl

James -
We will track the CentOS 6 timeline, so not for a while yet.  We 
use Fedora 11 and 12 quite a bit, so I would be very surprised if there 
was an issue; please let us know how your testing goes.



Thanks,

Craig

--
Craig Carl
Senior Systems Engineer
Gluster

On 12/01/2010 11:59 AM, Burnash, James wrote:

Gluster development and support team:

Is there a projected timeline for Glusterfs support for RHEL 6?

Has anybody out there on the list tried this yet? We are about to try some 
simple testing, mostly out of curiosity.

James Burnash, Unix Engineering


DISCLAIMER:
This e-mail, and any attachments thereto, is intended only for use by the 
addressee(s) named herein and may contain legally privileged and/or 
confidential information. If you are not the intended recipient of this e-mail, 
you are hereby notified that any dissemination, distribution or copying of this 
e-mail, and any attachments thereto, is strictly prohibited. If you have 
received this in error, please immediately notify me and permanently delete the 
original and any copy of any e-mail and any printout thereof. E-mail 
transmission cannot be guaranteed to be secure or error-free. The sender 
therefore does not accept liability for any errors or omissions in the contents 
of this message which arise as a result of e-mail transmission.
NOTICE REGARDING PRIVACY AND CONFIDENTIALITY Knight Capital Group may, at its 
discretion, monitor and review the content of all e-mail communications. 
http://www.knight.com

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Moving external storage between bricks

2010-12-01 Thread Burnash, James
Gluster development and support team:

Is there a projected timeline for Glusterfs support for RHEL 6?

Has anybody out there on the list tried this yet? We are about to try some 
simple testing, mostly out of curiosity.

James Burnash, Unix Engineering




Re: [Gluster-users] Moving external storage between bricks

2010-12-01 Thread Burnash, James
Excellent and clearly explained. Thanks, Craig!

James Burnash, Unix Engineering
T. 201-239-2248 
jburn...@knight.com | www.knight.com

545 Washington Ave. | Jersey City, NJ


-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Craig Carl
Sent: Wednesday, December 01, 2010 12:19 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Moving external storage between bricks

James -
The setup you've described is pretty standard. If we assume that you 
are going to mount each array at /mnt/array{1-8}, your volume will be 
called vol1, and your servers are named server{1-4}, your gluster volume 
create command would be -

Without replicas -

# gluster volume create vol1 transport tcp server1:/mnt/array1 
server2:/mnt/array1 server3:/mnt/array1 server4:/mnt/array1 
server1:/mnt/array2 server2:/mnt/array2 server3:/mnt/array2 
server4:/mnt/array2 server1:/mnt/array3 server2:/mnt/array3 
server3:/mnt/array3 server4:/mnt/array3 server1:/mnt/array4 
server2:/mnt/array4 server3:/mnt/array4 server4:/mnt/array4 
server1:/mnt/array5 server2:/mnt/array5 server3:/mnt/array5 
server4:/mnt/array5 server1:/mnt/array6 server2:/mnt/array6 
server3:/mnt/array6 server4:/mnt/array6 server1:/mnt/array7 
server2:/mnt/array7 server3:/mnt/array7 server4:/mnt/array7 
server1:/mnt/array8 server2:/mnt/array8 server3:/mnt/array8 
server4:/mnt/array8
This would get you a single 512TB NFS mount.

With replicas (2) -

# gluster volume create vol1 replica 2 transport tcp server1:/mnt/array1 
server2:/mnt/array1 server3:/mnt/array1 server4:/mnt/array1 
server1:/mnt/array2 server2:/mnt/array2 server3:/mnt/array2 
server4:/mnt/array2 server1:/mnt/array3 server2:/mnt/array3 
server3:/mnt/array3 server4:/mnt/array3 server1:/mnt/array4 
server2:/mnt/array4 server3:/mnt/array4 server4:/mnt/array4 
server1:/mnt/array5 server2:/mnt/array5 server3:/mnt/array5 
server4:/mnt/array5 server1:/mnt/array6 server2:/mnt/array6 
server3:/mnt/array6 server4:/mnt/array6 server1:/mnt/array7 
server2:/mnt/array7 server3:/mnt/array7 server4:/mnt/array7 
server1:/mnt/array8 server2:/mnt/array8 server3:/mnt/array8 
server4:/mnt/array8
This would get you a single 256TB HA NFS mount.

Gluster specifically doesn't care about LUN/brick size; the ability to 
create smaller LUNs without affecting the presentation of that space is 
a positive side effect of using Gluster. Smaller LUNs are useful in 
several ways: fsck runs faster on a smaller LUN if it is ever required, 
and since bricks of different sizes in the same volume carry a minor 
performance hit, small LUNs make uniform brick sizes easier to maintain.


Thanks,

Craig

--
Craig Carl
Senior Systems Engineer
Gluster

On 12/01/2010 08:29 AM, Burnash, James wrote:
> Hello.
>
> So, here's my problem.
>
> I have 4 storage servers that will be configured as replicate + distribute, 
> each of which has two external storage arrays, each with their own 
> controller. Those external arrays will be used to store archived large (10GB) 
> files that will only be read-only after their initial copy to the glusterfs 
> storage.
>
> Currently, the external arrays are the items of interest. What I'd like to do 
> is this:
>
> - Create multiple hardware RAID 5 arrays on each storage server, which would 
> present to the OS as approximately eight 16TB physical drives.
> - Create an ext3 file system on each of those devices (I'm using CentOS 5.5, 
> so ext4 is still not really an option for me)
> - Mount those multiple file systems to the storage server, and then aggregate 
> them all under gluster to export under a single namespace to NFS and the 
> Gluster client.
>
> How do I aggregate those multiple file systems without involving LVM in some 
> way?
>
> I've read that Glusterfs likes "small" bricks, though I haven't really been 
> able to track down why. Any pointers to good technical info on this subject 
> would also be greatly appreciated.
>
> Thanks,
>
> James Burnash, Unix Engineering
>
>

Re: [Gluster-users] Moving external storage between bricks

2010-12-01 Thread Jacob Shucart
James,

Gluster itself aggregates the multiple filesystems, so you don't need to
use LVM.  You can give Gluster as many filesystems as you want, and it
will pool them together and present them as one large volume if that is
what you have in mind.  If ext4 is really what you want to use, you can
certainly install Fedora, which has ext4 support.  CentOS 6 will also
support ext4 when it is released, which I imagine won't be too far away.
Gluster itself does not have a preference between small bricks and large
bricks.  The reason for using smaller or larger filesystems underneath
Gluster as storage bricks is more to get around issues those filesystems
might have with maximum file sizes, maximum numbers of files, or that
sort of thing.
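As a concrete sketch of that aggregation, assuming two ext3 filesystems are already mounted on one server (the device paths, volume name "pool", and mount points here are illustrative, not from the thread), the whole "pooling" step is just:

```shell
# Each ordinary filesystem becomes a brick; Gluster's distribute
# translator pools them, so no LVM layer is needed underneath.
mount /dev/sdb1 /mnt/array1
mount /dev/sdc1 /mnt/array2
gluster volume create pool transport tcp \
    server1:/mnt/array1 server1:/mnt/array2
gluster volume start pool
# A client then sees both bricks as one namespace:
mount -t glusterfs server1:/pool /mnt/pool
```

Each brick keeps its own independent ext3 filesystem on disk; Gluster only decides which brick each file lands on.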

-Jacob

-Original Message-
From: Burnash, James [mailto:jburn...@knight.com] 
Sent: Wednesday, December 01, 2010 8:30 AM
To: gluster-users@gluster.org
Subject: Moving external storage between bricks

Hello.

So, here's my problem.

I have 4 storage servers that will be configured as replicate +
distribute, each of which has two external storage arrays, each with their
own controller. Those external arrays will be used to store archived large
(10GB) files that will only be read-only after their initial copy to the
glusterfs storage.

Currently, the external arrays are the items of interest. What I'd like to
do is this:

- Create multiple hardware RAID 5 arrays on each storage server, which
would present to the OS as approximately eight 16TB physical drives.
- Create an ext3 file system on each of those devices (I'm using CentOS
5.5, so ext4 is still not really an option for me)
- Mount those multiple file systems to the storage server, and then
aggregate them all under gluster to export under a single namespace to NFS
and the Gluster client.

How do I aggregate those multiple file systems without involving LVM in
some way?

I've read that Glusterfs likes "small" bricks, though I haven't really
been able to track down why. Any pointers to good technical info on this
subject would also be greatly appreciated.

Thanks,

James Burnash, Unix Engineering




Re: [Gluster-users] Moving external storage between bricks

2010-12-01 Thread Craig Carl

James -
The setup you've described is pretty standard. If we assume that you 
are going to mount each array at /mnt/array{1-8}, your volume will be 
called vol1, and your servers are named server{1-4}, your gluster volume 
create command would be -


Without replicas -

# gluster volume create vol1 transport tcp server1:/mnt/array1 
server2:/mnt/array1 server3:/mnt/array1 server4:/mnt/array1 
server1:/mnt/array2 server2:/mnt/array2 server3:/mnt/array2 
server4:/mnt/array2 server1:/mnt/array3 server2:/mnt/array3 
server3:/mnt/array3 server4:/mnt/array3 server1:/mnt/array4 
server2:/mnt/array4 server3:/mnt/array4 server4:/mnt/array4 
server1:/mnt/array5 server2:/mnt/array5 server3:/mnt/array5 
server4:/mnt/array5 server1:/mnt/array6 server2:/mnt/array6 
server3:/mnt/array6 server4:/mnt/array6 server1:/mnt/array7 
server2:/mnt/array7 server3:/mnt/array7 server4:/mnt/array7 
server1:/mnt/array8 server2:/mnt/array8 server3:/mnt/array8 
server4:/mnt/array8

This would get you a single 512TB NFS mount.

With replicas (2) -

# gluster volume create vol1 replica 2 transport tcp server1:/mnt/array1 
server2:/mnt/array1 server3:/mnt/array1 server4:/mnt/array1 
server1:/mnt/array2 server2:/mnt/array2 server3:/mnt/array2 
server4:/mnt/array2 server1:/mnt/array3 server2:/mnt/array3 
server3:/mnt/array3 server4:/mnt/array3 server1:/mnt/array4 
server2:/mnt/array4 server3:/mnt/array4 server4:/mnt/array4 
server1:/mnt/array5 server2:/mnt/array5 server3:/mnt/array5 
server4:/mnt/array5 server1:/mnt/array6 server2:/mnt/array6 
server3:/mnt/array6 server4:/mnt/array6 server1:/mnt/array7 
server2:/mnt/array7 server3:/mnt/array7 server4:/mnt/array7 
server1:/mnt/array8 server2:/mnt/array8 server3:/mnt/array8 
server4:/mnt/array8

This would get you a single 256TB HA NFS mount.
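Typing those 32 bricks by hand is error-prone, so the list can also be generated. With replica 2, Gluster pairs consecutive bricks in the argument list, which is why the array-major order above matters: each replica pair lands on two different servers. A sketch using the hostnames and mount points assumed above:

```shell
# Build the brick list in array-major order: for each array, list it
# on all four servers before moving to the next array. Consecutive
# bricks (the replica-2 pairs) then always sit on different machines.
bricks=""
for a in 1 2 3 4 5 6 7 8; do        # eight arrays per server
    for s in 1 2 3 4; do            # four servers
        bricks="$bricks server$s:/mnt/array$a"
    done
done
echo "gluster volume create vol1 replica 2 transport tcp$bricks"
```

The printed command matches the one above: 4 servers x 8 arrays = 32 bricks, 32 x 16TB = 512TB raw, halved to 256TB usable by replica 2.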

Gluster specifically doesn't care about LUN/brick size; the ability to 
create smaller LUNs without affecting the presentation of that space is 
a positive side effect of using Gluster. Smaller LUNs are useful in 
several ways: fsck runs faster on a smaller LUN if it is ever required, 
and since bricks of different sizes in the same volume carry a minor 
performance hit, small LUNs make uniform brick sizes easier to maintain.



Thanks,

Craig

--
Craig Carl
Senior Systems Engineer
Gluster

On 12/01/2010 08:29 AM, Burnash, James wrote:

Hello.

So, here's my problem.

I have 4 storage servers that will be configured as replicate + distribute, 
each of which has two external storage arrays, each with their own controller. 
Those external arrays will be used to store archived large (10GB) files that 
will only be read-only after their initial copy to the glusterfs storage.

Currently, the external arrays are the items of interest. What I'd like to do 
is this:

- Create multiple hardware RAID 5 arrays on each storage server, which would 
present to the OS as approximately eight 16TB physical drives.
- Create an ext3 file system on each of those devices (I'm using CentOS 5.5, 
so ext4 is still not really an option for me)
- Mount those multiple file systems to the storage server, and then aggregate 
them all under gluster to export under a single namespace to NFS and the 
Gluster client.

How do I aggregate those multiple file systems without involving LVM in some 
way?

I've read that Glusterfs likes "small" bricks, though I haven't really been 
able to track down why. Any pointers to good technical info on this subject would also be 
greatly appreciated.

Thanks,

James Burnash, Unix Engineering




[Gluster-users] Moving external storage between bricks

2010-12-01 Thread Burnash, James
Hello.

So, here's my problem.

I have 4 storage servers that will be configured as replicate + distribute, 
each of which has two external storage arrays, each with their own controller. 
Those external arrays will be used to store archived large (10GB) files that 
will only be read-only after their initial copy to the glusterfs storage.

Currently, the external arrays are the items of interest. What I'd like to do 
is this:

- Create multiple hardware RAID 5 arrays on each storage server, which would 
present to the OS as approximately eight 16TB physical drives.
- Create an ext3 file system on each of those devices (I'm using CentOS 5.5, 
so ext4 is still not really an option for me)
- Mount those multiple file systems to the storage server, and then aggregate 
them all under gluster to export under a single namespace to NFS and the 
Gluster client.
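The first two steps above could be sketched as follows. The device names (/dev/sdb through /dev/sdi) and mount options are assumptions for illustration, not from the thread:

```shell
# Sketch: format each 16TB RAID-5 LUN with ext3 and mount it at its
# own /mnt/arrayN path, ready to be handed to Gluster as a brick.
i=1
for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi; do
    mkfs.ext3 -m 0 "$dev"                   # -m 0: skip the root reserve on archive storage
    mkdir -p "/mnt/array$i"
    mount -o noatime "$dev" "/mnt/array$i"  # noatime suits read-mostly archive files
    echo "$dev /mnt/array$i ext3 noatime 0 2" >> /etc/fstab
    i=$((i+1))
done
```

With the eight filesystems mounted, they go straight into the gluster volume create command as bricks.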

How do I aggregate those multiple file systems without involving LVM in some 
way?

I've read that Glusterfs likes "small" bricks, though I haven't really been 
able to track down why. Any pointers to good technical info on this subject 
would also be greatly appreciated.

Thanks,

James Burnash, Unix Engineering

