Re: [Gluster-users] Looking for use cases / opinions

2016-11-08 Thread Thomas Wakefield
We haven't decided how the JBODs would be configured.  They would likely be SAS 
attached without a RAID controller, for improved performance.  I run large ZFS 
arrays this way, but only in single-server NFS setups right now.
Mounting each hard drive as its own brick would probably give the most usable 
space, but would need scripting to manage building all the bricks.  Does 
Gluster handle thousands of small bricks?
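For what it's worth, the per-drive brick setup is usually scripted along these lines. 
This is a minimal sketch assuming XFS bricks, with hypothetical device names, 
hostnames, and volume name; the real drive list and volume type would differ:

# Minimal sketch (hypothetical devices/paths): format each JBOD drive as XFS,
# mount it, and create a brick directory inside the mount point.
for dev in /dev/sd{b..z}; do
    id=$(basename "$dev")
    mkfs.xfs -f -i size=512 "$dev"          # 512-byte inodes, as commonly recommended for Gluster
    mkdir -p "/bricks/$id"
    echo "$dev /bricks/$id xfs defaults,noatime 0 0" >> /etc/fstab
    mount "/bricks/$id"
    mkdir -p "/bricks/$id/brick"
done
# The volume would then be created with one brick per drive, e.g.:
#   gluster volume create hpcvol disperse 10 redundancy 2 \
#       gfs0{1..5}:/bricks/sdb/brick gfs0{1..5}:/bricks/sdc/brick ...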



> On Nov 8, 2016, at 9:18 AM, Frank Rothenstein wrote:
> 
> Hi Thomas,
> 
> that's a huge amount of storage.
> What I can say from my use case: don't use Gluster directly if the files
> are small. I don't know whether the file count matters, but if the files are
> small (a few KiB), Gluster takes ages to remove them, for example. Doing the
> same in a VM with e.g. an ext4 disk on the very same Gluster gives a big
> speedup.
> There are many options for a new Gluster volume, like Lindsay
> mentioned.
> And there are other options, like Ceph or OrangeFS.
> How do you want to use the JBODs? I don't think you would use every
> single drive as a brick... How are these connected to the servers?
> 
> I'm only dealing with Gluster volumes of about 10TiB, so by far not at your
> planned level, but I would really like to see some results if you go
> for Gluster!
> 
> Frank
> 
> 
> Am Dienstag, den 08.11.2016, 13:49 + schrieb Thomas Wakefield:
>> I think we are leaning towards erasure coding with 3 or 4
>> copies.  But open to suggestions.
>> 
>> 
>>> On Nov 8, 2016, at 8:43 AM, Lindsay Mathieson wrote:
>>> 
>>> On 8/11/2016 11:38 PM, Thomas Wakefield wrote:
>>>> High Performance Computing, we have a small cluster on campus of
>>>> about 50 linux compute servers.
>>>> 
>>> 
>>> D'oh! I should have thought of that.
>>> 
>>> 
>>> Are you looking at replication (2 or 3)/disperse or pure disperse?
>>> 
>>> -- 
>>> Lindsay Mathieson
>>> 
>> 
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
> 
> 
> 
> 
> 
> 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Looking for use cases / opinions

2016-11-08 Thread Thomas Wakefield
I think we are leaning towards erasure coding with 3 or 4 copies.  But open to 
suggestions.
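
For what it's worth, dispersed (erasure-coded) volumes in Gluster are defined by a 
brick count and a redundancy count rather than a number of copies. A minimal sketch, 
assuming hypothetical server names and a 4+2 layout:

# Illustrative only: 6 bricks per set, any 2 of which can be lost.
gluster volume create hpcvol disperse 6 redundancy 2 \
    gfs0{1..6}:/bricks/sdb/brick
gluster volume start hpcvol
gluster volume info hpcvol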


> On Nov 8, 2016, at 8:43 AM, Lindsay Mathieson wrote:
> 
> On 8/11/2016 11:38 PM, Thomas Wakefield wrote:
>> High Performance Computing, we have a small cluster on campus of about 50 
>> linux compute servers.
>> 
> 
> D'oh! I should have thought of that.
> 
> 
> Are you looking at replication (2 or 3)/disperse or pure disperse?
> 
> -- 
> Lindsay Mathieson
> 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Looking for use cases / opinions

2016-11-08 Thread Thomas Wakefield
High Performance Computing; we have a small cluster on campus of about 50 Linux 
compute servers.


> On Nov 8, 2016, at 8:37 AM, Lindsay Mathieson wrote:
> 
> On 8/11/2016 9:58 PM, Thomas Wakefield wrote:
>> Still looking for use cases and opinions for Gluster in an education / HPC 
>> environment.  Thanks.
> 
> Sorry, what's an HPC environment?
> 
> -- 
> Lindsay Mathieson
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Looking for use cases / opinions

2016-11-08 Thread Thomas Wakefield
Still looking for use cases and opinions for Gluster in an education / HPC 
environment.  Thanks.


> On Nov 4, 2016, at 2:05 PM, Thomas Wakefield  wrote:
> 
> Everyone, thanks in advance.
> 
> We are looking to add a large filesystem to our compute facility at GMU.  We 
> are investigating whether Gluster can work in a university setting for some HPC 
> work and general research computing.  Does anyone have use cases where 
> Gluster has been used in a university setting?
> 
> The idea we have is to build a 2-3PB GlusterFS system.  We would probably 
> use commodity 1U servers with 60-drive JBODs attached, from a reputable 
> vendor (Dell, HP, etc.).  Is 60 the maximum number of drives recommended 
> per server, or could we use two JBODs per server?  Everything will be 
> connected to a 10Gb network.  Clients are mostly Linux.
> 
> Are there ways to minimize the latency of stat-heavy commands like ls and du?  We 
> have some previous experience with Gluster and frequently found the stat 
> performance to be sluggish.
> 
> Thanks,
> 
> Thomas
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Looking for use cases / opinions

2016-11-04 Thread Thomas Wakefield
Everyone, thanks in advance.

We are looking to add a large filesystem to our compute facility at GMU.  We 
are investigating whether Gluster can work in a university setting for some HPC 
work and general research computing.  Does anyone have use cases where Gluster 
has been used in a university setting?

The idea we have is to build a 2-3PB GlusterFS system.  We would probably use 
commodity 1U servers with 60-drive JBODs attached, from a reputable vendor 
(Dell, HP, etc.).  Is 60 the maximum number of drives recommended per 
server, or could we use two JBODs per server?  Everything will be connected to a 
10Gb network.  Clients are mostly Linux.

Are there ways to minimize the latency of stat-heavy commands like ls and du?  We 
have some previous experience with Gluster and frequently found the stat 
performance to be sluggish.
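
One thing that sometimes helps with metadata-heavy operations such as ls and du is 
tuning the client-side caching translators. Option names and availability vary between 
Gluster releases, so treat the following as an illustrative sketch with a hypothetical 
volume name rather than a recipe:

gluster volume set hpcvol performance.stat-prefetch on
gluster volume set hpcvol performance.readdir-ahead on
gluster volume set hpcvol performance.cache-refresh-timeout 10
gluster volume set hpcvol cluster.lookup-optimize on     # 3.7+ only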

Thanks,

Thomas
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Bricks filling up

2013-04-16 Thread Thomas Wakefield
Do you have the bug # for this patch?

On Apr 16, 2013, at 3:48 PM, Ling Ho  wrote:

> Maybe I was wrong. I just did a diff, and it looks like the fix is not in 3.3.1. 
> This is the patch I applied to my 3.3.0 build. I didn't fix the check for 
> inodes though. If you look at the code, max is defined as 0.
> 
> --- glusterfs-3.3.0.orig/xlators/cluster/dht/src/dht-diskusage.c 2012-05-30 10:53:24.0 -0700
> +++ glusterfs-3.3.0-slac/xlators/cluster/dht/src/dht-diskusage.c 2013-03-20 02:25:53.761415662 -0700
> @@ -263,14 +263,14 @@
>  {
>          for (i = 0; i < conf->subvolume_cnt; i++) {
>                  if (conf->disk_unit == 'p') {
> -                        if ((conf->du_stats[i].avail_percent > max)
> +                        if ((conf->du_stats[i].avail_percent > conf->min_free_disk)
>                              && (conf->du_stats[i].avail_inodes > max_inodes)) {
>                                  max = conf->du_stats[i].avail_percent;
>                                  max_inodes = conf->du_stats[i].avail_inodes;
>                                  avail_subvol = conf->subvolumes[i];
>                          }
>                  } else {
> -                        if ((conf->du_stats[i].avail_space > max)
> +                        if ((conf->du_stats[i].avail_space > conf->min_free_disk)
>                              && (conf->du_stats[i].avail_inodes > max_inodes)) {
>                                  max = conf->du_stats[i].avail_space;
>                                  max_inodes = conf->du_stats[i].avail_inodes;
> 
> 
> ...
> ling
> 
> 
> On 04/16/2013 12:38 PM, Thomas Wakefield wrote:
>> Running 3.3.1 on everything, client and servers :(
>> 
>> Thomas Wakefield
>> Sr Sys Admin @ COLA
>> 301-902-1268
>> 
>> 
>> 
>> On Apr 16, 2013, at 3:23 PM, Ling Ho  wrote:
>> 
>>> On 04/15/2013 06:35 PM, Thomas Wakefield wrote:
>>>> Help-
>>>> 
>>>> I have multiple gluster filesystems, all with the setting 
>>>> cluster.min-free-disk: 500GB.  My understanding is that this setting 
>>>> should stop new writes to a brick with less than 500GB of free space, but 
>>>> that existing files might still expand, which is why I went with a high 
>>>> number like 500GB.  Yet I am still getting full bricks; frequently it's the 
>>>> first brick in the cluster that suddenly fills up.
>>>> 
>>>> Can someone tell me how gluster chooses where to write a file, and why 
>>>> the min-free-disk setting is being ignored?
>>>> 
>>>> Running 3.3.1 currently on all servers.
>>>> 
>>>> Thanks,
>>>> -Tom
>>>> ___
>>>> Gluster-users mailing list
>>>> Gluster-users@gluster.org
>>>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>> Make sure you are also running 3.3.1 on all the clients. It is 
>>> determined by the clients. I noticed there is a fix in 3.3.1 which 
>>> is not in 3.3.0. In 3.3.0, it will try writing to the next brick, which is 
>>> the 1st brick, but it only checks whether that brick is completely full. If it 
>>> has 1 byte left, it will start writing to it, and that's why the 1st brick 
>>> will get filled up.
>>> 
>>> ...
>>> ling
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks filling up

2013-04-16 Thread Thomas Wakefield
Running 3.3.1 on everything, client and servers :(

Thomas Wakefield
Sr Sys Admin @ COLA
301-902-1268



On Apr 16, 2013, at 3:23 PM, Ling Ho  wrote:

> On 04/15/2013 06:35 PM, Thomas Wakefield wrote:
>> Help-
>> 
>> I have multiple gluster filesystems, all with the setting 
>> cluster.min-free-disk: 500GB.  My understanding is that this setting should 
>> stop new writes to a brick with less than 500GB of free space, but that 
>> existing files might still expand, which is why I went with a high number 
>> like 500GB.  Yet I am still getting full bricks; frequently it's the first 
>> brick in the cluster that suddenly fills up.
>> 
>> Can someone tell me how gluster chooses where to write a file, and why the 
>> min-free-disk setting is being ignored?
>> 
>> Running 3.3.1 currently on all servers.
>> 
>> Thanks,
>> -Tom
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> Make sure you are also running 3.3.1 on all the clients. It is 
> determined by the clients. I noticed there is a fix in 3.3.1 which 
> is not in 3.3.0. In 3.3.0, it will try writing to the next brick, which is the 
> 1st brick, but it only checks whether that brick is completely full. If it has 1 
> byte left, it will start writing to it, and that's why the 1st brick will get 
> filled up.
> 
> ...
> ling
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] cluster.min-free-disk not working

2013-04-15 Thread Thomas Wakefield
I can't view that bug, I get "You are not authorized to access bug #874554"

What version of gluster will have the fix? Will it be in 3.3.2, and if so when 
will that be released?

-Tom

On Apr 16, 2013, at 1:02 AM, Varun Shastry  wrote:

> Hi Thomas,
> 
> It was a bug (https://bugzilla.redhat.com/show_bug.cgi?id=874554) and it's fixed 
> now.
> 
> -Varun Shastry
> 
> On Tuesday 16 April 2013 12:18 AM, Thomas Wakefield wrote:
>> Was there ever a solution for setting min-free-disk?  I have a cluster of 
>> about 30 bricks, some are 8TB and new bricks are 50TB.  I should be able to 
>> set gluster to leave 500GB free on each brick.  But one of the 8TB bricks 
>> keeps filling up with new data.
>> 
>> This is my current setting:
>> cluster.min-free-disk: 500GB
>> 
>> 
>> Thoughts?
>> 
>> Running 3.3.1.
>> 
>> -Tom
>> 
>> 
>> On Aug 26, 2012, at 2:15 AM, James Kahn  wrote:
>> 
>>> Further to my last email, I've been trying to find out why GlusterFS is
>>> favouring one brick over another. In pretty much all of my tests gluster
>>> is favouring the MOST full brick to write to. This is not a good thing
>>> when the most full brick has less than 200GB free and I need to write a
>>> huge file to it.
>>> 
>>> I've set cluster.min-free-disk on the volume, and it doesn't seem to have
>>> an effect. At all. I've tried setting it to 25%, and 400GB. When I run
>>> tests from an NFS client, they get written to the most full brick. There
>>> is less than 10% free on that brick so it should be ignored with the
>>> defaults anyway.
>>> 
>>> Any ideas?
>>> 
>>> JK
>>> 
>>> 
>>> 
>>> 
>>> 
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Bricks filling up

2013-04-15 Thread Thomas Wakefield
Help-

I have multiple gluster filesystems, all with the setting 
cluster.min-free-disk: 500GB.  My understanding is that this setting should 
stop new writes to a brick with less than 500GB of free space, but that 
existing files might still expand, which is why I went with a high number like 
500GB.  Yet I am still getting full bricks; frequently it's the first brick in 
the cluster that suddenly fills up.

Can someone tell me how gluster chooses where to write a file, and why the 
min-free-disk setting is being ignored?

Running 3.3.1 currently on all servers.

Thanks,
-Tom
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] glusterfs-3.3.2qa1 released

2013-04-15 Thread Thomas Wakefield
Are there release notes for what has been fixed?


On Apr 13, 2013, at 10:37 AM, John Walker  wrote:

> Try the new qa build for 3.3.2. We're hopeful that this will solve some 
> lingering problems out there.
> 
>  Original Message 
> Subject: [Gluster-devel] glusterfs-3.3.2qa1 released
> From: jenk...@build.gluster.org
> To: gluster-users@gluster.org,gluster-de...@nongnu.org
> CC: 
> 
> 
> 
> 
> 
> RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.3.2qa1/
> 
> SRC: 
> http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.3.2qa1.tar.gz
> 
> This release is made off jenkins-release-20
> 
> -- Gluster Build System
> 
> ___
> Gluster-devel mailing list
> gluster-de...@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] cluster.min-free-disk not working

2013-04-15 Thread Thomas Wakefield
Was there ever a solution for setting min-free-disk?  I have a cluster of about 
30 bricks, some are 8TB and new bricks are 50TB.  I should be able to set 
gluster to leave 500GB free on each brick.  But one of the 8TB bricks keeps 
filling up with new data.

This is my current setting:
cluster.min-free-disk: 500GB
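
For reference, the setting can be applied and verified per volume from the CLI; a 
quick sketch with a hypothetical volume name:

gluster volume set myvol cluster.min-free-disk 500GB
gluster volume info myvol | grep min-free-disk
df -h /export/*          # compare per-brick free space against the limit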


Thoughts?

Running 3.3.1.

-Tom


On Aug 26, 2012, at 2:15 AM, James Kahn  wrote:

> Further to my last email, I've been trying to find out why GlusterFS is
> favouring one brick over another. In pretty much all of my tests gluster
> is favouring the MOST full brick to write to. This is not a good thing
> when the most full brick has less than 200GB free and I need to write a
> huge file to it.
> 
> I've set cluster.min-free-disk on the volume, and it doesn't seem to have
> an effect. At all. I've tried setting it to 25%, and 400GB. When I run
> tests from an NFS client, they get written to the most full brick. There
> is less than 10% free on that brick so it should be ignored with the
> defaults anyway.
> 
> Any ideas?
> 
> JK
> 
> 
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] different size of nodes

2013-03-18 Thread Thomas Wakefield
You can set a free-disk-space limit.  This will force gluster to write new files 
to other bricks:

gluster volume set <volume> cluster.min-free-disk XXGB    (insert your volume 
name and the amount of free space you want to reserve, probably around 200-300GB)

Running a rebalance would also help move your files around so that gl4 is not filled 
up:
gluster volume rebalance <volume> start
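
Concretely, for the volume quoted below (w-vol) that would look roughly like this; 
the reserve value is illustrative:

gluster volume set w-vol cluster.min-free-disk 250GB
gluster volume rebalance w-vol start
gluster volume rebalance w-vol status     # repeat until the rebalance completes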

-Tom


On Mar 16, 2013, at 6:54 AM, Papp Tamas  wrote:

> hi All,
> 
> There is a distributed cluster with 5 bricks:
> 
> gl0
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sda4   5.5T  4.1T  1.5T  75% /mnt/brick1
> gl1
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sda4   5.5T  4.3T  1.3T  78% /mnt/brick1
> gl2
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sda4   5.5T  4.1T  1.4T  76% /mnt/brick1
> gl3
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sda4   4.1T  4.1T  2.1G 100% /mnt/brick1
> gl4
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sda4   4.1T  4.1T   24M 100% /mnt/brick1
> 
> 
> Volume Name: w-vol
> Type: Distribute
> Volume ID: 89e31546-cc2e-4a27-a448-17befda04726
> Status: Started
> Number of Bricks: 5
> Transport-type: tcp
> Bricks:
> Brick1: gl0:/mnt/brick1/export
> Brick2: gl1:/mnt/brick1/export
> Brick3: gl2:/mnt/brick1/export
> Brick4: gl3:/mnt/brick1/export
> Brick5: gl4:/mnt/brick1/export
> Options Reconfigured:
> nfs.mount-udp: on
> nfs.addr-namelookup: off
> nfs.ports-insecure: on
> nfs.port: 2049
> cluster.stripe-coalesce: on
> nfs.disable: off
> performance.flush-behind: on
> performance.io-thread-count: 64
> performance.quick-read: on
> performance.stat-prefetch: on
> performance.io-cache: on
> performance.write-behind: on
> performance.read-ahead: on
> performance.write-behind-window-size: 4MB
> performance.cache-refresh-timeout: 1
> performance.cache-size: 4GB
> network.frame-timeout: 60
> performance.cache-max-file-size: 1GB
> 
> 
> 
> As you can see 2 of the bricks are smaller and they're full.
> The gluster volume is not full of course:
> 
> gl0:/w-vol   25T   21T  4.0T  84% /W/Projects
> 
> 
> I'm not able to write to the volume. Why? Is it an issue? If so, is it known?
> How can I stop writing to full nodes?
> 
> Thanks,
> tamas
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Slow read performance

2013-02-27 Thread Thomas Wakefield
Help please-


I am running 3.3.1 on CentOS using a 10Gb network.  I get reasonable write 
speeds, although I think they could be faster.  But my read speeds are REALLY 
slow.

Executive summary:

On gluster client-
Writes average about 700-800MB/s
Reads average about 70-80MB/s

On server-
Writes average about 1-1.5GB/s
Reads average about 2-3GB/s
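
For context, sequential numbers like these are usually measured with a simple 
streaming test; something along these lines (paths are hypothetical, run as root so 
the page cache can be dropped between write and read):

dd if=/dev/zero of=/shared/testfile bs=1M count=16384 conv=fdatasync
echo 3 > /proc/sys/vm/drop_caches
dd if=/shared/testfile of=/dev/null bs=1M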

Any thoughts?



Here are some additional details:

Nothing interesting in any of the log files; everything is very quiet.
All servers had no other load, and all clients are performing the same way.


Volume Name: shared
Type: Distribute
Volume ID: de11cc19-0085-41c3-881e-995cca244620
Status: Started
Number of Bricks: 26
Transport-type: tcp
Bricks:
Brick1: fs-disk2:/storage/disk2a
Brick2: fs-disk2:/storage/disk2b
Brick3: fs-disk2:/storage/disk2d
Brick4: fs-disk2:/storage/disk2e
Brick5: fs-disk2:/storage/disk2f
Brick6: fs-disk2:/storage/disk2g
Brick7: fs-disk2:/storage/disk2h
Brick8: fs-disk2:/storage/disk2i
Brick9: fs-disk2:/storage/disk2j
Brick10: fs-disk2:/storage/disk2k
Brick11: fs-disk2:/storage/disk2l
Brick12: fs-disk2:/storage/disk2m
Brick13: fs-disk2:/storage/disk2n
Brick14: fs-disk2:/storage/disk2o
Brick15: fs-disk2:/storage/disk2p
Brick16: fs-disk2:/storage/disk2q
Brick17: fs-disk2:/storage/disk2r
Brick18: fs-disk2:/storage/disk2s
Brick19: fs-disk2:/storage/disk2t
Brick20: fs-disk2:/storage/disk2u
Brick21: fs-disk2:/storage/disk2v
Brick22: fs-disk2:/storage/disk2w
Brick23: fs-disk2:/storage/disk2x
Brick24: fs-disk3:/storage/disk3a
Brick25: fs-disk3:/storage/disk3b
Brick26: fs-disk3:/storage/disk3c
Options Reconfigured:
performance.write-behind: on
performance.read-ahead: on
performance.io-cache: on
performance.stat-prefetch: on
performance.quick-read: on
cluster.min-free-disk: 500GB
nfs.disable: off


sysctl.conf settings for 10GbE
# increase TCP max buffer size settable using setsockopt()
net.core.rmem_max = 67108864 
net.core.wmem_max = 67108864 
# increase Linux autotuning TCP buffer limit
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
# increase the length of the processor input queue
net.core.netdev_max_backlog = 25
# recommended default congestion control is htcp 
net.ipv4.tcp_congestion_control=htcp
# recommended for hosts with jumbo frames enabled
net.ipv4.tcp_mtu_probing=1
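
These settings take effect after reloading sysctl; for completeness:

sysctl -p                                   # reload /etc/sysctl.conf
sysctl net.ipv4.tcp_congestion_control      # spot-check one value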






Thomas W.
Sr.  Systems Administrator COLA/IGES
tw...@cola.iges.org
Affiliate Computer Scientist GMU

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Link files showing on mount point, 3.3.1

2013-01-06 Thread Thomas Wakefield
Can anyone tell me how to fix link files showing up on the client mount point?  
This started after an upgrade to 3.3.1.

The file name and the user and group info have been changed, but this is the basic 
problem.  There are about 5 files like this in just this directory, and I am sure 
there are more directories with this issue.

-rw-r--r--  1 user group  29120 Aug 17  2010 file1
---------T  1 root  root       0 Apr 21  2012 file1
---------T  1 root  root       0 Apr 21  2012 file1
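
For what it's worth, DHT link files are normally zero-byte, mode ---------T files on 
the bricks that carry a trusted.glusterfs.dht.linkto xattr; a hedged sketch for 
locating them on a brick (brick path is hypothetical, run on the server, not on the 
FUSE mount):

find /export/brick1 -type f -size 0 -perm -1000 \
    -exec getfattr --absolute-names -n trusted.glusterfs.dht.linkto {} \;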



Thanks,

Thomas

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gluster upgrade 3.2.5 to 3.3.1

2013-01-03 Thread Thomas Wakefield
Help please-

Last night I tried to upgrade from 3.2.5 to 3.3.1 and had no success and rolled 
back to 3.2.5.

I followed the instructions for the upgrade as exactly as possible, but don't 
understand this section:
5) If you have installed from RPM, goto 6).  Else, start glusterd in upgrade 
mode. glusterd terminates after it performs the necessary steps for upgrade. 
Re-start glusterd normally after this termination. Essentially this process 
boils down to:

a) killall glusterd

b) glusterd --xlator-option *.upgrade=on -N
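
In other words, for a non-RPM install that step amounts to roughly the following on 
each server (illustrative; the RPM packages are said to handle this themselves):

killall glusterd
glusterd --xlator-option *.upgrade=on -N    # performs the upgrade steps, then exits
glusterd                                    # restart normally afterwards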

I used the yum gluster repo, which I thought would be the easiest way.  I also 
tried an RPM install, with no success.  I could start the volume, but none of the 
bricks would be online.  They were all listed as N in the online column of the 
"gluster volume info" output.  Very little help from the error logs.


Any thoughts?

Thomas___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Multiple glusterfsd instances on 1 server

2011-05-10 Thread Thomas Wakefield
Is it possible to have multiple daemons running on the same server?  I have 2 
InfiniBand ports, and want to dedicate one glusterfs instance to each port.  I am 
maxing out the CPU on a single glusterfsd, pushing about 1.1GB/s over 
InfiniBand to my server.  But I know the disks are capable of doing about 
2GB/s, so I think adding another glusterfsd instance will help.

Are there any instructions or pointers for how to do this?  I figured I would bind 
each to a different port, which I did, but when I start up the 2nd daemon, it shuts 
the first one down.
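
The usual approach in that era was to give each glusterfsd instance its own volfile, 
pid file, and listen port so the second daemon does not replace the first. This is 
only a sketch; the exact listen-port option name differs between releases, and the 
volfile fragment below is hypothetical:

# server1.vol (fragment, illustrative):
#   volume server1
#     type protocol/server
#     option transport-type ib-verbs
#     option transport.ib-verbs.listen-port 6997     # option name varies by version
#     ...
#   end-volume

glusterfsd -f /etc/glusterfs/server1.vol -p /var/run/glusterfsd-1.pid
glusterfsd -f /etc/glusterfs/server2.vol -p /var/run/glusterfsd-2.pid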

Thanks,

Thomas
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] iSCSI or FC

2011-02-28 Thread Thomas Wakefield
For back-end storage when not using direct-attached JBODs, what does Gluster 
prefer: iSCSI or FC?

Looking at 2 different setups:

1.  Hitachi AMS 2300 with 8Gb/s FC (8 ports)

or 

2.  Dell EqualLogic boxes with 10Gb/s iSCSI (unknown number of ports, at least 
6, I think)

Either setup would be connected to 2-3 Linux gluster servers, pushing the 
filesystem out over our existing DDR InfiniBand network.  The filesystem would be 
200TB+.  


Thoughts?

Thanks in advance.


Thomas 
Systems Administrator COLA/IGES
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Removing bricks

2011-02-09 Thread Thomas Wakefield
I am not using replicate, so that won't help me.


On Feb 4, 2011, at 5:23 PM, Roberto Franchini wrote:

> On Fri, Feb 4, 2011 at 3:28 PM, Thomas Wakefield  wrote:
>> What's the best process for removing a brick from a gluster setup running 
>> 3.0  (possibly getting upgraded to 3.1 soon)?
>> 
>> We have 32 bricks, over 8 servers, and need to start thinking about how we 
>> will age out the smaller disks in favor of larger disk sizes.
> 
> We have a 6-server cluster in distribute/replicate running on 3.0.5.
> Yesterday we dropped a node in favor of a new one.
> So we unmounted and then stopped all the gluster servers, then modified the
> client vol file to point at the new server instead of the old one, and
> then restarted/remounted the cluster.
> At the beginning the new node was empty, so we did an ls -lR on a directory
> on the gluster storage to see the new node fill up with data.
> Hope this helps, cheers,
> RF
> -- 
> Roberto Franchini
> http://www.celi.it
> http://www.blogmeter.it
> http://www.memesphere.it
> Tel +39.011.562.71.15
> jabber:ro.franch...@gmail.com skype:ro.franchini

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Removing bricks

2011-02-04 Thread Thomas Wakefield
What's the best process for removing a brick from a gluster setup running 3.0  
(possibly getting upgraded to 3.1 soon)?

We have 32 bricks, over 8 servers, and need to start thinking about how we will 
age out the smaller disks in favor of larger disk sizes.
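
For reference, later releases (3.3 onwards) added a CLI workflow that migrates data 
off a brick before removing it; a hedged sketch with a hypothetical volume and brick:

gluster volume remove-brick myvol server3:/export/old-brick start
gluster volume remove-brick myvol server3:/export/old-brick status
gluster volume remove-brick myvol server3:/export/old-brick commit   # once migration completes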


Thanks,

Thomas
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] ib-verbs on centos 5 hangs

2010-01-25 Thread Thomas Wakefield
Can you clarify: does your disk mount and work for a period of time, and then 
fail?  Or does your disk mount, but not become active for a period of time?

I run ib_verbs, and find that sometimes it takes a while for a connection to be 
set up, but it does start working fine within 2-5 minutes.  I think the problem 
is my fabric manager, but I haven't tried switching it yet.



Thomas




On Jan 25, 2010, at 4:37 AM, Pedro Damián Cruz Santiago wrote:

> The scenario:
> 
> distro
> CentOS release 5.2 (Final)
> 
> 
> glusterfs:
> glusterfs-client-3.0.0-1.x86_64.rpm
> glusterfs-common-3.0.0-1.x86_64.rpm
> glusterfs-server-3.0.0-1.x86_64.rpm
> 
> 
> configuration (server & client) from:
> glusterfs-volgen -n storage --transport ib-verbs storage01-ib:/storage
> 
> infiniband
> InfiniBand: Mellanox Technologies MT25204 [InfiniHost III Lx HCA]
> 
> 
> mount command on client return:
> ...
> glusterfs#storage-ib-verbs.vol on /storage type fuse
> (rw,allow_other,default_permissions,max_read=131072)
> ...
> 
> error:
> 
> 1. Copy from local disk to directory /storage  (glusterfs)
> 
> pv < CentOS-5.2-x86_64-bin-DVD.iso > /storage/file
> 4.29GB 0:00:10 [ 429MB/s] [=>] 100%
> 
> 2. After that, when I run any command against the gluster storage, it fails with 
> the following error:
> ls /storage
> ls: /storage: Transport endpoint is not connected
> 
> df
> FilesystemSize  Used Avail Use% Mounted on
> /dev/sda1  48G   11G   35G  23% /
> /dev/sda3 171G  188M  162G   1% /data
> tmpfs 7.9G 0  7.9G   0% /dev/shm
> df: `/storage': Transport endpoint is not connected
> 
> 
> These commands worked well just a couple of minutes before!
> 
> What is wrong with my configuration?
> 
> The storage node (glusterfs) is a RAID 6 system of 18 HDs, dual quad-core
> with 32GB RAM.
> The client node is also a dual quad-core system, with 16GB RAM.
> 
> Thanks in advanced.
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] adding bricks

2009-12-10 Thread Thomas Wakefield
Also, I am running XFS.

On Dec 10, 2009, at 10:40 AM, Thomas Wakefield wrote:

> Should this command be run on the servers or the clients? I have done both, 
> and still have issues.
> 
> And also is it correct as listed below?  I get "no such attribute" errors.
> 
> [r...@g1 ~]# find /export/g1a -type d -exec setfattr -x trusted.glusterfs.dht 
> {} \;
> setfattr: /export/g1a/data: No such attribute
> setfattr: /export/g1a/data/prj: No such attribute
> setfattr: /export/g1a/data/prj/N89: No such attribute
> setfattr: /export/g1a/data/prj/N89/DATA2-STFF: No such attribute
> setfattr: /export/g1a/data/prj/N89/DATA2-STFF/DATA: No such attribute
> ...
> 
> 
> Thanks,
> 
> Thomas
> 
> 
> 
> On Nov 19, 2009, at 12:12 PM, Amar Tumballi wrote:
> 
>>> What's the best way to add bricks, and get distribute to use them?  I
>>> added 2 more bricks, and the total size increased for the filesystem,
>>> but I can't get any traffic on the new disks.  I remounted the
>>> filesystem and ran an ls -Rl, but I still don't see any traffic to
>>> the disks.  I do see that the file tree was created on the new disks.
>> 
>> Currently there is no 'hot' add feature which will take care of the 
>> addition of new bricks. To achieve the proper distribution, you need 
>> to force 'distribute' self heal by removing the extended attribute on 
>> the directories.
>> 
>> Try running this:
>> 
>> bash# find /mnt/glusterfs -type d -exec setfattr -x trusted.glusterfs.dht {} 
>> \;
>> 
>> (Note that this works for versions 2.0.8 or higher).
>> 
>> Regards,
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] adding bricks

2009-12-10 Thread Thomas Wakefield
Should this command be run on the servers or the clients? I have done both, and 
still have issues.

And also is it correct as listed below?  I get "no such attribute" errors.

[r...@g1 ~]# find /export/g1a -type d -exec setfattr -x trusted.glusterfs.dht 
{} \;
setfattr: /export/g1a/data: No such attribute
setfattr: /export/g1a/data/prj: No such attribute
setfattr: /export/g1a/data/prj/N89: No such attribute
setfattr: /export/g1a/data/prj/N89/DATA2-STFF: No such attribute
setfattr: /export/g1a/data/prj/N89/DATA2-STFF/DATA: No such attribute
...


Thanks,

Thomas



On Nov 19, 2009, at 12:12 PM, Amar Tumballi wrote:

>> What's the best way to add bricks, and get distribute to use them?  I
>> added 2 more bricks, and the total size increased for the filesystem,
>> but I can't get any traffic on the new disks.  I remounted the
>> filesystem and ran an ls -Rl, but I still don't see any traffic to
>> the disks.  I do see that the file tree was created on the new disks.
> 
> Currently there is no 'hot' add feature which will take care of the 
> addition of new bricks. To achieve the proper distribution, you need 
> to force 'distribute' self heal by removing the extended attribute on 
> the directories.
> 
> Try running this:
> 
> bash# find /mnt/glusterfs -type d -exec setfattr -x trusted.glusterfs.dht {} 
> \;
> 
> (Note that this works for versions 2.0.8 or higher).
> 
> Regards,

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Deletion problem after expanding

2009-12-10 Thread Thomas Wakefield
I can't seem to delete whole directories with gluster:

[r...@cola14 gluster]# rm -rf aaron/
rm: cannot remove directory `aaron//code/lib3.2/src/phspf': Directory not empty


I just added more disk space to my /gluster partition.  I had 4 bricks, and now 
I have 10 bricks.  I ran the following 2 commands after the expansion:

find /gluster -type d -exec setfattr -x trusted.glusterfs.dht {} \;

ls -lR
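
One way to see what is actually blocking the rmdir is to look at the offending 
directory on each brick's backend filesystem; a sketch with hypothetical hostnames 
and brick paths:

for h in g1 g2 g3; do
    ssh "$h" 'ls -la /export/*/aaron/code/lib3.2/src/phspf'
done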



Is there anything else I should have done?



Thanks,
Thomas
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] adding bricks

2009-11-19 Thread Thomas Wakefield
This seems to have helped, thanks.


On Nov 19, 2009, at 12:12 PM, Amar Tumballi wrote:

>> What's the best way to add bricks, and get distribute to use them?  I
>> added 2 more bricks, and the total size increased for the filesystem,
>> but I can't get any traffic on the new disks.  I remounted the
>> filesystem and ran an ls -Rl, but I still don't see any traffic to
>> the disks.  I do see that the file tree was created on the new disks.
> 
> Currently there is no 'hot' add feature which will take care of the 
> addition of new bricks. To achieve the proper distribution, you need 
> to force 'distribute' self heal by removing the extended attribute on 
> the directories.
> 
> Try running this:
> 
> bash# find /mnt/glusterfs -type d -exec setfattr -x trusted.glusterfs.dht {} 
> \;
> 
> (Note that this works for versions 2.0.8 or higher).
> 
> Regards,

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] adding bricks

2009-11-19 Thread Thomas Wakefield
What's the best way to add bricks, and get distribute to use them?  I added 2 
more bricks, and the total size increased for the filesystem, but I can't get 
any traffic on the new disks.  I remounted the filesystem and ran an ls -Rl, 
but I still don't see any traffic to the disks.  I do see that the file tree 
was created on the new disks.

This is what i have for distribute:

volume distribute
 type cluster/distribute
 subvolumes brick_g1a brick_g1b brick_g1c brick_g1d
end-volume
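
For comparison, after adding bricks the same stanza simply grows its subvolumes list 
(the two new brick names below are hypothetical):

volume distribute
  type cluster/distribute
  subvolumes brick_g1a brick_g1b brick_g1c brick_g1d brick_g1e brick_g1f
end-volume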



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Multiple volumes on one server

2009-10-08 Thread Thomas Wakefield
I want to have two or more servers, each serving out 40TB of disk space  
(80TB+ of total space), and I am wondering about the best way to configure  
this amount of disk.


Is it possible to have multiple volumes mounted on a single gluster  
server, but for the client to see the volumes as one mount point?  I  
couldn't find an answer in the documentation.  Even with XFS, I am  
worried about having a single 40TB volume.  So I was thinking of either 2  
or 4 volumes to combine to get to 40TB.


Thanks in advance,

Thomas Wakefield
Systems Administrator COLA/IGES
tw...@cola.iges.org

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users