Re: [Gluster-users] Gluster and public/private LAN

2012-12-19 Thread Washer, Bryan
How did you set it up?  Did you export via NFS on the 10GbE and then run
gluster on the IB?  Or was it all native mounts... if it was native
mounts, how did you make them listen on a different interface from the one
used by the bricks to talk to each other?  I did not think this was
possible.

Bryan
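
The split-horizon naming scheme discussed in this thread can be sketched
roughly as follows; all hostnames and addresses are made-up examples, not
taken from the original mails:

```
# /etc/hosts on every gluster peer -- peers resolve each other on the
# private, brick-to-brick subnet (example addresses only)
10.0.0.11   gfs1.example.com gfs1
10.0.0.12   gfs2.example.com gfs2

# Public DNS zone, as seen by clients, maps the SAME names to the
# public subnet:
; gfs1.example.com.  IN A  192.0.2.11
; gfs2.example.com.  IN A  192.0.2.12
```

If peers are probed by hostname, each machine gets whatever address its own
resolver returns, so servers would talk over the private subnet while clients
connect over the public one; whether a given gluster version really passes
only hostnames in its volfiles is the part that needs verifying.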

-Original Message-
From: Andrew Holway 
Date: Wednesday, December 19, 2012 5:21 AM
To: Shawn Heisey 
Cc: "gluster-users@gluster.org" 
Subject: Re: [Gluster-users] Gluster and public/private LAN

>I've built gluster systems that did the internal stuff over an IB network
>and did the external stuff over 10G..
>
>I didn't even think about it and it just worked.
>
>On Dec 18, 2012, at 11:26 PM, Shawn Heisey wrote:
>
>> I have an idea I'd like to run past everyone.  Every gluster peer would
>>have two NICs - one "public" and the other "private" with different IP
>>subnets.  The idea that I am proposing would be to have every gluster
>>peer have all private peer addresses in /etc/hosts, but the public
>>addresses would be in DNS.  Clients would use DNS.
>> 
>> The goal is to have all peer-to-peer communication (self-heal,
>>rebalance, etc) happen on the private network, leaving all the bandwidth
>>on the public network available for client connections.
>> 
>> Will this work on 3.3.1 or newer? If the volume information that
>>gluster clients and servers pass to each other only has hostnames, I
>>would expect it to work.  Of course I would have the usual scalability
>>problems associated with relying in part on /etc/hosts, but knowing that
>>in advance, we can take the proper precautions.
>> 
>> Side note: the public and private NICs would each actually be a bonded
>>pair and plugged into separate switches for network redundancy.
>> 
>> Thanks,
>> Shawn
>> 
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
>___
>Gluster-users mailing list
>Gluster-users@gluster.org
>http://supercolony.gluster.org/mailman/listinfo/gluster-users
>


NOTICE: This email and any attachments may contain confidential and proprietary 
information of NetSuite Inc. and is for the sole use of the intended recipient 
for the stated purpose. Any improper use or distribution is prohibited. If you 
are not the intended recipient, please notify the sender; do not review, copy 
or distribute; and promptly delete or destroy all transmitted information. 
Please note that all communications and information transmitted through this 
email system may be monitored by NetSuite or its agents and that all incoming 
email is automatically scanned by a third party spam and filtering service

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Transport endpoint

2012-12-17 Thread Washer, Bryan
Just to make sure we don't miss the obvious: when you say "sync'd over to
the mount point", you mean where you did a glusterfs mount, and not the actual
location of the brick on one of the mirrors in your replica?

Once you set up the volume and start it, you should NEVER write or delete
directly on the backend brick unless you really know what you are doing.

Bryan

-Original Message-
From: Joe Julian 
Date: Monday, December 17, 2012 9:29 AM
To: "gluster-users@gluster.org" 
Subject: Re: [Gluster-users] Transport endpoint

>On 12/17/2012 06:56 AM, Robin, Robin wrote:
>> Hi,
>>
>> I've got Gluster error: Transport endpoint not connected.
>>
>> It came up twice after trying to rsync 2 TB filesystem over; it
>> reached about 1.8 TB and got the error.
>>
>> Logs on the server side (on reverse time order):
>> [2012-12-15 00:53:24.747934] I
>> [server-helpers.c:629:server_connection_destroy]
>> 0-RedhawkShared-server: destroyed connection of
>> 
>>mualhpcp01.hpc.muohio.edu-17684-2012/12/13-17:25:16:994209-RedhawkShared-
>>client-0-0
>> [2012-12-15 00:53:24.743459] I [server-helpers.c:474:do_fd_cleanup]
>> 0-RedhawkShared-server: fd cleanup on
>> 
>>/mkennedy/tramelot_nwfs/rpr3/rpr3/rpr3_sparky/matrix/.4d_ccnoesy.ucsf.QTQ
>>swL
>> [2012-12-15 00:53:24.743430] I
>> [server-helpers.c:330:do_lock_table_cleanup] 0-RedhawkShared-server:
>> finodelk released on
>> 
>>/mkennedy/tramelot_nwfs/rpr3/rpr3/rpr3_sparky/matrix/.4d_ccnoesy.ucsf.QTQ
>>swL
>> [2012-12-15 00:53:24.743400] I
>> [server-helpers.c:741:server_connection_put] 0-RedhawkShared-server:
>> Shutting down connection
>> 
>>mualhpcp01.hpc.muohio.edu-17684-2012/12/13-17:25:16:994209-RedhawkShared-
>>client-0-0
>> [2012-12-15 00:53:24.743368] I [server.c:685:server_rpc_notify]
>> 0-RedhawkShared-server: disconnecting connectionfrom
>> 
>>mualhpcp01.hpc.muohio.edu-17684-2012/12/13-17:25:16:994209-RedhawkShared-
>>client-0-0
>> [2012-12-15 00:53:24.740055] W [socket.c:195:__socket_rwv]
>> 0-tcp.RedhawkShared-server: readv failed (Connection reset by peer)
>>
>> I can't find relevant logs on the client side.
>>
>> From the logs, can we judge for sure that this is a network reset
>> problem ?
>>
>When you say, "I can't find relevant logs on the client side," do you
>mean that you can't find the log, or that there's nothing in there from
>around the same timestamp? The client log will be in /var/log/glusterfs
>and will be named based on the mountpoint.
>___
>Gluster-users mailing list
>Gluster-users@gluster.org
>http://supercolony.gluster.org/mailman/listinfo/gluster-users
>



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Howto find out volume topology

2012-11-15 Thread Washer, Bryan
You will have to wait on others for that.  Those are new in 3.3 and 
I still run 3.2.5 everywhere and have no experience with the new options.

Bryan

Sent from my little friend.


On Nov 15, 2012, at 1:18 AM, "Fred van Zwieten"
<fvzwie...@vxcompany.com> wrote:

Thanks Bryan,

How would that play out with:

striped
striped replicated
distributed striped replicated



Met vriendelijke groeten,

Fred van Zwieten
Enterprise Open Source Services

Consultant
(vrijdags afwezig)

VX Company IT Services B.V.
T (035) 539 09 50 mobiel (06) 41 68 28 48
F (035) 539 09 08
E fvzwie...@vxcompany.com
I  www.vxcompany.com



On Wed, Nov 14, 2012 at 8:16 PM, Washer, Bryan
<bwas...@netsuite.com> wrote:


They are listed in “gluster volume info”; the pairs are set up in order.

So if you have host1, host2, host3, host4, the pairs would be host1 & host2 and 
host3 & host4, and so on, with a replica of 2. If it were a replica of 3, then 
host1, host2, host3 would be one set, host4, host5, host6 would be the next 
set, and so on.
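
That ordering rule can be illustrated with a few lines of Python (a
hypothetical sketch of the grouping, not gluster code):

```python
def replica_sets(bricks, replica):
    """Group a brick list into replica sets, in the order the bricks
    were passed to 'gluster volume create'."""
    if len(bricks) % replica != 0:
        raise ValueError("brick count must be a multiple of the replica count")
    return [bricks[i:i + replica] for i in range(0, len(bricks), replica)]

hosts = ["host1", "host2", "host3", "host4"]
print(replica_sets(hosts, 2))  # [['host1', 'host2'], ['host3', 'host4']]
```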

Bryan

From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Fred van Zwieten
Sent: Wednesday, November 14, 2012 1:14 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Howto find out volume topology

Hello,

I would like to find out the topology of an existing volume. For example, if I 
have a distributed replicated volume, what bricks are the replication partners?

Fred




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Howto find out volume topology

2012-11-14 Thread Washer, Bryan
They are listed in “gluster volume info”; the pairs are set up in 
order.

So if you have host1, host2, host3, host4, the pairs would be host1 & host2 and 
host3 & host4, and so on, with a replica of 2. If it were a replica of 3, then 
host1, host2, host3 would be one set, host4, host5, host6 would be the next 
set, and so on.

Bryan

From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Fred van Zwieten
Sent: Wednesday, November 14, 2012 1:14 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Howto find out volume topology

Hello,

I would like to find out the topology of an existing volume. For example, if I 
have a distributed replicated volume, what bricks are the replication partners?

Fred




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Switch experiences

2012-11-06 Thread Washer, Bryan
I am using HP 10GB switches.


-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Bryan Whitehead
Sent: Tuesday, November 06, 2012 1:22 PM
To: muel...@tropenklinik.de
Cc: Gluster-users@gluster.org
Subject: Re: [Gluster-users] Switch experiences

Infiniband switches / cards FTW. :) I use Mellanox switches/cards.

On Mon, Nov 5, 2012 at 11:34 PM, Daniel Müller  wrote:
> I do not have any special switches and everything is running fine. GB Network 
> is ok for me.
>
> EDV Daniel Müller
>
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
>
> Von: gluster-users-boun...@gluster.org 
> [mailto:gluster-users-boun...@gluster.org] Im Auftrag von Runar Ingebrigtsen
> Gesendet: Montag, 5. November 2012 20:04
> An: Gluster-users@gluster.org
> Betreff: [Gluster-users] Switch experiences
>
> Hi,
>
> I would like to know what experiences you all have with different brands of 
> switches used with Gluster. I don't know why I should buy a Cisco or HP 
> premium switch, when all I want is a Gb network. No VPN or any advanced 
> features.
> --
> Runar Ingebrigtsen
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Instructions for converting a distributed volume to a distributed-replicated?

2012-11-06 Thread Washer, Bryan
Joe can offer more information, but I would delete the existing 
volume... which will leave the data in place.

Then recreate the volume as distributed-replicated by doing:

gluster volume create gluster-ssd-vol replica 2 transport tcp 1A 1B 2A 2B 3A 3B 
4A 4B 5A 5B 6A 6B 7A 7B 8A 8B

The order in which you add the bricks determines the replica sets...

This will use the bricks from the previous volume and mirror each with a new 
brick; then run a self-heal and they will sync...

Bryan

From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Kushnir, Michael 
(NIH/NLM/LHC) [C]
Sent: Tuesday, November 06, 2012 11:45 AM
To: 'gluster-users@gluster.org'
Subject: Re: [Gluster-users] Instructions for converting a distributed volume 
to a distributed-replicated?

Thanks for getting back to me guys.

I don't have an existing volume yet. This is a planned deployment. At first I 
will have one server with 8 bricks forming a distributed volume.

So, on server 1, I will have volume "gluster-ssd-vol" with bricks: 1A, 2A, 3A, 
4A, 5A, 6A, 7A, 8A.

Then (a week later) I will add server two and would like to convert to a 
distributed-replicated volume by adding  bricks: 1B, 2B, 3B, 4B, 5B, 6B, 7B, 8B.

So would "gluster volume add-brick gluster-ssd-vol replica 2 <1B> <2B> <3B> 
<4B> <5B> <6B> <7B> <8B>" result in a distributed-replicated volume with 
replica sets 1AB, 2AB, 3AB, etc...?

Thanks,
Michael



From: Joe Julian [mailto:j...@julianfamily.org]
Sent: Tuesday, November 06, 2012 12:23 PM
To: 'gluster-users@gluster.org'
Subject: Re: [Gluster-users] Instructions for converting a distributed volume 
to a distributed-replicated?

The text isn't 100% clear, I agree, but it is safe to infer from this what you 
would naturally infer:

# gluster volume add-brick

Usage: volume add-brick  [ ]  ...
meaning you can add-brick, specify the new replica (or stripe) count and list 
enough bricks to make the change possible and it will do just that.
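For Michael's case above, the invocation would look roughly like the
following; the server name and brick paths here are hypothetical, so check
the exact syntax against your gluster version before running it:

```
gluster volume add-brick gluster-ssd-vol replica 2 \
    server2:/bricks/1B server2:/bricks/2B server2:/bricks/3B server2:/bricks/4B \
    server2:/bricks/5B server2:/bricks/6B server2:/bricks/7B server2:/bricks/8B
```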
On 11/05/2012 10:30 AM, Kushnir, Michael (NIH/NLM/LHC) [C] wrote:
Hi everyone,

Where can I find instructions for converting a distributed volume to a 
distributed replicated volume?

Thanks,
Michael

__
Michael Kushnir
System Architect / Engineer
Communications Engineering Branch
Lister Hill National Center for Biomedical Communications
National Library of Medicine

8600 Rockville Pike, Building 38A, Floor 10
Besthesda, MD 20894

Phone: 301-435-3219
Email: michael.kush...@nih.gov





___

Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Instructions for converting a distributed volume to a distributed-replicated?

2012-11-06 Thread Washer, Bryan
Michael.

What is the configuration of your existing volume? ("gluster volume info")  
This will allow me to provide a bit more detail on what I would do.

Bryan

From: Kushnir, Michael (NIH/NLM/LHC) [C] [mailto:michael.kush...@nih.gov]
Sent: Tuesday, November 06, 2012 11:15 AM
To: Washer, Bryan; gluster-users@gluster.org
Subject: RE: [Gluster-users] Instructions for converting a distributed volume 
to a distributed-replicated?

Hi Bryan,

Yes. I intend to deploy one distributed volume and then add a second full set 
of bricks for the replicas. I understand the general approach you described. I 
was more interested in specific instructions.

Thanks,
Michael

From: Washer, Bryan [mailto:bwas...@netsuite.com]
Sent: Tuesday, November 06, 2012 12:12 PM
To: Kushnir, Michael (NIH/NLM/LHC) [C];
gluster-users@gluster.org
Subject: RE: [Gluster-users] Instructions for converting a distributed volume 
to a distributed-replicated?


Are you adding more hardware?  If so, add the additional hardware and 
reconfigure the volume to be replicated distributed, with one side of each 
replica being one of your distributed bricks and the other one of the new 
bricks. It will mirror over, and you will be on replicated distributed... but 
this will require 2x hardware.

Anything else will require a lot of data moving...

Bryan

From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Kushnir, Michael
(NIH/NLM/LHC) [C]
Sent: Tuesday, November 06, 2012 10:58 AM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Instructions for converting a distributed volume 
to a distributed-replicated?

Anyone?

Thanks!
Michael

From: Kushnir, Michael (NIH/NLM/LHC) [C]
Sent: Monday, November 05, 2012 1:31 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Instructions for converting a distributed volume to a 
distributed-replicated?

Hi everyone,

Where can I find instructions for converting a distributed volume to a 
distributed replicated volume?

Thanks,
Michael

__
Michael Kushnir
System Architect / Engineer
Communications Engineering Branch
Lister Hill National Center for Biomedical Communications
National Library of Medicine

8600 Rockville Pike, Building 38A, Floor 10
Besthesda, MD 20894

Phone: 301-435-3219
Email: michael.kush...@nih.gov





___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Instructions for converting a distributed volume to a distributed-replicated?

2012-11-06 Thread Washer, Bryan
Are you adding more hardware?  If so, add the additional hardware and 
reconfigure the volume to be replicated distributed, with one side of each 
replica being one of your distributed bricks and the other one of the new 
bricks. It will mirror over, and you will be on replicated distributed... but 
this will require 2x hardware.

Anything else will require a lot of data moving...

Bryan

From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Kushnir, Michael 
(NIH/NLM/LHC) [C]
Sent: Tuesday, November 06, 2012 10:58 AM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Instructions for converting a distributed volume 
to a distributed-replicated?

Anyone?

Thanks!
Michael

From: Kushnir, Michael (NIH/NLM/LHC) [C]
Sent: Monday, November 05, 2012 1:31 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Instructions for converting a distributed volume to a 
distributed-replicated?

Hi everyone,

Where can I find instructions for converting a distributed volume to a 
distributed replicated volume?

Thanks,
Michael

__
Michael Kushnir
System Architect / Engineer
Communications Engineering Branch
Lister Hill National Center for Biomedical Communications
National Library of Medicine

8600 Rockville Pike, Building 38A, Floor 10
Besthesda, MD 20894

Phone: 301-435-3219
Email: michael.kush...@nih.gov




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS performance

2012-09-26 Thread Washer, Bryan
Steve,

  If you are going to grow to that size, then you should build a larger test 
system.  I run a 6x2 distributed-replicated setup and I maintain 400-575MB/s 
write speeds and 780-800MB/s read speeds.  I would share my settings with you, 
but I have my system tuned for 100MB - 2GB files.  To give you an idea, I 
roll roughly 4TB of data every night, writing new data in and deleting out old 
data.

  Many people try to test gluster with the minimum setup, but you don't 
really start seeing the benefits of gluster until you scale it both in the 
number of clients and the number of bricks.

Just thought I would share my experiences.

Bryan

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Steve Thompson
Sent: Wednesday, September 26, 2012 4:57 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] GlusterFS performance

On Wed, 26 Sep 2012, Joe Landman wrote:

> Read performance with the gluster client isn't that good, write 
> performance (effectively write caching at the brick layer) is pretty good.

Yep. I found out today that if I set up a 2-brick distributed non-replicated 
volume using two servers, GlusterFS read performance is good from the server 
that does _not_ contain a copy of the file. In fact, I got 148 MB/sec, largely 
due to the two servers having dual-bonded gigabit links (balance-alb mode) to 
each other via a common switch. From the server that _does_ have a copy of the 
file, of course read performance is excellent (over 580 MB/sec).

It remains that read performance on another client (same subnet but an extra 
switch hop) is too low to be useable, and I can point the finger at GlusterFS 
here since NFS on the same client gets good performance, as does MooseFS 
(although MooseFS has other issues). And if using a replicated volume, 
GlusterFS write performance is too low to be useable also.

> I know its a generalization, but this is basically what we see.  In 
> the best case scenario, we can tune it pretty hard to get within 50% 
> of native speed. But it takes lots of work to get it to that point, as 
> well as an application which streams large IO.  Small IO is a (still) 
> bad on the system IMO.

I'm very new to GlusterFS, so it looks like I have my work cut out for me. 
My aim was ultimately to build a large (100 TB) file system with redundancy for 
linux home directories and samba shares. I've already given up on MooseFS after 
several months' work.

Thanks for all your comments,

Steve
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Throughout over infiniband

2012-09-10 Thread Washer, Bryan
Everyone,

  This is just a response to the issue of NFS vs glusterfs and gluster's 
performance, as I think some of the information may be useful here and has not 
been discussed.

  For the sake of clarity, I do not run infiniband, but I am running 10GbE. My 
normal production speeds sit around 600MB/s to 700MB/s with the native gluster 
client.

My setup has 12 servers, each with a single 24-disk SATA RAID 5 10TB brick.  
Gluster setup of 6x2.

  Before I settled on this setup I ran extensive tests over about 6 weeks to 
confirm it... in my case glusterfs native outperformed NFS considerably 
in aggregate data transfers. I also found that the peak performance of the 
glusterfs client in my setup was at about 12 servers. This distributed the 
write and read loads very well; after 12 servers, adding more produces 
diminishing returns.

  I bring this up because no one has been talking about how their brick setup 
may be affecting performance, or about the number of servers hosting 
bricks. Putting multiple bricks on a single server does not increase the load 
capacity anywhere near as much as adding another server with the additional brick.

The point I wanted to make is that you need to look at all sides of your setup 
in order to get the best performance. In my case this involved evaluating the 
RAID setup (tuning the block size), the file system used for the brick on 
the RAID (and tuning it, specifically for the size and types of files being 
manipulated), the memory and CPU in the servers, the network bandwidth, and the 
client access method.   I had to look at all of these (and each of them had an 
impact on the final performance numbers) before I found my best setup.    I do 
not think you can just unilaterally dismiss a gluster setup until you have done 
a COMPLETE analysis of how best to set up your environment.

Just sharing my thoughts: when I first set up gluster I thought I could just 
install it, tweak the options, and be good to go, but once I understood 
everything it depends on and addressed all of those options and tuning as 
well, I significantly improved my overall performance to well over what I was 
able to achieve with NFS.

Feel free to shoot me with comments or questions.

Bryan Washer



-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Fernando Frediani (Qube)
Sent: Monday, September 10, 2012 8:14 AM
To: 'Stephan von Krawczynski'; 'Whit Blauvelt'
Cc: 'gluster-users@gluster.org'; 'Brian Candler'
Subject: Re: [Gluster-users] Throughout over infiniband

Well, I would say there is a reason, if the Gluster client performed as 
expected.
Using the Gluster client, it should in theory access the file(s) directly from 
the nodes where they reside, rather than having to go through a single node 
exporting the NFS folder, which would then have to gather the file.
Yes, NFS has all the caching stuff, but if the Gluster client's behaviour were 
similar it should be able to get similar performance, which doesn't seem to be 
what has been reported.
I did tests myself using the Gluster client and NFS; NFS got better performance 
also, and I believe this is due to the caching.
Fernando

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Stephan von Krawczynski
Sent: 10 September 2012 13:57
To: Whit Blauvelt
Cc: gluster-users@gluster.org; Brian Candler
Subject: Re: [Gluster-users] Throughout over infiniband

On Mon, 10 Sep 2012 08:06:51 -0400
Whit Blauvelt  wrote:

> On Mon, Sep 10, 2012 at 11:13:11AM +0200, Stephan von Krawczynski wrote:
> > [...]
> > If you're lucky you reach something like 1/3 of the NFS performance.
> [Gluster NFS Client]
> Whit

There is a reason why one would switch from NFS to GlusterFS, and mostly it is 
redundancy. If you start using an NFS-client setup you cut yourself off from the 
"complete solution". As said elsewhere, you can as well export GlusterFS via 
kernel-nfs-server. But honestly, it is a patch. It would be better by far if 
things were done right: a native glusterfs client in kernel space.
And remember, generally there should be no big difference between NFS and 
GlusterFS with bricks spread over several networks - if it is done how it 
should be, without userspace.

--
MfG,
Stephan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



Re: [Gluster-users] FeedBack Requested : Changes to CLI output of 'peer status'

2012-08-28 Thread Washer, Bryan
Pranith,

  I will look at it...Thanks.

Bryan

-Original Message-
From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com] 
Sent: Tuesday, August 28, 2012 7:51 AM
To: Washer, Bryan
Cc: Amar Tumballi; Gluster Devel; gluster-users
Subject: Re: [Gluster-users] FeedBack Requested : Changes to CLI output of 
'peer status'

Bryan,
   Why not use --xml at the end of the command? That will print the output in 
xml format. Would that make it easy to parse?

Pranith.
- Original Message -
From: "Bryan Washer" 
To: "Amar Tumballi" , "Gluster Devel" 
, "gluster-users" 
Sent: Tuesday, August 28, 2012 5:40:16 PM
Subject: Re: [Gluster-users] FeedBack Requested : Changes to CLI output of 
'peerstatus'




I would love this and would be more than happy to change my current 
parsing...this would make it A LOT easier to parse, as well as easier to see 
all the information on a disconnected peer, as the information will be on a single 
line. 

Bryan Washer 

-Original Message- 
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Amar Tumballi 
Sent: Tuesday, August 28, 2012 6:06 AM 
To: Gluster Devel; gluster-users 
Subject: [Gluster-users] FeedBack Requested : Changes to CLI output of 'peer 
status' 

Hi, 

Wanted to check if anyone is using the gluster CLI output of 'peer status' 
in their scripts/programs? If yes, let me know. If not, we are trying to 
make it more script friendly. 

For example the current output would look something like: 

- 
Hostname: 10.70.36.7 
Uuid: c7283ee7-0e8d-4cb8-8552-a63ab05deaa7 
State: Peer in Cluster (Connected) 

Hostname: 10.70.36.6 
Uuid: 5a2fdeb3-e63e-4e56-aebe-8b68a5abfcef 
State: Peer in Cluster (Connected) 

- 

New changes would make it look like : 

--- 
UUID Hostname Status 
c7283ee7-0e8d-4cb8-8552-a63ab05deaa7 10.70.36.7 Connected 
5a2fdeb3-e63e-4e56-aebe-8b68a5abfcef 10.70.36.6 Connected 

--- 

If anyone has a better format, or wants more information, let us know now. 
I would keep the timeout for this mail at 3 more working days; without 
any response, we will go ahead with the change. 
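With the new tabular format, parsing from a script would become a one-liner; a 
minimal sketch, assuming the single header line and the three whitespace-separated 
columns shown above:

```shell
# Skip the "UUID Hostname Status" header, print each peer's UUID and status.
gluster peer status | awk 'NR > 1 && NF >= 3 { print $1, $3 }'
```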

Regards, 
Amar 
___ 
Gluster-users mailing list 
Gluster-users@gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users 




Re: [Gluster-users] FeedBack Requested : Changes to CLI output of 'peer status'

2012-08-28 Thread Washer, Bryan
If this is the case, why not add a flag for the more verbose 
information, and let the default provide just the basics, with output that is 
very easy to parse like this...

Bryan Washer

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Pranith Kumar Karampuri
Sent: Tuesday, August 28, 2012 7:46 AM
To: Amar Tumballi
Cc: gluster-users; Gluster Devel
Subject: Re: [Gluster-users] FeedBack Requested : Changes to CLI output of 
'peer status'

hi Amar,
 This is the format we considered initially, but we did not go with it 
because it may exceed 80 characters and wrap on small terminals if we want to 
add more fields in the future.

Pranith.
- Original Message -
From: "Amar Tumballi" 
To: "Gluster Devel" , "gluster-users" 

Sent: Tuesday, August 28, 2012 4:36:07 PM
Subject: [Gluster-users] FeedBack Requested : Changes to CLI output of 'peer
status'

Hi,

Wanted to check if anyone is using the gluster CLI output of 'peer status' 
in their scripts/programs? If yes, let me know. If not, we are trying to 
make it more script friendly.

For example the current output would look something like:

-
Hostname: 10.70.36.7
Uuid: c7283ee7-0e8d-4cb8-8552-a63ab05deaa7
State: Peer in Cluster (Connected)

Hostname: 10.70.36.6
Uuid: 5a2fdeb3-e63e-4e56-aebe-8b68a5abfcef
State: Peer in Cluster (Connected)

-

New changes would make it look like :

---
UUID  Hostname   Status
c7283ee7-0e8d-4cb8-8552-a63ab05deaa7  10.70.36.7 Connected
5a2fdeb3-e63e-4e56-aebe-8b68a5abfcef  10.70.36.6 Connected

---

If anyone has a better format, or wants more information, let us know now. 
I would keep the timeout for this mail at 3 more working days; without 
any response, we will go ahead with the change.

Regards,
Amar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] FeedBack Requested : Changes to CLI output of 'peer status'

2012-08-28 Thread Washer, Bryan
I would love this and would be more than happy to change my 
current parsing...this would make it A LOT easier to parse, as well as easier 
to see all the information on a disconnected peer, as the information will be on a 
single line.

Bryan Washer

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Amar Tumballi
Sent: Tuesday, August 28, 2012 6:06 AM
To: Gluster Devel; gluster-users
Subject: [Gluster-users] FeedBack Requested : Changes to CLI output of 'peer 
status'

Hi,

Wanted to check if anyone is using the gluster CLI output of 'peer status' 
in their scripts/programs? If yes, let me know. If not, we are trying to 
make it more script friendly.

For example the current output would look something like:

-
Hostname: 10.70.36.7
Uuid: c7283ee7-0e8d-4cb8-8552-a63ab05deaa7
State: Peer in Cluster (Connected)

Hostname: 10.70.36.6
Uuid: 5a2fdeb3-e63e-4e56-aebe-8b68a5abfcef
State: Peer in Cluster (Connected)

-

New changes would make it look like :

---
UUID  Hostname   Status
c7283ee7-0e8d-4cb8-8552-a63ab05deaa7  10.70.36.7 Connected
5a2fdeb3-e63e-4e56-aebe-8b68a5abfcef  10.70.36.6 Connected

---

If anyone has a better format, or wants more information, let us know now. 
I would keep the timeout for this mail at 3 more working days; without 
any response, we will go ahead with the change.

Regards,
Amar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




Re: [Gluster-users] kernel parameters for improving gluster writes on millions of small writes (long)

2012-07-26 Thread Washer, Bryan
Harry,

  Just a question, but what file system are you using under the gluster system?  
You may need to tune that before you continue to tune the output 
system.   I found that by using the xfs file system and tuning it for 
very large files I was able to improve my performance quite a bit.  In this 
case, though, I was working with a lot of big files, so my tuning would not help 
you...I just wanted to make sure you had looked at this detail in your setup.
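For reference, the sort of large-file tuning I mean looks roughly like this; the 
stripe values (su/sw) must match your RAID geometry and the device path is a 
placeholder, so treat these numbers as illustrative, not as a recipe:

```shell
# su/sw must equal your RAID stripe unit and width; /dev/sdX is a placeholder.
mkfs.xfs -i size=512 -d su=256k,sw=8 /dev/sdX
mount -o inode64,noatime,logbufs=8 /dev/sdX /raid1
```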

Bryan

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Harry Mangalam
Sent: Wednesday, July 25, 2012 8:02 PM
To: gluster-users
Subject: [Gluster-users] kernel parameters for improving gluster writes on 
millions of small writes (long)

This is a continuation of my previous posts about improving write perf
when trapping millions of small writes to a gluster filesystem.
I was able to improve write perf by ~30x by running STDOUT thru gzip
to consolidate and reduce the output stream.

Today, another similar problem, having to do with yet another
bioinformatics program (which these days typically handle the 'short
reads' that come out of the majority of sequencing hardware, each read
being 30-150 characters, with some metadata typically in an ASCII file
containing millions of such entries).  Reading them doesn't seem to be
a problem (at least on our systems) but writing them is quite awful.

The program is called 'art_illumina' from the Broad Inst's 'ALLPATHS'
suite and it generates an artificial Illumina data set from an input
genome.  In this case about 5GB of the type of data described above.
Like before, the gluster process goes to >100% and the program itself
slows to ~20-30% of a CPU.  In this case, the app's output cannot be
externally trapped by redirecting thru gzip since the output flag
specifies the base filename for 2 files that are created internally
and then written directly.  This prevents even setting up a named pipe
to trap and process the output.
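For tools that do write to STDOUT, the gzip funnel mentioned above can be 
sketched like this ('emit_reads' is a hypothetical stand-in for the real 
program); the point is that gzip batches millions of tiny writes into a few 
large sequential ones before they reach the gluster mount:

```shell
# Hypothetical generator of many short lines, standing in for a real tool.
emit_reads() { seq 1 100000; }

# gzip coalesces the stream, so the filesystem sees few large writes.
emit_reads | gzip > reads.txt.gz
zcat reads.txt.gz | wc -l    # should print 100000
```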

Since this gluster storage was set up specifically for bioinformatics,
this is a repeating problem and while some of the issues can be dealt
with by trapping and converting output, it would be VERY NICE if we
could deal with it at the OS level.

The gluster volume is running over IPoIB on QDR IB and looks like this:
Volume Name: gl
Type: Distribute
Volume ID: 21f480f7-fc5a-4fd8-a084-3964634a9332
Status: Started
Number of Bricks: 8
Transport-type: tcp,rdma
Bricks:
Brick1: bs2:/raid1
Brick2: bs2:/raid2
Brick3: bs3:/raid1
Brick4: bs3:/raid2
Brick5: bs4:/raid1
Brick6: bs4:/raid2
Brick7: bs1:/raid1
Brick8: bs1:/raid2
Options Reconfigured:
performance.write-behind-window-size: 1024MB
performance.flush-behind: on
performance.cache-size: 268435456
nfs.disable: on
performance.io-cache: on
performance.quick-read: on
performance.io-thread-count: 64
auth.allow: 10.2.*.*,10.1.*.*

I've tried to increase every caching option that might improve this
kind of performance, but it doesn't seem to help.  At this point, I'm
wondering whether changing the client (or server) kernel parameters
will help.
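The kind of client-side knob I have in mind is the VM dirty-page writeback 
tuning, which controls how much dirty data accumulates before being flushed; 
whether this actually helps through the FUSE mount is exactly the open question, 
and the values below are illustrative starting points, not recommendations:

```shell
sysctl -w vm.dirty_background_bytes=268435456   # start background flush at 256 MB
sysctl -w vm.dirty_bytes=1073741824             # hard flush ceiling at 1 GB
sysctl -w vm.dirty_expire_centisecs=3000        # flush dirty pages older than 30 s
```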

The client's meminfo is:
 cat  /proc/meminfo
MemTotal:   529425924 kB
MemFree:241833188 kB
Buffers:  355248 kB
Cached: 279699444 kB
SwapCached:0 kB
Active:  2241580 kB
Inactive:   278287248 kB
Active(anon): 190988 kB
Inactive(anon):   287952 kB
Active(file):2050592 kB
Inactive(file): 277999296 kB
Unevictable:   16856 kB
Mlocked:   16856 kB
SwapTotal:  563198732 kB
SwapFree:   563198732 kB
Dirty:  1656 kB
Writeback: 0 kB
AnonPages:486876 kB
Mapped:19808 kB
Shmem:   164 kB
Slab:1475476 kB
SReclaimable:1205944 kB
SUnreclaim:   269532 kB
KernelStack:5928 kB
PageTables:27312 kB
NFS_Unstable:  0 kB
Bounce:0 kB
WritebackTmp:  0 kB
CommitLimit:827911692 kB
Committed_AS: 536852 kB
VmallocTotal:   34359738367 kB
VmallocUsed: 1227732 kB
VmallocChunk:   33888774404 kB
HardwareCorrupted: 0 kB
AnonHugePages:376832 kB
HugePages_Total:   0
HugePages_Free:0
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
DirectMap4k:  201088 kB
DirectMap2M:15509504 kB
DirectMap1G:521142272 kB

and the server's meminfo is:

$ cat  /proc/meminfo
MemTotal:   32861400 kB
MemFree: 1232172 kB
Buffers:   29116 kB
Cached: 30017272 kB
SwapCached:   44 kB
Active: 18840852 kB
Inactive:   11772428 kB
Active(anon): 492928 kB
Inactive(anon):75264 kB
Active(file):   18347924 kB
Inactive(file): 11697164 kB
Unevictable:   0 kB
Mlocked:   0 kB
SwapTotal:  16382900 kB
SwapFree:   16382680 kB
Dirty: 8 kB
Writeback: 0 kB
AnonPages:566876 kB
Mapped:14212 kB
Shmem:  1276 k

Re: [Gluster-users] Performance question

2012-02-13 Thread Washer, Bryan
Arnold,

  I would love to see the numbers you get for dbench, as I have been doing 
extensive testing with iozone and would like to be able to compare numbers to 
see that we are coming to the same conclusions.  If you would like to see the 
basics of what I have been testing, please look at http://community.gluster.com 
and look for the performance question.  You can add your information there as 
well so we have it documented for easy lookup by others.  If there is anything 
I can do to help out with your testing, please let me know.

Bryan

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Brian Candler
Sent: Monday, February 13, 2012 8:13 AM
To: Arnold Krille
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Performance question

On Mon, Feb 13, 2012 at 03:07:53PM +0100, Arnold Krille wrote:
> > If I understand it right (and I'm quite new to gluster myself), 
> > whenever you do a read on a replicated volume, gluster dispatches 
> > the operation to both nodes, waits for both results to come back, 
> > and checks they are the same (and if not, works out which is wrong 
> > and kicks off a self-heal operation)
> > 
> > http://www.youtube.com/watch?v=AsgtE7Ph2_k
> > 
> > And of course, writes have to be dispatched to both nodes, and won't 
> > complete until the slowest has finished.  This may be the reason for 
> > your poor latency.
> 
> I understand that writes have to happen on all (running) replicas and 
> only return when the last finished (like the C-protocol with drbd). 
> But reads can (and should) happen from the nearest only. Or from the 
> fastest. With two nodes you can't decide which node has the 'true' 
> data except to check for the attributes.

The attributes say which was written most recently, and (as I understand it) 
that information is used to decide which is the correct one.

> NFS on these nodes is limited by the Gigabit-Network-Performance and 
> the disk and results in min(120MBit, ~100MBit) from network and disk.
> But I will run dbench on the nfs shares (without gluster) this evening.

Yes, I think a useful comparison would be:

* NFS
* Gluster with single disk volume [or distributed-only volume]
* Gluster with replicated volume [or replicated/distributed volume]

Using the same dbench parameters in each case, of course.
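Sketched as commands (server names, volume names, and mount points are 
placeholders), the comparison would be something like:

```shell
mount -t nfs server:/export /mnt/nfs
mount -t glusterfs server:/gl-dist /mnt/gl-dist    # distribute-only volume
mount -t glusterfs server:/gl-repl /mnt/gl-repl    # replicated volume

for d in /mnt/nfs /mnt/gl-dist /mnt/gl-repl; do
    dbench -D "$d" -t 60 8    # same parameters each run: 8 clients, 60 seconds
done
```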

Regards,

Brian.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
