[Gluster-users] glusterfs-volgen

2010-04-09 Thread Gmail - Activepage
How do I use glusterfs-volgen to create a replicated volume across 3 servers?

With the option raid=1 I can only use 2 or 4 servers, not 3.

I want to replicate across 3 identical servers.

Does glusterfs-volgen support AFR?

Thanks
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] can we have multiple server sections in a single volume?

2010-04-09 Thread Joe Landman

Amar Tumballi wrote:
| Is it possible to provide a single volume file to serve a gluster file
| system over ib-verbs and tcp simultaneously?

Yes, it's possible.

Run 'glusterfs-volgen' with '--transport tcp,ib-verbs' as an option. It
will generate the required export volumes. (Technically, you have to
define two protocol/server volumes with the same options apart from
'transport-type', which will be 'tcp' for one and 'ib-verbs' for the other.)
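
For illustration, the two generated server volumes might look roughly like
this (a minimal sketch with guessed volume names and options; the actual
volgen output may differ):

volume server-tcp
type protocol/server
option transport-type tcp
option auth.addr.brick.allow *
subvolumes brick
end-volume

volume server-ib
type protocol/server
option transport-type ib-verbs
option auth.addr.brick.allow *
subvolumes brick
end-volume

The command itself would be something along the lines of
'glusterfs-volgen --name testvol --transport tcp,ib-verbs server1:/export server2:/export'
(the volume name, hostnames and export paths here are placeholders).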


Very good.  Will glusterfs-volgen also generate files so that, for two DHT
bricks per storage node, we have separate daemons?  Is this still needed
in 3.0.3 for performance reasons (when you have multiple bricks on each
storage node) as compared to 2.0.9?


Thanks!



Regards,
Amar





--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
email: land...@scalableinformatics.com
web  : http://scalableinformatics.com
   http://scalableinformatics.com/jackrabbit
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Lustre vs. Gluster performance testing - was RE: other cluster fs-- was - No space left on device (when there is actually lots of free space)

2010-04-09 Thread Burnash, James
Yes, after reading, that was largely my take as well. I didn't come away from 
it with a good understanding of:

1) Was the same hardware configuration used for both the Lustre and Gluster
tests? (It looks like yes, but it's not explicitly stated.)

2) No configuration files were provided, nor were basics such as whether
distributed or mirrored configurations were used.

3) The tests were run on what are by now ancient versions of both platforms,
so I'm pretty sure all those metrics would look different today.

I'm organizing my test metrics now, and my tests will answer the questions
I've raised above.

I'm also open to suggestions for particular test scenarios, with the caveats
that my time is quite limited and that I only have 7 clients and 6 bricks
to test on.




-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Tom Lanyon
Sent: Thursday, April 08, 2010 8:47 PM
To: Ian Rogers
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] other cluster fs-- was - No space left on device 
(when there is actually lots of free space)

On 09/04/2010, at 9:49 AM, Ian Rogers wrote:

 It would be interesting to compare against  - 
 http://www.gluster.com/community/documentation/index.php/GlusterFS_1.3.pre2-VERGACION_vs_Lustre-1.4.9.1_Various_Benchmarks

 Ian

I'm not so sure - I may have missed something, but that comparison gives no 
information on the configuration of storage under GlusterFS or Lustre and no 
further information on the testing methods used...

Tom
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] rmdir?

2010-04-09 Thread m . roth
Got glusterfs up, and re-exported using unfs. All lovely, if not the
fastest thing on the planet.

However, my manager notes that he can't rmdir. Mounting it on my system, I
created a directory, then tried to rmdir it. That fails with an I/O error.
Trying from the head node that I'm re-exporting it from, again running
rmdir as me (not as root), I get "transport endpoint is not connected".

Now, rmdir is not exactly an unusual thing to do - what's going on here?

 mark

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Gluster crashing

2010-04-09 Thread Kelvin Westlake
Hi Guys

I have 2 physical servers which I'm trying to set up as an HA pair. I'm
sharing 2 volumes (home-gfs and shared-gfs) across these servers in
RAID1 (replicated), and I'm mounting clients on each server as home/
and shared/. If I lose one of the servers and then remount it, the
client on the other server seems to crash. The following are the log
entries leading up to the crash -


[2010-04-09 16:50:44] E [socket.c:762:socket_connect_finish]
192.168.100.31-2: connection to 192.168.100.31:6996 failed (Connection
refused)
[2010-04-09 16:50:44] E [socket.c:762:socket_connect_finish]
192.168.100.31-2: connection to 192.168.100.31:6996 failed (Connection
refused)
[2010-04-09 16:51:31] N [client-protocol.c:6246:client_setvolume_cbk]
192.168.100.31-2: Connected to 192.168.100.31:6996, attached to remote
volume 'brick2'.
[2010-04-09 16:51:31] E
[afr-self-heal-common.c:1237:sh_missing_entries_create] mirror-1:
unknown file type: 01
pending frames:
frame : type(1) op(READDIRP)

patchset: git://git.sv.gnu.org/gluster.git
signal received: 11
time of crash: 2010-04-09 16:51:31
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.0git
[0x371420]
/usr/local/lib/glusterfs/3.0.0git/xlator/protocol/client.so(client_readdirp+0x1b4)[0x28d224]
/usr/local/lib/glusterfs/3.0.0git/xlator/cluster/replicate.so(afr_do_readdir+0x4e2)[0x98a722]
/usr/local/lib/glusterfs/3.0.0git/xlator/cluster/replicate.so(afr_readdirp+0x48)[0x98a988]
/usr/local/lib/glusterfs/3.0.0git/xlator/mount/fuse.so[0x113e98]
/usr/local/lib/glusterfs/3.0.0git/xlator/mount/fuse.so[0x11b25a]
/lib/libpthread.so.0[0xbeb73b]
/lib/libc.so.6(clone+0x5e)[0xb69cfe]
-

 
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] rmdir?

2010-04-09 Thread Tejas N. Bhise
Hi Mark,

Debug using strace, and also look at (and post) the glusterfs debug logs.
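
For example, something along these lines (a sketch only; the mount point,
volfile and log paths are placeholders for your own setup):

strace -f -o /tmp/rmdir.strace rmdir /mnt/glusterfs/testdir
glusterfs -f /etc/glusterfs/glusterfs.vol --log-level=DEBUG \
  --log-file=/var/log/glusterfs/client-debug.log /mnt/glusterfs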

Regards,
Tejas.

- Original Message -
From: m roth m.r...@5-cent.us
To: gluster-users@gluster.org
Sent: Friday, April 9, 2010 9:21:59 PM
Subject: [Gluster-users] rmdir?

[quoted original message snipped - see the post above]
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] WAN Challenge

2010-04-09 Thread Count Zero
Hi guys, I've sent this once already, but I never received it back from the
mailing list myself, so I'm not sure it went through; I'm re-posting it now.
My apologies if this is a duplicate.


I have an interesting situation, and I'm wondering if there's a solution for it 
in the glusterfs realm or if I will have to resort to other solutions that 
complement glusterfs (such as rsync or unison).

I have 9 servers in 3 locations on the internet (3 servers per location). 
Unfortunately, the network distance between them is such that setting up a 
Distribute or NUFA cluster between them all is difficult (I'm not saying 
impossible, because it may be possible and I just don't know how to pull it 
off).

There are 3 servers in each data center, and they are all clustered via NUFA:

DC-A
-+ NUFA-Cluster
---+ SRV-A1
---+ SRV-A2
---+ SRV-A3

DC-B (rsync from A)
-+ NUFA-Cluster
---+ SRV-B1
---+ SRV-B2
---+ SRV-B3

DC-C (rsync from B)
-+ NUFA-Cluster
---+ SRV-C1
---+ SRV-C2
---+ SRV-C3

The reason I did it like this, so far:

1) I needed file reads to be fast on each local node, so I use the
'option local-volume-name `hostname`' trick in my glusterfs.vol file (as in
the cookbook).

2) Bandwidth between DC-A, DC-B and DC-C is fairly low, and since glusterfs
waits for the last server to finish, this severely slows down the entire
cluster for any operation, including just listing the files in a directory.

Is there a better way to implement this? All the examples I can find are
about 4-node replication, etc.

What about inter-continent replication of data between NUFA Clusters?
Any advice would be greatly appreciated :-)

At the moment, for lack of better options, I plan to sync between the 3 NUFA
clusters with INOSYNC.
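
For what it's worth, the kind of periodic one-way rsync shown in the diagram
above would look something like this (hostnames, mount points and the schedule
are only placeholders; syncing goes through the glusterfs client mounts so
gluster handles file placement):

# cron entry on a node in DC-B, pulling changes from DC-A every 15 minutes
*/15 * * * * rsync -az --delete --bwlimit=5000 srv-a1:/mnt/glusterfs/ /mnt/glusterfs/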

Thanks,
Count Zero

P.S. Below is my configuration file, from /etc/glusterfs/glusterfs.vol:

-88--

volume posix
type storage/posix
option directory /data/export
end-volume

volume locks
type features/locks
subvolumes posix
end-volume

volume brick
type performance/io-threads
subvolumes locks
end-volume

volume server
type protocol/server
option transport-type tcp
option auth.addr.brick.allow *
subvolumes brick
end-volume

volume srv-a1
type protocol/client
option transport-type tcp
option remote-host srv-a1
option remote-subvolume brick
end-volume

volume srv-a2
type protocol/client
option transport-type tcp
option remote-host srv-a2
option remote-subvolume brick
end-volume

volume srv-a3
type protocol/client
option transport-type tcp
option remote-host srv-a3
option remote-subvolume brick
end-volume

volume nufa
type cluster/nufa
option local-volume-name `hostname`
subvolumes srv-a1 srv-a2 srv-a3
end-volume

volume writebehind
type performance/write-behind
option cache-size 1MB
subvolumes nufa
end-volume

volume cache
type performance/io-cache
option cache-size 512MB
subvolumes writebehind
end-volume

-88--
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] WAN Challenge

2010-04-09 Thread Burnash, James
Hi,

I'm no expert on this - more like a somewhat informed amateur - but your
configuration sounds perhaps more suitable for a grid architecture, using each
location as a node in the grid. Grids often communicate over WANs like yours,
and are tolerant of low bandwidth and occasional loss of connectivity, which
is not a strength of any parallel file system that I know of. The disadvantage
of the grid architecture is that it is not real-time. I guess your solution
will depend on what latency (measured in seconds to minutes) you can
accommodate in syncing the data.

More info and leads can be found here: http://www.isgtw.org/?pid=1002049

James

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Count Zero
Sent: Friday, April 09, 2010 12:56 PM
To: Tejas N. Bhise
Subject: [Gluster-users] WAN Challenge

[quoted original message snipped - see the full post earlier in this digest]



Re: [Gluster-users] glusterfs-volgen

2010-04-09 Thread Harshavardhana

On 04/08/2010 11:18 PM, Gmail - Activepage wrote:

How do I use glusterfs-volgen to create a replicated volume across 3 servers?

With the option raid=1 I can only use 2 or 4 servers, not 3.

   
Volgen currently only supports 2-way mirroring. If you need 3-way mirroring,
you need to write your own volume file without using volgen.
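
As an illustration only, a hand-written client-side volume for 3-way
replication might look roughly like this (hostnames and the exported 'brick'
volume names are placeholders, and each server still needs its usual export
volume file):

volume srv1
type protocol/client
option transport-type tcp
option remote-host server1
option remote-subvolume brick
end-volume

volume srv2
type protocol/client
option transport-type tcp
option remote-host server2
option remote-subvolume brick
end-volume

volume srv3
type protocol/client
option transport-type tcp
option remote-host server3
option remote-subvolume brick
end-volume

volume mirror
type cluster/replicate
subvolumes srv1 srv2 srv3
end-volume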


Regards

--
Harshavardhana
Gluster Inc - http://www.gluster.com
+1(408)-480-1730

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users