Re: [Gluster-users] Gluster 3.1.1 Source within the Upgrade Guide

2010-12-01 Thread Vijay Bellur

On Wednesday 01 December 2010 10:42 AM, Deadpan110 wrote:

Should I be using the source code dated '29-Nov-2010 09:21' or simply
wait a little bit longer?

   


You can use the source code tarball available at the download site.

Regards,
Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Does gluster passes LAN boundaries?

2010-12-01 Thread Amar Tumballi
It may be a firewall blocking port 24007 and above. By default, most
firewall rule sets allow ssh/nfs/ftp, etc.

Check by flushing the iptables rules to be sure. Also, check the log file on
the client machine, which may give some hints.
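As a rough sketch of that check (run on the servers; the upper port bound below
is only an assumption, since only "24007 and above" is stated):

iptables -L -n                                            (list the rules currently in place)
iptables -F                                               (temporarily flush everything to rule the firewall out)
iptables -I INPUT -p tcp --dport 24007:24047 -j ACCEPT    (or, instead, open a Gluster port range)

From the problem client, a quick reachability test of the management port would
be something like: telnet 192.168.4.90 24007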

Regards,
Amar

On Wed, Dec 1, 2010 at 12:08 PM, Horacio Sanson hsan...@gmail.com wrote:

 On Wednesday 01 December 2010 15:15:49 Horacio Sanson wrote:
  I have four gluster nodes with several volumes configured that work
  perfectly with all my clients (64bit and 32bit) on the LAN but I have a
  client in a different IP subnetwork that has trouble accessing the
  volumes.
 
  I can mount the gluster volumes but issuing ls or df commands on the
 mount
  point hangs forever.
 
 
  This is my configuration:
 
4 Gluster Server Nodes:
 IP range:  192.168.4.90 to 192.168.4.93
 OS:  Ubuntu 10.10 64bit
  Kernel: 2.6.35-22-server
  GlusterFS: 3.1.1-stable installed from deb package
 
 One client that works without problems
 IP: 192.168.4.111
 OS: Ubuntu 10.10  32bit
  Kernel:  2.6.35-22-generic
  Gluster: 3.1.1-stable installed from source
 
 One client that does not work!!
IP: 192.168.0.228
OS: Ubuntu 8.10 32 bit
Kernel:  2.6.24-28-generic
Gluster: 3.1.1-stable installed from source
Problem: Can mount volumes but ls and df hang forever on the mount
   points.
 
 
  The differences between the client that works and the one that does not
  work are the network they are on and the kernel version. I can access
  the problem client via ssh/ping/etc., so it is not a basic networking
  problem.

 Forgot to mention that I can mount the volumes using NFS without problems
 from the problematic client.

 --
 regards,
 Horacio Sanson
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS replica question

2010-12-01 Thread Stephan von Krawczynski
Which is a regression compared to 2.X btw...

On Wed, 1 Dec 2010 02:40:53 -0600 (CST)
Raghavendra Bhat raghavendrab...@gluster.com wrote:

 
 If you create a volume with only one brick and then add one more brick to 
 the volume, the volume will be of distribute type, not replicate. If the 
 replica feature is needed, then a replicate volume itself should be created, 
 and creating a replicate volume requires a minimum of 2 bricks.
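 As a minimal sketch of the difference (the volume, server, and brick names here
 are assumptions):

 gluster volume create dist-vol server1:/exp1 server2:/exp1
 gluster volume create repl-vol replica 2 server1:/exp1 server2:/exp1

 The first form distributes files across the two bricks with no redundancy; the
 second keeps a copy of every file on both bricks.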
 
 
 - Original Message -
 From: Raghavendra G raghaven...@gluster.com
 To: raveenpl ravee...@gmail.com
 Cc: gluster-users@gluster.org
 Sent: Wednesday, December 1, 2010 12:52:03 PM
 Subject: Re: [Gluster-users] GlusterFS replica question
 
 Yes, it is possible in 3.1.x without downtime.
 
 - Original Message -
 From: raveenpl ravee...@gmail.com
 To: gluster-users@gluster.org
 Sent: Sunday, November 28, 2010 2:54:13 AM
 Subject: [Gluster-users] GlusterFS replica question
 
 Hi,
 
 For a small lab environment, I want to use GlusterFS with only ONE node.
 
 After some time I would like to add the second node as the redundant
 node (replica).
 
 Is it possible in GlusterFS 3.1 without downtime?
 
 Cheers
 PK
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 


-- 
Best regards,
Stephan von Krawczynski


--
ith Kommunikationstechnik GmbH

Delivery address : Reiterstrasse 24, D-94447 Plattling
Phone            : +49 9931 9188 0
Fax              : +49 9931 9188 44
Managing director: Stephan von Krawczynski
Register court   : Deggendorf HRB 1625
--

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs 3.1.0 disable-io-mode

2010-12-01 Thread Kakaletris Kostas

Hello,

I remember that I tried that too with mount -t glusterfs, and I was 
getting an error that glusterfs does not understand --disable-direct-io-mode 
(I wasn't writing "disable" myself, but the command was probably being 
translated to that).
We removed Gluster 3.1 and installed Gluster 3.0.5 from the CentOS RPMs, 
where the parameter gave no error, but we continued having problems with 
Xen and paravirtualized guests using tap disks. We installed the patched FUSE, 
and it did not solve our problem. We came to the conclusion that maybe the 
problem is that the kernel on these machines is 2.6.18-194.el5xen 
(CentOS 5.5). On Debian machines, where the Xen kernel is 2.6.26, it was 
working with tap, but the machines we want to use are Dell boxes on which 
Debian does not install. We are going to test other clustered file systems 
on these machines and will check GlusterFS again when we install CentOS 6 
and can use a newer GlusterFS version.


Thanks for your replies
Kostas


On 1/12/2010 8:23 AM, Raghavendra G wrote:

Hi Kostas,

Is --direct-io-mode=off not working for you?

regards,
Raghavendra.
- Original Message -
From: Kakaletris Kostas kka...@yahoo.gr
To: gluster-users@gluster.org
Sent: Tuesday, November 23, 2010 3:27:39 PM
Subject: [Gluster-users] glusterfs 3.1.0 disable-io-mode

Hello,

I'm trying Gluster 3.1.0 in conjunction with Xen on CentOS 5.5.
I have a problem installing a VM because the installation hangs. I searched the
net and found that for Xen to work with previous Gluster versions, they had
to disable direct I/O mode.
glusterfs says it does not recognize --disable-direct-io-mode. I tried
several syntaxes with --diasble-direct-io-mode , direct-io-mode=disable,
direct-io-mode=off , and in different positions of the mount command
(mount -t glusterfs -oopt  ..  , mount -oxxx  -t glusterfs
:/xxx /xxx , etc)

Thanks in advance
Kostas
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] RDMA Problems with GlusterFS 3.1.1

2010-12-01 Thread Jeremy Stout
Whenever I try to start or mount a GlusterFS 3.1.1 volume that uses
RDMA, I'm seeing the following error messages in the log file on the
server:
[2010-11-30 18:37:53.51270] I [nfs.c:652:init] nfs: NFS service started
[2010-11-30 18:37:53.51362] W [dict.c:1204:data_to_str] dict: @data=(nil)
[2010-11-30 18:37:53.51375] W [dict.c:1204:data_to_str] dict: @data=(nil)
[2010-11-30 18:37:53.59628] E [rdma.c:2066:rdma_create_cq]
rpc-transport/rdma: testdir-client-0: creation of send_cq failed
[2010-11-30 18:37:53.59851] E [rdma.c:3771:rdma_get_device]
rpc-transport/rdma: testdir-client-0: could not create CQ
[2010-11-30 18:37:53.59925] E [rdma.c:3957:rdma_init]
rpc-transport/rdma: could not create rdma device for mthca0
[2010-11-30 18:37:53.60009] E [rdma.c:4789:init] testdir-client-0:
Failed to initialize IB Device
[2010-11-30 18:37:53.60030] E [rpc-transport.c:971:rpc_transport_load]
rpc-transport: 'rdma' initialization failed

On the client, I see:
[2010-11-30 18:43:49.653469] W [io-stats.c:1644:init] testdir:
dangling volume. check volfile
[2010-11-30 18:43:49.653573] W [dict.c:1204:data_to_str] dict: @data=(nil)
[2010-11-30 18:43:49.653607] W [dict.c:1204:data_to_str] dict: @data=(nil)
[2010-11-30 18:43:49.736275] E [rdma.c:2066:rdma_create_cq]
rpc-transport/rdma: testdir-client-0: creation of send_cq failed
[2010-11-30 18:43:49.736651] E [rdma.c:3771:rdma_get_device]
rpc-transport/rdma: testdir-client-0: could not create CQ
[2010-11-30 18:43:49.736689] E [rdma.c:3957:rdma_init]
rpc-transport/rdma: could not create rdma device for mthca0
[2010-11-30 18:43:49.736805] E [rdma.c:4789:init] testdir-client-0:
Failed to initialize IB Device
[2010-11-30 18:43:49.736841] E
[rpc-transport.c:971:rpc_transport_load] rpc-transport: 'rdma'
initialization failed

This results in an unsuccessful mount.

I created the mount using the following commands:
/usr/local/glusterfs/3.1.1/sbin/gluster volume create testdir
transport rdma submit-1:/exports
/usr/local/glusterfs/3.1.1/sbin/gluster volume start testdir

To mount the directory, I use:
mount -t glusterfs submit-1:/testdir /mnt/glusterfs

I don't think it is an Infiniband problem since GlusterFS 3.0.6 and
GlusterFS 3.1.0 worked on the same systems. For GlusterFS 3.1.0, the
commands listed above produced no error messages.

If anyone can provide help with debugging these error messages, it
would be appreciated.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] RDMA Problems with GlusterFS 3.1.1

2010-12-01 Thread Anand Avati
Can you verify that ibv_srq_pingpong works from the server where this log
file is from?

Thanks,
Avati

On Wed, Dec 1, 2010 at 7:44 PM, Jeremy Stout stout.jer...@gmail.com wrote:

 Whenever I try to start or mount a GlusterFS 3.1.1 volume that uses
 RDMA, I'm seeing the following error messages in the log file on the
 server:
 [2010-11-30 18:37:53.51270] I [nfs.c:652:init] nfs: NFS service started
 [2010-11-30 18:37:53.51362] W [dict.c:1204:data_to_str] dict: @data=(nil)
 [2010-11-30 18:37:53.51375] W [dict.c:1204:data_to_str] dict: @data=(nil)
 [2010-11-30 18:37:53.59628] E [rdma.c:2066:rdma_create_cq]
 rpc-transport/rdma: testdir-client-0: creation of send_cq failed
 [2010-11-30 18:37:53.59851] E [rdma.c:3771:rdma_get_device]
 rpc-transport/rdma: testdir-client-0: could not create CQ
 [2010-11-30 18:37:53.59925] E [rdma.c:3957:rdma_init]
 rpc-transport/rdma: could not create rdma device for mthca0
 [2010-11-30 18:37:53.60009] E [rdma.c:4789:init] testdir-client-0:
 Failed to initialize IB Device
 [2010-11-30 18:37:53.60030] E [rpc-transport.c:971:rpc_transport_load]
 rpc-transport: 'rdma' initialization failed

 On the client, I see:
 [2010-11-30 18:43:49.653469] W [io-stats.c:1644:init] testdir:
 dangling volume. check volfile
 [2010-11-30 18:43:49.653573] W [dict.c:1204:data_to_str] dict: @data=(nil)
 [2010-11-30 18:43:49.653607] W [dict.c:1204:data_to_str] dict: @data=(nil)
 [2010-11-30 18:43:49.736275] E [rdma.c:2066:rdma_create_cq]
 rpc-transport/rdma: testdir-client-0: creation of send_cq failed
 [2010-11-30 18:43:49.736651] E [rdma.c:3771:rdma_get_device]
 rpc-transport/rdma: testdir-client-0: could not create CQ
 [2010-11-30 18:43:49.736689] E [rdma.c:3957:rdma_init]
 rpc-transport/rdma: could not create rdma device for mthca0
 [2010-11-30 18:43:49.736805] E [rdma.c:4789:init] testdir-client-0:
 Failed to initialize IB Device
 [2010-11-30 18:43:49.736841] E
 [rpc-transport.c:971:rpc_transport_load] rpc-transport: 'rdma'
 initialization failed

 This results in an unsuccessful mount.

 I created the mount using the following commands:
 /usr/local/glusterfs/3.1.1/sbin/gluster volume create testdir
 transport rdma submit-1:/exports
 /usr/local/glusterfs/3.1.1/sbin/gluster volume start testdir

 To mount the directory, I use:
 mount -t glusterfs submit-1:/testdir /mnt/glusterfs

 I don't think it is an Infiniband problem since GlusterFS 3.0.6 and
 GlusterFS 3.1.0 worked on the same systems. For GlusterFS 3.1.0, the
 commands listed above produced no error messages.

 If anyone can provide help with debugging these error messages, it
 would be appreciated.
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] RDMA Problems with GlusterFS 3.1.1

2010-12-01 Thread Jeremy Stout
Here are the results of the test:
submit-1:/usr/local/glusterfs/3.1.1/var/log/glusterfs # ibv_srq_pingpong
  local address:  LID 0x0002, QPN 0x000406, PSN 0x703b96, GID ::
  local address:  LID 0x0002, QPN 0x000407, PSN 0x618cc8, GID ::
  local address:  LID 0x0002, QPN 0x000408, PSN 0xd62272, GID ::
  local address:  LID 0x0002, QPN 0x000409, PSN 0x5db5d9, GID ::
  local address:  LID 0x0002, QPN 0x00040a, PSN 0xc51978, GID ::
  local address:  LID 0x0002, QPN 0x00040b, PSN 0x05fd7a, GID ::
  local address:  LID 0x0002, QPN 0x00040c, PSN 0xaa4a51, GID ::
  local address:  LID 0x0002, QPN 0x00040d, PSN 0xb7a676, GID ::
  local address:  LID 0x0002, QPN 0x00040e, PSN 0x56bde2, GID ::
  local address:  LID 0x0002, QPN 0x00040f, PSN 0xa662bc, GID ::
  local address:  LID 0x0002, QPN 0x000410, PSN 0xee27b0, GID ::
  local address:  LID 0x0002, QPN 0x000411, PSN 0x89c683, GID ::
  local address:  LID 0x0002, QPN 0x000412, PSN 0xd025b3, GID ::
  local address:  LID 0x0002, QPN 0x000413, PSN 0xcec8e4, GID ::
  local address:  LID 0x0002, QPN 0x000414, PSN 0x37e5d2, GID ::
  local address:  LID 0x0002, QPN 0x000415, PSN 0x29562e, GID ::
  remote address: LID 0x000b, QPN 0x000406, PSN 0x3b644e, GID ::
  remote address: LID 0x000b, QPN 0x000407, PSN 0x173320, GID ::
  remote address: LID 0x000b, QPN 0x000408, PSN 0xc105ea, GID ::
  remote address: LID 0x000b, QPN 0x000409, PSN 0x5e5ff1, GID ::
  remote address: LID 0x000b, QPN 0x00040a, PSN 0xff15b0, GID ::
  remote address: LID 0x000b, QPN 0x00040b, PSN 0xf0b152, GID ::
  remote address: LID 0x000b, QPN 0x00040c, PSN 0x4ced49, GID ::
  remote address: LID 0x000b, QPN 0x00040d, PSN 0x01da0e, GID ::
  remote address: LID 0x000b, QPN 0x00040e, PSN 0x69459a, GID ::
  remote address: LID 0x000b, QPN 0x00040f, PSN 0x197c14, GID ::
  remote address: LID 0x000b, QPN 0x000410, PSN 0xd50228, GID ::
  remote address: LID 0x000b, QPN 0x000411, PSN 0xbc9b9b, GID ::
  remote address: LID 0x000b, QPN 0x000412, PSN 0x0870eb, GID ::
  remote address: LID 0x000b, QPN 0x000413, PSN 0xfb1fbc, GID ::
  remote address: LID 0x000b, QPN 0x000414, PSN 0x3eefca, GID ::
  remote address: LID 0x000b, QPN 0x000415, PSN 0xbd64c6, GID ::
8192000 bytes in 0.01 seconds = 5917.47 Mbit/sec
1000 iters in 0.01 seconds = 11.07 usec/iter

fs-1:/usr/local/glusterfs/3.1.1/var/log/glusterfs # ibv_srq_pingpong submit-1
  local address:  LID 0x000b, QPN 0x000406, PSN 0x3b644e, GID ::
  local address:  LID 0x000b, QPN 0x000407, PSN 0x173320, GID ::
  local address:  LID 0x000b, QPN 0x000408, PSN 0xc105ea, GID ::
  local address:  LID 0x000b, QPN 0x000409, PSN 0x5e5ff1, GID ::
  local address:  LID 0x000b, QPN 0x00040a, PSN 0xff15b0, GID ::
  local address:  LID 0x000b, QPN 0x00040b, PSN 0xf0b152, GID ::
  local address:  LID 0x000b, QPN 0x00040c, PSN 0x4ced49, GID ::
  local address:  LID 0x000b, QPN 0x00040d, PSN 0x01da0e, GID ::
  local address:  LID 0x000b, QPN 0x00040e, PSN 0x69459a, GID ::
  local address:  LID 0x000b, QPN 0x00040f, PSN 0x197c14, GID ::
  local address:  LID 0x000b, QPN 0x000410, PSN 0xd50228, GID ::
  local address:  LID 0x000b, QPN 0x000411, PSN 0xbc9b9b, GID ::
  local address:  LID 0x000b, QPN 0x000412, PSN 0x0870eb, GID ::
  local address:  LID 0x000b, QPN 0x000413, PSN 0xfb1fbc, GID ::
  local address:  LID 0x000b, QPN 0x000414, PSN 0x3eefca, GID ::
  local address:  LID 0x000b, QPN 0x000415, PSN 0xbd64c6, GID ::
  remote address: LID 0x0002, QPN 0x000406, PSN 0x703b96, GID ::
  remote address: LID 0x0002, QPN 0x000407, PSN 0x618cc8, GID ::
  remote address: LID 0x0002, QPN 0x000408, PSN 0xd62272, GID ::
  remote address: LID 0x0002, QPN 0x000409, PSN 0x5db5d9, GID ::
  remote address: LID 0x0002, QPN 0x00040a, PSN 0xc51978, GID ::
  remote address: LID 0x0002, QPN 0x00040b, PSN 0x05fd7a, GID ::
  remote address: LID 0x0002, QPN 0x00040c, PSN 0xaa4a51, GID ::
  remote address: LID 0x0002, QPN 0x00040d, PSN 0xb7a676, GID ::
  remote address: LID 0x0002, QPN 0x00040e, PSN 0x56bde2, GID ::
  remote address: LID 0x0002, QPN 0x00040f, PSN 0xa662bc, GID ::
  remote address: LID 0x0002, QPN 0x000410, PSN 0xee27b0, GID ::
  remote address: LID 0x0002, QPN 0x000411, PSN 0x89c683, GID ::
  remote address: LID 0x0002, QPN 0x000412, PSN 0xd025b3, GID ::
  remote address: LID 0x0002, QPN 0x000413, PSN 0xcec8e4, GID ::
  remote address: LID 0x0002, QPN 0x000414, PSN 0x37e5d2, GID ::
  remote address: LID 0x0002, QPN 0x000415, PSN 0x29562e, GID ::
8192000 bytes in 0.01 seconds = 7423.65 Mbit/sec
1000 iters in 0.01 seconds = 8.83 usec/iter

Based on the output, I believe it ran correctly.

On Wed, Dec 1, 2010 at 9:51 AM, Anand Avati anand.av...@gmail.com wrote:
 Can you verify that ibv_srq_pingpong works from the server where this log
 file is from?

 Thanks,
 Avati

 On Wed, Dec 1, 2010 at 7:44 PM, Jeremy Stout stout.jer...@gmail.com wrote:

 Whenever I try to start or mount a GlusterFS 3.1.1 volume that uses
 RDMA, I'm seeing the following error messages in the log file on the
 server:
 

[Gluster-users] Who's using Fedora in production on Glusterfs storage servers?

2010-12-01 Thread Burnash, James
How many people on the list are using Fedora 12 (or 13) in production for 
Glusterfs storage servers? I know that Gluster Platform uses Fedora 12 as its 
OS - I was thinking of building my new glusterfs storage servers using Fedora, 
and was wondering whether Fedora 13 was tested by Gluster for v 3.1.1 and what 
other people's experiences were.

One of the reasons for my interest was so that I could use ext4 as the backend 
file store, instead of ext3.

Thanks,

James Burnash, Unix Engineering


DISCLAIMER:
This e-mail, and any attachments thereto, is intended only for use by the 
addressee(s) named herein and may contain legally privileged and/or 
confidential information. If you are not the intended recipient of this e-mail, 
you are hereby notified that any dissemination, distribution or copying of this 
e-mail, and any attachments thereto, is strictly prohibited. If you have 
received this in error, please immediately notify me and permanently delete the 
original and any copy of any e-mail and any printout thereof. E-mail 
transmission cannot be guaranteed to be secure or error-free. The sender 
therefore does not accept liability for any errors or omissions in the contents 
of this message which arise as a result of e-mail transmission.
NOTICE REGARDING PRIVACY AND CONFIDENTIALITY Knight Capital Group may, at its 
discretion, monitor and review the content of all e-mail communications. 
http://www.knight.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Moving external storage between bricks

2010-12-01 Thread Craig Carl

James -
   The setup you've described is pretty standard. If we assume that you 
are going to mount each array at /mnt/array{1-8}, that your volume will be 
called vol1, and that your servers are named server{1-4}, your gluster volume 
create command would be -


Without replicas -

#gluster volume create vol1 transport tcp server1:/mnt/array1 
server2:/mnt/array1 server3:/mnt/array1 server4:/mnt/array1 
server1:/mnt/array2 server2:/mnt/array2 server3:/mnt/array2 
server4:/mnt/array2 server1:/mnt/array3 server2:/mnt/array3 
server3:/mnt/array3 server4:/mnt/array3 server1:/mnt/array4 
server2:/mnt/array4 server3:/mnt/array4 server4:/mnt/array4 
server1:/mnt/array5 server2:/mnt/array5 server3:/mnt/array5 
server4:/mnt/array5 server1:/mnt/array6 server2:/mnt/array6 
server3:/mnt/array6 server4:/mnt/array6 server1:/mnt/array7 
server2:/mnt/array7 server3:/mnt/array7 server4:/mnt/array7 
server1:/mnt/array8 server2:/mnt/array8 server3:/mnt/array8 
server4:/mnt/array8

This would get you a single 512TB NFS mount.

With replicas(2) -

#gluster volume create vol1 replica 2 transport tcp server1:/mnt/array1 
server2:/mnt/array1 server3:/mnt/array1 server4:/mnt/array1 
server1:/mnt/array2 server2:/mnt/array2 server3:/mnt/array2 
server4:/mnt/array2 server1:/mnt/array3 server2:/mnt/array3 
server3:/mnt/array3 server4:/mnt/array3 server1:/mnt/array4 
server2:/mnt/array4 server3:/mnt/array4 server4:/mnt/array4 
server1:/mnt/array5 server2:/mnt/array5 server3:/mnt/array5 
server4:/mnt/array5 server1:/mnt/array6 server2:/mnt/array6 
server3:/mnt/array6 server4:/mnt/array6 server1:/mnt/array7 
server2:/mnt/array7 server3:/mnt/array7 server4:/mnt/array7 
server1:/mnt/array8 server2:/mnt/array8 server3:/mnt/array8 
server4:/mnt/array8

This would get you a single 256TB HA NFS mount.
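A client could then mount the resulting volume either natively or over NFS, for
example (the mount points below are assumptions, and the exact NFS options can
vary by client):

mount -t glusterfs server1:/vol1 /mnt/vol1
mount -t nfs -o vers=3 server1:/vol1 /mnt/vol1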

Gluster specifically doesn't care about LUN/brick size; the ability to 
create smaller LUNs without affecting the presentation of that space is 
a positive side effect of using Gluster. Smaller LUNs are useful in 
several ways: they allow faster fscks on a LUN if that is ever required, 
and since there is a minor performance hit when running bricks of different 
sizes in the same volume, smaller LUNs make it easier to keep brick sizes 
uniform.



Thanks,

Craig

--
Craig Carl
Senior Systems Engineer
Gluster

On 12/01/2010 08:29 AM, Burnash, James wrote:

Hello.

So, here's my problem.

I have 4 storage servers that will be configured as replicate + distribute, 
each of which has two external storage arrays, each with their own controller. 
Those external arrays will be used to store archived large (10GB) files that 
will only be read-only after their initial copy to the glusterfs storage.

Currently, the external arrays are the items of interest. What I'd like to do 
is this:

- Create multiple hardware RAID 5 arrays on each storage server, which would 
present to the OS as approx 8 16TB physical drives.
- Create an ext3 file system on each of those devices (I'm using CentOS 5.5. so 
ext4 is still not really an option for me)
- Mount those multiple file systems to the storage server, and then aggregate 
them all under gluster to export under a single namespace to NFS and the 
Gluster client.

How do I aggregate those multiple file systems without involving LVM in some 
way?

I've read that Glusterfs likes small bricks, though I haven't really been 
able to track down why. Any pointers to good technical info on this subject would also be 
greatly appreciated.

Thanks,

James Burnash, Unix Engineering


DISCLAIMER:
This e-mail, and any attachments thereto, is intended only for use by the 
addressee(s) named herein and may contain legally privileged and/or 
confidential information. If you are not the intended recipient of this e-mail, 
you are hereby notified that any dissemination, distribution or copying of this 
e-mail, and any attachments thereto, is strictly prohibited. If you have 
received this in error, please immediately notify me and permanently delete the 
original and any copy of any e-mail and any printout thereof. E-mail 
transmission cannot be guaranteed to be secure or error-free. The sender 
therefore does not accept liability for any errors or omissions in the contents 
of this message which arise as a result of e-mail transmission.
NOTICE REGARDING PRIVACY AND CONFIDENTIALITY Knight Capital Group may, at its 
discretion, monitor and review the content of all e-mail communications. 
http://www.knight.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Moving external storage between bricks

2010-12-01 Thread Burnash, James
Excellent and clearly explained. Thanks Carl!

James Burnash, Unix Engineering
T. 201-239-2248 
jburn...@knight.com | www.knight.com

545 Washington Ave. | Jersey City, NJ


-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Craig Carl
Sent: Wednesday, December 01, 2010 12:19 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Moving external storage between bricks

James -
The setup you've described is pretty standard. If we assume that you 
are going to mount each array at /mnt/array{1-8}, that your volume will be 
called vol1, and that your servers are named server{1-4}, your gluster volume 
create command would be -

Without replicas -

#gluster volume create vol1 transport tcp server1:/mnt/array1 
server2:/mnt/array1 server3:/mnt/array1 server4:/mnt/array1 
server1:/mnt/array2 server2:/mnt/array2 server3:/mnt/array2 
server4:/mnt/array2 server1:/mnt/array3 server2:/mnt/array3 
server3:/mnt/array3 server4:/mnt/array3 server1:/mnt/array4 
server2:/mnt/array4 server3:/mnt/array4 server4:/mnt/array4 
server1:/mnt/array5 server2:/mnt/array5 server3:/mnt/array5 
server4:/mnt/array5 server1:/mnt/array6 server2:/mnt/array6 
server3:/mnt/array6 server4:/mnt/array6 server1:/mnt/array7 
server2:/mnt/array7 server3:/mnt/array7 server4:/mnt/array7 
server1:/mnt/array8 server2:/mnt/array8 server3:/mnt/array8 
server4:/mnt/array8
This would get you a single 512TB NFS mount.

With replicas(2) -

#gluster volume create vol1 replica 2 transport tcp server1:/mnt/array1 
server2:/mnt/array1 server3:/mnt/array1 server4:/mnt/array1 
server1:/mnt/array2 server2:/mnt/array2 server3:/mnt/array2 
server4:/mnt/array2 server1:/mnt/array3 server2:/mnt/array3 
server3:/mnt/array3 server4:/mnt/array3 server1:/mnt/array4 
server2:/mnt/array4 server3:/mnt/array4 server4:/mnt/array4 
server1:/mnt/array5 server2:/mnt/array5 server3:/mnt/array5 
server4:/mnt/array5 server1:/mnt/array6 server2:/mnt/array6 
server3:/mnt/array6 server4:/mnt/array6 server1:/mnt/array7 
server2:/mnt/array7 server3:/mnt/array7 server4:/mnt/array7 
server1:/mnt/array8 server2:/mnt/array8 server3:/mnt/array8 
server4:/mnt/array8
This would get you a single 256TB HA NFS mount.

Gluster specifically doesn't care about LUN/brick size; the ability to 
create smaller LUNs without affecting the presentation of that space is 
a positive side effect of using Gluster. Smaller LUNs are useful in 
several ways: they allow faster fscks on a LUN if that is ever required, 
and since there is a minor performance hit when running bricks of different 
sizes in the same volume, smaller LUNs make it easier to keep brick sizes 
uniform.


Thanks,

Craig

--
Craig Carl
Senior Systems Engineer
Gluster

On 12/01/2010 08:29 AM, Burnash, James wrote:
 Hello.

 So, here's my problem.

 I have 4 storage servers that will be configured as replicate + distribute, 
 each of which has two external storage arrays, each with their own 
 controller. Those external arrays will be used to store archived large (10GB) 
 files that will only be read-only after their initial copy to the glusterfs 
 storage.

 Currently, the external arrays are the items of interest. What I'd like to do 
 is this:

 - Create multiple hardware RAID 5 arrays on each storage server, which would 
 present to the OS as approx 8 16TB physical drives.
 - Create an ext3 file system on each of those devices (I'm using CentOS 5.5. 
 so ext4 is still not really an option for me)
 - Mount those multiple file systems to the storage server, and then aggregate 
 them all under gluster to export under a single namespace to NFS and the 
 Gluster client.

 How do I aggregate those multiple file systems without involving LVM in some 
 way?

 I've read that Glusterfs likes small bricks, though I haven't really been 
 able to track down why. Any pointers to good technical info on this subject 
 would also be greatly appreciated.

 Thanks,

 James Burnash, Unix Engineering


 DISCLAIMER:
 This e-mail, and any attachments thereto, is intended only for use by the 
 addressee(s) named herein and may contain legally privileged and/or 
 confidential information. If you are not the intended recipient of this 
 e-mail, you are hereby notified that any dissemination, distribution or 
 copying of this e-mail, and any attachments thereto, is strictly prohibited. 
 If you have received this in error, please immediately notify me and 
 permanently delete the original and any copy of any e-mail and any printout 
 thereof. E-mail transmission cannot be guaranteed to be secure or error-free. 
 The sender therefore does not accept liability for any errors or omissions in 
 the contents of this message which arise as a result of e-mail transmission.
 NOTICE REGARDING PRIVACY AND CONFIDENTIALITY Knight Capital Group may, at its 
 discretion, monitor and review the content of all e-mail communications. 
 http://www.knight.com
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 

Re: [Gluster-users] Moving external storage between bricks

2010-12-01 Thread Burnash, James
Gluster development and support team:

Is there a projected timeline for Glusterfs support for RHEL 6?

Has anybody out there on the list tried this yet? We are about to try some 
simple testing, mostly out of curiosity.

James Burnash, Unix Engineering


DISCLAIMER:
This e-mail, and any attachments thereto, is intended only for use by the 
addressee(s) named herein and may contain legally privileged and/or 
confidential information. If you are not the intended recipient of this e-mail, 
you are hereby notified that any dissemination, distribution or copying of this 
e-mail, and any attachments thereto, is strictly prohibited. If you have 
received this in error, please immediately notify me and permanently delete the 
original and any copy of any e-mail and any printout thereof. E-mail 
transmission cannot be guaranteed to be secure or error-free. The sender 
therefore does not accept liability for any errors or omissions in the contents 
of this message which arise as a result of e-mail transmission.
NOTICE REGARDING PRIVACY AND CONFIDENTIALITY Knight Capital Group may, at its 
discretion, monitor and review the content of all e-mail communications. 
http://www.knight.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs 3.1.0 disable-io-mode

2010-12-01 Thread Raghavendra G
In 3.1.x, the options --disable-direct-io-mode and --enable-direct-io-mode are 
replaced by --direct-io-mode=[off/on]. So I was interested in knowing whether 
you used the option --disable-direct-io-mode or --direct-io-mode. From your 
previous mail, it seems you are trying to use --disable-direct-io-mode (which 
is not supported in 3.1.x). Hence, please use --direct-io-mode and let us know 
whether the issue still persists.
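For instance (the server, volume, and mount point names below are assumptions),
either of these forms should pass the option in its new form:

mount -t glusterfs -o direct-io-mode=off server1:/testvol /mnt/testvol
glusterfs --direct-io-mode=off --volfile-server=server1 --volfile-id=testvol /mnt/testvol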

regards,
- Original Message -
From: Kakaletris Kostas kka...@yahoo.gr
To: Raghavendra G raghaven...@gluster.com
Cc: gluster-users@gluster.org
Sent: Wednesday, December 1, 2010 3:34:10 PM
Subject: Re: [Gluster-users] glusterfs 3.1.0 disable-io-mode

Hello,

I remember that I tried that too with mount -t glusterfs, and I was
getting an error that glusterfs does not understand --disable-direct-io-mode
(I wasn't writing "disable" myself, but the command was probably being
translated to that).
We removed Gluster 3.1 and installed Gluster 3.0.5 from the CentOS RPMs,
where the parameter gave no error, but we continued having problems with
Xen and paravirtualized guests using tap disks. We installed the patched FUSE,
and it did not solve our problem. We came to the conclusion that maybe the
problem is that the kernel on these machines is 2.6.18-194.el5xen
(CentOS 5.5). On Debian machines, where the Xen kernel is 2.6.26, it was
working with tap, but the machines we want to use are Dell boxes on which
Debian does not install. We are going to test other clustered file systems
on these machines and will check GlusterFS again when we install CentOS 6
and can use a newer GlusterFS version.

Thanks for your replies
Kostas


On 1/12/2010 8:23 AM, Raghavendra G wrote:
 Hi Kostas,

 Is --direct-io-mode=off not working for you?

 regards,
 Raghavendra.
 - Original Message -
 From: Kakaletris Kostas kka...@yahoo.gr
 To: gluster-users@gluster.org
 Sent: Tuesday, November 23, 2010 3:27:39 PM
 Subject: [Gluster-users] glusterfs 3.1.0 disable-io-mode

 Hello,

 I'm trying Gluster 3.1.0 in conjunction with Xen on CentOS 5.5.
 I have a problem installing a VM because the installation hangs. I searched the
 net and found that for Xen to work with previous Gluster versions, they had
 to disable direct I/O mode.
 glusterfs says it does not recognize --disable-direct-io-mode. I tried
 several syntaxes with --diasble-direct-io-mode , direct-io-mode=disable,
 direct-io-mode=off , and in different positions of the mount command
 (mount -t glusterfs -oopt  ..  , mount -oxxx  -t glusterfs
 :/xxx /xxx , etc)

 Thanks in advance
 Kostas
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs 3.1.0 disable-io-mode

2010-12-01 Thread Kakaletris Kostas

From what I remember, I also tried the following:
mount -t glusterfs --direct-io-mode=off
mount -t glusterfs -o direct-io-mode=off
mount -t glusterfs --direct-io-mode=disable
mount -t glusterfs -o direct-io-mode=disable
mount -t glusterfs -o --direct-io-mode=disable
and others, but didn't manage to solve the problem we had. Sorry, but I 
can't try it again to be 100% sure of the result of all the above, since 
we removed Gluster 3.1.0 from these 4 servers that we were testing :(
Maybe the problem was somewhere else in the systems (CentOS 5.5), since 
3.0.5, which we installed after that, wasn't working right with Xen and tap 
for disk files either, while 3.0.5 on the Debian servers is working just 
fine with Xen and tap.
It probably has to do with the different kernel version, as I mentioned in 
my previous mail (Lenny has Xen 2.6.26 and CentOS 2.6.18).
The other difference between the two setups was that on the Debian servers 
we installed from source and on the CentOS servers we installed the CentOS RPMs.
Thanks for your interest and your very nice software, and sorry that I 
can't give more details that might help others too, but as I wrote, we 
removed it from the CentOS machines (it is working fine on Debian).
It's a bit of a difficult time these days for more retries :( , so I will try 
again later, on CentOS 6 and a new Xen kernel.


Thank you very much,
Kostas






On 2/12/2010 1:43 AM, Raghavendra G wrote:

In 3.1.x, the options --disable-direct-io-mode and --enable-direct-io-mode are 
replaced by --direct-io-mode=[off/on]. So I was interested in knowing whether 
you used the option --disable-direct-io-mode or --direct-io-mode. From your 
previous mail, it seems you are trying to use --disable-direct-io-mode (which 
is not supported in 3.1.x). Hence, please use --direct-io-mode and let us know 
whether the issue still persists.

regards,
- Original Message -
From: Kakaletris Kostas kka...@yahoo.gr
To: Raghavendra G raghaven...@gluster.com
Cc: gluster-users@gluster.org
Sent: Wednesday, December 1, 2010 3:34:10 PM
Subject: Re: [Gluster-users] glusterfs 3.1.0 disable-io-mode

Hello,

I remember that I tried that too with mount -t glusterfs, and I was
getting an error that glusterfs does not understand --disable-direct-io-mode
(I wasn't writing "disable" myself, but the command was probably being
translated to that).
We removed Gluster 3.1 and installed Gluster 3.0.5 from the CentOS RPMs,
where the parameter gave no error, but we continued having problems with
Xen and paravirtualized guests using tap disks. We installed the patched FUSE,
and it did not solve our problem. We came to the conclusion that maybe the
problem is that the kernel on these machines is 2.6.18-194.el5xen
(CentOS 5.5). On Debian machines, where the Xen kernel is 2.6.26, it was
working with tap, but the machines we want to use are Dell boxes on which
Debian does not install. We are going to test other clustered file systems
on these machines and will check GlusterFS again when we install CentOS 6
and can use a newer GlusterFS version.

Thanks for your replies
Kostas


On 1/12/2010 8:23 AM, Raghavendra G wrote:

Hi Kostas,

Is --direct-io-mode=off not working for you?

regards,
Raghavendra.
- Original Message -
From: Kakaletris Kostas kka...@yahoo.gr
To: gluster-users@gluster.org
Sent: Tuesday, November 23, 2010 3:27:39 PM
Subject: [Gluster-users] glusterfs 3.1.0 disable-io-mode

Hello,

I'm trying Gluster 3.1.0 in conjunction with Xen on CentOS 5.5.
I have a problem installing a VM because the installation hangs. I searched the
net and found that for Xen to work with previous Gluster versions, they had
to disable direct I/O mode.
glusterfs says it does not recognize --disable-direct-io-mode. I tried
several syntaxes with --diasble-direct-io-mode , direct-io-mode=disable,
direct-io-mode=off , and in different positions of the mount command
(mount -t glusterfs -oopt   ..  , mount -oxxx   -t glusterfs
:/xxx /xxx , etc)

Thanks in advance
Kostas
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Who's using Fedora in production on Glusterfs storage servers?

2010-12-01 Thread Mark Naoki Rogers

Hi James,

I'm using 3.1.1 on six bricks in dist+replicate, all running F14+BTRFS; 
the clients are on Fedora 12/13/14. I build the RPMs from source on an F14 
machine. The cluster is running entirely on GbE (with some 10Gb lines 
going in shortly), with no RDMA/InfiniBand, so I can't help there.


It has gone through a series of looped benchmarks for a while now (from 
3.1.0 through a few QA releases), and I have so far pushed/pulled over 
110TB through it. I'm happy with the stability but not /entirely/ sure of 
the performance just yet; I have just started more testing under 3.1.1.


But back to your main question: there really isn't enough difference 
between the near-term releases of Fedora for it to make a huge 
difference either way. I do think you're better off using the latest 
Fedora release than an older one that will be end of life soon (F12 
tomorrow). Being able to patch/maintain your system is more important 
than an often very arbitrary vendor support list, which is usually just 
an outcome of what people have had time to look into rather than any 
measured reason a newer OS isn't supported. Besides, the only thing you 
ever have to /really/ care about is the kernel and glibc major versions, 
so if it compiles you're pretty much OK (ldd it, that's all it needs).
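A quick way to run that check, assuming the Gluster binaries are already on the
PATH:

ldd $(which glusterfsd)
ldd $(which glusterfs)

Any line in the output that reads "not found" points at a missing library.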



On 12/02/2010 01:45 AM, Burnash, James wrote:

How many people on the list are using Fedora 12 (or 13) in production for 
Glusterfs storage servers? I know that Gluster Platform uses Fedora 12 as its 
OS - I was thinking of building my new glusterfs storage servers using Fedora, 
and was wondering whether Fedora 13 was tested by Gluster for v 3.1.1 and what 
other people's experiences were.

One of the reasons for my interest was so that I could use ext4 as the backend 
file store, instead of ext3.

Thanks,

James Burnash, Unix Engineering


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] RDMA Problems with GlusterFS 3.1.1

2010-12-01 Thread Jeremy Stout
As an update to my situation, I think I have GlusterFS 3.1.1 working
now. I was able to create and mount RDMA volumes without any errors.

To fix the problem, I had to make the following changes on lines 3562
and 3563 in rdma.c:
options->send_count = 32;
options->recv_count = 32;

The values were originally set to 128.

I'll run some tests tomorrow to verify that it is working correctly.
Assuming it does, what would be the expected side-effect of changing
the values from 128 to 32? Will there be a decrease in performance?


On Wed, Dec 1, 2010 at 10:07 AM, Jeremy Stout stout.jer...@gmail.com wrote:
 Here are the results of the test:
 submit-1:/usr/local/glusterfs/3.1.1/var/log/glusterfs # ibv_srq_pingpong
  local address:  LID 0x0002, QPN 0x000406, PSN 0x703b96, GID ::
  local address:  LID 0x0002, QPN 0x000407, PSN 0x618cc8, GID ::
  local address:  LID 0x0002, QPN 0x000408, PSN 0xd62272, GID ::
  local address:  LID 0x0002, QPN 0x000409, PSN 0x5db5d9, GID ::
  local address:  LID 0x0002, QPN 0x00040a, PSN 0xc51978, GID ::
  local address:  LID 0x0002, QPN 0x00040b, PSN 0x05fd7a, GID ::
  local address:  LID 0x0002, QPN 0x00040c, PSN 0xaa4a51, GID ::
  local address:  LID 0x0002, QPN 0x00040d, PSN 0xb7a676, GID ::
  local address:  LID 0x0002, QPN 0x00040e, PSN 0x56bde2, GID ::
  local address:  LID 0x0002, QPN 0x00040f, PSN 0xa662bc, GID ::
  local address:  LID 0x0002, QPN 0x000410, PSN 0xee27b0, GID ::
  local address:  LID 0x0002, QPN 0x000411, PSN 0x89c683, GID ::
  local address:  LID 0x0002, QPN 0x000412, PSN 0xd025b3, GID ::
  local address:  LID 0x0002, QPN 0x000413, PSN 0xcec8e4, GID ::
  local address:  LID 0x0002, QPN 0x000414, PSN 0x37e5d2, GID ::
  local address:  LID 0x0002, QPN 0x000415, PSN 0x29562e, GID ::
  remote address: LID 0x000b, QPN 0x000406, PSN 0x3b644e, GID ::
  remote address: LID 0x000b, QPN 0x000407, PSN 0x173320, GID ::
  remote address: LID 0x000b, QPN 0x000408, PSN 0xc105ea, GID ::
  remote address: LID 0x000b, QPN 0x000409, PSN 0x5e5ff1, GID ::
  remote address: LID 0x000b, QPN 0x00040a, PSN 0xff15b0, GID ::
  remote address: LID 0x000b, QPN 0x00040b, PSN 0xf0b152, GID ::
  remote address: LID 0x000b, QPN 0x00040c, PSN 0x4ced49, GID ::
  remote address: LID 0x000b, QPN 0x00040d, PSN 0x01da0e, GID ::
  remote address: LID 0x000b, QPN 0x00040e, PSN 0x69459a, GID ::
  remote address: LID 0x000b, QPN 0x00040f, PSN 0x197c14, GID ::
  remote address: LID 0x000b, QPN 0x000410, PSN 0xd50228, GID ::
  remote address: LID 0x000b, QPN 0x000411, PSN 0xbc9b9b, GID ::
  remote address: LID 0x000b, QPN 0x000412, PSN 0x0870eb, GID ::
  remote address: LID 0x000b, QPN 0x000413, PSN 0xfb1fbc, GID ::
  remote address: LID 0x000b, QPN 0x000414, PSN 0x3eefca, GID ::
  remote address: LID 0x000b, QPN 0x000415, PSN 0xbd64c6, GID ::
 8192000 bytes in 0.01 seconds = 5917.47 Mbit/sec
 1000 iters in 0.01 seconds = 11.07 usec/iter

 fs-1:/usr/local/glusterfs/3.1.1/var/log/glusterfs # ibv_srq_pingpong submit-1
  local address:  LID 0x000b, QPN 0x000406, PSN 0x3b644e, GID ::
  local address:  LID 0x000b, QPN 0x000407, PSN 0x173320, GID ::
  local address:  LID 0x000b, QPN 0x000408, PSN 0xc105ea, GID ::
  local address:  LID 0x000b, QPN 0x000409, PSN 0x5e5ff1, GID ::
  local address:  LID 0x000b, QPN 0x00040a, PSN 0xff15b0, GID ::
  local address:  LID 0x000b, QPN 0x00040b, PSN 0xf0b152, GID ::
  local address:  LID 0x000b, QPN 0x00040c, PSN 0x4ced49, GID ::
  local address:  LID 0x000b, QPN 0x00040d, PSN 0x01da0e, GID ::
  local address:  LID 0x000b, QPN 0x00040e, PSN 0x69459a, GID ::
  local address:  LID 0x000b, QPN 0x00040f, PSN 0x197c14, GID ::
  local address:  LID 0x000b, QPN 0x000410, PSN 0xd50228, GID ::
  local address:  LID 0x000b, QPN 0x000411, PSN 0xbc9b9b, GID ::
  local address:  LID 0x000b, QPN 0x000412, PSN 0x0870eb, GID ::
  local address:  LID 0x000b, QPN 0x000413, PSN 0xfb1fbc, GID ::
  local address:  LID 0x000b, QPN 0x000414, PSN 0x3eefca, GID ::
  local address:  LID 0x000b, QPN 0x000415, PSN 0xbd64c6, GID ::
  remote address: LID 0x0002, QPN 0x000406, PSN 0x703b96, GID ::
  remote address: LID 0x0002, QPN 0x000407, PSN 0x618cc8, GID ::
  remote address: LID 0x0002, QPN 0x000408, PSN 0xd62272, GID ::
  remote address: LID 0x0002, QPN 0x000409, PSN 0x5db5d9, GID ::
  remote address: LID 0x0002, QPN 0x00040a, PSN 0xc51978, GID ::
  remote address: LID 0x0002, QPN 0x00040b, PSN 0x05fd7a, GID ::
  remote address: LID 0x0002, QPN 0x00040c, PSN 0xaa4a51, GID ::
  remote address: LID 0x0002, QPN 0x00040d, PSN 0xb7a676, GID ::
  remote address: LID 0x0002, QPN 0x00040e, PSN 0x56bde2, GID ::
  remote address: LID 0x0002, QPN 0x00040f, PSN 0xa662bc, GID ::
  remote address: LID 0x0002, QPN 0x000410, PSN 0xee27b0, GID ::
  remote address: LID 0x0002, QPN 0x000411, PSN 0x89c683, GID ::
  remote address: LID 0x0002, QPN 0x000412, PSN 0xd025b3, GID ::
  remote address: LID 0x0002, QPN 0x000413, PSN 0xcec8e4, GID ::
  remote address: LID 0x0002, QPN 0x000414, PSN 0x37e5d2, GID ::
  remote address: LID 

Re: [Gluster-users] Does gluster passes LAN boundaries?

2010-12-01 Thread Amar Tumballi


 Could you point me to where this log is located? On the servers I have a
 /var/log/glusterfs folder, but on the client I cannot see where the logs are
 located.


Do 'which glusterfs' and get the prefix. The logs will be located by default
at '$prefix/var/log/glusterfs/*'.
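For example, for a source build installed under /usr/local (the prefix here is
only an assumption):

which glusterfs
  (prints something like /usr/local/sbin/glusterfs, so the prefix is /usr/local)
ls /usr/local/var/log/glusterfs/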

Regards,
Amar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] RDMA Problems with GlusterFS 3.1.1

2010-12-01 Thread Raghavendra G
Hi Jeremy,

Yes, there might be some performance decrease, but it should not affect the 
working of RDMA.

regards,
- Original Message -
From: Jeremy Stout stout.jer...@gmail.com
To: gluster-users@gluster.org
Sent: Thursday, December 2, 2010 8:30:20 AM
Subject: Re: [Gluster-users] RDMA Problems with GlusterFS 3.1.1

As an update to my situation, I think I have GlusterFS 3.1.1 working
now. I was able to create and mount RDMA volumes without any errors.

To fix the problem, I had to make the following changes on lines 3562
and 3563 in rdma.c:
options->send_count = 32;
options->recv_count = 32;

The values were originally set to 128.

I'll run some tests tomorrow to verify that it is working correctly.
Assuming it does, what would be the expected side-effect of changing
the values from 128 to 32? Will there be a decrease in performance?


On Wed, Dec 1, 2010 at 10:07 AM, Jeremy Stout stout.jer...@gmail.com wrote:
 Here are the results of the test:
 submit-1:/usr/local/glusterfs/3.1.1/var/log/glusterfs # ibv_srq_pingpong
  local address:  LID 0x0002, QPN 0x000406, PSN 0x703b96, GID ::
  local address:  LID 0x0002, QPN 0x000407, PSN 0x618cc8, GID ::
  local address:  LID 0x0002, QPN 0x000408, PSN 0xd62272, GID ::
  local address:  LID 0x0002, QPN 0x000409, PSN 0x5db5d9, GID ::
  local address:  LID 0x0002, QPN 0x00040a, PSN 0xc51978, GID ::
  local address:  LID 0x0002, QPN 0x00040b, PSN 0x05fd7a, GID ::
  local address:  LID 0x0002, QPN 0x00040c, PSN 0xaa4a51, GID ::
  local address:  LID 0x0002, QPN 0x00040d, PSN 0xb7a676, GID ::
  local address:  LID 0x0002, QPN 0x00040e, PSN 0x56bde2, GID ::
  local address:  LID 0x0002, QPN 0x00040f, PSN 0xa662bc, GID ::
  local address:  LID 0x0002, QPN 0x000410, PSN 0xee27b0, GID ::
  local address:  LID 0x0002, QPN 0x000411, PSN 0x89c683, GID ::
  local address:  LID 0x0002, QPN 0x000412, PSN 0xd025b3, GID ::
  local address:  LID 0x0002, QPN 0x000413, PSN 0xcec8e4, GID ::
  local address:  LID 0x0002, QPN 0x000414, PSN 0x37e5d2, GID ::
  local address:  LID 0x0002, QPN 0x000415, PSN 0x29562e, GID ::
  remote address: LID 0x000b, QPN 0x000406, PSN 0x3b644e, GID ::
  remote address: LID 0x000b, QPN 0x000407, PSN 0x173320, GID ::
  remote address: LID 0x000b, QPN 0x000408, PSN 0xc105ea, GID ::
  remote address: LID 0x000b, QPN 0x000409, PSN 0x5e5ff1, GID ::
  remote address: LID 0x000b, QPN 0x00040a, PSN 0xff15b0, GID ::
  remote address: LID 0x000b, QPN 0x00040b, PSN 0xf0b152, GID ::
  remote address: LID 0x000b, QPN 0x00040c, PSN 0x4ced49, GID ::
  remote address: LID 0x000b, QPN 0x00040d, PSN 0x01da0e, GID ::
  remote address: LID 0x000b, QPN 0x00040e, PSN 0x69459a, GID ::
  remote address: LID 0x000b, QPN 0x00040f, PSN 0x197c14, GID ::
  remote address: LID 0x000b, QPN 0x000410, PSN 0xd50228, GID ::
  remote address: LID 0x000b, QPN 0x000411, PSN 0xbc9b9b, GID ::
  remote address: LID 0x000b, QPN 0x000412, PSN 0x0870eb, GID ::
  remote address: LID 0x000b, QPN 0x000413, PSN 0xfb1fbc, GID ::
  remote address: LID 0x000b, QPN 0x000414, PSN 0x3eefca, GID ::
  remote address: LID 0x000b, QPN 0x000415, PSN 0xbd64c6, GID ::
 8192000 bytes in 0.01 seconds = 5917.47 Mbit/sec
 1000 iters in 0.01 seconds = 11.07 usec/iter

 fs-1:/usr/local/glusterfs/3.1.1/var/log/glusterfs # ibv_srq_pingpong submit-1
  local address:  LID 0x000b, QPN 0x000406, PSN 0x3b644e, GID ::
  local address:  LID 0x000b, QPN 0x000407, PSN 0x173320, GID ::
  local address:  LID 0x000b, QPN 0x000408, PSN 0xc105ea, GID ::
  local address:  LID 0x000b, QPN 0x000409, PSN 0x5e5ff1, GID ::
  local address:  LID 0x000b, QPN 0x00040a, PSN 0xff15b0, GID ::
  local address:  LID 0x000b, QPN 0x00040b, PSN 0xf0b152, GID ::
  local address:  LID 0x000b, QPN 0x00040c, PSN 0x4ced49, GID ::
  local address:  LID 0x000b, QPN 0x00040d, PSN 0x01da0e, GID ::
  local address:  LID 0x000b, QPN 0x00040e, PSN 0x69459a, GID ::
  local address:  LID 0x000b, QPN 0x00040f, PSN 0x197c14, GID ::
  local address:  LID 0x000b, QPN 0x000410, PSN 0xd50228, GID ::
  local address:  LID 0x000b, QPN 0x000411, PSN 0xbc9b9b, GID ::
  local address:  LID 0x000b, QPN 0x000412, PSN 0x0870eb, GID ::
  local address:  LID 0x000b, QPN 0x000413, PSN 0xfb1fbc, GID ::
  local address:  LID 0x000b, QPN 0x000414, PSN 0x3eefca, GID ::
  local address:  LID 0x000b, QPN 0x000415, PSN 0xbd64c6, GID ::
  remote address: LID 0x0002, QPN 0x000406, PSN 0x703b96, GID ::
  remote address: LID 0x0002, QPN 0x000407, PSN 0x618cc8, GID ::
  remote address: LID 0x0002, QPN 0x000408, PSN 0xd62272, GID ::
  remote address: LID 0x0002, QPN 0x000409, PSN 0x5db5d9, GID ::
  remote address: LID 0x0002, QPN 0x00040a, PSN 0xc51978, GID ::
  remote address: LID 0x0002, QPN 0x00040b, PSN 0x05fd7a, GID ::
  remote address: LID 0x0002, QPN 0x00040c, PSN 0xaa4a51, GID ::
  remote address: LID 0x0002, QPN 0x00040d, PSN 0xb7a676, GID ::
  remote address: LID 0x0002, QPN 0x00040e, PSN 0x56bde2, GID ::
  remote address: LID 0x0002, QPN 0x00040f, PSN 0xa662bc, GID ::
  remote address: LID 

Re: [Gluster-users] RDMA Problems with GlusterFS 3.1.1

2010-12-01 Thread Raghavendra G
Hi Jeremy,

In order to diagnose why completion queue creation is failing (as indicated by 
the logs), we want to know how much free memory was available on your system 
when glusterfs was started.
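For example, the output of the following, captured just before the glusterd and
glusterfs processes are started, would show that:

free -m
grep -i memfree /proc/meminfo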

regards,
- Original Message -
From: Raghavendra G raghaven...@gluster.com
To: Jeremy Stout stout.jer...@gmail.com
Cc: gluster-users@gluster.org
Sent: Thursday, December 2, 2010 10:11:18 AM
Subject: Re: [Gluster-users] RDMA Problems with GlusterFS 3.1.1

Hi Jeremy,

Yes, there might be some performance decrease, but it should not affect the 
working of RDMA.

regards,
- Original Message -
From: Jeremy Stout stout.jer...@gmail.com
To: gluster-users@gluster.org
Sent: Thursday, December 2, 2010 8:30:20 AM
Subject: Re: [Gluster-users] RDMA Problems with GlusterFS 3.1.1

As an update to my situation, I think I have GlusterFS 3.1.1 working
now. I was able to create and mount RDMA volumes without any errors.

To fix the problem, I had to make the following changes on lines 3562
and 3563 in rdma.c:
options->send_count = 32;
options->recv_count = 32;

The values were originally set to 128.

I'll run some tests tomorrow to verify that it is working correctly.
Assuming it does, what would be the expected side-effect of changing
the values from 128 to 32? Will there be a decrease in performance?


On Wed, Dec 1, 2010 at 10:07 AM, Jeremy Stout stout.jer...@gmail.com wrote:
 Here are the results of the test:
 submit-1:/usr/local/glusterfs/3.1.1/var/log/glusterfs # ibv_srq_pingpong
  local address:  LID 0x0002, QPN 0x000406, PSN 0x703b96, GID ::
  local address:  LID 0x0002, QPN 0x000407, PSN 0x618cc8, GID ::
  local address:  LID 0x0002, QPN 0x000408, PSN 0xd62272, GID ::
  local address:  LID 0x0002, QPN 0x000409, PSN 0x5db5d9, GID ::
  local address:  LID 0x0002, QPN 0x00040a, PSN 0xc51978, GID ::
  local address:  LID 0x0002, QPN 0x00040b, PSN 0x05fd7a, GID ::
  local address:  LID 0x0002, QPN 0x00040c, PSN 0xaa4a51, GID ::
  local address:  LID 0x0002, QPN 0x00040d, PSN 0xb7a676, GID ::
  local address:  LID 0x0002, QPN 0x00040e, PSN 0x56bde2, GID ::
  local address:  LID 0x0002, QPN 0x00040f, PSN 0xa662bc, GID ::
  local address:  LID 0x0002, QPN 0x000410, PSN 0xee27b0, GID ::
  local address:  LID 0x0002, QPN 0x000411, PSN 0x89c683, GID ::
  local address:  LID 0x0002, QPN 0x000412, PSN 0xd025b3, GID ::
  local address:  LID 0x0002, QPN 0x000413, PSN 0xcec8e4, GID ::
  local address:  LID 0x0002, QPN 0x000414, PSN 0x37e5d2, GID ::
  local address:  LID 0x0002, QPN 0x000415, PSN 0x29562e, GID ::
  remote address: LID 0x000b, QPN 0x000406, PSN 0x3b644e, GID ::
  remote address: LID 0x000b, QPN 0x000407, PSN 0x173320, GID ::
  remote address: LID 0x000b, QPN 0x000408, PSN 0xc105ea, GID ::
  remote address: LID 0x000b, QPN 0x000409, PSN 0x5e5ff1, GID ::
  remote address: LID 0x000b, QPN 0x00040a, PSN 0xff15b0, GID ::
  remote address: LID 0x000b, QPN 0x00040b, PSN 0xf0b152, GID ::
  remote address: LID 0x000b, QPN 0x00040c, PSN 0x4ced49, GID ::
  remote address: LID 0x000b, QPN 0x00040d, PSN 0x01da0e, GID ::
  remote address: LID 0x000b, QPN 0x00040e, PSN 0x69459a, GID ::
  remote address: LID 0x000b, QPN 0x00040f, PSN 0x197c14, GID ::
  remote address: LID 0x000b, QPN 0x000410, PSN 0xd50228, GID ::
  remote address: LID 0x000b, QPN 0x000411, PSN 0xbc9b9b, GID ::
  remote address: LID 0x000b, QPN 0x000412, PSN 0x0870eb, GID ::
  remote address: LID 0x000b, QPN 0x000413, PSN 0xfb1fbc, GID ::
  remote address: LID 0x000b, QPN 0x000414, PSN 0x3eefca, GID ::
  remote address: LID 0x000b, QPN 0x000415, PSN 0xbd64c6, GID ::
 8192000 bytes in 0.01 seconds = 5917.47 Mbit/sec
 1000 iters in 0.01 seconds = 11.07 usec/iter

 fs-1:/usr/local/glusterfs/3.1.1/var/log/glusterfs # ibv_srq_pingpong submit-1
  local address:  LID 0x000b, QPN 0x000406, PSN 0x3b644e, GID ::
  local address:  LID 0x000b, QPN 0x000407, PSN 0x173320, GID ::
  local address:  LID 0x000b, QPN 0x000408, PSN 0xc105ea, GID ::
  local address:  LID 0x000b, QPN 0x000409, PSN 0x5e5ff1, GID ::
  local address:  LID 0x000b, QPN 0x00040a, PSN 0xff15b0, GID ::
  local address:  LID 0x000b, QPN 0x00040b, PSN 0xf0b152, GID ::
  local address:  LID 0x000b, QPN 0x00040c, PSN 0x4ced49, GID ::
  local address:  LID 0x000b, QPN 0x00040d, PSN 0x01da0e, GID ::
  local address:  LID 0x000b, QPN 0x00040e, PSN 0x69459a, GID ::
  local address:  LID 0x000b, QPN 0x00040f, PSN 0x197c14, GID ::
  local address:  LID 0x000b, QPN 0x000410, PSN 0xd50228, GID ::
  local address:  LID 0x000b, QPN 0x000411, PSN 0xbc9b9b, GID ::
  local address:  LID 0x000b, QPN 0x000412, PSN 0x0870eb, GID ::
  local address:  LID 0x000b, QPN 0x000413, PSN 0xfb1fbc, GID ::
  local address:  LID 0x000b, QPN 0x000414, PSN 0x3eefca, GID ::
  local address:  LID 0x000b, QPN 0x000415, PSN 0xbd64c6, GID ::
  remote address: LID 0x0002, QPN 0x000406, PSN 0x703b96, GID ::
  remote address: LID 0x0002, QPN 0x000407, PSN 0x618cc8, GID ::
  remote address: LID 0x0002, QPN 0x000408, PSN 0xd62272, GID ::
  remote address: LID 

Re: [Gluster-users] simple 2 server replication question - how to mount for HA

2010-12-01 Thread Ben Blakley
We are setting up a redundant storage cluster with 2 servers running GlusterFS 3.1.

Per the documentation here:

http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Configuring_Distributed_Replicated_Volumes

We would create a volume using this:

#gluster volume create test-volume replica 2 transport tcp server1:/exp1 
server2:/exp2

Now my question is: how do we mount the clients in an HA fashion, so that if one 
of the servers goes down, things stay online?

If we use this mount option:

#mount -t glusterfs [-o options] volumeserver:volumeid mount-point

with a single server, then there is an issue if that server goes down.

I understand we could point to a volume file instead; could that volume file 
contain references for both servers?

#mount -t glusterfs [-o options] path/to/volumefile mountpoint

Should the replication be done on the server side or the client side? It seems 
it is being replicated by the server with this “replica 2” volume, so the 
question is: how do we do the mount?

Is it possible that the volume information is synched between the 2 servers, so 
that we can point the clients in our environment to either Gluster server?
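As a minimal illustration of the first mount form above, using the volume name
from the create command (the mount point here is an assumption):

mount -t glusterfs server1:/test-volume /mnt/test-volume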

 

 

Many thanks,
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users