Re: [Gluster-users] Unable to peer probe after upgrade to 3.3

2012-08-03 Thread Rahul Hinduja
Hi Dan,

I had a setup where I manually edited the username to rahul and the password
to hinduja. A volume reset was done after editing the file. It looks like it
does not matter what format the creds are in (UUIDs in the default case).

Please try this out.

Copied the gluster-users list.

Thanks,
Rahul Hinduja

- Original Message -
From: Dan Bretherton d.a.brether...@reading.ac.uk
To: Rahul Hinduja rhind...@redhat.com
Cc: v...@redhat.com, Sudhir Dharanendraiah sdhar...@redhat.com
Sent: Friday, August 3, 2012 3:50:21 PM
Subject: Re: [Gluster-users] Unable to peer probe after upgrade to 3.3

Hello Rahul,
Thanks for looking into this for me.  You are correct; my volume files 
don't have username and password entries.

I created a test volume on a machine with 3.3 freshly installed and the 
username and password entries look like this.

volume-id=d018547d-ceeb-4a55-b5fa-2237deaf572f
username=5b1c5055-98b5-4b5b-a448-29eb7ac878b3

Please can you tell me how to generate these long strings?  Are they 
randomly generated, and can they be the same for every volume?  Can I 
copy and paste the above into my existing volume files?

By the way I notice that your reply wasn't copied to the gluster-users 
list.  Was that deliberate?  I think it would be useful to update 
everyone on the list with the solution to my problem, so please let me 
know if you would be happy for me to CC gluster-users next time.

Regards,
Dan Bretherton

-- 
Mr. D.A. Bretherton
Computer System Manager
Environmental Systems Science Centre (ESSC)
Harry Pitt Building
3 Earley Gate
University of Reading
Reading, RG6 7BE (or RG6 6AL for postal service deliveries)
UK
Tel. +44 118 378 5205, Fax: +44 118 378 6413


On 08/03/2012 08:48 AM, Rahul Hinduja wrote:
 Hi Dan,

 Can you please confirm whether your vol info file has the username and 
 password entries for the volumes after upgrade from 3.2 to 3.3?

 This issue is probably because 3.2's volume config lacks the 
 "username/password" authentication entries that 3.3 requires.

 The vol info file should be located under 
 /var/lib/glusterd/vols/vol-name/info. If the entries (username and 
 password) are not present, then my workaround is:

 # gluster volume stop vol-name
 # service glusterd stop
 Modify the vol info files manually (adding the username/password auth entries).
 # service glusterd start
 # gluster volume reset vol-name
 # gluster volume start vol-name force

 Thanks,
 Rahul Hinduja



 - Original Message -
 From: Dan Brethertond.a.brether...@reading.ac.uk
 To: Harry Mangalamhjmanga...@gmail.com
 Cc: gluster-usersgluster-users@gluster.org
 Sent: Thursday, August 2, 2012 10:38:54 PM
 Subject: Re: [Gluster-users] Unable to peer probe after upgrade to 3.3

 Hello Harry,
 Thanks for that suggestion.  That machine is indeed in a Rocks cluster,
 and I used it as an example because until recently I was adding the
 Rocks cluster nodes as GlusterFS peers so I could NFS mount from localhost.

 You asked: "Can your machines do DNS lookups and reverse lookups to each
 other (ie names resolve to the correct IP #s and vice versa)?"

 Yes, I just tested forward and reverse lookups on them all using pdsh,
 for all addresses and hostnames.

 To avoid muddying the waters with Rocks cluster issues I tried gluster
 peer probe again, this time with a spare storage server which has only
 one network interface.  Unfortunately the result was the same.  I
 checked that the firewall and SELinux were disabled on all machines first.

 -Dan.

 On 08/02/2012 04:49 PM, Harry Mangalam wrote:
 Based on the error log, I'd guess at a DNS problems.  Can your
 machines do DNS lookups and reverse lookups to each other (ie names
 resolve to the correct IP #s and vice versa)?  Based on your
 hostnames, it looks like you're running on a ROCKS cluster so you
 might have competing (or incorrect) DNS info (cluster DNS vs
 institutional DNS vs /etc/hosts info).

 It shouldn't be the case in a cluster but firewalls can obviously be a 
 problem.

 hjm

 On Thu, Aug 2, 2012 at 8:21 AM, Dan Bretherton
 d.a.brether...@reading.ac.uk   wrote:
 Dear All-
 My recent upgrade from 3.2.6 to 3.3.0 went well, but now I can't add new
 peers to the cluster.  I can create a new peer group of servers all with 3.3
 freshly installed, but if any one of them was upgraded from 3.2 the gluster
 peer probe commands just hang for a while and return nothing. Following
 that, gluster peer status results in output like the following for the new
 peer being added.

 Hostname: compute-0-4.nerc-essc.ac.uk
 Uuid: 111612e4-537b-49b4-9e88-2e0e1bae7fdf
 State: Establishing Connection (Connected)

 Errors like these are produced in etc-glusterfs-glusterd.vol.log.

 [2012-08-02 13:00:53.553927] I
 [glusterd-op-sm.c:2653:glusterd_op_txn_complete] 0-glusterd: Cleared local
 lock
 [2012-08-02 15:55:19.244849] I
 [glusterd-handler.c:679:glusterd_handle_cli_probe] 0-glusterd: Received CLI
 probe req compute-0-4.nerc-essc.ac.uk 24007
 [2012-08-02 15:55:19.357191] I [glusterd-handler.c:423:glusterd_friend_find]
 

Re: [Gluster-users] Gluster-users Digest, Vol 51, Issue 49

2012-08-03 Thread Ben England
 Message: 4
 Date: Fri, 27 Jul 2012 15:29:41 -0700
 From: Harry Mangalam hjmanga...@gmail.com
 Subject: [Gluster-users] Change NFS parameters post-start
 To: gluster-users gluster-users@gluster.org
 Message-ID:
   CAEib2OnKfENr8NhVwkvpsw21C5QJmzu_=C9j144p2Gkn7KP=l...@mail.gmail.com
 Content-Type: text/plain; charset=ISO-8859-1
 
 In trying to convert clients from using the gluster native client to
 an NFS client, I'm trying to get the gluster volume mounted on a test
 mount point on the same client that the native client has mounted the
 volume.  The client refuses with the error:
 
  mount -t nfs bs1:/gl /mnt/glnfs
 mount: bs1:/gl failed, reason given by server: No such file or
 directory
 

Harry,

Have you tried: 
# mount -t nfs -o nfsvers=3,tcp bs1:/gl /mnt/glnfs

Also, there is an /etc/sysconfig/nfs file that may let you remove RDMA as a 
mount option for NFS.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] kernel parameters for improving gluster writes on millions of small writes (long)

2012-08-03 Thread Brian Foster
On 07/26/2012 11:47 PM, Harry Mangalam wrote:
...
 
 So why doesn't the gluster native client do client-side caching like
 NFS?  It looks like it's explicitly refusing to be cached by the usual
 (and usually excellent) Linux mechanisms.
 What's the reason for declining this OS advantage on the client side
 while providing such a technically sweet solution on the server side?
 I'm at a loss to explain this behavior to our technical group.
 

My understanding is that this is a limitation of fuse more so than
glusterfs: fuse currently fires off each write() it receives to the
client fs (gluster). FWIW, there is a fuse enhancement under development
that you can check out over on fuse-devel:

http://article.gmane.org/gmane.linux.file-systems/65661

I can't say whether that would solve your performance problems, but you
could certainly give it a try. I believe the intent is to sink writes
into the page cache (similar to NFS or a local filesystem) and send out
larger requests down to the fuse filesystem when writeback kicks in.

Brian


Re: [Gluster-users] Change NFS parameters post-start

2012-08-03 Thread Joe Julian

On 08/03/2012 01:21 PM, Ben England wrote:

Message: 4
Date: Fri, 27 Jul 2012 15:29:41 -0700
From: Harry Mangalamhjmanga...@gmail.com
Subject: [Gluster-users] Change NFS parameters post-start
To: gluster-usersgluster-users@gluster.org
Message-ID:
CAEib2OnKfENr8NhVwkvpsw21C5QJmzu_=C9j144p2Gkn7KP=l...@mail.gmail.com
Content-Type: text/plain; charset=ISO-8859-1

In trying to convert clients from using the gluster native client to
an NFS client, I'm trying to get the gluster volume mounted on a test
mount point on the same client that the native client has mounted the
volume.  The client refuses with the error:

  mount -t nfs bs1:/gl /mnt/glnfs
mount: bs1:/gl failed, reason given by server: No such file or
directory


Harry,

Have you tried:
# mount -t nfs -o nfsvers=3,tcp bs1:/gl /mnt/glnfs

Also, there is an /etc/sysconfig/nfs file that may let you remove RDMA as a 
mount option for NFS.


You also have to ensure that the kernel nfs server isn't running.


Re: [Gluster-users] Gluster-users Digest, Vol 51, Issue 46

2012-08-03 Thread Harry Mangalam
Hi Ben,

Thanks for the expert advice.

On Fri, Aug 3, 2012 at 2:35 PM, Ben England bengl...@redhat.com wrote:

 4. Re: kernel parameters for improving gluster writes on millions of small
 writes (long) (Harry Mangalam)

 Harry, You are correct, Glusterfs throughput with small write transfer
 sizes is a client-side problem, here are workarounds that at least some
 applications could use.


Not to be impertinent nor snarky, but why is the gluster client written in
this way and is that a high priority for fixing?  It seems that
caching/buffering is one of the great central truths of computer science in
general.  Is there a countering argument for not doing this?

1) NFS client is one workaround, since it buffers writes using the kernel
 buffer cache.


Yes, I tried this and I find the same thing.  One thing I am unclear about,
though, is whether you can set up and run one NFS server per gluster server
node, i.e. my glusterfs runs on 4 servers - could I connect clients to each
one using round-robin selection or some other load/bandwidth balancing
approach?  I've read opinions that seem to support both yes and no.



 2) If your app does not have a configurable I/O size, but it lets you
 write to stdout, you can try piping your output to stdout and letting dd
 aggregate your I/O to the filesystem for you.  In this example we triple
 single-thread write throughput for 4-KB I/O requests.


I agree again - I wrote this up for the gluster 'hints' page http://goo.gl/NyMXO
using gzip (other utilities seem to work as well, as do named pipes for
handling more complex output options).
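For reference, a runnable sketch of the dd aggregation trick (the
emit_small_writes function stands in for a real application, and the output
path is illustrative):

```shell
# stand-in for an application that issues many small writes:
# 1000 lines of 80 bytes each
emit_small_writes() {
  i=0
  while [ $i -lt 1000 ]; do
    printf '%079d\n' "$i"
    i=$((i + 1))
  done
}

# dd coalesces the stream into 1 MB writes before they reach the
# output file, which in real use would sit on the gluster mount
emit_small_writes | dd of=/tmp/glnfs-demo.out bs=1M 2>/dev/null
```

With a real application you would replace the function with something like
"myapp | dd of=/path/on/gluster/out.dat bs=1M".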



[nice examples deleted]



 3) If your program is written in C and it uses stdio.h, you can probably
 use the setvbuf() C RTL call to increase the buffer size to something
 greater than 8 KB, which is the default in gcc-4.4.


Most of our users are not programmers and so this is not an option in most
cases.


 http://en.cppreference.com/w/c/io/setvbuf




-- 
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[m/c 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
415 South Circle View Dr, Irvine, CA, 92697 [shipping]
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)


Re: [Gluster-users] Change NFS parameters post-start

2012-08-03 Thread Harry Mangalam
Thanks, Joe, for this (and other help on IRC).
Yes, I did check this and no, it's not running.

Harry

On Fri, Aug 3, 2012 at 4:26 PM, Joe Julian j...@julianfamily.org wrote:

 You also have to ensure that the kernel nfs server isn't running.




-- 
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[m/c 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
415 South Circle View Dr, Irvine, CA, 92697 [shipping]
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)