[Gluster-devel] 1.3.8pre5 glusterfsd WARNING message

2008-04-09 Thread Daniel Maher
On Sun, 6 Apr 2008 00:42:15 -0700 Amar S. Tumballi [EMAIL PROTECTED] wrote: Hi all, GlusterFS-1.3.8pre5 (release candidate for 1.3.8-stable) is available for download now. Thanks for the new release. I built the RPMs and upgraded my test cluster to 1.3.8pre5. After restarting glusterfsd on

Re: [Gluster-devel] ls again

2008-04-09 Thread Jordi Moles
Hi, I restarted both nodes and clients, because it got to the point where the filesystem was totally inaccessible, which means that an ls command didn't work anymore; I could only get that error saying that the system was not ready. Anyway... without doing anything else... just restarting

Re: [Gluster-devel] two-node HA cluster failover test - failed again :(

2008-04-09 Thread Vikas Gorur
Excerpts from Daniel Maher's message of Wed Apr 09 16:40:20 +0530 2008: Hello all, After upgrading to 1.3.8pre5, I performed a simple failover test of my two-node HA Gluster cluster (wherein one of the nodes is unplugged from the network). Unfortunately, the results were - once again -

Re: [Gluster-devel] 1.3.8pre5 glusterfsd WARNING message

2008-04-09 Thread Krishna Srinivas
Daniel, you don't need unify/namespace for your setup here; you can directly export gfs-ds-afr from the server, which can be mounted on the client. Can you paste your client spec? If you are connecting the client to the server which goes down, then the client mount point becomes inaccessible, in which
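Krishna's suggestion above (mount the server-side gfs-ds-afr volume directly, with no unify/namespace layer on the client) could be sketched as a minimal GlusterFS 1.3-style client spec. This is an illustrative assumption, not a config from the thread: only the volume name gfs-ds-afr appears in the discussion; the server address and transport options are placeholders.

```
# client.vol -- sketch: mount the server's AFR volume directly, no unify
volume gfs-ds-afr
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1       # assumed: address of one gluster server
  option remote-subvolume gfs-ds-afr   # the AFR volume exported by the server
end-volume
```

With a spec like this, failover behaviour depends on which server the client is attached to, which is exactly the caveat Krishna raises: if the client is connected to the server that goes down, the mount point becomes inaccessible.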

Re: [Gluster-devel] two-node HA cluster failover test - failed again :(

2008-04-09 Thread Daniel Maher
On Wed, 09 Apr 2008 17:19:49 +0530 Vikas Gorur [EMAIL PROTECTED] wrote: This is definitely something GlusterFS is designed to handle. I've set up this configuration in our lab and am looking into it. Excellent! Specifically, on dfsC you should have subvolumes gfs-dfsD-ds gfs-ds and
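The layout Vikas describes for dfsC (a server-side AFR whose subvolumes are the remote brick gfs-dfsD-ds and the local brick gfs-ds) could be sketched as a GlusterFS 1.3-style server spec. Hedged: the volume names gfs-ds, gfs-dfsD-ds, and gfs-ds-afr come from the thread, but the brick directory, the dfsD hostname, and the auth rule are illustrative assumptions.

```
# dfsC server.vol -- sketch of the subvolume layout described above
volume gfs-ds
  type storage/posix
  option directory /data/gfs-ds        # assumed local brick path
end-volume

volume gfs-dfsD-ds
  type protocol/client
  option transport-type tcp/client
  option remote-host dfsD              # assumed: dfsD resolves to the peer node
  option remote-subvolume gfs-ds       # dfsD's local brick
end-volume

volume gfs-ds-afr
  type cluster/afr
  subvolumes gfs-dfsD-ds gfs-ds        # replicate across remote and local bricks
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes gfs-ds-afr
  option auth.ip.gfs-ds-afr.allow *    # open auth, for a lab setup only
end-volume
```

A mirrored spec on dfsD (with the remote client volume pointing back at dfsC) would complete the two-node HA pair.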

Re: [Gluster-devel] Failover test early success (WAS: 1.3.8pre5 glusterfsd WARNING message)

2008-04-09 Thread Daniel Maher
On Wed, 9 Apr 2008 18:50:31 +0530 Krishna Srinivas [EMAIL PROTECTED] wrote: OK, we are trying to reproduce the setup and fix the problem... btw, can you try without unify and see if failover happens cleanly? Correct, the wiki needs to be changed. I am CC'ing its author Paul England - pengland

[Gluster-devel] oddity with the mountpoint

2008-04-09 Thread Daniel Maher
After my initial success in removing the Unify block from my config, I rebooted each machine in the test network (to get a nice fresh start), and now I was unable to access the mountpoint properly. Note the following cut and paste from the user session on the client: [EMAIL PROTECTED]

Re: [Gluster-devel] Failover test early success (WAS: 1.3.8pre5 glusterfsd WARNING message)

2008-04-09 Thread Brandon Lamb
On Wed, Apr 9, 2008 at 7:43 AM, Daniel Maher [EMAIL PROTECTED] wrote: On Wed, 9 Apr 2008 18:50:31 +0530 Krishna Srinivas [EMAIL PROTECTED] wrote: OK, we are trying to reproduce the setup and fix the problem... btw, can you try without unify and see if failover happens cleanly? Correct the

[Gluster-devel] YaAQ

2008-04-09 Thread Brandon Lamb
Yet another AFR question... ha ha, I'm a snowflake. OK, so I was just reading the glusterfs.org wiki, and there is quite a bit more documentation / howtos / examples than previously (a very big thank you to whoever has contributed to that). It brought up a question: do I need to use unify with AFR?