Re: [Gluster-users] GlusterFS 3.3 beta on Debian

2012-05-03 Thread Sachidananda Urs
Hi,

On Thu, May 3, 2012 at 10:49 AM, Toby Corkindale 
toby.corkind...@strategicdata.com.au wrote:


 The files are located in a directory that looks like they were built for
 Debian Lenny, here:

 http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.3.0beta3/Debian/5.0.3/

 Note the 5.0.3 at the end of the path..


 However, when attempting to install the .deb file, it gives an error
 about package libssl1.0.0 being missing.

 That package is only available in the upcoming Debian version 7.0
 (Wheezy), and is not available for Debian 5.0 or 6.0 at all.

The packages are built for wheezy/sid:

debian5:~# cat /etc/debian_version
wheezy/sid
debian5:~# uname -a
Linux debian5.0.3 2.6.26-2-amd64 #1 SMP Wed Aug 19 22:33:18 UTC 2009 x86_64
GNU/Linux
debian5:~#

Can you see if libssl0.9.8 is available? That version should work.
Otherwise compiling from source should definitely work.
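
Something along these lines should show whether the older runtime can be
pulled in (a rough sketch; package name assumed to be libssl0.9.8 on
lenny/squeeze):

# does the repo carry the older OpenSSL runtime, and is it installable?
apt-cache policy libssl0.9.8
apt-get install libssl0.9.8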

-sachidananda.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 3.3 beta on Debian

2012-05-03 Thread Toby Corkindale

On 03/05/12 17:55, Sachidananda Urs wrote:

Hi,

On Thu, May 3, 2012 at 10:49 AM, Toby Corkindale
toby.corkind...@strategicdata.com.au
mailto:toby.corkind...@strategicdata.com.au wrote:

The files are located in a directory that looks like they were built for
Debian Lenny, here:

http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.3.0beta3/Debian/5.0.3/

Note the 5.0.3 at the end of the path..

However, when attempting to install the .deb file, it gives an error
about package libssl1.0.0 being missing.

That package is only available in the upcoming Debian version 7.0
(Wheezy), and is not available for Debian 5.0 or 6.0 at all.

The packages are built for wheezy/sid:

debian5:~# cat /etc/debian_version
wheezy/sid
debian5:~# uname -a
Linux debian5.0.3 2.6.26-2-amd64 #1 SMP Wed Aug 19 22:33:18 UTC 2009
x86_64 GNU/Linux
debian5:~#

Can you see if libssl0.9.8 is available? That version should work.
Otherwise compiling from source should definitely work.


Hi,
libssl0.9.8 is available on Debian Squeeze (6.0), however that doesn't 
satisfy the package dependencies, so it won't install.


I gather there were ABI/API changes between libssl0.9.8 and libssl1.0.0 too.

Toby
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Bricks suggestions

2012-05-03 Thread Gandalf Corvotempesta
2012/5/2 Arnold Krille arn...@arnoldarts.de

 As far as I understand it (and for the current stable 3.2) there are two
 ways, depending on what you want:
 When you want a third copy of the data, it seems you can't simply increase
 the replication level and add a brick. Instead you have to stop usage,
 delete the volume (without deleting the underlying bricks, of course) and
 then rebuild the volume with the new replica count and bricks. Then
 self-heal should do the trick and copy the data onto the third machine.


I can't stop a production volume used by many customers.
I think the best approach is to start with the correct number of replica
nodes, even if one of those nodes is not present yet.

That way the volume is created properly, and when needed I just have to add
the new machine and trigger the self-heal.
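
For reference, a replica-3 create looks roughly like this (host names and
brick paths are made up, just to show the shape of the command):

gluster peer probe server2
gluster peer probe server3
gluster volume create myvol replica 3 \
    server1:/export/brick1 server2:/export/brick1 server3:/export/brick1
gluster volume start myvol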
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] [3.3 beta3] When should the self-heal daemon be triggered?

2012-05-03 Thread Toby Corkindale

PS. The use-case I have in mind is this scenario.

Say we have two Gluster storage bricks, replicated.
We want to do something that involves taking down the machines for an 
hour or two each, one at a time.


So, we turn one machine off. All traffic now goes to the single remaining
machine.


After a couple of hours, we turn the upgraded machine back on.

We now want to take the second machine down for a few hours... but of
course, it's full of data written during the last two hours.


Will the self-heal mechanism automatically replicate that data to the
machine that was down, now that it's back online?
Does that sync start immediately, after a certain time period, or only
after we manually run the 'volume heal' command?


ta,
Toby


On 03/05/12 19:45, Toby Corkindale wrote:

Hi,
I eventually installed three Debian unstable machines, so I could
install the GlusterFS 3.3 beta3.

I have a question about the self-heal daemon.

I'm trying to get a volume which is replicated, with two bricks.

I started up the volume, wrote some data, then killed one machine, and
then wrote more data to a few folders from the client machine.
Then I restarted the second brick server.

At this point, the second server seemed to self-heal enough that it
registered the new directories, but all the files inside were zero-length.

I then ran the command:
gluster volume heal testvol

After I ran that, there was some activity, and now all the files were
populated.


Was that supposed to happen automatically, eventually, or am I missing
something about how the self-heal daemon works?


Thanks,
Toby
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] [3.3 beta3] When should the self-heal daemon be triggered?

2012-05-03 Thread Geoff Galitz



I then ran the command:
gluster volume heal testvol

After I ran that, there was some activity, and now all the files were  
populated.



Was that supposed to happen automatically, eventually, or am I missing  
something about how the self-heal daemon works?




The self-heal daemon triggers a crawl once every 600 seconds. If you  
wait out that interval, you should be able to see self-heals happening  
automatically. Else you can trigger it explicitly the way you did.
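
For reference, the explicit triggers on 3.3 look roughly like this (volume
name taken from the earlier mail; sub-command names as in the 3.3 CLI,
adjust if your beta differs):

# heal only the files that are known to need it
gluster volume heal testvol
# show which files still need healing
gluster volume heal testvol info
# force a full crawl of the volume
gluster volume heal testvol full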




As a follow-up question to that: does this all apply to gluster 3.2.4  
also?  And if you manually trigger a self-heal (via doing a find + stat as  
some of us were originally trained) multiple times within a 10 minute  
window, will that cause a problem?







Geoff Galitz, ggal...@shutterstock.com
WebOps Engineer, Europe
Shutterstock Images
http://.shutterstock.com/
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] [3.3 beta3] When should the self-heal daemon be triggered?

2012-05-03 Thread Vijay Bellur

On 05/03/2012 03:44 PM, Geoff Galitz wrote:



I then ran the command:
gluster volume heal testvol

After I ran that, there was some activity, and now all the files 
were populated.



Was that supposed to happen automatically, eventually, or am I 
missing something about how the self-heal daemon works?




The self-heal daemon triggers a crawl once every 600 seconds. If you 
wait out that interval, you should be able to see self-heals 
happening automatically. Else you can trigger it explicitly the way 
you did.




As a follow-up question to that: does this all apply to gluster 3.2.4 
also?  And if you manually trigger a self-heal (via doing a find + 
stat as some of us were originally trained) multiple times within a 10 
minute window, will that cause a problem?





This does not apply to gluster 3.2.4. Manually triggering a self-heal 
multiple times should not cause a problem. If you notice any aberration, 
please do let us know.
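
For completeness, the manual crawl mentioned above is usually just a find
plus stat over the mount point (a minimal sketch, assuming the volume is
mounted at /mnt/testvol):

# touching every file's metadata from a client triggers self-heal on 3.2.x
find /mnt/testvol -noleaf -print0 | xargs --null stat > /dev/null 2>&1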




Vijay

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 3.3 beta on Debian

2012-05-03 Thread Gavin
Hi,

I also have a few Squeeze servers ready to go with Gluster 3.3 beta 3.

What is the workaround here?

Upgrade to Wheezy, or compile from source?

Thanks,
Gavin

On 03/05/2012, Toby Corkindale toby.corkind...@strategicdata.com.au wrote:
 On 03/05/12 17:55, Sachidananda Urs wrote:
 Hi,

 On Thu, May 3, 2012 at 10:49 AM, Toby Corkindale
 toby.corkind...@strategicdata.com.au
 mailto:toby.corkind...@strategicdata.com.au wrote:

 The files are located in a directory that looks like they were built
 for
 Debian Lenny, here:

 http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.3.0beta3/Debian/5.0.3/

 Note the 5.0.3 at the end of the path..

 However, when attempting to install the .deb file, it gives an error
 about package libssl1.0.0 being missing.

 That package is only available in the upcoming Debian version 7.0
 (Wheezy), and is not available for Debian 5.0 or 6.0 at all.

 The packages are built for wheezy/sid:

 debian5:~# cat /etc/debian_version
 wheezy/sid
 debian5:~# uname -a
 Linux debian5.0.3 2.6.26-2-amd64 #1 SMP Wed Aug 19 22:33:18 UTC 2009
 x86_64 GNU/Linux
 debian5:~#

 Can you see if libssl0.9.8 is available? That version should work.
 Otherwise compiling from source should definitely work.

 Hi,
 libssl0.9.8 is available on Debian Squeeze (6.0), however that doesn't
 satisfy the package dependencies, so it won't install.

 I gather there were ABI/API changes between libssl0.9.8 and libssl1.0.0
 too.

 Toby
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] unable to get Geo-replication working

2012-05-03 Thread Andreas Kurz
Hello Scot,

On 04/27/2012 03:53 PM, Scot Kreienkamp wrote:
 Hey everyone,
 
  
 
 I'm trying to get geo-replication working from a two brick replicated
 volume to a single directory on a remote host.  I can ssh as either root
 or georep-user to the destination as either georep-user or root with no
 password using the default ssh commands given by the config command: ssh
 -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
 /etc/glusterd/geo-replication/secret.pem.  All the glusterfs rpms are
 installed on the remote host.  There are no firewalls running on any of
 the hosts and no firewalls in between them.  The remote_gsyncd command
 is correct as I can copy and paste it to the command line and run it on
 both source hosts and destination host.  I'm using the current
  production version of GlusterFS 3.2.6, with the rsync 3.0.9 and fuse-2.8.3
  rpms installed, OpenSSH 5.3, and Python 2.6.6 on RHEL 6.2.  The remote
  directory is set to 777 (world read-write), so there are no permission
  errors.
 
  
 
 I'm using this command to start replication: gluster volume
 geo-replication RMSNFSMOUNT hptv3130:/nfs start
 
  
 
 Whenever I try to initiate geo-replication the status goes to starting
 for about 30 seconds, then goes to faulty.  On the slave I get these
 messages repeating in the geo-replication-slaves log:

Does the filesystem on the slave have xattrs enabled? Is there any
improvement if glusterd is stopped on the slave?
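
A quick way to check xattr support on the slave (a sketch, assuming the attr
tools are installed and /nfs is the slave directory from your command):

# set and read back a user xattr in the slave's target directory
setfattr -n user.georep-test -v ok /nfs && getfattr -n user.georep-test /nfs
# if that fails and the directory sits on ext3/ext4, the filesystem may need
# to be mounted (or remounted) with the user_xattr option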

Regards,
Andreas

 
  
 
 [2012-04-27 09:37:59.485424] I [resource(slave):201:service_loop] FILE:
 slave listening
 
 [2012-04-27 09:38:05.413768] I [repce(slave):60:service_loop]
 RepceServer: terminating on reaching EOF.
 
 [2012-04-27 09:38:15.35907] I [resource(slave):207:service_loop] FILE:
 connection inactive for 120 seconds, stopping
 
 [2012-04-27 09:38:15.36382] I [gsyncd(slave):302:main_i] top: exiting.
 
 [2012-04-27 09:38:19.952683] I [gsyncd(slave):290:main_i] top:
 syncing: file:///nfs
 
 [2012-04-27 09:38:19.955024] I [resource(slave):201:service_loop] FILE:
 slave listening
 
  
 
  
 
 I get these messages in etc-glusterfs-glusterd.vol.log on the slave:
 
  
 
 [2012-04-27 09:39:23.667930] W
 [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
 reading from socket failed. Error (Transport endpoint is not connected),
 peer (127.0.0.1:1021)
 
 [2012-04-27 09:39:43.736138] I
 [glusterd-handler.c:3226:glusterd_handle_getwd] 0-glusterd: Received
 getwd req
 
 [2012-04-27 09:39:43.740749] W
 [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
 reading from socket failed. Error (Transport endpoint is not connected),
 peer (127.0.0.1:1023)
 
  
 
 As I understand it from searching the list that message is benign and
 can be ignored though.
 
  
 
  
 
 Here are tails of all the logs on one of the sources:
 
  
 
 [root@retv3130 RMSNFSMOUNT]# tail
 ssh%3A%2F%2Fgeorep-user%4010.2.1.60%3Afile%3A%2F%2F%2Fnfs.gluster.log
 
 +--+
 
 [2012-04-26 16:16:40.804047] E [socket.c:1685:socket_connect_finish]
 0-RMSNFSMOUNT-client-1: connection to  failed (Connection refused)
 
 [2012-04-26 16:16:40.804852] I [rpc-clnt.c:1536:rpc_clnt_reconfig]
 0-RMSNFSMOUNT-client-0: changing port to 24009 (from 0)
 
 [2012-04-26 16:16:44.779451] I [rpc-clnt.c:1536:rpc_clnt_reconfig]
 0-RMSNFSMOUNT-client-1: changing port to 24010 (from 0)
 
 [2012-04-26 16:16:44.855903] I
 [client-handshake.c:1090:select_server_supported_programs]
 0-RMSNFSMOUNT-client-0: Using Program GlusterFS 3.2.6, Num (1298437),
 Version (310)
 
 [2012-04-26 16:16:44.856893] I
 [client-handshake.c:913:client_setvolume_cbk] 0-RMSNFSMOUNT-client-0:
 Connected to 10.170.1.222:24009, attached to remote volume '/nfs'.
 
 [2012-04-26 16:16:44.856943] I [afr-common.c:3141:afr_notify]
 0-RMSNFSMOUNT-replicate-0: Subvolume 'RMSNFSMOUNT-client-0' came back
 up; going online.
 
 [2012-04-26 16:16:44.866734] I [fuse-bridge.c:3339:fuse_graph_setup]
 0-fuse: switched to graph 0
 
 [2012-04-26 16:16:44.867391] I [fuse-bridge.c:3241:fuse_thread_proc]
 0-fuse: unmounting /tmp/gsyncd-aux-mount-8zMs0J
 
 [2012-04-26 16:16:44.868538] W [glusterfsd.c:727:cleanup_and_exit]
  (-->/lib64/libc.so.6(clone+0x6d) [0x31494e5ccd]
  (-->/lib64/libpthread.so.0() [0x3149c077f1]
  (-->/opt/glusterfs/3.2.6/sbin/glusterfs(glusterfs_sigwaiter+0x17c)
  [0x40477c]))) 0-: received signum (15), shutting down
 
 [root@retv3130 RMSNFSMOUNT]# tail
 ssh%3A%2F%2Fgeorep-user%4010.2.1.60%3Afile%3A%2F%2F%2Fnfs.log
 
  [2012-04-26 16:16:39.263871] I [gsyncd:290:main_i] top: syncing:
  gluster://localhost:RMSNFSMOUNT -> ssh://georep-user@hptv3130:/nfs
 
 [2012-04-26 16:16:41.332690] E [syncdutils:133:log_raise_exception]
 top: FAIL:
 
 Traceback (most recent call last):
 
   File
 /opt/glusterfs/3.2.6/local/libexec/glusterfs/python/syncdaemon/syncdutils.py,
 line 154, in twrap
 
 tf(*aa)
 
   File
 /opt/glusterfs/3.2.6/local/libexec/glusterfs/python/syncdaemon/repce.py,
 line 117, in listen
 
   

[Gluster-users] Problem stressing an Apache Web server on GlusterFS volume

2012-05-03 Thread PANICHI MASSIMILIANO
Hi,

we are testing an Apache web server on a GlusterFS (GFS) volume. Our goal is
to build an HA reverse proxy with Pacemaker and GFS.

We installed and configured four GFS nodes as a distributed-replicated
volume, each node with 10GB of storage, for a total of 20GB of usable disk
space. The Apache web server uses the GFS volume for saving its logs
(access_log and error_log).
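
For reference, a create along these lines would give that layout (gfs01-dev
is from the mount command below; the other host names are assumed, and the
brick paths are taken from the listings further down):

gluster volume create VOLUME01 replica 2 transport tcp \
    gfs01-dev:/VOLUME01/GFS01-DEV_1 gfs02-dev:/VOLUME01/GFS02-DEV_1 \
    gfs03-dev:/VOLUME01/GFS03-DEV_1 gfs04-dev:/VOLUME01/GFS04-DEV_1
gluster volume start VOLUME01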

From another server we stressed the Apache server with httperf, calling a
mod_perl script that prints an HTML page with the Apache environment
variables. Under load alone we have no performance problems, but when we
tested the availability of GFS we ran into a hang of the volume mount on the
client: when we reboot one of the GFS nodes, the client hangs while writing
to the GFS volume and df stops responding.

We used VM on VMWARE and all server are running Oracle Enterprise Linux 6.2 
64bit with GlusterFS 3.2.6 (recompiled).

The mount from the apache server

mount -t glusterfs gfs01-dev:/VOLUME01 /opt/VOLUME01/

The files on all GFS nodes

---files on node 1
4   /VOLUME01/GFS01-DEV_1/proxy_logs/logs/error_log
33152   /VOLUME01/GFS01-DEV_1/proxy_logs/logs/access_log
33156   /VOLUME01/GFS01-DEV_1/proxy_logs/logs
33156   /VOLUME01/GFS01-DEV_1/proxy_logs
33160   /VOLUME01/GFS01-DEV_1
33164   /VOLUME01
---files on node 2
4   /VOLUME01/GFS02-DEV_1/proxy_logs/logs/error_log
33216   /VOLUME01/GFS02-DEV_1/proxy_logs/logs/access_log
33220   /VOLUME01/GFS02-DEV_1/proxy_logs/logs
33220   /VOLUME01/GFS02-DEV_1/proxy_logs
33224   /VOLUME01/GFS02-DEV_1
33228   /VOLUME01
---files on node 3
0   /VOLUME01/GFS03-DEV_1/proxy_logs/logs
0   /VOLUME01/GFS03-DEV_1/proxy_logs
0   /VOLUME01/GFS03-DEV_1/prova.4
0   /VOLUME01/GFS03-DEV_1
4   /VOLUME01
---files on node 4
0   /VOLUME01/GFS04-DEV_1/proxy_logs/logs
0   /VOLUME01/GFS04-DEV_1/proxy_logs
0   /VOLUME01/GFS04-DEV_1/prova.4
0   /VOLUME01/GFS04-DEV_1

So, if I reboot node 01, the GFS mount hangs and df doesn't work. If I
reboot node 02, I have performance problems: df works, but slowly.

When the problem occurs, most of the Apache processes are stuck trying to
write their logs, as we can see from the server-status output:
Current Time: Thursday, 03-May-2012 11:52:34 CEST
Restart Time: Thursday, 03-May-2012 11:44:56 CEST
Parent Server Generation: 0
Server uptime: 7 minutes 37 seconds
Total accesses: 178751 - Total Traffic: 134.8 MB
CPU Usage: u33.7 s12.25 cu0 cs0 - 10.1% CPU load
391 requests/sec - 302.0 kB/second - 790 B/request
512 requests currently being processed, 0 idle workers
LLCCCCLCLLCLLCCCCLLL
LLLCLCCLLCLLWCLLLCLCCCCLLLCL
LLLCLLCLCLLCWLLL
LLLCLCLCLCWL

LLRW
LLLWLWLL
LLRLLLCLWLLL
and the reply rate goes down ...

[root@proxycoll02 src]# ./httperf -v --server=10..110 --port=80 
--uri=/perl/stress_test.pl --num-conns=1000 --rate=1000
httperf --verbose --client=0/1 --server=10...110 --port=80 
--uri=/perl/stress_test.pl --rate=1000 --send-buffer=4096 --recv-buffer=16384 
--num-conns=1000 --num-calls=1
httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files
to FD_SETSIZE
httperf: maximum number of open descriptors = 1024
reply-rate = 1001.1
reply-rate = 1000.3
reply-rate = 999.7
reply-rate = 1000.3
reply-rate = 1000.1
reply-rate = 1000.3
reply-rate = 1000.1
reply-rate = 998.1
reply-rate = 328.0
reply-rate = 17.6
reply-rate = 25.6
reply-rate = 25.6
reply-rate = 7.6
reply-rate = 0.0
reply-rate = 0.0
reply-rate = 0.0
reply-rate = 0.0
reply-rate = 0.0
reply-rate = 0.0
reply-rate = 0.0
reply-rate = 171.0
reply-rate = 230.6
reply-rate = 0.2
reply-rate = 0.0
reply-rate = 200.6
reply-rate = 200.6
reply-rate = 151.8
reply-rate = 49.0
reply-rate = 199.8
reply-rate = 0.2
reply-rate = 201.2

Furthermore, during the problem the client is swapping

top - 16:26:07 up 22 min,  1 user,  load average: 619.87, 190.81, 76.03
Tasks: 1069 total,   6 running, 1063 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.1%us, 10.3%sy,  0.0%ni, 83.0%id,  5.5%wa,  0.0%hi,  0.2%si,  0.0%st
Mem:   1016524k total,  1008364k used, 8160k free,  212k buffers
Swap:  2064376k total,  2064376k used,0k free, 3700k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 3468 apache20   0  229m 4248 1244 D  3.0  0.4   0:00.76 httpd
   25 root  20   0 000 D  2.0  0.0   0:09.31 kswapd0
 1806 root  20   0 4440m  36m  340 R  1.8  3.6   3:13.39 glusterfs
 3487 apache20   0  219m 3620 1416 D  1.4  0.4   0:00.20 httpd
 3485 apache20   0  214m 3616 1456 D  1.3  

Re: [Gluster-users] Bricks suggestions

2012-05-03 Thread Gandalf Corvotempesta
2012/5/3 Arnold Krille arn...@arnoldarts.de

 Either wait for 3.3 or do the distributed-replicated-volume way I outlined
 earlier.

Absolutely, I'm waiting for 3.3.
We don't have to start right now; we will start with 3.3.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] How to set backup volume server in mount options

2012-05-03 Thread Toby Corkindale

Hi,
I saw in the 3.3 changelog that now it is possible to set a secondary 
server to retrieve the volume information from, when mounting a volume 
via the native client.


However... I can't find any documentation in the man pages explaining 
how to do this.


Currently I have:

mount -t glusterfs storage01:/testvol /mnt/somewhere

I guess I need to add -o backup-volume=storage02 or something?


-Toby
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] How to set backup volume server in mount options

2012-05-03 Thread Toby Corkindale

On 04/05/12 11:35, Toby Corkindale wrote:

Hi,
I saw in the 3.3 changelog that now it is possible to set a secondary
server to retrieve the volume information from, when mounting a volume
via the native client.

However... I can't find any documentation in the man pages explaining
how to do this.

Currently I have:

mount -t glusterfs storage01:/testvol /mnt/somewhere

I guess I need to add -o backup-volume=storage02 or something?


I dug this option out of the source code, *but it still doesn't work*:

mount -t glusterfs -o backupvolfile-server=storage02 storage01:/testvol 
/mnt/somewhere

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] How to set backup volume server in mount options

2012-05-03 Thread Amar Tumballi

Currently I have:

mount -t glusterfs storage01:/testvol /mnt/somewhere

I guess I need to add -o backup-volume=storage02 or something?


I dug this option out of the source code, *but it still doesn't work*:

mount -t glusterfs -o backupvolfile-server=storage02 storage01:/testvol
/mnt/somewhere



Yes, that's precisely the option you should use.

When you say *it still doesn't work*, it would be helpful if you let us know
what shows up in /var/log/glusterfs/mnt-somewhere.log.
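
Something like the following should capture enough detail (a sketch; drop
the log-level option if your mount.glusterfs helper doesn't accept it):

mount -t glusterfs -o backupvolfile-server=storage02,log-level=DEBUG \
    storage01:/testvol /mnt/somewhere
tail -n 100 /var/log/glusterfs/mnt-somewhere.log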


Regards,
Amar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] How to set backup volume server in mount options

2012-05-03 Thread Kaushal M
Didn't send to the list :(

On Fri, May 4, 2012 at 11:07 AM, Kaushal M kshlms...@gmail.com wrote:

 Hi Toby.

 This is the correct way to do it and it should work.

  However, there was a bug reported earlier where the output of the mount/df
  commands showed the original server even when the backup server was used for
  the gluster mount. That bug no longer occurs on the latest qa release.

 I believe you are testing with 3.3 beta 3. Could you check with the latest
 3.3.0qa release (src tarball here
 http://bits.gluster.com/pub/gluster/glusterfs/src/glusterfs-3.3.0qa39.tar.gz, 
 no debs) and see if the problem still exists?
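
 A rough build recipe for the tarball (dependency and prefix choices below
 are only a guess, adjust for your distro):

 apt-get install build-essential flex bison libssl-dev libreadline-dev
 tar xzf glusterfs-3.3.0qa39.tar.gz
 cd glusterfs-3.3.0qa39
 ./configure --prefix=/usr
 make && make install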

 Thanks,
 Kaushal

 On Fri, May 4, 2012 at 7:15 AM, Toby Corkindale 
 toby.corkind...@strategicdata.com.au wrote:

 On 04/05/12 11:35, Toby Corkindale wrote:

 Hi,
 I saw in the 3.3 changelog that now it is possible to set a secondary
 server to retrieve the volume information from, when mounting a volume
 via the native client.

 However... I can't find any documentation in the man pages explaining
 how to do this.

 Currently I have:

 mount -t glusterfs storage01:/testvol /mnt/somewhere

 I guess I need to add -o backup-volume=storage02 or something?


 I dug this option out of the source code, *but it still doesn't work*:

 mount -t glusterfs -o backupvolfile-server=storage02 storage01:/testvol
 /mnt/somewhere


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users