Re: [Gluster-users] single storage server

2011-05-06 Thread Nick Birkett
On 05/06/2011 03:05 PM, Whit Blauvelt wrote: On Fri, May 06, 2011 at 09:04:20AM +, Max Ivanov wrote: Related question: is the FUSE client smart enough to access storage directly, not via the network, if it is running on a brick? Not sure I follow the question. By "running on a brick" do you mean that

[Gluster-users] single storage server

2011-05-06 Thread Nick Birkett
I have a single storage server which exports /data to a number of clients. Is it OK to access the data on the storage server directly (i.e. not via the glusterfs mount)? (I know this causes problems when there are multiple servers.) This would simplify some configurations. Nick

[Gluster-users] Glusterfs and NX

2011-05-04 Thread Nick Birkett
Does anyone know of a problem using NX on glusterfs? I have just remounted the users' filesystem as glusterfs, and it appears that NX no longer works. NX version 3.4, glusterfs version 3.0.7. Description of the user's problem: "Not sure how whatever you did would affect this, but nomachine f

[Gluster-users] ha translator

2010-07-07 Thread Nick Birkett
Is the ha translator now deprecated in 3.x? Thanks, Nick. This e-mail message may contain confidential and/or privileged information. If you are not an addressee or otherwise authorized to receive this message, you should not use, copy, disclose or take any action based on this e-mail or an

Re: [Gluster-users] gluster stripe

2010-06-29 Thread Nick Birkett
Jeff Darcy wrote: On 06/28/2010 11:23 AM, Nick Birkett wrote: Some questions regarding the working of glusterfs striped over multiple servers (glusterfs 3.0.4). (1) I notice that when writing files I seem to get a file of (approximately) the same size on each file server: e.g. using /tmp

[Gluster-users] gluster stripe

2010-06-28 Thread Nick Birkett
Some questions regarding the working of glusterfs striped over multiple servers (glusterfs 3.0.4). (1) I notice that when writing files I seem to get a file of (approximately) the same size on each file server, e.g. using a /tmp (ext3) filesystem on each of 8 servers: comp00: -rw-r- 1 sccomp u
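The identical sizes on every brick are consistent with how the stripe translator lays data out: each brick's backend file is sparse, with holes where the other bricks' blocks live, so `ls -l` reports the full apparent size on every server while `du` shows the real usage. A minimal local sketch of the same sparse-file effect (no gluster required; `/tmp/stripe_demo` is a hypothetical path):

```shell
# Write one byte at offset 1 MiB - 1; the rest of the file is a hole,
# much like a stripe brick's backend file.
dd if=/dev/zero of=/tmp/stripe_demo bs=1 count=1 seek=$((1024*1024-1)) 2>/dev/null
ls -l /tmp/stripe_demo   # apparent size: 1048576 bytes
du -k /tmp/stripe_demo   # actual disk usage: only a few KB
```

Comparing the two numbers on each brick should tell you whether the "full-size" files are really consuming that much disk.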

[Gluster-users] xen image on glusterfs

2010-04-27 Thread Nick Birkett
I tried installing a xen disk image on glusterfs: version 3.0.4, transport tcp (Gbit). Glusterfs works fine with other applications; Iozone reports 110 Mbytes/s read/write. I copied an existing xen disk image to glusterfs as root. [r...@xenbox xen]# ls /data1/vm/xen/SL54/sl54.img /data1

[Gluster-users] ib-sdp

2010-02-18 Thread Nick Birkett
Is ib-sdp still supported in 3.0.2 (e.g. for use in a Solaris environment)? If so, do we compile for the tcp socket transport, or is there a special configure option? Thanks, Nick

[Gluster-users] Glusterfs 3.0.2 on Opensolaris

2010-02-15 Thread Nick Birkett
Just some experiments with 3.0.2. Following the Gluster doc instructions, the source compiles fine, but to 32-bit binaries/libraries (on a 64-bit x86 server). There are performance issues when the server is 64-bit x86 osol-0906 with a ZFS filesystem and a single gigabit e1000 NIC. 1 server + 1 Linux client works at wire speed (> 100 Mbytes/s).
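If the goal is 64-bit binaries rather than the 32-bit default, the usual autoconf route is to pass 64-bit compiler flags through to configure; a hedged sketch, assuming GCC on OpenSolaris (the flags and install prefix are illustrative, not from the original post):

```shell
# Hypothetical: request 64-bit objects from the standard source build
# CFLAGS="-m64" LDFLAGS="-m64" ./configure --prefix=/opt/glusterfs
# make && make install
# file /opt/glusterfs/sbin/glusterfsd   # check for a 64-bit ELF
```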

[Gluster-users] Fuse 2.8.1

2010-02-12 Thread Nick Birkett
Currently we are using fuse-2.7.4glfs11. Does glusterfs 3+ work with the fuse 2.8.1 libraries? Thanks, Nick

[Gluster-users] hash with non-existent / down servers

2010-01-07 Thread Nick Birkett
Is it possible to use a gluster hash (distribute) volume when some of the servers are missing? E.g. if glusterfs.vol lists 8 servers but only 6 are available. I don't mind files being missing (due to being on offline servers). I do mind the filesystem being inoperable, e.g. a "transport endpoint not connected" error, when some sev
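For reference, a distribute (hash) volume is defined client-side by its subvolume list, so one workaround is a client volfile that simply omits the unreachable servers. A hedged sketch of such a fragment (all brick names hypothetical; files that hash to the two offline servers will appear absent, and bringing them back later changes the layout):

```
volume dht0
  type cluster/distribute
  # only the six reachable bricks are listed; files that hashed to the
  # two offline servers will simply not be visible through this mount
  subvolumes srv1 srv2 srv3 srv4 srv5 srv6
end-volume
```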

[Gluster-users] gluster 3.0 read hangs

2009-12-23 Thread Nick Birkett
I ran some benchmarks last week using 2.0.8. Single server with 8 Intel e1000e NICs bonded with mode=balance-alb. All worked fine and I got some good results using 8 clients, all Gigabit. The benchmarks did 2 passes of IOZONE in network mode using 1-8 threads per client and 1-8 clients. Each c
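For anyone reproducing this kind of run, IOZONE's distributed (network) mode takes a client list via `-+m`, one line per client process. A hedged sketch with hypothetical host names and mount point (the actual iozone invocation is commented out since it needs iozone installed and passwordless rsh/ssh to each client):

```shell
# Each line: client hostname, test directory on the gluster mount, path to iozone
cat > /tmp/clients.ioz <<'EOF'
comp00 /mnt/gluster /usr/bin/iozone
comp01 /mnt/gluster /usr/bin/iozone
EOF
# Throughput run: 2 processes (-t must match the line count), 1 GB files,
# 128 KB records, write (-i 0) and read (-i 1) passes:
# iozone -+m /tmp/clients.ioz -t 2 -s 1g -r 128k -i 0 -i 1
```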