Re: [Gluster-users] Gluster FS replication

2012-10-22 Thread Haris Zukanovic
Thank you for your answer... Does using the NFS client ensure replication to all bricks? My problem is that I see Gluster has unfinished replication tasks lying around. It seems Gluster needs an external trigger, like an ls -l on the file in question, to re-trigger and complete the
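
For what it's worth, on 3.3 the heal can also be kicked off from the CLI instead of by statting files by hand; a minimal sketch, assuming a replicated volume named VOL and a FUSE mount at /mnt/VOL (placeholder names):

  # heal the files the self-heal daemon already knows are pending
  gluster volume heal VOL
  # or crawl everything and re-check (heavier)
  gluster volume heal VOL full
  # pre-3.3 style trigger: stat every file through a client mount
  find /mnt/VOL -noleaf -print0 | xargs --null stat >/dev/null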

Re: [Gluster-users] rdma transport on 3.3?

2012-10-22 Thread Ivan Dimitrov
I have the same experience. All communication goes through Ethernet. I think the documentation should be changed to NOT SUPPORTED AT ALL!, because with my broken English I gathered that there was no commercial support for rdma, but the code is there. On 10/19/12 9:00 PM, Bartek Krawczyk wrote:

[Gluster-users] How to add new bricks to a volume?

2012-10-22 Thread Tao Lin
Hi, dear glfs experts: I've been using glusterfs (version 3.2.6) for months, and so far it works very well. Now I'm facing the problem of adding two new bricks to an existing replicated (rep=2) volume, which consists of only two bricks and is mounted by multiple clients. Can I just use the following
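
A minimal sketch of the usual expansion sequence for a replica-2 volume, assuming the volume is named VOL and the new bricks are server3:/data/brick and server4:/data/brick (placeholder names):

  # bricks must be added in multiples of the replica count
  gluster volume add-brick VOL server3:/data/brick server4:/data/brick
  # then spread existing data onto the new bricks
  gluster volume rebalance VOL start
  gluster volume rebalance VOL status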

[Gluster-users] Extremely high load after 100% full bricks

2012-10-22 Thread Dan Bretherton
Dear All- A replicated pair of servers in my GlusterFS 3.3.0 cluster has been experiencing extremely high load for the past few days after a replicated brick pair became 100% full. The GlusterFS-related load on one of the servers was fluctuating at around 60, and this high load would swap

Re: [Gluster-users] upgrade from 3.3.0 to 3.3.1

2012-10-22 Thread s19n
* Patrick Irvine p...@cybersites.ca [2012 10 16, 22:03]: Can I do a simple rolling upgrade from 3.3.0 to 3.3.1? Or are there some gotchas? I haven't seen any answer to this enquiry, though I think it is very simple and important to know the answer. Can 3.3.0/3.3.1 bricks coexist? What about
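
For reference, the usual one-server-at-a-time pattern on an RPM-based install looks roughly like the sketch below; treat it as an illustration, not a confirmed 3.3.0-to-3.3.1 procedure:

  # on each server in turn, once its replica partner is healthy
  service glusterd stop
  yum update glusterfs glusterfs-server glusterfs-fuse
  service glusterd start
  # wait for pending heals to clear before moving to the next server
  gluster volume heal VOL info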

Re: [Gluster-users] How to add new bricks to a volume?

2012-10-22 Thread Bartek Krawczyk
On 22 October 2012 14:57, Tao Lin linba...@gmail.com wrote: Hi, dear glfs experts: I've been using glusterfs (version 3.2.6) for months, and so far it works very well. Now I'm facing the problem of adding two new bricks to an existing replicated (rep=2) volume, which consists of only two bricks

[Gluster-users] 3.3.1 breaks NFS CARP setup

2012-10-22 Thread Dan Bretherton
Dear All- I upgraded from 3.3.0 to 3.3.1 from the epel-glusterfs repository a few days ago, but I discovered that NFS in the new version does not work with virtual IP addresses managed by CARP. NFS crashed as soon as an NFS client made an attempt to mount a volume using a virtual IP address,

Re: [Gluster-users] 3.3.1 breaks NFS CARP setup

2012-10-22 Thread John Mark Walker
Dan - please file a bug re: the NFS issue. Also, older builds were sacrificed when the older download server was lost. I'll contact our main packager about doing 3.0 builds. The builds on bits.gluster.org are strictly for testing purposes only - they don't upgrade cleanly to other builds. I

Re: [Gluster-users] 3.3.1 breaks NFS CARP setup

2012-10-22 Thread Whit Blauvelt
On Mon, Oct 22, 2012 at 09:49:21AM -0400, John Mark Walker wrote: Dan - please file a bug re: the NFS issue. Glad to hear this will be treated as a bug. If NFS is to be supported at all, being able to use a virtual-IP setup (whether mediated by CARP or otherwise) is essential. And considering

Re: [Gluster-users] 3.3.1 breaks NFS CARP setup

2012-10-22 Thread John Mark Walker
Dan - if you need to downgrade, see Kaleb's repo here: http://repos.fedorapeople.org/repos/kkeithle/glusterfs/old/3.3.0-11/ -JM On Mon, Oct 22, 2012 at 9:42 AM, Dan Bretherton d.a.brether...@reading.ac.uk wrote: Dear All- I upgraded from 3.3.0 to 3.3.1 from the epel-glusterfs repository a

Re: [Gluster-users] 3.3.1 breaks NFS CARP setup

2012-10-22 Thread Jeff Darcy
On 10/22/2012 09:42 AM, Dan Bretherton wrote: Dear All- I upgraded from 3.3.0 to 3.3.1 from the epel-glusterfs repository a few days ago, but I discovered that NFS in the new version does not work with virtual IP addresses managed by CARP. NFS crashed as soon as an NFS client made an

Re: [Gluster-users] 3.3.1 breaks NFS CARP setup

2012-10-22 Thread Kaleb S. KEITHLEY
On 10/22/2012 09:42 AM, Dan Bretherton wrote: Incidentally, when I decided to downgrade 3.3.0 I discovered that those RPMs aren't available for download from http://download.glusterfs.org or http://repos.fedorapeople.org/repos/kkeithle/glusterfs (epel-glusterfs) any more. The old 3.3.0 RPMs

Re: [Gluster-users] Gluster FS replication

2012-10-22 Thread Joe Julian
On 10/21/2012 02:18 PM, Israel Shirk wrote: Haris, try the NFS mount. Gluster typically triggers healing through the client, so if you skip the client, nothing heals. Not true anymore. With 3.3 there's a self-heal daemon that will handle the heals. You do risk reading stale data if you don't
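
A quick way to watch what the 3.3 self-heal daemon is doing, assuming a volume named VOL (placeholder):

  # entries still waiting to be healed
  gluster volume heal VOL info
  # entries healed recently
  gluster volume heal VOL info healed
  # entries it could not heal (e.g. split-brain)
  gluster volume heal VOL info split-brain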

Re: [Gluster-users] Gluster FS replication

2012-10-22 Thread John Mark Walker
- Original Message - False. The client will read from the first-to-respond. Yes, if Singapore is responding faster than Virginia you might want to figure out why Virginia is so overloaded that it's taking more than 200ms to respond, but really that shouldn't be the case. I

Re: [Gluster-users] Normal replication and geo-replication, different animals

2012-10-22 Thread whit . gluster
On Mon, Oct 22, 2012 at 08:55:18AM -0600, Israel Shirk wrote: I'm simply saying that I keep hearing that Gluster is supposed to work great on distributed applications (as in distributed to more than one place), but the reality of the situation is that it's really buggy and nobody is willing

Re: [Gluster-users] Gluster-users Digest, Vol 54, Issue 27

2012-10-22 Thread Joe Julian
I'm sorry you took that email as an "I'm always right" sort of thing. You were telling a user that things were a certain way, and for the majority of users that I encounter on a daily basis, it's just not true. I wanted to set the record straight so that user can make a properly informed

Re: [Gluster-users] Gluster-users Digest, Vol 54, Issue 27

2012-10-22 Thread John Mark Walker
Israel - thank you for taking the time to write out more thoughtful responses. See below for my responses inline. - Original Message - On 10/21/2012 02:18 PM, Israel Shirk wrote: Haris, try the NFS mount. Gluster typically triggers healing through the client, so if you skip the

Re: [Gluster-users] 'replace-brick' - why we plan to deprecate

2012-10-22 Thread 任英杰
Hi Amar, I ran into a problem when replacing a brick in a stripe-replicate volume. I used both methods you mentioned in your post: # gluster volume replace-brick VOL brick1 brick2 start [1] # gluster volume replace-brick VOL brick1 brick2 commit force (self-heal daemon heals the data) the
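
For context, the two sequences being compared look roughly like this (VOL, brick1 and brick2 are the placeholders from the post):

  # method 1: migrate the data via replace-brick itself
  gluster volume replace-brick VOL brick1 brick2 start
  gluster volume replace-brick VOL brick1 brick2 status
  gluster volume replace-brick VOL brick1 brick2 commit
  # method 2: swap the brick immediately and let the self-heal daemon copy the data
  gluster volume replace-brick VOL brick1 brick2 commit force
  gluster volume heal VOL full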

[Gluster-users] slow write to non-hosted replica in distributed-replicated volume

2012-10-22 Thread Rowley, Shane K
I have four servers, absolutely identical, connected to the same switches. One interface is on a 100Mb switch, the other is on a 1Gb switch. I access the nodes via the 100Mb port, gluster is configured on the 1Gb port. The nodes are all loaded with Scientific Linux 6.3, Virtualization Host,

[Gluster-users] GlusterFS failover with UCarp

2012-10-22 Thread Runar Ingebrigtsen
Hi, we've successfully configured GlusterFS mirroring across two identical nodes [1]. We're running the file share under a Virtual IP address using UCarp. We have different clients connected using NFS, CIFS and GlusterFS. When we simulate a node failure, by unplugging it, it takes about 5
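
For reference, a minimal ucarp invocation for this kind of floating IP looks something like the following; the interface, addresses and password are placeholders:

  # run on both nodes; the node with the lower advertisement skew becomes master
  ucarp --interface=eth0 --srcip=192.168.1.11 --vhid=1 --pass=secret \
        --addr=192.168.1.100 \
        --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh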

[Gluster-users] deleting failed once add-brick is starting

2012-10-22 Thread 田媛媛
Hi, all: while I was deleting a file (about 1GB) or some folders on the volume, I tried to add one brick to the volume. The delete then halted with the error msg: rm: can't delete XXX, transport endpoint is not connected. I'm not sure: is it a bug, or a yet-to-be-added

Re: [Gluster-users] Gluster download link redirect to redhat

2012-10-22 Thread John Mark Walker
Thanks for reporting. That particular item is no longer relevant and will be deprecated. Thanks, JM On Sat, Oct 20, 2012 at 2:03 AM, kunal khaneja kunal8...@gmail.com wrote: Dear Team, Please note that many download links of gluster.org redirect to redhat.com. Please refer

Re: [Gluster-users] GlusterFS failover with UCarp

2012-10-22 Thread Brian Candler
On Thu, Oct 18, 2012 at 06:48:42PM +0200, Runar Ingebrigtsen wrote: The connection break behavior is to be expected - the TCP connection doesn't handle the switch of host. I didn't expect the NFS client to go stale. I can't answer this directly but I did notice something in the 3.3.1

Re: [Gluster-users] Gluster FS replication

2012-10-22 Thread Jeff Darcy
On 10/22/2012 10:37 AM, Joe Julian wrote: The client will read from the first-to-respond. That's true, but it's going to change in a couple of ways. * Commit 0baa12e8 (March 23) will cause replication to read from a brick on the same machine if there is one. * Commit 97819bf2 (June 24)
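
Until then, the read source can be nudged per volume with the existing read-subvolume option; a sketch, assuming a volume VOL whose AFR children follow the usual VOL-client-N naming (check your volfile before relying on this):

  # pin reads to a particular replica instead of first-to-respond
  gluster volume set VOL cluster.read-subvolume VOL-client-0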

Re: [Gluster-users] GlusterFS failover with UCarp

2012-10-22 Thread Runar Ingebrigtsen
On Mon, 22 Oct 2012 at 19:11 +0200, Brian Candler wrote: On Thu, Oct 18, 2012 at 06:48:42PM +0200, Runar Ingebrigtsen wrote: The connection break behavior is to be expected - the TCP connection doesn't handle the switch of host. I didn't expect the NFS client to go stale. I

Re: [Gluster-users] slow write to non-hosted replica in distributed-replicated volume

2012-10-22 Thread Bryan Whitehead
gluster volume create vol1 replica 2 transport tcp server1:/brick1 server2:/brick2 server3:/brick3 server4:/brick4. Here server1:/brick1 and server2:/brick2 are the first replica pair, and server3:/brick3 and server4:/brick4 are the second replica pair. server1.. file1 goes into brick1/brick2 - fast
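
One way to confirm which replica pair a given file actually landed on is the pathinfo virtual xattr, queried through a FUSE mount; a sketch, assuming the volume is mounted at /mnt/vol1 (placeholder path):

  # prints the backend brick paths, i.e. the replica pair holding the file
  getfattr -n trusted.glusterfs.pathinfo /mnt/vol1/file1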

Re: [Gluster-users] Throughput over InfiniBand

2012-10-22 Thread Gluster Mailing List
Corey, Make sure to test with direct I/O, otherwise the caching can give you unrealistic expectations of your actual throughput. Typically, using the IPoIB driver is not recommended with InfiniBand since you will introduce unnecessary overhead via TCP. Knowing how you have Gluster configured
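
A quick way to take client-side caching out of the benchmark, assuming a Gluster mount at /mnt/gluster (placeholder path):

  # write test, bypassing the page cache
  dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 oflag=direct
  # read test
  dd if=/mnt/gluster/ddtest of=/dev/null bs=1M iflag=direct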

[Gluster-users] NASA uses gluster..

2012-10-22 Thread Paul Simpson
thought this might be of interest to you all out there: http://opensource.com/life/12/10/NASA-achieves-data-goals-Mars-rover-open-source-software