Re: [Gluster-users] Renaming Peers

2017-01-05 Thread Atin Mukherjee
On Fri, Jan 6, 2017 at 4:14 AM, Michael Watters wrote: > I've set up a small gluster cluster running three nodes and I would like > to rename one of the hosts. What is the proper procedure for changing > the host name on a node? Do I simply stop the gluster service, detach

Re: [Gluster-users] Cheers and some thoughts

2017-01-05 Thread Lindsay Mathieson
On 6 January 2017 at 08:42, Michael Watters wrote: > Have you done comparisons against Lustre? From what I've seen, Lustre > performance is 2x faster than a replicated gluster volume. No Lustre packages for Debian, and I really dislike installing from source for production

Re: [Gluster-users] GFID Mismatch - Automatic Correction ?

2017-01-05 Thread Ravishankar N
On 01/06/2017 07:22 AM, Michael Ward wrote: Hi, Sorry about the delayed response, I initially missed this message. The GFID on gluster 02 and gluster 03 were the same, it was only different on gluster 01. I don’t have a test environment at this stage, so I haven’t been able to try to

Re: [Gluster-users] GFID Mismatch - Automatic Correction ?

2017-01-05 Thread Michael Ward
Hi, Sorry about the delayed response, I initially missed this message. The GFID on gluster 02 and gluster 03 were the same, it was only different on gluster 01. I don’t have a test environment at this stage, so I haven’t been able to try to reproduce the problem. Regards, Michael Ward.
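For reference, a GFID mismatch like this can be confirmed directly on each brick via the trusted.gfid extended attribute (the brick path below is hypothetical):

    # run on each node, against that brick's copy of the affected file
    getfattr -n trusted.gfid -e hex /bricks/brick1/path/to/file

Matching hex values mean the copies agree; the odd one out (gluster 01 here) is the mismatched brick.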

Re: [Gluster-users] Cheers and some thoughts

2017-01-05 Thread Gandalf Corvotempesta
On Jan 5, 2017 6:33 PM, "Joe Julian" wrote: That's still not without its drawbacks, though I'm sure my instance is pretty rare. Ceph's automatic migration of data caused a cascading failure and a complete loss of 580 TB of data due to a hardware bug. If it had been on

Re: [Gluster-users] Cheers and some thoughts

2017-01-05 Thread Joe Julian
On 01/05/17 11:32, Gandalf Corvotempesta wrote: On Jan 5, 2017 2:00 PM, "Jeff Darcy" wrote: There used to be an idea called "data classification" to cover this kind of case. You're right that setting arbitrary goals for arbitrary

Re: [Gluster-users] Cheers and some thoughts

2017-01-05 Thread Jeff Darcy
> Both Ceph and Lizard manage this automatically. > If you want, you can add a single disk to a working cluster and automatically > the whole cluster is rebalanced transparently with no user intervention This relates to the granularity problem I mentioned earlier. As long as we're not splitting

[Gluster-users] Renaming Peers

2017-01-05 Thread Michael Watters
I've set up a small gluster cluster running three nodes and I would like to rename one of the hosts. What is the proper procedure for changing the host name on a node? Do I simply stop the gluster service, detach the peer, and then run sed on the files under /var/lib/glusterd to use the new name?
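A minimal sketch of the detach/re-probe route, assuming the node holds no bricks (peer detach refuses to run otherwise); the hostnames are hypothetical:

    gluster peer detach old-node.example.com
    # rename the host and update DNS or /etc/hosts on every peer, then:
    gluster peer probe new-node.example.com

If the node does host bricks, the sed approach amounts to rewriting the old name in the files under /var/lib/glusterd on every peer while glusterd is stopped, which is riskier.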

Re: [Gluster-users] Cheers and some thoughts

2017-01-05 Thread Michael Watters
Have you done comparisons against Lustre? From what I've seen, Lustre performance is 2x faster than a replicated gluster volume. On 1/4/17 5:43 PM, Lindsay Mathieson wrote: > Hi all, just wanted to mention that since I had sole use of our > cluster over the holidays and a complete set of

Re: [Gluster-users] gluster native client failover testing

2017-01-05 Thread Kevin Lemonnier
> Can I add a quorum only node to V3.7.18? I guess you can add a peer without putting a brick on it, not sure how safe that is though. If the issue is space or performance, just use an arbiter node, it won't use much disk and it'll just keep metadata I believe. That way the volume will always
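A hedged sketch of the arbiter layout (volume name, hosts, and brick paths are hypothetical; arbiter support landed in the 3.7 series, but check the release notes for your exact version):

    gluster volume create testvol replica 3 arbiter 1 \
        node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/arbiter

The third brick stores only file names and metadata, so it stays small while still providing the tie-breaking vote for quorum.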

Re: [Gluster-users] gluster native client failover testing

2017-01-05 Thread Colin Coe
Ahh, that makes sense. Can I add a quorum only node to V3.7.18? Thanks CC On 5 Jan. 2017 4:02 pm, "Kevin Lemonnier" wrote: > > I've configured two test gluster servers (RHEL7) running glusterfs > 3.7.18. > > [...] > > Any ideas what I'm doing wrong? > > I'd say you need

Re: [Gluster-users] Cheers and some thoughts

2017-01-05 Thread Gandalf Corvotempesta
On Jan 5, 2017 2:00 PM, "Jeff Darcy" wrote: There used to be an idea called "data classification" to cover this kind of case. You're right that setting arbitrary goals for arbitrary objects would be too difficult. However, we could have multiple pools with different

Re: [Gluster-users] Cheers and some thoughts

2017-01-05 Thread Jeff Darcy
> Gluster (3.8.7) coped perfectly - no data loss, no maintenance required, > each time it came up by itself with no hand-holding and started healing > nodes, which completed very quickly. VMs on gluster auto-started with > no problems, I/O load while healing was OK. I felt quite confident in it.

Re: [Gluster-users] Performance testing striped 4 volume

2017-01-05 Thread Cedric Lemarchand
It could be some extended attributes that still exist on folders brick{1..4}; you could either remove them with attr or simply remove/recreate the directories. Cheers, > On 5 Jan 2017, at 01:23, Zack Boll wrote: > > In performance testing a striped 4 volume, I appeared to have
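For reference, the usual cleanup when reusing a brick directory looks like this (the brick path is hypothetical; double-check it before deleting anything):

    setfattr -x trusted.glusterfs.volume-id /bricks/brick1
    setfattr -x trusted.gfid /bricks/brick1
    rm -rf /bricks/brick1/.glusterfs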

Re: [Gluster-users] gluster native client failover testing

2017-01-05 Thread Kevin Lemonnier
> I've configured two test gluster servers (RHEL7) running glusterfs 3.7.18. > [...] > Any ideas what I'm doing wrong? I'd say you need 3 servers. GlusterFS goes RO without a quorum, and one server isn't a quorum. That's to avoid split brains. -- Kevin Lemonnier PGP Fingerprint : 89A5 2283
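The relevant quorum options, for reference (the volume name is hypothetical; both options exist in the 3.7 series):

    gluster volume set testvol cluster.quorum-type auto          # client-side quorum
    gluster volume set testvol cluster.server-quorum-type server # glusterd-level quorum

With auto, clients allow writes only while a majority of the replica bricks are reachable, which is why a two-node replica can drop to read-only when one node goes down.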