On Fri, Jan 6, 2017 at 4:14 AM, Michael Watters wrote:
> I've set up a small gluster cluster running three nodes and I would like
> to rename one of the hosts. What is the proper procedure for changing
> the host name on a node? Do I simply stop the gluster service, detach
On 6 January 2017 at 08:42, Michael Watters wrote:
> Have you done comparisons against Lustre? From what I've seen Lustre
> performance is 2x faster than a replicated gluster volume.
There are no Lustre packages for Debian, and I really dislike installing from
source for production.
On 01/06/2017 07:22 AM, Michael Ward wrote:
Hi,
Sorry about the delayed response, I initially missed this message.
The GFID on gluster 02 and gluster 03 was the same; it was only different on
gluster 01.
I don’t have a test environment at this stage, so I haven’t been able to try to
reproduce the problem.
Regards,
Michael Ward.
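A GFID mismatch like this can be confirmed directly on the bricks. A minimal
sketch, assuming hypothetical brick paths and file names (run it on each of the
three nodes and compare the values):

    # The trusted.gfid value should be identical for the same file on every brick.
    getfattr -n trusted.gfid -e hex /data/brick1/path/to/file

    # A full dump of the trusted.* attributes is also useful when comparing nodes.
    getfattr -d -m trusted -e hex /data/brick1/path/to/file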
On 05 Jan 2017 6:33 PM, "Joe Julian" wrote:
That's still not without its drawbacks, though I'm sure my instance is
pretty rare. Ceph's automatic migration of data caused a cascading failure
and a complete loss of 580TB of data due to a hardware bug. If it had been
on
On 01/05/17 11:32, Gandalf Corvotempesta wrote:
On 05 Jan 2017 2:00 PM, "Jeff Darcy" wrote:
There used to be an idea called "data classification" to cover this
kind of case. You're right that setting arbitrary goals for arbitrary
> Both ceph and lizard manage this automatically.
> If you want, you can add a single disk to a working cluster and the whole
> cluster is rebalanced automatically and transparently, with no user intervention.
This relates to the granularity problem I mentioned earlier. As long as
we're not splitting
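For contrast, on the Gluster side growing a volume is still a manual two-step
operation. A rough sketch, with the volume name, host, and brick path made up
(for a plain distributed volume; replicated volumes need a full replica set):

    # Add the new brick, then explicitly kick off a rebalance.
    gluster volume add-brick myvol newhost:/data/brick1
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status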
I've set up a small gluster cluster running three nodes and I would like
to rename one of the hosts. What is the proper procedure for changing
the host name on a node? Do I simply stop the gluster service, detach
the peer and then run sed on the files under /var/lib/gluster to use the
new name?
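To make the question concrete, the approach being proposed would look roughly
like the sketch below. This is only an illustration of what the question
describes, with placeholder old/new names, not a confirmed or recommended
procedure:

    # On the node being renamed: stop the management daemon first.
    systemctl stop glusterd

    # From another node: detach the old peer name.
    gluster peer detach oldname

    # Rewrite the stored hostname in the glusterd state directory.
    # Note: some files under vols/ may also embed the hostname in their file names.
    grep -rl oldname /var/lib/glusterd | xargs sed -i 's/oldname/newname/g'
    systemctl start glusterd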
Have you done comparisons against Lustre? From what I've seen Lustre
performance is 2x faster than a replicated gluster volume.
On 1/4/17 5:43 PM, Lindsay Mathieson wrote:
> Hi all, just wanted to mention that since I had sole use of our
> cluster over the holidays and a complete set of
> Can I add a quorum only node to V3.7.18?
I guess you can add a peer without putting a brick on it,
not sure how safe that is though.
If the issue is space or performance, just use an arbiter node;
it won't use much disk and it'll just keep metadata, I believe.
That way the volume will always
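For what it's worth, the two options mentioned above would look roughly like
this; the volume, host, and brick names are made up:

    # Option 1: probe a third node as a peer only, without putting a brick on it.
    gluster peer probe quorum-node

    # Option 2: a replica volume with an arbiter brick that holds only metadata.
    gluster volume create myvol replica 3 arbiter 1 \
        node1:/data/brick node2:/data/brick node3:/data/arbiter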
Ahh, that makes sense.
Can I add a quorum only node to V3.7.18?
Thanks
CC
On 5 Jan. 2017 4:02 pm, "Kevin Lemonnier" wrote:
> > I've configured two test gluster servers (RHEL7) running glusterfs 3.7.18.
> > [...]
> > Any ideas what I'm doing wrong?
>
> I'd say you need
On 05 Jan 2017 2:00 PM, "Jeff Darcy" wrote:
There used to be an idea called "data classification" to cover this
kind of case. You're right that setting arbitrary goals for arbitrary
objects would be too difficult. However, we could have multiple pools
with different
> Gluster (3.8.7) coped perfectly - no data loss, no maintenance required,
> each time it came up by itself with no hand-holding and started healing
> nodes, which completed very quickly. VMs on gluster auto-started with
> no problems, I/O load while healing was OK. I felt quite confident in it.
It could be some extended attributes that still exist on the brick{1..4} folders;
you could either remove them with attr or simply remove and recreate them.
Cheers,
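A sketch of that cleanup, using getfattr/setfattr rather than attr, with an
example brick path; the exact attributes present on the directories may differ:

    # List any leftover gluster extended attributes on the old brick directory.
    getfattr -d -m . -e hex /data/brick1

    # Remove the volume markers (and the .glusterfs directory) so the path
    # can be reused as a brick.
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    rm -rf /data/brick1/.glusterfs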
> On 5 Jan 2017, at 01:23, Zack Boll wrote:
>
> In performance testing a striped 4 volume, I appeared to have
> I've configured two test gluster servers (RHEL7) running glusterfs 3.7.18.
> [...]
> Any ideas what I'm doing wrong?
I'd say you need 3 servers. GlusterFS goes RO without a quorum, and one server
isn't a quorum. That's to avoid split brains.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283
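For completeness, the quorum behaviour described above is controlled by volume
options along these lines; the volume name is a placeholder:

    # Server-side quorum: glusterd stops the bricks when fewer than half the
    # peers in the cluster are reachable.
    gluster volume set myvol cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%

    # Client-side quorum on replicated volumes: writes are refused (the volume
    # effectively goes read-only) without a majority of replicas.
    gluster volume set myvol cluster.quorum-type auto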