On 9/06/2017 5:54 PM, Lindsay Mathieson wrote:
> I've started the process as above, seems to be going ok - cluster is
> going to be unusable for the next couple of days.
Just as an update - I was mistaken in this; the cluster was actually quite
usable while this was going on, except for on the new server.
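While the heal onto the replacement brick is running, progress can be watched
with the standard heal commands (not something quoted in the thread, just the
usual gluster CLI; the volume name is taken from the commands further down):

gluster volume heal datastore4 info
gluster volume heal datastore4 statistics heal-count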
On Sat, Jun 10, 2017 at 2:53 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 9/06/2017 9:56 PM, Pranith Kumar Karampuri wrote:
>
>> gluster volume remove-brick datastore4 replica 2
>> vna.proxmox.softlog:/tank/vmdata/datastore4 force
>>
>> gluster volume add-brick datastore4 replica 3
>> vnd.proxmox.softlog:/tank/vmdata/datastore4
On 9/06/2017 9:56 PM, Pranith Kumar Karampuri wrote:
> gluster volume remove-brick datastore4 replica 2
> vna.proxmox.softlog:/tank/vmdata/datastore4 force
>
> gluster volume add-brick datastore4 replica 3
> vnd.proxmox.softlog:/tank/vmdata/datastore4
I think that should work perfectly fine, yes.
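Before the remove-brick it is also worth a quick sanity check of the current
brick layout and peer state; something along these lines (standard commands,
not part of the original exchange):

gluster peer status
gluster volume info datastore4
gluster volume status datastore4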
On Fri, Jun 9, 2017 at 12:41 PM, wrote:
> > I'm thinking the following:
> >
> > gluster volume remove-brick datastore4 replica 2
> > vna.proxmox.softlog:/tank/vmdata/datastore4 force
> >
> > gluster volume add-brick datastore4 replica 3
> > vnd.proxmox.softlog:/tank/vmdata/datastore4
>
> I think that should work perfectly fine, yes.
> Must admit this sort of process - replacing bricks and/or nodes is *very*
> stressful with gluster. That sick feeling in the stomach - will I have to
> restore everything from backups?
>
> Shouldn't be this way.
I know exactly what you mean.
Last weekend I replaced a server (it was working fine
On 9 June 2017 at 17:12, wrote:
> Heh, on that, did you think to take a look at the Media_Wearout indicator?
> I recently learned that existed, and it explained A LOT.
>
Yah, that has been useful in the past for journal/cache SSDs that get a
lot of writes. However all the stats on this boot SSD showed no issues.
> And a big thanks (*not*) to the smart reporting which showed no issues at
> all.
Heh, on that, did you think to take a look at the Media_Wearout indicator?
I recently learned that existed, and it explained A LOT.
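For what it's worth, on Intel-style SSDs that attribute is reported by
smartctl as Media_Wearout_Indicator, so a quick check looks roughly like this
(the device path /dev/sda is only an example; substitute the actual boot SSD):

smartctl -A /dev/sda | grep -i wear    # /dev/sda assumed; attribute name varies by vendor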
> I'm thinking the following:
>
> gluster volume remove-brick datastore4 replica 2
> vna.proxmox.softlog:/tank/vmdata/datastore4 force
>
> gluster volume add-brick datastore4 replica 3
> vnd.proxmox.softlog:/tank/vmdata/datastore4
I think that should work perfectly fine, yes; either that
or direc
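Spelled out end to end (same volume and brick names as in the quoted commands;
the heal steps at the end are the usual follow-up rather than anything quoted
here), the replacement sequence would look roughly like:

gluster volume remove-brick datastore4 replica 2 \
    vna.proxmox.softlog:/tank/vmdata/datastore4 force

gluster volume add-brick datastore4 replica 3 \
    vnd.proxmox.softlog:/tank/vmdata/datastore4

gluster volume heal datastore4 full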
On 9 June 2017 at 10:51, Lindsay Mathieson wrote:
> Or I should say we *had* a 3 node cluster, one node died today.
Boot SSD failed, definitely a reinstall from scratch.
And a big thanks (*not*) to the smart reporting which showed no issues at
all.
--
Lindsay
Status: We have a 3 node gluster cluster (proxmox based)
- gluster 3.8.12
- Replica 3
- VM Hosting Only
- Sharded Storage
Or I should say we *had* a 3 node cluster - one node died today. Possibly I
can recover it, in which case no issues, we just let it heal itself. For
now it's running happily on 2 nodes.
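With replica 3 and client quorum at its usual auto setting for three-way
replicas, two bricks out of three are enough to keep the volume writable,
which is why it can keep running on the surviving pair. The setting can be
confirmed with (standard CLI; volume name as above):

gluster volume get datastore4 cluster.quorum-type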