On 9 June 2017 at 10:51, Lindsay Mathieson
wrote:
> Or I should say we *had* a 3 node cluster, one node died today.
Boot SSD failed, definitely a reinstall from scratch.
And a big thanks (*not*) to the SMART reporting, which showed no issues at
all.
--
Lindsay
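SMART's overall health verdict can still say PASSED right up until a drive dies; the detailed attributes are sometimes more telling. A hedged sketch using smartmontools (the device path /dev/sda is an assumption):

```shell
# Overall health verdict only -- this is the summary that often still says PASSED
smartctl -H /dev/sda

# Full attribute dump; on SSDs, watch Reallocated_Sector_Ct,
# Wear_Leveling_Count and media wearout attributes rather than the summary
smartctl -a /dev/sda

# Kick off a short self-test, then read the log a few minutes later
smartctl -t short /dev/sda
smartctl -l selftest /dev/sda
```

None of this is a guarantee; as the message above shows, drives can fail with clean SMART output.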
Status: We have a 3 node gluster cluster (proxmox based)
- gluster 3.8.12
- Replica 3
- VM Hosting Only
- Sharded Storage
Or I should say we *had* a 3 node cluster, one node died today. Possibly I
can recover it, in which case no issues, we just let it heal itself. For
now it's running happily on
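If the node can be recovered, the "let it heal itself" step can be watched from any surviving node. A hedged sketch of the relevant gluster CLI calls (the volume name gv0 is hypothetical):

```shell
# Files still pending heal, listed per brick (volume name gv0 is an assumption)
gluster volume heal gv0 info

# Pending-heal counts only, without the full file list
gluster volume heal gv0 statistics heal-count

# Trigger a full heal instead of waiting for the periodic crawl
gluster volume heal gv0 full
```

With replica 3 and sharded storage, heal traffic is per-shard rather than per-VM-image, which keeps individual heal operations small.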
+Raghavendra/Nithya
On Tue, Jun 6, 2017 at 7:41 PM, Jarsulic, Michael [CRI] <
mjarsu...@bsd.uchicago.edu> wrote:
> Hello,
>
> I am still working at recovering from a few failed OS hard drives on my
> gluster storage and have been removing and re-adding bricks quite a bit. I
> noticed yesterday
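After reinstalling the OS on a failed node, a dead brick is usually swapped back in with replace-brick rather than a remove/add cycle. A hedged sketch (volume, host, and brick paths are all hypothetical):

```shell
# Replace the failed brick with a freshly provisioned one in a single step;
# the self-heal daemon then repopulates it from the healthy replicas
gluster volume replace-brick gv0 \
    node2:/data/brick-old node2:/data/brick-new \
    commit force

# The remove/add cycle described above, as an alternative
# (the new replica count must be stated explicitly)
gluster volume remove-brick gv0 replica 2 node2:/data/brick-old force
gluster volume add-brick gv0 replica 3 node2:/data/brick-new
```

replace-brick is generally preferred because the volume never drops below the intended replica count during the swap.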
This mail was not in the same thread as the earlier one because the subject
had an extra "?==?utf-8?q? " in it, so I thought it had not been answered
and replied again. Sorry about that.
On Sat, Jun 3, 2017 at 1:45 AM, Xavier Hernandez
wrote:
> Hi Serkan,
>
> On Thursday, June 01, 2017
On Thu, Jun 8, 2017 at 12:49 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On Fri, Jun 2, 2017 at 1:01 AM, Serkan Çoban
> wrote:
>
>> >Is it possible that this matches your observations ?
>> Yes, that matches what I see. So 19 files are being healed in parallel by
On Fri, Jun 2, 2017 at 1:01 AM, Serkan Çoban wrote:
> >Is it possible that this matches your observations ?
> Yes, that matches what I see. So 19 files are being healed in parallel by
> 19 SHD processes. I thought only one file is being healed at a time.
> Then what is the meaning
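The parallelism being discussed is tunable: the self-heal daemon's concurrency is a volume option. A hedged sketch (the volume name gv0 is hypothetical; cluster.shd-max-threads applies to replicate volumes, and a separate disperse-side option exists in later releases):

```shell
# Show the current self-heal parallelism setting
gluster volume get gv0 cluster.shd-max-threads

# Allow up to 8 concurrent heals per self-heal daemon
gluster volume set gv0 cluster.shd-max-threads 8

# Queue length of entries each SHD keeps waiting to be healed
gluster volume set gv0 cluster.shd-wait-qlength 4096
```

Raising the thread count speeds up recovery at the cost of more I/O and CPU on the healthy bricks, so it is usually tuned with the hosted VMs' latency sensitivity in mind.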