On 17 May 2016 at 10:02, WK wrote:
> That being said, when we lose a brick, we've traditionally just live
> migrated those VMs off onto other clusters, because we didn't want to take
> the heal hit, which at best slowed down our VMs and on the pickier ones
> caused them to RO out.
>
That should be an important clue.
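For reference, the size of that heal backlog can be checked from the standard heal CLI before deciding whether to migrate; the volume name below is only a placeholder:

    # List entries still pending heal on each brick
    gluster volume heal datastore1 info

    # Per-brick count of pending heals
    gluster volume heal datastore1 statistics heal-count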
We have not yet
Ok, this is probably an interesting data point. I was unable to
reproduce the problem when using the fuse mount.
It's late here, so I might not have time to repeat the test with gfapi, but I
will tomorrow.
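For anyone repeating both cases, the two access paths differ only on the client side; a minimal sketch, with host, volume and image names as placeholders (the gluster:// form assumes qemu was built with libgfapi support):

    # FUSE client: mount the volume, VM disks are plain files under it
    mount -t glusterfs gfs-host:/datastore1 /mnt/datastore1

    # libgfapi client: qemu opens the image directly over gluster://
    qemu-img info gluster://gfs-host/datastore1/images/vm1.qcow2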
On 16/05/2016 4:55 PM, Krutika Dhananjay wrote:
Yes, that would probably be useful in terms of at least having access to
the client logs.
-Krutika
On Mon, May 16, 2016 at 12:18 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 16 May 2016 at 16:46, Krutika Dhananjay wrote:
> > Could you share the mount and glustershd logs for investigation?
Hi,
Could you share the mount and glustershd logs for investigation?
-Krutika
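Both logs normally sit under /var/log/glusterfs/; assuming a client mount point of /mnt/datastore1, the default names would be:

    # FUSE mount log on the client (mount path, with slashes replaced by dashes)
    /var/log/glusterfs/mnt-datastore1.log

    # Self-heal daemon log on each server node
    /var/log/glusterfs/glustershd.log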
On Sun, May 15, 2016 at 12:22 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 15/05/2016 12:45 AM, Lindsay Mathieson wrote:
>
> *First off I tried removing/adding a brick.*
>
> gluster v
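The quoted command is cut off above; as a rough sketch, a brick swap on a replicated volume usually runs along these lines (volume name, replica counts and brick paths are hypothetical):

    # Drop the dead brick, shrinking the replica count
    gluster volume remove-brick datastore1 replica 2 node3:/bricks/b1 force

    # Add the replacement brick, restoring the replica count
    gluster volume add-brick datastore1 replica 3 node3:/bricks/b2

    # Kick off a full self-heal to populate the new brick
    gluster volume heal datastore1 full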
Response inline.
----- Original Message -----
> From: "Atin Mukherjee"
> To: "Lindsay Mathieson"
> Cc: "gluster-users" , "Anuradha Talur"
>
> Sent: Saturday, May 14, 2016 8:35:38 AM
>