On Thu, Feb 15, 2018 at 09:34:02PM +0200, Alex K wrote:
> Have you checked for any file system errors on the brick mount point?

I hadn't. fsck reports no errors.

> What about the heal? Does it report any pending heals?

There are now. It looks like taking the brick offline to fsck it was
enough.
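
(For reference, pending heals are reported per volume by the gluster
CLI; a quick sketch, assuming the volume is named gv0 -- substitute
the real volume name:)

    # List entries with pending heals on each brick of the volume
    gluster volume heal gv0 info

    # Show only entries that are in split-brain, if any
    gluster volume heal gv0 info split-brain

    # Force a full heal if the self-heal daemon hasn't caught up
    gluster volume heal gv0 full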
Hi,

Have you checked for any file system errors on the brick mount point?
I once faced weird I/O errors, and xfs_repair fixed the issue.

What about the heal? Does it report any pending heals?
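
(Roughly what that looks like in practice -- a sketch, assuming the
brick sits on /dev/sdb1, is mounted at /var/local/brick0, and belongs
to a volume gv0; all three names are placeholders:)

    # Stop the glusterfsd process serving this brick first, or the
    # unmount will fail with "device is busy"; then:
    umount /var/local/brick0
    xfs_repair -n /dev/sdb1      # -n: no-modify mode, only report problems
    xfs_repair /dev/sdb1         # actually repair
    mount /dev/sdb1 /var/local/brick0

    # Restart any stopped brick processes and let self-heal catch up
    gluster volume start gv0 force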
On Feb 15, 2018 14:20, "Dave Sherohman" wrote:
Well, it looks like I've stumped the list, so I did a bit of additional
digging myself:

azathoth replicates with yog-sothoth, so I compared their brick
directories. `ls -R /var/local/brick0/data | md5sum` gives the same
result on both servers, so the filenames are identical in both bricks.
However ...
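
(Matching filenames don't guarantee matching contents; one way to
compare the actual data on both bricks, skipping gluster's internal
.glusterfs tree, would be to run this on each server and compare the
final digests:)

    cd /var/local/brick0/data
    # Checksum every regular file, then reduce to one digest per brick;
    # sorting by filename first makes the combined digest order-independent
    find . -path ./.glusterfs -prune -o -type f -exec md5sum {} + \
      | sort -k 2 | md5sum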
I'm using gluster for a virt-store with 3x2 distributed/replicated
servers for 16 qemu/kvm/libvirt virtual machines using image files
stored in gluster and accessed via libgfapi. Eight of these disk images
are standalone, while the other eight are qcow2 images which all share a
single backing file.
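
(For anyone unfamiliar with that layout, it looks something like the
following; the image names are made up for illustration:)

    # One shared base image, kept read-only once guests use it
    qemu-img create -f qcow2 base.qcow2 20G

    # Each guest gets its own copy-on-write overlay on top of the base
    qemu-img create -f qcow2 -b base.qcow2 guest1.qcow2
    qemu-img create -f qcow2 -b base.qcow2 guest2.qcow2

    # Inspect a guest's backing chain
    qemu-img info --backing-chain guest1.qcow2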