Adding Ravi to look into the heal issue.
As for the fsync hang and subsequent IO errors, it seems a lot like
https://bugzilla.redhat.com/show_bug.cgi?id=1497156 and Paolo Bonzini from
qemu had pointed out that this would be fixed by the following commit:
commit
https://docs.gluster.org/en/latest/release-notes/3.12.6/
The major issue in 3.12.6 is not present in 3.12.7. Bugzilla ID listed in link.
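To double-check which release is actually running before and after the
upgrade, something like this is usually enough (the rpm query assumes an
RPM-based install such as CentOS):
  gluster --version
  rpm -q glusterfs-server glusterfs-fuse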
On May 29, 2018 8:50:56 PM EDT, Dan Lavu wrote:
>What shard corruption bug? bugzilla url? I'm running into some odd
>behavior in my lab with shards and
Forgot to mention, sometimes I have to force start other volumes as well;
it's hard to determine which brick process is locked up from the logs.
Status of volume: rhev_vms_primary
Gluster process                             TCP Port  RDMA Port  Online  Pid
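For what it's worth, a rough sketch of force-starting everything rather
than hunting through the logs for the stuck brick (just the general idea,
not the exact commands used here):
  # see which brick processes are actually running on this node
  pgrep -fa glusterfsd
  # force-start every volume; running bricks are left alone,
  # dead ones are respawned
  for v in $(gluster volume list); do
    gluster volume start "$v" force
  done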
What shard corruption bug? bugzilla url? I'm running into some odd behavior
in my lab with shards and RHEV/KVM data, trying to figure out if it's
related.
Thanks.
On Fri, May 4, 2018 at 11:13 AM, Jim Kinney wrote:
> I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left it
Stefan,
Sounds like a brick process is not running. I have noticed some strangeness
in my lab when using RDMA; I often have to forcibly restart the brick
process, often as in every single time I do a major operation: add a new
volume, remove a volume, stop a volume, etc.
gluster volume status
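A quick way to confirm which brick is down and bring it back (the volume
name below is only a placeholder):
  gluster volume status myvol         # look for bricks with Online = N
  gluster volume start myvol force    # respawns only the bricks that are down
  gluster volume heal myvol info      # on replicated volumes, check pending heals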
Dear all,
I faced a problem with a glusterfs volume (pure distributed, _not_ dispersed)
over RDMA transport. One user had a directory with a large number of files
(50,000 files), and just doing an "ls" in this directory yields a "Transport
endpoint is not connected" error. The effect is that
On Tue, May 29, 2018 at 09:03:04AM +0900, 김경표 wrote:
> Sometimes an OS disk hang occurred and the disk was re-mounted ro in the
> VM guest (CentOS 6) when storage was busy.
I had similar problems in the early days of running my gluster volume;
then I switched the gluster mounts from fuse to libgfapi and haven't had
problems since.
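For anyone wanting to try the same switch, qemu can open an image over
libgfapi with a gluster:// URL; a minimal sketch (host, volume and image
names are placeholders, and oVirt/RHEV additionally needs libgfapi enabled
on the engine side):
  # verify qemu can reach the image through libgfapi
  qemu-img info gluster://gluster1.example.com/myvol/images/vm01.qcow2
  # or boot a throwaway guest straight from it
  qemu-system-x86_64 -m 2048 \
    -drive file=gluster://gluster1.example.com/myvol/images/vm01.qcow2,if=virtio,cache=none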
Hi, I've gone through a bit of testing around using Gluster as a VMware
datastore, here are my findings:
Running VMware vSphere 6.5 with ESXi nodes. Gluster running on Supermicro
kit: 6 SAS disks with 2 SSDs for caching, all carved up using LVM on top of
CentOS 7.
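If anyone wants to reproduce that disk layout, something along these lines
would carve it up with LVM plus an SSD cache (this assumes lvmcache; the
device names and sizes are made up, not the actual setup):
  pvcreate /dev/sd[b-i]
  vgcreate gluster_vg /dev/sd[b-i]
  # data LV on the six SAS disks
  lvcreate -L 5T -n brick1 gluster_vg /dev/sd[b-g]
  # cache pool on the two SSDs, attached to the data LV
  lvcreate --type cache-pool -L 400G -n brick1_cache gluster_vg /dev/sdh /dev/sdi
  lvconvert --type cache --cachepool gluster_vg/brick1_cache gluster_vg/brick1
  # XFS with 512-byte inodes is the usual recommendation for Gluster bricks
  mkfs.xfs -i size=512 /dev/gluster_vg/brick1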
I set up a 4 node cluster,