In my lab, one of my RAID cards started acting up and took one of my three
Gluster nodes offline (two nodes with data and an arbiter node). I'm hoping
it's simply the backplane, but while troubleshooting and waiting for parts,
the hypervisors were fenced. Since the firewall was replaced a
The Gluster community is pleased to announce the release of Gluster
4.0.1 (packages available at [1]).
Release notes for this release can be found at [2].
Thanks,
Gluster community
[1] Packages:
https://download.gluster.org/pub/gluster/glusterfs/4.0/4.0.1/
[2] Release notes:
https://github.com/g
Raghavendra,
The issue typically appears during heavy write operations to the VM
image. It's most noticeable during the filesystem creation process on a
virtual machine image. I'll gather some specific data while executing that
process and get back to you soon.
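For reference, that write pattern can be approximated outside a guest with plain dd against an image file on the mounted volume. This is only a hedged sketch: the mount path below is a local stand-in for a real gluster FUSE mount, and the sizes are arbitrary.

```shell
# Stand-in for a gluster FUSE mount point such as /mnt/glustervol (assumption).
MNT=${MNT:-/tmp/fake-gluster-mnt}
mkdir -p "$MNT"
# Allocate a sparse "VM image" file, 512 MiB.
dd if=/dev/zero of="$MNT/vm.img" bs=1M count=0 seek=512 2>/dev/null
# Heavy sequential write phase, a rough proxy for mkfs running inside a
# guest writing into the image; conv=notrunc keeps the image size intact.
dd if=/dev/zero of="$MNT/vm.img" bs=1M count=64 conv=notrunc,fsync 2>/dev/null
ls -l "$MNT/vm.img"
```

On a sharded volume this kind of write stream touches many shards in quick succession, which is presumably where the problem surfaces.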
thanks
-- Ian
-- Original Message --
Ian,
Do you have a reproducer for this bug? If not a specific one, a general
outline of what operations were done on the file will help.
regards,
Raghavendra
On Mon, Mar 26, 2018 at 12:55 PM, Raghavendra Gowdappa
wrote:
On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay
wrote:
> The gfid mismatch here is between the shard and its "link-to" file, the
> creation of which happens at a layer below that of shard translator on the
> stack.
>
> Adding DHT devs to take a look.
>
Thanks Krutika. I assume shard doesn't
The gfid mismatch here is between the shard and its "link-to" file, the
creation of which happens at a layer below that of shard translator on the
stack.
Adding DHT devs to take a look.
-Krutika
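For anyone following along, a mismatch like this can be confirmed by comparing the trusted.gfid xattr of the shard on each brick, as dumped by `getfattr -d -m . -e hex <brick-path>`. Below is a minimal sketch of the comparison; the brick filenames and gfid values are made up for illustration and the xattr dumps are inlined rather than read from real bricks.

```shell
# Simulated `getfattr -d -m . -e hex` output from two bricks (made-up values).
# The second brick holds the DHT "link-to" file, marked by the
# trusted.glusterfs.dht.linkto xattr.
brick1_xattrs='# file: .shard/example-shard.1
trusted.gfid=0xe8bd43b9a9f34dfb8f712345abcd0001'
brick2_xattrs='# file: .shard/example-shard.1
trusted.gfid=0xe8bd43b9a9f34dfb8f712345abcd0002
trusted.glusterfs.dht.linkto=0x766f6c2d7265706c69636174652d3100'

# Extract the gfid line from each dump and compare.
gfid1=$(printf '%s\n' "$brick1_xattrs" | sed -n 's/^trusted.gfid=//p')
gfid2=$(printf '%s\n' "$brick2_xattrs" | sed -n 's/^trusted.gfid=//p')
if [ "$gfid1" != "$gfid2" ]; then
  echo "gfid mismatch: $gfid1 vs $gfid2"
fi
```

On a healthy volume the shard and its link-to file carry the same gfid, so the comparison should print nothing.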
On Mon, Mar 26, 2018 at 1:09 AM, Ian Halliday wrote:
> Hello all,
>
> We are having a rather intere