Raghavendra,
Sorry for the late follow-up. I have some more data on the issue.
The issue tends to happen when the shards are created. The easiest time
to reproduce this is during an initial VM disk format. This is a log
from a test VM that was launched, and then partitioned and formatted
with
On Mon, Apr 2, 2018 at 11:37 PM, Andreas Davour wrote:
> On Mon, 2 Apr 2018, Nithya Balachandran wrote:
>
> On 2 April 2018 at 14:48, Andreas Davour wrote:
>>
>>
>>> Hi
>>>
>>> I've found something that works so weird I'm certain I have missed how
>>> gluster is supposed to be used, but I can no
On Mon, 2018-04-02 at 20:07 +0200, Andreas Davour wrote:
> On Mon, 2 Apr 2018, Nithya Balachandran wrote:
>
> > On 2 April 2018 at 14:48, Andreas Davour wrote:
> >
> > > Hi
> > >
> > > I've found something that works so weird I'm certain I have
> > > missed how
> > > gluster is supposed to be u
Hi,
I've got a pair of systems running CentOS 7.4 as a testbed for kvm +
gluster. It's a very basic config with a single SSD on each system.
Gluster is configured on the two systems and I'm testing performance with
fio. My test numbers directly against the brick and the fuse
mountpoint are
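
For anyone wanting to run a comparison like the one described above, a fio job file along these lines could exercise both paths back to back. This is only a sketch: the directory paths, block size, and job parameters are illustrative assumptions, not taken from the original message.

```ini
; Compare the raw brick path against the Gluster FUSE mount.
; Paths below are hypothetical -- adjust to your brick and mount locations.
[global]
ioengine=libaio
direct=1
rw=randwrite
bs=4k
size=1g
runtime=60
time_based

[brick-direct]
directory=/bricks/brick1/fiotest

[fuse-mount]
; stonewall serializes this job after the previous one finishes,
; so the two runs don't compete for the same disk.
stonewall
directory=/mnt/glusterfs/fiotest
```

Running `fio compare.fio` then reports IOPS and latency for each job separately, which makes the brick-vs-FUSE overhead easy to read off.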
On 2 April 2018 at 14:48, Andreas Davour wrote:
>
> Hi
>
> I've found something that works so weird I'm certain I have missed how
> gluster is supposed to be used, but I can not figure out how. This is my
> scenario.
>
> I have a volume, created from 16 nodes, each with a brick of the same
> size
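
A scenario like the one Andreas describes (a distributed volume built from 16 nodes, one equally sized brick each) could be set up roughly as follows. Hostnames and brick paths here are assumptions for illustration only; this needs a real cluster with the gluster daemon running on every node.

```shell
# Build the brick list node01:/bricks/brick1 ... node16:/bricks/brick1
BRICKS=$(for i in $(seq -w 1 16); do echo "node$i:/bricks/brick1"; done)

# Create a plain distributed volume (no replication) from all 16 bricks,
# start it, and mount it over FUSE on a client.
gluster volume create testvol $BRICKS
gluster volume start testvol
mount -t glusterfs node01:/testvol /mnt/testvol
```

With no replica count given, `gluster volume create` produces a pure distribute volume, so each file lands on exactly one brick and the usable capacity is the sum of the 16 bricks.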