On Fri, Jan 19, 2018 at 08:19:09PM +, Niklas Hambüchen wrote:
> What's /proc/sys/kernel/core_pattern set to for you? For me it is
>
> % cat /proc/sys/kernel/core_pattern
> core
>
> which will drop a core file in the working directory of the process.
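A minimal sketch of how to chase this down on a Linux node (the search path and the glusterfsd hint are assumptions, not from the original mails):

```shell
# Where does the kernel write core dumps?
cat /proc/sys/kernel/core_pattern

# A relative pattern such as "core" is resolved against the crashing
# process's current working directory, so the core for a brick process
# lands wherever glusterfsd was started (often / on many systems).
# The cwd of a running process can be read from procfs, e.g.
# (hypothetical pid):  readlink /proc/<pid>/cwd

# Cores are only written if the soft core-size limit is non-zero:
ulimit -c

# Brute-force search for recent cores (can be slow, run as root):
#   find / -name 'core*' -mtime -1 2>/dev/null
```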
Same here; still, I'm unable to find any core file.
On Tue, Jan 23, 2018 at 1:04 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen wrote:
> Hi again,
>
> here is more information regarding the issue described earlier.
>
> It looks like self-healing is stuck. According to "heal statistics", the
> crawl began at Sat Jan 20 12:56:19 2018 and it is still going on
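For anyone hitting a similar stuck-heal situation, the state of the self-heal crawl can be inspected from any server node with the gluster CLI. This is a sketch; the volume name below is a placeholder, not the one from this thread:

```shell
VOL=gv0   # placeholder: substitute your volume name

# List the files/gfids still pending heal on each brick:
gluster volume heal "$VOL" info

# Crawl timing and per-brick counters (the "heal statistics" output
# quoted above comes from this command):
gluster volume heal "$VOL" statistics

# Just the number of entries awaiting heal, per brick:
gluster volume heal "$VOL" statistics heal-count
```

These need a live Gluster cluster, so there is nothing to run standalone here.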
On 17 January 2018 at 16:04, Ing. Luca Lazzeroni - Trend Servizi Srl <
l...@trendservizi.it> wrote:
> Here's the volume info:
>
>
> Volume Name: gv2a2
> Type: Replicate
> Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
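For readers unfamiliar with the notation: "1 x (2 + 1) = 3" means one replica set with two data bricks plus one arbiter brick. A volume with this layout would typically be created along these lines (hostnames and brick paths below are made-up placeholders, not from the original mail):

```shell
# replica 3 with arbiter 1: two full data copies plus a metadata-only
# arbiter brick that breaks ties and helps prevent split-brain.
gluster volume create gv2a2 replica 3 arbiter 1 \
    server1:/bricks/gv2a2/brick \
    server2:/bricks/gv2a2/brick \
    arbiter1:/bricks/gv2a2/brick
gluster volume start gv2a2
```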
>
Okay, so I've found that one of the hypervisors was only connected to 3 Gluster
nodes instead of 4.
Is there a way to tell which nodes the GlusterFS client is connected to? Or to
list clients from the GlusterFS server?
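One way to check both directions (a sketch; the volume name is a placeholder): the server side can list the clients connected to each brick, and on the hypervisor the fuse client's TCP connections can be inspected:

```shell
VOL=gv0   # placeholder: substitute your volume name

# On any server node: list the clients connected to each brick process:
gluster volume status "$VOL" clients

# On the hypervisor (client side): show established connections from the
# glusterfs fuse client to the brick/glusterd ports:
ss -tnp 2>/dev/null | grep -i gluster
```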
--
Respectfully
Mahdi A. Mahdi
So from the logs this looks to be a regression caused by commit 635c1c3,
and the good news is that this is now fixed in the release-3.12 branch and
should be part of 3.12.5.
Commit which fixes this issue:
COMMIT: https://review.gluster.org/19146 committed in release-3.12 by
"Atin Mukherjee"
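For anyone wanting to confirm that a given build contains the fix, one way is to check a glusterfs git checkout. This assumes the Gerrit review URL appears in the merged commit message, which is the usual convention for changes merged via review.gluster.org; the fix commit's SHA is left as a placeholder since only the review URL is given above:

```shell
# Search the release branch history for the review number cited above:
git log origin/release-3.12 --grep='19146' --oneline

# Or, with the fix commit's SHA in hand, check that it is an ancestor of
# the 3.12.5 tag (FIX_SHA is a placeholder):
#   git merge-base --is-ancestor FIX_SHA v3.12.5 && echo "fix included"
```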