Hello!
Yesterday we hit something like this on Gluster 4.1.2,
CentOS 7.5.
Volume is replicated - two bricks and one arbiter.
We rebooted the arbiter, waited for the heal to finish, and tried to
live-migrate a VM to another node (we run VMs on the Gluster nodes):
[2018-08-27 09:56:22.085411] I [MSGID: 115029]
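For context, "waited for the heal to finish" can be sketched as below. The awk parsing of `gluster volume heal <vol> info` output and the volume/brick names are illustrative, not taken from our actual setup:

```shell
# Sum the "Number of entries:" lines from `gluster volume heal <vol> info`;
# zero means no files are pending heal on any brick.
parse_pending() {
    awk '/^Number of entries:/ {sum += $4} END {print sum + 0}'
}

# Illustrative captured output (two data bricks plus arbiter, one still healing):
sample='Brick gluster1:/data/brick
Number of entries: 0
Brick gluster2:/data/brick
Number of entries: 3
Brick arbiter1:/data/brick
Number of entries: 0'

pending=$(printf '%s\n' "$sample" | parse_pending)
echo "pending heals: $pending"    # prints "pending heals: 3"

# In practice, loop until it reaches zero before starting the migration:
#   while [ "$(gluster volume heal shared info | parse_pending)" -gt 0 ]; do sleep 10; done
```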
Good Morning,
today I updated and rebooted all Gluster servers: kernel to
4.9.0-8 and Gluster to 3.12.13. The reboots went fine, but on one of the
Gluster servers (gluster13) one of the bricks came up at first
and then lost its connection.
OK:
Status of volume: shared
Gluster process
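A quick way to spot which bricks are down is to scan the Online column of `gluster volume status`; the parsing, hostnames, and paths below are illustrative, and `gluster volume start <vol> force` is the usual way to respawn a dead brick process without disturbing the healthy ones:

```shell
# Print bricks whose Online column is "N" in `gluster volume status <vol>`.
# Columns: <process> <host:path> <TCP port> <RDMA port> <Online> <Pid>
offline_bricks() {
    awk '/^Brick/ && $(NF-1) == "N" {print $2}'
}

# Illustrative captured status output (gluster13's brick is down):
sample='Brick gluster11:/gluster/shared  49152  0    Y    4321
Brick gluster13:/gluster/shared  N/A    N/A  N    N/A'

printf '%s\n' "$sample" | offline_bricks    # prints "gluster13:/gluster/shared"

# To respawn the missing brick process:
#   gluster volume start shared force
```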
On Monday 27 August 2018 01:57 PM, Pasi Kärkkäinen wrote:
Hi,
On Mon, Aug 27, 2018 at 11:10:21AM +0530, Jiffin Tony Thottan wrote:
The Gluster community is pleased to announce the release of Gluster
3.12.13 (packages available at [1,2,3]).
Release notes for the release can be
Yeah, on Debian xyz.log.1 is always the previous logfile, the one that
has been rotated by logrotate. I just checked the 3 servers: now it
looks good; I will check again tomorrow. Very strange, maybe logrotate
wasn't working properly.
The performance problems remain :-)
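Since logrotate is the suspect here, for comparison this is roughly the shape of the stanza the Gluster packages ship under /etc/logrotate.d/ — treat it as a sketch, not the exact Debian file, as the directives vary by distro and version:

```
/var/log/glusterfs/*.log {
    daily
    rotate 52
    missingok
    compress
    delaycompress
    sharedscripts
    postrotate
        # Ask the gluster daemons to reopen their log files
        # (exact postrotate command differs between packagings)
        /usr/bin/killall -HUP glusterfs glusterfsd glusterd 2> /dev/null || true
    endscript
}
```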
2018-08-27 15:41 GMT+02:00 Milind
On Thu, Aug 23, 2018 at 5:28 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> On Wed, Aug 22, 2018 at 12:01 PM Hu Bert wrote:
>
>> Just an addition: in general there are no log messages in
>> /var/log/glusterfs/ (if you don't call 'gluster volume ...'), but on
>> the node with the
Hi,
On Mon, Aug 27, 2018 at 11:10:21AM +0530, Jiffin Tony Thottan wrote:
>The Gluster community is pleased to announce the release of Gluster
>3.12.13 (packages available at [1,2,3]).
>
>Release notes for the release can be found at [4].
>
>Thanks,
>Gluster community
>
>
Hi,
It seems you linked the 3.12.12 changelog instead of the 3.12.13 one.
Does it fix the memory leak problem?
Thanks