glusterd/src/glusterd-utils.c.
But even if we fix it today, I don't think we have a release planned
immediately for shipping this. Are you planning to fix the code and re-compile?
Regards,
Amar
On Wed, Jan 31, 2018 at 10:00 PM, Freer, Eva B.
<free...@ornl.gov> wrote:
ed to get in touch with another developer to check about the
changes here and he will be available only tomorrow. Is there someone else I
could work with while you are away?
Regards,
Nithya
On 31 January 2018 at 22:00, Freer, Eva B.
<free...@ornl.gov> wrote:
Cc: "Greene, Tami McFarlin" <gree...@ornl.gov>, "gluster-users@gluster.org"
<gluster-users@gluster.org>, Amar Tumballi <atumb...@redhat.com>
Subject: Re: [Gluster-users] df does not show full volume capacity after update
to 3.12.4
On 31 January 2018 at 2
To: Eva Freer <free...@ornl.gov>
Cc: "Greene, Tami McFarlin" <gree...@ornl.gov>, "gluster-users@gluster.org"
<gluster-users@gluster.org>, Amar Tumballi <atumb...@redhat.com>
Subject: Re: [Gluster-users] df does not show full volume capacity after update
to 3.12.4
…works post that?
gluster v set dataeng cluster.min-free-inodes 6%
If it doesn't work, please send us the stat -f output for each brick.
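If resetting the option does not change what df reports, the per-brick statfs numbers Nithya asks for can be collected in one pass. This is only a sketch: the brick paths below are hypothetical, so substitute the actual brick directories shown by `gluster volume info dataeng`.

```shell
# Hypothetical brick paths -- replace with the real brick directories
# from `gluster volume info dataeng`.
for brick in /bricks/dataeng/brick1 /bricks/dataeng/brick2; do
    echo "== $brick =="
    # Blocks/Inodes totals and block size: the raw numbers glusterfs
    # aggregates when computing df for the volume.
    stat -f "$brick" || true
done
```

Comparing the `Block size` and `Blocks: Total` fields across bricks can show whether one brick is being reported with the wrong size.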
Regards,
Nithya
On 31 January 2018 at 20:41, Freer, Eva B.
<free...@ornl.gov> wrote:
Nithya,
The file for one of the servers
On 31 Jan 2018, at 12:47 pm, Freer, Eva B.
<free...@ornl.gov> wrote:
After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster 3.12.4, the
‘df’ command shows only part of the available space on the mount point for
multi-brick volumes. All nodes are at 3.12.4. This occurs on both servers and
clients.
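One quick way to quantify the discrepancy is to compare the size df reports on the fuse mount against the individual brick filesystems. The mount point and brick path below are hypothetical examples, not the reporter's actual paths:

```shell
# Hypothetical paths -- substitute your own fuse mount and brick mounts.
# Size the client sees through the glusterfs fuse mount:
df -h /mnt/dataeng || true
# Size of one brick's local filesystem (run on the server hosting it):
df -h /bricks/dataeng/brick1 || true
```

For a distributed-replicated volume, the fuse mount should report roughly the sum of one brick per replica pair; a mount that shows only a fraction of that total matches the symptom described here.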
We have 2 different server configurations.
Our configuration is a distributed, replicated volume with 7 pairs of bricks on
2 servers. We are in the process of adding additional storage for another brick
pair. I placed the new disks in one of the servers late last week and used the
LSI storcli command to make a RAID 6 volume of the new
Update: I was able to use the TestDisk program from cgsecurity.org to find and
rewrite the partition info for the LVM partition. I was then able to mount the
disk and restart the gluster volume to bring the brick back online. To make
sure everything was OK, I then rebooted the node with the