Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Amar Tumballi
On Thu, Feb 1, 2018 at 9:31 AM, Nithya Balachandran <nbala...@redhat.com>
wrote:

> Hi,
>
> I think we have a workaround until we have a fix in the code. The
> following worked on my system.
>
> Copy the attached file to /usr/lib/glusterfs/3.12.4/filter/. (You
> might need to create the filter directory in this path.)
> Make sure the file has execute permissions. On my system:
>
> [root@rhgsserver1 fuse2]# cd /usr/lib/glusterfs/3.12.5/
> [root@rhgsserver1 3.12.5]# l
> total 4.0K
> drwxr-xr-x.  2 root root   64 Feb  1 08:56 auth
> drwxr-xr-x.  2 root root   34 Feb  1 09:12 filter
> drwxr-xr-x.  2 root root   66 Feb  1 08:55 rpc-transport
> drwxr-xr-x. 13 root root 4.0K Feb  1 08:57 xlator
>
> [root@rhgsserver1 fuse2]# cd filter
> [root@rhgsserver1 filter]# pwd
> /usr/lib/glusterfs/3.12.5/filter
> [root@rhgsserver1 filter]# ll
> total 4
> -rwxr-xr-x. 1 root root 95 Feb  1 09:12 shared-brick-count.sh
>
> Rerun:
> gluster v set dataeng cluster.min-free-inodes 6%
>
>
>
Wow! I like this approach :-) Awesome! Thanks Nithya.

-Amar



Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Nithya Balachandran
Please note, the file needs to be copied to all nodes.
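
(Editorial sketch, not from the original mail: one way to push the filter to
every node, assuming hypothetical hostnames server-A and server-B from the df
output later in this thread and the 3.12.4 install path from the instructions
quoted in this thread.)

# Copy the filter script to each node and make it executable.
for h in server-A server-B; do
    scp shared-brick-count.sh "$h:/usr/lib/glusterfs/3.12.4/filter/"
    ssh "$h" chmod +x /usr/lib/glusterfs/3.12.4/filter/shared-brick-count.sh
done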


Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Nithya Balachandran
Hi,

I think we have a workaround until we have a fix in the code. The
following worked on my system.

Copy the attached file to /usr/lib/glusterfs/3.12.4/filter/. (You
might need to create the filter directory in this path.)
Make sure the file has execute permissions. On my system:

[root@rhgsserver1 fuse2]# cd /usr/lib/glusterfs/3.12.5/
[root@rhgsserver1 3.12.5]# l
total 4.0K
drwxr-xr-x.  2 root root   64 Feb  1 08:56 auth
drwxr-xr-x.  2 root root   34 Feb  1 09:12 filter
drwxr-xr-x.  2 root root   66 Feb  1 08:55 rpc-transport
drwxr-xr-x. 13 root root 4.0K Feb  1 08:57 xlator

[root@rhgsserver1 fuse2]# cd filter
[root@rhgsserver1 filter]# pwd
/usr/lib/glusterfs/3.12.5/filter
[root@rhgsserver1 filter]# ll
total 4
-rwxr-xr-x. 1 root root 95 Feb  1 09:12 shared-brick-count.sh

Rerun:
gluster v set dataeng cluster.min-free-inodes 6%


Check the .vol files to see if the value has changed. It should now be
1. You do not need to restart the volume.

See [1] for more details.

Regards,
Nithya



[1]
http://docs.gluster.org/en/latest/Administrator%20Guide/GlusterFS%20Filter/
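
(Editorial note: the attachment itself is not preserved in this archive.
Judging from its size in the listing above and the sed one-liner discussed
later in this thread, and assuming glusterd invokes each filter script with
the volfile path as its argument, as described in the GlusterFS Filter
documentation at [1], a minimal reconstruction might look like this:)

#!/bin/bash
# Hypothetical reconstruction of shared-brick-count.sh: glusterd is assumed
# to pass the generated volfile path as $1; reset shared-brick-count to 1
# every time the volfile is rewritten.
sed -i 's/option shared-brick-count [0-9]*/option shared-brick-count 1/' "$1"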



Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Freer, Eva B.
Amar,

Thanks for your prompt reply. No, I do not plan to fix the code and re-compile.
I was hoping it could be fixed by setting the shared-brick-count or some
other option. Since this is a production system, we will wait until a fix is in
a release.

Thanks,
Eva (865) 574-6894


Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Amar Tumballi
Hi Freer,

Our analysis is that this issue is caused by
https://review.gluster.org/17618. Specifically, in
'gd_set_shared_brick_count()' from
https://review.gluster.org/#/c/17618/9/xlators/mgmt/glusterd/src/glusterd-utils.c.

But even if we fix it today, I don't think we have a release planned
immediately for shipping this. Are you planning to fix the code and
re-compile?

Regards,
Amar
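
(Editorial aside: shared-brick-count exists so that when several bricks of a
volume share one local filesystem, df output for the volume divides that
filesystem's size among them instead of counting it once per brick. A rough
shell illustration of the idea, not the actual glusterd code:)

# Bricks with the same stat(1) device ID live on the same filesystem, so
# each should contribute 1/N of that filesystem's size to the volume total.
# On separate partitions, as in Eva's setup, every count should be 1.
for b in /bricks/data_A*; do
    stat -c '%d %n' "$b"              # "<device-id> <brick-path>"
done | sort -n | awk '{ n[$1]++ }
    END { for (d in n) print "device", d, "->", n[d], "brick(s)" }'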


Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Freer, Eva B.
Nithya,

Yes, Tami Greene, who is copied on the emails. I will monitor them also and 
work with her to get this resolved.

Thanks,
Eva (865) 574-6894


Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Nithya Balachandran
Hi Eva,

I'm sorry, but I need to get in touch with another developer to check on
the changes here, and he will be available only tomorrow. Is there someone
else I could work with while you are away?

Regards,
Nithya


Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Freer, Eva B.
Nithya,

I will be out of the office for ~10 days starting tomorrow. Is there any way we 
could possibly resolve it today?

Thanks,
Eva (865) 574-6894


Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Nithya Balachandran
On 31 January 2018 at 21:50, Freer, Eva B. <free...@ornl.gov> wrote:

> The values for shared-brick-count are still the same. I did not restart
> the volume after setting the cluster.min-free-inodes to 6%. Do I need to
> restart it?
>
>
>
That is not necessary. Let me get back to you on this tomorrow.

Regards,
Nithya




Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Freer, Eva B.
The values for shared-brick-count are still the same. I did not restart the
volume after setting the cluster.min-free-inodes to 6%. Do I need to restart it?

Thanks,
Eva (865) 574-6894


Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Nithya Balachandran
On 31 January 2018 at 21:34, Freer, Eva B. <free...@ornl.gov> wrote:

> Nithya,
>
>
>
> Responding to an earlier question: Before the upgrade, we were at 3.10.3 on
> these servers, but some of the clients were 3.7.6. From below, does this
> mean that “shared-brick-count” needs to be set to 1 for all bricks?
>
>
>
> All of the bricks are on separate XFS partitions composed of hardware RAID
> 6 volumes. LVM is not used. The current setting for cluster.min-free-inodes
> was 5%. I changed it to 6% per your instructions below. The df output is
> still the same, but I haven’t done the
>
> find /var/lib/glusterd/vols -type f | xargs sed -i -e 's/option
> shared-brick-count [0-9]*/option shared-brick-count 1/g'
>
> Should I go ahead and do this?
>

Can you check if the values have been changed in the .vol files before you
try this?

These files will be regenerated every time the volume is changed so
changing them directly may not be permanent. I was hoping setting the
cluster.min-free-inodes would have corrected this automatically and helped
us figure out what was happening as we have not managed to reproduce this
issue yet.
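
(Editorial aside: a non-destructive way to do that check, assuming the volume
name dataeng and the standard glusterd state directory from earlier in the
thread:)

# Read-only: list the current shared-brick-count in every volfile.
grep -r 'option shared-brick-count' /var/lib/glusterd/vols/dataeng/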





Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Freer, Eva B.
Nithya,

Responding to an earlier question: Before the upgrade, we were at 3.10.3 on
these servers, but some of the clients were 3.7.6. From below, does this mean
that “shared-brick-count” needs to be set to 1 for all bricks?

All of the bricks are on separate XFS partitions composed of hardware RAID 6
volumes. LVM is not used. The current setting for cluster.min-free-inodes was
5%. I changed it to 6% per your instructions below. The df output is still the 
same, but I haven’t done the
find /var/lib/glusterd/vols -type f | xargs sed -i -e 's/option shared-brick-count [0-9]*/option shared-brick-count 1/g'
Should I go ahead and do this?

Output of stat -f for all the bricks:

[root@jacen ~]# stat -f /bricks/data_A*
  File: "/bricks/data_A1"
ID: 801 Namelen: 255 Type: xfs
Block size: 4096   Fundamental block size: 4096
Blocks: Total: 15626471424 Free: 4530515093 Available: 4530515093
Inodes: Total: 1250159424 Free: 1250028064
  File: "/bricks/data_A2"
ID: 811 Namelen: 255 Type: xfs
Block size: 4096   Fundamental block size: 4096
Blocks: Total: 15626471424 Free: 3653183901 Available: 3653183901
Inodes: Total: 1250159424 Free: 1250029262
  File: "/bricks/data_A3"
ID: 821 Namelen: 255 Type: xfs
Block size: 4096   Fundamental block size: 4096
Blocks: Total: 15626471424 Free: 15134840607 Available: 15134840607
Inodes: Total: 1250159424 Free: 1250128031
  File: "/bricks/data_A4"
ID: 831 Namelen: 255 Type: xfs
Block size: 4096   Fundamental block size: 4096
Blocks: Total: 15626471424 Free: 15626461604 Available: 15626461604
Inodes: Total: 1250159424 Free: 1250153857

[root@jaina dataeng]# stat -f /bricks/data_B*
  File: "/bricks/data_B1"
ID: 801 Namelen: 255 Type: xfs
Block size: 4096   Fundamental block size: 4096
Blocks: Total: 15626471424 Free: 5689640723 Available: 5689640723
Inodes: Total: 1250159424 Free: 1250047934
  File: "/bricks/data_B2"
ID: 811 Namelen: 255 Type: xfs
Block size: 4096   Fundamental block size: 4096
Blocks: Total: 15626471424 Free: 6623312785 Available: 6623312785
Inodes: Total: 1250159424 Free: 1250048131
  File: "/bricks/data_B3"
ID: 821 Namelen: 255 Type: xfs
Block size: 4096   Fundamental block size: 4096
Blocks: Total: 15626471424 Free: 15106888485 Available: 15106888485
Inodes: Total: 1250159424 Free: 1250122139
  File: "/bricks/data_B4"
ID: 831 Namelen: 255 Type: xfs
Block size: 4096   Fundamental block size: 4096
Blocks: Total: 15626471424 Free: 15626461604 Available: 15626461604
Inodes: Total: 1250159424 Free: 1250153857
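
(Editorial sanity check, not from the original mail: the stat -f numbers
above are self-consistent with the sizes reported earlier in the thread.)

echo $(( 15626471424 * 4096 ))      # 64006026952704 bytes per brick, ~58.2 TiB ("59T" in df -h)
echo $(( 8 * 15626471424 * 4096 ))  # ~465.7 TiB for eight bricks (the "~466TB" Eva expects)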


Thanks,
Eva (865) 574-6894

From: Nithya Balachandran <nbala...@redhat.com>
Date: Wednesday, January 31, 2018 at 10:46 AM
To: Eva Freer <free...@ornl.gov>, "Greene, Tami McFarlin" <gree...@ornl.gov>
Cc: Amar Tumballi <atumb...@redhat.com>
Subject: Re: [Gluster-users] df does not show full volume capacity after update 
to 3.12.4

Thank you Eva.

From the files you sent:
dataeng.jacen.bricks-data_A1-dataeng.vol:option shared-brick-count 2
dataeng.jacen.bricks-data_A2-dataeng.vol:option shared-brick-count 2
dataeng.jacen.bricks-data_A3-dataeng.vol:option shared-brick-count 1
dataeng.jacen.bricks-data_A4-dataeng.vol:option shared-brick-count 1
dataeng.jaina.bricks-data_B1-dataeng.vol:option shared-brick-count 0
dataeng.jaina.bricks-data_B2-dataeng.vol:option shared-brick-count 0
dataeng.jaina.bricks-data_B3-dataeng.vol:option shared-brick-count 0
dataeng.jaina.bricks-data_B4-dataeng.vol:option shared-brick-count 0


Are all of these bricks on separate filesystem partitions? If yes, can you
please try running the following on one of the gluster nodes and see if the df
output is correct after that?

gluster v set dataeng cluster.min-free-inodes 6%


If it doesn't work, please send us the stat -f output for each brick.

Regards,
Nithya

On 31 January 2018 at 20:41, Freer, Eva B. <free...@ornl.gov> wrote:
Nithya,

The file for one of the servers is attached.

Thanks,
Eva (865) 574-6894


Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-30 Thread Nithya Balachandran
I found this on the mailing list:

    I found the issue.

    The CentOS 7 RPMs, upon upgrade, modify the .vol files. Among other
    things, they add "option shared-brick-count \d", using the number of
    bricks in the volume. This gives you an average free space per brick,
    instead of total free space in the volume. When I create a new volume,
    the value of "shared-brick-count" is "1".

    find /var/lib/glusterd/vols -type f | xargs sed -i -e 's/option shared-brick-count [0-9]*/option shared-brick-count 1/g'
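
(Editorial suggestion, not part of the quoted fix: if you do run the sed
one-liner, back the volfiles up first, since it edits them in place.)

cp -a /var/lib/glusterd/vols /var/lib/glusterd/vols.bak
find /var/lib/glusterd/vols -type f | xargs sed -i -e \
    's/option shared-brick-count [0-9]*/option shared-brick-count 1/g'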



Eva, can you send me the contents of the /var/lib/glusterd/ folder
from any one node so I can confirm if this is the problem?

Regards,
Nithya


On 31 January 2018 at 10:47, Nithya Balachandran wrote:

> Hi Eva,
>
> One more question. What version of gluster were you running before the
> upgrade?
>
> Thanks,
> Nithya

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-30 Thread Nithya Balachandran
Hi Eva,

Can you send us the following:

gluster volume info
gluster volume status

The log files and tcpdump for df on a fresh mount point for that volume.

Thanks,
Nithya
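
(Editorial aside: one plausible way to gather the tcpdump requested above;
port 24007 is glusterd's management port, brick ports vary, and the mount
point here is hypothetical.)

# In one shell on the client:
tcpdump -i any -s 0 -w /tmp/df-dataeng.pcap port 24007
# In another shell, against a freshly mounted volume:
df -h /mnt/dataeng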


On 31 January 2018 at 07:17, Freer, Eva B.  wrote:

> After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster 3.12.4,
> the ‘df’ command shows only part of the available space on the mount point
> for multi-brick volumes. All nodes are at 3.12.4. This occurs on both
> servers and clients.
>
>
>
> We have 2 different server configurations.
>
>
>
> Configuration 1: A distributed volume of 8 bricks with 4 on each server.
> The initial configuration had 4 bricks of 59TB each with 2 on each server.
> Prior to the update to CentOS 7.4 and gluster 3.12.4, ‘df’ correctly showed
> the size for the volume as 233TB. After the update, we added 2 bricks with
> 1 on each server, but the output of ‘df’ still only listed 233TB for the
> volume. We added 2 more bricks, again with 1 on each server. The output of
> ‘df’ now shows 350TB, but the aggregate of eight 59TB bricks should be ~466TB.
>
>
>
> Configuration 2: A distributed, replicated volume with 9 bricks on each
> server for a total of ~350TB of storage. After the server update to RHEL
> 6.9 and gluster 3.12.4, the volume now shows as having 50TB with ‘df’. No
> changes were made to this volume after the update.
>
>
>
> In both cases, examining the bricks shows that the space and files are
> still there, just not reported correctly with ‘df’. All machines have been
> rebooted and the problem persists.
>
>
>
> Any help/advice you can give on this would be greatly appreciated.
>
>
>
> Thanks in advance.
>
> Eva Freer
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-30 Thread Sam McLeod
Very similar to what we noticed too.

I suspect it's something to do with the metadata or xattrs stored on the 
filesystem, which gives results quite different from the actual file sizes.

--
Sam McLeod (protoporpoise on IRC)
https://smcleod.net
https://twitter.com/s_mcleod

Words are my own opinions and do not necessarily represent those of my employer 
or partners.

> On 31 Jan 2018, at 2:14 pm, Freer, Eva B. <free...@ornl.gov> wrote:
> 
> Sam,
>  
> For du -sh on my newer volume, the result is 161T. The sum of the Used space 
> in the df -h output for all the bricks is ~163T. Close enough for me to 
> believe everything is there. The total used space in the df -h of the 
> mountpoint is 83T, roughly half of what is actually used.
>  
> Relevant lines from df -h on server-A:
> Filesystem          Size  Used  Avail  Use%  Mounted on
> /dev/sda1            59T   42T    17T   72%  /bricks/data_A1
> /dev/sdb1            59T   45T    14T   77%  /bricks/data_A2
> /dev/sdd1            59T   39M    59T    1%  /bricks/data_A4
> /dev/sdc1            59T  1.9T    57T    4%  /bricks/data_A3
> server-A:/dataeng   350T   83T   268T   24%  /dataeng
>  
> And on server-B:
> Filesystem          Size  Used  Avail  Use%  Mounted on
> /dev/sdb1            59T   34T    25T   58%  /bricks/data_B2
> /dev/sdc1            59T  2.0T    57T    4%  /bricks/data_B3
> /dev/sdd1            59T   39M    59T    1%  /bricks/data_B4
> /dev/sda1            59T   38T    22T   64%  /bricks/data_B1
> server-B:/dataeng   350T   83T   268T   24%  /dataeng
>  
> Eva Freer
>  
> From: Sam McLeod <mailingli...@smcleod.net>
> Date: Tuesday, January 30, 2018 at 9:43 PM
> To: Eva Freer <free...@ornl.gov>
> Cc: "gluster-users@gluster.org" <gluster-users@gluster.org>, "Greene, Tami 
> McFarlin" <gree...@ornl.gov>
> Subject: Re: [Gluster-users] df does not show full volume capacity after 
> update to 3.12.4
>  
> We noticed something similar. 
>  
> Out of interest, does du -sh . show the same size?
>  
> --
> Sam McLeod (protoporpoise on IRC)
> https://smcleod.net
> https://twitter.com/s_mcleod
> 
> Words are my own opinions and do not necessarily represent those of my 
> employer or partners.
> 
> 
>> On 31 Jan 2018, at 12:47 pm, Freer, Eva B. <free...@ornl.gov> wrote:
>>  
>> After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster 3.12.4, 
>> the ‘df’ command shows only part of the available space on the mount point 
>> for multi-brick volumes. All nodes are at 3.12.4. This occurs on both 
>> servers and clients. 
>>  
>> We have 2 different server configurations.
>>  
>> Configuration 1: A distributed volume of 8 bricks with 4 on each server. The 
>> initial configuration had 4 bricks of 59TB each with 2 on each server. Prior 
>> to the update to CentOS 7.4 and gluster 3.12.4, ‘df’ correctly showed the 
>> size for the volume as 233TB. After the update, we added 2 bricks with 1 on 
>> each server, but the output of ‘df’ still only listed 233TB for the volume. 
>> We added 2 more bricks, again with 1 on each server. The output of ‘df’ now 
>> shows 350TB, but the aggregate of eight 59TB bricks should be ~466TB.
>>  
>> Configuration 2: A distributed, replicated volume with 9 bricks on each 
>> server for a total of ~350TB of storage. After the server update to RHEL 6.9 
>> and gluster 3.12.4, the volume now shows as having 50TB with ‘df’. No 
>> changes were made to this volume after the update.
>>  
>> In both cases, examining the bricks shows that the space and files are still 
>> there, just not reported correctly with ‘df’. All machines have been 
>> rebooted and the problem persists.
>>  
>> Any help/advice you can give on this would be greatly appreciated.
>>  
>> Thanks in advance.
>> Eva Freer
>> 
>> 
>  

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-30 Thread Freer, Eva B.
Sam,

For du -sh on my newer volume, the result is 161T. The sum of the Used space in 
the df -h output for all the bricks is ~163T. Close enough for me to believe 
everything is there. The total used space in the df -h of the mountpoint is 
83T, roughly half of what is actually used.

Relevant lines from df -h on server-A:
Filesystem          Size  Used  Avail  Use%  Mounted on
/dev/sda1            59T   42T    17T   72%  /bricks/data_A1
/dev/sdb1            59T   45T    14T   77%  /bricks/data_A2
/dev/sdd1            59T   39M    59T    1%  /bricks/data_A4
/dev/sdc1            59T  1.9T    57T    4%  /bricks/data_A3
server-A:/dataeng   350T   83T   268T   24%  /dataeng

And on server-B:
Filesystem          Size  Used  Avail  Use%  Mounted on
/dev/sdb1            59T   34T    25T   58%  /bricks/data_B2
/dev/sdc1            59T  2.0T    57T    4%  /bricks/data_B3
/dev/sdd1            59T   39M    59T    1%  /bricks/data_B4
/dev/sda1            59T   38T    22T   64%  /bricks/data_B1
server-B:/dataeng   350T   83T   268T   24%  /dataeng
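
(Illustrative arithmetic, assuming the upgrade left shared-brick-count at 2 on
the four original bricks and at 1 on the four added afterwards, consistent
with the per-brick averaging described elsewhere in this thread:

  size:  4 x 59T/2 + 4 x 59T = 118T + 236T ~ 354T      (df reports 350T)
  used:  (42 + 45 + 38 + 34)T/2 + (1.9 + 2.0)T ~ 83T   (df reports 83T)

so the mount-point figures look like the real brick totals scaled down by
stale shared-brick-count values rather than missing data.)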

Eva Freer

From: Sam McLeod <mailingli...@smcleod.net>
Date: Tuesday, January 30, 2018 at 9:43 PM
To: Eva Freer <free...@ornl.gov>
Cc: "gluster-users@gluster.org" <gluster-users@gluster.org>, "Greene, Tami 
McFarlin" <gree...@ornl.gov>
Subject: Re: [Gluster-users] df does not show full volume capacity after update 
to 3.12.4

We noticed something similar.

Out of interest, does du -sh . show the same size?

--
Sam McLeod (protoporpoise on IRC)
https://smcleod.net
https://twitter.com/s_mcleod

Words are my own opinions and do not necessarily represent those of my employer 
or partners.


On 31 Jan 2018, at 12:47 pm, Freer, Eva B. <free...@ornl.gov> wrote:

After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster 3.12.4, the 
‘df’ command shows only part of the available space on the mount point for 
multi-brick volumes. All nodes are at 3.12.4. This occurs on both servers and 
clients.

We have 2 different server configurations.

Configuration 1: A distributed volume of 8 bricks with 4 on each server. The 
initial configuration had 4 bricks of 59TB each with 2 on each server. Prior to 
the update to CentOS 7.4 and gluster 3.12.4, ‘df’ correctly showed the size for 
the volume as 233TB. After the update, we added 2 bricks with 1 on each server, 
but the output of ‘df’ still only listed 233TB for the volume. We added 2 more 
bricks, again with 1 on each server. The output of ‘df’ now shows 350TB, but 
the aggregate of eight 59TB bricks should be ~466TB.

Configuration 2: A distributed, replicated volume with 9 bricks on each server 
for a total of ~350TB of storage. After the server update to RHEL 6.9 and 
gluster 3.12.4, the volume now shows as having 50TB with ‘df’. No changes were 
made to this volume after the update.

In both cases, examining the bricks shows that the space and files are still 
there, just not reported correctly with ‘df’. All machines have been rebooted 
and the problem persists.

Any help/advice you can give on this would be greatly appreciated.

Thanks in advance.
Eva Freer



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-30 Thread Sam McLeod
We noticed something similar.

Out of interest, does du -sh . show the same size?
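
(Concretely, something like:

  cd /dataeng && du -sh .
  df -h /dataeng

using the mount point from this thread as an illustration. du walks the files
themselves, while df reports the aggregated statfs numbers from the bricks, so
a large gap between the two points at the size accounting rather than at
missing data.)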

--
Sam McLeod (protoporpoise on IRC)
https://smcleod.net
https://twitter.com/s_mcleod

Words are my own opinions and do not necessarily represent those of my employer 
or partners.

> On 31 Jan 2018, at 12:47 pm, Freer, Eva B.  wrote:
> 
> After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster 3.12.4, the 
> ‘df’ command shows only part of the available space on the mount point for 
> multi-brick volumes. All nodes are at 3.12.4. This occurs on both servers and 
> clients. 
>  
> We have 2 different server configurations.
>  
> Configuration 1: A distributed volume of 8 bricks with 4 on each server. The 
> initial configuration had 4 bricks of 59TB each with 2 on each server. Prior 
> to the update to CentOS 7.4 and gluster 3.12.4, ‘df’ correctly showed the 
> size for the volume as 233TB. After the update, we added 2 bricks with 1 on 
> each server, but the output of ‘df’ still only listed 233TB for the volume. 
> We added 2 more bricks, again with 1 on each server. The output of ‘df’ now 
> shows 350TB, but the aggregate of eight 59TB bricks should be ~466TB.
>  
> Configuration 2: A distributed, replicated volume with 9 bricks on each 
> server for a total of ~350TB of storage. After the server update to RHEL 6.9 
> and gluster 3.12.4, the volume now shows as having 50TB with ‘df’. No changes 
> were made to this volume after the update.
>  
> In both cases, examining the bricks shows that the space and files are still 
> there, just not reported correctly with ‘df’. All machines have been rebooted 
> and the problem persists.
>  
> Any help/advice you can give on this would be greatly appreciated.
>  
> Thanks in advance.
> Eva Freer
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-30 Thread Freer, Eva B.
After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster 3.12.4, the 
‘df’ command shows only part of the available space on the mount point for 
multi-brick volumes. All nodes are at 3.12.4. This occurs on both servers and 
clients.

We have 2 different server configurations.

Configuration 1: A distributed volume of 8 bricks with 4 on each server. The 
initial configuration had 4 bricks of 59TB each with 2 on each server. Prior to 
the update to CentOS 7.4 and gluster 3.12.4, ‘df’ correctly showed the size for 
the volume as 233TB. After the update, we added 2 bricks with 1 on each server, 
but the output of ‘df’ still only listed 233TB for the volume. We added 2 more 
bricks, again with 1 on each server. The output of ‘df’ now shows 350TB, but 
the aggregate of eight 59TB bricks should be ~466TB.

Configuration 2: A distributed, replicated volume with 9 bricks on each server 
for a total of ~350TB of storage. After the server update to RHEL 6.9 and 
gluster 3.12.4, the volume now shows as having 50TB with ‘df’. No changes were 
made to this volume after the update.

In both cases, examining the bricks shows that the space and files are still 
there, just not reported correctly with ‘df’. All machines have been rebooted 
and the problem persists.
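
(Examining the bricks here means checking them directly on each server, e.g.:

  df -h /bricks/data_*
  du -sh /bricks/data_*

with the brick paths as shown elsewhere in this thread; both still report all
the data in place.)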

Any help/advice you can give on this would be greatly appreciated.

Thanks in advance.
Eva Freer

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users