Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Freer, Eva B.
Amar,

Thanks for your prompt reply. No, I do not plan to fix the code and recompile.
I was hoping it could be fixed by setting the shared-brick-count or some
other option. Since this is a production system, we will wait until a fix is in
a release.

Thanks,
Eva (865) 574-6894

From: Amar Tumballi <atumb...@redhat.com>
Date: Wednesday, January 31, 2018 at 12:15 PM
To: Eva Freer <free...@ornl.gov>
Cc: Nithya Balachandran <nbala...@redhat.com>, "Greene, Tami McFarlin" 
<gree...@ornl.gov>, "gluster-users@gluster.org" <gluster-users@gluster.org>
Subject: Re: [Gluster-users] df does not show full volume capacity after update 
to 3.12.4

Hi Freer,

Our analysis is that this issue is caused by https://review.gluster.org/17618,
specifically by 'gd_set_shared_brick_count()' in
https://review.gluster.org/#/c/17618/9/xlators/mgmt/glusterd/src/glusterd-utils.c.
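(Context for readers of the archive: as I understand that change, shared-brick-count
records how many bricks of a volume share the same underlying filesystem, and each
brick divides its reported statfs figures by that count so df does not double-count
shared storage. If the count is wrong, for example 2 for bricks that actually sit on
separate filesystems as seen later in this thread, each brick contributes only half
its real capacity, which matches the halved df totals reported here.)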

But even if we fix it today, I don't think we have a release planned 
immediately for shipping this. Are you planning to fix the code and re-compile?

Regards,
Amar


Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Freer, Eva B.
Nithya,

Yes: Tami Greene, who is copied on these emails. I will also monitor them and
work with her to get this resolved.

Thanks,
Eva (865) 574-6894

From: Nithya Balachandran <nbala...@redhat.com>
Date: Wednesday, January 31, 2018 at 12:10 PM
To: Eva Freer <free...@ornl.gov>
Cc: "Greene, Tami McFarlin" <gree...@ornl.gov>, "gluster-users@gluster.org" 
<gluster-users@gluster.org>, Amar Tumballi <atumb...@redhat.com>
Subject: Re: [Gluster-users] df does not show full volume capacity after update 
to 3.12.4

Hi Eva,

I'm sorry, but I need to check with another developer about the changes here,
and he will only be available tomorrow. Is there someone else I could work with
while you are away?

Regards,
Nithya


Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Freer, Eva B.
Nithya,

I will be out of the office for ~10 days starting tomorrow. Is there any way we 
could possibly resolve it today?

Thanks,
Eva (865) 574-6894

From: Nithya Balachandran <nbala...@redhat.com>
Date: Wednesday, January 31, 2018 at 11:26 AM
To: Eva Freer <free...@ornl.gov>
Cc: "Greene, Tami McFarlin" <gree...@ornl.gov>, "gluster-users@gluster.org" 
<gluster-users@gluster.org>, Amar Tumballi <atumb...@redhat.com>
Subject: Re: [Gluster-users] df does not show full volume capacity after update 
to 3.12.4


On 31 January 2018 at 21:50, Freer, Eva B. <free...@ornl.gov> wrote:
The values for shared-brick-count are still the same. I did not restart the
volume after setting cluster.min-free-inodes to 6%. Do I need to restart it?

That is not necessary. Let me get back to you on this tomorrow.

Regards,
Nithya



Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Freer, Eva B.
The values for shared-brick-count are still the same. I did not restart the
volume after setting cluster.min-free-inodes to 6%. Do I need to restart it?

Thanks,
Eva (865) 574-6894

From: Nithya Balachandran <nbala...@redhat.com>
Date: Wednesday, January 31, 2018 at 11:14 AM
To: Eva Freer <free...@ornl.gov>
Cc: "Greene, Tami McFarlin" <gree...@ornl.gov>, "gluster-users@gluster.org" 
<gluster-users@gluster.org>, Amar Tumballi <atumb...@redhat.com>
Subject: Re: [Gluster-users] df does not show full volume capacity after update 
to 3.12.4



On 31 January 2018 at 21:34, Freer, Eva B. <free...@ornl.gov> wrote:
Nithya,

Responding to an earlier question: before the upgrade, we were at 3.10.3 on
these servers, but some of the clients were 3.7.6. From the discussion below,
does this mean that “shared-brick-count” needs to be set to 1 for all bricks?

All of the bricks are on separate xfs partitions built on hardware RAID 6
volumes. LVM is not used. The current setting for cluster.min-free-inodes was
5%. I changed it to 6% per your instructions below. The df output is still the
same, but I haven’t yet run:
find /var/lib/glusterd/vols -type f | xargs sed -i -e 's/option shared-brick-count [0-9]*/option shared-brick-count 1/g'
Should I go ahead and do this?

Can you check if the values have been changed in the .vol files before you try 
this?

These files are regenerated every time the volume is changed, so editing them
directly may not be permanent. I was hoping that setting cluster.min-free-inodes
would correct this automatically and help us figure out what was happening, as
we have not managed to reproduce this issue yet.
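
A quick way to inspect those values in place (a sketch, using the glusterd path
and volume name from this thread):

grep 'option shared-brick-count' /var/lib/glusterd/vols/dataeng/*.vol

Each brick's .vol file should show 'option shared-brick-count 1' when every
brick sits on its own filesystem.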





Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Freer, Eva B.
Nithya,

Responding to an earlier question: before the upgrade, we were at 3.10.3 on
these servers, but some of the clients were 3.7.6. From the discussion below,
does this mean that “shared-brick-count” needs to be set to 1 for all bricks?

All of the bricks are on separate xfs partitions built on hardware RAID 6
volumes. LVM is not used. The current setting for cluster.min-free-inodes was
5%. I changed it to 6% per your instructions below. The df output is still the
same, but I haven’t yet run:
find /var/lib/glusterd/vols -type f | xargs sed -i -e 's/option shared-brick-count [0-9]*/option shared-brick-count 1/g'
Should I go ahead and do this?
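
If you do try it, the same expression can be run read-only first (a sketch:
sed -n with the p flag prints the lines it would rewrite without modifying any
files):

find /var/lib/glusterd/vols -type f | xargs sed -n 's/option shared-brick-count [0-9]*/option shared-brick-count 1/gp'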

Output of stat -f for all the bricks:

[root@jacen ~]# stat -f /bricks/data_A*
  File: "/bricks/data_A1"
ID: 801 Namelen: 255 Type: xfs
Block size: 4096   Fundamental block size: 4096
Blocks: Total: 15626471424 Free: 4530515093 Available: 4530515093
Inodes: Total: 1250159424 Free: 1250028064
  File: "/bricks/data_A2"
ID: 811 Namelen: 255 Type: xfs
Block size: 4096   Fundamental block size: 4096
Blocks: Total: 15626471424 Free: 3653183901 Available: 3653183901
Inodes: Total: 1250159424 Free: 1250029262
  File: "/bricks/data_A3"
ID: 821 Namelen: 255 Type: xfs
Block size: 4096   Fundamental block size: 4096
Blocks: Total: 15626471424 Free: 15134840607 Available: 15134840607
Inodes: Total: 1250159424 Free: 1250128031
  File: "/bricks/data_A4"
ID: 831 Namelen: 255 Type: xfs
Block size: 4096   Fundamental block size: 4096
Blocks: Total: 15626471424 Free: 15626461604 Available: 15626461604
Inodes: Total: 1250159424 Free: 1250153857

[root@jaina dataeng]# stat -f /bricks/data_B*
  File: "/bricks/data_B1"
ID: 801 Namelen: 255 Type: xfs
Block size: 4096   Fundamental block size: 4096
Blocks: Total: 15626471424 Free: 5689640723 Available: 5689640723
Inodes: Total: 1250159424 Free: 1250047934
  File: "/bricks/data_B2"
ID: 811 Namelen: 255 Type: xfs
Block size: 4096   Fundamental block size: 4096
Blocks: Total: 15626471424 Free: 6623312785 Available: 6623312785
Inodes: Total: 1250159424 Free: 1250048131
  File: "/bricks/data_B3"
ID: 821 Namelen: 255 Type: xfs
Block size: 4096   Fundamental block size: 4096
Blocks: Total: 15626471424 Free: 15106888485 Available: 15106888485
Inodes: Total: 1250159424 Free: 1250122139
  File: "/bricks/data_B4"
ID: 831 Namelen: 255 Type: xfs
Block size: 4096   Fundamental block size: 4096
Blocks: Total: 15626471424 Free: 15626461604 Available: 15626461604
Inodes: Total: 1250159424 Free: 1250153857
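
A sanity check on these figures (editorial arithmetic): 15626471424 blocks of
4096 bytes is about 64.0 TB, or 58.2 TiB, per brick, which df -h rounds to 59T;
eight such bricks come to roughly 466 TiB, the full capacity df should report
for the volume.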


Thanks,
Eva (865) 574-6894

From: Nithya Balachandran <nbala...@redhat.com>
Date: Wednesday, January 31, 2018 at 10:46 AM
To: Eva Freer <free...@ornl.gov>, "Greene, Tami McFarlin" <gree...@ornl.gov>
Cc: Amar Tumballi <atumb...@redhat.com>
Subject: Re: [Gluster-users] df does not show full volume capacity after update 
to 3.12.4

Thank you Eva.

From the files you sent:
dataeng.jacen.bricks-data_A1-dataeng.vol:option shared-brick-count 2
dataeng.jacen.bricks-data_A2-dataeng.vol:option shared-brick-count 2
dataeng.jacen.bricks-data_A3-dataeng.vol:option shared-brick-count 1
dataeng.jacen.bricks-data_A4-dataeng.vol:option shared-brick-count 1
dataeng.jaina.bricks-data_B1-dataeng.vol:option shared-brick-count 0
dataeng.jaina.bricks-data_B2-dataeng.vol:option shared-brick-count 0
dataeng.jaina.bricks-data_B3-dataeng.vol:option shared-brick-count 0
dataeng.jaina.bricks-data_B4-dataeng.vol:option shared-brick-count 0


Are all of these bricks on separate filesystem partitions? If yes, can you
please run the following on one of the gluster nodes and see if the df output
is correct after that?

gluster v set dataeng cluster.min-free-inodes 6%


If it doesn't work, please send us the stat -f output for each brick.
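
To confirm the option actually took effect, something like the following should
work on 3.12 (a sketch; 'volume get' is assumed to be available in this build):

gluster volume get dataeng cluster.min-free-inodes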

Regards,
Nithya

On 31 January 2018 at 20:41, Freer, Eva B. <free...@ornl.gov> wrote:
Nithya,

The file for one of the servers is attached.

Thanks,
Eva (865) 574-6894

From: Nithya Balachandran <nbala...@redhat.com>
Date: Wednesday, January 31, 2018 at 1:17 AM
To: Eva Freer <free...@ornl.gov>
Cc: "gluster-users@gluster.org" <gluster-users@gluster.org>, "Greene, Tami
McFarlin" <gree...@ornl.gov>
Subject: Re: [Gluster-users] df does not show full volume capacity after update 
to 3.12.4

I found this on the mailing list:
I found the issue.

The CentOS 7 RPMs, upon upgrade, modify the .vol files. Among other things,
they add "option shared-brick-count \d", using the number of brick

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-30 Thread Freer, Eva B.
Sam,

For du -sh on my newer volume, the result is 161T. The sum of the Used space in
the df -h output for all the bricks is ~163T. Close enough for me to believe
everything is there. The total used space in the df -h output for the
mountpoint is 83T, roughly half of what is actually used.

Relevant lines from df -h on server-A:
Filesystem          Size  Used Avail Use% Mounted on
/dev/sda1            59T   42T   17T  72% /bricks/data_A1
/dev/sdb1            59T   45T   14T  77% /bricks/data_A2
/dev/sdd1            59T   39M   59T   1% /bricks/data_A4
/dev/sdc1            59T  1.9T   57T   4% /bricks/data_A3
server-A:/dataeng   350T   83T  268T  24% /dataeng

And on server-B:
Filesystem Size  Used Avail Use% Mounted on
/dev/sdb1   59T   34T   25T  58% /bricks/data_B2
/dev/sdc1   59T  2.0T   57T   4% /bricks/data_B3
/dev/sdd1   59T   39M   59T   1% /bricks/data_B4
/dev/sda1   59T   38T   22T  64% /bricks/data_B1
server-B:/dataeng  350T   83T  268T  24% /dataeng

Eva Freer
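
Cross-checking those numbers (editorial arithmetic): the server-A bricks show
about 42T + 45T + 1.9T = 89T used and the server-B bricks about 38T + 34T +
2.0T = 74T, roughly 163T in total, while the mount point reports 83T, almost
exactly half. That halving is consistent with the incorrect shared-brick-count
values discussed elsewhere in this thread.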

From: Sam McLeod <mailingli...@smcleod.net>
Date: Tuesday, January 30, 2018 at 9:43 PM
To: Eva Freer <free...@ornl.gov>
Cc: "gluster-users@gluster.org" <gluster-users@gluster.org>, "Greene, Tami 
McFarlin" <gree...@ornl.gov>
Subject: Re: [Gluster-users] df does not show full volume capacity after update 
to 3.12.4

We noticed something similar.

Out of interest, does du -sh . show the same size?

--
Sam McLeod (protoporpoise on IRC)
https://smcleod.net
https://twitter.com/s_mcleod

Words are my own opinions and do not necessarily represent those of my employer 
or partners.



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-30 Thread Freer, Eva B.
After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster 3.12.4, the 
‘df’ command shows only part of the available space on the mount point for 
multi-brick volumes. All nodes are at 3.12.4. This occurs on both servers and 
clients.

We have 2 different server configurations.

Configuration 1: A distributed volume of 8 bricks with 4 on each server. The 
initial configuration had 4 bricks of 59TB each with 2 on each server. Prior to 
the update to CentOS 7.4 and gluster 3.12.4, ‘df’ correctly showed the size for 
the volume as 233TB. After the update, we added 2 bricks with 1 on each server, 
but the output of ‘df’ still only listed 233TB for the volume. We added 2 more 
bricks, again with 1 on each server. The output of ‘df’ now shows 350TB, but
the aggregate of 8 × 59TB bricks should be ~466TB.

Configuration 2: A distributed, replicated volume with 9 bricks on each server 
for a total of ~350TB of storage. After the server update to RHEL 6.9 and 
gluster 3.12.4, the volume now shows as having 50TB with ‘df’. No changes were 
made to this volume after the update.

In both cases, examining the bricks shows that the space and files are still 
there, just not reported correctly with ‘df’. All machines have been rebooted 
and the problem persists.

Any help/advice you can give on this would be greatly appreciated.

Thanks in advance.
Eva Freer

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Replace corrupted brick

2015-09-22 Thread Freer, Eva B.
Our configuration is a distributed, replicated volume with 7 pairs of bricks on
2 servers. We are in the process of adding additional storage for another brick
pair. I placed the new disks in one of the servers late last week and used the
LSI storcli command to make a RAID 6 volume of the new disks. We are running
RedHat 6.6 and Gluster 3.7.1 on both servers.

Yesterday, I ran 'parted /dev/sdj' to create a partition on the new volume.
Unfortunately, /dev/sdj was not the new volume (which is /dev/sdh). I realized
the error right away, but the system was operating OK and it was late at night,
so I decided to wait until today to try to fix this. This morning, I ran
'parted rescue 0 36.0TB'. This runs, but does not find a partition to restore.
I am using LVM, and the partition is /dev/mapper/vg_data5-lv_data5 with an xfs
filesystem on it.

The system continued to operate, but I expected that there would be problems on
reboot. I rebooted and indeed, the system can't find the volume at
/dev/mapper/vg_data5-lv_data5. Is it possible to recover this volume in place,
or do I need to just drop it from the gluster volume, recreate the LVM
partition, and then copy the files from its partner brick on the other server?
If I need to copy the files, what is the best procedure for doing it?

TIA,
Eva Freer
Oak Ridge National Laboratory
free...@ornl.gov
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
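
For readers who hit this and cannot recover the disk: rather than copying files
by hand, the usual Gluster-side route for a replicated volume is to give the
volume a fresh brick and let self-heal repopulate it from the partner. A sketch
with placeholder volume and brick paths (not a tested procedure for this exact
3.7.1 setup):

# replace the dead brick with a new, empty one on a rebuilt filesystem
gluster volume replace-brick VOLNAME server1:/bricks/data5/brick server1:/bricks/data5_new/brick commit force
# then trigger a full self-heal so the replica partner repopulates it
gluster volume heal VOLNAME full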

Re: [Gluster-users] Replace corrupted brick

2015-09-22 Thread Freer, Eva B.
Update: I was able to use the TestDisk program from cgsecurity.org to find and 
rewrite the partition info for the LVM partition. I was then able to mount the 
disk and restart the gluster volume to bring the brick back online. To make 
sure everything was OK, I then rebooted the node with the problem. I also 
rebooted all the client nodes so they have a nice clean start for the morning.
Regards,
Eva
--
Eva Broadaway Freer
Senior Development Engineer
RF, Communications, and Intelligent Systems Group
Electrical and Electronics Systems Research Division
Oak Ridge National Laboratory
free...@ornl.gov
(865) 574-6894
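
For anyone repeating this recovery, TestDisk is menu-driven; the flow is
roughly as follows (a sketch from the tool's documented workflow, not a
transcript of this session):

testdisk /dev/sdj
# in the menus: create a log, select the damaged disk, choose the partition
# table type, then Analyse -> Quick Search (or Deeper Search), mark the found
# partition, and Write the repaired table
# afterwards, re-activate LVM and remount (placeholder names):
vgchange -ay vg_data5
mount /dev/mapper/vg_data5-lv_data5 /bricks/data5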

From: Eva Freer
Date: Tuesday, September 22, 2015 5:18 PM
To: "gluster-users@gluster.org"
Cc: Eva Freer, Toby Flynn
Subject: Replace corrupted brick

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users