Re: [Gluster-users] Hot Tier exceeding watermark-hi

2018-09-30 Thread David Brown
[2018-09-30 12:25:58.626029] W [MSGID: 114031]
[client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
remote operation failed. Path: //test1
(5ac7caba-f2c3-4bf1-bb38-cf6ed940dac0). Key: trusted.glusterfs.node-uuid
[Transport endpoint is not connected]
[2018-09-30 12:25:58.629459] W [MSGID: 114031]
[client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
remote operation failed. Path: //c7947fa1-a496-400c-b6a4-b4e084b8f316
(5e909f4e-6263-4091-8378-26479496e715). Key: trusted.glusterfs.node-uuid
[Transport endpoint is not connected]
[2018-09-30 12:25:58.632994] W [MSGID: 114031]
[client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
remote operation failed. Path: //ea37891d-1ab8-40f8-95a3-eee822c7040a
(6dfe1d97-34f4-440b-9502-5eab172de58a). Key: trusted.glusterfs.node-uuid
[Transport endpoint is not connected]
[2018-09-30 12:25:58.636669] W [MSGID: 114031]
[client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
remote operation failed. Path: //a42d1e12-fc11-4a51-a744-8e6c3b11be0a
(7a081218-3cc1-442c-be4b-43bd7dd01724). Key: trusted.glusterfs.node-uuid
[Transport endpoint is not connected]
[2018-09-30 12:25:58.640155] W [MSGID: 114031]
[client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
remote operation failed. Path: //b0acb442-fe60-4022-bee2-d11d49422f20
(8788d650-9800-47ab-bf07-87f9dcd0392c). Key: trusted.glusterfs.node-uuid
[Transport endpoint is not connected]
[2018-09-30 12:25:58.643516] W [MSGID: 114031]
[client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
remote operation failed. Path: //f7357147-c2ea-4abe-9c59-136f049bfccb
(91ecb7b5-84fb-48d2-af2b-440ab6f25cfa). Key: trusted.glusterfs.node-uuid
[Transport endpoint is not connected]
[2018-09-30 12:25:58.648787] W [MSGID: 114031]
[client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
remote operation failed. Path: //35949c80-5496-445d-b2d6-e7d2061e9135
(972256c3-8eb8-49d5-a4ab-cca34abc7b0a). Key: trusted.glusterfs.node-uuid
[Transport endpoint is not connected]
[2018-09-30 12:25:58.652106] W [MSGID: 114031]
[client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
remote operation failed. Path: //13660ae8-4138-47f2-a858-8880d97b4e8d
(a6027333-b269-4810-a188-3af51c04fdcb). Key: trusted.glusterfs.node-uuid
[Transport endpoint is not connected]
[2018-09-30 12:25:58.655577] I [MSGID: 109038]
[tier.c:1122:tier_migrate_using_query_file] 0-FFPrimary-tier-dht: Demotion
failed for melvin(gfid:a9b49996-ba84-4b88-b182-7a0e677749aa)
[2018-09-30 12:25:58.658482] W [MSGID: 114031]
[client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
remote operation failed. Path: //d1cb5906-e2d9-444a-9622-599509a73e3b
(ac3cd35b-d766-46c1-ae4b-a8373e35a77b). Key: trusted.glusterfs.node-uuid
[Transport endpoint is not connected]
[2018-09-30 12:25:58.661703] W [MSGID: 114031]
[client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
remote operation failed. Path: //74ad729b-17dd-44d1-8b86-1db0dce862d8
(ac54d8b4-f826-48b2-ab18-75b561526689). Key: trusted.glusterfs.node-uuid
[Transport endpoint is not connected]
[2018-09-30 12:25:58.665051] W [MSGID: 114031]
[client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
remote operation failed. Path: //0368ebe6-9ac3-4d72-9795-0a46800aa90b
(c627ff61-1819-4a0b-81ac-d53b8ab872d4). Key: trusted.glusterfs.node-uuid
[Transport endpoint is not connected]
The message "W [MSGID: 114031]
[client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
remote operation failed. Path: //e8a6aba1-e5ce-4ded-b474-1c5bf49b1285
(edffea55-296b-4d1b-8114-b4f8dd10920a). Key: trusted.glusterfs.node-uuid
[Transport endpoint is not connected]" repeated 3 times between [2018-09-30
12:25:55.406782] and [2018-09-30 12:25:58.674842]
[2018-09-30 12:25:58.675351] E [MSGID: 109037] [tier.c:2532:tier_run]
0-FFPrimary-tier-dht: Demotion failed
[root@Glus1 FFPrimary]#
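
For reference: the repeated "Transport endpoint is not connected" warnings
against 0-FFPrimary-client-8 suggest the tier process has lost its connection
to one brick, which would also explain the failed demotions. A quick check,
assuming the standard CLI:

gluster volume status FFPrimary   # look for any brick with Online: N
gluster peer status               # each peer should show (Connected)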

On Sat, Sep 29, 2018 at 3:54 PM David Brown  wrote:

> 4 hours later, no files have been demoted
>
> [root@Glus1 ~]# gluster volume status  FFPrimary detail
> Status of volume: FFPrimary
> Hot Bricks:
>
> --
> Brick: Brick Glus3:/data/glusterfs/FFPrimary/brick3
> TCP Port : 49155
> RDMA Port: 0
> Online   : Y
> Pid  : 24177
> File System  : xfs
> Device   : /dev/nvme0n1
> Mount Options: rw,seclabel,relatime,attr2,inode64,noquota
> Inode Size   : 512
> Disk Space Free  : 24.3GB
> Total Disk Space : 476.7GB
> Inode Count  : 50877088
> Free Inodes  : 50874696
>
> --
> Brick: Brick Glus2:/data/glusterfs/FFPrimary/brick2
> TCP Port : 49155
> RDMA 

Re: [Gluster-users] Hot Tier exceeding watermark-hi

2018-09-29 Thread David Brown
Volume Name: FFPrimary
Type: Tier
Volume ID: 466ec53c-d1ef-4ebc-8414-d7d070dfe61e
Status: Started
Snapshot Count: 0
Number of Bricks: 9
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 3 = 3
Brick1: Glus3:/data/glusterfs/FFPrimary/brick3
Brick2: Glus2:/data/glusterfs/FFPrimary/brick2
Brick3: Glus1:/data/glusterfs/FFPrimary/brick1
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 3 = 6
Brick4: Glus1:/data/glusterfs/FFPrimary/brick5
Brick5: Glus2:/data/glusterfs/FFPrimary/brick6
Brick6: Glus3:/data/glusterfs/FFPrimary/brick7
Brick7: Glus1:/data/glusterfs/FFPrimary/brick8
Brick8: Glus2:/data/glusterfs/FFPrimary/brick9
Brick9: Glus3:/data/glusterfs/FFPrimary/brick10
Options Reconfigured:
cluster.tier-promote-frequency: 1800
cluster.tier-demote-frequency: 120
cluster.watermark-low: 60
cluster.watermark-hi: 80
performance.flush-behind: on
performance.cache-max-file-size: 128MB
performance.cache-size: 25GB
diagnostics.count-fop-hits: off
diagnostics.latency-measurement: off
cluster.tier-mode: cache
features.ctr-enabled: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

[root@Glus1 ~]# gluster volume tier FFPrimary status
Node          Promoted files   Demoted files   Status        run time in h:m:s
---------     --------------   -------------   -----------   -----------------
localhost     49               0               in progress   5159:15:50
Glus2         0                0               in progress   5159:15:50
Glus3         0                2075            in progress   5159:15:52
Tiering Migration Functionality: FFPrimary: success
[root@Glus1 ~]#



On Sat, Sep 29, 2018 at 11:56 AM David Brown  wrote:

> Thank you Hari,
>
> I have set:
> cluster.tier-promote-frequency: 1800
> cluster.tier-demote-frequency: 120
>
> I will let you know if it makes a difference after some time. So far (10
> minutes), nothing has changed.
> I would agree with you that, looking at the output of 'gluster volume
> tier FFPrimary status', demotion does seem to be happening. However, for
> the last 24 hours nothing in the tier status report has changed except the
> run time. Could it be stuck? How would I know? Is there a way to restart
> it without restarting the cluster?
>
>
>
>
>
> On Sat, Sep 29, 2018 at 11:08 AM Hari Gowtham  wrote:
>
>> Hi,
>>
>> From the status you provided, I can see that demotion is happening; do
>> verify it.
>> I would recommend changing cluster.tier-demote-frequency to 120 and
>> cluster.tier-promote-frequency to 1800 to speed up demotions until the
>> hot tier has emptied out to a certain extent. After that, you can restore
>> the values you have now.
>> On Sat, Sep 29, 2018 at 5:39 PM David Brown  wrote:
>> >
>> > Hey Everyone,
>> >
>> > I have a 3-node GlusterFS cluster that uses an NVMe hot tier and an
>> > HDD cold tier.
>> > I recently ran into problems when the hot tier became full, with
>> > 'df -h' showing 100%.
>> >
>> > I did not have watermark-hi set, but it is my understanding that 90%
>> > is the default. In an attempt to get the cluster to demote some files,
>> > I set cluster.watermark-hi to 80, but it is still not demoting.
>> >
>> >
>> > [root@Glus1 ~]# gluster volume info
>> >
>> > Volume Name: FFPrimary
>> > Type: Tier
>> > Volume ID: 466ec53c-d1ef-4ebc-8414-d7d070dfe61e
>> > Status: Started
>> > Snapshot Count: 0
>> > Number of Bricks: 9
>> > Transport-type: tcp
>> > Hot Tier :
>> > Hot Tier Type : Replicate
>> > Number of Bricks: 1 x 3 = 3
>> > Brick1: Glus3:/data/glusterfs/FFPrimary/brick3
>> > Brick2: Glus2:/data/glusterfs/FFPrimary/brick2
>> > Brick3: Glus1:/data/glusterfs/FFPrimary/brick1
>> > Cold Tier:
>> > Cold Tier Type : Distributed-Replicate
>> > Number of Bricks: 2 x 3 = 6
>> > Brick4: Glus1:/data/glusterfs/FFPrimary/brick5
>> > Brick5: Glus2:/data/glusterfs/FFPrimary/brick6
>> > Brick6: Glus3:/data/glusterfs/FFPrimary/brick7
>> > Brick7: Glus1:/data/glusterfs/FFPrimary/brick8
>> > Brick8: Glus2:/data/glusterfs/FFPrimary/brick9
>> > Brick9: Glus3:/data/glusterfs/FFPrimary/brick10
>> > Options Reconfigured:
>> > cluster.tier-promote-frequency: 120
>> > cluster.tier-demote-frequency: 1800
>> > cluster.watermark-low: 60
>> > cluster.watermark-hi: 80
>> > performance.flush-behind: on
>> > performance.cache-max-file-size: 128MB
>> > performance.cache-size: 25GB
>> > diagnostics.count-fop-hits: off
>> > diagnostics.latency-mea

Re: [Gluster-users] Hot Tier exceeding watermark-hi

2018-09-29 Thread David Brown
Thank you Hari,

I have set:
cluster.tier-promote-frequency: 1800
cluster.tier-demote-frequency: 120
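
For reference, assuming the standard volume-set syntax, that is:

[root@Glus1 ~]# gluster volume set FFPrimary cluster.tier-promote-frequency 1800
[root@Glus1 ~]# gluster volume set FFPrimary cluster.tier-demote-frequency 120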

I will let you know if it makes a difference after some time. So far (10
minutes), nothing has changed.
I would agree with you that, looking at the output of 'gluster volume
tier FFPrimary status', demotion does seem to be happening. However, for
the last 24 hours nothing in the tier status report has changed except the
run time. Could it be stuck? How would I know? Is there a way to restart
it without restarting the cluster?
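
If it comes to it, a blunt restart might look like the sketch below
(assuming the standard tier CLI; detach/attach actually migrates data off
the hot bricks, so it is a last resort, and the bricks may need wiping
before re-attaching):

# watch whether the promoted/demoted counters ever move
watch -n 60 'gluster volume tier FFPrimary status'

# blunt restart: detach the hot tier, then attach it again
gluster volume tier FFPrimary detach start
gluster volume tier FFPrimary detach status
gluster volume tier FFPrimary detach commit
gluster volume tier FFPrimary attach replica 3 \
    Glus3:/data/glusterfs/FFPrimary/brick3 \
    Glus2:/data/glusterfs/FFPrimary/brick2 \
    Glus1:/data/glusterfs/FFPrimary/brick1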





On Sat, Sep 29, 2018 at 11:08 AM Hari Gowtham  wrote:

> Hi,
>
> From the status you provided, I can see that demotion is happening; do
> verify it.
> I would recommend changing cluster.tier-demote-frequency to 120 and
> cluster.tier-promote-frequency to 1800 to speed up demotions until the
> hot tier has emptied out to a certain extent. After that, you can restore
> the values you have now.
> On Sat, Sep 29, 2018 at 5:39 PM David Brown  wrote:
> >
> > Hey Everyone,
> >
> > I have a 3-node GlusterFS cluster that uses an NVMe hot tier and an HDD
> > cold tier.
> > I recently ran into problems when the hot tier became full, with 'df -h'
> > showing 100%.
> >
> > I did not have watermark-hi set, but it is my understanding that 90% is
> > the default. In an attempt to get the cluster to demote some files, I
> > set cluster.watermark-hi to 80, but it is still not demoting.
> >
> >
> > [root@Glus1 ~]# gluster volume info
> >
> > Volume Name: FFPrimary
> > Type: Tier
> > Volume ID: 466ec53c-d1ef-4ebc-8414-d7d070dfe61e
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 9
> > Transport-type: tcp
> > Hot Tier :
> > Hot Tier Type : Replicate
> > Number of Bricks: 1 x 3 = 3
> > Brick1: Glus3:/data/glusterfs/FFPrimary/brick3
> > Brick2: Glus2:/data/glusterfs/FFPrimary/brick2
> > Brick3: Glus1:/data/glusterfs/FFPrimary/brick1
> > Cold Tier:
> > Cold Tier Type : Distributed-Replicate
> > Number of Bricks: 2 x 3 = 6
> > Brick4: Glus1:/data/glusterfs/FFPrimary/brick5
> > Brick5: Glus2:/data/glusterfs/FFPrimary/brick6
> > Brick6: Glus3:/data/glusterfs/FFPrimary/brick7
> > Brick7: Glus1:/data/glusterfs/FFPrimary/brick8
> > Brick8: Glus2:/data/glusterfs/FFPrimary/brick9
> > Brick9: Glus3:/data/glusterfs/FFPrimary/brick10
> > Options Reconfigured:
> > cluster.tier-promote-frequency: 120
> > cluster.tier-demote-frequency: 1800
> > cluster.watermark-low: 60
> > cluster.watermark-hi: 80
> > performance.flush-behind: on
> > performance.cache-max-file-size: 128MB
> > performance.cache-size: 25GB
> > diagnostics.count-fop-hits: off
> > diagnostics.latency-measurement: off
> > cluster.tier-mode: cache
> > features.ctr-enabled: on
> > transport.address-family: inet
> > nfs.disable: on
> > performance.client-io-threads: off
> > [root@Glus1 ~]# gluster volume tier FFPrimary status
> > Node          Promoted files   Demoted files   Status        run time in h:m:s
> > ---------     --------------   -------------   -----------   -----------------
> > localhost     49               0               in progress   5151:30:45
> > Glus2         0                0               in progress   5151:30:45
> > Glus3         0                2075            in progress   5151:30:47
> > Tiering Migration Functionality: FFPrimary: success
> > [root@Glus1 ~]#
> >
> > What can cause GlusterFS to stop demoting files and allow it to
> completely fill the Hot Tier?
> >
> > Thank you!
> >
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> Regards,
> Hari Gowtham.
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Hot Tier exceeding watermark-hi

2018-09-29 Thread David Brown
Hey Everyone,

I have a 3-node GlusterFS cluster that uses an NVMe hot tier and an HDD
cold tier.
I recently ran into problems when the hot tier became full, with 'df -h'
showing 100%.

I did not have watermark-hi set, but it is my understanding that 90% is
the default. In an attempt to get the cluster to demote some files, I set
cluster.watermark-hi to 80, but it is still not demoting.
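
For reference, assuming the standard volume-set syntax, the watermark
change would look like:

[root@Glus1 ~]# gluster volume set FFPrimary cluster.watermark-hi 80
[root@Glus1 ~]# gluster volume set FFPrimary cluster.watermark-low 60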


[root@Glus1 ~]# gluster volume info

Volume Name: FFPrimary
Type: Tier
Volume ID: 466ec53c-d1ef-4ebc-8414-d7d070dfe61e
Status: Started
Snapshot Count: 0
Number of Bricks: 9
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 3 = 3
Brick1: Glus3:/data/glusterfs/FFPrimary/brick3
Brick2: Glus2:/data/glusterfs/FFPrimary/brick2
Brick3: Glus1:/data/glusterfs/FFPrimary/brick1
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 3 = 6
Brick4: Glus1:/data/glusterfs/FFPrimary/brick5
Brick5: Glus2:/data/glusterfs/FFPrimary/brick6
Brick6: Glus3:/data/glusterfs/FFPrimary/brick7
Brick7: Glus1:/data/glusterfs/FFPrimary/brick8
Brick8: Glus2:/data/glusterfs/FFPrimary/brick9
Brick9: Glus3:/data/glusterfs/FFPrimary/brick10
Options Reconfigured:
cluster.tier-promote-frequency: 120
cluster.tier-demote-frequency: 1800
cluster.watermark-low: 60
cluster.watermark-hi: 80
performance.flush-behind: on
performance.cache-max-file-size: 128MB
performance.cache-size: 25GB
diagnostics.count-fop-hits: off
diagnostics.latency-measurement: off
cluster.tier-mode: cache
features.ctr-enabled: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@Glus1 ~]# gluster volume tier FFPrimary status
Node          Promoted files   Demoted files   Status        run time in h:m:s
---------     --------------   -------------   -----------   -----------------
localhost     49               0               in progress   5151:30:45
Glus2         0                0               in progress   5151:30:45
Glus3         0                2075            in progress   5151:30:47
Tiering Migration Functionality: FFPrimary: success
[root@Glus1 ~]#

What can cause GlusterFS to stop demoting files and allow it to completely
fill the Hot Tier?
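
If it helps, the tier daemon keeps its own log, which should record why
individual migrations fail (the path below is typical but may vary by
version):

tail -f /var/log/glusterfs/FFPrimary-tier.log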

Thank you!
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Tier Volumes

2018-02-10 Thread David Brown
Hello everyone.
I have a new GlusterFS setup with 3 servers and 2 volumes. The "HotTier"
volume uses NVMe and the "ColdTier" volume uses HDDs. How do I specify the
tiers for each volume?

I will be adding 2 more HDDs to each server. I would then like to change
from Replicate to Distributed-Replicate. I'm not sure whether that makes a
difference in the tiering setup.
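
For what it's worth, tiering in Gluster doesn't pair two volumes; the fast
bricks are attached as a hot tier to a single existing volume. A minimal
sketch, assuming the tier CLI from Gluster 3.7+ and that the NVMe bricks
are first freed from the separate HotTier volume (the add-brick paths
below are hypothetical):

# attach the NVMe bricks as a replica-3 hot tier on top of ColdTier
gluster volume tier ColdTier attach replica 3 \
    Glus1:/data/glusterfs/HotTier/brick1 \
    Glus2:/data/glusterfs/HotTier/brick2 \
    Glus3:/data/glusterfs/HotTier/brick3

# growing the cold tier to distributed-replicate later is a plain add-brick
gluster volume add-brick ColdTier replica 3 \
    Glus1:/data/glusterfs/ColdTier/brick4 \
    Glus2:/data/glusterfs/ColdTier/brick5 \
    Glus3:/data/glusterfs/ColdTier/brick6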

[root@Glus1 ~]# gluster volume info

Volume Name: ColdTier
Type: Replicate
Volume ID: 1647487b-c05a-4cf7-81a7-08102ae348b6
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: Glus1:/data/glusterfs/ColdTier/brick1
Brick2: Glus2:/data/glusterfs/ColdTier/brick2
Brick3: Glus3:/data/glusterfs/ColdTier/brick3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Volume Name: HotTier
Type: Replicate
Volume ID: 6294035d-a199-4574-be11-d48ab7c4b33c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: Glus1:/data/glusterfs/HotTier/brick1
Brick2: Glus2:/data/glusterfs/HotTier/brick2
Brick3: Glus3:/data/glusterfs/HotTier/brick3
Options Reconfigured:
auth.allow: 10.0.1.*
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@Glus1 ~]#


Thank you all for your help!
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
