Re: [Gluster-users] Hot Tier exceeding watermark-hi

2018-09-29 Thread David Brown
Four hours later, no files have been demoted.

[root@Glus1 ~]# gluster volume status  FFPrimary detail
Status of volume: FFPrimary
Hot Bricks:
--
Brick: Brick Glus3:/data/glusterfs/FFPrimary/brick3
TCP Port : 49155
RDMA Port: 0
Online   : Y
Pid  : 24177
File System  : xfs
Device   : /dev/nvme0n1
Mount Options: rw,seclabel,relatime,attr2,inode64,noquota
Inode Size   : 512
Disk Space Free  : 24.3GB
Total Disk Space : 476.7GB
Inode Count  : 50877088
Free Inodes  : 50874696
--
Brick: Brick Glus2:/data/glusterfs/FFPrimary/brick2
TCP Port : 49155
RDMA Port: 0
Online   : Y
Pid  : 17994
File System  : xfs
Device   : /dev/nvme0n1
Mount Options: rw,seclabel,relatime,attr2,inode64,noquota
Inode Size   : 512
Disk Space Free  : 15.5GB
Total Disk Space : 476.7GB
Inode Count  : 32560288
Free Inodes  : 32557896
--
Brick: Brick Glus1:/data/glusterfs/FFPrimary/brick1
TCP Port : 49154
RDMA Port: 0
Online   : Y
Pid  : 23573
File System  : xfs
Device   : /dev/nvme0n1
Mount Options: rw,seclabel,relatime,attr2,inode64,noquota
Inode Size   : 512
Disk Space Free  : 13.8GB
Total Disk Space : 476.7GB
Inode Count  : 29027000
Free Inodes  : 29024515
Cold Bricks:
--
Brick: Brick Glus1:/data/glusterfs/FFPrimary/brick5
TCP Port : 49152
RDMA Port: 0
Online   : Y
Pid  : 23442
File System  : xfs
Device   : /dev/sdb1
Mount Options: rw,seclabel,relatime,attr2,inode64,noquota
Inode Size   : 512
Disk Space Free  : 2.5TB
Total Disk Space : 2.7TB
Inode Count  : 292971904
Free Inodes  : 292969488
--
Brick: Brick Glus2:/data/glusterfs/FFPrimary/brick6
TCP Port : 49153
RDMA Port: 0
Online   : Y
Pid  : 17856
File System  : xfs
Device   : /dev/sdb1
Mount Options: rw,seclabel,relatime,attr2,inode64,noquota
Inode Size   : 512
Disk Space Free  : 2.5TB
Total Disk Space : 2.7TB
Inode Count  : 292971904
Free Inodes  : 292969489
--
Brick: Brick Glus3:/data/glusterfs/FFPrimary/brick7
TCP Port : 49153
RDMA Port: 0
Online   : Y
Pid  : 24018
File System  : xfs
Device   : /dev/sdb1
Mount Options: rw,seclabel,relatime,attr2,inode64,noquota
Inode Size   : 512
Disk Space Free  : 2.5TB
Total Disk Space : 2.7TB
Inode Count  : 292971904
Free Inodes  : 292969488
--
Brick: Brick Glus1:/data/glusterfs/FFPrimary/brick8
TCP Port : 49153
RDMA Port: 0
Online   : Y
Pid  : 23518
File System  : xfs
Device   : /dev/sdc1
Mount Options: rw,seclabel,relatime,attr2,inode64,noquota
Inode Size   : 512
Disk Space Free  : 2.7TB
Total Disk Space : 2.7TB
Inode Count  : 292971904
Free Inodes  : 292969607
--
Brick: Brick Glus2:/data/glusterfs/FFPrimary/brick9
TCP Port : 49154
RDMA Port: 0
Online   : Y
Pid  : 17943
File System  : xfs
Device   : /dev/sdc1
Mount Options: rw,seclabel,relatime,attr2,inode64,noquota
Inode Size   : 512
Disk Space Free  : 2.7TB
Total Disk Space : 2.7TB
Inode Count  : 292971904
Free Inodes  : 292969607
--
Brick: Brick Glus3:/data/glusterfs/FFPrimary/brick10
TCP Port : 49154
RDMA Port: 0
Online   : Y
Pid  : 24108
File System  : xfs
Device   : /dev/sdc1
Mount Options: rw,seclabel,relatime,attr2,inode64,noquota
Inode Size   : 512
Disk Space Free  : 2.7TB
Total Disk Space : 2.7TB
Inode Count  : 292971904
Free Inodes  : 292969604

Re: [Gluster-users] Hot Tier exceeding watermark-hi

2018-09-29 Thread David Brown
Thank you Hari,

I have set:
cluster.tier-promote-frequency: 1800
cluster.tier-demote-frequency: 120
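For the record, these options are applied per volume with `gluster volume
set`; a sketch of the commands, using the volume name from this thread:

```shell
# Tune the tiering scan frequencies on the FFPrimary volume.
# Demote scans every 2 minutes, promote scans every 30 minutes, so
# demotions run far more often while the hot tier drains.
gluster volume set FFPrimary cluster.tier-promote-frequency 1800
gluster volume set FFPrimary cluster.tier-demote-frequency 120

# Confirm the values took effect.
gluster volume get FFPrimary cluster.tier-demote-frequency
```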

I will let you know if it makes a difference after some time. So far (10
minutes in), nothing has changed.
I agree that, judging by the output of 'gluster volume tier FFPrimary
status', demotion appears to be happening. However, for the last 24 hours
nothing in the tier status report has changed except the run time. Could
the migration be stuck? How would I know? Is there a way to restart it
without restarting the cluster?

On Sat, Sep 29, 2018 at 11:08 AM Hari Gowtham  wrote:

> Hi,
>
> I can see from the status you provided that demotion is happening.
> Do verify it.
> I recommend changing cluster.tier-demote-frequency to 120 and
> cluster.tier-promote-frequency to 1800 to speed up demotions until the
> hot tier has been emptied to a certain extent. Later you can restore the
> values you have now.
> On Sat, Sep 29, 2018 at 5:39 PM David Brown  wrote:
> >
> > Hey Everyone,
> >
> > I have a 3 node GlusterFS cluster that uses an NVMe hot tier and an
> > HDD cold tier.
> > I recently ran into some problems when the hot tier became full, with
> > df -h showing 100%.
> >
> > I did not have watermark-hi set, but it is my understanding that 90% is
> > the default. In an attempt to get the cluster to demote some files, I
> > set cluster.watermark-hi: 80, but it is still not demoting.
> >
> >
> > [root@Glus1 ~]# gluster volume info
> >
> > Volume Name: FFPrimary
> > Type: Tier
> > Volume ID: 466ec53c-d1ef-4ebc-8414-d7d070dfe61e
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 9
> > Transport-type: tcp
> > Hot Tier :
> > Hot Tier Type : Replicate
> > Number of Bricks: 1 x 3 = 3
> > Brick1: Glus3:/data/glusterfs/FFPrimary/brick3
> > Brick2: Glus2:/data/glusterfs/FFPrimary/brick2
> > Brick3: Glus1:/data/glusterfs/FFPrimary/brick1
> > Cold Tier:
> > Cold Tier Type : Distributed-Replicate
> > Number of Bricks: 2 x 3 = 6
> > Brick4: Glus1:/data/glusterfs/FFPrimary/brick5
> > Brick5: Glus2:/data/glusterfs/FFPrimary/brick6
> > Brick6: Glus3:/data/glusterfs/FFPrimary/brick7
> > Brick7: Glus1:/data/glusterfs/FFPrimary/brick8
> > Brick8: Glus2:/data/glusterfs/FFPrimary/brick9
> > Brick9: Glus3:/data/glusterfs/FFPrimary/brick10
> > Options Reconfigured:
> > cluster.tier-promote-frequency: 120
> > cluster.tier-demote-frequency: 1800
> > cluster.watermark-low: 60
> > cluster.watermark-hi: 80
> > performance.flush-behind: on
> > performance.cache-max-file-size: 128MB
> > performance.cache-size: 25GB
> > diagnostics.count-fop-hits: off
> > diagnostics.latency-measurement: off
> > cluster.tier-mode: cache
> > features.ctr-enabled: on
> > transport.address-family: inet
> > nfs.disable: on
> > performance.client-io-threads: off
> > [root@Glus1 ~]# gluster volume tier FFPrimary status
> > Node         Promoted files   Demoted files   Status        run time in h:m:s
> > ---------    --------------   -------------   -----------   -----------------
> > localhost    49               0               in progress   5151:30:45
> > Glus2        0                0               in progress   5151:30:45
> > Glus3        0                2075            in progress   5151:30:47
> > Tiering Migration Functionality: FFPrimary: success
> > [root@Glus1 ~]#
> >
> > What can cause GlusterFS to stop demoting files and allow it to
> completely fill the Hot Tier?
> >
> > Thank you!
> >
> >
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> Regards,
> Hari Gowtham.
>

Re: [Gluster-users] Hot Tier exceeding watermark-hi

2018-09-29 Thread Hari Gowtham
Hi,

I can see from the status you provided that demotion is happening.
Do verify it.
I recommend changing cluster.tier-demote-frequency to 120 and
cluster.tier-promote-frequency to 1800 to speed up demotions until the
hot tier has been emptied to a certain extent. Later you can restore the
values you have now.
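The effect of those two values can be seen with a quick back-of-the-envelope
check (illustrative Python, not part of Gluster; each frequency is the
interval in seconds between migration passes of the tier daemon):

```python
# Converting each interval to passes per hour shows the skew toward
# demotion that the suggested values create.
def passes_per_hour(frequency_s: int) -> float:
    """Number of migration passes the tier daemon runs per hour."""
    return 3600 / frequency_s

print(passes_per_hour(120))   # demote passes per hour: 30.0
print(passes_per_hour(1800))  # promote passes per hour: 2.0
```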
On Sat, Sep 29, 2018 at 5:39 PM David Brown  wrote:
>
> Hey Everyone,
>
> I have a 3 node GlusterFS cluster that uses an NVMe hot tier and an HDD
> cold tier.
> I recently ran into some problems when the hot tier became full, with
> df -h showing 100%.
>
> I did not have watermark-hi set, but it is my understanding that 90% is
> the default. In an attempt to get the cluster to demote some files, I set
> cluster.watermark-hi: 80, but it is still not demoting.
>
>
> [root@Glus1 ~]# gluster volume info
>
> Volume Name: FFPrimary
> Type: Tier
> Volume ID: 466ec53c-d1ef-4ebc-8414-d7d070dfe61e
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 9
> Transport-type: tcp
> Hot Tier :
> Hot Tier Type : Replicate
> Number of Bricks: 1 x 3 = 3
> Brick1: Glus3:/data/glusterfs/FFPrimary/brick3
> Brick2: Glus2:/data/glusterfs/FFPrimary/brick2
> Brick3: Glus1:/data/glusterfs/FFPrimary/brick1
> Cold Tier:
> Cold Tier Type : Distributed-Replicate
> Number of Bricks: 2 x 3 = 6
> Brick4: Glus1:/data/glusterfs/FFPrimary/brick5
> Brick5: Glus2:/data/glusterfs/FFPrimary/brick6
> Brick6: Glus3:/data/glusterfs/FFPrimary/brick7
> Brick7: Glus1:/data/glusterfs/FFPrimary/brick8
> Brick8: Glus2:/data/glusterfs/FFPrimary/brick9
> Brick9: Glus3:/data/glusterfs/FFPrimary/brick10
> Options Reconfigured:
> cluster.tier-promote-frequency: 120
> cluster.tier-demote-frequency: 1800
> cluster.watermark-low: 60
> cluster.watermark-hi: 80
> performance.flush-behind: on
> performance.cache-max-file-size: 128MB
> performance.cache-size: 25GB
> diagnostics.count-fop-hits: off
> diagnostics.latency-measurement: off
> cluster.tier-mode: cache
> features.ctr-enabled: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> [root@Glus1 ~]# gluster volume tier FFPrimary status
> Node         Promoted files   Demoted files   Status        run time in h:m:s
> ---------    --------------   -------------   -----------   -----------------
> localhost    49               0               in progress   5151:30:45
> Glus2        0                0               in progress   5151:30:45
> Glus3        0                2075            in progress   5151:30:47
> Tiering Migration Functionality: FFPrimary: success
> [root@Glus1 ~]#
>
> What can cause GlusterFS to stop demoting files and allow it to completely 
> fill the Hot Tier?
>
> Thank you!
>
>
>



-- 
Regards,
Hari Gowtham.


[Gluster-users] Hot Tier exceeding watermark-hi

2018-09-29 Thread David Brown
Hey Everyone,

I have a 3 node GlusterFS cluster that uses an NVMe hot tier and an HDD
cold tier.
I recently ran into some problems when the hot tier became full, with
df -h showing 100%.

I did not have watermark-hi set, but it is my understanding that 90% is
the default. In an attempt to get the cluster to demote some files, I set
cluster.watermark-hi: 80, but it is still not demoting.
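As documented for cache-mode tiering, the watermarks split hot-tier usage
into three regimes. A simplified Python sketch of that behaviour (illustrative
only; the real tier daemon's decision between the watermarks is probabilistic,
and this is not Gluster's actual code):

```python
def tier_action(used_pct: float, low: float = 60.0, hi: float = 90.0) -> str:
    """Simplified sketch of GlusterFS cache-mode watermark behaviour."""
    if used_pct < low:
        # Plenty of room: hot files are promoted, nothing is demoted.
        return "promote only"
    if used_pct < hi:
        # Between the watermarks: promotion and demotion both occur,
        # weighted by how full the hot tier is.
        return "promote and demote (probabilistic)"
    # At or above watermark-hi: promotion stops, demotion takes over.
    return "demote only"

print(tier_action(100.0))          # a full tier should be demote-only
print(tier_action(85.0, hi=80.0))  # still demote-only after watermark-hi: 80
```

By this logic a hot tier at 100% should be demoting and nothing else, which
is why a full tier that is not demoting suggests the migration daemon is stuck
rather than misconfigured.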


[root@Glus1 ~]# gluster volume info

Volume Name: FFPrimary
Type: Tier
Volume ID: 466ec53c-d1ef-4ebc-8414-d7d070dfe61e
Status: Started
Snapshot Count: 0
Number of Bricks: 9
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 3 = 3
Brick1: Glus3:/data/glusterfs/FFPrimary/brick3
Brick2: Glus2:/data/glusterfs/FFPrimary/brick2
Brick3: Glus1:/data/glusterfs/FFPrimary/brick1
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 3 = 6
Brick4: Glus1:/data/glusterfs/FFPrimary/brick5
Brick5: Glus2:/data/glusterfs/FFPrimary/brick6
Brick6: Glus3:/data/glusterfs/FFPrimary/brick7
Brick7: Glus1:/data/glusterfs/FFPrimary/brick8
Brick8: Glus2:/data/glusterfs/FFPrimary/brick9
Brick9: Glus3:/data/glusterfs/FFPrimary/brick10
Options Reconfigured:
cluster.tier-promote-frequency: 120
cluster.tier-demote-frequency: 1800
cluster.watermark-low: 60
cluster.watermark-hi: 80
performance.flush-behind: on
performance.cache-max-file-size: 128MB
performance.cache-size: 25GB
diagnostics.count-fop-hits: off
diagnostics.latency-measurement: off
cluster.tier-mode: cache
features.ctr-enabled: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@Glus1 ~]# gluster volume tier FFPrimary status
Node         Promoted files   Demoted files   Status        run time in h:m:s
---------    --------------   -------------   -----------   -----------------
localhost    49               0               in progress   5151:30:45
Glus2        0                0               in progress   5151:30:45
Glus3        0                2075            in progress   5151:30:47
Tiering Migration Functionality: FFPrimary: success
[root@Glus1 ~]#

What can cause GlusterFS to stop demoting files and allow it to completely
fill the Hot Tier?

Thank you!