[ceph-users] Re: Concerns about swap in ceph nodes

2023-03-17 Thread sbryan Song
In our case, we are using CentOS 8 and created a swap partition:
# cat /etc/redhat-release
CentOS Linux release 8.3.2011

Fstab info:
# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat May  1 00:33:34 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/vg0-root                       /          xfs   defaults                                        0 0
UUID=9b66c5ea-5d44-4311-b386-91c43cd04d4d  /boot      ext4  defaults                                        1 2
UUID=1D56-477E                             /boot/efi  vfat  defaults,uid=0,gid=0,umask=077,shortname=winnt  0 2
/dev/mapper/vg0-swap                       none       swap  defaults                                        0 0
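
A quick way to see which fstab entries will (re-)enable swap at boot is to filter on the third field. A sketch, run here against a sample fstab mirroring the one above; on a real node you would point awk at /etc/fstab itself:

```shell
# List uncommented swap entries from an fstab (sample data below).
# On a real node:  awk '$3 == "swap" && $1 !~ /^#/ { print $1 }' /etc/fstab
awk '$3 == "swap" && $1 !~ /^#/ { print $1 }' <<'EOF'
/dev/mapper/vg0-root /    xfs  defaults 0 0
/dev/mapper/vg0-swap none swap defaults 0 0
EOF
# -> /dev/mapper/vg0-swap
```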
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Concerns about swap in ceph nodes

2023-03-16 Thread Boris Behrens
Maybe worth mentioning, because it caught me by surprise:
Ubuntu creates a swap file (/swap.img) if you do not specify a swap
partition (check /etc/fstab).
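
To check whether such a swap file is actually in use, /proc/swaps reports a Type column of `file` for file-backed swap. A sketch against sample /proc/swaps output (on a live node, feed the real file to awk instead of the heredoc):

```shell
# Detect active swap *files* (as opposed to partitions) in /proc/swaps.
# Sample data below; on a node:  awk 'NR > 1 && $2 == "file" { print $1 }' /proc/swaps
awk 'NR > 1 && $2 == "file" { print $1 }' <<'EOF'
Filename   Type   Size     Used  Priority
/swap.img  file   8388604  0     -2
EOF
# -> /swap.img
```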

Cheers
 Boris

Am Mi., 15. März 2023 um 22:11 Uhr schrieb Anthony D'Atri <
a...@dreamsnake.net>:

>
> With CentOS/Rocky 7-8 I’ve observed unexpected usage of swap when there is
> plenty of physmem available.
>
> Swap IMHO is a relic of a time when RAM capacities were lower and much
> more expensive.
>
> In years beginning with a 2, and with Ceph specifically, I assert that swap
> should never be enabled during day-to-day operation.
>
> The RAM you need depends in part on the media you’re using, and how many
> per node, but most likely you can and should disable your swap.
>
> >
> > Hello,
> > We have a 6-node ceph cluster, all of them have osd running and 3 of
> them (ceph-1 to ceph-3 )also has the ceph-mgr and ceph-mon. Here is the
> detailed configuration of each node (swap on ceph-1 to ceph-3 has been
> disabled after the alarm):
> >
> > # ceph-1 free -h
> >totalusedfree  shared
> buff/cache   available
> > Mem:  187Gi38Gi   5.4Gi   4.1Gi   143Gi
>  142Gi
> > Swap:0B  0B  0B
> > # ceph-2 free -h
> >totalusedfree  shared
> buff/cache   available
> > Mem:  187Gi49Gi   2.6Gi   4.0Gi   135Gi
>  132Gi
> > Swap:0B  0B  0B
> > # ceph-3 free -h
> >totalusedfree  shared
> buff/cache   available
> > Mem:  187Gi37Gi   4.6Gi   4.0Gi   145Gi
>  144Gi
> > Swap:0B  0B  0B
> > # ceph-4 free -h
> >totalusedfree  shared
> buff/cache   available
> > Mem:  251Gi31Gi   8.3Gi   231Mi   211Gi
>  217Gi
> > Swap: 124Gi   3.8Gi   121Gi
> > # ceph-5 free -h
> >totalusedfree  shared
> buff/cache   available
> > Mem:  251Gi32Gi14Gi   135Mi   204Gi
>  216Gi
> > Swap: 124Gi   4.0Gi   121Gi
> > # ceph-6 free -h
> >totalusedfree  shared
> buff/cache   available
> > Mem:  251Gi30Gi16Gi   145Mi   204Gi
>  218Gi
> > Swap: 124Gi   4.0Gi   121Gi
> >
> > We have configured swap space on all of them, for ceph-mgr nodes, we
> have 8G swap space and 128G swap configured for osd nodes, and our zabbix
> has monitored a swap over 50% usage for ceph-1 to ceph-3, but our available
> space are still around 140G against the total 187G. Just wondering whether
> the swap space is necessary when we have lots of memory available?
> >
> > Thanks very much for your answering.
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>


-- 
This time, as an exception, the "UTF-8 problems" self-help group will meet in
the large hall.


[ceph-users] Re: Concerns about swap in ceph nodes

2023-03-15 Thread Anthony D'Atri

With CentOS/Rocky 7-8 I’ve observed unexpected usage of swap when there is 
plenty of physmem available.

Swap IMHO is a relic of a time when RAM capacities were lower and much more 
expensive.

In years beginning with a 2, and with Ceph specifically, I assert that swap 
should never be enabled during day-to-day operation.

The RAM you need depends in part on the media you’re using, and how many per 
node, but most likely you can and should disable your swap.
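
A minimal sketch of disabling swap for good on nodes like the ones in this thread. The swapoff/sysctl lines need root and are therefore shown only as comments; the sed expression that keeps swap off across reboots is demonstrated below on a sample fstab line rather than the real /etc/fstab:

```shell
# As root on a real node:
#   swapoff -a                                                 # release all swap now
#   sed -i.bak '/[[:space:]]swap[[:space:]]/s/^/#/' /etc/fstab # keep it off after reboot
#   sysctl vm.swappiness=10                                    # alternative: keep swap but discourage its use
#
# Demo of the sed expression on a sample fstab line:
printf '%s\n' '/dev/mapper/vg0-swap none swap defaults 0 0' |
  sed '/[[:space:]]swap[[:space:]]/s/^/#/'
# -> #/dev/mapper/vg0-swap none swap defaults 0 0
```

The same expression would also comment out a `/swap.img` entry of the kind Ubuntu's installer creates.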

> 
> Hello,
> We have a 6-node Ceph cluster; all nodes run OSDs, and three of them
> (ceph-1 to ceph-3) also run ceph-mgr and ceph-mon. Here is the detailed
> configuration of each node (swap on ceph-1 to ceph-3 was disabled after
> the alarm):
> 
> # ceph-1 free -h
>                total        used        free      shared  buff/cache   available
> Mem:           187Gi        38Gi       5.4Gi       4.1Gi       143Gi       142Gi
> Swap:             0B          0B          0B
> # ceph-2 free -h
>                total        used        free      shared  buff/cache   available
> Mem:           187Gi        49Gi       2.6Gi       4.0Gi       135Gi       132Gi
> Swap:             0B          0B          0B
> # ceph-3 free -h
>                total        used        free      shared  buff/cache   available
> Mem:           187Gi        37Gi       4.6Gi       4.0Gi       145Gi       144Gi
> Swap:             0B          0B          0B
> # ceph-4 free -h
>                total        used        free      shared  buff/cache   available
> Mem:           251Gi        31Gi       8.3Gi       231Mi       211Gi       217Gi
> Swap:          124Gi       3.8Gi       121Gi
> # ceph-5 free -h
>                total        used        free      shared  buff/cache   available
> Mem:           251Gi        32Gi        14Gi       135Mi       204Gi       216Gi
> Swap:          124Gi       4.0Gi       121Gi
> # ceph-6 free -h
>                total        used        free      shared  buff/cache   available
> Mem:           251Gi        30Gi        16Gi       145Mi       204Gi       218Gi
> Swap:          124Gi       4.0Gi       121Gi
> 
> We have configured swap space on all of them: 8G of swap for the ceph-mgr
> nodes and 128G for the OSD-only nodes. Our Zabbix monitoring reported swap
> usage above 50% on ceph-1 to ceph-3, yet available memory is still around
> 140G of the 187G total. Just wondering: is swap space necessary when we
> have plenty of memory available?
> 
> Thanks very much for your answers.
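
To see which processes are actually occupying the swap that monitoring flagged, per-process usage can be summed from the VmSwap field in /proc/&lt;pid&gt;/status (Linux-specific). Sketched here on sample status fragments; on a node you would iterate over /proc/[0-9]*/status instead:

```shell
# Sum per-process swap from VmSwap lines of /proc status files.
# Sample fragments below; on a node:
#   awk -F':' '/^VmSwap/ { ... }' /proc/[0-9]*/status
awk -F':' '/^VmSwap/ { gsub(/[^0-9]/, "", $2); total += $2 }
           END { print total " kB" }' <<'EOF'
Name: ceph-osd
VmSwap:     524288 kB
Name: ceph-mon
VmSwap:      16384 kB
EOF
# -> 540672 kB
```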