[ceph-users] data on sda with metadata on lvm partition?

2021-01-30 Thread Matt Piermarini
Hello all. After running this dev cluster with a single OSD (/dev/sda) HDD in each of the six nodes, I now want to put the metadata on the NVMe disk that is also used as the boot drive. There is plenty of space left on the NVMe, so I re-did the logical volumes to make a 50 GB LV for the metadata, thinking I'd p…
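One common way to do what the post describes is to re-create the OSD with `ceph-volume`, pointing `--block.db` at the NVMe LV. A minimal sketch; the volume-group and LV names (`vgnvme`, `osd0-db`) are hypothetical, and zapping destroys the OSD's existing data, so mark the OSD out and let the cluster rebalance first:

```shell
# Carve a 50 GB LV for the BlueStore DB out of the existing NVMe VG
# ("vgnvme" is a placeholder -- substitute your own VG name).
lvcreate -L 50G -n osd0-db vgnvme

# Wipe the old OSD on the HDD (destructive!), then re-create it with
# data on the HDD and the RocksDB/WAL metadata on the NVMe LV.
ceph-volume lvm zap /dev/sda --destroy
ceph-volume lvm create --bluestore --data /dev/sda --block.db vgnvme/osd0-db
```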

[ceph-users] Using RBD to pack billions of small files

2021-01-30 Thread Loïc Dachary
Hello. In the context of Software Heritage (a noble mission to preserve all source code)[0], artifacts have an average size of ~3 KB and there are billions of them. They never change and are never deleted. To save space it would make sense to write them, one after the other, in an ever-growing R…
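The idea in the post, append immutable blobs one after the other into a single growing store and keep a separate index of (offset, length) per key, can be sketched in a few lines. This is an illustration only; the `Packer` class and its methods are hypothetical, not part of Software Heritage or RBD:

```python
class Packer:
    """Append-only packing of small immutable blobs into one growing file.

    Each blob is written back-to-back; an in-memory index maps a key to
    its (offset, length). A real system would persist the index and write
    to an RBD image instead of a local file.
    """

    def __init__(self, path):
        self.path = path
        self.index = {}  # key -> (offset, length)
        self.f = open(path, "ab")

    def append(self, key, blob):
        offset = self.f.tell()       # current end of the pack file
        self.f.write(blob)
        self.f.flush()               # make the bytes visible to readers
        self.index[key] = (offset, len(blob))
        return offset

    def read(self, key):
        offset, length = self.index[key]
        with open(self.path, "rb") as f:
            f.seek(offset)
            return f.read(length)
```

Because the artifacts never change and are never deleted, the pack file only ever grows and the index never needs invalidation, which is what makes this layout attractive for this workload.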

[ceph-users] Re: Balancing with upmap

2021-01-30 Thread Francois Legrand
Hi, thanks for your advice. Here is the output of `ceph osd df tree`:
ID  CLASS  WEIGHT      REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE  VAR  PGS  STATUS  TYPE NAME
-1         1018.65833  -         466 TiB  214 TiB  213 TiB  117 GiB  605 GiB  252 TiB  0     0    -            root default -1…
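For reference, turning on the upmap balancer that the thread title refers to is usually just a few commands. A sketch, assuming all clients are at least Luminous (the upmap feature requires this):

```shell
# upmap needs Luminous-or-newer clients cluster-wide
ceph osd set-require-min-compat-client luminous

# switch the built-in balancer to upmap mode and enable it
ceph balancer mode upmap
ceph balancer on

# check progress / current state
ceph balancer status
```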