Re: [ceph-users] Usage of devices in SSD pool varies very much

2019-01-05 Thread Konstantin Shalygin
On 1/5/19 4:17 PM, Kevin Olbrich wrote:

root@adminnode:~# ceph osd tree
ID  CLASS WEIGHT   TYPE NAME                 STATUS REWEIGHT PRI-AFF
 -1       30.82903 root default
-16       30.82903     datacenter dc01
-19       30.82903         pod dc01-agg01
-10       17.43365             rack dc
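For readers following the thread: a quick way to see how full each OSD actually is (the %USE and PGS columns are what matter when usage varies this much) is the per-OSD utilization view; a minimal sketch, assuming a Luminous-era CLI:

    # Utilization and PG count per OSD, laid out along the CRUSH tree
    ceph osd df tree

    # Per-pool breakdown of stored data and objects
    ceph df detail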

Re: [ceph-users] Balancer=on with crush-compat mode

2019-01-05 Thread David C
On Sat, 5 Jan 2019, 13:38 Marc Roos wrote:
> I have straw2, balancer=on, crush-compat and it gives worst spread over
> my ssd drives (4 only) being used by only 2 pools. One of these pools
> has pg 8. Should I increase this to 16 to create a better result, or
> will it never be any better.
>
> For now I

Re: [ceph-users] problem w libvirt version 4.5 and 12.2.7

2019-01-05 Thread Jason Dillaman
Thanks for tracking this down. It appears that libvirt needs to check whether or not the fast-diff map is invalid before attempting to use it. However, assuming the map is valid, I don't immediately see a difference between the libvirt and "rbd du" implementations. Can you provide a pastebin "debug r
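For context, the checks being discussed here can also be reproduced from the command line; a minimal sketch, assuming a Luminous-era rbd CLI and placeholder pool/image names:

    # "flags" lists "object map invalid, fast diff invalid" when the map cannot be trusted
    rbd info libvirt-pool/vm-disk

    # Rebuild the object map so fast-diff results become valid again
    rbd object-map rebuild libvirt-pool/vm-disk

    # Run the same "rbd du" size calculation, capturing verbose client-side logging
    rbd --debug-rbd=20 du libvirt-pool/vm-disk 2> rbd-du-debug.log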

Re: [ceph-users] Balancer=on with crush-compat mode

2019-01-05 Thread Kevin Olbrich
If I understand the balancer correctly, it balances PGs, not data. This worked perfectly fine in your case. I prefer a PG count of ~100 per OSD; you are at 30. Maybe it would help to bump the PGs. Kevin

On Sat, Jan 5, 2019 at 14:39, Marc Roos wrote:
>
>
> I have straw2, balancer=on, crush-co
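The ~30 vs. ~100 figure can be verified per OSD; a minimal sketch, assuming a Luminous-era CLI:

    # The PGS column shows how many placement groups are currently mapped to each OSD
    ceph osd df

    # Rough rule of thumb: PGs per OSD ≈ sum over pools of (pg_num * pool size) / number of OSDs,
    # e.g. a pool with pg_num 8 and size 3 spread over 4 OSDs contributes 8 * 3 / 4 = 6 PGs per OSD.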

[ceph-users] Balancer=on with crush-compat mode

2019-01-05 Thread Marc Roos
I have straw2, balancer=on and crush-compat mode, and it gives the worst spread over my ssd drives (only 4), which are used by only 2 pools. One of these pools has pg 8. Should I increase this to 16 to create a better result, or will it never be any better? For now I like to stick to crush-compat, so I can use
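If the PG count is raised, both pg_num and pgp_num have to be increased, and the balancer can then be re-scored; a minimal sketch, assuming a Luminous-era CLI and a placeholder pool name:

    # Current value
    ceph osd pool get ssd-pool pg_num

    # Raise pg_num first, then pgp_num so data actually starts moving
    ceph osd pool set ssd-pool pg_num 16
    ceph osd pool set ssd-pool pgp_num 16

    # Score the resulting distribution with the balancer in crush-compat mode
    ceph balancer mode crush-compat
    ceph balancer eval
    ceph balancer status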

Re: [ceph-users] Ceph community - how to make it even stronger

2019-01-05 Thread ceph . novice
Hi. What makes us struggle / wonder again and again is the absence of CEPH __man pages__. On *NIX systems, man pages are always the first place to go for help, right? Or is this considered "old school" by the CEPH makers / community? :O And as many people complain again and again, the same here as
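As an aside, whether a given installation ships man pages at all can be checked directly; a minimal sketch for the Debian-based systems mentioned elsewhere in this digest:

    # List man pages installed by the common Ceph client package
    dpkg -L ceph-common | grep '/man/'

    # Search the man database for anything Ceph-related
    apropos ceph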

[ceph-users] MDS uses up to 150 GByte of memory during journal replay

2019-01-05 Thread Matthias Aebi
Hello everyone, We are running a small cluster on 5 machines with 48 OSDs / 5 MDSs / 5 MONs, based on Luminous 12.2.10 and Debian Stretch 9.6. When using a single-MDS configuration everything works fine, and looking at the active MDS's memory, it uses ~1 GByte of memory for cache, as configured:
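On Luminous the ~1 GByte cache figure is normally governed by mds_cache_memory_limit, and the active daemon can report its current cache usage; a minimal sketch, assuming admin-socket access on the MDS host and a placeholder daemon name:

    # ceph.conf on the MDS hosts: 1 GiB cache limit (the Luminous default)
    [mds]
        mds_cache_memory_limit = 1073741824

    # Ask the running daemon how much cache it is actually using
    ceph daemon mds.mds01 cache status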

Re: [ceph-users] Usage of devices in SSD pool varies very much

2019-01-05 Thread Kevin Olbrich
root@adminnode:~# ceph osd tree
ID  CLASS WEIGHT   TYPE NAME                 STATUS REWEIGHT PRI-AFF
 -1       30.82903 root default
-16       30.82903     datacenter dc01
-19       30.82903         pod dc01-agg01
-10       17.43365             rack dc01-rack02
 -4        7.20665
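When a few OSDs in a pool fill up much faster than the rest, two common Luminous-era remedies are a one-off utilization-based reweight or letting the balancer module adjust continuously; a minimal sketch (the dry run shows what would change before anything moves):

    # Dry run: show which OSDs a utilization-based reweight would adjust
    ceph osd test-reweight-by-utilization

    # Apply the reweight (triggers data movement, so best done in a quiet period)
    ceph osd reweight-by-utilization

    # Or let the balancer module keep the distribution even automatically
    ceph balancer mode crush-compat
    ceph balancer on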