[ceph-users] Re: PG inactive - why?

2022-11-02 Thread Paweł Kowalski
No, I couldn't find anything odd in the osd.2 log, but I'm not very familiar with Ceph, so it's likely I missed something. Did I hit the 300 PGs/OSD limit? I'm not sure, since I can't find any log entry about it, and I don't know how to calculate the PG count on that OSD for that moment. One thing whi
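A quick way to check the per-OSD PG count at a given moment (a minimal sketch assuming a reasonably recent Ceph release; osd.2 is simply the OSD mentioned above):

    # the PGS column shows how many PGs each OSD currently holds
    ceph osd df tree
    # count the PGs mapped to osd.2 (the output includes one header line)
    ceph pg ls-by-osd 2 | wc -l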

[ceph-users] Re: PG inactive - why?

2022-11-02 Thread Eugen Block
Hi, "So I guess that if max PGs per OSD was an issue, the problem should appear right after creating a new pool, am I right?" It would happen right after removing or adding OSDs (btw, the default is 250 PGs/OSD). But with only around 400 PGs and assuming a pool size of 3 you shouldn't be faci
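For reference, a sketch of how that limit could be inspected or raised (assuming a release with the centralized config store; the value 300 is only an example):

    # show the current limit (default 250 on recent releases)
    ceph config get mon mon_max_pg_per_osd
    # raise it cluster-wide if PG counts legitimately exceed the default
    ceph config set global mon_max_pg_per_osd 300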

[ceph-users] Re: Nautilus slow using "ceph tell osd.* bench"

2022-11-02 Thread Olivier Chaze
Hi, Did you find a workaround or explanation by any chance?
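For context, the bench command in question, roughly as it is usually invoked (the OSD id and sizes below are only examples):

    # write roughly 1 GiB in 4 MiB blocks to osd.0's object store
    ceph tell osd.0 bench
    # or a smaller run: 100 MiB total in 4 KiB blocks
    ceph tell osd.0 bench 104857600 4096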

[ceph-users] RBD and Ceph FS for private cloud

2022-11-02 Thread Mevludin Blazevic
Hi all, I am planning to set up an RBD pool on my Ceph cluster for virtual machines created in my CloudStack environment. In parallel, a CephFS pool should be used as secondary storage for VM snapshots, ISOs, etc. Are there any performance issues when using both RBD and CephFS, or is it bett
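A minimal sketch of such a setup (the pool and filesystem names are made up; assumes Nautilus or later for 'ceph fs volume create'):

    # RBD pool for CloudStack primary storage
    ceph osd pool create vms 128
    ceph osd pool application enable vms rbd
    rbd pool init vms
    # CephFS filesystem for secondary storage (snapshots, ISOs, templates)
    ceph fs volume create secondary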

[ceph-users] Re: PG inactive - why?

2022-11-02 Thread Paweł Kowalski
I had to check the logs once again to find out what exactly I did... 1. Destroyed 2 OSDs from host pirat and recreated them, but backfilling was still in progress: 2022-10-26T13:22:13.744545+0200 mgr.skarb (mgr.40364478) 93039 : cluster [DBG] pgmap v94205: 285 pgs: 2 active+undersized+degraded+
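A few commands that show backfill progress while such an operation is running (a generic sketch, not specific to this cluster):

    # overall recovery/backfill summary
    ceph -s
    # list only the PGs still backfilling or degraded
    ceph pg dump pgs_brief | grep -E 'backfill|degraded'
    # follow cluster log events like the pgmap line quoted above
    ceph -w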

[ceph-users] Re: How to force PG merging in one step?

2022-11-02 Thread Frank Schilder
Hi Eugen, the PG merge finished and I still observe that no PG warning shows up. We have mon_max_pg_per_osd set to 300 (mgr, advanced) and I have an OSD with 306 PGs. Still, no warning: # ceph health detail HEALTH_OK Is this not checked per OSD? This wo
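One way to list individual OSDs above a given PG count (a sketch assuming the JSON field names emitted by recent 'ceph osd df' output; the threshold 300 matches the setting above):

    ceph osd df --format json | \
      jq -r '.nodes[] | select(.pgs > 300) | "\(.name) \(.pgs)"'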

[ceph-users] Missing OSD in up set

2022-11-02 Thread Nicola Mori
Dear Ceph users, I have one PG in my cluster that is constantly in the active+clean+remapped state. From what I understand there might be a problem with the up set: # ceph pg map 3.5e osdmap e23638 pg 3.5e (3.5e) -> up [38,78,55,49,40,39,64,2147483647] acting [38,78,55,49,40,39,64,68] The last OSD
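2147483647 is how CRUSH marks a slot it could not fill (ITEM_NONE), i.e. only seven OSDs were found for an eight-way placement. A couple of commands that help inspect this (pool id 3 is taken from the PG id above):

    # detailed peering/placement info for the PG
    ceph pg 3.5e query
    # which CRUSH rule the pool uses, and how that rule is defined
    ceph osd pool ls detail | grep 'pool 3 '
    ceph osd crush rule dump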

[ceph-users] Developers asked, and users answered: What is the use case of your Ceph cluster?

2022-11-02 Thread Laura Flores
Dear Ceph Users, Two weeks ago, the Ceph project conducted a user survey to understand how people are using their Ceph clusters in the wild. The results are summarized in this blog post! https://ceph.io/en/news/blog/2022/ceph-use-case-survey-2022/ We received quite a few interesting use cases tha

[ceph-users] Re: OSDs are not utilized evenly

2022-11-02 Thread Denis Polom
Hi Joseph, thank you for the answer. But if I'm reading the 'ceph osd df' output I posted correctly, I see there are about 195 PGs per OSD. There are 608 OSDs in the pool, which is the only data pool. From what I have calculated, PG calc says that the PG number is fine. On 11/1/22 14:03, Joseph Mundacka
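If the uneven utilization itself is the problem, the upmap balancer is the usual tool (a sketch; assumes all clients are Luminous or newer):

    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status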

[ceph-users] Re: Missing OSD in up set

2022-11-02 Thread Frank Schilder
Hi Nicola, it might be https://docs.ceph.com/en/quincy/rados/troubleshooting/troubleshooting-pg/#crush-gives-up-too-soon or https://tracker.ceph.com/issues/57348. Best regards, Frank Schilder, AIT Risø Campus, Bygning 109, rum S14 From: Nic
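For the first link, the workaround is essentially to raise the number of placement attempts in the CRUSH rule used by the pool; a rough sketch of the edit cycle (file names are arbitrary):

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # edit the rule for the affected pool, e.g. raise 'step set_choose_tries' to 100
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new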

[ceph-users] Lots of OSDs with failed asserts

2022-11-02 Thread Daniel Brunner
Hi, more and more OSDs now crash all the time, and I've lost more OSDs than my replication allows; all my data is currently down or inactive. Can somebody help me fix those asserts and get them up again (so I can start my disaster recovery backup)? $ sudo /usr/bin/ceph-osd -f --cluster ceph --id
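If some of the crashed OSDs still mount their data, their PGs can be exported for safekeeping before any repair attempt using ceph-objectstore-tool (a sketch; the OSD id, data path and PG id below are placeholders, and the tool must be run with the OSD daemon stopped):

    # list the PGs held by the stopped OSD
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 --op list-pgs
    # export one PG to a file that can be imported elsewhere later
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
      --op export --pgid 3.5e --file /backup/3.5e.export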