[ceph-users] Re: Frequent LARGE_OMAP_OBJECTS in cephfs metadata pool

2020-02-24 Thread Uday Bhaskar jalagam
Thanks Patrick, is this the bug you are referring to: https://tracker.ceph.com/issues/42515? We also see performance issues, mainly on metadata operations such as file stat lookups; however, mds perf dump shows no sign of any elevated latencies. Could this bug cause performance issues of that kind?
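
For reference, a minimal way to check those counters is the MDS admin socket, assuming access to the host running the active MDS (substitute the real daemon name for mds.<name>):

# ceph daemon mds.<name> perf dump | grep -A 3 -i latency

Persistently low avgtime values there would suggest the slowness is not being measured inside the MDS request path itself.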

[ceph-users] Re: Frequent LARGE_OMAP_OBJECTS in cephfs metadata pool

2020-02-24 Thread Patrick Donnelly
It's probably a recently fixed openfiletable bug. Please upgrade to v14.2.8 when it is released in the next week or so.

On Mon, Feb 24, 2020 at 1:46 PM Uday Bhaskar jalagam wrote:
> Hello Patrick,
>
> File system created around 4 months back. Using ceph version 14.2.3.
>
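
If it helps to confirm that it is the MDS open file table that is growing, a minimal check, assuming the metadata pool is named cephfs_metadata (the open-file-table objects are named mds<rank>_openfiles.<n>):

# rados -p cephfs_metadata ls | grep openfiles
# rados -p cephfs_metadata listomapkeys mds0_openfiles.0 | wc -l

A key count above the large-omap warning threshold (osd_deep_scrub_large_omap_object_key_threshold) would point at the same bug.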

[ceph-users] Re: Frequent LARGE_OMAP_OBJECTS in cephfs metadata pool

2020-02-24 Thread Uday Bhaskar jalagam
Hello Patrick,

File system created around 4 months back. Using ceph version 14.2.3.

[root@knode25 /]# ceph fs dump
dumped fsmap epoch 577
e577
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file

[ceph-users] Changing allocation size

2020-02-24 Thread Kristof Coucke
Hi all, A while back I indicated we had an issue with our cluster filling up too fast. After checking everything, we've concluded this was because we had a lot of small files and the allocation size on BlueStore was too high (64 KB). We are now re-creating the OSDs (2 disks at a time)
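
A minimal sketch of one way to apply a smaller allocation size before re-creating OSDs, assuming a 4 KiB target for HDD-backed OSDs and a Nautilus-or-later cluster (the setting only takes effect for OSDs created after it is set):

# ceph config set osd bluestore_min_alloc_size_hdd 4096
# ceph osd destroy <id> --yes-i-really-mean-it
# ceph-volume lvm create --osd-id <id> --data /dev/sdX

The destroy/re-create step has to be repeated per OSD, waiting for recovery between batches.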

[ceph-users] Re: Frequent LARGE_OMAP_OBJECTS in cephfs metadata pool

2020-02-24 Thread Patrick Donnelly
On Mon, Feb 24, 2020 at 11:14 AM Uday Bhaskar jalagam wrote:
> Hello Team,
>
> I am getting frequent LARGE_OMAP_OBJECTS (1 large omap objects) warnings in one of my
> cephfs metadata pools. Can anyone explain why this pool keeps getting into
> this state frequently and how I could prevent this in

[ceph-users] Frequent LARGE_OMAP_OBJECTS in cephfs metadata pool

2020-02-24 Thread Uday Bhaskar jalagam
Hello Team, I am getting frequent "LARGE_OMAP_OBJECTS: 1 large omap objects" warnings in one of my cephfs metadata pools. Can anyone explain why this pool keeps getting into this state and how I could prevent it in the future?

# ceph health detail
HEALTH_WARN 1 large omap objects
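
A minimal sketch for finding which object triggered the warning, assuming the metadata pool is named cephfs_metadata; the cluster log records the offending object when a deep scrub detects it:

# ceph health detail
# grep -i 'large omap object found' /var/log/ceph/ceph.log
# rados -p cephfs_metadata listomapkeys <object-name> | wc -l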

[ceph-users] Limited performance

2020-02-24 Thread Fabian Zimmermann
Hi, we are currently creating a new cluster. This cluster is (as far as we can tell) a config copy (Ansible) of our existing cluster, just 5 years later, with new hardware (NVMe instead of SSD, bigger disks, ...). The setup:
* NVMe for journals and "cache" pool
* HDD with NVMe journals for
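
A minimal sketch of the kind of like-for-like baseline that helps narrow this down, assuming a throwaway test pool named bench exists on both the old and the new cluster (run the same commands on each and compare):

# rados bench -p bench 60 write --no-cleanup
# rados bench -p bench 60 seq
# rados -p bench cleanup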

[ceph-users] Migrating data to a more efficient EC pool

2020-02-24 Thread Vladimir Brik
Hello, I have ~300 TB of data in the default.rgw.buckets.data k2m2 pool and I would like to move it to a new k5m2 pool. I found instructions using cache tiering [1], but they come with a vague, scary warning, and it looks like EC-EC tiering may not even be possible [2] (is that still the case?). Can
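
For context, a minimal sketch of creating the target pool only, assuming host failure domain, a placeholder PG count of 1024, and a placeholder pool name; how to actually move the existing objects into it is the open question above:

# ceph osd erasure-code-profile set ec-k5m2 k=5 m=2 crush-failure-domain=host
# ceph osd pool create default.rgw.buckets.data.new 1024 1024 erasure ec-k5m2
# ceph osd pool application enable default.rgw.buckets.data.new rgw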

[ceph-users] Ceph @ SoCal Linux Expo

2020-02-24 Thread Gregory Farnum
Hey all, we're excited to be returning properly to SCaLE in Pasadena[1] this year (March 5-8) with a Thursday Birds-of-a-Feather session[2] and a booth in the expo hall. Please come by if you're attending the conference or are in the area to get face time with other area users and Ceph developers.

[ceph-users] Re: ceph-mon using 100% CPU after upgrade to 14.2.5

2020-02-24 Thread Dan van der Ster
Hi Bryan,

Did you ever learn more about this, or see it again? I'm facing 100% ceph-mon CPU usage now, and putting my observations here: https://tracker.ceph.com/issues/42830

Cheers, Dan

On Mon, Dec 16, 2019 at 10:58 PM Bryan Stillwell wrote:
> Sasha,
>
> I was able to get past it by
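
A minimal sketch of capturing what the busy monitor is doing, assuming the mon id matches the short hostname and that the perf tool is installed on that host:

# perf top -p $(pidof ceph-mon)
# ceph daemon mon.$(hostname -s) perf dump

A perf profile showing where the CPU time goes is usually the most useful thing to attach to a tracker issue like the one above.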

[ceph-users] Re: Unable to increase PG numbers

2020-02-24 Thread Andres Rojas Guerrero
I have tried to increase to 16, with the same result:

# ceph osd pool set cephfs_data pg_num 16
set pool 1 pg_num to 16
# ceph osd pool get cephfs_data pg_num
pg_num: 8

On 24/2/20 at 15:10, Gabryel Mason-Williams wrote:
> Have you tried making a smaller increment instead of jumping from
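
A minimal sketch of where to look when the value appears to spring back, assuming the pool is cephfs_data; on Nautilus pg_num changes are applied gradually via pg_num_target, and the pg_autoscaler (if enabled) can override a manual setting:

# ceph osd dump | grep cephfs_data
# ceph osd pool autoscale-status
# ceph osd pool set cephfs_data pg_autoscale_mode off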

[ceph-users] Re: Unable to increase PG numbers

2020-02-24 Thread Gabryel Mason-Williams
Have you tried making a smaller increment instead of jumping from 8 to 128 as that is quite a big leap?

[ceph-users] Unable to increase PG numbers

2020-02-24 Thread Andres Rojas Guerrero
Hi, I have a Nautilus installation (version 14.2.1) with a very unbalanced cephfs pool: there are 430 OSDs in the cluster, but this pool has only 8 PGs (and PGPs) with 118 TB used:

# ceph -s
  cluster:
    id:     a2269da7-e399-484a-b6ae-4ee1a31a4154
    health: HEALTH_WARN
            1 nearfull osd(s)

[ceph-users] Re: RGW do not show up in 'ceph status'

2020-02-24 Thread Andreas Haupt
Sorry for the noise - the problem was introduced by a missing iptables rule :-(

On Fri, 2020-02-21 at 09:04 +0100, Andreas Haupt wrote:
> Dear all,
>
> we recently added two additional RGWs to our Ceph cluster (version
> 14.2.7). They work flawlessly, however they do not show up in 'ceph
> status':
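
In case anyone hits the same symptom, a minimal connectivity check from the new RGW host towards a monitor, assuming the default messenger ports 6789 and 3300:

# nc -vz <mon-host> 6789
# nc -vz <mon-host> 3300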

[ceph-users] One PG is stuck and reading is not possible

2020-02-24 Thread mikko . lampikoski
ceph version 12.2.13 luminous (stable)

My whole Ceph cluster went into a kind of read-only state. Ceph status showed that client reads were at 0 op/s for the whole cluster, while a normal amount of writes was still going on. I checked the health and it said:

# ceph health detail
HEALTH_WARN Reduced data availability:
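
A minimal sketch for narrowing down which PG is stuck and why, assuming the PG id reported by health detail is 1.2f (substitute the real id):

# ceph pg dump_stuck inactive
# ceph pg 1.2f query

The "recovery_state" section of the query output usually says what the PG is waiting for.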