[ceph-users] Re: How to disable ceph-grafana during cephadm bootstrap

2021-04-14 Thread mabi
Thank you for the hint regarding the --skip-monitoring-stack parameter. I have actually already bootstrapped my cluster without this option, so is there a way to disable and remove the ceph-grafana part now, or do I need to bootstrap my cluster again? ‐‐‐ Original Message ‐‐‐ On Wednesday,
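
For reference, a rough sketch of how the monitoring stack could be removed from an already bootstrapped cluster, assuming the cephadm orchestrator is active and the services use their default names (check with ceph orch ls first):

    # List deployed services to confirm the monitoring stack service names
    ceph orch ls
    # Remove the monitoring services (default service names assumed)
    ceph orch rm grafana
    ceph orch rm prometheus
    ceph orch rm alertmanager
    ceph orch rm node-exporter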

[ceph-users] Re: Revisit Large OMAP Objects

2021-04-14 Thread by morphin
I have the same issue and have joined the club. Almost every deleted bucket is still there due to multisite. I've also removed the secondary zone and stopped sync, but these stale instances are still there. Before adding a new secondary zone I want to remove them. If you are going to run anything, please let me know.

[ceph-users] Re: ERROR: read_key_entry() idx= 1000_ ret=-2

2021-04-14 Thread by morphin
More information: I have an over-limit bucket and the error belongs to this bucket. fill_status=OVER 100%, objects_per_shard: 363472 (I use the default 100K per shard), num_shards: 750. I'm deleting objects from this bucket by absolute path, and I don't use dynamic bucket resharding due to multisite.
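
For context, the fill_status/objects_per_shard figures quoted above are the kind of output reported by the bucket limit check; a minimal sketch (the bucket name is a placeholder):

    # Report per-bucket index shard fill status (fill_status, objects_per_shard, num_shards)
    radosgw-admin bucket limit check
    # Inspect one bucket's stats and shard layout
    radosgw-admin bucket stats --bucket=<bucket-name>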

[ceph-users] Re: _delete_some new onodes has appeared since PG removal started

2021-04-14 Thread Neha Ojha
We saw this warning once in testing (https://tracker.ceph.com/issues/49900#note-1), but there, the problem was different, which also led to a crash. That issue has been fixed but if you can provide osd logs with verbose logging, we might be able to investigate further. Neha On Wed, Apr 14, 2021
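
A minimal sketch of how verbose logging could be enabled on the affected OSD, assuming osd.792 from the thread below and that debug level 20 is acceptable temporarily:

    # Raise debug levels on the affected OSD at runtime
    ceph tell osd.792 config set debug_osd 20
    ceph tell osd.792 config set debug_bluestore 20
    # Revert to the defaults once the logs have been captured
    ceph tell osd.792 config set debug_osd 1/5
    ceph tell osd.792 config set debug_bluestore 1/5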

[ceph-users] ERROR: read_key_entry() idx= 1000_ ret=-2

2021-04-14 Thread by morphin
Hello everyone! I'm running Nautilus 14.2.16 and I'm using RGW with the Beast frontend. I see this error log on every SSD OSD that is used for the RGW index. Can you please tell me what the problem is? OSD LOG: cls_rgw.cc:1102: ERROR: read_key_entry() idx=1000_matches/xdir/05/21/27260.jpg ret=-2

[ceph-users] Re: _delete_some new onodes has appeared since PG removal started

2021-04-14 Thread Igor Fedotov
Hi Dan, I've seen that once before and haven't thoroughly investigated it yet, but I think the new PG removal code just revealed this "issue". In fact it had been in the code before the patch. The warning means that new object(s) (given the object names these are apparently system objects, don't

[ceph-users] Re: [External Email] Cephadm upgrade to Pacific problem

2021-04-14 Thread Dave Hall
Radoslav, I ran into the same. For Debian 10 - recent updates - you have to add 'cgroup_enable=memory swapaccount=1' to the kernel command line (/etc/default/grub). The reference I found said that Debian decided to disable this by default and make us turn it on if we want to run containers.
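
A sketch of the Debian 10 change being described, assuming GRUB is the bootloader:

    # /etc/default/grub -- append the flags to the existing kernel command line
    GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
    # Then regenerate the GRUB configuration and reboot
    update-grub
    reboot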

[ceph-users] Re: OSDs RocksDB corrupted when upgrading nautilus->octopus: unknown WriteBatch tag

2021-04-14 Thread Jorge Boncompte
Hi, every OSD on an SSD that I have upgraded from 15.2.9->15.2.10 logs errors like the ones below. The OSDs on HDD or NVMe don't. But they restart OK and a deep-scrub of the entire pool finishes OK. Could it be the same bug? 2021-04-14T00:29:27.740+0200 7f364750d700 3 rocksdb:
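
One way to verify an affected OSD's on-disk state is an offline BlueStore/RocksDB consistency check; a sketch, assuming the OSD is stopped first (the OSD id and the systemd unit name are placeholders and depend on how the OSD was deployed):

    systemctl stop ceph-osd@<id>
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-<id>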

[ceph-users] Re: [External Email] Cephadm upgrade to Pacific problem

2021-04-14 Thread Radoslav Milanov
Thanks for the pointer, Dave. In my case, though, the problem proved to be the old Docker version (18) provided by the OS repos. Installing the latest docker-ce from docker.com resolves the problem. It would be nice, though, if the host were checked for compatibility before starting an upgrade. On 14.4.2021 г.
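
A rough sketch of replacing the distribution Docker with docker-ce on Debian 10, following Docker's upstream repository instructions of the time (package names per docker.com):

    apt-get remove docker docker.io containerd runc
    apt-get install -y ca-certificates curl gnupg lsb-release
    curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
    echo "deb https://download.docker.com/linux/debian $(lsb_release -cs) stable" \
        > /etc/apt/sources.list.d/docker.list
    apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io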

[ceph-users] Ceph Month June 2021 Event

2021-04-14 Thread Mike Perez
Hi everyone, In June 2021, we're hosting a month of Ceph presentations, lightning talks, and unconference sessions such as BOFs. There is no registration or cost to attend this event. The CFP is now open until May 12th. https://ceph.io/events/ceph-month-june-2021/cfp Speakers will receive

[ceph-users] _delete_some new onodes has appeared since PG removal started

2021-04-14 Thread Dan van der Ster
Hi Igor, After updating to 14.2.19 and then moving some PGs around we have a few warnings related to the new efficient PG removal code, e.g. [1]. Is that something to worry about? Best Regards, Dan [1] /var/log/ceph/ceph-osd.792.log:2021-04-14 20:34:34.353 7fb2439d4700 0 osd.792 pg_epoch:

[ceph-users] Re: Revisit Large OMAP Objects

2021-04-14 Thread DHilsbos
Casey; That makes sense, and I appreciate the explanation. If I were to shut down all uses of RGW and wait for replication to catch up, would this then address most known issues with running this command in a multi-site environment? Can I take the RADOSGW daemons offline as an added precaution?
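
A minimal sketch of how to confirm that replication has caught up before running the cleanup, using the standard sync status report:

    # Check multisite replication status; wait until metadata and data sync report caught up
    radosgw-admin sync status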

[ceph-users] Cephadm upgrade to Pacific problem

2021-04-14 Thread Radoslav Milanov
Hello, the cluster is 3 Debian 10 nodes. I started a cephadm upgrade on a healthy 15.2.10 cluster. The managers were upgraded fine, then the first monitor went down for its upgrade and never came back. Looking at the unit files, the container fails to run because of an error:
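
For troubleshooting a case like this, a sketch of the usual places to look, assuming the default cephadm systemd unit naming (the fsid and hostname are placeholders):

    # Check the state of the running upgrade
    ceph orch upgrade status
    # Inspect the failed monitor's container logs via cephadm
    cephadm logs --name mon.<hostname>
    # Or directly through systemd/journald
    journalctl -u ceph-<fsid>@mon.<hostname>.service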

[ceph-users] Re: Revisit Large OMAP Objects

2021-04-14 Thread Casey Bodley
On Wed, Apr 14, 2021 at 11:44 AM wrote: > > Konstantin; > > Dynamic resharding is disabled in multisite environments. > > I believe you mean radosgw-admin reshard stale-instances rm. > > Documentation suggests this shouldn't be run in a multisite environment. > Does anyone know the reason for

[ceph-users] Re: Revisit Large OMAP Objects

2021-04-14 Thread DHilsbos
Konstantin; Dynamic resharding is disabled in multisite environments. I believe you mean radosgw-admin reshard stale-instances rm. Documentation suggests this shouldn't be run in a multisite environment. Does anyone know the reason for this? Is it, in fact, safe, even in a multisite
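
For what it's worth, the stale instances can at least be inspected before anything is deleted; a minimal sketch (whether rm is safe in multisite is exactly the open question above):

    # List bucket index instances that RGW considers stale
    radosgw-admin reshard stale-instances list
    # Remove them (the documentation advises caution in multisite setups)
    radosgw-admin reshard stale-instances rm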

[ceph-users] Monitor dissapears/stopped after testing monitor-host loss and recovery

2021-04-14 Thread Kai Börnert
Hi, I'm currently testing some disaster scenarios. When removing one OSD/monitor host, I see that a new quorum is built without the missing host. The missing host is listed in the dashboard under Not In Quorum, so probably everything is as expected. After restarting the host, I see that the
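
A sketch of commands that can help confirm what the cluster and the orchestrator think about the returning monitor (the assumption being a cephadm-managed cluster):

    # Show current monitors and quorum membership
    ceph mon stat
    ceph quorum_status
    # Check whether cephadm still schedules a mon daemon on the restarted host
    ceph orch ps --daemon-type mon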

[ceph-users] Re: How to disable ceph-grafana during cephadm bootstrap

2021-04-14 Thread Sebastian Wagner
cephadm bootstrap --skip-monitoring-stack should do the trick. See man cephadm. On Tue, Apr 13, 2021 at 6:05 PM mabi wrote: > Hello, > > When bootstrapping a new ceph Octopus cluster with "cephadm bootstrap", > how can I tell the cephadm bootstrap NOT to install the ceph-grafana > container? >
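
As a usage sketch (the monitor IP is a placeholder; --mon-ip is otherwise required by bootstrap):

    cephadm bootstrap --mon-ip <mon-ip> --skip-monitoring-stack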

[ceph-users] Re: Abandon incomplete (damaged EC) pgs - How to manage the impact on cephfs?

2021-04-14 Thread Joshua West
In addition to my last note, I should have mentioned that I am exploring options to delete the damaged data, in hopes of preserving what I can before moving to simply deleting all data on that pool. When trying to simply empty the PGs, it seems like the PGs don't exist. In attempting to follow:

[ceph-users] Re: Abandon incomplete (damaged EC) pgs - How to manage the impact on cephfs?

2021-04-14 Thread Joshua West
Just working this through, how does one identify the OIDs within a PG, without list_unfound? I've been poking around, but can't seem to find a command that outputs the necessary OIDs. I tried a handful of cephfs commands, but they of course become stuck, and ceph pg commands haven't revealed the
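
One way to enumerate the objects in a PG is offline, via the objectstore tool on an OSD that hosts the PG; a sketch, assuming the OSD is stopped first (the OSD id, pgid, and unit name are placeholders):

    systemctl stop ceph-osd@<id>
    # List the object IDs stored in the given PG on this OSD
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> --pgid <pgid> --op list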

[ceph-users] DocuBetter Meeting This Week -- 1630 UTC

2021-04-14 Thread John Zachary Dover
This week's meeting will focus on the ongoing rewrite of the cephadm documentation and the upcoming Google Season of Docs project. Meeting: https://bluejeans.com/908675367 Etherpad: https://pad.ceph.com/p/Ceph_Documentation

[ceph-users] Re: Exporting CephFS using Samba preferred method

2021-04-14 Thread Magnus HAGDORN
On Wed, 2021-04-14 at 08:55 +0200, Martin Palma wrote: > Hello, > > what is the currently preferred method, in terms of stability and > performance, for exporting a CephFS directory with Samba? > > - locally mount the CephFS directory and export it via Samba? > - using the "vfs_ceph" module of

[ceph-users] Re: Exporting CephFS using Samba preferred method

2021-04-14 Thread Alexander Sporleder
Hello Konstantin, In my experience the CephFS kernel driver (Ubuntu 20.04) was always faster and the CPU load was much lower compared to vfs_ceph. Alex On Wednesday, 14.04.2021 at 10:19 +0300, Konstantin Shalygin wrote: > Hi, > > Actually vfs_ceph should perform better, but this method

[ceph-users] Re: Exporting CephFS using Samba preferred method

2021-04-14 Thread Konstantin Shalygin
Hi, Actually vfs_ceph should perform better, but this method will not work with other VFS modules, like recycle or audit, in the same stack. k Sent from my iPhone > On 14 Apr 2021, at 09:56, Martin Palma wrote: > > Hello, > > what is the currently preferred method, in terms of stability and
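
For illustration, a minimal smb.conf share using vfs_ceph, assuming a cephx user named samba and the default config path (the share name and options are placeholders; stacking additional vfs objects such as recycle or full_audit in front of ceph is what the limitation above refers to):

    [cephfs]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        kernel share modes = no
        read only = no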

[ceph-users] Re: Revisit Large OMAP Objects

2021-04-14 Thread Konstantin Shalygin
Run reshard stale-instances rm and reshard your bucket by hand, or leave the dynamic resharding process to do this work. k Sent from my iPhone > On 13 Apr 2021, at 19:33, dhils...@performair.com wrote: > > All; > > We run 2 Nautilus clusters, with RADOSGW replication (14.2.11 --> 14.2.16). > >
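
A sketch of the manual reshard being suggested (the bucket name and shard count are placeholders; in a multisite setup the documentation recommends extra care):

    radosgw-admin bucket reshard --bucket=<bucket-name> --num-shards=<new-shard-count>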

[ceph-users] Exporting CephFS using Samba preferred method

2021-04-14 Thread Martin Palma
Hello, what is the currently preferred method, in terms of stability and performance, for exporting a CephFS directory with Samba? - locally mount the CephFS directory and export it via Samba? - using the "vfs_ceph" module of Samba? Best, Martin