[ceph-users] Re: PGs stuck deep-scrubbing for weeks - 16.2.9

2022-07-15 Thread David Orman
Apologies, backport link should be: https://github.com/ceph/ceph/pull/46845

On Fri, Jul 15, 2022 at 9:14 PM David Orman wrote:
> I think you may have hit the same bug we encountered. Cory submitted a
> fix, see if it fits what you've encountered:
>
> https://github.com/ceph/ceph/pull/46727

[ceph-users] Re: PGs stuck deep-scrubbing for weeks - 16.2.9

2022-07-15 Thread David Orman
I think you may have hit the same bug we encountered. Cory submitted a fix, see if it fits what you've encountered: https://github.com/ceph/ceph/pull/46727 (backport to Pacific here: https://github.com/ceph/ceph/pull/46877) https://tracker.ceph.com/issues/54172

On Fri, Jul 15, 2022 at 8:52 AM

[ceph-users] Re: http_proxy settings for cephadm

2022-07-15 Thread Ed Rolison
That's a good notion, and was next on my list. I have actually tracked down the root cause here: it's sudo. Sudo does:

Defaults env_reset

and ceph orch calls podman within a sudo — so whilst the containers were getting an env just fine, the deploy process wasn't. Adding:

Defaults env_keep
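A minimal sketch of the sudoers fix being described, assuming the standard proxy variable names (the drop-in filename and exact variable list are hypothetical; adjust for your environment):

    # /etc/sudoers.d/keep-proxy  (hypothetical filename)
    Defaults env_keep += "http_proxy https_proxy no_proxy HTTP_PROXY HTTPS_PROXY NO_PROXY"

With that in place, environment variables set in /etc/environment should survive the sudo that ceph orch wraps around podman.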

[ceph-users] Re: [cephadm] ceph config as yaml

2022-07-15 Thread Redouane Kachach Elhichou
This section can be added to any service spec; cephadm will parse it and apply all the values included in it. There's no documentation because this wasn't documented so far. I've just created a PR for that purpose: https://github.com/ceph/ceph/pull/46926

Best,
Redo.

On Fri, Jul 15,

[ceph-users] Re: [cephadm] ceph config as yaml

2022-07-15 Thread Ali Akil
Where exactly do I add this section? In the OSD service specification section there is no mention of config. Also, cephadm doesn't seem to apply changes added to ceph.conf.

Best Regards,
Ali

On 15.07.22 15:21, Redouane Kachach

[ceph-users] PGs stuck deep-scrubbing for weeks - 16.2.9

2022-07-15 Thread Wesley Dillingham
We have two clusters: one upgraded 14.2.22 -> 16.2.7 -> 16.2.9, another 16.2.7 -> 16.2.9. Both use multi-disk OSDs (spinner block / SSD block.db), both serve CephFS, and each has around 600 OSDs with a combination of rep-3 and 8+3 EC data pools. There are examples of stuck scrubbing PGs from all of the pools. They have generally been

[ceph-users] Re: RGW error Coundn't init storage provider (RADOS)

2022-07-15 Thread Robert Reihs
Hi, I have had no luck yet solving the issue, but I can add some more information. The system pools ".rgw.root" and "default.rgw.log" were not created. I have created them manually; now there is more log activity, but I am still getting the same error message in the log: rgw main: rgw_init_ioctx ERROR:
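For reference, the manual pool creation mentioned above might look like the following sketch (pool names are the ones from the thread; PG counts are placeholder values, adjust for your cluster):

    # create the missing RGW system pools by hand
    ceph osd pool create .rgw.root 8
    ceph osd pool create default.rgw.log 8
    # tag them for RGW so Ceph doesn't warn about unassociated pools
    ceph osd pool application enable .rgw.root rgw
    ceph osd pool application enable default.rgw.log rgw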

[ceph-users] Re: radosgw API issues

2022-07-15 Thread Casey Bodley
Are you running Quincy? It looks like this '/admin/info' API was new to that release: https://docs.ceph.com/en/quincy/radosgw/adminops/#info

On Fri, Jul 15, 2022 at 7:04 AM Marcus Müller wrote:
> Hi all,
>
> I've created a test user on our radosgw to work with the API. I've done the

[ceph-users] Re: [cephadm] ceph config as yaml

2022-07-15 Thread Redouane Kachach Elhichou
Hello Ali,

You can set configuration by including a config section in your yaml as follows:

    config:
      param_1: val_1
      ...
      param_N: val_N

This is equivalent to calling the following ceph cmd:

> ceph config set

Best Regards,
Redo.

On Fri, Jul 15, 2022 at 2:45 PM Ali Akil
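A fuller sketch of what such a spec might look like, assuming an OSD service — the service id, placement, device selection, and the config value shown here are hypothetical placeholders, not taken from the thread:

    service_type: osd
    service_id: example_osd_spec
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        all: true
    config:
      osd_deep_scrub_interval: "1209600"

Applying the file with `ceph orch apply -i spec.yaml` would then set the config values for that service's daemons, which keeps the settings in a file you can version in git instead of running `ceph config set` by hand.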

[ceph-users] [cephadm] ceph config as yaml

2022-07-15 Thread Ali Akil
Hello, I used to set the configuration for Ceph using the CLI, i.e. `ceph config set global osd_deep_scrub_interval `. I would like, though, to store these configurations in my git repository. Is there a way to apply these configurations as a yaml file? I am using a Quincy ceph cluster provisioned by

[ceph-users] http_proxy settings for cephadm

2022-07-15 Thread Ed Rolison
Hello everyone. I'm having a bit of a headache at the moment, trying to track down how I "should" be configuring proxy settings. When I was running Pacific, I think I managed to get things working, via setting a proxy in /etc/environment. Although note that if you do this, you'll have to also

[ceph-users] radosgw API issues

2022-07-15 Thread Marcus Müller
Hi all, I've created a test user on our radosgw to work with the API. I've done the following:

~# radosgw-admin user create --uid=testuser --display-name="testuser"
~# radosgw-admin caps add --uid=testuser --caps={caps}

"caps": [ { "type": "amz-cache", "perm":

[ceph-users] Re: moving mgr in Pacific

2022-07-15 Thread Konstantin Shalygin
Hi,

> On 15 Jul 2022, at 12:25, Adrian Nicolae wrote:
>
> Hi,
>
> What is the recommended procedure to move the secondary mgr to another node?
>
> Thanks.

On new node:

systemctl reenable ceph-mgr@$(hostname -s)
systemctl start ceph-mgr.target

Good luck,
k

[ceph-users] Ceph on FreeBSD

2022-07-15 Thread Olivier Nicole
Hi, I would like to try Ceph on FreeBSD (because I mostly use FreeBSD) but before I invest too much time in it, it seems that the current version of Ceph for FreeBSD is quite old. Is it still being taken care of or not? TIA Olivier