[ceph-users] ceph-mon rocksdb write latency

2022-01-10 Thread Karsten Nielsen
Hi all, I am troubleshooting an issue that I am not really sure how to deal with. We have set up a Ceph cluster, version 16.2.6, with cephadm, running with podman containers. Our hosts run Ceph and Kubernetes. Our hosts are all-NVMe, with 512 GB of memory and a single AMD EPYC 7702P CPU each. We run bare metal
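
For context, the monitor's RocksDB latency counters can be read over its admin socket; a minimal check, assuming a mon id of host1 (a placeholder) and that the command runs where the admin socket is reachable (inside the container, if containerized):

    # dump only the rocksdb section of the mon's perf counters;
    # submit_latency and submit_sync_latency are the write-side figures
    ceph daemon mon.host1 perf dump rocksdb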

[ceph-users] Re: [RGW] bi_list(): (5) Input/output error blocking resharding

2022-01-10 Thread Gilles Mocellin
On Monday, 10 January 2022 at 11:42:11 CET, Matthew Vernon wrote: > Hi, > > On 07/01/2022 18:39, Gilles Mocellin wrote: > > Has anyone who had that problem found a workaround? > > Are you trying to reshard a bucket in a multisite setup? That isn't > expected to work (and, IIRC, the changes to support

[ceph-users] Re: How to troubleshoot monitor node

2022-01-10 Thread 胡 玮文
> On 11 Jan 2022, at 00:19, Andre Tann wrote: > > Hi Janne, > >> On 10.01.22 16:49, Janne Johansson wrote: >> >> Well, nc would not tell you if a bad (local or remote) firewall >> configuration prevented nc (and ceph -s) from connecting, it would >> give the same results as if the daemon wasn't

[ceph-users] Re: Cephadm Deployment with io_uring OSD

2022-01-10 Thread Mark Nelson
Hi Gene, Unfortunately, when the io_uring code was first implemented there were no stable CentOS kernels in our test lab that included io_uring support, so it hasn't gotten a ton of testing. I agree that your issue looks similar to what was reported in issue #47661, but it looks like you are
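
For reference, BlueStore's io_uring backend is gated by the bdev_ioring option; a hedged sketch of enabling it (assumes a kernel with io_uring support, and osd.0 as a placeholder daemon id):

    # opt the OSDs into the io_uring block-device backend
    ceph config set osd bdev_ioring true
    # the setting only takes effect after an OSD restart, e.g. with cephadm:
    ceph orch daemon restart osd.0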

[ceph-users] Re: MON slow ops and growing MON store

2022-01-10 Thread Daniel Poelzleithner
Hi, > Like last time, after I restarted all five MONs, the store size > decreased and everything went back to normal. I also had to restart MGRs > and MDSs afterwards. This is starting to look like a bug to me. In our case, we had real database corruption in RocksDB that caused version
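
As a stop-gap, the mon store can also be compacted online rather than via restart; a minimal sketch, assuming a mon id of host1 (a placeholder) and the classic, non-cephadm store path:

    # ask the mon to compact its RocksDB store in place
    ceph tell mon.host1 compact
    # then watch the on-disk store size
    du -sh /var/lib/ceph/mon/ceph-host1/store.db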

[ceph-users] Re: How to troubleshoot monitor node

2022-01-10 Thread Andre Tann
Hi Janne, On 10.01.22 16:49, Janne Johansson wrote: Well, nc would not tell you if a bad (local or remote) firewall configuration prevented nc (and ceph -s) from connecting, it would give the same results as if the daemon wasn't listening at all, so that is why I suggested checking if the port
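
Spelled out, such a probe against the 192.168.14.48 host from the thread looks like this; note it only proves TCP reachability, and a firewall drop and a non-listening daemon really do look identical:

    # zero-I/O connect tests against msgr2 (3300) and msgr1 (6789)
    nc -vz 192.168.14.48 3300
    nc -vz 192.168.14.48 6789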

[ceph-users] Re: Ceph orch command hangs forever

2022-01-10 Thread Boldbayar Jantsan
Thank you so much, Boris, for replying. We have three mons. Two mons are still in quorum; one mon is out of quorum. But the mon that is down sees all three mons as out of quorum. From the two up mon nodes, the ceph -s result is: mon: 3 daemons, quorum ceph1,compute1 (age 14h), out of quorum: compute2. From the down mon
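
The two views can be compared directly; a minimal sketch, with compute2 being the down mon from the post:

    # from a node in quorum: the cluster-wide quorum view
    ceph quorum_status --format json-pretty
    # on compute2 itself, ask its admin socket for its own view
    # (inside the container, if containerized)
    ceph daemon mon.compute2 mon_status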

[ceph-users] Re: How to troubleshoot monitor node

2022-01-10 Thread Janne Johansson
On Mon, 10 Jan 2022 at 16:24, Andre Tann wrote: > Hi Janne, > On 10.01.22 16:13, Janne Johansson wrote: > > modern clusters use msgr2 communications on port 3300 by default I think. > > Also, check on the 192.168.14.48 host with "netstat -an | grep LIST" > > or "ss -ntlp" if something is

[ceph-users] Re: How to troubleshoot monitor node

2022-01-10 Thread Andre Tann
Hi Janne, On 10.01.22 16:13, Janne Johansson wrote: modern clusters use msgr2 communications on port 3300 by default I think. Also, check on the 192.168.14.48 host with "netstat -an | grep LIST" or "ss -ntlp" if something is listening on 6789 and/or 3300. Yes, I already checked 3300 and

[ceph-users] Re: How to troubleshoot monitor node

2022-01-10 Thread Boris Behrens
I would go with the ss tool, because netstat shortens IPv6 addresses, so you don't see if it is actually listening on the correct address. On Mon, 10 Jan 2022 at 16:14, Janne Johansson <icepic...@gmail.com> wrote: > modern clusters use msgr2 communications on port 3300 by default I
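
To illustrate, ss prints the full socket address, so a mon bound to the wrong (or v6-only) address is unambiguous:

    # -n numeric, -t TCP, -l listening sockets, -p owning process
    ss -ntlp | grep -E ':(3300|6789)'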

[ceph-users] Re: How to troubleshoot monitor node

2022-01-10 Thread Janne Johansson
modern clusters use msgr2 communications on port 3300 by default I think. Also, check on the 192.168.14.48 host with "netstat -an | grep LIST" or "ss -ntlp" if something is listening on 6789 and/or 3300. On Mon, 10 Jan 2022 at 16:10, Andreas Feile wrote: > > Hi all, > > I've set up a 6-node ceph
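
The addresses a mon actually advertises can also be read from the monmap itself; each mon line shows both its msgr2 and msgr1 endpoints:

    # expect entries like [v2:192.168.14.48:3300/0,v1:192.168.14.48:6789/0]
    ceph mon dump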

[ceph-users] How to troubleshoot monitor node

2022-01-10 Thread Andreas Feile
Hi all, I've set up a 6-node ceph cluster to learn how ceph works and what I can do with it. However, I'm new to ceph, so if the answer to one of my questions is RTFM, point me to the right place. My problem is this: The cluster consists of 3 mons and 3 osds. Even though the dashboard shows

[ceph-users] Re: Single Node Cephadm Upgrade to Pacific

2022-01-10 Thread Nathan McGuire
Sebastian, Even though mgr is reporting 16.2.0, I'm unable to use mgr_standby_modules for some reason. root@prod1:~# ceph config set mgr mgr/cephadm/mgr_standby_modules false Error EINVAL: unrecognized config option 'mgr/cephadm/mgr_standby_modules' root@prod1:~# ceph mgr module enable
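
One way to find out what the running version actually calls the option is to search the known option names; a hedged sketch:

    # list every config option the cluster knows and filter for "standby"
    ceph config ls | grep -i standby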

[ceph-users] RGW with keystone and dns-style buckets

2022-01-10 Thread Ansgar Jazdzewski
Hi folks, I am trying to get DNS-style buckets running and stumbled across an issue with tenants. I can access the bucket like https://s3.domain/: but I did not find a way to do it DNS-style, something like https://_.s3.domain ! Am I missing something in the documentation? Thanks for your help!
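
For non-tenanted buckets, virtual-host-style addressing hangs off rgw_dns_name plus a wildcard DNS record; a minimal sketch using the s3.domain zone from the post (whether a tenant can be encoded in the hostname is exactly the open question here):

    # tell RGW to treat bucket.s3.domain as DNS-style bucket access
    ceph config set client.rgw rgw_dns_name s3.domain
    # a wildcard record such as *.s3.domain must also resolve to the RGW endpoint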

[ceph-users] Re: Ceph orch command hangs forever

2022-01-10 Thread Boris Behrens
Hi Boldbayar, I had a similar issue with radosgw-admin sync status. It was actually a problem with the mons that were not listening on the correct IP addresses. You can check with `ceph mon stat` whether the mons got the correct IP addresses. With `ceph -m IPADDRESS status` you can check if the mons are
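
Spelled out, the two checks look like this, with 192.0.2.10 as a placeholder mon address:

    # does the monmap carry the addresses you expect?
    ceph mon stat
    # bypass the local config and ask one specific mon directly
    ceph -m 192.0.2.10 status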

[ceph-users] Ceph orch command hangs forever

2022-01-10 Thread Boldbayar Jantsan
Hello, I am using the Octopus version, which was deployed by cephadm. After one of the mon nodes rebooted, the ceph orch command does not work and is not responsive. It looks very similar to the issue below.
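
A common first step when ceph orch stops answering is to fail over the active mgr, since the orchestrator runs inside it; a hedged sketch:

    # force a standby mgr to take over, restarting the cephadm module
    ceph mgr fail
    # then see whether the orchestrator responds again
    ceph orch ps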

[ceph-users] Re: [RGW] bi_list(): (5) Input/output error blocking resharding

2022-01-10 Thread Matthew Vernon
Hi, On 07/01/2022 18:39, Gilles Mocellin wrote: Has anyone who had that problem found a workaround? Are you trying to reshard a bucket in a multisite setup? That isn't expected to work (and, IIRC, the changes to support doing so aren't going to make it into Quincy). Regards, Matthew
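
For anyone hitting the same wall, the failing call and the reshard state can be inspected directly; a sketch with mybucket as a placeholder name:

    # the call that returns EIO in the report; dumps the bucket index entries
    radosgw-admin bi list --bucket=mybucket
    # shows whether a reshard is pending or stuck for this bucket
    radosgw-admin reshard status --bucket=mybucket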

[ceph-users] Re: Single Node Cephadm Upgrade to Pacific

2022-01-10 Thread Sebastian Wagner
Hi Nathan, Should work, as long as you have two MGRs deployed. Please have a look at ceph config set mgr mgr/mgr_standby_modules = False Best, Sebastian On 08.01.22 at 17:44, Nathan McGuire wrote: > Hello! > > I'm running into an issue with upgrading Cephadm v15 to v16 on a single host. >
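
The two-MGR precondition can be met through the orchestrator even on one host; a minimal sketch (the host then carries both the active and the standby mgr):

    # ask cephadm to keep two mgr daemons deployed
    ceph orch apply mgr 2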