[ceph-users] Re: How to use hardware

2023-11-17 Thread Simon Kepp
I know that your question is regarding the service servers, but may I ask why you are planning to place so many OSDs (300) on so few OSD hosts (6) (= 50 OSDs per node)? This is possible to do, but it sounds like the nodes were designed for a scale-up rather than a scale-out architecture like Ceph.

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-17 Thread Wesley Dillingham
Please send along a pastebin of "ceph status", "ceph osd df tree", and "ceph df detail", and also "ceph tell osd.158 status". Respectfully, *Wes Dillingham* w...@wesdillingham.com LinkedIn On Fri, Nov 17, 2023 at 6:20 PM Debian wrote: > thx for your
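For reference, the requested diagnostics could be collected into a single paste roughly like this (a sketch; osd.158 is taken from the request above, and the output file name is arbitrary):

    ceph status               > ceph-diag.txt
    ceph osd df tree         >> ceph-diag.txt
    ceph df detail           >> ceph-diag.txt
    ceph tell osd.158 status >> ceph-diag.txt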

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-17 Thread Debian
Thanks for your reply, it shows nothing... there are no PGs on the OSD... best regards On 17.11.23 23:09, Eugen Block wrote: After you create the OSD, run 'ceph pg ls-by-osd {OSD}', it should show you which PGs are created there, and then you'll know which pool they belong to, then check again

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-17 Thread Eugen Block
After you create the OSD, run 'ceph pg ls-by-osd {OSD}'; it should show you which PGs are created there, and then you'll know which pool they belong to. Then check the crush rule for that pool again. You can paste the outputs here. Quoting Debian: Hi, after a massive
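Spelled out, the workflow above looks roughly like this (a sketch; osd.158 comes from the earlier post in the thread, and the pool/rule names are placeholders to fill in from your cluster):

    ceph pg ls-by-osd 158                  # which PGs are mapped to this OSD, and from which pool
    ceph osd pool get <pool> crush_rule    # which crush rule that pool uses
    ceph osd crush rule dump <rule-name>   # check the rule's root/device class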

[ceph-users] blustore osd nearfull but no pgs on it

2023-11-17 Thread Debian
Hi, after a massive rebalance (tunables) my small SSD OSDs are getting full. I changed my crush rules so that there are actually no PGs/pools on them, but the disks stay full: ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable) ID CLASS WEIGHT REWEIGHT SIZE   

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-17 Thread Yuri Weinstein
Dev Leads, please review/edit/approve the release notes: https://github.com/ceph/ceph/pull/54506 TIA On Thu, Nov 16, 2023 at 10:03 AM Travis Nielsen wrote: > > Rook already ran the tests against Guillaume's change directly, it looks good > to us. I don't see a new latest-reef-devel image tag yet,

[ceph-users] Re: RadosGW public HA traffic - best practices?

2023-11-17 Thread David Orman
I apologize, I somehow missed that you cannot do BGP. I don't know of a better solution for you if that is the case. You'll just want to make sure to do graceful shutdowns of haproxy when you need to do maintenance work, to avoid severing active connections. At some point, though, timeouts will
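One way to do that gracefully with haproxy (a sketch; the unit name, runtime socket path, and backend/server names are assumptions for your setup):

    # reload hands the listening sockets to a new process and lets the old one
    # finish its established connections
    systemctl reload haproxy

    # or drain a single backend server via the runtime API before maintenance
    echo "set server rgw_back/rgw1 state drain" | socat stdio /var/run/haproxy.sock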

[ceph-users] Re: RadosGW public HA traffic - best practices?

2023-11-17 Thread David Orman
Use BGP/ECMP with something like exabgp on the haproxy servers. David On Fri, Nov 17, 2023, at 04:09, Boris Behrens wrote: > Hi, > I am looking for some experience on how people make their RGW public. > > Currently we use the following: > 3 IP addresses that get distributed via keepalived between
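A minimal ExaBGP sketch of that idea, run on each haproxy node (all addresses and AS numbers are hypothetical; every node announces the same /32 service IP so the upstream router load-balances across them via ECMP):

    neighbor 192.0.2.1 {                 # upstream router
        router-id 192.0.2.11;
        local-address 192.0.2.11;        # this haproxy node
        local-as 65011;
        peer-as 65000;

        static {
            route 198.51.100.10/32 next-hop self;   # shared RGW service IP
        }
    }

In practice you would also pair this with a health check so the route is withdrawn when haproxy or the local RGW stops responding.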

[ceph-users] Re: cephadm user on cephadm rpm package

2023-11-17 Thread David C.
If you provision the cephadm binary (a Python script) and your users yourself, you should be able to do without the cephadm rpm. On Fri, Nov 17, 2023 at 14:04, Luis Domingues wrote: > So I guess I need to install the cephadm rpm packages on all my machines > then? > > I like the idea of not
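For example, the curl-based installation from the Ceph docs fetches the standalone script directly (shown here for Quincy; pick the branch that matches your release, and note that newer releases ship cephadm as a compiled binary from download.ceph.com instead):

    curl --silent --remote-name --location \
        https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
    chmod +x cephadm
    ./cephadm version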

[ceph-users] Re: cephadm user on cephadm rpm package

2023-11-17 Thread Luis Domingues
So I guess I need to install the cephadm rpm package on all my machines then? I like the idea of not having a root user, and in fact we do that on our clusters. But since we need to push ssh keys to the user config, we manage users outside of Ceph, during OS provisioning. So it looks a little

[ceph-users] Re: cephadm user on cephadm rpm package

2023-11-17 Thread David C.
Hi, you can use the cephadm account (instead of root) to control machines with the orchestrator. On Fri, Nov 17, 2023 at 13:30, Luis Domingues wrote: > Hi, > > I noticed when installing the cephadm rpm package, to bootstrap a cluster > for example, that a user cephadm was created. But I do
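A sketch of pointing the orchestrator at a non-root ssh user and distributing its key (the user name matches the rpm's cephadm account; the host name is a placeholder):

    ceph cephadm set-user cephadm               # ssh user the orchestrator should use
    ceph cephadm get-pub-key > ceph.pub         # the cluster's ssh public key
    ssh-copy-id -f -i ceph.pub cephadm@<host>   # push it to each managed host

Note that a non-root ssh user generally needs passwordless sudo on the managed hosts.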

[ceph-users] cephadm user on cephadm rpm package

2023-11-17 Thread Luis Domingues
Hi, I noticed when installing the cephadm rpm package, to bootstrap a cluster for example, that a user cephadm was created. But I do not see it used anywhere. What is the purpose of creating a user on the machine where we install the local cephadm binary? Luis Domingues Proton AG

[ceph-users] Re: No SSL Dashboard working after installing mgr crt|key with RSA/4096 secp384r1

2023-11-17 Thread Eugen Block
I was able to reproduce the error with a self-signed elliptic-curve based certificate. But I also got out of it by removing the cert and key: quincy-1:~ # ceph config-key rm mgr/dashboard/key key deleted quincy-1:~ # ceph config-key rm mgr/dashboard/crt key deleted Then I failed the mgr just to
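To put a replacement (e.g. RSA) certificate in place afterwards, something like the following should work (a sketch; the file names are placeholders):

    ceph dashboard set-ssl-certificate -i dashboard.crt
    ceph dashboard set-ssl-certificate-key -i dashboard.key
    ceph mgr fail      # on older releases: ceph mgr fail <active-mgr-name>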

[ceph-users] Re: Problem while upgrade 17.2.6 to 17.2.7

2023-11-17 Thread David C.
Hi, don't you have a traceback below that? You probably have a communication problem (SSL?) between the dashboard and the RGW. Maybe check the settings: ceph dashboard get-rgw-api-* => https://docs.ceph.com/en/quincy/mgr/dashboard/#enabling-the-object-gateway-management-frontend On Fri.
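The SSL-related settings can be checked along these lines (a sketch; only disable verification if the RGW endpoint uses a self-signed certificate and you accept the risk):

    ceph dashboard get-rgw-api-ssl-verify
    ceph dashboard set-rgw-api-ssl-verify false
    ceph dashboard get-rgw-api-access-key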

[ceph-users] Re: How to use hardware

2023-11-17 Thread David C.
Hi Albert, 5 mons instead of 3 will allow you to limit the impact if you break a mon (for example, with the file system full). 5 MDS instead of 3 makes sense if the workload can be distributed over several trees in your file system. Sometimes it can also make sense to have several FSs in
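A sketch of spreading CephFS load over several trees with multiple active MDS daemons (the fs name, directory, and rank are hypothetical):

    ceph fs set cephfs max_mds 2                          # two active MDS ranks
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projectB    # pin this subtree to rank 1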

[ceph-users] Re: Problem while upgrade 17.2.6 to 17.2.7

2023-11-17 Thread Jean-Marc FONTANA
Hello, everyone. There's no cephadm.log in /var/log/ceph. To get something else, we tried what David C. proposed (thanks to him!!) and found: nov. 17 10:53:54 svtcephmonv3 ceph-mgr[727]: [balancer ERROR root] execute error: r = -1, detail = min_compat_client jewel < luminous, which
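That balancer error usually means the cluster still advertises jewel-level client compatibility; if no pre-luminous clients remain, the usual remedy is (check first, then raise the requirement):

    ceph features                                       # confirm no jewel-era clients are connected
    ceph osd set-require-min-compat-client luminous     # needed for upmap-based balancing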

[ceph-users] RadosGW public HA traffic - best practices?

2023-11-17 Thread Boris Behrens
Hi, I am looking for some experience on how people make their RGW public. Currently we use the following: 3 IP addresses that get distributed via keepalived between three HAProxy instances, which then balance to three RGWs. The caveat is that keepalived is a PITA to get working when distributing a set
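For reference, each of those VIPs corresponds to roughly one keepalived block like the following (interface, router id, and address are hypothetical; the setup repeats this per VIP with rotated priorities):

    vrrp_instance RGW_VIP_1 {
        state BACKUP
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        virtual_ipaddress {
            192.0.2.11/24
        }
    }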

[ceph-users] How to use hardware

2023-11-17 Thread Albert Shih
Hi everyone, for the purpose of deploying a medium-size Ceph cluster (300 OSDs) we have 6 bare-metal servers for the OSDs, and 5 bare-metal servers for the services (MDS, mon, etc.). Those 5 bare-metal servers each have 48 cores and 256 GB. What would be the smartest way to use those 5 servers? I see
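If the cluster is cephadm-managed, one way to express such a split is a service placement spec (a sketch only; the label, daemon counts, and layout are assumptions, not a recommendation from the thread):

    service_type: mon
    placement:
      label: svc
      count: 5
    ---
    service_type: mgr
    placement:
      label: svc
      count: 2
    ---
    service_type: mds
    service_id: cephfs
    placement:
      label: svc
      count: 3

This would be applied with "ceph orch apply -i service-spec.yml" after labeling the five service hosts with "ceph orch host label add <host> svc".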

[ceph-users] Re: No SSL Dashboard working after installing mgr crt|key with RSA/4096 secp384r1

2023-11-17 Thread Eugen Block
Hi, did you get your dashboard back in the meantime? I don't have an answer regarding the certificate based on elliptic curves, but since you wrote: So we tried to go back to the original state by removing CRT and KEY but without success. The new key seems to be stuck in the config how
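A quick way to check whether the old key and certificate really are gone from the config store (a sketch; the mgr usually needs a restart afterwards to pick up the change):

    ceph config-key ls | grep mgr/dashboard
    ceph mgr fail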

[ceph-users] Re: Issue with using the block device inside a pod.

2023-11-17 Thread Eugen Block
Hi, can you share the auth caps for your k8s client? ceph auth get client. And maybe share the yaml files as well (redact sensitive data) so we can get a full picture. Quoting Kushagr Gupta: Hi Team, Components: Kubernetes, Ceph Problem statement: We are trying to integrate Ceph
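For comparison, RBD clients used from Kubernetes typically carry caps along these lines (the client name and pool are hypothetical):

    ceph auth get client.k8s-rbd
    # if missing or too narrow, something like:
    ceph auth get-or-create client.k8s-rbd \
        mon 'profile rbd' \
        osd 'profile rbd pool=k8s-rbd' \
        mgr 'profile rbd pool=k8s-rbd'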

[ceph-users] Re: Large size differences between pgs

2023-11-17 Thread Eugen Block
Hi, if you could share some more info about your cluster you might get a better response. For example, 'ceph osd df tree' could be helpful to get an impression of how many PGs you currently have. You can inspect the 'ceph pg dump' output and look for the column "BYTES", which tells you how
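One way to eyeball the per-PG size spread for a single pool (a sketch; the pool name is a placeholder):

    ceph osd df tree
    ceph pg ls-by-pool <pool>    # the BYTES column shows the size of each PG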

[ceph-users] Re: Different behaviors for ceph kernel client in limiting IOPS when data pool enters `nearfull`?

2023-11-17 Thread Xiubo Li
On 11/17/23 00:41, Ilya Dryomov wrote: On Thu, Nov 16, 2023 at 5:26 PM Matt Larson wrote: Ilya, Thank you for providing these discussion threads on the kernel fixes where there was a change, and the details on how this affects the clients. What is the expected behavior of the CephFS client when