[ceph-users] Re: How check local network

2024-01-29 Thread David C.
Hello Albert, this should return the sockets used on the cluster network: ceph report | jq '.osdmap.osds[] | .cluster_addrs.addrvec[] | .addr' Regards, *David CASIER*
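A minimal way to act on this suggestion (the 10.0.1.0/24 subnet in the comment is only a placeholder for your actual cluster network):

  # what the cluster (private) network is configured to be; if empty, check ceph.conf instead
  ceph config get osd cluster_network

  # back-end addresses each OSD actually registered
  ceph report | jq -r '.osdmap.osds[] | .cluster_addrs.addrvec[] | .addr'

  # every address listed should fall inside the configured subnet, e.g. 10.0.1.0/24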

[ceph-users] Re: How check local network

2024-01-29 Thread Albert Shih
On 29/01/2024 at 22:43:46+0100, Albert Shih wrote: > Hi > > When I deployed my cluster I didn't notice that on two of my servers the private > network was not working (wrong VLAN). Now it's working, but how can I check > that it's indeed working (currently I don't have data)? I mean... is ceph going to use

[ceph-users] How check local network

2024-01-29 Thread Albert Shih
Hi When I deployed my cluster I didn't notice that on two of my servers the private network was not working (wrong VLAN). Now it's working, but how can I check that it's indeed working (currently I don't have data)? Regards -- Albert SHIH, France. Local time: Mon. 29 Jan. 2024

[ceph-users] pacific 16.2.15 QE validation status

2024-01-29 Thread Yuri Weinstein
Details of this release are summarized here: https://tracker.ceph.com/issues/64151#note-1 Seeking approvals/reviews for:
rados - Radek, Laura, Travis, Ernesto, Adam King
rgw - Casey
fs - Venky
rbd - Ilya
krbd - in progress
upgrade/nautilus-x (pacific) - Casey PTL (regweed tests failed)

[ceph-users] Re: Unsetting maintenance mode for failed host

2024-01-29 Thread Eugen Block
Hi, if you just want the cluster to drain this host but plan to bring it back online soon, I would just remove the noout flag: ceph osd rm-noout osd1 This flag is set when entering maintenance mode (ceph osd add-noout <host>). But it would not remove the health warning (host is in maintenance) until
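A minimal sketch of that suggestion, keeping the example host name osd1 (adjust to your own host):

  # drop the per-host noout flag so the cluster starts rebalancing
  ceph osd rm-noout osd1

  # watch recovery/backfill progress
  ceph -s

  # once the host is reachable again, clear the maintenance state itself
  ceph orch host maintenance exit osd1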

[ceph-users] Re: 6 pgs not deep-scrubbed in time

2024-01-29 Thread Frank Schilder
You will have to look at the output of "ceph df" and make a decision to balance "objects per PG" and "GB per PG". Increase the PG count most for the pools with the worst of these two numbers, such that it balances out as much as possible. If you have pools that see significantly more user-IO than
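A rough way to get the two numbers referred to above (the pool figures in the comment are made up purely for illustration):

  # per-pool object counts and stored data
  ceph df detail

  # current pg_num for each pool
  ceph osd pool ls detail

  # example: a pool with 12,000,000 objects and 4.8 TiB stored across 128 PGs
  # has ~93,750 objects per PG and ~38 GiB per PG; raise pg_num first on the
  # pools where these per-PG figures are largest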

[ceph-users] Re: 6 pgs not deep-scrubbed in time

2024-01-29 Thread Frank Schilder
Setting osd_max_scrubs = 2 for HDD OSDs was a mistake I made. The result was that PGs needed a bit more than twice as long to deep-scrub. Net effect: high scrub load, much less user IO and, last but not least, the "not deep-scrubbed in time" problem got worse, because (2+eps)/2 > 1. For
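If you tried the same setting, a quick way to check the current value and go back to the Pacific default of 1 (a sketch, assuming the option was set via the config database):

  ceph config get osd osd_max_scrubs
  ceph config set osd osd_max_scrubs 1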

[ceph-users] Re: 6 pgs not deep-scrubbed in time

2024-01-29 Thread Wesley Dillingham
Respond back with "ceph versions" output. If your sole goal is to eliminate the "not scrubbed in time" errors, you can increase the aggressiveness of scrubbing by setting: osd_max_scrubs = 2 The default in Pacific is 1. If you are going to start tinkering manually with the pg_num you will want to
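A sketch of the two commands referenced above (nothing here is specific to any particular cluster):

  # report the daemon versions asked about
  ceph versions

  # raise the per-OSD scrub limit from the Pacific default of 1
  ceph config set osd osd_max_scrubs 2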

[ceph-users] Unsetting maintenance mode for failed host

2024-01-29 Thread Bryce Nicholls
Hi, We put a host in maintenance and had issues bringing it back. Is there a safe way of exiting maintenance while the host is unreachable / offline? We would like the cluster to rebalance while we are working to get this host back online. Maintenance was set using: ceph orch host maintenance

[ceph-users] Re: 6 pgs not deep-scrubbed in time

2024-01-29 Thread Josh Baergen
You need to be running at least 16.2.11 on the OSDs so that you have the fix for https://tracker.ceph.com/issues/55631. On Mon, Jan 29, 2024 at 8:07 AM Michel Niyoyita wrote: > > I am running ceph pacific, version 16, ubuntu 20 OS, deployed using > ceph-ansible. > > Michel > > On Mon, Jan

[ceph-users] January Ceph Science Virtual User Group

2024-01-29 Thread Kevin Hrpcek
Hey All, We will be having a Ceph science/research/big cluster call on Wednesday January 31st. If anyone wants to discuss something specific they can add it to the pad linked below. If you have questions or comments you can contact me. This is an informal open call of community members

[ceph-users] Re: 6 pgs not deep-scrubbed in time

2024-01-29 Thread Michel Niyoyita
I am running ceph pacific, version 16, ubuntu 20 OS, deployed using ceph-ansible. Michel On Mon, Jan 29, 2024 at 4:47 PM Josh Baergen wrote: > Make sure you're on a fairly recent version of Ceph before doing this, > though. > > Josh > > On Mon, Jan 29, 2024 at 5:05 AM Janne Johansson >

[ceph-users] Re: 6 pgs not deep-scrubbed in time

2024-01-29 Thread Josh Baergen
Make sure you're on a fairly recent version of Ceph before doing this, though. Josh On Mon, Jan 29, 2024 at 5:05 AM Janne Johansson wrote: > > On Mon, Jan 29, 2024 at 12:58, Michel Niyoyita wrote: > > > > Thank you Frank, > > > > All disks are HDDs. Would like to know if I can increase the

[ceph-users] Re: RadosGW manual deployment

2024-01-29 Thread Janne Johansson
> If there is (planned) documentation of manual rgw bootstrapping, > it would be nice to also have the names of the required pools listed there. It will depend on several things; if you enable swift users, for example, I think they get a pool of their own, so I guess one would need to look in the
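For orientation only: once the first radosgw instance starts with sufficient caps it creates its pools itself, and in a default zone the names typically look like the list in the comments below (the exact set depends, as noted above, on the features you enable):

  ceph osd pool ls | grep rgw
  # .rgw.root
  # default.rgw.control
  # default.rgw.meta
  # default.rgw.log
  # default.rgw.buckets.index
  # default.rgw.buckets.data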

[ceph-users] Re: RadosGW manual deployment

2024-01-29 Thread Eugen Block
Hi, I was just curious what your intentions are, not meaning to criticize it. ;-) There are different reasons why that could be a better choice. And as I already mentioned previously, you would only have stray daemon warnings if you deployed the RGWs on hosts which already have cephadm

[ceph-users] Re: RadosGW manual deployment

2024-01-29 Thread Jan Kasprzak
Hello, Eugen, Eugen Block wrote: > Janne was a bit quicker than me, so I'll skip my short instructions > on how to deploy it manually. But your (cephadm managed) cluster will > complain about "stray daemons". There doesn't seem to be a way to > deploy rgw daemons manually with the cephadm

[ceph-users] Re: RadosGW manual deployment

2024-01-29 Thread Jan Kasprzak
Hello, Janne, Janne Johansson wrote: > On Mon, Jan 29, 2024 at 08:11, Jan Kasprzak wrote: > > > > Is it possible to install a new radosgw instance manually? > > If so, how can I do it? > > We are doing it, and I found the same docs issue recently, so Zac > pushed me to provide a skeleton

[ceph-users] Re: 6 pgs not deep-scrubbed in time

2024-01-29 Thread Michel Niyoyita
This is how it is set; if you suggest making some changes, please advise. Thank you. ceph osd pool ls detail pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 1407 flags hashpspool stripe_width 0

[ceph-users] Re: easy way to find out the number of allocated objects for a RBD image

2024-01-29 Thread Ilya Dryomov
On Sat, Nov 25, 2023 at 7:01 PM Tony Liu wrote: > > Thank you Eugen! "rbd du" is it. > The used_size from "rbd du" is object count times object size. > That's the actual storage taken by the image in backend. Somebody just quoted this sentence out of context, so I feel like I need to elaborate.
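A quick usage sketch with placeholder pool/image names:

  # provisioned vs. used size per image (and a pool-wide total)
  rbd du mypool/myimage

  # image size and object size that the used_size figure is based on
  rbd info mypool/myimage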

[ceph-users] Re: Debian 12 (bookworm) / Reef 18.2.1 problems

2024-01-29 Thread Chris Palmer
I have logged this as https://tracker.ceph.com/issues/64213 On 16/01/2024 14:18, DERUMIER, Alexandre wrote: Hi, ImportError: PyO3 modules may only be initialized once per interpreter process and ceph -s reports "Module 'dashboard' has failed dependency: PyO3 modules may only be initialized

[ceph-users] Re: 6 pgs not deep-scrubbed in time

2024-01-29 Thread Michel Niyoyita
Thank you Janne, is there no need to set some flags like ceph osd set nodeep-scrub? Thank you On Mon, Jan 29, 2024 at 2:04 PM Janne Johansson wrote: > On Mon, Jan 29, 2024 at 12:58, Michel Niyoyita wrote: > > > > Thank you Frank, > > > > All disks are HDDs. Would like to know if I can

[ceph-users] Re: 6 pgs not deep-scrubbed in time

2024-01-29 Thread Janne Johansson
On Mon, Jan 29, 2024 at 12:58, Michel Niyoyita wrote: > > Thank you Frank, > > All disks are HDDs. Would like to know if I can increase the number of PGs > live in production without a negative impact on the cluster. If yes, which > commands to use. Yes. "ceph osd pool set <pool> pg_num <number>" where the
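A minimal example with a placeholder pool name and target value (on Nautilus and later, pgp_num follows pg_num automatically):

  # check the current value
  ceph osd pool get volumes pg_num

  # raise it; the change is applied gradually and causes backfill
  ceph osd pool set volumes pg_num 256

  # keep an eye on the cluster while it rebalances
  ceph -s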

[ceph-users] Re: 6 pgs not deep-scrubbed in time

2024-01-29 Thread Michel Niyoyita
Thank you Frank, All disks are HDDs. Would like to know if I can increase the number of PGs live in production without a negative impact on the cluster. If yes, which commands to use. Thank you very much for your prompt reply. Michel On Mon, Jan 29, 2024 at 10:59 AM Frank Schilder wrote: >

[ceph-users] Re: RadosGW manual deployment

2024-01-29 Thread Janne Johansson
On Mon, Jan 29, 2024 at 10:38, Eugen Block wrote: > > Ah, you probably have dedicated RGW servers, right? They are VMs, but yes. -- May the most significant bit of your life be positive.

[ceph-users] Re: RadosGW manual deployment

2024-01-29 Thread Eugen Block
Ah, you probably have dedicated RGW servers, right? Quoting Janne Johansson: On Mon, Jan 29, 2024 at 09:35, Eugen Block wrote: But your (cephadm managed) cluster will complain about "stray daemons". There doesn't seem to be a way to deploy rgw daemons manually with the cephadm tool so it

[ceph-users] Re: RadosGW manual deployment

2024-01-29 Thread Janne Johansson
On Mon, Jan 29, 2024 at 09:35, Eugen Block wrote: But your (cephadm managed) cluster will > complain about "stray daemons". There doesn't seem to be a way to > deploy rgw daemons manually with the cephadm tool so it wouldn't be > stray. Is there a specific reason not to use the orchestrator for

[ceph-users] Re: 1 clients failing to respond to cache pressure (quincy:17.2.6)

2024-01-29 Thread Eugen Block
I'm not sure if I understand correctly: I decided to distribute subvolumes across multiple pools instead of multi-active-mds. With this method I will have multiple MDS and [1x cephfs clients for each pool / Host] Those two statements contradict each other: either you have multi-active MDS
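To see which of the two situations a cluster is actually in, something along these lines (the file system name cephfs is a placeholder):

  # file systems, their active MDS ranks and standbys
  ceph fs status

  # max_mds of a given file system (1 = single active MDS)
  ceph fs get cephfs | grep max_mds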

[ceph-users] Re: 6 pgs not deep-scrubbed in time

2024-01-29 Thread Frank Schilder
Hi Michel, are your OSDs HDD or SSD? If they are HDD, it's possible that they can't handle the deep-scrub load with default settings. In that case, have a look at this post https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/YUHWQCDAKP5MPU6ODTXUSKT7RVPERBJF/ for some basic tuning
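A few of the scrub-related options worth inspecting before changing anything (this only reads the current values, it changes nothing):

  ceph config get osd osd_deep_scrub_interval
  ceph config get osd osd_scrub_max_interval
  ceph config get osd osd_scrub_sleep
  ceph config get osd osd_max_scrubs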

[ceph-users] Re: RadosGW manual deployment

2024-01-29 Thread Eugen Block
Good morning, Janne was a bit quicker than me, so I'll skip my short instructions on how to deploy it manually. But your (cephadm managed) cluster will complain about "stray daemons". There doesn't seem to be a way to deploy rgw daemons manually with the cephadm tool so it wouldn't be
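If that warning turns out to be more annoying than useful, it can be inspected and, at your own risk, silenced (assuming the cephadm module option named below; silencing it costs you visibility of genuinely stray daemons):

  # see exactly which daemons cephadm considers stray
  ceph health detail

  # optionally silence the warning
  ceph config set mgr mgr/cephadm/warn_on_stray_daemons false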

[ceph-users] Re: RadosGW manual deployment

2024-01-29 Thread Janne Johansson
On Mon, Jan 29, 2024 at 08:11, Jan Kasprzak wrote: > > Hi all, > > how can radosgw be deployed manually? For Ceph cluster deployment, > there is still (fortunately!) a documented method which works flawlessly > even in Reef: > >
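For reference, a very rough skeleton of such a manual deployment (host/daemon names, port and caps are placeholders, not the exact recipe discussed in this thread):

  # key for the new gateway instance
  ceph auth get-or-create client.rgw.gw1 mon 'allow rw' osd 'allow rwx' \
      -o /etc/ceph/ceph.client.rgw.gw1.keyring

  # minimal ceph.conf section on the gateway host:
  # [client.rgw.gw1]
  #   rgw_frontends = beast port=7480
  #   keyring = /etc/ceph/ceph.client.rgw.gw1.keyring

  # first test run in the foreground
  radosgw -f --name client.rgw.gw1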