[ceph-users] 3 DC with 4+5 EC not quite working

2024-01-11 Thread Torkil Svensgaard
We are looking to create a 3-datacenter 4+5 erasure-coded pool but can't quite get it to work. Ceph version 17.2.7. These are the hosts (there will eventually be 6 HDD hosts in each datacenter): -33 886.00842 datacenter 714 -7 209.93135 host ceph-hdd1 -69
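
(For reference: a minimal sketch of an EC profile plus a custom CRUSH rule that places 3 of the 9 chunks in each of 3 datacenters; the profile, rule, pool and file names are placeholders, not from the thread.)

  ceph osd erasure-code-profile set ec45 k=4 m=5 crush-device-class=hdd crush-failure-domain=host
  ceph osd getcrushmap -o cm.bin && crushtool -d cm.bin -o cm.txt
  # add a rule along these lines to cm.txt:
  #   rule ec45_3dc {
  #       id 10
  #       type erasure
  #       step set_chooseleaf_tries 5
  #       step set_choose_tries 100
  #       step take default class hdd
  #       step choose indep 3 type datacenter
  #       step chooseleaf indep 3 type host
  #       step emit
  #   }
  crushtool -c cm.txt -o cm.new && ceph osd setcrushmap -i cm.new
  ceph osd pool create ecpool45 erasure ec45 ec45_3dc   # pool using the profile and the custom rule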

[ceph-users] Re: Is there any way to merge an rbd image's full backup and a diff?

2024-01-11 Thread Satoru Takeuchi
Hi Ilya, On Mon, Dec 18, 2023 at 9:14, Satoru Takeuchi wrote: > > Hi Ilya, > > > > Yes, it's possible. It's one of the workarounds I considered. Then the > > > backup data are as follows: > > > > > > a. The full backup taken at least 14 days ago. > > > b. The latest 14 days of backup data > > > > I think it would be: >
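
(For readers following the thread: a rough sketch of folding an incremental into a full backup with rbd merge-diff, assuming the full backup was itself taken with export-diff; pool, image, snapshot and file names are placeholders.)

  rbd export-diff rbd/myimage@snap1 full-snap1.diff                     # full backup up to snap1
  rbd export-diff --from-snap snap1 rbd/myimage@snap2 incr-s1-s2.diff   # incremental snap1 -> snap2
  rbd merge-diff full-snap1.diff incr-s1-s2.diff full-snap2.diff        # merged full backup up to snap2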

[ceph-users] Re: [v18.2.1] problem with wrong osd device symlinks after upgrade to 18.2.1

2024-01-11 Thread Reto Gysi
Hi Eugen, the LV tags seem to look OK to me. LV_tags: - root@zephir:~# lvs -a -o +devices,tags | egrep 'osd1| LV' | grep -v osd12 LV VG Attr LSize Pool Origin Data% Meta% Move Log

[ceph-users] Re: [v18.2.1] problem with wrong osd device symlinks after upgrade to 18.2.1

2024-01-11 Thread Reto Gysi
Ok, I think I found the problem. The problem is that the LVM OSD with the LVM raid1 block.db is activated by RAWActivate instead of LVMActivate, which I think is wrong. Furthermore, if /dev/optante/ceph-db-osd1 is a raid1 LV, ceph_volume.device.raw.list reports: >>> foo =
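
(One way to compare what the two activation paths see, e.g. from within a cephadm shell on the host; the osd1 filter matches the naming used above and is just illustrative.)

  ceph-volume raw list                               # what RAWActivate would pick up
  ceph-volume lvm list                               # what LVMActivate would pick up
  lvs -a -o lv_name,vg_name,lv_tags | grep osd1      # tags the LVM activation path relies on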

[ceph-users] [quincy 17.2.7] ceph orchestrator not doing anything

2024-01-11 Thread Boris
Happy new year, everybody. I just found out that the orchestrator in one of our clusters is not doing anything. What I have tried so far: - disabling / enabling cephadm (no impact) - restarting hosts (no impact) - starting an upgrade to the same version (no impact) - starting a downgrade (no impact) -
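
(A few commands commonly used to check whether the cephadm module is wedged; just a sketch, not from the thread.)

  ceph orch status
  ceph mgr fail                                               # fail over to a standby mgr
  ceph config set mgr mgr/cephadm/log_to_cluster_level debug
  ceph log last 100 debug cephadm                             # watch what the module is doing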

[ceph-users] Re: Pacific bluestore_volume_selection_policy

2024-01-11 Thread Igor Fedotov
Hi Reed, there is not much sense in attaching the logs to the mentioned tickets - the problem with the assertion is well known and has already been fixed. Your current issue is the weird config update behavior which prevents applying the workaround. Feel free to open a ticket about that but I don't
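
(The workaround being discussed is presumably applied per OSD via the config database; a sketch with a placeholder OSD id, including a way to compare the stored value against what the running daemon reports.)

  ceph config set osd.42 bluestore_volume_selection_policy rocksdb_original
  ceph config get osd.42 bluestore_volume_selection_policy    # value stored in the mon config db
  ceph config show osd.42 bluestore_volume_selection_policy   # value the running daemon actually uses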

[ceph-users] Re: Unable to execute radosgw command using cephx users on client side

2024-01-11 Thread Eugen Block
Hi, I don't really have any solution, but it appears to require rwx permissions at least for the rgw tag: caps osd = "allow rwx tag rgw *=*". This was the only way I got the radosgw-admin commands to work in my limited test attempts. Maybe someone else has more insights. My interpretation
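
(A sketch of creating a client with the caps mentioned above; the client name and the mon cap level are assumptions, not from the thread.)

  ceph auth get-or-create client.rgw-admin mon 'allow rw' osd 'allow rwx tag rgw *=*'
  radosgw-admin --id rgw-admin user list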

[ceph-users] Re: [v18.2.1] problem with wrong osd device symlinks after upgrade to 18.2.1

2024-01-11 Thread Eugen Block
Hi, I don't really have any advice but I'm curious what the LV tags look like (lvs -o lv_tags). Do they point to the correct LVs for the block.db? Does the 'ceph osd metadata ' show anything weird? Is there something useful in the ceph-volume.log (/var/log/ceph/{FSID}/ceph-volume.log)?
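
(The checks suggested above, spelled out; the OSD id 1 is a placeholder.)

  lvs -a -o lv_name,vg_name,lv_tags | grep -i osd
  ceph osd metadata 1 | grep -E 'bluefs_db|bluestore_bdev|devices'
  less /var/log/ceph/{FSID}/ceph-volume.log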

[ceph-users] Re: Stuck in upgrade process to reef

2024-01-11 Thread Igor Fedotov
Hi Jan, unfortunately this wasn't very helpful. Moreover, the log looks a bit messy - it looks like a mixture of outputs from multiple running instances or something. I'm not an expert in containerized setups though. Could you please simplify things by running the ceph-osd process manually
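
(Roughly what running the OSD manually with verbose logging could look like on a cephadm host; osd.2 and the fsid are placeholders.)

  systemctl stop ceph-<fsid>@osd.2.service              # stop the containerized OSD first
  cephadm shell --name osd.2                            # shell with that OSD's config and keyring
  ceph-osd -d -i 2 --debug_osd 20 --debug_bluestore 20  # -d: foreground, log to stderr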

[ceph-users] Re: ceph-volume fails in all recent releases with IndexError

2024-01-11 Thread Eugen Block
Hi, I don't use rook but I haven't seen this issue yet in any of my test clusters (from octopus to reef). Although I don't redeploy OSDs all the time, I do set up fresh (single-node) clusters once or twice a week with different releases without any ceph-volume issues. Just to confirm I
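
(To capture the traceback outside of rook, something like this on the affected node should reproduce it; just a sketch.)

  ceph-volume inventory --format json
  ceph-volume lvm list
  tail -n 100 /var/log/ceph/ceph-volume.log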

[ceph-users] Re: Rack outage test failing when nodes get integrated again

2024-01-11 Thread Frank Schilder
Hi Steve, I also observed that setting mon_osd_reporter_subtree_level to anything other than host leads to incorrect behavior. In our case, I actually observed the opposite. I had mon_osd_reporter_subtree_level=datacenter (we have 3 DCs in the crush tree). After cutting off a single host with
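
(The setting being discussed, for reference; datacenter is the value Frank describes having used.)

  ceph config set mon mon_osd_reporter_subtree_level datacenter
  ceph config get mon mon_osd_reporter_subtree_level
  ceph config get mon mon_osd_min_down_reporters      # related knob, often tuned together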

[ceph-users] Re: Sending notification after multiple objects are created in a ceph bucket.

2024-01-11 Thread Yuval Lifshitz
Lokendra and Kushagra, We don't have such an enhancement on the roadmap. I would think of 2 options: (1) implement the special logic using Lua scripting. We have an example of how to send notifications to a NATS broker from Lua [1]; you can easily adjust that to Kafka. The 2 main drawbacks with this
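
(Not from the reply itself: a minimal sketch of a postRequest Lua script and how it is installed; the counting and Kafka/NATS publishing logic would still have to be written, the log line is only a placeholder.)

  -- log_puts.lua: hypothetical skeleton, runs after every RGW request;
  -- real logic for counting objects and publishing to kafka/NATS would go here
  if Request.Bucket then
    RGWDebugLog("request on bucket: " .. Request.Bucket.Name)
  end

  radosgw-admin script put --infile=log_puts.lua --context=postRequest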

[ceph-users] Re: Sending notification after multiple objects are created in a ceph bucket.

2024-01-11 Thread Lokendra Rathour
We are facing a similar situation; any support would be helpful. -Lokendra On Tue, Jan 9, 2024 at 10:47 PM Kushagr Gupta wrote: > Hi Team, > > Features used: Rados gateway, Ceph S3 buckets > > We are trying to create a data pipeline using the S3 buckets capability > and the rados gateway in Ceph. > Our