We are looking to create a 3-datacenter 4+5 erasure-coded pool but can't
quite get it to work. Ceph version 17.2.7. These are the hosts (there will
eventually be 6 hdd hosts in each datacenter); a command sketch of what we
are aiming for follows the tree output:
-33 886.00842 datacenter 714
-7 209.93135 host ceph-hdd1
-69
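To make the intent concrete, this is roughly the profile, crush rule and
pool creation we are aiming for - an untested sketch, all names and the
rule id are placeholders:

ceph osd erasure-code-profile set ec-4-5 k=4 m=5 \
    crush-device-class=hdd crush-failure-domain=host

# rule to add to the decompiled crushmap (crushtool -d / crushtool -c),
# placing 3 chunks in each of 3 datacenters, one chunk per host:
rule ec45-3dc {
    id 99                       # any unused rule id
    type erasure
    step set_chooseleaf_tries 5
    step set_choose_tries 100
    step take default class hdd
    step choose indep 3 type datacenter
    step chooseleaf indep 3 type host
    step emit
}

ceph osd pool create ec-test 128 128 erasure ec-4-5 ec45-3dc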
Hi Ilya,
On Mon, Dec 18, 2023 at 9:14 Satoru Takeuchi wrote:
>
> Hi Ilya,
>
> > > Yes, it's possible. It's one of the workarounds I thought of. Then the
> > > backup data are as follows:
> > >
> > > a. The full backup taken at least 14 days ago.
> > > b. The backup data from the latest 14 days
> >
> > I think it would be:
>
Hi Eugen,
LV tags seem to look ok to me.
LV_tags:
-
root@zephir:~# lvs -a -o +devices,tags | egrep 'osd1| LV' | grep -v osd12
LV   VG   Attr   LSize   Pool   Origin   Data%   Meta%   Move   Log
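For comparison, the tags ceph-volume normally sets on the DB LV look like
this (the values below are made up, only the keys matter):

ceph.osd_id=1
ceph.osd_fsid=<osd-uuid>
ceph.type=db
ceph.cluster_fsid=<cluster-fsid>
ceph.db_device=/dev/<vg>/<db-lv>
ceph.block_device=/dev/<vg>/<block-lv>

The block LV should carry the same set with ceph.type=block, pointing back
at the DB LV via ceph.db_device.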
OK, I think I found the problem.
The problem is that the LVM OSD with the LVM raid1 block.db is activated by
RAWActivate instead of LVMActivate, which I think is wrong.
Furthermore, if /dev/optante/ceph-db-osd1 is a raid1 LV,
ceph_volume.device.raw.list reports:
>>> foo =
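One way to narrow down which code path claims the OSD (a sketch - the osd
id and fsid below are placeholders) is to compare the two inventories and,
if needed, force LVM activation for that single OSD:

ceph-volume lvm list                      # LV-based OSDs with their tags
ceph-volume raw list                      # if the raid1 DB LV shows up here too,
                                          # RAWActivate may be grabbing it first
ceph-volume lvm activate 1 <osd-fsid>     # activate osd.1 via the LVM path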
Happy New Year, everybody.
I just found out that the orchestrator in one of our clusters is not doing
anything.
What I have tried so far (a few further checks are sketched after the list):
- disabling / enabling cephadm (no impact)
- restarting hosts (no impact)
- starting upgrade to same version (no impact)
- starting downgrade (no impact)
-
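A few further checks that might narrow this down (a command sketch, nothing
here is specific to our cluster):

ceph orch status                  # is the cephadm backend reported as available?
ceph health detail
ceph log last 100 info cephadm    # recent orchestrator activity in the cluster log
ceph mgr fail                     # fail over the active mgr and see if cephadm wakes up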
Hi Reed,
there is not much sense in attaching the logs to the mentioned tickets - the
problem with the assertion is well known and has already been fixed.
Your current issue is the weird config update behavior which prevents you
from applying the workaround. Feel free to open a ticket about that, but I
don't
Hi,
I don't really have any solution, but it appears to require rwx
permissions at least for the rgw tag:
caps osd = "allow rwx tag rgw *=*"
This was the only way I got the radosgw-admin commands to work in my
limited test attempts. Maybe someone else has more insights. My
interpretation
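Spelled out as a complete auth entry it would look something like this - note
that the client name and the mon cap are just my guesses, not a verified
minimum:

ceph auth get-or-create client.rgw-admin-test \
    mon 'allow r' \
    osd 'allow rwx tag rgw *=*'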
Hi,
I don't really have any advice, but I'm curious what the LV tags look
like (lvs -o lv_tags). Do they point to the correct LVs for the
block.db? Does 'ceph osd metadata <OSD_ID>' show anything weird? Is
there something useful in the ceph-volume.log
(/var/log/ceph/{FSID}/ceph-volume.log)?
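In other words, something along these lines (osd.1 is just an example id):

lvs -o lv_name,vg_name,lv_tags --noheadings | grep 'ceph.osd_id=1'   # LVs tagged for osd.1
ceph osd metadata 1 | grep -E 'bluefs_db|devices'                    # devices the OSD reports
less /var/log/ceph/$(ceph fsid)/ceph-volume.log                      # activation log on that host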
Hi Jan,
unfortunately this wasn't very helpful. Moreover, the log looks a bit
messy - it looks like a mixture of outputs from multiple running instances
or something. I'm not an expert in containerized setups though.
Could you please simplify things by running the ceph-osd process manually
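e.g. something along these lines for a cephadm deployment (a rough sketch -
osd.1 is a placeholder, and the systemd unit for that OSD should be stopped
first):

cephadm shell --name osd.1
ceph-osd -d -i 1 --debug-osd 20 --debug-bluestore 20   # foreground, log to stderr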
Hi,
I don't use rook but I haven't seen this issue yet in any of my test
clusters (from Octopus to Reef). Although I don't redeploy OSDs all
the time, I do set up fresh (single-node) clusters once or twice a
week with different releases without any ceph-volume issues. Just to
confirm I
Hi Steve,
I also observed that setting mon_osd_reporter_subtree_level to anything other
than host leads to incorrect behavior.
In our case, I actually observed the opposite. I had
mon_osd_reporter_subtree_level=datacenter (we have 3 DCs in the crush tree).
After cutting off a single host with
Lokendra and Kushagra,
We don't have such an enhancement on the roadmap. I would think of 2 options:
(1) implement the special logic using Lua scripting. We have an example of
how to send notifications to a NATS broker from Lua [1]; you can easily
adjust that to Kafka. The 2 main drawbacks with this
We are facing a similar situation; any support would be helpful.
-Lokendra
On Tue, Jan 9, 2024 at 10:47 PM Kushagr Gupta wrote:
> Hi Team,
>
> Features used: Rados gateway, ceph S3 buckets
>
> We are trying to create a data pipeline using the S3 buckets capability
> and the RADOS Gateway in Ceph.
> Our