[ceph-users] Ceph/daemon container lvm tools don’t work

2023-11-30 Thread Gaël THEROND
Is anyone already using containerized CEPH on CentOS Stream 9 hosts? I think there is a pretty big issue here if CEPH images are built on CentOS but never tested against it.
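
One way to confirm whether the LVM tooling shipped in the image actually works against the host's devices is to run ceph-volume and the plain LVM commands inside a running daemon container. A minimal sketch; the container runtime and the container name are assumptions, adjust to your deployment:

    # hypothetical container name; use the OSD container on the affected CentOS Stream 9 host
    podman exec -it ceph-osd-0 ceph-volume lvm list
    podman exec -it ceph-osd-0 lvs
    podman exec -it ceph-osd-0 pvs

If these fail inside the container but the equivalent commands work on the host, the problem is the CentOS Stream 8 userspace shipped in the image rather than the host itself.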

[ceph-users] Ceph/daemon container lvm tools don’t work

2023-11-27 Thread Gaël THEROND
Hi team, I’m experimenting a bit with CentOS Stream 9 on our infrastructure as we’re migrating away from CentOS Stream 8. As our deployment model is a hyperconverged one, I have CEPH and OPENSTACK running on the same hosts (OSDs+NOVA/CINDER). That prevents me from keeping CEPH nodes running on CentOS

[ceph-users] CEPH Daemon container CentOS Stream 8 over CentOS Stream 9 host

2023-11-24 Thread Gaël THEROND
Hi team, I’m experimenting a bit with CentOS Stream 9 on our infrastructure as we’re migrating away from CentOS Stream 8. As our deployment model is a hyperconverged one, I have CEPH and OPENSTACK running on the same hosts (OSDs+NOVA/CINDER). That prevents me from keeping CEPH nodes running on CentOS

[ceph-users] Re: Rados gateway data-pool replacement.

2023-04-26 Thread Gaël THEROND
it confusing in the docs because you can't change the EC profile of a pool due to the k and m numbers, and the crush rule is defined in the profile as well, but you can change that outside of the profile. Regards, Rich On Mon, 24 Apr 2023 at 20:55, Gaël THEROND
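
To illustrate the point about changing the rule outside of the profile, here is a hedged sketch (rule, profile, and pool names are placeholders) of swapping the crush rule on an existing EC pool without touching k or m:

    # create a new EC crush rule from a profile that has the desired failure domain
    ceph osd crush rule create-erasure new-ec-rule my-ec-profile
    # point the existing pool at the new rule; k and m stay as they were
    ceph osd pool set my-ec-pool crush_rule new-ec-rule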

[ceph-users] Re: Rados gateway data-pool replacement.

2023-04-24 Thread Gaël THEROND
. What kind of policy should I write to do that? Does this procedure look ok to you? Kind regards! On Wed, 19 Apr 2023 at 14:49, Casey Bodley wrote: On Wed, Apr 19, 2023 at 5:13 AM Gaël THEROND wrote: Hi everyone, quick question regarding rado

[ceph-users] Rados gateway data-pool replacement.

2023-04-19 Thread Gaël THEROND
Hi everyone, quick question regarding the radosgw zone data-pool. I’m currently planning to migrate an old data-pool that was created with an inappropriate failure-domain to a newly created pool with an appropriate failure-domain. If I do something like: radosgw-admin zone modify --rgw-zone default
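
For reference, one common way to repoint a zone's data pool is to edit the zone JSON rather than pass a flag. A sketch under the assumption of a single default zone (pool names are placeholders), not a full migration procedure:

    radosgw-admin zone get --rgw-zone default > zone.json
    # edit "data_pool" under placement_pools in zone.json to the new pool, then:
    radosgw-admin zone set --rgw-zone default --infile zone.json
    radosgw-admin period update --commit

Existing objects stay in the old pool, so data written before the change still has to be copied or re-uploaded separately.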

[ceph-users] RADOSGW zone data-pool migration.

2023-04-17 Thread Gaël THEROND
Hi everyone, quick question regarding the radosgw zone data-pool. I’m currently planning to migrate an old data-pool that was created with an inappropriate failure-domain to a newly created pool with an appropriate failure-domain. If I do something like: radosgw-admin zone modify --rgw-zone default

[ceph-users] Re: 10x more used space than expected

2023-03-15 Thread Gaël THEROND
is: radosgw-admin metadata get bucket:{bucket_name} or radosgw-admin metadata get bucket.instance:{bucket_name}:{instance_id} Hopefully that helps you or someone else struggling with this. Rich On Wed, 15 Mar 2023 at 07:18, Gaël THEROND wrote:
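
As a usage sketch (the bucket name is a placeholder), the instance id needed for the second form can be read from the bucket stats before querying the instance record:

    radosgw-admin bucket stats --bucket my-bucket | grep '"id"'
    radosgw-admin metadata get bucket:my-bucket
    radosgw-admin metadata get bucket.instance:my-bucket:<instance_id>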

[ceph-users] Re: 10x more used space than expected

2023-03-14 Thread Gaël THEROND
ect (chunks if EC) that are smaller than the min_alloc size? This cheat sheet might help: https://docs.google.com/spreadsheets/d/1rpGfScgG-GLoIGMJWDixEkqs-On9w8nAUToPQjN8bDI/edit?usp=sharing Mark On 3/14/23 12:34, Gaël THEROND wrote: Hi everyone, I’ve g
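
Space amplification from small EC chunks depends on the BlueStore minimum allocation size, which can be read from an OSD's admin socket. A quick check, assuming osd.0 is one of the OSDs backing the pool:

    # run on the host where osd.0 lives
    ceph daemon osd.0 config show | grep bluestore_min_alloc_size

Chunks smaller than this value are rounded up on disk, which can inflate reported usage well beyond the logical data size.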

[ceph-users] Re: 10x more used space than expected

2023-03-14 Thread Gaël THEROND
get key: (22) Invalid argument Ok, fine for the API, I’ll deal with the S3 API. Even if a radosgw-admin bucket flush version --keep-current or something similar would be much appreciated xD On Tue, 14 Mar 2023 at 19:07, Robin H. Johnson wrote: On Tue, Mar 14, 2023 at 06:59:51PM +0100, G
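
There is no such radosgw-admin flag, but the usual S3-side equivalent is a lifecycle rule that expires noncurrent versions. A hedged sketch with the aws CLI; the endpoint, bucket name, and retention period are assumptions:

    aws --endpoint-url https://rgw.example.com s3api put-bucket-lifecycle-configuration \
      --bucket my-bucket \
      --lifecycle-configuration '{"Rules":[{"ID":"expire-old-versions","Status":"Enabled","Filter":{"Prefix":""},"NoncurrentVersionExpiration":{"NoncurrentDays":1}}]}'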

[ceph-users] Re: 10x more used space than expected

2023-03-14 Thread Gaël THEROND
with radosgw-admin? If not I’ll use the REST API, no worries. On Tue, 14 Mar 2023 at 18:49, Robin H. Johnson wrote: On Tue, Mar 14, 2023 at 06:34:54PM +0100, Gaël THEROND wrote: Hi everyone, I’ve got a quick question regarding one of our RadosGW buckets. This
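
If you do end up on the S3 REST API, listing the object versions that versioning has kept around can be done with the aws CLI. A sketch, with the endpoint and bucket name as placeholders:

    aws --endpoint-url https://rgw.example.com s3api list-object-versions \
      --bucket my-bucket --max-items 100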

[ceph-users] 10x more used space than expected

2023-03-14 Thread Gaël THEROND
Hi everyone, I’ve got a quick question regarding one of our RadosGW buckets. This bucket is used to store docker registries, and the total amount of data we use is supposed to be 4.5TB, but Ceph tells us we are actually using ~53TB of data. One interesting thing is that this bucket seems to shard
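
A first step when reported usage looks inflated is to compare the bucket's own accounting with what the application expects, and to look at index sharding. A sketch assuming the bucket is called my-bucket:

    radosgw-admin bucket stats --bucket my-bucket    # rgw.main size_actual / num_objects
    radosgw-admin bucket limit check                 # shard counts and objects per shard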

[ceph-users] Re: OSD SLOW_OPS is filling MONs disk space

2022-03-08 Thread Gaël THEROND
chase projects that are putting pressure on the cluster even if the Openstack platform has QoS in place all over, ha ha :-) On Wed, 23 Feb 2022 at 16:57, Eugen Block wrote: That is indeed unexpected, but good for you. ;-) Is the rest of the cluster healthy now? Quoting

[ceph-users] Re: OSD SLOW_OPS is filling MONs disk space

2022-02-23 Thread Gaël THEROND
! On Wed, 23 Feb 2022 at 12:51, Gaël THEROND wrote: Thanks a lot Eugen, I dumbly forgot about the rbd block prefix! I’ll try that this afternoon and tell you how it went. On Wed, 23 Feb 2022 at 11:41, Eugen Block wrote: Hi, How
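
For context, the block prefix mentioned here can be read from the image and then used to find or count the image's data objects in the pool. A sketch with placeholder pool and image names (the prefix value shown is made up):

    rbd info my-pool/my-image | grep block_name_prefix   # e.g. rbd_data.ab12cd34ef56
    rados -p my-pool ls | grep rbd_data.ab12cd34ef56 | wc -l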

[ceph-users] Re: OSD SLOW_OPS is filling MONs disk space

2022-02-23 Thread Gaël THEROND
ted you can check the mon daemon: ceph daemon mon. sessions The mon daemon also has a history of slow ops: ceph daemon mon. dump_historic_slow_ops Regards, Eugen Quoting Gaël THEROND: Hi everyone, I'm having a really nasty is
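
Spelled out as runnable commands (the mon id after the dot was elided in the quote above; filling it with the local short hostname is a common convention, and an assumption about your deployment):

    # run on a monitor host, against its admin socket
    ceph daemon mon.$(hostname -s) sessions
    ceph daemon mon.$(hostname -s) dump_historic_slow_ops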

[ceph-users] OSD SLOW_OPS is filling MONs disk space

2022-02-23 Thread Gaël THEROND
Hi everyone, I've been having a really nasty issue for around two days now where our cluster reports a bunch of SLOW_OPS on one of our OSDs, as shown here: https://paste.openstack.org/show/b3DkgnJDVx05vL5o4OmY/ Here is the cluster specification: * Used to store Openstack related data (VMs/Snapshots/Volumes/Swift).
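
A few commands that are commonly useful in this situation; a sketch, with the OSD and mon ids as placeholders (the compact step only applies if the mon store really has grown large from the accumulated ops/log history):

    ceph health detail                              # which OSDs are reporting slow ops
    ceph daemon osd.12 dump_historic_slow_ops       # run on the host of the affected OSD
    ceph tell mon.ceph-mon-1 compact                # compact an oversized monitor store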

[ceph-users] Re: RBD Image can't be formatted - blk_error

2021-02-21 Thread Gaël THEROND
following by the way and sorry for the really late answer! On Mon, 11 Jan 2021 at 13:38, Ilya Dryomov wrote: On Mon, Jan 11, 2021 at 10:09 AM Gaël THEROND wrote: Hi Ilya, Here is additional information: My cluster is a three OSD No

[ceph-users] Re: RBD Image can't be formatted - blk_error

2021-01-11 Thread Gaël THEROND
is the complete kernel logs: https://pastebin.com/SNucPXZW Thanks a lot for your answer, I hope these logs can help ^^ On Fri, 8 Jan 2021 at 21:23, Ilya Dryomov wrote: On Fri, Jan 8, 2021 at 2:19 PM Gaël THEROND wrote: Hi everyone! I'm facin

[ceph-users] RBD Image can't be formatted - blk_error

2021-01-08 Thread Gaël THEROND
Hi everyone! I'm facing a weird issue with one of my CEPH clusters: OS: CentOS 8.2.2004 (Core); CEPH: Nautilus 14.2.11 (stable); RBD using an erasure code profile (k=3, m=2). When I want to format one of my RBD images (client side) I get the following kernel messages multiple times with different
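
Not a diagnosis of the blk_error itself, but worth double-checking for any RBD image backed by an erasure-coded pool: the image metadata has to live in a replicated pool, the EC pool is only referenced as the data pool, and the EC pool must allow overwrites. A sketch with placeholder pool and image names:

    ceph osd pool set my-ec-pool allow_ec_overwrites true
    rbd create --size 100G --data-pool my-ec-pool my-replicated-pool/my-image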

[ceph-users] Re: Problems with mon

2020-10-13 Thread Gaël THEROND
h failure), but not responding to any commands Regards Mateusz Skała On Tue, 13 Oct 2020 at 11:25, Gaël THEROND wrote: This error means your quorum didn't form. How many mon nodes do you usually have, and how mu

[ceph-users] Re: Problems with mon

2020-10-13 Thread Gaël THEROND
This error means your quorum didn’t form. How many mon nodes do you usually have, and how many went down? On Tue, 13 Oct 2020 at 10:56, Mateusz Skała wrote: Hello Community, I have problems with ceph-mons in docker. The Docker pods are starting but I get a lot of messages "e6
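
When the quorum won't form, each surviving monitor can still be asked for its own view via its admin socket. A sketch assuming the local short hostname matches the mon id:

    # run on each mon host; "state" and "quorum" show whether this mon sees a quorum
    ceph daemon mon.$(hostname -s) mon_status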

[ceph-users] Re: MONs are down, the quorum is unable to resolve.

2020-10-12 Thread Gaël THEROND
point about keeping the container alive by using `sleep` is important. Then you can get into the container with `exec` and do what you need to. https://rook.io/docs/rook/v1.4/ceph-disaster-recovery.html#restoring-mon-quorum On Oct 12, 2020, at 4:16 PM, Gaël
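
Outside of Rook, the same trick works with a plain container runtime: start the mon image with `sleep` as the entrypoint so nothing in it runs, then `exec` in to work on the mon store by hand. A rough sketch only; the runtime, image tag, and mounted paths are assumptions about your setup:

    # stop the normal mon unit first so nothing else touches the store
    docker run -d --name mon-debug --entrypoint sleep \
      -v /var/lib/ceph:/var/lib/ceph -v /etc/ceph:/etc/ceph \
      ceph/daemon:latest infinity
    docker exec -it mon-debug bash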

[ceph-users] MONs are down, the quorum is unable to resolve.

2020-10-12 Thread Gaël THEROND
Hi everyone, because of unfortunate events I have a container-based ceph cluster (nautilus) in bad shape. It's one of the lab clusters, made of only 2 control-plane nodes (I know it's bad :-)); each of these nodes runs a mon, a mgr and a rados-gw containerized ceph_daemon. They were