[ceph-users] Re: Problem with take-over-existing-cluster.yml playbook

2024-05-14 Thread Guillaume ABRIOUX
Hi Vlad, To be honest, this playbook hasn't received any engineering attention in a while. It's most likely broken. Which version of this playbook are you using? Regards, -- Guillaume Abrioux, Software Engineer

[ceph-users] Re: pacific 16.2.15 QE validation status

2024-01-30 Thread Guillaume Abrioux
Hi Yuri, The ceph-volume failure is a valid bug. I am investigating the root cause and will submit a patch. Thanks! -- Guillaume Abrioux, Software Engineer

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-12-11 Thread Guillaume Abrioux
Hi Yuri, Any chance we can include [1]? This patch fixes mpath device deployments; the PR missed a merge and was only backported to reef this morning. Thanks, [1] https://github.com/ceph/ceph/pull/53539/commits/1e7223281fa044c9653633e305c0b344e4c9b3a4 -- Guillaume Abrioux, Software Engineer

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-16 Thread Guillaume Abrioux
Hi Yuri, Backport PR [2] for reef has been merged. Thanks, [2] https://github.com/ceph/ceph/pull/54514/files -- Guillaume Abrioux, Software Engineer

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-15 Thread Guillaume Abrioux
[...] Another patch [2] is needed in order to fix this regression. Let me know if more details are needed. Thanks, [1] https://github.com/ceph/ceph/pull/54429/commits/ee26074a5e7e90b4026659bf3adb1bc973595e91 [2] https://github.com/ceph/ceph/pull/54514/files -- Guillaume Abrioux, Software Engineer

[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-18 Thread Guillaume Abrioux
Hi Yuri, ceph-volume approved: https://jenkins.ceph.com/job/ceph-volume-test/566/ Regards, -- Guillaume Abrioux, Software Engineer

[ceph-users] Re: 16.2.13 pacific QE validation status

2023-05-04 Thread Guillaume Abrioux
ceph-volume approved: https://jenkins.ceph.com/job/ceph-volume-test/553/
On Wed, 3 May 2023 at 22:43, Guillaume Abrioux wrote:
> The failure seen in ceph-volume tests isn't related. That being said, it needs to be fixed to have a better view of the current status.

[ceph-users] Re: 16.2.13 pacific QE validation status

2023-05-03 Thread Guillaume Abrioux

[ceph-users] Re: Is ceph bootstrap keyrings in use after bootstrap?

2023-02-13 Thread Guillaume Abrioux

[ceph-users] Re: Replacing OSD with containerized deployment

2023-02-07 Thread Guillaume Abrioux
> [...]-3a336b8e--ed39--4532--a199--ac6a3730840b-osd--wal--5d845dba--8b55--4984--890b--547fbdaff10c 253:12 0 331.2G 0 lvm
>
> So it looks like it is using that LVM group right there. Yet, the dashboard doesn't show an NVMe (please compare screenshots osd_232.png and osd_218.png).
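For illustration, a small sketch (assumptions: util-linux's lsblk with JSON output and Python 3 on the OSD host) of how one could list the ceph-volume WAL logical volumes that appear in lsblk with the "osd--wal" device-mapper naming seen in the quoted output:

    import json
    import subprocess

    # Dump the block device tree as JSON; lsblk nests LVs and partitions under "children".
    out = subprocess.run(
        ["lsblk", "--json", "-o", "NAME,TYPE,SIZE"],
        capture_output=True, text=True, check=True,
    ).stdout

    def walk(devices):
        # Flatten the nested device tree.
        for dev in devices:
            yield dev
            yield from walk(dev.get("children", []))

    # Print every LVM logical volume whose dm name marks it as a ceph-volume WAL LV.
    for dev in walk(json.loads(out).get("blockdevices", [])):
        if dev.get("type") == "lvm" and "osd--wal" in dev.get("name", ""):
            print(dev["name"], dev.get("size"))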

[ceph-users] Re: Replacing OSD with containerized deployment

2023-02-01 Thread Guillaume Abrioux
> [...]Me AGN MU AIC 6.4TB
> filter_logic: AND
> objectstore: bluestore
> wal_devices:
>   model: Dell Ent NVMe AGN MU AIC 6.4TB
> status:
>   created: '2022-08-29T16:02:22.822027Z'
>   last_refresh: '2023-02-01T09:03:22.853860Z'
>   running: 306
>   size: 306
>
> Best
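For illustration, a hedged sketch (assuming PyYAML and a working cephadm/orchestrator CLI on the admin node; field names follow the spec fragment quoted above) of how one could dump the OSD service specs with `ceph orch ls osd --format yaml`, the command suggested in the next message of this thread, and check whether a wal_devices filter is set:

    import subprocess
    import yaml

    # Dump every OSD service spec known to the orchestrator as a YAML stream.
    out = subprocess.run(
        ["cephadm", "shell", "--", "ceph", "orch", "ls", "osd", "--format", "yaml"],
        capture_output=True, text=True, check=True,
    ).stdout

    # Each YAML document is one service spec; the drive filters live under "spec".
    for doc in yaml.safe_load_all(out):
        if not doc:
            continue
        spec = doc.get("spec", {}) or {}
        wal = spec.get("wal_devices")
        print(doc.get("service_id", "<unnamed>"), "wal_devices:", wal if wal else "not set")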

[ceph-users] Re: Replacing OSD with containerized deployment

2023-01-31 Thread Guillaume Abrioux
On Tue, 31 Jan 2023 at 22:31, mailing-lists wrote:
> I am not sure. I didn't find it... It should be somewhere, right? I used the dashboard to create the osd service.
What does a `cephadm shell -- ceph orch ls osd --format yaml` say? -- Guillaume Abrioux, Senior Software Engineer

[ceph-users] Re: ceph/daemon stable tag

2023-01-31 Thread Guillaume Abrioux
v6.0.10-stable-6.0-pacific-centos-stream8 (pacific 16.2.11) is now available on quay.io. Thanks,

[ceph-users] Re: ceph/daemon stable tag

2023-01-31 Thread Guillaume Abrioux
> I would like to update and test Pacific's latest version.
Let me check if I can get these tags pushed quickly; I'll update this thread. Thanks, -- Guillaume Abrioux, Senior Software Engineer

[ceph-users] Re: ceph/daemon stable tag

2023-01-31 Thread Guillaume Abrioux

[ceph-users] Re: trouble deploying custom config OSDs

2023-01-23 Thread Guillaume Abrioux
> [...]h osd so it seems to have worked. I'm a bit confused but will be researching more into this. I may have messed up my dev env really badly initially, so maybe that's why it didn't previously work.
> 20d41af95386 quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0

[ceph-users] Re: trouble deploying custom config OSDs

2023-01-19 Thread Guillaume Abrioux
> [...]online and talk to them realtime like discord/slack etc? I tried irc but most are afk.
> Thanks

[ceph-users] Re: Ceph-ansible: add a new HDD to an already provisioned WAL device

2023-01-18 Thread Guillaume Abrioux
> standbys: baloo-3, baloo-1
> mds: 1/1 daemons up, 1 standby
> osd: 24 osds: 24 up (since 4m), 24 in (since 5m)
> rgw: 1 daemon active (1 hosts, 1 zones)
>
> data:
>   volumes: 1/1 healthy
>   pools: 7 pools, 177 pgs
>   objects: 213 objects, 584 KiB
>   usage: 98 MiB used, 138[...]

[ceph-users] Re: 16.2.11 pacific QE validation status

2022-12-19 Thread Guillaume Abrioux
>> [...] - Neha, Laura
>> upgrade/pacific-p2p - Neha - Neha, Laura
>> powercycle - Brad
>> ceph-volume - Guillaume, Adam K
>>
>> Thx
>> YuriW

[ceph-users] Re: Failed to probe daemons or devices

2022-10-24 Thread Guillaume Abrioux
> [...]on3.6/site-packages/ceph_volume/api/lvm.py", line 797, in
> 2022-10-24 10:25:20,307 7fd9b0d92b80 INFO /bin/podman: stderr return [VolumeGroup(**vg) for vg in vgs]
> 2022-10-24 10:25:20,307 7fd9b0d92b80 INFO /bin/podman: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 517, in __init__
> 2022-10-24 10:25:20,307 7fd9b0d92b80 INFO /bin/podman: stderr raise ValueError('VolumeGroup must have a non-empty name')
> 2022-10-24 10:25:20,307 7fd9b0d92b80 INFO /bin/podman: stderr ValueError: VolumeGroup must have a non-empty name
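For context, the ValueError quoted above is ceph-volume refusing to build a VolumeGroup object from a `vgs` record whose name is empty. A simplified, illustrative sketch of that guard (not the verbatim upstream code in ceph_volume/api/lvm.py):

    class VolumeGroup:
        """Simplified stand-in for the class the traceback points at."""

        def __init__(self, **kw):
            # Fields come straight from parsed `vgs` report output
            # (vg_name, vg_size, vg_free, ...).
            for key, value in kw.items():
                setattr(self, key, value)
            self.name = kw.get('vg_name')
            if not self.name:
                # A blank vg_name usually means `vgs` returned an empty or
                # malformed record, e.g. for a stale or half-removed VG.
                raise ValueError('VolumeGroup must have a non-empty name')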

[ceph-users] Re: ceph-ansible install failure

2022-10-24 Thread Guillaume Abrioux
> [...] to purge the cluster. Thanks, Zhongzhou Cai

[ceph-users] Re: quincy v17.2.4 QE Validation status

2022-09-19 Thread Guillaume Abrioux
> [...] to address them.
>
> Josh, Neha - LRC upgrade pending major suites approvals.
> RC release - pending major suites approvals.
>
> Thx
> YuriW

[ceph-users] Re: quincy v17.2.4 QE Validation status

2022-09-14 Thread Guillaume Abrioux
> rbd approved.
>
>> krbd - missing packages, Adam Kr is looking into it
>
> It seems like a transient issue to me, I would just reschedule.
>
> Thanks,
> Ilya

[ceph-users] Re: ceph-ansible stable-5.0 repository must be quincy?

2021-10-20 Thread Guillaume Abrioux
> [...]r ubuntu 20.04? Cheers, /Simon

[ceph-users] Re: Ceph Ansible fails on check if monitor initial keyring already exists

2021-06-11 Thread Guillaume Abrioux
> [...]ne know why the playbook would have a hard time with this step? Thanks in advance!

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-18 Thread Guillaume Abrioux
> [...]siastic about it for ceph.
>
> Yeah, so I think it's good to discuss pros and cons and see what problem it solves, and what extra problems it creates.
>
> Gr. Stefan

[ceph-users] Re: Ceph on CentOS 8?

2020-05-29 Thread Guillaume Abrioux
Hi Jan, I might be wrong, but I don't think download.ceph.com provides RPMs that can be consumed on CentOS 8 at the moment. Internally, for testing ceph@master on CentOS 8, we use RPMs hosted in chacra. Dimitri, who has worked a bit on this topic, might have more input. Thanks, Guillaume