Hi Vlad,
To be honest, this playbook hasn't received any engineering attention in a
while, so it's most likely broken.
Which version of this playbook are you using?
Regards,
--
Guillaume Abrioux
Software Engineer
From: Frédéric Nass
Date: Tuesday, 14 May 2024 at 10:12
To: vladimir franciz
Hi Yuri,
The ceph-volume failure is a valid bug.
I'm investigating its root cause and will submit a patch.
Thanks!
--
Guillaume Abrioux
Software Engineer
From: Yuri Weinstein
Date: Monday, 29 January 2024 at 22:38
To: dev , ceph-users
Subject: [EXTERNAL] [ceph-users] pacific 16.2.15 QE
Hi Yuri,
Any chance we can include [1]? This patch fixes mpath device deployments; the
PR missed a merge and was only backported onto reef this morning.
Thanks,
[1]
https://github.com/ceph/ceph/pull/53539/commits/1e7223281fa044c9653633e305c0b344e4c9b3a4
--
Guillaume Abrioux
Software Engineer
Hi Yuri,
Backport PR [2] for reef has been merged.
Thanks,
[2] https://github.com/ceph/ceph/pull/54514/files
--
Guillaume Abrioux
Software Engineer
From: Guillaume Abrioux
Date: Wednesday, 15 November 2023 at 21:02
To: Yuri Weinstein , Nizamudeen A ,
Guillaume Abrioux , Travis Nielsen
Another patch [2] is needed to fix this regression.
Let me know if you need more details.
Thanks,
[1]
https://github.com/ceph/ceph/pull/54429/commits/ee26074a5e7e90b4026659bf3adb1bc973595e91
[2] https://github.com/ceph/ceph/pull/54514/files
--
Guillaume Abrioux
Software Engineer
Hi Yuri,
ceph-volume approved https://jenkins.ceph.com/job/ceph-volume-test/566/
Regards,
--
Guillaume Abrioux
Software Engineer
From: Yuri Weinstein
Date: Monday, 16 October 2023 at 20:53
To: dev , ceph-users
Subject: [EXTERNAL] [ceph-users] quincy v17.2.7 QE Validation status
Details
ceph-volume approved https://jenkins.ceph.com/job/ceph-volume-test/553/
On Wed, 3 May 2023 at 22:43, Guillaume Abrioux wrote:
> The failure seen in ceph-volume tests isn't related.
> That being said, it needs to be fixed to have a better view of the current
> status.
>
>> > > Dev mailing list -- d...@ceph.io
>> > > To unsubscribe send an email to dev-le...@ceph.io
___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
--
Guillaume Abrioux
Senior Software Engineer
-3a336b8e--ed39--4532--a199--ac6a3730840b-osd--wal--5d845dba--8b55--4984--890b--547fbdaff10c
> 253:12 0 331.2G 0 lvm
>
>
> So it looks like it is using that LVM group right there. Yet, the
> dashboard doesn't show an NVMe device. (Please compare screenshots
> osd_232.png and osd_218.png.)
Me AGN MU AIC 6.4TB
> filter_logic: AND
> objectstore: bluestore
> wal_devices:
> model: Dell Ent NVMe AGN MU AIC 6.4TB
> status:
> created: '2022-08-29T16:02:22.822027Z'
> last_refresh: '2023-02-01T09:03:22.853860Z'
> running: 306
> size: 306
>
>
> Best
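For readers following along, the `wal_devices` model filter in the spec above selects devices by matching their reported model string, with `filter_logic: AND` requiring every filter to match. The sketch below is an illustration only, not ceph's actual drive-group matching code, and the inventory data is invented:

```python
# Simplified sketch of how an OSD service spec selects WAL devices by
# model string (illustration only; not ceph's real drivegroup code).
def matches(device, filters, filter_logic='AND'):
    """Return True if the device satisfies the spec's filters."""
    checks = [device.get(key) == value for key, value in filters.items()]
    return all(checks) if filter_logic == 'AND' else any(checks)

# Hypothetical inventory as cephadm might report it.
inventory = [
    {'path': '/dev/nvme0n1', 'model': 'Dell Ent NVMe AGN MU AIC 6.4TB'},
    {'path': '/dev/sdb', 'model': 'Some SATA SSD'},
]

wal_filters = {'model': 'Dell Ent NVMe AGN MU AIC 6.4TB'}
wal_devices = [d for d in inventory if matches(d, wal_filters)]
print([d['path'] for d in wal_devices])  # ['/dev/nvme0n1']
```

With `filter_logic: AND`, adding a second filter (e.g. a size constraint) would narrow the selection further; `OR` would widen it.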
On Tue, 31 Jan 2023 at 22:31, mailing-lists wrote:
> I am not sure. I didn't find it... It should be somewhere, right? I used
> the dashboard to create the osd service.
>
What does `cephadm shell -- ceph orch ls osd --format yaml` say?
--
Guillaume Abrioux
Senior Software Engineer
v6.0.10-stable-6.0-pacific-centos-stream8 (pacific 16.2.11) is now
available on quay.io
Thanks,
On Tue, 31 Jan 2023 at 13:43, Guillaume Abrioux wrote:
> On Tue, 31 Jan 2023 at 11:14, Jonas Nemeikšis
> wrote:
>
>> Hello Guillaume,
>>
>> A little bit sad news :)
>
> I would like to update and test Pacific's latest version.
>
Let me check if I can get these tags pushed quickly, I'll update this
thread.
Thanks,
--
Guillaume Abrioux
Senior Software Engineer
>
>
--
Guillaume Abrioux
Senior Software Engineer
h osd so it seems to have worked. I'm a bit confused but will be
> researching more into this.
> I may have messed up my dev env really badly at first, so maybe that's
> why it didn't work previously.
>
> 20d41af95386
> quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0
online
> and talk to them realtime like discord/slack etc ? I tried irc but most are
> afk.
>
> Thanks
>
> Sent with [Proton Mail](https://proton.me/) secure email.
standbys: baloo-3, baloo-1
> mds: 1/1 daemons up, 1 standby
> osd: 24 osds: 24 up (since 4m), 24 in (since 5m)
> rgw: 1 daemon active (1 hosts, 1 zones)
>
> data:
> volumes: 1/1 healthy
> pools: 7 pools, 177 pgs
> objects: 213 objects, 584 KiB
> usage: 98 MiB used, 138
- Neha, Laura
>> upgrade/pacific-p2p - Neha - Neha, Laura
>> powercycle - Brad
>> ceph-volume - Guillaume, Adam K
>>
>> Thx
>> YuriW
>>
on3.6/site-packages/ceph_volume/api/lvm.py", line 797, in
>
> 2022-10-24 10:25:20,307 7fd9b0d92b80 INFO /bin/podman: stderr return
> [VolumeGroup(**vg) for vg in vgs]
> 2022-10-24 10:25:20,307 7fd9b0d92b80 INFO /bin/podman: stderr File
> "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 517, in
> __init__
> 2022-10-24 10:25:20,307 7fd9b0d92b80 INFO /bin/podman: stderr raise
> ValueError('VolumeGroup must have a non-empty name')
> 2022-10-24 10:25:20,307 7fd9b0d92b80 INFO /bin/podman: stderr ValueError:
> VolumeGroup must have a non-empty name
>
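The traceback above shows `VolumeGroup.__init__` rejecting a VG record with an empty name. The sketch below reproduces that failure mode; the class is a reduced stand-in for `ceph_volume.api.lvm.VolumeGroup` (only the check that raises), and the `vgs` data is invented:

```python
# Stand-in for ceph_volume.api.lvm.VolumeGroup, reduced to the check
# that produces the error in the traceback above.
class VolumeGroup:
    def __init__(self, **kw):
        self.name = kw.get('vg_name', '')
        if not self.name:
            raise ValueError('VolumeGroup must have a non-empty name')

# Invented `vgs` report rows; a stray row with an empty vg_name (e.g.
# from a blank line in parsed LVM output) triggers the failure.
vgs = [{'vg_name': 'ceph-block-0'}, {'vg_name': ''}]

try:
    groups = [VolumeGroup(**vg) for vg in vgs]  # mirrors the traceback
except ValueError as exc:
    print(exc)  # VolumeGroup must have a non-empty name

# Skipping nameless rows before construction avoids the crash:
groups = [VolumeGroup(**vg) for vg in vgs if vg.get('vg_name')]
print([g.name for g in groups])  # ['ceph-block-0']
```

The defensive filter at the end is one plausible shape for a fix, not necessarily what the eventual ceph-volume patch did.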
--
Guillaume Abrioux
Senior Software Engineer
l
> to purge the cluster.
>
> Thanks,
> Zhongzhou Cai
--
Guillaume Abrioux
Senior Software Engineer
to address them.
> >
> > Josh, Neha - LRC upgrade pending major suites approvals.
> > RC release - pending major suites approvals.
> >
> > Thx
> > YuriW
> >
rbd approved.
>
> > krbd - missing packages, Adam Kr is looking into it
>
> It seems like a transient issue to me, I would just reschedule.
>
> Thanks,
>
> Ilya
>
>
--
Guillaume Abrioux
Senior Software Engineer
for Ubuntu 20.04?
>
> Cheers
>
> /Simon
--
Guillaume Abrioux
Senior Software Engineer
Does anyone know why the
> playbook would have a hard time with this step?
>
> Thanks in advance!
enthusiastic about
> > it for ceph.
> >
>
> Yeah, so I think it's good to discuss pros and cons and see what problem
> it solves, and what extra problems it creates.
>
> Gr. Stefan
--
Guillaume Abrioux
Senior Software Engineer
Hi Jan,
I might be wrong, but I don't think download.ceph.com provides RPMs that can
be consumed on CentOS 8 at the moment.
Internally, for testing ceph@master on CentOS8, we use RPMs hosted in
chacra.
Dimitri, who has worked a bit on this topic, might have more input.
Thanks,
Guillaume Abrioux