[ceph-users] ceph-iscsi on RL9

2023-12-23 Thread duluxoz

Hi All,

Just successfully(?) completed a "live" update of the first node of a 
Ceph Quincy cluster from RL8 to RL9. Everything "seems" to be working - 
EXCEPT the iSCSI Gateway on that box.


During the update the ceph-iscsi package was removed (i.e. 
`ceph-iscsi-3.6-2.g97f5b02.el8.noarch.rpm` - this is the latest package 
available from the Ceph repos), so, obviously, I tried to reinstall the package.


However, `dnf` is throwing errors (unsurprisingly, as that is an el8 
package and this box is now running el9): the package requires Python 3.6, 
while el9 ships Python 3.9.
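
(In case it helps, the dependency can be confirmed with something along 
these lines - `rpm -qpR` against the downloaded file, or `dnf repoquery` 
against the enabled repos:)

# list the dependencies declared by the downloaded el8 package
rpm -qpR ceph-iscsi-3.6-2.g97f5b02.el8.noarch.rpm

# or ask the enabled repos the same question
dnf repoquery --requires ceph-iscsi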


So my questions are: Can I simply "downgrade" Python to 3.6, is there an 
el9-compatible build of `ceph-iscsi` somewhere, and/or is there some other 
process I need to follow to get the iSCSI Gateway back up and running?
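
(A quick way to check whether an el9 build is available at all - package 
name as above; the el9 directory URL is just my guess at the obvious 
location on download.ceph.com:)

# ask the repos configured on this (now el9) box which builds they can see
dnf repoquery --showduplicates ceph-iscsi

# or browse the upstream el9 directory directly, e.g.
#   https://download.ceph.com/rpm-quincy/el9/noarch/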


Some further info: the next step in my 
"happy-happy-fun-time-holiday-ICT-maintenance" is to convert the current 
Ceph cluster to `cephadm` and to go from Ceph Quincy to Ceph Reef - is this 
my ultimate upgrade path to get the iSCSI G/W back?
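
From what I've read, that path would look roughly like the sketch below 
(host names, daemon IDs and the Reef version are placeholders, per the 
cephadm adoption docs) - please correct me if I've got it wrong:

# on each node: hand the existing package-based daemons over to cephadm
cephadm adopt --style legacy --name mon.ceph-node1   # placeholder names
cephadm adopt --style legacy --name mgr.ceph-node1
cephadm adopt --style legacy --name osd.0

# once every daemon is managed by the orchestrator, upgrade Quincy -> Reef
ceph orch upgrade start --ceph-version 18.2.1        # placeholder release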


BTW, the Ceph cluster is used *only* to provide iSCSI LUNs to an oVirt 
(KVM) cluster front-end. Because it is the holidays I can take the entire 
network down (i.e. shut down all the VMs) to facilitate this update process, 
which also means I could use some other way (i.e. a non-iSCSI way, I think) 
to connect the Ceph SAN cluster to the oVirt VM-hosting cluster. If *that* 
is the solution (i.e. no iSCSI), does anyone have experience running oVirt 
off of Ceph in a non-iSCSI way - and could you be so kind as to provide some 
pointers/documentation/help?


And before anyone says it, let me: "I broke it, now I own it" :-)

Thanks in advance, and everyone have a Merry Christmas, Heavenly 
Hanukkah, Quality Kwanzaa, Really-good (upcoming) Ramadan, and/or a 
Happy Holidays.


Cheers

Dulux-Oz


[ceph-users] OSD is usable, but not shown in "ceph orch device ls"

2023-12-23 Thread E Taka
Hello,

In our cluster we have one node with an SSD which is in use, but we cannot 
see it in "ceph orch device ls". Everything else looks OK. For better 
understanding: the disk name is /dev/sda and it is osd.138:

~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    1    7T  0 disk

~# wipefs /dev/sda
DEVICE OFFSET TYPE           UUID LABEL
sda    0x0    ceph_bluestore

~# ceph osd tree
 -9        15.42809      host ceph06
138    ssd   6.98630          osd.138    up   1.0  1.0

The file ceph-osd.138.log does not look unusual to me.

ceph-volume.log shows that the SSD is found by the "lsblk" call during 
volume processing.

It is not possible to add the SSD with

~# ceph orch daemon add osd ceph06:/dev/sda

The error message in this case asks whether the device is already in use, 
even if the SSD has been fully wiped via "wipefs -a" or by overwriting the 
entire disk with dd. It is, however, possible to add it to the cluster by 
using the option "--method raw".

Do you have any idea what happened here, and how can I debug this behaviour?
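
(In case it helps with suggestions: would something along these lines be 
the right way to dig further? Host and device names as in the output above.)

# force the orchestrator to rescan devices instead of using cached inventory
ceph orch device ls --wide --refresh

# ask ceph-volume directly what it reports for the disk (run on ceph06;
# on a containerised node, prefix with "cephadm ceph-volume --")
ceph-volume inventory /dev/sda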