Coming back with an update on yesterday's story.

The cluster seems to be running fine, Ceph reports no errors, and the array has been rebuilt, but in syslog on the rebuilt node the following warning appears:

Dec 20 10:19:47 pve-m710-3 ceph-crash[553]: WARNING:ceph-crash:post /var/lib/ceph/crash/2023-12-19T09:05:39.204755Z_2b65546d-25fd-4b14-bf05-5cb95ca3ad30 as client.admin failed: 2023-12-20T10:19:47.424+0200 7f499a33e6c0 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.admin.keyring: (13) Permission denied
Dec 20 10:19:47 pve-m710-3 ceph-crash[553]: 2023-12-20T10:19:47.424+0200 7f499a33e6c0 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.admin.keyring: (13) Permission denied
Dec 20 10:19:47 pve-m710-3 ceph-crash[553]: 2023-12-20T10:19:47.424+0200 7f499a33e6c0 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.admin.keyring: (13) Permission denied
Dec 20 10:19:47 pve-m710-3 ceph-crash[553]: 2023-12-20T10:19:47.424+0200 7f499a33e6c0 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.admin.keyring: (13) Permission denied
Dec 20 10:19:47 pve-m710-3 ceph-crash[553]: 2023-12-20T10:19:47.424+0200 7f499a33e6c0 -1 monclient: keyring not found
Dec 20 10:19:47 pve-m710-3 ceph-crash[553]: [errno 13] RADOS permission denied (error connecting to the cluster)
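For what it's worth, this warning is a known quirk: the ceph-crash daemon runs as the unprivileged `ceph` user, which cannot read `/etc/pve/priv/ceph.client.admin.keyring` (that path on the pmxcfs mount is root-only). A common workaround, sketched below, is to give the daemon its own `client.crash` key with the crash profile capabilities; the paths assume the stock Ceph locations, so adjust if your setup differs:

```shell
# Create a minimal key for ceph-crash (capabilities per the Ceph crash module docs)
ceph auth get-or-create client.crash mon 'profile crash' mgr 'profile crash' \
    > /etc/ceph/ceph.client.crash.keyring

# Make the keyring readable by the user the daemon runs as
chown ceph:ceph /etc/ceph/ceph.client.crash.keyring
chmod 640 /etc/ceph/ceph.client.crash.keyring

# ceph-crash tries client.crash before falling back to client.admin;
# restart it so it reposts the queued crash reports
systemctl restart ceph-crash
```

This would need to be done on each node that logs the warning.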

I assume something went wrong when I re-added the new node to the cluster: the SSH known_hosts on the node that manages the cluster still held the old node's host key. I fixed that afterwards, but it's possible I didn't get everything right when I redid the Ceph configuration on the new node.
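For the stale host key part, the usual cleanup on the managing node looks something like the sketch below; the hostname and IP here are just examples for your node:

```shell
# Drop the old node's stale host keys from known_hosts
# (hostname and IP below are placeholders)
ssh-keygen -R pve-m710-3
ssh-keygen -R 192.168.1.13

# On Proxmox the cluster-wide SSH material lives in pmxcfs;
# this regenerates/merges certificates and known_hosts cluster-wide
pvecm updatecerts
```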

Any opinions or ideas? I can see you're all getting bored waiting for Santa to show up with the tree.

Paul

On 19.12.2023 16:37, Paul Lacatus via RLUG wrote:
The configuration was the Proxmox default, with boot, EFI and root partitions. But after booting from a Ventoy stick into Proxmox debug mode I saw that I must have done something wrong in Clonezilla: the cloned disk had only the root partition, with boot and EFI missing, even though I had cloned the whole device, not just a partition. Since the cloning took 12 hours, I gave up on that route. I removed the node from the cluster and reinstalled a node from scratch with the same name and same IP, joined it to the cluster, and recreated the Ceph OSD. Ceph is now cleaning up and rebuilding. The VMs, which were on the Ceph pool anyway, I restored from Proxmox Backup Server, and now everything seems OK. I'm still checking, as I'm not 100% convinced yet.
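The remove-and-rejoin procedure described above can be sketched with the standard Proxmox tooling; node name, peer IP and the OSD device below are examples, not the actual values:

```shell
# On a surviving cluster node: remove the dead node
pvecm delnode pve-m710-3

# On the freshly reinstalled node (same name, same IP): join the cluster
pvecm add 192.168.1.11        # IP of an existing cluster node (example)

# Recreate the OSD on the new node's data disk (example device)
pveceph osd create /dev/sdb

# Watch Ceph recover and rebalance
ceph -s
```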


Paul

On 19.12.2023 12:50, Alex 'CAVE' Cernat via RLUG wrote:
On 19-Dec-23 09:10, Paul Lacatus via RLUG wrote:
Hi,

On a Proxmox machine, the SSD that Proxmox boots from started throwing errors. I cloned it with Clonezilla onto a disk of similar size. The new disk boots into grub rescue. How do I boot it and repair GRUB?
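For the record, a generic repair for this situation is to boot a live/rescue environment, chroot into the installed system and reinstall GRUB. A sketch below, assuming the default Proxmox LVM layout (root LV `pve/root`, ESP on the second partition); device names are examples and ZFS roots need a different approach entirely:

```shell
# From a live/rescue environment (example devices, default LVM layout assumed)
vgchange -ay                                   # activate LVM volume groups
mount /dev/pve/root /mnt
mount /dev/sda2 /mnt/boot/efi                  # ESP; adjust partition as needed
for d in /dev /proc /sys; do mount --bind $d /mnt$d; done
chroot /mnt grub-install /dev/sda
chroot /mnt update-grub
```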

Thanks,

Paul

hi,

if we're talking about Proxmox, then we're most likely dealing with LVM or ZFS, which complicates things; possibly also with EFI / Secure Boot combinations, which add to the "fun".

what was the exact configuration?

Alex


_______________________________________________
RLUG mailing list
RLUG@lists.lug.ro
http://lists.lug.ro/mailman/listinfo/rlug_lists.lug.ro
