Hi,

I suspect the auth_allow_insecure_global_id_reclaim config option is the cause. If you really need this to work, you can set

$ ceph config set mon auth_allow_insecure_global_id_reclaim true

and the client should be able to connect. You will get a warning though:

mon is allowing insecure global_id reclaim

You can disable the warning if you need to:

$ ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
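
Once the old clients have been upgraded to a release that reclaims the global_id securely, you probably want to switch back to the stricter behaviour. A rough sketch (untested here, same option names as above; the config get line is only there to verify the current value):

$ ceph config get mon auth_allow_insecure_global_id_reclaim
$ ceph config set mon auth_allow_insecure_global_id_reclaim false
$ ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed true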

Regards,
Eugen

Quoting Pureewat Kaewpoi <pureewa...@bangmod.co.th>:

Hi all!

We have a newly installed cluster running Ceph Reef, but our old clients still use Ceph Luminous. The problem is that any command against the Ceph cluster hangs and produces no output.

This is the output of the command ceph osd pool ls --debug-ms 1:
2023-10-02 23:35:22.727089 7fc93807c700  1  Processor -- start
2023-10-02 23:35:22.729256 7fc93807c700  1 -- - start start
2023-10-02 23:35:22.729790 7fc93807c700 1 -- - --> MON-1:6789/0 -- auth(proto 0 34 bytes epoch 0) v1 -- 0x7fc930174cb0 con 0
2023-10-02 23:35:22.730724 7fc935e72700 1 -- CLIENT:0/187462963 learned_addr learned my addr CLIENT:0/187462963
2023-10-02 23:35:22.732091 7fc927fff700 1 -- CLIENT:0/187462963 <== mon.0 MON-1:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (2762451217 0 0) 0x7fc920002310 con 0x7fc93017d0f0
2023-10-02 23:35:22.732228 7fc927fff700 1 -- CLIENT:0/187462963 --> MON-1:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- 0x7fc914000fc0 con 0
2023-10-02 23:35:22.733237 7fc927fff700 1 -- CLIENT:0/187462963 <== mon.0 MON-1:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 206+0+0 (3693167043 0 0) 0x7fc920002830 con 0x7fc93017d0f0
2023-10-02 23:35:22.733428 7fc927fff700 1 -- CLIENT:0/187462963 --> MON-1:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- 0x7fc914002e10 con 0
2023-10-02 23:35:22.733451 7fc927fff700 1 -- CLIENT:0/187462963 <== mon.0 MON-1:6789/0 3 ==== mon_map magic: 0 v1 ==== 532+0+0 (3038142027 0 0) 0x7fc920000e50 con 0x7fc93017d0f0
2023-10-02 23:35:22.734365 7fc927fff700 1 -- CLIENT:0/187462963 <== mon.0 MON-1:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 580+0+0 (3147563293 0 0) 0x7fc920001640 con 0x7fc93017d0f0
2023-10-02 23:35:22.734597 7fc927fff700 1 -- CLIENT:0/187462963 --> MON-1:6789/0 -- mon_subscribe({monmap=0+}) v2 -- 0x7fc9301755e0 con 0
2023-10-02 23:35:22.734678 7fc93807c700 1 -- CLIENT:0/187462963 --> MON-1:6789/0 -- mon_subscribe({mgrmap=0+}) v2 -- 0x7fc930180750 con 0
2023-10-02 23:35:22.734805 7fc93807c700 1 -- CLIENT:0/187462963 --> MON-1:6789/0 -- mon_subscribe({osdmap=0}) v2 -- 0x7fc930180f00 con 0
2023-10-02 23:35:22.734891 7fc935e72700 1 -- CLIENT:0/187462963 >> MON-1:6789/0 conn(0x7fc93017d0f0 :-1 s=STATE_OPEN pgs=754 cs=1 l=1).read_bulk peer close file descriptor 13
2023-10-02 23:35:22.734917 7fc935e72700 1 -- CLIENT:0/187462963 >> MON-1:6789/0 conn(0x7fc93017d0f0 :-1 s=STATE_OPEN pgs=754 cs=1 l=1).read_until read failed
2023-10-02 23:35:22.734922 7fc935e72700 1 -- CLIENT:0/187462963 >> MON-1:6789/0 conn(0x7fc93017d0f0 :-1 s=STATE_OPEN pgs=754 cs=1 l=1).process read tag failed
2023-10-02 23:35:22.734926 7fc935e72700 1 -- CLIENT:0/187462963 >> MON-1:6789/0 conn(0x7fc93017d0f0 :-1 s=STATE_OPEN pgs=754 cs=1 l=1).fault on lossy channel, failing
2023-10-02 23:35:22.734966 7fc927fff700 1 -- CLIENT:0/187462963 >> MON-1:6789/0 conn(0x7fc93017d0f0 :-1 s=STATE_CLOSED pgs=754 cs=1 l=1).mark_down
2023-10-02 23:35:22.735062 7fc927fff700 1 -- CLIENT:0/187462963 --> MON-2:6789/0 -- auth(proto 0 34 bytes epoch 3) v1 -- 0x7fc914005580 con 0
2023-10-02 23:35:22.735077 7fc927fff700 1 -- CLIENT:0/187462963 --> MON-3:6789/0 -- auth(proto 0 34 bytes epoch 3) v1 -- 0x7fc914005910 con 0
2023-10-02 23:35:22.737246 7fc927fff700 1 -- CLIENT:0/187462963 <== mon.2 MON-3:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (2138308960 0 0) 0x7fc920002fd0 con 0x7fc91400b0c0
2023-10-02 23:35:22.737443 7fc927fff700 1 -- CLIENT:0/187462963 --> MON-3:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- 0x7fc914014f10 con 0
2023-10-02 23:35:22.737765 7fc927fff700 1 -- CLIENT:0/187462963 <== mon.1 MON-2:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (3855879565 0 0) 0x7fc928002390 con 0x7fc91400f730
2023-10-02 23:35:22.737799 7fc927fff700 1 -- CLIENT:0/187462963 --> MON-2:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- 0x7fc914015850 con 0
2023-10-02 23:35:22.737966 7fc927fff700 1 -- CLIENT:0/187462963 <== mon.2 MON-3:6789/0 2 ==== auth_reply(proto 2 -13 (13) Permission denied) v1 ==== 24+0+0 (2583972696 0 0) 0x7fc920003240 con 0x7fc91400b0c0
2023-10-02 23:35:22.737981 7fc927fff700 1 -- CLIENT:0/187462963 >> MON-3:6789/0 conn(0x7fc91400b0c0 :-1 s=STATE_OPEN pgs=464 cs=1 l=1).mark_down
2023-10-02 23:35:22.738096 7fc927fff700 1 -- CLIENT:0/187462963 <== mon.1 MON-2:6789/0 2 ==== auth_reply(proto 2 -13 (13) Permission denied) v1 ==== 24+0+0 (2583972696 0 0) 0x7fc928002650 con 0x7fc91400f730
2023-10-02 23:35:22.738110 7fc927fff700 1 -- CLIENT:0/187462963 >> MON-2:6789/0 conn(0x7fc91400f730 :-1 s=STATE_OPEN pgs=344 cs=1 l=1).mark_down

By the way, I have used the same keyring with a Ceph Nautilus client and it works well without any problem.

What should I do next? Where should I look to debug or fix this issue?
Thanks
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

