Hi,

I'm running a Ceph Mimic cluster with 4x iSCSI gateway nodes. The cluster was set up
via ceph-ansible v3.2-stable. I just checked my nodes and saw that only two of the
four configured iSCSI gateway nodes are working correctly. I first noticed it via
gwcli:


###

$ gwcli -d ls
Traceback (most recent call last):
  File "/usr/bin/gwcli", line 191, in <module>
    main()
  File "/usr/bin/gwcli", line 103, in main
    root_node.refresh()
  File "/usr/lib/python2.7/site-packages/gwcli/gateway.py", line 87, in refresh
    raise GatewayError
gwcli.utils.GatewayError

###


I investigated and noticed that both "rbd-target-api" and "rbd-target-gw" are
not running. I was not able to restart them via systemd. I then found that
even tcmu-runner is not running; it exits with the following error:



###

tcmu_rbd_check_image_size:827 rbd/production.lun1: Mismatched sizes. RBD image size 5498631880704. Requested new size 5497558138880.

###
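
For reference, this is roughly how I checked the daemons and pulled that message on
one of the broken nodes (standard systemd commands, nothing special):

###

# status of the three gateway daemons
systemctl status tcmu-runner rbd-target-api rbd-target-gw

# the size-mismatch line above is from the tcmu-runner journal
journalctl -u tcmu-runner --no-pager | tail -n 50

###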


Now I have the situation that two nodes are running correctly and two can't start
tcmu-runner. I don't know where the image size mismatch is coming from - I
haven't configured or resized any of the images.
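
If I did the math right, the two sizes differ by exactly 1 GiB - the RBD image is
5121 GiB, while the gateway apparently expects 5120 GiB:

###

echo $(( 5498631880704 / 1024**3 ))         # RBD image size     -> 5121 GiB
echo $(( 5497558138880 / 1024**3 ))         # requested new size -> 5120 GiB
echo $(( 5498631880704 - 5497558138880 ))   # difference         -> 1073741824 bytes (1 GiB)

###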


Is there any chance of getting my two iSCSI gateway nodes working again?
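
My plan would be to compare the actual image size with whatever the iSCSI gateway
configuration has recorded, roughly like this. I believe the config lives in a rados
object called "gateway.conf" in the "rbd" pool (the ceph-ansible defaults here, as far
as I know - please correct me if that is wrong):

###

# actual image size on the cluster
rbd info rbd/production.lun1

# dump the gateway configuration object and look at the size stored for the LUN
rados -p rbd get gateway.conf /tmp/gateway.conf
python -m json.tool /tmp/gateway.conf | grep -i -A3 lun1

###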



The following packages are installed:


rpm -qa | egrep "ceph|iscsi|tcmu|rst|kernel"

libtcmu-1.4.0-106.gd17d24e.el7.x86_64
ceph-iscsi-cli-2.7-2.7.el7.noarch
kernel-3.10.0-957.5.1.el7.x86_64
ceph-base-13.2.5-0.el7.x86_64
ceph-iscsi-config-2.6-2.6.el7.noarch
ceph-common-13.2.5-0.el7.x86_64
ceph-selinux-13.2.5-0.el7.x86_64
kernel-tools-libs-3.10.0-957.5.1.el7.x86_64
python-cephfs-13.2.5-0.el7.x86_64
ceph-osd-13.2.5-0.el7.x86_64
kernel-headers-3.10.0-957.5.1.el7.x86_64
kernel-tools-3.10.0-957.5.1.el7.x86_64
kernel-3.10.0-957.1.3.el7.x86_64
libcephfs2-13.2.5-0.el7.x86_64
kernel-3.10.0-862.14.4.el7.x86_64
tcmu-runner-1.4.0-106.gd17d24e.el7.x86_64



Thanks and greetings,

Kilian
