On 22.06.2018 at 17:38, Nir Soffer wrote:
On Fri, Jun 22, 2018 at 6:22 PM Bernhard Dick <bernh...@bdick.de> wrote:
    I have a problem creating an iSCSI storage domain. My hosts are running
    the current oVirt 4.2 engine-ng

What is engine-ng?
Sorry, I mixed it up. It is oVirt node-ng.


    version. I can detect and log in to the
    iSCSI targets, but I cannot see any LUNs (on the LUNs > Targets page).
    That happens with our storage and with a Linux-based iSCSI target which
    I created for testing purposes.


Linux-based iSCSI targets work fine; we use them a lot in our testing
environments.
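
For reference, a throwaway test target can be set up on a plain Linux box
with targetcli along these lines (a sketch; the IQNs, backing file and
initiator name are made-up placeholders):

  targetcli /backstores/fileio create test_lun /var/tmp/test_lun.img 10G
  targetcli /iscsi create iqn.2018-06.org.example:test
  targetcli /iscsi/iqn.2018-06.org.example:test/tpg1/luns create /backstores/fileio/test_lun
  targetcli /iscsi/iqn.2018-06.org.example:test/tpg1/acls create iqn.1994-05.com.redhat:your-initiator
  targetcli saveconfig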

Can you share the output of these commands on the host connected
to the storage server?

lsblk
NAME                                                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                   8:0    0   64G  0 disk
  sda1                                                8:1    0    1G  0 part /boot
  sda2                                                8:2    0   63G  0 part
    onn-pool00_tmeta                                253:0    0    1G  0 lvm
      onn-pool00-tpool                              253:2    0   44G  0 lvm
        onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1 253:3   0   17G  0 lvm  /
        onn-pool00                                  253:12   0   44G  0 lvm
        onn-var_log_audit                           253:13   0    2G  0 lvm  /var/log/audit
        onn-var_log                                 253:14   0    8G  0 lvm  /var/log
        onn-var                                     253:15   0   15G  0 lvm  /var
        onn-tmp                                     253:16   0    1G  0 lvm  /tmp
        onn-home                                    253:17   0    1G  0 lvm  /home
        onn-root                                    253:18   0   17G  0 lvm
        onn-ovirt--node--ng--4.2.2--0.20180430.0+1  253:19   0   17G  0 lvm
        onn-var_crash                               253:20   0   10G  0 lvm
    onn-pool00_tdata                                253:1    0   44G  0 lvm
      onn-pool00-tpool                              253:2    0   44G  0 lvm
        onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1 253:3   0   17G  0 lvm  /
        onn-pool00                                  253:12   0   44G  0 lvm
        onn-var_log_audit                           253:13   0    2G  0 lvm  /var/log/audit
        onn-var_log                                 253:14   0    8G  0 lvm  /var/log
        onn-var                                     253:15   0   15G  0 lvm  /var
        onn-tmp                                     253:16   0    1G  0 lvm  /tmp
        onn-home                                    253:17   0    1G  0 lvm  /home
        onn-root                                    253:18   0   17G  0 lvm
        onn-ovirt--node--ng--4.2.2--0.20180430.0+1  253:19   0   17G  0 lvm
        onn-var_crash                               253:20   0   10G  0 lvm
    onn-swap                                        253:4    0  6.4G  0 lvm  [SWAP]
sdb                                                   8:16   0  256G  0 disk
  gluster_vg_sdb-gluster_thinpool_sdb_tmeta         253:5    0    1G  0 lvm
    gluster_vg_sdb-gluster_thinpool_sdb-tpool       253:7    0  129G  0 lvm
      gluster_vg_sdb-gluster_thinpool_sdb           253:8    0  129G  0 lvm
      gluster_vg_sdb-gluster_lv_data                253:10   0   64G  0 lvm  /gluster_bricks/data
      gluster_vg_sdb-gluster_lv_vmstore             253:11   0   64G  0 lvm  /gluster_bricks/vmstore
  gluster_vg_sdb-gluster_thinpool_sdb_tdata         253:6    0  129G  0 lvm
    gluster_vg_sdb-gluster_thinpool_sdb-tpool       253:7    0  129G  0 lvm
      gluster_vg_sdb-gluster_thinpool_sdb           253:8    0  129G  0 lvm
      gluster_vg_sdb-gluster_lv_data                253:10   0   64G  0 lvm  /gluster_bricks/data
      gluster_vg_sdb-gluster_lv_vmstore             253:11   0   64G  0 lvm  /gluster_bricks/vmstore
  gluster_vg_sdb-gluster_lv_engine                  253:9    0  100G  0 lvm  /gluster_bricks/engine
sdc                                                   8:32   0  500G  0 disk
sdd                                                   8:48   0    1G  0 disk
sr0                                                  11:0    1  1.1G  0 rom

Here sdc is from our storage and sdd is from the Linux-based target.
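
If you want to double-check that mapping, something like this should show
which target each sdX belongs to, plus the WWIDs multipath would use
(a sketch; the scsi_id path is the usual EL7 location):

  iscsiadm -m session -P 3 | grep -E 'Target:|Attached scsi disk'
  /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdc
  /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdd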

multipath -ll
No Output
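
Empty output from multipath -ll usually means multipath has no usable paths
at all, for example because every device is blacklisted. The configuration
multipathd is actually running with can be dumped and checked, e.g.:

  multipathd show config | grep -A 3 blacklist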
cat /etc/multipath.conf
# VDSM REVISION 1.3
# VDSM PRIVATE
# VDSM REVISION 1.5

# This file is managed by vdsm.
# [...]
defaults {
    # [...]
    polling_interval            5
    # [...]
    no_path_retry               4
    # [...]
    user_friendly_names         no
    # [...]
    flush_on_last_del           yes
    # [...]
    fast_io_fail_tmo            5
    # [...]
    dev_loss_tmo                30
    # [...]
    max_fds                     4096
}
# Remove devices entries when overrides section is available.
devices {
    device {
        # [...]
        all_devs                yes
        no_path_retry           4
    }
}
# [...]
# inserted by blacklist_all_disks.sh

blacklist {
        devnode "*"
}
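
That blacklist stanza (the one inserted by blacklist_all_disks.sh) matches
every device node, so multipath never creates maps for the iSCSI LUNs and
vdsm consequently has nothing to report. If the intent was only to keep the
local and gluster disks out of multipath, a gentler variant would be to
blacklist those disks by WWID instead of blacklisting everything, roughly
like this (the WWIDs are placeholders, take the real ones from scsi_id):

blacklist {
    wwid "<wwid-of-sda>"
    wwid "<wwid-of-sdb>"
}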

vdsm-client Host getDeviceList
[]
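
Assuming the blacklist is indeed what hides the LUNs, a rough sequence to
verify after adjusting /etc/multipath.conf would be:

  multipathd reconfigure
  multipath -ll
  vdsm-client Host getDeviceList

If the 500G and 1G LUNs then show up in multipath -ll, they should also
appear in getDeviceList and on the LUNs > Targets page.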


Nir

    When I log on to the oVirt hosts I see that they are connected to the
    target LUNs (dmesg reports that iSCSI devices are found and they get
    assigned to devices in /dev/sdX). Writing to and reading from the
    devices (also across hosts) works. Do you have any advice on how to
    troubleshoot this?

        Regards
          Bernhard



--
Dipl.-Inf. Bernhard Dick
Auf dem Anger 24
DE-46485 Wesel
www.BernhardDick.de

jabber: bernh...@jabber.bdick.de

Tel : +49.2812068620
Mobil : +49.1747607927
FAX : +49.2812068621
USt-IdNr.: DE274728845
