Hi,

sorry for the delay. I reinstalled everything, configured the networks, attached the iSCSI storage via two interfaces and finally created the iSCSI bond:

[root@ovh01 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         hp5406-1-srv.mo 0.0.0.0         UG    0      0        0 ovirtmgmt
10.0.24.0       0.0.0.0         255.255.255.0   U     0      0        0 ovirtmgmt
10.0.131.0      0.0.0.0         255.255.255.0   U     0      0        0 enp9s0f0
10.0.132.0      0.0.0.0         255.255.255.0   U     0      0        0 enp9s0f1
link-local      0.0.0.0         255.255.0.0     U     1005   0        0 enp9s0f0
link-local      0.0.0.0         255.255.0.0     U     1006   0        0 enp9s0f1
link-local      0.0.0.0         255.255.0.0     U     1008   0        0 ovirtmgmt
link-local      0.0.0.0         255.255.0.0     U     1015   0        0 bond0
link-local      0.0.0.0         255.255.0.0     U     1017   0        0 ADMIN
link-local      0.0.0.0         255.255.0.0     U     1021   0        0 SRV
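
The two iSCSI subnets only have their link-scope routes here. To double-check that each portal is actually reached through the intended interface, something like this can be used (10.0.131.100 and 10.0.132.100 are just placeholders for the real portal addresses):

[root@ovh01 ~]# ip route get 10.0.131.100   # should go out via enp9s0f0
[root@ovh01 ~]# ip route get 10.0.132.100   # should go out via enp9s0f1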

and:

[root@ovh01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp13s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP qlen 1000
    link/ether e0:3f:49:6d:68:c4 brd ff:ff:ff:ff:ff:ff
3: enp8s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
4: enp8s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
5: enp9s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 90:e2:ba:11:21:d4 brd ff:ff:ff:ff:ff:ff
    inet 10.0.131.181/24 brd 10.0.131.255 scope global enp9s0f0
       valid_lft forever preferred_lft forever
6: enp9s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 90:e2:ba:11:21:d5 brd ff:ff:ff:ff:ff:ff
    inet 10.0.132.181/24 brd 10.0.132.255 scope global enp9s0f1
       valid_lft forever preferred_lft forever
7: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 26:b2:4e:5e:f0:60 brd ff:ff:ff:ff:ff:ff
8: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether e0:3f:49:6d:68:c4 brd ff:ff:ff:ff:ff:ff
    inet 10.0.24.181/24 brd 10.0.24.255 scope global ovirtmgmt
       valid_lft forever preferred_lft forever
14: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UNKNOWN qlen 500
    link/ether fe:16:3e:79:25:86 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe79:2586/64 scope link
       valid_lft forever preferred_lft forever
15: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
16: bond0.32@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ADMIN state UP
    link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
17: ADMIN: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
20: bond0.24@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master SRV state UP
    link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
21: SRV: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
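
For completeness, whether both iSCSI sessions really end up as two paths to the same LUN can be checked like this (the output naturally depends on the SAN, so this is just a sketch):

[root@ovh01 ~]# iscsiadm -m session -P 1   # lists each session with the iface it was created through
[root@ovh01 ~]# multipath -ll              # both sessions should show up as paths of one multipath device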

As soon as an iSCSI bond is configured, the host keeps toggling all storage domains between active and inactive.
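
If it helps, I can also capture what vdsm logs on the host while this happens; something along these lines should show the storage connection errors (the exact messages differ between vdsm versions, so the grep pattern is only an approximation):

[root@ovh01 ~]# tail -f /var/log/vdsm/vdsm.log | grep -iE 'connectStorageServer|repoStats|Traceback'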

Thank you for your patience.

cu,
Uwe


On 18.08.2016 at 11:10, Elad Ben Aharon wrote:
I don't think it's necessary.
Please provide the host's routing table and interfaces list ('ip a' or
ifconfig) while it's configured with the bond.

Thanks

On Tue, Aug 16, 2016 at 4:39 PM, Uwe Laverenz <u...@laverenz.de> wrote:

    Hi Elad,

    On 16.08.2016 at 10:52, Elad Ben Aharon wrote:

        Please be sure that ovirtmgmt is not part of the iSCSI bond.


    Yes, I made sure it is not part of the bond.

        There does seem to be a conflict between 'default' and enp9s0f0/enp9s0f1.
        Try to put the host in maintenance and then delete the iSCSI nodes using
        'iscsiadm -m node -o delete'. Then activate the host.


    I tried that and managed to get the iSCSI interface list clean; no
    "default" entry anymore. But that didn't solve the problem of the host
    becoming "inactive". Not even the NFS domains would come up.

    As soon as I remove the iSCSI bond, the host becomes responsive
    again and I can activate all storage domains. Removing the bond also
    brings the duplicated "Iface Name" back (but this time it causes no
    problems).

    ...

    I wonder if there is a basic misunderstanding on my side: wouldn't
    all targets have to be reachable from all interfaces that are
    configured into the bond for it to work?

    But that would mean either two interfaces in the same network or
    routing between the iSCSI networks.

    Thanks,

    Uwe
    _______________________________________________
    Users mailing list
    Users@ovirt.org
    http://lists.ovirt.org/mailman/listinfo/users


_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
