And here is the configuration of node2 (node1's is almost the same):
# virsh dumpxml node2
<domain type='xen' id='13'>
  <name>node2</name>
  <uuid>ae1223b6-af23-dcdb-60b6-f11dc86b12d8</uuid>
  <bootloader>/usr/lib/xen/bin/pygrub</bootloader>
  <os>
    <type>linux</type>
  </os>
  <memory>786432</memory>
  <vcpu>1</vcpu>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <distro name='solaris'/>
  <clock offset='localtime'/>
  <devices>
    <interface type='bridge'>
      <source bridge='e1000g0'/>
      <target dev='vif13.0'/>
      <mac address='00:16:3e:43:73:23'/>
      <script path='vif-vnic'/>
    </interface>
    <interface type='bridge'>
      <source bridge='e1000g0'/>
      <target dev='vif13.1'/>
      <mac address='00:16:3e:46:31:9b'/>
      <script path='vif-vnic'/>
    </interface>
    <interface type='bridge'>
      <source bridge='e1000g0'/>
      <target dev='vif13.2'/>
      <mac address='00:16:3e:74:2a:ba'/>
      <script path='vif-vnic'/>
    </interface>
    <disk type='block' device='disk'>
      <driver name='phy'/>
      <source dev='/dev/zvol/dsk/data/export/xvm/node2'/>
      <target dev='xvda'/>
    </disk>
    <console tty='/dev/pts/1'/>
  </devices>
</domain>
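
A quick cross-check that the three vifs actually came up as defined, and a
round-trip for editing the definition (e.g. a bridge name); just a sketch
using the same xm/virsh tooling as elsewhere in this thread, with an
illustrative /tmp path:

# xm network-list node2                (idx, MAC and backend of each vif)
# virsh dumpxml node2 > /tmp/node2.xml
# virsh define /tmp/node2.xml          (after editing the XML)
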
Btw, I CCed the xen-discuss list.
Piotr Jasiukajtis wrote:
> I think it's strange:
>
> [root@node1 ~]# snoop -v -r -d xnf1
> Using device xnf1 (promiscuous mode)
> ETHER: ----- Ether Header -----
> ETHER:
> ETHER: Packet 1 arrived at 23:01:0.56727
> ETHER: Packet size = 60 bytes
> ETHER: Destination = 0:16:3e:46:23:9b,
> ETHER: Source = 0:16:3e:46:31:9b,
> ETHER: Ethertype = 0833 (Unknown)
> ETHER:
>
> ETHER: ----- Ether Header -----
> ETHER:
> ETHER: Packet 2 arrived at 23:01:1.05863
> ETHER: Packet size = 42 bytes
> ETHER: Destination = 0:16:3e:46:31:9b,
> ETHER: Source = 0:16:3e:46:23:9b,
> ETHER: VLAN ID = 0
> ETHER: VLAN Priority = 7
> ETHER: Ethertype = 0833 (Unknown)
> ETHER:
>
> ETHER: ----- Ether Header -----
> ETHER:
> ETHER: Packet 3 arrived at 23:01:1.26870
> ETHER: Packet size = 60 bytes
> ETHER: Destination = 1:80:c2:0:0:e, (multicast)
> ETHER: Source = 0:1a:70:20:c3:f9,
> ETHER: Ethertype = 88CC (Unknown)
> ETHER:
>
>
> [root@node1 ~]# arp -an | grep 172.16
> xnf1        172.16.0.129  255.255.255.255  SPLA  00:16:3e:46:23:9b
> xnf1        172.16.0.130  255.255.255.255        00:16:3e:46:31:9b
> clprivnet0  172.16.4.1    255.255.255.255  SPLA  00:00:00:00:00:01
> xnf2        172.16.1.1    255.255.255.255  SPLA  00:16:82:74:29:ba
> xnf2        172.16.1.2    255.255.255.255        00:16:3e:74:2a:ba
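>
> What looks odd above: packet 2 carries an 802.1Q VLAN tag (VLAN ID 0,
> priority 7). The tag adds 4 bytes to a full-size Ethernet frame, so
> 1514 + 4 = 1518 bytes, which would line up with the "oversized packet
> (1518 bytes) dropped" warnings from xnf further down.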
>
>
>
>
>
> Piotr Jasiukajtis wrote:
>> Btw, I can ping node2 from node1 via the interconnects.
>> I can't log in to node2 either from the console or via sshd.
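>>
>> Since plain ping works but the transport paths still fault, watching
>> each interconnect individually may help; a minimal sketch, run from
>> node1, using the adapter addresses reported by clintr show below:
>>
>> # ping 172.16.0.130               (node2 via xnf1)
>> # ping 172.16.1.2                 (node2 via xnf2)
>> # snoop -r -d xnf1 172.16.0.130   (watch traffic on the first path)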
>>
>>
>>
>> Piotr Jasiukajtis wrote:
>>> Hi,
>>>
>>> I installed Solaris Cluster Express 12/08 on 2 paravirtualized (PV)
>>> virtual nodes (on 2 physical SXCE 104 machines).
>>>
>>> Each virtual node has 3 NICs (3 VNICs from the xVM dom0), but each
>>> physical node has only one physical NIC, connected to a single physical
>>> switch. I know about the security issues and such...
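>>>
>>> Since all three VNICs per node share that single physical NIC, dom0 is
>>> a handy vantage point for debugging; a small sketch (same snoop tooling
>>> as above, run in dom0):
>>>
>>> # snoop -r -d e1000g0
>>> (every inter-node frame crosses this one link; look for the 00:16:3e:
>>> MACs from the domain XML at the top)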
>>>
>>> There is a problem with the interconnects, so I can't create the cluster.
>>>
>>> Any idea? :)
>>>
>>>
>>>
>>> [root@node1 ~]# /usr/cluster/bin/clnode status -v
>>>
>>> === Cluster Nodes ===
>>>
>>> --- Node Status ---
>>>
>>> Node Name    Status
>>> ---------    ------
>>> node1        Online
>>> node2        Offline
>>>
>>>
>>> --- Node IPMP Group Status ---
>>>
>>> Node Name    Group Name    Status    Adapter    Status
>>> ---------    ----------    ------    -------    ------
>>> node1        sc_ipmp0      Online    xnf0       Online
>>>
>>>
>>> [root@node1 ~]# /usr/cluster/bin/clintr show
>>>
>>> === Transport Cables ===
>>>
>>> Transport Cable: node1:xnf1,switch1@1
>>> Endpoint1: node1:xnf1
>>> Endpoint2: switch1@1
>>> State: Enabled
>>>
>>> Transport Cable: node1:xnf2,switch2@1
>>> Endpoint1: node1:xnf2
>>> Endpoint2: switch2@1
>>> State: Enabled
>>>
>>> Transport Cable: node2:xnf1,switch1@2
>>> Endpoint1: node2:xnf1
>>> Endpoint2: switch1@2
>>> State: Enabled
>>>
>>> Transport Cable: node2:xnf2,switch2@2
>>> Endpoint1: node2:xnf2
>>> Endpoint2: switch2@2
>>> State: Enabled
>>>
>>>
>>> === Transport Switches ===
>>>
>>> Transport Switch: switch1
>>> State: Enabled
>>> Type: switch
>>> Port Names: 1 2
>>> Port State(1): Enabled
>>> Port State(2): Enabled
>>>
>>> Transport Switch: switch2
>>> State: Enabled
>>> Type: switch
>>> Port Names: 1 2
>>> Port State(1): Enabled
>>> Port State(2): Enabled
>>>
>>>
>>> --- Transport Adapters for node1 ---
>>>
>>> Transport Adapter: xnf1
>>> State: Enabled
>>> Transport Type: dlpi
>>> device_name: xnf
>>> device_instance: 1
>>> lazy_free: 1
>>> dlpi_heartbeat_timeout: 10000
>>> dlpi_heartbeat_quantum: 1000
>>> nw_bandwidth: 80
>>> bandwidth: 70
>>> ip_address: 172.16.0.129
>>> netmask: 255.255.255.128
>>> Port Names: 0
>>> Port State(0): Enabled
>>>
>>> Transport Adapter: xnf2
>>> State: Enabled
>>> Transport Type: dlpi
>>> device_name: xnf
>>> device_instance: 2
>>> lazy_free: 1
>>> dlpi_heartbeat_timeout: 10000
>>> dlpi_heartbeat_quantum: 1000
>>> nw_bandwidth: 80
>>> bandwidth: 70
>>> ip_address: 172.16.1.1
>>> netmask: 255.255.255.128
>>> Port Names: 0
>>> Port State(0): Enabled
>>>
>>>
>>> --- Transport Adapters for node2 ---
>>>
>>> Transport Adapter: xnf1
>>> State: Enabled
>>> Transport Type: dlpi
>>> device_name: xnf
>>> device_instance: 1
>>> lazy_free: 1
>>> dlpi_heartbeat_timeout: 10000
>>> dlpi_heartbeat_quantum: 1000
>>> nw_bandwidth: 80
>>> bandwidth: 70
>>> ip_address: 172.16.0.130
>>> netmask: 255.255.255.128
>>> Port Names: 0
>>> Port State(0): Enabled
>>>
>>> Transport Adapter: xnf2
>>> State: Enabled
>>> Transport Type: dlpi
>>> device_name: xnf
>>> device_instance: 2
>>> lazy_free: 1
>>> dlpi_heartbeat_timeout: 10000
>>> dlpi_heartbeat_quantum: 1000
>>> nw_bandwidth: 80
>>> bandwidth: 70
>>> ip_address: 172.16.1.2
>>> netmask: 255.255.255.128
>>> Port Names: 0
>>> Port State(0): Enabled
>>>
>>>
>>> [root@node1 ~]# /usr/cluster/bin/clintr status -v
>>>
>>> === Cluster Transport Paths ===
>>>
>>> Endpoint1     Endpoint2     Status
>>> ---------     ---------     ------
>>> node1:xnf2    node2:xnf2    faulted
>>> node1:xnf1    node2:xnf1    faulted
>>>
>>>
>>>
>>> Jan 10 11:22:14 node1 genunix: [ID 965873 kern.notice] NOTICE: CMM: Node
>>> node1 (nodeid = 1) with votecount = 1 added.
>>> Jan 10 11:22:14 node1 genunix: [ID 843983 kern.notice] NOTICE: CMM: Node
>>> node1: attempting to join cluster.
>>> Jan 10 11:22:14 node1 genunix: [ID 525628 kern.notice] NOTICE: CMM:
>>> Cluster has reached quorum.
>>> Jan 10 11:22:14 node1 genunix: [ID 377347 kern.notice] NOTICE: CMM: Node
>>> node1 (nodeid = 1) is up; new incarnation number = 1231582933.
>>> Jan 10 11:22:14 node1 genunix: [ID 108990 kern.notice] NOTICE: CMM:
>>> Cluster members: node1.
>>> Jan 10 11:22:14 node1 genunix: [ID 279084 kern.notice] NOTICE: CMM: node
>>> reconfiguration #1 completed.
>>> Jan 10 11:22:17 node1 genunix: [ID 499756 kern.notice] NOTICE: CMM: Node
>>> node1: joined cluster.
>>> Jan 10 11:22:17 node1 ip: [ID 856290 kern.notice] ip: joining multicasts
>>> failed (18) on clprivnet0 - will use link layer broadcasts for multicast
>>> Jan 10 11:22:28 node1 Cluster.CCR: [ID 914260 daemon.warning] Failed to
>>> retrieve global fencing status from the global name server
>>> Jan 10 11:22:28 node1 last message repeated 1 time
>>> Jan 10 11:22:48 node1 Cluster.CCR: [ID 409585 daemon.error]
>>> /usr/cluster/bin/scgdevs: Cannot register devices as HA.
>>> Jan 10 11:22:53 node1 xntpd[909]: [ID 702911 daemon.notice] xntpd
>>> 3-5.93e+sun 03/08/29 16:23:05 (1.4)
>>> Jan 10 11:22:53 node1 xntpd[909]: [ID 301315 daemon.notice] tickadj = 5,
>>> tick = 10000, tvu_maxslew = 495, est. hz = 100
>>> Jan 10 11:22:53 node1 xntpd[909]: [ID 266339 daemon.notice] using kernel
>>> phase-lock loop 0041, drift correction 0.00000
>>> Jan 10 11:22:53 node1 last message repeated 1 time
>>> Jan 10 11:23:00 node1 : [ID 386282 daemon.error] ccr_initialize failure
>>> Jan 10 11:23:04 node1 last message repeated 8 times
>>> Jan 10 11:23:04 node1 svc.startd[8]: [ID 748625 daemon.error]
>>> system/cluster/scdpm:default failed repeatedly: transitioned to
>>> maintenance (see 'svcs -xv' for details)
>>> Jan 10 11:24:01 node1 xpvd: [ID 395608 kern.info] xenbus@0, xenbus0
>>> Jan 10 11:24:01 node1 genunix: [ID 936769 kern.info] xenbus0 is
>>> /xpvd/xenbus@0
>>> Jan 10 11:28:15 node1 genunix: [ID 965873 kern.notice] NOTICE: CMM: Node
>>> node2 (nodeid = 2) with votecount = 0 added.
>>> Jan 10 11:28:15 node1 genunix: [ID 108990 kern.notice] NOTICE: CMM:
>>> Cluster members: node1.
>>> Jan 10 11:28:15 node1 genunix: [ID 279084 kern.notice] NOTICE: CMM: node
>>> reconfiguration #2 completed.
>>> Jan 10 11:28:16 node1 genunix: [ID 884114 kern.notice] NOTICE: clcomm:
>>> Adapter xnf1 constructed
>>> Jan 10 11:28:16 node1 ip: [ID 856290 kern.notice] ip: joining multicasts
>>> failed (18) on clprivnet0 - will use link layer broadcasts for multicast
>>> Jan 10 11:28:16 node1 genunix: [ID 884114 kern.notice] NOTICE: clcomm:
>>> Adapter xnf2 constructed
>>> Jan 10 11:28:25 node1 rpc_scadmd[1196]: [ID 801593 daemon.notice] stdout:
>>> Jan 10 11:28:25 node1 rpc_scadmd[1196]: [ID 801593 daemon.notice] stderr:
>>> Jan 10 11:28:26 node1 rpc_scadmd[1196]: [ID 801593 daemon.notice] stdout:
>>> Jan 10 11:28:26 node1 rpc_scadmd[1196]: [ID 801593 daemon.notice] stderr:
>>> Jan 10 11:29:16 node1 genunix: [ID 604153 kern.notice] NOTICE: clcomm:
>>> Path node1:xnf1 - node2:xnf1 errors during initiation
>>> Jan 10 11:29:16 node1 genunix: [ID 618107 kern.warning] WARNING: Path
>>> node1:xnf1 - node2:xnf1 initiation encountered errors, errno = 62.
>>> Remote node may be down or unreachable through this path.
>>> Jan 10 11:29:16 node1 genunix: [ID 604153 kern.notice] NOTICE: clcomm:
>>> Path node1:xnf2 - node2:xnf2 errors during initiation
>>> Jan 10 11:29:16 node1 genunix: [ID 618107 kern.warning] WARNING: Path
>>> node1:xnf2 - node2:xnf2 initiation encountered errors, errno = 62.
>>> Remote node may be down or unreachable through this path.
>>> Jan 10 11:30:24 node1 genunix: [ID 537175 kern.notice] NOTICE: CMM: Node
>>> node2 (nodeid: 2, incarnation #: 1231583261) has become reachable.
>>> Jan 10 11:30:24 node1 xnf: [ID 601036 kern.warning] WARNING: xnf2:
>>> oversized packet (1518 bytes) dropped
>>> Jan 10 11:30:24 node1 last message repeated 1 time
>>> Jan 10 11:30:24 node1 genunix: [ID 387288 kern.notice] NOTICE: clcomm:
>>> Path node1:xnf2 - node2:xnf2 online
>>> Jan 10 11:30:24 node1 genunix: [ID 387288 kern.notice] NOTICE: clcomm:
>>> Path node1:xnf1 - node2:xnf1 online
>>> Jan 10 11:30:28 node1 xnf: [ID 601036 kern.warning] WARNING: xnf2:
>>> oversized packet (1518 bytes) dropped
>>> Jan 10 11:30:54 node1 last message repeated 2 times
>>> Jan 10 11:30:54 node1 xnf: [ID 601036 kern.warning] WARNING: xnf1:
>>> oversized packet (1518 bytes) dropped
>>> Jan 10 11:31:08 node1 last message repeated 2 times
>>> Jan 10 11:31:28 node1 xnf: [ID 601036 kern.warning] WARNING: xnf2:
>>> oversized packet (1518 bytes) dropped
>>> Jan 10 11:32:28 node1 last message repeated 1 time
>>> Jan 10 11:33:29 node1 genunix: [ID 489438 kern.notice] NOTICE: clcomm:
>>> Path node1:xnf2 - node2:xnf2 being drained
>>> Jan 10 11:33:29 node1 genunix: [ID 387288 kern.notice] NOTICE: clcomm:
>>> Path node1:xnf2 - node2:xnf2 online
>>>
>>>
>>>
>>>
>>> [root@node1 ~]# uname -a
>>> SunOS node1 5.11 snv_101a i86pc i386 i86xpv
>>>
>>>
>>>
>>> # xm start -c node2
>>> v3.1.4-xvm chgset 'Mon Nov 24 22:48:21 2008 -0800 15909:8ac8abf844b5'
>>> SunOS Release 5.11 Version snv_101a 64-bit
>>> Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved.
>>> Use is subject to license terms.
>>> Hostname: node2
>>> Configuring devices.
>>> /usr/cluster/bin/scdidadm: Could not load DID instance list.
>>> /usr/cluster/bin/scdidadm: Cannot open
>>> /etc/cluster/ccr/global/did_instances.
>>> Booting as part of a cluster
>>> name is non-existent for this module
>>> for a list of valid names, use name '?'
>>> NOTICE: CMM: Node node1 (nodeid = 1) with votecount = 1 added.
>>> NOTICE: CMM: Node node2 (nodeid = 2) with votecount = 0 added.
>>> NOTICE: clcomm: Adapter xnf2 constructed
>>> NOTICE: clcomm: Adapter xnf1 constructed
>>> NOTICE: CMM: Node node2: attempting to join cluster.
>>> NOTICE: CMM: Node node1 (nodeid: 1, incarnation #: 1231582933) has
>>> become reachable.
>>> WARNING: xnf1: oversized packet (1518 bytes) dropped
>>> NOTICE: clcomm: Path node2:xnf1 - node1:xnf1 online
>>> NOTICE: clcomm: Path node2:xnf2 - node1:xnf2 online
>>> WARNING: xnf1: oversized packet (1518 bytes) dropped
>>> WARNING: xnf1: oversized packet (1518 bytes) dropped
>>> WARNING: xnf2: oversized packet (1518 bytes) dropped
>>> WARNING: xnf2: oversized packet (1518 bytes) dropped
>>> WARNING: xnf2: oversized packet (1518 bytes) dropped
>>>
>>>
>>> # uname -srvi
>>> SunOS 5.11 snv_104 i86xpv
>>>
>>> # dladm show-link
>>> LINK       CLASS    MTU     STATE      OVER
>>> bge0       phys     1500    unknown    --
>>> e1000g0    phys     1500    up         --
>>> vnic1      vnic     1500    unknown    e1000g0
>>> vnic2      vnic     1500    unknown    e1000g0
>>> vnic18     vnic     1500    unknown    e1000g0
>>> vnic19     vnic     1500    unknown    e1000g0
>>> vnic20     vnic     1500    unknown    e1000g0
>>>
>>>
>>>
>>
>
>
--
Regards,
Piotr Jasiukajtis | estibi | SCA OS0072
http://estseg.blogspot.com