> The '!up' was used to imply that the interface should not be up.  This
 > is based on the observation that, if it is up, in.mpathd complains
 > about the test addresses (0.0.0.0) not being unique when I try to
 > configure probe-based failure detection and allow the interfaces to
 > be up.

As Jim already pointed out, this is needed because you have created a very
bizarre configuration.  In general, you should just put the data address
on the physical interface -- and you should never have an address marked
'-failover' but not 'up'.
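
For the "native" group, that would look roughly like the following (a
sketch only -- the 10.0.0.11/10.0.0.12 test addresses are made up, so
substitute whatever test subnet you actually intend to probe over):

  # ifconfig bge0 plumb 10.0.0.1 netmask + broadcast + group native up
  # ifconfig bge0 addif 10.0.0.11 deprecated -failover \
        netmask + broadcast + up
  # ifconfig bge1 plumb 10.0.0.12 deprecated -failover \
        netmask + broadcast + group native up

That is, the data address lives on bge0 itself, and every test address
is marked both '-failover' and 'up' so in.mpathd can probe through it.
The same pattern would apply to each of your VLAN groups.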

 > bge0 0.0.0.0 RUNNING,NOFAILOVER group native
 > bge0:1 10.0.0.1 UP,RUNNING
 > bge1 0.0.0.0 RUNNING,NOFAILOVER group native
 > bge1000 0.0.0.0 RUNNING,NOFAILOVER group vlan1
 > bge1000:1 10.0.1.1 UP,RUNNING zone dev1
 > bge1001 0.0.0.0 RUNNING,NOFAILOVER group vlan1
 > bge1001:1 10.0.1.2 UP,RUNNING zone dev2
 > bge2000 0.0.0.0 RUNNING,NOFAILOVER group vlan2
 > bge2000:1 10.0.2.1 UP,RUNNING zone prod1
 > bge2001 0.0.0.0 RUNNING,NOFAILOVER group vlan2
 > bge2001:1 10.0.2.2 UP,RUNNING zone prod2
 >
 > [ ... ]
 >
 > Now, if I pull bge0, everything fails over to bge1 as expected.
 > However, when I reconnect the cable to bge0, 10.0.0.1 is on bge0 (not
 > bge0:1), 10.0.1.1 is on bge1000 (not bge1000:1), and 10.0.2.1 is on
 > bge2000 (not bge2000:1).

This is surely due to those weird addresses assigned to bge0, bge1000, and
bge2000.  I'm not sure it's a bug.

 > Then, if I pull bge1, I notice that the addresses on bge1:1,
 > bge1001:1, and bge2001:1 do not fail over properly.  The interfaces
 > show that they are FAILED and the RUNNING flag is cleared, but the
 > addresses stay where they started.  If I plug the cable for bge1 back
 > in, the FAILED flag goes away and RUNNING is set.  If I then use
 > "if_mpadm -d bge2001", it says that it cannot fail the address over
 > because there are no more interfaces in the group.

Could you provide the complete ifconfig output once you get to this state?
It sounds like bge0 has not properly recovered.
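In particular, 'ifconfig -a' run in the global zone right after you
reconnect the bge0 cable (and again after you then pull bge1) would
show whether the FAILED/RUNNING flags and the data addresses are where
in.mpathd thinks they should be.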

-- 
meem