>  I have a DP server machine with two e1000 interfaces on the motherboard.
 > After installing snv_83, snv_84, or snv_85, e1000g1 works fine, but
 > e1000g0 cannot be used. Following is the output of `ifconfig -a`:
 > lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232
 > index 1
 >         inet 127.0.0.1 netmask ff000000
 > e1000g1: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500
 > index 2
 >         inet 10.0.1.3 netmask ffffff00 broadcast 10.0.1.255
 >         ether 0:30:48:35:1d:41
 > lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252
 > index 1
 >         inet6 ::1/128
 > 
 > I have tried to plumb e1000g0 manually, but it failed. Following is the
 > output:
 > intel5# dladm show-dev
 > LINK            STATE  SPEED    DUPLEX
 > e1000g0         unknown 0Mb     half
 > e1000g1         up     1000Mb   full
 > intel5# ifconfig e1000g0 plumb
 > ifconfig: cannot open link "e1000g0": No such device or address
 > 
 > I have installed builds from snv_77 to snv_82 on the same machine before,
 > and this issue never happened. Has anyone met a similar issue, and how
 > can it be solved?

One of the side effects of the Clearview UV putback in build 83 is that
FMA-detected faults are now enforced.  I suspect running "fmadm faulty"
will show that e1000g0 has been detected as faulty; run "fmadm repair" to
clear the fault.  (e1000g devices seem to have an FMA-related issue that
leads to spurious faults being reported; there's a CR on the issue, but I
can't get to our bug database right now.)
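For reference, the check-and-repair sequence looks roughly like this.  This
is a sketch, not verbatim output from the affected machine; the fault UUID
argument is a placeholder you'd take from the "fmadm faulty" listing on your
own system:

```shell
# Show resources FMA currently considers faulty; if a fault is what is
# blocking e1000g0, an entry affecting that device should appear here.
fmadm faulty

# Clear the fault record, passing the UUID printed by "fmadm faulty"
# (the <fault-UUID> below is a placeholder, not real output).
fmadm repair <fault-UUID>

# Once the fault is cleared, plumbing the link should succeed again.
ifconfig e1000g0 plumb
```

"fmadm repair" marks the resource as repaired in the fault manager's state,
which lifts the enforcement that was preventing the device from being opened.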

--
meem
_______________________________________________
networking-discuss mailing list
[email protected]
