>I've got one client machine running solaris 7 that is taking quite a bit
>longer than it seems that it should to complete backups.  ...

That machine wouldn't happen to be connected via 100 Mbit Ethernet,
would it?

Solaris (among others) is notorious for not auto-negotiating the duplex
properly, which can lead to orders of magnitude slowdowns.  I've appended
the letter I sent out a while back about this.

If you can find out what the switch port is set to, it might be worth
changing it, at least temporarily, to see if that makes the backup take
off (it's dramatic if it works :-).
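
You can also check what the Solaris end thinks it negotiated.  If I
remember right, the hme driver exposes these read-only parameters to
ndd (as root; 1 means link up / 100 Mbit / full duplex, 0 means link
down / 10 Mbit / half duplex):

  ndd -get /dev/hme link_status
  ndd -get /dev/hme link_speed
  ndd -get /dev/hme link_mode

If link_mode says half duplex while the switch port is forced to full
duplex (or the other way around), that mismatch is very likely your
problem.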

Be careful trying to change this on the fly on the Solaris end with ndd.
Others have reported, shall we say, bad experiences :-).

>.michael lea

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]

===

You didn't mention if you needed the magic or not, but here it is just
in case (add to /etc/system and reboot):

  set hme:hme_adv_100fdx_cap = 1
  set hme:hme_adv_100hdx_cap = 0
  set hme:hme_adv_autoneg_cap = 0

The first line says "use full duplex", the second says "don't use half
duplex" and the third says "don't try to autonegotiate".  I assume you
flip the value in the first two lines if you use half duplex.
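
In other words, presumably something like this for 100 Mbit half duplex
(I have not actually tried it):

  set hme:hme_adv_100fdx_cap = 0
  set hme:hme_adv_100hdx_cap = 1
  set hme:hme_adv_autoneg_cap = 0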

Truth be told, I only have (and knew about) the first and third lines
of the full duplex block.  But I just saw the second line, and the
material that follows, on a mailing list recently.  I suspect the
default for these variables is zero, so I don't actually need the half
duplex setting.

If you want to test the values while the system is up (as root; note
that the ndd parameter names drop the hme_ prefix used in /etc/system):

  ndd -get /dev/hme adv_100fdx_cap
  ndd -get /dev/hme adv_100hdx_cap
  ndd -get /dev/hme adv_autoneg_cap

If you have multiple instances, select the specific one first, then do
the variables, e.g. for hme1:

  ndd -set /dev/hme instance 1
  ndd -get /dev/hme adv_100fdx_cap
  ...

I'm not sure what you do in /etc/system for multiple instances.

In theory you can also set these at run time, but that's braver than
I'd want to be unless it was dead in the water and my local network guru
was at my shoulder.
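
For the record, the run-time version would presumably look something
like this (untested by me, and as above I'd only try it at the
console):

  ndd -set /dev/hme adv_100fdx_cap 1
  ndd -set /dev/hme adv_100hdx_cap 0
  ndd -set /dev/hme adv_autoneg_cap 0

I'd expect the link to bounce while the settings change, which is part
of why doing this over the network is asking for trouble.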
