[Sorry Henrik, once again, because of a bogus cc]
Henrik Olsen wrote:
>
> On Tue, 16 Mar 1999, Donald Becker wrote:
> > Subject: Re: 3c905 full duplex with netboot prom?
> >
> > On Tue, 16 Mar 1999, Hans-Peter Jansen wrote:
> >
> > > I'm running a couple of diskless ws with the 3c905(!b)
> > > card and a netboot bootprom at a 24 port 10/100 Intel
> > > 510T switch. Because the packet driver forces the card
> > > to half duplex (according to its messages), I had
> > > to switch off NWAY auto-negotiation on the cards to
> > > prevent netboot failures (50% chance).
> >
> > > Is there a way
> > > to reestablish full duplex mode again? (Or does anybody
> > > know of a FD enabled packet driver for this card?)
> <Don Becker's technical answer snipped>
> <Soapbox mode>
> With a switch you really, really want to use half duplex anyway, since
> full duplex has no congestion control and will grind to uselessness once
> your load increases.
>
> To see why, consider that the switch has to buffer the packets, and simple
> queue theory will tell you that those buffers will run out. When they do,
> the switch can force collisions (half duplex) or silently drop the
> packets (full duplex).
>
> Retransmissions after collisions are handled by the cards and are very
> fast compared to the end-to-end retransmissions by the protocol stacks,
> which make for a very much slower network.
> </Soapbox mode>
>
> Full Duplex is for marketing droids and management, not for tech staff.
Hi Henrik,
do you mean that the whole idea of store & forward is bogus?
Would I achieve any better result with something like a hub in my
environment? Yes, you're right, buffer space is limited, but I thought
it depends on how the bandwidth is organized. Most of the traffic here
happens between the clients and the server.
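
Just to get a feel for your "buffers will run out" point, I hacked up a
toy single-queue model. It is only a sketch: the buffer size, arrival and
drain probabilities below are pure guesses and say nothing about the real
510T. All it shows is that once the offered load exceeds the drain rate,
a finite buffer is full for a large fraction of arrivals, so the switch
must constantly drop (full duplex) or force collisions (half duplex); it
does not model the cost difference between NIC retries and TCP
retransmits.

  # Toy model of one congested switch output port with a finite buffer.
  # All numbers are invented; nothing was measured on the 510T.
  import random

  BUFFER_SLOTS = 32      # finite per-port packet buffer
  ARRIVAL_PROB = 0.60    # chance a frame arrives for this port per tick
  DRAIN_PROB   = 0.45    # chance the port transmits one queued frame per tick
  TICKS        = 100000

  random.seed(1)
  queue = 0
  arrived = delivered = buffer_full = 0

  for _ in range(TICKS):
      if random.random() < ARRIVAL_PROB:
          arrived += 1
          if queue < BUFFER_SLOTS:
              queue += 1
          else:
              buffer_full += 1   # FD: silent drop / HD: forced collision
      if queue and random.random() < DRAIN_PROB:
          queue -= 1
          delivered += 1

  print("offered  :", arrived)
  print("delivered:", delivered)
  print("buffer full on %d arrivals (%.1f%%)"
        % (buffer_full, 100.0 * buffer_full / arrived))

With those made-up numbers roughly a quarter of the offered frames should
find the buffer full, so I see your point about who has to handle the
loss.
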
My first idea was to use multiple 100BT cards in the server. This
failed for no visible reason: when I started ifenslave, the network
stopped working altogether, although I tried to check the semantics and
consistency of the bonding patch. Since three PCI cards in a motherboard
with only five PCI slots are a bit of a nuisance, I purchased a GNIC II
and a matching module for the Intel 510T switch. Don's hamachi driver
v0.07 led to serious/mysterious crashes on my 2.0.36 SMP server. Driver
v0.14 from Eric Kasten <[EMAIL PROTECTED]> seems stable, but snafus
things like rlogins.
[See the <PE GNic II with Intel Switch 510T> thread for a better
description.]
Now I can easily switch between a 3c905 and the GNIC (although it
takes a 10 minute walk).
How would you configure the 24-port 10/100 switch? All ports half
duplex, cut-through and flow control disabled? And how should the
gigabit port be configured?
Currently, I keep everything pretty much at the factory defaults.
Contrary to my previous statement in this thread, all 100BT ports seem
to come up in FD by default. Even 10BT<->10B2 devices are detected
correctly. I ping flooded ("-fs 1500") my server from the clients and
watched the switch buffer usage; it stayed pretty low. When using the
GNIC I got up to 38 MB/s on the server port.
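
In case the ping flood is misleading (the echo replies are small, and
ICMP is not what the clients actually do), here is a rough sketch of how
I could measure sustained TCP throughput between a client and the server
instead. The port number, chunk size and total transfer size are
arbitrary placeholders, not anything from my actual setup.

  # Rough TCP throughput check between a client and the server; a sketch
  # only. Port, chunk size and total amount below are arbitrary.
  import socket, sys, time

  PORT  = 5001                 # arbitrary unprivileged port
  CHUNK = 64 * 1024            # 64 KB per send/recv
  TOTAL = 256 * 1024 * 1024    # push 256 MB per run

  def receive():
      # run this on the server; prints MB/s as seen by the receiver
      srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      srv.bind(("", PORT))
      srv.listen(1)
      conn, peer = srv.accept()
      got, start = 0, time.time()
      while True:
          data = conn.recv(CHUNK)
          if not data:
              break
          got += len(data)
      secs = time.time() - start
      print("%s: %.1f MB in %.1f s = %.1f MB/s"
            % (peer[0], got / 1e6, secs, got / 1e6 / secs))
      conn.close()
      srv.close()

  def send(server):
      # run this on a client, pointing at the server's address
      sock = socket.create_connection((server, PORT))
      buf = b"\0" * CHUNK
      sent = 0
      while sent < TOTAL:
          sock.sendall(buf)
          sent += CHUNK
      sock.close()

  if __name__ == "__main__":
      if sys.argv[1] == "receive":
          receive()
      else:
          send(sys.argv[1])

(Start it with "receive" on the server and with the server's address on
each client; the script name is up to you.)
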
Hoping that the hamachi driver gets close to production quality in the
near future, what is IYO the "best" switch configuration for this
setup?
Hans-Peter
> --
> Henrik Olsen, Dawn Solutions I/S
> URL=http://www.iaeste.dk/~henrik/
> Get the rest there.