There are no deliberate buffering delays in the Ethernet layer.  There is merely 
one thread which receives (and filters) packets and queues the potentially 
interesting ones.  That queue gets drained as fast as the simulated system 
happens to read the available data.  This might affect some worst-case 
situations, but overruns due to speed mismatches and the limited capacity of 
the old physical hardware are much more likely to blame.  Like I said, I've got 
multiple simulated LAVC nodes that can all talk just fine without the errors 
Hunter is seeing; if bufferbloat were a factor, it ought to be worse there…
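In outline, the receive path looks something like the sketch below.  This is 
purely illustrative C (pthread-based) with made-up names, not the actual simh 
sim_ether identifiers; it just shows the structure: one reader thread filters 
and enqueues host frames, and the queue is only ever drained from the simulated 
NIC's read side, with no pacing or added delay.

    #include <pthread.h>
    #include <stdlib.h>

    typedef struct item {
        struct item *next;
        void        *pkt;                     /* a received Ethernet frame */
    } ITEM;

    typedef struct {
        pthread_mutex_t lock;
        ITEM           *head, *tail;
    } QUEUE;

    static QUEUE read_queue = { PTHREAD_MUTEX_INITIALIZER, NULL, NULL };

    /* Stand-ins for the host-side receive (pcap) and the address filter. */
    static void *host_receive (void)            { return NULL; }
    static int   frame_is_interesting (void *p) { return p != NULL; }

    static void queue_append (QUEUE *q, void *pkt)
    {
        ITEM *i = malloc (sizeof *i);
        i->pkt = pkt;
        i->next = NULL;
        pthread_mutex_lock (&q->lock);
        if (q->tail) q->tail->next = i; else q->head = i;
        q->tail = i;
        pthread_mutex_unlock (&q->lock);
    }

    /* Called from the simulated NIC whenever the simulated system reads:
       returns the next queued frame, or NULL if nothing is pending. */
    void *queue_remove (QUEUE *q)
    {
        void *pkt = NULL;
        pthread_mutex_lock (&q->lock);
        if (q->head) {
            ITEM *i = q->head;
            q->head = i->next;
            if (!q->head) q->tail = NULL;
            pkt = i->pkt;
            free (i);
        }
        pthread_mutex_unlock (&q->lock);
        return pkt;
    }

    /* The single receive thread: get a frame, filter it, queue it. */
    void *reader_thread (void *arg)
    {
        (void)arg;
        for (;;) {
            void *pkt = host_receive ();      /* would block on the host NIC */
            if (frame_is_interesting (pkt))
                queue_append (&read_queue, pkt);
            else
                free (pkt);                   /* filtered out; free(NULL) is a no-op */
        }
        return NULL;
    }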


-          Mark

From: Simh [mailto:[email protected]] On Behalf Of Warren Young
Sent: Wednesday, July 18, 2018 2:33 PM
To: [email protected]
Subject: Re: [Simh] Cluster communications errors

On Wed, Jul 18, 2018 at 12:21 PM Mark Pizzolato 
<[email protected]> wrote:

The simh Ethernet layer has dramatically more internal packet buffering (maybe 
50 X) than anything real DEC hardware ever had.  This might account for the 
relatively smooth behavior I’m seeing.

More buffering can also mean more delay in the feedback loop that controls the 
underlying protocols, leading to *worse* performance as buffer space goes up.
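To put a rough (purely illustrative) number on it: 50 full-size frames parked 
ahead of yours on a 10 Mb/s segment works out to

    50 frames x 1500 bytes x 8 bits/byte = 600,000 bits
    600,000 bits / 10 Mb/s               = 60 ms of queuing delay

and the sender doesn't learn that anything is wrong until that whole backlog 
has drained.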

This is called Buffer Bloat in the TCP sphere:

    https://www.bufferbloat.net/

Perhaps the low-level protocols involved in VAX clustering have the same issue? 
They may be expecting to get some kind of feedback response, which is getting 
delayed through the buffering, which causes the real VAXen to kick the fake one 
out, thinking it's gone MIA.
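As a purely hypothetical illustration (the timeout, timings, and delay below 
are made-up values, not the real NISCA/PEDRIVER parameters): a keepalive scheme 
only sees when a hello *arrives*, so a hello that was sent in time but sat in a 
buffer can push the observed gap past the listen timeout and get a healthy node 
declared dead.

    #include <stdio.h>

    /* Hypothetical value -- chosen to make the mechanism visible, not taken
       from the actual VMS cluster protocol. */
    #define LISTEN_TIMEOUT_MS  8000.0   /* declare the peer dead after this gap */

    /* The receiver can only see arrival times; buffering in the path delays
       the arrival and widens the observed gap. */
    static int declared_dead (double prev_arrival_ms, double send_ms, double queue_delay_ms)
    {
        double arrival_ms = send_ms + queue_delay_ms;
        return (arrival_ms - prev_arrival_ms) > LISTEN_TIMEOUT_MS;
    }

    int main (void)
    {
        /* Previous hello arrived at t=0.  One sent at t=3000 with no extra
           delay keeps the node in the cluster... */
        printf ("sent at 3 s, no delay       : %s\n",
                declared_dead (0.0, 3000.0, 0.0) ? "kicked out" : "ok");

        /* ...but if a couple of hellos are dropped (overruns) and the
           surviving one, sent at t=7500, then sits 600 ms in a deep queue,
           the observed gap crosses the timeout. */
        printf ("sent at 7.5 s, 600 ms queued: %s\n",
                declared_dead (0.0, 7500.0, 600.0) ? "kicked out" : "ok");
        return 0;
    }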
_______________________________________________
Simh mailing list
[email protected]
http://mailman.trailing-edge.com/mailman/listinfo/simh
