Priscilla,

Yeah, this is evolving beyond a simple server-switch-host discussion.  I
think, now that we're adding routers to the mix, it's important to note
that things are handled quite differently depending on what layer we're
talking about.  I deal with a lot of telemetry applications.  Telemetry
is generally an "on all the time," sustained data flow and is often
quite bandwidth intensive.  I've seen plenty of instances, like your
example, where a WAN connection became a bottleneck.  If the telemetry
is encapsulated in UDP, then there are simply "holes" in the data, as my
users are so fond of reporting.  If, for some reason, it's TCP, there
are still holes, but the behavior and the symptoms are a bit different
(and actually often worse - think of how TCP treats all packet loss as a
sign of congestion and how it tries to deal with that).  In routers,
buffer space exists in a bunch of places, depending on the platform.
Often there is some kind of queuing mechanism in place (and maybe even a
distributed one).  Certainly there are interface buffers, the size of
which often just depends on how much memory you purchased up front.  But
in a more "normal" situation (and one more relevant to the list), I
suspect there are traffic bursts that all of these queues and interface
buffers help to smooth out and, in some cases, prevent packet loss in
the process.
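
Just to illustrate what my users mean by "holes" - here's a trivial
sketch (Python, with a made-up application-level sequence number, so
purely illustrative) of what the receive side of a UDP telemetry feed
sees.  There's nothing below the application to recover the missing
datagrams, so all you can do is report the gaps:

# Illustrative only: assumes each telemetry datagram carries a
# monotonically increasing sequence number in its application header.
def find_holes(received_seqs):
    """Return (first_missing, last_missing) ranges for gaps in the stream."""
    holes = []
    expected = None
    for seq in sorted(set(received_seqs)):
        if expected is not None and seq > expected:
            holes.append((expected, seq - 1))  # datagrams that never arrived
        expected = seq + 1
    return holes

# Sample capture: datagrams 5-9 and 13 were dropped somewhere upstream.
print(find_holes([1, 2, 3, 4, 10, 11, 12, 14, 15]))
# -> [(5, 9), (13, 13)]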

Back to the switch scenario.  I think that's a bit different.  You don't
see WRED in an L2 switch.  There's presumably some fixed number of bytes
of buffering allocated per interface, and once it's gone, it's gone?  Or
maybe there's a more dynamic allocation scheme that follows traffic
patterns?
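
Just to put rough numbers on the speed-mismatch case (the per-port
figure below is a pure assumption, not from any particular switch): if
an output port has a fixed 128 KB of buffering and traffic arrives from
a 100 Mbps source headed for a 10 Mbps port, the queue grows at the
difference of the two rates, and you can work out how long a sustained
burst it can absorb before the switch has no choice but to drop:

# Back-of-the-envelope only; the 128 KB per-port buffer is assumed.
in_rate_bps = 100_000_000     # offered load from the 100 Mbps side
out_rate_bps = 10_000_000     # drain rate of the 10 Mbps port
buffer_bits = 128 * 1024 * 8  # hypothetical fixed per-port output buffer

fill_rate_bps = in_rate_bps - out_rate_bps   # net rate the queue grows
seconds_until_drop = buffer_bits / fill_rate_bps

print(f"queue grows at {fill_rate_bps / 1e6:.0f} Mbps, "
      f"buffer overruns in ~{seconds_until_drop * 1000:.1f} ms")
# -> queue grows at 90 Mbps, buffer overruns in ~11.7 ms

So with numbers anything like those, a fixed buffer only buys you on the
order of ten milliseconds of sustained mismatch - after that, something
above L2 has to give.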

NFS was offered as an example.  I don't have the capability to capture that
and post it here, but it sounded as if the poster was quite familiar with
the outcome.  Just as was discussed:  meltdown.

I guess at some layer you have flow control or you run the risk of finding
the bit bucket - particularly where speed mismatches exist.
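
To connect that back to the TCP telemetry symptoms I mentioned above,
here's a very rough sketch of the multiplicative-decrease half of TCP
congestion control (grossly simplified - real stacks do slow start, fast
retransmit, ssthresh, and so on).  Every drop at the choke point is read
as congestion and the window gets cut, which is why a TCP'd telemetry
feed tends to collapse in throughput rather than just showing holes:

# Simplified AIMD model: window grows by one segment per RTT, and is
# halved whenever a loss is seen.  Not a faithful TCP implementation,
# just the shape of the behavior.
cwnd = 10.0  # congestion window, in segments
for rtt, loss_seen in enumerate([False, False, True, False, True, True, False], 1):
    if loss_seen:
        cwnd = max(1.0, cwnd / 2)  # loss treated as a congestion signal
    else:
        cwnd += 1.0                # additive increase
    print(f"RTT {rtt}: cwnd ~ {cwnd:.1f} segments")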

I wasn't saying this would never happen.  I was saying that, with a
"typical" enterprise network, it may not be easy to test the "what if"
scenario.  Simple print and file transfer jobs seem to all have TCP in the
mix and, as you pointed out, upper-layer flow control mechanisms as well.

Priscilla Oppenheimer wrote:
> 
> I'm glad we're discussing this. I mentioned that CCIE used to
> have questions like this, but it occurred to me that CCNA
> expects you to understand flow control too! This is an
> important conversation.
> 
> In your printing example, Scott, you've got flow control
> running at lots of different layers! Indeed that can be
> "chatty." NetBIOS can do flow control. TCP can, of course. I
> don't know if RPC does or if SPOOLSS does. SMB doesn't have any
> flow control built into the protocol, but since it deals with
> transferring files, there's going to be some natural flow
> control as it reaches out to the hard drive to get blocks of
> data from the files.
> 
> Back to TCP. What triggers TCP flow control? One thing could be
> dropped packets at the switch in the example.
> 
> TCP does do slow start nowadays, though, to help it avoid
> sending so fast that internetworking devices in the middle drop
> packets. TCP should adjust to the mismatch situation of the
> server being on 100 Mbps and the client being on 10 Mbps.
> 
> Someone else mentioned IP Source Quench. That can no longer be
> used by routers (per RFC), though end hosts could use it, but
> they generally don't (except old versions of Mac OS. You gotta
> love Apple! :-))
> 
> However, TCP isn't the only protocol out there! There are lots
> of protocols (and entire protocol stacks) that don't do flow
> control.
> 
> That doesn't mean that Ethernet needs to do it with Pause
> frames, but something has to give.
> 
> When there's a speed mismatch, a switch or router simply has to
> drop packets at some point. It can't have infinite buffers, and
> you wouldn't want it to have infinite buffers because then
> stuff could lie around in the buffers for too long.
> 
> If you don't like my switch example, how about a router that
> connects 100Mbps Ethernet and 1.5 Mbps T1 with heavy traffic
> flowing across? At what point will it start dropping packets?
> 
> Saying that this won't happen in the real world is not an
> acceptable answer. :-) It could happen and router and switch
> vendors architect for the situation. I don't know how many
> buffers a router or switch might have, though. That's a good
> question. It wouldn't have all buffers of the same size, like
> in my example, but I didn't want it to be too difficult! But
> having 1000 buffers as in my example might be realistic? I
> could say 10,000 buffers, but there's still a problem at some
> point. If there were really that many packets queued up in a
> buffer, upper layer protocols might start freaking out if the
> queue doesn't get emptied quickly.
> 
> More later on possible ways of looking at the original problem.
> 
> Priscilla 
> 
> s vermill wrote:
> > 
> > And as a follow up to the below, I just captured a print job on
> > our corp network using Ethereal.  I run Win2k Pro.  Don't know
> > anything about the print server itself.  Here's what I saw...
> > 
> > Microsoft Spool Subsystem (SPOOLSS) / DCE RPC / SMB / netBIOS / TCP
> > 
> > VERY chatty.  I don't see how 10M could ever be achieved in the
> > first place.  Lots of queries and replies (i.e. lots of
> > processor interrupts and the like).
> > 
> > Then I did a large file transfer from one of our file servers
> > to my desktop.  Here's what I saw...
> > 
> > NetBIOS Session Service (NBSS) / TCP 
> > 
> > carrying the actual data and 
> > 
> > SMB / NBSS / TCP
> > 
> > doing the control function.  
> > 
> > Again, VERY chatty (I realize this may be pretty elementary
> > stuff for a lot of folks on the list but some of us don't get
> > too far up the protocol stack when it comes to this sort of
> > "user client-server" interaction, so please bear with me (I do
> > deal with VERY large TCP and UDP exchanges between servers from
> > time to time but there are never any speed mismatches to be
> > dealt with)).  In both of the above cases, TCP would have
> > prevented too much packet loss from taking place even if there
> > had been a choke point, which apparently there wasn't.  Even
> > with something like TFTP, which relies on UDP, each and every
> > block is acknowledged at the application layer (I think), so
> > there likely wouldn't have been any loss there either.
> > 
> > I'm just not sure there's a good real-world example to help us
> > with the theoretical "what if" question.  In what scenario
> > would a large transfer of data be attempted with out any type
> > of flow control in the stack somewhere?
> > 
> > Scott
> > 
> > 
> > s vermill wrote:
> > > 
> > > Steven,
> > > 
> > > Thanks for sharing.  A real example is just what the doctor
> > > ordered.  In your below example, did the print transaction rely
> > > on TCP at L4?  I've captured some print traffic on our corp.
> > > network in the past and I'm pretty sure it was TCP.  Don't know
> > > if there was a speed mismatch between the server and the
> > > printer though.
> > > 
> > > Scott 
> > > 
> > > Steven Aiello wrote:
> > > > 
> > > > Ok, I am still a lowly CCNA, however Einstein said make things
> > > > as simple as they need to be and no more.  I work on a LAN
> > > > where we transmit large print files to Xerox laser printers.
> > > > These files can get up to 1.5Gb in size and sometimes a bit
> > > > larger.  The printers run on older Sun workstations and they
> > > > have 10Mb cards.  I have never come across a situation where
> > > > the server has been able to overflow first of all the switch's
> > > > buffer and second of all its NIC's buffer.  I know I am not the
> > > > only sys admin who randomly sits on the network with a packet
> > > > sniffer and analyses traffic from the major sources of traffic
> > > > on their network; yes, sometimes there will be some retransmit
> > > > requests by the Xerox workstations, but nothing of large
> > > > significance.  Also, these retransmits usually occur when
> > > > another workstation is processing a separate file, also about
> > > > 1Gb or more, and that data is being transferred over the
> > > > network from the workstation to the server.  Also, what kind
> > > > of network environment would you be in where your server would
> > > > be slamming
> > > > one workstation?  I doubt even real-time video would create
> > > > this type of overload, especially since I can imagine it would
> > > > be run over UDP and packets would be dropped if they were out
> > > > of order.  Theoretically you may be able to overwhelm a
> > > > 10Base-T card, however I would even doubt that considering the
> > > > windowing and source quenching built into TCP/IP (source
> > > > quench may be the wrong term but you all should know what I am
> > > > talking about).  I think it is far better to have the bandwidth
> > > > ready and available than to fall short.
> > > > 
> > > > That's just my opinion on the humble,
> > > > Steven
> > > > 
> > > > 
> > > 
> > > 
> > 
> > 
> 
> 




Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=65377&t=65263