I have been searching for the purpose of these FECN and BECN bits, and I
found the following in an old newsgroup post from 1994 by someone who
wrote part of the Frame Relay standards.  It looks like Howard and
Priscilla were right that IP wasn't a concern, since IBM had SDLC and
AT&T and Bellcore had X.25 and other networks.  It also looks like X.25
had congestion issues because it had no Layer 4?  Am I right?

From: [EMAIL PROTECTED] (Fred R. Goldstein)
Newsgroups: comp.dcom.frame-relay
Subject: Re: Use of FECN/BECN for congestion management.
Date: 16 Nov 1994 16:15:56 GMT
Organization: Bolt Beranek and Newman Inc.
Lines: 86
NNTP-Posting-Host: bbn.com


I was part of the Frame Relay congestion control battle/brouhaha, or
whatever you prefer to call it, from around 1985 until the ANSI standards
were published in 1991.  So I _can_ give some historical background on
the motivations behind BECN and FECN.  I also wrote much of the text for
FECN.

When Frame Relay was "conceived", little attention was paid to congestion
issues.  Frame Relay became "the standard" because AT&T was pushing HARD
for a "New Packet Mode Bearer Service" (NPMBS) which would use Layer 2
multiplexing.  This was invented by AT&T as "DMI Mode 3", which used full
LAPD plus X.25 PLP with a single Layer 3 channel in each L2 VC.  In
spring 1986, AT&T, IBM and Bellcore agreed to work on Frame Relay and
advance it towards ANS status via ANSI T1D1 (which later became T1S1).

None of these companies had much IP experience at the time, and it was
mostly X.25-experienced people working on it.  So the congestion issues
needed to be brought out.  I was working for a company that sold
connectionless networks, and we KNEW about congestion and the possibility
of congestion collapse.  (Firsthand experience with congestion collapse
in the early '80s was a very good learning experience.)  BTW, my main
authority on this topic was Raj Jain, who invented slow-start (named
"CUTE", "congestion control using timeouts in the end-to-end layer")
before Van did, and is credited in a footnote in Van's article.

Since modern connectionless-network-layer-based networks use the
transport layer for flow control, and have RECEIVER-based windows, we
figured it was best to tell the RECEIVER that the network was congested,
because it could reduce its window size.  We were still in the era when
we expected OSI to catch on, and the North American OSI Implementors'
Agreement for CLNP defined exactly how to use the Congestion Encountered
bit in the CLNP header to dynamically adjust the window size in TP4.
Semantically, TP4 is a lot like TCP, and CLNP is a lot like IP, but IP
lacks the CE bit.  :-(  Therefore I proposed the FECN bit.  This made the
FR header "address" field look different from LAPD, because we had to
steal a bit (LAPD has 13 bits of address).  The technical name for this
is Explicit Binary Feedback.
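
As a concrete picture of where those stolen bits sit, here is a minimal
sketch (my own illustration, not part of the standards text) of parsing
the two-octet LAPF core header, with the 10-bit DLCI wrapped around the
C/R, FECN, BECN, DE and EA bits:

    # Sketch of the two-octet Frame Relay (LAPF core) header layout.
    # Field positions are the standard ones; the code is illustrative.
    def parse_fr_header(b0: int, b1: int) -> dict:
        return {
            "dlci": ((b0 >> 2) << 4) | (b1 >> 4),  # 10-bit DLCI, split across octets
            "cr":   (b0 >> 1) & 1,                 # command/response bit
            "fecn": (b1 >> 3) & 1,                 # forward explicit congestion
            "becn": (b1 >> 2) & 1,                 # backward explicit congestion
            "de":   (b1 >> 1) & 1,                 # discard eligibility
            "ea":   (b0 & 1, b1 & 1),              # address extension bits (0, then 1)
        }

    # Example: DLCI 100 with FECN set
    print(parse_fr_header(0x18, 0x49))
    # {'dlci': 100, 'cr': 0, 'fecn': 1, 'becn': 0, 'de': 0, 'ea': (0, 1)}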

IBM, on the other hand, had implemented a congestion control strategy for
SNA using SDLC.  In SDLC, the only window is in the SENDER.  So they had
no use for FECN, and asked for a BECN bit.  We argued about it; having
both bits was not widely supported at first because it would have shrunk
the DLCI by another bit!  Making it a per-connection option (the bit is
FECN _or_ BECN) was also not popular.  Eventually (by 1989) consensus
moved towards having both bits.
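
To make the two reaction models concrete, here is a minimal sketch (mine;
the halving and increment constants are illustrative, not from any
standard): FECN is acted on by the RECEIVER, which shrinks the window it
advertises, while BECN is acted on by the SENDER, which shrinks its own
transmit window:

    # Illustrative window reactions; the constants are my assumptions.
    class FecnReceiver:
        """FECN model: the receiver shrinks the window it advertises."""
        def __init__(self, max_window: int = 64):
            self.max_window = max_window
            self.advertised = max_window

        def on_frame(self, fecn: bool) -> None:
            if fecn:
                self.advertised = max(1, self.advertised // 2)  # back off
            else:
                self.advertised = min(self.max_window, self.advertised + 1)

    class BecnSender:
        """BECN model (SDLC-style): the sender shrinks its own window."""
        def __init__(self, max_window: int = 7):
            self.max_window = max_window
            self.window = max_window

        def on_frame(self, becn: bool) -> None:
            if becn:
                self.window = max(1, self.window // 2)          # back off
            else:
                self.window = min(self.max_window, self.window + 1)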

The DE bit was added because the networks needed a way to police the
whole shebang.  Since this was a telco service and telcos like to sell
rate-based services, they wanted a way to carry "excessive" traffic
(exceeding the CIR leaky bucket but not the EIR leaky bucket), but at
lowered priority.  DE does this quite nicely.  Thus we have three bits
stolen from the DLCI.
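
Roughly, that policing model looks like the following sketch (my own
simplification; the real Bc/Be accounting in the standards is
measurement-interval based and more involved): traffic that fits the CIR
bucket is forwarded untouched, traffic that overflows CIR but fits the
EIR bucket is forwarded with DE set, and anything beyond that is
discarded at ingress:

    import time

    class DualBucketPolicer:
        """Simplified CIR/EIR policer: conform -> forward, exceed ->
        mark DE, violate -> discard.  Rates in bits/s, burst in bytes."""
        def __init__(self, cir_bps: float, eir_bps: float, burst_bytes: int):
            self.cir = cir_bps / 8.0   # committed drain rate, bytes/s
            self.eir = eir_bps / 8.0   # excess drain rate, bytes/s
            self.burst = burst_bytes
            self.c_fill = 0.0          # committed bucket fill level
            self.e_fill = 0.0          # excess bucket fill level
            self.last = time.monotonic()

        def police(self, frame_len: int) -> str:
            now = time.monotonic()
            dt, self.last = now - self.last, now
            # Both buckets drain continuously at their configured rates.
            self.c_fill = max(0.0, self.c_fill - self.cir * dt)
            self.e_fill = max(0.0, self.e_fill - self.eir * dt)
            if self.c_fill + frame_len <= self.burst:
                self.c_fill += frame_len
                return "forward"          # within CIR
            if self.e_fill + frame_len <= self.burst:
                self.e_fill += frame_len
                return "forward, DE=1"    # exceeds CIR, within EIR
            return "discard"              # exceeds even EIR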

The whole rate-based thing was written by T1S1.1 (Services) into the
T1.610 Addendum, while FECN and BECN were written by T1S1.2 (Protocols)
into T1.618 (Core Aspects of LAPF).  The two mechanisms are unrelated!

AT&T, btw, was concerned about asymmetrical packet voice traffic, and they
put in the Consolidated Link Layer Management message (CLLM), which is in
effect a complex Frame Relay Source Quench.  This isn't widely used.

So in summary, the FECN bit was aimed at feeding the Layer 3 "Congestion
Encountered" bit, which in turn was to shrink the L4 window (preferably
before losing frames, thus providing a smoother flow).  The BECN bit was
aimed at reducing the HDLC/SDLC window.  CIR/EIR was aimed at protecting
the network against users who didn't pace their traffic; in practice, it
causes strategic discards which trigger VJ slow-start, and that forms an
"implicit" feedback mechanism.  The semantics of FECN and BECN (how you
should react; how each is set) are also INDEPENDENT of one another; they
were invented separately and have different notions of congestion.  And
because they're all optional, there's no reasonable possibility of
conformance testing.

It would be ideal if IP were to add a CE bit, like the one in CLNP.
TUBA, of course, has it, but current IPv6 drafts do not.  It can be shown
that CE bits improve overall performance, but they're not universally
loved in the IETF world (which after all did not invent them).  In their
absence, FECN is of limited usefulness.
    fred

--
Fred R. Goldstein  k1io  [EMAIL PROTECTED]  +1 617 873 3850
Opinions are mine alone.  Sharing requires permission.

--
RFC 1149 Compliant.

