Yes, I said you would see the combined aggregate: you can send twice as
much data, but you're not aggregating the speed. Two T1s are still two 1.5M
links; even when bundled they do not have a clock of 3M, just the bandwidth
of 3M.

  That may seem obvious, but I have had calls from people who did the
"math" wondering why the throughput did not come out to 3 Mbps when
multilinking two T1s, for example.
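
  For anyone who wants to try the per-packet CEF approach I mention in the
quoted message below, it comes down to roughly this (the interface numbers
are just placeholders for your two T1s):

    ip cef
    !
    interface Serial0
     ip load-sharing per-packet
    !
    interface Serial1
     ip load-sharing per-packet

  As long as the routing protocol installs both T1s as equal-cost routes,
CEF will alternate packets between them, with no multilink bundle to
configure or troubleshoot.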

 Dave

Paul Lalonde wrote:
> 
> Hmm.. If this were the case, though, wouldn't I expect to only see 64Kbps of
> bandwidth for a single user session on a 128K multilinked ISDN call?
> 
> Seems to me if the link were loaded up properly, you'd see the combined
> aggregate.
> 
> Paul Lalonde
> 
> ""MADMAN""  wrote in message
> [EMAIL PROTECTED]">news:[EMAIL PROTECTED]...
> > Yes, you verified what I have harped on a few times: the added complexity
> > of multilinking, not to mention the several bugs I have encountered, is
> > why I say just use CEF and load share per packet/destination.  Also,
> > neither multilinking nor CEF gives you greater speed, but you do have more
> > bandwidth.  If you compare a 2-lane versus a 4-lane highway, the additional
> > 2 lanes won't enable you to go any faster, but you can get twice as many
> > cars to their destination.
> >
> >   So yes, two T1s will give you twice the throughput in x time, but the
> > links are still 1.5M no matter how you slice it.
> >
> >   Dave
> >
> > Chuck Larrieu wrote:
> > >
> > > A couple of weeks ago there were a couple of discussions on this board
> > > about using multiple T1s to improve data throughput. If memory serves,
> > > there were two possible ways to do this: 1) per packet load sharing and
> > > 2) PPP multilink.
> > >
> > > For no particular reason I decided to do a little study on PPP multilink.
> > > Well, OK, I do have two particular reasons - an upcoming Lab and a
> > > customer who is asking about this.
> > >
> > > So, I built a scenario as follows:
> > >
> > >    serial0  token ring
> > > R6--------R5-----------R4
> > >  |--------|
> > >   serial1
> > >
> > > To test throughput, I used extended ping, with multiple pings and
> > > various size payloads, from a loopback on R4 to a loopback on R6.
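
[Side note for anyone repeating this: the extended ping dialog Chuck is
using looks roughly like the below; repeat count, datagram size, and the
source loopback are the fields that matter, and the target address here is
just a made-up example.

    R4# ping
    Protocol [ip]:
    Target IP address: 10.6.6.6
    Repeat count [5]: 1000
    Datagram size [100]: 1500
    Timeout in seconds [2]:
    Extended commands [n]: y
    Source address or interface: Loopback0

The remaining prompts can be left at their defaults.]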
> > >
> > > The routing protocol was EIGRP, done to ensure per packet routing
> > > between R6 and R5 as a control.
> > >
> > > My results were interesting, to say the least: unexpected, but so
> > > consistent that there is no question, in my mind anyway, about some of
> > > the assumptions many of us make about various load sharing and
> > > multiplexing options.
> > >
> > > Here is a summary of the results, using the Cisco router's reporting of
> > > min/avg/max round-trip times - the middle number is the one to watch.
> > >
> > > packet size    dual-link PPP multilink    single link configured as PPP multilink
> > >
> > > 1000           24/24/132                  20/20/104
> > > 1500           28/29/52                   24/27/112
> > > 500            16/19/64                   12/13/104
> > > 64             12/14/60                   4/7/104
> > >
> > > Note that in every case, the single link, configured for PPP multilink,
> > > is SIGNIFICANTLY faster than the dual link.
> > >
> > > Interesting. So I constructed some further experiments, using extended
> > > ping with multiple packets of variable size - range 64 to 1500:
> > >
> > >           PPP multilink    per packet load share   single T1
> > >
> > >            8/17/136           4/17/136              4/17/144
> > >
> > > These figures are from over 15,000 pings per scenario, so it is not a
> > > case of random chance here. There is no difference whatsoever between the
> > > results of a single serial link, per packet load sharing over two serial
> > > links, and PPP multilink. What is most surprising is that a single serial
> > > connection proves JUST AS FAST as a dual serial connection.
> > >
> > > Now what I conclude from this is that multiple T1s DO NOT really do much
> > > for you in terms of more bandwidth, at least for the kinds of data flows
> > > I am able to generate in the lab.  Furthermore, PPP multilink is actually
> > > harmful to throughput. So I gotta ask - is load sharing really adding
> > > anything to the mix? Really? In real world scenarios and data flows,
> > > where is it that you are gaining anything?
> > >
> > > Lastly, I set up a final scenario in which I sent 5000 byte packets.
> > > This means fragmentation and reassembly would occur, because the MTU on
> > > all WAN interfaces is 1500 bytes. Here are the results when pinging 5000
> > > times using a 5000 byte payload:
> > >
> > > single serial link: 64/66/168
> > >
> > > per packet load share: 64/64/168
> > >
> > > ppp multilink: 48/52/172
> > >
> > > Note here that the load sharing scenario is slightly faster than the
> > > single serial link, and that the PPP multilink is FAR AND AWAY faster
> > > than the other two. I suspect the reason for this is efficiencies gained
> > > under the multilink scenario when fragmenting and reassembling the
> > > oversized payloads.
> > >
> > > In any case, I hope this presentation will lead to some good discussion
> > > of bandwidth and results. Would it be fair to suggest that people's
> > > efforts to solve what they perceive as bandwidth issues by implementing
> > > multiple WAN links are really a study in fruitless activity?
> > >
> > > Maybe I should have set up some IPX scenarios?
> > >
> > > Chuck
-- 
David Madland
Sr. Network Engineer
CCIE# 2016
Qwest Communications Int. Inc.
[EMAIL PROTECTED]
612-664-3367

"Emotion should reflect reason not guide it"
