Hi

Very interesting. I would also be interested in seeing a comparison of CPU
load between the methods. I will venture to say that the CPU usage of the
multilink setup is the highest.
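
If anyone re-runs the tests, a quick way to get at that would be to watch
processor load on R5 and R6 while the pings are running - something along
these lines (router name from Chuck's topology; the exact output fields
vary by platform and IOS version):

R5# show processes cpu
! Compare the five-second utilization (and the interrupt-level figure
! after the slash) across the three scenarios, sampled before and during
! each ping run.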

John Hardman CCNP

""Chuck Larrieu""  wrote in message
[EMAIL PROTECTED]">news:[EMAIL PROTECTED]...
> A couple of weeks ago there were a couple of discussions on this board
> about using multiple T1's to improve data throughput. If memory serves,
> there were two possible ways to do this: 1) per-packet load sharing and
> 2) PPP multilink.
>
> For no particular reason I decided to do a little study on PPP multilink.
> Well, OK, I do have two particular reasons - an upcoming Lab and a customer
> who is asking about this.
>
> So, I built a scenario as follows:
>
>    serial0  token ring
> R6--------R5-----------R4
>  |--------|
>   serial1
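
(For reference, the dual-link bundle in the diagram would look roughly
like this on the R6 side, mirrored on R5. This is a sketch only - the
addressing, the bundle number, and the "ppp multilink group" syntax are
assumptions rather than Chuck's actual config; older IOS images use the
"multilink-group" form instead.)

interface Multilink1
 ip address 10.1.56.6 255.255.255.0
 ppp multilink
 ppp multilink group 1
!
interface Serial0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1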
>
> To test throughput, I used extended ping, with multiple pings and
> various-size payloads, from a loopback on R4 to a loopback on R6.
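
(Something along these lines, presumably - the loopback addresses, repeat
count, and datagram size below are made-up placeholders, not Chuck's
actual values:)

R4# ping
Protocol [ip]:
Target IP address: 10.6.6.6
Repeat count [5]: 1000
Datagram size [100]: 1500
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface: 10.4.4.4
Type of service [0]:
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]: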
>
> The routing protocol was EIGRP, used to ensure per-packet routing between
> R6 and R5 as a control.
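
(Worth noting for anyone reproducing this: EIGRP just installs the two
equal-cost routes; whether traffic actually alternates per packet depends
on the switching path. A sketch of the two usual ways to force it, on R5
and R6 - interface names follow the diagram, the EIGRP AS number and the
rest are assumptions:)

router eigrp 100
 network 10.0.0.0
!
! Option 1: process switching - equal-cost traffic is balanced per packet
interface Serial0
 no ip route-cache
interface Serial1
 no ip route-cache
!
! Option 2: if the image supports CEF, per-packet sharing per interface
ip cef
interface Serial0
 ip load-sharing per-packet
interface Serial1
 ip load-sharing per-packet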
>
> My results were interesting, to say the least. Unexpected, but so
> consistent that, in my mind anyway, they leave no doubt that some of the
> assumptions many of us make about various load-sharing and multiplexing
> options deserve a second look.
>
> A summary of the results follows, using the Cisco routers' reporting of
> min/avg/max round-trip times in milliseconds - the middle number is the
> one to watch.
>
> packet size (bytes)   dual-link PPP multilink   single link as PPP multilink
>
>        1000                 24/24/132                    20/20/104
>        1500                 28/29/52                     24/27/112
>         500                 16/19/64                     12/13/104
>          64                 12/14/60                      4/7/104
>
> Note that in every case the single link, configured for PPP multilink, is
> SIGNIFICANTLY faster than the dual link.
>
> Interesting. So I constructed some further experiments, using extended
> ping with multiple packets of variable size - range 64 to 1500 bytes:
>
>           PPP multilink   per-packet load share   single T1
>
>             8/17/136            4/17/136           4/17/144
>
> These figures are from over 15,000 pings per scenario, so it is not a case
> of random chance here. There is no difference whatsoever between the
> results for a single serial link, per-packet load sharing over two serial
> links, and PPP multilink. What is most surprising is that a single serial
> connection proves JUST AS FAST as a dual serial connection.
>
> Now, what I conclude from this is an opinion that multiple T1's DO NOT
> really do much for you in terms of more bandwidth - at least for the kinds
> of data flows I am able to generate in the lab. Furthermore, PPP multilink
> is actually harmful to throughput. So I gotta ask - is load sharing really
> adding anything to the mix? Really? In real-world scenarios and data flows,
> where is it that you are gaining anything?
>
> Lastly, I set up a final scenario in which I sent 5000-byte packets. This
> means fragmentation and reassembly would occur, because the MTU on all WAN
> interfaces is 1500 bytes. Here are the results when pinging 5000 times
> using a 5000-byte payload:
>
> single serial link:      64/66/168
>
> per-packet load share:   64/64/168
>
> PPP multilink:           48/52/172
>
> Note here that the load-sharing scenario is slightly faster than the single
> serial link, and that PPP multilink is FAR AND AWAY faster than the other
> two. I suspect the reason for this is efficiencies gained under the
> multilink scenario when fragmenting and reassembling the oversized
> payloads.
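
(If that hunch is right, it should show up in the bundle statistics. One
way to check, sketched with Chuck's router names - the exact counters and
layout vary by IOS version:)

R6# show ppp multilink
! Compare the bundle's fragment, reordered, and lost-fragment counters
! before and after the 5000-byte run.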
>
> In any case, I hope this presentation will lead to some good discussion of
> bandwidth and results. would it be fair to suggest that peoples' efforts
to
> solve what they perceive as bandwidth issues by implementing multiple WAN
> links is really a study in fruitless activity?
>
> Maybe I should have set up some IPX scenarios?
>
> Chuck



