PPP Multilink studies - interesting results [7:21623]

2001-10-01 Thread Chuck Larrieu
A couple of weeks ago there were a couple of discussions on this board about using multiple T1's to improve data throughput. If memory serves, there were two possible ways to do this: 1) per-packet load sharing and 2) PPP multilink. For no particular reason, I decided to do a little study on PPP multilink ...
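For readers trying to reproduce Chuck's setup, a minimal multilink bundle on a pair of parallel serials would look roughly like the following. This is a sketch, not Chuck's actual config: the interface names and addressing are made up, and the bundle-membership syntax (multilink-group vs. the later ppp multilink group) varies by IOS version.

    ! Bundle interface carries the layer-3 config
    interface Multilink1
     ip address 10.1.1.1 255.255.255.252
     ppp multilink
     multilink-group 1
    !
    ! Member links carry no IP address of their own
    interface Serial0
     no ip address
     encapsulation ppp
     ppp multilink
     multilink-group 1
    !
    interface Serial1
     no ip address
     encapsulation ppp
     ppp multilink
     multilink-group 1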

PPP Multilink studies - interesting results [7:21623]

2001-10-01 Thread [EMAIL PROTECTED]
Interesting, as you say. What load were you getting on the links? Your pings are measuring latency, not throughput. If the links weren't heavily loaded, then I can see why you could get these results. Each link is still clocked at T1 speed, so I wouldn't expect adding links to decrease latency.
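To put rough numbers on the clocking point: a T1 carries about 1.536 Mbps of payload, so a 100-byte (800-bit) ping serializes in roughly 800 / 1,536,000 s, i.e. about half a millisecond, per hop per direction. Bundling more T1's does not change that per-bit clock, so the round-trip time of a single small ping on a lightly loaded link should be essentially the same with one link or eight; only sustained throughput scales with the number of links. (Back-of-the-envelope figures, ignoring PPP and multilink header overhead.)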

Re: PPP Multilink studies - interesting results [7:21623]

2001-10-01 Thread John Hardman
Hi, Very interesting. I would be interested in seeing the CPU load between methods too. I will venture to say that the CPU usage of multilink is the highest. John Hardman CCNP "Chuck Larrieu" wrote in message news:[EMAIL PROTECTED]... > A couple of weeks ago there were a couple ...
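For anyone who wants to test John's prediction, the obvious comparison between runs would be the standard IOS CPU and bundle status commands, something along the lines of the following (output formats vary by platform and IOS version):

    Router# show processes cpu
    Router# show ppp multilink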

Re: PPP Multilink studies - interesting results [7:21623]

2001-10-02 Thread Phillip Heller
Chuck, Round-trip times will be roughly the same regardless of whether there's 1 T1 or 8 T1's in the multilink bundle. There is a limit to how fast bits can move in copper. However, the more T1's you have in the bundle, the more bits you can send at the same time. I'd suggest you retry your test ...
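To make the throughput side of that concrete: a 1 MB file is about 8.4 million bits, which takes roughly 5.5 seconds over one T1 at ~1.536 Mbps but only about 1.4 seconds over four bundled T1's. A single ping packet, on the other hand, still serializes at the one-link clock rate either way, which is why a ping test on an unloaded bundle shows little or no improvement. (Rough arithmetic, ignoring protocol overhead.)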

Re: PPP Multilink studies - interesting results [7:21623]

2001-10-02 Thread John Neiberger
This is just a pre-morning-coffee thought. I'm thinking that multiple links only help if you actually need the extra bandwidth. If you're not generating more than 1.544 Mbps of traffic, I would expect the round-trip times to be at least fairly similar regardless of which configuration you ...

RE: PPP Multilink studies - interesting results [7:21623]

2001-10-02 Thread Kent Hundley
... have to physically reconfigure my lab at home to get 2 serials in parallel. Regards, Kent -----Original Message----- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Chuck Larrieu Sent: Monday, October 01, 2001 8:34 PM To: [EMAIL PROTECTED] Subject: PPP Multilink studies - interesting results [7:21623]

Re: PPP Multilink studies - interesting results [7:21623]

2001-10-02 Thread MADMAN
Yes, you verified what I have harped on a few times: the added complexity of multilink, not to mention the several bugs I have encountered, is why I say just use CEF and load share per packet or per destination. Also, neither multilink nor CEF gives you greater speed, but you do have more bandwidth. If you have ...
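For reference, the per-packet CEF alternative MADMAN describes would look roughly like this on two parallel serials, assuming equal-cost routes exist across both links (addresses illustrative). Per-destination sharing is the CEF default; per-packet spreads the load more evenly at the cost of possible packet reordering within a flow.

    ip cef
    !
    interface Serial0
     ip address 10.1.2.1 255.255.255.252
     ip load-sharing per-packet
    !
    interface Serial1
     ip address 10.1.3.1 255.255.255.252
     ip load-sharing per-packet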

Re: PPP Multilink studies - interesting results [7:21623]

2001-10-02 Thread Priscilla Oppenheimer
What was the inter-packet gap between pings? Were they pushing right up against each other, with some wanting to go out while others were still being output? What other traffic was the router trying to pump out, if any? I'm thinking you would need to have packets queued up waiting to go out to ...
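One common way to generate that kind of back-to-back load from the router itself is an extended ping with a large repeat count, a big datagram size, and a zero timeout, along the lines of the sketch below (target and values are illustrative). With the timeout at 0, the router sends each ping without waiting for a reply, which keeps packets queued on the output interface.

    Router# ping
    Protocol [ip]:
    Target IP address: 10.1.1.2
    Repeat count [5]: 1000
    Datagram size [100]: 1500
    Timeout in seconds [2]: 0
    Extended commands [n]:
    Sweep range of sizes [n]: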

Re: PPP Multilink studies - interesting results [7:21623]

2001-10-02 Thread Paul Lalonde
Hmm... If this were the case, though, wouldn't I expect to see only 64Kbps of bandwidth for a single user session on a 128K multilinked ISDN call? Seems to me that if the link were loaded up properly, you'd see the combined aggregate. Paul Lalonde "MADMAN" wrote in message news:[EMAIL PROTECTED]...

Re: PPP Multilink studies - interesting results [7:21623]

2001-10-02 Thread MADMAN
Yes, I said you would see the combined aggregate: you can send twice as much data, but you're not aggregating the speed. That is, 2 T1's are two 1.5M links; even when bundled they do not have a clock of 3M, but they do have the bandwidth of 3M. That may seem obvious, but I have had calls from those that did the "math" ...