Re: [c-nsp] Dual OC3 - Separate Carrier - Load-balancing 7201

2011-04-05 Thread Justin M. Streiner

On Tue, 5 Apr 2011, arulgobinath emmanuel wrote:


 out of order could be a major headache, and throughput could suffer
 greatly.

 I haven't tried a similar setup myself, but from what I've read, MLPPP is
 designed to handle out-of-order packets (RFC 1990), though reordering can
 impact router performance and buffer space.


If you're doing per-packet load-sharing, the only way to handle 
out-of-order packets is to have deep buffers.  Either the routers or hosts 
have to buffer packets to get them back into the right order, or the 
applications themselves need to be able to tolerate packets arriving out 
of order.  In the worst case, the proper sequence of packets does not 
arrive before the buffer fills, which would force the application to 
either re-transmit the affected chunk of data, or throw an error.


None of these outcomes is particularly good for throughput.
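
If MLPPP wins out, the bundle counters at least give you a rough view of how 
much reordering and reassembly work the router is absorbing. A hypothetical 
illustration (bundle name and counter values are made up, and the exact 
output varies by IOS release):

  router# show ppp multilink

  Multilink1
    Bundle name: dc2-peer
    0 lost fragments, 12 reordered, 0 unassigned
    0 discarded, 0 lost received, 2/255 load

If the reordered and lost-fragment counters climb steadily, that's your 
early warning that the latency skew between the two carriers is biting.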

jms




[c-nsp] Dual OC3 - Separate Carrier - Load-balancing 7201

2011-04-04 Thread Mark Mason
We are planning to terminate dual OC3 point-to-point circuits (PA-POS-2OC3) 
from different carriers on 7201 NPE-G2s at two of our DCs, and to run either 
HDLC with CEF per-packet load-balancing or a multilink PPP bundle. For anyone 
who has run this type of setup, what questions should we be asking, and what 
was your experience in the field?

The particulars:
PID: PA-POS-2OC3
PID: MEM-7201-FLD256
NAME: c7201, DESCR: Cisco 7201 Network Processing Engine
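
For concreteness, here is roughly what we have in mind for each option. This 
is only a sketch; the slot numbers and addressing below are placeholders, not 
our production config:

  ! Option 1: two independent HDLC links, CEF per-packet load-sharing
  ! (per-packet is set on the outbound interfaces; CEF is on by default)
  interface POS1/0
   ip address 192.0.2.1 255.255.255.252
   ip load-sharing per-packet
  !
  interface POS1/1
   ip address 192.0.2.5 255.255.255.252
   ip load-sharing per-packet
  !
  ! Option 2: one MLPPP bundle with both POS links as members
  interface Multilink1
   ip address 192.0.2.9 255.255.255.252
   ppp multilink
   ppp multilink group 1
  !
  interface POS1/0
   no ip address
   encapsulation ppp
   ppp multilink
   ppp multilink group 1
  !
  interface POS1/1
   no ip address
   encapsulation ppp
   ppp multilink
   ppp multilink group 1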



Mark Mason
Network Engineer
JHA Communications Infrastructure
Design and Implementation Engineering
417.235.6652 x1520



Re: [c-nsp] Dual OC3 - Separate Carrier - Load-balancing 7201

2011-04-04 Thread Justin M. Streiner

On Mon, 4 Apr 2011, Mark Mason wrote:

We are planning to terminate dual OC3 point-to-point circuits 
(PA-POS-2OC3) from different carriers on 7201 NPE-G2s at two of our DCs, 
and to run either HDLC with CEF per-packet load-balancing or a multilink 
PPP bundle. For anyone who has run this type of setup, what questions 
should we be asking, and what was your experience in the field?


I would stay away from per-packet load-sharing in this design, unless 
there is a really compelling reason to use it.  The biggest reason to 
stay away is that OC3 (technically, OC3c) circuits from different 
carriers to the same locations could have vastly different end-to-end 
latencies.  If they are significantly different, dealing with packets 
arriving out of order could be a major headache, and throughput could 
suffer greatly.
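
For what it's worth, the default CEF behavior is per-destination sharing: 
each source/destination pair is hashed onto one link, so an individual flow 
never sees the latency skew between the carriers. If you want to see which 
link a given flow would ride, there's a handy check (the addresses here are 
hypothetical, and the exact output format differs between releases):

  router# show ip cef exact-route 10.1.1.10 10.2.2.20
  10.1.1.10 -> 10.2.2.20 : POS1/0 (next hop 192.0.2.2)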


You might also want to look at a Gigabit Ethernet solution, if that's an 
option and you're not chained to a POS design.  The operating costs of a 
pair of gig-e circuits could very well be substantially lower than a pair 
of OC3cs, with the added benefit of being able to provide a lot more 
bandwidth.  If you were to get two gig-e circuits, the caveat above 
related to per-packet load-sharing would still apply.


In a POS world, if you need more than 155 Mb/s, you either 
need to install another POS circuit, or start upgrading to OC12c or 
higher.  That also throws in the need to purchase new router hardware, 
because the 7201 won't handle a POS OC12c, so your costs per megabit 
wouldn't scale too well.


jms


Re: [c-nsp] Dual OC3 - Separate Carrier - Load-balancing 7201

2011-04-04 Thread arulgobinath emmanuel
 out of order could be a major headache, and throughput could suffer
 greatly.

I haven't tried a similar setup myself, but from what I've read, MLPPP is
designed to handle out-of-order packets (RFC 1990), though reordering can
impact router performance and buffer space.
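
If the reassembly overhead is the concern, the docs suggest you can tell the 
router not to fragment packets across the member links, so it only has to 
sequence whole packets. A sketch I have not tested myself (bundle interface 
name assumed):

  interface Multilink1
   ! send each packet whole down one member link instead of
   ! splitting it into fragments across both
   ppp multilink fragment disable

Sequencing still protects against reordering between the links, but the 
per-packet reassembly and buffering load should drop.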
Gobinath



___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/