It seems safer to purchase two clear-channel circuits, one from each provider. If a single provider goes down, the interface/neighbor will fail and normal routing failover will occur. Even with redundant providers, the aggregated GE must still plug into a single card on a single piece of equipment. Maybe I'm missing something, but it seems like you traded one single point of failure for another.
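Something along these lines is what I had in mind. This is only a rough sketch; the interface names, addressing, and AS numbers are made up for illustration, and it assumes plain eBGP to each provider:

! Rough sketch - hypothetical interfaces, addressing, and AS numbers
interface GigabitEthernet0/0
 description Clear-channel GE to Provider A
 ip address 192.0.2.2 255.255.255.252
!
interface GigabitEthernet0/1
 description Clear-channel GE to Provider B
 ip address 198.51.100.2 255.255.255.252
!
router bgp 64512
 ! Two eBGP sessions; if either provider's circuit drops, its interface
 ! and neighbor go down and traffic shifts to the surviving provider.
 neighbor 192.0.2.1 remote-as 64496
 neighbor 198.51.100.1 remote-as 64497

The same idea works with floating static routes if you don't want to run BGP. The point is that a provider failure is visible to the router as an interface or neighbor going down, so failover needs no extra detection machinery.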
On Tue, Oct 26, 2010 at 11:45 PM, jack daniels <[email protected]> wrote:
> MAP GE to 4 STM-1 --
>
> STM-1 from (PROVIDER A)
> STM-1 from (PROVIDER A)
> STM-1 from (PROVIDER B)
> STM-1 from (PROVIDER B)
>
> This is done for redundancy. In case Provider A goes down, we have only
> 2 x 155 Mb/s of bandwidth left (congestion).
>
> On Tue, Oct 26, 2010 at 5:11 PM, Keegan Holley <[email protected]> wrote:
> > An STM is simply a group of time slots over the existing physical path. It
> > would be pretty strange for an STM to go down without the entire circuit
> > going down. Less strange would be misconfiguration, but that has all the
> > standard hooks. If you own the layer-1 equipment, then change control
> > policies would help with this. I'm personally not aware of a way to monitor
> > capacity from within the circuit without testing and trying to use all the
> > available bandwidth, which would defeat the purpose.
> >
> > On Tue, Oct 26, 2010 at 1:56 AM, jack daniels <[email protected]> wrote:
> >> Hi guys,
> >>
> >> I have an EoSDH circuit as the primary link, in which a GE is mapped to
> >> 4 STM-1s, and the backup path is another GE link.
> >>
> >> If 2 of the 4 STM-1s in the EoSDH circuit fail, my routing will not be
> >> aware of that and will not reroute the traffic to the backup GE.
> >> This will lead to congestion on the primary link, while the backup path
> >> will not be used at all.
> >> Is there any way to work around this issue?
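As for detecting the partial failure itself: since the GE hand-off stays up when individual VCAT members drop, the router has nothing obvious to key routing off of. One partial workaround might be to alarm on the symptom (sustained high output load on the primary) with an EEM applet. The syntax below is from memory and the threshold is arbitrary, so treat it as a sketch rather than tested config:

event manager applet WATCH-EOSDH-LOAD
 ! Poll txload (0-255 scale) on the primary every 60 seconds; ~200 is roughly 78% utilisation
 event interface name GigabitEthernet0/0 parameter txload entry-op ge entry-val 200 entry-val-is-increment false poll-interval 60
 action 1.0 syslog msg "Primary EoSDH link heavily loaded - possible VCAT member failure, check STM-1 members"

This only catches the symptom once the link is already congested, though. If you own the layer-1 gear, alarming on the individual STM-1 members there would be more direct.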
