OSPFv2 does not support unequal-cost load balancing.

Well, there is a way to work around that. For example, a customer has three
E1's to a remote site: two are point-to-point and the third is a channel off
of an E3. When you add the E3, EIGRP chooses it and ignores the other two
E1's. Using the variance command you can force EIGRP (Cisco) to utilize all
three E1's.
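As a rough sketch, the Cisco IOS side might look like this (the AS number,
network statement, and variance multiplier are hypothetical; variance admits
any feasible-successor path whose metric is within that multiple of the best
path's metric):

```
! Hypothetical IOS sketch: let EIGRP install the slower E1 paths
! alongside the best path (values are examples, not from the thread)
router eigrp 100
 network 10.0.0.0 0.0.0.255
 variance 20
 maximum-paths 3
```

The multiplier has to cover the metric ratio between the slowest and fastest
links, and the slower paths must still satisfy EIGRP's feasibility condition
to be considered at all.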

Now, you could tune the OSPF metrics so that the three paths appear equal
(as Farhan pointed out), or you could use RIP, assuming the hop count to
reach the destination over both links is the same. In either case you still
have equal-cost load balancing over unequal links, which results in wasted
bandwidth at best and a bottleneck at worst.
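For completeness, a minimal Junos sketch of the equal-metric trick (interface
names and metric values are hypothetical) might be:

```
# Hypothetical Junos sketch: pin identical OSPF metrics on unequal links
# so they become ECMP candidates, then enable multi-path forwarding
set protocols ospf area 0.0.0.0 interface e1-0/0/0.0 metric 100
set protocols ospf area 0.0.0.0 interface e1-0/0/1.0 metric 100
set protocols ospf area 0.0.0.0 interface e3-0/1/0.0 metric 100
set policy-options policy-statement LB then load-balance per-packet
set routing-options forwarding-table export LB
```

This makes OSPF treat the links as equal, but as noted above the traffic is
still split evenly across unequal capacities.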

Regards,
Masood



-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Farhan Jaffer
Sent: Thursday, November 20, 2008 12:59 PM
To: Iftikhar Ahmed
Cc: juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] Unequal-Cost Load Balancing for OSPF

No. And it's not a Juniper limitation; it is OSPF itself that does not
support unequal-cost load balancing.

EIGRP supports this behavior via its variance factor.

If EIGRP were an open standard (not Cisco proprietary), then Juniper would
likely support the same feature as well.

However, you can use some tricks to approximate your scenario.


Farhan



On Thu, Nov 20, 2008 at 12:44 PM, Iftikhar Ahmed
<[EMAIL PROTECTED]>wrote:

> Hi,
>
>
> Does Juniper support unequal-cost load balancing for OSPF, the way Cisco
> does for EIGRP with commands such as variance and maximum-paths?
>
>
>
>
> Regards
> Iftikhar Ahmed
> _______________________________________________
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>