I don't think L2 extension is a good idea, but if you have to do it, OTV is
the way to go IMO, particularly since you have to support non-virtualized
infrastructure and may have more than two sites in the future.  OTV isn't
going to give you sub-second convergence, though (I think it might support
BFD in the future?)
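For reference, a minimal OTV sketch on Nexus 7000 NX-OS, assuming a
multicast-enabled transport between the sites.  The interface name, site
VLAN, VLAN range, and group addresses are all placeholders, so adjust to
your environment:

```
feature otv
otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay1
  ! Physical uplink toward the DCI transport
  otv join-interface Ethernet1/1
  ! Multicast groups used for OTV control and data planes
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  ! VLANs stretched between the data centers
  otv extend-vlan 100-110
  no shutdown
```

If the transport between sites can't do multicast, OTV also supports a
unicast adjacency-server mode instead of the control/data groups.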

35 km isn't too bad latency-wise.  1000 km will be much trickier; things
like storage replication will be more difficult.  And while OTV can easily
isolate FHRPs so that outbound traffic leaves via the local default
gateway, traffic tromboning on inbound traffic might be an issue: if
inbound traffic is routed to the "wrong" data center, the extra latency of
crossing 1000 km to reach the data center where the box actually sits will
hurt.  I think LISP could in theory help with this, but it would involve
changes to how traffic gets *to* your data centers.
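The FHRP isolation piece is usually done by filtering FHRP control traffic
on the extended VLANs at each site, so each DC keeps an active local
gateway.  A rough NX-OS sketch, assuming HSRPv1 (UDP 1985 to 224.0.0.2) and
a placeholder VLAN range; I believe Cisco's OTV design guides also
recommend filtering the HSRP virtual MAC from being advertised across the
overlay, which this doesn't show:

```
! Match HSRPv1 hellos (UDP port 1985 to 224.0.0.2)
ip access-list ALL_HSRP
  permit udp any 224.0.0.2/32 eq 1985

! Drop HSRP hellos on the extended VLANs, forward everything else
vlan access-map HSRP_LOCAL 10
  match ip address ALL_HSRP
  action drop
vlan access-map HSRP_LOCAL 20
  action forward

vlan filter HSRP_LOCAL vlan-list 100-110
```

Check the OTV configuration guide for your NX-OS release; newer releases
may handle some of this with fewer knobs.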

Also, in general, don't overlook the integration of storage, network, and
systems.  That is, if you lose all the links between the data centers,
storage, networking, and systems all need to fail over to the same place.
If you can, do lots of disruptive testing before going into production.  A
previous organization I worked with discovered (and fixed) many issues
with L2 extension through pre-production testing of everything.

Oliver

-------------------------------------

Oliver Garraux
Check out my blog:  blog.garraux.net
Follow me on Twitter:  twitter.com/olivergarraux

On Fri, Oct 31, 2014 at 9:41 AM, R LAS <dim0...@hotmail.com> wrote:

> Hi all
> a customer of mine is planning to renew their DC infrastructure and the
> interconnection between the main (DC1) and secondary (DC2) data centers,
> with the possibility of adding another (DC3) in the future.
>
> Main goals are: sub-second convergence in case of a single fault of any
> component between DC1 and DC2 (not DC3), the possibility to extend L2
> and L3 among the DCs, STP isolation among the DCs, and server-facing
> ports at 1/10 Gb/s Ethernet.
>
> DC1 and DC2 are 35 km away, DC3 around 1000 km away from DC1 and DC2.
>
> The customer would like to draw up designs with both Cisco and Juniper
> and then decide between them.
>
> Talking about Juniper, my idea was to build an MPLS interconnection with
> MX240 or MX104 in VC between DC1 and DC2 (tomorrow it will be easy to
> add DC3) and to use QFX switches in a virtual chassis fabric
> configuration.
>
> Does anybody have this kind of config ?
> Is QFX stable ?
> Any other suggestions/improvements ?
>
> And if you would go with Cisco, what would you propose in this scenario ?
>
> Rgds
> _______________________________________________
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
