Re: [j-nsp] Best design can fit DC to DC

2014-11-04 Thread Ben Dale
I recommend you take a good long look at E-VPN:

http://www.juniper.net/documentation/en_US/junos13.2/topics/concept/evpns-overview.html

https://conference.apnic.net/data/37/2014-02-24-apricot-evpn-presentation_1393283550.pdf

http://www.cisco.com/c/en/us/products/collateral/routers/asr-9000-series-aggregation-services-routers/whitepaper_c11-731864.html

It is supported in one form or another on MXs, ASRs and ALU boxes and presents 
P2MP services just like VPLS, only with control-plane-based MAC learning, 
active/active multi-homing and a bunch of other goodness that will probably 
spell the end for OTV (which, under the hood, is essentially MPLS over GRE with 
some smarts), and may even replace VPLS and L2VPN/EoMPLS longer term.
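
For anyone who hasn't poked at the config side yet, a minimal VLAN-based EVPN 
instance on an MX looks something like the sketch below (the loopback addresses, 
RD/RT values and access interface are all made-up placeholders, and the MPLS/IGP 
core plus the CE-facing encapsulation config are left out):

protocols {
    bgp {
        group IBGP {
            type internal;
            local-address 10.255.0.1;        # placeholder loopback on the DC1 PE
            family evpn signaling;           # carry EVPN routes over MP-BGP
            neighbor 10.255.0.2;             # placeholder loopback on the DC2 PE
        }
    }
}
routing-instances {
    EVPN-100 {
        instance-type evpn;                  # VLAN-based EVPN service
        vlan-id 100;
        interface ge-0/0/1.100;              # access port toward the DC fabric
        route-distinguisher 10.255.0.1:100;
        vrf-target target:65000:100;
        protocols {
            evpn;                            # MAC learning moves to the control plane
        }
    }
}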

OTV limits you to a single platform (N7K only, last I looked) from a single 
vendor, and you have to anchor your tunnels on your Data Centre switch.  Now 
your interconnect is tied to your DC fabric.  How inconvenient.

Stretching L2 domains over 1000 km wouldn't be my first suggestion either.

I've had a bit of a play with VCF and it seems to work as advertised, but be 
aware of the mixed-mode limitations if you need to bring in lots of 1GE ports 
(eg: EX4300) - mixed-mode fabrics don't support TISSU (topology-independent 
in-service software upgrade).  Putting 1GE SFPs into a QFX5100 is a workaround, 
albeit a slightly more expensive one.
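
If you do go down the VCF path, the bring-up on the QFX5100 spines is basically 
a pre-provisioned Virtual Chassis configuration - a rough sketch is below (the 
serial numbers are placeholders, and I've left out the VCF-specific knobs such 
as marking the spines as fabric-tree roots and setting mixed mode, so check the 
docs for your release):

virtual-chassis {
    preprovisioned;                     # members only join if their serial is listed
    member 0 {
        role routing-engine;            # QFX5100 spine (placeholder serial)
        serial-number TA3714010001;
    }
    member 1 {
        role routing-engine;            # second QFX5100 spine (placeholder serial)
        serial-number TA3714010002;
    }
    member 2 {
        role line-card;                 # EX4300 or QFX5100 leaf (placeholder serial)
        serial-number PE3714020003;
    }
}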

Ben



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Best design can fit DC to DC

2014-11-04 Thread Oliver Garraux
I don't think L2 extension is a good idea, but if you have to, OTV is the
way to do it IMO - particularly since you have to support non-virtualized
infrastructure and may have more than two sites in the future.  OTV isn't
going to give you sub-second convergence, though (I think it might support
BFD in the future?).

35 km isn't too bad latency-wise.  1000 km will be much trickier.  Things
like storage replication will be more difficult.  And while OTV can easily
isolate FHRPs so that outbound traffic leaves via the local default gateway,
traffic tromboning on inbound traffic might be an issue: if inbound traffic
is routed to the "wrong" data center, the extra latency of crossing 1000 km
to reach the data center where the box actually lives will hurt.  LISP could
in theory help with this, but it would involve changes to how traffic gets
*to* your data centers.
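
(For what it's worth, the local-egress half of that on the Juniper side is
usually just an identical virtual gateway at each site, with the FHRP hellos
kept off the DCI so each DC answers for the gateway locally.  A minimal VRRP
sketch on an MX IRB is below - the addresses, unit and group number are
placeholders:)

interfaces {
    irb {
        unit 100 {
            family inet {
                address 10.10.100.2/24 {             # this site's physical gateway address
                    vrrp-group 1 {
                        virtual-address 10.10.100.1; # same virtual gateway IP at both DCs
                        priority 200;
                        accept-data;                 # let the VIP answer pings etc.
                    }
                }
            }
        }
    }
}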

Also, don't overlook the integration of the storage / network / systems
pieces in general - i.e. if you lose all the links between the data centers,
storage, networking and systems all need to fail over to the same place.  If
you can, do lots of disruptive testing before going into production.  A
previous organization I was with discovered (and fixed) many issues with its
L2 extension through pre-production testing of everything.

Oliver

-

Oliver Garraux
Check out my blog:  blog.garraux.net
Follow me on Twitter:  twitter.com/olivergarraux

On Fri, Oct 31, 2014 at 9:41 AM, R LAS  wrote:

> Hi all,
> a customer of mine is planning to renew their DC infrastructure and the
> interconnection between the main site (DC1) and the secondary site (DC2),
> with the possibility of adding another site (DC3) in the future.
>
> The main goals are: sub-second convergence on any single component failure
> between DC1 and DC2 (not DC3), the ability to extend L2 and L3 between the
> DCs, STP isolation between the DCs, and server-facing ports at 1/10 Gbps
> Ethernet.
>
> DC1 and DC2 are 35 km apart; DC3 is around 1000 km away from DC1 and DC2.
>
> The customer would like to see designs using both Cisco and Juniper, and
> decide at the end.
>
> On the Juniper side, my idea was to build an MPLS interconnection with
> MX240s or MX104s in a Virtual Chassis between DC1 and DC2 (it would then be
> easy to add DC3 tomorrow) and to use QFX switches in a Virtual Chassis
> Fabric configuration.
>
> Does anybody run this kind of configuration?
> Is the QFX stable?
> Any other suggestions or improvements?
>
> And if you would go with Cisco, what would you propose in this scenario?
>
> Rgds
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Counters for Storm-Control on EX switches?

2014-11-04 Thread Sachin Rai

AFAIK, there is no such config or any hidden knob to count the dropped packets.

If this is a test scenario, you can do the following (just an idea I had):

Graph the interface where you want to see the storm-control drops for a couple
of hours or days in any SNMP graphing tool (polling here should be very
frequent, say every 10 seconds).  Then configure the storm-control threshold on
that interface.  When a storm hits, you should get a rough idea of how many
packets were dropped by comparing the received traffic against the threshold.
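
For what it's worth, the pieces involved on a pre-ELS EX would look roughly
like the sketch below (the interface name, limit and community string are
placeholders, and the storm-control syntax is from memory - ELS platforms
moved it under forwarding-options, so double-check against your release):

ethernet-switching-options {
    storm-control {
        interface ge-0/0/10 {       # port under test (placeholder)
            bandwidth 1000;         # storm-control limit in kbps (placeholder value)
        }
    }
}
snmp {
    community public {              # read-only community for the graphing tool
        authorization read-only;
    }
}

The graphing tool then just polls the standard IF-MIB counters (ifHCInOctets,
ifHCInUcastPkts and friends) at a short interval; the gap between the received
rate and the configured limit during a storm is your estimate of the drops.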



> Date: Mon, 3 Nov 2014 17:29:11 +0100
> From: juniper-...@ml.karotte.org
> To: juniper-nsp@puck.nether.net
> Subject: [j-nsp] Counters for Storm-Control on EX switches?
> 
> Hello,
> 
> are there any counters showing how many packets are dropped by storm
> control on a Juniper EX?  The drops don't show up in the interface's
> dropped-packet counters; storm control only produces a (not very
> informative) syslog message.
> 
> Regards
> 
> Sebastian
> 
> -- 
> GPG Key: 0x93A0B9CE (F4F6 B1A3 866B 26E9 450A  9D82 58A2 D94A 93A0 B9CE)
> 'Are you Death?' ... IT'S THE SCYTHE, ISN'T IT? PEOPLE ALWAYS NOTICE THE 
> SCYTHE.
> -- Terry Pratchett, The Fifth Elephant
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp