I’m not sure about the BFD thing.

As I recall (and I would definitely suggest you take this with a few grains of
salt and research a second source), BFD on directly connected paths will be
distributed to the line card.  BFD over a non-direct path, say loopback to
loopback, will be handled by the RE, although I believe this was being
addressed in future releases so that it would be distributed as well.
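One way to sanity-check this on a given box: PPM delegation to the PFE is on
by default, and there is a knob to turn it off, so you can A/B the two modes
and watch what the sessions negotiate.  A rough sketch from memory (the BGP
group name and the 300 ms interval are just placeholders; verify the knobs on
your release):

    [edit]
    routing-options {
        ppm {
            no-delegate-processing;   # force periodic packet mgmt (incl. BFD) onto the RE
        }
    }
    protocols {
        bgp {
            group CORE {                        # hypothetical group
                neighbor 192.0.2.1 {
                    bfd-liveness-detection {
                        minimum-interval 300;   # 300 ms; RE-based sessions generally can't go much faster
                        multiplier 3;
                    }
                }
            }
        }
    }

"show bfd session detail" will then show the intervals that were actually
negotiated in each mode.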


> On Jul 9, 2015, at 9:27 AM, Adam Vitkovsky <adam.vitkov...@gamma.co.uk> wrote:
> 
> Interesting facts.
> Now the Juniper MX104's win over the Cisco ASR903 (max prefix limit) is not
> that clear anymore.
> 
> Since the chassis is 80Gbps in total, I'd assume around 40Gbps towards
> aggregation and 40Gbps towards the backbone.
> 
> Also, if BFD is really not offloaded into HW, it would be a bummer on such a
> slow CPU.
> 
> With regard to 1588, I'd like to know if or how anyone has deployed this on
> an MPLS backbone if the 4G is in a VRF.
> In other words, 1588 runs in the GRT/inet.0, so how do you then relay the
> precise per-hop delay/jitter info to a 4G cell which sits in a VRF?
> Or maybe the cell doesn't really need this precision, and running 1588 with
> the server in the 4G VRF across the 1588-blind MPLS core is enough.
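
(For reference, the plain client-side unicast PTP config on MX looks roughly
like the sketch below; the addresses are made up, and whether the clock source
can sit in a VRF is exactly the open question here, I haven't labbed it:

    [edit protocols ptp]
    clock-mode boundary;
    slave {
        interface ge-2/0/1.0 {
            unicast-mode {
                transport ipv4;
                clock-source 192.0.2.10 local-ip-address 192.0.2.1;
            }
        }
    }
)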
> 
> It seems Juniper is still waiting for a big customer that is not willing to
> wait for BGP to converge millions of MAC addresses if the DF PE fails
> (PBB-EVPN).
> 
> 
> 
> adam
>> -----Original Message-----
>> From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf
>> Of Colton Conor
>> Sent: 24 June 2015 14:09
>> To: juniper-nsp@puck.nether.net
>> Subject: [j-nsp] MX104 Limitations
>> 
>> We are considering upgrading to a Juniper MX104, but another vendor (not
>> Juniper) pointed out the following limitations of the MX104 in their
>> comparison. I am wondering how much of it is actually true, and if it is
>> true, is it really that big of a deal?
>> 
>> 1. No fabric redundancy due to the fabric-less design. There is no switch
>> fabric on the MX104, but there is on the rest of the MX series. Not sure
>> if this is a bad or a good thing?
>> 
>> 2. The chassis's fixed ports are not on an FRU. If a fixed port fails, or
>> if the data path fails, the entire chassis requires replacement.
>> 
>> 3. There is no mention of software support for MACsec on the MX104; it
>> appears to be a hardware capability only at this point in time, with
>> software support potentially coming later.
>> 
>> 4. No IX chipset for the 10G uplinks (i.e. no packet pre-classification;
>> the IX chip is responsible for this function as well as GE-to-10GE i/f
>> adaptation).
>> 
>> 5. The QX complex supports HQoS on MICs only, not on the four integrated
>> 10GE ports on the PMC, i.e. no HQoS support on the 10GE uplinks (a config
>> sketch of HQoS on a MIC port follows below this list).
>> 
>> 6. The total amount of traffic that can be handled via HQoS is restricted
>> to 24Gbps: not all traffic flows can be shaped/policed via HQoS, due to a
>> throughput restriction between the MQ and the QX. Note that the MQ can
>> still, however, perform basic port-based policing/shaping on any flow.
>> HQoS support on the 4 installed MICs can only be enabled via a separate
>> license. There is a total of 128k queues on the chassis.
>> 
>> 7. 1588 TC is not supported across the chassis, as the current set of
>> MICs does not support edge timestamping. Edge timestamping is only
>> supported on the integrated 10G ports. The MX104 does not presently list
>> 1588 TC as supported.
>> 
>> 8. BFD can be supported natively in the TRIO chipset. On the MX104, it is
>> not supported in hardware today; BFD is run from the single-core P2020
>> MPC.
>> 
>> 9. TRIO-based cards do not presently support PBB; thus it is presently
>> not supported on the MX104. PBB is only supported on older EZchip-based
>> MX hardware. Juniper still needs a business case to push this forward.
>> 
>> 10. MX104 operating temperature: -40 to 65C, while the MX5, MX10, MX40,
>> MX80 and MX80-48T are all 0 to 40C, and all are TRIO based. It seems odd
>> that the MX104 would support a different temperature range. There are
>> only 3 temperature-hardened MICs for this chassis on the datasheet:
>> (1) 16 x T1/E1 with CE, (2) 4 x chOC3/STM1 & 1 x chOC12/STM4 with CE,
>> (3) 20 x 10/100/1000 Base-T.
>> 
>> 11. Airflow is side-to-side; there is no option for front-to-back cooling
>> with this chassis.
>> 
>> 12. The Routing Engine and MPC lack a built-in Ethernet sync port. If the
>> chassis is deployed without any GE ports, getting SyncE or 1588 out of
>> the chassis via an Ethernet port will be a problem. The SR-a4/-a8 have a
>> built-in sync connector on the CPM to serve this purpose explicitly.
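
On points 5 and 6, for what it's worth, HQoS on a MIC port is the usual
hierarchical-scheduler plus traffic-control-profile arrangement.  A minimal
sketch, assuming the HQoS license is in place (the interface, profile and
scheduler-map names are made up, and the scheduler-map definition is left
out):

    [edit]
    interfaces {
        ge-1/0/0 {
            hierarchical-scheduler;        # enable per-unit HQoS on the MIC port
            vlan-tagging;
            unit 100 {
                vlan-id 100;
                family inet {
                    address 203.0.113.1/30;
                }
            }
        }
    }
    class-of-service {
        traffic-control-profiles {
            TCP-CUST-A {                   # hypothetical per-customer profile
                shaping-rate 50m;          # shape this unit to 50 Mbps
                scheduler-map SM-DEFAULT;  # hypothetical map, defined elsewhere
            }
        }
        interfaces {
            ge-1/0/0 {
                unit 100 {
                    output-traffic-control-profile TCP-CUST-A;
                }
            }
        }
    }

The 24Gbps MQ-to-QX restriction would then cap the sum of whatever you hang
off profiles like this; plain port-level shaping on the 10GE uplinks stays on
the MQ and is not affected.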

_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
