Re: [j-nsp] Juniper vMX limitations
So what's the use case? Run this as the local RR? Or manage routes between tenants in compute?

Sent from my iPhone

On Jul 29, 2015, at 9:44 AM, David Blundell <david.blund...@100percentit.com> wrote:

> Has anyone testing the vMX software found its RIB/FIB/L3VPN limitations?
> The Juniper datasheet at
> http://www.juniper.net/assets/us/en/local/pdf/datasheets/1000522-en.pdf
> says the parts VMX-100M to VMX-500M include "all features in full scale",
> which I take to mean they can handle as many routes as will fit in RAM.
> This contradicts other numbers I've heard, which gave a limit of 128K
> RIB/FIB and 50 L3VPN instances.
>
> Thanks,
> David

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
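One empirical way to settle the datasheet-vs-rumour question would be to load routes into a vMX lab instance and watch the counters as the table grows. A sketch using standard Junos operational commands (the `vmx` hostname is a placeholder):

```
user@vmx> show route summary
user@vmx> show route forwarding-table summary
user@vmx> show system processes extensive | match rpd
```

`show route summary` gives per-table RIB counts, `show route forwarding-table summary` shows what actually made it into the FIB, and watching rpd's memory footprint as routes are injected would reveal whether the practical ceiling is RAM or an enforced limit.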
Re: [j-nsp] R: Re: QinQ interface configuration question
I've seen this sort of thing done on other platforms. The most you can expect is for it to put the more specific match up front ( unit 1 { outer 555 inner 200 } ) followed by the less specific ones towards the end ( unit 20 { outer 555 } ). But I'm guessing here; I've not seen this done, and in principle it sounds like what you want to do is impossible.

Sent from my iPhone

On Jul 20, 2014, at 9:31 PM, dim0sal <dim0...@hotmail.com> wrote:

> Yes, this is the point why it doesn't work. Tks
>
> Sent with Mobile
>
> Original message
> From: Edward Dore <edward.d...@freethought-internet.co.uk>
> To: dim0sal <dim0...@hotmail.com>
> Cc: sth...@nethelp.no, juniper-nsp@puck.nether.net
> Subject: Re: [j-nsp] QinQ interface configuration question
>
>> How does the device know which unit on the physical interface an ingress
>> packet belongs to, though, when you're defining the same VLAN tags on
>> each unit?
>>
>> Edward Dore
>> Freethought Internet
>>
>> On 20 Jul 2014, at 19:47, dim0sal <dim0...@hotmail.com> wrote:
>>
>>> Didn't catch your point. Inside the routing-instances configuration you
>>> have to declare which interfaces belong to it... right?
>>>
>>> Sent with Mobile
>>>
>>> Original message
>>> From: sth...@nethelp.no
>>> To: dim0...@hotmail.com
>>> Cc: juniper-nsp@puck.nether.net
>>> Subject: Re: R: Re: R: Re: [j-nsp] QinQ interface configuration question
>>>
>>>>> What about scenario 4) in this way:
>>>>>
>>>>> interface ge-0/0/0.1 vlan-tags outer a inner a
>>>>>
>>>>> and
>>>>>
>>>>> interface ge-0/0/0.2 vlan-tags outer a inner a
>>>>>
>>>>> This is what we're talking about. This doesn't work despite the
>>>>> different routing-instances, right?
>>>>
>>>> Obviously it doesn't work. And I'm afraid I don't understand what you
>>>> are trying to achieve here. With different routing-instances, *what
>>>> exactly* is supposed to determine which routing-instance a packet
>>>> belongs to?
Steinar Haug, Nethelp consulting, sth...@nethelp.no
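For what it's worth, the workable variant of this layout (distinct tag pairs per unit, most specific first, as opposed to the duplicate pairs that rightly fail to commit) might look something like the sketch below on a Junos box. Unit numbers and VLAN values are borrowed from the thread; the `inner-range` catch-all syntax is an assumption and its availability varies by platform and release:

```
interfaces {
    ge-0/0/0 {
        flexible-vlan-tagging;
        encapsulation flexible-ethernet-services;
        unit 1 {
            encapsulation vlan-bridge;
            vlan-tags outer 555 inner 200;          /* exact outer+inner pair */
        }
        unit 20 {
            encapsulation vlan-bridge;
            vlan-tags outer 555 inner-range 201-4094;  /* remaining inner tags under outer 555 */
        }
    }
}
```

The key point from the thread stands: the classifier needs each unit's tag match to be unique, because the tags are the only thing that selects the unit (and hence the routing-instance) for an ingress frame.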
Re: [j-nsp] qfabric 1536k mac address?
If the QFabric controllers can do anything, it's scale up the number of MACs that can be learned across the fabric. Their solution was novel before SDN, and I guess it's to their credit that they are not trying to market QFabric that way now ;)

rgds,
--r

On Jan 2, 2014, at 9:10 AM, Eugeniu Patrascu <eu...@imacandi.net> wrote:

> On Thu, Jan 2, 2014 at 4:20 PM, giovanni rana <superburri...@hotmail.com> wrote:
>
>> Even in the case you mentioned, the node shall be able to keep a table
>> with an index made of 1536k entries. I can understand that some memory
>> can be saved by using a VPLS-style approach, but if I've got 1536k VMs
>> with unique MAC addresses, am I still able to manage them via QFabric?
>> The public docs do not clarify this aspect enough. Thanks for your
>> answer!
>
> You do realise that you are talking about approximately 1,500,000 MAC
> addresses in your network in a flat any-to-any topology, and not about
> 1,500 or 15,000 MAC addresses?
>
> This is how QFabric does MAC learning and forwarding:
> http://blog.ipspace.net/2011/09/qfabric-part-3-forwarding.html
>
> Regards,
> Eugeniu
Re: [j-nsp] vpls loop avoidance
Wouldn't it be nice if we had an option to turn on SPB or TRILL inside the VPLS service?

rgds,
--r

On Oct 12, 2011, at 8:05 AM, Chuck Anderson wrote:

> On Tue, Oct 11, 2011 at 04:14:53PM -0400, Keegan Holley wrote:
>
>> STP doesn't seem to work here. Only Cisco PVST, which doesn't help me on
>> an EX4200.
>
> PVST+ is supported on EX4200.
>
> edit protocols vstp
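Expanding on Chuck's pointer, per-VLAN spanning tree (interoperable with Cisco PVST+/Rapid-PVST+) lives under [edit protocols vstp] on the EX4200. A minimal sketch, with the VLAN ID, priority, and interface as placeholders (exact interface placement varies slightly by release):

```
protocols {
    vstp {
        vlan 100 {
            bridge-priority 8k;      /* lower value wins the root election */
            interface ge-0/0/0;      /* trunk port participating for this VLAN */
        }
    }
}
```

Each VLAN listed under vstp runs its own spanning-tree instance, which is what lets it interoperate with the per-VLAN BPDUs a Cisco PVST+ neighbor sends.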
Re: [j-nsp] MTU Compression
The motivation for this requirement is not all that clear, and I suspect it may have some bearing on a solution. Is the backbone requirement because you're forwarding on MPLS headers? If so, I do not think that any of the MSS tweaks are going to help you. Unfortunately, your third-party circuit leaves you very little wiggle room; you might just be out of luck on that path. Conversely, if the requirement is coming from the application side, you might consider a WAN Optimization Controller. It's not a straightforward payload-compression thing, but I think they can operate in any point-to-point Ethernet environment and effectively proxy the TCP sessions locally. That might get you by.

good luck

rgds,
--r

On Jul 7, 2010, at 1:59 AM, Humair Ali wrote:

> Hi All,
>
> MX480, Junos 9.6R3
>
> We are experiencing an MTU issue on one of our circuits. We have been
> provided a circuit by a 3rd-party provider that only supports an MTU of
> 1518; unfortunately it seems they cannot provide jumbo frames. However,
> for our backbone we are required to have jumbo frames supported.
>
> My question is: is there any such option as MTU compression available?
> Something that would allow a packet of 9192 bytes coming into our MX480
> to be compressed to 1518 across the circuit provider link, then
> decompressed on our MX480 on the other end? I have been looking in the
> docs but couldn't find anything as such.
>
> Your help is much appreciated. If someone has come across something
> similar and found a workaround, please share.
>
> Thanks
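As a practical first step either way, it's worth confirming exactly what the third-party circuit passes before designing around it. The standard Junos ping options below (destination address is a placeholder) will find the largest payload that crosses unfragmented; a 1518-byte Ethernet frame corresponds to a 1500-byte IP packet, i.e. a 1472-byte ICMP payload:

```
user@mx480> ping 192.0.2.1 size 1472 do-not-fragment rapid count 5
```

If 1472 succeeds, step the size up (binary search works well) until pings start failing; the last size that succeeds, plus 28 bytes of IP/ICMP header, is the real path MTU the provider delivers, which is sometimes a little higher than the quoted figure.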