Re: [j-nsp] ACX5448 & ACX710

2020-01-23 Thread quinn snyder
That would be something like the NCS540.
It's not an apples-to-apples comparison, as the 540 runs XR and the usual 
things that come with that.  There were some threads about it on [c-nsp] that 
might be worth exploring.  I feel it's a bit heavyweight for the metro, but 
it gives you environmentally hardened 10GE with a variety of uplink options.

q.

—
Quinn Snyder | snyd...@gmail.com 

-= Sent via iPad.  Please excuse grammar, spelling, and brevity =-

> On Jan 23, 2020, at 14:03, Colton Conor  wrote:
> 
> What is Cisco's upgrade path from the ASR920 if you need more 10G ports?
> 
>> On Thu, Jan 23, 2020 at 2:52 PM Mark Tinka  wrote:
>> 
>> 
>> 
>>> On 23/Jan/20 16:00, Shamen Snyder wrote:
>>> 
>>> I have been following the ACX 710 for a while now. We have a use case
>>> in rural markets where we need a dense 10G hardened 1 RU box.
>>> 
>>> Looks like a promising box; hope the price is right. If not, we may
>>> have to jump to Cisco ASR920s.
>> 
>> If I'm honest, what I've noticed with most traditional vendors selling
>> Broadcom-based boxes is that they tout "price" as the killer use-case
>> for those boxes. For my part, I'm not unwilling to spend a little more if
>> I can sleep at night knowing I have data-plane parity between a
>> Broadcom-based box and an in-house-silicon box from the same traditional
>> vendor.
>> 
>> But time and time again, almost like clockwork, Broadcom-based boxes are
>> being marketed as "Multi-Gigabit" and "Multi-Terabit" platforms with a
>> gazillion ports at half the price of the "normal" box. What good is all
>> that hardware if a simple feature doesn't work the way it did before I
>> went about "enhancing my network"?
>> 
>> 
>>> 
>>> 4 100/40G (can be channelized to 4x25G or 4x10G) interfaces, 24 1/10G
>>> interfaces. Broadcom QAX chipset. 320Gbps of throughput. 3GB buffer.
>> 
>> What I saw about the ACX710 is that it has a small FIB. Since we are used
>> to filtering what enters our ASR920 FIB (and the ACX710 has about 12.8
>> times that capacity), that's not a show-stopper.
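
For reference, FIB filtering on the Junos side is typically done with a
forwarding-table export policy; a minimal sketch, with the policy name and
prefixes purely illustrative:

    set policy-options policy-statement LIMIT-FIB term DEFAULT-ONLY from route-filter 0.0.0.0/0 exact
    set policy-options policy-statement LIMIT-FIB term DEFAULT-ONLY then accept
    set policy-options policy-statement LIMIT-FIB term INFRA from route-filter 192.0.2.0/24 orlonger
    set policy-options policy-statement LIMIT-FIB term INFRA then accept
    set policy-options policy-statement LIMIT-FIB then reject
    set routing-options forwarding-table export LIMIT-FIB

Routes rejected here stay in the RIB but never get pushed to the PFE, so
anything that must forward on them locally, rather than follow a default,
needs care.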
>> 
>> Mark.
>> 


Re: [j-nsp] RE-S-X6-64G-BB

2016-05-21 Thread quinn snyder


> On May 21, 2016, at 13:26, Saku Ytti <s...@ytti.fi> wrote:
> 
> All vendors are pimping this like it's something customers have been
> crying out for for ages. But who is actually planning to use their routers as
> general-purpose compute?

isn't the point of the hypervisor on the re/rp to support $os within a vm?
i know cisco is moving towards containers-in-vm packaging inside the ncs6000 
platform. i expect that, with hardware revisions, other platforms with 
high-impact core/edge applications will move this way as well. 

i imagine that it's somewhat easier to run two instances of the router code 
concurrently and quiesce the memory/state between them than to perform tedious 
state saving/manipulation in memory to support any semblance of issu on box 
(debates about issu notwithstanding). 
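
for reference, the junos incarnation of this today is the single-command issu 
flow, which assumes dual res with gres/nsr already in place. a rough sketch, 
package path illustrative:

    request system software in-service-upgrade /var/tmp/junos-install-<release>.tgz
    show version invoke-on all-routing-engines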

i'm not current on much of the juniper roadmap, so i'm not sure if this is a 
possibility for future code on the re. 

q. 

--
quinn snyder | snyd...@gmail.com

-= sent via iphone. please excuse spelling, grammar, and brevity =-


Re: [j-nsp] srx240 | frame-relay t1.606

2013-09-26 Thread quinn snyder
thanks, phil!
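
for the archives: on junos the frame-relay side comes down to the encapsulation 
and the lmi-type knob, where lmi-type ansi corresponds to the t1.617 annex d lmi 
and itu to q.933 annex a. a rough sketch, with interface name and dlci purely 
illustrative:

    set interfaces t1-1/0/0 encapsulation frame-relay
    set interfaces t1-1/0/0 lmi lmi-type ansi
    set interfaces t1-1/0/0 unit 0 dlci 100
    set interfaces t1-1/0/0 unit 0 family inet address 192.0.2.1/30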

q. 

-= sent via ipad. please excuse brevity, spelling, and grammar =-

On Sep 26, 2013, at 15:59, Phil Fagan philfa...@gmail.com wrote:

 I was actually referring to t1.102 and t1.107; however, t1.617 is the only 
 supported frame service I see on the SRX.
 
 
 On Thu, Sep 26, 2013 at 2:05 PM, quinn snyder snyd...@gmail.com wrote:
 just to be clear -- t1.606 is not supported. are you referring to t1.602 and 
 t1.607 as being supported?
 
 q. 
 
 -= sent via ipad. please excuse brevity, spelling, and grammar =-
 
 On Sep 26, 2013, at 13:18, Phil Fagan philfa...@gmail.com wrote:
 
 Looks like only 102 and 107; not 106.
 
 
 On Wed, Sep 25, 2013 at 4:50 PM, quinn snyder snyd...@gmail.com wrote:
 all --
 
 just a quick reachout.  trying to dig through docs and either missing the 
 boat or it doesn't exist.  either way…
 
 i need to know if the srx240 supports the frame-relay standard t1.606.  
 any pointers/links would be appreciated.
 
 thanks!
 
 q.
 --
 quinn snyder
 snyd...@gmail.com
 
 
 
 
 
 
 -- 
 Phil Fagan
 Denver, CO
 970-480-7618
 
 
 
 -- 
 Phil Fagan
 Denver, CO
 970-480-7618

Re: [j-nsp] srx240 | frame-relay t1.606

2013-09-26 Thread quinn snyder
just to be clear -- t1.606 is not supported. are you referring to t1.602 and 
t1.607 as being supported?

q. 

-= sent via ipad. please excuse brevity, spelling, and grammar =-

On Sep 26, 2013, at 13:18, Phil Fagan philfa...@gmail.com wrote:

 Looks like only 102 and 107; not 106.
 
 
 On Wed, Sep 25, 2013 at 4:50 PM, quinn snyder snyd...@gmail.com wrote:
 all --
 
 just a quick reachout.  trying to dig through docs and either missing the 
 boat or it doesn't exist.  either way…
 
 i need to know if the srx240 supports the frame-relay standard t1.606.  any 
 pointers/links would be appreciated.
 
 thanks!
 
 q.
 --
 quinn snyder
 snyd...@gmail.com
 
 
 
 
 
 
 -- 
 Phil Fagan
 Denver, CO
 970-480-7618

[j-nsp] srx240 | frame-relay t1.606

2013-09-25 Thread quinn snyder
all --

just a quick reachout.  trying to dig through docs and either missing the boat 
or it doesn't exist.  either way…

i need to know if the srx240 supports the frame-relay standard t1.606.  any 
pointers/links would be appreciated.

thanks!

q.
--
quinn snyder
snyd...@gmail.com





Re: [j-nsp] IRB Interface Question

2011-09-06 Thread quinn snyder
isn't the bandwidth used as a metric placeholder for $routing_protocol?
that's its significance in vendor 'c' land, at least.
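
fwiw, junos also lets you pin an informational bandwidth value on the logical 
unit; it feeds snmp and protocol metrics but doesn't police anything. a rough 
sketch, with unit and value illustrative, assuming the platform accepts it on 
irb units:

    set interfaces irb unit 911 bandwidth 4g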

q.

-= sent via iphone. please excuse spelling, grammar, and brevity =-

On Sep 6, 2011, at 17:15, Scott T. Cameron routeh...@gmail.com wrote:

 IRB is like RVI on Cisco.  It's a logical interface, and doesn't have a
 physical (bandwidth) limitation.

 I don't use an NMS so I can't speak to what you're seeing.  But I have 2x 1Gbps
 interfaces in LACP (ae1) plus 1x 10Gb bound to an IRB.  show int irb extensive
 shows only 1000 Mbps, but I think that's just a placeholder rather than having
 different show interface output.

 Scott

 On Tue, Sep 6, 2011 at 7:59 PM, Paul Stewart p...@paulstewart.org wrote:

 Hi there...

 Been searching for an answer on this - can't find it.

 On an MX box we have an IRB interface that is physically made up of 4x 1GE
 interfaces.  I noticed our NMS platform reports the IRB interface itself as
 1000mbps, and the CLI reports the same:

 Logical interface irb.911 (Index 97) (SNMP ifIndex 384)
   Description: x
   Flags: SNMP-Traps 0x4004000 Encapsulation: ENET2
   Bandwidth: 1000mbps
   Routing Instance: xx Bridging Domain: xxx

 I presume that the IRB has no actual bandwidth limitation and that the only
 limitation is the physical interfaces?  Can I set the bandwidth manually, or
 is it like this because the IRB has no real way of knowing what the bandwidth
 behind it is capable of?

 Thanks,

 Paul


Re: [j-nsp] VPLS scalability question.. OTV answer?

2011-03-27 Thread Quinn Snyder
otv requires a dedicated vdc (virtual device context) to run, plus however
many physical interfaces you need to interconnect the 'edge' and 'lan'
contexts, as there is no backplane interconnect between contexts.  oh,
and vdc requires the advanced license (~$30k list).  the maximum number of
vdcs per n7k is 4, since each vdc carves out cam, tcam, memory,
control-plane, etc. for its virtual context.
n7k-lic-adv + n*(sfp-10g + 10gbe interface) != free, especially when
hardware resources (t/cam) are factored in on busy boxen.
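
on the junos side of chris's question, the vt-versus-lsi choice ends up being a 
per-instance knob. a rough sketch of a bgp-vpls instance, with names, numbers, 
and the ce-facing interface purely illustrative, and trio inline tunnel-services 
assumed:

    set chassis fpc 1 pic 0 tunnel-services bandwidth 10g
    set routing-instances CUST-A instance-type vpls
    set routing-instances CUST-A interface ge-1/0/1.100
    set routing-instances CUST-A route-distinguisher 65000:100
    set routing-instances CUST-A vrf-target target:65000:100
    set routing-instances CUST-A protocols vpls site-range 10
    set routing-instances CUST-A protocols vpls site CE-A site-identifier 1
    set routing-instances CUST-A protocols vpls no-tunnel-services

the chassis line is one option (carving inline vt- capacity out of the trio pfe 
rather than burning a front-panel port); the no-tunnel-services line is the 
other (skipping vt- and letting the instance ride an lsi), so in practice you 
pick one or the other.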

my two bits.

q.

-= sent via iphone. please excuse spelling, grammar, and brevity =-

On Mar 27, 2011, at 15:59, Chris Evans chrisccnpsp...@gmail.com wrote:

 All the communication that we've received from Juniper is that they see
 MPLS and VPLS as their answer to Cisco's OTV. I've been researching VPLS
 on the Juniper platforms and I cannot find any definite information on
 how far it can scale performance/bandwidth-wise. VPLS requires either a VT
 interface or an LSI interface on the hardware. The VT interfaces can only be
 obtained on hardware that can do tunnel services, and the LSI interface is
 only on the MX platforms from what I can read.

 As tunnel PICs have limited performance, and LSI interfaces 'steal' physical
 10Gig interfaces on the 10Gig MX blades (I know they don't on the GigE blades),
 how does Juniper expect to provide high-bandwidth VPLS while
 still providing high port density? The TRIO cards have some inline services,
 but do they offer these services? It seems like Juniper is expecting to
 throw another half-baked solution out there to compete with Cisco, and I'm
 not sure how they're going to scale the infrastructure. The Cisco solution
 uses the built-in ASIC hardware to do this and does not require ports to be
 stolen, etc. It really bothers me that you have to lose interfaces and/or
 install special hardware to do inline services, which only increases the
 cost of the platforms drastically.

 Anyone have some insight?

 Thanks

 Chris