On 17/06/12 08:14, Saku Ytti wrote:
This is true for branch (multicore Cavium Octeon), but not for
1k/3k/5k. And there is nowhere to offload to in branch.
This is interesting. I was not aware of the EZchip offload in the
high-end stuff.
Can anyone comment on the expected (or observed)
18.06.2012 14:47, Phil Mayers wrote:
This is interesting. I was not aware of the EZchip offload in the
high-end stuff.
According to the docs, it seems the only advantage is reduced latency:
18.06.2012 15:22, Pavel Lunin wrote:
easily pass a single 10G interface (and, consequently, NPC).
NPU, to be precise.
_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
On 18/06/12 12:22, Pavel Lunin wrote:
hops. But I really wonder how (and if) it affects the new-session setup rate
and concurrent session scaling. I really doubt a single EZchip is
capable of handling the state of millions of sessions, all of which can
easily pass through a single 10G interface (and,
18.06.2012 16:31, Phil Mayers wrote:
However, the docs suggest that it might be possible to enable offload
on a per-policy basis.
This might be good for certain latency-sensitive flows, if true.
Yep, as I understand it, it's only per-policy based (please correct me if I'm
wrong), which is, I
Folks,
Here's my situation.
I have an MX5 dual-homed to two sets of SRX240s in a cluster, with
the intention of running OSPF + BGP between them.
The links are straight point to points with no switch in between.
OSPF between the two works all right, as does static routing. BGPv4 and BGPv6
Forgot to mention another important bit.
There is no filter on the MX, and the SRX is configured to allow all inbound
services and protocols.
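For reference, "allow all inbound services and protocols" on the SRX side would look roughly like the following sketch (zone and interface names are assumptions, not taken from the original post):

```
security {
    zones {
        security-zone trust {
            host-inbound-traffic {
                system-services {
                    all;
                }
                protocols {
                    all;
                }
            }
            interfaces {
                ge-0/0/0.0;
            }
        }
    }
}
```

With `protocols all`, OSPF and BGP packets destined to the SRX itself should not be dropped by host-inbound-traffic filtering, so a session failure would point elsewhere.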
ihsan
On Jun 19, 2012, at 1:10 AM, Ihsan Junaidi Ibrahim wrote:
Folks,
Here's my situation.
I have an MX5 dual-homed to two sets of
Pavel Lunin plu...@senetsy.ru writes:
To be honest, by now it looks like a feature for a particular customer.
To me it seems like a feature for every customer... Without offloading,
how would you do stateful firewalling at 10Gbps+?
To put it another way, is there a hardware/software
Has anyone measured the actual power consumption of a J2320 with a
watt meter and have the results handy?
I'm looking at sticking one into co-lo, and Juniper's documentation
says, the way I'm reading it, 1.3 amps at 240 V. But in my
experience gear tends to use a lot less than the manufacturer
2012/6/19 Benny Amorsen benny+use...@amorsen.dk
To be honest, by now it looks like a feature for a particular customer.
To me it seems like a feature for every customer... Without offloading,
how would you do stateful firewalling at 10Gbps+?
Em… isn't 10G+ possible on SRX HE without
I have a pair of J2350s running about 200 Mbit/s of through traffic each, and
together they use about 1 A at 120 V (judging from my power strip). Hope it
helps your decision.
Cheers.
On Mon, Jun 18, 2012 at 2:42 PM, Tom Storey t...@snnap.net wrote:
Has anyone measured the actual power
By the way, I don't have any FPIC / expansion card installed.
On Mon, Jun 18, 2012 at 3:22 PM, Yucong Sun (叶雨飞) sunyuc...@gmail.comwrote:
I have a pair of J2350s running about 200 Mbit/s of through traffic each, and
together they use about 1 A at 120 V (judging from my power strip). Hope
it will
Thanks!
So that would be about 500 mA at 240 V for the pair by my reckoning, or about
250 mA each? And if that's for a 2350, one would imagine a 2320 would be the
same or less.
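A quick sanity check of that reckoning (assuming the power-strip reading of roughly 1 A total at 120 V for the pair is accurate, and ignoring power-factor effects):

```python
# Back-of-the-envelope conversion for the J2350 pair.
# Assumption: ~1 A total draw at 120 V, as read from the power strip.
volts_measured = 120.0
amps_pair = 1.0
watts_pair = volts_measured * amps_pair   # ~120 W for the pair

# The same power drawn from a 240 V feed implies half the current.
amps_pair_240 = watts_pair / 240.0        # 0.5 A for the pair
amps_each_240 = amps_pair_240 / 2         # 0.25 A per unit

print(amps_pair_240, amps_each_240)       # 0.5 0.25
```

So ~500 mA for the pair and ~250 mA each at 240 V checks out, well under the 1.3 A per unit that the documentation quotes.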
I also would have no modules, just using base ports.
On Monday, June 18, 2012, Yucong Sun (叶雨飞) wrote:
I have a pair of
Pavel Lunin plu...@senetsy.ru writes:
Em… isn't 10G+ possible on SRX HE without offloading?
I don't know, that is part of what I am trying to find out :)
/Benny
Tom,
On 18/06/2012 23:42, Tom Storey wrote:
If anyone has a J2320 and a watt meter handy I would greatly
appreciate if you would be able to take a reading for me (powered up
and sitting idle after full boot.)
Mine are routing a small amount of traffic and my PDU is telling me
about 0.35 A each.