I feel compelled to share this link:

http://www.snookles.com/slf-blog/2012/01/05/tcp-incast-what-is-it/

Just in case you haven't seen it or looked into it more deeply.
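
If it might apply here, the pattern is easy to reproduce: one client issues
barrier-synchronized block reads to N servers behind the same switch and
waits for all of them. A rough stdlib-Python sketch of that traffic
generator (the port, block size, and host list are illustrative assumptions,
not anything from the article):

# Sketch: generate the many-to-one, barrier-synchronized read pattern
# that triggers TCP incast. Run serve() on several hosts behind one
# switch, then incast_round() from a client on the far side of it.
# PORT, BLOCK, and the host list are illustrative assumptions.
import socket
import threading
import time

PORT = 9000            # hypothetical port
BLOCK = 256 * 1024     # bytes returned per server per round

def serve():
    # Answer any 1-byte request with BLOCK bytes.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    s.listen(64)
    while True:
        conn, _ = s.accept()
        threading.Thread(target=_handle, args=(conn,), daemon=True).start()

def _handle(conn):
    while conn.recv(1):
        conn.sendall(b"\0" * BLOCK)

def _fetch(host, barrier):
    c = socket.create_connection((host, PORT))
    barrier.wait()                      # all requests leave together
    c.sendall(b"r")
    got = 0
    while got < BLOCK:
        got += len(c.recv(65536))
    c.close()

def incast_round(hosts):
    # Time one synchronized fan-in; aggregate throughput collapses once
    # len(hosts) exceeds what the bottleneck switch buffer can absorb.
    barrier = threading.Barrier(len(hosts))
    ts = [threading.Thread(target=_fetch, args=(h, barrier)) for h in hosts]
    t0 = time.monotonic()
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    dt = time.monotonic() - t0
    print("%d servers: %.1f ms, %.1f MB/s aggregate"
          % (len(hosts), dt * 1e3, len(hosts) * BLOCK / dt / 1e6))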

Jared Mauch

On Jan 9, 2012, at 8:13 PM, Morgan McLean <wrx...@gmail.com> wrote:

> We're running over a terabyte in membase, but that's beside the point.
> Question still stands :)
> 
> Morgan
> 
> On Mon, Jan 9, 2012 at 4:56 PM, Joel jaeggli <joe...@bogus.com> wrote:
> 
>> On 1/9/12 16:28, Morgan McLean wrote:
>>> Yes, we are using it for security purposes. Why would I spend so much
>>> money on a box that is so limited in throughput due to its various fw
>>> inspection overheads?
>>> 
>>> I am running two 3600's that connect via 10GE to a couple of core
>>> EX8208 switches, which then multihome down to top-of-rack switches. The
>>> 3600's are using a reth group to manage which 10GE connection has the
>>> gateway addresses.
>>> 
>>> The firewalls are barely loaded: under 6,000 sessions with a very slow
>>> ramp rate, not a whole lot of policies, not a whole lot of address book
>>> entries (under 100?), and some OSPF with fewer than 130 routes. This
>>> also happens, for example, between two zones whose policy is any/any.
>>> 
>>> The interface peaks at around a gigabit per second, at anywhere from
>>> 75k to 100k pps. This box is in no way loaded. Personally I think the
>>> caching issues my boss mentioned are related to something else, and I
>>> think 0.5ms isn't so unreasonable, but I'm being pressed as to why it's
>>> so much higher. The application is a replicating cache system based
>>> around memcached.
>> 
>> Given that I've seen memcache replication occur over significantly
>> longer distances, I'd pretty much not identify latency as the
>> first-order culprit. repcached is asynchronous, and it tends to ramp
>> quite quickly if you've got a big membase replicating into an empty
>> bucket.
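>>
>> If you want to see what replication is actually doing, one quick probe
>> is to set a key on one node and poll its peer until the key shows up.
>> A rough sketch speaking the plain memcached text protocol; the two host
>> names are placeholders, and it assumes the text protocol is reachable
>> on 11211:
>>
>> # Rough replication-lag probe: set a key on one node, poll the peer
>> # until it appears. Host names below are placeholders.
>> import socket
>> import time
>>
>> A = socket.create_connection(("cache-a.example", 11211))
>> B = socket.create_connection(("cache-b.example", 11211))
>>
>> def cmd(sock, line, payload=None):
>>     # One memcached text-protocol exchange: command line, optional
>>     # data block, then read the reply.
>>     sock.sendall(line + b"\r\n" + (payload + b"\r\n" if payload else b""))
>>     return sock.recv(4096)
>>
>> key, val = b"repl-probe", b"x"
>> t0 = time.monotonic()
>> cmd(A, b"set %s 0 60 %d" % (key, len(val)), val)   # expect STORED
>> while b"VALUE" not in cmd(B, b"get " + key):       # bare END until replicated
>>     time.sleep(0.001)
>> print("replication lag: %.1f ms" % ((time.monotonic() - t0) * 1e3))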
>> 
>>> I don't think any ALG could possibly be applied to this, but I'll double
>>> check.
>>> 
>>> Thanks,
>>> Morgan
>>> 
>>> On Mon, Jan 9, 2012 at 3:48 PM, Phil Mayers <p.may...@imperial.ac.uk>
>>> wrote:
>>> 
>>>> On 01/09/2012 11:23 PM, Morgan McLean wrote:
>>>> 
>>>>> It's an SRX3600 cluster, with no traffic traversing the fabric
>>>>> connection, so it's all being contained on one chassis. These are just
>>>>> standard ICMP packets between two Linux hosts on different subnets.
>>>>> 
>>>> 
>>>> I assume you are using these as a firewall, not just as a "convenient"
>>>> JunOS router?
>>>> 
>>>> What is the security topology? How many policies and of what type do
>>>> you have? What's the background load in terms of bits/sec, packets/sec,
>>>> session ramp rate, etc.? What are the interface speeds?
>>>> 
>>>> This is a complex question to answer in general. To give some
>>>> comparative data: we have NetScreen 5400s with M2 10G cards, hundreds
>>>> of policies, tens of thousands of address book entries, full BGP
>>>> routing with ~1000 routing entries, session counts of ~20k, and a ramp
>>>> rate of ~15k/minute.
>>>> 
>>>> Through these firewalls, we incur an extra ~200usec on a ping round trip
>>>> time.
>>>> 
>>>> So yes, I would say that going from 0.1msec (100usec) to 0.5msec
>>>> (500usec) is about the right order for a fast gig/ten-gig firewall
>>>> with moderately complex config and load. Obviously the SRX 3600 and
>>>> NS 5400 are different beasts.
>>>> 
>>>> Frankly, if your demands are such that you can't tolerate 400usec of
>>>> incurred latency, you possibly shouldn't be running it through a
>>>> security device. What kind of "caching application" is this?
>>>> 
>>>> Are you sure the latency you're measuring with a ping is the same
>>>> latency your application is incurring? Are you sure an ALG isn't
>>>> activating for your traffic? Perhaps try creating a policy to match
>>>> the traffic and explicitly disable the "application" / ALG.
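>>>>
>>>> As a concrete check, timing real memcached round trips from the client
>>>> host shows what the application sees, independent of ICMP. A rough
>>>> sketch (server name, port, and key are placeholders):
>>>>
>>>> # Rough check: time memcached GETs through the firewall and compare
>>>> # the median against the ICMP ping RTT. Host/key are placeholders.
>>>> import socket
>>>> import statistics
>>>> import time
>>>>
>>>> s = socket.create_connection(("memcache-host.example", 11211))
>>>> samples = []
>>>> for _ in range(1000):
>>>>     t0 = time.monotonic()
>>>>     s.sendall(b"get probe-key\r\n")    # reply is just END\r\n if absent
>>>>     while not s.recv(4096).endswith(b"END\r\n"):
>>>>         pass                           # drain until end of reply
>>>>     samples.append(time.monotonic() - t0)
>>>> print("median app RTT: %.0f usec" % (statistics.median(samples) * 1e6))
>>>>
>>>> For the ALG side, something along these lines on SRX matches the
>>>> traffic with a custom application that has no ALG attached (zone names
>>>> and the memcached port are assumptions on my part):
>>>>
>>>> set applications application memcache-tcp protocol tcp
>>>> set applications application memcache-tcp destination-port 11211
>>>> set security policies from-zone trust to-zone untrust policy no-alg-cache match source-address any
>>>> set security policies from-zone trust to-zone untrust policy no-alg-cache match destination-address any
>>>> set security policies from-zone trust to-zone untrust policy no-alg-cache match application memcache-tcp
>>>> set security policies from-zone trust to-zone untrust policy no-alg-cache then permit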
>>>> 
>>>> Cheers,
>>>> Phil
>>>> 
>> 
>> 
_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
