Sorry: Solaris VALUE_IN_SECONDS -> VALUE_IN_MILLISECONDS

Rainer Jung wrote:
> Hi Filip,
> 
> that's one of the not-so-nice things about Linux. As far as I know it's
> not configurable on standard Linux. Kernel patches exist for this, and
> there is an IP filter module that lets you do it, but some say that
> module is very bad for IP performance (and high performance would be
> the major reason to decrease the TIME_WAIT interval in the first place).
> 
> It's shrinkable on Solaris (ndd -set /dev/tcp tcp_time_wait_interval
> VALUE_IN_SECONDS), but even there the thread that cleans up the tables
> runs only every 5 seconds.
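> 
> For example, something like this (an illustrative value; per the
> correction above, the value is in milliseconds, so 30000 = 30 seconds):
> 
>   # read the current interval
>   ndd /dev/tcp tcp_time_wait_interval
>   # shrink it to 30 seconds
>   ndd -set /dev/tcp tcp_time_wait_interval 30000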
> 
> Concerning the one-request-over-one-connection case: I have often
> observed strange behaviour (unclean shutdown) from ab concerning the
> last request in a connection. I never analysed it, though. If you can
> easily reproduce the "one request over one connection is slow" problem
> without high load, you might want to run tcpdump to check whether it's
> really slow on the server side.
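> 
> For example, something like this (assuming the test runs against the
> loopback interface; adjust the interface and port to your setup):
> 
>   tcpdump -i lo -s 0 -w onereq.pcap tcp port $PORT
> 
> Then compare the packet timestamps around the slow request.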
> 
> Just my 0.9 cents ...
> 
> Rainer
> 
>> Filip Hanik - Dev Lists wrote:
>> That's some very good info; it looks like my system never goes above
>> 30k, and cleaning them up seems to be working really well.
>> Btw, do you know where I change the cleanup intervals for the Linux 2.6 kernel?
>>
>> I figured out what the problem was:
>> Somewhere I have a lock/wait problem
>>
>> For example, this runs perfectly:
>> ./ab -n 1 -c 100 http://localhost:$PORT/run.jsp?run=TEST$i
>>
>> If I change -c 100 (100 sockets) to -c 1, each JSP request takes 1 second.
>>
>> so what was happening in my test was: running 1000 requests over 400
>> connections, then invoking 1 request over 1 connection, and repeat.
>> Every time I did the single-connection request, it hit a 1 second delay,
>> and this caused the CPU to drop.
>>
>> So basically, the NIO connector sucks majorly if you are a single user
>> :). I'll trace this one down.
>> Filip
>>
>>
>> Rainer Jung wrote:
>>> Hi Filip,
>>>
>>> the fluctuation reminds me of something: depending on the client
>>> behaviour, connections will end up in the TIME_WAIT state. Usually you
>>> run into trouble (throughput stalls) once you have around 30K of them.
>>> They will be cleaned up every now and then by the kernel (talking about
>>> the unix/Linux style mechanisms), and then throughput (and CPU usage)
>>> pick up again.
>>>
>>> With modern systems handling 10-20k requests per second, one can run
>>> into trouble much faster than the usual cleanup intervals.
>>>
>>> Check with "netstat -an" whether you can see a lot of TIME_WAIT
>>> connections (thousands). If not, it's something different :(
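>>>
>>> For example, to count them directly:
>>>
>>>   netstat -an | grep -c TIME_WAIT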
>>>
>>> Regards,
>>>
>>> Rainer
>>>
>>> Filip Hanik - Dev Lists wrote:
>>>  
>>>> Remy Maucherat wrote:
>>>>    
>>>>> [EMAIL PROTECTED] wrote:
>>>>>      
>>>>>> Author: fhanik
>>>>>> Date: Wed Oct 25 15:11:10 2006
>>>>>> New Revision: 467787
>>>>>>
>>>>>> URL: http://svn.apache.org/viewvc?view=rev&rev=467787
>>>>>> Log:
>>>>>> Documented socket properties
>>>>>> Added in the ability to cache bytebuffers based on number of channels
>>>>>> or number of bytes
>>>>>> Added in nonGC poller events to lower CPU usage during high traffic
>>>>>>         
>>>>> I'm starting to get emails again, so sorry for not replying.
>>>>>
>>>>> I am testing with the default VM settings, which basically means that
>>>>> excessive GC will have a very visible impact. I am testing to
>>>>> optimize, not to see which connector would be faster in the real world
>>>>> (probably neither unless testing scalability), so I think it's
>>>>> reasonable.
>>>>>
>>>>> This fixes the paranormal behavior I was seeing on Windows, so the NIO
>>>>> connector works properly now. Great! However, I still have NIO slower
>>>>> than java.io, which is slower than APR. It's ok if some solutions are
>>>>> better than others on certain platforms, of course.
>>>>>
>>>>>       
>>>> thanks for the feedback, I'm testing with larger files now, 100k+, and
>>>> also see APR -> JIO -> NIO.
>>>> NIO has a very funny CPU telemetry graph: it fluctuates way too much, so
>>>> I have to find where in the code this happens; there is still some work
>>>> to do.
>>>> I'd like to see nearly flat CPU usage when running my test, but instead
>>>> the CPU goes from 20% to 80%, up and down, up and down.
>>>>
>>>> during my test
>>>> (for i in $(seq 1 100); do echo -n "$i."; ./ab -n 1000 -c 400
>>>> http://localhost:$PORT/104k.jpg 2>&1 | grep "Requests per"; done)
>>>>
>>>> my memory usage goes up to 40MB, then after a full GC it goes down to
>>>> 10MB again, so I want to figure out where that comes from as well. My
>>>> guess is that all that data is actually in the java.net.Socket classes,
>>>> as I am seeing the same results with the JIO connector, but not with
>>>> APR (because APR allocates memory using pools).
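>>>>
>>>> To see what is actually being collected, it might help to rerun the
>>>> test with GC logging on, e.g. (assuming a Sun JVM and the standard
>>>> catalina.sh startup script):
>>>>
>>>>   JAVA_OPTS="-verbose:gc -XX:+PrintGCDetails" ./bin/catalina.sh run
>>>>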
>>>> Btw, I had to put the byte[] buffer back into
>>>> InternalNioOutputBuffer.java; ByteBuffers are way too slow.
>>>>
>>>> With APR, I think the connections might be lingering too long, as
>>>> eventually, during my test, it stops accepting connections, usually
>>>> around the 89th iteration of the test.
>>>> I'm gonna keep working on this for a bit, as I think I am getting to a
>>>> point with the NIO connector where it is a viable alternative.
>>>>
>>>> Filip
>>>>
