[
https://issues.apache.org/jira/browse/DIRMINA-681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12839493#action_12839493
]
Mauritz Lovgren commented on DIRMINA-681:
-----------------------------------------
The changes in trunk over the past few weeks have produced different performance
results in my tests. With revisions from about a week ago, the load tests halt
after a few minutes of traffic and CPU rises to 100%, making the entire host
unresponsive. I noticed more fixes to the NIO processing code late last week,
but I have not had time to test further. Going for a new run early this
week :-). I am also porting my communications framework to the latest Netty
version to see whether it exhibits the same performance problems.
> Strange CPU peak occurring at fixed interval when several thousand connections
> active
> ------------------------------------------------------------------------------------
>
> Key: DIRMINA-681
> URL: https://issues.apache.org/jira/browse/DIRMINA-681
> Project: MINA
> Issue Type: Task
> Components: Core
> Affects Versions: 2.0.0-M4, 2.0.0-RC1
> Environment: Windows Vista Ultimate 64-bit (on 64-bit Sun JDK
> 1.6.0_18). Intel Core 2 Quad Q9300 2.5 GHz, 8 GB RAM
> Reporter: Mauritz Lovgren
> Fix For: 2.0.0
>
> Attachments: screenshot-1.jpg, screenshot-2.jpg, screenshot-3.jpg,
> screenshot-4.jpg
>
>
> Observing strange CPU activity occurring at a regular (seemingly fixed)
> interval with no protocol traffic activity.
> See the attached task manager capture, which shows this with 3000 active
> connections.
> Is there some kind of cleanup occurring within MINA core at a predefined
> interval?
> The 3000 connections in the example above connect within 250 seconds. In a
> normal situation these connections would be established over a longer period
> of time, which might also spread out the CPU peaks shown above, flattening
> the curve.
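That last point can be illustrated with a small simulation. This is a hypothetical model, not confirmed MINA internals: assume each session runs a periodic maintenance tick (e.g. an idle check) phased by the second it connected. Connections established in a tight burst then fire together every period, while connections spread over a longer window flatten the per-second work:

```java
public class PeakSimulation {
    // Hypothetical model: each session performs one maintenance tick
    // every `period` seconds, phased by the second it connected.
    // Returns the maximum number of ticks landing in any single second.
    static int peakWorkPerSecond(int sessions, int connectWindow,
                                 int period, int horizon) {
        int[] work = new int[horizon];
        for (int s = 0; s < sessions; s++) {
            // Connects spread evenly across the connect window.
            int connect = (int) ((long) s * connectWindow / sessions);
            for (int t = connect; t < horizon; t += period) {
                work[t]++;
            }
        }
        int peak = 0;
        for (int w : work) {
            peak = Math.max(peak, w);
        }
        return peak;
    }

    public static void main(String[] args) {
        // 3000 connections in a 10 s burst vs. spread over 250 s,
        // with a 60 s tick period, observed over 600 s.
        int burstPeak = peakWorkPerSecond(3000, 10, 60, 600);
        int spreadPeak = peakWorkPerSecond(3000, 250, 60, 600);
        System.out.println("burst peak/s = " + burstPeak
                + ", spread peak/s = " + spreadPeak);
    }
}
```

Under these made-up numbers the burst case concentrates all ticks into a few seconds per period, while spreading the connects over 250 s distributes the same total work almost uniformly, which matches the flattening the reporter expects.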