[jira] Commented: (DIRMINA-678) NioProcessor 100% CPU usage on Linux (epoll selector bug)

2010-02-09 Thread David Latorre (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12831361#action_12831361
 ] 

David Latorre commented on DIRMINA-678:
---

Hello Serge, 


I haven't paid much attention to this issue, but as you can see from the code 
comments, Emmanuel is willing to revert the change if necessary.

Still, not all of our users are going to upgrade to Sun JDK 1.6.0_18, so it 
would be interesting if you provided more details on the failures you are 
having, so that we can provide a solution for all the potential MINA users. 
This is, of course, only if the issue is 'easily fixable'; otherwise, updating 
to a non-buggy JDK version is the way to go.




 NioProcessor 100% CPU usage on Linux (epoll selector bug)
 -

 Key: DIRMINA-678
 URL: https://issues.apache.org/jira/browse/DIRMINA-678
 Project: MINA
  Issue Type: Bug
  Components: Core
Affects Versions: 2.0.0-M4
 Environment: CentOS 5.x, 32/64-bit, 32/64-bit Sun JDK 1.6.0_12, also 
 _11/_10/_09 and Sun JDK 1.7.0 b50, Kernel 2.6.18-92.1.22.el5 and also older 
 versions,
Reporter: Serge Baranov
 Fix For: 2.0.0-RC2

 Attachments: snap973.png, snap974.png


 It's the same bug as described at http://jira.codehaus.org/browse/JETTY-937, 
 but it affects MINA in a very similar way.
 NioProcessor threads start to consume 100% of a CPU core each. After 10-30 
 minutes of running, depending on the load (sometimes after several hours), 
 one of the NioProcessor threads starts to consume all the available CPU, 
 probably spinning in the epoll select loop. Later, more threads can be 
 affected by the same issue, eventually loading all the available CPU cores 
 at 100%.
 Sample trace:
 NioProcessor-10 [RUNNABLE] CPU time: 5:15
 sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int)
 sun.nio.ch.EPollArrayWrapper.poll(long)
 sun.nio.ch.EPollSelectorImpl.doSelect(long)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(long)
 sun.nio.ch.SelectorImpl.select(long)
 org.apache.mina.transport.socket.nio.NioProcessor.select(long)
 org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run()
 org.apache.mina.util.NamePreservingRunnable.run()
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker)
 java.util.concurrent.ThreadPoolExecutor$Worker.run()
 java.lang.Thread.run()
 It seems to affect any NIO-based Java server application running in the 
 specified environment.
 Some projects provide workarounds for similar JDK bugs; perhaps MINA could 
 also consider a workaround.
 As far as I know, there are at least 3 users who experience this issue with 
 Jetty, and all of them are running CentOS (is some distribution default 
 setting a trigger?). As for MINA, I'm not aware of similar reports yet.
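The workaround other NIO servers adopted for this family of JDK bugs is to detect the spin and transparently replace the broken selector with a fresh one. A minimal sketch of that idea (the class name is illustrative; this is not MINA's or Jetty's actual code):

```java
// Hedged sketch of the selector-rebuild workaround: open a new Selector and
// migrate every valid registration from the old, possibly-spinning one.
import java.io.IOException;
import java.nio.channels.SelectableChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectorRebuilder {
    /**
     * Open a fresh Selector and re-register every valid key from the old
     * selector onto it, preserving interest ops and attachments, then close
     * the old selector.
     */
    public static Selector rebuild(Selector broken) throws IOException {
        Selector fresh = Selector.open();
        for (SelectionKey key : broken.keys()) {
            if (!key.isValid()) {
                continue;  // skip already-cancelled keys
            }
            SelectableChannel ch = key.channel();
            Object attachment = key.attachment();
            int ops = key.interestOps();    // read before cancelling the key
            key.cancel();                   // detach from the broken selector
            ch.register(fresh, ops, attachment);  // attach to the new one
        }
        broken.close();
        return fresh;
    }
}
```

A caller would invoke this once it has decided (e.g. after many consecutive zero-key, zero-wait selects) that the selector is spinning, and then continue the select loop on the returned instance.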

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (DIRMINA-678) NioProcessor 100% CPU usage on Linux (epoll selector bug)

2010-02-09 Thread Serge Baranov (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12831363#action_12831363
 ] 

Serge Baranov commented on DIRMINA-678:
---

The issue is that connections stall randomly: no data is sent or received 
when curl connects to an Apache server via a MINA-based proxy server.

OS: Windows Vista 64-bit
JDK: 1.6.0_18 (32-bit)

The proxy is similar to the one I've provided for another issue: DIRMINA-734 
(mina-flush-regression.zip).

You can try running it with the latest MINA revision, connecting to some server 
using curl via this proxy, and making several concurrent connections; curl will 
hang waiting for the data from the server.
No time for an isolated test case, sorry.

I suggest making this workaround optional and enabling it only when the JDK 
version is below 1.6.0_18, or via some setting/property.
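The kind of guard Serge suggests could be sketched as follows. This is illustrative only: the property name "mina.epoll.workaround" is invented here, and MINA has no such guard in the code under discussion.

```java
// Hedged sketch: enable the selector workaround only on a JDK older than
// 1.6.0_18, unless a (hypothetical) system property forces it either way.
public class WorkaroundGuard {
    /** Extract the update number from a Sun "java.version" string like "1.6.0_12". */
    static int updateOf(String javaVersion) {
        int idx = javaVersion.indexOf('_');
        if (idx < 0) {
            return 0;  // no update suffix, e.g. "1.7.0"
        }
        // strip any trailing qualifier such as "-ea" after the digits
        String update = javaVersion.substring(idx + 1).replaceAll("[^0-9].*$", "");
        return update.isEmpty() ? 0 : Integer.parseInt(update);
    }

    /**
     * forcedProperty would come from System.getProperty("mina.epoll.workaround")
     * (an invented name); when null, fall back to the version check.
     */
    static boolean workaroundEnabled(String javaVersion, String forcedProperty) {
        if (forcedProperty != null) {
            return Boolean.parseBoolean(forcedProperty);
        }
        return javaVersion.startsWith("1.6.0") && updateOf(javaVersion) < 18;
    }
}
```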




[jira] Commented: (DIRMINA-678) NioProcessor 100% CPU usage on Linux (epoll selector bug)

2010-02-09 Thread Emmanuel Lecharny (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12831369#action_12831369
 ] 

Emmanuel Lecharny commented on DIRMINA-678:
---

Hi Serge,

thanks for having tested the patch. Sadly, the latest version of the JDK does 
*not* fix the epoll spinning issue. This is what the patch was supposed to fix, 
but I agree that there is some other issue that makes the server stall under 
load.

Another JIRA sheds some more light on the issue: 
https://issues.apache.org/jira/browse/DIRMINA-762

This is currently being investigated.




[jira] Commented: (DIRMINA-678) NioProcessor 100% CPU usage on Linux (epoll selector bug)

2010-02-09 Thread Victor N (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12831370#action_12831370
 ] 

Victor N commented on DIRMINA-678:
--

Is there a confirmation that this issue was fixed in the JDK?

I agree, this fix could be optional and/or guarded by a check of the operating 
system and JDK version.




[jira] Commented: (DIRMINA-678) NioProcessor 100% CPU usage on Linux (epoll selector bug)

2010-02-09 Thread Serge Baranov (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12831374#action_12831374
 ] 

Serge Baranov commented on DIRMINA-678:
---

Interesting, I've been running with a patch from Sun under JDK 1.6.0_12 on 
Linux for almost a year, with no spinning selector bug. This patch should be in 
1.6.0_18, as http://bugs.sun.com/view_bug.do?bug_id=6693490 is fixed in that 
version.

Maybe DIRMINA-762 is about another unfixed bug.




[jira] Commented: (DIRMINA-678) NioProcessor 100% CPU usage on Linux (epoll selector bug)

2010-02-09 Thread Serge Baranov (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12831382#action_12831382
 ] 

Serge Baranov commented on DIRMINA-678:
---

I'll stay on 1.6.0_12 + the Sun patch then. As for MINA, I've updated to the 
recent revision and reverted just the selector patch revisions; it seems to 
work fine now (at least it passes the tests; I haven't tried it in production 
yet).

I still see no reason for this code to be active on the Windows platform, as it 
was never affected by the Linux selector bug. An OS check, plus an option to 
disable it on any platform until a properly working patch is provided, would be 
nice.




[jira] Commented: (DIRMINA-678) NioProcessor 100% CPU usage on Linux (epoll selector bug)

2010-02-09 Thread Victor N (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12831388#action_12831388
 ] 

Victor N commented on DIRMINA-678:
--

Sergey, the patch you are talking about - can it be shared here, or is it still 
for testers only? It is almost 1 year old ;)
Did you send your feedback to Sun? Maybe you (or one of the MINA developers) 
could ask Sun about posting the patch here,
or just ask when this bug fix will be publicly available?

Also, we could try to look into OpenJDK - maybe the patch is already there ;)

Also, it is interesting how these two bugs relate to each other:
http://bugs.sun.com/view_bug.do?bug_id=6693490
http://bugs.sun.com/view_bug.do?bug_id=6670302




[jira] Commented: (DIRMINA-678) NioProcessor 100% CPU usage on Linux (epoll selector bug)

2010-02-09 Thread Emmanuel Lecharny (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12831399#action_12831399
 ] 

Emmanuel Lecharny commented on DIRMINA-678:
---

To Serge :
--
The patch does *not* impact Windows in any case. It's a workaround that only 
kicks in when we meet some very specific conditions, namely :
- when select( 1000 ) does not block
- and returns 0
- and does so immediately
- and the IoProcessor has not been woken up.

On Windows, as long as those 4 conditions are not all met, the code provided in 
the patch will *never* be called. 

So there is no need to add some ugly 'à la C/C++' code to test the underlying 
OS version :)
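Those four conditions can be expressed as a small check. This is a hedged sketch: the class, field, and method names are illustrative, not MINA's actual implementation.

```java
// Hedged sketch of the four-condition test guarding the epoll workaround:
// a timed select() that returns 0, immediately, with no wakeup() requested.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class EpollSpinCheck {
    private final AtomicBoolean wakeupCalled = new AtomicBoolean(false);

    /** Called from wakeup() so a deliberate wakeup is not mistaken for a spin. */
    public void onWakeup() {
        wakeupCalled.set(true);
    }

    /**
     * True only when all conditions hold: select(timeout) returned 0,
     * returned well before the timeout elapsed (i.e. did not block),
     * and no wakeup() was requested since the last check.
     */
    public boolean looksLikeEpollSpin(int selected, long elapsedNanos, long timeoutMillis) {
        boolean immediate = elapsedNanos < TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        boolean wokenUp = wakeupCalled.getAndSet(false);  // consume the flag
        return selected == 0 && immediate && !wokenUp;
    }
}
```

On Windows a zero-key return from a timed select normally means the full timeout elapsed, so the `immediate` test fails and the workaround path is never entered, which matches the point above.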

To Victor :
--
It would be great if we could have OpenJDK working natively on Mac OS too :/ 
But this is not the case. This is the reason why we are stuck with the Sun JVM, 
buggy as it is. 

But this is not horrible; we can deal with this bug. Now, I repeat myself, but 
https://issues.apache.org/jira/browse/DIRMINA-762 is currently the problem 
that you will face when using the latest version of the trunk. We are working 
on it; it's not obvious, and it may take time. At this point, any help is 
welcome.

Thanks to both of you!




MINA2 space

2010-02-09 Thread Ashish
What happened to our new MINA2 space? I logged in half an hour back, but
couldn't locate it :-(

Now I'm having trouble logging in. Anyone experiencing the same problems?

thanks
ashish


Re: MINA2 space

2010-02-09 Thread Emmanuel Lecharny

On 2/9/10 12:11 PM, Ashish wrote:

What happened to our new MINA2 space? I logged half an hr back, but
couldn't locate it :-(

Now having trouble logging in. Anyone experiencing the same problems?

thanks
ashish

   

and the wiki : http://cwiki.apache.org/confluence/display/MINA2/Index

--
Regards,
Cordialement,
Emmanuel Lécharny
www.nextury.com




[jira] Commented: (DIRMINA-678) NioProcessor 100% CPU usage on Linux (epoll selector bug)

2010-02-09 Thread Serge Baranov (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12831418#action_12831418
 ] 

Serge Baranov commented on DIRMINA-678:
---

Victor, I've sent you a patch by e-mail. It's just a zip with classes that 
replace the default JDK implementation (run with the 
-Xbootclasspath/p:patch.zip command line option).

Emmanuel, it probably should not impact Windows in any case, but currently 
connections hang with no CPU usage when running the latest MINA revision, while 
everything works fine when running the latest MINA revision without the 
selector code submitted as 42. That's why I've reported it. Sorry if it was 
not clear. Your fix breaks MINA on Windows, at least for my application.
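For anyone wanting to try the same approach, the patch classes are applied by prepending them to the boot class path. A hedged example invocation: only `patch.zip` and the `-Xbootclasspath/p` option come from the comment above; the jar and main class names are placeholders.

```shell
# Prepend the patched sun.nio.ch classes from patch.zip to the boot class path
# so they shadow the JDK's own copies. This option exists only on pre-Java-9
# JVMs (it was removed in JDK 9). "myapp.jar" and "com.example.MyServer" are
# placeholders for the actual application.
java -Xbootclasspath/p:patch.zip -cp myapp.jar com.example.MyServer
```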




Re: MINA2 space

2010-02-09 Thread Ashish
Seems I am out of luck :-( the login page doesn't show up..


On Tue, Feb 9, 2010 at 4:54 PM, Emmanuel Lecharny elecha...@gmail.com wrote:
 On 2/9/10 12:11 PM, Ashish wrote:

 What happened to our new MINA2 space? I logged half an hr back, but
 couldn't locate it :-(

 Now having trouble logging in. Anyone experiencing the same problems?

 thanks
 ashish



 and the wiki : http://cwiki.apache.org/confluence/display/MINA2/Index

 --
 Regards,
 Cordialement,
 Emmanuel Lécharny
 www.nextury.com






-- 
thanks
ashish

Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal


Re: MINA2 space

2010-02-09 Thread Norman Maurer
Works for me too..

Bye,
Norman

2010/2/9 Ashish paliwalash...@gmail.com:
 Seems I am out of luck :-( the login page doesn't show up..


 On Tue, Feb 9, 2010 at 4:54 PM, Emmanuel Lecharny elecha...@gmail.com wrote:
 On 2/9/10 12:11 PM, Ashish wrote:

 What happened to our new MINA2 space? I logged half an hr back, but
 couldn't locate it :-(

 Now having trouble logging in. Anyone experiencing the same problems?

 thanks
 ashish



 and the wiki : http://cwiki.apache.org/confluence/display/MINA2/Index

 --
 Regards,
 Cordialement,
 Emmanuel Lécharny
 www.nextury.com






 --
 thanks
 ashish

 Blog: http://www.ashishpaliwal.com/blog
 My Photo Galleries: http://www.pbase.com/ashishpaliwal



[jira] Commented: (DIRMINA-678) NioProcessor 100% CPU usage on Linux (epoll selector bug)

2010-02-09 Thread Emmanuel Lecharny (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12831419#action_12831419
 ] 

Emmanuel Lecharny commented on DIRMINA-678:
---

Serge,

I understand that. The problem is that it not only breaks on Windows, but also 
on Mac and Linux :/

It *sucks*... 

IMO, there is something else going wrong here. I have a test to reproduce the 
breakage, and I see no other option but to run the test against all the 
revisions since RC1. It will probably take a day to do that. 

What a perfect day it will be :/




Re: MINA2 space

2010-02-09 Thread Ashish
still not working for me :-(
maybe I'll try again after some time..

On Tue, Feb 9, 2010 at 5:01 PM, Norman Maurer
norman.mau...@googlemail.com wrote:
 Works for me too..

 Bye,
 Norman

 2010/2/9 Ashish paliwalash...@gmail.com:
 Seems I am out of luck :-( the login page doesn't show up..


 On Tue, Feb 9, 2010 at 4:54 PM, Emmanuel Lecharny elecha...@gmail.com 
 wrote:
 On 2/9/10 12:11 PM, Ashish wrote:

 What happened to our new MINA2 space? I logged half an hr back, but
 couldn't locate it :-(

 Now having trouble logging in. Anyone experiencing the same problems?

 thanks
 ashish



 and the wiki : http://cwiki.apache.org/confluence/display/MINA2/Index

 --
 Regards,
 Cordialement,
 Emmanuel Lécharny
 www.nextury.com






 --
 thanks
 ashish

 Blog: http://www.ashishpaliwal.com/blog
 My Photo Galleries: http://www.pbase.com/ashishpaliwal





-- 
thanks
ashish

Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal


Re: MINA2 space

2010-02-09 Thread Ashish
Its working now :-)

Not sure what the problem was..

On Tue, Feb 9, 2010 at 5:10 PM, Ashish paliwalash...@gmail.com wrote:
 still not working for me :-(
 may be will try after sometime..

 On Tue, Feb 9, 2010 at 5:01 PM, Norman Maurer
 norman.mau...@googlemail.com wrote:
 Works for me too..

 Bye,
 Norman

 2010/2/9 Ashish paliwalash...@gmail.com:
 Seems I am out of luck :-( the login page doesn't show up..


 On Tue, Feb 9, 2010 at 4:54 PM, Emmanuel Lecharny elecha...@gmail.com 
 wrote:
 On 2/9/10 12:11 PM, Ashish wrote:

 What happened to our new MINA2 space? I logged half an hr back, but
 couldn't locate it :-(

 Now having trouble logging in. Anyone experiencing the same problems?

 thanks
 ashish



 and the wiki : http://cwiki.apache.org/confluence/display/MINA2/Index

 --
 Regards,
 Cordialement,
 Emmanuel Lécharny
 www.nextury.com






 --
 thanks
 ashish

 Blog: http://www.ashishpaliwal.com/blog
 My Photo Galleries: http://www.pbase.com/ashishpaliwal





 --
 thanks
 ashish

 Blog: http://www.ashishpaliwal.com/blog
 My Photo Galleries: http://www.pbase.com/ashishpaliwal




-- 
thanks
ashish

Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal


[jira] Commented: (DIRMINA-762) WARN org.apache.mina.core.service.IoProcessor - Create a new selector. Selected is 0, delta = 0

2010-02-09 Thread Emmanuel Lecharny (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12831485#action_12831485
 ] 

Emmanuel Lecharny commented on DIRMINA-762:
---

OK, I found some time to run the new client, and I get some interesting results:

3092.25 messages/sec (total messages received 15235)
3231.0 messages/sec (total messages received 31367)
...
3232.25 messages/sec (total messages received 79864)
Warning : Short connections thread 2 have no activity for 1322 ms
Warning : Long connection thread 0 have no activity for 1271 ms
Warning : Long connection thread 1 have no activity for 1262 ms
Warning : Long connection thread 3 have no activity for 1271 ms
Warning : Long connection thread 6 have no activity for 1264 ms
Warning : Long connection thread 10 have no activity for 1269 ms
Warning : Long connection thread 17 have no activity for 1262 ms
Warning : Long connection thread 18 have no activity for 1261 ms
Warning : Long connection thread 21 have no activity for 1271 ms
Warning : Long connection thread 24 have no activity for 1267 ms
Warning : Long connection thread 27 have no activity for 1271 ms
Warning : Long connection thread 28 have no activity for 1261 ms
Warning : Long connection thread 31 have no activity for 1262 ms
2262.5 messages/sec (total messages received 91947)
1733.0 messages/sec (total messages received 100753)
Warning : Short connections thread 1 have no activity for 1479 ms
Warning : Long connection thread 4 have no activity for 1421 ms
Warning : Long connection thread 7 have no activity for 1430 ms
Warning : Long connection thread 11 have no activity for 1422 ms
Warning : Long connection thread 13 have no activity for 1424 ms
Warning : Long connection thread 14 have no activity for 1430 ms
Warning : Long connection thread 16 have no activity for 1422 ms
Warning : Long connection thread 19 have no activity for 1424 ms
Warning : Long connection thread 22 have no activity for 1423 ms
Warning : Long connection thread 26 have no activity for 1430 ms
Warning : Long connection thread 29 have no activity for 1421 ms
1041.25 messages/sec (total messages received 105948)
...
1035.5 messages/sec (total messages received 163086)
Warning : Short connections thread 4 have no activity for 1844 ms
Warning : Long connection thread 2 have no activity for 1793 ms
Warning : Long connection thread 5 have no activity for 1786 ms
Warning : Long connection thread 8 have no activity for 1785 ms
Warning : Long connection thread 9 have no activity for 1785 ms
Warning : Long connection thread 12 have no activity for 1793 ms
Warning : Long connection thread 15 have no activity for 1783 ms
Warning : Long connection thread 20 have no activity for 1793 ms
Warning : Long connection thread 23 have no activity for 1784 ms
Warning : Long connection thread 25 have no activity for 1786 ms
Warning : Long connection thread 30 have no activity for 1785 ms
39.5 messages/sec (total messages received 163483)
...

There are 32 long connections created, and roughly one third of them are 
'killed', or stall; a few moments later another third stalls, and at the end 
the last third stalls. At this point, we don't receive any messages on the 
long connections.

It seems we have a problem writing data to the client at some point. The 
selector is not swapped, by the way.
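The per-interval throughput figures and the inactivity warnings in the output above could come from a simple shared counter plus per-thread last-activity timestamps. This is a hypothetical sketch, not the actual attached RateCounter.java or StressClient.java (which are not shown here); all names are assumptions.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the stress client's reporting: a shared message
// counter sampled every interval, and a per-thread last-activity timestamp
// checked against a stall threshold. Not the actual attached code.
public class ThroughputMonitor {

    private final AtomicLong total = new AtomicLong();

    /** Called by receiver threads for every message received. */
    public void messageReceived() {
        total.incrementAndGet();
    }

    public long total() {
        return total.get();
    }

    /** Messages per second over the interval between two samples. */
    public static double rate(long prevTotal, long curTotal, long intervalMs) {
        return (curTotal - prevTotal) * 1000.0 / intervalMs;
    }

    /** True when a connection thread has been silent longer than the threshold. */
    public static boolean isStalled(long lastActivityMs, long nowMs, long thresholdMs) {
        return nowMs - lastActivityMs > thresholdMs;
    }
}
```

A reporter thread would sample total() every few seconds, print rate(prev, cur, interval), and emit a "no activity" warning for each thread whose isStalled() check fires.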

 WARN org.apache.mina.core.service.IoProcessor  - Create a new selector. 
 Selected is 0, delta = 0
 

 Key: DIRMINA-762
 URL: https://issues.apache.org/jira/browse/DIRMINA-762
 Project: MINA
  Issue Type: Bug
 Environment: Linux (2.6.26-2-amd64),  java version 1.6.0_12 and also 
 1.6.0_18.
Reporter: Omry Yadan
Priority: Critical
 Fix For: 2.0.0-RC2

 Attachments: BufferCodec.java, NettyTestServer.java, 
 RateCounter.java, Screen shot 2010-02-02 at 7.48.39 PM.png, Screen shot 
 2010-02-02 at 7.48.46 PM.png, Screen shot 2010-02-02 at 7.48.59 PM.png, 
 Screen shot 2010-02-02 at 7.49.13 PM.png, Screen shot 2010-02-02 at 7.49.18 
 PM.png, Server.java, StressClient.java


 The MINA server gets into a bad state where it constantly prints:
 WARN org.apache.mina.core.service.IoProcessor  - Create a new selector. 
 Selected is 0, delta = 0
 When this happens, server throughput drops significantly.
 To reproduce, run the attached server and client for a short while (30 seconds 
 on my box).




Re: Another Test

2010-02-09 Thread Jeff Genender
Wow... the test passed!  Look!

---
 T E S T S
---
Running dev@mina.apache.org alanTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.369 sec

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] BUILD SUCCESSFUL
[INFO] 
On Feb 9, 2010, at 8:04 PM, Alan D. Cabrera wrote:

 Another simple test
 
 
 Regards,
 Alan