[Resin-interest] hmux broken pipe

2009-12-14 Thread Wesley Wu
When running Resin in a load-balanced cluster with a web tier (one server)
and an app tier (two servers), my app-tier console displays this error:

[09-12-15 00:14:34.369] {server-192.168.1.5:6801-34} com.caucho.java.JavaCompiler compileInt Compiling _jsp/_person/_Index__jsp.java
[09-12-15 00:14:34.723] {server-192.168.1.5:6801-34} com.caucho.java.JavaCompiler compileInt Compiling _jsp/_person/_Header__jsp.java
[09-12-15 00:14:53.096] {hmtp-aaa-to-aaa-31} com.caucho.server.cluster.HmuxQueue dispatch java.net.SocketException: Write failed: Broken pipe
[09-12-15 00:14:53.096] {hmtp-aaa-to-aaa-31}   at jrockit.net.SocketNativeIO.writeBytesPinned(Native Method)
[09-12-15 00:14:53.096] {hmtp-aaa-to-aaa-31}   at jrockit.net.SocketNativeIO.socketWrite(SocketNativeIO.java:73)
[09-12-15 00:14:53.096] {hmtp-aaa-to-aaa-31}   at java.net.SocketOutputStream.socketWrite0(SocketOutputStream.java)
[09-12-15 00:14:53.096] {hmtp-aaa-to-aaa-31}   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
[09-12-15 00:14:53.096] {hmtp-aaa-to-aaa-31}   at java.net.SocketOutputStream.write(SocketOutputStream.java:137)
[09-12-15 00:14:53.096] {hmtp-aaa-to-aaa-31}   at com.caucho.vfs.TcpStream.write(TcpStream.java:150)
[09-12-15 00:14:53.096] {hmtp-aaa-to-aaa-31}   at com.caucho.vfs.WriteStream.flush(WriteStream.java:367)
[09-12-15 00:14:53.096] {hmtp-aaa-to-aaa-31}   at com.caucho.server.cluster.ClusterStream.writeYield(ClusterStream.java:626)
[09-12-15 00:14:53.096] {hmtp-aaa-to-aaa-31}   at com.caucho.server.cluster.HmuxQueue.dispatch(HmuxQueue.java:132)
[09-12-15 00:14:53.096] {hmtp-aaa-to-aaa-31}   at com.caucho.hemp.broker.HempMemoryQueue.consumeQueue(HempMemoryQueue.java:415)
[09-12-15 00:14:53.096] {hmtp-aaa-to-aaa-31}   at com.caucho.hemp.broker.HempMemoryQueue.run(HempMemoryQueue.java:473)
[09-12-15 00:14:53.096] {hmtp-aaa-to-aaa-31}   at com.caucho.util.ThreadPool$PoolThread.runTasks(ThreadPool.java:901)
[09-12-15 00:14:53.096] {hmtp-aaa-to-aaa-31}   at com.caucho.util.ThreadPool$PoolThread.run(ThreadPool.java:866)
[09-12-15 00:14:53.096] {hmtp-aaa-to-aaa-31}

I'm using the Resin 4.0.2.s20091202 snapshot on CentOS 5.2 x86 Linux,
compiled with JNI, with an evaluation Pro license.

JDK: jrockit 1.6.0_14 x86

I think it may be caused by a directive.

Any ideas?

-Wesley




Re: [Resin-interest] hmux broken pipe

2009-12-14 Thread Scott Ferguson
Wesley Wu wrote:
> When running Resin in a load-balanced cluster with a web tier (one server)
> and an app tier (two servers)
Thanks. I've filed this. It looks like a socket timeout issue.

-- Scott
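
For context on the symptom itself: the trace above is a write to a connection
that the remote side has already dropped. Below is a minimal, standalone
sketch using plain java.net (not Resin's hmux code; the class name is invented
for illustration) that reproduces the same kind of SocketException after the
peer closes an idle connection; the exact "Broken pipe" message is
OS-dependent (typical on Linux).

// Standalone illustration, not Resin code: the peer closes an idle
// connection and a later write fails with a SocketException.
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketException;

public class BrokenPipeDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);
        Socket client = new Socket("localhost", server.getLocalPort());
        Socket accepted = server.accept();

        // Simulate the remote side dropping the idle connection
        // (e.g. after its idle/socket timeout expires).
        accepted.close();
        Thread.sleep(500);

        OutputStream out = client.getOutputStream();
        try {
            // The first write is usually absorbed by the kernel buffer and
            // answered with an RST; a subsequent write then fails.
            for (int i = 0; i < 10; i++) {
                out.write(new byte[8192]);
                out.flush();
                Thread.sleep(100);
            }
            System.out.println("no failure observed (timing-dependent)");
        } catch (SocketException e) {
            System.out.println("write failed: " + e.getMessage());
        } finally {
            client.close();
            server.close();
        }
    }
}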






Re: [Resin-interest] Potential class loader leaks

2009-12-14 Thread Mattias Jiderhamn

Scott, can you tell me whether you've been able to reproduce this scenario?



Mattias Jiderhamn wrote (2009-12-11 08:34):

Scott Ferguson wrote (2009-12-11 00:23):

Mattias Jiderhamn wrote:
  
So I've spent another day hunting that lo(ooo)ng-standing PermGen memory
leak in our application and/or Resin.


I made a new discovery which "shouldn't" be an issue, but addressing it
could potentially fix problems.
From my investigation it seems that whenever the application is
reloaded, a reference to the old
com.caucho.loader.EnvironmentClassLoader is kept in
com.caucho.hessian.io.SerializerFactory._defaultFactory. I agree that in
theory this shouldn't happen, since the ClassLoader is a weak reference
key; /however/, if I add some code to the application shutdown that
explicitly removes the EnvironmentClassLoader from _defaultFactory
using reflection, the garbage collector is able to unload these classes.
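
A minimal sketch of what such shutdown code might look like (not the actual
code used here; it assumes _defaultFactory is a static Map keyed by
ClassLoader, and that the context classloader at web-app shutdown is still the
app's EnvironmentClassLoader; the listener class name is made up). It would be
registered as a <listener> in web.xml:

// Hypothetical listener illustrating the reflection workaround described
// above. The field name _defaultFactory comes from the discussion; its
// exact type is assumed to be a static Map keyed by ClassLoader.
import java.lang.reflect.Field;
import java.util.Map;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class HessianFactoryCleanupListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    public void contextDestroyed(ServletContextEvent sce) {
        try {
            Class<?> cls = Class.forName("com.caucho.hessian.io.SerializerFactory");
            Field field = cls.getDeclaredField("_defaultFactory");
            field.setAccessible(true);
            Object holder = field.get(null); // assumed to be a static field

            if (holder instanceof Map<?, ?>) {
                // Drop the entry keyed by this web-app's classloader so the
                // old EnvironmentClassLoader becomes eligible for GC.
                ((Map<?, ?>) holder).remove(
                        Thread.currentThread().getContextClassLoader());
            }
        } catch (Exception e) {
            // Best effort only; the field name/type may differ by version.
        }
    }
}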
  

I changed this in 4.0.2 (with some more changes in 4.0.3). Even though 
the key is a weak reference, the value is a strong reference.
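
That weak-key/strong-value trap is easy to demonstrate outside Resin; in this
tiny standalone example the value holds a reference back to the key, so the
WeakHashMap entry is pinned forever:

import java.util.Map;
import java.util.WeakHashMap;

public class WeakKeyStrongValueLeak {
    public static void main(String[] args) {
        Map<Object, Object[]> cache = new WeakHashMap<Object, Object[]>();

        Object key = new Object();
        // The value strongly references the key, so the weak key can never
        // be cleared and the entry stays reachable.
        cache.put(key, new Object[] { key });
        key = null; // drop the only external reference to the key

        System.gc(); // only a hint, but irrelevant: the entry is pinned anyway
        System.out.println("entries after GC: " + cache.size()); // prints 1
    }
}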
  

Indeed, this has been fixed in the latest snapshot! Good job!
Sorry for not testing against that...


I should mention, though, that there is still a minimum of two
EnvironmentClassLoaders for the given application after reloading at
least once. The former one seems to stick around somehow. We have
discussed this before, Scott: how references are kept inside
com.caucho.server.dispatch.Invocation, at least in a low-traffic (dev /
debugging) environment.

On a web-app change the Invocation cache is cleared, so there shouldn't 
be any old references there.
  
Could you please double-check that? For example, what happens to the
requests that get 503 responses during redeployment?
I have an illustrative YourKit snapshot (from the freely available EAP)
I could send you if it helps.


Here is what I'm getting, which seems repeatable over and over in my
environment:

- Startup: 9k loaded classes, no unloaded classes, 49 MB PermGen.
- First redeployment: 15k loaded classes, no unloaded classes, 77 MB PermGen = one extra instance of the app with 6k classes loaded. Assumed a total of 2 classloaders.
- Second redeployment: 15k loaded classes, 6k unloaded classes, 77 MB PermGen = initial/first(?) instance GC'd and a new one loaded. Still 2 classloaders.
- Third redeployment: 21k loaded classes, 6k unloaded classes, 105 MB PermGen = one extra instance of the app with 6k classes loaded, nothing GC'd. Assumed a total of 3 classloaders.
- Fourth redeployment: 21k loaded classes, 12k unloaded classes, 105 MB PermGen = second(?) instance GC'd and a new one loaded. Still 3 classloaders.
- (Fifth redeployment: 15k loaded classes, 24k unloaded classes, 77 MB PermGen = third and fourth(?) instances GC'd and a new one loaded. 2 classloaders. Haven't repeated this test enough to build a thesis.)



Not sure if this is a real problem, but ideally the class loader of the
previous version should be available for garbage collection before the
classes of the new version are loaded ('cause there is no turning back
anyway, is there...?).

In the past this could be a problem with JNI, because a JNI library can
only be loaded in one classloader (that may have changed, but it was
always a JDK restriction).
  
Are you saying this shouldn't be an issue in 4.0.2/snapshot running on
JDK 1.6?


--

  

