[JIRA] (JENKINS-57119) maven-plugin random socket leak, leading to threads leak on slave and master

2019-04-19 Thread glav...@gmail.com (JIRA)
 Gabriel Lavoie created an issue

 Jenkins / JENKINS-57119
 maven-plugin random socket leak, leading to threads leak on slave and master

Issue Type: Bug
 
 
Assignee: Unassigned
Components: maven-plugin
Created: 2019-04-19 12:21
Environment:
  Master Java version: openjdk version "1.8.0_212"
  Docker image built "FROM jenkins/jenkins:2.171"
  maven-plugin: 3.12
  ssh-slave-plugin: 1.29.4
  ec2-fleet-plugin: 1.1.9
  Slave Java version: openjdk version "1.8.0_212"
  Running on AMI: debian-stretch-hvm-x86_64-gp2-2019-02-19-26620 (https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch)
  Kernel: Linux 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3 (2019-02-02) x86_64 GNU/Linux
Priority: Major
Reporter: Gabriel Lavoie
 

  
 
 
 
 

 
I've had to deal with a leak of "Channel reader thread: Channel to Maven" for a few months now. On the master, these appear as the following, stuck on a read() operation, without the attached "Executor" thread that exists when a job is running:

{code:java}
"Channel reader thread: Channel to Maven [/var/lib/jenkins/tools/hudson.model.JDK/AdoptOpenJDK_11.0.1u13/jdk-11.0.1+13/bin/java, -Xmx512m, -cp, /var/lib/jenkins/maven33-agent.jar:/var/lib/jenkins/tools/hudson.tasks.Maven_MavenInstallation/3.3.9/3.3.9/boot/plexus-classworlds-2.5.2.jar:/var/lib/jenkins/tools/hudson.tasks.Maven_MavenInstallation/3.3.9/3.3.9/conf/logging, jenkins.maven3.agent.Maven33Main, /var/lib/jenkins/tools/hudson.tasks.Maven_MavenInstallation/3.3.9/3.3.9, /var/lib/jenkins/remoting.jar, /var/lib/jenkins/maven33-interceptor.jar, /var/lib/jenkins/maven3-interceptor-commons.jar, 45945]" #4754401 daemon prio=5 os_prio=0 tid=0x7f3c6c0ee000 nid=0x4d09 in Object.wait() [0x7f3bf49c3000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at hudson.remoting.FastPipedInputStream.read(FastPipedInputStream.java:175)
        - locked <0x000607135d08> (a [B)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
        - locked <0x000607062548> (a java.io.BufferedInputStream)
        at hudson.remoting.FlightRecorderInputStream.read(FlightRecorderInputStream.java:91)
        at hudson.remoting.ChunkedInputStream.readHeader(ChunkedInputStream.java:72)
        at hudson.remoting.ChunkedInputStream.readUntilBreak(ChunkedInputStream.java:103)
        at hudson.remoting.ChunkedCommandTransport.readBlock(ChunkedCommandTransport.java:39)
        at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
        at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
{code}

After about 3 weeks, we get between 3,000 and 4,000 of those, eventually leading to OutOfMemory or file-descriptor-related errors. I found no error in the master logs, the slave logs, or any job output that would relate to this.

After some digging, I found that those threads are attached to "proxy" threads on the slaves, also stuck on a read() operation with the following stack:

{code:java}
"Stream reader: maven process at Socket[addr=/127.0.0.1,port=60304,localport=45945]" #76026 prio=5 os_prio=0 tid=0x7fc7c40c6000 nid=0x4ab9 runnable [0x7fc7f3bfa000]
   java.lang.Thread.State: RUNNABLE
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
        at java.net.SocketInputStream.read(SocketInputStream.java:171)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
{code}
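[Editor's note] The slave-side "Stream reader" thread in the report is parked in a plain blocking SocketInputStream.read(): if the Maven process on the other end disappears without a clean TCP close, that read can block indefinitely, which matches the leaked-thread state above. The following is a minimal, hypothetical demo (not Jenkins or maven-plugin code) showing that state and how a socket read timeout (SO_TIMEOUT) bounds the wait so a reader thread can notice and exit:

```java
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Hypothetical sketch: a peer that never writes leaves a blocking
// read() parked forever; setSoTimeout() makes the read fail fast
// instead, so the reading thread is not leaked.
public class StuckReaderDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0);                     // ephemeral port
             Socket client = new Socket("127.0.0.1", server.getLocalPort());
             Socket accepted = server.accept()) {
            // Without this line, the read() below would block indefinitely,
            // just like the "Stream reader" thread in the stack trace.
            client.setSoTimeout(200); // milliseconds
            try {
                int b = client.getInputStream().read(); // server never writes
                System.out.println("read returned " + b); // not reached
            } catch (SocketTimeoutException e) {
                System.out.println("read timed out as expected");
            }
        }
    }
}
```

This only illustrates the blocking behavior seen in the dumps; whether a timeout is an appropriate fix inside the remoting channel is a separate design question.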

[JIRA] (JENKINS-57119) maven-plugin random socket leak, leading to threads leak on slave and master

2019-04-19 Thread glav...@gmail.com (JIRA)
 Gabriel Lavoie updated an issue

 Jenkins / JENKINS-57119
 maven-plugin random socket leak, leading to threads leak on slave and master
 

  
 
 
 
 

 
Change By: Gabriel Lavoie
Attachment: normal-no-leak.png
Attachment: rst-leak.png
Attachment: rst-no-leak.png
 

  
 
 
 
 

 
 
 

 
 
 

  
 

  
 
 
 
  
 

  
 
 
 
 

 
 This message was sent by Atlassian Jira (v7.11.2#711002-sha1:fdc329d)  
 

  
 

   





-- 
You received this message because you are subscribed to the Google Groups "Jenkins Issues" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jenkinsci-issues+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[JIRA] (JENKINS-57119) maven-plugin random socket leak, leading to threads leak on slave and master

2019-05-03 Thread jgl...@cloudbees.com (JIRA)
 Jesse Glick assigned an issue to Gabriel Lavoie

Change By: Jesse Glick
Assignee: Gabriel Lavoie
 

  
 
 
 
 

 
 
 

 
 
 This message was sent by Atlassian Jira (v7.11.2#711002-sha1:fdc329d)
-- 
You received this message because you are subscribed to the Google Groups "Jenkins Issues" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jenkinsci-issues+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


2019-05-03 Thread jgl...@cloudbees.com (JIRA)
 Jesse Glick updated JENKINS-57119

Change By: Jesse Glick
Status: In Progress → In Review
2019-05-03 Thread jgl...@cloudbees.com (JIRA)
 Jesse Glick started work on JENKINS-57119

Change By: Jesse Glick
Status: Open → In Progress
2019-05-03 Thread glav...@gmail.com (JIRA)
 Gabriel Lavoie commented on JENKINS-57119
 Adding more information about TCP keepalive: https://www.tldp.org/HOWTO/html_single/TCP-Keepalive-HOWTO/

From the HOWTO:

"Keepalive is non-invasive, and in most cases, if you're in doubt, you can turn it on without the risk of doing something wrong. But do remember that it generates extra network traffic, which can have an impact on routers and firewalls. In short, use your brain and be careful. In the next section we will distinguish between the two target tasks for keepalive:

Checking for dead peers

Preventing disconnection due to network inactivity"

To add some explanation on the configuration: when keepalive is enabled on a socket, the Linux network stack starts sending empty probe packets after two hours of inactivity (net.ipv4.tcp_keepalive_time) at a regular interval (net.ipv4.tcp_keepalive_intvl), expecting an ACK in return. If the peer doesn't answer after a specific number of attempts (net.ipv4.tcp_keepalive_probes), or a RST is received, the network stack knows the connection is dead and reports it to the application as an IOException. We've been testing this change for a few weeks now and we see no adverse effects, as those threads seem to be linked only to the slave, associated with jobs that completed successfully most of the time.
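For reference, the JDK-level switch involved here is SO_KEEPALIVE, which is off by default and set per socket; once enabled, the kernel applies the net.ipv4.tcp_keepalive_* settings above to that connection. A minimal sketch of my own (not the plugin's actual code; the class name and loopback setup are just for illustration):

```java
import java.net.ServerSocket;
import java.net.Socket;

public class KeepAliveSketch {
    public static void main(String[] args) throws Exception {
        // Loopback pair, just to have a real connected socket to inspect.
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("127.0.0.1", server.getLocalPort());
             Socket accepted = server.accept()) {
            // SO_KEEPALIVE is disabled by default on JDK sockets.
            System.out.println("default SO_KEEPALIVE: " + client.getKeepAlive());
            // After this, the kernel starts probing the peer once the
            // connection has been idle for net.ipv4.tcp_keepalive_time seconds.
            client.setKeepAlive(true);
            System.out.println("after enable: " + client.getKeepAlive());
        }
    }
}
```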
 

  
 
 
 
 

 
 
 

 
 
2019-06-19 Thread ol...@apache.org (JIRA)
 Olivier Lamy updated JENKINS-57119

Change By: Olivier Lamy
 
Status: In Review → Resolved
Resolution: Fixed
 

  
 
 
 
 

 
 
 

 
 
2019-06-19 Thread ol...@apache.org (JIRA)
 Olivier Lamy closed an issue as Fixed

Change By: Olivier Lamy
 
Status: Resolved → Closed
 

  
 
 
 
 

 
 
 

 
 
2019-06-19 Thread ol...@apache.org (JIRA)
 Olivier Lamy updated an issue

Change By: Olivier Lamy
 
Labels: maven-plugin-3.3
 

  
 
 
 
 

 
 
 

 
 