[JIRA] (JENKINS-11760) Failed SCM poll does not notify admin/anyone of failure.

2018-03-01 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber commented on JENKINS-11760

Re: Failed SCM poll does not notify admin/anyone of failure.
 Any hope of this issue getting fixed?  It causes issues for us after password changes.    


[JIRA] [core] (JENKINS-35318) Make workspace cleanup configurable per slave

2016-06-02 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber created an issue

Jenkins / JENKINS-35318
Make workspace cleanup configurable per slave

Issue Type: Bug
Assignee: Unassigned
Components: core
Created: 2016/Jun/02 7:52 PM
Priority: Minor
Reporter: Andrew Barber
It would be helpful if the cleanup thread could be configurable by slave. Each slave should have an option to enable/disable cleanup and another option to specify how many days old a workspace needs to be to be considered for cleanup. 
This will help solve issues where slaves may share a slave root or jobs are using custom workspaces and run on multiple slaves.  
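A minimal sketch of the kind of per-node setting being asked for here; the class and field names are hypothetical and this is not an existing Jenkins API:

{code:java}
import java.util.concurrent.TimeUnit;

/** Sketch only: per-node cleanup settings as described above (hypothetical, not an existing API). */
final class NodeCleanupPolicy {

    private final boolean cleanupEnabled;
    private final int retentionDays;

    NodeCleanupPolicy(boolean cleanupEnabled, int retentionDays) {
        this.cleanupEnabled = cleanupEnabled;
        this.retentionDays = retentionDays;
    }

    /** True if a workspace last modified at the given time may be reaped on this node. */
    boolean allowsCleanup(long lastModifiedMillis, long nowMillis) {
        if (!cleanupEnabled) {
            return false; // this node has opted out of cleanup entirely
        }
        long ageDays = TimeUnit.MILLISECONDS.toDays(nowMillis - lastModifiedMillis);
        return ageDays >= retentionDays;
    }
}
{code}

A node configured with, say, new NodeCleanupPolicy(true, 30) would only allow reaping of workspaces untouched for 30 days or more, while new NodeCleanupPolicy(false, 0) would opt the node out of cleanup entirely.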

[JIRA] [core] (JENKINS-19686) Workspace directory randomly deleted

2016-06-02 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber commented on JENKINS-19686

Re: Workspace directory randomly deleted
A follow-up on my case: I just realized that what is different for me is that I share a slave root across slaves. This is OK for me, because jobs are only tied to a single node. But by moving the job from one slave to another, it opened up the workspace for reaping (in the context of the old slave), which I don't want. You could argue user error, but there is no checking in Jenkins to ensure slaves have different roots (which implies that sharing roots must be allowed). The cleanup thread could be enhanced to check with all slaves that share a root to make sure none of them own the active workspace.
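A rough sketch of the shared-root check being suggested, assuming nodes expose their remote root path; the naive startsWith containment test and the class name are illustrative only, not existing Jenkins code:

{code:java}
import hudson.FilePath;
import hudson.model.AbstractProject;
import hudson.model.Node;
import java.util.List;

/** Sketch only: keep a workspace if ANY node sharing the same root still owns it. */
class SharedRootGuard {

    boolean ownedByAnyNodeSharingRoot(AbstractProject<?, ?> job, FilePath ws, List<Node> allNodes) {
        Node lastBuiltOn = job.getLastBuiltOn();
        if (lastBuiltOn == null) {
            return false;
        }
        for (Node other : allNodes) {
            FilePath root = other.getRootPath();
            // Naive containment check (assumption): does this node's root hold the workspace?
            boolean sharesRoot = root != null && ws.getRemote().startsWith(root.getRemote());
            if (sharesRoot && lastBuiltOn.equals(other)) {
                return true; // a node sharing this root is still the job's last builder
            }
        }
        return false;
    }
}
{code}

The idea is that the cleanup thread would consult every node whose root contains the workspace before reaping it, rather than deciding in the context of a single node.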


[JIRA] [core] (JENKINS-19686) Workspace directory randomly deleted

2016-06-02 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber edited a comment on JENKINS-19686

Re: Workspace directory randomly deleted
I think there is a bug in the workspace cleanup code. We've been battling with disappearing workspaces and I could not figure out what was happening until I stumbled upon this thread. In our case, I moved a job from one slave to a different slave, but the cleanup code seemed to think it was OK to delete based on the old slave. The message in the log was "Deleting ... on ...". We haven't been using that old slave for this job for at least a few weeks. To make matters worse, it deleted the workspace WHILE the job was running on the new slave. This appears to be the trouble code:

{code:java}
for (Node node : nodes) {
    FilePath ws = node.getWorkspaceFor(item);
    if (ws == null) {
        continue; // offline, fine
    }
    boolean check;
    try {
        check = shouldBeDeleted(item, ws, node);
{code}

The first node it comes across for which shouldBeDeleted returns true causes the workspace to be deleted, even if another node (later in the list) is the last builder of that job (meaning the job is still active). This case is meant to be caught in shouldBeDeleted():

{code:java}
Node lb = p.getLastBuiltOn();
LOGGER.log(Level.FINER, "Directory {0} is last built on {1}", new Object[] {dir, lb});
if (lb != null && lb.equals(n)) {
    // this is the active workspace. keep it.
    LOGGER.log(Level.FINE, "Directory {0} is the last workspace for {1}", new Object[] {dir, p});
    return false;
}
{code}

But since the for loop takes action before checking all nodes, this check can be pointless.
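A minimal sketch of the reordering being described, reusing the calls quoted above (getWorkspaceFor, getLastBuiltOn, shouldBeDeleted); it is not the actual Jenkins core code, and deleteWorkspace() stands in for the real removal step:

{code:java}
import hudson.FilePath;
import hudson.model.AbstractProject;
import hudson.model.Node;
import hudson.model.TopLevelItem;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

/** Sketch only: defer any deletion until every node has been consulted. */
class DeferredWorkspaceCleanup {

    void cleanUp(TopLevelItem item, List<Node> nodes) throws IOException, InterruptedException {
        List<FilePath> candidates = new ArrayList<FilePath>();
        for (Node node : nodes) {
            FilePath ws = node.getWorkspaceFor(item);
            if (ws == null) {
                continue; // offline, fine
            }
            // If any node is still the job's last builder, keep the workspace.
            if (item instanceof AbstractProject
                    && node.equals(((AbstractProject<?, ?>) item).getLastBuiltOn())) {
                return;
            }
            if (shouldBeDeleted(item, ws, node)) {
                candidates.add(ws);
            }
        }
        // Only now, after all nodes have been checked, remove the candidates.
        for (FilePath ws : candidates) {
            deleteWorkspace(ws);
        }
    }

    // Same contract as the shouldBeDeleted(...) quoted above; body elided here.
    boolean shouldBeDeleted(TopLevelItem item, FilePath dir, Node n)
            throws IOException, InterruptedException {
        return false;
    }

    // Hypothetical stand-in for the actual removal step.
    void deleteWorkspace(FilePath ws) throws IOException, InterruptedException {
        ws.deleteRecursive();
    }
}
{code}

Nothing is deleted until every node has been consulted, so a later node that is still the job's last builder can veto the reap.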


[JIRA] [core] (JENKINS-19686) Workspace directory randomly deleted

2016-06-02 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber edited a comment on JENKINS-19686

Re: Workspace directory randomly deleted
I think there is a bug in the workspace cleanup code. We've been battling with disappearing workspaces and I could not figure out what was happening until I stumbled upon this thread. In our case, I moved a job from one slave to a different slave, but the cleanup code seemed to think it was OK to delete based on the old slave. The message in the log was "Deleting ... on ...". We haven't been using that old slave for this job for at least a few weeks. To make matters worse, it deleted the workspace WHILE the job was running on the new slave. This appears to be the trouble code:

{code:java}
for (Node node : nodes) {
    FilePath ws = node.getWorkspaceFor(item);
    if (ws == null) {
        continue; // offline, fine
    }
    boolean check;
    try {
        check = shouldBeDeleted(item, ws, node);
{code}

The first node it comes across for which shouldBeDeleted returns true causes the workspace to be deleted, even if another node (later in the list) is the last builder of that job (meaning the job is still active). This case is meant to be caught in shouldBeDeleted():

{code:java}
Node lb = p.getLastBuiltOn();
LOGGER.log(Level.FINER, "Directory {0} is last built on {1}", new Object[] {dir, lb});
if (lb != null && lb.equals(n)) {
    // this is the active workspace. keep it.
    LOGGER.log(Level.FINE, "Directory {0} is the last workspace for {1}", new Object[] {dir, p});
    return false;
}
{code}

But since the for loop takes action before checking all nodes, this check can be pointless.


[JIRA] [core] (JENKINS-19686) Workspace directory randomly deleted

2016-06-02 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber edited a comment on JENKINS-19686

Re: Workspace directory randomly deleted
I think there is a bug in the workspace cleanup code. We've been battling with disappearing workspaces and I could not figure out what was happening until I stumbled upon this thread. In our case, I moved a job from one slave to a different slave, but the cleanup code seemed to think it was OK to delete based on the old slave. The message in the log was "Deleting ... on ...". We haven't been using that old slave for this job for at least a few weeks. To make matters worse, it deleted the workspace WHILE the job was running on the new slave. This appears to be the trouble code:

{code:java}
for (Node node : nodes) {
    FilePath ws = node.getWorkspaceFor(item);
    if (ws == null) {
        continue; // offline, fine
    }
    boolean check;
    try {
        check = shouldBeDeleted(item, ws, node);
{code}

The first node it comes across for which shouldBeDeleted returns true causes the workspace to be deleted, even if another node (later in the list) is the last builder of that job (meaning the job is still active). This case is meant to be caught in shouldBeDeleted():

{code:java}
Node lb = p.getLastBuiltOn();
LOGGER.log(Level.FINER, "Directory {0} is last built on {1}", new Object[] {dir, lb});
if (lb != null && lb.equals(n)) {
    // this is the active workspace. keep it.
    LOGGER.log(Level.FINE, "Directory {0} is the last workspace for {1}", new Object[] {dir, p});
    return false;
}
{code}

But since the for loop takes action before checking all nodes, this check can be pointless.


[JIRA] [core] (JENKINS-19686) Workspace directory randomly deleted

2016-06-02 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber edited a comment on JENKINS-19686

Re: Workspace directory randomly deleted
I think there is a bug in the workspace cleanup code. We've been battling with disappearing workspaces and I could not figure out what was happening until I stumbled upon this thread. In our case, I moved a job from one slave to a different slave, but the cleanup code seemed to think it was OK to delete based on the old slave. The message in the log was "Deleting ... on ...". We haven't been using that old slave for this job for at least a few weeks. To make matters worse, it deleted the workspace WHILE the job was running on the new slave. This appears to be the trouble code:

{code:java}
for (Node node : nodes) {
    FilePath ws = node.getWorkspaceFor(item);
    if (ws == null) {
        continue; // offline, fine
    }
    boolean check;
    try {
        check = shouldBeDeleted(item, ws, node);
{code}

The first node it comes across for which shouldBeDeleted returns true causes the workspace to be deleted, even if another node (later in the list) is the last builder of that job (meaning the job is still active). This case is meant to be caught in shouldBeDeleted():

{code:java}
Node lb = p.getLastBuiltOn();
LOGGER.log(Level.FINER, "Directory {0} is last built on {1}", new Object[] {dir, lb});
if (lb != null && lb.equals(n)) {
    // this is the active workspace. keep it.
    LOGGER.log(Level.FINE, "Directory {0} is the last workspace for {1}", new Object[] {dir, p});
    return false;
}
{code}

But since the for loop takes action before checking all nodes, this check can be pointless.


[JIRA] [core] (JENKINS-19686) Workspace directory randomly deleted

2016-06-02 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber commented on JENKINS-19686

Re: Workspace directory randomly deleted
I think there is a bug in the workspace cleanup code. We've been battling with disappearing workspaces and I could not figure out what was happening until I stumbled upon this thread. In our case, I moved a job from one slave to a different slave, but the cleanup code seemed to think it was OK to delete based on the old slave. The message in the log was "Deleting ... on ...". We haven't been using that old slave for this job for at least a few weeks. To make matters worse, it deleted the workspace WHILE the job was running on the new slave. This appears to be the trouble code:

{code:java}
for (Node node : nodes) {
    FilePath ws = node.getWorkspaceFor(item);
    if (ws == null) {
        continue; // offline, fine
    }
    boolean check;
    try {
        check = shouldBeDeleted(item, ws, node);
{code}

The first node it comes across for which shouldBeDeleted returns true causes the workspace to be deleted, even if another node (later in the list) is the last builder of that job (meaning the job is still active). This case is meant to be caught in shouldBeDeleted():

{code:java}
Node lb = p.getLastBuiltOn();
LOGGER.log(Level.FINER, "Directory {0} is last built on {1}", new Object[] {dir, lb});
if (lb != null && lb.equals(n)) {
    // this is the active workspace. keep it.
    LOGGER.log(Level.FINE, "Directory {0} is the last workspace for {1}", new Object[] {dir, p});
    return false;
}
{code}

But since the for loop takes action before checking all nodes, this check can be pointless.

[JIRA] [core] (JENKINS-9436) gui option for "hudson.model.WorkspaceCleanupThread.disabled"

2016-06-02 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber commented on JENKINS-9436

Re: gui option for "hudson.model.WorkspaceCleanupThread.disabled"
This is one system property that needs a UI option. Deleting workspaces is pretty serious and warrants live control over the behavior.

I think there is a bug in the cleanup code still. We've been battling with disappearing workspaces and I could not figure out what was happening until I stumbled upon this thread. In our case, I moved a job from one slave to a different slave, but the cleanup code seemed to think it was OK to delete based on the old slave. The message in the log was "Deleting ... on ...". We haven't been using that old slave for this job for at least a few weeks. To make matters worse, it deleted the workspace WHILE the job was running on the new slave. This appears to be the trouble code:

{code:java}
for (Node node : nodes) {
    FilePath ws = node.getWorkspaceFor(item);
    if (ws == null) {
        continue; // offline, fine
    }
    boolean check;
    try {
        check = shouldBeDeleted(item, ws, node);
    } catch (IOException x) {
{code}

The first node it comes across for which shouldBeDeleted returns true causes the workspace to be deleted, even if another node (later in the list) is the last builder of that job (meaning the job is still active). This case is meant to be caught in shouldBeDeleted():

{code:java}
if (lb != null && lb.equals(n)) {
    // this is the active workspace. keep it.
    LOGGER.log(Level.FINE, "Directory {0} is the last workspace for {1}", new Object[] {dir, p});
    return false;
}
{code}

But since the for loop takes action before checking all nodes, this check can be pointless.
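For context on why a UI toggle would help: a kill switch like the one in this issue's title is typically read from the system property once at startup, along these lines (a sketch; the actual field in hudson.model.WorkspaceCleanupThread may be wired differently):

{code:java}
// Sketch of how such a kill switch is usually wired (assumption, not the exact
// Jenkins core code): the value is taken from the system property once, so
// flipping it normally requires restarting Jenkins or poking the running
// instance from the script console.
public class CleanupSwitchSketch {
    private static final boolean DISABLED =
            Boolean.getBoolean("hudson.model.WorkspaceCleanupThread.disabled");

    public static void main(String[] args) {
        System.out.println("Workspace cleanup disabled? " + DISABLED);
    }
}
{code}

Started with -Dhudson.model.WorkspaceCleanupThread.disabled=true the flag is on, but changing it on a live instance generally means a restart or a script-console intervention, which is exactly the gap a UI option would close.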

[JIRA] [core] (JENKINS-9436) gui option for "hudson.model.WorkspaceCleanupThread.disabled"

2016-06-02 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber edited a comment on JENKINS-9436

Re: gui option for "hudson.model.WorkspaceCleanupThread.disabled"
This is one system property that needs a UI option. Deleting workspaces is pretty serious and warrants live control over the behavior.

I think there is a bug in the cleanup code still. We've been battling with disappearing workspaces and I could not figure out what was happening until I stumbled upon this thread. In our case, I moved a job from one slave to a different slave, but the cleanup code seemed to think it was OK to delete based on the old slave. The message in the log was "Deleting ... on ...". We haven't been using that old slave for this job for at least a few weeks. To make matters worse, it deleted the workspace WHILE the job was running on the new slave. This appears to be the trouble code:

{code:java}
for (Node node : nodes) {
    FilePath ws = node.getWorkspaceFor(item);
    if (ws == null) {
        continue; // offline, fine
    }
    boolean check;
    try {
        check = shouldBeDeleted(item, ws, node);
    } catch (IOException x) {
{code}

The first node it comes across for which shouldBeDeleted returns true causes the workspace to be deleted, even if another node (later in the list) is the last builder of that job (meaning the job is still active). This case is meant to be caught in shouldBeDeleted():

{code:java}
if (lb != null && lb.equals(n)) {
    // this is the active workspace. keep it.
    LOGGER.log(Level.FINE, "Directory {0} is the last workspace for {1}", new Object[] {dir, p});
    return false;
}
{code}

But since the for loop takes action before checking all nodes, this check can be pointless.


[JIRA] [core] (JENKINS-9436) gui option for "hudson.model.WorkspaceCleanupThread.disabled"

2016-06-02 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber edited a comment on JENKINS-9436

Re: gui option for "hudson.model.WorkspaceCleanupThread.disabled"
This is one system property that needs a UI option. Deleting workspaces is pretty serious and warrants live control over the behavior.

I think there is a bug in the cleanup code still. We've been battling with disappearing workspaces and I could not figure out what was happening until I stumbled upon this thread. In our case, I moved a job from one slave to a different slave, but the cleanup code seemed to think it was OK to delete based on the old slave. The message in the log was "Deleting ... on ...". We haven't been using that old slave for this job for at least a few weeks. To make matters worse, it deleted the workspace WHILE the job was running on the new slave. This appears to be the trouble code:

{code:java}
for (Node node : nodes) {
    FilePath ws = node.getWorkspaceFor(item);
    if (ws == null) {
        continue; // offline, fine
    }
    boolean check;
    try {
        check = shouldBeDeleted(item, ws, node);
    } catch (IOException x) {
{code}

The first node it comes across for which shouldBeDeleted returns true causes the workspace to be deleted, even if another node (later in the list) is the last builder of that job (meaning the job is still active). This case is meant to be caught in shouldBeDeleted():

{code:java}
if (lb != null && lb.equals(n)) {
    // this is the active workspace. keep it.
    LOGGER.log(Level.FINE, "Directory {0} is the last workspace for {1}", new Object[] {dir, p});
    return false;
}
{code}

But since the for loop takes action before checking all nodes, this check can be pointless.


[JIRA] [core] (JENKINS-9436) gui option for "hudson.model.WorkspaceCleanupThread.disabled"

2016-06-02 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber edited a comment on JENKINS-9436

Re: gui option for "hudson.model.WorkspaceCleanupThread.disabled"
This is one system property that needs a UI option. Deleting workspaces is pretty serious and warrants live control over the behavior.

I think there is a bug in the cleanup code still. We've been battling with disappearing workspaces and I could not figure out what was happening until I stumbled upon this thread. In our case, I moved a job from one slave to a different slave, but the cleanup code seemed to think it was OK to delete based on the old slave. The message in the log was "Deleting ... on ...". We haven't been using that old slave for this job for at least a few weeks. To make matters worse, it deleted the workspace WHILE the job was running on the new slave. This appears to be the trouble code:

{code:java}
for (Node node : nodes) {
    FilePath ws = node.getWorkspaceFor(item);
    if (ws == null) {
        continue; // offline, fine
    }
    boolean check;
    try {
        check = shouldBeDeleted(item, ws, node);
    } catch (IOException x) {
{code}

The first node it comes across for which shouldBeDeleted returns true causes the workspace to be deleted, even if another node (later in the list) is the last builder of that job (meaning the job is still active). This case is meant to be caught in shouldBeDeleted():

{code:java}
if (lb != null && lb.equals(n)) {
    // this is the active workspace. keep it.
    LOGGER.log(Level.FINE, "Directory {0} is the last workspace for {1}", new Object[] {dir, p});
    return false;
}
{code}

But since the for loop takes action before checking all nodes, this check can be pointless.


[JIRA] [core] (JENKINS-9436) gui option for "hudson.model.WorkspaceCleanupThread.disabled"

2016-06-02 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber edited a comment on JENKINS-9436

Re: gui option for "hudson.model.WorkspaceCleanupThread.disabled"
This is one system property that needs a UI option. Deleting workspaces is pretty serious and warrants live control over the behavior.

I think there is a bug in the cleanup code still. We've been battling with disappearing workspaces and I could not figure out what was happening until I stumbled upon this thread. In our case, I moved a job from one slave to a different slave, but the cleanup code seemed to think it was OK to delete based on the old slave. The message in the log was "Deleting ... on ...". We haven't been using that old slave for this job for at least a few weeks. To make matters worse, it deleted the workspace WHILE the job was running on the new slave. This appears to be the trouble code:

{code:java}
for (Node node : nodes) {
    FilePath ws = node.getWorkspaceFor(item);
    if (ws == null) {
        continue; // offline, fine
    }
    boolean check;
    try {
        check = shouldBeDeleted(item, ws, node);
    } catch (IOException x) {
{code}

The first node it comes across for which shouldBeDeleted returns true causes the workspace to be deleted, even if another node (later in the list) is the last builder of that job (meaning the job is still active). This case is meant to be caught in shouldBeDeleted():

{code:java}
if (lb != null && lb.equals(n)) {
    // this is the active workspace. keep it.
    LOGGER.log(Level.FINE, "Directory {0} is the last workspace for {1}", new Object[] {dir, p});
    return false;
}
{code}

But since the for loop takes action before checking all nodes, this check can be pointless.


[JIRA] [envinject-plugin] (JENKINS-13348) EnvInject overriding WORKSPACE variable

2016-04-21 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber commented on JENKINS-13348

Re: EnvInject overriding WORKSPACE variable
I should also note that the job affected by this does not have "Prepare an environment for the run" selected. EnvInject is still lurking, though, as this message is always present in stdout: [EnvInject] - Loading node environment variables.


[JIRA] [envinject-plugin] (JENKINS-19754) Jenkins/EnvInject incorrectly sets ${WORKSPACE} on slave node

2016-04-21 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber commented on JENKINS-19754

Re: Jenkins/EnvInject incorrectly sets ${WORKSPACE} on slave node
After recent upgrades (Jenkins 1.642.1, EnvInject 1.92.1), the workaround has become intermittent. Every once in a while, system environment variables show up, including the improper WORKSPACE. This has become a real problem for us, as there is effectively no longer a workaround.


[JIRA] [envinject-plugin] (JENKINS-13348) EnvInject overriding WORKSPACE variable

2016-04-21 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber commented on JENKINS-13348

Re: EnvInject overriding WORKSPACE variable
I see this issue as well. EnvInject 1.92.1, Jenkins 1.642.1. I originally posted this bug as https://issues.jenkins-ci.org/browse/JENKINS-19754
The workaround to get WORKSPACE set correctly is to:
○ Select "Prepare jobs environment" on the slave
○ Select "Unset System Environment Variables" on the slave
After recent upgrades of Jenkins and plugins I now see intermittent injection of WORKSPACE set to the slave root. I think the "unset system environment variables" option is being ignored intermittently. Passing builds have a small set of environment variables, which appear to be only Jenkins related. Failing builds have many more environment variables, especially system ones like HOST and the LSF environment variables. The passing cases do not have WORKSPACE set. This is causing a lot of grief for our work. I hope someone can look into it.


[JIRA] [core] (JENKINS-32567) Downgrade “termination trace” warnings in Jenkins logs

2016-03-02 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber commented on JENKINS-32567

Re: Downgrade “termination trace” warnings in Jenkins logs
Rolled back from 1.642.2 to 1.642.1 and the memory leak is gone. With 1.642.2 the leak doesn't show in the Java console or in the monitor plugin; I only see it in the memory usage reported by the top command. Resident memory grows until the machine eventually runs out of physical and swap memory. Using Java 1.7.0_79 on CentOS 5.9.


[JIRA] [core] (JENKINS-32567) Downgrade “termination trace” warnings in Jenkins logs

2016-03-02 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber commented on JENKINS-32567

Re: Downgrade “termination trace” warnings in Jenkins logs
We experienced a memory leak after upgrading to a version with this patch. We had been seeing ~600 MB of daily log file with these warnings present, but after the change we see unbounded growth in the memory footprint of the Java process. Has anyone else noticed this? It might not be evident if the warning wasn't firing that often for you.


[JIRA] [core] (JENKINS-32597) Log fills up with messages like: WARNING: Executor #4 for HOST : executing JOBNAME #171005 termination trace

2016-03-01 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber commented on JENKINS-32597

Re: Log fills up with messages like: WARNING: Executor #4 for HOST : executing JOBNAME #171005 termination trace
After picking up the fix in 1.646, I experienced a memory leak in Jenkins. The 6 GB host ran out of memory less than a week after I restarted Jenkins on this version. I wonder whether the fix simply moved the old log output into a memory buffer that is never reclaimed.


[JIRA] [core] (JENKINS-11889) Suspended slave do not start accepting tasks when policy changed

2016-02-09 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber updated an issue

Jenkins / JENKINS-11889
Suspended slave do not start accepting tasks when policy changed

Change By: Andrew Barber
Attachment: screenshot-1.png


[JIRA] [core] (JENKINS-11889) Suspended slave do not start accepting tasks when policy changed

2016-02-09 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber edited a comment on JENKINS-11889

Re: Suspended slave do not start accepting tasks when policy changed
It's 2016 and I still see this bug in 1.643. The Groovy console command has changed slightly:
Jenkins.instance.getNode("node name").toComputer().setAcceptingTasks(true)
!screenshot-1.png!
It's a shame that this type of issue has been sitting here so long.


[JIRA] [core] (JENKINS-11889) Suspended slave do not start accepting tasks when policy changed

2016-02-09 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber commented on JENKINS-11889

Re: Suspended slave do not start accepting tasks when policy changed
It's 2016 and I still see this bug in 1.643. The Groovy console command has changed slightly:
Jenkins.instance.getNode("node name").toComputer().setAcceptingTasks(true)
It's a shame that this type of issue has been sitting here so long.


[JIRA] [multijob-plugin] (JENKINS-19680) Multijob plugin removes subBuillds from list when job name is the same but build number is different (fix proposed)

2015-11-22 Thread ajbarbe...@gmail.com (JIRA)
Andrew Barber commented on JENKINS-19680

Re: Multijob plugin removes subBuillds from list when job name is the same but build number is different (fix proposed)
Sorry, I have never used Git and don't have the time right now to learn it. I have a tarball, based on an old version, that contains my edits; I can give it to someone.