[JIRA] (JENKINS-38764) Nodes allocated inside of parallel() should have their workspaces removed immediately
Nisarg Shah assigned an issue to CloudBees Inc.

Jenkins / JENKINS-38764: Nodes allocated inside of parallel() should have their workspaces removed immediately
Change By: Nisarg Shah
Assignee: CloudBees Inc.

This message was sent by Atlassian Jira (v7.11.2#711002-sha1:fdc329d)
Nisarg Shah assigned an issue to Nisarg Shah

Change By: Nisarg Shah
Assignee: CloudBees Inc. → Nisarg Shah
Nisarg Shah assigned an issue to Unassigned

Change By: Nisarg Shah
Assignee: Nisarg Shah → Unassigned
Jesse Glick resolved JENKINS-38764 as Not A Defect

> I don't think these workspaces serve any reasonable purpose after the parallel() step has completed since you cannot browse or do anything else with them.

Nothing to do with parallel, and not much to do with Pipeline either. The UI offers workspace browsing links in Pipeline Steps (not for @tmp variants, by design). There is an open RFE to offer a clearer UI. Anyway, the main purpose of keeping workspaces around after the lock has been released is to allow subsequent builds to reuse the workspace as an optimization.

> They should be removed to free up disk resources for other jobs on the agent.

There is a filed RFE to move the workspace cleanup thread to a plugin. Follow-up features could include aggressive removal of unlocked workspaces depending on disk-space levels (currently anything used in the past month is left untouched).

Change By: Jesse Glick
Status: Open → Resolved
Resolution: Not A Defect
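Since Jenkins deliberately keeps unlocked workspaces around for reuse, a job that wants them gone immediately can delete them itself at the end of each parallel branch using the standard deleteDir step. A minimal sketch (branch names, labels, and build commands are made up for illustration):

```groovy
// Hypothetical Jenkinsfile fragment: clean up each branch's workspace
// as soon as that branch finishes, instead of waiting for the
// periodic workspace cleanup thread.
parallel(
    'linux': {
        node('linux') {
            try {
                checkout scm
                sh 'make test'        // hypothetical build/test step
            } finally {
                deleteDir()           // remove this node's workspace right away
            }
        }
    },
    'windows': {
        node('windows') {
            try {
                checkout scm
                bat 'run-tests.bat'   // hypothetical build/test step
            } finally {
                deleteDir()
            }
        }
    }
)
```

The try/finally ensures the workspace is removed even when the branch fails, at the cost of giving up the reuse optimization Jesse describes.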
R. Tyler Croy commented on JENKINS-38764

Christoph Obexer, there are already tools which support that. There's the ws step in Pipeline already, and additionally the External Workspace Manager plugin, which might more properly address your use case. Workspaces persist between builds as an optimization, but they're definitely intended to be more ephemeral (thus stash/unstash and archiving of artifacts).
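A sketch of the two alternatives Tyler mentions: the ws step pins work to a fixed workspace path that survives across builds, while stash/unstash treats workspaces as disposable and moves results between nodes. Paths, labels, and build commands here are illustrative, not from the ticket:

```groovy
// Hypothetical Jenkinsfile fragment contrasting the two approaches.
node('linux') {
    ws('/var/jenkins/shared/myproject') {   // fixed, reusable workspace path
        checkout scm
        sh 'make'                           // hypothetical build step
        stash name: 'binaries', includes: 'out/**'
    }
}
node('windows') {
    unstash 'binaries'    // results arrive without sharing a workspace
    bat 'sign.bat out'    // hypothetical follow-up step
}
```

With stash/unstash the workspace on either node can be wiped at any time without losing the build results, which is the "ephemeral" model Tyler describes.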
Christoph Obexer commented on JENKINS-38764

I use parallel to build our software for multiple platforms (CentOS 5, 6, 7, Windows, ...). I NEED the allocated workspaces on those nodes (7 in total) every build, twice every build actually. Maybe you need to configure Docker on ci.jenkins.io to have better cleanup there?
R. Tyler Croy created an issue

Jenkins / JENKINS-38764: Nodes allocated inside of parallel() should have their workspaces removed immediately
Issue Type: Bug
Assignee: CloudBees Inc.
Components: pipeline
Created: 2016/Oct/06 12:03 AM
Environment: Jenkins 2.7.4
Priority: Minor
Reporter: R. Tyler Croy

I noticed some long-lived agents on ci.jenkins.io start to reach capacity on their disks. The major culprit seems to be these just-in-time allocated workspaces created by the parallel step. From the file system of an agent in question:

272M  Core_jenkins_PR-2560-SLUUKE4ANV5FD5D67BJ6QJXT7I6A5KK7OK5XKDDLLMGUC2SH3DNA
4.0K  Core_jenkins_PR-2560-SLUUKE4ANV5FD5D67BJ6QJXT7I6A5KK7OK5XKDDLLMGUC2SH3DNA@tmp
193M  cture_jenkins-infra_staging-USM6F6JS6HK2JGY2BJ5HZ5TWAVMAIHCFNV6IJD37YMEUECW3O3EQ
4.0K  cture_jenkins-infra_staging-USM6F6JS6HK2JGY2BJ5HZ5TWAVMAIHCFNV6IJD37YMEUECW3O3EQ@tmp
35G   fra_infra-statistics_master-BQS7QBCYM7MBZLAZ2RN2ZHFUGONNKIGZDAM3XOWNBMQGUZL7RBLA
4.0K  fra_infra-statistics_master-BQS7QBCYM7MBZLAZ2RN2ZHFUGONNKIGZDAM3XOWNBMQGUZL7RBLA@tmp
177M  Infra_jenkins-infra_staging-KTU3HUJ4E475OGVM7LIUJKPX5J7XVRNW3NLLJ3INKWD4D5MXWZZQ
8.0M  Infra_jenkins-infra_staging-KTU3HUJ4E475OGVM7LIUJKPX5J7XVRNW3NLLJ3INKWD4D5MXWZZQ@2
4.0K  Infra_jenkins-infra_staging-KTU3HUJ4E475OGVM7LIUJKPX5J7XVRNW3NLLJ3INKWD4D5MXWZZQ@2@tmp
177M  Infra_jenkins-infra_staging-KTU3HUJ4E475OGVM7LIUJKPX5J7XVRNW3NLLJ3INKWD4D5MXWZZQ@3
4.0K  Infra_jenkins-infra_staging-KTU3HUJ4E475OGVM7LIUJKPX5J7XVRNW3NLLJ3INKWD4D5MXWZZQ@3@tmp
4.0K  Infra_jenkins-infra_staging-KTU3HUJ4E475OGVM7LIUJKPX5J7XVRNW3NLLJ3INKWD4D5MXWZZQ@tmp
446M