[JIRA] [core] (JENKINS-23384) Stopping a Jenkins client marks many *other* clients as 'offline' in the Jenkins master
ciaranj created JENKINS-23384
Stopping a Jenkins client marks many *other* clients as offline in the Jenkins master
Issue Type: Bug
Affects Versions: current
Assignee: Unassigned
Components: core
Created: 10/Jun/14 10:18 AM
Description: It seems that when we manually restart a Jenkins slave agent running on a Windows client, a number of our other slaves are marked as permanently offline by the Jenkins master. We may have been running mixed slave versions (2.33 and 2.42), if that's relevant. (Notably, clients running 2.14 as a service appear unaffected.) The system log contained the following errors after we manually restarted the Jenkins slave on DEV-CI-SE-16:

Jun 10, 2014 10:55:51 AM WARNING org.jenkinsci.remoting.nio.NioChannelHub run
Communication problem
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
    at sun.nio.ch.IOUtil.read(IOUtil.java:224)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
    at org.jenkinsci.remoting.nio.FifoBuffer$Pointer.receive(FifoBuffer.java:136)
    at org.jenkinsci.remoting.nio.FifoBuffer.receive(FifoBuffer.java:306)
    at org.jenkinsci.remoting.nio.NioChannelHub.run(NioChannelHub.java:496)
    at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:679)

Jun 10, 2014 10:55:51 AM WARNING jenkins.slaves.JnlpSlaveAgentProtocol$Handler$1 onClosed
NioChannelHub keys=9 gen=23306: Computer.threadPoolForRemoting [#2] for DEV-CI-SE-16 terminated
java.io.IOException: Failed to abort
    at org.jenkinsci.remoting.nio.NioChannelHub$NioTransport.abort(NioChannelHub.java:184)
    at org.jenkinsci.remoting.nio.NioChannelHub.run(NioChannelHub.java:563)
    at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:679)
Caused by: java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
    at sun.nio.ch.IOUtil.read(IOUtil.java:224)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
    at org.jenkinsci.remoting.nio.FifoBuffer$Pointer.receive(FifoBuffer.java:136)
    at org.jenkinsci.remoting.nio.FifoBuffer.receive(FifoBuffer.java:306)
    at org.jenkinsci.remoting.nio.NioChannelHub.run(NioChannelHub.java:496)
    ... 7 more

Jun 10, 2014 10:55:51 AM WARNING org.jenkinsci.remoting.nio.NioChannelHub run
Failed to select
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:656)
    at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:378)
    at org.jenkinsci.remoting.nio.Closeables$1.close(Closeables.java:20)
    at org.jenkinsci.remoting.nio.NioChannelHub$MonoNioTransport.closeR(NioChannelHub.java:289)
    at org.jenkinsci.remoting.nio.NioChannelHub$NioTransport$1.call(NioChannelHub.java:226)
    at org.jenkinsci.remoting.nio.NioChannelHub$NioTransport$1.call(NioChannelHub.java:224)
    at org.jenkinsci.remoting.nio.NioChannelHub.run(NioChannelHub.java:474)
    at
[JIRA] [core] (JENKINS-22853) SEVERE: Trying to unexport an object that's already unexported
ciaranj edited a comment on JENKINS-22853
SEVERE: Trying to unexport an object that's already unexported
As requested by Kohsuke Kawaguchi, attached is the stack trace I'm seeing:

Trying to unexport an object that's already unexported
java.lang.IllegalStateException: Invalid object ID 197 iota=480
    at hudson.remoting.ExportTable.diagnoseInvalidId(ExportTable.java:277)
    at hudson.remoting.ExportTable.unexportByOid(ExportTable.java:300)
    at hudson.remoting.Channel.unexport(Channel.java:600)
    at hudson.remoting.UnexportCommand.execute(UnexportCommand.java:38)
    at hudson.remoting.Channel$2.handle(Channel.java:475)
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:60)
Caused by: java.lang.Exception: Object was recently deallocated
    #197 (ref.0) : hudson.CloseProofOutputStream
  Created at Wed Jun 04 16:11:05 BST 2014
    at hudson.remoting.ExportTable$Entry.<init>(ExportTable.java:86)
    at hudson.remoting.ExportTable.export(ExportTable.java:239)
    at hudson.remoting.Channel.export(Channel.java:592)
    at hudson.remoting.RemoteOutputStream.writeObject(RemoteOutputStream.java:82)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:959)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1480)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1416)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1528)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1493)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1416)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:346)
    at hudson.remoting.UserRequest._serialize(UserRequest.java:155)
    at hudson.remoting.UserRequest.serialize(UserRequest.java:164)
    at hudson.remoting.UserRequest.<init>(UserRequest.java:62)
    at hudson.remoting.Channel.call(Channel.java:738)
    at hudson.Launcher$RemoteLauncher.launch(Launcher.java:888)
    at hudson.Launcher$ProcStarter.start(Launcher.java:355)
    at hudson.Launcher$ProcStarter.join(Launcher.java:362)
    at hudson.plugins.msbuild.MsBuildBuilder.perform(MsBuildBuilder.java:180)
    at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
    at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:745)
    at hudson.model.Build$BuildExecution.build(Build.java:198)
    at hudson.model.Build$BuildExecution.doRun(Build.java:159)
    at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:518)
    at hudson.model.Run.execute(Run.java:1710)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
    at hudson.model.ResourceController.execute(ResourceController.java:88)
    at hudson.model.Executor.run(Executor.java:231)
  Released at Wed Jun 04 16:17:53 BST 2014
    at hudson.remoting.ExportTable$Entry.release(ExportTable.java:115)
    at hudson.remoting.ExportTable.unexportByOid(ExportTable.java:303)
    at hudson.remoting.Channel.unexport(Channel.java:600)
    at hudson.remoting.ProxyOutputStream$Unexport$1.run(ProxyOutputStream.java:352)
    at hudson.remoting.PipeWriter$1.run(PipeWriter.java:158)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:111)
    at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
    at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:679)
    at hudson.remoting.ExportTable.diagnoseInvalidId(ExportTable.java:270)
    ... 5 more
Caused by: Released at
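The "Invalid object ID ... iota=..." failure above comes from the remoting layer's export table: each exported object is handed an incrementing ID, and releasing an ID that has already been released is diagnosed as an invalid unexport. The following is a minimal sketch of that bookkeeping; the class and method names mirror the trace, but this is a hypothetical model, not the hudson.remoting source:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of export-table bookkeeping (not hudson.remoting code):
// objects are exported under an incrementing ID ("iota"), and unexporting an
// ID whose entry was already released raises the "Invalid object ID" error.
class ExportTableSketch {
    private final Map<Integer, Object> table = new HashMap<>();
    private int iota = 0; // next object ID to hand out

    synchronized int export(Object o) {
        table.put(iota, o);
        return iota++;
    }

    synchronized void unexportByOid(int oid) {
        if (table.remove(oid) == null) { // second release of the same oid
            throw new IllegalStateException("Invalid object ID " + oid + " iota=" + iota);
        }
    }
}

public class UnexportDemo {
    public static void main(String[] args) {
        ExportTableSketch t = new ExportTableSketch();
        int oid = t.export(new Object());
        t.unexportByOid(oid);                   // fine: releases the entry
        try {
            t.unexportByOid(oid);               // double unexport, as in the bug
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // Invalid object ID 0 iota=1
        }
    }
}
```

In the real bug report the double release apparently comes from two independent paths (the UnexportCommand from the remote side and the local ProxyOutputStream cleanup) racing to release the same entry.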
[JIRA] (JENKINS-16449) When a job hits 'Max # of builds to keep' Nunit Publisher starts to throw NPE
ciaranj created JENKINS-16449
When a job hits 'Max # of builds to keep' NUnit Publisher starts to throw NPE
Issue Type: Bug
Assignee: redsolo
Components: junit, nunit
Created: 23/Jan/13 9:01 AM
Description: It seems as though Jenkins somehow gets into a situation where, at the end of running a job, at the point of trying to publish NUnit results, it throws an NPE of the form:

ERROR: Publisher hudson.plugins.nunit.NUnitPublisher aborted due to exception
java.lang.NullPointerException
    at hudson.model.Run.getRootDir(Run.java:927)
    at hudson.tasks.junit.TestResultAction.getDataFile(TestResultAction.java:91)
    at hudson.tasks.junit.TestResultAction.load(TestResultAction.java:147)
    at hudson.tasks.junit.TestResultAction.getResult(TestResultAction.java:97)
    at hudson.tasks.junit.TestResultAction.getResult(TestResultAction.java:55)
    at hudson.tasks.test.AbstractTestResultAction.findCorrespondingResult(AbstractTestResultAction.java:183)
    at hudson.tasks.test.TestResult.getPreviousResult(TestResult.java:145)
    at hudson.tasks.junit.SuiteResult.getPreviousResult(SuiteResult.java:296)
    at hudson.tasks.junit.CaseResult.getPreviousResult(CaseResult.java:375)
    at hudson.tasks.junit.CaseResult.freeze(CaseResult.java:486)
    at hudson.tasks.junit.SuiteResult.freeze(SuiteResult.java:338)
    at hudson.tasks.junit.TestResult.freeze(TestResult.java:564)
    at hudson.tasks.junit.TestResultAction.setResult(TestResultAction.java:74)
    at hudson.tasks.junit.TestResultAction.<init>(TestResultAction.java:67)
    at hudson.plugins.nunit.NUnitPublisher.recordTestResult(NUnitPublisher.java:150)
    at hudson.plugins.nunit.NUnitPublisher.perform(NUnitPublisher.java:109)
    at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:36)
    at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:810)
    at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:785)
    at hudson.model.Build$BuildExecution.post2(Build.java:183)
    at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:732)
    at hudson.model.Run.execute(Run.java:1568)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
    at hudson.model.ResourceController.execute(ResourceController.java:88)
    at hudson.model.Executor.run(Executor.java:236)

Once this starts happening, it continues to happen for every build of that particular job. A restart does not stop this behaviour. We noted a while back that deleting the build history from the filesystem for this job does allow it to carry on successfully, until the error appears again. Very recently a team member noted that the number of builds in the history for that job was equal (or thereabouts) to the value stored in the configuration setting 'Max # of builds to keep' (in our case 100). This could be a coincidence, and I've upped the setting to 200 to see if builds start working again (but it takes ~6 hours to reach the failing job), but it does look like a smoking gun. I had hoped it would be related to https://issues.jenkins-ci.org/browse/JENKINS-16194, but having upgraded to the latest version we still see this issue. I'm mostly logging this in case others are seeing it.
Environment: Ubuntu Wheezy x86_64. Jenkins 1.499 (and on previous 1.496, 1.497, 1.498, possibly earlier)
Project: Jenkins
Priority: Major
Reporter: ciaranj
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators.
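The trace above NPEs inside Run.getRootDir while the JUnit code walks back through previous test results, which fits the reporter's observation that failures begin once the build count reaches 'Max # of builds to keep'. A hypothetical minimal model of that failure mode (BuildRecord and previousResultDir are illustrative names, not Jenkins code): the trend walk follows previous-build links, and a record deleted by the build-discard rotation no longer has a root directory to load results from.

```java
import java.io.File;

// Hypothetical model of the suspected failure mode (illustrative names,
// not Jenkins source): the test-trend code walks previous-build links,
// and a build deleted by the "Max # of builds to keep" rotation has no
// root directory left to load results from.
class BuildRecord {
    final File rootDir;         // null once the rotation deleted the record
    final BuildRecord previous; // link to the previous build, or null

    BuildRecord(File rootDir, BuildRecord previous) {
        this.rootDir = rootDir;
        this.previous = previous;
    }

    // Defensive walk: skip rotated-away records instead of dereferencing
    // a null rootDir (which is roughly what the NPE in the trace amounts to).
    static File previousResultDir(BuildRecord build) {
        for (BuildRecord p = build.previous; p != null; p = p.previous) {
            if (p.rootDir != null) {
                return new File(p.rootDir, "junitResult.xml");
            }
        }
        return null; // no surviving previous build: no trend to compute
    }
}

public class TrendWalkDemo {
    public static void main(String[] args) {
        BuildRecord b1 = new BuildRecord(new File("builds/1"), null);
        BuildRecord b2 = new BuildRecord(null, b1); // rotated away
        BuildRecord b3 = new BuildRecord(new File("builds/3"), b2);
        // The walk skips the deleted build #2 and lands on build #1
        // (prints builds/1/junitResult.xml with Unix path separators).
        System.out.println(BuildRecord.previousResultDir(b3));
    }
}
```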
[JIRA] (JENKINS-16449) When a job hits 'Max # of builds to keep' Nunit Publisher starts to throw NPE
ciaranj updated JENKINS-16449
When a job hits 'Max # of builds to keep' NUnit Publisher starts to throw NPE
Change By: ciaranj (23/Jan/13 9:34 AM)
Description: It seems as though Jenkins somehow gets into a situation where, at the end of running a job, at the point of trying to publish NUnit results, it throws an NPE of the form:

ERROR: Publisher hudson.plugins.nunit.NUnitPublisher aborted due to exception
java.lang.NullPointerException
    at hudson.model.Run.getRootDir(Run.java:927)
    [stack trace identical to the one in the original report]
    at hudson.model.Executor.run(Executor.java:236)

Once this starts happening, it continues to happen for every build of that particular job. A restart does not stop this behaviour. We noted a while back that deleting the build history from the filesystem for this job does allow it to carry on successfully, until the error appears again. Very recently a team member noted that the number of builds in the history for that job was equal (or thereabouts; most recent #405, oldest #306) to the value stored in the configuration setting 'Max # of builds to keep' (in our case 100). This could be a coincidence, and I've upped the setting to 200 to see if builds start working again (but it takes ~6 hours to reach the failing job), but it does look like a smoking gun :) I had hoped it would be related to [https://issues.jenkins-ci.org/browse/JENKINS-16194] but having upgraded to the latest version we still see this issue. I'm mostly logging this in case others are seeing it ;)
[JIRA] (JENKINS-14321) Git plugin's 'Fast remote polling' uses slave environment, not master environment.
ciaranj reopened JENKINS-14321
Git plugin's 'Fast remote polling' uses slave environment, not master environment.
Sorry Nicolas, this fix does not appear to fix the problem that my original pull request did. I've re-submitted a new pull request based on your commit, here: https://github.com/jenkinsci/git-plugin/pull/102
Change By: ciaranj (03/Oct/12 7:07 PM)
Resolution: Fixed
Status: Resolved → Reopened
[JIRA] (JENKINS-12667) No possibility to specify Workspace Root Directory for Slave node
ciaranj edited a comment on JENKINS-12667
No possibility to specify Workspace Root Directory for Slave node
I looked into this myself, as I would like to have a set of VMs, each running a Jenkins slave (single executor), but all building to and from the same disk. (I'm running into problems with disk space due to multiple copies of the same job sitting redundantly on each VM's disk; as the same job (for me) never builds concurrently, I can't see a reason why I shouldn't be able to do this, disk IO performance aside.) But because the location of the Jenkins slave (JENKINS_HOME) is intrinsically linked to the location of the workspace (JENKINS_HOME/workspace) (on slaves anyway), I can't have all those slaves sharing the same workspace folder at the moment, as they would all try (and fail) to overwrite their working files in the JENKINS_HOME location (e.g. jenkins-slave.out, jenkins-slave, slave.jar). (FTR I've tried sym-linking each slave's 'workspace' folder to the shared disk, but to compound things msysgit then appears to stop cloning!) It looks like there was an intention to implement this in JENKINS-8446, but a deliberate decision was made to avoid it. Is it worth me spending any time implementing this, or would that effort just be rejected as it isn't part of the intended strategy?
[JIRA] (JENKINS-12667) No possibility to specify Workspace Root Directory for Slave node
ciaranj edited a comment on JENKINS-12667
No possibility to specify Workspace Root Directory for Slave node
As an update, further reading of the code has shown that it is possible to achieve what I (and possibly the issue author) requested, by specifying a system property on startup of the Jenkins master node. I've updated my Jenkins startup script to pass:

-Dhudson.model.Slave.workspaceRoot=e:/

This is interpreted separately from the JENKINS_HOME variable, so I'm able to have separate local Jenkins working files but a shared disk location for all the slave workspaces (in my case the E: drive; note the value passed can be specified absolutely, or relative to JENKINS_HOME), which is exactly what I wanted to achieve. Be aware this will affect all nodes, however!
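The behaviour described in the comment above can be sketched as follows. This is a hypothetical model based only on what the comment says, not on the actual hudson.model.Slave source: an absolute value is taken as-is, while a relative value (defaulting to "workspace") resolves against the node's home directory.

```java
import java.io.File;

// Hypothetical sketch of the workspaceRoot override as described in the
// comment above (not the actual hudson.model.Slave code): absolute values
// are used as-is, relative values resolve against the node's home
// directory, and "workspace" is the default. Unix-style paths assumed.
public class WorkspaceRootDemo {
    static File workspaceRoot(File nodeHome) {
        String prop = System.getProperty("hudson.model.Slave.workspaceRoot", "workspace");
        File f = new File(prop);
        return f.isAbsolute() ? f : new File(nodeHome, prop);
    }

    public static void main(String[] args) {
        File home = new File("/var/jenkins"); // illustrative path
        System.out.println(workspaceRoot(home)); // default: /var/jenkins/workspace
        System.setProperty("hudson.model.Slave.workspaceRoot", "/shared/ws");
        System.out.println(workspaceRoot(home)); // override: /shared/ws
    }
}
```

As the comment warns, a JVM system property set on the master applies globally, so every node ends up with the same workspace-root setting; the sketch re-reads the property per call only for illustration.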