Yes, this is exactly where I noticed it as well (using the Cobertura plugin).
Personally, I think a better way to solve the issue for this plugin (and the
warnings plugin) would be for them to construct links back to the web view of
the SCM where the code came from, assuming the job's SCM provides one.
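A minimal sketch of what constructing such a link could look like, assuming a GitHub-style "blob" web view; the repository URL, commit, and file path below are all hypothetical, and other SCM browsers use different URL patterns:

```shell
# Sketch: build a stable web-view link from SCM metadata instead of
# pointing at the transient workspace copy of the file.
repo_url="https://github.com/example/project"   # hypothetical repository
commit="0a1b2c3"                                # commit the build checked out
path="src/main/java/Foo.java"                   # file referenced by the report

link="${repo_url}/blob/${commit}/${path}"
echo "$link"
```

Such a link stays valid regardless of whether the slave (or its workspace) still exists.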
The problem for us is that there are links to artifacts in Jenkins that are
'dead' because the content they point to is removed after the server is shut
down. An example is code coverage information. I wouldn't mind if the content
was overwritten with the latest data, but having absolutely no data at all is
worse.
I myself have rarely considered this a problem. The workspace will be
overwritten by the next build anyway, so it is better not to rely on its
existence or contents.
If I need to debug a problem with a job, I either:
* Disable the job to prevent the next build from overwriting the workspace, or
* Copy the workspace contents somewhere safe first.
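The second option can be sketched with ordinary shell tools on the node itself; the paths below are illustrative stand-ins, not a real Jenkins layout:

```shell
# Sketch: preserve a workspace snapshot before the next build overwrites it.
# On a real node the workspace lives under the agent's remote FS root,
# e.g. /var/jenkins/workspace/<job>; the /tmp paths here are stand-ins.
workspace="/tmp/demo-workspace"
archive="/tmp/workspace-archive/snapshot"

mkdir -p "$workspace"
echo "coverage data" > "$workspace/coverage.xml"   # pretend build output

mkdir -p "$archive"
cp -R "$workspace/." "$archive/"                   # recursive copy of contents
ls "$archive"
```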
This is my use case as well (slaves on EC2), and you are correct. I wouldn't
want to copy the entire workspace to the master after every job (that would be
quite time- and disk-space-consuming), but not having it available later is
annoying.
I've got items on my to-do list to investigate whether this can be improved.
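One cheaper alternative (a sketch of the idea, not anything the plugins do today) would be to copy back only the small report files rather than the entire workspace; all paths and file names here are hypothetical:

```shell
# Sketch: copy only the small XML reports the plugins need, leaving the
# large build outputs behind on the slave. Paths are illustrative.
workspace="/tmp/demo-ws"
reports="/tmp/demo-reports"

mkdir -p "$workspace/target/site" "$reports"
echo "<coverage/>" > "$workspace/target/site/cobertura.xml"   # pretend report
dd if=/dev/zero of="$workspace/big-artifact.bin" bs=1024 count=64 2>/dev/null

# Copy only *.xml reports; the large binary artifact stays on the slave
find "$workspace" -name '*.xml' -exec cp {} "$reports" \;
ls "$reports"
```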
OK, thanks for the explanation. In the case of using EC2 slaves that come
and go quite frequently, this seems to be problematic. I see there is a
plugin that allows for copying from a slave to the master, but even then it
seems unlikely the workspace link would function correctly.
Yes, this is how it is expected to work. Access to the workspace is performed
through the open connection to the Jenkins slave on the node; the workspace is
not copied to the Jenkins master. This causes problems in some areas; for
example, the Warnings plugin does not copy referenced source files.
I see the same behavior and certainly assume it is by design. The
workspaces are likely still there and can be accessed directly if
you're on that node itself, just not through Jenkins.
Scott
On Tue, Mar 26, 2013 at 12:30 PM, Robert Moore wrote:
> This behavior seems new to me but perhaps I've just overlooked it in the past.
This behavior seems new to me but perhaps I've just overlooked it in the
past. We are building on slave nodes, and after the nodes have gone offline we
are not able to access the workspace (we see "Error: no workspace" when trying
to access it). Is this by design or a bug?
Thanks,
Rob