If your last hurdle here is to get the labeling working, can you use the
Perforce command line via a batch file build step to apply the label?
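
For example, something along these lines in an "Execute Windows batch command"
build step might work; the client name, label prefix and depot path below are
just placeholders for your setup:

    rem Hypothetical batch build step; adjust client, label and path to suit.
    rem "p4 tag" labels only the revisions you name (here, the have revisions
    rem under one depot path), so the label stays scoped to this job's files
    rem rather than the whole client view.
    p4 -c shared_client tag -l build-%BUILD_NUMBER% //depot/projectA/...#have

p4 tag should create the label if it does not already exist, and %BUILD_NUMBER%
is the standard Jenkins environment variable.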

From: jenkinsci-users@googlegroups.com 
[mailto:jenkinsci-users@googlegroups.com] On Behalf Of Steinmetz, Jean-Philippe
Sent: Tuesday, March 25, 2014 5:24 PM
To: jenkinsci-users@googlegroups.com
Subject: Re: Shared Perforce workspace

There are two issues I am trying to resolve with the shared workspace. First,
while each job is considered independent (code from one part of the workspace
doesn't rely on another part), the exact drive location and folder structure
are very important to maintain. There are a lot of pre-existing build scripts
and environment settings that rely on the Perforce workspace being in exactly
one place on the machine.

When I originally posted this question I had set up each job to share the same
p4 client spec but be able to modify the view, while still specifying the same
workspace root for each job (e.g. D:\workspace). This allowed each job to sync
and work against only the files it needs. However, in doing this I found that
each time a job changed the view of the client spec and synced, it would delete
all the files previously synced by another job. This is a major issue, as each
job's particular view of the client spec can be 40GB or more.
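
For reference, the shared client spec in that setup looked roughly like this
(client and depot names here are made up):

    Client: shared_client
    Root:   D:\workspace
    View:
        //depot/projectA/... //shared_client/projectA/...

When another job rewrites View to point at, say, //depot/projectB/... and then
syncs, Perforce removes the projectA files because they are no longer mapped by
the client, which is the deletion described above.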

What I currently have is that no job is allowed to modify the client spec's
view; instead each job uses a view mask for syncing. While this eliminates the
problem of jobs deleting previously synced folders, I now cannot create
meaningful p4 labels, as they get applied to the entire depot tree instead of
just the particular job's view where the label matters.
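
For reference, this is roughly the kind of label I am after, i.e. one whose
View only covers the job's own sub-tree rather than the whole depot (label
name, owner and path are placeholders):

    Label: jobA-build-1234
    Owner: jenkins
    View:
        //depot/projectA/...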

Ideally I should be able to specify the shared workspace for all jobs (e.g.
D:\workspace) in order to ensure that the environment is properly set up, then
have each job modify the client spec when polling/syncing in order to do its
work, and still be able to label the limited view appropriately, all without
deleting any files previously synced under a previous client spec. Is this
possible?

Alternatively, I have thought about using my existing strategy of not allowing
any job to modify the client spec, and then making a custom build of the
Perforce plugin that would allow me to use the View Mask when applying a label.

On Mon, Mar 24, 2014 at 10:19 AM, Gareth Bowles <gbow...@gmail.com> wrote:
Does every build need to sync the entire depot? That's very unusual;
individual builds that get code from Perforce normally use a restricted client 
view to get only the sub-paths from the depot that they need.  In that 
situation, each Jenkins job would use a different client view and you shouldn't 
see much of a performance impact if you clean the workspace before each build.
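
For instance, a per-job client along these lines (client name, root and depot
path are placeholders):

    Client: jenkins-jobA
    Root:   D:\workspace\jobA
    View:
        //depot/projectA/... //jenkins-jobA/projectA/...

Each job then only ever syncs (and labels) the sub-tree it actually builds.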


On Sunday, March 23, 2014 7:35:14 AM UTC-7, rginga wrote:
It certainly sounds like the Perforce server's have list (what it thinks is
already in the workspace) is different from what is actually there.

My first thought is that none of these jobs can run at the same time.
Second, an incremental update should be much shorter. Is that not short enough
for all jobs to use the exact same workspace definition?
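
For example, from a command prompt on the build machine (the client name is a
placeholder):

    rem Show what the server's have list says is in the workspace.
    p4 -c jobA_client have
    rem An incremental sync only transfers files whose have revision differs
    rem from head, so with an unchanged view it is normally quick.
    p4 -c jobA_client sync
    rem A force sync (-f) re-transfers everything and is what makes a large
    rem depot take many minutes.
    p4 -c jobA_client sync -f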

From: jenkins...@googlegroups.com [mailto:jenkins...@googlegroups.com] On
Behalf Of jpste...@theworkshop.us.com

Sent: Thursday, March 20, 2014 3:17 PM
To: jenkins...@googlegroups.com
Subject: Shared Perforce workspace

Hello,

I have a Jenkins master that is set up to share the same workspace for
Perforce. This is primarily because the depot is more than 40GB in size and
syncing it per job is prohibitive. The jobs are also set up to share a single
Perforce client workspace, where each job modifies the client view to sync
only the files it needs.

The problem I am finding with this setup, however, is that each time a job
runs, the files that were previously synced are gone. As far as I can tell
this is happening either because the job is cleaning up the workspace after it
is done, or because another job switches the client view, which causes the
directories to be cleaned up. As you might imagine, with such a large depot
this is very problematic. A single job takes at least 15 minutes to sync each
time, when in reality it shouldn't take more than a few seconds.

I have double-checked all of my settings: everything pertaining to cleaning up
workspaces is unchecked, force syncing is disabled, and so on. Does anyone know
why this might be happening? Perhaps there is a better way to share the
workspace for these jobs?

Thanks in advance,

Jean-Philippe Steinmetz