Re: auto updating job's nextBuildNumber after restoring build history from another controller?

2021-06-27 Thread Tim Black
A better way to reset all nextBuildNumber files after restoring build 
history is to delete all of them and restart Jenkins. Jenkins seems to 
recreate the files for every job based on the most recent build detected in 
the history.

On Saturday, June 26, 2021 at 12:41:07 PM UTC-7 Tim Black wrote:

> Started down the path of inventing a solution with groovy, following the 
> below model for a single branch job:
>
> def jobName = "MyProject/develop"
> def job = Jenkins.instance.getItemByFullName(jobName)
> lastBuild = job.getLastBuild()
> println("last Build number is " + lastBuild.getNumber())
> println("job.nextBuildNumber is currently " + job.nextBuildNumber)
> job.updateNextBuildNumber(lastBuild.getNumber())
> println("job.nextBuildNumber was set to " + job.nextBuildNumber)
>
> This prints out the right thing but subsequently I have to click build 
> twice to actually trigger the next (correct) build number.
>
> Anyone know of a better way to implement this?
>
> On Saturday, June 26, 2021 at 12:07:30 PM UTC-7 Tim Black wrote:
>
>> Is there a way to make Jenkins automatically detect the latest build from 
>> the history and automatically update the nextBuildNumber file? 
>>
>> We are using the "jenkins.model.Jenkins.buildsDir" system property 
>> <https://www.jenkins.io/doc/book/managing/system-properties/> to 
>> separate jobs/ from builds/. When provisioning a new staging environment 
>> for our jenkins controller, we restore the build history from the 
>> production controller by rsyncing their buildsDirs. 
>>
>> On restart of the fresh new staging controller, the GUI for each job page 
>> correctly shows the build history, but the nextBuildNumber files are not 
>> updated; they remain in their initial state. Because these indicate that 
>> the next build number is 1, when I click Build, nothing happens, and an 
>> error is logged in /var/log/jenkins/jenkins.log indicating that build 
>> number 1 already exists and cannot be overwritten. The nextBuildNumber 
>> file then gets incremented, so the next attempt complains about build 2 
>> existing, and so forth. Presumably this will repeat until the next build 
>> number reaches one past the most recent build in the history that was 
>> rsynced over from production.
>>
>> Are we going to have to script a solution, or is there a way to make 
>> Jenkins automatically detect the latest build from the history and 
>> automatically update the nextBuildNumber file?
>>
>> We're using Jenkins CasC to manage all jenkins configuration, and the 
>> above system tries to implement a solution for backing up and restoring 
>> build history, to provide build history continuity across promotions of 
>> jenkins clusters across dev/test/staging/prod environments.
>>
>



Re: auto updating job's nextBuildNumber after restoring build history from another controller?

2021-06-26 Thread Tim Black
Started down the path of inventing a solution with groovy, following the 
below model for a single branch job:

def jobName = "MyProject/develop"
def job = Jenkins.instance.getItemByFullName(jobName)
lastBuild = job.getLastBuild()
println("last Build number is " + lastBuild.getNumber())
println("job.nextBuildNumber is currently " + job.nextBuildNumber)
job.updateNextBuildNumber(lastBuild.getNumber())
println("job.nextBuildNumber was set to " + job.nextBuildNumber)

This prints out the right thing but subsequently I have to click build 
twice to actually trigger the next (correct) build number.

Anyone know of a better way to implement this?

On Saturday, June 26, 2021 at 12:07:30 PM UTC-7 Tim Black wrote:

> Is there a way to make Jenkins automatically detect the latest build from 
> the history and automatically update the nextBuildNumber file? 
>
> We are using the "jenkins.model.Jenkins.buildsDir" system property 
> <https://www.jenkins.io/doc/book/managing/system-properties/> to separate 
> jobs/ from builds/. When provisioning a new staging environment for our 
> jenkins controller, we restore the build history from the production 
> controller by rsyncing their buildsDirs. 
>
> On restart of the fresh new staging controller, the GUI for each job page 
> correctly shows the build history, but the nextBuildNumber files are not 
> updated; they remain in their initial state. Because these indicate that 
> the next build number is 1, when I click Build, nothing happens, and an 
> error is logged in /var/log/jenkins/jenkins.log indicating that build 
> number 1 already exists and cannot be overwritten. The nextBuildNumber 
> file then gets incremented, so the next attempt complains about build 2 
> existing, and so forth. Presumably this will repeat until the next build 
> number reaches one past the most recent build in the history that was 
> rsynced over from production.
>
> Are we going to have to script a solution, or is there a way to make 
> Jenkins automatically detect the latest build from the history and 
> automatically update the nextBuildNumber file?
>
> We're using Jenkins CasC to manage all jenkins configuration, and the 
> above system tries to implement a solution for backing up and restoring 
> build history, to provide build history continuity across promotions of 
> jenkins clusters across dev/test/staging/prod environments.
>



auto updating job's nextBuildNumber after restoring build history from another controller?

2021-06-26 Thread Tim Black
Is there a way to make Jenkins automatically detect the latest build from 
the history and automatically update the nextBuildNumber file? 

We are using the "jenkins.model.Jenkins.buildsDir" system property 
<https://www.jenkins.io/doc/book/managing/system-properties/> to separate 
jobs/ from builds/. When provisioning a new staging environment for our 
jenkins controller, we restore the build history from the production 
controller by rsyncing their buildsDirs. 
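Roughly, the restore step looks like this (a sketch — the host name and
paths are placeholders for our actual buildsDir locations):

```
# mirror the production builds tree into the new controller's buildsDir
rsync -a --delete jenkins-prod:/var/lib/jenkins/builds/ /var/lib/jenkins/builds/
```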

On restart of the fresh new staging controller, the GUI for each job page 
correctly shows the build history, but the nextBuildNumber files are not 
updated; they remain in their initial state. Because these indicate that 
the next build number is 1, when I click Build, nothing happens, and an 
error is logged in /var/log/jenkins/jenkins.log indicating that build 
number 1 already exists and cannot be overwritten. The nextBuildNumber 
file then gets incremented, so the next attempt complains about build 2 
existing, and so forth. Presumably this will repeat until the next build 
number reaches one past the most recent build in the history that was 
rsynced over from production.

Are we going to have to script a solution, or is there a way to make 
Jenkins automatically detect the latest build from the history and 
automatically update the nextBuildNumber file?

We're using Jenkins CasC to manage all jenkins configuration, and the above 
system tries to implement a solution for backing up and restoring build 
history, to provide build history continuity across promotions of jenkins 
clusters across dev/test/staging/prod environments.



Re: Default Admin API Token

2021-06-26 Thread Tim Black
Daniel Beck, as I commented on the related closed Jenkins issue, I can't 
understand the usefulness of this feature for automated installs, which 
inherently will be setting `jenkins.install.runSetupWizard = false`. Can 
you help me understand the use case for this PR?

I'm trying to achieve essentially the same thing (I want to pre-seed an API 
token to enable API access during initial Jenkins provisioning), and months 
ago came to learn (via Gitter) of the new 
-Djenkins.install.SetupWizard.adminInitialApiToken option. I'm on 2.289.1, 
and -Djenkins.install.SetupWizard.adminInitialApiToken=... simply has never 
worked for me. The docs for this option indicate that it:

> determines the behavior during the SetupWizard install phase concerning 
the API Token creation for the initial admin account.

So, it would seem that for automated installs like ours, which must disable 
the setup wizard, this option is ineffectual by design. No? 

On Gitter, Tim Jacomb pointed me to the PR that introduced this feature, by 
Wadeck Follonier, whose description seems to indicate this is by design:

> No impact once an instance is configured.

Any advice? I tried it with and without running the setup wizard, and I've 
never been able to use the token to authenticate as my admin user in a new 
Jenkins instance.

So far, the big items I want to use the API for during provisioning are 
mostly putting jenkins into quietDown mode and safeRestarting - the former 
only useful during subsequent ansible provisioning, and the latter, which 
can also be done by simply restarting the jenkins service, is required to 
absorb the build history which we're copying over from the production 
jenkins controller during provisioning.

More importantly, having a pre-baked API token would be highly useful for 
implementing test automation in the provisioning (ansible) playbooks. Using 
CasC to automate configuration is great, but insufficient in the same way 
that adding a new feature to any software project is insufficient until 
there are automated tests that can validate the new behavior.

Should I just give up on the -Djenkins.install.SetupWizard.adminInitialApiToken 
approach and roll my own automation using the old-school crumb (CSRF token) 
approach?
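Something like the following is what I have in mind — an untested sketch,
where $JENKINS_URL and admin:s3cret are placeholders for our real URL and
credentials:

```
# fetch a CSRF crumb, then use it to POST to the quietDown endpoint
CRUMB=$(curl -s -u admin:s3cret "$JENKINS_URL/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)")
curl -s -X POST -u admin:s3cret -H "$CRUMB" "$JENKINS_URL/quietDown"
```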
On Tuesday, December 8, 2020 at 12:54:30 PM UTC-8 Daniel Beck wrote:

>
>
> > On 1. Dec 2020, at 11:40, Shahbaz Subedar  wrote:
> > 
> > -Djenkins.install.runSetupWizard="false"
> > 
> -Djenkins.install.SetupWizard.ADMIN_INITIAL_API_TOKEN=11b9b3fafe25923768621ca1b64d44bfd1
> > 
>
> You're disabling the setup wizard, and then setting an option that is
>
> > only used before/during the Setup Wizard
>
>
>



Re: Move location of build history, separate from config

2021-06-11 Thread Tim Black
I'm curious: now that this config setting is gone and there is instead the 
"jenkins.model.Jenkins.buildsDir" system property 
<https://www.jenkins.io/doc/book/managing/system-properties/>, are you now 
using that to separate your jobs/ from your builds/?
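For context, we set ours roughly like this (a sketch — in practice the flag
lives in the service's JAVA_OPTS rather than a bare java invocation):

```
# keep build records under a separate builds/ tree instead of inside jobs/
java -Djenkins.model.Jenkins.buildsDir='${JENKINS_HOME}/builds/${ITEM_FULL_NAME}' -jar jenkins.war
```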

We're using Jenkins CasC to manage all jenkins configuration, and are 
trying to implement a solution for backing up and restoring build history, 
to provide build history continuity across promotions of jenkins clusters 
across dev/test/staging/prod environments. The previous plan was just to 
back up the jobs/ dir (we're using restic to make external backups from a 
simple hourly cron job), but since CasC also writes to this area, I'm 
thinking that separating jobs/ from builds/ in this way would provide some 
insurance against these two mechanisms unforeseeably interfering with one 
another.

Thoughts?

On Wednesday, March 29, 2017 at 6:47:20 AM UTC-7 gastr...@gmail.com wrote:

> Excellent!
> Changing Build Record Root Directory in the system settings was exactly 
> what I was hoping for.
> I never noticed that was there, assuming the setting has been there all 
> along..
>
> My jobs folder is now a very manageable size, containing pretty much just 
> the config.xml files, next build number, and a few symbolic links.
>
> Thanks!
>
>
> On Friday, March 10, 2017 at 10:24:05 PM UTC-5, Christopher Orr wrote:
>>
>> On Thu, 9 Mar 2017, at 22:37, Gastro Man wrote: 
>> > Is it possible to configure Jenkins so the jobs' build history is not 
>> > contained in the same directory as the config.xml? 
>> > 
>> > Ideally, I would prefer all the configs to be together in an area 
>> > ("job_config" folder)  that I can put in source control, backup, 
>> quickly 
>> > search and compare, and manage. 
>> > 
>> > And then the gigabytes of data stored in histories could be in another 
>> > location ("job_history" folder) where I don't care about source 
>> control, 
>> > don't need to include when searching across configs, etc. 
>> > Possibly even store this on a separate storage device.. 
>> > 
>> > I find it frustrating to have the mixing of "control" and "data" inside 
>> > every single job subfolder, but I haven't found a way around this so 
>> far. 
>>
>> You could mark the "builds" subdirectories as ignored in source control? 
>>
>> Or if you go to Manage Jenkins > Configure System and click "Advanced" 
>> at the top of the page, you'll be able to alter where build data is 
>> stored ("Build Record Root Directory"). 
>>
>> If you change this to "${JENKINS_HOME}/builds/${ITEM_FULLNAME}", for 
>> example, then all of the build data will be stored there. 
>>
>> At this point, the only thing left in ${JENKINS_HOME}/jobs/ (i.e. 
>> ${ITEM_ROOTDIR}) will be the job config.xml, and the metadata about 
>> build numbers and last success/failure symlinks etc.. 
>>
>> Note that changing this value will not automagically migrate all of your 
>> existing data to the new location you've specified. 
>>
>> Regards, 
>> Chris 
>>
>



Re: Efficiently copying artifacts

2021-03-04 Thread Tim Black
To whom it may concern, I ended up finding the code in Jenkins branch-api 
plugin that's creating that branch path segment (the NameMangler 
<https://github.com/jenkinsci/branch-api-plugin/blob/f2bd7ec715057feb047754f5427f209bbf1b3248/src/main/java/jenkins/branch/NameMangler.java#L55>),
 
however it turned out to be completely unnecessary since I can just get the 
build directory (to construct the path to the artifacts on the controller) 
from the project/job object, obtained by name from the jenkins instance in 
groovy. So my shared library function is even simpler now: it works for any 
project type and is safer because it doesn't need to use any core or plugin 
classes.

On Monday, March 1, 2021 at 1:43:02 PM UTC-8 Tim Black wrote:

> I'm trying to do same, but in both directions (archiving AND copying 
> artifacts from upstream). I wonder how the scp approach to copying 
> artifacts would work in multibranch pipelines? Can one deterministically 
> construct the path to a branch job's artifact folder on the controller's 
> disk?
>
> As I commented here 
> <https://stackoverflow.com/questions/21268327/rsync-alternative-to-jenkins-copy-artifacts-plugin#comment117378767_25530456>,
>  
> I'm also seeking massive performance gains by replacing copyArtifact with a 
> shell call in my pipelines. In lieu of installing a proper artifact 
> management system and replacing all archive/copyArtifact with calls to its 
> REST API (which I'll be doing later this year), I'm hoping to find a quick 
> alternative. SCP would be a slam dunk for me if I could construct the 
> source path correctly. The problem is that Jenkins is using an algorithm to 
> create a unique folder name, both for workspace names, and for branch job 
> names, but I'm not sure if that's consistent.
>
> E.g. to fetch artifacts from the corresponding branch of an upstream 
> multibranch pipeline job with Full project name of 
> "ProjectFolder/MyProject/feature%2Ffoo", in the downstream multibranch 
> pipeline, I would do something like:
>
> scp -r jenkins-controller:/jobs/ProjectFolder/jobs/MyProject/branches/**/lastSuccessfulBuild/artifact/
> On Wednesday, April 29, 2015 at 7:02:09 AM UTC-7 mst...@gmail.com wrote:
>
>> I found that using that standard method is quite slow compared to scp.   
>> So I use that method to copy just a few small files, and one with GUIDs for 
>> fingerprinting, and for the big ones I do something like
>>
>> scp -v ${WORKSPACE}/bigfile.tar.gz user@jenkins_host_name:path_to_jenkins_root/jobs/${JOB_NAME}/builds/${BUILD_ID}/archive/ 2>&1 | tail -n 5
>>
>> I think there's a ${JENKINS_HOME} or something for the path on the 
>> master.   That copies a 2-3 GB file in roughly 40 seconds instead of 
>> something like 4 minutes.  There was a fix put in recently for I think some 
>> Maven plugin where when copying files to the master, the master would poll 
>> the slave to send over the next packet with too many requests, and fixing 
>> that sped things up a ton, perhaps there's another fix coming for how other 
>> files are transferred.
>>
>> Since "big" can sometimes be > 8GB, it would choke the normal archiver 
>> which uses tar under the covers, or at least it did.  In any case this is 
>> much faster, since pigz is multicore aware:
>>
>> tar cf ${WORKSPACE}/bigfile.tar.gz --use-compress-program=pigz [files to 
>> pack]
>>
>> YMMV
>>
>> --- Matt
>>
>>
>> On Monday, April 27, 2015 at 1:27:43 AM UTC-7, matthew...@diamond.ac.uk 
>> wrote:
>>>
>>> Are you using "Archive Artifacts" in the upstream job, and the "Copy 
>>> Artifact" plugin in the downstream job? This is the standard method. 
>>> If so, maybe the upstream job should produce a single zip file , which 
>>> the downstream job and get and unzip. 
>>> Matthew 
>>>
>>> > -Original Message- 
>>> > From: jenkins...@googlegroups.com [mailto:jenkins...@googlegroups.com] 
>>> On Behalf Of Simon Richter 
>>> > Sent: 25 April 2015 01:03 
>>> > To: jenkins...@googlegroups.com 
>>> > Subject: Efficiently copying artifacts 
>>> > 
>>> > Hi, 
>>> > 
>>> > I have a project that outputs a few large files (compiled DLL and 
>>> static 
>>> > library) as well as a few hundred header files as artifacts for use by 
>>> > the next project in the dependency chain. Copying these in and out of 
>>> > workspaces takes quite a long time, and the network link is not even 
>> > near capacity, so presumably handling of multiple small files is not 
>> > really efficient.

Re: Copy Artefact between master and slave node is very slow. (Windows Jenkins version 2.107.3)

2021-03-04 Thread Tim Black
To whom it may concern, I ended up finding the code in Jenkins branch-api 
plugin that's creating that branch path segment (the NameMangler 
<https://github.com/jenkinsci/branch-api-plugin/blob/f2bd7ec715057feb047754f5427f209bbf1b3248/src/main/java/jenkins/branch/NameMangler.java#L55>),
 
however it turned out to be completely unnecessary since I can just get the 
build directory (to construct the path to the artifacts on the controller) 
from the project/job object, obtained by name from the jenkins instance in 
groovy. So my shared library function is even simpler now: it works for any 
project type and is safer because it doesn't need to use any core or plugin 
classes.


On Monday, March 1, 2021 at 1:53:46 PM UTC-8 Tim Black wrote:

> Yes, archiving artifacts is very slow, this has apparently always been the 
> case with Jenkins - there are numerous jira issues that have come and gone 
> without this being fixed. It is what it is..
>
> As I commented here 
> <https://stackoverflow.com/questions/21268327/rsync-alternative-to-jenkins-copy-artifacts-plugin#comment117378767_25530456>,
>  
> I'm also seeking massive performance gains by replacing copyArtifact with a 
> shell call in my pipelines. In lieu of installing a proper artifact 
> management system and replacing all archive/copyArtifact with calls to its 
> REST API (which I'll be doing later this year), I'm hoping to find a quick 
> alternative. 
>
> HTTP/wget/curl is problematic when you want to fetch anything but a single 
> artifact or all artifacts, bc HTTP doesn't support a notion of a directory, 
> so you have to fetch the index and preprocess before fetching what you 
> really want. With scp I could just use unix glob pattern matching to fetch 
> what I desire in a single simple call.
>
> HTTP/wget/curl is also problematic bc you have to use Jenkins API tokens 
> and authentication. I'm using ansible to setup my jenkins infra inside a 
> firewalled LAN, and the jenkins users on all nodes are already set up to be 
> able to freely ssh back and forth without password. 
>
> So, SCP would be a slam dunk for me if I could construct the source path 
> correctly. The problem is that Jenkins uses an algorithm to create a unique 
> folder name, both for workspace names, and for branch job names, but I'm 
> not sure if that's consistent, and therefore I do not know if it would be 
> safe to attempt to re-construct and reference job paths on the controller's 
> disk.
>
> E.g. to fetch artifacts from the corresponding branch of an upstream 
> multibranch pipeline job with Full project name of 
> "ProjectFolder/MyProject/feature%2Ffoo", in the downstream multibranch 
> pipeline, I would do something like:
>
> scp -r jenkins-controller:/jobs/ProjectFolder/jobs/MyProject/branches/**/lastSuccessfulBuild/artifact/
>
>
> On Wednesday, May 30, 2018 at 11:02:48 AM UTC-7 ok999 wrote:
>
>>  Copying files from jenkins workspace , is always slow. I have seen that 
>> in the past. 
>>
>> If u r using jenkins to checkout an artifact, and then copy to a path on 
>> a remote. Just use wget/rsync (as Mark suggested) . U can trigger that from 
>> jenkins too
>>
>> On Wed, May 30, 2018 at 6:12 AM Mark Waite  wrote:
>>
>>> I thought I remembered reading advice that archiving artifacts was known 
>>> to be slow with large files.  I couldn't find the reference, so I may be 
>>> incorrect in this case.
>>>
>>> I'd suggest using a different technique to store your large artifacts, 
>>> rather than having Jenkins perform the copy.  If on Unix, consider rsync or 
>>> other copy program.  If on Windows, consider robocopy.
>>>
>>> Mark Waite
>>>
>>> On Wed, May 30, 2018 at 1:44 AM panpaliamahen <
>>> mahendra...@buhlergroup.com> wrote:
>>>
>>>> Hi, 
>>>>
>>>> I am using Jenkins version 2.107.3, latest one and on windows. I am 
>>>> using
>>>> Jenkins Slave node to run a Job which require to copy an artefact of 
>>>> size
>>>> ~6GB. And we found when jenkins copy this it takes ~10 minutes. (Note
>>>> artefact are copied from master). 
>>>>
>>>> Also observed it takes ~10 minutes when jenkins job archived back to 
>>>> master. 
>>>>
>>>> Where as same can be copied in ~1 minute in same network without 
>>>> Jenkins. 
>>>>
>>>> Please can someone help me? 
>>>> What should I do? 
>>>> Do I need to install/upgrade any additional plugin on Jenkins? 
>>>>
>>>> Please it is very urgent and reducing efficiency within our development teams.

Re: Copy Artefact between master and slave node is very slow. (Windows Jenkins version 2.107.3)

2021-03-01 Thread Tim Black
Yes, archiving artifacts is very slow, this has apparently always been the 
case with Jenkins - there are numerous jira issues that have come and gone 
without this being fixed. It is what it is..

As I commented here 
<https://stackoverflow.com/questions/21268327/rsync-alternative-to-jenkins-copy-artifacts-plugin#comment117378767_25530456>,
I'm also seeking massive performance gains by replacing copyArtifact with a 
shell call in my pipelines. In lieu of installing a proper artifact 
management system and replacing all archive/copyArtifact with calls to its 
REST API (which I'll be doing later this year), I'm hoping to find a quick 
alternative. 

HTTP/wget/curl is problematic when you want to fetch anything but a single 
artifact or all artifacts, bc HTTP doesn't support a notion of a directory, 
so you have to fetch the index and preprocess before fetching what you 
really want. With scp I could just use unix glob pattern matching to fetch 
what I desire in a single simple call.

HTTP/wget/curl is also problematic bc you have to use Jenkins API tokens 
and authentication. I'm using ansible to setup my jenkins infra inside a 
firewalled LAN, and the jenkins users on all nodes are already set up to be 
able to freely ssh back and forth without password. 

So, SCP would be a slam dunk for me if I could construct the source path 
correctly. The problem is that Jenkins uses an algorithm to create a unique 
folder name, both for workspace names, and for branch job names, but I'm 
not sure if that's consistent, and therefore I do not know if it would be 
safe to attempt to re-construct and reference job paths on the controller's 
disk.

E.g. to fetch artifacts from the corresponding branch of an upstream 
multibranch pipeline job with Full project name of 
"ProjectFolder/MyProject/feature%2Ffoo", in the downstream multibranch 
pipeline, I would do something like:

scp -r jenkins-controller:/jobs/ProjectFolder/jobs/MyProject/branches/**/lastSuccessfulBuild/artifact/


On Wednesday, May 30, 2018 at 11:02:48 AM UTC-7 ok999 wrote:

>  Copying files from jenkins workspace , is always slow. I have seen that 
> in the past. 
>
> If u r using jenkins to checkout an artifact, and then copy to a path on a 
> remote. Just use wget/rsync (as Mark suggested) . U can trigger that from 
> jenkins too
>
> On Wed, May 30, 2018 at 6:12 AM Mark Waite  wrote:
>
>> I thought I remembered reading advice that archiving artifacts was known 
>> to be slow with large files.  I couldn't find the reference, so I may be 
>> incorrect in this case.
>>
>> I'd suggest using a different technique to store your large artifacts, 
>> rather than having Jenkins perform the copy.  If on Unix, consider rsync or 
>> other copy program.  If on Windows, consider robocopy.
>>
>> Mark Waite
>>
>> On Wed, May 30, 2018 at 1:44 AM panpaliamahen <
>> mahendra...@buhlergroup.com> wrote:
>>
>>> Hi, 
>>>
>>> I am using Jenkins version 2.107.3, latest one and on windows. I am using
>>> Jenkins Slave node to run a Job which require to copy an artefact of size
>>> ~6GB. And we found when jenkins copy this it takes ~10 minutes. (Note
>>> artefact are copied from master). 
>>>
>>> Also observed it takes ~10 minutes when jenkins job archived back to 
>>> master. 
>>>
>>> Where as same can be copied in ~1 minute in same network without 
>>> Jenkins. 
>>>
>>> Please can someone help me? 
>>> What should I do? 
>>> Do I need to install/upgrade any additional plugin on Jenkins? 
>>>
>>> Please it is very urgent and reducing efficiency within our development
>>> teams. 
>>>
>>> Thanks and regards, 
>>> Mahendra 
>>>
>>>
>>>
>>>
>>> --
>>> Sent from: 
>>> http://jenkins-ci.361315.n4.nabble.com/Jenkins-users-f361316.html
>>>
> -- 
> Sent from mobile device, excuse typos if any.
>


Re: Efficiently copying artifacts

2021-03-01 Thread Tim Black
I'm trying to do same, but in both directions (archiving AND copying 
artifacts from upstream). I wonder how the scp approach to copying 
artifacts would work in multibranch pipelines? Can one deterministically 
construct the path to a branch job's artifact folder on the controller's 
disk?

As I commented here 
<https://stackoverflow.com/questions/21268327/rsync-alternative-to-jenkins-copy-artifacts-plugin#comment117378767_25530456>,
I'm also seeking massive performance gains by replacing copyArtifact with a 
shell call in my pipelines. In lieu of installing a proper artifact 
management system and replacing all archive/copyArtifact with calls to its 
REST API (which I'll be doing later this year), I'm hoping to find a quick 
alternative. SCP would be a slam dunk for me if I could construct the 
source path correctly. The problem is that Jenkins is using an algorithm to 
create a unique folder name, both for workspace names, and for branch job 
names, but I'm not sure if that's consistent.

E.g. to fetch artifacts from the corresponding branch of an upstream 
multibranch pipeline job with Full project name of 
"ProjectFolder/MyProject/feature%2Ffoo", in the downstream multibranch 
pipeline, I would do something like:

scp -r jenkins-controller:/jobs/ProjectFolder/jobs/MyProject/branches/**/lastSuccessfulBuild/artifact/
On Wednesday, April 29, 2015 at 7:02:09 AM UTC-7 mst...@gmail.com wrote:

> I found that using that standard method is quite slow compared to scp.   
> So I use that method to copy just a few small files, and one with GUIDs for 
> fingerprinting, and for the big ones I do something like
>
> scp -v ${WORKSPACE}/bigfile.tar.gz user@jenkins_host_name:path_to_jenkins_root/jobs/${JOB_NAME}/builds/${BUILD_ID}/archive/ 2>&1 | tail -n 5
>
> I think there's a ${JENKINS_HOME} or something for the path on the 
> master.   That copies a 2-3 GB file in roughly 40 seconds instead of 
> something like 4 minutes.  There was a fix put in recently for I think some 
> Maven plugin where when copying files to the master, the master would poll 
> the slave to send over the next packet with too many requests, and fixing 
> that sped things up a ton, perhaps there's another fix coming for how other 
> files are transferred.
>
> Since "big" can sometimes be > 8GB, it would choke the normal archiver 
> which uses tar under the covers, or at least it did.  In any case this is 
> much faster, since pigz is multicore aware:
>
> tar cf ${WORKSPACE}/bigfile.tar.gz --use-compress-program=pigz [files to 
> pack]
>
> YMMV
>
> --- Matt
>
>
> On Monday, April 27, 2015 at 1:27:43 AM UTC-7, matthew...@diamond.ac.uk 
> wrote:
>>
>> Are you using "Archive Artifacts" in the upstream job, and the "Copy 
>> Artifact" plugin in the downstream job? This is the standard method. 
>> If so, maybe the upstream job should produce a single zip file , which 
>> the downstream job and get and unzip. 
>> Matthew 
>>
>> > -Original Message- 
>> > From: jenkins...@googlegroups.com [mailto:jenkins...@googlegroups.com] 
>> On Behalf Of Simon Richter 
>> > Sent: 25 April 2015 01:03 
>> > To: jenkins...@googlegroups.com 
>> > Subject: Efficiently copying artifacts 
>> > 
>> > Hi, 
>> > 
>> > I have a project that outputs a few large files (compiled DLL and 
>> static 
>> > library) as well as a few hundred header files as artifacts for use by 
>> > the next project in the dependency chain. Copying these in and out of 
>> > workspaces takes quite a long time, and the network link is not even 
>> > near capacity, so presumably handling of multiple small files is not 
>> > really efficient. 
>> > 
>> > Can this be optimized somehow, e.g. by packing and unpacking the files 
>> > for transfer? Manual inspection of artifacts is secondary, I think. 
>> > 
>> >Simon 
>> > 
>>
>


Re: Can Jenkins redirect HTTP requests to HTTPS?

2020-10-23 Thread Tim Black
Owen, did your assertion turn out to be true? Is a reverse proxy required 
to perform this redirection of jenkins requests from http (80,8080) to 
https (port 8443)? 

I'm currently using iptables to forward 443 to 8443 so that my users don't 
need a port in the URL; however, this does still require "https://" in the 
URL. So, like you (probably), I'd like to redirect the http ports to the 
jenkins default ssl/https port 8443. But I'm not sure if a simple port 
forward from 80/8080 to 8443 would work, or if it would break the ssl 
negotiation that needs to happen. 

So far, my attempts to add these rules have not succeeded in redirecting 
http requests on port 80 to https requests on port 443. 
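
(For anyone diagnosing the same thing, this is how I've been sanity-checking
it — the host name is illustrative. Plain HTTP aimed at the TLS listener
fails the handshake, which makes me suspect a NAT port redirect alone can't
produce an HTTP-to-HTTPS redirect:)

```
curl -v  http://jenkins-testing/    # NAT sends this to 8443; the TLS listener rejects plain HTTP
curl -vk https://jenkins-testing/   # TLS on 8443 via the 443 redirect rule works fine
```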

I'm sure I'm missing something important here. In case someone is listening, 
here is what my nat iptables table looks like:

jenkins@jenkins-testing:~$ sudo iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 13 packets, 1810 bytes)
 pkts bytes target      prot opt in      out      source         destination
    7   364 REDIRECT    tcp  --  ens192  *        0.0.0.0/0      0.0.0.0/0     tcp dpt:443 /* Ansible-generated. */ redir ports 8443
    0     0 REDIRECT    tcp  --  ens192  *        0.0.0.0/0      0.0.0.0/0     tcp dpt:8080 /* Ansible-generated. */ redir ports 8443
    2   104 REDIRECT    tcp  --  ens192  *        0.0.0.0/0      0.0.0.0/0     tcp dpt:80 /* Ansible-generated. */ redir ports 8443
   16     0 DOCKER      all  --  *       *        0.0.0.0/0      0.0.0.0/0     ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 20 packets, 1582 bytes)
 pkts bytes target      prot opt in      out      source         destination

Chain POSTROUTING (policy ACCEPT 151 packets, 10870 bytes)
 pkts bytes target      prot opt in      out      source         destination
    0     0 MASQUERADE  all  --  *       !docker0 172.17.0.0/16  0.0.0.0/0

Chain OUTPUT (policy ACCEPT 151 packets, 10870 bytes)
 pkts bytes target      prot opt in      out      source         destination
    0     0 DOCKER      all  --  *       *        0.0.0.0/0      !127.0.0.0/8  ADDRTYPE match dst-type LOCAL

Chain DOCKER (2 references)
 pkts bytes target      prot opt in      out      source         destination
    0     0 RETURN      all  --  docker0 *        0.0.0.0/0      0.0.0.0/0

On Friday, March 6, 2015 at 5:20:51 PM UTC-8 Owen B. Mehegan wrote:

> I guess the native Jenkins server is Winstone, not Jetty, and it doesn't 
> look like there's a way to do this there. It's sounding like I will need an 
> nginx redirect.
>
>
> On Friday, March 6, 2015 at 3:46:35 PM UTC-8, Owen B. Mehegan wrote:
>>
>> I'm running Jenkins 1.596.1 on Linux, using the built-in Jetty server. I 
>> have it serving HTTP and HTTPS successfully, and now I'm trying to figure 
>> out a way to redirect all HTTP requests to HTTPS. I've searched for 
>> Jetty-specific ways to do this, but the answers I've found don't really 
>> mesh with the Jenkins configuration, as far as I can see. Can anyone 
>> suggest a way to do this?
>>
>



Re: How do you set java options for ssh agents

2020-10-03 Thread Tim Black
Björn, I like that explanation. If this behaviour is "as designed" then I
just need to adjust my expectations.

I suspect that my Ansible playbooks are not relaunching the agent processes
when changes happen. The agent options are specified through the JCasC yaml
in the master/controller, and I suspect CasC does nothing to detect agent
config changes and relaunch remote agent processes. Thus I have to reboot
them..

I should probably add my own Ansible task to relaunch agent processes..
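
Probably something as simple as this per agent (an untested sketch — it
relies on the permanent agents' default keep-online retention strategy to
relaunch the process with the current launcher settings):

```
# kill the remoting process on the agent; the controller should relaunch it
ssh jenkins@jenkins-testing-agent-1 'pkill -f remoting.jar'
```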

On Mon, Sep 28, 2020, 04:08 'Björn Pedersen' via Jenkins Users <
jenkinsci-users@googlegroups.com> wrote:

> I think this is  simply because the agent process survives the master
> restart (that is actually a feature) so if agent settings change, you need
> to disconnect and connect the agent (or otherwise restart the agent process
> to pick up the changes).
>
> timb...@gmail.com schrieb am Freitag, 25. September 2020 um 20:03:25
> UTC+2:
>
>> Thanks. I believe you were saying stop/start because you're using a Docker
>> container. In your Docker example, stopping and restarting the Docker
>> container is analogous to rebooting (power cycling, or sudo rebooting) the
>> physical or virtual machine hosting the Jenkins service.
>>
>> In this thread I'm saying that restarting the Jenkins service (which
>> resides within a container or vm that is NOT being restarted/rebooted) IS
>> sufficient to apply MOST CasC settings, however, NOT the ssh agent
>> jvmOptions. It's not a blocking problem, bc I can get the desired effect by
>> rebooting master/controller and agent machines. But it's a mystery I'd like
>> to understand better, bc as I scale this cluster and roll out new
>> configuration changes to it, I'm going to need to understand these
>> mechanics.
>>
>> Would this be an appropriate thread for jenkins-developers group? Is
>> there another forum you could recommend to ask detailed questions about
>> JCasC? (I'm on the gitter channel but it's quite hit and miss due to the
>> format)
>>
>> On Friday, September 25, 2020 at 8:56:43 AM UTC-7 kuisat...@gmail.com
>> wrote:
>>
>>> Restarting Jenkins using the CLI (
>>> https://www.jenkins.io/doc/book/managing/cli/) is the same as doing it
>>> from the UI. When I said stop/start, I mean stop/start the Jenkins
>>> daemon/service/Docker container/Whatever. The reason it is because IIRC
>>> JCasC runs on the start time of the Jenkins process, and also IIRC if you
>>> make changes on the JCasC config file and reload the configuration, or
>>> restart from UI the JCasC configuration is not recreated because the stage
>>> where it is run is not running on those restart ways. Probably someone with
>>> the deepest knowledge of JCasC can add more context.
>>> It is easy to check, run a Jenkins Docker container configured with
>>> JCasC (e.g.
>>> https://github.com/kuisathaverat/jenkins-issues/tree/master/JENKINS-63703)
>>> then connect to the container and modify the JENKINS_HOME/jenkins.yaml file
>>> and restart from UI or CLI: the JCasC changes will not apply. If you stop
>>> the Docker container and start it again, the changes are applied.
>>>
>>> El vie., 25 sept. 2020 a las 17:33, Tim Black ()
>>> escribió:
>>>
>>>> Thanks. What's the difference between "restart Jenkins from UI" and
>>>> "stop the Jenkins instance and start it again"? In the latter, how are you
>>>> implying that Jenkins gets stopped and restarted, through the CLI? Just
>>>> trying to understand what you're saying - it sounds like you're implying
>>>> CasC settings aren't applied when you restart jenkins through the GUI, but
>>>> they are when you restart through the CLI..
>>>>
>>>> I don't think this explanation is relevant to my use case bc I *never* 
>>>> restart
>>>> jenkins through the GUI. In the workflow I outlined above, I am running an
>>>> ansible playbook on my jenkins cluster, over and over, and each time if
>>>> there is a config change, it restarts the jenkins service through the CLI
>>>> using a jenkins admin credentials (using an active-directory user
>>>> actually). This appears to *not *have the desired effect of applying
>>>> the new agent jvmOptions upon next connection of the agent, whereas when I
>>>> simply reboot the entire machines (master/controller and agents), the new
>>>> jvmOptions are used in the SSHLauncher). Note that *I do not have this
>>>> same problem with other CasC settings, only ssh agents.*
>>>>
>>

Re: How do you set java options for ssh agents

2020-09-25 Thread Tim Black
Thanks. I believe you were saying stop/start because you're using a Docker 
container. In your Docker example, stopping and restarting the Docker 
container is analogous to rebooting (power cycling, or sudo rebooting) the 
physical or virtual machine hosting the Jenkins service.

In this thread I'm saying that restarting the Jenkins service (which 
resides within a container or vm that is NOT being restarted/rebooted) IS 
sufficient to apply MOST CasC settings, however, NOT the ssh agent 
jvmOptions. It's not a blocking problem, bc I can get the desired effect by 
rebooting master/controller and agent machines. But it's a mystery I'd like 
to understand better, bc as I scale this cluster and roll out new 
configuration changes to it, I'm going to need to understand these 
mechanics.

Would this be an appropriate thread for jenkins-developers group? Is there 
another forum you could recommend to ask detailed questions about JCasC? 
(I'm on the gitter channel but it's quite hit and miss due to the format)

On Friday, September 25, 2020 at 8:56:43 AM UTC-7 kuisat...@gmail.com wrote:

> Restarting Jenkins using the CLI (
> https://www.jenkins.io/doc/book/managing/cli/) is the same as doing it 
> from the UI. When I said stop/start, I mean stop/start the Jenkins 
> daemon/service/Docker container/Whatever. The reason it is because IIRC 
> JCasC runs on the start time of the Jenkins process, and also IIRC if you 
> make changes on the JCasC config file and reload the configuration, or 
> restart from UI the JCasC configuration is not recreated because the stage 
> where it is run is not running on those restart ways. Probably someone with 
> the deepest knowledge of JCasC can add more context. 
> It is easy to check, run a Jenkins Docker container configured with 
> JCasC (e.g. 
> https://github.com/kuisathaverat/jenkins-issues/tree/master/JENKINS-63703) 
> then connect to the container and modify the JENKINS_HOME/jenkins.yaml file 
> and restart from UI or CLI: the JCasC changes will not apply. If you stop 
> the Docker container and start it again, the changes are applied.
>
> El vie., 25 sept. 2020 a las 17:33, Tim Black () 
> escribió:
>
>> Thanks. What's the difference between "restart Jenkins from UI" and "stop 
>> the Jenkins instance and start it again"? In the latter, how are you 
>> implying that Jenkins gets stopped and restarted, through the CLI? Just 
>> trying to understand what you're saying - it sounds like you're implying 
>> CasC settings aren't applied when you restart jenkins through the GUI, but 
>> they are when you restart through the CLI..
>>
>> I don't think this explanation is relevant to my use case bc I *never* 
>> restart 
>> jenkins through the GUI. In the workflow I outlined above, I am running an 
>> ansible playbook on my jenkins cluster, over and over, and each time if 
>> there is a config change, it restarts the jenkins service through the CLI 
>> using a jenkins admin credentials (using an active-directory user 
>> actually). This appears to *not *have the desired effect of applying the 
>> new agent jvmOptions upon next connection of the agent, whereas when I 
>> simply reboot the entire machines (master/controller and agents), the new 
>> jvmOptions are used in the SSHLauncher). Note that *I do not have this 
>> same problem with other CasC settings, only ssh agents.*
>>
>> On Friday, September 25, 2020 at 3:05:59 AM UTC-7 kuisat...@gmail.com 
>> wrote:
>>
>>> ok, I think I know what happens, I saw it before using Docker and JCasC, 
>>> if you make changes on the JCasC and restart Jenkins from UI the changes 
>>> are not applied because JCasC is not executed on that restart, but if you 
>>> stop the Jenkins instance and start it again the changes are applied IIRC 
>>> is how it works.
>>>
>>> El miércoles, 23 de septiembre de 2020 a las 23:37:18 UTC+2, Ivan 
>>> Fernandez Calvo escribió:
>>>
>>>> I will configure a test environment with JCasC that has jmvOptions too 
>>>> see how it behaves, then we will know if it is an issue or not, in any 
>>>> case 
>>>> is weird.
>>>>
>>>> El El mié, 23 sept 2020 a las 22:10, Tim Black  
>>>> escribió:
>>>>
>>>>> More info: In my case, a reboot is definitely needed. A 
>>>>> disconnect/reconnect does not suffice, nor does rebooting just the 
>>>>> master/controller or the agent in sequence - *the only way I see the 
>>>>> correct jvmOptions being used is by rebooting the entire cluster at once*
>>>>> . 
>>>>>
>>>>> I'm using Jenkins 2.222.3, ssh build agents plugin 1.31.2.

Re: How do you set java options for ssh agents

2020-09-25 Thread Tim Black
Thanks. What's the difference between "restart Jenkins from UI" and "stop 
the Jenkins instance and start it again"? In the latter, how are you 
implying that Jenkins gets stopped and restarted, through the CLI? Just 
trying to understand what you're saying - it sounds like you're implying 
CasC settings aren't applied when you restart jenkins through the GUI, but 
they are when you restart through the CLI..

I don't think this explanation is relevant to my use case bc I *never* restart 
jenkins through the GUI. In the workflow I outlined above, I am running an 
ansible playbook on my jenkins cluster, over and over, and each time if 
there is a config change, it restarts the jenkins service through the CLI 
using a jenkins admin credentials (using an active-directory user 
actually). This appears to *not *have the desired effect of applying the 
new agent jvmOptions upon next connection of the agent, whereas when I 
simply reboot the entire machines (master/controller and agents), the new 
jvmOptions are used in the SSHLauncher). Note that *I do not have this same 
problem with other CasC settings, only ssh agents.*

On Friday, September 25, 2020 at 3:05:59 AM UTC-7 kuisat...@gmail.com wrote:

> ok, I think I know what happens, I saw it before using Docker and JCasC, 
> if you make changes on the JCasC and restart Jenkins from UI the changes 
> are not applied because JCasC is not executed on that restart, but if you 
> stop the Jenkins instance and start it again the changes are applied IIRC 
> is how it works.
>
> El miércoles, 23 de septiembre de 2020 a las 23:37:18 UTC+2, Ivan 
> Fernandez Calvo escribió:
>
>> I will configure a test environment with JCasC that has jmvOptions too 
>> see how it behaves, then we will know if it is an issue or not, in any case 
>> is weird.
>>
>> El El mié, 23 sept 2020 a las 22:10, Tim Black  
>> escribió:
>>
>>> More info: In my case, a reboot is definitely needed. A 
>>> disconnect/reconnect does not suffice, nor does rebooting just the 
>>> master/controller or the agent in sequence - *the only way I see the 
>>> correct jvmOptions being used is by rebooting the entire cluster at once*
>>> . 
>>>
>>> I'm using Jenkins 2.222.3, ssh build agents plugin 1.31.2. 
>>>
>>> Another probably important piece of info here is that *I have 
>>> "ServerAliveCountMax 10" and "ServerAliveInterval 60" in the ssh client on 
>>> the Jenkins master/controller, to help keep ssh connections alive for 
>>> longer amount of time when agents are very very busy performing builds and 
>>> may not have the cycles to respond to the master/controller.*
>>>
>>> I'm also using ansible and configuration-as-code plugin (1.43) to 
>>> configure *everything* in the jenkins cluster. So, to make a change to 
>>> the agent java_options, what I do is:
>>>
>>> 1. Modify the local jenkins.yml CasC file to include new "jvmOptions" 
>>> values for my agent, e.g. my latest:
>>>
>>>   - permanent:
>>>   name: "jenkins-testing-agent-1"
>>>   nodeDescription: "Fungible Agent for jenkins-testing"
>>>   labelString: ""
>>>   mode: "NORMAL"
>>>   remoteFS: "/home/jenkins/.jenkins"
>>>   launcher:
>>> ssh:
>>>   credentialsId: "jenkins_user_on_linux_agent"
>>>   host: "jenkins-testing-agent-1"
>>>   jvmOptions: "-Dhudson.slaves.WorkspaceList=- 
>>> -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/Vancouver *-Xmx4g 
>>> -Xms1g* -XX:+AlwaysPreTouch -XX:+HeapDumpOnOutOfMemoryError 
>>> -XX:HeapDumpPath=/home/jenkins/.jenkins/support -XX:+UseG1GC 
>>> -XX:+UseStringDeduplication -XX:+ParallelRefProcEnabled 
>>> -XX:+DisableExplicitGC -XX:+UnlockDiagnosticVMOptions 
>>> -XX:+UnlockExperimentalVMOptions -verbose:gc 
>>> -Xlog:gc:/home/jenkins/.jenkins/support/gc-%t.log -XX:+PrintGC 
>>> -XX:+PrintGCDetails -XX:ErrorFile=/hs_err_%p.log -XX:+LogVMOutput 
>>> -XX:LogFile=/home/jenkins/.jenkins/support/jvm.log"
>>>   launchTimeoutSeconds: 30
>>>   maxNumRetries: 20
>>>   port: 22
>>>   retryWaitTime: 10
>>>   sshHostKeyVerificationStrategy: 
>>> "nonVerifyingKeyVerificationStrategy"
>>>
>>> 2. send the CasC yaml file to /jenkins.yml on the 
>>> master/controller machine
>>> 3. run geerlingguy.jenkins role which, among other things, detects a 
>>> change and restarts the jenkins service

Re: How do you set java options for ssh agents

2020-09-23 Thread Tim Black
More info: In my case, a reboot is definitely needed. A 
disconnect/reconnect does not suffice, nor does rebooting just the 
master/controller or the agent in sequence - *the only way I see the 
correct jvmOptions being used is by rebooting the entire cluster at once*. 

I'm using Jenkins 2.222.3, ssh build agents plugin 1.31.2. 

Another probably important piece of info here is that *I have 
"ServerAliveCountMax 10" and "ServerAliveInterval 60" in the ssh client on 
the Jenkins master/controller, to help keep ssh connections alive for 
longer amount of time when agents are very very busy performing builds and 
may not have the cycles to respond to the master/controller.*
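
(Concretely, that's these lines in the controller's ~/.ssh/config — the Host
pattern here is illustrative:)

```
Host jenkins-testing-agent-*
    ServerAliveInterval 60
    ServerAliveCountMax 10
```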

I'm also using ansible and configuration-as-code plugin (1.43) to configure 
*everything* in the jenkins cluster. So, to make a change to the agent 
java_options, what I do is:

1. Modify the local jenkins.yml CasC file to include new "jvmOptions" 
values for my agent, e.g. my latest:

  - permanent:
  name: "jenkins-testing-agent-1"
  nodeDescription: "Fungible Agent for jenkins-testing"
  labelString: ""
  mode: "NORMAL"
  remoteFS: "/home/jenkins/.jenkins"
  launcher:
ssh:
  credentialsId: "jenkins_user_on_linux_agent"
  host: "jenkins-testing-agent-1"
  jvmOptions: "-Dhudson.slaves.WorkspaceList=- 
-Dorg.apache.commons.jelly.tags.fmt.timeZone=America/Vancouver *-Xmx4g 
-Xms1g* -XX:+AlwaysPreTouch -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/.jenkins/support -XX:+UseG1GC 
-XX:+UseStringDeduplication -XX:+ParallelRefProcEnabled 
-XX:+DisableExplicitGC -XX:+UnlockDiagnosticVMOptions 
-XX:+UnlockExperimentalVMOptions -verbose:gc 
-Xlog:gc:/home/jenkins/.jenkins/support/gc-%t.log -XX:+PrintGC 
-XX:+PrintGCDetails -XX:ErrorFile=/hs_err_%p.log -XX:+LogVMOutput 
-XX:LogFile=/home/jenkins/.jenkins/support/jvm.log"
  launchTimeoutSeconds: 30
  maxNumRetries: 20
  port: 22
  retryWaitTime: 10
  sshHostKeyVerificationStrategy: 
"nonVerifyingKeyVerificationStrategy"

2. send the CasC yaml file to /jenkins.yml on the 
master/controller machine
3. run geerlingguy.jenkins role which, among other things, detects a change 
and restarts the jenkins service
4. on Jenkins restart, Jenkins applies the new CasC settings in 
jenkins.yaml, and this can be verified as correct in the GUI subsequently
5. the agents are not restarted in this process (which I assert should be 
fine/ok)  

After my ansible playbook is complete, and all (verifiably correct) config 
has been applied to controller/agents, I look at the agent logs and they 
appear to have gone back to having the empty jvmOptions like I originally 
reported:

SSHLauncher{host='jenkins-testing-agent-1', port=22, 
credentialsId='jenkins_user_on_linux_agent', *jvmOptions=''*, javaPath='', 
prefixStartSlaveCmd='', suffixStartSlaveCmd='', launchTimeoutSeconds=30, 
maxNumRetries=20, retryWaitTime=10, 
sshHostKeyVerificationStrategy=hudson.plugins.sshslaves.verifiers.NonVerifyingKeyVerificationStrategy,
 
tcpNoDelay=true, trackCredentials=true} 

At this point, *if I only reboot the agent, when the master/controller 
reconnect to it the logs still shows jvmOptions=''*.

*If I then reboot the master/controller, it still shows jvmOptions=''*.

But if (and only if) I reboot the entire cluster, I get the correct 
application of my ssh agent jvmOptions:

SSHLauncher{host='jenkins-testing-agent-1', port=22, 
credentialsId='jenkins_user_on_linux_agent', 
*jvmOptions='-Dhudson.slaves.WorkspaceList=- 
-Dorg.apache.commons.jelly.tags.fmt.timeZone=America/Vancouver -Xmx4g 
-Xms1g -XX:+AlwaysPreTouch -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/.jenkins/support -XX:+UseG1GC 
-XX:+UseStringDeduplication -XX:+ParallelRefProcEnabled 
-XX:+DisableExplicitGC -XX:+UnlockDiagnosticVMOptions 
-XX:+UnlockExperimentalVMOptions -verbose:gc 
-Xlog:gc:/home/jenkins/.jenkins/support/gc-%t.log -XX:+PrintGC 
-XX:+PrintGCDetails -XX:ErrorFile=/hs_err_%p.log -XX:+LogVMOutput 
-XX:LogFile=/home/jenkins/.jenkins/support/jvm.log'*, javaPath='', 
prefixStartSlaveCmd='', suffixStartSlaveCmd='', launchTimeoutSeconds=30, 
maxNumRetries=20, retryWaitTime=10, 
sshHostKeyVerificationStrategy=hudson.plugins.sshslaves.verifiers.NonVerifyingKeyVerificationStrategy,
 
tcpNoDelay=true, trackCredentials=true} 

Thanks for your help in diagnosing these behaviors. kuisathaverat, let me 
know if any of this feels like a bug in ssh-slaves-plugin or 
configuration-as-code-plugin.

On Wednesday, September 23, 2020 at 12:01:39 PM UTC-7 Tim Black wrote:

> Thanks everyone, it's working now (see below for details). kuisathaverat, 
> these agents have 96GB total RAM. Thanks for the explanation. Our builds 
> are very RAM intensive, and I misunderstood that the builds happened within 
> the remoting java process.

Re: How do you set java options for ssh agents

2020-09-23 Thread Tim Black
Thanks everyone, it's working now (see below for details). kuisathaverat, 
these agents have 96GB total RAM. Thanks for the explanation. Our builds 
are very RAM intensive, and I misunderstood that the builds happened within 
the remoting java process. Sounds like you're saying in this case there's 
no reason to give the agent jvm so much RAM. The Cloudbees JVM Best 
Practices page indicates 
the default min/max heap are 1/64 physical RAM / 1/4 physical RAM, but both 
cap out at 1GB. So, before I was setting these options, my agents should 
have been effectively using 1GB/1GB for min/max. As for the other options 
I'm setting in the agents, these are the same options recommended by the 
page linked above (which I'm also using on master/controller). Do these not 
apply to agents as well as masters/controllers?

Also, on the agent machine, my /support/all*.logs and 
/remoting/logs/* are still empty; any suggestions on how to get 
more logging on the agents?

I didn't have gc or other logging enabled, so I'm still not yet sure what 
the catastrophic problem was, it might not be a java problem at all, since 
I'm not seeing any problems in syslog indicating problems with the jenkins 
remoting process. These are VMware machines, and they just stop themselves, 
so it seems like a kernel panic or something. I have them autorestarting 
now and the problem seems intermittent.

I think the jvmOptions is working as expected now. I think I may not have 
rebooted the jenkins instance but had only rebooted the agents and had only 
restarted the jenkins service on master/controller machine. So apparently 
the change I made required a reboot of the master/controller. Now, signing 
into the agent and looking at the java process for jenkins remoting, I can 
see all the specified args are there:

```
jenkins@jenkins-testing-agent-1:~$ ps aux | grep java
jenkins  2733  5.1 70.4 73509096 69794284 ?  Ssl  11:19  0:26 java -Dhudson.slaves.WorkspaceList=- -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/Vancouver -Xmx64g -Xms64g -XX:+AlwaysPreTouch -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/jenkins/.jenkins/support -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+ParallelRefProcEnabled -XX:+DisableExplicitGC -XX:+UnlockDiagnosticVMOptions -XX:+UnlockExperimentalVMOptions -verbose:gc -Xlog:gc:/home/jenkins/.jenkins/support/gc-%t.log -XX:+PrintGC -XX:+PrintGCDetails -XX:ErrorFile=/hs_err_%p.log -XX:+LogVMOutput -XX:LogFile=/home/jenkins/.jenkins/support/jvm.log -jar remoting.jar -workDir /home/jenkins/.jenkins -jar-cache /home/jenkins/.jenkins/remoting/jarCache
```

I am also now seeing garbage collection logs in support/ as configured:

```
jenkins@jenkins-testing-agent-1:~$ ls -la .jenkins/support/
total 32
drwxr-xr-x 2 jenkins jenkins 4096 Sep 23 11:20 .
drwxrwxr-x 6 jenkins jenkins 4096 Sep 16 00:27 ..
-rw-r--r-- 1 jenkins jenkins    0 Sep 22 11:01 all_2020-09-22_18.01.37.log
-rw-r--r-- 1 jenkins jenkins    0 Sep 22 11:03 all_2020-09-22_18.03.01.log
-rw-r--r-- 1 jenkins jenkins    0 Sep 22 13:04 all_2020-09-22_20.04.15.log
-rw-r--r-- 1 jenkins jenkins    0 Sep 22 15:17 all_2020-09-22_22.17.09.log
-rw-r--r-- 1 jenkins jenkins    0 Sep 22 15:32 all_2020-09-22_22.32.14.log
-rw-r--r-- 1 jenkins jenkins    0 Sep 22 15:56 all_2020-09-22_22.56.18.log
-rw-r--r-- 1 jenkins jenkins 1078 Sep 23 11:18 all_2020-09-23_18.04.43.log
-rw-r--r-- 1 jenkins jenkins    0 Sep 23 11:20 all_2020-09-23_18.20.07.log
-rw-r--r-- 1 jenkins jenkins  194 Sep 23 11:04 gc-2020-09-23_11-04-04.log
-rw-r--r-- 1 jenkins jenkins  194 Sep 23 11:04 gc-2020-09-23_11-04-24.log
-rw-r--r-- 1 jenkins jenkins  194 Sep 23 11:19 gc-2020-09-23_11-19-32.log
-rw-r--r-- 1 jenkins jenkins  546 Sep 23 11:22 gc-2020-09-23_11-19-50.log
-rw-r--r-- 1 jenkins jenkins 4096 Sep 23 11:20 jvm.log
```
On Wednesday, September 23, 2020 at 10:36:20 AM UTC-7 naresh@gmail.com 
wrote:

> I think to have those updated settings applied correctly we need to 
> disconnect and launch those agents again instead of just bringing those 
> offline and online, just checking to make sure that we are not missing 
> anything there. 
>
> On Wednesday, September 23, 2020 at 12:01:46 PM UTC-5 kuisat...@gmail.com 
> wrote:
>
>> How much memory do those agents have? You set "-Xmx64g -Xms64g" for the 
>> remoting process (not for builds), so your agent has to have more than 
>> 64GB of RAM to run any build on it; you grab 64GB for the remoting process 
>> alone, and that RAM should instead be available to run your builds. The 
>> remoting agent usually does not need more than 256-512MB, which keeps the 
>> rest of your agent's memory free for builds. Agents rarely need JVM options 
>> to tune memory; the default configuration is enough. The only case where I 
>> would recommend passing JVM options is to limit the memory of the agent 
>> process.
>>
>> The jvmOptions field should work; it is covered by unit tests. If not, it 
>> is an issue. Which 

How do you set java options for ssh agents

2020-09-22 Thread Tim Black
I'm using the ssh-slaves-plugin to 
configure and launch 2 ssh agents, and I've specified several java options 
in these agents' config (see photo and text list below), but when these 
agents are launched, the agents' log still shows empty jvmOptions in the 
ssh launcher call. Agent Log excerpt:

SSHLauncher{host='jenkins-testing-agent-1', port=22, 
credentialsId='jenkins_user_on_linux_agent', *jvmOptions=''*, javaPath='', 
prefixStartSlaveCmd='', suffixStartSlaveCmd='', launchTimeoutSeconds=30, 
maxNumRetries=20, retryWaitTime=10, 
sshHostKeyVerificationStrategy=hudson.plugins.sshslaves.verifiers.NonVerifyingKeyVerificationStrategy,
tcpNoDelay=true, trackCredentials=true} 
[09/22/20 15:56:12] [SSH] Opening SSH connection to 
jenkins-testing-agent-1:22. 
[09/22/20 15:56:16] [SSH] WARNING: SSH Host Keys are not being verified. 
Man-in-the-middle attacks may be possible against this connection. 
[09/22/20 15:56:16] [SSH] Authentication successful. 
[09/22/20 15:56:16] [SSH] The remote user's environment is: 
BASH=/usr/bin/bash
.
.
.
[SSH] java -version returned 11.0.8.
[09/22/20 15:56:16] [SSH] Starting sftp client.
[09/22/20 15:56:16] [SSH] Copying latest remoting.jar...
Source agent hash is 0146753DA5ED62106734D59722B1FA2C. Installed agent hash is 0146753DA5ED62106734D59722B1FA2C
Verified agent jar. No update is necessary.
Expanded the channel window size to 4MB
[09/22/20 15:56:16] [SSH] Starting agent process: cd "/home/jenkins/.jenkins" && java -jar remoting.jar -workDir /home/jenkins/.jenkins -jar-cache /home/jenkins/.jenkins/remoting/jarCache
Sep 22, 2020 3:56:17 PM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using /home/jenkins/.jenkins/remoting as a remoting work directory
Sep 22, 2020 3:56:17 PM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to /home/jenkins/.jenkins/remoting
<===[JENKINS REMOTING CAPACITY]===>channel started
Remoting version: 4.2
This is a Unix agent
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by jenkins.slaves.StandardOutputSwapper$ChannelSwapper to constructor java.io.FileDescriptor(int)
WARNING: Please consider reporting this to the maintainers of jenkins.slaves.StandardOutputSwapper$ChannelSwapper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Evacuated stdout
Agent successfully connected and online 


[image: jenkins-ssh-agent-config.PNG]

This is the full text in the "JVM Options" field for 
jenkins-testing-agent-1 and 2:

-Dhudson.slaves.WorkspaceList=- 
-Dorg.apache.commons.jelly.tags.fmt.timeZone=America/Vancouver -Xmx64g 
-Xms64g -XX:+AlwaysPreTouch -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/.jenkins/support -XX:+UseG1GC 
-XX:+UseStringDeduplication -XX:+ParallelRefProcEnabled 
-XX:+DisableExplicitGC -XX:+UnlockDiagnosticVMOptions 
-XX:+UnlockExperimentalVMOptions -verbose:gc 
-Xlog:gc:/home/jenkins/.jenkins/support/gc-%t.log -XX:+PrintGC 
-XX:+PrintGCDetails -XX:ErrorFile=/hs_err_%p.log -XX:+LogVMOutput 
-XX:LogFile=/home/jenkins/.jenkins/support/jvm.log

I am having intermittent catastrophic failures of these agent machines 
during builds and am trying to properly configure java settings per 
Cloudbees best practices, but I cannot seem to get off the ground here. 
Another problem in my agents that's probably related is that the agent-side 
(remoting) logs are all zero bytes.

Thanks for your help.



Re: how to build on branch creation but not build on other scm event/commit?

2020-09-17 Thread Tim Black
Thanks Jeremy. Our developers already have control of their projects' 
branches' Jenkinsfiles so they can define whatever triggers/schedule they 
want. If I were going to hack this, I'd probably prefer to do the opposite 
of what you're recommending. Since git scanning is a wheel already invented 
many times, I'd rather NOT "Suppress Automatic SCM Triggering", but modify 
our pipelines (Declarative Jenkinsfile + loaded groovy file(s)) to check 
the "build cause" and act accordingly. Ultimately, we're likely to need 
this kind of sophistication anyway, since some build causes ( e.g. Branch 
Discovery, PR, commit, etc) may need special behavior. Ultimately 
ultimately, we'll have more build capacity and will be able to support 
per-commit builds, so perhaps some of this is temporary.

We're using Bitbucket On Prem, FWIW, and soon I will be implementing 
various Jenkins-Bitbucket integrations. Currently we're still just using 
Jenkins git plugin as BranchSource for our multibranch pipeline jobs. 
Perhaps once I switch us over to using the Bitbucket Server Integration 
plugin, a more elegant solution to this fine control over Automatic SCM 
Triggering 
will become clear.

Cheers.

On Thursday, September 17, 2020 at 9:12:17 AM UTC-7 jeremy@riftio.com 
wrote:

> You could leave the Automatic triggering suppressed and write a tool to 
> scan your GIT repo looking for new branches and trigger the build via an 
> API call when a new branch is found. Sounds like you might need such a tool 
> anyway so that developers could schedule builds. 
>
> On Thursday, September 17, 2020 at 12:59:27 AM UTC-4 timb...@gmail.com 
> wrote:
>
>> 1. I have a multibranch pipeline job that takes 30min to run, has a lot 
>> of branches, and my company is still at the earlier stages of devops 
>> transformation, so with our current infrastructure we do not want to 
>> trigger a build every commit.
>>
>> 2. Our job pipeline uses parameters heavily, so I would also like to 
>> automatically build each branch on branch creation/detection. 
>>
>> How do I achieve the above 2 requirements?
>>
>> Using git scm/plugin/branchsource, if I set (or clear) "Suppress 
>> Automatic SCM triggering", I get only one of the two requirements 
>> fulfilled: setting it suppresses ALL automatic triggering, not providing 2. 
>> Clearing it satisfies 2, automatically triggering a build on branch 
>> discovery, but also builds every commit which we don't want.
>>
>> In my research I have found the basic branch build strategies plugin, 
>> and while it provides some great sophisticated control over what branches 
>> build when, in terms of ensuring the job is built on branch creation, it 
>> seems to only provide added suppression.
>>
>> Any suggestions would be welcome. Thanks.
>>
>



how to build on branch creation but not build on other scm event/commit?

2020-09-16 Thread Tim Black
1. I have a multibranch pipeline job that takes 30min to run, has a lot of 
branches, and my company is still at the earlier stages of devops 
transformation, so with our current infrastructure we do not want to 
trigger a build every commit.

2. Our job pipeline uses parameters heavily, so I would also like to 
automatically build each branch on branch creation/detection. 

How do I achieve the above 2 requirements?

Using git scm/plugin/branchsource, if I set (or clear) "Suppress Automatic 
SCM triggering", I get only one of the two requirements fulfilled: setting 
it suppresses ALL automatic triggering, not providing 2. Clearing it 
satisfies 2, automatically triggering a build on branch discovery, but also 
builds every commit which we don't want.

In my research I have found the basic branch build strategies plugin, 
and while it provides some great sophisticated control over what branches 
build when, in terms of ensuring the job is built on branch creation, it 
seems to only provide added suppression.

Any suggestions would be welcome. Thanks.



WebSocket Agents and Archiving Artifacts Performance

2020-08-26 Thread Tim Black
Is there any reason to believe that using the new -webSocket mode for 
agents would be any less sluggish at archiving artifacts from agent to 
master than ssh mode? 

Using the ssh-slaves-plugin I'm getting abysmal throughput (~13Mbps) when 
artifacts are being copied from agent to master, despite their 10GBps link: 
https://issues.jenkins-ci.org/browse/JENKINS-7921

Reading up on some of the conversations on this long-standing issue over 
the last decade, I'm not confident that this performance is going to be 
improved. So, I'm considering alternatives, like using WebSocket agents. I 
read here 
<https://issues.jenkins-ci.org/browse/JENKINS-18276?focusedCommentId=249851=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-249851>
that the problem might be that "Jenkins archives via its control channel (e.g. 
ssh slave - using java SSH implementation JSCH). The java ssh just can't 
get anywhere near 1Gb/s network speed that native SSH can manage easily"

So, I was just wondering if WebSocket agents might perform better at 
archiving artifacts because they are implemented so differently.

Thanks,
Tim Black  



Re: why does multibranch pipeline fetch branch source 3 times?

2019-12-17 Thread Tim Black
Understood. Note that, even with "CleanBeforeCheckout", I have to re-fetch 
tags because my initial checkout, the one that "Discover Tags" causes, is 
cleaned up afterwards. This is very counter-intuitive, because there's no 
high-level description of what's going on, or why, or what all the words 
("Fetch", "Checkout", ...) mean.

I have empirically determined that "CleanBeforeCheckout" really means 
"Clean after the initial checkout/fetch/clone, the one whose sole purpose 
in life is to scan for Jenkinsfile changes, but before the subsequent 
second (and maybe third) checkout/fetch/clone operation".

It appears that the former, initial checkout/fetch/clone has controls in 
BranchSource behaviors to get tags, or do not get tags, etc.. but the 
subsequent checkout/fetch/clone operations are still a complete mystery to 
me. Do I have control over when and how those are going to occur? E.g. how 
can I make the subsequent checkout/fetch/clone operations use `--tags` 
instead of `--no-tags`?

On Tuesday, December 17, 2019 at 7:54:13 AM UTC-8, Mark Waite wrote:
>
>
>
> On Tue, Dec 17, 2019 at 7:45 AM Tim Black wrote:
>
>> Thanks for the info Mark! I'm curious what are the "use cases that might 
>> break" if I enabled "Honor refspec on initial clone" which you mention in 
>> your Live Demo video around 7:55. 
>>
>
> As an example, the Git plugin and the Git client plugin use the contents 
> of their own repositories as part of their automated tests.  Those 
> automated tests assumed that the history of all branches was available in 
> the workspace repository.  Other use cases include automated merge from one 
> branch to another.  Without both branches, an automated merge won't work.
>  
>
>> I would guess that my team's use case, which is performing 
>> branch-specific multibranch pipeline builds that do not need to know about 
>> other branches, is the use case that could very much benefit from 
>> customizing the refspec to fetch only the branch a particular branch 
>> pipeline project cares about. (If this use case doesn't benefit I can't 
>> imagine one that would.) 
>>
>
> Certain branch sources in multibranch pipeline will automatically 
> configure a narrow refspec that specifically includes only the branch being 
> built.  I don't recall which, but I believe it is the GitHub, Bitbucket, 
> Gitea, and Gitlab.  The git multibranch pipeline does not configure a 
> narrow refspec if I recall correctly.
>  
>
>>
>> Looking in my multibranch pipeline job "BranchSource" config after adding 
>> "Advanced clone behaviours" I can check "Honor refspec on initial clone". I 
>> am assuming here it is critical for me to additionally set "Specify ref 
>> specs" behavior at the same time. BTW, do you know, can I use 
>> ${BRANCH_NAME} env var in the refspec, e.g. will this work for a mb 
>> pipeline refspec? 
>>
>> +refs/heads/${BRANCH_NAME}:refs/remotes/@{remote}/${BRANCH_NAME} 
>>
>>
> I believe that (or a variant of it) will work.  I use it very frequently 
> in my jenkins-bugs repository where I have a branch per bug check.  
> Significantly faster to clone a single branch from that repository than to 
> clone the entire repository.
>  
>
>> Seems like this should be a built-in option for mb pipeline configs. 
>> Anyway, I will experiment with this and see how much time savings we get. 
>> Let me know if there's anything else I should know about this or if I'm 
>> making any wrong assertions above.
>>
>> ..and to my original question, I suppose this means there's really no way 
>> to achieve a true "single fetch per build, tags and all", without removing 
>> either the "WipeWorkspaceTrait" or the "CleanBeforeCheckoutTrait", correct? 
>> I'm actually ok with it doing multiple fetches as long as it preserves the 
>> things (tags) it fetched initially. I don't understand why the 
>> implementation/timing of these traits are clobbering the tags I fetched in 
>> the initial clone.
>>
>>
> The WipeWorkspaceTrait means that the entire repository is removed from 
> the workspace at each job.  It guarantees that everything must be fetched 
> again.  You only want the CleanBeforeCheckoutTrait so that it will retain 
> the existing repository but assure that the working files in the repository 
> are clean.
>  
>
>> Which is to say.. Why is there the distinction between the "initial 
>> fetch" and "the checkout"? I think there's a lot going on in the 
>> plugin-background here I don't understand. Can you poi

Re: why does multibranch pipeline fetch branch source 3 times?

2019-12-17 Thread Tim Black
Thanks for the info Mark! I'm curious what are the "use cases that might 
break" if I enabled "Honor refspec on initial clone" which you mention in 
your Live Demo video around 7:55. I would guess that my team's use case, 
which is performing branch-specific multibranch pipeline builds that do not 
need to know about other branches, is the use case that could very much 
benefit from customizing the refspec to fetch only the branch a particular 
branch pipeline project cares about. (If this use case doesn't benefit I 
can't imagine one that would.) 

Looking in my multibranch pipeline job "BranchSource" config after adding 
"Advanced clone behaviours" I can check "Honor refspec on initial clone". I 
am assuming here it is critical for me to additionally set "Specify ref 
specs" behavior at the same time. BTW, do you know, can I use 
${BRANCH_NAME} env var in the refspec, e.g. will this work for a mb 
pipeline refspec? 

+refs/heads/${BRANCH_NAME}:refs/remotes/@{remote}/${BRANCH_NAME} 
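
For what it's worth, here's roughly what that would look like as an explicit scripted checkout step, a sketch only: the URL and credentialsId are placeholders, and the extension flags are the git plugin's CloneOption as I understand it.

```
// Sketch: narrow per-branch refspec, honored on the initial clone, with tags kept.
checkout([$class: 'GitSCM',
    branches: [[name: "*/${env.BRANCH_NAME}"]],
    extensions: [[$class: 'CloneOption',
                  honorRefspec: true,   // apply the refspec below on the initial clone too
                  noTags: false,        // fetch tags instead of using --no-tags
                  shallow: false]],
    userRemoteConfigs: [[
        url: 'https://example.com/scm/myproject.git',   // placeholder
        credentialsId: 'my-git-creds',                  // placeholder
        refspec: "+refs/heads/${env.BRANCH_NAME}:refs/remotes/origin/${env.BRANCH_NAME}"
    ]]])
```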

Seems like this should be a built-in option for mb pipeline configs. 
Anyway, I will experiment with this and see how much time savings we get. 
Let me know if there's anything else I should know about this or if I'm 
making any wrong assertions above.

..and to my original question, I suppose this means there's really no way 
to achieve a true "single fetch per build, tags and all", without removing 
either the "WipeWorkspaceTrait" or the "CleanBeforeCheckoutTrait", correct? 
I'm actually ok with it doing multiple fetches as long as it preserves the 
things (tags) it fetched initially. I don't understand why the 
implementation/timing of these traits are clobbering the tags I fetched in 
the initial clone.

Which is to say.. Why is there the distinction between the "initial fetch" 
and "the checkout"? I think there's a lot going on in the plugin-background 
here I don't understand. Can you point me to some docs that explain these 
concepts?

My team is interested in performing "clean checkouts" each build but 
perhaps we should be less paranoid and remove the above Traits (and maybe 
use a reference repo as well.) 

Thanks again.

On Monday, December 16, 2019 at 6:49:55 PM UTC-8, Mark Waite wrote:
>
> As far as I know, there isn't a way to avoid multiple fetches with the 
> current git plugin and git client plugin implementation.
>
> It should be feasible to eventually remove at least one of the duplicate 
> fetches so long as the job has configured the checkout option to use the 
> same refspec in the initial fetch as is used in the checkout.  Refer to 
> "Honor refspec on initial clone" at  
> https://plugins.jenkins.io/git#clone-extensions .  Unfortunately, that is 
> a "feasible" idea but not an implemented idea.  The duplicate fetch is 
> performed in all job types, even Freestyle.  Thus, it may be even later 
> than the cases you're trying to handle with multibranch pipeline.
>
> Reference repositories, narrow refspecs, and shallow clone are the current 
> alternatives to reduce the clone time and disc space for a git workspace.  
> Refer to 
> https://www.slideshare.net/markewaite/git-for-jenkins-faster-and-better for 
> slides that I presented at Jenkins World 2019 on those alternatives.  Refer 
> to 
> https://support.cloudbees.com/hc/en-us/articles/115001728812-Using-a-Git-reference-repository
>  for 
> a deeper dive into the technique.  Refer to https://youtu.be/jBGFjFc6Jf8 
> and https://youtu.be/TsWkZLLU-s4?t=139 for older video descriptions of 
> the techniques.
>
> On Mon, Dec 16, 2019 at 7:37 PM Tim Black wrote:
>
>> Is there ANY multibranch pipeline configuration that would allow me to:
>> * place a Jenkinsfile at single BranchSource repo root, and
>> * perform A SINGLE FETCH of this repo, full stop, and
>> * fetch --tags in this single fetch, and
>> * all of the above works when either "WipeWorkspace" or 
>> "CleanBeforeCheckout" traits are set, so that the initial fetch (tags and 
>> all) are preserved without having to fetch --tags again 
>> ??
>>
>> I have a multibranch pipeline project configured with a single 
>> BranchSource pointing at my repo containing Jenkinsfile. I have set the 
>> following BranchSource traits in config.xml:
>>
>>   
>>   
>>   
>> > "hudson.plugins.git.extensions.impl.SubmoduleOption">
>>   false
>>   true
>>   false
>>   
>>   false
>>   false
>> 
>>   
>>   

why does multibranch pipeline fetch branch source 3 times?

2019-12-16 Thread Tim Black
Is there ANY multibranch pipeline configuration that would allow me to:
* place a Jenkinsfile at single BranchSource repo root, and
* perform A SINGLE FETCH of this repo, full stop, and
* fetch --tags in this single fetch, and
* all of the above works when either "WipeWorkspace" or 
"CleanBeforeCheckout" traits are set, so that the initial fetch (tags and 
all) are preserved without having to fetch --tags again 
??

I have a multibranch pipeline project configured with a single BranchSource 
pointing at my repo containing Jenkinsfile. I have set the following 
BranchSource traits in config.xml:

  
  
  

  false
  true
  false
  
  false
  false

  
  

  

trying to coerce the project to fetch the repo ONCE to obtain everything my 
pipeline needs. (I have also tried this with "WipeWorkspaceTrait" and I get 
the same problem.)

What is happening is that I can configure the project to fetch tags but 
it's meaningless if I have either "WipeWorkspaceTrait" 
or "CleanBeforeCheckoutTrait" set. This is because both of these delete the 
tags in the working tree. The first fetch is obviously there for grabbing 
the Jenkinsfile, but I don't understand why it needs to wipe/clean AFTER 
that. Why do the "WipeWorkspaceTrait" or "CleanBeforeCheckoutTrait" have to 
be implemented AFTER the initial fetch?




Re: What compression algorithm is used by the pipeline zip step, and is it tunable?

2019-12-04 Thread Tim Black
Thanks Björn. We're currently on a single master, and I definitely will 
take performance into consideration when we scale. We're looking into 
installing and integrating with Artifactory in the coming months, which 
should help with managing artifacts, but I suspect there will still be the 
issue of "who does the compression"..

Many of our artifacts are already compressed entities, but I have confirmed 
that zipping everything (using Jenkins zip) shrinks them by more than half, 
so I'm definitely on the right track..
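
In the meantime, per Björn's suggestion below, something like this does the compression on the agent so only one compressed file crosses the channel (a sketch; the label, paths, and xz flags are illustrative, not our actual config):

```
// Sketch: compress on the agent, then archive a single compressed file.
node('linux-agent') {                       // placeholder label
    sh 'tar -cf - build/output | xz -6 -T0 > artifacts.tar.xz'
    archiveArtifacts artifacts: 'artifacts.tar.xz', fingerprint: true
}
```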

On Monday, December 2, 2019 at 11:09:19 PM UTC-8, Björn Pedersen wrote:
>
> Hi,
>
> I would probably try to compress on the agent before even trying to 
> transfer the large data to the master. This avoids load on the master a) 
> due to transfer and b) due to compression.
> And if the artifacts get really huge, consider storing them  independent 
> from jenkins (S3, maven-style repo, whatever matches your use-case).
>
>
> Björn
>
> Am Montag, 2. Dezember 2019 20:56:02 UTC+1 schrieb Tim Black:
>>
>> Our projects produce large artifacts that now need to be compressed, and 
>> I'm considering my alternatives. The zip step 
>> <https://jenkins.io/doc/pipeline/steps/pipeline-utility-steps/#zip-create-zip-file>
>>  
>> would be a nice non-plugin solution but I'm curious what compression 
>> technique this uses. The documentation page linked above doesn't show any 
>> options that pertain to compression tuning.
>>
>> I've also seen the Compress Artifacts Plugin 
>> <https://github.com/jenkinsci/compress-artifacts-plugin>, but I can't 
>> tell from its docs either whether the algo is tunable. Also I'd rather not 
>> depend on another plugin.
>>
>> If neither of the above work, I'll simply use sh step to call xz, gzip, 
>> bzip, or the like, from my linux-based master.
>>
>> Thanks for your consideration,
>> Tim Black
>>
>



What compression algorithm is used by the pipeline zip step, and is it tunable?

2019-12-02 Thread Tim Black
Our projects produce large artifacts that now need to be compressed, and 
I'm considering my alternatives. The zip step 
<https://jenkins.io/doc/pipeline/steps/pipeline-utility-steps/#zip-create-zip-file>
would be a nice non-plugin solution, but I'm curious what compression 
technique this uses. The documentation page linked above doesn't show any 
options that pertain to compression tuning.

I've also seen the Compress Artifacts Plugin 
<https://github.com/jenkinsci/compress-artifacts-plugin>, but I can't tell 
from its docs either whether the algo is tunable. Also I'd rather not 
depend on another plugin.

If neither of the above work, I'll simply use sh step to call xz, gzip, 
bzip, or the like, from my linux-based master.

Thanks for your consideration,
Tim Black



Re: Do you really have to use multi-branch pipeline to do pipeline as code?

2019-11-07 Thread Tim Black
Thanks Mark. I created a simple change for review: 
https://github.com/jenkins-infra/jenkins.io/pull/2631

On Tuesday, October 29, 2019 at 9:57:42 AM UTC-7, Mark Waite wrote:
>
> There is a link at the bottom of each page, "Improve this page".  We 
> welcome contributions to the Jenkins pipeline documentation.
>
> Yes, you're correct that I should have said:
>
> The pipeline as code page teaches users how to automate the creation, 
>> execution, and deletion of pipeline jobs based on the creation and deletion 
>> of *branches*.
>>
>
> It depends what you mean by "a better place for finding this kind of 
> information".  The documentation on jenkins.io is a good starting point.  
> It includes tutorials, how-to guides, and reference material.  Specific 
> questions are asked and answered on mailing lists, in the chat systems, on 
> Q sites like stackoverflow 
> <https://stackoverflow.com/questions/tagged/jenkins-pipeline>, and in 
> blog posts on many locations.  Videos are available from YouTube and other 
> locations, including segments like the "Jenkins Minute 
> <https://jenkins.io/blog/2017/08/08/introducing-jenkins-minute/>" video 
> series.  Self-paced courses are available from CloudBees, Udemy, and other 
> online course systems.  Jenkins Pipeline Fundamentals 
> <https://standard.cbu.cloudbees.com/cloudbees-university-jenkins-pipeline-fundamentals>
>  
> from CloudBees is no charge (as one example).
>
> On Tue, Oct 29, 2019 at 9:54 AM Tim Black wrote:
>
>> Thanks Mark. Of course you're correct. My main problem and point to make 
>> here (and with Jenkins ecosystem in general) is that the documentation says 
>> you have to use multi-branch pipeline to use pipeline as code, whereas this 
>> is patently false. I see in my freestyle pipeline job configuration that 
>> there is an option when specifying the pipeline to use scm. This is clearly 
>> also pipeline as code.
>>
>> Also minor correction in your first paragraph, I believe you mean 
>> creation and deletion of branches.
>>
>> Thanks again. I'm curious to see if people have comments or feedback on 
>> Jenkins pipeline documentation. Is there a better place than where I have 
>> linked to above for finding this kind of information?
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Jenkins Users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to jenkins...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/jenkinsci-users/cbb8a468-c4b3-4ee7-87ad-dc89dc19daac%40googlegroups.com
>> .
>>
>
>
> -- 
> Thanks!
> Mark Waite
>



Re: Do you really have to use multi-branch pipeline to do pipeline as code?

2019-10-29 Thread Tim Black
Thanks Mark. Of course you're correct. My main problem and point to make here 
(and with Jenkins ecosystem in general) is that the documentation says you have 
to use multi-branch pipeline to use pipeline as code, whereas this is patently 
false. I see in my freestyle pipeline job configuration that there is an option 
when specifying the pipeline to use scm. This is clearly also pipeline as code.

Also minor correction in your first paragraph, I believe you mean creation and 
deletion of branches.

Thanks again. I'm curious to see if people have comments or feedback on Jenkins 
pipeline documentation. Is there a better place than where I have linked to 
above for finding this kind of information?



Do you really have to use multi-branch pipeline to do pipeline as code?

2019-10-28 Thread Tim Black
Just double checking this: at the below link it says the only way to use 
pipeline as code is to use the multibranch pipeline configuration. Is this 
really true?

https://jenkins.io/doc/book/pipeline-as-code/

I thought perhaps there was a way to configure a job that doesn't monitor for 
branches and create corresponding jobs, but simply pulls a Jenkinsfile from 
SCM.

Perhaps this behavior I describe is just a subset of what multibranch pipeline 
can do? So to achieve my simple behavior, do I just use multibranch pipeline 
and configure it to not monitor for new branches?



Re: problems changing triggers{} in multibranch pipeline Jenkinsfile

2019-10-04 Thread Tim Black
Thanks Bjorn for helping clarify that. I would take what you said a step 
further by changing "could" to "must", since it appears that simply adding a 
trigger back into a Jenkinsfile for a job that has run since the trigger was 
removed, will necessarily have no effect until you manually run the job. 

Paraphrasing, and I wish this was in the docs somewhere, "Multibranch Pipeline 
job properties specified in a Jenkinsfile, e.g. triggers, agents.. have no 
effect until the job is run."

This is perhaps obvious to those with more experience with this plugin's SCM 
scanning feature.

Perhaps another good way to say it would be in the section that talks about the 
scanning feature to call out that "the scanning only looks at Jenkinsfile 
existence, it does not apply changes made inside Jenkinsfiles until they are 
run."



Re: Anyway to programatically update a Jenkinsfile?

2019-10-03 Thread Tim Black
I have to agree with Jeremy on this. Using a templating engine like m4 or 
jinja2 is superior to programmatic generation of groovy, if you need to do 
it on the fly. Why? Simplicity and Readability.

I inherited a build system that is based on using a multibranch pipeline 
with a Jenkinsfile that loads and executes a groovy script that's generated 
on the fly based on input from structured data from json files found 
alongside the Jenkinsfile in the repo. So, yeah, it's basically a 
hand-rolled solution to parametrizing a job. It sounds like that's 
basically what you're trying to accomplish as well, Jeff.

The part I don't like is the way we're generating the groovy code. We've a 
library of python code that uses classes and a streaming approach to 
outputting all the syntax required to implement the pipeline. First of all, 
there's a high learning curve to using this approach - when trying to 
determine what the groovy output will look like you have to run the code 
and look at the output, which is a whole software development process in 
and of itself. When making changes, instead of using a template with 
variable substitution and loops, with the programmatic approach you 
have to write (and test and debug) code to make the groovy code you want. A 
template already resembles what the final output (groovy) will look like, 
so it's quite intuitive to work on. Also, every time you need to use a new 
groovy construct, with the programmatic approach you have to add support 
for it in your code gen library; with a template you just write it, based 
on the pipeline syntax documents. This brings up another reason why the 
templated approach is superior: the programmatic approach obfuscates the 
details of the language from the code gen developer. Looking back and forth 
between the jenkins pipeline syntax docs and a template is straightforward, 
not so with a library of classes defined to generate groovy.

So, that's my $0.02. I plan to refactor our code gen library to use jinja2 
templating to make our pipelines more intuitive and easier to work with.

Another thing related to keep in mind is that, as far as I can tell, the 
"load" command, which is the mechanism you would use to actually run your 
generated pipeline script, is designed to work with Scripted Pipeline, not 
Declarative. This distinction is very poorly documented by Cloudbees, IMO. 
Please chime in if you have more info on dynamic generation and running of 
pipeline code.
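
For reference, the load-based pattern looks roughly like this (a sketch; the file name is a placeholder, and the loaded script is scripted pipeline that must end with `return this`):

```
// Jenkinsfile (scripted): check out the repo, then load and run the
// generated pipeline code. 'generated-pipeline.groovy' is a placeholder
// produced ahead of time (e.g. rendered from a template); it defines a
// run() method and ends with 'return this'.
node {
    checkout scm
    def generated = load 'generated-pipeline.groovy'
    generated.run()
}
```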

On Tuesday, October 1, 2019 at 8:43:25 AM UTC-7, Jeff Sault wrote:
>
> Does anyone know of any libraries/tools that will allow me to 
> add/remove/update sections of a Jenkinsfile programmatically? I have a load 
> of different projects which I need to update to include some new mandatory 
> parameters. I'd like to go down the 'shared library' route but in this 
> instance it's not really possible. Parsing the Jenkinsfile is non-trivial 
> but I assume there's something in groovy/jenkins land which can already do 
> the job.
>
> Thanks
> Jeff
>



stash/unstash file size limit and performance

2019-10-03 Thread Tim Black
The Jenkins documentation for the stash pipeline step 
says that it's only to be used for "small files", with no explanation why.

Specifically, how is stash bad for large, i.e. not small files? How should 
we define "not small"? 

What's the deal, is it just a slow transfer, or is it buggy? Just curious 
why I shouldn't stash a 150MB file on a master so I can unstash it from a 
slave agent in a subsequent stage. Seems much simpler than using External 
Workspace Manager which is what they recommend for everything but "small 
files".



problems changing triggers{} in multibranch pipeline Jenkinsfile

2019-10-03 Thread Tim Black
We have a multibranch pipeline job set up to scan a git repo (which 
contains Jenkinsfile at its root) for branches and create branch-specific 
jobs for each branch discovered. The Jenkinsfile on a branch specifies:

triggers {
cron('@midnight')
}

and this indeed runs nightly at midnight. However, when I delete the above 
block and commit the Jenkinsfile to my branch, it appears to have no 
effect. Last night the build still ran at midnight. This is my problem.

I confirmed the change by looking on the branch job configuration page and 
indeed see an empty BuildTriggers section. There are no triggers specified 
and none of the trigger checkboxes are checked.

Please confirm whether this is indeed the way I am supposed to disable a 
trigger specified in a multibranch pipeline Jenkinsfile. 

Also, note that in the top-level pipeline job configuration, BranchSources 
is set up so "All branches get same properties" and set the property 
"Suppress Automatic SCM Triggering" because we only want builds to be done 
nightly for now, and not as a result of commits. Also, "Scan Multibranch 
Pipeline Triggers" is configured to scan our repo every minute for branch 
changes. I assume this means that it also scans and applies any changes to 
Jenkinsfiles (e.g. our trigger {} changes) to the branch job. I believe 
this to be happening, bc as I've said, the job configuration page seems to 
have reflected our change to the Jenkinsfile in SCM. But the build still 
was triggered last night at midnight.

Please let me know if you know what's going wrong here, or have any 
suggestions for troubleshooting this.

Thanks,
Tim
