Re: Scenario for Jenkins Pipeline

2017-12-06 Thread Bill Dennis
Did you consider using a single declarative pipeline for this with multiple 
stages? If you make the agent declaration at the main pipeline level, the same 
workspace should be used for each stage, so there is no need to copy the 200MB 
of files around - see this Stack Overflow answer: 
https://stackoverflow.com/questions/43948248/jenkins-declarative-pipeline-what-workspace-is-associated-with-a-stage-when-the
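
A minimal sketch of that layout (the sh commands are just placeholders for 
your actual build, test, SonarQube and Cloud Foundry steps):

pipeline {
    agent any // declared once here, so every stage runs in the same workspace

    stages {
        stage('Build and Test') {
            steps {
                // build artifacts stay in this workspace for the later stages
                sh 'npm install && npm test'
            }
        }

        stage('SonarQube Analysis') {
            steps {
                // reads the same workspace - no copy-artifact step needed
                sh 'sonar-scanner'
            }
        }

        stage('Deploy') {
            steps {
                sh 'cf push my-app'
            }
        }
    }
}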

I generally find Stack Overflow better for answers to Jenkins pipeline 
questions, as mostly someone has already asked my question there and got an 
answer.

Bill


On Tuesday, 5 December 2017 22:40:20 UTC, addie k  wrote:
> After waiting for a response I thought it is best to figure these things out 
> by myself by trial and error. 
> 
> 
> In the end I decided to go with using pipeline.
> 
> 
> 
> 
> Thanks a lot for the incredible help that I got here. Very helpful indeed.
> 
> 
> 
> 
> 
> On Tuesday, December 5, 2017 at 4:55:39 AM UTC+11, addie k wrote:
> Hey Group,
> 
> 
> I am new to Jenkins. We are creating a CI/CD pipeline and I had some basic 
> questions. Following is my scenario:
> 
> 
> 1. The Node project is cloned from the Git repo and built.
> 2. The test cases are run and a unit test code coverage report is generated.
> 3. The artifacts are then to be used by a SonarQube analysis task.
> 4. With all the above steps successful, the built artifact will be pushed to 
> Cloud Foundry.
> 
> The artifacts here are basically the contents of the workspace directory - 
> they are not zipped or anything. The total size is approx 200 MB.
> My questions are:
> 
> 
> 1. Will it be a good idea for task #1 and task #2 to be part of the same 
> Jenkins project? I tried isolating them into two separate projects, but 
> since task #2 uses the artifacts of task #1, I find the copy-artifact 
> between the two workspaces to be time-consuming.
> 2. Task #3 uses the artifacts produced in task #2 and task #1. If I isolate 
> task #3 in a separate Jenkins project, I face the same problem of artifacts 
> being copied and taking time.
> 3. I tried using the Jenkins "Pipeline" project, but I am confused - the 
> tasks will each use their own workspace and create artifacts in their own 
> job directories, so how will the pipeline bring them together? Is this even 
> a good candidate for a pipeline? 
> Sorry about all the questions - I am new to this. If you can help me 
> understand this, I will really appreciate it! I will be happy to provide 
> more information in case I am not clear anywhere.
> 
> 
> Thanks,
> addie



Re: Job Scheduling per 10 secs

2017-06-08 Thread Bill Dennis
I don't think the Jenkins CRON spec has seconds resolution.

You can build an orchestrator job that is scheduled to run every minute.

Then in that job, loop 6 times with a sleep of 10 seconds and build the other 
job.

Also use the 'do not allow concurrent builds' option so runs cannot overlap.

Something like this:

pipeline {

    agent any

    options {
        disableConcurrentBuilds()
        timestamps()
    }

    triggers {
        // Default triggering on a schedule every 1 minute, or you can get it from ENV
        cron("${env.CRON_SCHEDULE ?: '* * * * *'}")
    }

    stages {

        stage('Trigger Job Every 10s') {
            steps {
                script {
                    for (int i = 0; i < 6; i++) {
                        build job: 'foo', wait: false
                        sleep 10
                    }
                }
            }
        }
    }
}

Have fun!

--Bill


On Wednesday, 7 June 2017 17:07:35 UTC+1, Ashish Kaushik wrote:
>
> Hi, 
> I am looking for a solution which allows me to schedule a job to run every 
> 10 secs; also, the job should not run if the previous instance has not yet 
> finished. 
>
> I have checked the plugin store but can't find anything that supports this 
> requirement. Any pointers would be greatly appreciated. 
>
> Thanks
>
> Ashish Kaushik
> SourceFuse Technologies



[pipeline] approve input statement from another job with pipeline code

2017-06-07 Thread Bill Dennis
(re-posting with corrections)

Has anyone managed to write pipeline code (or even their own plugin) to 
approve an input using pipeline code from another job (shared library 
NonCPS method or whatever)?

I have this scenario -

JobA

input id: 'JOBA_CALLBACK', message: 'Waiting for JobB', parameters: [string(
defaultValue: '', description: 'ID of the callback', name: 'CALLBACK_ID')]


JobB

I want to write this with this hypothetical pipeline DSL which doesn't 
exist:

approveInput jobName: 'JobA', buildNumber: 1234, inputId: 'JOBA_CALLBACK', 
parameters: [CALLBACK_ID: '5678']

Job B in my scenario is actually called by an external system and can route 
to a couple of other jobs based on job parameters (so I cannot simply put 
Job B code into A).

CloudBees support recommended that I use a REST call-back onto Jenkins to do 
this (a REST call onto Jenkins from JobB).

I do have this and it works. The code looks like this, using the HTTP Request 
plugin:

def REQUEST_URL =
    "${env.JENKINS_URL}/job/JobA/1234/wfapi/inputSubmit?inputId=JOBA_CALLBACK".toString()

def formJsonParamUrlEncoded = java.net.URLEncoder.encode(
    '{"parameter":[{"name":"CALLBACK_ID","value":"5678"}]}').toString()


request = httpRequest authentication: 'LOCAL_JENKINS',
    consoleLogResponseBody: true,
    customHeaders: [[name: 'content-type', value: 'application/x-www-form-urlencoded']],
    httpMode: 'POST',
    requestBody: "json=${formJsonParamUrlEncoded}",
    url: REQUEST_URL,
    validResponseCodes: '200'

echo "Request returned: ${request.toString()}"

However, this does introduce some failures into our system when our IT folks 
have outages that affect our authentication and the like (the REST call to 
Jenkins is authenticated).

I would like to remove this possibility of failure hence looking at code to 
do it.

I feel that there ought to be a way to call Jenkins internals to do this, 
but probably it would be best to write a plugin that implements the 
hypothetical DSL I have.

If I did write a plugin, would others find this useful?

--Bill



Re: Blue ocean Pipeline chaining

2017-06-02 Thread Bill Dennis
There is no built-in chaining mechanism that I am aware of.

If you want to chain 2 pipeline jobs 'P1' and 'P2' you can just build P2 
from a 'post' section of the last stage on P1.

Your pipeline code would need to look something like this:

pipeline {

    agent any

    stages {

        stage('First') {
            steps {
                echo 'Pipeline P1 first stage'
            }
        }

        stage('Last') {
            steps {
                echo 'Pipeline P1 last stage'
            }

            post {
                always {
                    build 'P2'
                }
            }
        }
    }

    post {
        failure {
            echo "Pipeline chain failed"
        }
    }
}


You can probably build it in the BlueOcean pipeline editor somehow.

--Bill

On Friday, 2 June 2017 07:01:39 UTC+1, viral chande wrote:
>
> Any response on this will be really helpful.
>
> On Thursday, June 1, 2017 at 12:42:54 PM UTC+5:30, slide wrote:
>>
>> Please don't double post.
>>
>> On Wed, May 31, 2017 at 11:25 PM viral chande  
>> wrote:
>>
>>> Hi,
>>> Is there any why i can trigger one pipeline at the end of another in 
>>> blue ocean?
>>> Thanks,
>>> Regards,
>>> Viral Chande
>>>



Re: How to get build results from a build job in a pipeline

2017-05-17 Thread Bill Dennis
Ah, I just saw that you need the job to run all the builds even if one fails. 
You can do it with a parallel section like this:

Map buildResults = [:]

Boolean failedJobs = false

void notify_email(Map results) {
    echo "TEST SIMULATE notify: ${results.toString()}"
}

Boolean buildJob(String jobName, Map results) {

    def jobBuild = build job: jobName, propagate: false

    def jobResult = jobBuild.getResult()

    echo "Build of '${jobName}' returned result: ${jobResult}"

    results[jobName] = jobResult

    return jobResult == 'SUCCESS'
}

pipeline {

    agent any

    stages {

        stage('Parallel Builds') {

            steps {

                parallel(

                    "testJob1": {
                        script {
                            if (!buildJob('testJob1', buildResults)) {
                                failedJobs = true
                            }
                        }
                    },

                    "testJob2": {
                        script {
                            if (!buildJob('testJob2', buildResults)) {
                                failedJobs = true
                            }
                        }
                    }
                )
            }
        }

        stage('Completion') {

            steps {
                script {
                    if (failedJobs) {
                        error("One or more jobs have failed")
                    }
                }
            }
        }
    }

    post {

        always {
            echo "Build results: ${buildResults.toString()}"
        }

        success {
            echo "All builds completed OK"
        }

        failure {
            echo "A job failed"

            script {
                notify_email(buildResults)
            }
        }
    }
}



And the output looks like this:

Started by user anonymous
[Pipeline] node
Running on master in /var/jenkins_home/workspace/foo
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Parallel Builds)
[Pipeline] parallel
[Pipeline] [testJob1] { (Branch: testJob1)
[Pipeline] [testJob2] { (Branch: testJob2)
[Pipeline] [testJob1] script
[Pipeline] [testJob1] {
[Pipeline] [testJob2] script
[Pipeline] [testJob2] {
[Pipeline] [testJob1] build (Building testJob1)
[testJob1] Scheduling project: testJob1
[Pipeline] [testJob2] build (Building testJob2)
[testJob2] Scheduling project: testJob2
[testJob1] Starting building: testJob1 #8
[testJob2] Starting building: testJob2 #4
[Pipeline] [testJob2] echo
[testJob2] Build of 'testJob2' returned result: SUCCESS
[Pipeline] [testJob2] }
[Pipeline] [testJob2] // script
[Pipeline] [testJob2] }
[testJob1] Build of 'testJob1' returned result: FAILURE
[Pipeline] [testJob1] echo
[Pipeline] [testJob1] }
[Pipeline] [testJob1] // script
[Pipeline] [testJob1] }
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Completion)
[Pipeline] script
[Pipeline] {
[Pipeline] error
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] echo
Build results: [testJob2:SUCCESS, testJob1:FAILURE]
[Pipeline] echo
A job failed
[Pipeline] script
[Pipeline] {
[Pipeline] echo
TEST SIMULATE notify: [testJob2:SUCCESS, testJob1:FAILURE]
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: One or more jobs have failed
Finished: FAILURE


--Bill

On Wednesday, 17 May 2017 03:45:27 UTC+1, Jesse Kinross-Smith wrote:
>
> How can I do this right - I want the results from a job I run (I need to 
> run a dozen of these in succession and will email devs if one of them 
> fails) 
>
> try{ BuildResults = build job: 'testJob'; currentBuild.result='SUCCESS'; } 
>> catch(e){ currentBuild.result = 'FAILURE'; } finally { 
>> notify_email(BuildResults); }
>
>
> if i do the above I only get a valid BuildResults in notify_email IF the 
> job is successful, 
> if it fails it causes an exception saying No such property: BuildResults
>
> currentBuild is useless as it's the pipeline results, not the job results 
> which is what I want
>
> I need the try/catch so I can continue to run my other jobs - otherwise 
> it'll stop immediately once one job fails
>
> I'm sure there's some syntax I'm missing here, but I'm struggling to find 
> it.
>
> Any help you can provide is appreciated.
>
> Regards,
>
> Jesse
>



Re: How to get build results from a build job in a pipeline

2017-05-17 Thread Bill Dennis
You could build the downstream jobs without propagating the error to the 
top-level job calling them.

Then you could get the result from each downstream job and handle it to do the 
notifications according to SUCCESS / FAILURE / UNSTABLE etc.

I do this sort of thing using declarative pipeline, where I do all the 
notifications in post { failure { } } sections in one place for the job or a 
stage (declarative supports post handling at the stage or job level).

I would recommend doing something like this:

def buildResults = [:]

void notify_email(Map results) {
    echo "TEST SIMULATE notify: ${results.toString()}"
}

pipeline {

    agent any

    stages {

        stage('Build testJob') {

            steps {
                script {
                    def jobBuild = build job: 'testJob', propagate: false

                    def jobResult = jobBuild.getResult()

                    echo "Build of 'testJob' returned result: ${jobResult}"

                    buildResults['testJob'] = jobResult

                    if (jobResult != 'SUCCESS') {
                        error("testJob failed with result: ${jobResult}")
                    }
                }
            }
        }
    }

    post {

        always {
            echo "Build results: ${buildResults.toString()}"
        }

        success {
            echo "All builds completed OK"
        }

        failure {
            echo "A job failed"

            script {
                notify_email(buildResults)
            }
        }
    }
}


Here is what the output would look like for success and failure:


Started by user anonymous
[Pipeline] node
Running on master in /var/jenkins_home/workspace/foo
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Build testJob)
[Pipeline] script
[Pipeline] {
[Pipeline] build (Building testJob)
Scheduling project: testJob
Starting building: testJob #1
[Pipeline] echo
Build of 'testJob' returned result: SUCCESS
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] echo
Build results: [testJob:SUCCESS]
[Pipeline] echo
All builds completed OK
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS



Started by user anonymous
[Pipeline] node
Running on master in /var/jenkins_home/workspace/foo
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Build testJob)
[Pipeline] script
[Pipeline] {
[Pipeline] build (Building testJob)
Scheduling project: testJob
Starting building: testJob #2
[Pipeline] echo
Build of 'testJob' returned result: FAILURE
[Pipeline] error
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] echo
Build results: [testJob:FAILURE]
[Pipeline] echo
A job failed
[Pipeline] script
[Pipeline] {
[Pipeline] echo
TEST SIMULATE notify: [testJob:FAILURE]
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: testJob failed with result: FAILURE
Finished: FAILURE


I recommend using declarative pipeline!


--Bill

On Wednesday, 17 May 2017 03:45:27 UTC+1, Jesse Kinross-Smith wrote:
>
> How can I do this right - I want the results from a job I run (I need to 
> run a dozen of these in succession and will email devs if one of them 
> fails) 
>
> try{ BuildResults = build job: 'testJob'; currentBuild.result='SUCCESS'; } 
>> catch(e){ currentBuild.result = 'FAILURE'; } finally { 
>> notify_email(BuildResults); }
>
>
> if i do the above I only get a valid BuildResults in notify_email IF the 
> job is successful, 
> if it fails it causes an exception saying No such property: BuildResults
>
> currentBuild is useless as it's the pipeline results, not the job results 
> which is what I want
>
> I need the try/catch so I can continue to run my other jobs - otherwise 
> it'll stop immediately once one job fails
>
> I'm sure there's some syntax I'm missing here, but I'm struggling to find 
> it.
>
> Any help you can provide is appreciated.
>
> Regards,
>
> Jesse
>



Re: Converting a string parameter to a List of Maps

2017-03-27 Thread Bill Dennis
I think it is failing on the call to the generateXML() method - is it defined 
in your job, how is it defined, and what parameters does it expect?

BTW, be aware that the Groovy 'Eval' will allow anyone to execute whatever 
Groovy code they would like to put in the input parameter. This line:

def inputMap = Eval.me("$input") 

Your local IT security people may not like this :-)

Have you considered passing the List/Map as a JSON structure on the input? 
That seems safer, unless there is a better way to secure the Groovy eval.
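
For example, something like this could work (a sketch, assuming the job 
parameter is supplied as a JSON array such as 
[{"name": "a", "objectName": "wf_A", ...}, ...] instead of the Groovy literal):

import groovy.json.JsonSlurperClassic

@NonCPS
def parseObjectList(String json) {
    // JsonSlurperClassic returns plain serializable lists and maps
    return new JsonSlurperClassic().parseText(json)
}

def objectList = parseObjectList(objectListParameter)

// C-style loop avoids the pipeline issues with Groovy .each {} iterators
for (int i = 0; i < objectList.size(); i++) {
    echo "objectName: ${objectList[i].objectName}"
}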

--Bill

On Sunday, 26 March 2017 20:06:16 UTC+1, ok999 wrote:
>
> hi, 
>
> Can anyone let me know me, what is wrong here. I have the pipeline script, 
> and i am trying to pass a string parameter when the job is triggered. The 
> parameter will then be converted into a List of maps, so that i can iterate 
> through it. 
>
> Here is what i am trying:
>
>
> String input = "$objectListParameter"  //This is from the job's input 
> String parameter
> println input 
> def inputMap = Eval.me("$input")  
> def objectList=[] //initialize an empty List
> objectList << inputMap  
> println objectList
> println objectList.getClass()
> //call the method
> generateXML(objectList)  // This is the method marked with @NonCPS
>
>
>
> The input parameter, ($objectListParameter) looks something like this: 
>
> [[name: 'a', file: 'fileA' , objectName: 'wf_A' , objectType: 'workflow', 
> sourceRepository: 'DEV2', folderNames: [srcFolder1: 'TgtFolder1', 
> srcFolder2: 'TgtFolder2']],[ name: 'B' , file: 'fileB' , objectName: 'wf_B' 
> , objectType: 'workflow', sourceRepository: 'DEV2', folderNames: 
> [srcFolder4: 'TgtFolder4', srcFolder3: 'TgtFolder3']]]
>
>
> In the jenkins console Log, this the snippet 
>
>
>
>
>
> [Pipeline] echo
> [[name: 'a', file: 'fileA' , objectName: 'wf_A' , objectType: 'workflow', 
> sourceRepository: 'DEV2', folderNames: [srcFolder1: 'TgtFolder1', 
> srcFolder2: 'TgtFolder2']],[ name: 'B' , file: 'fileB' , objectName: 'wf_B' , 
> objectType: 'workflow', sourceRepository: 'DEV2', folderNames: [srcFolder4: 
> 'TgtFolder4', srcFolder3: 'TgtFolder3']]]
> [Pipeline] echo
> [[{name=a, file=fileA, objectName=wf_A, objectType=workflow, 
> sourceRepository=DEV2, folderNames={srcFolder1=TgtFolder1, 
> srcFolder2=TgtFolder2}}, {name=B, file=fileB, objectName=wf_B, 
> objectType=workflow, sourceRepository=DEV2, 
> folderNames={srcFolder4=TgtFolder4, srcFolder3=TgtFolder3}}]]
> [Pipeline] echo
> class java.util.ArrayList
> [Pipeline] }
> [Pipeline] // node
> [Pipeline] End of Pipeline
> hudson.remoting.ProxyException: groovy.lang.MissingMethodException: 
> No signature of method: 
> WorkflowScript$_generateXML_closure1$_closure2$_closure3$_closure4$_closure5$_closure6$_closure9.doCall()
>  is applicable for argument types: (java.util.LinkedHashMap) values: 
> [[srcFolder1:TgtFolder1, srcFolder2:TgtFolder2]]
> Possible solutions: doCall(java.lang.Object, java.lang.Object), findAll(), 
> findAll(), isCase(java.lang.Object), isCase(java.lang.Object)
>   at 
> org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:286)
>   at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1024)
>   at groovy.lang.Closure.call(Closure.java:414)
>   at groovy.lang.Closure.call(Closure.java:430)
>   at 
> org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2030)
>   at 
> org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2015)
>   at 
> org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2056)
>   at org.codehaus.groovy.runtime.dgm$162.invoke(Unknown Source)
>   at 
> org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:274)
>
>
>
>
>
>
>



[declarative pipeline] Variable persistence after Jenkins restart

2017-03-27 Thread Bill Dennis
Hi -

I have an issue with variables being restored in a pipeline job after a 
Jenkins restart.

Say I have a pipeline like this:

// Local var set to 'env'
def localEnv = env

// Simple local var
def foo = "Bar"

pipeline {
    agent any

    environment {
        // You can do it here but I really want the whole env in my own var
        myBuildNumber = "${env.BUILD_NUMBER}"
    }

    stages {

        stage('Init') {
            steps {
                echo "${localEnv.BUILD_NUMBER}"
            }
        }

        stage('Hello') {
            steps {
                echo "hello"

                input 'Waiting for input...'

                // Lets see what we have
                echo "localEnv.BUILD_NUMBER = ${localEnv.BUILD_NUMBER}"
                echo "env.BUILD_NUMBER = ${env.BUILD_NUMBER}"
                echo "foo = ${foo}"
                echo "myBuildNumber = ${myBuildNumber}"
            }
        }
    }
}

I am making my own local variable as a copy of the 'env' variable (localEnv). 
I do this a lot in my jobs because I call custom library DSL-type structures 
similar to this:

myCustomThing {
   reference = localEnv.BUILD_NUMBER
   message = "Something about build ${localEnv.BUILD_NUMBER}"
}

If I use 'env' there, I get a null pointer exception on this closure - 
something to do with the Groovy Closure delegate (that is another question, I 
guess), so I did this 'def localEnv = env' thing to get around that. Anyway, 
for the normal case my job above outputs this after approving the input, and 
all is well:

Waiting for input...

Proceed <http://localhost:32779/job/Foo/21/console#> or Abort 
<http://localhost:32779/job/Foo/21/console#>

Approved by Bill Dennis <http://localhost:32779/user/bill>

[Pipeline] echo
localEnv.BUILD_NUMBER = 21
[Pipeline] echo
env.BUILD_NUMBER = 21
[Pipeline] echo
foo = Bar
[Pipeline] echo
myBuildNumber = 21

When I restart Jenkins while the input is waiting and then approve the input, 
the output is like this (note the null):

Waiting for input...

Proceed <http://localhost:32781/job/Foo/22/console#> or Abort 
<http://localhost:32781/job/Foo/22/console#>

Resuming build at Mon Mar 27 23:48:21 UTC 2017 after Jenkins restart
Ready to run at Mon Mar 27 23:48:32 UTC 2017

Approved by Bill Dennis <http://localhost:32781/user/bill>

[Pipeline] echo
localEnv.BUILD_NUMBER = null
[Pipeline] echo
env.BUILD_NUMBER = 22
[Pipeline] echo
foo = Bar

As can be seen, 'localEnv.BUILD_NUMBER' comes back as *null* after the Jenkins 
restart. So it looks like I need to do a deep copy of 'env', or set up many 
more variables in my environment section for everything from 'env' that needs 
to survive a restart, so that the things I need can be passed to any of my 
shared library stuff. 


Does anyone have any deeper understanding of this?


Thanks,

--Bill






Re: Declarative pipelines vs scripted

2017-03-17 Thread Bill Dennis
Hi -

I'm tending to use declarative as my preference after starting with scripted 
like you did. I'm finding:

- With declarative you can have more of the job configuration in the 
Jenkinsfile, like parameters and SCM polling. It means the Jenkins server can 
pick up the Jenkinsfiles for projects automatically with a MultiBranch 
pipeline container or GitHub Organisation, so you don't have to create the 
job configs, just add the Jenkinsfile to the repo with the code.

- I really like the post section handling in declarative for handling errors 
and failures. You can have post handling at the job or stage level. It means 
you don't need the try-catch-finally handling that you have. Seems cleaner to 
me.

- Declaration and use of tools is cleaner, as is the setup of environment 
variables for the build.

I don't see any issues with your scripted pipeline. I would use the 
"error('Some failure occurred')" step instead of throwing / re-throwing 
exceptions for errors; it allows you to generate the error message at the 
point the failure occurs. I do Google searches against GitHub looking for 
interesting Jenkinsfiles, or look in the CloudBees / Jenkins repos there.
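
As an illustration, here is a minimal sketch of the error step combined with 
declarative post handling (the stage contents are made up):

pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                script {
                    if (!fileExists('build.gradle')) {
                        // generate the message at the point of failure
                        error('No build.gradle found in the workspace')
                    }
                }
                sh './gradlew build'
            }
        }
    }

    post {
        failure {
            echo 'Build failed - do the notifications here'
        }
        always {
            echo 'Runs whether the build failed or not'
        }
    }
}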

I can recommend looking at the Declarative pipeline!

--Bill



On Friday, 17 March 2017 03:55:48 UTC, Nick Le Mouton wrote:
>
> Hi,
>
> I'm just getting my head around pipeline as code and have converted my 
> previous Jenkins job/ant build targets to a Jenkinsfile. As I was looking 
> for documentation on the Jenkins site, I'm seeing mentions of declarative 
> pipelines and it differs from what I've written.
>
> Which method should I be looking to use (especially with blue ocean)? 
> Declarative or scripted? Why should I be using one over the other?
>
> Can I also get some feedback on my Jenkinsfile (
> https://gist.github.com/NoodlesNZ/bf9b50cab82093097796d354e37083f0)? It's 
> hard to find examples beyond "hello world"/simple pipelines.
>
> Thanks
>



Re: Trigger second job after some delay of first job execution

2017-03-15 Thread Bill Dennis
Should have been:

sleep time: 1, unit: 'HOURS'



On Wednesday, 15 March 2017 23:25:44 UTC, Bill Dennis wrote:
>
> Hi -
>
> You can do it with a declarative pipeline job to orchestrate running your 
> test and log collection jobs.
>
> Something like this using 'agent none' so you don't tie up executors when 
> waiting at any point:
>
> pipeline {
> agent none
> 
> stages {
> stage('Build') {
> steps {
> echo 'Triggering test cases..'
> build job: 'TestIt', wait: false
> }
> }
> 
> stage('Waiting for something') {
> steps {
> echo 'Waiting to collect logs'
> sleep time: 1, unit: 'MINUTES'
> }
> }
> 
> stage('Collect Logs') {
> steps {
> echo 'Collecting the logs'
> build 'CollectLogs'
> }
> }
> }
> 
> post {
> success {
> echo "All done"
> }
> 
> failure {
> echo "Something failed"
> }
> }
> }
>
> But how will you know the 1 hour wait is enough? If you trigger the test 
> cases job but elect to wait for that job to complete, will the logs be 
> ready to collect? Like this which defaults to waiting:
>
> build 'TestIt'
>
> --Bill
>
> On Tuesday, 14 March 2017 09:20:41 UTC, Hemanth Reddy wrote:
>>
>> Hi All,
>>
>> I have a requirement some thing like, my first job will run test cases on 
>> the machine.
>>
>> I need to configure another job in such a way that after 1 hour of first 
>> job triggered, my second job should run to collect the logs.
>>
>> Regards,
>> Hemanth
>>
>



Re: Trigger second job after some delay of first job execution

2017-03-15 Thread Bill Dennis
Hi -

You can do it with a declarative pipeline job to orchestrate running your 
test and log collection jobs.

Something like this using 'agent none' so you don't tie up executors when 
waiting at any point:

pipeline {
    agent none

    stages {
        stage('Build') {
            steps {
                echo 'Triggering test cases..'
                build job: 'TestIt', wait: false
            }
        }

        stage('Waiting for something') {
            steps {
                echo 'Waiting to collect logs'
                sleep time: 1, unit: 'MINUTES'
            }
        }

        stage('Collect Logs') {
            steps {
                echo 'Collecting the logs'
                build 'CollectLogs'
            }
        }
    }

    post {
        success {
            echo "All done"
        }

        failure {
            echo "Something failed"
        }
    }
}

But how will you know the 1 hour wait is enough? If you trigger the test cases 
job but elect to wait for that job to complete, will the logs then be ready to 
collect? Like this, which defaults to waiting:

build 'TestIt'

--Bill

On Tuesday, 14 March 2017 09:20:41 UTC, Hemanth Reddy wrote:
>
> Hi All,
>
> I have a requirement some thing like, my first job will run test cases on 
> the machine.
>
> I need to configure another job in such a way that after 1 hour of first 
> job triggered, my second job should run to collect the logs.
>
> Regards,
> Hemanth
>



Re: Iteration over list or map in Pipeline script

2017-03-13 Thread Bill Dennis
You can do it with a for loop. There are issues using Groovy iterators like 
each {}. Try something like this:

pipeline {
    agent any

    stages {
        stage('loop') {
            steps {
                script {
                    def x = ['a', 'b', 'c']
                    println x
                    for (String item : x) {
                        println item
                    }
                }
            }
        }
    }
}


On Monday, 13 March 2017 14:26:44 UTC, Martin Schmude wrote:
>
> Hello,
>
> I have a freestyle job with one step of the kind "Execute Groovy Script". 
> Its Groovy code is
>
> def x = ['a', 'b', 'c']
> println x
> x.each { println it }
>
>
> The output of this job is (not surprinsingly):
>
> [Test-Groovy2] $ groovy 
> /var/lib/jenkins/workspace/Test-Groovy2/hudson3825239812036801886.groovy
> [a, b, c]
> a
> b
> c
> Finished: SUCCESS
>
>
> But if I create a pipeline job with the pipeline script set to the same 
> Groovy code, its output is:
>
> [Pipeline] echo
> [a, b, c]
> [Pipeline] echo
> a
> [Pipeline] End of Pipeline
> Finished: SUCCESS
>
>
> The .each() gets the first element in the list and nothing more.
> What's going on here? I thought that pipeline scripts are just Groovy plus 
> some pipeline DSL, but I seem to be wrong.
>
> BTW: my Jenkins is 2.27.
>
>



Re: What is the freestyle "inject environment variables" equivalent inside blue ocean pipelines?

2017-03-09 Thread Bill Dennis
Hi -

Yes, I have done this JSON parse. Example on my Github:
https://github.com/macg33zr/jenkins-experimental-pipelines/blob/master/json-parse-pipeline.groovy

It is a bit ugly as it needs script approvals (or you can turn off the 
pipeline sandbox, or put the code in a trusted global pipeline library).

You need a NonCPS method like this:

import groovy.json.JsonSlurperClassic

@NonCPS
def parseJsonToMap(String json) {
    final slurper = new JsonSlurperClassic()
    return new HashMap<>(slurper.parseText(json))
}
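
Tying it to your JSON file, usage might look something like this (an untested 
sketch - the file path comes from the environment variable bound by 
withCredentials):

withCredentials([file(credentialsId: 'secrettest', variable: 'testMasterCred')]) {
    // readFile returns the file contents as a String
    def config = parseJsonToMap(readFile(env.testMasterCred))
    echo "sshUserKey = ${config.sshUserKey}"
}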


Another way to go would be a command line tool to parse JSON, like this one:

https://stedolan.github.io/jq/

Then just run a 'sh' command to get the data out. I haven't used that myself.
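
Something like this might do it (a sketch only, assuming jq is installed on 
the agent):

withCredentials([file(credentialsId: 'secrettest', variable: 'testMasterCred')]) {
    // jq -r prints the raw string value for the key
    def sshUser = sh(script: "jq -r .sshUserKey ${env.testMasterCred}", returnStdout: true).trim()
    echo "sshUserKey = ${sshUser}"
}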


Have fun!

--Bill


On Thursday, 9 March 2017 21:51:20 UTC, jeremy@wonderful.fr wrote:
>
> Hi Bill.
>
> I've tried many things with no luck.
>
> So I've got this JSON credential file whose content looks like this:
>
> {
>   "sshUserKey":"sshuserval",
>   "sshHostKey":"sshhostval"
> }
>
>
> I've successfully managed to open this file in my pipeline:
>
> withCredentials([file(credentialsId: 'secrettest', variable: 
> 'testMasterCred')]) {
> sh "cat ${testMasterCred}";
> }
>
>
> The cat command shows effectively the content of the JSON file.
>
> Then, how would you parse this JSON?
> I've tried readJSON file: $testMasterCred; but this doesn't work, and 
> throws the following message: No such property: $testMasterCred for class: 
> groovy.lang.Binding
>
> I've got the feeling that I'm not very far from the truth.
> There's not much help on the pipeline-utility-steps-plugin readme 
> regarding this.
>
> Would you have an example about how I could get the parsing right?
>
> Thanking you.
>
> Regards.
>
> Le jeudi 9 mars 2017 10:06:44 UTC+1, Bill Dennis a écrit :
>>
>> It can be any format file you like XML, properties, txt whatever you need 
>> for some sort of configuration (except large binary files I guess). 
>>
>> There is a CloudBees article here that should help:
>>
>> https://support.cloudbees.com/hc/en-us/articles/203802500-Injecting-Secrets-into-Jenkins-Build-Jobs
>>
>> The article shows creating these globally but you can create them scoped 
>> on a folder.
>>
>> Then to use in the pipelines I suggest to drop into the pipeline syntax 
>> link on  a pipeline job that drops into the snipper generator in the 
>> Jenkins pipeline UI and go through the 'withCredentials' snippet generator. 
>> It found it best to experiment around a bit to figure it out.
>>
>> --Bill
>>
>>
>> On Thursday, 9 March 2017 08:41:23 UTC, jeremy@wonderful.fr wrote:
>>>
>>> Hi Bill.
>>>
>>> Thanks so much for your reply.
>>>
>>> I like this credential file option. That would mean I can create a file 
>>> with all the environment variables I need for my branches inside (one per 
>>> branch I guess). And if I could scope it inside my project folder even 
>>> better.
>>>
>>> I've tried to google information about how to use credential files, but 
>>> without much success. Would you have an example of how you'd write one?
>>> Is it a key / value format? bash variables declarations? JSON? XML?
>>>
>>> Thank you for your time and your help.
>>>
>>> Regards.
>>>
>>> Jeremy.
>>>
>>> Le mercredi 8 mars 2017 10:05:02 UTC+1, Bill Dennis a écrit :
>>>>
>>>> Just some other things I thought of -
>>>>
>>>> If you use the credentials file feature you can put all those sensitive 
>>>> properties in a properties file stored as 'jenkins credentials'. 
>>>>
>>>> Then pull that props file into your workspace using 'withCredentials' 
>>>> in the pipeline.
>>>>
>>>> Next thing is to grab the pipeline utility steps plugin which has a 
>>>> readProperties step (it is not one of the standard pipe plugins - you will 
>>>> need to add it).
>>>> https://plugins.jenkins.io/pipeline-utility-steps
>>>>
>>>> Then you have the file properties loaded as Java properties and you can 
>>>> use them as before.
>>>>
>>>> I did this move from Freestyle too and there is a lot to learn but it 
>>>> is worth it. Another recommendation is to look at the declarative pipeline 
>>>> not just scripted pipeline. Declarative has post build handling in the 
>>>> pipeline which you may miss from FreeStyle jobs. In scripted pipeline you 
>>>>

Re: What is the freestyle "inject environment variables" equivalent inside blue ocean pipelines?

2017-03-09 Thread Bill Dennis
It can be any format of file you like - XML, properties, txt, whatever you 
need for some sort of configuration (except large binary files, I guess). 

There is a CloudBees article here that should help:
https://support.cloudbees.com/hc/en-us/articles/203802500-Injecting-Secrets-into-Jenkins-Build-Jobs

The article shows creating these globally but you can create them scoped on 
a folder.

Then, to use them in pipelines, I suggest following the pipeline syntax link 
on a pipeline job, which drops into the snippet generator in the Jenkins 
pipeline UI, and going through the 'withCredentials' snippet generator. I 
found it best to experiment around a bit to figure it out.

--Bill


On Thursday, 9 March 2017 08:41:23 UTC, jeremy@wonderful.fr wrote:
>
> Hi Bill.
>
> Thanks so much for your reply.
>
> I like this credential file option. That would mean I can create a file 
> with all the environment variables I need for my branches inside (one per 
> branch I guess). And if I could scope it inside my project folder even 
> better.
>
> I've tried to google information about how to use credential files, but 
> without much success. Would you have an example of how you'd write one?
> Is it a key / value format? bash variables declarations? JSON? XML?
>
> Thank you for your time and your help.
>
> Regards.
>
> Jeremy.
>
> Le mercredi 8 mars 2017 10:05:02 UTC+1, Bill Dennis a écrit :
>>
>> Just some other things I thought of -
>>
>> If you use the credentials file feature you can put all those sensitive 
>> properties in a properties file stored as 'jenkins credentials'. 
>>
>> Then pull that props file into your workspace using 'withCredentials' in 
>> the pipeline.
>>
>> Next thing is to grab the pipeline utility steps plugin which has a 
>> readProperties step (it is not one of the standard pipe plugins - you will 
>> need to add it).
>> https://plugins.jenkins.io/pipeline-utility-steps
>>
>> Then you have the file properties loaded as Java properties and you can 
>> use them as before.
>>
>> I did this move from Freestyle too and there is a lot to learn but it is 
>> worth it. Another recommendation is to look at the declarative pipeline not 
>> just scripted pipeline. Declarative has post build handling in the pipeline 
>> which you may miss from FreeStyle jobs. In scripted pipeline you have to do 
>> a lot of try-catch handling for build errors.
>>
>> Bill
>>
>>
>> On Wednesday, 8 March 2017 08:45:03 UTC, Bill Dennis wrote:
>>>
>>> If you put the pipeline / branch jobs inside a folder, you can scope the 
>>> credentials to just that folder. Pretty sure that is available in Jenkins 
>>> OSS and not just Enterprise - you need the CloudBees Folders plugin. Have a 
>>> look on here, it might have some clues: 
>>> https://support.cloudbees.com/hc/en-us/articles/204264974-How-inject-your-Maven-settings-xml-at-folder-level-with-the-Credentials-plugin
>>>
>>> I am not sure if this helps in your branch scenario. I put all my 
>>> credentials globally then realised I could scope them to the folder level - 
>>> I missed it due to some nuances in the credentials UI.
>>>
>>> Bill
>>>
>>>



System Global variables in pipelines

2017-03-08 Thread Bill Dennis
I have found the same. There is an open jira for this here:
https://issues.jenkins-ci.org/browse/JENKINS-40455

Are you using declarative or scripted pipeline?

In declarative, I have found I can reference env vars configured in Jenkins in 
the environment section to set up job level environment:

environment {
  jobVar = env.SOME_VAR
}

But I haven't checked if these job level environment variables are available in 
a script section that runs with 'agent none'

Most of the job work is done on a node / agent for my pipelines.

Bill



Re: What is the freestyle "inject environment variables" equivalent inside blue ocean pipelines?

2017-03-08 Thread Bill Dennis
Just some other things I thought of -

If you use the credentials file feature you can put all those sensitive 
properties in a properties file stored as 'jenkins credentials'. 

Then pull that props file into your workspace using 'withCredentials' in 
the pipeline.

Next thing is to grab the pipeline utility steps plugin which has a 
readProperties step (it is not one of the standard pipe plugins - you will 
need to add it).
https://plugins.jenkins.io/pipeline-utility-steps

Then you have the file properties loaded as Java properties and you can use 
them as before.
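
Putting those pieces together might look something like this (a sketch - the 
credentials ID and property names are made up):

withCredentials([file(credentialsId: 'build-props', variable: 'PROPS_FILE')]) {
    // readProperties comes from the pipeline-utility-steps plugin
    def props = readProperties file: env.PROPS_FILE
    echo "deploy.target = ${props['deploy.target']}"
}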

I did this move from Freestyle too and there is a lot to learn but it is 
worth it. Another recommendation is to look at the declarative pipeline not 
just scripted pipeline. Declarative has post build handling in the pipeline 
which you may miss from FreeStyle jobs. In scripted pipeline you have to do 
a lot of try-catch handling for build errors.

Bill


On Wednesday, 8 March 2017 08:45:03 UTC, Bill Dennis wrote:
>
> If you put the pipeline / branch jobs inside a folder, you can scope the 
> credentials to just that folder. Pretty sure that is available in Jenkins 
> OSS and not just Enterprise - you need the CloudBees Folders plugin. Have a 
> look on here, it might have some clues: 
> https://support.cloudbees.com/hc/en-us/articles/204264974-How-inject-your-Maven-settings-xml-at-folder-level-with-the-Credentials-plugin
>
> I am not sure if this helps in your branch scenario. I put all my 
> credentials globally then realised I could scope them to the folder level - 
> I missed it due to some nuances in the credentials UI.
>
> Bill
>
>



What is the freestyle "inject environment variables" equivalent inside blue ocean pipelines?

2017-03-08 Thread Bill Dennis
If you put the pipeline / branch jobs inside a folder, you can scope the 
credentials to just that folder. Pretty sure that is available in Jenkins OSS 
and not just Enterprise - you need the CloudBees Folders plugin. Have a look on 
here, it might have some clues: 
https://support.cloudbees.com/hc/en-us/articles/204264974-How-inject-your-Maven-settings-xml-at-folder-level-with-the-Credentials-plugin

I am not sure if this helps in your branch scenario. I put all my credentials 
globally then realised I could scope them to the folder level - I missed it due 
to some nuances in the credentials UI.

Bill



Re: Jenkins2 pipeline poll scm

2017-03-05 Thread Bill Dennis
If your Jenkinsfile is in the same repo, using declarative pipeline you can 
have triggers in the pipeline to do this:

pipeline {
    triggers {
        pollSCM('*/5 * * * *')
    }
    ...
}

It works for me using subversion.

Bill



Re: User input in declarative pipeline

2017-03-02 Thread Bill Dennis
I've seen it used in a 'when' condition - in this example online: 
https://github.com/sta-szek/pojo-tester/blob/master/Jenkinsfile

Here is an extract from that:

when {
    expression {
        boolean publish = false
        if (env.DEPLOYPAGES == "true") {
            return true
        }
        try {
            timeout(time: 1, unit: 'MINUTES') {
                input 'Deploy pages?'
                publish = true
            }
        } catch (final ignore) {
            publish = false
        }
        return publish
    }
}


But it doesn't seem to be usable as a 'first class' step in a steps section 
without dropping into script.

Bill


On Thursday, 2 March 2017 06:34:06 UTC, Bert wrote:
>
> Hello everyone,
>
> One of the CloudBees support articles describes how to do a user input in a 
> pipeline script, as an alternative to the promoted builds plug-in. I'm 
> trying to do that, but with a declarative pipeline definition.
>
> I know that's possible through a script block inside the pipeline 
> definition, but I'd like to prevent that and stay in the declarative model.
>
> Is that possible?
>
> Thanks in advance,
> Bert
>



Re: Is there a maximum number of parallel builds a pipeline job can run?

2017-02-23 Thread Bill Dennis
Hey there -

I think it might be worth posting the pipeline of your orchestration job 
that runs the 12000 builds for anyone to comment in more detail.

From what I understand, if your pipeline is not orchestrating each build in a 
node section, it will use something called a 'flyweight executor' on the 
Jenkins master.

If this is happening, even though you have 100 agents your master may be 
heavily loaded running the pipeline - I guess each build allocates some 
Java objects that have to be GCed.

If I can't find any documentation, I tend to go onto Github to look at the 
source code to understand what the Jenkins pipeline is doing behind the 
scenes: https://github.com/jenkinsci/workflow-cps-plugin.

I ran a test on a system where I had an orchestrator kicking off 1000 
builds every 4 minutes on a schedule. After a few days the Jenkins service 
stopped responding (Java memory issues). One thing I have found I needed to 
pay attention to was file and process limits as documented here:
https://support.cloudbees.com/hc/en-us/articles/204231510-Memory-problem-unable-to-create-new-native-thread-


All the best,
--Bill



On Thursday, 23 February 2017 21:33:42 UTC, Chris Overend wrote:
>
> So not sure if this is a Jenkins limitation or pipeline.
> The jobs never exceeded available resources.
> The garbage collection was stable.
>
> So why did it lock-up?
>
> It did say I used 
>
>- 2950 million active threads
>- 350 threads
>
>



Re: declarative pipeline - gradle build tool not working

2017-02-16 Thread Bill Dennis
For future reference I discovered this issue is fixed by gradle plugin 
v1.26 - this issue: https://issues.jenkins-ci.org/browse/JENKINS-37394

On Thursday, 16 February 2017 02:40:42 UTC, Bill Dennis wrote:
>
> Hi -
>
> I'm looking to use gradle to run tests in declarative pipeline jobs.
>
> Looking at docs here under tools I should be able to spec a gradle tool in 
> the tools section:
> https://jenkins.io/doc/book/pipeline/syntax/#declarative-steps
>
> So I created a job like this:
>
> pipeline {
> agent any
> tools {
> gradle "GRADLE_LATEST"
> }
> stages {
> stage('Gradle') {
> steps {
> sh 'gradle --version'
> }
> }
> }
> }
>
> But I am getting an error that gradle is not a valid tool type:
>
> org.codehaus.groovy.control.MultipleCompilationErrorsException: startup 
> failed:
> WorkflowScript: 4: Invalid tool type "gradle". Valid tool types: [ant, 
> hudson.tasks.Ant$AntInstallation, 
> com.cloudbees.jenkins.plugins.customtools.CustomTool, 
> org.jenkinsci.plugins.docker.commons.tools.DockerTool, git, 
> hudson.plugins.git.GitTool, hudson.plugins.gradle.GradleInstallation, 
> hudson.plugins.groovy.GroovyInstallation, jdk, hudson.model.JDK, jgit, 
> org.jenkinsci.plugins.gitclient.JGitTool, jgitapache, 
> org.jenkinsci.plugins.gitclient.JGitApacheTool, maven, 
> hudson.tasks.Maven$MavenInstallation, 
> hudson.plugins.mercurial.MercurialInstallation] @ line 4, column 9.
>gradle "GRADLE_LATEST"
>^
>
> 1 error
>
>
> This is on Jenkins Enterprise 2.32.1.1 with version 1.0 of the Pipeline Model 
> plugins installed and with the gradle tool plugin installed and configured.
>
>
> Has anyone got gradle to work with declarative pipeline?
>
>
> I think I had it working with scripted pipeline syntax but I prefer this tools 
> section.
>
>
> Thanks for any help,
>
> Bill
>
>



Re: Can't output clear text from secret text

2017-02-16 Thread Bill Dennis
Hi -

It is designed like that. Jenkins masks the credential details in the logs by 
design, as you might not want them visible there. If you echo the credential 
details to a file kept in the workspace, you should see the actual values in 
the file. That is how I check it for debugging.
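
In a pipeline that check might look something like this (a sketch for 
debugging only - the credentials ID is made up, and remember to delete the 
file afterwards):

withCredentials([string(credentialsId: 'my-secret-text', variable: 'username')]) {
    // masked in the console log, but written in clear text to the file
    sh 'echo "My secret is $username" > debug-secret.txt'
}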

Cheers,
Bill

On Thursday, 16 February 2017 15:45:22 UTC, John Marks wrote:
>
> I just wanted to test secret text, but the job always outputs "" when 
> I try to echo the bound variable from a shell script job.
>
>
> 
>
>
> Shell script: 
>
> set +x 
> echo "My secret is $username" 
>
> What I see in the console: 
>
> Started by user GMAS 
> [EnvInject] - Loading node environment variables. 
> Building in workspace /u02/app/jenkins/.jenkins/workspace/c-test 
> [c-test] $ /bin/sh -xe 
> /u02/app/jenkins/apache-tomcat-8.0.26/temp/hudson5680487019107036432.sh 
> + set +x 
> My secret is  
> Finished: SUCCESS 
>
> Probably something really obvious that I'm missing. Can someone help?
>



Re: declarative pipeline - gradle build tool not working

2017-02-16 Thread Bill Dennis
Yes, I did all that gradle configuration. 'GRADLE_LATEST' is the label we used 
for our gradle installation. We name it that way so that every time we update 
to the latest gradle, we don't need to change all the jobs that we want on the 
latest gradle version. We also use gradle version-specific labels for jobs 
that are sensitive to the version. It shouldn't be a problem here. 

Thanks, 
Bill 



declarative pipeline - gradle build tool not working

2017-02-15 Thread Bill Dennis
Hi -

I'm looking to use gradle to run tests in declarative pipeline jobs.

Looking at docs here under tools I should be able to spec a gradle tool in 
the tools section:
https://jenkins.io/doc/book/pipeline/syntax/#declarative-steps

So I created a job like this:

pipeline {
    agent any
    tools {
        gradle "GRADLE_LATEST"
    }
    stages {
        stage('Gradle') {
            steps {
                sh 'gradle --version'
            }
        }
    }
}

But I am getting an error that gradle is not a valid tool type:

org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 4: Invalid tool type "gradle". Valid tool types: [ant, 
hudson.tasks.Ant$AntInstallation, 
com.cloudbees.jenkins.plugins.customtools.CustomTool, 
org.jenkinsci.plugins.docker.commons.tools.DockerTool, git, 
hudson.plugins.git.GitTool, hudson.plugins.gradle.GradleInstallation, 
hudson.plugins.groovy.GroovyInstallation, jdk, hudson.model.JDK, jgit, 
org.jenkinsci.plugins.gitclient.JGitTool, jgitapache, 
org.jenkinsci.plugins.gitclient.JGitApacheTool, maven, 
hudson.tasks.Maven$MavenInstallation, 
hudson.plugins.mercurial.MercurialInstallation] @ line 4, column 9.
   gradle "GRADLE_LATEST"
   ^

1 error


This is on Jenkins Enterprise 2.32.1.1 with version 1.0 of the Pipeline Model 
plugins installed and with the gradle tool plugin installed and configured.


Has anyone got gradle to work with declarative pipeline?


I think I had it working with scripted pipeline syntax but I prefer this tools 
section.


Thanks for any help,

Bill
