Am I doing this right? (Multi-service pipeline and deploy reuse)

2016-09-23 Thread Alex Kessinger
Let me know if this sounds right.

I work at a company that has N services. Each service has its own
deployable artifact. Each artifact creation process is unique, but the
pipeline and deployment processes are similar. As we grow we are looking
to create more services. I would like to manage this complexity in a couple
of ways.

1. We use Jenkins.
2. I'd like to use Jenkinsfiles to manage the jobs.
3. The artifact process should stay unique, with some shared code loaded via
fileLoader.fromGit (see the sketch after this list).
4. I plan on using a shared deployment job configured via params.
5. I plan on using a shared pipeline job configured via params.
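For point 3, the shared code would be pulled in with the Pipeline Remote
Loader plugin's fileLoader.fromGit step. A minimal sketch; the repo URL,
script path, and helper name are placeholders, not our real layout:

```groovy
// Load shared helpers from a central repo (Pipeline Remote Loader plugin).
// Script path and repo URL are placeholders for illustration.
def helpers = fileLoader.fromGit(
    'jenkins/helpers',                          // helpers.groovy, without the extension
    'git@github.com:example-org/ci-shared.git', // shared-code repository
    'master',                                   // branch
    null,                                       // credentialsId
    '')                                         // node label for the loading checkout

node {
  checkout scm
  BUILD_TAG = helpers.getBuildTag()
}
```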

My main reasoning for reusing the deploy and pipeline code is that it makes
it easier to roll out changes to all pipelines, because the code is shared.

Internally we currently manage our deployment pipeline via Slack, and I'd
like to continue doing that. I'd also like to be able to use Slack to do
things like roll environments back to specific versions. So, for instance, I
need to be able to ask questions like "what was the last version of
application x that was rolled to production?" so I can roll back to it.
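To make that last question answerable, the shared deploy job could record
every rollout on the build itself and announce it in Slack. A minimal
sketch, assuming the Jenkins Slack plugin; the channel name is a
placeholder:

```groovy
// Run inside the shared deploy job after a successful rollout.
// The build description makes "what was last rolled to production?"
// answerable straight from the deploy job's build history.
currentBuild.description = "${APPLICATION} ${BUILD_TAG} -> ${ENVIRONMENT}"

// slackSend comes from the Jenkins Slack plugin; the channel is a placeholder.
slackSend(channel: '#deploys',
          message: "Rolled ${APPLICATION} ${BUILD_TAG} to ${ENVIRONMENT}")
```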

Am I in the right vicinity? 

I have included example Jenkinsfiles below.

Here is a generic Jenkinsfile I have for building an artifact:

```groovy
node {
  stage('Checkout') {
    checkout scm
    BUILD_TAG = getBuildTag()
  }

  // Mostly static analysis tools
  stage('Preflight') {
    withEnv(["WORKSPACE=${WORKSPACE}", "BUILD_TAG=${BUILD_TAG}", "TARBALLS=1"]) {
      sh "./scripts/preflight.sh"
    }
  }

  stage('Unit Test') {
    withEnv(["WORKSPACE=${WORKSPACE}", "BUILD_TAG=${BUILD_TAG}", "TARBALLS=1"]) {
      sh "./scripts/unit_test.sh"
    }
  }

  stage('Build') {
    withEnv(["WORKSPACE=${WORKSPACE}", "BUILD_TAG=${BUILD_TAG}", "TARBALLS=1"]) {
      sh "./scripts/build.sh"
    }

    step([
      $class: 'S3BucketPublisher',
      profileName: 'Artifact',
      entries: [
        [sourceFile: 'dist/*', bucket: 'bepress-build-artifacts', selectedRegion: 'us-west-1', managedArtifacts: true],
      ]
    ])
  }

  stage('Start Pipeline') {
    build([
      job: 'org/pipeline/master',
      wait: false,
      parameters: [
        [$class: 'StringParameterValue', name: 'BUILD_TAG', value: BUILD_TAG],
        [$class: 'StringParameterValue', name: 'PLAYBOOK', value: 'playbook.yml'],
        [$class: 'StringParameterValue', name: 'APPLICATION', value: 'Readable Application name'],
      ]
    ])
  }
}
```
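getBuildTag() lives in the shared helper code and isn't shown in this
message. For the deploy job below to parse the tag correctly, it would need
to return something shaped like org-repo-branch-&lt;build number&gt;-&lt;suffix&gt;; a
hypothetical sketch, not the real implementation:

```groovy
// Hypothetical helper, shaped to match the deploy job's parsing below
// (tokenize on '-': the second-to-last part is the build number, and the
// leading parts rejoin with '/' into the project name).
def getBuildTag() {
  def shortSha = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
  return "${env.JOB_NAME.replaceAll('/', '-')}-${env.BUILD_NUMBER}-${shortSha}"
}

return this
```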

Then I have a deployment job like this:

```groovy

properties([
  [
    $class: 'BuildDiscarderProperty',
    strategy: [$class: 'LogRotator', numToKeepStr: '1']
  ], [
    $class: 'ParametersDefinitionProperty',
    parameterDefinitions: [[
      $class: 'StringParameterDefinition',
      description: 'Artifact Build Tag',
      name: 'BUILD_TAG'
    ], [
      $class: 'StringParameterDefinition',
      description: 'Playbook to use to roll out artifact',
      name: 'PLAYBOOK'
    ], [
      $class: 'StringParameterDefinition',
      description: 'Which environment to roll to',
      name: 'ENVIRONMENT'
    ], [
      $class: 'StringParameterDefinition',
      description: 'Which application we are rolling for',
      name: 'APPLICATION'
    ]]
  ]
])

node {
  // BUILD_TAG looks like org-repo-branch-<build number>-<suffix>
  def build_tag_parts = BUILD_TAG.tokenize('-')
  def build_num = build_tag_parts[-2]
  def project_name_parts = build_tag_parts[0..-3]
  def project_name = project_name_parts.join('/')

  sh "rm -fR dist"
  sh "mkdir -p dist"
  step([
    $class: 'S3CopyArtifact',
    buildSelector: [
      $class: 'SpecificBuildSelector',
      buildNumber: build_num
    ],
    excludeFilter: '',
    filter: '*',
    flatten: false,
    optional: false,
    projectName: project_name,
    target: './dist/'
  ])

  // Perform deploy with ansible playbook
}

```
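The elided Ansible step would be a plain ansible-playbook call
parameterized by the job's inputs; a sketch, where the inventory layout and
extra-var names are assumptions:

```groovy
// Sketch of the elided step; inventory path and variable names are assumed.
sh """
  ansible-playbook ${PLAYBOOK} \
    -i inventories/${ENVIRONMENT} \
    -e build_tag=${BUILD_TAG} \
    -e artifact_dir=${WORKSPACE}/dist
"""
```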


Finally, I have a pipeline script that manages an artifact's delivery to
production through multiple manual gates.

```groovy
#!groovy

properties([
  [
    $class: 'BuildDiscarderProperty',
    strategy: [$class: 'LogRotator', numToKeepStr: '10']
  ], [
    $class: 'ParametersDefinitionProperty',
    parameterDefinitions: [[
      $class: 'StringParameterDefinition',
      description: 'Artifact Build Tag',
      name: 'BUILD_TAG'
    ], [
      $class: 'StringParameterDefinition',
      description: 'Playbook to use to roll out artifact',
      name: 'PLAYBOOK'
    ], [
      // APPLICATION and DOCS are referenced below, so they need
      // definitions here as well
      $class: 'StringParameterDefinition',
      description: 'Which application we are rolling for',
      name: 'APPLICATION'
    ], [
      $class: 'StringParameterDefinition',
      description: 'Build and publish docs? ("true"/"false")',
      defaultValue: 'false',
      name: 'DOCS'
    ]]
  ]
])

// This stage is automatic, no approval required
stage('Integration Deployment') {
  build([
    job: 'org/deploy/master',
    parameters: [
      [$class: 'StringParameterValue', name: 'BUILD_TAG', value: BUILD_TAG],
      [$class: 'StringParameterValue', name: 'PLAYBOOK', value: PLAYBOOK],
      [$class: 'StringParameterValue', name: 'ENVIRONMENT', value: 'integration'],
      [$class: 'StringParameterValue', name: 'APPLICATION', value: APPLICATION],
      [$class: 'BooleanParameterValue', name: 'DOCS', value: DOCS.toBoolean()]
    ]
  ])
}

stage('Staging Deployment') {
  // Send a notification to team
```
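The script is cut off above; a hedged sketch of how the staging gate might
continue, reusing the same shared deploy job behind a manual input
approval (the Slack channel, timeout, and stage body are assumptions, not
from the original):

```groovy
// Hedged sketch only -- channel name, timeout, and parameters are assumed.
stage('Staging Deployment') {
  // Notify the team that a build is waiting on approval (Slack plugin).
  slackSend(channel: '#deploys',
            message: "${APPLICATION} ${BUILD_TAG} is ready for staging.")

  // Manual gate: pause until a human approves, or time out after a day.
  timeout(time: 1, unit: 'DAYS') {
    input "Roll ${BUILD_TAG} to staging?"
  }

  build([
    job: 'org/deploy/master',
    parameters: [
      [$class: 'StringParameterValue', name: 'BUILD_TAG', value: BUILD_TAG],
      [$class: 'StringParameterValue', name: 'PLAYBOOK', value: PLAYBOOK],
      [$class: 'StringParameterValue', name: 'ENVIRONMENT', value: 'staging'],
      [$class: 'StringParameterValue', name: 'APPLICATION', value: APPLICATION]
    ]
  ])
}
```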

Re: Multiple pipelines in Jenkinsfile

2016-07-21 Thread Alex Kessinger
I've tried the seed Job DSL method previously. Some automation was better
than no automation, but I think the Jenkinsfile-in-repo approach is even
better. If I make a change to the Jenkinsfile, that change can be isolated
in each environment/branch until it has been promoted to the next step.
With one master seed job it's harder for me to control promotion between
environments.

One other thing to note: Job DSL and Pipeline are both Groovy, but they
are not the same DSL. That may not matter in this case because you are
orchestrating Pipeline jobs with Job DSL.
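For reference, this is roughly what the seed-job side looks like: a Job DSL
script defines a Pipeline job and points it at a Pipeline script path
inside the repo. The repo URL, job name, and scriptPath below are
illustrative placeholders:

```groovy
// Job DSL (seed job) dialect, not Pipeline. All names and paths are
// placeholders for illustration.
pipelineJob('myservice/rollback') {
  definition {
    cpsScm {
      scm {
        git {
          remote { url('git@github.com:example-org/myservice.git') }
          branch('master')
        }
      }
      // Relative path of the Pipeline script within the repository
      scriptPath('jenkins/Jenkinsfile.rollback')
    }
  }
}
```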

On Thu, Jul 21, 2016 at 10:17 AM, Wayne Warren  wrote:

>
> Sorry for chiming in late here, but I have recently been evaluating
> Jenkins Pipeline Plugin for use at my workplace and have considered the
> very problem you are describing in this thread--what if a given source repo
> has multiple Pipeline groovy scripts it wants to use for different purposes?
>
> The approach I have come up with actually looks very similar to what I
> found described in a relatively new series of blog posts:
>
>
> http://marcesher.com/2016/06/21/jenkins-as-code-registering-jobs-for-automatic-seed-job-creation/
>
> This blog post describes an approach that leans heavily on the use of the
> Job DSL Plugin to create jobs by using a "mother seed" DSL job that will
> create new DSL seed jobs for each registered SCM repo (the blog author
> describes the use of git and I use git myself but it's easy for me to
> imagine an approach using some alternate SCM).
>
> This approach allows individual SCM repositories to contain a Job DSL
> script that in turn defines additional jobs specific to that repository.
> The jobs defined in this way could be Pipeline jobs. When defining these
> Pipeline jobs in the Job DSL script you can specify the relative path in
> the repository that contains the Pipeline script you want to use.
>
> As far as I can tell from the description of your situation this should
> fit your needs perfectly. I recommend starting at the earliest blog post in
> that series for full context on the approach the author is describing:
> http://marcesher.com/2016/06/08/jenkins-as-code-creating-jenkins-jobs-with-text-not-clicks/
>
> The blog series appears to still be a work in progress as the author has
> not yet reached the point where he describes the interaction between Job
> DSL and Pipeline but it seems to me like this should be obvious. Job DSL
> defines the Pipeline jobs. Pipeline defines the behavior of those jobs.
>
> Good luck!
>
> On Tuesday, July 12, 2016 at 11:33:15 AM UTC-7, Bartłomiej Sacharski wrote:
>>
>> On Tuesday, July 12, 2016 at 8:16:27 PM UTC+2, Mike Rooney wrote:
>>>
>>> This need makes a lot of sense to us, where we have a couple related
>>> sub-projects (as sub directories) in a single repository. It makes sense
>>> that they each have their own pipeline jobs and can run on different
>>> schedules. I've also seen cases similar to Alex's (hi Alex!) where there
>>> are different tasks you want to do with a single repo that don't make sense
>>> as one pipeline job that runs together (building/testing versus a nightly
>>> cron-type task that runs in the repo).
>>>
>>> It is reasonable that a Jenkinsfile corresponds to a single Pipeline
>>> job, because these are often associated with and run via a Pipeline job
>>> which isn't a logical "parent" of these seed jobs. However, a great place
>>> for this enhancement would be the Github Org / Bitbucket plugins that scan
>>> repositories for Jenkinsfiles and are already in the place of creating
>>> multiple Pipeline jobs.
>>>
>>
>>> My proposal would be: add a configuration option for the Github and
>>> Bitbucket plugins which scan organizations for Jenkinsfiles. So, "Project
>>> Recognizers -> Pipeline Jenkinsfile" would get a box for this which
>>> defaults to "Jenkinsfile". Some logical configuration examples might be,
>>> "Jenkinsfiles/*", "**/Jenkinsfile", "Jenkinsfile-*". Then the
>>> Github/Bitbucket plugins can be pointed at an org, or just one repository,
>>> and multiple Jenkinsfiles can exist which define different Pipeline jobs.
>>>
>>> Bartłomiej and Alex, would something like this satisfy your use cases as
>>> well?
>>>
>>>
>>> - Michael
>>>
>>> On Sunday, May 29, 2016 at 12:47:40 PM UTC-5, Bartłomiej Sacharski wrote:
>>>> I'm really hyped about the Jenkinsfiles - they make it much much easier
>>>> to document and preserve project configuration.
>>>> However, all the examples that I've seen seem to use single pipeline.
>>>> I've tried to define different stages in separate node blocks, however
>>>> they still were seen as a single pipeline.
>>>>
>>>> Is it possible to define multiple pipelines in a single Jenkinsfile? Or
>>>> maybe there's undocumented functionality for .jenkinsfile extension to
>>>> handle such cases?
>>>
>> IMO just having an option to specify the name of Jenkinsfile would be
>> enough - and I would rather try to implement this as a 

Re: Multiple pipelines in Jenkinsfile

2016-07-20 Thread Alex Kessinger
Mike, I'd just like to chime in and say that makes a lot of sense to me. As
others have noted, there can be times when you want multiple pipelines
within a repo. My own specific use case is that I'd like to be able to
trigger a rollback pipeline.

On Tuesday, July 12, 2016 at 12:16:27 PM UTC-6, Mike Rooney wrote:
> [snip - Mike's proposal and Bartłomiej's original question, quoted in full
> above]
