I try to make the calls in my top-level Jenkinsfile atomic and complete, 
i.e. each one performs a single function. By using long, descriptive names 
I can avoid the need for lots of comments. It also makes building new 
pipelines easy and encourages reuse across files, stages and steps. If I 
see several sh calls in a row in a Jenkinsfile, I immediately look at what 
they are trying to accomplish; if they will always be executed together, I 
move them into a script or function. I do make my scripts slightly wordy: 
I find that 90% of the time CI problems are transient, and by the time you 
enable debugging the problem can be gone. A good logging convention is a 
must so that you can run grep on a log file and get a high-level summary.
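
As an illustration of what I mean (the stage names and library functions 
below, like buildDebianPackage or runSmokeTests, are made up for the 
example, not a real library), a top-level Jenkinsfile in this style can be 
little more than:

    // Each stage is one descriptive call; the sh invocations, retries and
    // logging live behind these functions in the devops library file.
    pipeline {
        agent any
        stages {
            stage('Build')   { steps { buildDebianPackage() } }
            stage('Test')    { steps { runSmokeTests() } }
            stage('Publish') { steps { publishToArtifactory() } }
        }
    }

with each function logging through a common prefix (e.g. echo "[ci] ..."), 
so that grep "\[ci\]" on the console log gives the high-level summary.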

For each function, I look at whether it is likely to have to change when 
the product code changes versus when the devops code or the infrastructure 
changes. If it is coupled to the product code (e.g. scripts, makefiles, 
dockerfiles, etc.), then I create a shell script and store it in the CI 
folder with the other product code. Everything else goes into scripts in 
the devops folder or groovy functions in the devops library file. The idea 
is that developers are responsible for maintaining the CI folder and I own 
the devops folder. E.g. if the product has a new dependency on a debian 
package, it is up to dev to add it to the dockerfile, not me. But if I move 
my debian mirror into artifactory, that is on me.
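
In the Jenkinsfile that split can look something like this (the paths and 
script names are invented examples, not a prescription):

    stage('Build image') {
        steps {
            // coupled to the product (dockerfile, makefile): lives in the
            // CI folder next to the source, maintained by the developers
            sh './ci/build_image.sh'
        }
    }
    stage('Publish') {
        steps {
            // coupled to the infrastructure (artifactory, mirrors): lives
            // with the devops scripts, maintained by me
            sh './devops/publish_artifacts.sh'
        }
    }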

Bottom line: make your top-level Jenkinsfile readable and isolated from 
the nitty-gritty details.

my $0.02 


On Tuesday, August 11, 2020 at 12:01:09 PM UTC-4, Jérôme Godbout wrote:
>
> Hi, 
> this is my point of view only, but using a single script (that you put 
> into your repo) makes it easier to perform the build; I put my pipeline 
> script into a separate folder. But you need to make sure your script is 
> verbose enough to show where it failed if anything goes wrong; a long, 
> silent script that produces no output will be hard to debug when it has 
> an issue. 
>
> Using multiple sh steps makes it easier to see where the build fails, 
> since Jenkins will display every sh invocation. 
>
> You can also put a function that runs some sh commands into a Groovy 
> file, load that file, and execute a command from it. This leaves more 
> flexibility on the Jenkins side (it decouples Jenkins from the task to 
> do), but you can still invoke multiple sh commands inside that Groovy 
> script. So your repo can contain a Groovy entry point that the pipeline 
> will load and invoke; that script can call sh, shell scripts and/or 
> Groovy scripts as it pleases. 
>
> pipeline script --> repos groovy script --> calls (sh, groovy, shell 
> scripts...) 
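>
> A minimal sketch of that pattern (ci/entry.groovy and the function names 
> are just examples, nothing Jenkins imposes):
>
>     // In the Jenkinsfile (inside a node, so the workspace is available):
>     // load the repo's Groovy entry point and call it.
>     def ci = load 'ci/entry.groovy'
>     ci.buildAndTest()
>
>     // In ci/entry.groovy: the repo decides what "build and test" means.
>     def buildAndTest() {
>         sh 'make build'
>         sh './ci/run_tests.sh'
>     }
>     return this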
>
> That avoids a high-maintenance Jenkins pipeline; the repo is more 
> self-aware of its own needs and can change more easily between versions. 
>
> I, for one, use 3+ repos: 
> 1- The source code repo 
> 2- The pipeline and build script repo (this can evolve apart from the 
> source, so my build method can change and be applied to older source 
> versions; I use a branch/tag when backward compatibility is broken or a 
> specific version is needed for a particular source branch) 
> 3- My common Groovy and script tooling shared between my repos 
> 4- (optional) my unit tests live apart and can be run on multiple 
> versions 
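>
> One way to wire those repos together in a pipeline (the repo names and 
> URLs here are made up) is to check each one out into its own folder at a 
> pinned branch/tag:
>
>     dir('source')  { checkout scm }
>     dir('buildscripts') {
>         git url: 'https://example.com/buildscripts.git', branch: 'v2'
>     }
>     dir('tooling') {
>         git url: 'https://example.com/ci-tooling.git', branch: 'main'
>     }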
>
> This works well. I wish the shared library was more flexible and that I 
> could more easily do file manipulation in Groovy, but I have built some 
> platform-agnostic functions for most file/folder/path operations that I 
> reuse between projects. This keeps my pipeline scripts free of thousands 
> of if (isUnix()) checks and the like. My pipelines look the same on 
> macOS/Windows/Linux. 
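>
> For example, a wrapper along these lines (just a sketch of the idea, not 
> my actual library code):
>
>     // Hypothetical helper from the shared tooling repo: it picks sh or
>     // bat itself, so the pipeline never has to test isUnix().
>     def run(String command) {
>         if (isUnix()) {
>             sh command
>         } else {
>             bat command
>         }
>     }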
>
> Hope this helps you decide on or plan your build architecture. 
>
> Jerome 
>
> -----Original Message----- 
> From: jenkins...@googlegroups.com <jenkins...@googlegroups.com> On 
> Behalf Of Sébastien Hinderer 
> Sent: August 11, 2020 10:33 AM 
> To: jenkins...@googlegroups.com 
> Subject: Pipeline design question 
>
> Dear all, 
>
> When a pipeline needs to run a sequence of several shell commands, I see 
> several ways of doing that. 
>
> 1. Several "sh" invocations. 
>
> 2. One "sh" invocation that contains all the commands. 
>
> 3. Having each "sh" invocation in its own step. 
>
> 4. Putting all the commands in a script and invoking that script through 
> the sh step. 
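>
> For concreteness, I mean things like (the commands and the script path 
> are only placeholders):
>
>     // 1. Several "sh" invocations
>     sh 'make configure'
>     sh 'make build'
>
>     // 2. One "sh" invocation that contains all the commands
>     sh '''
>         make configure
>         make build
>     '''
>
>     // 4. Putting all the commands in a script invoked through sh
>     sh './ci/build.sh'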
>
> Would someone be able to explain the pros and cons of these different 
> approaches and advise when to use which? Or is there perhaps a reference 
> I should read? 
>
> Thanks, 
>
> Sébastien. 
>
