Re: HTML Publisher pipeline syntax

2020-08-26 Thread Marco Sacchetto
According to the (admittedly non-optimal) documentation, where the pipeline 
step options match those in the UI for freestyle jobs, `includes` does not 
accept multiple entries.
Yours are not needed anyway, both because "pick up all the files in the 
folder" is the default setting, and because your two Ant patterns are 
equivalent: `**` in an Ant pattern already covers multiple directory levels.
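For what it's worth, the error in your log comes from the wildcards: `reportDir` has to be a plain, workspace-relative directory. A rough sketch of the call without globs (the package folder names here are placeholders):

```groovy
publishHTML(
    target: [allowMissing         : false,
             alwaysLinkToLastBuild: true,
             keepAll              : true,
             // plain directory, no ** globs
             reportDir            : 'coverage-results',
             // placeholder package folders; one page per entry
             reportFiles          : 'pkg_a/index.html,pkg_b/index.html',
             reportName           : 'Jest Coverage']
)
```

As far as I remember, a comma-separated `reportFiles` list is what produces one tab per page.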
That said, at least for me, trying to incorporate multiple test reports into 
the same entry in Jenkins never ended well; I constantly ran into bad 
formatting or other limitations. It may just have been my fault, since I 
configured it some time ago, but I do remember quite some pain with the tabs 
that should have contained the different index pages.
If you wish to continue with your idea, I'd rather recommend you generate a 
main index page linking to all of the `index.html` files in the subfolders; 
that should keep things much simpler.
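Something along these lines (a sketch: `pkg_a`/`pkg_b` stand in for whatever package folders your pipeline copies in, and here they are created just to make the example self-contained):

```shell
# Create two placeholder package report folders, standing in for the
# per-package coverage reports the pipeline copies into coverage-results.
mkdir -p coverage-results/pkg_a coverage-results/pkg_b
echo '<p>report</p>' > coverage-results/pkg_a/index.html
echo '<p>report</p>' > coverage-results/pkg_b/index.html

# Generate a top-level index.html linking to each package's own index.html,
# so publishHTML only has to publish this single entry point.
{
  echo '<html><body><h1>Jest Coverage</h1><ul>'
  for d in coverage-results/*/; do
    name=$(basename "$d")
    [ -f "coverage-results/$name/index.html" ] || continue
    echo "<li><a href=\"$name/index.html\">$name</a></li>"
  done
  echo '</ul></body></html>'
} > coverage-results/index.html
```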

On Monday, 24 August 2020 at 23:43:12 UTC+2, steven@gmail.com wrote:

> Hi,
>
> I'm working with a pipeline that builds a bunch of npm packages in a 
> monorepo. Our commands are run on each package individually, so we have 
> individual test/lint results in each package. I'm trying to pull all of the 
> results into a single html report. To make this easier, I'm copying the 
> reports into a central folder, and trying to build the report from there:
> pipelineContext.echo "Publishing jest results"
> pipelineContext.sh(script: 'mkdir coverage-results')
> List reportTitles = []
> changeSet.each {
>     String coverageDir = it.packageJson.name.replace('/', '_')
>     pipelineContext.sh(script: "mkdir -p coverage-results/${coverageDir}")
>     pipelineContext.sh(script: "cp -R ${it.path}/coverage/lcov-report/* coverage-results/${coverageDir}")
>     reportTitles << "'${it.packageJson.name}'"
> }
> pipelineContext.publishHTML(
>     target: [allowMissing         : false,
>              alwaysLinkToLastBuild: true,
>              keepAll              : true,
>              reportDir            : 'coverage-results/**/*',
>              reportFiles          : 'index.html',
>              includes             : '**/*,**/**/*',
>              reportName           : 'Jest Coverage',
>              reportTitles         : reportTitles.join(',')]
> )
> The issue I'm having is that for some reason, it's not seeing the folder, 
> I guess?
> ERROR: Specified HTML directory 
> '/home/rh/workspace/ckages_task_test-jest-publishing/coverage-results/**/*'
>  
> does not exist.
>
> I went into our build node and checked, and that directory does exist, so 
> I'm unsure where my issue lies.
>
> Thanks,
> Steve
>

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/3334ebe9-88aa-4dc0-94f8-0fdfad316801n%40googlegroups.com.


Re: parallel jobs not starting with docker workflow

2020-07-24 Thread Marco Sacchetto
I'm replying to myself, in case this helps anybody else. This seems to be 
triggered by having a (valid) global variable defined for agents in 
Jenkins' main configuration. Once removed, things started working as 
expected.

On Friday, 24 July 2020 at 18:32:44 UTC+1, Marco Sacchetto wrote:

> Hi,
>
> I've been trying (with no luck so far) to define a parallel declarative 
> pipeline such as this:
> ```
> pipeline {
>     agent any
>     stages {
>         stage('Configure projects') {
>             parallel {
>                 stage('Configure 1') {
>                     agent {
>                         docker {
>                             image ''
>                             alwaysPull true
>                         }
>                     }
>                     steps { ... }
>                 }
>                 stage('Configure 2') {
>                     ...
>                 }
>             }
>         }
>         ...
>     }
> }
> ```
> I'm running into an issue that is very similar to 
> https://issues.jenkins-ci.org/browse/JENKINS-46831 , with the big 
> question mark being that the bug there is from 2017 and our plugins are 
> up to date.
> The build gets as far as the parallel closure and spins up the parallel
> jobs on other available agents. At that point it gets stuck. In Blue Ocean
> it seems stuck at the docker pull phase, but it doesn't look like it is 
> actually pulling anything; no logs appear in the logs window.
> After around 5 minutes the build fails, and both stages print out
>
> process apparently never started in 
> /data1/jenkins_workingfolder/workspace/***/***@tmp/durable-9585b504 
> (running Jenkins temporarily with 
> -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true 
> might make the problem clearer)
>
> Unluckily this is a managed service and I'm unable to run Jenkins with
> such an option activated. Has anybody got an idea of what might be
> happening here?
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/91b0f91d-5c62-4fb8-967f-2bf44771bd44n%40googlegroups.com.


parallel jobs not starting with docker workflow

2020-07-24 Thread Marco Sacchetto
Hi,

I've been trying (with no luck so far) to define a parallel declarative 
pipeline such as this:
```
pipeline {
    agent any
    stages {
        stage('Configure projects') {
            parallel {
                stage('Configure 1') {
                    agent {
                        docker {
                            image ''
                            alwaysPull true
                        }
                    }
                    steps { ... }
                }
                stage('Configure 2') {
                    ...
                }
            }
        }
        ...
    }
}
```
I'm running into an issue that is very similar to 
https://issues.jenkins-ci.org/browse/JENKINS-46831 , with the big question 
mark being that the bug there is from 2017 and our plugins are up to date.
The build gets as far as the parallel closure and spins up the parallel
jobs on other available agents. At that point it gets stuck. In Blue Ocean
it seems stuck at the docker pull phase, but it doesn't look like it is 
actually pulling anything; no logs appear in the logs window.
After around 5 minutes the build fails, and both stages print out

process apparently never started in 
/data1/jenkins_workingfolder/workspace/***/***@tmp/durable-9585b504 
(running Jenkins temporarily with 
-Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true 
might make the problem clearer)

Unluckily this is a managed service and I'm unable to run Jenkins with
such an option activated. Has anybody got an idea of what might be
happening here?

To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/8ace20d0-09b4-4125-9c69-0d201edf945bn%40googlegroups.com.


Re: jenkins agent definition loaded from a jenkins library

2020-06-15 Thread Marco Sacchetto
Thanks Gianluca, that actually works. I missed the bit of information 
regarding the annotation that needs to go on variables inside the library 
to make them global!
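In case it helps anyone who lands here later, this is roughly the shape it took for me (names are placeholders, and your library layout may differ):

```groovy
// vars/ciDefaults.groovy inside the shared library (hypothetical name):
// the @Field annotation is what makes the variable available as a
// property of the global.
import groovy.transform.Field

@Field
String buildImage = 'my-registry/my-image:latest'  // placeholder image

// Jenkinsfile:
// @Library('my-shared-lib') _
// pipeline {
//     agent { docker { image ciDefaults.buildImage } }
//     ...
// }
```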

On Monday, 15 June 2020 at 13:28:33 UTC+1, Gianluca wrote:
>
> Hi,
> I'm not entirely sure what you are looking for, but I want to tell you that 
> you can use variables as the image name, or at least you can for agent 
> labels:
>
> agent { label "${builder.label}" }
>
> We use the above in our pipelines to have different agents depending on 
> the branch and PR number and other environment factors.
>
> Cheers,
> Gianluca.
>
>
> On Monday, 15 June 2020 13:20:05 UTC+1, Marco Sacchetto wrote:
>>
>> Hi,
>>
>> I'm currently using a Jenkins declarative pipeline (note: if my issue 
>> can be solved with a scripted pipeline, I can switch).
>> The builds run inside an ephemeral Docker agent spun up by the 
>> pipeline using a syntax similar to
>>
>> agent { docker { image "my-image" } }
>>
>> The issue with that is that I'm going to have a big number of pipelines 
>> defined as Jenkinsfiles which are all fairly similar
>> to each other, and all of them will be running on the same Docker 
>> agent.
>> As such, I'd love to be able to parameterise the image name by defining 
>> it as a variable or a function inside a Jenkins
>> library, so that if the image needs to be changed I don't have to commit 
>> back to all of the existing Jenkinsfiles.
>>
>> I saw it's possible to define the whole declarative pipeline inside a 
>> function, but there seems to be no option to instead set 
>> just a String variable to be used when setting up the agent, and no 
>> possibility to define anything to run outside of the
>> normal pipeline steps - is that correct?
>>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/e2459c41-d905-4d1f-8f15-bcc76951a5eco%40googlegroups.com.


jenkins agent definition loaded from a jenkins library

2020-06-15 Thread Marco Sacchetto
Hi,

I'm currently using a Jenkins declarative pipeline (note: if my issue can 
be solved with a scripted pipeline, I can switch).
The builds run inside an ephemeral Docker agent spun up by the pipeline 
using a syntax similar to

agent { docker { image "my-image" } }

The issue with that is that I'm going to have a big number of pipelines 
defined as Jenkinsfiles which are all fairly similar
to each other, and all of them will be running on the same Docker agent.
As such, I'd love to be able to parameterise the image name by defining it 
as a variable or a function inside a Jenkins
library, so that if the image needs to be changed I don't have to commit 
back to all of the existing Jenkinsfiles.

I saw it's possible to define the whole declarative pipeline inside a 
function, but there seems to be no option to instead set 
just a String variable to be used when setting up the agent, and no 
possibility to define anything to run outside of the
normal pipeline steps - is that correct?

To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/290526ae-a190-4a13-a779-6039c8d6775eo%40googlegroups.com.


Re: automatic up/downstream relationship and fingerprinting

2020-04-22 Thread Marco Sacchetto
So, do you mean that we should carefully select only the artifacts
that we create ourselves? That indeed makes sense. I still wonder
how the Maven plugin works, though, since it does that automatically:
it can certainly fingerprint the build products by checking the POM
file, but I wonder how it can distinguish between internal and
external dependencies. Personally, with Gradle, I guess I could define
a function with a whitelist that checks the Gradle cache folder for
all the known artifacts we produce. But besides being bothersome
to create, it's also not particularly safe: extra artifacts introduced
by developers would make the whole thing unreliable without giving
you any hint of the breakage.
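For the record, the direction I'm now leaning towards is to fingerprint only the outputs our own build writes, rather than anything from the dependency cache - a rough sketch, with paths assumed from a standard Gradle layout:

```groovy
// Fingerprint only the jars this build itself produced (standard Gradle
// output directory assumed), never the resolved external dependencies.
post {
    success {
        fingerprint 'build/libs/*.jar'
    }
}
```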

On Wed, 22 Apr 2020 at 17:02, Martin Jost wrote:

> I'm not using pipeline jobs, but the "normal" configuration. There I can
> tell Jenkins, what to fingerprint (paths and wildcards)
>
> You need to be (very) careful about what you fingerprint. If I understood it
> correctly, the algorithm used is:
> - fingerprint whatever the user asks CI to fingerprint (MD5 checksum)
> - search through all fingerprints recorded for the up/downstream related
> jobs
> - note a link between all combinations where any fingerprint matches
>
> We once shot ourselves in the foot by taking a "much helps much" approach
> and fingerprinting just about every artifact - creating totally misleading
> links between runs.
>
> Martin
>
>
> On Wednesday, 22 April 2020 at 17:38:06 UTC+2, Marco Sacchetto wrote:
>>
>> Hi, I'm trying to create a new set of pipeline builds for a number of
>> modules.
>> The modules are built with gradle and have dependencies between
>> themselves.
>> I'd like to leverage the fingerprinting feature in Jenkins to obtain
>> upstream and downstream relationships created automatically, but I'm not
>> clear about how it would exactly work. My issue is with the fingerprint
>> function not indicating to Jenkins whether an artifact is an actual
>> dependency or a product of the build.
>>
>> For example, in a simple case:
>>
>> Build 1 builds artifact A using two external dependencies X and Y.
>> So, build 1 fingerprints A, X and Y.
>>
>> Build 2 builds artifact B using A, and it transitively depends on X
>> and Y.
>> I understand that by fingerprinting A, B, X and Y Jenkins will set this
>> as a downstream
>> build of build 1.
>>
>> Build 3 builds an artifact C totally unrelated to builds 1 and 2, but it
>> still depends
>> on X and Y. But, since X and Y have been fingerprinted in builds 1 and 2,
>> won't
>> Jenkins record this as a downstream job of both build 1 and 2? Or am I
>> missing
>> something here? I expected there would be a way to tell Jenkins that
>> artifacts X and Y were not created by builds 1 and 2, but the
>> fingerprint
>> function doesn't seem to accept any option.
>>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/CAFgfHUcZLz7Te6cYSH2Efy1c5soagB7Pg8_ic%3DYwgha9_xN8Tg%40mail.gmail.com.


automatic up/downstream relationship and fingerprinting

2020-04-22 Thread Marco Sacchetto
Hi, I'm trying to create a new set of pipeline builds for a number of 
modules.
The modules are built with gradle and have dependencies between themselves.
I'd like to leverage the fingerprinting feature in Jenkins to obtain 
upstream and
downstream relationships created automatically, but I'm not clear about how 
it
would exactly work. My issue is with the fingerprint function not indicating 
to
Jenkins whether an artifact is an actual dependency or a product of the build.

For example, in a simple case:

Build 1 builds artifact A using two external dependencies X and Y.
So, build 1 fingerprints A, X and Y.

Build 2 builds artifact B using A, and it transitively depends on X and 
Y.
I understand that by fingerprinting A, B, X and Y Jenkins will set this as 
a downstream
build of build 1.

Build 3 builds an artifact C totally unrelated to builds 1 and 2, but it 
still depends
on X and Y. But, since X and Y have been fingerprinted in builds 1 and 2, 
won't
Jenkins record this as a downstream job of both build 1 and 2? Or am I 
missing
something here? I expected there would be a way to tell Jenkins that
artifacts X and Y were not created by builds 1 and 2, but the 
fingerprint
function doesn't seem to accept any option.

To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/e79112df-aa25-4dab-b87f-cf2b4649ec77%40googlegroups.com.


Specify deployment servers as job parameters

2016-06-12 Thread Marco Sacchetto
Hi all,

I'm struggling to find an elegant solution to this problem, so I'm asking 
whether anybody has got stuck on this before and whether any specific plugin 
exists that might take me to a solution. I need to dynamically build one or 
more configuration files and deploy them to a number of servers. Deployment 
is by scp/ssh/sftp by means of a local user on the remote system (access 
might be either by key or password), plus a "sudo cp" action to move the 
file into position. Now the problem is the deployment destination: it is 
likely to change every time, and might be either a single server or a group 
of servers. If it's a group, I can assume that the same username/key/password 
can be used for all of them.

Unluckily, it looks like no "deploy server" option can be set. I also tried 
to use the credentials plugin with the "password as build parameter" plugin 
to fire up a custom script, or to launch a matrix-style job with a simpler 
script, but the latter only supports passwords and not keys. The credentials 
plugin would be fine if I could use it alone, but since it just returns a 
UUID it's of no use without a companion plugin to consume the credential it 
references.

A very dirty solution would be to pass the key or password as a parameter 
when I build, but that would mean possibly sharing the password with a 
number of people, and our policies do not allow for that (besides it being 
potentially time consuming, as it would force us to keep the passwords 
locally).
Any ideas?

To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/a4e2b44d-1c2c-4ee5-85c9-5447b2ddc691%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Git plugin fetching too much data

2015-11-03 Thread Marco Sacchetto


> There is no way to force the plugin to use clone instead of fetch.  Even 
> if there were, it would likely have the same problem, since clone is often 
> described as "init + fetch".
>
>
That was meant as a way to try to overcome what seems to be a bug in the 
plugin; I'm not sure it would have changed anything, but at least it would 
have been worth a try.
 

> You could reduce the amount of data transferred by using a shallow clone.  
> That's one of the checkboxes in the "Additional Behaviours" section.
>
>
I know about the shallow clone and we use it extensively, but unluckily 
here it's useless. The problem is that the offending big files are on a 
different branch from the one I need, and with that fetch Jenkins 
downloads all of the branches, which is why it gets so slow. So in this 
case a shallow clone would have no effect.
 

> You could reduce the amount of data transferred by retaining a reference 
> copy of the repository on each slave agent.  That's one of the checkboxes 
> in the "Additional Behaviours" section.
>

Yes, I know that as well, but we have hundreds of projects and that might 
become not so easy to manage - besides risking a big toll on disk space 
availability, which is often the reason we wipe out workspaces. 

>
> I'm not sure why it is running the fetch with the full ref spec 
> initially.  That seems like a bug.  However, it would need more 
> investigation, and you probably want to reduce the amount of data 
> transferred now.  To reduce the amount of data transferred immediately, use 
> a shallow clone.
>
>
I'll just wait to see if anybody else comes up with more ideas; if not, I 
guess I'll file a bug on the plugin.
Thanks for your time! 

To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/665edff7-47b6-4e4d-90bf-e32acfdc6bf9%40googlegroups.com.


Git plugin fetching too much data

2015-11-03 Thread Marco Sacchetto
Hi,

I am trying to get a repository cloned inside a Jenkins job. The cloning 
operation works, but it downloads far too much data, and we are having 
issues since the site is on low bandwidth and the programmers want the 
workspace to be cleaned out on each run.
The git repository needs authentication, and I only need the master branch. 
If I run a clone operation manually from a console, it downloads around 
3.5 MB of data. When the git repository needs credentials, the plugin seems 
to automatically switch to using git init + git fetch. Git fetch, though, 
downloads around 100 MB of data from the repository for nothing. I then set 
the refspec setting to 
"+refs/heads/master:refs/remotes/origin/master". If I run a git fetch 
manually from a console with that refspec, once again I get only the 
3.5 MB of data that I need.

The Jenkins Git plugin at this point behaves very strangely. What I see in 
the log is that it first runs:

"/usr/bin/git -c core.askpass=true fetch --tags --progress ***repository 
url here*** +refs/heads/*:refs/remotes/origin/* # timeout=30"

and only after that is done it finally runs a 

"/usr/bin/git -c core.askpass=true fetch --tags --progress ***repository 
url here*** +refs/heads/master:refs/remotes/origin/master # timeout=30"

This means that, since the workspace is deleted every time, I still need to 
wait for all 100 MB of data to be downloaded again on every run of the job. 
Is there a reason for this behaviour? Is there also a way to inhibit it, or 
to force the git plugin to use clone instead of fetch?
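For comparison, here is a self-contained sketch of the manual behaviour I would expect (a throwaway local repository stands in for our real remote): with the narrow refspec only master is fetched, while the plugin's first command with `+refs/heads/*:refs/remotes/origin/*` would transfer every branch.

```shell
tmp=$(mktemp -d)

# Stand-in for the real remote: one master branch plus a second branch
# that plays the role of the branch carrying the big files.
git init -q "$tmp/origin.git"
git -C "$tmp/origin.git" symbolic-ref HEAD refs/heads/master
git -C "$tmp/origin.git" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m 'initial commit'
git -C "$tmp/origin.git" branch big-files

# Narrow fetch into a fresh workspace: only master's refs are transferred.
git init -q "$tmp/workdir"
git -C "$tmp/workdir" fetch -q "$tmp/origin.git" \
    +refs/heads/master:refs/remotes/origin/master
git -C "$tmp/workdir" checkout -q -B master origin/master
git -C "$tmp/workdir" branch -r
```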

To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/10e38af4-0365-4a41-8fb1-8d2e641ee1d4%40googlegroups.com.


parameterized build: does a string parameter need escaping?

2013-11-27 Thread Marco Sacchetto
Hello, we've recently started using Jenkins (updated to 1.541) for 
automatically building/testing our local Java projects. I've now been asked 
to have Jenkins, if possible, change some XML files before the actual build, 
so we don't have to change them manually before deploying. It all worked 
great using string build parameters and some Python scripting. All but one 
thing: one of those strings is pretty long, and includes "$$" (without 
quotes). While all the rest seems to pass correctly, as if it were escaped 
before being turned into a local variable, "$$" becomes "$". After a few 
tries: $ remains $, but $$ becomes $, $$$ becomes $$, and $$$$ becomes $$. 
This actually happens before the variable is read into Python, as I could 
plainly see with "env". Is this intended behaviour, some specific situation, 
or just a bug? Does $ need escaping? If it does, are there other characters 
needing it? Or did I assume something wrong?
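If anyone wants to reproduce the check I did, it was roughly this (MYPARAM stands in for the real build parameter; here it is exported by hand just to show the inspection step, not the collapsing itself):

```shell
# Simulate the build parameter as the shell step would receive it.
export MYPARAM='a$$b'

# Dump the exact bytes of the variable: od -c makes any collapsed or
# doubled dollar signs visible at a glance.
printenv MYPARAM | od -c
```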

For more options, visit https://groups.google.com/groups/opt_out.