[JIRA] (JENKINS-49199) Job DSL Authorizarion Matrix cannot manage the folder inheritance

2019-05-23 Thread joshua.slee...@gmail.com (JIRA)
 Josh Sleeper commented on  JENKINS-49199  
  Re: Job DSL Authorizarion Matrix cannot manage the folder inheritance   
Apologies for the slight necro-post, but I haven't come across a better place to mention that this isn't working for me with the very latest Jenkins and matrix-auth plugins. Even something as simple as the example Daniel Spilker gave above (using authorizationMatrix, since that was the symbol chosen) fails for me with the following error:

 

ERROR: Scripts not permitted to use method groovy.lang.GroovyObject invokeMethod java.lang.String java.lang.Object (javaposse.jobdsl.dsl.Folder authorizationMatrix ConfigureJobDsl$_run_closure2$_closure5$_closure6) 

Here's the specific code that's giving me that error:

 

folder('project') {
    properties {
        authorizationMatrix {
            inheritanceStrategy {
                inheriting()
            }
            permissions(['hudson.model.Item.Build:jsleeper'])
        }
    }
}
 This message was sent by Atlassian Jira (v7.11.2#711002-sha1:fdc329d)  
-- 
You received this message because you are subscribed to the Google Groups "Jenkins Issues" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jenkinsci-issues+unsubscr...@googlegroups.com.
To view this discussion on the web visit 

[JIRA] (JENKINS-40723) Built Dockerfile images are never removed

2017-01-04 Thread joshua.slee...@gmail.com (JIRA)
 Josh Sleeper commented on  JENKINS-40723  
  Re: Built Dockerfile images are never removed   
James Dumay I can totally see what you're saying, but to some extent I kinda think yes. To me, part of the beauty of using Dockerfile(s) in the Declarative Pipeline syntax is that it really does let me stop caring about my nodes. I don't care about their platform, I don't care about what they have installed (beyond Docker, of course), and I don't care about managing a complete pre-built Docker image somewhere. Not caring too much about the images I generate that way seems like it fits right into that mentality. Here's the perspective I think many people might end up seeing this from:
 As someone who is a Jenkins user but not a Jenkins admin, working with a pool of generic nodes with Docker installed, I don't want to be that person who used up all of a slave/node's disk space because:
 
I didn't have permission to access the nodes directly and clean up my old images 
I didn't know how to clean up my old images 
I didn't have time or care enough to clean up my old images 
 
One solution, just like you suggested, is to run something like docker-gc on each and every node in the pool at a regular cadence. Frankly, to manage a whole pool of Docker nodes like I'm describing, that may very well have to be something we do anyway, and it would just become part of the requirements for being a Docker node. I'm just not sure whether everyone else thinks that running something like docker-gc totally separate from the Jenkins job creating the images is a suitable solution. Does that make sense, or am I missing something still?
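As one illustration of what that out-of-band cleanup policy could look like (the function name, the pair layout, and the age-based cutoff here are mine, purely for illustration; docker-gc has its own rules), a cron-driven script on each node might parse `docker images` output and select anything older than some threshold for removal:

```python
from datetime import datetime, timedelta

def select_stale_images(images, max_age_days, now):
    """Return the IDs of images older than the age cutoff.

    `images` is a list of (image_id, created_at) pairs, e.g. parsed from
    `docker images --format '{{.ID}} {{.CreatedAt}}'`. The age-based policy
    is a hypothetical stand-in for whatever docker-gc actually does.
    """
    cutoff = now - timedelta(days=max_age_days)
    return [image_id for image_id, created in images if created < cutoff]

# Example: with a 7-day cutoff, only the month-old image is selected.
now = datetime(2017, 1, 4)
images = [
    ("sha256:aaa", datetime(2016, 12, 1)),  # roughly a month old
    ("sha256:bbb", datetime(2017, 1, 3)),   # one day old
]
print(select_stale_images(images, 7, now))  # -> ['sha256:aaa']
```

The selected IDs would then be fed to `docker rmi` (or you'd just run docker-gc itself); the point is only that the policy lives entirely outside the Jenkins job that built the images.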
[JIRA] (JENKINS-40723) Built Dockerfile images are never removed

2017-01-02 Thread joshua.slee...@gmail.com (JIRA)
 Josh Sleeper commented on  JENKINS-40723  
  Re: Built Dockerfile images are never removed   
 Preemptive apology for the novel of a comment, but I wanted to be detailed with my thoughts. 
Patrick Wolf is absolutely right: building the image every time does mostly defeat the purpose. But being able to store our Docker definition alongside our pipeline definition has been exceedingly convenient for both our dev and QA teams in my experience so far. This may or may not be hard to do on your end (I'm not too familiar with the code managing the Dockerfile interactions), but here's how I imagine the Dockerfile flow in Declarative Pipeline could work:
Each Declarative Pipeline job run using a Dockerfile would retain one or more pairs of fingerprints, where each pair contains a Dockerfile fingerprint and the fingerprint of the Docker image built from that Dockerfile. For each job run that uses Dockerfiles, there are then two possible paths to follow:
 
- The Dockerfile fingerprint does match the fingerprint from the previous job run, meaning that ideally we shouldn't rebuild unless we have to. To determine that, we check the current node for the image fingerprint from the previous job run:
  - If the current node does have an image that matches the image fingerprint from the previous job run, just run that image and continue with the job.
  - If the current node doesn't have an image that matches the image fingerprint from the previous job run, then we logically need to build it on the current node even though the Dockerfile itself hasn't changed.
- The Dockerfile fingerprint doesn't match the fingerprint from the previous job run, meaning we should rebuild and clean up previously created Docker images if present. Just like the first path, we check the current node for the image fingerprint from the previous job run, but this time we focus on cleanup:
  - If the current node does have an image that matches the image fingerprint from the previous job run, remove that image from the current node and then build and run like normal.
  - If the current node doesn't have an image that matches the image fingerprint from the previous job run, then we've at least done our due diligence to clean up, and we just build and run like normal.
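 The four cases above can be sketched as a small decision function (the parameter names and action strings here are hypothetical, invented just to make the two paths concrete):

```python
def dockerfile_image_actions(prev_dockerfile_fp, cur_dockerfile_fp,
                             prev_image_fp, node_images):
    """Decide what a node should do, given the fingerprint pair retained
    from the previous job run and the set of image fingerprints present
    on the current node. Returns an ordered list of action strings.
    """
    actions = []
    if cur_dockerfile_fp == prev_dockerfile_fp:
        # Path 1: Dockerfile unchanged -> reuse the image if this node has it.
        if prev_image_fp in node_images:
            actions.append("run-existing-image")
        else:
            # This node never built the image, so build despite no change.
            actions.append("build")
            actions.append("run-new-image")
    else:
        # Path 2: Dockerfile changed -> clean up the stale image, then rebuild.
        if prev_image_fp in node_images:
            actions.append("remove-stale-image")
        actions.append("build")
        actions.append("run-new-image")
    return actions

# Unchanged Dockerfile, image already on this node -> just run it.
print(dockerfile_image_actions("df1", "df1", "img1", {"img1"}))
# -> ['run-existing-image']
# Changed Dockerfile, stale image present -> clean up, rebuild, run.
print(dockerfile_image_actions("df1", "df2", "img1", {"img1"}))
# -> ['remove-stale-image', 'build', 'run-new-image']
```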
 Keeping Dockerfile and Docker image fingerprints associated as a pair ensures that you can selectively remove or rebuild per Dockerfile used, and removing images only relative to fingerprints from the last job run handles what I'm guessing is the common case for Dockerfile image management. 
 I think this gives us the best overall user-friendliness for Dockerfiles in the declarative pipeline syntax, following the mentality that users generally shouldn't have to think