Re: ACTION REQUIRED: disk space on jenkins master nearly full

2019-06-14 Thread Chris Lambertus
All,

Thanks to those who have addressed this so far. The immediate storage issue has 
been resolved, but some builds still need to be fixed to ensure the build 
master does not run out of space again anytime soon.

Here is the current list of builds storing over 40GB on the master:

597G  Packaging
204G  pulsar-master
199G  hadoop-multibranch
108G  Any23-trunk
93G   HBase Nightly
88G   PreCommit-ZOOKEEPER-github-pr-build
71G   stanbol-0.12
64G   Atlas-master-NoTests
50G   HBase-Find-Flaky-Tests
42G   PreCommit-ZOOKEEPER-github-pr-build-maven


If you are unable to reduce the size of your retained builds, please let me 
know. I have added some additional project dev lists to the CC as I would like 
to hear back from everyone on this list as to the state of their stored builds.

Thanks,
Chris




> On Jun 10, 2019, at 10:57 AM, Chris Lambertus  wrote:
> 
> Hello,
> 
> The jenkins master is nearly full.
> 
> The workspaces listed below need significant size reduction within 24 hours, 
> or Infra will need to perform some manual pruning of old builds to keep the 
> jenkins system running. Please also rename the Mesos “Packaging” job to 
> include the project name (mesos-packaging).
> 
> It appears that the typical ‘Discard Old Builds’ checkbox in the job 
> configuration may not be working for multibranch pipeline jobs. Please refer 
> to these articles for information on discarding builds in multibranch jobs:
> 
> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> https://issues.jenkins-ci.org/browse/JENKINS-35642
> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> 
> 
> 
> NB: I have not fully vetted the above information; I just noticed that many of 
> these jobs have ‘Discard old builds’ checked, but it is clearly not working. 
> 
> 
> If you are unable to reduce your disk usage beyond what is listed, please let 
> me know what the reasons are and we’ll see if we can find a solution. If you 
> believe you’ve configured your job properly and the space usage is more than 
> you expect, please comment here and we’ll take a look at what might be going 
> on. 
> 
> I cut this list off arbitrarily at workspaces of 40GB and larger. There are 
> many between 20 and 30GB that also need to be addressed, but these are the 
> current top contributors to the disk space situation.
> 
> 
> 594G  Packaging
> 425G  pulsar-website-build
> 274G  pulsar-master
> 195G  hadoop-multibranch
> 173G  HBase Nightly
> 138G  HBase-Flaky-Tests
> 119G  netbeans-release
> 108G  Any23-trunk
> 101G  netbeans-linux-experiment
> 96G   Jackrabbit-Oak-Windows
> 94G   HBase-Find-Flaky-Tests
> 88G   PreCommit-ZOOKEEPER-github-pr-build
> 74G   netbeans-windows
> 71G   stanbol-0.12
> 68G   Sling
> 63G   Atlas-master-NoTests
> 48G   FlexJS Framework (maven)
> 45G   HBase-PreCommit-GitHub-PR
> 42G   pulsar-pull-request
> 40G   Atlas-1.0-NoTests
> 
> 
> 
> Thanks,
> Chris
> ASF Infra
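
For multibranch pipelines, the articles above amount to configuring the
discard policy in the Jenkinsfile itself rather than relying on the job-level
checkbox. A minimal declarative sketch, with illustrative retention numbers
rather than a recommendation:

    pipeline {
        agent any
        options {
            // Keep build records for the last 10 builds and archived
            // artifacts for only the most recent one. The logRotator
            // arguments are strings by API convention.
            buildDiscarder(logRotator(numToKeepStr: '10', artifactNumToKeepStr: '1'))
        }
        stages {
            stage('Build') {
                steps {
                    sh 'mvn -B clean verify'
                }
            }
        }
    }

Because the options block is evaluated when a branch builds, each branch job
picks up the policy on its next run; branches that never build again keep
their old runs until they are pruned by other means, which matches the
"checked but not working" behavior described above.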



Re: HBase nightly job failing forever

2018-07-25 Thread Chris Lambertus


> On Jul 25, 2018, at 10:34 AM, Andrew Purtell  wrote:
> 

> public clouds instead. I'm not sure if the ASF is set up to manage on
> demand billing for test resources but this could be advantageous. It would
> track actual usage not fixed costs. To avoid budget overrun there would be
> caps and limits. Eventually demand would hit this new ceiling but the


On-demand resources are certainly being considered (and we had these in the 
past), but I will point out that ephemeral (“on-demand”) cloud builds are in 
direct opposition to some of the points Allen raised in the other jenkins 
storage thread: the builds he described tend to rely on persistent storage in 
their workspaces to make successive runs more efficient. Perhaps this would be 
less of an issue with an on-demand instance, which would theoretically have no 
resource contention?


-Chris
ASF Infra


> --
> Best regards,
> Andrew
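
To make the tension concrete: the efficiency of a persistent-workspace build
comes from state that survives between runs, such as a local dependency cache.
A minimal sketch of what that reliance looks like in a Jenkinsfile (the cache
path is hypothetical; an ephemeral node would start with it empty and
re-download everything):

    node {
        checkout scm
        // Reuse a dependency cache that persists on the build node between
        // runs. /var/cache/ci-maven is a hypothetical mount; on an ephemeral
        // node this directory starts empty, so every build pays the full
        // download cost again.
        sh 'mvn -B -Dmaven.repo.local=/var/cache/ci-maven clean verify'
    }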






hbase build sizes

2017-03-08 Thread Chris Lambertus
Hi HBase folks,

Just wanted to advise you of some build size issues we noticed on the jenkins 
master. It looks like over a span of about 2 weeks, the hbase builds are using 
a very large amount of storage in the retained builds directory:

79G  PreCommit-HBASE-Build
72G  HBase-1.3-JDK8
68G  HBase-1.3-JDK7
60G  HBase-1.2-JDK7
59G  HBase-1.2-JDK8
51G  HBase-1.2-IT
36G  HBase Website Link Ckecker

Each stored build appears to use on average 600MB-1.1GB of disk space, and 
there are over 200 builds in two weeks for PreCommit-HBASE-Build alone. Some 
are much smaller (1-8MB).

We are running a bit low on storage on the jenkins master, so anything you 
could help us do to reduce the footprint here would be appreciated.

I’ve already discussed this a bit with Sean Busbey on Hipchat; he asked me to 
follow up here.

Thanks,

-Chris
ASF Infra
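
If any of the jobs above are pipeline-based, the same kind of cap can be
applied from the Jenkinsfile with the scripted-pipeline properties step; a
minimal sketch with illustrative numbers, not a recommendation for these
specific jobs:

    // Apply a build-discard policy from within a scripted pipeline.
    properties([
        buildDiscarder(logRotator(
            daysToKeepStr: '14',       // drop build records older than 14 days
            numToKeepStr: '50',        // keep at most 50 build records
            artifactNumToKeepStr: '5'  // keep artifacts for the last 5 builds
        ))
    ])

    node {
        stage('Build') {
            checkout scm
            sh 'mvn -B clean verify'
        }
    }

For freestyle jobs, the job-level ‘Discard Old Builds’ setting covers the same
ground; the artifact cap is the piece that most directly shrinks the retained
builds directory on the master.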


