Hmm. Upon further investigation, I do see that the
pulsar-master/modules/*/builds directories contain current builds from between
13 and 15 June, plus hundreds of builds from 2017 and 2018. It looks like the
older builds are indeed “orphaned”, and your ‘discard old builds’
configuration is working properly for the current ones.

I have manually removed all builds older than 14 days from the 
pulsar-master/modules directory, and your usage is looking good now:

root@jenkins02:/x1/jenkins/jenkins-home/jobs# du -sh pulsar-master
23G     pulsar-master
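
For reference, a sweep along these lines can do this sort of pruning (a
sketch, assuming GNU find and the modules/<module>/builds/<build-number>
layout noted above; run the -print form first to confirm what would be
removed):

cd /x1/jenkins/jenkins-home/jobs/pulsar-master/modules
# dry run: list per-module build directories last modified more than 14 days ago
find . -mindepth 3 -maxdepth 3 -path '*/builds/*' -type d -mtime +14 -print
# once the listing looks right, remove them
find . -mindepth 3 -maxdepth 3 -path '*/builds/*' -type d -mtime +14 -exec rm -rf {} +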

I really appreciate your attention to this. I’ll check back again in a few 
weeks’ time to make sure that the builds are getting pruned as intended.


I will also look for this pathology in other build directories, in case the 
problem lies with Jenkins itself rather than the build configuration.

-Chris



> On Jun 14, 2019, at 6:15 PM, Matteo Merli <matteo.me...@gmail.com> wrote:
> 
> Hi Chris,
> sorry, I lost the updates on this thread.
> 
> After applying the "discard old builds" check, I saw all the old stuff
> going away. Even now I don't see any of the old builds in the
> Jenkins UI.
> https://builds.apache.org/job/pulsar-master/
> 
> Is it possible that Jenkins failed to clean these up for some
> reason? In any case, please go ahead and remove those directories.
> 
> Matteo
> --
> Matteo Merli
> <matteo.me...@gmail.com>
> 
> On Mon, Jun 10, 2019 at 2:29 PM Chris Lambertus <c...@apache.org> wrote:
>> 
>> Matteo,
>> 
>> pulsar-website cleaned up nicely. pulsar-master is still problematic: 
>> despite the job having run a few minutes ago, there are still builds dating 
>> back to 2017 in the pulsar-master/modules/org.apache.pulsar* directories, so 
>> it appears that ‘discard old builds’ is not working for the Maven module 
>> builds either. I have not yet found any suggested solutions to this.
>> 
>> -Chris
>> 
>> 
>> 
>>> On Jun 10, 2019, at 11:14 AM, Chris Lambertus <c...@apache.org> wrote:
>>> 
>>> Outstanding, thanks. I believe the job cleanup runs when the next build 
>>> runs. You could manually trigger a build to test, or we can check the next 
>>> time the build runs automatically (presuming it runs nightly).
>>> 
>>> -Chris
>>> 
>>> 
>>>> On Jun 10, 2019, at 11:10 AM, Matteo Merli <mme...@apache.org> wrote:
>>>> 
>>>> For pulsar-website-build and pulsar-master, the "discard old builds"
>>>> option wasn't set, unfortunately. I have just enabled it. I'm not sure if
>>>> there's a way to quickly trigger a manual cleanup.
>>>> 
>>>> Regarding "pulsar-pull-request": this was an old Jenkins job that is no
>>>> longer used (we switched to multiple smaller PR validation jobs a while
>>>> ago). I have removed the Jenkins job; hopefully that will take care of
>>>> cleaning up all the files.
>>>> 
>>>> 
>>>> Thanks,
>>>> Matteo
>>>> 
>>>> --
>>>> Matteo Merli
>>>> <mme...@apache.org>
>>>> 
>>>> On Mon, Jun 10, 2019 at 10:57 AM Chris Lambertus <c...@apache.org> wrote:
>>>>> 
>>>>> Hello,
>>>>> 
>>>>> The Jenkins master is nearly full.
>>>>> 
>>>>> The workspaces listed below need significant size reduction within 24 
>>>>> hours, or Infra will need to perform some manual pruning of old builds to 
>>>>> keep the Jenkins system running. The Mesos “Packaging” job name also 
>>>>> needs to be corrected to include the project name (mesos-packaging), please.
>>>>> 
>>>>> It appears that the typical ‘Discard Old Builds’ checkbox in the job 
>>>>> configuration may not be working for multibranch pipeline jobs. Please 
>>>>> refer to these articles for information on discarding builds in 
>>>>> multibranch jobs (a rough sketch of the Jenkinsfile approach follows the 
>>>>> links):
>>>>> 
>>>>> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
>>>>> https://issues.jenkins-ci.org/browse/JENKINS-35642
>>>>> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
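>>>>> 
>>>>> For multibranch pipelines, the gist of those articles is to set the
>>>>> discarder from the Jenkinsfile itself rather than from the job
>>>>> configuration UI. A minimal sketch in declarative pipeline syntax (the
>>>>> retention numbers are placeholders, not a recommendation):
>>>>> 
>>>>> pipeline {
>>>>>     agent any
>>>>>     options {
>>>>>         // keep at most 10 builds, and nothing older than 14 days
>>>>>         buildDiscarder(logRotator(numToKeepStr: '10', daysToKeepStr: '14'))
>>>>>     }
>>>>>     stages {
>>>>>         stage('Build') {
>>>>>             steps {
>>>>>                 sh 'echo building'
>>>>>             }
>>>>>         }
>>>>>     }
>>>>> }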
>>>>> 
>>>>> 
>>>>> 
>>>>> NB: I have not fully vetted the above information; I just noticed that 
>>>>> many of these jobs have ‘Discard old builds’ checked, but it is clearly 
>>>>> not working.
>>>>> 
>>>>> 
>>>>> If you are unable to reduce your disk usage beyond what is listed, please 
>>>>> let me know what the reasons are and we’ll see if we can find a solution. 
>>>>> If you believe you’ve configured your job properly and the space usage is 
>>>>> more than you expect, please comment here and we’ll take a look at what 
>>>>> might be going on.
>>>>> 
>>>>> I cut this list off, somewhat arbitrarily, at workspaces of 40GB and 
>>>>> larger. There are many between 20 and 30GB which also need to be 
>>>>> addressed, but these are the current top contributors to the disk space 
>>>>> situation.
>>>>> 
>>>>> 
>>>>> 594G    Packaging
>>>>> 425G    pulsar-website-build
>>>>> 274G    pulsar-master
>>>>> 195G    hadoop-multibranch
>>>>> 173G    HBase Nightly
>>>>> 138G    HBase-Flaky-Tests
>>>>> 119G    netbeans-release
>>>>> 108G    Any23-trunk
>>>>> 101G    netbeans-linux-experiment
>>>>> 96G     Jackrabbit-Oak-Windows
>>>>> 94G     HBase-Find-Flaky-Tests
>>>>> 88G     PreCommit-ZOOKEEPER-github-pr-build
>>>>> 74G     netbeans-windows
>>>>> 71G     stanbol-0.12
>>>>> 68G     Sling
>>>>> 63G     Atlas-master-NoTests
>>>>> 48G     FlexJS Framework (maven)
>>>>> 45G     HBase-PreCommit-GitHub-PR
>>>>> 42G     pulsar-pull-request
>>>>> 40G     Atlas-1.0-NoTests
>>>>> 
>>>>> 
>>>>> 
>>>>> Thanks,
>>>>> Chris
>>>>> ASF Infra
>>> 
>> 
