Thanks Josh for acting on this!

On Tue, Jun 11, 2019 at 3:15 AM Sean Busbey <bus...@apache.org> wrote:

> We used to have a build step that compressed our logs for us. I don't think
> Jenkins can read the test results if we compress the XML files from
> surefire, so I'm not sure how much space we can save. That's where I'd
> start, though.
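>
> A sketch of what that compress step might look like in a Jenkinsfile (an
> untested sketch; the stage wiring and paths are assumptions, and it
> deliberately leaves the surefire XML uncompressed so Jenkins can still
> parse test results):
>
>     post {
>       always {
>         // Compress bulky plain-text test logs; keep the surefire XML
>         // readable for the JUnit result publisher.
>         sh 'find . -path "*/surefire-reports/*-output.txt" -exec gzip {} +'
>       }
>     }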
>
> On Mon, Jun 10, 2019, 19:46 张铎(Duo Zhang) <palomino...@gmail.com> wrote:
>
> > Does surefire have some option to truncate the test output if it is too
> > large? Or does Jenkins have an option to truncate or compress a file when
> > archiving?
> >
> > > On Tue, Jun 11, 2019 at 8:40 AM, Josh Elser <els...@apache.org> wrote:
> >
> > > A cursory glance at some build artifacts showed mostly test output,
> > > which sometimes ran to multiple megabytes.
> > >
> > > So everyone else knows, I just chatted with ChrisL in Slack and he
> > > confirmed that our disk utilization is down already (after
> > > HBASE-22563). He thanked us for the quick response.
> > >
> > > We should keep pulling on this thread now that we're looking at it :)
> > >
> > > On 6/10/19 8:36 PM, 张铎(Duo Zhang) wrote:
> > > > Oh, it is the build artifacts, not the jars...
> > > >
> > > > Most of our build artifacts are build logs, but maybe the problem is
> > > > that some of the logs are very large when a test hangs...
> > > >
> > > > On Tue, Jun 11, 2019 at 8:16 AM, 张铎(Duo Zhang) <palomino...@gmail.com> wrote:
> > > >
> > > >> For the flaky jobs we just need the commit id in the console
> > > >> output; then we can build the artifacts locally. +1 on removing
> > > >> artifact caching.
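> > > >>
> > > >> Printing it is a one-liner in the Jenkinsfile (a minimal sketch;
> > > >> the stage placement is an assumption):
> > > >>
> > > >>     stage('Record commit') {
> > > >>       steps {
> > > >>         // Echo the exact commit into the console log so artifacts
> > > >>         // can be rebuilt locally from the same revision.
> > > >>         sh 'git rev-parse HEAD'
> > > >>       }
> > > >>     }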
> > > >>
> > > >> On Tue, Jun 11, 2019 at 7:50 AM, Josh Elser <els...@apache.org> wrote:
> > > >>
> > > >>> Sure, Misty. No arguments here.
> > > >>>
> > > >>> I think that might be a bigger untangling. Maybe Peter or Busbey
> > > >>> know better about how these could be de-coupled (e.g. I think
> > > >>> flakies actually look back at old artifacts), but I'm not sure off
> > > >>> the top of my head. I was just going for a quick fix to keep Infra
> > > >>> from doing something super-destructive.
> > > >>>
> > > >>> For context, I've dropped them a note in Slack to make sure what
> > > >>> I'm doing is having a positive effect.
> > > >>>
> > > >>> On 6/10/19 7:34 PM, Misty Linville wrote:
> > > >>>> Keeping artifacts and keeping build logs are two separate things.
> > > >>>> I don’t see a need to keep any artifacts past the most recent
> > > >>>> green and most recent red builds. Alternately, if we need the
> > > >>>> artifacts, let’s have Jenkins put them somewhere rather than
> > > >>>> keeping them there. You can get back to whatever hash you need
> > > >>>> within git to reproduce a build problem.
> > > >>>>
> > > >>>> On Mon, Jun 10, 2019 at 2:26 PM Josh Elser <els...@apache.org>
> > > >>>> wrote:
> > > >>>>
> > > >>>>> https://issues.apache.org/jira/browse/HBASE-22563 for a quick
> > > >>>>> bandaid (I hope).
> > > >>>>>
> > > >>>>> On 6/10/19 4:31 PM, Josh Elser wrote:
> > > >>>>>> Eyes on.
> > > >>>>>>
> > > >>>>>> Looking at master, we already have the linked configuration,
> > > >>>>>> set to retain 30 builds.
> > > >>>>>>
> > > >>>>>> We have some extra branches which we can lop off (branch-1.2,
> > > >>>>>> branch-2.0, maybe some feature branches too). A quick fix might
> > > >>>>>> be to just pull back that 30 to 10.
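> > > >>>>>>
> > > >>>>>> For a multibranch pipeline that would be something like this in
> > > >>>>>> the Jenkinsfile options (a sketch; the exact numbers, and that
> > > >>>>>> we're on declarative pipeline, are assumptions):
> > > >>>>>>
> > > >>>>>>     options {
> > > >>>>>>       // Keep the last 10 builds, but artifacts for only the
> > > >>>>>>       // last 2, since artifacts are what eat the disk.
> > > >>>>>>       buildDiscarder(logRotator(numToKeepStr: '10',
> > > >>>>>>                                 artifactNumToKeepStr: '2'))
> > > >>>>>>     }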
> > > >>>>>>
> > > >>>>>> Largely figuring out how this stuff works now; give me a shout
> > > >>>>>> in Slack if anyone else has cycles.
> > > >>>>>>
> > > >>>>>> On 6/10/19 2:34 PM, Peter Somogyi wrote:
> > > >>>>>>> Hi,
> > > >>>>>>>
> > > >>>>>>> HBase jobs are using more than 400GB based on this list.
> > > >>>>>>> Could someone take a look at the job configurations today?
> > > >>>>>>> Otherwise, I will look into it tomorrow morning.
> > > >>>>>>>
> > > >>>>>>> Thanks,
> > > >>>>>>> Peter
> > > >>>>>>>
> > > >>>>>>> ---------- Forwarded message ---------
> > > >>>>>>> From: Chris Lambertus <c...@apache.org>
> > > >>>>>>> Date: Mon, Jun 10, 2019 at 7:57 PM
> > > >>>>>>> Subject: ACTION REQUIRED: disk space on jenkins master nearly
> > > >>>>>>> full
> > > >>>>>>> To: <bui...@apache.org>
> > > >>>>>>> Cc: <d...@mesos.apache.org>, <d...@pulsar.apache.org>
> > > >>>>>>>
> > > >>>>>>>
> > > >>>>>>> Hello,
> > > >>>>>>>
> > > >>>>>>> The jenkins master is nearly full.
> > > >>>>>>>
> > > >>>>>>> The workspaces listed below need significant size reduction
> > > >>>>>>> within 24 hours, or Infra will need to perform some manual
> > > >>>>>>> pruning of old builds to keep the jenkins system running. The
> > > >>>>>>> Mesos “Packaging” job also needs to be corrected to include
> > > >>>>>>> the project name (mesos-packaging), please.
> > > >>>>>>>
> > > >>>>>>> It appears that the typical ‘Discard Old Builds’ checkbox in
> > > >>>>>>> the job configuration may not be working for multibranch
> > > >>>>>>> pipeline jobs. Please refer to these articles for information
> > > >>>>>>> on discarding builds in multibranch jobs:
> > > >>>>>>>
> > > >>>>>>>
> > > >>>>>>> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> > > >>>>>>>
> > > >>>>>>> https://issues.jenkins-ci.org/browse/JENKINS-35642
> > > >>>>>>>
> > > >>>>>>> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> > > >>>>>>>
> > > >>>>>>>
> > > >>>>>>>
> > > >>>>>>>
> > > >>>>>>> NB: I have not fully vetted the above information; I just
> > > >>>>>>> notice that many of these jobs have ‘Discard old builds’
> > > >>>>>>> checked, but it is clearly not working.
> > > >>>>>>>
> > > >>>>>>>
> > > >>>>>>> If you are unable to reduce your disk usage beyond what is
> > > >>>>>>> listed, please let me know what the reasons are and we’ll see
> > > >>>>>>> if we can find a solution. If you believe you’ve configured
> > > >>>>>>> your job properly and the space usage is more than you expect,
> > > >>>>>>> please comment here and we’ll take a look at what might be
> > > >>>>>>> going on.
> > > >>>>>>>
> > > >>>>>>> I cut this list off arbitrarily at workspaces of 40GB and
> > > >>>>>>> larger. There are many between 20 and 30GB which also need to
> > > >>>>>>> be addressed, but these are the current top contributors to
> > > >>>>>>> the disk space situation.
> > > >>>>>>>
> > > >>>>>>>
> > > >>>>>>> 594G    Packaging
> > > >>>>>>> 425G    pulsar-website-build
> > > >>>>>>> 274G    pulsar-master
> > > >>>>>>> 195G    hadoop-multibranch
> > > >>>>>>> 173G    HBase Nightly
> > > >>>>>>> 138G    HBase-Flaky-Tests
> > > >>>>>>> 119G    netbeans-release
> > > >>>>>>> 108G    Any23-trunk
> > > >>>>>>> 101G    netbeans-linux-experiment
> > > >>>>>>> 96G     Jackrabbit-Oak-Windows
> > > >>>>>>> 94G     HBase-Find-Flaky-Tests
> > > >>>>>>> 88G     PreCommit-ZOOKEEPER-github-pr-build
> > > >>>>>>> 74G     netbeans-windows
> > > >>>>>>> 71G     stanbol-0.12
> > > >>>>>>> 68G     Sling
> > > >>>>>>> 63G     Atlas-master-NoTests
> > > >>>>>>> 48G     FlexJS Framework (maven)
> > > >>>>>>> 45G     HBase-PreCommit-GitHub-PR
> > > >>>>>>> 42G     pulsar-pull-request
> > > >>>>>>> 40G     Atlas-1.0-NoTests
> > > >>>>>>>
> > > >>>>>>>
> > > >>>>>>>
> > > >>>>>>> Thanks,
> > > >>>>>>> Chris
> > > >>>>>>> ASF Infra
> > > >>>>>>>
