Re: user-added planet script

2008-05-13 Thread Karsten 'quaid' Wade

On Thu, 2008-05-08 at 12:44 -0400, seth vidal wrote:
 On Thu, 2008-05-08 at 19:20 +0300, Nicu Buculei wrote:
 
  Should we probably add a language field here for when/if we have 
  language-based sub-planets? Would it be possible to have multiple 
  values, like
  language = english, french
  for the case where someone blogs about half the time in one language 
  and half in the other?
  
 
 actually I was thinking of just having different .planet files:
 
 .planet.fr == planet french
 .planet.art == planet art
 
 etc, etc.
 
 a simple type of group.

+1 ... I like the whole thing, this is a nice level of self-service.
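
For concreteness, a per-group file under that scheme might look something 
like this. This is purely a hypothetical sketch: the file name follows 
seth's naming above, but the section/field syntax and the feed URL are 
invented for illustration, not the real planet config format.

```ini
; hypothetical ~/.planet.fr -- entries for the french sub-planet,
; same shape as a user's main ~/.planet file (syntax assumed)
[http://example.org/blog/feed/]
name = Some Contributor
```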

- Karsten
-- 
Karsten Wade, Sr. Developer Community Mgr.
Dev Fu : http://developer.redhatmagazine.com
Fedora : http://quaid.fedorapeople.org
gpg key : AD0E0C41


signature.asc
Description: This is a digitally signed message part
___
Fedora-infrastructure-list mailing list
Fedora-infrastructure-list@redhat.com
https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list


Disk Space Issues - cvs-int/build hosts - (includes general note on Nagios Disk notifications)

2008-05-13 Thread Nigel Jones

Hey guys,

I seem to have deleted the particular Nagios notification I wanted to 
mention, but as I noted in SOP/Nagios, the percentages in those e-mails 
are what is *left*, so inode=99% doesn't mean that 99% of the inodes are 
used; it means 99% are still free.
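
That is the opposite of the df-style "Use%" convention; a throwaway 
shell sketch of the conversion (the 99% value is just the example 
above):

```shell
# Nagios mails (per SOP/Nagios) report what is LEFT; df-style "Use%" is
# the complement.  Convert explicitly to avoid misreading an alert.
free_pct=99                      # e.g. "inode=99%" in a Nagios mail
used_pct=$((100 - free_pct))     # the df-style figure
echo "free=${free_pct}% used=${used_pct}%"
```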


Anyway: Nagios has been complaining about cvs-int recently, in 
particular that /git has reached WARNING. After a bit of hunting around, 
I found /repo/pkgs using 168 GiB of the 192 GiB available, which is 
understandable; Fedora has gotten huge.


The problem is that there are a LOT of old tarballs in that directory, 
which leaves me wondering whether we should do a spring clean about a 
month after each release.


Let's take banshee, a package I adopted, as an example:

$ ls -l /repo/pkgs/banshee/
total 88
drwxrwsr-x  3 apache repoextras 4096 May  3  2006 banshee-0.10.10.tar.gz
drwxrwsr-x  3 apache repoextras 4096 Aug  7  2006 banshee-0.10.11.tar.gz
drwxrwsr-x  3 apache repoextras 4096 Aug 24  2006 banshee-0.10.12.tar.gz
drwxrwsr-x  3 apache repoextras 4096 Mar  4  2006 banshee-0.10.6.tar.gz
drwxrwsr-x  3 apache repoextras 4096 Mar  7  2006 banshee-0.10.7.tar.gz
drwxrwsr-x  3 apache repoextras 4096 Mar 14  2006 banshee-0.10.8.tar.gz
drwxrwsr-x  3 apache repoextras 4096 Apr 16  2006 banshee-0.10.9.tar.gz
drwxrwsr-x  3 apache repoextras 4096 Feb  2  2007 banshee-0.11.5.tar.bz2
drwxrwsr-x  3 apache repoextras 4096 Mar  7  2007 banshee-0.12.0.tar.bz2
drwxrwsr-x  3 apache repoextras 4096 Apr  5  2007 banshee-0.12.1.tar.bz2
drwxrwsr-x  3 apache repoextras 4096 Aug  7  2007 banshee-0.13.0.tar.bz2
drwxrwsr-x  3 apache repoextras 4096 Aug 31  2007 banshee-0.13.1.tar.bz2
drwxrwsr-x  3 apache repoextras 4096 Jan 14 15:58 banshee-0.13.2.tar.bz2
drwxrwsr-x  3 apache repoextras 4096 Apr 13 01:51 banshee-1-0.98.3.tar.bz2
drwxrwsr-x  3 apache repoextras 4096 May 10 03:41 banshee-1-0.99.1.tar.bz2

$ du -sh /repo/pkgs/banshee/
26M /repo/pkgs/banshee/
(roughly 2 MB per tarball)
At most there should be only 4 tarballs there (R-2, R-1, R, Rawhide), 
with R-2 valid only for one month after R has been released.
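
That retention rule could be sketched as a one-liner. This is an 
illustration, not a proposed tool: the `4` encodes the R-2/R-1/R/Rawhide 
window from the rule above, and it does not implement the one-month 
grace period for R-2.

```shell
# stale_tarballs DIR: print every entry in DIR except the 4 most
# recently modified ones, i.e. candidates outside the R-2..Rawhide
# window proposed above.
stale_tarballs() {
  ls -1t "$1" | tail -n +5
}
```

For example, `stale_tarballs /repo/pkgs/banshee` would list everything 
older than the four newest tarballs in the listing above.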


Another couple of examples:
$ du -sh /repo/pkgs/kernel/
2.6G    /repo/pkgs/kernel/
(900ish tarballs dating back to 2004)

$ du -sh /repo/pkgs/kdeedu
1.4G    /repo/pkgs/kdeedu
(48 tarballs from KDE-3.0)

With plans such as Hans' plan to create a 500 MB vegastrike data SRPM, 
and the size growth and update schedules of some packages, we are going 
to have these nightmares more and more frequently.


Two solutions I can think of:

Create a script that goes through ALL non-dead.package modules, grabs 
the tarball names from each package's sources file, builds a small 
database of what we are actually using, scans through /repo/pkgs, and 
then either:

* moves old tarballs to some archiving system (another use for archive.fp.o?), or
* deletes old tarballs.

Or throw more disk space at cvs-int.
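
A very rough sketch of that audit script, assuming the layout shown in 
the banshee listing (lookaside entries at /repo/pkgs/PKG/TARBALL/...) 
and "checksum  tarball" lines in each package's sources file; a real 
version would also need to skip dead.package modules:

```shell
# audit_lookaside CHECKOUTS LOOKASIDE: print lookaside tarball names
# that no package's "sources" file still references -- the candidates
# for archiving or deletion.  Needs bash (process substitution) and
# GNU find; the directory layout is an assumption, see above.
audit_lookaside() {
  local checkouts=$1 lookaside=$2
  comm -23 \
    <(find "$lookaside" -mindepth 2 -maxdepth 2 -printf '%f\n' | sort -u) \
    <(find "$checkouts" -name sources -exec awk '{print $2}' {} + | sort -u)
}
```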

Even if we were to remove only 15% of the tarballs there (a very 
cautious estimate of the number of stale tarballs), we could potentially 
get down to 72% disk space used on that mount (from 82%). Note that this 
is very simplistic math; in essence, we could be no better off if we 
only removed the small stale tarballs. :)


Disk space isn't cheap, so I favor deleting old tarballs. I also like 
this option because they don't disappear completely: they should be in 
the src.rpms already on archive.fp.o, and if we accidentally delete one 
or two that are still needed, well, we can grab them from the src.rpm...


This leads on to my second item...

xenbuilder2 has run out of disk space in /; it's down to 32 MB. 
Thankfully koji has disabled it, so it's safe for now, but wouldn't it 
be nice to throw in, say, a 50 GB partition dedicated solely to 
/var/lib/mock and /mnt/build? Yes, yes, I know, money; but once again, 
builds are getting bigger, so 'it'd be nice'.


I just thought I'd throw these two ideas into the open; let the 
discussion begin!




Re: Disk Space Issues - cvs-int/build hosts - (includes general note on Nagios Disk notifications)

2008-05-13 Thread Nigel Jones

Jeremy Katz wrote:

On Tue, 2008-05-13 at 23:55 +1200, Nigel Jones wrote:
  
Anyway, what this means, is that when nagios has been complaining about 
cvs-int recently, in particular the fact that /git has reached WARNING.  
After a bit of hunting around, I found /repo/pkgs using 168GiB of the 
192GiB available, which is understandable, Fedora has got huge.


Problem here, is that there are a LOT of old tarballs in that folder, 
which leaves me wondering if we should do a spring clean ~1 mo after 
release.



We can't do this.  We need to keep the tarballs for packages which were
released (in any form, be it rawhide, an update, or the actual release
-- and probably even updates-testing) for 3 years to comply with the
terms of the GPL.

Not to mention that removing them would break any ability to go back and
try to bisect or find the source of a problem in an automated fashion.
And yes, I've actually had to go back to ancient history to do so in the
past with some packages :-/
  
We were just discussing this on #fedora-admin; the other alternative 
would be (as I mentioned) to move them to some sort of publicly 
available archive storage setup, with the Makefile pointing at that 
instead of where we normally point.


Anyway, I'm just trying to kick the ball around; I'm sure there's a 
miracle solution out there somewhere, but IMO at the very least it'd be 
somewhat nice to relieve cvs-int of some of the burden.


- Nigel



Re: Disk Space Issues - cvs-int/build hosts - (includes general note on Nagios Disk notifications)

2008-05-13 Thread Jeremy Katz
On Wed, 2008-05-14 at 00:39 +1200, Nigel Jones wrote:
 Jeremy Katz wrote:
  On Tue, 2008-05-13 at 23:55 +1200, Nigel Jones wrote:
  Anyway, what this means, is that when nagios has been complaining about 
  cvs-int recently, in particular the fact that /git has reached WARNING.  
  After a bit of hunting around, I found /repo/pkgs using 168GiB of the 
  192GiB available, which is understandable, Fedora has got huge.
 
  Problem here, is that there are a LOT of old tarballs in that folder, 
  which leaves me wondering if we should do a spring clean ~1 mo after 
  release.
  
 
  We can't do this.  We need to keep the tarballs for packages which were
  released (in any form, be it rawhide, an update, or the actual release
  -- and probably even updates-testing) for 3 years to comply with the
  terms of the GPL.
 
  Not to mention that removing them would break any ability to go back and
  try to bisect or find the source of a problem in an automated fashion.
  And yes, I've actually had to go back to ancient history to do so in the
  past with some packages :-/

 We were just discussing this on #fedora-admin, the other alternative 
 would be (as I mentioned) move them to some sort of publically available 
 archive storage setup, with the Makefile pointing towards that instead 
 of where we normally point.

If we're going to do that, we might as well use that storage for the 
tarball repo always, and not special-case old vs. new. At which point, 
the easy answer is just to NFS-mount that storage as /repo/pkgs :)

In fact, maybe that is the easy answer -- maybe we should just allocate
some of the space off of nfs1.  We're at below 200G of usage
for /repo/pkgs -- if we allocated half a terabyte, it wouldn't be that
big of a hit to what's available to koji and it would give us quite a
bit of room to grow for /repo/pkgs.
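
Mechanically, that ends up as a single fstab line; a hypothetical sketch 
(the hostname nfs1 is from the message above, but the export path and 
mount options are invented for illustration, not real infrastructure 
values):

```
nfs1:/vol/fedora/repo_pkgs  /repo/pkgs  nfs  rw,hard,intr  0 0
```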

Jeremy



Re: Disk Space Issues - cvs-int/build hosts - (includes general note on Nagios Disk notifications)

2008-05-13 Thread Jeremy Katz
On Tue, 2008-05-13 at 23:55 +1200, Nigel Jones wrote:
 Anyway, what this means, is that when nagios has been complaining about 
 cvs-int recently, in particular the fact that /git has reached WARNING.  
 After a bit of hunting around, I found /repo/pkgs using 168GiB of the 
 192GiB available, which is understandable, Fedora has got huge.
 
 Problem here, is that there are a LOT of old tarballs in that folder, 
 which leaves me wondering if we should do a spring clean ~1 mo after 
 release.

We can't do this.  We need to keep the tarballs for packages which were
released (in any form, be it rawhide, an update, or the actual release
-- and probably even updates-testing) for 3 years to comply with the
terms of the GPL.

Not to mention that removing them would break any ability to go back and
try to bisect or find the source of a problem in an automated fashion.
And yes, I've actually had to go back to ancient history to do so in the
past with some packages :-/

Jeremy



Re: Disk Space Issues - cvs-int/build hosts - (includes general note on Nagios Disk notifications)

2008-05-13 Thread Nigel Jones

Jeremy Katz wrote:

On Wed, 2008-05-14 at 00:39 +1200, Nigel Jones wrote:
  

Jeremy Katz wrote:


On Tue, 2008-05-13 at 23:55 +1200, Nigel Jones wrote:
  
Anyway, what this means, is that when nagios has been complaining about 
cvs-int recently, in particular the fact that /git has reached WARNING.  
After a bit of hunting around, I found /repo/pkgs using 168GiB of the 
192GiB available, which is understandable, Fedora has got huge.


Problem here, is that there are a LOT of old tarballs in that folder, 
which leaves me wondering if we should do a spring clean ~1 mo after 
release.



We can't do this.  We need to keep the tarballs for packages which were
released (in any form, be it rawhide, an update, or the actual release
-- and probably even updates-testing) for 3 years to comply with the
terms of the GPL.

Not to mention that removing them would break any ability to go back and
try to bisect or find the source of a problem in an automated fashion.
And yes, I've actually had to go back to ancient history to do so in the
past with some packages :-/
  
  
We were just discussing this on #fedora-admin, the other alternative 
would be (as I mentioned) move them to some sort of publically available 
archive storage setup, with the Makefile pointing towards that instead 
of where we normally point.



If we're going to do that, we might as well use that for the tarball
repo always and not have it special cased for old vs not.  At which
point, the easy answer is just to nfs mount that storage
as /repo/pkgs :)

In fact, maybe that is the easy answer -- maybe we should just allocate
some of the space off of nfs1.  We're at below 200G of usage
for /repo/pkgs -- if we allocated half a terabyte, it wouldn't be that
big of a hit to what's available to koji and it would give us quite a
bit of room to grow for /repo/pkgs.
  

You know, that's quite sane and I really like that idea. :)

- Nigel



Re: Disk Space Issues - cvs-int/build hosts - (includes general note on Nagios Disk notifications)

2008-05-13 Thread Bill Nottingham
Nigel Jones ([EMAIL PROTECTED]) said: 
 Problem here, is that there are a LOT of old tarballs in that folder, which 
 leaves me wondering if we should do a spring clean ~1 mo after release.

Until we get something like correspondingsource up and running, we don't
have a good mechanism for weeding out only the tarballs that correspond
to builds that were not/are not shipped.

Bill



Re: Disk Space Issues - cvs-int/build hosts - (includes general note on Nagios Disk notifications)

2008-05-13 Thread Mike Bonnet
On Tue, 2008-05-13 at 08:58 -0500, Dennis Gilmore wrote:
 On Tuesday 13 May 2008, Nigel Jones wrote:
  Hey guys,
 
  I seem to have deleted the nagios notification that I want to mention in
  particular, but as I noted in SOP/Nagios the %ages noted in the e-mails
  are what is left, so inode=99% doesn't mean that 99% of the inodes are
  used, it means 99% are still free.
 
  Anyway, what this means, is that when nagios has been complaining about
  cvs-int recently, in particular the fact that /git has reached WARNING.
  After a bit of hunting around, I found /repo/pkgs using 168GiB of the
  192GiB available, which is understandable, Fedora has got huge.
 
  Problem here, is that there are a LOT of old tarballs in that folder,
  which leaves me wondering if we should do a spring clean ~1 mo after
  release.
 snip
 
 We already have plans to move the lookaside cache to the netapp.  at least 
 that was the last plan i was aware of.
 
  Diskspace isn't cheap, so I like delete old tarballs, I also like this
  option because it's not like they disappear completely, they should be
  in the src.rpm's already on archive.fp.o and if we accidentally delete 1
  or 2 that are still needed, well grab it from src.rpm...
 
  This leads on to my second item...
 
  xenbuilder2 has run out of diskspace in /, it's down to 32M, thankfully
  koji has disabled it so it's safe for now, but wouldn't it be nice to
  throw say a 50GB partition dedicated solely for /var/lib/mock 
  /mnt/build?  Yes, yes, I know money, but once again, builds are getting
  bigger so 'it'd be nice'.
 
 xenbuilder1 and hammer2 are the oldest boxes we have in the buildsystem.  
 closely followed by ppc1  xenbuilder2 is also quite an old box now.  im 
 cleaning up /var/lib/mock on xenbuilder2 now.  failed builds dont get cleaned 
 up automatically.  we probably should reap them more often.  

Buildroots for failed Koji builds do get cleaned up after 4 hours (to
give someone time to diagnose the problem if desired).  However,
xenbuilder2 is also running plague, which never cleans up buildroots as
far as I know.




Re: Disk Space Issues - cvs-int/build hosts - (includes general note on Nagios Disk notifications)

2008-05-13 Thread Dennis Gilmore
On Tuesday 13 May 2008, Nigel Jones wrote:
 Hey guys,

 I seem to have deleted the nagios notification that I want to mention in
 particular, but as I noted in SOP/Nagios the %ages noted in the e-mails
 are what is left, so inode=99% doesn't mean that 99% of the inodes are
 used, it means 99% are still free.

 Anyway, what this means, is that when nagios has been complaining about
 cvs-int recently, in particular the fact that /git has reached WARNING.
 After a bit of hunting around, I found /repo/pkgs using 168GiB of the
 192GiB available, which is understandable, Fedora has got huge.

 Problem here, is that there are a LOT of old tarballs in that folder,
 which leaves me wondering if we should do a spring clean ~1 mo after
 release.
snip

We already have plans to move the lookaside cache to the netapp; at 
least, that was the last plan I was aware of.

 Diskspace isn't cheap, so I like delete old tarballs, I also like this
 option because it's not like they disappear completely, they should be
 in the src.rpm's already on archive.fp.o and if we accidentally delete 1
 or 2 that are still needed, well grab it from src.rpm...

 This leads on to my second item...

 xenbuilder2 has run out of diskspace in /, it's down to 32M, thankfully
 koji has disabled it so it's safe for now, but wouldn't it be nice to
 throw say a 50GB partition dedicated solely for /var/lib/mock 
 /mnt/build?  Yes, yes, I know money, but once again, builds are getting
 bigger so 'it'd be nice'.

xenbuilder1 and hammer2 are the oldest boxes we have in the buildsystem, 
closely followed by ppc1; xenbuilder2 is also quite an old box now. I'm 
cleaning up /var/lib/mock on xenbuilder2 now. Failed builds don't get 
cleaned up automatically; we probably should reap them more often.
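
The "reap them more often" idea could be a small cron-driven sketch like 
this. The /var/lib/mock path comes from the message above; the 
240-minute window borrows the 4-hour grace period Mike describes for 
koji elsewhere in this thread and is otherwise an assumption, not 
policy.

```shell
# reap_buildroots DIR MINUTES: remove top-level buildroot directories
# under DIR whose mtime is older than MINUTES minutes (GNU find).
reap_buildroots() {
  find "$1" -mindepth 1 -maxdepth 1 -type d -mmin +"$2" -exec rm -rf {} +
}
```

For example, `reap_buildroots /var/lib/mock 240` from cron on the plague 
builders.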

Dennis




Change Request/Notification: FAS server problems

2008-05-13 Thread Toshio Kuratomi
I just noticed that FAS was having issues and found that it was using 
over a GB of memory on fas2.fedora.phx.redhat.com. No problem: I 
restarted FAS and expected everything to be fine. Fourteen minutes 
later, memory was over a GB again. Another restart took care of the 
problem, but I looked at the logs to see what was going on.


Apparently FAS is busy enough now that it's running into the database 
connection limit we impose via SQLAlchemy, and then requests back up, 
causing the memory explosion. Since FAS is far and away our busiest TG 
app, I'm bumping the limits up so that we don't keep losing FAS service:


sqlalchemy.connection_pool: 5 -> 10
sqlalchemy.max_overflow: 21 -> 25

connection_pool is the number of DB connections each FAS server will 
hold open. Since it's busy, it makes sense to hold more connections 
open. connection_pool + max_overflow is the total number of connections 
that can be open when there are a lot of requests.
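
So the arithmetic of the change is just this (names as used above; shell 
here only to make the sums explicit):

```shell
# total connections a single FAS server may hold open = pool + overflow
old_total=$((5 + 21))     # limits before the change
new_total=$((10 + 25))    # limits after the change
echo "old=${old_total} new=${new_total}"
```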


I've made these changes and pushed them live, since the problem was 
causing FAS to time out and throw errors (which affects other services 
that authenticate through FAS as well).


-Toshio
