Re: bodhi/updates system status
On Sat, Feb 10, 2007 at 12:56:11PM +0100, Michael Schwendt wrote: > The status updates raise a few more questions without that I've tried to > install the project locally: > > * One of the screenshots, > > https://hosted.fedoraproject.org/projects/bodhi/attachment/wiki/Screenshots/bodhi-push.png > > shows a "Push Console" where a single build job's packages are pushed and > createrepo is run afterwards. A huge difference to real-world pushing is > that this single operation does not take just 3 seconds, but several > minutes for each repository. The push process takes care of anything that needs to get done (i.e. {,un}pushing, {,re}moving), based on developer requests (package signing still requires manual intervention), not just a single build job. > I fail to see how pushing packages like this can be a real-time operation > managed through a web interface. Unless a queueing server runs in the > background, which handles access to the repositories. Access for a variety > of operations, not limited to metadata creation. Add pruning, repoview, > multi-lib stuff. These operations are mutually exclusive. Right now there is no queueing server, and updates are pushed out in batches by a single person, so a real-time operation seems feasible, and has already (if half-assedly) been shown to be. Since few people actually do the pushing, I'd like to implement whatever they want to use. Going to a web site, clicking a button and waiting for a bit while being able to see real-time status updates hasn't gotten any complaints. With the current real-time status approach (python generators), it should be trivial to hook up any client to it, like a simple command-line tool to do pushes. If Jesse would rather click a button and then get an email at some point later on about the results, then that's fine too.
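The generator-based status streaming mentioned above could be sketched roughly like this (an illustrative sketch only; the function names and build names here are made up, and bodhi's real push code does far more than this):

```python
# Illustrative sketch (not bodhi's actual code): stream push status with a
# Python generator, so any client -- a web page or a command-line tool --
# can consume the same real-time updates.

def push_updates(builds):
    """Yield a status line for each step of a push, as it happens."""
    for build in builds:
        yield "Pushing %s" % build
        # ... copy packages into the updates-stage here ...
        yield "Running createrepo for %s" % build
    yield "Push complete (%d builds)" % len(builds)

def cli_client(status_stream):
    """A minimal command-line consumer; a web front-end could stream the
    same generator over HTTP instead."""
    lines = []
    for line in status_stream:
        lines.append(line)
    return lines
```

The point of the design is that the producer (the push process) never knows or cares which client is watching it.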
There is a lock on the repos that can be used to keep mutual exclusion with regard to the pushing and sustainment scripts, but I agree that a queueing service is an optimal approach. > * Currently, we push to a local master repository on the buildsys server > and sync that another machine at RDU. This requires SSH-based > authorization. What are the plans to change that within a web-based > updates system? That shouldn't change; the system is going to need to access the build results of plague anyway, so I'm wondering if it would be best to implement some sort of xmlrpc call to have plague stage the packages for us? Since plague and bodhi (among other new pieces of infrastructure) are in different co-locations, we definitely need some sort of way to communicate. SSHFS was mentioned as a potentially viable solution, along with rsync hackery; any suggestions? > * "Staging area": I've browsed the bodhi wiki pages in search for > information on how to make better use of stages in the life-time of a > build-job's results. How do you suggest we make better use of them? I'm fairly ignorant in the ways of plague, so I don't know how the current staging of a build-job's results really works. At the moment there is only a single physical updates-stage on disk. When a package is "pushed", it is copied from the build result over to this updates-stage. Before the push an update is either in 'testing', or is waiting to be signed/pushed/moved. It seems like having some of these staging hooks in plague itself might be a good idea, instead of pulling and moving packages out from under it. > Currently, we push from within a single staging area, plague's > "plague-results" tree.
Earlier attempts, like moving packages from one > stage to another, have broken the buildsys (in particular the needsign > repo) too often, because it takes time for published packages to find > their way from fedoraproject.org to download.fedora.redhat.com, and > meanwhile, any pushed package we don't keep in the needsign repo breaks > the build servers' dep-chains. The work-around, that has worked flawlessly > since then, is to mark plague-results directories as "PUSHED" with an > empty file: > > http://buildsys.fedoraproject.org/plague-results/fedora-6-extras/seedit/2.1.0-3.fc6/ > > That way we don't need to mess with the needsign repo rpms and metadata > and can keep the access read-only. With operations like "push" and > "unpush" and a testing repo it likely needs more to keep track of the > package life-cycle. What are the plans here? Well, the current implementation of bodhi starts all updates off as 'testing', and from there it can be moved to final when requested (eventually after a given number of approvals, or pushed through by the security response team). From here an update can raise a few requests: push, unpush, and move; but as far as the filesystem stage is concerned, files only get written when updates are pushed to testing or final. John Poelstra made a few diagrams of the update procedure that I'm in the process of touching up. Hopefully with these, and eventually some details on Koji, we can optimize th
Re: Account System Design Work (was Re: Infrastructure Design - Look & Feel)
On Fri, Feb 16, 2007 at 03:59:42PM -0500, seth vidal wrote: > On Fri, 2007-02-16 at 14:40 -0500, Will Woods wrote: > > Also - and may be bit off-topic - I'd love for new users to get Mugshot > > accounts along with their bugzilla/wiki/etc. stuff. Once you have > > mugshot membership for all users (and mugshot groups for each group) it > > seems like some of the RSS feed stuff would magically fall into place on > > its own. > > um, no. > > not mugshot, please. I would say hold off on the Mugshot membership until we have solid communities pre-defined and functional. Throwing new members into mugshot (as it currently is) is probably not the best idea; but I definitely see much potential in setting up groups for Fedora projects/sigs. Having groups that can be kept up to date in real time with the project activity can be very powerful, but bombarding people with real time status updates is only the first step; I feel that if we are going to take this fedora+mugshot integration seriously, we need to make it simple for people to wield mugshot to help drive fedora development. Basic example: * Community hacker wants to help with the kernel, so he joins the 'Fedora Kernel Team' on mugshot * He is then notified in real time of: - upstream news - bug activity (which he can interact with in real time (close/comment/push upstream/etc)) - code commits (with the ability to instantly grab patches, srpms, etc) - package updates (buttons to pull down rpm, say "works for me/b0rked", file bugs) * All notifications of course can then be discussed in real time with the people that are making them happen This definitely adds more cohesion into the "classic" OSS development model (IRC + mailing lists + bugzilla), and would help form sub-communities around all different aspects of Fedora. Karsten has jotted some notes about Mugzilla[0] as well.
luke [0]: http://fedoraproject.org/wiki/KarstenWade/Drafts/Mugzilla ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Infrastructure Design - Look & Feel
On Thu, Feb 15, 2007 at 09:07:25PM -0500, Máirín Duffy wrote: > - Bohdi - is this the application lmacken wrote? Where is it hosted? Is > there any way I could get access to it? Yep. It's not deployed yet, but it is going to make its way on to publictest2 shortly. In the meantime, feel free to take a look at screenshots of the new[0] implementation, as well as the old[1] system that is presently pushing out Fedora Core updates. Thanks for your help, Mo! luke [0]: https://hosted.fedoraproject.org/projects/bodhi/wiki/Screenshots [1]: http://www.fedoraproject.org/wiki/Infrastructure/UpdatesSystem/OldLifecycle
Re: Smolt deployment
On Sat, Mar 03, 2007 at 05:31:07PM -0500, seth vidal wrote: > How performant is the tg server? In the past the python webserver was > not exactly a barn burner when it came to performance. It worked, but it > didn't hold up well under heavy load. Having apache in front helps but > just like with zope, if the app is slow, the app is slow. > > any load testing done, yet? AFAIK, no load testing has been done with our TurboGears apps, but I'm definitely in favor of doing it before F7. I recall kim0 and paulobanon did some load tests in the past, but I haven't seen the results yet. Does anyone recommend any load generation tools? A presentation was given at this year's PyCon called "Scaling Python for High-Load Web Sites"[0]; I definitely recommend checking it out. I recommend that we load balance dynamic page requests from our proxy servers to our application servers, and let the proxies serve out cached static content. We definitely want to hide CherryPy behind apache, because having HTTP/1.1 and SSL support is nice, among many other benefits. Whether or not we use mod_{python,proxy,rewrite} to connect to CherryPy is up for discussion. mod_python is the fastest option; the only real downside is that it is harder to configure, and that you have to restart Apache every time you change your CherryPy code. I give a +1 for mod_python, at least until WSGI support in CherryPy solidifies. Since each application server will have its own connection pool with the db servers, increasing our scalability will simply consist of adding another Xen guest behind our load balancer. So from here we might want to look into creating a standard guest image optimized for our TurboGears Xen guests. publictest2 was running FC6 (it still might be, but as far as I can tell it seems to be down), and I'm not sure what our other TG systems are running, but I think we should be consistent.
I tend to lean towards RHEL{5,4}, which will help us get TurboGears & friends whipped into shape for EPEL. What do you all think? luke [0]: http://www.polimetrix.com/pycon/slides/
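In the absence of a dedicated load-generation tool, a crude stdlib-only stress test is easy to sketch. This is a rough illustration, not a recommendation of a specific tool; `fake_request` is a placeholder you would swap for a real urllib call against a test instance of one of our TurboGears apps:

```python
# Rough stdlib-only load generator: hammer a request function from several
# threads and report how many requests completed and how long it took.
import threading
import time

def run_load(request_fn, threads=4, requests_per_thread=25):
    """Fire requests from several threads; return (total_completed, elapsed)."""
    completed = []
    lock = threading.Lock()

    def worker():
        for _ in range(requests_per_thread):
            request_fn()
            with lock:
                completed.append(1)

    workers = [threading.Thread(target=worker) for _ in range(threads)]
    start = time.time()
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    elapsed = time.time() - start
    return len(completed), elapsed

def fake_request():
    """Stand-in for a real HTTP request to an app server."""
    pass
```

Dividing the completed count by the elapsed time gives a ballpark requests-per-second figure to compare mod_python vs. mod_proxy setups.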
Real-time project updates
Yo peeps, I created a Mugshot group for Fedora Infrastructure[0], and pointed it to the RSS feeds of the project Timelines for bodhi, koji, presto, beaker, pungi, and smolt. This way, members will get notifications on code commits, wiki changes, and ticket activity. Please feel free to add any infrastructure-related RSS feed to the group as well. Also, if you need a mugshot invite or something, drop me a line. luke [0]: http://mugshot.org/group?who=yWstkV2xGz93rQ
Re: Smolt deployment
On Mon, Mar 05, 2007 at 05:14:51PM -0800, Toshio Kuratomi wrote: > > A presentation was given at this years PyCon called "Scaling Python for > > High-Load Web Sites"[0], I definitely recommend checking it out. > > > Really cool. My reading of the talk is if our loads match up with their > sample application then we're probably okay with just a single cherrypy > instance behind apache for nearly everything. Load balancing could get > us the rest of the way for all of our "internal" apps (meaning: Apps > meant for contributors to the project rather than the Fedora Userbase.) > Of course, in your proposal, once we have one thing behind the load > balancers we should be able to put everything behind the load balancers > without too much effort. > > The wiki/plone, bugzilla and other end-user facing applications need > more than that. Unfortunately, we aren't in charge of coding those so > we don't have as many choices in terms of getting it to scale at the > moment. With moin moin, for instance, my impression is that moin > wouldn't be able to lock files if we had two instances running so we're > unable to use load balancing as an optimization. Yeah, I agree that we definitely need to work on optimizing some of our current software; I mean, seriously, have you tried saving a Wiki page lately ? > > I recommend that we load balance dynamic page requests from our proxy > > servers to our application servers, and let the proxies serve out cached > > static content. We definitely want to hide hide CherryPy behind apache, > > because having HTTP/1.1 and SSL support is nice, among many other > > benefits. Whether or not we use mod_{python,proxy,rewrite} to connect to > > CherryPy is up for discussion. mod_python is the fastest option, and the > > only downfall really is that it is harder to configure, and that you have to > > restart Apache every time you cange your CherryPy code. I give a +1 for > > mod_python, at least until WSGI support in CherryPy solidifies. 
> > > It appears that TG + mod_python is very slow ATM:: > http://tinyurl.com/3xyznr Interesting. To get a better idea of the performance of the TurboGears stack in our infrastructure, I think it would be extremely valuable to perform some stress tests before F7. This way, we can know for sure the best options for our needs, with regard to: o Apache mod_{rewrite,python,proxy} o SQL{Object,Alchemy} o Xen instances vs. CherryPy instances If anyone is interested in heading this up (as my stress-testing-fu is weak), I would definitely be willing to help out. > > Since each application server will have its own connection pool with the > > db servers, increasing our scalability will simply consist of adding > > another Xen guest behind our load balancer. > > > Why do we even need to add Xen guests? From the pycon talk it looked > like just adding additional cherrypy servers would increase our ability > to serve more pages. True. > We'd want to run benchmarks to see but I'd suspect that having one guest > with five cherrypy instances that we load balance between will give us > more bang for the resources used than five guests on the same Xen host > running one cherrypy server apiece. Yeah, I think that benchmarking this will yield extremely useful data that would benefit many. > Additional guests could enhance reliability, though. If our load > balancer detects whether a guest has stopped responding and serves > requests to the other guests that are running the cherrypy servers, we > could take a guest down for maintenance and then return it to the pool > without interrupting service. Having them on separate Xen hosts would > mean we could lose a physical machine and still survive (at half > capacity). Yep, this will help mitigate much suffering on our end :) > > So from here we might want to look into creating a standard guest image > > optimized > > for our TurboGears Xen guests. 
publictest2 was running FC6 (it still might > > be, > > but as far as I can tell it seems to be down), and I'm not sure what our > > other TG systems are running, but I think we should be consistent. I tend > > to > > lean towards RHEL{5,4}, which will help us get TurboGears & friends whipped > > into > > shape for EPEL > > > RHEL4 would be python2.3. RHEL5 is python2.4 like FC6. F7 will be > python2.5 > > python2.4 has decorators which TG makes heavy use of so I think we want > to have at least that version. It'll feel constraining to run 2.5 for > local development on our home machines with Fedora7+ but having to > develop for python 2.4 because that's what comes with RHEL5 (Unified > try:except:finally and ternary operators being the features I'll miss > the most) but I suspect that's a tradeoff that we'll want to make so we > aren't upgrading every six months. I have yet to start utilizing any Python 2.5 features in my code, so I'm not really partial either way. luke __
DNS audit
Peter van der Does will be conducting a DNS security audit next week; please speak up if this is going to be a problem for anyone. ns1.fedoraproject.org : Tuesday - 10 AM EST dns1.j2solutions.net : Wednesday - 10 AM EST The results will eventually make their way onto the InfrastructurePrivate wiki. luke
Re: DNS audit
On Thu, Apr 12, 2007 at 03:18:25PM -0400, Peter van der Does wrote: > Yep, that's the only thing left. After the scan results are in I'll > write them up in a similar report. This report will go, as requested, to > Luke only. Please send them to {mmcgrath,[EMAIL PROTECTED] as well; I just wanted to make sure that the results didn't end up on this list first :) luke
Re: Set Reply-to Header for mailing list
On Tue, Apr 17, 2007 at 01:58:48PM -0400, Wilmer Jaramillo M. wrote: > Hi, I have a suggestion, because don't is set the "Reply-to" header > for fedora-infrastructure-list@redhat.com on mailing list?. When I > answer a mail, only one _should_ receive the mail, but all of the > internal list possibly does not, should use the header to redirect > replies to messages into to mailing list. Done. luke
update announcement modifications
I modified our update announcement template to show Bugs and CVE's. Below is an example of what it looks like. Let me know if you have any comments/suggestions for improvements (nitpicking welcome). luke - - - Subject: [SECURITY] Fedora Core 6 Test Update: mutt-1.4.2.2-5.fc6 Fedora Test Update Notification FEDORA-2007-0001 2007-04-25 12:57:27 Name: mutt Product : Fedora Core 6 Version : 1.4.2.2 Release : 5.fc6 Summary : A text mode mail user agent. Description : Mutt is a text-mode mail user agent. Mutt supports color, threading, arbitrary key remapping, and a lot of customization. You should install mutt if you have used it in the past and you prefer it, or if you are new to mail programs and have not decided which one you are going to use. Update Information: blah blah blah ChangeLog: * Wed Dec 6 2006 Miroslav Lichvar <[EMAIL PROTECTED]> 5:1.4.2.2-5 - use correct fcc folder with IMAP (#217469) - don't require smtpdaemon, gettext References: Bug #123 - https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=123 Bug #1234 - https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=1234 CVE-2007-0001 - http://www.cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2007-0001 This update can be downloaded from: http://download.fedoraproject.org/pub/fedora/linux/core/updates/testing/6/ 15849ed63fdb68096748a08b2e4d3d33855e2b80 ppc/mutt-1.4.2.2-5.fc6.ppc.rpm c3287aeb9793072f770e4cae6344c9bb3adfdd36 ppc/debug/mutt-debuginfo-1.4.2.2-5.fc6.ppc.rpm 6d34ee31084b8c59416d27716c3cd1cccb4cbeb4 x86_64/mutt-1.4.2.2-5.fc6.x86_64.rpm 77e92e4b85922a71432b584eb13b6db1b716f17a x86_64/debug/mutt-debuginfo-1.4.2.2-5.fc6.x86_64.rpm ff49f9237c5563d50598a9cd6d2efe38c56be2fb i386/mutt-1.4.2.2-5.fc6.i386.rpm a5ee13f13fb37d4b1ecf0ec424268510c0cd06d2 i386/debug/mutt-debuginfo-1.4.2.2-5.fc6.i386.rpm c3e2582d8d703a3b03d779056fede294e85db497 SRPMS/mutt-1.4.2.2-5.fc6.src.rpm This update can be installed with the 'yum' update program. Use 'yum update package-name' at the command line. 
For more information, refer to 'Managing Software with yum,' available at http://fedora.redhat.com/docs/yum/.
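The per-file checksum lines in a template like the one above could be generated along these lines (an illustrative sketch; the real announcement script's interface surely differs, and the filename below is made up):

```python
# Sketch: produce "sha1sum  filename" lines like the announcement body.
import hashlib

def checksum_lines(packages):
    """packages: list of (filename, file_bytes) pairs.
    Returns one 'sha1hex filename' line per package."""
    lines = []
    for name, data in packages:
        lines.append("%s %s" % (hashlib.sha1(data).hexdigest(), name))
    return lines
```

In practice you would read each rpm from the staging tree in chunks rather than holding the whole file in memory.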
Re: Fedora 7 Launch
On Fri, May 11, 2007 at 09:51:42AM -0500, Mike McGrath wrote: > We've got a lot of prep work to do before Fedora 7 launches. I'd like to > compile a list so if anything is missing let me know: > > [...] > > What am I missing? Bodhi. I have a test instance running on publictest2[0] that people can play around with at the moment. Here is what is on my list of priority tasks that need to get done before F7: o Deployment. Get bodhi configs into puppet, and make sure ssl/static files/etc are working from admin.fp.org/updates o Make sure all EPEL guidelines[1] are met o Multilib for F7. For previous releases there was a huge biarch-list-of-DOOM, which I have since imported into bodhi's database. However, there is no such list for F7, so we will most likely need to do things differently. notting mentioned somehow doing it on the fly, or by having bodhi tag things in koji and then using mash to build the entire updates repo. o Repo cleaner. We need something like RepoPrune.py/fedora-updates-clean to run occasionally and clean up our tree. I'll make sure our Bodhi Timeline[2] is up to date today. I'd like to get 1.0 deployed this weekend, and ideally hit 1.1 before F7. Comments/suggestions/help welcome :) luke [0]: http://publictest2.fedora.redhat.com [1]: http://fedoraproject.org/wiki/EPEL/GuidelinesAndPolicies/PackageMaintenanceAndUpdates [2]: https://hosted.fedoraproject.org/projects/bodhi/roadmap ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
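The repo-cleaner item above could start from something like this rough sketch, in the spirit of RepoPrune.py / fedora-updates-clean but not either tool's actual logic. Note that real pruning must compare versions with rpm's epoch:version:release rules; plain tuples here are a simplifying stand-in:

```python
# Sketch of a pruning pass: keep only the newest build(s) of each package
# and return the rest as candidates for removal from the tree.

def prune(builds, keep=1):
    """builds: list of (name, version_tuple) pairs, e.g. ("mutt", (1, 4, 2)).
    Returns the builds that should be deleted."""
    by_name = {}
    for name, version in builds:
        by_name.setdefault(name, []).append(version)
    doomed = []
    for name, versions in by_name.items():
        versions.sort()
        # Everything older than the newest `keep` versions is doomed.
        for old in versions[:-keep]:
            doomed.append((name, old))
    return doomed
```

A cron job (or the TurboGears scheduler) would then unlink the doomed packages and re-run createrepo.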
Re: Errors when retrieving mirrorlist for updates repository
On Sat, May 12, 2007 at 08:58:24PM -1000, Julian Yap wrote: > Hi, > > I get this error when running 'yum update'. [...] > Is it a larger issue? It looks like mod_python was to blame. [Sun May 13 01:24:03 2007] [error] [client 10.8.32.55] PythonHandler return_mirrorlist: Traceback (most recent call last): [Sun May 13 01:24:03 2007] [error] [client 10.8.32.55] PythonHandler return_mirrorlist: File "/usr/lib64/python2.4/site-packages/mod_python/apache.py", line 287, in HandlerDispatch\nlog=debug) [Sun May 13 01:24:03 2007] [error] [client 10.8.32.55] PythonHandler return_mirrorlist: File "/usr/lib64/python2.4/site-packages/mod_python/apache.py", line 461, in import_module\nf, p, d = imp.find_module(parts[i], path) [Sun May 13 01:24:03 2007] [error] [client 10.8.32.55] PythonHandler return_mirrorlist: ImportError: No module named return_mirrorlist Restarting httpd seemed to have fixed it. I also noticed this SELinux audit message: audit(1179014650.637:1171): avc: denied { getattr } for pid=21393 comm="httpd" name="mirrorlist_cache.pkl" dev=xvda3 ino=129681 scontext=user_u:system_r:httpd_t:s0 tcontext=user_u:object_r:tmp_t:s0 tclass=file I wonder if SELinux is to blame for some of the erratic TurboGears behavior that we have been experiencing? luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: the mechanics of pushing updates
On Thu, May 24, 2007 at 01:02:35AM -0400, Bill Nottingham wrote: > Mmm, plumbing. bodhi is heading for production soon. To push updates, what > bodhi currently does is, for any update: > > - sign the package Nope, neither bodhi nor the current update system signs any packages. The current system mails releng with the proper command to sign the packages, but it has always been done by hand as far as I know. You and Jesse are the only people I know of that have signed updates. By the looks of the fedora-release-tools module, there are two scripts that have been used to sign packages, ftsign and fedorasign, both of which call /usr/local/bin/rpm-4.1-sign, which is a symlink to /usr/lib/rpm/rpmk. I started implementing an XMLRPC server for bodhi so we can eventually do everything from the command line as well as from the browser. Hopefully we can streamline the signing process as much as possible in a command-line sign/push tool, until it can be fully automated with a signing server (when might this happen, btw?). Koji keeps a sigcache for each package in pkg/ver/rel/data/sigcache/arch/, although I have no idea at what point in the build process this gets created. I'm also under the impression that just having this detached signature isn't enough, and that there still must be some manual intervention? Is there anything else bodhi needs to do other than make sure the corresponding .sig exists for each package? > - copy the package to a 'staging' tree of the entirety of updates > - read a static list of packages that should be multilib, act on that This isn't as bad as the previous biarch-list-of-doom[0] anymore. Bodhi imports it into its Multilib database table[1] during initialization, and doesn't deal with it again. Upon submission of an update, bodhi builds the list of associated packages, taking care of multilib based on what's in the db. The multilib table can then be modified with ease using the TurboGears CatWalk database editor, or a simple command-line tool.
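The ".sig exists for each package" check mentioned above could be sketched like this. The flat `package.rpm.sig` layout is an assumption for illustration, not koji's actual sigcache path scheme, and `sig_exists` is injectable so the policy can be exercised without a real tree:

```python
# Sketch: before a push, find packages that have no detached signature.
import os

def unsigned_packages(rpm_paths, sig_exists=os.path.exists):
    """Return the rpms that lack a corresponding .sig file."""
    return [p for p in rpm_paths if not sig_exists(p + ".sig")]
```

A push tool would refuse to continue (or mail releng) if this returns anything.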
> - run createrepo > - check deps on the repo FIXME: I need to track down some false positives in bodhi's closure.py[2] (or rewrite it). Mash would obviously resolve this for us. > - rsync the whole repo out TODO. At the moment bodhi stages to /mnt/koji/updates-stage -- where are we going to sync this to? wallace still? > Older updates are cleaned by a cron script later. TODO. We need something similar to the fedora-updates-clean script that is currently in place (but less hackish), or RepoPrune / repomanage. The TurboGears scheduler[3] is probably a good place for this. I'm going to try and find some time tonight to throw one together. > Advantages of this approach: > - it's simple > - it's easy to clean up things that Go Wrong (just manually remove them > from the repo and re-sync) This also gives bodhi a LOT more control over the repos, as it maintains the extended updateinfo.xml.gz in the repodata as well. If we use mash we will have to maintain this file outside of the tree and re-insert it post-compose. > Disadvantages: > - multilib. In a world where we continually add new packages, this > *will not scale*. Random idea. Since multilib is handled by mash, which pushes out rawhide nightly, couldn't we just have mash keep the Multilib table up to date? Would this solve the scalability issue wrt new packages? > So, we need at least *some* sort of better workflow. > > One alternative - using mash (what we're using to build rawhide.)
It > would go something like this: > > - sign the package > - tag the package (for updates-testing, or updates) > - run mash to create a repo of updates/updates-testing, solve it for > multilib > - rsync it out > > Advantages: > - solves multilib > - doesn't require continually keeping a staging tree around > - depcheck is built in when solving multilib > - builds on koji tags to let anyone easily query what updates are > released > > Disadvantages: > - by rebuilding the repo each time, it's going to be slow once > the repo gets large > - harder to clear out other strangeness > - will only have one version of each updated package > > The last of these isn't as *big* of a concern now, as all builds > will be available through the koji web site, space permitting. > > Other ideas for better workflow? What do the extras push scripts do? > Do we want to add a modified version of mash's multilib solver into > bodhi? I think the mash idea is interesting. Although, due to its overhead, we would probably have to resort to pushing out a single batch of updates a day, and maybe some smaller batches of security updates. This might become a pain. I'm going to look around at the multilib solver for mash and extras tonight and see if bodhi can steal any of it. Michael Schwendt would probably be the expert in the extras world. luke [0]: https://hosted.fedoraproject.org/projects/bodhi/browser/bodhi/deprecated/biarch.py [1]: https://hosted.fedoraproject.org/projects/bodhi/browser/bodhi/mo
bodhi
So we're less than a week away from F7, so why not completely change the way updates are pushed? :) We're going to use mash[0] to compose our updates repo instead of managing it by hand. This removes the burden of multilib, repo-cleaning, and dep closure checking from bodhi. This means that we need to change the push process to be something like: - Move all submitted builds from dist-f7-updates-candidate to dist-f7-updates-testing in Koji - Run mash - Add/remove appropriate updates from updateinfo.xml and insert it into all of the repodata - Sync out to wallace, which will sync to the mirrors In theory, this should do the trick. The roadmap[1] to 1.0 should be fairly accurate now. So what we have left, aside from the new push process mentioned above, is: - ACLs. We need to make sure that all updates are submitted by the appropriate {,co-}maintainers. - Package signing stuff. Jesse pointed me to the sign_unsigned tool[2] that we could potentially integrate with to help do this. I won't be able to start hacking on this until monday, as I am graduating tomorrow and then moving on Sunday, so any help would be appreciated :) luke [0]: http://git.fedoraproject.org/?p=hosted/mash;a=summary [1]: https://hosted.fedoraproject.org/projects/bodhi/roadmap [2]: http://git.fedoraproject.org/?p=fedora/releng;a=blob_plain;f=scripts/sign_unsigned.py;hb=9b1b7f1b70976af053c155fe7374dd47b5698da4 ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
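The "add/remove appropriate updates from updateinfo.xml" step above could be sketched like this. The element names loosely follow the updateinfo format, but bodhi's real serializer certainly carries much more (references, packages, issued dates, etc.), so treat this only as a shape illustration:

```python
# Sketch: build a minimal updateinfo.xml document from a list of updates.
from xml.etree import ElementTree as ET

def build_updateinfo(updates):
    """updates: list of dicts with 'id', 'title', and 'type' keys.
    Returns the serialized XML as bytes."""
    root = ET.Element("updates")
    for u in updates:
        update = ET.SubElement(root, "update", type=u["type"])
        ET.SubElement(update, "id").text = u["id"]
        ET.SubElement(update, "title").text = u["title"]
    return ET.tostring(root)
```

The resulting document would then be gzipped and inserted into each arch's repodata after mash finishes.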
0-day bodhi
So, after about 20 hours of straight hacking, bodhi seems to be in fairly good shape at the moment (after rewriting/gutting most of it in the past 2 days). The first instance was/is deployed to app3, but was only able to take submissions as it cannot write to /mnt/koji. So I deployed bodhi on app5 and did a bunch of initial testing in a development environment with a local sqlite db. Everything seemed to work great ("everything" meaning mashing the repos and generating/sending update announcements. Other stuff like updateinfo.xml generation will have to be redesigned and reimplemented). I changed the proxy config to point to app5, so hopefully that should propagate shortly and transparently switch over. So I threw together a Masher[0] for bodhi that should allow releng to queue up pushes as they please, and it will churn them out to /mnt/koji/mash/updates/f7-updates{,-testing}-YYMMDD.HHSS, and then symlink it to /mnt/koji/mash/updates/f7-updates{,-testing} when complete. From here an hourly sync script (that may or may not exist yet) will pick it up and it will eventually make it out to the mirrors. From bodhi's end, it should be able to crunch out updates repos and send email notices around just fine. Some critical stuff that we need ASAP: o Access control. We need to make sure that only {,co-}maintainers can submit/modify/push their packages. It would be nice to be able to do this by calling the pkgdb or koji, but the pkgdb doesn't have the API, and koji doesn't know about co-maintainers. Worst-case scenario is we parse the owners.list ourselves. o Bodhi needs a client cert (instead of using mine) o Ability to submit/modify multiple updates at once. See my post on fedora-maintainers[1] o XML-RPC API and bodhi-client tool, for doing stuff from the command-line. As shiny as bodhi is, I'd personally rather stay out of firefox as much as possible. o updateinfo.xml.gz integration.
The old-style updates pushing would insert/remove the extended metadata on the fly (and move it out of the way when running createrepo, then shove it back in). I'm thinking it would be fairly simple to iterate over the mashed repo and create/insert this metadata on the fly. I'm not sure how intensive of a process this will be, so we'll just have to try it and find out. That's all I can think of at the moment.. anyone have anything else that is a top priority for bodhi ? luke [0]: https://hosted.fedoraproject.org/projects/bodhi/browser/bodhi/masher.py [1]: https://www.redhat.com/archives/fedora-maintainers/2007-May/msg01034.html ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: 0-day bodhi
On Thu, May 31, 2007 at 08:52:11AM -0400, Jesse Keating wrote: > On Thursday 31 May 2007 08:11:07 Luke Macken wrote: > > That's all I can think of at the moment.. anyone have anything else that > > is a top priority for bodhi ? > > A way to see if there are broken deps in the new update set and the ability > to > say "no, don't push it" Bodhi's Masher handles rolling back all of the build tags when mash fails. So, if mash errors out on broken deps, bodhi will drop a /mnt/koji/mash/updates/mash-failed-YYMMDD.HHMM file with the output. Later today I'll throw together a web interface to check the status of the Masher and view the mash results and such. > Is there an try/except when trying to grab a signed package that isn't signed > yet? Will it just traceback? I enabled 'strict_keys' in bodhi's mash.conf, so the compose should fail if anything is unsigned. luke
Re: new python-fedora
On Fri, Jun 01, 2007 at 08:34:34PM -0700, Toshio Kuratomi wrote: > After looking at a traceback from Bodhi, I put together a new > python-fedora to hopefully address that. It's available from my home > directory on bastion: > /home/fedora/toshio/python-fedora-0.2.90.8-1.noarch.rpm > > I've only tested this on the pkgdb test machines so if you still have a > test instance, try it out there before loading it onto the main > instances of mirrormanager, Bodhi, etc. Seems to be working fine for me. I've been testing it for the past couple of days in my test instance, and deployed it tonight on app5 for bodhi. We should probably find a home for this thing in git/hg/whatev. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Fedora 7 Upgrade
On Thu, Jun 07, 2007 at 12:19:01PM -0500, Mike McGrath wrote: > Toshio Kuratomi wrote: > > Depends... > > > > What is the reason for those boxes to be on Fedora instead of RHEL? If > > it's because FC6 had things not in RHEL4 maybe we can make a switch to > > RHEL5 at some point. If the reason is we want the flexibility of moving > > to the latest code along with Fedora instead of waiting for RHEL to > > upgrade/backport then perhaps they should be on the latest Fedora where > > the latest code is more likely to land sooner. > > > > In the case of the app servers running TurboGears, I think that it was a > > lack of TG requirements on RHEL4 that held us back. I also think we're > > close to having TG for RHEL5 so we might want to move test servers to > > RHEL5, check that the applications run there, and then move the app > > servers handling those to RHEL. > > > > This is very true as well, I've not tested the TG status in RHEL5. Has > anyone else? Luke? Looks like we're just waiting on python-kid[0]. luke [0]: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=239134 ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
TurboGears EL-5
The TurboGears stack is ready for RHEL5! Installable via `yum install TurboGears` from the EL-5 repo[0]. luke [0]: http://download.fedoraproject.org/pub/epel/5/ ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Fedora 7 Update Information for general users
On Thu, Jun 28, 2007 at 10:41:21AM -0400, Jesse Keating wrote: > On Thursday 28 June 2007 10:39:07 Rahul Sundaram wrote: > > Thomas Chung wrote: > > > First all, thank you for Fedora Update System. > > > It's wonderful to see that we finally have something we always wanted. > > > > > > While looking at the list of Stable Updates for Fedora 7[1], it gives > > > me an idea that this could replace our static Fedora Security > > > Advisories and Package Updates for Fedora 7[1] on our project wiki. > > > > > > What do you think? Could we open this system up so any general users > > > can see this information anonymously without logging-in with Fedora > > > Account System? > > > > Can anyone kindly answer this? It would save a lot of manual effort. > > Thanks. > > > > Rahul > > Filing an RFE in Trac is better than in a mailing list. I don't think > anybody > sees anything wrong with an anonymous view of the updates system, there is > just a lot more important stuff to get done first. https://hosted.fedoraproject.org/projects/bodhi/ticket/43 Shouldn't be very hard to implement; I think we can probably get away with simply wrapping the update actions and sidebar with I've been trying to get a huge bodhi upgrade out the door, but keep finding myself just trying to squeeze in "just one more feature". I'll get RSS feeds and anonymous access working after I get this other stuff released. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Infrastructure SCM
On Mon, Jul 02, 2007 at 09:13:10AM -0500, Mike McGrath wrote: > starting this up again. Since we want it to be distributed we're left with > either git or mercurial. Can I take a non-binding vote from the people on > this list as to a preference on each? Remember, our needs in > Infrastructure are really pretty simple, so at a glance. What do you guys > think? > > > git: > > mercurial: I don't care either way. Mercurial has been treating me well for bodhi development, but I've also been interested in learning git. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: The asterisk paradox
On Fri, Aug 10, 2007 at 10:48:01AM -0600, Jonathan Steffan wrote: > Should we look at testing speech to text? I mentioned using sphinx[0] yesterday on #fedora-admin, kind of as a joke -- but it may be worth a shot? Once we are able to save conferences, then it should be pretty easy to test this stuff out. luke [0]: http://cmusphinx.sourceforge.net
ssl on publictest dev apps
There are a handful of development instances of various apps running on our publictest systems, most of which don't support SSL. This is obviously not a good thing, so I'm proposing that we either enable SSL on these apps, or disable the FAS identity provider and provide local guest accounts on the local SQLite dbs. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
sobby instance
Hey guys, I set up an instance of the stand-alone gobby server on publictest2, per the gobby RFR[0]. This will allow anyone to collaborate in real-time on any text: spec files, code, notes, ideas, etc. You can test it out by installing 'gobby' and connecting to publictest2.fedora.redhat.com. If we want to commit to hosting this service, there is a bunch of stuff that we need to do and test first, but I'm just looking to get some initial feedback from you guys on it. Hopefully we can get a bunch of use out of it during the Virtual FUDCon this week. luke [0]: http://fedoraproject.org/wiki/Infrastructure/RFR/gobby
Re: Puppet Training!
On Thu, Aug 23, 2007 at 09:14:30PM +0200, Jeroen van Meeuwen wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA1 > > Toshio Kuratomi wrote: > > Mike McGrath wrote: > >> Please let me know which of these times work best for you: > > > >> Monday August 27th at 15:00 UTC > >> Monday August 27th at 20:00 UTC > >> Wednesday August 29th at 20:00 UTC > > > >> Right now I'm scheduling it for the 27th at 20:00 UTC unless people > >> can't make that time. > > > > Aug 27th at 20:00 UTC works well for me as well. > > > > +1 +1 ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
TurboMail patch
So bodhi has recently been having some issues with TurboMail, which seems to stop dispatching mid-push. Bodhi still 'turbomail.enqueue's the messages, but the worker threads seem to be dead. I created a new package with a patch from upstream Ticket #17 (http://trac.orianagroup.com/turbomail/ticket/17/). http://lmacken.fedorapeople.org/rpms/python-TurboMail-2.0.4-3.fc7.noarch.rpm I installed this package on releng1, so we'll see how it works during the next updates push. More testing is welcome :) luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: TurboMail patch
On Thu, Sep 20, 2007 at 02:39:54PM -0400, Luke Macken wrote: > So bodhi has recently been having some issues with TurboMail, which seems to > stop dispatching mid-push. Bodhi still 'turbomail.enqueue's the messages, but > the worker threads seem to be dead. > > I created a new package with a patch from upstream Ticket #17 > (http://trac.orianagroup.com/turbomail/ticket/17/). > > > http://lmacken.fedorapeople.org/rpms/python-TurboMail-2.0.4-3.fc7.noarch.rpm > > I installed this package on releng1, so we'll see how it works during the next > updates push. More testing is welcome :) This patch actually helped bring the bug to the surface, instead of the silent death that we were used to. The only problem is that upon error, it infinitely attempts to re-enqueue the mail -- which is not the best action IMO (thus, I probably won't be patching Fedora's TurboMail with this unless it can be re-worked a bit). Thankfully, the error was simple to fix:

SMTPRecipientsRefused: {u'bodhi': (550, '5.1.1 ... User unknown')}

I had previously fixed this issue in my development branch, so I killed last night's push, and will be doing a bodhi upgrade today to pull in a bunch of bugfixes and database changes. After that, I will resume the push. luke
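The "re-worked a bit" above would amount to bounding the re-enqueue. An illustrative sketch only -- this is not TurboMail's actual internals, and the names and retry limit are made up:

```python
import queue

MAX_RETRIES = 3

def dispatch(q, send, failed):
    """Drain the queue of (message, attempts) pairs; retry each failed
    message a bounded number of times instead of forever."""
    while not q.empty():
        message, attempts = q.get()
        try:
            send(message)
        except Exception:
            if attempts + 1 < MAX_RETRIES:
                q.put((message, attempts + 1))  # bounded re-enqueue
            else:
                failed.append(message)          # give up; log, don't loop
```

With something like this, a permanent error such as the SMTPRecipientsRefused above would land in a dead-letter list after a few attempts rather than spinning the worker thread forever.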
Re: Missing kernel update (with security fix!)
On Wed, Sep 26, 2007 at 02:35:24PM +0300, Axel Thimm wrote: > Hi, > > a kernel rpm with a security fix has been pushed 24h ago by bodhi, but > no mirror has it yet, all donwloadX show the old kernel but a couple > that give connection refused. It'll get pulled into the next mash (which I assume will be at some point today). luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: TurboMail suddenly stops dispatching
It looks like the TurboMail author is MIA, and their trac seems to be riddled with spam. Upstream could definitely use some help with things. It's important that we make sure this project does not get neglected, as we extensively use it for various pieces of infrastructure that we have deployed (bodhi, pkgdb, etc). Thankfully, Felix will be helping to improve TurboMail's error handling in the near future; so if anyone is interested in helping out -- drop him a line. luke - Forwarded message from Felix Schwarz <[EMAIL PROTECTED]> - From: Felix Schwarz <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] Subject: Re: TurboMail suddenly stops dispatching Date: Wed, 10 Oct 2007 20:01:10 +0200 User-Agent: Thunderbird 2.0.0.5 (X11/20070727) Hi Luke, http://trac.orianagroup.com/turbomail/ticket/59 > I think TurboMail should do much more in regards of error handling. I > mailed > some proposals to Matt but got no answer so far. I definitely do not want > that a single failure will cause a mail to be silently dropped but I agree > that there should be a point where we just have to drop mails. I'm > currently > on a paid project where I definitely need a more robust error handling so > I'm > willing to spend some time on that issue in the next months. > If someone is interested in joining the party, please send a mail to felix > dot schwarz (at) schwarz dot eu. -- Felix Schwarz software development and Linux system administration main focus: secure database applications Gubener Str. 38 10243 Berlin Germany - End forwarded message - ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: VCS choice status
On Thu, Oct 11, 2007 at 10:31:08AM -0500, Jeffrey C. Ollie wrote: > My personal choice would be to switch to Git for the VCS but keep the > repository data the same (spec file plus patches). I feel that > switching to expanded source-style repositories is too radical of a > change - we give up the notion of pristine source plus patches. Also, > using an expanded source-style repository would mean that packagers > would have to become much more familiar with the VCS since they would > need to maintain various branches (vendor branch, branches for various > patches). +1. ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
intermittent fedora web service outages
Mike is in the process of tracking down some issues with our database at the moment, so there may be some intermittent outages with some of our web services, including: - bodhi - koji - pkgdb - mirrormanager - accounts system Thanks, luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Php why must your apps suck so?
On Wed, Oct 24, 2007 at 08:39:48AM -0700, Toshio Kuratomi wrote: > Jeffrey C. Ollie wrote: >> On Wed, 2007-10-24 at 11:22 +0200, Mirko Klinner wrote: >>> Hi friends of good infrastructure, >>> >>> 2007/10/23, Mike McGrath <[EMAIL PROTECTED]>: >>> Ok, so there's a ticket for a new news site, >>> >>> https://hosted.fedoraproject.org/projects/fedora-infrastructure/ticket/178 >>> Fact: PHP apps have a poor track record. >>> Fact: There doesn't appear to be any viable Python CMS's >>> >>> After reading the requirements stated in the ticket I wonder if it >>> wouldn't be >>> the best idea to implement a little application like that in >>> TurboGears ? I work with TurboGears on daily basis, so I guess it would >>> take about >>> 4 weeks >>> to develop such a software, followed by another 8 weeks to test it and >>> bring it >>> to live. >>> What do you think about that ? >> Not that I doubt your coding skills, but that would be another large set >> of unique code that Fedora Infrastructure would need to maintain. By >> using an existing CMS system FI gains from the experiences of everyone >> else that runs the CMS we choose. Plus by choosing an existing CMS we >> can spend those 4-8 weeks working on content and making a Fedora theme. > Not to mention there are python CMS's... including a TurboGears one. We > could look into whether those are close to meeting our needs and propose > patches to upstream instead of completely reinventing the wheel. These are > the ones I found yesterday that appear to be in a usable state and active > upstream: > > http://www.pylucid.org/ -- DJango based > http://www.turtolcms.org/ > > http://www.pagodacms.org/ -- TurboGears based and has received a bit of > press but hasn't made a release yet. Needs evaluation to see if it's ready > enough for our usage Wow. I just finished watching the Pagoda screencast -- very impressive stuff. I definitely recommend taking a look. It's designed to drop right into any existing TurboGears project. 
It handles revision reviews, publication schedules, powerful plugins, wiki-like features, and all of the "web2.0" hotness you could ask for. However, it's still very immature, and parts of the screencast were actually mockups.. but it definitely has potential. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
yum depsolver patch for releng1
As most are probably aware, we are hitting a bug[0] in the yum depsolver when trying to mash dist-f7-updates-testing. Tim Lauridsen wrote a patch[1] that seems to fix the problem and allows us to compose updates-testing again (W!). Tim sent the patch to yum-devel for review, but in the meantime I'd be happy to apply this to the yum on releng1 so we can get these repos back under testing. What do you guys think? luke [0]: https://bugzilla.redhat.com/show_bug.cgi?id=360291 [1]: https://bugzilla.redhat.com/show_bug.cgi?id=360291#c24
Re: yum depsolver patch for releng1
On Wed, Nov 07, 2007 at 04:20:26PM -0500, Luke Macken wrote: > As most are probably aware, we are hitting a bug[0] in the yum depsolver > when trying to mash dist-f7-updates-testing. Tim Lauridsen wrote a > patch[1] that seems to fix the problem and allows us to compose > updates-testing again (W!). > > Tim sent the patch to yum-devel for review, but in the mean time I'd be > happy to apply this to the yum on releng1 so we can get these repos > back under testing. What do you guys think? Also worth noting that this patch does not break any of yum's unit tests:

python test/alltests.py
.
----------------------------------------------------------------------
Ran 121 tests in 0.286s

OK

luke
Re: yum depsolver patch for releng1
On Wed, Nov 07, 2007 at 04:20:26PM -0500, Luke Macken wrote: > As most are probably aware, we are hitting a bug[0] in the yum depsolver > when trying to mash dist-f7-updates-testing. Tim Lauridsen wrote a > patch[1] that seems to fix the problem and allows us to compose > updates-testing again (W!). > > Tim sent the patch to yum-devel for review, but in the mean time I'd be > happy to apply this to the yum on releng1 so we can get these repos > back under testing. What do you guys think? Both Jeremy and Florian approved the patch, so I went ahead and patched yum on releng1 and mashed f7-updates-testing. So we should be back in action, after 16 days of updates-testing downtime. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
comps corruption.. again.
https://bugzilla.redhat.com/show_bug.cgi?id=374801 Once again, the checksums match up when they hit /mnt/koji/mash/updates, but from there they get synced out to our mirrors corrupted. We may be experiencing some corruption on the master mirror, but I don't believe I have the access to verify. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
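A quick way to chase this kind of corruption is to checksum the file at each hop (build box, master mirror, public mirror) and see where the digests diverge. A generic sketch, not an existing script:

```python
import hashlib

def sha1sum(path, bufsize=1 << 20):
    """Checksum a file the same way we'd verify comps.xml at each hop."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while True:
            block = f.read(bufsize)
            if not block:
                break
            h.update(block)
    return h.hexdigest()
```

Comparing `sha1sum("/mnt/koji/mash/updates/.../comps.xml")` against the digest of the copy fetched from a mirror pins down which sync stage is mangling it.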
Re: lockbox daily email 6MB?
On Wed, Nov 21, 2007 at 12:53:11PM -0500, seth vidal wrote: > > On Wed, 2007-11-21 at 11:52 -0600, [EMAIL PROTECTED] wrote: > > For those receiving this daily 6MB email from lockbox, is it really > > necessary? :-) > > > > it's on the list to be nuked from orbit by trimming out the crap. We're planning to redesign the way we handle this stuff. This will probably entail adding some new feature to epylog. https://hosted.fedoraproject.org/projects/fedora-infrastructure/ticket/226 luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Restart TG apps for high mem-usage
On Sun, Nov 25, 2007 at 01:00:53PM -0800, Toshio Kuratomi wrote: > Here's a short script to test our TG apps run via supervisor for excessive > memory usage and restart them if necessary. We could run this via cron in > alternate hours on each app server. Does this seem like a good or bad idea > to people? Probably not a bad idea; I think koji does something similar with apache. However, I don't think we need this for bodhi, at least for the moment. The only time bodhi's memory usage jumps is when it's pushing updates -- so if we were to use this script for bodhi, it would have to check if it is currently running mash. But for now, I'm not sure that it is necessary seeing as how most of the time puppetd eats more memory than bodhi. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
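The "check if it is currently running mash" guard could be bolted onto such a script like this (a sketch under assumptions: the threshold and function names are invented, not part of the actual supervisor script):

```python
LIMIT_KB = 500 * 1024  # assumed threshold: restart anything over ~500 MB

def rss_kb(pid):
    """Resident set size of a process in kB, read from /proc."""
    with open("/proc/%d/status" % pid) as status:
        for line in status:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

def should_restart(pid, mash_running):
    """Never bounce bodhi mid-push; otherwise restart on excess memory."""
    if mash_running:
        return False
    return rss_kb(pid) > LIMIT_KB
```

The `mash_running` flag could come from checking the repo lock that mash holds during a push.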
Re: Restart TG apps for high mem-usage
On Mon, Nov 26, 2007 at 09:59:44AM -0800, Toshio Kuratomi wrote: > Matt Domsch wrote: >> On Sun, Nov 25, 2007 at 09:08:30PM -0800, Toshio Kuratomi wrote: +1, but does it make sure all transactions are finished? I know smolt does not have good transaction protection. If a transaction fails halfway through, we might have a mess. >>> Not if the app doesn't. From a brief test, TG apps do not do this. >> >> MirrorManager doesn't use transactions, I never figured out how to get >> them to work right. Advice welcome. >> > By not being able to get transactions working, do you mean explicit > transactions or implicit transactions? I see that mirrormanager, bodhi, > and noc (not running currently) are using a dburi that disables implicit > transactions:: > mirrormanager-prod.cfg.erb: > sqlobject.dburi="notrans_postgres://mirroradmin: > <%= mirrorPassword %>@db2.fedora.phx.redhat.com/mirrormanager" > > If that was changed to:: > sqlobject.dburi="postgres://mirroradmin:[...] > > TurboGears would at least attempt to use an implicit transaction per http > request which should protect the database from shutting down the > application in the middle of processing a multi-table update. I don't know > if that's the problem you're referring to, though. Removing the notrans_postgres:// from bodhi's sqlobject.dburi causes problems. Modifications don't seem to go through; I'm not sure if they hit the DB or not. I remember encountering this issue early on in bodhi development, and it was mitigated by calling hub.sync() all over the place. I have since removed them, and use notrans_postgres, which has been working fine since day 1 of our production instance. I'm not a db guru, so I'm not sure which is better or worse. I'll have to investigate this further. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
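For reference, the whole difference under discussion is one scheme prefix in the app's config. An illustrative prod.cfg fragment, modeled on the mirrormanager example quoted above -- the database name, password, and host here are placeholders, not real values:

```ini
# implicit transactions disabled; every statement autocommits:
sqlobject.dburi="notrans_postgres://bodhi:PASSWORD@db2.fedora.phx.redhat.com/bodhi"

# implicit transactions enabled; TurboGears wraps each HTTP request
# in its own transaction:
# sqlobject.dburi="postgres://bodhi:PASSWORD@db2.fedora.phx.redhat.com/bodhi"
```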
Port 80 blocked on publictest machines
I just made the change in puppet to block port 80 on our publictest machines. Theoretically, the apps on those servers shouldn't be touching our production db, but we shouldn't underestimate human stupidity. Let us know if stuff breaks. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Request to remove infofeed
On Wed, Jan 02, 2008 at 11:21:18AM -0500, seth vidal wrote: > > On Wed, 2008-01-02 at 10:15 -0600, Mike McGrath wrote: > > There's been a request to remove infofeed in favor of bodhi's interface as > > they now duplicate information. > > > > Is anyone against doing this? > > > > https://fedorahosted.org/fedora-infrastructure/ticket/248 > > > > infofeed has more information, though. Look at the feeds on the right of > the infofeed. Additionally, there's no single feed for all of the > updates in bodhi. Yes there is, https://admin.fedoraproject.org/updates/rss/rss2.0?status=stable > I'm not wed to keeping infofeed - but before we remove something we > better make sure we provide the same information. It looks like the infofeed has the RPM summary, description, and recent changelog. Bodhi doesn't know about any of these things, so it will require some database changes to store it. I've been meaning to have bodhi grab the recent RPM changelog upon submission for a little while now, so I don't have a problem implementing this, if we care? If we were to display these fields too, it would end up looking like:

    name - summary
    package description
    update notes
    bug list
    rpm changelog

Is this what we want? luke
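Grabbing the recent changelog at submission time might look like this. The rpm-bindings half is a sketch I haven't run here, and the rendering format is my own guess at the layout above:

```python
def read_changelog(path):
    """Return (name, text) changelog pairs from an RPM file, newest first."""
    import os
    import rpm  # lazy import: needs the rpm Python bindings installed
    ts = rpm.TransactionSet()
    fd = os.open(path, os.O_RDONLY)
    try:
        hdr = ts.hdrFromFdno(fd)
    finally:
        os.close(fd)
    return list(zip(hdr[rpm.RPMTAG_CHANGELOGNAME],
                    hdr[rpm.RPMTAG_CHANGELOGTEXT]))

def format_changelog(entries, n=3):
    """Render the n most recent entries for the update view/feed."""
    out = []
    for name, text in entries[:n]:
        out.append("* %s\n%s" % (name, text))
    return "\n\n".join(out)
```

Storing only the few most recent entries would keep the database changes small.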
FC6 guests
So, we still have a handful of FC6 guests lying around in PHX. After a quick look, it seems that we're using them for the following services:

publictest1
 - pkgdb-dev
 - ns-slapd
 - mysqld
 - postgres
 - wevisor

publictest2
 - my mash/bodhi playground

publictest4
 - asterisk

test1
 - security irc bot / xmlrpc

Please feel free to chime in with any services / guests I'm missing. With regard to publictest2, this guest can go away and not come back. I'd be fine doing mash/bodhi testing on any guest that has a couple gigs of RAM and read-only access to /mnt/koji. So, in order to get this ball rolling, we need to determine which guests we want to upgrade / destroy, what OS we want to upgrade them to, and if the services are currently able to run on that OS. Also, when would the best time to upgrade be? luke
Re: Fedora Search
On Wed, Jan 30, 2008 at 11:57:23AM -0600, Mike McGrath wrote: > We need a Fedora search engine. Especially for docs. Options > > 1) Do we run our own? > > 2) Do we use google. > > I love 2, its easy. But it is, non-OSS. So there are moral issues at > stake here. (though I've not used google to exclusively search through > our sites, it may suck at it, who knows :) > > So, thoughts? Who has deployed their own search engines? I've used htdig > in the past. I know J5 has been working on a search controller for MyFedora, which will be responsible for scouring a bunch of our resources. I don't see why we wouldn't be able to search docs as well. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Collaboration Servers!
On Wed, Jan 30, 2008 at 03:15:22PM -0600, Mike McGrath wrote: > collab1.fedoraproject.org is up and running. Yahoo! So whats missing? > Well, it doesn't actually do anything yet. Plans for it include > > 1) gobby (its AMAZING) > 2) pastebin or something like it (also amazing) > 3) mailman. > > So who wants to set up what? Luke, you'd mentioned you might be able to > get gobby up sometime this week / next. Is that still the case? If so > I'll open a ticket and assign it to you. Yeah, I'm down. A few initial concerns I have with deploying a production Gobby instance:

- Lack of ACLs. It will be a bit of a free-for-all, but that could possibly be a very good thing. There is really no sort of document hierarchy, and you can either set a single server-wide password, or none at all.
- We can set up sobby to save its state after a given interval, but people need to be aware that their data is not safe, and they should save locally -- often. Undo does not exist.

I think a single sobby instance will probably be fine for now. In the long run, it may be interesting to set up some sort of system that allows users to create new collaboration sessions with others instantly. It may be an idea worth hashing out at some point. > My only request for pastebin is that we use something that has an > upstream, and that we don't modify it other then to create a template for > that good ol' fedora look and feel. Stickum[0] has been pretty nice so far in my experience. My only beef with http://f3dora.org so far has been the fact that sometimes it's quite slow. If we want to take this route, the first step would be to get stickum into Fedora. Does anyone have any details as to how f3dora.org is currently deployed?
There is already an open Fedora Pastebin ticket: https://fedorahosted.org/fedora-infrastructure/ticket/53 luke [0]: http://code.google.com/p/stickum/ ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Collaboration Servers!
On Wed, Jan 30, 2008 at 03:15:22PM -0600, Mike McGrath wrote: > collab1.fedoraproject.org is up and running. Yahoo! So whats missing? > Well, it doesn't actually do anything yet. Plans for it include > > 1) gobby (its AMAZING) > 2) pastebin or something like it (also amazing) > 3) mailman. > > So who wants to set up what? Luke, you'd mentioned you might be able to > get gobby up sometime this week / next. Is that still the case? If so > I'll open a ticket and assign it to you. Sobby (the standalone obby server) is now running on gobby.fedoraproject.org. I also set up a cronjob to `git add . ; git commit -a` everything in the session hourly, and changes can be viewed via gitweb[0]. luke [0]: http://gobby.fedoraproject.org/git/gitweb.cgi?p=sobby/.git;a=summary
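The hourly snapshot job amounts to something like this (the real thing is the cron one-liner quoted above; the session path and the commit-message format below are made up for illustration):

```python
import subprocess
import time

SESSION_DIR = "/var/lib/sobby"  # assumed location of the sobby session

def commit_message(now):
    """Timestamped message so gitweb history is easy to scan."""
    return time.strftime("hourly gobby snapshot %Y-%m-%d %H:%M UTC", now)

def snapshot(session_dir=SESSION_DIR):
    subprocess.call(["git", "add", "."], cwd=session_dir)
    # 'git commit' exits non-zero when nothing changed; fine for a cron job
    subprocess.call(["git", "commit", "-a", "-m",
                     commit_message(time.gmtime())], cwd=session_dir)
```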
Re: Collaboration Servers!
On Thu, Feb 21, 2008 at 12:23:27AM +0200, Dimitris Glezos wrote: > On Wed, Feb 20, 2008 at 11:38 PM, Luke Macken <[EMAIL PROTECTED]> wrote: > > > > Sobby (the standalone obby server) is now running on > > gobby.fedoraproject.org. > > > > I also setup a cronjob to `git add . ; git commit -a` everything in the > > session hourly, > > and changes can be viewed via gitweb[0]. > > This is a good feature, however we should make it *very* clear somehow > that the text is being logged. Agreed. I mention that all files get committed to revision control in the README on the server, and in my blog post. Feel free to note this anywhere else you see applicable. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Collaboration Servers!
On Wed, Feb 20, 2008 at 11:08:38PM -0500, Yaakov Nemoy wrote: > On Wed, Feb 20, 2008 at 4:38 PM, Luke Macken <[EMAIL PROTECTED]> wrote: > > On Wed, Jan 30, 2008 at 03:15:22PM -0600, Mike McGrath wrote: > > > > > collab1.fedoraproject.org is up and running. Yahoo! So whats missing? > > > Well, it doesn't actually do anything yet. Plans for it include > > > > > > 1) gobby (its AMAZING) > > > 2) pastebin or something like it (also amazing) > > > 3) mailman. > > > > > > So who wants to set up what? Luke, you'd mentioned you might be able to > > > get gobby up sometime this week / next. Is that still the case? If so > > > I'll open a ticket and assign it to you. > > > > Sobby (the standalone obby server) is now running on > > gobby.fedoraproject.org. > > > > I also setup a cronjob to `git add . ; git commit -a` everything in the > > session hourly, > > and changes can be viewed via gitweb[0]. > > > > luke > > > > > > [0]: http://gobby.fedoraproject.org/git/gitweb.cgi?p=sobby/.git;a=summary > > I get a 403 error from trying to view that gitweb repo. > > Otherwise, pretty awesome. Ugh, SELinux. Should be "fixed", for now. I threw together a SELinux policy for sobby tonight; I'll look into getting it working tomorrow. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Mailman List Policy for Fedora Hosted
On Thu, Feb 28, 2008 at 02:34:17PM -0600, Mike McGrath wrote: > > On Thu, 28 Feb 2008, Ignacio Vazquez-Abrams wrote: > > > On Tue, 2008-02-19 at 09:45 -0600, Jeffrey Ollie wrote: > > > 3) What should the policy on list names be? My proposal: > > > > > > A) All list names must be prefixed with "-". > > > B) All list names must be suffixed with "-list". > > > C) Lists may optionally have something between the prefix and suffix, > > > as long as it's not obviously vulgar or obscene. > > > > Just pinging on this issue, since we're pretty close to getting it up > > AFAIK. > > > > (Personally, I prefer @lists.) > > > > > Alrighty, lets just do a +1 -1 and we'll count them up at the next meeting > > > For @lists.fedoraproject.org: +1 luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Change (already) - steved
On Tue, Apr 15, 2008 at 08:38:55AM -0500, Mike McGrath wrote: > One thing we'd been talking about before the freeze but just didn't get > around to was giving steved (nfs expert) access to our nfs box where > /mnt/koji lives as we still seem to be having nfslock issues (though they > are less frequent now). > > This is mostly so A) he can see how things are setup before an outage and > B) get immediate access to see what happened during the outage. > > I'd like to get his account on there (and possibly xen2 if he needs it) > before the freeze so we can keep the outage time to a minimum and possibly > have a fix right then or at least have a fix after the change freeze > ready. > > Anyone opposed? The only change here is actually giving steved access to > the boxes (with sudo) any other changes will have to come back through the > list when we have them. > > -Mike > > > PS. This isn't in an SOP but if nfslock happens on nfs1 and steved isn't > around. Anyone in sysadmin-main can just reboot the box which will cause > a quick down time but we've yet to figure out how to bring nfslock back up > sanely without the reboot. +1 luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Change freeze request: Fix invalid login pages
On Tue, Apr 15, 2008 at 12:23:05PM -0700, Toshio Kuratomi wrote: > When I built the last python-fedora, I built the package in my working tree > instead of making a fresh branch. This means the python-fedora package I > deployed on Friday has some incompatible changes that weren't meant to go > in until the next release. This is leading people to get an internal > server error when they try to login with an invalid username/password. > > There's two options: spin a new package based off the actual > python-fedora-0.2.99.8. The diff for that would look like > bzr-0.2.99.8-current.patch. > > The alternative is to only fix the problem that we know we're having with > BaseClient. The patch for that is quite a bit smaller: it's just a few > lines to change exception handling in fas2.py and jsonfasprovider.py to use > the new exception hierarchy in BaseClient. > > Risk for option #1: we have had the new python-fedora deployed since Friday > and this is the only problem reported so far. The patch is bigger than for > option #2 and thus there's more room for unexpected problems. > > Risk for option #2: We definitely do not want to push this package out to > the other servers as it's likely to break error handling in clients because > of the new exception hierarchy. Since we're in change freeze we're not > likely to do that for a while. > > I'm inclined for option #2. What do others think? Option #2 sounds like a good idea to me. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
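The BaseClient risk Toshio describes is the classic exception-hierarchy compatibility problem: clients written against the old exceptions silently miss the new ones. A toy Python illustration (all class names hypothetical, not actual python-fedora code):

```python
class FedoraServiceError(Exception):
    """Hypothetical new base class introduced by the updated library."""

class AuthError(FedoraServiceError):
    """Raised on a bad username/password in the new hierarchy."""

class ServerError(Exception):
    """Hypothetical old exception that deployed clients still catch."""

def login(username, password):
    # The accidentally-deployed library raises the new-style exception.
    raise AuthError("invalid username/password")

def old_client_login():
    # Client code written against the old hierarchy only knows ServerError,
    # so AuthError escapes and surfaces as an "internal server error".
    try:
        login("bob", "wrong")
        return "ok"
    except ServerError:
        return "login failed cleanly"

try:
    result = old_client_login()
except AuthError:
    result = "unhandled AuthError -> internal server error"

print(result)  # unhandled AuthError -> internal server error
```

This is why option #2 (teaching fas2.py and jsonfasprovider.py the new hierarchy) is the smaller, safer patch.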
Re: python-fedora upgrade
On Sun, Apr 20, 2008 at 01:20:49PM -0700, Toshio Kuratomi wrote: > Since the release is delayed and we're letting more changes through, I'd > like to update to the new python-fedora, 0.2.99.9 that reverts the > incompatible changes from 0.2.99.8. > > I'd like to push this package to all the app servers and fas1 & 2 so that > all the major TurboGears apps are on the same version. (We could push to > bodhi on releng1 as well if Luke is okay with that.) That's fine with me. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Change request: start releng2 guest on xen2
Hey guys, I'd like to start preparing for the releng1->releng2 move, and begin testing bodhi + mash + TG on RHEL5. This entails turning on the releng2 guest which lives on xen2. This guest has been down for a while now, and could possibly break something by coming back up. Anyone against this, or think it is a Bad Idea ? luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Change request: start releng2 guest on xen2
On Mon, May 05, 2008 at 03:30:22PM -0500, Mike McGrath wrote: > On Mon, 5 May 2008, Luke Macken wrote: > > > Hey guys, > > > > I'd like to start preparing for the releng1->releng2 move, and begin > > testing bodhi + mash + TG on RHEL5. > > > > This entails turning on the releng2 guest which lives on xen2. This > > guest has been down for a while now, and could possibly break something by > > coming back up. > > > > Anyone against this, or think it is a Bad Idea ? > > > > So these are the risks: > > 1) I don't know what state this box is in > 2) I don't know what IP its listening on > > > 2) is easy to check and fix without much issue. 1) I'm not sure about. > > Luke, did you or Jesse setup any cron jobs or anything on there that you > know of? Nope, I didn't setup anything on the box. I'm fine with holding off on this task until after the release. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: [Fwd: Tosca widgets, only half the battle]
On Wed, May 14, 2008 at 09:06:24AM -0700, Toshio Kuratomi wrote: > Forwarding to fedora-infrastructure-list so it can get more exposure and > discussion. > > Original Message > Subject: Tosca widgets, only half the battle > Date: Sun, 11 May 2008 12:27:36 -0400 > From: John (J5) Palmieri <[EMAIL PROTECTED]> > To: Toshio Kuratomi <[EMAIL PROTECTED]> > CC: [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED] > > After hacking away at MyFedora and producing a lot of ugly code in the > process I finally sat down the last two weeks to organize everything > into a framework, make it much more extensible and have patterns for > people to easily create content. Most of the technologies are > solidifying in my head and I have been working on hashing out an API > design behind the user interaction design I had started with. The issue > I am running into now is the fact that Turbo Gears and related > technology come from a monolithic design and adhere too stringently to > the Model/View/Controller design pattern. This is really an issue when > your models, views and controllers can come from different applications > or even different servers. MyFedora is of course a mashup of different > tools and does not fit the, I'm grabbing data from a single database and > displaying it via a self contained template, mold. What I need is a > complete plugin system where a person can write their own self contained > controllers, templates and static files which then drop in and are > loaded on the fly, while integrating with the global project. > > Before I go further let me describe my design. > > Vocabulary: > > Resource - This is the starting point for MyFedora plugins. A resource > is any abstract grouping such as "packages", "people" and "projects" > which contain tools for viewing and manipulating data within the > resource's context. > > Tools - A tool is a web app for viewing or manipulating data. For > example Builds would be a tool for the package resource. 
> > Data Id - The data id is a pointer to a specific dataset the tools work > on. For example the package resource considers each fedora package name > to be a data_id. > > The way things work are Resources are placed in the resources/ directory > and contain the logic for routing requests to a specific tool. They > also contain the master template which is a cause of path problems with > the current TG setup (include paths are relative to the including > template) > > Tools are placed in the tools/ directory and are controllers just like > any other TG controller. The exception is there is a standard for > including the master template and the tool pulls templates and static > files from its own directory. Tools can register with more than one > resource and must modify its behavior based on the resource calling it. > For instance the Build tool would be able to register with the package > and people resource and depending which resource is being used it would > display either a specific person's builds or the build history of a > package. Based on the resource being used the master template is pulled > in by the tool's templates. > > Data id's are simply what the resource passes to the tool and the tool > needs to be able to accept when dealing with a particular resource. For > instance the Packages resource would send a package name as a data id > and the Peoples resource would send a person's FAS username. > > The issue here is I need the tools to be self contained but still > integrate correctly with the global assets such as master templates and > graphics. Tosca widgets seemed to be the answer until I looked further > and found out they are just a higher level display layer than a self > contained controller/template system. It seems to be confusing because > it breaks the connection between the controller, data and the display > when I want that all to be encapsulated. 
Basically I don't want the > master page doling out the data because the master page is just a > container to display the tool and links to other tools. The tools > should know where to get their data from. > > One solution is to use ToscaWidgets as a replacement for templates (or > more apt another layer between the controller and the template). That > makes things more complicated and throws away a lot of the concepts of > TG controllers. I guess I am probably just hung up on how I first > learned TG and we can just document around those issues. But another > thing to think about is stuff like WSGI. > > What do you guys think? Given my design and goals such as the ability to > display tools on the portal page, what is our plan of attack? How do we > concoct a plugin system to make it easy for others to create integrated > content while really just concentrating on their bits and not the wider > integration infrastructure? Are there systems/libraries out there that > already do this? Tosca is only part of the solution because it only > deals with en
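J5's resource/tool/data-id routing can be sketched roughly as follows (all names here are hypothetical; this is just one reading of the design above, not MyFedora code): tools register against one or more resources, and the dispatcher passes the resource name along so a tool can adapt its output.

```python
# Toy sketch of the MyFedora plugin routing J5 describes: a tool
# (e.g. "builds") registers with several resources ("packages",
# "people") and adapts its behavior to whichever resource calls it.
# All names are hypothetical.

class ToolRegistry:
    def __init__(self):
        self._tools = {}  # resource name -> {tool name: handler}

    def register(self, tool_name, resources, handler):
        for resource in resources:
            self._tools.setdefault(resource, {})[tool_name] = handler

    def dispatch(self, resource, tool_name, data_id):
        # Route a request for <resource>/<data_id>/<tool> to its handler,
        # passing the resource so the tool knows which context it's in.
        return self._tools[resource][tool_name](resource, data_id)

def builds_tool(resource, data_id):
    # One tool, two behaviors, depending on the calling resource.
    if resource == "packages":
        return "build history of package %s" % data_id
    elif resource == "people":
        return "builds by user %s" % data_id

registry = ToolRegistry()
registry.register("builds", ["packages", "people"], builds_tool)
print(registry.dispatch("packages", "builds", "kernel"))
print(registry.dispatch("people", "builds", "lmacken"))
```

The open question in the thread, which this sketch dodges, is how each tool also pulls in the master template and its own static files while staying self-contained.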
Re: [Fwd: Tosca widgets, only half the battle]
On Wed, May 14, 2008 at 11:03:38AM -0500, Mike McGrath wrote: > > > Forwarding to fedora-infrastructure-list soit canget more exposure and > > discussion. > > > > Original Message > > Subject: Tosca widgets, only half the battle > > Date: Sun, 11 May 2008 12:27:36 -0400 > > From: John (J5) Palmieri <[EMAIL PROTECTED]> > > To: Toshio Kuratomi <[EMAIL PROTECTED]> > > CC: [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED] > > > > After hacking away at MyFedora and producing a lot of ugly code in the > > process I finally sat down the last two weeks to organize everything > > into a framework make it much more extensible and have patterns for > > people to easily create content. Most of the technologies are > > solidifying into my head and I have been working on hashing out an API > > design behind the user interaction design I had started with. The issue > > I am running into now is the fact that Turbo Gears and related > > technology come from a monolithic design and adhere too stringently to > > the Model/View/Controller design pattern. This is really an issue when > > your models, views and controllers can come from different applications > > or even different servers. MyFedora is of course a mashup of different > > tools and does not fit the, I'm grabbing data from a single database and > > displaying it via a self contained template, mold. What I need is a > > complete plugin system where a person can write their own self contained > > controllers, templates and static files which then drop in and are > > loaded on the fly, while integrating with the global project. > > > > Do we want the myfedora app to be coded in such a way that it works with > lots of technologies? or do we want to define a standard that the > technologies can implement to make it work with myfedora? I'd like to see us re-use and be compatible with as many existing technologies and standards as possible. I don't necessarily see any value in re-inventing our own. 
That is, unless we have a sound reason to? luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: F10?
On Mon, May 19, 2008 at 10:27:52PM -0500, Mike McGrath wrote: > So F9 is out the door and we had a very exciting last 6 months. Here's > the short list: > > * FAS2 > * /mnt/koji migration and deployment > * Backup system up and running > * Collaboration servers brought up (gobby and asterisk POC) > * UTC switch > > The focus for this last release was mostly around sanity. Cleaning up > some configs, things like that. We actually did a very good job of that. > > All in all I feel it was a good release. So my question to the team, what > would you all like to see over the next 6 months? Here are some things I'd like to get done: - Signing server (sigul) - Solidify our SELinux deployment. I'm sitting down with Dan Walsh this week to churn through our logs and fix as much stuff as possible. Brett Lentz (Wakko666) has also been doing a great job of writing test cases and pushing some crucial puppet SELinux changes upstream. - Get our logging situation under control. - Get bodhi into the app cluster, and give it the ability to kick off mashes on our releng boxes. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: F10?
On Tue, May 20, 2008 at 09:15:28AM -0700, Toshio Kuratomi wrote: > Things I'd like but probably can't work on myself: [...] > * Optimize db calls within TG applications to make them as snappy as > possible. I can do this for SQLAlchemy but SQLObject isn't flexible > enough. Any page which is for viewing data and is returning multiple > records is potentially a good candidate. Speaking of stuff I'd love to see happen, but don't have the time for :) - Port bodhi to SQLAlchemy luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: F10?
On Tue, May 20, 2008 at 02:18:53PM -0400, Yaakov Nemoy wrote: > On Tue, May 20, 2008 at 1:54 PM, Luke Macken <[EMAIL PROTECTED]> wrote: > > On Tue, May 20, 2008 at 09:15:28AM -0700, Toshio Kuratomi wrote: > >> Things I'd like but probably can't work on myself: > > [...] > >> * Optimize db calls within TG applications to make them as snappy as > >> possible. I can do this for SQLAlchemy but SQLObject isn't flexible > >> enough. Any page which is for viewing data and is returning multiple > >> records is potentially a good candidate. > > > > Speaking of stuff I'd love to see happen, but don't have the time for :) > > - Port bodhi to SQLAlchemy > > Depends on how complicated your stuff is already. If it's mostly just > a bunch of tables, and the oddball query, I can probably do it in > about a day. If it's alot of complicated composite tables with > composite keys, custom data types, custom rules, and massive > dependencies, then it could take 2-3 days. > > Let me know when you need help. Cool. Give me a week or so to finish up some major bodhi changes that I have underway, and the releng2 migration. I've created a ticket so we can track this task, and I'll let you know when it's safe to dive in. https://fedorahosted.org/bodhi/ticket/202 Thanks! luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Fedora and CIA.vc
On Wed, May 21, 2008 at 12:30:57PM +0300, Dimitris Glezos wrote: > I think this was discussed briefly in the past on IRC, having Fedora > listed on http://cia.vc/. CIA is "a real-time window into the open > source world", providing "Real-time open source activity stats" with > active projects, people, commits, etc. > > It probably won't provide much added functionality (although its RSS > feeds are handy sometimes eg. [1]), but it'd be good having Fedora on > another contributor/project map. And with the diversity of the > projects hosted on Fedora Hosted, maybe this will bring more > contributors in. > > We'll need to add the CIA client script to our versioning systems: > > http://cia.vc/doc/clients/ +1. https://fedorahosted.org/fedora-infrastructure/ticket/164 I'm not sure if anyone addressed Mike's concern. Should we run this by FESCo or The Board ? luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: [Fwd: Tosca widgets, only half the battle]
On Thu, May 15, 2008 at 11:24:59AM -0400, John (J5) Palmieri wrote: > On Wed, 2008-05-14 at 12:39 -0400, Luke Macken wrote: > > On Wed, May 14, 2008 at 09:06:24AM -0700, Toshio Kuratomi wrote: > > > Forwarding to fedora-infrastructure-list soit canget more exposure and > > > discussion. > > > > > > Original Message > > > Subject: Tosca widgets, only half the battle > > > Date: Sun, 11 May 2008 12:27:36 -0400 > > > From: John (J5) Palmieri <[EMAIL PROTECTED]> > > > To: Toshio Kuratomi <[EMAIL PROTECTED]> > > > CC: [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED] > > > > > > After hacking away at MyFedora and producing a lot of ugly code in the > > > process I finally sat down the last two weeks to organize everything > > > into a framework make it much more extensible and have patterns for > > > people to easily create content. Most of the technologies are > > > solidifying into my head and I have been working on hashing out an API > > > design behind the user interaction design I had started with. The issue > > > I am running into now is the fact that Turbo Gears and related > > > technology come from a monolithic design and adhere too stringently to > > > the Model/View/Controller design pattern. This is really an issue when > > > your models, views and controllers can come from different applications > > > or even different servers. MyFedora is of course a mashup of different > > > tools and does not fit the, I'm grabbing data from a single database and > > > displaying it via a self contained template, mold. What I need is a > > > complete plugin system where a person can write their own self contained > > > controllers, templates and static files which then drop in and are > > > loaded on the fly, while integrating with the global project. > > > > > > Before I go further let me describe my design. > > > > > > Vocabulary: > > > > > > Resource - This is the starting point for MyFedora plugins. 
A resource > > > is any abstract grouping such as "packages", "people" and "projects" > > > which contain tools for viewing and manipulating data within the > > > resource's context. > > > > > > Tools - A tool is a web app for viewing or manipulating data. For > > > example Builds would be a tool for the package resource. > > > > > > Data Id - The data id is a pointer to a specific dataset the tools work > > > on. For example the package resource considers each fedora package name > > > to be a data_id. > > > > > > The way things work are Resources are placed in the resources/ directory > > > and contain the logic for routing requests to a specific tool. They > > > also contain the master template which is a cause of path problems with > > > the current TG setup (include paths are relative to the including > > > template) > > > > > > Tools are placed in the tools/ directory and are controllers just like > > > any other TG controller. The exception is there is a standard for > > > including the master template and the tool pulls templates and static > > > files from its own directory. Tools can register with more than one > > > resource and must modify its behavior based on the resource calling it. > > > For instance the Build tool would be able to register with the package > > > and people resource and depending which resource is being used it would > > > display either a specific person's builds or the build history of a > > > package. Based on the resource being used the master template is pulled > > > in by the tool's templates. > > > > > > Data id's are simply what the resource passes to the tool and the tool > > > needs to be able to accept when dealing with a particular resource. For > > > instance the Packages resource would send a package name as a data id > > > and the Peoples resource would send a person's FAS username. 
> > > > > > The issue here is I need the tools to be self contained but still > > > integrate correctly with the global assests such as master templates and > > > graphics. Tosca widgets seemed to be the answer until I looked further > > > and found out they are just a higher level display layer than a self > > > contained controller/template system. It seems to be confusing because > > > it breaks the connection between the controller, d
Re: FAS instance on publictest10?
On Tue, Jul 08, 2008 at 05:46:07PM -0400, Robin Norwood wrote: > Ricky set up a FAS instance on pt9 that seems to work fine so far, so > I'm done whining. :-) Hmmm, so what is the difference between the pt10 and pt9 deployments? We've been testing the bleeding-edge python-fedora package on pt10, so if that is causing the issues we definitely need to track it down asap. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: doing FAS2 with sqlalchemy-migrate
On Thu, Jul 17, 2008 at 12:44:01PM +0200, Yaakov Nemoy wrote: > Hey List, > > There are alot of tickets for FAS2 outstanding that require some > changes to the DB. Toshio is working on getting sqlalchemy-migrate > into Fedora now, so I felt it would be fitting to get FAS2 to be an > example of how to do it right. (This means I am probably doing > something wrong.) > > http://git.fedorahosted.org/git/fas.git?p=fas.git;a=commitdiff;h=21643256e4840aa2179b8f2d6cf230ab714603a9 > > This git commit sets up migration for us. The instructions how to use > it are there. In short, the overlying design is to assume the DB has > already been configured and deployed using the old method. This will > manage changes we do on top. This will make it easier for people with > working development trees to simply migrate their dev systems over > without having to start their DB fresh. > > Please poke holes in this plan, so we can have something more solid > that can be used as a gold standard for Fedora Infrastructure > development. I reviewed and approved python-migrate yesterday. It's currently waiting to be built. https://bugzilla.redhat.com/show_bug.cgi?id=452388 luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
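For readers unfamiliar with sqlalchemy-migrate, the core idea is a version table in the database plus numbered change scripts applied on top of an already-deployed schema. A toy sketch of that idea using only sqlite3 from the standard library (deliberately not the real migrate API):

```python
import sqlite3

# Toy sketch of versioned schema migration, the idea behind
# sqlalchemy-migrate (not its actual API): a migrate_version table
# records the current schema version, and numbered upgrade scripts
# run in order on top of the existing deployment.

def upgrade_001(conn):
    # Hypothetical change: add a column to an already-deployed table.
    conn.execute("ALTER TABLE people ADD COLUMN country_code TEXT")

MIGRATIONS = {1: upgrade_001}

def migrate(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS migrate_version (version INTEGER)")
    row = conn.execute("SELECT version FROM migrate_version").fetchone()
    version = row[0] if row else 0
    if row is None:
        # Existing DBs start at version 0, matching the "old method" schema.
        conn.execute("INSERT INTO migrate_version VALUES (0)")
    for target in sorted(MIGRATIONS):
        if target > version:
            MIGRATIONS[target](conn)
            conn.execute("UPDATE migrate_version SET version = ?", (target,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (username TEXT)")  # pre-existing schema
migrate(conn)
cols = [r[1] for r in conn.execute("PRAGMA table_info(people)")]
print(cols)  # ['username', 'country_code']
```

This is exactly why the approach suits FAS2: developers with working trees just run the pending scripts instead of rebuilding their database from scratch.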
Re: doing FAS2 with sqlalchemy-migrate
On Thu, Jul 17, 2008 at 05:45:44PM +0200, Yaakov Nemoy wrote: > On Thu, Jul 17, 2008 at 4:40 PM, Luke Macken <[EMAIL PROTECTED]> wrote: > > > > I reviewed and approved python-migrate yesterday. It's currently > > waiting to be built. > > > >https://bugzilla.redhat.com/show_bug.cgi?id=452388 > > How long does that take? > > I'll have to change the docs once it's done. python-migrate is now in updates-testing ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Tasks and followup
On Tue, Aug 12, 2008 at 08:33:32PM +, Paul W. Frields wrote: > Apologies for this being an idea with no code attached. I'm hoping that > some of the able folks are on this list and will see something > achievable. > > Something the Community Architecture folks and I have discovered is that > when we sign up new folks for an account, there's not any way to mark > them for follow up or to indicate a note for where they did it. A > couple methods for doing this come to mind: > > 1. Simple but effective -- a way to tag account holders arbitrarily. > This might help with a number of things, like skill sets, karma, and so > forth. In this case, the tag would allow Ambassadors to follow up on > particular shows by listing everyone with "FooCon 2008" in their tag > list. We could possibly do this by using the 'myfedora' application namespace that already exists in the FAS person config model. Each user in the db can have a list of configs for a variety of different apps (currently hardcoded to asterisk, moin, myfedora, and openid). For MyFedora we were thinking about storing various widget settings in this field, but the namespace has not yet been decided on. I'm not familiar with the FAS codebase, but we may be able to do something like this:: from fas.model import Person, Configs Person.configs.append( Configs(application='myfedora', value="{ 'tags' : ['FUDCon2008Boston'], 'skills': ['python', 'c++', 'trolling'], 'karma' : -8, }")) luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
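Luke's Configs idea could look roughly like this in plain Python. The Configs class below is a stand-in for the FAS model, and serializing the value as JSON (rather than the quoted dict literal above) is my assumption, not how FAS actually stores it:

```python
import json

# Rough sketch of storing Ambassador follow-up tags in a FAS-style
# per-application config row.  Configs here is a stand-in class, and
# JSON for the value field is an assumption on my part.

class Configs:
    def __init__(self, application, attribute, value):
        self.application = application
        self.attribute = attribute
        self.value = value

person_configs = []
person_configs.append(Configs(
    application='myfedora',
    attribute='tags',
    value=json.dumps(['FUDCon2008Boston']),
))

def tagged_with(configs, tag):
    # An Ambassador follow-up query: every config row carrying a tag.
    return [c for c in configs
            if c.application == 'myfedora'
            and c.attribute == 'tags'
            and tag in json.loads(c.value)]

print(len(tagged_with(person_configs, 'FUDCon2008Boston')))  # 1
```

A structured value like this would let the same field later carry skills or karma without a schema change, which seems to be the point of the application namespace.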
SELinux status update
Over the past few months, I've been working closely with Dan Walsh and Mike McGrath to solidify our SELinux deployment. We're not yet at the point where we can flip every system into enforcing mode, but we're getting close. We're at the point now where we can pretty much do everything we need to do via our puppet configuration, and we've created a handful of constructs that can be used to configure various aspects of SELinux, for example: == Setting custom context semanage_fcontext { '/var/tmp/l10n-data(/.*)?': type => 'httpd_sys_content_t' } == Toggling booleans selinux_bool { 'httpd_can_network_connect_db': bool => 'on' } == Allowing ports semanage_port { '8081-8089': type => 'http_port_t', proto => 'tcp' } == Deploying custom policy semodule { 'fedora': } I created a custom 'fedora' selinux module that is loaded on all systems (that are configured with 'include selinux'). This module exists to fix various issues specific to our environment, and to cover up minor annoyances such as leaky file descriptors. So, now it's just a matter of hunting down the existing issues and fixing them in puppet or in the SELinux policy. I've been keeping our infrastructure ahead of the RHEL5 selinux-policy, as Dan has fixed a lot of our issues in his rpms. I threw together a basic SOP for our SELinux configuration here: https://fedoraproject.org/wiki/Infrastructure/SOP/SELinux You can keep up to date on our SELinux deployment status here: https://fedorahosted.org/fedora-infrastructure/ticket/230 Cheers, luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
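For quick reference, the puppet constructs above map onto roughly these command-line operations. This is my approximation of what the defines wrap, not taken from the puppet module itself:

```shell
# Approximate command-line equivalents of the puppet constructs above
# (illustrative only; run as root on the target host).

# Setting custom context
semanage fcontext -a -t httpd_sys_content_t '/var/tmp/l10n-data(/.*)?'
restorecon -R -v /var/tmp/l10n-data

# Toggling booleans
setsebool -P httpd_can_network_connect_db on

# Allowing ports
semanage port -a -t http_port_t -p tcp 8081-8089

# Deploying custom policy
semodule -i fedora.pp
```

Wrapping these in puppet defines gets us idempotence and a single audited place where every SELinux deviation is recorded.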
Intrusion Detection System
Hey all, A couple of weeks ago I did an initial deployment of an Intrusion Detection System in our infrastructure. It utilizes the prelude stack, and is currently powered by auditd and prelude-lml events. Audit gives us a ridiculous amount of power with regard to monitoring everything that happens on a system. Prelude-lml, out of the box using its pcre plugin, is able to watch a large variety of service logs, including many things we are running (asterisk, mod_security, nagios, cacti, PAM, postfix, sendmail, selinux, shadowutils, sshd, sudo). Prewikka is the web-based frontend (https://admin.fedoraproject.org/prewikka). I created a new 'prelude' puppet module that contains the configuration for audit, audisp-plugins, libprelude, prelude-manager, prewikka, prelude-correlator, and prelude-lml. Turning a node/servergroup into a sensor entails adding the following to your class definition: 'include prelude::sensor::audisp' My initial deployment entailed setting up the prelude-manager and correlator on a single box, and hooking up a single sensor (bastion). So, we're now at the point where we can fine-tune our audit rules before we further deploy this infrastructure. Some things we want to consider: - Creating specific security policies for each servergroup - Defining what files/directories/activities we want to monitor on which machines - What events do we want to escalate? I opened an infrastructure ticket to track this deployment here: https://fedorahosted.org/fedora-infrastructure/ticket/833 Suggestions, comments, and ideas are welcome. Cheers, luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
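Conceptually, prelude-lml's pcre plugin matches service log lines against regex rules and turns hits into alert events. A toy Python illustration of that idea (the rules and log lines here are made up for the example; real prelude-lml rules live in its own rule-file format):

```python
import re

# Toy illustration of what a pcre-based log watcher does conceptually:
# match log lines against named regex rules and collect alert events.
# Rules and sample lines are invented for this example.

RULES = [
    ("sshd-auth-failure",
     re.compile(r"Failed password for (?P<user>\S+) from (?P<src>\S+)")),
    ("sudo-command",
     re.compile(r"(?P<user>\S+) : TTY=.* COMMAND=(?P<cmd>.*)")),
]

def scan(lines):
    alerts = []
    for line in lines:
        for name, pattern in RULES:
            m = pattern.search(line)
            if m:
                alerts.append((name, m.groupdict()))
    return alerts

log = [
    "sshd[123]: Failed password for root from 10.0.0.5 port 2222 ssh2",
    "sudo: lmacken : TTY=pts/0 ; PWD=/home ; COMMAND=/bin/cat /etc/shadow",
]
alerts = scan(log)
print([name for name, _ in alerts])  # ['sshd-auth-failure', 'sudo-command']
```

The real stack adds the important parts this sketch omits: IDMEF-formatted events, transport to prelude-manager, and correlation across sensors.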
Re: Intrusion Detection System
On Wed, Sep 10, 2008 at 06:29:38PM -0600, Stephen John Smoogen wrote: > 2008/9/10 Luke Macken <[EMAIL PROTECTED]>: > > Hey all, > > > > A couple of weeks ago I did an initial deployment of an Intrusion > > Detection System in our infrastructure. It utilizes the prelude stack, > > and is currently powered by auditd and prelude-lml events. Audit gives > > us a ridiculous amount of power with regarding to monitoring > > everything that happens on a system. Prelude-lml, out of the box > > using it's pcre plugin, is able to watch a large variety of service > > logs, including many things we are running (asterisk, mod_security, > > nagios, cacti, PAM, postfix, sendmail, selinux, shadowutils, sshd, > > sudo). Prewikka is the web-based frontend > > (https://admin.fedoraproject.org/prewikka). > > > > for the EL-5 systems.. did you need to update audit from what is > provided by RHEL-5.2? It looked like it would be needed when I talked > with Steve Grubb because it required stuff that had not been ported to > EL-5. I would be interested in helping you test/document this? Where > can I start? Yep, RHEL's audit is not compiled with '--enable-prelude', so I respun F-9's. I also built rawhide's prelude stack. All of these packages are in the fedora-infrastructure repo. As far as testing goes, I recommend setting up the stack on your home network to get familiar with it (http://people.redhat.com/sgrubb/audit/prelude.txt). As for documentation, we definitely need to throw together an SOP, and maybe some sort of audit policy for all of our various server groups. Before we start tweaking our audit rules, we should probably start by defining security policies for our various systems so we can turn them into audit rules and selinux policy. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: sb2/3 reboot again and weird newkey.newkey directories
On Sun, Sep 14, 2008 at 05:13:10AM -0400, Ricky Zhou wrote: > sb2/3 randomly rebooted again tonight (the nagios alert about ns1). > Sorry, but I never got around to sending a ticket about it. I'll try to > get a list of dates/times where this happened together soon. > > Also, just so this doesn't get lost in the backlog: > > 09:03:57 < yaneti> > http://download.fedora.redhat.com/pub/fedora/linux/updates/8/ > 09:04:04 < yaneti> newkey.newkey ? > > Weird stuff. Yeah, this was caused by a bug in bodhi's dot newkey hack and has been fixed with a patch from Ricky. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Puppet training
On Tue, Sep 16, 2008 at 09:49:56PM -0700, Toshio Kuratomi wrote: > Mike McGrath wrote: > > Hey guys, i think I'd like to hold puppet training next week on Wed > > sometime. Which would work best for you guys: > > > > 1:00 pm Chicago time > > 4:00 pm Chicago time. > > > > The live training will be identical to this training: > > > > http://mmcgrath.fedorapeople.org/puppet/ > > > > But will allow for Q and A. The training at that links includes an ogg > > and takes about a half hour to complete at full speed though if you're new > > to puppet its worth it to stop in the middle and review some of the > > topics. If you have any questions or comments about it please let me > > know. The slideshow is made with openoffice and I made the ogg with > > audacity. My throat hurts now so I'm going to get some tea :) > > > I'll go with the crowd and say 4:00PM but either works for me :-) Ditto. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Change Request - Elections
On Tue, Sep 23, 2008 at 01:32:23AM +1200, Nigel Jones wrote: > On Mon, 2008-09-22 at 09:25 -0400, Seth Vidal wrote: > > On Tue, 2008-09-23 at 01:08 +1200, Nigel Jones wrote: > > > Technically not directly covered in the Beta Freeze, but it's only > > > running on app4 at the moment and the settings it's on now is okay for > > > the infrequent use of casual browsing, but the Art team want to use it > > > for the next 36 or so hours. > > > > > > Can I get a +1 to bump to the resources used during the FESCo/Board > > > votes? > > > > Bump them to what? > > Err yeah, I boo-booed and forgot to include the diff. > > diff --git a/configs/web/applications/elections.conf > b/configs/web/applications/elections.conf > index 387f01f..4a11549 100644 > --- a/configs/web/applications/elections.conf > +++ b/configs/web/applications/elections.conf > @@ -10,8 +10,8 @@ WSGIPythonOptimize 2 > > # To save resources during periods of no-elections (quite often - ~75% > of the time) > # We can run less threads and processes. > -#WSGIDaemonProcess elections threads=2 processes=4 user=apache > display-name=elections > -WSGIDaemonProcess elections threads=1 processes=1 user=apache > display-name=elections > +WSGIDaemonProcess elections threads=2 processes=4 user=apache > display-name=elections > +#WSGIDaemonProcess elections threads=1 processes=1 user=apache > display-name=elections > > To be fair, it'd most likely be okay running at threads=2 processes=2, > but I'm trying to stick with what I've used in the past with the freeze > etc. +1. Looks like a harmless optimization. We may also want to set a 'maximum-requests' on the WSGIDaemonProcess, to help mitigate any memory leaks within the stack (as opposed to having to use our restart-memhogs cronjob). Bodhi's is currently set to 1000, but we should probably agree on a sane number and make it consistent throughout our infrastructure. Our TurboGears SOP also needs to be updated to reflect our new WSGI deployments. 
luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
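If we do standardize on a worker-recycling cap, the capacity bump and the 'maximum-requests' suggestion above could be combined in a single directive. An illustrative mod_wsgi fragment (not the actual elections.conf — the 1000 value is only the number bodhi happens to use today):

```apache
# Election-time capacity, with each daemon process recycled after N
# requests so a slow memory leak cannot grow without bound.
WSGIDaemonProcess elections threads=2 processes=4 maximum-requests=1000 \
    user=apache display-name=elections
```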
Re: adding releases to bodhi and cluttered menu
On Tue, Oct 14, 2008 at 11:27:36PM +0200, Patrice Dumas wrote:
> Hello,
>
> This question is asked in the context of
> https://fedoraproject.org/wiki/User:Pertusus/Draft_keeping_infra_open_for_EOL
> which has not yet been approved by FESCO, so this could have no
> follow-up, though I think that this issue is also relevant for EPEL.
>
> Till raised an interesting issue associated with adding more releases in
> bodhi: each release takes up some space in the left menu. One more could
> still be all right, but I think that 4 or more will certainly be
> problematic. Has this issue already been considered? What is the plan
> for EPEL when it switches to using bodhi? In the end there will be 3 to 5
> EPEL versions in parallel, so this is certainly an issue that will arise.
>
> Any comment, idea?

The next major bodhi release will allow single updates to span various releases. I would also like to add a differentiation between 'Products' (Fedora/EPEL). With this new model, we could easily come up with a nice sidebar view that can encompass everything.

> If it ends up that for the proposal (or for EPEL) another bodhi server
> has to be set up, can you tell if it is rather easy to set up and
> administer, or rather hard?

It's extremely easy to set up, and not very difficult to maintain, but I'm not sure I see any value in setting up a separate instance for EPEL.

luke
Re: Change request: monitor auditd
On Sat, Oct 25, 2008 at 04:21:23PM -0500, Mike McGrath wrote:
> On Sat, 25 Oct 2008, Jon Stanley wrote:
>
> > OK, this is my first nagios change - seems to be sane. Can I get two
> > +1's since we're in a change freeze? And do we want defaulttemplate or
> > criticaltemplate for this?
> >
> > [EMAIL PROTECTED] puppet]$ git diff
> > diff --git a/configs/system/nagios/services/procs.cfg b/configs/system/nagios/services/procs.cfg
> > index 49f790d..3fbca7b 100644
> > --- a/configs/system/nagios/services/procs.cfg
> > +++ b/configs/system/nagios/services/procs.cfg
> > @@ -83,4 +83,11 @@ define service {
> >      max_check_attempts   12
> >  }
> >
> > +define service {
> > +    hostgroup            servers
> > +    service_description  Audit Daemon
> > +    check_command        check_by_nrpe!check_auditd
> > +    use                  defaulttemplate
> > +}
> > +
> >
> > diff --git a/configs/system/nrpe.cfg b/configs/system/nrpe.cfg
> > index 2fbea87..2812dbc 100644
> > --- a/configs/system/nrpe.cfg
> > +++ b/configs/system/nrpe.cfg
> > @@ -215,6 +215,7 @@
> >  command[check_supervisor]=/usr/lib/nagios/plugins/check_procs -c 1:1 -a '/usr/bi
> >  command[check_transifex_ssh_agent]=/usr/lib/nagios/plugins/check_procs -c 1:1 -C ssh-agent -u transifex
> >  command[check_lock]=/usr/lib/nagios/plugins/check_lock
> >  command[check_nagios]=/usr/lib/nagios/plugins/check_nagios -e 5 -F /var/log/nagios/status.dat -C /usr/sbin/nagios
> > +command[check_auditd]=/usr/lib/nagios/plugins/check_procs -c 1:1 -C 'auditd' -u root
> >  # The following examples allow user-supplied arguments and can
> >  # only be used if the NRPE daemon was compiled with support for
> >  # command arguments *AND* the dont_blame_nrpe directive in this
>
> +1

+1

luke
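For readers unfamiliar with the plugin's flags: `-c 1:1` makes `check_procs` return CRITICAL unless exactly one matching process is running. A minimal Python model of that range check (illustrative only — not the plugin's actual code):

```python
# Standard Nagios plugin exit codes.
OK, CRITICAL = 0, 2

def check_procs(count, crit_min=1, crit_max=1):
    """Model of `check_procs -c 1:1`: CRITICAL outside the [min, max] range."""
    return OK if crit_min <= count <= crit_max else CRITICAL

assert check_procs(1) == OK        # exactly one auditd running: fine
assert check_procs(0) == CRITICAL  # auditd died
assert check_procs(2) == CRITICAL  # duplicate daemons
```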
Change request: trivial bodhi fix
I would like to perform a trivial bodhi upgrade that contains the following patch:

--- a/bodhi/controllers.py
+++ b/bodhi/controllers.py
@@ -168,7 +168,7 @@ class Root(controllers.RootController):
         forward_url = cherrypy.request.headers.get("Referer", "/")

         # This seems to be the cause of some bodhi-client errors
-        # cherrypy.response.status=403
+        cherrypy.response.status=403
         return dict(message=msg, previous_url=previous_url, logging_in=True,
                     original_parameters=cherrypy.request.params,
                     forward_url=forward_url)

This reverts a workaround for a problem in python-fedora-0.2.x, which has since been resolved. This bodhi patch should hopefully resolve https://bugzilla.redhat.com/show_bug.cgi?id=466510 as well.

luke
Change request: bodhi change for myfedora
Hey guys, I'd like to perform another quick bodhi upgrade soon, to add a feature needed to revoke update requests from myfedora. Should be a very low risk upgrade. https://fedorahosted.org/bodhi/changeset/bcc673fab69067e555654113d640f0152511f225

luke
Re: Change Request - fingerprints.html
On Thu, Nov 13, 2008 at 02:54:43PM -0500, Ricky Zhou wrote: > This is just a content change, so it should have no risk at all: > It fixes ticket 814 > (https://fedorahosted.org/fedora-infrastructure/ticket/814) > > diff --git a/configs/system/fingerprints.html > b/configs/system/fingerprints.html > index f8d9cc7..01d4e43 100644 > --- a/configs/system/fingerprints.html > +++ b/configs/system/fingerprints.html > @@ -25,26 +25,9 @@ > > >Package Signing Keys > - > - > -Key Purpose > - Identified As > -Fingerprint > -Last Updated (UTC) > - > - > -Fedora Project Releases > - Fedora Project <[EMAIL PROTECTED]> > -CAB4 4B99 6F27 744E 8612 7CDF B442 69D0 4F2A 6FD2 > -2008-08-19 00:00:00 > - > - > -Fedora Rawhide > - Fedora Project (Test Software) <[EMAIL PROTECTED]> > -3166 C14A AE72 30D9 3B7A B2F6 DA84 CBD4 30C9 ECF8 > -2008-08-19 00:00:00 > - > - > + > +Please refer to the https://fedoraproject.org/keys";>keys > page for updated information about package signing keys. > + >SSH Host Fingerprints > > > @@ -109,9 +92,8 @@ > > > > -Copyright © 2008 Red Hat, Inc. and others. All Rights Reserved. > Please send any comments or corrections to the mailto:[EMAIL > PROTECTED]">websites te > +Copyright © 2008 Red Hat, Inc. and others. All Rights Reserved. > For comments or queries, please href="http://fedoraproject.org/en/contact";>contact us. > > - > > The Fedora Project is maintained and driven by the community and > sponsored by Red Hat. This is a community maintained site. Red Hat is not > responsible for content. > > > Can I get two +1s? > > Thanks, > Ricky +1 ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Change request: SELinux tweaks.
Attached are some patches that will fix many AVCs that are currently happening within our infrastructure.

Patch 0010-Fix-our-semanage_fcontext-function-to-work-on-symlin.patch /should/ fix the problem introduced in 41acfbc83c80d12d915a0d6087e841aba2c7e78c that caused restorecon to flip out when trying to apply context to a symlink.

The rest should all be fairly straightforward fixes that involve flipping booleans, setting context, and creating custom policy modules. Apologies for the binary blobs in the diffs :)

luke

From 88b27f114147315ca789b6dda1263353f8582fd5 Mon Sep 17 00:00:00 2001
From: Luke Macken <[EMAIL PROTECTED]>
Date: Fri, 21 Nov 2008 15:15:58 +
Subject: [PATCH] Add a custom SELinux policy module for our noc systems.

This allows ping_t to read from a nagios_spool_t fifo.

diff --git a/configs/system/selinux/modules/noc.pp b/configs/system/selinux/modules/noc.pp
new file mode 100644
index ..1321793adc4bc4c484d1a66ffa6efcaeaba50480
GIT binary patch
literal 23375
[binary policy module data omitted]
Re: Change request: SELinux tweaks.
On Fri, Nov 21, 2008 at 02:17:53PM -0600, Mike McGrath wrote: > On Fri, 21 Nov 2008, Luke Macken wrote: > > > Attached are some patches that will fix many AVC's that are currently > > happening within our infrastructure. > > > > Patch 0010-Fix-our-semanage_fcontext-function-to-work-on-symlin.patch > > /should/ fix the problem introduced in > > 41acfbc83c80d12d915a0d6087e841aba2c7e78c that caused restorecon to flip > > out when trying to apply context to a symlink. > > > > The rest should all be fairly straight-forward fixes that involve > > flipping booleans, setting context, and creating custom policy modules. > > Apologies for the binary blobs in the diffs :) > > > > What is the impact of actually implementing these changes? Also whats the > risk if stuff goes horribly wrong? These changes will greatly decrease the amount of SELinux AVCs generated, and in the case of bastion will also decrease the number of prelude alerts being sent to our prelude-manager. Since we're in permissive mode, all AVCs are essentially harmless, but we need to fix them to not only move forward with our SELinux deployment, but also for the IDS deployment as well (we currently have too many AVCs for our audit-driven prelude IDS to be useful). The only thing I can think of that could go "horribly wrong" is if patch 0010 does not fix the symlink issue, and it would trigger a 'restorecon -R /', which would only cause a little bit of disk churn. When these are applied, I will manually run puppet on our hosted machine to ensure that the symlink issue is properly fixed. Other than that, these changes should be completely transparent. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Change request: Increase the size of audit logs (on bastion)
The attached patch will allow the audit system to utilize 100 MB for its logs, as opposed to 20 MB. Due to the sheer number of SELinux denials that we're hitting on bastion (which will be resolved after a reboot, and my patches from the previous mail), bastion is only storing 1-2 days worth of audit logs.

This patch will only affect bastion, as it is currently the only machine that is configured with 'include prelude::sensor::audisp'

luke

From 6f3e644a09d15c659716f82e8af18b66d75517c1 Mon Sep 17 00:00:00 2001
From: Luke Macken <[EMAIL PROTECTED]>
Date: Fri, 21 Nov 2008 21:11:50 +
Subject: [PATCH] Increase the audit log size from 20 MB to 100 MB.

diff --git a/modules/prelude/templates/auditd.conf.erb b/modules/prelude/templates/auditd.conf.erb
index 4e9d153..0c95f4a 100644
--- a/modules/prelude/templates/auditd.conf.erb
+++ b/modules/prelude/templates/auditd.conf.erb
@@ -8,12 +8,12 @@ log_group = sysadmin-noc
 priority_boost = 4
 flush = none
 freq = 0
-num_logs = 4
+num_logs = 10
 disp_qos = lossless
 dispatcher = /sbin/audispd
 name_format = numeric
 #name = <%= hostname %>
-max_log_file = 5
+max_log_file = 10
 max_log_file_action = ROTATE
 space_left = 75
 space_left_action = SYSLOG
--
1.5.5.1
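The 20 MB → 100 MB figures follow from auditd's rotation scheme: total retention is roughly `num_logs * max_log_file` (the latter in MB). A quick sanity check of the values in the patch:

```python
def audit_budget_mb(num_logs, max_log_file_mb):
    # auditd keeps up to `num_logs` rotated files of up to
    # `max_log_file` MB each, so total disk usage is their product.
    return num_logs * max_log_file_mb

assert audit_budget_mb(4, 5) == 20     # before the patch
assert audit_budget_mb(10, 10) == 100  # after the patch
```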
Re: Change request: SELinux tweaks.
On Fri, Nov 21, 2008 at 01:12:13PM -0800, Toshio Kuratomi wrote: > Luke Macken wrote: > > On Fri, Nov 21, 2008 at 02:17:53PM -0600, Mike McGrath wrote: > >> On Fri, 21 Nov 2008, Luke Macken wrote: > >> > >>> Attached are some patches that will fix many AVC's that are currently > >>> happening within our infrastructure. > >>> > >>> Patch 0010-Fix-our-semanage_fcontext-function-to-work-on-symlin.patch > >>> /should/ fix the problem introduced in > >>> 41acfbc83c80d12d915a0d6087e841aba2c7e78c that caused restorecon to flip > >>> out when trying to apply context to a symlink. > >>> > >>> The rest should all be fairly straight-forward fixes that involve > >>> flipping booleans, setting context, and creating custom policy modules. > >>> Apologies for the binary blobs in the diffs :) > >>> > >> What is the impact of actually implementing these changes? Also whats the > >> risk if stuff goes horribly wrong? > > > > These changes will greatly decrease the amount of SELinux AVCs > > generated, and in the case of bastion will also decrease the number of > > prelude alerts being sent to our prelude-manager. Since we're > > in permissive mode, all AVCs are essentially harmless, but we need to > > fix them to not only move forward with our SELinux deployment, but also > > for the IDS deployment as well (we currently have too many AVCs for our > > audit-driven prelude IDS to be useful). > > > > The only thing I can think of that could go "horribly wrong" is if patch > > 0010 does not fix the symlink issue, and it would trigger a 'restorecon > > -R /', which would only cause a little bit of disk churn. When these > > are applied, I will manually run puppet on our hosted machine to ensure > > that the symlink issue is properly fixed. > > > How does patch 0010 fix the problem? It looks like trying to use this > on /git will still result in restorecon -R / being run. Good catch. 
So, for symlinks such as /cvs, defining them like this should do the trick:

--- a/manifests/servergroups/cvs.pp
+++ b/manifests/servergroups/cvs.pp
@@ -28,7 +28,7 @@ class cvs {
         bool => 'on'
     }

-    semanage_fcontext { '/cvs':
+    semanage_fcontext { '/cvs(/.*)?':
         type => 'httpd_sys_content_t'
     }

luke
Bodhi 10k bug
As some of you may have noticed, the last batch of updates contained 209 updates with the ID of 'FEDORA-2008-1'. This is due to a flaw in the way bodhi's PackageUpdate.assign_id() method finds the current update with the highest id. Presently, it does a PackageUpdate.select(..., orderBy=PackageUpdate.q.updateid). Since PackageUpdate.updateid is a unicode column, and due to the fact that u'FEDORA-2008-1' < u'FEDORA-2008-', this started to fail miserably.

Attached is a patch that has the assign_id method order the query by the date_pushed DateTimeCol in order to find the highest updateid. However, it seems that SQLObject completely ignores milliseconds:

if datetime:
    def DateTimeConverter(value, db):
        return "'%04d-%02d-%02d %02d:%02d:%02d'" % (
            value.year, value.month, value.day,
            value.hour, value.minute, value.second)

The problem with this is that we must now take into account multiple updates that were pushed at the same second.

The "proper" way to fix this is at the model level, and probably to use an integer for the updateid column. I'm in the process of finishing up the SQLAlchemy port, which will properly solve this problem. In the mean time, this hack will not require any database changes.

This patch also includes a test case for this 10k bug.

[EMAIL PROTECTED] bodhi]$ nosetests bodhi/tests/test_model.py:TestPackageUpdate.test_id
.
----------------------------------------------------------------------
Ran 1 test in 0.084s

OK

Once approved and applied, I will push out a fixed package (to releng2 only), fix the existing updates from the last push, and send out an errata containing the new update IDs.

+1's ?

luke

From 965360653ee505c76a89228bac462ada597dad0b Mon Sep 17 00:00:00 2001
From: Luke Macken <[EMAIL PROTECTED]>
Date: Sat, 22 Nov 2008 18:27:40 -0500
Subject: [PATCH] Righteous hack for the 10k bug.

Due to a flaw in the way the PackageUpdate.assign_id method finds the
update with the highest id, when we reached 10,000 updates this year,
it started to fail.

The previous method ordered updates by the updateid string column, which
now fails since u'FEDORA-2008-1' < u'FEDORA-2008-'. This patch changes
this query to grab the most recent update based on the date_pushed
column. This technically should also have the highest updateid.
However, since SQLObject completely ignores milliseconds, we need to
also take into account multiple updates being pushed during the same
second.

This changeset also includes a testcase for the 10k bug.

diff --git a/bodhi/model.py b/bodhi/model.py
index 2bc7c07..7d4627b 100644
--- a/bodhi/model.py
+++ b/bodhi/model.py
@@ -258,19 +258,37 @@ class PackageUpdate(SQLObject):
         if self.updateid != None and self.updateid != u'None':
             log.debug("Keeping current update id %s" % self.updateid)
             return
-        update = PackageUpdate.select(PackageUpdate.q.updateid != 'None',
-                                      orderBy=PackageUpdate.q.updateid)
+
+        updates = PackageUpdate.select(
+                AND(PackageUpdate.q.date_pushed != None,
+                    PackageUpdate.q.updateid != None),
+                orderBy=PackageUpdate.q.date_pushed, limit=1).reversed()
+
         try:
-            prefix, year, id = update[-1].updateid.split('-')
+            update = updates[0]
+
+            # We need to check if there are any other updates that were pushed
+            # at the same time, since SQLObject ignores milliseconds
+            others = PackageUpdate.select(
+                    PackageUpdate.q.date_pushed == update.date_pushed)
+            if others.count() > 1:
+                # find the update with the highest id
+                for other in others:
+                    if other.updateid_int > update.updateid_int:
+                        update = other
+
+            prefix, year, id = update.updateid.split('-')
             if int(year) != time.localtime()[0]: # new year
                 id = 0
             id = int(id) + 1
-        except (AttributeError, IndexError):
-            id = 1
+        except IndexError:
+            id = 1 # First update
+
         self.updateid = u'%s-%s-%0.4d' % (self.release.id_prefix,
                                           time.localtime()[0],id)
         log.debug("Setting updateid for %s to %s" % (self.title, self.updateid))
+        self.date_pushed = datetime.utcnow()
         hub.commit()

     def set_request(self, action, pathcheck=True):
@@ -356,7 +374,6 @@ class PackageUpdate(SQLObject):
         """
         if self.request == 'testing
Re: Informal survey
On Sat, Nov 22, 2008 at 12:36:29PM -0600, Mike McGrath wrote: > Hey guys, completely voluntary but I thought I'd ask because I'm curious > > For personal use, how many of you use something like linode or slicehost > or an individual provider? > > If you do use a provider which one is it? > > > For me, I do use one and I use slicehost. I use linode and webfaction. luke
Re: Congratulations to Nigel Jones
On Mon, Nov 24, 2008 at 04:38:16PM -0600, Mike McGrath wrote: > I'm happy to announce I've just approved Nigel Jones in to the > sysadmin-main group. He's the first new member we've had to that group > since Ricky Zhou was approved in May earlier this year. > > For those that don't know sysadmin-main is for our core dedicated admins. > It typically takes many months (sometimes years) of commitment to get in > to this group. Nigel is on his way to doing a great job with re-inventing > our monitoring environment. He's spent a lot of time on many of our bits > of infrastructure and regularly puts in many hours a week doing Fedora > related tasks. He's a great volunteer and we're happy and lucky to have > him on. > > Nigel is currently based out of Brisbane which makes him the first non-US > member to be in sysadmin-main. This is an important change in focus for > us and greatly helps the stability/coverage of our environment. > > Thanks Nigel! Congratulations, G :) Thank you for all of your hard work and dedication! luke
Re: Bodhi 10k bug
On Sat, Nov 22, 2008 at 06:41:34PM -0500, Luke Macken wrote: > As some of you may have noticed, the last batch of updates contained 209 > updates with the ID of 'FEDORA-2008-1'. This is is due to a flaw in the > way bodhi's PackageUpdate.assign_id() method finds the current update with the > highest id. Presently, it does a PackageUpdate.select(..., > orderBy=PackageUpdate.q.updateid). Since PackageUpdate.updateid is a unicode > column, and due to the fact that u'FEDORA-2008-1' < u'FEDORA-2008-', > this started to fail miserably. > > Attached is a patch that has the assign_id method order the query by the > date_pushed DateTimeCol in order to find the highest updateid. However, it > seems that SQLObject completely ignore milliseconds: > > if datetime: > def DateTimeConverter(value, db): > return "'%04d-%02d-%02d %02d:%02d:%02d'" % ( > value.year, value.month, value.day, > value.hour, value.minute, > value.second) > > The problem with this is that we must now take into account multiple updates > that were pushed at the same second. > > The "proper" way to fix this is at the model level, and probably to use an > integer for the updateid column. I'm in the process of finishing up the > SQLAlchemy port, which will properly solve this problem. In the mean time, > this hack will not require any database changes. > > This patch also includes a test case for this 10k bug. > > [EMAIL PROTECTED] bodhi]$ nosetests > bodhi/tests/test_model.py:TestPackageUpdate.test_id > . > -- > Ran 1 test in 0.084s > > OK > > Once approved and applied, I will push out a fixed package (to releng2 only), > fix the existing updates from the last push, and send out an errata containing > the new update IDs. Earlier today I pushed out a fixed bodhi-server to releng2, reassigned 209 update ids, and remashed the f10 updates repositories. I also sent an errata to the fedora-package-announce list, but it has yet to be moderated, so it is attached as well. 
Cheers, luke >From [EMAIL PROTECTED] Mon Nov 24 15:51:42 2008 Date: Mon, 24 Nov 2008 15:51:42 -0500 From: Luke Macken <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] Subject: ERRATA: Incorrect update IDs Message-ID: <[EMAIL PROTECTED]> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="y0ulUmNC+osPPQO6" Content-Disposition: inline User-Agent: Mutt/1.5.18 (2008-05-17) Status: RO Content-Length: 10363 Lines: 242 --y0ulUmNC+osPPQO6 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Hi, Due to a bug in bodhi[0], the most recent push of updates contained many with the ID of FEDORA-2008-1. This bug has since been fixed, and new IDs have been reassigned to those updates, which are listed below. Sorry for the inconvenience, luke [0]: https://www.redhat.com/archives/fedora-infrastructure-list/2008-November/msg00150.html FEDORA-2008-1 rpcbind-0.1.7-1.fc9 FEDORA-2008-10001 perl-IO-Socket-SSL-1.18-1.fc10 FEDORA-2008-10002 unikurd-web-font-20020502-1.fc10 FEDORA-2008-10003 gyachi-1.1.56-5.fc10 FEDORA-2008-10004 ifstat-1.1-8.fc10 FEDORA-2008-10005 libetpan-0.57-1.fc10 FEDORA-2008-10006 perl-Data-Visitor-0.21-1.fc10 FEDORA-2008-10007 postgresql-8.3.5-1.fc10 FEDORA-2008-10008 scim-chewing-0.3.2-1.fc10 FEDORA-2008-10009 gnome-gmail-notifier-0.9.4-3.fc10 FEDORA-2008-10010 geda-gnetlist-20080929-2.fc10 FEDORA-2008-10011 libpng-1.2.33-1.fc10 FEDORA-2008-10012 codeblocks-8.02-4.fc10 FEDORA-2008-10013 xdvik-22.84.14-4.fc10 FEDORA-2008-10014 darcs-2.1.0-1.fc10 FEDORA-2008-10015 rcssmonitor-13.0.0-2.fc10 FEDORA-2008-10016 bacula-2.4.3-3.fc10 FEDORA-2008-10017 basket-1.0.3.1-2.fc10 FEDORA-2008-10018 libnfnetlink-0.0.39-3.fc10 FEDORA-2008-10019 clipper-2.0-20.fc10 FEDORA-2008-10020 linux-libertine-fonts-4.1.8-1.fc10 FEDORA-2008-10021 pngnq-0.5-5.fc10 FEDORA-2008-10022 rpcbind-0.1.7-1.fc10 FEDORA-2008-10023 VLGothic-fonts-20081029-1.fc10 FEDORA-2008-10024 dvipng-1.11-1.fc10 FEDORA-2008-10025 gyachi-1.1.56-5.fc8 
FEDORA-2008-10026 ochusha-0.5.99.67.1-0.4.cvs20081114T2135.fc10 FEDORA-2008-10027 file-browser-applet-0.6.0-1.fc10 FEDORA-2008-10028 em8300-0.17.2-1.fc10 FEDORA-2008-10029 grip-3.2.0-25.fc10 FEDORA-2008-10030 gnome-power-manager-2.24.2-2.fc10 FEDORA-2008-10031 ruby-libvirt-0.1.0-2.fc10 FEDORA-2008-10032 plt-scheme-4.1.2-1.fc10 FEDORA-2008-10033 perl-Crypt-DSA-0.14-7.fc10,perl-Crypt-DH-0.06-9.fc10 FEDORA-2008-10034 system-config-services-0.99.28-1.fc10 FEDORA-2008-10035 bind-9.5.1-0.9.
Re: Change request: SELinux tweaks.
On Wed, Nov 26, 2008 at 12:55:28PM +0100, Nils Philippsen wrote: > On Fri, 2008-11-21 at 16:49 -0500, Luke Macken wrote: > > On Fri, Nov 21, 2008 at 01:12:13PM -0800, Toshio Kuratomi wrote: > > > Luke Macken wrote: > > > > On Fri, Nov 21, 2008 at 02:17:53PM -0600, Mike McGrath wrote: > > > >> On Fri, 21 Nov 2008, Luke Macken wrote: > > > >> > > > >>> Attached are some patches that will fix many AVC's that are currently > > > >>> happening within our infrastructure. > > > >>> > > > >>> Patch 0010-Fix-our-semanage_fcontext-function-to-work-on-symlin.patch > > > >>> /should/ fix the problem introduced in > > > >>> 41acfbc83c80d12d915a0d6087e841aba2c7e78c that caused restorecon to > > > >>> flip > > > >>> out when trying to apply context to a symlink. > > > >>> > > > >>> The rest should all be fairly straight-forward fixes that involve > > > >>> flipping booleans, setting context, and creating custom policy > > > >>> modules. > > > >>> Apologies for the binary blobs in the diffs :) > > > >>> > > > >> What is the impact of actually implementing these changes? Also whats > > > >> the > > > >> risk if stuff goes horribly wrong? > > > > > > > > These changes will greatly decrease the amount of SELinux AVCs > > > > generated, and in the case of bastion will also decrease the number of > > > > prelude alerts being sent to our prelude-manager. Since we're > > > > in permissive mode, all AVCs are essentially harmless, but we need to > > > > fix them to not only move forward with our SELinux deployment, but also > > > > for the IDS deployment as well (we currently have too many AVCs for our > > > > audit-driven prelude IDS to be useful). > > > > > > > > The only thing I can think of that could go "horribly wrong" is if patch > > > > 0010 does not fix the symlink issue, and it would trigger a 'restorecon > > > > -R /', which would only cause a little bit of disk churn. 
When these > > > > are applied, I will manually run puppet on our hosted machine to ensure > > > > that the symlink issue is properly fixed. > > > > > > > How does patch 0010 fix the problem? It looks like trying to use this > > > on /git will still result in restorecon -R / being run. > > > > Good catch. So, for symlinks such as /cvs, defining them like this > > should do the trick: > > > > --- a/manifests/servergroups/cvs.pp > > +++ b/manifests/servergroups/cvs.pp > > @@ -28,7 +28,7 @@ class cvs { > > bool => 'on' > > } > > > > -semanage_fcontext { '/cvs': > > +semanage_fcontext { '/cvs(/.*)?': > > type => 'httpd_sys_content_t' > > } > > Sorry to jump in uninformed, but will this actually catch files > beneath /cvs (if /cvs is a symlink)? IMO, the "real" path needs to be > specified here (e.g. /srv/cvs/... if /cvs pointed to /srv/cvs). > > Or do restorecon & co. actually follow symlinks (and thus would > potentially treat files differently depending on whether they were > reached by the canonical or a symlinked path)?

The way we created the semanage_fcontext is so that it will always run `restorecon -R` on the dirname of the path, with some sed "magic", so it should ideally follow everything below the symlink.

define semanage_fcontext($type) {
    exec { "/usr/sbin/semanage fcontext -a -t $type '$name'; /sbin/restorecon -R `/usr/bin/dirname '$name/' | /bin/sed 's/(.*//'`":
        unless => "/usr/sbin/matchpathcon `/usr/bin/dirname '$name' | /bin/sed 's/(//'` | grep -qe $type",
        cwd => '/',
    }
}

Yes, it's a nasty hack, but it works for now until puppet can handle this stuff better (the latest version may actually be able to, I'm not quite sure)

luke
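For what it's worth, the dirname/sed pipeline above reduces an fcontext pattern to the plain path that `restorecon -R` should walk. A Python translation of that shell logic (an illustrative model, not the puppet code itself):

```python
import os
import re

def restorecon_target(pattern):
    # Equivalent of: /usr/bin/dirname "$name/" | /bin/sed 's/(.*//'
    # The appended slash makes dirname a no-op for plain paths; the
    # substitution then drops the regex portion of the fcontext pattern,
    # leaving the concrete directory to relabel.
    return re.sub(r"\(.*", "", os.path.dirname(pattern + "/"))

assert restorecon_target("/cvs(/.*)?") == "/cvs"
assert restorecon_target("/srv/web") == "/srv/web"
```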
[Change Request] Minor bodhi update
Hi guys, I'd like to do a low-risk bodhi upgrade this weekend. Changes include:

* A new argument to the 'list' API method that will be utilized by Fedora Community. This does not break the existing API.
* Added FormEncode validators to the 'list' API method, which fixes a couple of issues and ensures we get the data that we expect.
* Made some parts of the updates push process a bit more robust, so if there is a problem with one update, it won't affect the others. This will help us mitigate some recent explosions that we saw due to race conditions.
* Fixed some Koji session issues, which we have been hitting every now and then during pushes.

luke
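The push-robustness change described above — one bad update must not sink the whole batch — can be sketched as a per-update try/except loop. This is an illustrative sketch, not bodhi's actual code; all names are made up:

```python
def push_all(updates, push_one):
    """Push each update independently, collecting failures instead of
    letting the first exception abort the entire batch."""
    failures = {}
    for update in updates:
        try:
            push_one(update)
        except Exception as exc:
            failures[update] = exc  # record it and carry on
    return failures

def fake_push(update):
    # Hypothetical per-update push step that fails for one update.
    if update == "broken-update":
        raise RuntimeError("mash failed")

failures = push_all(["good-1", "broken-update", "good-2"], fake_push)
assert list(failures) == ["broken-update"]  # the other two still pushed
```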
Re: More auth options
On Mon, Mar 30, 2009 at 12:57:23PM -0500, Dennis Gilmore wrote:
> So doing a little looking around I came across some options that look
> interesting; the following options would mean you need to physically have
> something to log in.
>
> yubikey
> http://www.yubico.com/products/yubikey/
> It would require a pam module and for us to set up a server for managing keys.
>
> It looks to be fairly low cost. It would implement a 2-factor
> authentication.

I've been a big fan of yubikey for a while now. The technology is secure, the hardware is solid, and the source is open. Aside from their online docs, this podcast was quite informative as well: http://twit.tv/sn143

luke
SELinux lockdown
Hey everyone,

So I've been doing a lot of SELinux/audit related work behind the scenes within our infrastructure for a while now, working closely with Dan Walsh and Steve Grubb. It's taken a lot of patience and hard work, but we're finally at the point where we can start switching large portions of our infrastructure over to SELinux Enforcing mode.

The following server groups are now fully enforcing:

 o gateway
 o people
 o planet
 o fas
 o collab
 o releng
 o db
 o torrent
 o dns

These are all groups of machines that have not had any SELinux denials in at least a month. If you notice any issues with regard to these groups, please speak up. I will be keeping a close eye on these machines, and I encourage anyone who is interested to do the same.

I threw together a little tool that I've been using to monitor & manage SELinux on our machines. It uses func, and allows you to do the following:

Get the SELinux status:

    selinux-overlord.py --status

Display all enforced denials:

    selinux-overlord.py --enforced-denials

Dump all raw AVCs to disk. Each minion will have its own file:

    selinux-overlord.py --dump-avcs

Upgrade the SELinux policy RPMs:

    selinux-overlord.py --upgrade-policy

It defaults to querying all minions, but you can specify groups of them if you wish:

    selinux-overlord.py --status app* db*

This script should ideally be its own func module, but in the meantime I added it to the fedora-infrastructure git repository:

http://git.fedorahosted.org/git/?p=fedora-infrastructure.git;a=blob_plain;f=scripts/selinux/selinux-overlord.py;hb=HEAD

More information on our SELinux deployment can be found in our [out of date] SOP:

http://fedoraproject.org/wiki/Infrastructure/SOP/SELinux

luke
Re: AGPLv3 and GPLv2
On Wed, Jun 10, 2009 at 04:17:57PM -0500, Mike McGrath wrote: > So without knowing it we started using AGPLv3 code in our environment > recently for fedora community and moksha. In the past I think all of our > stuff has been GPL(ish) mostly GPLv2 (toshio correct me if I'm wrong > there) > > I want to make sure we're all aware of what we can and can'd do as far as > mixing the code between the two as this could be very unfortunate. > > Luke, you described the AGPLv3 as "crucial". Can you let the rest of us > know why the GPLv2 wouldn't work? Using GPLv2 would allow $BIG_EVIL_CORPORATION to take our code and run it publicly on their servers without making the source available. The AGPL fixes this issue, which is known as the "application service provider loophole", and would require them to put a link to the source code if one existed in the original copy. This is why you will see links to the Moksha and Fedora Community source code at the bottom of every page. I am not a lawyer, nor do I play one on television. Someone smarter than I can elaborate further, or correct any false assumptions that we have made. More details from Wikipedia: """ Both versions of the AGPL were designed to close a perceived application service provider "loophole" (the "ASP loophole") in the ordinary GPL, where by using but not distributing the software, the copyleft provisions are not triggered. Each version differs from the version of the GNU GPL on which it is based in having an additional provision addressing use of software over a computer network. The additional provision requires that the complete source code be made available to any network user of the AGPL-licensed work, typically a web application. 
The Free Software Foundation has recommended that the GNU AGPLv3 be considered for any software that will commonly be run over a network.[2] The Open Source Initiative approved the GNU AGPLv3[3] as an Open Source license in March 2008 after Funambol submitted it for consideration[4] [...] Compatibility with the GPL Both versions of the AGPL, like the corresponding versions of the GNU GPL on which they are based, are strong copyleft licenses. In the FSF's judgment, the additional requirement in section 2(d) of AGPLv1 made it incompatible with the otherwise nearly identical GPLv2. That is to say, one cannot distribute a single work formed by combining components covered by each license. By contrast, GPLv3 and AGPLv3 each include clauses (in section 13 of each license) that together achieve a form of mutual compatibility for the two licenses. These clauses explicitly allow the "conveying" of a work formed by linking code licensed under the one license against code licensed under the other license.[7] In this way, the copyleft of each license is relaxed to allow distribution of such combinations. """ http://en.wikipedia.org/wiki/Affero_General_Public_License ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
bodhi 0.6.0 with EPEL support
Hey all,

I just deployed bodhi 0.6.0 to app1-6 and releng{2,1.stg}. This release contains patches from both Dennis Gilmore and myself to support pushing updates for EPEL.

I just submitted my first EPEL update into bodhi, so things seem to be working properly so far. It should be safe to start queueing EPEL updates, and we'll try doing a small push early next week.

Please file bugs here: https://fedorahosted.org/bodhi/newticket

Thanks,

luke
Re: package category "new-package" for fedora-package-announce
On Sun, Jul 05, 2009 at 03:20:43PM -0400, David Juran wrote: > Hello! > > Would it be possible to add a new category for new packages to the > fedora-package-announce list? > I'm interested in seeing what new packages are released to Fedora but I > don't have the time/patience to wade through the full bulk of the > fedora-package-announce list. So in the same way that there currently exists > a category for security updates, would it be possible to implement a category > for new packages? Hi David, Would prepending something like [NEW] to the subject (similar to how we add [SECURITY]) suffice? This would be a fairly trivial change to bodhi. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
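In case it helps picture the change: bodhi's real announcement-mail code is more involved, but the subject tagging would look something like this hypothetical sketch (the function name and the "Fedora update:" wording are illustrative, not bodhi's actual template):

```python
# Sketch of prefixing announcement subjects, mirroring the
# existing [SECURITY] convention with a new [NEW] tag.
def mail_subject(update_title, security=False, new_package=False):
    prefix = ''
    if security:
        prefix += '[SECURITY] '
    if new_package:
        prefix += '[NEW] '
    return '%sFedora update: %s' % (prefix, update_title)
```

Subscribers could then filter on `[NEW]` in their mail client without wading through the full announce traffic.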
Re: Why not Proxy with Nginx ?
On Fri, Jul 10, 2009 at 06:35:57AM -0500, Mike McGrath wrote:
> Stuff like this comes up from time to time and the question I always have
> is: What is it we're wanting to do that we can't currently do with the
> setup we have now? We're a group with a lot of turnover and almost
> everyone knows how to use apache so why is it worth it to us to switch to
> nginx?

If we are wanting "to serve static files faster", then yes, Nginx would do that for us[0]. I use Nginx for all of my personal application deployments, and I've been extremely impressed with its speed and ease of configuration. However, its WSGI support is a bit questionable, so we would only want to use it for serving static files & reverse proxying. You can also configure it to hit memcached before apache, which is pretty neat.

Mike is right though: if it's not solving any real problems for us, we're better off using what we are already familiar with until we see an obvious need for it.

luke

[0]: http://blog.webfaction.com/a-little-holiday-present
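The "hit memcached before apache" trick uses nginx's stock memcached module. A hypothetical config sketch (the upstream host names and ports here are made up, not our actual setup):

```nginx
# Try memcached first; on a miss (404) or error (502),
# fall through to the apache backend.
location / {
    set $memcached_key "$uri";
    memcached_pass memcached1:11211;
    default_type text/html;
    error_page 404 502 = @apache;
}

location @apache {
    proxy_pass http://localhost:8080;
}
```

The application is responsible for populating memcached with fully rendered pages under the same keys; nginx itself only reads from the cache.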
Re: Why not Proxy with Nginx ?
On Fri, Jul 10, 2009 at 08:56:07AM -0500, Mike McGrath wrote: > On Fri, 10 Jul 2009, Luke Macken wrote: > > > On Fri, Jul 10, 2009 at 06:35:57AM -0500, Mike McGrath wrote: > > > Stuff like this comes up from time to time and the question I always have > > > is: What is it we're wanting to do that we can't currently do with the > > > setup we have now? We're a group with a lot of turnover and almost > > > everyone knows how to use apache so why is it worth it to us to switch to > > > nginx? > > > > If we are wanting "to serve static files faster", then yes, Nginx would > > do that for us[0]. I use Nginx for all of my personal application > > deployments, and I've been extremely impressed with it's speed and ease > > of configuration. However, it's WSGI support is a bit questionable, so > > we would only want to use it as for serving static files & reverse > > proxying. You can also configure it to hit memcached before apache, > > which is pretty neat. > > > > That is interesting though, does it just store entire html pages in > memcached? As far as I know nginx doesn't store things in memcached itself, but in your web application you can do fancy things like cache fully rendered HTML pages in memcached, and you can tell nginx to look there first. This article, "Pylons on Nginx with Memcached and SSI", is what inspired me to dive into this stuff a while back: http://www.reshetseret.com/app/blog/?p=3 They use a simple decorator to do the caching. However, this won't work out of the box with TurboGears2 since there is a piece of ToscaWidgets middleware that injects the widget resources at the last minute -- so your cache would be missing a ton of JS and CSS files. However, I've been thinking about having Moksha inject a piece of middleware at the top of the stack to optionally throw the rendered output in memcached. Anyway... 
fun stuff :)

luke
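The decorator approach from that article is simple enough to sketch. This is only a toy version of the idea, using a dict-backed stand-in for a memcached client (which exposes the same get/set shape), not the article's actual code:

```python
class DictCache:
    """Stand-in for a memcached client (same get/set shape)."""
    def __init__(self):
        self._d = {}

    def get(self, key):
        return self._d.get(key)

    def set(self, key, value):
        self._d[key] = value


def cached_page(cache, key_func):
    """Cache the fully rendered output of a page-rendering function."""
    def decorator(render):
        def wrapper(*args):
            key = key_func(*args)
            page = cache.get(key)
            if page is None:
                # Cache miss: render once and store the result.
                page = render(*args)
                cache.set(key, page)
            return page
        return wrapper
    return decorator
```

As noted above, with TurboGears2 this alone isn't enough, since the ToscaWidgets middleware injects widget resources after rendering; the caching would need to happen above that layer of the WSGI stack.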
Re: bodhi 0.6.0 with EPEL support
Oh yeah, I also updated our Bodhi SOP with some details on how to push updates for EPEL. https://fedoraproject.org/wiki/Bodhi_Infrastructure_SOP On Fri, Jul 03, 2009 at 11:25:35PM -0400, Luke Macken wrote: > Hey all, > > I just deployed bodhi 0.6.0 to app1-6 and releng{2,1.stg}. This release > contains patches from both Dennis Gilmore and myself to support pushing > updates for EPEL. > > I just submitted my first EPEL update into bodhi, so things seem to be > working properly so far. It should be safe to start queueing EPEL > updates, and we'll try doing a small push early next week. > > Please file bugs here: https://fedorahosted.org/bodhi/newticket > > Thanks, > > luke > > ___ > Fedora-infrastructure-list mailing list > Fedora-infrastructure-list@redhat.com > https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list > ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Is F-community using its FAS account?
On Mon, Jul 27, 2009 at 10:04:02AM -0700, Toshio Kuratomi wrote: > Hey, > > lmacken and I realized that we probably hadn't changed Fedora > Community's password since moving it from the publictest machines to > production so I did that today. But when I went to test it we couldn't > figure out where it would actually be used:: > > [10:00:02] abadger1999: hmm, I'm actually not sure if that > account is getting used at all. It looks like it is only used when the > user is not logged in, but in that case they can't view users or > anything FAS related as far as I can tell. J5 would know for sure > > So, is the account being used? If so how can I test that the password > update worked okay? If not, can we disable the account? I'm not sure if we want to disable this account. Ian is working on some statistics applications that make authenticated calls to FAS, and we will want this to work for anonymous users as well. luke ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
Re: Messaging SIG - proposal for our notification infrastructure
On Wed, Aug 05, 2009 at 07:37:30PM -0500, Mike McGrath wrote: > On Tue, 4 Aug 2009, John Palmieri wrote: > > > Hey everyone. I put up a proposal[1] that describes a publish/subscribe > > setup for the infrastructure wide notification system. I haven't quite > > gotten to the publish side of things because the QMF docs get a little hazy > > there but the meat of the proposal is there and I wanted to get feedback > > sooner than later. An event/notification system is important to the work I > > need to do going forward. I specifically avoided method invocation and > > properties/statistics as they can be added in a later round if we feel we > > need them. I do feel statistics might be nice (for instance keeping track > > of information that is expensive to do via a query but cheap to update > > based on events) but they are a bonus that we don't need right away. > > > > [1] > > https://fedoraproject.org/wiki/Messaging_SIG/PublishSubscribeNotificationProposal > > > > Hey John, thanks for putting this together. I'm glad I can finally move > the messaging infrastructure beyond just an SMTP replacement :) I'd like > to get some specific use cases in place on that page too. > > Also just so I can get a list together, if you're experienced with AMQP > just reply to this email with a "I am" so we can discuss security and > implementation considerations. I am :) ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
[Change Request] Move Fedora Community's beaker session secret
Trivial change, I would like to move Fedora Community's beaker.session.secret into our passwords git module (and change it, of course). --- a/modules/fedoracommunity/templates/fedoracommunity-prod.ini.erb +++ b/modules/fedoracommunity/templates/fedoracommunity-prod.ini.erb @@ -117,7 +117,7 @@ full_stack = true #lang = ru #cache_dir = /var/cache/fedoracommunity/data beaker.session.key = fedoracommunity -beaker.session.secret = ? +beaker.session.secret = <%= fcommBeakerSessionSecret %> beaker.cache.type = ext:memcached beaker.cache.url = memcached1;memcached2 ___ Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
[Change Request] Bodhi masher update on releng2 and relepel1
Hey Guys,

I'd like to do a bodhi masher upgrade on releng2 and relepel1. There are no critical changes for the app1-6 bodhi instances, so there is no need to upgrade those just yet. Affected code paths for the releng2/relepel1 bodhi mashers:

- Fix a bug that would cause duplicate update IDs across Fedora 10/11 (#515853)
  https://fedorahosted.org/bodhi/changeset/ff2fa4f45b980f0ccbabb0dd40b213f25468f374
- Fixes a koji session timeout bug that has been lurking for a while
  https://fedorahosted.org/bodhi/changeset/da86a7a44fecb097ee1ffc40ba9614a04594cd31
- Remove some noisy debugging statements

Thanks,

luke
Re: [Change Request] Bodhi masher update on releng2 and relepel1
On Thu, Aug 13, 2009 at 10:20:30AM -0700, Toshio Kuratomi wrote:
> On 08/13/2009 10:00 AM, Luke Macken wrote:
> > Hey Guys,
> >
> > I'd like to do a bodhi masher upgrade on releng2 and relepel1. There are no
> > critical changes for the app1-6 bodhi instances, so there is no need to upgrade
> > those just yet. Effected code paths for releng2/relepel1 bodhi mashers:
> >
> > - Fix a bug that would cause duplicate update IDs across Fedora 10/11 (#515853)
> >   https://fedorahosted.org/bodhi/changeset/ff2fa4f45b980f0ccbabb0dd40b213f25468f374
> > - Fixes koji session timeout bug that has been lurking for a while
> >   https://fedorahosted.org/bodhi/changeset/da86a7a44fecb097ee1ffc40ba9614a04594cd31
> > - Remove some noisy debugging statements
>
> +1
>
> If this breaks it can be reverted on releng2/relepel1 without an outage
> for packagers correct?

Correct, it won't affect the web interface or packagers.

luke
Re: Introduction
On Thu, Aug 27, 2009 at 12:12:11PM -0700, Toshio Kuratomi wrote: > On 08/26/2009 09:35 AM, Christian Del Pino wrote: > > Hello everyone, > > > > My name is Chris. I am looking to contribute my skills and time to the > > Fedora Infrastructure group. > > > > I started using Linux back in 1996 while in college. In 2005, I became a > > system administrator at a small company helping them build, deploy, and > > support Linux based laptops for use in capturing clinical data. Other > > tasks included projects to help the company scale our operations. > > > > I have a Bachelor's in Computer Science, and I am currently pursuing a > > Master's in Information Systems, with a couple of semesters to go. I > > also became a Red Hat Certified Technician back in 2004. > > > > My skills include: > > > > Bash scripting > > MySQL > > C++ > > HTML > > CSS > > Some Python > > Some PostgreSQL > > Started learning some Django. > > > > I want to be involved in the Fedora community by helping out where I > > can, and also learn some more new skills along the way. > > > > If you're interested in Django, one project that started off purely in > Fedora but has become more of its own upstream is transifex > (http://www.transifex.org, #transifex on irc.freenode.net). diegobz, > glezos, and ivazquez are all Fedora community members as well as > transifex hackers. Our particular transifex instance is at: > https://translate.fedoraproject.org > > Most of the rest of our web apps are written for the TurboGears 1 > framework. We're going to port them to TG2 at some point in the > indefinite future (probably when someone volunteers to make it their pet > project :-). Hey Christian, welcome! 
As we have already been talking on IRC about various things, I thought I'd chime in with a list of some of the webapps that we've developed in-house as well:

https://fedoraproject.org/wiki/Infrastructure/Services

luke
Re: memcached opinions
On Wed, Jul 01, 2009 at 05:50:12PM -0700, Toshio Kuratomi wrote: > On 07/01/2009 05:39 PM, Mike McGrath wrote: > > On Wed, 1 Jul 2009, Stephen John Smoogen wrote: > > > >> On Wed, Jul 1, 2009 at 6:08 PM, Mike McGrath wrote: > >>> On Wed, 1 Jul 2009, Stephen John Smoogen wrote: > >>> > On Wed, Jul 1, 2009 at 3:33 PM, Mike McGrath wrote: > > I'm not sure if we have any memcached experience on the list but I > > thought > > I'd ask. Can anyone explain this: > > > > http://pastebin.ca/1481219 > > > > Notice how memcached1 has a much higher hit rate and memcached2 has a > > much > > lower hit rate? > > > > The time for memcached1 is 5x less than memcached2 being up. That can > have an effect on caching as right after a system comes up its rates > are usually much higher and then over time fall off (iirc). I think it > would take bringing both up at the same time to figure out if there is > a true disparity over caching. > > >>> > >>> I thought that exact same thing :) > >>> > >>> memcahed1: > >>> STAT uptime 9143 > >>> STAT get_hits 311736 > >>> STAT get_misses 11255 > >>> > >>> memcached2: > >>> STAT uptime 9144 > >>> STAT get_hits 49679 > >>> STAT get_misses 6 > >>> > >> > >> Now that shows something not kosher. My guess is some app is not > >> talking to both? What apps use memcached for what? > >> > > > > I was just talking to ricky about this a bit in IRC. So here's the scoop. > > > > We've got mediawiki using memcached for a couple of things, including > > session data (which is weird and wrong but fast). > > > > The recent addition to the group is Fedora community, specifically in it's > > implementation of beaker. I'm going to get ahold of luke tomorrow to > > verify and test some stuff but I think this line in the config: > > > > beaker.cache.url = memcached1;memcached > > > > I'm not sure how beaker reads that, but I suspect it might be only sending > > information to memcached1 and ignoring memcached2 altogether. 
> > If this theory holds it'd explain why memcached1 not only has a higher
> > request rate but also a higher hit rate because I suspect fedoracommunity
> > requests some of the same info over and over again compared to the wiki
> > which probably has a broader data pool it pulls from.
>
> easy test: reverse that:
>
> beaker.cache.url = memcached2;memcached1
>
> and see if the cache hit ratio reverses itself.
>
> -Toshio

So, I've been poking at this a little bit lately, and I'm thinking there are some problems with the way Fedora Community is utilizing its Beaker cache & memcached. That something is wrong with the caching setup becomes obvious when playing with the Bugzilla grid, which *should* cache the first 5 pages and be very snappy, as it is in my local instance. However, that is not the case, which makes me think it's not hitting our memcached servers.

I wrote up a little test script on app1::

    from beaker.cache import CacheManager

    memcached1 = CacheManager(type='ext:memcached', url='memcached1', lock_dir='.')
    memcached2 = CacheManager(type='ext:memcached', url='memcached2', lock_dir='.')

    def createfunc(*args):
        print "createfunc(%s)" % (args,)

    def get_value(cache, value):
        cache_1 = memcached1.get_cache(cache)
        cache_2 = memcached2.get_cache(cache)
        result1 = cache_1.get_value(key=value, createfunc=createfunc)
        result2 = cache_2.get_value(key=value, createfunc=createfunc)
        print "memcached1[%s][%s] = %s" % (cache, value, result1)
        print "memcached2[%s][%s] = %s" % (cache, value, result2)
        return result1, result2

    get_value('fedoracommunity_alerts_global', 'today')
    get_value('bodhi', 'dashboard_None')

Which produces::

    memcached1[fedoracommunity_alerts_global][today] = None
    memcached2[fedoracommunity_alerts_global][today] = None
    memcached1[bodhi][dashboard_None] = None
    memcached2[bodhi][dashboard_None] = None

I also tried using netcat to query for these by hand, to no avail. So, it looks like we need to look a bit deeper into what is going on here.
Either I'm Doing It Wrong with Beaker, or we're hitting a bug somewhere.

luke