Re: puppet and git (README)

2008-07-09 Thread Ricky Zhou
On 2008-07-08 04:20:25 PM, Mike McGrath wrote:
 Our repos are all in one git repo (minus private, still working on
 that).
Hey, the private repo is now in git as well :-)

Here's how to use it:

Old                              | New
---------------------------------|------------------------
cvs -d /cvs/private co private   | git clone /git/private
chmod 700 private                | chmod 700 private
cvs commit                       | git commit
make install                     | git push
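Put together, the new round trip looks like this (sketched with throwaway paths so it can run anywhere; on the actual server the clone source is /git/private):

```shell
# Stand-in for /git/private: a local bare repo created just for this sketch.
set -e
REPO=$(mktemp -d)/private.git
git init -q --bare "$REPO"

WORK=$(mktemp -d)
cd "$WORK"
git clone -q "$REPO" private          # was: cvs -d /cvs/private co private
chmod 700 private                     # unchanged
cd private
git config user.email "you@example.com"
git config user.name  "You"
echo 'example = 1' > example.conf
git add example.conf
git commit -q -m "add example.conf"   # was: cvs commit
git push -q origin HEAD               # was: make install
```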

I just used git-cvsimport for the conversion, and modified
/usr/local/bin/syncPuppetMaster.sh (does this need to be put in puppet?)
to pull and rsync from there, much as before.  Rsyncing isn't
completely ideal, but since the configs and manifests are in the same
repo and need to be merged with the normal puppet stuff, it was the only
way I could think of, short of splitting the configs and manifests into
separate repos.

Feel free to restructure further - this was just a "get it working
first" attempt.

The old private repo was moved to /cvs/private.disabled

Thanks,
Ricky



___
Fedora-infrastructure-list mailing list
Fedora-infrastructure-list@redhat.com
https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list


Re: FAS instance on publictest10?

2008-07-09 Thread Robin Norwood
On Tue, 8 Jul 2008 20:51:04 -0400
Ricky Zhou [EMAIL PROTECTED] wrote:

 On 2008-07-08 08:48:57 PM, Luke Macken wrote:
  Hmmm, so what is the difference between the pt10 and pt9
  deployments? We've been testing the bleeding-edge python-fedora
  package on pt10, so if that is causing the issues we definitely
  need to track it down asap.
 I manually updated python-fedora on pt9 to test that as well - so far,
 so good, although I still never figured out what was causing the
 NoSuchTable errors on publictest10.

Well, this morning things started misbehaving on pt9, too...just
differently. :-(

From a web browser, if I go to:
http://publictest9.fedoraproject.org/accounts/

and log in, I get redirected to publictest10!  If I browse back to
pt9, it appears I did log in successfully.  The equivalent thing
happens if I log out from pt9.  

Looking at the /etc/fas.cfg on pt9, this is probably the cause of that:

samadhi.baseurl = 'http://publictest10.fedoraproject.org'

I didn't fix it because I haven't familiarized myself with your puppet
stuff yet.
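If that is the culprit, the fix is presumably just pointing the line back at pt9 itself (an untested guess, in case someone with puppet access wants to try it):

```
# /etc/fas.cfg on publictest9 -- assumed correction, not applied:
samadhi.baseurl = 'http://publictest9.fedoraproject.org'
```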


Second, a few minutes ago I was getting 500 errors from
that fas instance when logging in from my app, but now I can't reproduce
it. :-/

Looking at the logs, it looks like the 500 errors are related to this
error:

[Wed Jul 09 14:59:40 2008] [error] raise exceptions.DBAPIError.instance(statement, parameters, e, connection_invalidated=is_disconnect)
[Wed Jul 09 14:59:40 2008] [error] ProgrammingError: (ProgrammingError) server closed the connection unexpectedly
[Wed Jul 09 14:59:40 2008] [error] \tThis probably means the server terminated abnormally
[Wed Jul 09 14:59:40 2008] [error] \tbefore or while processing the request.
[Wed Jul 09 14:59:40 2008] [error]  'SELECT session.id AS session_id, session.data AS session_data, session.expiration_time AS session_expiration_time \\nFROM session \\nWHERE session.id = %(param_1)s' {'param_1': '620e1904fbcde17dacb96779e77ec9db22348554'}


But, like I said, they aren't happening now.  Curious.  Maybe it just
needed a second cup of coffee.
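That "server closed the connection unexpectedly" error is the classic symptom of a pooled connection going stale (e.g. after a postgres restart). One common mitigation - sketched here in generic Python with made-up names, not FAS's or SQLAlchemy's actual machinery - is to retry the query once on a fresh connection:

```python
class StaleConnectionError(Exception):
    """Stand-in for a DBAPI error with the connection invalidated."""

def run_with_retry(query, get_connection, retries=1):
    """Run query(conn); on a stale connection, grab a fresh one and retry.

    `query` and `get_connection` are hypothetical callables standing in
    for the real session/engine plumbing.
    """
    for attempt in range(retries + 1):
        conn = get_connection()
        try:
            return query(conn)
        except StaleConnectionError:
            if attempt == retries:
                raise
            # connection was invalidated; the loop fetches a fresh one
```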

Thanks,

-RN

-- 
Robin Norwood
Red Hat, Inc.

The Sage does nothing, yet nothing remains undone.
-Lao Tzu, Te Tao Ching



[Fwd: Cron [EMAIL PROTECTED] /var/lib/pgsql/vacstat.py check]

2008-07-09 Thread Toshio Kuratomi
Since the plan is to move koji to db3 within the week I'd like to hold 
off on this.  The dump and reload to move to the new server should be 
more effective than a manual vacuum.


-Toshio

 Original Message 
Subject: Cron [EMAIL PROTECTED] /var/lib/pgsql/vacstat.py check
Date: Wed, 9 Jul 2008 13:20:05 GMT
From: [EMAIL PROTECTED] (Cron Daemon)
To: [EMAIL PROTECTED]

Traceback (most recent call last):
  File "/var/lib/pgsql/vacstat.py", line 650, in ?
    Commands[command](opts)
  File "/var/lib/pgsql/vacstat.py", line 150, in test_all
    test_transactions(opts)
  File "/var/lib/pgsql/vacstat.py", line 147, in test_transactions
    raise XIDOverflowWarning, '\n'.join(overflows)
__main__.XIDOverflowWarning: Used over half the transaction ids for
koji.  Please schedule a vacuum of that entire database soon:

  sudo -u postgres vacuumdb -zvd koji
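For anyone curious what the warning is about: postgres transaction ids wrap around after roughly 2^31, and vacstat.py evidently warns once a database has burned through half of them. A toy version of that check - the limit and threshold are my assumptions, not lifted from the real script:

```python
XID_LIMIT = 2 ** 31  # usable transaction ids before wraparound

def needs_vacuum(xid_age, threshold=0.5):
    """Warn when a database has used more than `threshold` of the XID space.

    xid_age would come from something like:
        SELECT age(datfrozenxid) FROM pg_database WHERE datname = 'koji';
    """
    return xid_age / XID_LIMIT > threshold
```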






Re: [Fwd: Cron [EMAIL PROTECTED] /var/lib/pgsql/vacstat.py check]

2008-07-09 Thread Mike Bonnet
On Wed, 2008-07-09 at 11:33 -0700, Toshio Kuratomi wrote:
 Since the plan is to move koji to db3 within the week I'd like to hold 
 off on this.  The dump and reload to move to the new server should be 
 more effective than a manual vacuum.

Note that you still need to analyze all the tables after a dump/reload
to regenerate database statistics.  Otherwise postgres may choose some
less-than-optimal query plans.
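Concretely, once the dump/reload lands on db3, a statistics-only pass along these lines should do it (standard postgres tooling, not a command taken from our scripts):

```shell
# run on the new server; regenerates planner statistics without a full vacuum
sudo -u postgres psql -d koji -c 'ANALYZE VERBOSE;'
```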

 -Toshio
 
  Original Message 
 Subject: Cron [EMAIL PROTECTED] /var/lib/pgsql/vacstat.py check
 Date: Wed, 9 Jul 2008 13:20:05 GMT
 From: [EMAIL PROTECTED] (Cron Daemon)
 To: [EMAIL PROTECTED]
 
 Traceback (most recent call last):
File /var/lib/pgsql/vacstat.py, line 650, in ?
  Commands[command](opts)
File /var/lib/pgsql/vacstat.py, line 150, in test_all
  test_transactions(opts)
File /var/lib/pgsql/vacstat.py, line 147, in test_transactions
  raise XIDOverflowWarning, '\n'.join(overflows)
 __main__.XIDOverflowWarning: Used over half the transaction ids for 
 koji.  Please schedule a vacuum of that entire database soon:
sudo -u postgres vacuumdb -zvd koji
 
 


Re: PHX Jul 10,11

2008-07-09 Thread Stephen John Smoogen
On Wed, Jul 9, 2008 at 5:08 PM, Mike McGrath [EMAIL PROTECTED] wrote:
 Hey all, I'll be in Phoenix on July 10th and 11th.  I'll be installing two
 new application servers and, at last, db3!  I'll be sending some outage
 notifications, though I'm not expecting any long outages in the next
 couple of days - 10-20 minutes tops.  I'll also probably be out most of
 Monday recovering; it's a vacation day for me but I'll be around a bit.


Ah... so close to NM, yet so far. Good luck... worst comes to worst, I
could try to drive overnight to help out ;).







-- 
Stephen J Smoogen. -- BSD/GNU/Linux
How far that little candle throws his beams! So shines a good deed
in a naughty world. = Shakespeare. The Merchant of Venice
