Re: [Spacewalk-list] Random Spacewalk things I've found...
On 01/18/2012 12:29 AM, Ian Forde wrote:
> 4. When using SW 1.6 with PostgreSQL on CentOS 6.2 (just trying to be
> specific here...), every night the "Show differences between profiled
> configuration files and deployed config files scheduled by (none)" job
> runs. This drives the load through the roof. I'm talking about in the
> upper 40's here. Eventually it calms down, but the box gets slammed.
> Is there something that I can do to mitigate this?

Hello,

I have a patch that will probably end up in 1.8 that will spread the jobs out over a period of time. Something to look forward to.

Other than that, you could reduce the frequency at which the job runs (say, once a week at 3am on Sunday morning).

Josh

___
Spacewalk-list mailing list
Spacewalk-list@redhat.com
https://www.redhat.com/mailman/listinfo/spacewalk-list
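[Editor's note: the 1.8 patch Josh mentions isn't shown in this thread. A minimal sketch of the general "spread the jobs out" idea is below; the function name, the 4-hour window, and the use of a hash-derived offset are all assumptions for illustration, not the actual Spacewalk change.]

```python
import hashlib
from datetime import datetime, timedelta

def spread_offset(system_id, window_minutes=240):
    """Stable per-system delay so a nightly job doesn't hit every
    system at the same instant.

    Hashing the system id gives each system the same slot every night,
    spread across a window (4 hours here -- an arbitrary choice).
    """
    digest = hashlib.sha256(str(system_id).encode("utf-8")).hexdigest()
    return timedelta(minutes=int(digest, 16) % window_minutes)

# Example: three systems get three different, but repeatable, start times.
nominal_start = datetime(2012, 1, 18, 3, 0)  # a hypothetical 3am nightly run
for sid in (1000010001, 1000010002, 1000010003):
    print(sid, nominal_start + spread_offset(sid))
```

The point of deriving the offset from the system id (rather than random jitter) is that each system runs its comparison at a predictable time each night, which makes load patterns and debugging reproducible.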
Re: [Spacewalk-list] Random Spacewalk things I've found...
On Wed, Jan 18, 2012 at 10:15:17AM +0100, Jan Pazdziora wrote:
> On Tue, Jan 17, 2012 at 09:29:06PM -0800, Ian Forde wrote:
> >
> > Here are some things that I've found recently...
>
> We might prefer to have these issues tracked in separate posts/threads
> 'cause from the long post we might lose some things.
>
> > (more info on this)
> > I just kickstarted a node, and had it happen again. I logged in, did
> > a 'rhn-profile-sync' successfully. Then I did a 'rhn_check -vv' and
> > got the following back:
> >
> > XMLRPC ProtocolError: /XMLRPC: 500 Internal Server Error>
> >
> > I looked in /var/log/messages on the spacewalk server (I have logging
> > to syslog enabled in postgres for things like this), and saw the
> > following:
> >
> > Jan 17 21:25:05 ordmantell postgres[22246]: [3-1] ERROR: new row for
> > relation "rhnpackageevr" violates check constraint
> > "vn_rhnpackageevr_epoch"
> > Jan 17 21:25:05 ordmantell postgres[22246]: [3-2] CONTEXT: SQL
> > statement "INSERT INTO rhnPackageEvr (id, epoch, version, release,
> > evr) VALUES (nextval('rhn_pkg_evr_seq'), $1 , $2 , $3 , EVR_T( $1 ,
> > $2 , $3 ))"
> > Jan 17 21:25:05 ordmantell postgres[22246]: [3-3] #011PL/pgSQL
> > function "lookup_evr" line 10 at SQL statement
> > Jan 17 21:25:05 ordmantell postgres[22246]: [3-4] #011SQL statement
> > "SELECT LOOKUP_EVR( $1 , $2 , $3 )"
> > Jan 17 21:25:05 ordmantell postgres[22246]: [3-5] #011PL/pgSQL
> > function "lookup_transaction_package" line 20 at SQL statement
> > Jan 17 21:25:05 ordmantell postgres[22246]: [3-6] STATEMENT:
> > Jan 17 21:25:05 ordmantell postgres[22246]: [3-7] #011insert into
> > rhnPackageDeltaElement
> > Jan 17 21:25:05 ordmantell postgres[22246]: [3-8] #011
> > (package_delta_id, transaction_package_id)
> > Jan 17 21:25:05 ordmantell postgres[22246]: [3-9] #011values
> > Jan 17 21:25:05 ordmantell postgres[22246]: [3-10] #011 (9240,
> > Jan 17 21:25:05 ordmantell postgres[22246]: [3-11] #011
> > lookup_transaction_package(E'insert', E'389-ds-base', E'',
> > E'1.2.9.14', E'1.el6', NULL))
> > Jan 17 21:25:05 ordmantell postgres[22246]: [3-12] #011
> >
> > Hope that helps...
>
> Can you try to patch your server_kickstart.py with
>
> diff --git a/backend/server/rhnServer/server_kickstart.py b/backend/server/rhnServer/server_kickstart.py
> index 7ba167b..0eca170 100644
> --- a/backend/server/rhnServer/server_kickstart.py
> +++ b/backend/server/rhnServer/server_kickstart.py
> @@ -580,8 +580,7 @@ def _packages_from_cursor(cursor):
>              # We ignore GPG public keys since they are too weird to schedule
>              # as a package delta
>              continue
> -        result.append((p_name, row['version'], row['release'],
> -                       row['epoch'] or ''))
> +        result.append((p_name, row['version'], row['release'], row['epoch']))
>      return result
>
>  _query_lookup_pending_kickstart_sessions = rhnSQL.Statement("""
>
> restart httpd and see if it fixes the problem for you?

I've pushed this change now anyway.

--
Jan Pazdziora
Principal Software Engineer, Satellite Engineering, Red Hat
Re: [Spacewalk-list] Random Spacewalk things I've found...
On Tue, Jan 17, 2012 at 09:29:06PM -0800, Ian Forde wrote:
>
> Here are some things that I've found recently...

We might prefer to have these issues tracked in separate posts/threads
'cause from the long post we might lose some things.

> (more info on this)
> I just kickstarted a node, and had it happen again. I logged in, did
> a 'rhn-profile-sync' successfully. Then I did a 'rhn_check -vv' and
> got the following back:
>
> XMLRPC ProtocolError: /XMLRPC: 500 Internal Server Error>
>
> I looked in /var/log/messages on the spacewalk server (I have logging
> to syslog enabled in postgres for things like this), and saw the
> following:
>
> Jan 17 21:25:05 ordmantell postgres[22246]: [3-1] ERROR: new row for
> relation "rhnpackageevr" violates check constraint
> "vn_rhnpackageevr_epoch"
> Jan 17 21:25:05 ordmantell postgres[22246]: [3-2] CONTEXT: SQL
> statement "INSERT INTO rhnPackageEvr (id, epoch, version, release,
> evr) VALUES (nextval('rhn_pkg_evr_seq'), $1 , $2 , $3 , EVR_T( $1 ,
> $2 , $3 ))"
> Jan 17 21:25:05 ordmantell postgres[22246]: [3-3] #011PL/pgSQL
> function "lookup_evr" line 10 at SQL statement
> Jan 17 21:25:05 ordmantell postgres[22246]: [3-4] #011SQL statement
> "SELECT LOOKUP_EVR( $1 , $2 , $3 )"
> Jan 17 21:25:05 ordmantell postgres[22246]: [3-5] #011PL/pgSQL
> function "lookup_transaction_package" line 20 at SQL statement
> Jan 17 21:25:05 ordmantell postgres[22246]: [3-6] STATEMENT:
> Jan 17 21:25:05 ordmantell postgres[22246]: [3-7] #011insert into
> rhnPackageDeltaElement
> Jan 17 21:25:05 ordmantell postgres[22246]: [3-8] #011
> (package_delta_id, transaction_package_id)
> Jan 17 21:25:05 ordmantell postgres[22246]: [3-9] #011values
> Jan 17 21:25:05 ordmantell postgres[22246]: [3-10] #011 (9240,
> Jan 17 21:25:05 ordmantell postgres[22246]: [3-11] #011
> lookup_transaction_package(E'insert', E'389-ds-base', E'',
> E'1.2.9.14', E'1.el6', NULL))
> Jan 17 21:25:05 ordmantell postgres[22246]: [3-12] #011
>
> Hope that helps...
Can you try to patch your server_kickstart.py with

diff --git a/backend/server/rhnServer/server_kickstart.py b/backend/server/rhnServer/server_kickstart.py
index 7ba167b..0eca170 100644
--- a/backend/server/rhnServer/server_kickstart.py
+++ b/backend/server/rhnServer/server_kickstart.py
@@ -580,8 +580,7 @@ def _packages_from_cursor(cursor):
             # We ignore GPG public keys since they are too weird to schedule
             # as a package delta
             continue
-        result.append((p_name, row['version'], row['release'],
-                       row['epoch'] or ''))
+        result.append((p_name, row['version'], row['release'], row['epoch']))
     return result

 _query_lookup_pending_kickstart_sessions = rhnSQL.Statement("""

restart httpd and see if it fixes the problem for you?

--
Jan Pazdziora
Principal Software Engineer, Satellite Engineering, Red Hat
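[Editor's note: to make the one-line change above easier to follow, here is a minimal sketch of the before/after behavior. The helper functions are illustrative only, not Spacewalk code; the key point, visible in the logged constraint violation, is that `row['epoch'] or ''` turned an absent epoch into an empty string, which the vn_rhnpackageevr_epoch check rejected, whereas passing the value through lets None reach the database as SQL NULL.]

```python
def epoch_old(row):
    # Pre-patch behavior: a missing epoch (None) is coerced to '',
    # and the INSERT into rhnPackageEvr then trips the check constraint.
    return row['epoch'] or ''

def epoch_new(row):
    # Post-patch behavior: None stays None and becomes SQL NULL,
    # which the constraint accepts.
    return row['epoch']

# The package from Ian's log, which has no epoch:
row = {'name': '389-ds-base', 'version': '1.2.9.14',
       'release': '1.el6', 'epoch': None}
print(repr(epoch_old(row)))  # '' -> constraint violation
print(repr(epoch_new(row)))  # None -> NULL, accepted
```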
[Spacewalk-list] Random Spacewalk things I've found...
I *love* Spacewalk. I really do. There is no *but*. In a past life, I deployed Satellite for RH at customer sites, so the fact that I'm still using the codebase years later says a lot about how I feel about it. Here are some things that I've found recently...

Kickstart...

I recently had a situation where I was testing kickstart profiles. I created 10 DNS entries for the hosts and was creating/destroying them at will. I found some interesting things out...

1. If the VM doesn't have the XML file on the virtualization host, but the filename used by the profile DOES exist, the kickstart will fail. Silently. (I had some NFS issues where the files were owned by root:root, so deleting them from virt-manager didn't actually work; this may be a libvirt issue in that it reports the VM as deleted even though it's not.)

2. I had to put in a custom begin script to generate the hostname/IP address for static entries (more on that later). But the node still registers with the name localhost.localdomain.

3. After the packages are installed and the post section has completed, the node usually reboots, and I see the checkbox in the "Register System to Spacewalk" section. But it doesn't get past that. Initially, I thought it was stuck at "Deploy Configuration Files", but I've since kickstarted with the option to sync a package profile from another host, and it never gets there... Not sure what's happening there.

(more info on this) I just kickstarted a node and had it happen again. I logged in and ran 'rhn-profile-sync' successfully.
Then I did a 'rhn_check -vv' and got the following back:

XMLRPC ProtocolError: /XMLRPC: 500 Internal Server Error>

I looked in /var/log/messages on the spacewalk server (I have logging to syslog enabled in postgres for things like this), and saw the following:

Jan 17 21:25:05 ordmantell postgres[22246]: [3-1] ERROR: new row for relation "rhnpackageevr" violates check constraint "vn_rhnpackageevr_epoch"
Jan 17 21:25:05 ordmantell postgres[22246]: [3-2] CONTEXT: SQL statement "INSERT INTO rhnPackageEvr (id, epoch, version, release, evr) VALUES (nextval('rhn_pkg_evr_seq'), $1 , $2 , $3 , EVR_T( $1 , $2 , $3 ))"
Jan 17 21:25:05 ordmantell postgres[22246]: [3-3] #011PL/pgSQL function "lookup_evr" line 10 at SQL statement
Jan 17 21:25:05 ordmantell postgres[22246]: [3-4] #011SQL statement "SELECT LOOKUP_EVR( $1 , $2 , $3 )"
Jan 17 21:25:05 ordmantell postgres[22246]: [3-5] #011PL/pgSQL function "lookup_transaction_package" line 20 at SQL statement
Jan 17 21:25:05 ordmantell postgres[22246]: [3-6] STATEMENT:
Jan 17 21:25:05 ordmantell postgres[22246]: [3-7] #011insert into rhnPackageDeltaElement
Jan 17 21:25:05 ordmantell postgres[22246]: [3-8] #011  (package_delta_id, transaction_package_id)
Jan 17 21:25:05 ordmantell postgres[22246]: [3-9] #011values
Jan 17 21:25:05 ordmantell postgres[22246]: [3-10] #011  (9240,
Jan 17 21:25:05 ordmantell postgres[22246]: [3-11] #011   lookup_transaction_package(E'insert', E'389-ds-base', E'', E'1.2.9.14', E'1.el6', NULL))
Jan 17 21:25:05 ordmantell postgres[22246]: [3-12] #011

Hope that helps...

Back to the custom begin script - all it does is parse /proc/cmdline to get the hostname, look it up in DNS, and if it's found, use that hostname/IP with the current netmask and default route. If it's not found, use the address it picked up from DHCP. It's not in any way pretty, error checking is minimal, and it isn't very generalized, so I'm somewhat reluctant to share it... (plus the fact that it's a little ugly!) ;)

Other random stuff...

4. When using SW 1.6 with PostgreSQL on CentOS 6.2 (just trying to be specific here...), every night the "Show differences between profiled configuration files and deployed config files scheduled by (none)" job runs. This drives the load through the roof - I'm talking load averages in the upper 40s here. Eventually it calms down, but the box gets slammed. Is there something I can do to mitigate this?

5. Even with selinux in permissive mode (and targeted policy) on both the Spacewalk VM and the virtualization host, I keep getting s0 at the end of the security labels on files that I've deployed from configuration channels. I redeploy, then they come back. Not sure what's happening here. I suppose I could always relabel all of the nodes via 'touch /.autorelabel' and reboot them, but I'd rather not...

Just some food for thought...

-Ian
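[Editor's note: the actual begin script wasn't posted, so here is a rough sketch of the logic Ian describes: parse /proc/cmdline for a hostname, look it up in DNS, and fall back to the DHCP-assigned identity if the lookup fails. The `hostname=` boot argument and the function names are assumptions for illustration.]

```python
import socket

def hostname_from_cmdline(cmdline):
    """Pull a hostname=... argument out of the kernel command line, if any."""
    for token in cmdline.split():
        if token.startswith("hostname="):
            return token.split("=", 1)[1]
    return None

def pick_identity(cmdline, dhcp_name, dhcp_ip):
    """Prefer the DNS-registered name/IP; otherwise keep what DHCP handed out."""
    name = hostname_from_cmdline(cmdline)
    if name:
        try:
            return name, socket.gethostbyname(name)
        except socket.error:
            pass  # not found in DNS: fall back to the DHCP identity
    return dhcp_name, dhcp_ip

# In a real %pre script the command line would come from the booted kernel:
#     cmdline = open('/proc/cmdline').read()
cmdline = "initrd=initrd.img ks=http://spacewalk/ks hostname=node1.example.com"
print(hostname_from_cmdline(cmdline))  # -> node1.example.com
```

With something along these lines, the registered-as-localhost.localdomain symptom in item 2 would come down to whether the chosen hostname is actually applied before rhnreg_ks runs.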