What version of 7 are you running? (including kernel #?)

On Fri, Feb 14, 2020 at 11:17 AM Ezequiel Sozzi <soz...@gmail.com> wrote:
> Hi Paul,
>
> Versions should not be the problem; I'm managing almost 4000 servers with
> Spacewalk, and 35% are CentOS 6 while the other 65% are CentOS 7.
> Have you tried running rhn_check -vvvv from the client? That could give
> you more information.
>
> BR,
>
> On Fri, Feb 14, 2020 at 13:12, Paul Greene (paul.greene...@gmail.com) wrote:
>
>> Ezequiel,
>>
>> I tried it, but it didn't seem to do anything. 😬
>> These systems have no connection to the internet; our repositories are
>> all internal to the network (one repo for base, one for updates, and one
>> for EPEL), and they already have all the latest updates anyway, so there
>> was nothing to update.
>> Not sure where to go with this.
>> Just to add to my second post: older versions of CentOS 7 aren't having
>> issues, and there are many systems still on CentOS 6 that don't have any
>> issues either. That leads me to believe that something about the
>> differences between OS versions is the root of the problem.
>>
>> Paul
>>
>> On Thu, Feb 13, 2020 at 7:23 PM Ezequiel Sozzi <soz...@gmail.com> wrote:
>>
>>> Hi Paul,
>>>
>>> This issue is more common than everybody thinks. To fix it, we run
>>> the following commands on the client side:
>>>
>>> Disable all the plugins, so that rhnplugin is disabled:
>>> sed -i 's/plugins=1/plugins=0/g' /etc/yum.conf
>>>
>>> Disable all the external repositories:
>>> yum-config-manager --disable \*
>>>
>>> Re-enable all the plugins, so that rhnplugin is enabled again:
>>> sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf
>>>
>>> Update all the packages related to rpm, rhn, and yum:
>>> yum update rpm* rhn* yum* -y
>>>
>>> This fixes the issue, at least in my experience. Hope this helps.
>>>
>>> BR,
>>>
>>> Ezequiel
>>>
>>> On Thu, Feb 13, 2020 at 7:26 PM, Paul Greene <paul.greene...@gmail.com> wrote:
>>>
>>>> I have a Spacewalk 2.9 server with CentOS 7 clients.
When I run a
>>>> scheduled remote command on 50 systems, usually about half of the
>>>> systems get marked as "failed" with the error "Invalid function call
>>>> attempted (code 6)".
>>>>
>>>> They all have the same configuration, and every line in the remote
>>>> command runs just fine from a command prompt. If I go into a system
>>>> that has been marked "failed" and manually verify whether the command
>>>> did what it was supposed to do, many times it actually did succeed but
>>>> was still marked "failed". And some did in fact fail.
>>>>
>>>> How can I address this error and get rid of the false "failed" messages?
>>>>
>>>> I looked in /var/log/up2date on the clients that failed and see only
>>>> these messages at the time the scheduled task failed:
>>>>
>>>> up2date updateLoginfo() login info
>>>> up2date logging into up2date server
>>>> up2date successfully retrieved authentication token from up2date server
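For reference, the client-side steps Ezequiel describes (together with the verbose rhn_check he suggests later in the thread) could be collected into one small script, roughly as below. The DRY_RUN wrapper is my addition for safety, not part of the original advice; with DRY_RUN=1 (the default here) the commands are only printed, not executed.

```shell
#!/bin/sh
# Sketch of the client-side cleanup described in the thread
# (CentOS Spacewalk client; assumes yum-utils is installed for
# yum-config-manager). DRY_RUN is an added safety switch.
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"        # dry run: print the command only
    else
        "$@"               # real run: execute it
    fi
}

# Optional first step: verbose client check for more diagnostic output.
run rhn_check -vvvv

# 1. Disable all yum plugins (this disables rhnplugin too).
run sed -i 's/plugins=1/plugins=0/g' /etc/yum.conf

# 2. Disable every configured external repository.
run yum-config-manager --disable \*

# 3. Re-enable the plugins so rhnplugin is active again.
run sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf

# 4. Refresh the client-side tooling from the Spacewalk channels.
run yum update -y 'rpm*' 'rhn*' 'yum*'
```

Set DRY_RUN=0 to actually apply the steps; as written it just echoes each command so you can review the sequence first.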
_______________________________________________
Spacewalk-list mailing list
Spacewalk-list@redhat.com
https://www.redhat.com/mailman/listinfo/spacewalk-list