[Spacewalk-list] Server not creating/updating /var/cache/rhn after upgrade

2018-02-20 Thread Glennie, Jonathan - 0443 - MITLL
Hi All-

 

I'm having an interesting issue where none of our software channels are
creating any repomd.xml files.  There is a long history with this server,
which may or may not be relevant to this problem, but it's worth mentioning
that this used to be a physical machine running 2.6 that we upgraded to 2.7.
After running into various issues, we migrated the server to a VM running
2.7 using the procedure discussed here
https://www.redhat.com/archives/spacewalk-list/2013-November/msg00070.html

 

I have tried manually restarting taskomatic and also using the spacecmd
commands to regenerate the yum repodata, but nothing is being created in
/var/cache/rhn.  The repodata directory is completely missing.  I checked
the taskomatic logs and I don't see any errors that would point to an issue;
I'm not sure where else to check.

 

When running a yum list from a client, we get an error that the repomd.xml
file (not surprisingly) couldn't be found.  Out of curiosity, I tried
manually copying the old /var/cache/rhn/repodata directory over from the
old server, and after doing so I can get past the repomd.xml error, but
obviously this isn't good if these repo files aren't updating.

 



___
Spacewalk-list mailing list
Spacewalk-list@redhat.com
https://www.redhat.com/mailman/listinfo/spacewalk-list

Re: [Spacewalk-list] Server not creating/updating /var/cache/rhn after upgrade

2018-02-20 Thread Brian Long
I've had similar issues on our SW server that has been upgraded over time.
Take a look here:
https://access.redhat.com/solutions/19303

1. cd /var/cache/rhn/repodata
2. rm -rf <channel-label>  (or mv <channel-label> /scratch to keep a backup)
3. /root/regen-repodata.py -a --url https://localhost/rpc/api
   (log in as admin with your password when prompted)
4. service taskomatic restart
5. Log in to the web UI as admin.  Click the Admin tab, then Task Schedules,
   then click channel-repodata-bunch in the right-hand column.
6. Click Single Run Schedule to start the job.
7. tail -50f /var/log/rhn/rhn_taskomatic_daemon.log to watch progress.
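The rm/mv step above can be scripted so the old metadata is kept rather than
deleted; a minimal sketch, assuming /var/cache/rhn/repodata holds one
subdirectory per channel and /scratch is writable (both paths come from the
steps above):

```shell
#!/bin/sh
# Move every per-channel repodata directory aside instead of deleting it,
# so the old metadata can be restored if regeneration fails.
backup_repodata() {
    cache_dir="$1"    # e.g. /var/cache/rhn/repodata
    scratch_dir="$2"  # e.g. /scratch
    mkdir -p "$scratch_dir"
    for channel in "$cache_dir"/*/; do
        [ -d "$channel" ] || continue
        mv "$channel" "$scratch_dir/"
    done
}
```

Called as `backup_repodata /var/cache/rhn/repodata /scratch`, after which
regen-repodata.py can rebuild the metadata from a clean directory.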

If the tail of the taskomatic log shows no output for more than ten minutes
but you have not yet seen metadata regenerated for every channel, repeat
steps 4-6.  The log will show the following warning, but it should continue
generating the metadata for the missing channels:

INFO   | jvm 1| 2018/01/05 15:16:36 | 2018-01-05 15:16:36,189
[Thread-42] WARN  com.redhat.rhn.taskomatic.core.SchedulerKernel - Number
of interrupted runs: 1

To verify metadata has been properly generated, login to a host and run the
following commands:

yum clean metadata
yum repolist

For each repo, you should see a positive integer in the far-right (status)
column.  If you see a 0, the metadata for that channel was not generated.
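That zero-count check can be automated by filtering the `yum repolist`
output; a rough sketch (the exact column layout varies between yum versions,
so treat the parsing as an approximation):

```shell
# Print the repo id of every repo whose package count (last column) is 0,
# i.e. whose metadata was likely never generated.
# Reads `yum repolist` output on stdin, skipping the header and footer lines.
check_repolist() {
    awk '$1 != "repo" && $1 !~ /^repolist/ && NF >= 2 &&
         $NF ~ /^[0-9]/ && $NF+0 == 0 { print $1 }'
}
```

Usage: `yum repolist | check_repolist` prints the repos that still need
regeneration.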


Even with all these steps, we ran into other issues and I'm in the process
of rebuilding our SW server.

/Brian/

On Tue, Feb 20, 2018 at 2:34 PM, Glennie, Jonathan - 0443 - MITLL <
jrgle...@ll.mit.edu> wrote:

> Hi All-
>
>
>
> I’m having an interesting issue where none of our software channels are
> creating any repomd.xml files.  There is a long history with this server,
> which may or may not be relevant to this problem, but it’s worth mentioning
> that this used to be a physical machine running 2.6 that we upgraded to
> 2.7.  After running into various issues, we migrated the server to a VM
> running 2.7 using the procedure discussed here https://www.redhat.com/
> archives/spacewalk-list/2013-November/msg00070.html
>
>
>
> I have tried manually restarting taskomatic and also using the spacecmd
> commands to regenerate the yum repodata, but nothing is being created in
> /var/cache/rhn.  The repodata directory is completely missing.  I checked
> the taskomatic logs and I don’t see any errors that would point to an
> issue; I’m not sure where else to check.
>
>
>
> When running a yum list from a client, we get an error that the repomd.xml
> file (not surprisingly) couldn’t be found.  Out of curiosity, I tried
> manually copying the old /var/cache/rhn/repodata directory over from
> the old server, and after doing so I can get past the repomd.xml error, but
> obviously this isn’t good if these repo files aren’t updating.
>
>
>

Re: [Spacewalk-list] Server not creating/updating /var/cache/rhn after upgrade

2018-02-20 Thread Glennie, Jonathan - 0443 - MITLL
Thanks!  I had been using the spacecmd softwarechannel_regenerateyumcache 
command and trying to use the bunch schedule in the web UI, but it wasn’t until 
I read your response that I realized these were two different things.  
Apparently, I had to run the command in spacecmd and THEN schedule the task 
bunch in the web UI to get it to actually do what I wanted.  It’s generating 
all of the repo lists now, and I was able to verify with one of the clients that 
it can now see patches.  The web UI still shows no updates available for any 
given client; however, I’m wondering if the task just needs to complete first 
before it will update that view.
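For the archives, the sequence that worked might look roughly like this (the
channel label is a placeholder, and the second step still happens in the web
UI; this is a reconstruction from the thread, not a verified recipe):

```shell
# Step 1: flag the channel's yum metadata for regeneration from spacecmd.
# "centos7-x86_64" is a placeholder -- list real labels with
# `spacecmd softwarechannel_list`.
spacecmd -u admin -- softwarechannel_regenerateyumcache centos7-x86_64

# Step 2: in the web UI, go to Admin -> Task Schedules ->
# channel-repodata-bunch and click Single Run Schedule so taskomatic
# actually rebuilds /var/cache/rhn/repodata.
```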

 

From: spacewalk-list-boun...@redhat.com 
[mailto:spacewalk-list-boun...@redhat.com] On Behalf Of Brian Long
Sent: Tuesday, February 20, 2018 3:27 PM
To: spacewalk-list@redhat.com
Subject: Re: [Spacewalk-list] Server not creating/updating /var/cache/rhn after 
upgrade

 

I've had similar issues on our SW server that has been upgraded over time.  
Take a look here:
https://access.redhat.com/solutions/19303

1. cd /var/cache/rhn/repodata
2. rm -rf <channel-label>  (or mv <channel-label> /scratch to keep a backup)
3. /root/regen-repodata.py -a --url https://localhost/rpc/api
   (log in as admin with your password when prompted)
4. service taskomatic restart
5. Log in to the web UI as admin.  Click the Admin tab, then Task Schedules, 
   then click channel-repodata-bunch in the right-hand column.
6. Click Single Run Schedule to start the job.
7. tail -50f /var/log/rhn/rhn_taskomatic_daemon.log to watch progress.

If the tail of the taskomatic log shows no output for more than ten minutes 
but you have not yet seen metadata regenerated for every channel, repeat 
steps 4-6.  The log will show the following warning, but it should continue 
generating the metadata for the missing channels:

INFO   | jvm 1| 2018/01/05 15:16:36 | 2018-01-05 15:16:36,189 [Thread-42] 
WARN  com.redhat.rhn.taskomatic.core.SchedulerKernel - Number of interrupted 
runs: 1

To verify metadata has been properly generated, login to a host and run the 
following commands:

yum clean metadata
yum repolist

For each repo, you should see a positive integer in the far-right (status) 
column.  If you see a 0, the metadata for that channel was not generated.


Even with all these steps, we ran into other issues and I'm in the process of 
rebuilding our SW server.

/Brian/

 

On Tue, Feb 20, 2018 at 2:34 PM, Glennie, Jonathan - 0443 - MITLL 
<jrgle...@ll.mit.edu> wrote:

Hi All-

 

I’m having an interesting issue where none of our software channels are 
creating any repomd.xml files.  There is a long history with this server, which 
may or may not be relevant to this problem, but it’s worth mentioning that this 
used to be a physical machine running 2.6 that we upgraded to 2.7.  After 
running into various issues, we migrated the server to a VM running 2.7 using 
the procedure discussed here 
https://www.redhat.com/archives/spacewalk-list/2013-November/msg00070.html

 

I have tried manually restarting taskomatic and also using the spacecmd 
commands to regenerate the yum repodata, but nothing is being created in 
/var/cache/rhn.  The repodata directory is completely missing.  I checked the 
taskomatic logs and I don’t see any errors that would point to an issue; I’m 
not sure where else to check.

 

When running a yum list from a client, we get an error that the repomd.xml file 
(not surprisingly) couldn’t be found.  Out of curiosity, I tried manually 
copying the old /var/cache/rhn/repodata directory over from the old server, 
and after doing so I can get past the repomd.xml error, but obviously this 
isn’t good if these repo files aren’t updating.

 



 




Re: [Spacewalk-list] Server not creating/updating /var/cache/rhn after upgrade

2018-02-20 Thread Robert Paschedag
On 20 February 2018 at 20:34:36 CET, "Glennie, Jonathan - 0443 - MITLL"
wrote:
>Hi All-
>
> 
>
>I'm having an interesting issue where none of our software channels are
>creating any repomd.xml files.  There is a long history with this
>server,
>which may or may not be relevant to this problem, but it's worth
>mentioning
>that this used to be a physical machine running 2.6 that we upgraded to
>2.7.
>After running into various issues, we migrated the server to a VM
>running
>2.7 using the procedure discussed here
>https://www.redhat.com/archives/spacewalk-list/2013-November/msg00070.html
>
> 
>
>I have tried manually restarting taskomatic and also using the spacecmd
>commands to regenerate the yum repodata, but nothing is being created
>in
>/var/cache/rhn.  The repodata directory is completely missing.  I
>checked
>the taskomatic logs and I don't see any errors that would point to an
>issue;
>I'm not sure where else to check.
>
> 
>
>When running a yum list from a client, we get an error that the
>repomd.xml
>file (not surprisingly) couldn't be found.  Out of curiosity, I tried
>manually
>copying the old /var/cache/rhn/repodata directory over from the old
>server, and after doing so I can get past the repomd.xml error, but
>obviously this isn't good if these repo files aren't updating.  
>
> 

This sounds as if taskomatic is not running the jobs for recreating the repos.

If you have SELinux enabled, please check the SELinux contexts of that
directory, or reset them.
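A minimal sketch of that check/reset, assuming the standard targeted policy
(restorecon ships with policycoreutils):

```shell
# Show the SELinux context currently on the cache directory.
ls -Zd /var/cache/rhn /var/cache/rhn/repodata

# Reset everything under /var/cache/rhn to the contexts the policy expects;
# -R recurses, -v prints each relabelled file.
restorecon -Rv /var/cache/rhn
```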

Does the synchronisation of the repos work?

Robert
