On Tue, May 24, 2016 at 10:22:16AM +0200, David Caro wrote:
> On 05/24 11:07, Amit Aviram wrote:
> > Hi.
> > For the last day I am getting this error over and over again from jenkins:
> > 
> > Start: yum install
> > 07:23:55 ERROR: Command failed. See logs for output.
> > 07:23:55  # /usr/bin/yum-deprecated --installroot
> >   /var/lib/mock/epel-7-x86_64-cc6e9a99555654260f7f229c124a6940-31053/root/
> >   --releasever 7 install @buildsys-build --setopt=tsflags=nocontexts
> > 07:23:55 WARNING: unable to delete selinux filesystems
> >   (/tmp/mock-selinux-plugin.3tk4zgr4): [Errno 1] Operation not permitted:
> >   '/tmp/mock-selinux-plugin.3tk4zgr4'
> > 07:23:55 Init took 3 seconds
> > 
> > 
> > (see http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/2026/)
> > 
> > 
> > This fails the job, so I get -1 from Jenkins CI for my patch.
> 
> 
> That's not what's failing the job, it's just a warning; the failure happens
> earlier, when installing the chroot:
> 
> 07:23:53 Start: yum install
> 07:23:55 ERROR: Command failed. See logs for output.
> 07:23:55  # /usr/bin/yum-deprecated --installroot 
> /var/lib/mock/epel-7-x86_64-cc6e9a99555654260f7f229c124a6940-31053/root/ 
> --releasever 7 install @buildsys-build --setopt=tsflags=nocontexts
> 
> Checking the logs (logs.tgz file, archived on the job, under
> vdsm/logs/mocker-epel-7-x86_64.el7.init/root.log):
> 
> 
> DEBUG util.py:417:  https://repos.fedorapeople.org/repos/openstack/openstack-kilo/el7/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found
> DEBUG util.py:417:  Trying other mirror.
> DEBUG util.py:417:   One of the configured repositories failed ("Custom openstack-kilo"),
> DEBUG util.py:417:   and yum doesn't have enough cached data to continue. At this point the only
> DEBUG util.py:417:   safe thing yum can do is fail. There are a few ways to work "fix" this:
> DEBUG util.py:417:       1. Contact the upstream for the repository and get them to fix the problem.
> DEBUG util.py:417:       2. Reconfigure the baseurl/etc. for the repository, to point to a working
> DEBUG util.py:417:          upstream. This is most often useful if you are using a newer
> DEBUG util.py:417:          distribution release than is supported by the repository (and the
> DEBUG util.py:417:          packages for the previous distribution release still work).
> DEBUG util.py:417:       3. Disable the repository, so yum won't use it by default. Yum will then
> DEBUG util.py:417:          just ignore the repository until you permanently enable it again or use
> DEBUG util.py:417:          --enablerepo for temporary usage:
> DEBUG util.py:417:              yum-config-manager --disable openstack-kilo
> DEBUG util.py:417:       4. Configure the failing repository to be skipped, if it is unavailable.
> DEBUG util.py:417:          Note that yum will try to contact the repo. when it runs most commands,
> DEBUG util.py:417:          so will have to try and fail each time (and thus. yum will be be much
> DEBUG util.py:417:          slower). If it is a very temporary problem though, this is often a nice
> DEBUG util.py:417:          compromise:
> DEBUG util.py:417:              yum-config-manager --save --setopt=openstack-kilo.skip_if_unavailable=true
> DEBUG util.py:417:  failure: repodata/repomd.xml from openstack-kilo: [Errno 256] No more mirrors to try.
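> 
> (For reference, to look at that log locally you can just download the
> archived logs.tgz from the job page and unpack it, roughly:
> 
>     tar xzf logs.tgz
>     less vdsm/logs/mocker-epel-7-x86_64.el7.init/root.log
> 
> assuming the archive unpacks with the same layout it is archived under on
> the job.)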
> 
> 
> So it seems that the repo does not exist anymore; there is a README.txt file
> there, though, that says:
> 
> RDO Kilo is hosted in CentOS Cloud SIG repository
> http://mirror.centos.org/centos/7/cloud/x86_64/openstack-kilo/
> 
> And that new link seems to work fine, so you probably just need to change the
> automation/*.repos files in the vdsm git repo to point to the new openstack
> repo URL instead of the old one, and everything should work.
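> 
> Something along these lines should do it (just a rough sketch; the exact
> file names and line format inside automation/*.repos may differ, and it
> assumes the old URL appears literally in those files, so please check the
> result before sending the patch):
> 
>     sed -i 's|https://repos.fedorapeople.org/repos/openstack/openstack-kilo/el7|http://mirror.centos.org/centos/7/cloud/x86_64/openstack-kilo|g' automation/*.repos
> 
> You can also sanity-check the new baseurl first with something like
> "curl -I http://mirror.centos.org/centos/7/cloud/x86_64/openstack-kilo/repodata/repomd.xml".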
> 
> 
> 
> > 
> > I am pretty sure it is not related to the patch. Also, the fc23 job passes.
> > 
> > 
> > Any idea what's the problem?

Yep, I believe that https://gerrit.ovirt.org/57870 has solved that.
Please rebase on top of the current ovirt-3.6 branch.
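
A rough sketch of the rebase, assuming your gerrit remote is called 'origin'
and you have the patch checked out locally:

    git fetch origin
    git rebase origin/ovirt-3.6

and then re-send the rebased patch to gerrit as usual.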