> For now, only Octavia [3] and Cinder [4] are blocked (i.e. we can't build the
> latest commits with DLRN).
Let's not introduce master-cs8 since it has no future; instead, let's
accumulate py36-related blockers on master in a CIX tracker,
as pressure to move TripleO and Puppet to CS9
Alan
> and unable to benefit from the effects of having the
> Fedora community involved in supporting many of the components that
> OpenStack relies on.
We did not see the benefit; instead we ended up with lots of new
packages added by our team members.
We still do keep OpenStack clients maintained in
Hi Wes,
> Are there any public plans for building RDO packages on CentOS-Stream
> available for the community to review?
Do you mean c8-stream or c9-stream?
c9s is not there yet, so I'll assume c8s: RDO packages should work on
c8s as they are, do you have a specific example where that is not the
Hi Pete,
> How is building on CentOS Stream better than building on Fedora?
CentOS 8 Stream is a preview of the next minor RHEL 8 release,
and CentOS 9 Stream will be a preview of the next major RHEL release.
We had RDO Trunk on Fedora in the past and it was not sustainable to
maintain; it's a basic principle to keep
Hi Pierre,
> > > I submitted a patch to raise the minimum requirement for dateutil in
> > > cloudkitty: https://review.opendev.org/#/c/742477/
thanks for that!
> > > However, how are those requirements taken into consideration when
> > > packaging OpenStack in RDO? RDO packages for CentOS7
err, wrong quote! Below was reply to this part of your email:
> However, did you notice that oslo.log also claims to require
> python-dateutil>=2.7.0
yes, Yatin and I had a chat the other day [1];
the conclusion was "we got lucky until cloudkitty" :)
[1]
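For context, the kind of constraint mismatch discussed above can be checked mechanically; here is a minimal sketch using the `packaging` library (the candidate version numbers are illustrative):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# oslo.log declares python-dateutil>=2.7.0; check which installed
# versions would actually satisfy that constraint
spec = SpecifierSet(">=2.7.0")
for candidate in ["2.6.1", "2.7.0", "2.8.1"]:
    print(candidate, Version(candidate) in spec)
```

An older distro dateutil (e.g. 2.6.x) silently violates the declared minimum until some consumer, like cloudkitty here, actually exercises the newer API.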
Hi Carlos,
> Octavia roadmap includes adding support to new features and performance
> improvements only available starting in HAProxy 2.0. CentOS 8 ships with
> HAProxy 1.8, and according to the package maintainer there are no plans to
> provide HAProxy 2.x in a foreseeable future.
> I have
Hi Pierre,
> I submitted a patch to raise the minimum requirement for dateutil in
> cloudkitty: https://review.opendev.org/#/c/742477/
> However, how are those requirements taken into consideration when
> packaging OpenStack in RDO? RDO packages for CentOS7 provide
>
> last update as of a few hours ago was: rdocloud networking should now be
> stable, the uplink is not redundant, IT will work on getting back failover
> during the day
Update as of this morning:
uplink redundancy was restored last night,
and restoring the full CI pool is planned for today.
Cheers,
Alan
> Any updates on the status of the operations?
I was giving updates in #rdo IRC since we had unstable networking and
lists.r.o was not reachable,
last update as of a few hours ago was: rdocloud networking should now be
stable, the uplink is not redundant, IT will work on getting back failover
during
Hi all,
FYI RDO Cloud is undergoing a scheduled movement of some of its racks;
control plane and infra services (www, lists, CI pool) should stay up
the whole time.
In case of unplanned outage we'll let you know in this thread and also
announce when those operations are finished.
At one point there
> Please make sure ansible does not get bumped to 2.8.9 we are currently at
> 2.8.8 in https://trunk.rdoproject.org/centos8-master/deps/latest/noarch/
we don't have explicit blacklisting in rdoinfo, so let's try with a
doc-comment like this? https://review.rdoproject.org/r/25744
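For illustration only, such a doc-comment pin might look like this in a YAML package entry (a hypothetical layout, not the actual rdoinfo schema; the real change is in the linked review):

```yaml
# hypothetical sketch of a doc-comment pin in a package list
- project: ansible
  # NOTE: keep ansible at 2.8.8 in centos8-master deps,
  # do NOT rebase to 2.8.9 (see review.rdoproject.org/r/25744)
```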
Cheers,
Alan
adding rdo devel list
On Thu, Mar 5, 2020 at 5:38 AM kakarla, Chaitanya
wrote:
> Hi Rain/Team,
> Could you please respond on this below issue?
> We had discussion about Openstack Stein support with RHEL8.1 in January 2020.
> Regarding that I want to take help to overcome the issues during
> Even worse: upstream testing is done using Ubuntu, does this mean that we
> start building debs too?
TripleO is not tested with Ubuntu and we don't ship anything in OSP
for Ubuntu, so no, we're not going to start building debs.
> Ansible 2.9 introduced a way to install modules, via
Hi Yatin,
thanks for the update, I'm happy upstream virtualenv blocker is out of our way!
> * More and more packages are dropping support for python2. We keep pinning
> to the last known good py2 version, but at some point it will not make sense
> to keep a promotion pipeline on a repo with so many pinned
Hi,
thanks for the update!
> 1. Is there a server we can upload those packages to for LinuxONE? E.g., use
> this repo as an experimental repo for LinuxONE.
Since those packages are built outside RDO infra, please publish them
at your public location and we can link to it in the docs.
Since we do not
> The following steps are provided on the Ussuri Milestone1 Test Day page.
> $ sudo curl -O http://trunk.rdoproject.org/centos7/delorean-deps.repo
> $ sudo curl -O
> http://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo
-L was missing, this is now fixed in
>> I followed the "How to test?" steps provided on the Ussuri Milestone1 Test
>> Day page (http://rdoproject.org/testday/ussuri/milestone1/).
curl commands were missing -L; I've fixed it in the
https://github.com/redhat-openstack/website/blob/master/source/testday/ussuri/milestone1.html.md
webpage
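For reference, the corrected commands with the redirect-following flag look like this (same URLs as on the test day page; without `-L`, curl saves the redirect response instead of the actual repo file):

```shell
# -L tells curl to follow HTTP redirects to the real .repo file
sudo curl -L -O http://trunk.rdoproject.org/centos7/delorean-deps.repo
sudo curl -L -O http://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo
```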
>> Build dependency d2to1 should have been gone, we need to fix that.
>> In which package did you hit it?
>>
>
> It's still in several packages:
>
> https://codesearch.rdoproject.org/?q=d2to1=nope==
>
> It may be worth checking whether those are actually required or we can clean
> them up. Until then,
> 1. We want to contribute to the RDO community, let RDO add an s390x
> architecture build. As there is no CentOS s390x architecture build in the
> CentOS repositories, we can only build and test RDO packages on RHEL. Is it
> possible to add an RDO s390x build without a CentOS s390x architecture
> build? Build
Hi Samer,
I'm redirecting to the list since the answer is time-dependent.
> Hi, is there a way to install RDO on Centos 8?
not yet, we'll bootstrap deps once CBS Koji is ready for c8, watch
https://trello.com/c/fv3u22df/709-centos8-move-to-centos8
In the meantime OSP 15 (Stein) was released on
Hi Lance,
> I'm assuming that the RH8 build will have Python3
this is correct
> but I'm also curious if RH7 will have Python3 or just stay on Python2.
RDO Train will be released at GA on RHEL7/CentOS7 on Python2, since
that's what was tested throughout this release cycle,
and as soon as we
> You can use our Taiga board to track issues:
> https://tree.taiga.io/project/morucci-software-factory/issues?q=
specifically with "infra" tag:
https://tree.taiga.io/project/morucci-software-factory/issues?q==infra
Alan
___
dev mailing list
> 10-14TB hard drives are not really so expensive.
true for consumer-class drives; cloud storage is more like >$1k/month
for 10TB HDD and >$5k/month for 10TB SSD
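Rough arithmetic behind those figures, assuming typical cloud block-storage list prices of roughly 10¢/GB-month for HDD and 50¢/GB-month for SSD (assumed numbers for illustration, not vendor quotes):

```python
# assumed per-GB-month prices in cents; purely illustrative
hdd_cents_per_gb_month = 10
ssd_cents_per_gb_month = 50
gb = 10 * 1000  # 10 TB expressed in GB

hdd_monthly_usd = hdd_cents_per_gb_month * gb // 100
ssd_monthly_usd = ssd_cents_per_gb_month * gb // 100
print(f"HDD: ${hdd_monthly_usd}/month, SSD: ${ssd_monthly_usd}/month")
```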
Cheers,
Alan
>> If we can get you on the zuul platform you won't have to rework or redo at
>> least some of that work. In zuul we have some post playbooks that execute
>> after the build job [1]. I think you'll be able to find just about
>> everything we do w/ containers here [2] now.
>
> This is
Hi Sorin,
> Based on RDO documentation, when talking with https://review.rdoproject.org/
> Gerrit we are supposed to use the API key from
> https://review.rdoproject.org/sf/user_settings.html page.
Does "Generate new API key" help?
IIRC there was something after one of the upgrades that made
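One quick way to verify a freshly generated key, sketched here with a hypothetical username and placeholder key, is to hit Gerrit's authenticated REST endpoint directly (the `/a/` prefix forces HTTP authentication):

```shell
# 'jdoe' and NEW_API_KEY are placeholders for your Gerrit username and key;
# a JSON account record in the response means the key is accepted
curl -s -u jdoe:NEW_API_KEY https://review.rdoproject.org/r/a/accounts/self
```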
Hi all,
I'll put this on the RDO Meeting agenda, just a quick check re. ceph-jewel EOL
which needs to be verified by the Storage SIG.
I'm not sure if we could move RDO Ocata to Luminous?
Cheers,
Alan
---------- Forwarded message ----------
From: Anssi Johansson
Date: Thu, Nov 1, 2018, 10:02
Subject:
> I think having the two separate images is the only way we can ensure we are
> not polluting the image in the initial phase with packages newer than in the
> stabilized repo.
This should be a small list, are any of those actually included in the
base image?
Alternatively, which jobs use
>
> If we use Fedora 28 to create the initial image and then replace
> repositories, we may get packages which are newer than the ones in the
> stabilized repo, which would make the images bad for testing python3 packages.
>
These are DIB-created images; could we enable the stabilized repo when
building them?
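A sketch of that idea, assuming diskimage-builder's yum element, which honors `DIB_YUM_REPO_CONF` as a list of .repo files used during the image build (the repo path and image name here are illustrative):

```shell
# point DIB at the stabilized repo so newer Fedora packages don't leak in
export DIB_YUM_REPO_CONF=/etc/yum.repos.d/stabilized.repo
disk-image-create -o fedora-py3-test fedora vm
```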
> With the zuulv3 migration wrapping up, I wanted to start a thread about
> projects
> that use the package-distgit-check-jobs template. These are projects like:
>
> openstack/cloudkittyclient-distgit
>
> I wanted to raise the idea of maybe pushing these projects directly upstream
> into
>
On Tue, Jun 26, 2018 at 7:53 PM, Alfredo Moralejo Alonso
wrote:
> As part of the python3 PoC we are working on in rocky cycle i think we need
> to reconsider how we are managing executables in packages with
> python2/python3 subpackages. Currently, we are following Fedora best
> practices
> We think it'd be a lot easier to pull a couple builders in on the RDO end of
If by "RDO" you mean in RDO Cloud, there are two answers:
1) adding multiarch computes was not included in the design; there's a
separate ops team managing RDO Cloud which we would need to consult
for estimates, I'd
On Wed, Jun 6, 2018 at 1:44 PM, lucker zheng wrote:
> Sorry to trouble you, I hit a problem when trying to install the RDO Newton
> version: after installing rdo-release-newton.rpm, it links to the repo
>
> http://mirror.centos.org/centos/7/cloud/x86_64/openstack-newton/
> but there seem to be no RPMs available there, is
>> then we could sub-package openstack-nova to have separate optional
>> package for each hypervisor.
>
> I'm not sure I understand what you mean. Can you please give an
> example?
To allow hypervisor-specific deps, we could split %package compute into
%package compute-common
as-is now, just
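A hypothetical spec-file sketch of that split (subpackage names and dependencies are illustrative, not a tested spec):

```spec
%package compute-common
Summary: Hypervisor-independent parts of nova-compute

%package compute-qemu
Summary: nova-compute with QEMU/KVM dependencies
Requires: %{name}-compute-common = %{version}-%{release}
Requires: qemu-kvm
```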
>> On Mon, Jun 04, 2018 at 09:41:07PM +0300, Roman Kagan wrote:
>>> I'm now trying to figure out what is needed to make our QEMU package
>>> work with Nova; any help will be appreciated.
Where is the Virtuozzo QEMU RPM coming from?
If it is a separate package, could it provide Virtuozzo-specific
Quick note: each dep needs to be examined case by case (why is it needed,
does it really fit in RDO / Cloud SIG), not just rebuilt blindly.
Alan
On Fri, Mar 30, 2018 at 4:31 AM, Sam Doran wrote:
>> Ansible RPMs are already there
>> http://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ but they
>> depend on EPEL for additional deps.
>
> Ansible RPMs have always been there. I don't believe they depend on anything
> - Identify which projects we have had troubles in the past months and that
> we are not automatically testing when new versions bump up (OVS? Ceph? etc)
> - For these projects, how could we either 1) import them in delorean and
Pretty please s/delorean/RDO Trunk/
DLRN is a tool, RDO Trunk are
Hi Wes,
I'd prefer to integrate those alerts into existing RDO monitoring instead
of adding one more bot.
We have the #rdo-dev channel where infra alerts would fit better; can you show
a few example LPs where those tags would be applied?
Alan
Hi all,
tomorrow is the release day for OpenStack Queens, so I wanted to bring
this thread to the conclusion!
On Wed, Nov 29, 2017 at 12:16 PM, Alan Pevec <ape...@redhat.com> wrote:
> Proposal would be to redefine DoD as follows:
> - RDO GA release delivers RPM packages via Cent
On Mon, Feb 19, 2018 at 2:46 PM, Sagi Shnaidman wrote:
> just curious, is it known when we move to zuulv3 in RDO Software Factory?
> Do we have a plan for that?
The prerequisite is to migrate TripleO CI to v3, discussed in
On Tue, Feb 13, 2018 at 6:05 PM, Michael Turek
wrote:
> Sorry for the confusion Haïkel, See link [1] for what I'm talking about
> https://trunk.rdoproject.org/centos7-queens/deps/latest/ppc64le/
That must have been a mistake during the recent deps repo sync; I'll clean that
Hi Bernard,
I've added this as a topic for the
https://etherpad.openstack.org/p/RDO-Meeting today,
with some initial questions to explore.
On Wed, Jan 10, 2018 at 1:50 PM, Bernard Cafarelli wrote:
> * easier install/maintenance for the user, tripleo can consume the
> image
Hi all,
we as a community last discussed the RDO definition of done more than a
year ago and it was documented [1].
In the meantime we have had multiple changes in the RDO promotion
process; the most significant is that we do not run all the CI promotion
jobs in a single Jenkins pipeline, instead there is