Re: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon!

2018-08-31 Thread Dirk Müller
On Fri., Aug. 31, 2018 at 01:28, Doug Hellmann wrote:

Hi Doug,

> | Packaging-rpm   | 4 repos   |

We're ready - please send the patches.

Greetings,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rpm-packaging] Step down as a reviewer

2018-08-28 Thread Dirk Müller
Hi Alberto,

On Mon., Aug. 13, 2018 at 11:08, Alberto Planas Dominguez wrote:

> I will change my role at SUSE at the end of the month (August 2018), so
> I request to be removed from the core position on those projects.

Sad to see you go, but I appreciate the heads up and wish you all the
best at the new position.

I've removed you from the list of cores as requested.

Greetings,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements][release] FFE for tooz 1.60.0

2018-02-15 Thread Dirk Müller
Hi,

I would like to ask for an exception to add tooz 1.60.0 to
stable/queens. As part of the msgpack-python -> msgpack switchover we
converted all dependencies, but the tooz release did not include the
dependency switch (not sure why; the branch point was just before the fix).

As it is a one-liner dependency change that brings everything in
stable/queens into a consistent state with respect to the dependencies
(and, for those who try to package OpenStack, the msgpack and
msgpack-python packages file-conflict and so cannot be co-installed),
I would like to ask for an FFE.
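
The file-conflict point can be illustrated with a toy sketch (the file paths below are hypothetical, not the real package payloads): two RPMs that ship the same Python module own overlapping paths, so rpm reports a conflict and refuses to install both.

```python
# Toy sketch (hypothetical paths, not the real package payloads): two
# RPMs that ship the same top-level 'msgpack' module own overlapping
# file paths, so rpm reports a file conflict and refuses to install both.

def file_conflicts(files_a, files_b):
    """Return the set of paths that both packages would own."""
    return set(files_a) & set(files_b)

msgpack_python_files = [
    "/usr/lib/python2.7/site-packages/msgpack/__init__.py",
    "/usr/lib/python2.7/site-packages/msgpack/fallback.py",
]
msgpack_files = [
    "/usr/lib/python2.7/site-packages/msgpack/__init__.py",
    "/usr/lib/python2.7/site-packages/msgpack/fallback.py",
]

if __name__ == "__main__":
    # A non-empty intersection means the packages cannot be co-installed.
    print(sorted(file_conflicts(msgpack_python_files, msgpack_files)))
```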

TIA,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][monasca] FFE request for monasca-common

2018-02-08 Thread Dirk Müller
2018-02-08 11:05 GMT+01:00 Bedyk, Witold :

> I would like to request FFE for monasca-common to be bumped in upper 
> constraints. The version has been bumped together with the rest of Monasca 
> components [1]. Monasca-common is used only in Monasca projects [2].

The changes between 2.7.0 and 2.8.0 are:

+2.8.0
+-
+
+* Enable tempest tests as voting
+* Add messages for testing unicode
+* Zuul: Remove project name
+* Remove not used mox library
+* Updated from global requirements
+* Updated from global requirements


The requirements changes are the removal of mox and a fix for oslotest
to sync the Queens-level requirements, so this looks good to me. +1

Greetings,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][Blazar] FFE - add python-blazarclient in global-requirements

2018-01-30 Thread Dirk Müller
2018-01-30 16:53 GMT+01:00 Matthew Thode :

> LGTM, +2

+2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-13 Thread Dirk Müller
2017-12-13 17:17 GMT+01:00 Thierry Carrez :

> - We'd only do one *coordinated* release of the OpenStack components per
> year, and maintain one stable branch per year
> - We'd elect PTLs for one year instead of every 6 months
> - We'd only have one set of community goals per year
> - We'd have only one PTG with all teams each year

Well, I assume there would still be a chance to have a mid-cycle PTG
independent of the one-year release schedule proposal if there is a
need for it.

From a vendor's (downstream) perspective I like the idea of lengthening
the release cycle, because it pushes two major friction points we have
with upstream into the community:

* Tracking and fixing all the issues hit by skipping over the interim
release (FFU/skip-level upgrade). In the past that was many months of
work; I hope of course that the work on FFU makes this more palatable.
* The joy of having to cherry-pick a feature that the interim release
had and that $somebody really absolutely wants, and then maintaining it
(and handling all regressions yourself).

From an upstream perspective a longer release cycle could potentially
mean a larger "open span" of development where new features land, and a
longer stabilisation period. The tradeoff is that the window of change
and the window of stabilisation are separated, meaning that we could
end up with an unplanned longer freeze period or with a bad release.

> switching. That is why I'd like us to consider taking the plunge and
> just doing it for *Rocky*, and have a single PTG in 2018 (in Dublin).

+1, sounds good to me. The only alternative I see is having a different
cycle in the range of 7-11 months (pick a number!), which causes the
release date to always shift to a different month of the year. 12 months
does feel a bit long, but then again it usually passes by pretty quickly
in retrospect.

Greetings,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] SUSE jobs started failing on peakmem_tracker

2017-09-08 Thread Dirk Müller
Hi David,

Thanks for looking into this. I do watch devstack changes every once in
a while but couldn't catch this one in time. The missing pmap -XX flag
problem has been there forever, but it used to be non-fatal. Now it is
fatal, which is in principle a good change.
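
As a sketch of the kind of non-fatal probing involved (hypothetical; the actual devstack tracker is written in shell, and `pick_pmap_flags` is an invented name): try `pmap -XX` first and degrade gracefully when the flag is unsupported, instead of failing the job.

```python
# Hypothetical sketch of a non-fatal pmap probe (not the actual devstack
# implementation, which is written in shell). Idea: try 'pmap -XX' first
# and fall back to the more widely supported 'pmap -x' when the flag is
# unavailable, rather than aborting the whole run.
import subprocess

def pick_pmap_flags(run=subprocess.run):
    """Return the first pmap flag set that works on this system."""
    for flags in (["-XX"], ["-x"]):
        try:
            result = run(["pmap"] + flags + ["1"],
                         capture_output=True, check=False)
        except FileNotFoundError:
            break  # no pmap binary at all
        if result.returncode == 0:
            return flags
    return []  # track nothing rather than failing the job

if __name__ == "__main__":
    print(pick_pmap_flags())
```

The `run` parameter is injected only so the probing logic can be exercised without a real `pmap` binary.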

I will make sure that it passes again on SUSE shortly.

Greetings,
Dirk

> I was trying to make sure the existing openSUSE jobs passed on Zuul v3
> but even the regular v2 jobs are hitting a bug I filed here [1].
> As far as I know, these jobs were passing until recently.
>
> This is preventing us from sanity checking that everything works out
> of the box for the suse devstack job for the v3 migration.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Packaging-RPM] Nominating Alberto Planas Dominguez for Packaging-RPM core

2017-02-16 Thread Dirk Müller
Hi Igor,


2017-02-16 15:43 GMT+01:00 Igor Yozhikov :


> I’d like to nominate Alberto Planas Dominguez, known as aplanas on irc, for
> Packaging-RPM core.
> Alberto has done a lot of reviews, both for the project modules [1],[2] and
> for the rest of the OpenStack components [3]. His experience with OpenStack
> components and packaging is very much appreciated.

+1

Greetings,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FFE][requirements] python-* clients from last 2 days

2017-01-27 Thread Dirk Müller
Hi Dims,

2017-01-27 13:51 GMT+01:00 Davanum Srinivas :

> I've consolidated the changes to the upper-constraints into One single review:
> https://review.openstack.org/#/c/426116/

I'm ok with it if we have release team buy in.

Greetings,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rpm-packaging][packaging][rpm] Draft mascot logo

2016-10-20 Thread Dirk Müller
Hi all,

Your feedback is appreciated. I really like it :)

Greetings,
Dirk

-- Forwarded message --
From: Heidi Joy Tretheway 
Date: 2016-10-19 20:59 GMT+02:00
Subject: Your draft logo & a sneak peek

[...]

We welcome you to share this logo with your team and discuss it in
Barcelona. We're very happy to take feedback on it if we've missed the
mark. The style of the logos is consistent across projects, and we did our
best to incorporate any special requests, such as an element of an animal
that is especially important, or a reference to an old logo.


We ask that you don't start using this logo now since it's a draft.
Here's what you can expect for the final product:

   - A horizontal version of the logo, including your mascot, project name
   and the words "An OpenStack Community project"
   - A square(ish) version of the logo, including all of the above
   - A mascot-only version of the logo
   - Stickers for all project teams distributed at the PTG
   - One piece of swag that incorporates all project mascots, such as a
   deck of playing cards, distributed at the PTG
   - All digital files will be available through the website


We know this is a busy time for you, so to take some of the burden of
coordinating feedback off you, we made a feedback form:
http://tinyurl.com/OSmascot  You are also welcome to reach out to Heidi Joy
directly with questions or concerns. Please provide feedback by Friday,
Nov. 11, so that we can request revisions from the illustrators if needed.
Or, if this logo looks great, just reply to this email and you don't need
to take any further action.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][rpm] 3rd-party gates promotion to voting gates

2016-09-29 Thread Dirk Müller
Hi,

>> Gates that do not leave verified +1 are called non-voting, so
>> logically gates that leaves verified +1 are called voting gates.
> +1

Eh, what I wanted to +1 was:

+1 to promote the check jobs on rpm-packaging from MOS and SUSE CI as
voting jobs.

Sorry for mixing up definitions and then even quoting the wrong thing ;-)


Thanks,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][rpm] 3rd-party gates promotion to voting gates

2016-09-29 Thread Dirk Müller
Hi,

2016-09-29 14:12 GMT+02:00 Haïkel :

> Gates that do not leave verified +1 are called non-voting, so
> logically gates that leaves verified +1 are called voting gates.

+1


Greetings,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release] [FFE] two xstatic packages to be bumped in upper-constraints, please

2016-09-13 Thread Dirk Müller
On Sep. 13, 2016 at 05:03, "Tony Breeds" wrote:

> > The reviews are:
> >
> > update constraint for XStatic-Bootstrap-SCSS to new release 3.3.7.1
> > https://review.openstack.org/#/c/368970/
> >
> > update constraint for XStatic-smart-table to new release 1.4.13.2
> > https://review.openstack.org/#/c/366194/

+1

Greetings,
Dirk
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements][FFE] pymod2pkg 0.5.4

2016-09-09 Thread Dirk Müller
Hi,

OpenStack RPM Packaging would like to update to pymod2pkg 0.5.4:

https://review.openstack.org/#/c/367388/

This package is only used by the packaging projects (the renderspec and
rpm-packaging git repos), which are on a release-independent schedule
(IIRC we're tagged as never-release). Since we're doing the packaging, we
needed to wait until all the deps we depend on were released in order to
finalize the packaging and include the needed fixes.

It is bugfix-only, with no effect whatsoever on any of the OpenStack
Newton deliverables as far as I can tell.

Please approve,

Thanks,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][FFE] block graphviz 0.5.0

2016-09-09 Thread Dirk Müller
Hi Tony,

2016-09-09 9:42 GMT+02:00 Tony Breeds :

> Has some impact on  octavia (doc-requirements) and rpm-packaging (internal
> global-requirements.txt)

Yeah, rpm-packaging shouldn't be a blocker on this, since we're just
trying to keep a snapshot of g-r that we have tested against (this
isn't fully functional yet).

So +2, workflow +1.

Greetings,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][FFE] Request to allow os-vif 1.2.1 in upper-constraints for newton

2016-09-08 Thread Dirk Müller
Hi Matt,
> This is the FFE to get this change in for newton:
>
> https://review.openstack.org/#/c/365165/
>
> The os-vif 1.2.1 release was a bug fix patch release to get a high priority
> bug [1] in for newton that is impacting the neutron linuxbridge job in the
> gate [2].
>
> So we already have the release done, we just need the u-c change to use it.
>
> Pretty please.

+1

Greetings,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [packaging-rpm] Javier Peña as additional core reviewer for packaging-rpm core group

2016-09-02 Thread Dirk Müller
Hi,

I would like to suggest Javier Peña as an additional core reviewer for
the packaging-rpm core group. He's been an extremely valuable
contributor recently, both doing regular reviews on the submissions and
contributing more to the packaging effort overall.

See http://stackalytics.com/?user_id=jpena-c&module=rpm-packaging


Please reply with +1/-1

Thanks a lot in advance,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] Refresher on how to use global requirements process

2016-05-19 Thread Dirk Müller
Hi Doug,

> Cleaning up the readme makes a lot of sense. If we end up wanting to
> split it a lot, I'd suggest moving some of the "consumer" facing info to
> the project team guide, but it's moderately less easy to change there
> because the reviewer group is different.

.. which is a good thing, right? We shouldn't change "consumer"-facing
instructions too often..

Greetings,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [requirements] Global Requirements Team contact point - #openstack-requirements

2016-05-19 Thread Dirk Müller
Hi all,

I've updated https://wiki.openstack.org/wiki/Requirements#Contact (and
also the rest of the wiki page) with newer information about
global-requirements handling. If you find issues related to the global
requirements update process in your project, please don't hesitate to
join the #openstack-requirements IRC channel on Freenode and let us know
(pointers to failing jobs, or even better a root cause analysis, are
appreciated).

Thanks,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] Refresher on how to use global requirements process

2016-05-19 Thread Dirk Müller
2016-05-19 1:08 GMT+02:00 Robert Collins :

> It's already fully documented in the requirements repo README; if
> there is no link to that from the project-guide, we should add one,
> but I would not duplicate the content.

Agreed. The project driver guide is not very explicit on this
(http://docs.openstack.org/infra/manual/drivers.html#package-requirements)
and should at least contain a pointer to the README. I'll submit a
review for that.

However, in my opinion the README.rst is an unfortunate mix of
information for the requirements project maintainers/reviewers and for
"consumers" of the requirements repo. I would prefer to separate that
information out into two different locations (maybe even two different
files) that are more targeted towards the particular audience.

I'll suggest this as a topic for the next requirements IRC meeting.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Bootstrapping new team for Requirements.

2016-05-11 Thread Dirk Müller
> That works for me.  I'd prefer that meeting earlier in the week so perhaps
> a UTC 11:00 Tuesday would be a good time to use until DST changes and then we
> can re check based on who is active.

Works for me. Thanks for the writeup in the etherpad; that was very
helpful. Let's meet on #openstack-requirements then?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [packaging-rpm] PTL candidacy for the Newton Cycle

2016-03-19 Thread Dirk Müller
Hi everyone,

I'd like to announce my candidacy for the RPM Packaging for OpenStack PTL. I
am putting forward my candidacy because I would feel honored to continue in
the PTL role, but I'm also fine with passing on the torch. At the end of the
day, what matters is that the project becomes successful and relevant, and
I'm happy to contribute my share to that.

During the Mitaka cycle I've been working closely with our small but
motivated contributor group to reach a critical mass of package spec files
to make the project self-sustaining. While we haven't reached the full goal,
the trend is up [1], and I still care very much about the idea and the
implementation. We're at least at a point where the spec file templates are
already in actual production use by one of the vendors (SUSE), and both
Red Hat and Mirantis are considering them.

We've achieved a lot by diligently working through the packaging
requirements of the various RPM packaging vendors and abstracting them in a
way that allows the result to feel "native" to each platform while still
sharing the bulk of the work on packaging OpenStack, and I'd be happy to
continue that work with the help of the other contributors.

Obviously I would like to continue everything I've done so far and make
sure that we build something that everyone can embrace.

For Newton I would like to focus on:

* Extend the package spec template set to cover the bulk of packages that
  define an OpenStack distribution and are maintained in the OpenStack
  git/gerrit

* Foster collaboration with downstream packaging vendors
  (Mirantis/RDO/SUSE) to switch to the common packaging by working through
  barriers that prevent adoption and contributions.

* Work with downstream packaging vendors to contribute extra CI for their
  platform, as currently only SUSE has invested the effort in adding a
  check CI step.


Thanks,
Dirk

[1] = http://stackalytics.com/?module=rpm-packaging

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rpm-packaging] core reviewers nomination

2015-11-10 Thread Dirk Müller
Haikel,


> I'd like to propose new candidates for RPM packaging core reviewers:
> Alan Pevec
> Jakub Ruzicka

+1

Greetings,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [cloud-devel] Summit Report - Dirk

2015-10-30 Thread Dirk Müller
Hi,

> Ha, I'm surprised you forwarded this. But thanks. :)

Yeah, I guess next time I need to be more explicit about where my mails
may be forwarded to without me even being asked..
I apologize wholeheartedly.

> Also, this is the first time Neutron has been more popular/buzzworthy than
> Nova?

Oh, it was definitely the most buzzworthy project already in YVR, but
this time it had only positive words attached to it as well. Very well
done, Neutron team!

>  The Nova sessions are usually populated by Nova people and we just
> stare at each other. :)

That's not true.. At some point we might need a separate Nova/Neutron
summit though.

> I believe the best compliment I heard at the YVR
> nova/ops session was an operator saying 'nova is the least bad project in my
> deployment.'

Oh come on, we all love Nova anyway :-)

Greetings,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rpm-packaging] PTL Candidacy

2015-09-16 Thread Dirk Müller
Hi,


I'd like to announce my candidacy for the RPM Packaging for OpenStack PTL.
During the Liberty cycle I've been co-PTLing this together with Haïkel
Guemar as a bootstrap. For the Mitaka cycle we've been notified that this
setup needs to be adjusted to a single PTL. We therefore discussed it and
decided that I'd be announcing my candidacy.


We've been mostly quiet on the mailing lists so far because we've been
working together on unifying our existing packaging in order to iron out
the differences, for the benefit of being able to develop a common,
standardized packaging for OpenStack on an RPM base. As such, the work was
mostly on the side of downstream adjustments in order to reach a common
state that can be maintained upstream. We have also spent a significant
amount of time designing a solution that fits all interested parties, and
are just starting to upstream the bits for it. For us it is important that
we can continue to work as we do right now under the OpenStack Big Tent in
order to reach a critical mass in the project during the next cycle.


My goal for the next cycle is to support the "bootstrapping" phase as much
as possible by ensuring that we develop a common packaging standard across
all RPM-based OpenStack distributions and actually converge the downstream
efforts into that common packaging. For that we have many open action items
that I'm aware of:


* Improve documentation and transparency of the project
* Foster collaboration on the existing spec files and motivate relevant
  groups to contribute back to us
* Design and Implement a gating policy on changes to the project
* Reach the critical mass for the project so that it can self-sustain
  in the future


I'd like to work on all of them together with the other core members and anyone
who is interested in contributing to the project during the next cycle.


Thanks,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rpm-packaging] Meeting minutes IRC meeting July 16th

2015-07-16 Thread Dirk Müller
Hi,

Here is an extraction of the meeting minutes from
https://etherpad.openstack.org/p/openstack-rpm-packaging

We agreed to switch to the standard way of doing meeting minutes once the
infra review gets merged. These minutes are a bit short; feel free to
reach out to number80 or me in case you have questions.

Greetings,
Dirk


Attendees:
derekh, dirk, toabctl, number80, apevec, jruzicka


* Regular meeting schedule: Thursday 4pm CET, every two weeks
* Define the short-term goal more clearly vs. the long-term goal
* Ongoing topics:
* continuous builds
* generic rpm specs
* shared stable maintenance

Agreed:

Short-term goal for Liberty:
* have openstackclient and its deps templatized and packaged so that
we can directly create downstream spec files that build and work
* the deliverable will be spec files

AI: need to define the testing criteria for gating

Longer-term goals:

* kill downstream packaging efforts once a package exists in rpm-packaging
* maintain packaging for the stable/ lifecycle
** perhaps also extend the stable/ branch lifecycle to satisfy our needs
(not sure, worth a try)
* continuous builds
* revisit importing downstream packages in a two-month timeframe
* gather gating ideas on the wiki page

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Adding rpm-packaging to the openstack git namespace

2015-07-09 Thread Dirk Müller
Hello OpenStackers, hello TC,

Haikel and I have just submitted an update to the change to the governance
repo to add RPM packaging to the OpenStack projects:

https://review.openstack.org/#/c/191587

As a start there we'd like to do a dual PTL from Red Hat (Haikel) and SUSE
(myself) and we'd like to hold a formal election for the next cycle.

Our mission is that by the end of the Liberty cycle we have common RPM
packaging at least for all OpenStack deps, and hopefully for the OpenStack
services as well.

So far, we've formed an IRC channel, created a launchpad project and team
and created a wiki page (pretty rough, will be improved over the next days):

https://wiki.openstack.org/wiki/Rpm-packaging

We're also holding roughly bi-weekly open IRC meetings, hanging out on
#openstack-rpm-packaging, and we have already discussed and decided on
most of the steps for achieving an integration point and prepared some
tooling steps for merging, so a good next step would be to have the
git/gerrit and project infrastructure set up.


Thanks,

Dirk
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] RPM Packaging for OpenStack IRC Meeting

2015-06-14 Thread Dirk Müller
Hi Joshua,


> An example of some specs already doing this (they are built using the
> cheetah template engine/style):
>
>
> https://github.com/stackforge/anvil/tree/master/conf/templates/packaging/specs
>
> They are turned into 'normal' spec files (the compilation part)
> during build time.
>

Right, I think that's a good idea; it allows us to work on reducing the
differences over time and to start with basically what we have today.

Greetings,
Dirk
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] RPM Packaging for OpenStack IRC Meeting

2015-06-12 Thread Dirk Müller
Hi Russell,

> I'm just kind of curious ... as both the RDO and SUSE folks look closer
> at this, how big are the differences?

From the overall big picture, we're doing pretty much the same thing.
We both have a tool chain to continuously track changes as they happen
in upstream git, package them up and build binary packages, test them
in an isolated area with a tempest-like run and, when that succeeds,
promote them into the respective stable trees for operators to consume.
In my personal view, there are surprisingly many overlapping
activities where collaborating makes sense and could save duplicated
effort on both sides.

However, the devil is a bit in the details: right now there are
differences between the Fedora and openSUSE Python packaging guidelines
that lead to "almost the same but slightly different" .spec files. We're
looking at unifying those differences up to the point where the remaining
diff, should anything be left, can be handled by a post-processing tool
that just generates the distro variant of the spec file from the upstream
spec file.

Just to give you an example (this is not an exhaustive list):
- SUSE requires the use of an spdx.org-standardized License: tag in the
spec file; RDO uses something else
- SUSE requires packages to be called python-$pypi_name, while Fedora
escapes more things from the pypi name ('.', '_' and '+' are replaced
by '-' and the name is lowercased). This adds up to differences in
requires/provides/obsoletes/conflicts and so on. It can likely be
solved by substitution and by %if sections; we just need to work on
that.
- Indentation and whitespace rules seem to be slightly different
between the distros
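
The naming difference is mechanical enough to automate; here is a minimal sketch of the two conventions described above (simplified and hypothetical function names; the real translation logic lives in pymod2pkg and covers many special cases):

```python
# Simplified sketch of pypi-name -> distro package name translation.
# Only models the two conventions described in the mail; the real rules
# live in pymod2pkg and handle many special cases.

def suse_pkg_name(pypi_name):
    # SUSE: plain python-$pypi_name, the pypi name is kept as-is
    return "python-" + pypi_name

def fedora_pkg_name(pypi_name):
    # Fedora: lowercase the name and replace '.', '_' and '+' with '-'
    cleaned = pypi_name.lower()
    for ch in "._+":
        cleaned = cleaned.replace(ch, "-")
    return "python-" + cleaned

if __name__ == "__main__":
    for name in ("oslo.config", "msgpack_python", "Paste"):
        print(name, "->", suse_pkg_name(name), "/", fedora_pkg_name(name))
```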

There are also some conventional differences (in some cases the RDO spec
file is more correctly packaged than the SUSE variant, or vice versa);
those can be easily resolved on a case-by-case basis, and resolving them
will immediately help both user bases.

> If instead it seems
> the differences are minor enough that combining efforts is a win for
> everyone, then that's even better, but I don't see it as the required
> outcome here personally.

Right. We started with an open discussion, not with either of those two
outcomes already in mind. I think that's also why we agreed to start with
a "green field" and not seed the repos with either distro's existing spec
files.

To me it looks promising that we can mechanically compile the
$distro-policy-conformant .spec file from the canonical upstream naming,
and at some point that compile step might end up being a "cp".
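
As a toy illustration of that compile step (the template syntax and profile values here are invented for the example; tooling in this spirit later appeared as renderspec, mentioned elsewhere in this archive): a distro-neutral template is rendered into each distro's conformant fragment, and for one distro the output may eventually equal the input.

```python
# Toy illustration of rendering a distro-neutral spec template into a
# distro-specific fragment. The template placeholders and profile values
# are made up for this example.

TEMPLATE = """\
Name:     {py_pkg_oslo_config}
License:  {license_apache2}
"""

PROFILES = {
    "suse":   {"py_pkg_oslo_config": "python-oslo.config",
               "license_apache2": "Apache-2.0"},   # SPDX identifier
    "fedora": {"py_pkg_oslo_config": "python-oslo-config",
               "license_apache2": "ASL 2.0"},      # Fedora's legacy tag
}

def render(template, distro):
    """Fill in the distro-specific values for one distribution."""
    return template.format(**PROFILES[distro])

if __name__ == "__main__":
    print(render(TEMPLATE, "suse"))
    print(render(TEMPLATE, "fedora"))
```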


Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [packaging] RPM Packaging for OpenStack IRC Meeting

2015-06-12 Thread Dirk Müller
Hi,

A couple of packagers from RDO and SUSE met today on IRC to kick off
brainstorming on unified upstream RPM packaging for OpenStack.

Please note that there are currently two movements going on: RDO would
like to move their Liberty packaging from github.com and GerritHub to the
OpenStack Gerrit, and RDO and SUSE would like to collaborate on unified
packaging. For the rest of this mail I'm only talking about the latter.

The agreed goal is that by the time of the Liberty release, we have
something that anyone interested, such as RDO and SUSE, can use for
maintaining the packages for the whole Liberty maintenance lifecycle.
During the rest of the meeting we quickly explained our setups to each
other and briefly walked through the list of obvious differences that we
need to work on.

The meeting minutes have been captured here:

https://etherpad.openstack.org/p/openstack-rpm-packaging

We're planning to have followup meetings to work some more on the issues
and are currently hanging out in #openstack-rpm-packaging on IRC.


Greetings,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-10 Thread Dirk Müller
Hi Derek,

> I selected these 80 to move all of what RDO is currently maintaining on
> gerrithub to review.openstack.org, this was perhaps too big a set and in RDO
> we instead may need to go hybrid.

Yeah, in my opinion we have lots of repeated divergence between the
different python modules, so sorting that out in a small set of
packages and then extending it to all python-* modules (and then, as a
third step, to all openstack-* modules) would be the better approach
(small steps).

> 1. pull what I've proposed (or a subset of it) into a rpm namespace and from
> there work in package to get them to a point where all rpm interested
> parties can use them.

+1

> 3. Same as 2 but start with Suse packaging

Well, IMHO we should have a look at which starting point makes more
sense for both of us. I see good and bad in the spec files on both
sides (of course my view is a bit biased here :-)).

> For this specific example I think differences of opinion are ok, we should
> provide the tools for each party interest in the packaging can hook in their
> own patches (I'm not sure what this would look like yet), I'm assuming here
> that we would also have deployer's out there interested who would have their
> own custom patches and bug fixes that they are interested in.

Right, that might be a good solution (use pristine upstream packaging
and provide tools for downstream to modify/add patches easily).
For most of the python-* dependencies we have zero patches, so it's not
a big issue for the initial step; it gets more complex as we get
closer to the python-*client and openstack-* packages, as we tend to
need patches there that have not yet been merged upstream (for
whatever reason).

> +1, maybe we should schedule something in a few days where we could go
> though the differences of a specific package and how things could take
> shape.

Good idea. Whats a good time and date? I'm mostly available
tomorrow-Sunday in the CEST time zone.

Greetings,
Dirk



Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-09 Thread Dirk Müller
Hi Derek,

2015-06-09 0:34 GMT+02:00 Derek Higgins :

> This patch would result in 80 packaging repositories being pulled into
> gerrit.

I personally would prefer to start with a few packages common to all
RPM distros (are there more than Red Hat and SUSE?) rather than
starting the process with 80, but I wouldn't object to it.

> o exactly what namespace/prefix to use in the naming, I've seen lots of
> opinions but I'm not clear if we have come to a decision
>
> o Should we use "rdo" in the packaging repo names and not "rpm"? I think
> this ultimatly depends whether the packaging can be shared between RDO and
> Suse or not.

Well, we (SUSE, that is) are interested in sharing the packaging,
and a non-RDO prefix would be preferred for the upstream coordination
effort. It is all a bit fuzzy for me right now, as I'm not entirely
sure our goals for packaging are necessarily the same. For example, we
have the tendency to include patches that have not been merged but are
proposed upstream and already +1'ed, should there be a pressing need
for us to do so (e.g. a fix for an important platform bug). But maybe
we can find enough common goals to make this a beneficial effort for
all of us.

There are quite some details to sort out: our packaging is, for
historical reasons and for various policies we need to stick to,
slightly different from the RDO packaging. I think going over those
differences and seeing how we can merge them into a consolidated
effort (or maintain two variants together) is the first step IMHO.

Another important point for us is that we start with equal rights on
the upstream collaboration (at least on the RPM side, I am fine with
decoupling and not caring about the deb parts). I'm not overly
optimistic that a single PTL would be able to cover both the DEB and
RPM worlds, as I perceive them quite different in details.

Greetings,
Dirk



Re: [openstack-dev] [All] [I18N] compiling translation message catalogs

2014-10-07 Thread Dirk Müller
Hi Julie,

> I'm adding a couple of people on cc: with an interest in Ubuntu and SUSE
> packaging: the Horizon team would love to have your opinion on this (it
> came up during our weekly meeting).

I was somehow not CC'ed although I'm the SUSE packager for OpenStack.
In my opinion a) is the option we prefer: we're actually removing and
ignoring the .mo files from git and (re-)building them as part of
packaging. Using the git-maintained build results would be a
distribution packaging policy violation for us.

Whether it needs to be done before Juno or after: I personally don't
care either way; there are much more serious issues in Juno than
this.


Greetings,
Dirk

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-07 Thread Dirk Müller
2014-10-02 14:19 GMT+02:00 Duncan Thomas :

Hi,

> What is actually needed is those who rely on the stable branch(es)
> existence need to step forward and dedicate resources to it. Putting
> the work on people not interested is just the same as killing them
> off, except slower, messier and creating more anger and otehr
> community fallout along the way.

I'm paid by one of those distros that are interested in stable branch
maintenance, but I have no idea what needs to be done other than saying
"I'm interested in helping out". I've been proposing backport patches
every once in a while and have been frustrated that it
takes weeks if not months for someone to review them and potentially
merge them. What are the queries people are supposed to look at? How
can I help in pushing patches or in looking for gating failures *specific*
to stable/ branches?

Thanks,
Dirk



Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-27 Thread Dirk Müller
Hi,

>> When I was an operator, I regularly referred to the sample config files
>> in the git repository.

The sample config files in the git repository are tremendously useful for
any operator and OpenStack packager. Having to generate them with a
tox invocation is very cumbersome.

As a minimum, those config files should be part of the sdist tarball
(i.e. generated at sdist time).

> Do they need to be in the git repo?

IMHO yes, they should go alongside the code change.

> Note that because libraries now export config options (which is the
> root of this problem!) you cannot ever know from the source all the
> options for a service - you *must* know the library versions you are
> running, to interrogate them for their options.

The problem is that we hammer all the libraries' configuration
options into the main config file. If we had "include" support and
just included each library's config options as a separately generated
(possibly autogenerated) file, this problem would not
occur, and it would avoid the gate breakages.
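A minimal sketch of that idea, assuming plain INI files and illustrative file names (oslo.config itself had no such include mechanism at the time): each library's options live in their own generated fragment, and the service assembles the fragments at load time:

```python
import configparser
import os
import tempfile

def load_config(main_file, library_fragments):
    """Assemble a main config file plus per-library fragments (sketch)."""
    cfg = configparser.ConfigParser()
    # read() merges the files in order; later files win on conflicts
    cfg.read([main_file] + library_fragments)
    return cfg

# Illustrative layout: nova.conf plus an autogenerated messaging fragment.
tmp = tempfile.mkdtemp()
main = os.path.join(tmp, "nova.conf")
frag = os.path.join(tmp, "oslo.messaging.conf")
with open(main, "w") as f:
    f.write("[DEFAULT]\ndebug = true\n")
with open(frag, "w") as f:
    f.write("[oslo_messaging]\nrpc_backend = rabbit\n")

cfg = load_config(main, [frag])
print(cfg.get("oslo_messaging", "rpc_backend"))  # -> rabbit
```

With such a split, regenerating a library's options touches only that library's fragment, so the main sample file in git stays stable.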


Thanks,
Dirk



Re: [openstack-dev] "bad" default values in conf files

2014-02-14 Thread Dirk Müller
>> were not appropriate for real deployment, and our puppet modules were
>> not providing better values
>> https://bugzilla.redhat.com/show_bug.cgi?id=1064061.

I'd agree that raising the caching timeout is not a good "production
default" choice. I'd also argue that the underlying issue is fixed
with https://review.openstack.org/#/c/69884/

In our testing this patch sped up the revocation retrieval by a factor of 120.

> The default probably is too low, but raising it too high will cause
> concern with those who want revoked tokens to take effect immediately
> and are willing to scale the backend to get that result.

I agree, and changing defaults has a cost as well: every deployment
solution out there has to detect the value change, update their config
templates, and potentially also migrate the setting from the old default
to the new one for existing deployments. Having been in that situation,
we have been "surprised" by default changes with undesirable side
effects, just because we chose to override a different default elsewhere.

I'm totally on board with having "production ready" defaults, but that
also means they seldom change, and change only for a very
good, possibly documented reason.


Greetings,
Dirk



Re: [openstack-dev] [Openstack-operators] [TripleO] consistency vs packages in TripleO

2014-02-14 Thread Dirk Müller
Hi Robert,

> So far:
>  - some packages use different usernames
>  - some put things in different places (and all of them use different
> places to the bare metal ephemeral device layout which requires
> /mnt/).
>  - possibly more in future.

Somehow I miss, between your suggestions of option #A and #B, the option
#C which started the whole discussion ;-)

The whole discussion really started on service usernames. I don't
see the problem with that, though: you're logging in as root, and
all you do is run "systemctl start <service>". Whether that drops
privileges to user "foo" or to user "bar" is really something that can
be abstracted away.

> # A
>  - we have a reference layout - install from OpenStack git / pypi
> releases; this is what we will gate on, and can document.

I assume that you mean the "from source" install with this. I don't
think you really need to document each individual path or the
like. Most of the "support install from packages" changes were adding
symlinks for client libraries to /usr/local/bin, aka $PATH. As long as
all the binaries that you as an operator expect to call are in
$PATH, I see nothing wrong with just documenting that the binary is
available. I also think the average OpenStack operator can cope if
the documentation says "/usr/local/bin/foo" and the binary is
"/usr/bin/foo"; I strongly believe most of them will not even notice.

This is something that can be tested btw, and used to "certify"
documentation against reality, be it install from source or packages.
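A hedged sketch of such a documentation test, using shutil.which so the check passes no matter which $PATH directory a documented command resolves to. The command list here is a placeholder, not the actual set of documented OpenStack binaries:

```python
import shutil

def missing_commands(documented_commands):
    """Return the documented commands that resolve nowhere on $PATH."""
    return [cmd for cmd in documented_commands if shutil.which(cmd) is None]

# Placeholder list; a real check would extract the names from the docs.
documented = ["sh", "env"]
assert missing_commands(documented) == []  # docs and reality agree
```

The same check works for source and package installs, since it only asserts "the documented command is callable", not where it lives.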

>  -> we need multiple manuals describing how to operate and diagnose
> issues in such a deployment, which is a matrix that overlays platform
> differences the user selects like 'Fedora' and 'Xen'.

Shouldn't that be part of the normal OpenStack Operations Guide? I
don't see how TripleO needs to reinvent general OpenStack
documentation? The existing documentation already covers most of the
platform differences.

> # B
>  - we have one layout, with one set of install paths, usernames
>  - package installs vs source installs make no difference - we coerce
> the package into reference upstream shape as part of installing it.

Which is done anyway already (by relinking stuff to /mnt/state).

>  - documentation is then identical for all TripleO installs, except
> the platform differences (as above - systemd on Fedora, upstart on
> Ubuntu, Xen vs KVM)

I think there is nothing wrong with requiring that the documentation of
the differences be updated as part of introducing such changes.

> I see this much like the way Nova abstracts out trivial Hypervisor
> differences to let you 'nova boot' anywhere, that we should be hiding
> these incidental (vs fundamental capability) differences.

Isn't that what the DIB elements do? You build an image with the "nova"
element, and whatever platform you're building on, it will get
nova up and running. Why would you need to document which user the
"nova" element runs under? This is an implementation detail.

In my opinion this boils down to: tests for the documentation. If we
document that you can start a service
by running "systemctl start foobar", then there has to be a test for
it. What the code does to launch the service when "systemctl start
foobar" is run is an implementation detail, and if it requires running
as user "bar" instead of user "foo", then so be it.

Installing from packages is not as bad as you might think. It brings
image building time down to less than half of what you need when
building from source, and you can apply patches that fix *your* problem
on *your* platform. I understand the idea of Continuous Deployment, but
it doesn't mean that the one patch you need in your system for
something to work isn't hanging in an upstream review for 3 months or
more. It also allows distributors to provide services on top of
OpenStack, something that pays the paychecks of some of the people in
the community.



Greetings,
Dirk



Re: [openstack-dev] [Cinder] How to run pylint locally?

2014-02-13 Thread Dirk Müller
Hi,

> Here is what I tried:
> /opt/stack/cinder $ ./tools/lintstack.sh
>
> But this does not report any errors even though I am on the same branch.
> What am I missing?

You might be running against local packages then, which have a
different version / output. The proper way to reproduce the gate
errors is by using "tox"; in this case: tox -e pylint

Please note that the pylint error is non-voting; the actual -1 comes
from the devstack test run failure.



Re: [openstack-dev] [Hacking] unit test code is too less

2014-01-23 Thread Dirk Müller
Hi Zhi Qiang,

> for i.e. the hacking rule h233 in hacking looks not so robust,
> https://github.com/openstack-dev/hacking/blob/master/hacking/core.py#L345
> it cannot detect
>
> \bprint$
> \bprint >>>xxx, (\s+

It currently detects both as a violation of the rule, which is IMHO
correct. Please note that the behavior of

  print

depends on whether it is a Python 2 print statement (then it prints a
newline) or a reference to the Python 3 print function (then evaluating
it does nothing)
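A hedged, simplified re-implementation of the kind of check under discussion (this is not the actual hacking H233 code, which also copes with noqa markers, strings, and more): flag Python 2 print statements, including the bare "print" and "print >>" forms, while letting print() calls pass:

```python
import re

# Simplified stand-in for a hacking-style print-statement check.
# A "print" not followed by "(" is treated as a Python 2 statement.
PRINT_STATEMENT = re.compile(r"\bprint\b\s*(?!\()")

def is_print_statement(logical_line):
    """Flag Python 2 print statements, but allow print() calls."""
    return bool(PRINT_STATEMENT.search(logical_line))

print(is_print_statement('print "hello"'))            # -> True
print(is_print_statement('print'))                    # -> True (bare print)
print(is_print_statement('print >>sys.stderr, "x"'))  # -> True
print(is_print_statement('print("hello")'))           # -> False
```

The word-boundary anchors keep identifiers like pprint() or print_fn from being flagged; anything beyond that (comments, string literals) would need the fuller logic of the real check.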


Greetings,
Dirk



Re: [openstack-dev] [Nova][Horizon] Is there precedent for validating user input on data types to APIs?

2013-07-14 Thread Dirk Müller
Hi Matt,

Given that the Nova API is public, this needs to be validated in the API;
otherwise the security guys are unhappy.

Of course the API shouldn't get bad data in the first place; that's a bug
in novaclient. I have sent reviews for both code fixes, but I've not seen
any serious reaction or approval on those for two weeks. Eventually
somebody is going to look at them, I guess.

Greetings,
Dirk


Re: [openstack-dev] The danger of capping python-*clients in core projects, and forbidding it in the future

2013-07-11 Thread Dirk Müller
>> See for example https://bugs.launchpad.net/horizon/+bug/1196823
> This is arguably a deficiency of mox, which (apparently?) doesn't let us mock 
> properties automatically.

I agree, but it is just one example; other test-only issues can happen as well.

Similar problem: the *client packages are not self-contained; they
have pretty strict dependencies on other packages. One case I already
ran into was a dependency on python-requests: newer python-*client
packages (rightfully) require requests >= 1.x. Running those on a
system that has OpenStack services from Grizzly or Folsom installed
causes a conflict: there are one or two services that require requests
to be < 1.0.
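To make the conflict concrete, here is a small stdlib-only sketch (version handling simplified to dotted integers, and the release list is an illustrative subset): no release can satisfy both requests >= 1.0 and requests < 1.0 at once.

```python
def version_tuple(v):
    """Parse a simple dotted version string (sketch; no pre-releases etc.)."""
    return tuple(int(part) for part in v.split("."))

def installable(releases, lower_inclusive=None, upper_exclusive=None):
    """Return the releases satisfying lower <= v < upper (bounds optional)."""
    result = []
    for v in releases:
        vt = version_tuple(v)
        if lower_inclusive and vt < version_tuple(lower_inclusive):
            continue
        if upper_exclusive and vt >= version_tuple(upper_exclusive):
            continue
        result.append(v)
    return result

requests_releases = ["0.14.2", "1.0.0", "1.2.3", "2.0.0"]  # illustrative
# newer clients: >= 1.0; Folsom/Grizzly services: < 1.0 -> empty intersection
print(installable(requests_releases, "1.0.0", "1.0.0"))  # -> []
```

Whichever package the installer resolves first "wins", so the outcome flips depending on install order, which matches the flipping behavior described above.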

When you run gating on this scenario, I think the same flipping would
happen on e.g. requests as well, due to the *client packages or the
module being installed in varying order.

Greetings,
Dirk



Re: [openstack-dev] The danger of capping python-*clients in core projects, and forbidding it in the future

2013-07-11 Thread Dirk Müller
Hi Thierry,

> Indeed. The whole idea behind a single release channel for python client
> libraries was that you should always be running the latest, as they
> should drastically enforce backward compatibility.

Well, backward compatibility can be tricky when it comes to tests.
We've for example recently had an issue where a newer keystoneclient
broke mocking in tests. It is debatable whether tests are part of
backward compatibility or not.

See for example https://bugs.launchpad.net/horizon/+bug/1196823

This is currently also preventing me from getting a change on
stable/grizzly past the gating checks (which stumble on exactly this
"regression").

Greetings,
Dirk



Re: [openstack-dev] The danger of capping python-*clients in core projects, and forbidding it in the future

2013-07-11 Thread Dirk Müller
> Let's submit a multi-project bug on launchpad, and be serious for changing
> these global requirements in following days

https://bugs.launchpad.net/keystone/+bug/1200214

created.

Greetings,
Dirk



Re: [openstack-dev] The Gate is broken: all gate-tempest-devstack-* are failing

2013-07-11 Thread Dirk Müller
Hi Sean,

> Cinder uncapping python-keystoneclient will get us past this.

There is a review exactly proposing that:

https://review.openstack.org/#/c/36344/


> Though I'm not
> quite sure how we got to this break point in the first place.


I think this is due to the django_openstack_auth breakage that let
this one slip by (there was, for a short time, a >= 0.3 requirement
on python-keystoneclient from somewhere).

Greetings,
Dirk
