Re: [Openstack-operators] [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-01-28 Thread Thomas Goirand
On 01/28/2015 08:56 PM, Sean Dague wrote:
> There is a new stackforge project which is getting some activity now -
> https://github.com/stackforge/ec2-api. The intent and hope is that is
> the path forward for the portion of the community that wants this
> feature, and that efforts will be focused there.

I'd be happy to provide a Debian package for this; however, there isn't
even a single git tag there, which makes tracking issues difficult.
Who's working on it?
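
For illustration, what packagers need is simply pushed release tags;
the version number below is purely hypothetical:

  git tag -s 0.1.0 -m "ec2-api 0.1.0"
  git push origin 0.1.0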

Also, is this supposed to be branch-less? Or will it follow juno/kilo/l... ?

Cheers,

Thomas Goirand (zigo)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Deprecation of in tree EC2 API in Nova for Kilo release

2015-01-28 Thread Thomas Goirand
On 01/28/2015 09:38 PM, Tim Bell wrote:
> I would therefore propose that the new EC2 API modules are validated in 
> production and at scale before deprecating the existing functions. I think 
> validating, packaging and deploying to a reasonable number of clouds and 
> reviewing it with the operators is a viable target for Kilo. This can then be 
> reviewed in one of the joint developer/operator sessions for feasibility.

I can do the packaging for Debian (and therefore, Ubuntu, if they sync
from Debian), but only if version numbers are tagged in the official
Git repo. That's not yet the case, which makes tracking issues really
annoying.

Thomas


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Packaging sample config versions

2015-01-28 Thread Thomas Goirand
On 01/27/2015 11:00 PM, Tom Fifield wrote:
> Hi all,
> 
> Based on Gustavo's excellent work below, talking with many ops, and
> after a brief chats with Jeremey and a few other TC folks, here's what
> I'd propose as an end goal:
> 
> * A git repository that has raw, sample configs in it for each project
> that will be automagically updated
> 
> * Raw configs distributed in the tar files we make as part of the release
> 
> Does that seem acceptable for us all?

It is not. Since this has already been discussed, yet we still haven't
heard any counter-argument, I shall repeat myself until sanity gets
restored.

You are still *not* addressing the main issue: the .sample config files
*must* match your environment and the versions of the different libs on
which a given service is going to run. Therefore, any attempt to go back
to the previous situation (where we had pre-built config files) can be
considered a grave regression.

Shall I remind everyone that if there's a config option that shouldn't
be there, the daemons will refuse to start?

I don't see what the problem is, really. We have a perfectly valid
system using oslo-config-generator. Here's an example from Ceilometer
Kilo beta 1, in my debian/rules file, in the override_dh_install target:

oslo-config-generator \
    --output-file $(CURDIR)/debian/ceilometer-common/usr/share/ceilometer-common/ceilometer.conf \
    --namespace ceilometer \
    --namespace oslo.db \
    --namespace oslo.messaging \
    --namespace keystonemiddleware.auth_token
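
For background, each --namespace maps to an "oslo.config.opts" entry
point that the corresponding project declares; a hypothetical setup.cfg
excerpt (not copied from the actual Ceilometer tree) looks like this:

  [entry_points]
  oslo.config.opts =
      ceilometer = ceilometer.opts:list_opts

oslo-config-generator imports that function to discover the options.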

This works perfectly, and is very easy to implement for anyone doing
packaging decently.

Now, let's say there's a new version of oslo.db that adds a new
configuration option, but Debian hasn't upgraded oslo.db yet. Then the
ceilometer.conf available online will be *WRONG*.

Please, just don't do it, and force everyone to use
oslo-config-generator, as they should. What's bad is using stuff from
/openstack/common (ie: oslo-incubator, as can be seen in Heat for Kilo
beta 1), and that's the thing we should try to kill before the final
version of Kilo is out.

Cheers,

Thomas Goirand (zigo)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Small openstack

2015-01-28 Thread Thomas Goirand
On 12/20/2014 11:16 PM, George Shuklin wrote:
> do 'network node on compute' is kinda sad

Why?

Thomas


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Packaging sample config versions

2015-01-31 Thread Thomas Goirand
On 01/29/2015 12:30 AM, Tom Fifield wrote:
> Actually, many folks I spoke to didn't really care about this.

(computer) science isn't about voting.

On 01/29/2015 01:55 AM, Fischer, Matt wrote:
> Agreed completely. I know it won't be 100% perfect, but it's 95% and
> right now I have 0%.

5% is enough to create big bugs.

Upstream authors decided to remove the sample config files for a reason,
which is: there's no way to provide a valid sample config file, because
it depends on the versions of the installed libs. You're now deciding to
ignore that reason. So basically, you're deciding you know better than
everyone else. You'll run into issues and trouble. I just hope you know
that, and won't complain when things break badly.

Thomas


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] OpenStack 2015.1.0 for Debian Sid and Jessie

2015-05-14 Thread Thomas Goirand

Hi,

I am pleased to announce the general availability of OpenStack 2015.1.0 
(aka Kilo) in Debian unstable (aka Sid) and through the official Debian 
backports repository for Debian 8.0 (aka Jessie).


Debian 8.0 Jessie just released
===
As you may know, Debian 8.0 was released on the 25th of April, just a 
few days before OpenStack Kilo (on the 30th of April). Right after 
Debian Jessie was released, OpenStack Kilo was uploaded to unstable, and 
slowly migrated the usual way to the new Debian Testing, named Stretch.


A lot of new packages had to go through the Debian FTP masters' NEW 
queue for review (they check mainly the copyright / licensing 
information, but also whether the package conforms to the Debian 
policy). I'd like to publicly thank Paul Tagliamonte from the Debian FTP 
team for his prompt work, which allowed Kilo to reach the Debian 
repositories just a few days after its release (in fact, Kilo was fully 
available in Unstable more than a week ago).


Debian Jessie Backports
===
Previously, each release of OpenStack, as a backport for Debian Stable, 
was only available through private repositories. This wasn't a 
satisfactory solution, and we wanted to address it by uploading to the 
official Debian backports. The result is now available: all of 
OpenStack Kilo has been uploaded to Debian jessie-backports. If you want 
to use these repositories, just add them to your sources.list (note that 
the Debian installer proposes to add it by default):


deb http://httpredir.debian.org/debian jessie-backports main

(of course, you can use any Debian mirror, not just the httpredir)
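
Installing from backports then requires naming the target release
explicitly, since apt doesn't pick backports by default (the package
name below is just an example):

  apt-get update
  apt-get -t jessie-backports install nova-api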

All of the usual OpenStack components are currently available in the 
official backports, but there's still more to come, for example Heat, 
Murano, Trove or Sahara. For Heat, it's because we're still waiting for 
python-oslo.versionedobjects 0.1.1-2 to migrate to Stretch (as a rule, 
we can't upload to backports unless a package is already in Testing). 
For the last 3, I'm not sure if they will be backported to Jessie. 
Please provide your feedback and tell the Debian packaging team if they 
are important for you in the official jessie-backports repository, or if 
Sid is enough. Also, at the time of writing, Horizon and Designate are 
still in the backports FTP masters' NEW queue (but they should be 
approved very soon).


Also, I have just uploaded a first version of Barbican (still in the NEW 
queue waiting for approval...), and a package for Manila is currently 
being worked on by a new contributor.


Note on Neutron off-tree drivers
===

The neutron-lbaas, neutron-fwaas and neutron-vpnaas packages have been 
uploaded and are part of Sid. If you need them through jessie-backports, 
please just let me know.


All vendor-specific drivers have been separated from Neutron, and are 
now available as separate packages. I wrote packages for them all, but 
the issue is that most of them wouldn't even build, due to failing unit 
tests. Most of them used to build against Kilo beta 3 of Neutron (all 
but 2 of them were working at the time), but they turned out broken with 
the Kilo final release, as they weren't updated after it.


I have repaired some of them, but working on these packages has proven 
to be very frustrating, as they receive very few updates from upstream. 
I do not plan to work much on them unless one of the conditions below 
is met:

- My employer needs them
- Things move forward upstream, and the unit tests are repaired in the 
stackforge repositories.


If you are a network hardware vendor and read this, please push for more 
maintenance, as it's in a really bad state ATM. You are welcome to get 
in touch with me, and I'll be happy to help you help.


Bug report
==
If you see any issue in the packages, please do report them to the 
Debian bug tracker. Instructions are available here:

https://www.debian.org/Bugs/Reporting

Happy installation,

Thomas Goirand (zigo)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] OpenStack 2015.1.0 for Debian Sid and Jessie

2015-05-15 Thread Thomas Goirand


On 05/15/2015 10:37 AM, neil.jer...@metaswitch.com wrote:

Out of interest, have you done this by re-releasing the Ubuntu packaging? Or 
have you taken an independent approach?

Regards,
 Neil


I've been releasing packages on my own in Debian since Folsom. 
Absolutely zero packaging work was imported from Ubuntu to Debian in 
this release either. In fact, it's the opposite which (often) happens: 
the last release, Juno, in Ubuntu, used nearly 100% of my work for 
packaging the dependencies (including the Oslo libraries and the 
python-*client packages). This Kilo release is different, because I 
couldn't upload to Debian during the Jessie freeze, so Canonical had 
to work on Oslo packages of their own. This shows especially in the 
naming of the Oslo packages, with a dash in Ubuntu (which seems to be a 
mistake), and a dot in Debian (which is consistent with what the 
egg-info declares).


By the way, the list of packages which I maintain is available at [1], 
and there you can see the difference in version numbers between Debian 
and Ubuntu. When you see the same version in both Debian and Ubuntu, it 
means Ubuntu has "synced from Debian", or in other words, imported the 
work I've done in Debian.


On 05/15/2015 03:50 PM, Ihar Hrachyshka wrote:
> Are there any attempts to avoid duplication of efforts? I would expect
> Ubuntu to reuse and extend what is in their upstream distro - Debian.
>
> Ihar

It's a decision from upper (or even *very* upper, shall I say...) 
management at Canonical that there's no collaboration between Debian and 
Ubuntu on the core packages. Maybe this will change in the future if the 
decision is reversed (I'm open to it happening...).


However, there have been some attempts to work together more on the 
dependency packages, but mostly, these attempts failed (partly due to 
the fact that Canonical insists on using BZR as a VCS). I've seen some 
bugs opened with patches by Ubuntu people to lessen the differences for 
these packages, which is a good thing.


Let's hope things get better some time...

Cheers,

Thomas Goirand (zigo)

P.S: If you try deploying using Debian, make sure you're using 
python-pysaml2 >= 2.4.0 which I uploaded yesterday, otherwise Keystone 
will be broken.


[1] 
https://qa.debian.org/developer.php?login=openstack-de...@lists.alioth.debian.org


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [ops][tags][packaging] ops:packaging tag - a little common sense, please

2015-06-16 Thread Thomas Goirand
Thanks Jay for this.

I basically agree with all you wrote.

On 06/10/2015 07:51 PM, Jay Pipes wrote:
> I don't believe the Ops Tags team should be curating the packaging tags
> -- the packaging community should do that, and do that under the main
> openstack/governance repository.
> 
> Packagers, I would love it if you would curate a set of tags that looks
> kind of like this:
> 
>  - packaged:centos:kilo
>  - packaged:ubuntu:liberty
>  - packaged:sles:juno

As you wrote, the list will be *very* outdated *very* fast. I don't see
the point of having such a tagging scheme, when all of this is already
available in a central place [1].

I'm not happy either with the fact that there would be only a single
"apt" definition for the quality, when Debian & Ubuntu packages are
different. Especially when I take great care to reduce the number of
bugs in the Debian tracker [2]. I've raised the issue multiple times
on the blueprint, but I basically got ignored.

If we want this blueprint to get through, please take into account
remarks that reviewers are making.

Cheers,

Thomas Goirand (zigo)

[1]
https://qa.debian.org/developer.php?login=openstack-de...@lists.alioth.debian.org

[2]
https://bugs.debian.org/cgi-bin/pkgreport.cgi?which=maint&data=openstack-devel%40lists.alioth.debian.org&archive=no&raw=yes&bug-rev=yes&pend-exc=fixed&pend-exc=done


Note on this URL: yes, only 6 bugs reported and currently open in
Debian, out of 242 packages. 5 of these bugs need upstream action
(getting rid of suds, pyeclib needing a new release), and one is pending
the Debian FTP masters' approval of a package. That's essentially zero
actionable bugs to me! Please do submit a bug, and I'll do my best to
close it in record time...


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Debian packaging design summit sessions in Tokyo

2015-10-13 Thread Thomas Goirand
Dear Operators,

As you may know, the deb-packaging project joined the big tent. It looks
like I've been made PTL for the project. As such, I would like to invite
operators to join the packaging sessions on Wednesday in Tokyo, between
11:15 and 12:45 in the Kusunoki room. We'd be more than happy to get
feedback from you guys, and see how we can improve things to make your
lives easier.

Cheers,

Thomas Goirand (zigo)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-07 Thread Thomas Goirand
On 12/01/2015 07:57 AM, Steve Martinelli wrote:
> Trying to summarize here...
> 
> - There isn't much interest in keeping eventlet around.
> - Folks are OK with running keystone in a WSGI server, but feel they are
> constrained by Apache.
> - uWSGI could help to support multiple web servers.
> 
> My opinion:
> 
> - Adding support for uWSGI definitely sounds like it's worth
> investigating, but not achievable in this release (unless someone
> already has something cooked up).
> - I'm tempted to let eventlet stick around another release, since it's
> causing pain on some of our operators.
> - Other folks have managed to run keystone in a web server (and
> hopefully not feel pain when doing so!), so it might be worth getting
> technical details on just how it was accomplished. If we get an OK from
> the operator community later on in mitaka, I'd still be OK with removing
> eventlet, but I don't want to break folks.
> 
> stevemar
> 
> From: John Dewey 
> 100% agree.
> 
> We should look at uwsgi as the reference architecture. Nginx/Apache/etc
> should be interchangeable, and up to the operator which they choose to
> use. Hell, with tcp load balancing now in opensource Nginx, I could get
> rid of Apache and HAProxy by utilizing uwsgi.
> 
> John

The main problem I see with running Keystone (or any other service) in a
web server is that *I* (as a package maintainer) will lose control
over when the service is started. Let me explain why that is important
for me.

In Debian, many services/daemons are run, then their API is used by the
package. In the case of Keystone, for example, it is possible to ask,
via Debconf, that Keystone registers itself in the service catalog. If
we get Keystone within Apache, it becomes at least harder to do so.

The other issue is that if all services share the same web server,
restarting the web server restarts all services. Or, put otherwise: if
I need to change a configuration value of any of the services served by
Apache, I will need to restart them all, which is very annoying: I very
much prefer to restart just *ONE* service when I need to.

Also, something which we learned the hard way at Mirantis: it is *very*
annoying that Apache restarts every Sunday morning by default in
distributions like Ubuntu and Debian (I'm not sure about the other
distros). No, the default config of logrotate and Apache can't be
changed in distros just to satisfy OpenStack users: there are other
users of Apache in these distros.

Then, yes, uWSGI becomes a nice option. I used it for the Barbican
package, and it worked well. Though the uwsgi package in Debian isn't
very well maintained, and multiple times, Barbican could have been
removed from Debian testing because of RC bugs against uWSGI.
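
For illustration only, a per-service uWSGI setup is a small ini file
along these lines (the wsgi entry point, socket and process counts below
are hypothetical examples, not what the Barbican package actually ships):

  [uwsgi]
  wsgi-file = /usr/bin/keystone-wsgi-public
  http-socket = 0.0.0.0:5000
  master = true
  processes = 4
  threads = 2

The nice property is that each service gets its own master process, so
restarting one doesn't touch the others.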

So, all together, I'm a bit reluctant to see the Eventlet-based servers
going away. If it's done, then yes, I'll work around it, though I'd
prefer that it didn't happen.

It is also my view that it's up to the deployers to decide how they want
to implement things. For many small use cases, Eventlet performs well
enough.

Finally, one thing which I never understood: if Eventlet is bad as an
HTTP server, can't we use anything else written in Python? Isn't it
possible to write a decent HTTP server in Python? Why are we forced into
just Eventlet for doing the job? I haven't searched around, but there
must be loads of alternatives, no?

Cheers,

Thomas Goirand (zigo)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-09 Thread Thomas Goirand
> […]els, and is punting up the stack where Python is
> handicapped. Don't think of it as a work around, think of it as having
> the freedom to architect your own deployment.

I'm ok with that, but as per the above, I'd like to provide something
which just works for Debian users. And I'd love to gather opinions on
what is best.

> It is also my view that it's up to the deployers to decide how they want
> to implement things. For many small use cases, Eventlet performs well
> enough.
> 
> Unfortunately, "small" is not "all."

The reasoning I have is that for small deployments, using the
package defaults is OK. For bigger deployments, you'd be using Puppet
or other kinds of tooling anyway, and then it's OK to expect this type
of user to do what they think is best.

> Finally, one thing which I never understood: if Eventlet is bad as an
> HTTP server, can't we use anything else written in Python? Isn't it
> possible to write a decent HTTP server in Python? Why are we forced into
> just Eventlet for doing the job? I haven't searched around, but there
> must be loads of alternatives, no?
> 
> Yep! There are many. Eventlet is a bit unique, but ("Core") OpenStack
> services have historically been tightly bound to Eventlet for its
> native-ish threading support. As Keystone has broken free, you are then
> free to deploy our generic WSGI app/s using any generic WSGI server in
> any process / threading architecture that suits your requirements.
> 
> We only ever preferred Apache for two reasons:
> 
> 1) There was interest in using apache-based auth plugins with keystone.
> 
> 2) Every sysadmin and their mother knows how to configure Apache.
> 
> It was just well-documented and well-understood.

I really find using Apache the least convenient of all the available options.

Cheers,

Thomas Goirand (zigo)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-09 Thread Thomas Goirand
On 12/08/2015 06:39 AM, Jamie Lennox wrote:
>> The main problem I see with running Keystone (or any other service) in a
>> web server, is that *I* (as a package maintainer) will lose control
>> over when the service is started. Let me explain why that is important
>> for me.
>> 
>> In Debian, many services/daemons are run, then their API is used by the
>> package. In the case of Keystone, for example, it is possible to ask,
>> via Debconf, that Keystone registers itself in the service catalog. If
>> we get Keystone within Apache, it becomes at least harder to do so.
> 
> I was going to leave this up to others to comment on here, but IMO -
> excellent. Anyone that is doing an even semi serious deployment of
> OpenStack is going to require puppet/chef/ansible or some form of
> orchestration layer for deployment. Even for test deployments it seems
> to me that it's crazy for this sort of functionality be handled from
> debconf. The deployers of the system are going to understand if they
> want to use eventlet or apache and should therefore understand what
> restarting apache on a system implies.

That is often what everyone from within the community says. However,
there are lots of users who hardly do more than a single deployment,
maybe 2. I don't agree that they should all invest a huge amount of time
in automation tools; for them, packages should be enough.

Anyway, the debconf handling is completely optional, and most of the
helpers are completely disabled by default. So it does *not* get in the
way of using any deployment tool like Puppet.

Cheers,

Thomas Goirand (zigo)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Announcing Debian Mitaka b2 packages with backports for Jessie and Trusty

2016-01-28 Thread Thomas Goirand
Hi everyone,

I'm delighted to announce the release of Debian packages for Mitaka b2.

Debian Experimental
===
I have uploaded it all to Debian Experimental. This is the only place
where you may find official packages. It will stay this way until Debian
Bikesheds (Bikesheds being what will be Debian's version of PPAs) are
operational, or until Mitaka final is out, at which point everything
will be uploaded to Sid, then to Jessie-backports.

Mitaka in Debian Stretch
===

As the Debian release team announced that Debian 9.0 (aka: Stretch) will
be frozen late in 2016, Mitaka will be the OpenStack release which I
will maintain in Stretch. Though, if we have working Bikesheds before
the release, I will ask for the removal of all OpenStack stuff from
Stable and Testing, and will only maintain the last 2 stable releases in
specific Bikesheds.

Non-official Jessie and Trusty backports
===

All of Mitaka b2 is also available on the automatic Jenkins backport
build servers for Debian Jessie and Ubuntu Trusty. The repository
addresses are described here:

http://mitaka-jessie.pkgs.mirantis.com/

and here:

http://mitaka-trusty.pkgs.mirantis.com/

Note: these are Mirantis-sponsored servers that automatically rebuild
backports, but the source packages are exact copies of what's in
Debian, without any change.

If you use puppet-openstack and would like to use Ubuntu Trusty as the
base OS, you need to install the puppet-openstack-debian-fact package on
all of your servers, so that the Puppet scripts know that you're using
Debian-style packages on top of Ubuntu. This way, Puppet will know the
differences in Horizon, Nova & Neutron (the other packages use the
same names). Alternatively, you can do it manually (same effect):

echo os_package_type=debian > /etc/facter/facts.d/os_package_type.txt
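
To verify that Puppet's facter picks the fact up (external facts are
read from /etc/facter/facts.d/), this should print "debian":

  facter os_package_type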

Also, note that the Trusty backports have been rebuilt entirely using
Debian packages. No source package available in this repository was
downloaded from Ubuntu (all come from Debian), meaning that these
packages are fully redistributable as you please, without modification
or rebuild, and without any risk from the Ubuntu trademark problems [1]
(of course, there remains the problem of redistributing the base OS...
but I'm not redistributing that myself!).

Included in this release
===

The following server packages are available:

* aodh
* barbican
* ceilometer
* cinder
* designate
* glance
* gnocchi
* heat
* ironic
* keystone
* manila
* mistral
* murano
* murano-agent
* neutron
* nova
* openstack-trove
* sahara
* senlin
* zaqar

I couldn't upload these to Experimental (as Horizon still needs to
support Django 1.9), but the packages are done and backported to both
Jessie and Trusty:
- horizon
- murano-dashboard
- designate-dashboard
- trove-dashboard
- sahara-dashboard
- senlin-dashboard

At this point, I have a working Congress package, but even though it is
functional, I can't upload it to Debian, due to its "thirdparty" folder
containing non-free files, such as Windows .dll files. I hope the
upstream maintainers can fix that.
- congress

These were still not tagged for Mitaka b2, so I didn't package them yet:
- magnum
- manila-ui
- zaqar-ui

Report bugs
===
This is a preview, which hasn't been tested much. Bugs are to be
expected, just as in the upstream code. So by all means, report bugs
to the Debian BTS [2] if you find any.

Thanks to so many people
===

I'd like to hereby thank everyone who helped this release happen.
This includes, but is not limited to: cdent, whom I annoyed with one
package when the issue was really in Debian; Corey Bryant from
Canonical, for continuing to co-maintain OpenStack Python modules
directly in Debian; and the Telemetry folks, who are always very helpful
when I need to fix a few things. Thanks to anyone who helped close bugs
I've opened. I'm sure I am forgetting many people who helped a lot.

No keyboard (or any other hardware) was hurt doing this release.

Cheers,

Thomas Goirand (zigo)

[1] If you don't know what I'm talking about, you'd better urgently read
these blog posts from Matthew Garrett:
http://mjg59.dreamwidth.org/35969.html
http://mjg59.dreamwidth.org/36312.html
http://mjg59.dreamwidth.org/37113.html
http://mjg59.dreamwidth.org/38467.html

[2] https://www.debian.org/Bugs/Reporting

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] How to install magnum?

2016-02-24 Thread Thomas Goirand
On 10/16/2015 02:09 PM, hittang wrote:
> Hello, everyone. Can anybody help me with installing magnum? I have an
> openstack installation, which has one controller node, one network
> node, and several compute nodes. Now, I want to install magnum, to
> manage docker containers with.
> Thanks.

Magnum is in both Debian Sid (version 1.0.0~b1, and it's been there for
months...) and Experimental (version 1.1.0, aimed at Mitaka). So it's
just an apt-get install away... :)
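
For example (the exact package split below is illustrative; check
"apt-cache search magnum" for the authoritative list):

  apt-get install magnum-api magnum-conductor python-magnumclient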

So far, I've had zero feedback from users. Please be my guest, and let
me know what works and what doesn't.

Cheers,

Thomas Goirand (zigo)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] OpenStack Contributor Awards

2016-03-01 Thread Thomas Goirand
On 03/01/2016 11:30 PM, Tom Fifield wrote:
> Excellent, excellent.
> 
> What's the best place to buy Raspberry Pis these days?

One of the 2 official sites:
https://www.element14.com/community/community/raspberry-pi

The Pi 3 is the super nice shiny new stuff, with 64-bit ARM.

Cheers,

Thomas Goirand (zigo)


Hopefully, with it, there will be no need for Raspbian anymore (it
existed because of a very poor choice of CPU in models 1 and 2, just
below what the armhf builds required, forcing the use of armel, which
targets the ARMv4 instruction set).



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [announce] Announcing validated Debian packages for Mitaka

2016-04-08 Thread Thomas Goirand
Greetings!

I am overjoyed, thrilled and delighted to announce the release of the
Debian packages for Mitaka.

All of the DefCore packages were validated successfully this morning
through our package-only-based Tempest CI.

Content of this release
===
This release includes the following 23 services:
aodh 2.0.0
barbican 2.0.0
ceilometer 6.0.0
cinder 8.0.0
congress 3.0.0+dfsg1
designate 2.0.0
glance 12.0.0
gnocchi 2.0.2
heat 6.0.0
horizon 9.0.0
ironic 5.1.0
keystone 9.0.0
magnum 2.0.0
manila 2.0.0
mistral 2.0.0
murano 2.0.0
neutron 8.0.0
nova 13.0.0
trove 5.0.0
sahara 4.0.0
senlin 1.0.0
swift 2.7.0
zaqar 2.0.0

Where to find these packages
===

1/ Sid
All of Mitaka was uploaded to Debian Sid this week. You can use Debian
Sid directly to get them.

2/ Official jessie-backports
As soon as everything migrates to Debian Testing (currently aka:
Stretch), in 5 days if no RC bug is reported, it will be possible to
upload all of Mitaka to the Debian official jessie-backports.

3/ Non-official Jessie and Trusty backports
In the meantime, the packages are available through the Mirantis Jenkins
automatic Debian Jessie backport repository. The full sources.list is
available here:

http://mitaka-jessie.pkgs.mirantis.com/

You can use the Trusty backports as well:

http://mitaka-trusty.pkgs.mirantis.com/

To use these repositories, simply add the described sources.list to (for
example) /etc/apt/sources.list.d/openstack.list, and run apt-get update.
If you want to install the GPG key of the repositories, you can either
install the mitaka-jessie-archive-keyring or
mitaka-trusty-archive-keyring package (depending on your distribution of
choice), or "apt-key add" the public key available at
/debian/dists/pukey.gpg in these repositories.
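
For the Jessie repository, for example, the manual variant would look
like this (URL assembled from the repository address and key path given
above; the Trusty case is analogous):

  wget -qO - http://mitaka-jessie.pkgs.mirantis.com/debian/dists/pukey.gpg \
      | apt-key add -
  apt-get update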

As a reminder, the URLs above contain the word "Mirantis" only because
the service is sponsored by my employer. These repositories are
"straight" backports from what is available in Debian Sid, without any
modification.

Remember that the packages listed below are maintained separately in
Debian and Ubuntu, and therefore, packages are different in these
distributions:
aodh, barbican, ceilometer, cinder, designate, glance, heat, horizon,
ironic, keystone, manila, neutron, nova, trove, swift.

All other packages (including all OpenStack libraries like Oslo and the
python-*client packages) are maintained in Debian, with contributions
from Canonical, and then synced to Ubuntu, so they are the exact same
packages (or at least, with minimal differences). I hope we can further
improve collaboration between Debian and Canonical during the Newton
cycle.

Bug reporting
=
As always, bug reports are welcome, and considered as high value
contributions. Please follow the instructions available at
https://www.debian.org/Bugs/Reporting to report bugs to the Debian BTS.

Moving forward with higher QA and the Packaging-deb project in Newton
=
Currently, DefCore packages are tested through a package-only (ie: no
puppet, chef, you-name-it... system management involved) Tempest CI.
Results can be seen at:
https://mitaka-jessie.pkgs.mirantis.com/job/openstack-tempest-ci/

Not all packages are included in this CI yet, though. It is my
intention, during the Newton cycle, to also include services like
Designate, Trove, Barbican, Congress, ... in this CI. The individual
upstream teams for these services are more than welcome to approach us
to make this happen quicker.

Also, as we're slowly starting to get the Packaging-Deb project going
(ie: packaging using the upstream OpenStack Gerrit and gating), it is
also in the pipeline to use the above-mentioned Tempest CI system as a
gate for the packaging. Hopefully, this will lead us to a full CI/CD
working from trunk. We also hope to be able to use these packages to
help the Puppet team test packaged OpenStack from trunk.

Greetings
=
On each release, I ask myself whom I should thank. This time, I would
like to thank everyone, because this release was overall very nice and
went well. The whole OpenStack community is always very helpful and
understands the requirements of downstream distributions. Guys, you're
awesome, I love my work, and I love working with you all!

Cheers,

Thomas Goirand (zigo)




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [announce] Debian Jessie arm 64bits backports for Mitaka and Newton available

2016-06-22 Thread Thomas Goirand
Hi everyone,

I am pleased to announce that, as of today, arm64 backports are
available in non-official backports repositories for Debian Jessie.
Repository definitions are available at the addresses below:

http://newton-jessie-arm64.linaro.org/
http://mitaka-jessie-arm64.linaro.org/

As the URLs tell you, these build machines are kindly provided by
Linaro, who is sponsoring them so that automated builds are possible.
So, a big thanks to them!

I of course welcome feedback on these packages, which I couldn't test in
a real deployment myself using this architecture: I only checked that
all packages were available.

Cheers,

Thomas Goirand (zigo)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Call for sponsorship: hardware for Debian OpenStack packages functional testing

2017-11-08 Thread Thomas Goirand
tl;dr: I need hardware to run tempest on Debian + OpenStack. I wouldn't
refuse sponsorship of my work either.

Dear everyone,

As you may know, I have been packaging OpenStack in Debian nearly since
it existed (ie: since the Cactus release).

I used to be a Mirantis employee, though like many others from the
company, I've been "let go" last year (that's the wording of the
Mirantis marketing people... though I didn't want to go!). At the
moment, I'm still unemployed, even though I have very serious
opportunities offered to me.

Anyway, the thing is, every time my professional situation changes, I
lose access to the hardware used to do functional testing of the
Debian OpenStack packages. Currently, I don't have a server to run them
on, so I cannot check if Pike works as expected.

I already had some offers from companies to use hardware that they
would also host. However, this doesn't feel sustainable over a long
period of time. I would very much prefer to have such hardware hosted
within the Debian infrastructure, which is why I am hereby calling for
sponsorship of such hardware.

Note that I already made such a request to the DSA team (Debian System
Administrators), and it was denied because they don't want to make
OpenStack a special case. Normally, DDs are supposed to test packages
themselves when uploading to Debian.

There are 3 types of setup that my current scripts are able to support:
1/ A Xen VM.
2/ A KVM VM.
3/ A Debian live system running on bare metal, which is reset using
IPMI (via an ipmitool command).
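
Such a reset is a single command, along these lines (the address and
credentials below are placeholders):

  ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret chassis power reset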

It's the last one which performs best, because it runs on bare metal,
which avoids nested virtualization. Also, reinstalling the system means
simply doing a reset and waiting for the server to be up again. Last,
the system runs on a tmpfs, and IOPS are therefore way faster than on a
normal disk (HDD / SSD). The local HDD is then used as a scratch disk
for testing Cinder and Swift, instead of a local loopback in the case of
KVM or Xen (so again, much faster). It also needs to have IPMI, and
preferably also KVM over IP.

The speed of the system used to do the functional testing is important,
because the time for setting up the system is around 20 minutes (with
option 3 above, slower in the other cases), and then it takes roughly 1
hour to run the functional tests with tempest. Typically, such a debug
process is run multiple times, iteratively, fixing one problem after
another.

The hardware I last used was a multi-core 64-bit x86 system with 32 GB
of RAM and an SSD scratch disk (100 GB of a single SSD is enough), plus
a server to run PXE network boot: a tftp server, a dhcp server, and
Apache to provide the squashfs image to the server. That's about what I
need.

Also, to be able to PXE boot the server, I need a 2nd server to run
dhcp, pxe and Apache. On that server, I would run Xen, to be able to
also install a Jenkins server doing package builds on each git push,
which avoids a lot of RC bugs in Debian, and therefore speeds up
packaging.

So, all together, I'm searching for someone to sponsor:
- A 32 GB RAM server with at least 2 cores, and 100 GB SSD, and 2 nics
at least
and either:
- A 2nd server with a minimum of 1GB RAM & 20GB HDD and 2 nics
but preferably:
- 64 or, even better, 128 GB RAM, so I can host Jenkins servers and
Debian repositories (one per release, using virtualization), with a
large enough HDD to host the full set of packages per release: a pair of
1TB HDDs or more using RAID1 (or even better: 4 HDDs with RAID10 for
better performance) seems a good choice to me.

If you are able to sponsor such hardware, and send it either to the
University of British Columbia, or to Bytemarks in the UK, please get in
touch with me.

Last thing. A number of companies have offered to sponsor my work
packaging OpenStack for Debian: at least 4 companies already. It really
feels like a number of companies have been using my work over the years.
However, it never went through. As I've been unemployed for a long time,
I will probably accept a job not directly related to the packaging of
OpenStack. So if you wish me to continue what I've been doing,
sponsoring is welcome too. To such a sponsor, I can offer more than just
the packaging: I can offer my help deploying OpenStack and maintaining
it in production, plus whatever that company needs related to that,
and this either on Debian or Ubuntu (I can provide support for both,
even if my heart is on the Debian side). I would also accept any job
that includes OpenStack Debian packaging, if it can be done remotely,
from my home. The risk, if this doesn't happen, is that the Debian
packaging of OpenStack stops. It was already the case for Mitaka, and I
decided to do Newton on my free time. I probably won't be able to do
that again for Queens if I'm not paid for it: it's clearly not a
sustainable situation.

Cheers,

Thomas Goirand (zigo)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-11 Thread Thomas Goirand
On 11/08/2017 05:27 PM, Samuel Cassiba wrote:
> ie. deployment-focused development
> teams already under a crunch as contributor count continues to decline
> in favor of other projects inside and out of OpenStack.

Did you ever consider that one of the reasons for such a decline is that
OpenStack is moving too fast, and has no LTS? Some major public clouds
(which I will on purpose not name) are still running Kilo, which was
released 3 years ago! 3 or 5 years of support for an LTS version is the
industry standard, and OpenStack is doing only 1 year. This has driven
people away, and will continue to do so if nothing is done.

Instead of thinking "this will be more work", why don't you think of the
LTS as an opportunity to only release OpenStack Chef for the LTS? That'd
be a lot less work indeed, and IMO that's a very good opportunity for
you to scale down.

Cheers,

Thomas Goirand (zigo)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release

2018-04-05 Thread Thomas Goirand
On 04/04/2018 10:45 AM, Kashyap Chamarthy wrote:
> Answering my own questions about Debian --
> 
> From looking at the Debian Archive[1][2], these are the versions for
> 'Stretch' (the current stable release) and in the upcoming 'Buster'
> release:
> 
> libvirt | 3.0.0-4+deb9u2      | stretch
> libvirt | 4.1.0-2             | buster
> 
> qemu    | 1:2.8+dfsg-6+deb9u3 | stretch
> qemu    | 1:2.11+dfsg-1       | buster
> 
> I also talked on #debian-backports IRC channel on OFTC network, where I
> asked: 
> 
> "What I'm essentially looking for is: "How can 'stretch' users get
> libvirt 3.2.0 and QEMU 2.9.0, even if via a different repository.
> As they are proposed to be least common denominator versions across
> distributions."
> 
> And two people said: Then the versions from 'Buster' could be backported
> to 'stretch-backports'.  The process for that is to: "ask the maintainer
> of those package and Cc to the backports mailing list."
> 
> Any takers?
> 
> [0] https://packages.debian.org/stretch-backports/
> [1] https://qa.debian.org/madison.php?package=libvirt
> [2] https://qa.debian.org/madison.php?package=qemu

Hi Kashyap,

Thanks for considering Debian, asking me, and giving enough time to
answer! Here are my thoughts.

I updated the wiki page as you suggested [1]. As I wrote on IRC, we
don't need to care about Jessie, so I removed Jessie and added
Buster/Sid.

tl;dr: just skip this section & go to conclusion

Backporting libvirt/QEMU/libguestfs in more detail
---

I already attempted the backports from Debian Buster to Stretch.

All 3 components (libvirt, qemu & libguestfs) could be built without
extra dependencies, which is a very good thing.

- libvirt 4.1.0 compiled without issue, though the dh_install phase
failed with this error:

dh_install: Cannot find (any matches for) "/usr/lib/*/wireshark/" (tried
in "." and "debian/tmp")
dh_install: libvirt-wireshark missing files: /usr/lib/*/wireshark/
dh_install: missing files, aborting

Without more investigation than this build log, it's likely a minor fix
in the debian/*.install files would make it possible to backport the
package.

- qemu 2.11 built perfectly with zero change.

- libguestfs 1.36.13 only needed to have fdisk replaced by util-linux as
build-depends (fdisk is now a separate package in Buster).

So it looks easy to backport these 3 *AT THIS TIME*. [2]

However, without a crystal ball, nobody can tell how hard it will be to
backport these *A YEAR FROM NOW*.

Conclusion:
---

If you don't absolutely need new features from libvirt 3.2.0 and 3.0.0
is fine, please choose 3.0.0 as minimum.

If you don't absolutely need new features from qemu 2.9.0 and 2.8.0 is
fine, please choose 2.8.0 as minimum.

If you don't absolutely need new features from libguestfs 1.36 and 1.34
is fine, please choose 1.34 as minimum.

If you do need these new features, I'll do my best to adapt. :)

About Buster freeze & OpenStack Stein backports to Debian Stretch
---

Now, about Buster. As you know, Debian doesn't have planned release
dates. Though here are the stats, showing that, roughly, there's a new
Debian every 2 years, and the freeze takes about 6 months:

https://wiki.debian.org/DebianReleases#Release_statistics

With this logic, and considering Stretch was released last year in June,
Buster will probably start its freeze after Stein is released. If the
Debian freeze happens later, good for me, I'll have more time to make
Stein better. But then Debian users will probably expect an OpenStack
Stein backport to Debian Stretch, and that's where it can become tricky
to backport these 3 packages.

The end
---

I hope the above isn't too long, and helps in making the best decision.
Cheers,

Thomas Goirand (zigo)

[1]
https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix#Distro_minimum_versions

[2] I'm not shouting, just highlighting the important part! :)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release

2018-04-06 Thread Thomas Goirand
On 04/06/2018 12:07 PM, Kashyap Chamarthy wrote:
>> dh_install: Cannot find (any matches for) "/usr/lib/*/wireshark/" (tried
>> in "." and "debian/tmp")
>> dh_install: libvirt-wireshark missing files: /usr/lib/*/wireshark/
>> dh_install: missing files, aborting
> 
> That seems like a problem in the Debian packaging system, not in
> libvirt.

It sure is. As I wrote, it should be a minor packaging issue.

>  I double-checked with the upstream folks, and the install
> rules for Wireshark plugin doesn't have /*/ in there.

That part (ie: the path with *) isn't a mistake; it's because Debian has
multiarch support, so for example, we get paths like this (just a random
example from my laptop):

/usr/lib/i386-linux-gnu/pulseaudio
/usr/lib/x86_64-linux-gnu/pulseaudio
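
The actual triplet for a given build can be queried with
dpkg-architecture; the * wildcard in a .install file stands for this
value:

  dpkg-architecture -qDEB_HOST_MULTIARCH
  x86_64-linux-gnu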

> Note: You don't even have to build the versions from 'Buster', which are
> quite new.  Just the slightly more conservative libvirt 3.2.0 and QEMU
> 2.9.0 -- only if it's possbile.

Actually, for *official* backports, it's the policy to always update to
whatever is in testing until testing is frozen. I could maintain an
unofficial backport in stretch-stein.debian.net though.

> That said ... I just spent time comparing the release notes of libvirt 3.0.0
> and libvirt 3.2.0[1][2].  By using libvirt 3.2.0 and QEMU 2.9.0, Debian users
> will be spared from a lot of critical bugs (see the full list in [3]) in
> the CPU comparison area.
> 
> [1] https://www.redhat.com/archives/libvirt-announce/2017-April/msg0.html
> -- Release of libvirt-3.2.0
> [2] 
> https://www.redhat.com/archives/libvirt-announce/2017-January/msg3.html
> --  Release of libvirt-3.0.0
> [3] https://www.redhat.com/archives/libvir-list/2017-February/msg01295.html

So, because of these bugs, would you already advise Nova users to use
libvirt 3.2.0 for Queens?

Cheers,

Thomas Goirand (zigo)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Packaging sample config versions

2014-12-11 Thread Thomas Goirand
[…] that's why you should never use tox/pip when building packages.

> Also python-libvirt failed to build because I don't have libvirt
> installed on this system.  So am I to assume that there are no
> libvirt options (which we both know is false)?
> Now I can get an example config - that won't work with my system - per
> what everyone else has been saying.  Also, at what point would the
> average user just say "F it"? - because at this point I feel like if I
> was an average user - I would be there right now.

Yeah, it's annoying, but it can be dealt with.

On 12/10/2014 01:25 AM, Michael Dorman wrote:
> Well I think we can all agree this is an irritation.  But how are
> others  actually dealing with this problem?  (Maybe it’s less
> complicated in Ubuntu.)

Since when exactly have Ubuntu people started caring about the
configuration files that they ship? Last time I checked, Nova doesn't
install a workable nova.conf by default; it only has a few directives,
and that's about it.

> The sense I get is that most people using Anvil, or other custom-ish
> packaging tools, are also running config management which handles
> generating the config files, anyway.  So you don’t so much care about
> the  contents of the config file shipped with the package.

Yeah, that has been the excuse for years, so that 1/ upstream projects
don't care, 2/ downstream distributions don't care, 3/ users give up on
installing by hand. I DON'T BUY THIS CRAP! And as a package maintainer,
I strongly believe that it's my duty to make packages at least a little
bit usable by default.

> Is that accurate for most people?  Or are folks doing some other
> magic to get a good config file in the packages?

No magic. Only hard work can make it happen (unfortunately, currently
limited by the time I have available, which isn't much given the amount
of work OpenStack packaging represents). The way to do things is to get
OpenStack installed (by hand, without any helper, using the package
defaults), then make sure the package ships with sensible defaults that
do work. Which is why I've been working on the "openstack-deploy"
package and scripts in Debian. Contributions are welcome there too!
Also, reading the install-guide and making sure the options it
recommends for the config files are pre-wired in the packages may help
as well.

Cheers,

Thomas Goirand (zigo)

P.S: Matt Fischer, could you *PLEASE* stop posting with a footer that
makes you look like a fool?

On 12/09/2014 11:39 AM, Fischer, Matt wrote:
> This E-mail and any of its attachments may contain Time Warner Cable
> proprietary information, which is privileged, confidential, or
> subject to copyright belonging to Time Warner Cable. This E-mail is
> intended solely for the use of the individual or entity to which it
> is addressed. If you are not the intended recipient of this E-mail,
> you are hereby notified that any dissemination, distribution,
> copying, or action taken in relation to the contents of and
> attachments to this E-mail is strictly prohibited and may be
> unlawful. If you have received this E-mail in error, please notify
> the sender immediately and permanently delete the original and any
> copy of this E-mail and any printout.

I am hereby notifying you: you're sending emails to a public list.
Therefore they *WILL* be reproduced, distributed, quoted, indexed, etc.
And I challenge you (or your company) to dare threaten me again over
unlawful mishandling of contents which you are willingly sending to a
public list... Should my lawyer get in touch with yours? :)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Packaging sample config versions

2014-12-13 Thread Thomas Goirand
[…] generate a config file.

I'm not sure I'm following. But for every package, you need to install
absolutely all build AND runtime dependencies, and if they don't exist
in the distribution, you also need to package them.

> What I am getting from you is that you basically install all the runtime
> deps for the project on your build machine and then build the config using
> the bash script.

Runtime dependencies are needed to run the unit tests, so yes, they get
installed, since the unit tests are run at package build time. On top of
that, you have specific build-dependencies for running the unit tests,
and possibly other things (like the config file generation which we're
talking about right now, or the sphinx docs, for example).

>>> 8.) add sample configuration generated in step 6 to the package.
>>
>> Why wouldn't the process of building your package include building a
>> sample configuration file? I don't get your reasoning here...
> 
> Getting to the point that what tox installs and what are available on the
> system are different. So the only way to get a valid config is to either
> use the same package versions that tox does or to duplicate what "tox" is
> suppose to do for you.

I'm sorry, but I have to say that this is incorrect. What you need to
have installed is the exact same environment where your package will
run, not what tox wants. This is why you should avoid at all costs
running pip install: doing so, you may end up with a wrong .conf.sample
file (which will not match the Python modules that your package will
run on).

> Which sounds like you duplicate what tox does for
> you to avoid that mess.

That's not the only reason. The other is that, by policy, you should
*never ever* need an internet connection to build a package. This is a
strong Debian policy requirement, which I both agree on and support on
the technical level for all of the packages I produce. If there is a
package for which that's not the case, then this is a bug which shall be
reported on the Debian bug tracker.

>>> Then I need to make sure I also package all of the python-versions
>>> that was used in step 4, making sure that I don’t have conflicting
>>> system level dependencies from other openstack projects.
>>
>> Of course all build-dependencies and runtime dependencies need to be
>> packaged, and available in the system. That's the basics of packaging,
>> no? Making sure this happens is about 90% of my Debian packaging work.
>> So far, I haven't seen anyone in the community volunteering to help on
>> packaging Python modules. Why not focus on that rather than wasting your
>> time on non-issues such as generating sample config files? I'd
>> appreciate a lot some help you know...
> 
> That is what this effort is for? Coming up with tooling to package
> openstack and its python modules and if we can't simply include a sample
> config like we have done for the past 3 years, then the tooling (which we
> are trying to consolidate) should help us here.

As I wrote before: you need runtime deps for unit tests. Again, I see no
issue with the current way config files are generated.

>> On 12/09/2014 12:01 PM, Kris G. Lindgren wrote:
>>> I don’t think its too much to ask for each project to include a
>>> script that will build a venv that includes tox and the other
>>> relevant deps to build the sample configuration.
>>
>> A venv, seriously?!?
>>
>> No, it's not that. What need to happen is to have an easy and *OpenStack
>> unified way* of building the config files, and preferably with things
>> integrated directly in a oslo.config new command line. Not just a black
>> magic tox thing, but something documented. But I believe that's already
>> the direction that is being taken (did you notice
>> /usr/bin/oslo-config-generator ?).
> 
> Or projects could maintain a configuration file...  But I guess if
> everyone uses the bash script for config generation then I could work with
> that...

Not everyone does. Canonical people don't really care and don't ship a
full nova.conf.sample with their package. On my side, I only provide it
as documentation, and produce my own nova.conf (which gets installed
in /etc/nova) with defaults which I find convenient and close to the
install-guide. I'm open to discussion about this though; maybe it'd be
best to install a (modified with better defaults) full nova.conf.sample
as /etc/nova/nova.conf directly.

Cheers,

Thomas Goirand (zigo)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Packaging sample config versions

2014-12-13 Thread Thomas Goirand
On 12/13/2014 04:30 AM, George Shuklin wrote:
> I do some tiny CI in my company: repackaging ubuntu
> packages with debian-jenkins-glue (plus backported patches
> icehouse->havana)

Ah, that's interesting! :)

> If I can help somehow, I'm ready to do something, but What should I
> do, exactly?

There's a lot that can be done. If you like working on CI stuff, then
you could help me build the package validation CI which I'm trying to
(re-)work. All of this is currently inside the debian/juno branch of
openstack-meta-packages (in the openstack-tempest-ci package, which
uses the openstack-deploy package).

In the past, I saw *A LOT* of CIs, and most of them were written in a
very dirty way. In fact, it's easy to write a CI, but it's very hard to
write one well. I'm not saying my approach is perfect, but IMO it's
moving in the right direction.

For the moment, the packaged CI can do a full all-in-one deployment from
scratch (starting with an empty VM), install and configure tempest, and
run the Keystone tempest unit tests. I'm having issues with nova-compute
using Qemu, and also the Neutron setup. But once that's fixed, I hope to
be able to run most tempest tests. The next step will be to run on a
multi-node setup.

So, if you want to help with that, and as it seems you like doing CI
stuff, you're more than welcome to do so.

Once we have this, we could start building a repository with everything
from trunk. And when that is done, we could start the effort of building
a 3rd-party CI to do package validation on the gate.

Your thoughts?

Cheers,

Thomas Goirand (zigo)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Packaging sample config versions

2014-12-13 Thread Thomas Goirand
On 12/13/2014 04:59 AM, Jeremy Stanley wrote:
> On a related note, this topic has been added to the agenda[1] for
> Tuesday's Cross-Project Meeting (December 16, 21:00 UTC in the
> #openstack-meeting channel on the Freenode IRC network). If you're
> passionate about the issue then please come help work out a
> consistent solution we can recommend to all projects for future
> releases.
> 
> [1] 
> https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting#Proposed_agenda

Thanks for this.

I'm afraid I won't be able to attend, since I'm back in the Chinese
timezone until the 29th of this month.

Therefore, could you please forward my thoughts to the meeting, which
are: python setup.py install/sdist should run the
"tools/config/generate_sample.sh -b . -p nova -o etc/nova" thing
automatically.
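
In other words, the manual two-step dance that every packager does today
should collapse into one. A sketch, run from a nova checkout:

  ./tools/config/generate_sample.sh -b . -p nova -o etc/nova
  python setup.py sdist   # this should run the step above by itself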

Cheers,

Thomas Goirand (zigo)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Packaging sample config versions

2014-12-15 Thread Thomas Goirand
On 12/14/2014 06:39 AM, George Shuklin wrote:
> Btw: are we talking about debian packages or ubuntu?

Debian, not Ubuntu.

> They are differ -
> debian heavily relies on answers to debconfig

You mean debconf. And no, Debian doesn't "rely on" it: it's completely
optional, and the packages *must* be installable in a non-interactive
way (as per Debian policy).

> and ubuntu just put files
> in proper places without changing configs.

Ahem... Ubuntu simply doesn't care much about config files. See what
they ship for Nova and Cinder. I wouldn't say "without changing configs"
in this case.

> We're using chef for
> configuration, so the ubuntu approach is better

It's not better or worse: it's exactly the same in Debian, as the
Debian package will *never* change something you modified in a config
file, as per Debian policy (if they do, then it's a bug you shall report
to the tracker).
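
As an illustration, dpkg will ask before touching a conffile that was
modified both locally and upstream; to keep the local version
non-interactively during upgrades, something like this works (the
package name is just an example):

  apt-get -o Dpkg::Options::="--force-confold" install nova-common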

> (when we started doing openstack, that was one of the deciding factors
> between debian and ubuntu).

Then you decided on the wrong grounds.

On 12/14/2014 09:03 AM, George Shuklin wrote:
> Well, 'preseed' is just more work

But it's completely optional. Also, the openstack-meta-packages source
package provides all the facilities for you (see the "openstack-deploy"
package, which contains the preseed lib).

> noninteractive dpkg - is much better.

Then use:
DEBIAN_FRONTEND=noninteractive apt-get install <package>

and the Debian packages are all non-interactive as well.
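
And if you do want specific answers in a non-interactive run, preseed
them beforehand. A minimal sketch (the debconf question name and value
are illustrative, not necessarily what the packages ship):

  echo "keystone keystone/admin-password password s3cr3t" \
      | debconf-set-selections
  DEBIAN_FRONTEND=noninteractive apt-get install -y keystone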

> Anyway, I'm ready to help but have no idea how (within my limits).

Do you have any experience building 3rd party CIs on OpenStack infra?

Thomas Goirand (zigo)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Packaging sample config versions

2014-12-16 Thread Thomas Goirand
On 12/16/2014 09:33 AM, George Shuklin wrote:
> Nope. I've only done stuff with debian-jenkins-glue. But I have some
> experience backporting patches from icehouse to havana (it's still in
> production and still needs fixes). I can research/fix something
> specific and local.

Oh, if you're good with back-porting, then when Icehouse is officially
end-of-life upstream, you could join the team working on its extended
support. I'll be doing the distribution coordination for security
fixing. Is this an area you'd like to work on?

Otherwise, there's room for just *any* packaging work: from the smallest
Python module, to backporting key packages (like libvirt, Qemu, Ceph,
and so on...), to working on core packages and testing. Just take your
pick... Basically, I'd accept *any* help, and I will adapt to make it
comfy for you to help! :)

Cheers,

Thomas Goirand (zigo)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Packaging sample config versions

2014-12-16 Thread Thomas Goirand
>> [...] (if they do, then it's a bug you shall report to the tracker).
> 
> Again - I don't have this restriction and I guess because of it - building
> packages and dependent packages is apparently much easier for me.

You're also producing something of much lower quality this way. If you
don't mind, then ok... (and I won't go into the details of why it is of
lower quality; I already wrote about it, and I just hope you understand
why I'm saying it: it seems you do, but you're satisfied with the
result, so it's ok...)

> Honestly - I have no preference on this.  I am going to change the
> defaults anyway to customize it to something that works for me.  So
> either way I need to modify the config file, and the configuration
> management stuff does this for me.  So if people want to put extra
> work into making a config file that they think is going to work well
> for everyone, please do so; to me, that's a dead end.  What I would
> rather see is feedback from operators to dev to get some defaults
> changed to more reasonable values.  I am specifically thinking about
> certain periodic tasks that everyone after a certain size is either
> changing or turning off.

I'd love to see this kind of change pushed upstream, so that everyone
benefits from it.

Cheers,

Thomas Goirand (zigo)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Packaging sample config versions

2014-12-18 Thread Thomas Goirand
Jeremy,

Thanks *A LOT* for writing this up. This is very helpful.

On 12/18/2014 09:57 AM, Jeremy Stanley wrote:
> During the first half of yesterday's cross-project meeting, we went
> through the sample configuration packaging/publishing topic to get a
> better idea of what options are open to us. Many thanks to all who
> attended. The meeting summary with a link to the full discussion
> logs can be found here:
> 
>  http://eavesdrop.openstack.org/meetings/crossproject/2014/crossproject.2014-12-16-21.01.html
> 
> We spent a fair amount of time discussing the current configuration
> model and the challenges presented by its design, particularly that
> the presence or absence of different libraries (and differing
> versions thereof) influence what configuration options are available
> in a service, the values to which they can be set, and in some cases
> those to which they default when otherwise omitted. I don't want to
> go into further detail on that within this thread, but feel
> obligated to point out that this model is to some extent the result
> of earlier operational complaints about having to modify too many
> different configuration files to set up a single service.
> 
> We debated the merits and drawbacks of a number of options proposed
> here and within the meeting. Before I go into them I'm compelled to
> remind everyone that none of these comes without some cost in
> development effort and ongoing management overhead, and ask that
> anyone who expresses a preference for one or more to include
> concrete use case descriptions. Things we can consider implementing:
> 
> 1. Standardize on a common mechanism across all projects to generate
> sample configuration files. This should be able to run within a
> global system context, not just within a virtualenv via tox.

Yes please! I'm already using what tox does, instead of tox itself. IMO,
this should go into oslo.config itself (or some similar library).

> 2. Provide a solution which runs within the scope of each project's
> setup.py to generate sample configuration and include it in any
> sdist tarball or Python wheel. This would have the added benefit
> that people installing via pip from PyPI or just retrieving official
> tarballs would get copies of sample configuration from the timeframe
> when they were generated. As this complicates sdist generation
> (because it requires installation of required and optional libraries
> used by the service), it probably needs to be easy to enable and
> disable.

As you know, I don't care about the sdist tarballs, but I do want
"python setup.py install" to generate the config files. Otherwise, a
"python setup.py config-file" or something similar would do, as long as
it is:
1/ Documented
2/ Consistent across all of OpenStack
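
To illustrate what "consistent" means here: the exact same commands
should work in every project tree, with nothing project-specific to
discover first. Something like this (the "config-file" command being
the proposal, not something setup.py provides today):

  python setup.py install       # generates the samples as a side effect
  python setup.py config-file   # only generates them (proposed)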

> 3. Design a Sphinx plug-in or other similar solution to generate and
> include sample configuration files within the developer
> documentation of each project. Since this documentation is
> automatically updated and published, it would provide a stable
> location where interested parties can view and download these files
> without needing to manually generate or extract them from an
> archive.

This doesn't fix the consistency issue: the generated samples must still
match the libraries installed in the environment they document.

> 4. Set up a service that periodically regenerates sample
> configuration and tracks it over time. This attempts to address the
> stated desire to be able to see how sample configurations change,
> but note that this is a somewhat artificial presentation since there
> are a lot of variables (described earlier) influencing the contents
> of such samples--any attempt to render it as a linear/chronological
> series could be misleading.

Same issue.

> Anyway, this is just an attempt to level-set and spur the discussion
> onward to actionable solutions rather than continuing to debate in
> the abstract. Hopefully it takes us in a good direction.

Let's just hope we end up with consistency.

Thomas Goirand (zigo)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [all] Bringing the community together (combine the lists!)

2018-08-30 Thread Thomas Goirand
On 08/30/2018 08:57 PM, Chris Friesen wrote:
> On 08/30/2018 11:03 AM, Jeremy Stanley wrote:
> 
>> The proposal is simple: create a new openstack-discuss mailing list
>> to cover all the above sorts of discussion and stop using the other
>> four.
> 
> Do we want to merge usage and development onto one list?

I really don't want this. I'm happy with things being sorted into
multiple lists, even though I'm subscribed to several of them.

Thomas

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [all] Bringing the community together (combine the lists!)

2018-08-31 Thread Thomas Goirand
On 08/30/2018 11:33 PM, Jeremy Stanley wrote:
> On 2018-08-30 22:49:26 +0200 (+0200), Thomas Goirand wrote:
> [...]
>> I really don't want this. I'm happy with things being sorted in
>> multiple lists, even though I'm subscribed to multiples.
> 
> I understand where you're coming from

I'm coming from the time when OpenStack had a single list on Launchpad
where everything was mixed together. We did the split precisely because
that was really annoying.

> I was accustomed to communities where developers had one mailing
> list, users had another, and whenever a user asked a question on the
> developer mailing list they were told to go away and bother the user
> mailing list instead (not even a good, old-fashioned "RTFM" for
> their trouble).

I don't think that's what we are doing. Usually, when someone makes that
mistake, we do reply to him/her, while also pointing to the correct
list.

> You're probably intimately familiar with at least
> one of these communities. ;)

I know what you have in mind! Indeed, on that list, it happens that some
people are a bit harsh to users. Hopefully, the folks in OpenStack devel
aren't like this.

> As the years went by, it's become apparent to me that this is
> actually an antisocial behavior pattern

On the OpenStack lists, developers take the time to answer users every
day. So I don't see what there is to fix.

> I believe OpenStack actually wants users to see the
> development work which is underway, come to understand it, and
> become part of that process.

Users are very much welcome in our -dev list. I don't think there's a
problem here.

> Requiring them to have their
> conversations elsewhere sends the opposite message.

In many places and on many occasions, we've sent the correct message.

On 08/30/2018 11:45 PM, Jimmy McArthur wrote:
> IMO this is easily solved by tagging.  If emails are properly tagged
> (which they typically are), most email clients will properly sort on
> rules and you can just auto-delete if you're 100% not interested in a
> particular topic.

This typically works for folks used to sending tags. It doesn't for
newcomers, which is exactly who you see coming to ask questions.

Cheers,

Thomas Goirand (zigo)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators