Re: [openstack-dev] [fuel] Nominate Svetlana Karslioglu for fuel-docs core

2015-10-08 Thread Alexander Adamov
+1 to Svetlana's nomination.

On Tue, Sep 29, 2015 at 4:58 AM, Dmitry Borodaenko  wrote:

> I'd like to nominate Svetlana Karslioglu as a core reviewer for the
> fuel-docs-core team. During the last few months, Svetlana restructured
> the Fuel QuickStart Guide, fixed a few documentation bugs for Fuel 7.0,
> and improved the quality of the Fuel documentation through reviews.
>
> I believe it's time to grant her core reviewer rights in the fuel-docs
> repository.
>
> Svetlana's contribution to fuel-docs:
>
> http://stackalytics.com/?user_id=skarslioglu&release=all&project_type=all&module=fuel-docs
>
> Core reviewer approval process definition:
> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
> --
> Dmitry Borodaenko
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Heat] Liberty RC2 available

2015-10-08 Thread Thierry Carrez
Hello everyone,

Due to a number of release-critical issues spotted in Neutron and Heat
during RC1 testing (as well as last-minute translations imports), new
release candidates were created for Liberty. The list of RC2 fixes, as
well as RC2 tarballs are available at:

https://launchpad.net/neutron/liberty/liberty-rc2
https://launchpad.net/heat/liberty/liberty-rc2

Unless new release-critical issues are found that warrant a last-minute
release candidate respin, these tarballs will be formally released as
final "Liberty" versions in a week. You are therefore strongly
encouraged to test and validate these tarballs!

Alternatively, you can directly test the stable/liberty branch at:
http://git.openstack.org/cgit/openstack/neutron/log/?h=stable/liberty
http://git.openstack.org/cgit/openstack/heat/log/?h=stable/liberty

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/neutron/+filebug
or
https://bugs.launchpad.net/heat/+filebug

and tag it *liberty-rc-potential* to bring it to the release crew's
attention.

Thanks!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Scheduler proposal

2015-10-08 Thread Clint Byrum
Excerpts from Maish Saidel-Keesing's message of 2015-10-08 00:14:55 -0700:
> Forgive the top-post.
> 
> Cross-posting to openstack-operators for their feedback as well.
> 
> Ed, the work seems very promising, and I am interested to see how this 
> evolves.
> 
> With my operator hat on I have one piece of feedback.
> 
> By adding in a new Database solution (Cassandra) we are now up to three 
> different database solutions in use in OpenStack
> 
> MySQL (practically everything)
> MongoDB (Ceilometer)
> Cassandra.
> 
> Not to mention two different message queues
> Kafka (Monasca)
> RabbitMQ (everything else)
> 
> Operational overhead has a cost - maintaining 3 different database 
> tools, backing them up, providing HA, etc. has operational cost.
> 
> This is not to say that this cannot be overseen, but it should be taken 
> into consideration.
> 
> And *if* they can be consolidated into an agreed solution across the 
> whole of OpenStack - that would be highly beneficial (IMHO).
> 

Just because they both say they're databases doesn't mean they're even
remotely similar.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-08 Thread Flavio Percoco

On 07/10/15 10:17 -0400, Doug Hellmann wrote:

Excerpts from Flavio Percoco's message of 2015-10-07 16:50:16 +0900:

On 06/10/15 23:36 +0900, Flavio Percoco wrote:
>Greetings,
>
>Not so long ago, Erno started a thread[0] in this list to discuss the
>abandon policies for patches that haven't been updated in Glance.
>
>I'd like to go forward and start following that policy with some
>changes that you can find below:
>
>1) Let's do this on patches that haven't had any activity in the last 2
>months. This adds one more month to Erno's proposal. The reason being
>that during the last cycle, there were some ups and downs in the review
>flow that caused some patches to get stuck.
>
>2) Do this just on master, for all patches regardless of whether they
>fix a bug or implement a spec, and regardless of their review status.
>
>3) The patch will first be marked as a WIP and then abandoned if the
>patch is not updated within 1 week. This will put these patches at the
>beginning of the queue, but using the Glance review dashboard should
>help keep focus.
>
>Unless there are some critical things missing in the above or strong
>opinions against this, I'll make this effective starting next Monday,
>October 12th.

I'd like to provide some extra data here. This is our current status:

==
Total patches without activity in the last 2 months: 73
Total patches closing a bug: 30
Total patches with negative review by core reviewers: 62
Total patches with negative review by non-core reviewers: 75
Total patches without a core review in the last patchset: 13
Total patches with negative review from Jenkins: 50
==
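
For anyone who wants to reproduce or slice numbers like these, Gerrit's REST
API is enough. A minimal sketch (assuming the requests library, and using
Gerrit's "age:" query operator to match the two-month policy above):

    import json
    import requests

    # Open Glance patches with no activity in the last two months.
    resp = requests.get(
        "https://review.openstack.org/changes/",
        params={"q": "project:openstack/glance status:open age:2mon"},
    )
    # Gerrit prefixes JSON responses with ")]}'" to prevent XSSI;
    # strip that first line before parsing.
    changes = json.loads(resp.text.split("\n", 1)[1])
    print("%d patches without activity in the last 2 months" % len(changes))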

It's not ideal, but it's also not a lot. I'd like to recover as many
patches as possible from the above and I'm happy to do that manually
if necessary.

Cheers,
Flavio



It might be useful to schedule a review sprint to take a couple of days
to focus on reviews. Maybe the team can pick dates a couple of weeks in
advance so everyone can get permission to spend the full time on
reviews.


+1 This is something I've planned.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Liberty RC2 available

2015-10-08 Thread Thierry Carrez
Hello everyone,

In order to include last-minute translations updates and fix a couple of
issues, a new liberty release candidate was created for Horizon. RC2
tarballs are available at:

https://launchpad.net/horizon/liberty/liberty-rc2

Unless new release-critical issues are found that warrant a last-minute
release candidate respin, this tarball will be formally released as the
final "Liberty" version on October 15. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/liberty branch at:
http://git.openstack.org/cgit/openstack/horizon/log/?h=stable/liberty

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/horizon/+filebug

and tag it *liberty-rc-potential* to bring it to the release crew's
attention.

Thanks!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live migration in Mitaka

2015-10-08 Thread Daniel P. Berrange
On Wed, Oct 07, 2015 at 03:54:29PM -0600, Chris Friesen wrote:
> On 10/07/2015 03:14 AM, Daniel P. Berrange wrote:
> 
> >For suspended instances, the scenario is really the same as with completely
> >offline instances. The only extra step is that you need to migrate the saved
> >image state file, as well as the disk images. This is trivial once you have
> >done the code for migrating disk images offline, since it's "just one more 
> >file"
> >to care about.  Officially apps aren't supposed to know where libvirt keeps
> >the managed save files, but I think it is fine for Nova to peek behind the
> >scenes to get them. Alternatively I'd be happy to see an API added to libvirt
> >to allow the managed save files to be uploaded & downloaded via a libvirt
> >virStreamPtr object, in the same way we provide APIs to  upload & download
> >disk volumes. This would avoid the need to know explicitly about the file
> >location for the managed save image.
> 
> Assuming we were using libvirt with the storage pools API could we currently
> (with existing libvirt) migrate domains that have been suspended with
> virDomainSave()?  Or is the only current option to have nova move the file
> over using passwordless access?

If you used virDomainSave() instead of virDomainManagedSave() then you control
the file location, so you could create a directory based storage pool and
save the state into that directory, at which point you can use the storage
pool APIs to upload/download that data.
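
As a rough illustration of that route in libvirt-python (a sketch only: the
pool name, paths and domain name below are invented, and error handling is
omitted):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000001")

    # Unlike virDomainManagedSave(), virDomainSave() lets the caller choose
    # the path, so the state file can live inside a directory-based pool.
    dom.save("/var/lib/nova/save-pool/instance-00000001.sav")

    # The saved state is then just another volume, so the storage pool APIs
    # can stream it off the host via a virStreamPtr.
    pool = conn.storagePoolLookupByName("nova-save")
    pool.refresh(0)
    vol = pool.storageVolLookupByName("instance-00000001.sav")
    stream = conn.newStream(0)
    vol.download(stream, 0, 0, 0)
    with open("/tmp/instance-00000001.sav", "wb") as f:
        while True:
            chunk = stream.recv(256 * 1024)
            if not chunk:
                break
            f.write(chunk)
    stream.finish()
    conn.close()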


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-08 Thread Maish Saidel-Keesing

Forgive the top-post.

Cross-posting to openstack-operators for their feedback as well.

Ed, the work seems very promising, and I am interested to see how this 
evolves.


With my operator hat on I have one piece of feedback.

By adding in a new Database solution (Cassandra) we are now up to three 
different database solutions in use in OpenStack


MySQL (practically everything)
MongoDB (Ceilometer)
Cassandra.

Not to mention two different message queues
Kafka (Monasca)
RabbitMQ (everything else)

Operational overhead has a cost - maintaining 3 different database 
tools, backing them up, providing HA, etc. has operational cost.


This is not to say that this cannot be overseen, but it should be taken 
into consideration.


And *if* they can be consolidated into an agreed solution across the 
whole of OpenStack - that would be highly beneficial (IMHO).



--
Best Regards,
Maish Saidel-Keesing


On 10/08/15 03:24, Ed Leafe wrote:

On Oct 7, 2015, at 2:28 PM, Zane Bitter  wrote:


It seems to me (disclaimer: not a Nova dev) that which database to use is 
completely irrelevant to your proposal,

Well, not entirely. The difference is that what separates Cassandra from other DBs 
is exactly the feature that we need. The solution to the scheduler isn't to 
simply "use a database".


which is really about moving the scheduling from a distributed collection of 
Python processes with ad-hoc (or sometimes completely missing) synchronisation 
into the database to take advantage of its well-defined semantics. But you've 
framed it in such a way as to guarantee that this never gets discussed, because 
everyone will be too busy arguing about whether or not Cassandra is better than 
Galera.

Understood - all one has to do is review the original thread from back in July 
to see this happening. But the reason that I framed it then as an experiment in 
which we would come up with measures of success we could all agree on up-front 
was so that if someone else thought that Product Foo would be even better, we 
could set up a similar test bed and try it out. IOW, instead of bikeshedding, 
if you want a different color, you build another shed and we can all have a 
look.


-- Ed Leafe




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] We should move strutils.mask_password back into oslo-incubator

2015-10-08 Thread Julien Danjou
On Wed, Oct 07 2015, Matt Riedemann wrote:

> 2. Backport the oslo.utils change to a stable branch, release it as a patch
> release, bump minimum required version in stable g-r and then backport the 
> nova
> change and depend on the backported oslo.utils stable release - which also
> makes it a dependent library version bump for any packagers/distros that have
> already frozen libraries for their stable releases, which is kind of not fun.

You should not need to bump the minimum version in g-r. The minimum
version there should be the minimal version to have working code.

If you start bumping dependencies or dependencies of dependencies each
time they release because a bug or a security issue is fixed, it's going
to be a never-ending, useless job.

When you're an operator, you know you need to always run the latest
stable version of the things you have in prod' to have all the fixes.
That's common good sense.
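
For readers following along, the helper at the center of this thread scrubs
secrets out of log messages. A quick illustration of its default behaviour
(the input string is invented):

    from oslo_utils import strutils

    print(strutils.mask_password("login: {'user': 'admin', 'password': 'swordfish'}"))
    # -> login: {'user': 'admin', 'password': '***'}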

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live migration in Mitaka

2015-10-08 Thread Paul Carlton



On 08/10/15 09:57, Daniel P. Berrange wrote:

On Wed, Oct 07, 2015 at 03:54:29PM -0600, Chris Friesen wrote:

On 10/07/2015 03:14 AM, Daniel P. Berrange wrote:


For suspended instances, the scenario is really the same as with completely
offline instances. The only extra step is that you need to migrate the saved
image state file, as well as the disk images. This is trivial once you have
done the code for migrating disk images offline, since it's "just one more file"
to care about.  Officially apps aren't supposed to know where libvirt keeps
the managed save files, but I think it is fine for Nova to peek behind the
scenes to get them. Alternatively I'd be happy to see an API added to libvirt
to allow the managed save files to be uploaded & downloaded via a libvirt
virStreamPtr object, in the same way we provide APIs to  upload & download
disk volumes. This would avoid the need to know explicitly about the file
location for the managed save image.

Assuming we were using libvirt with the storage pools API could we currently
(with existing libvirt) migrate domains that have been suspended with
virDomainSave()?  Or is the only current option to have nova move the file
over using passwordless access?

If you used virDomainSave() instead of virDomainManagedSave() then you control
the file location, so you could create a directory based storage pool and
save the state into that directory, at which point you can use the storage
pool APIs to upload/download that data.


Regards,
Daniel

I will update https://review.openstack.org/#/c/232053,
which covers libvirt cold migration of non-active instances, to
include the use of virDomainSave() and thus allow migration of suspended
instances.

--
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:+44 (0)7768 994283
Email:mailto:paul.carlt...@hpe.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be 
legally privileged. If you have received this message in error, you should delete it from 
your system immediately and advise the sender. To any recipient of this message within 
HP, unless otherwise stated you should consider this message and attachments as "HP 
CONFIDENTIAL".




smime.p7s
Description: S/MIME Cryptographic Signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] Using python venvs for keystone, nova, glance, swift, heat, neutron, ceilometer

2015-10-08 Thread Jesse Pretorius
Hi everyone,

We've identified that it would be better for us to make use of python venvs
for various reasons as outlined in the blueprint [1] and in more detail in
the spec [2] for the implementation.

We'd like to solicit feedback from anyone in the community who has
experience running OpenStack services in python venvs. What should we look
out for? What problems have been experienced and how were they resolved?

We'd be happy to discuss this via the mailing list, or in the reviews [3].

Thanks,

Jesse
IRC: odyssey4me

[1]
https://blueprints.launchpad.net/openstack-ansible/+spec/enable-venv-support-within-the-roles
[2]
http://specs.openstack.org/openstack/openstack-ansible-specs/specs/liberty/enable-venv-support-within-the-roles.html
[3]
https://review.openstack.org/#/q/status:open+project:openstack/openstack-ansible+branch:master+topic:bp/enable-venv-support-within-the-roles,n,z
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-doc-tools] [bashate] liberty rc releases

2015-10-08 Thread Andreas Jaeger

On 2015-10-08 12:32, Dimitri John Ledkov wrote:

Heya,

Looks like openstack-doc-tools has staged updated pbr requirements in
its git tree, but there is no liberty RC release for it?


Dimitri,

We released openstack-doc-tools yesterday, is there anything else to do?

Andreas


Similarly, bashate has old pbr global requirements; thus, when
liberty's pbr is packaged with openstack-doc-tools from master, things
fail when running openstack-doc-tools, as a downgraded pbr is attempted
to be installed for bashate.

Please release openstack-doc-tools & bashate versions that are installable &
testable with pbr 1.8.




--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call for contributors

2015-10-08 Thread Somanchi Trinath
Hi-

Count me too.

-
Trinath

From: Edgar Magana [mailto:edgar.mag...@workday.com]
Sent: Thursday, October 08, 2015 5:42 AM
To: OpenStack Development Mailing List (not for usage questions) 
; openstack-operat...@lists.openstack.org; 
openstack-d...@lists.openstack.org
Subject: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call for 
contributors

Hello,

I would like to invite everybody to become an active contributor for the 
OpenStack Networking Guide: http://docs.openstack.org/networking-guide/

During the Liberty cycle we made a lot of progress, and we feel that the guide 
is ready to receive even more contributions and to formalize the team around it 
a bit more.
The first thing that I want to propose is to have a regular meeting over IRC to 
discuss progress and to welcome new contributors. This is the same process 
that other guides, like the operators guide, are following currently.

The networking guide is based on this ToC: 
https://wiki.openstack.org/wiki/NetworkingGuide/TOC
The contribution process is the same as for the rest of the OpenStack docs, under the 
openstack-manuals git repo: 
https://github.com/openstack/openstack-manuals/tree/master/doc/networking-guide/source

Please respond to this thread and let me know if you could allocate some time 
to help us make this guide a rock star like the other ones. Based on the 
responses, I will propose a couple of times for the IRC meeting that could 
accommodate everybody if possible; this is why it is very important to let me 
know your time zone.

I am really looking forward to increasing the number of contributors to this 
guide.

Thanks in advance!

Edgar Magana
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][mistral] Automatic evacuation as a long running task

2015-10-08 Thread Deja, Dawid
Hi Matthew,

Thanks for shedding some light on the problems nova has with evacuating an 
instance. It is very important to keep those limitations in mind when preparing 
the final solution. Or to fix them, as you proposed.

Nevertheless, I would say that evacuationD does more than what calling 'nova 
host-evacuate' does. Let's consider this scenario:

1. Call 'nova host-evacuate HostX'
2. Caller dies during the call - information that some VMs are still to be 
evacuated is lost.

Such a thing would not happen with evacuationD, because it prepares one RabbitMQ 
message for each VM that needs to be evacuated. Moreover, it deals with the 
situation when the process that lists VMs crashes. In that case, the whole 
operation would be continued by another daemon.

EvacD may also handle another problem that you mentioned: failure of the target 
host of the evacuation. In such a scenario, an 'evacuate host' message will be 
sent for the new host and EvacD will try to evacuate all of its VMs - even those 
in the rebuild state. Of course, evacuation of such instances fails, but they 
would eventually enter the error state and evacuationD would start the 
resurrection process. This can be sped up by setting the instances' state to 
'error' (except those which are in the 'active' state) at the beginning of the 
whole 'evacuate host' process.

Finally, another action - called 'Look for VM' - could be added. It would check 
whether a given VM ended up in the active state on the new host; if not, the VM 
could be rebuilt. I hope this would give us as much certainty as possible that 
the VM is alive.
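
For concreteness, the per-VM fan-out described above could look roughly like
this (not evacuationD's actual code - the queue name, broker URL and the
novaclient session are all assumptions):

    import json

    import pika
    from novaclient import client as nova_client

    def enqueue_host_evacuation(session, host,
                                rabbit_url="amqp://guest:guest@localhost/"):
        # One durable message per VM: if the caller dies half way through,
        # the messages already published survive in RabbitMQ, and re-running
        # the 'evacuate host' job publishes the rest.
        nova = nova_client.Client("2", session=session)
        servers = nova.servers.list(
            search_opts={"host": host, "all_tenants": 1})
        conn = pika.BlockingConnection(pika.URLParameters(rabbit_url))
        channel = conn.channel()
        channel.queue_declare(queue="evacuate_vm", durable=True)
        for server in servers:
            channel.basic_publish(
                exchange="",
                routing_key="evacuate_vm",
                body=json.dumps({"instance_uuid": server.id}),
                properties=pika.BasicProperties(delivery_mode=2),  # persist
            )
        conn.close()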

Dawid

On Tue, 2015-10-06 at 16:34 +0100, Matthew Booth wrote:
Hi, Roman,

Evacuated has been on my radar for a while and this post has prodded me to take 
a look at the code. I think it's worth starting by explaining the problems in 
the current solution. Nova client is currently responsible for doing this 
evacuate. It does:

1. List all instances on the source host
2. Initiate evacuate for each instance

Evacuating a single instance does:

API:
1. Set instance task state to rebuilding
2. Create a migration record with source and dest if specified

Conductor:
3. Call the scheduler to get a destination host if not specified
4. Get the migration object from the db

Compute:
5. Rebuild the instance on dest
6. Update instance.host to dest

Examining single instance evacuation, the first obvious thing to look at is 
what happens if two run simultaneously. Because step 1 is atomic, it should not be 
possible to initiate 2 evacuations simultaneously of a single instance. 
However, note that this atomic action hasn't updated the instance host, meaning 
the source host remains the owner of this instance. If the evacuation process 
fails to complete, the source host will automatically delete it if it comes 
back up because it will find a migration record, but it will not be rebuilt 
anywhere else. Evacuating it again will fail, because its task state is already 
rebuilding.

Also, let's imagine that the conductor crashes. There is not enough state for 
any tool, whether internal or external, to be able to know if the rebuild is 
ongoing somewhere or not, and therefore whether it is safe to retry even if 
that retry would succeed, which it wouldn't.

Which is to say that we can't currently robustly evacuate one instance!

Looking at the nova client side, there is an obvious race there: there is no 
guarantee in step 2 that instances returned in step 1 have not already been 
evacuated by another process. We're protected here, though, because evacuating a 
single instance twice will fail the second time. Note that the process isn't 
idempotent, though, because an evacuation which falls into a hole will never be 
retried.

Moving on to what evacuated does. Evacuated uses rabbit to distribute jobs 
reliably. There are 2 jobs in evacuated:

1. Evacuate host:
  1.1 Get list of all instances on the source host from Nova
  1.2 Send an evacuate vm job for each instance
2. Evacuate vm:
  2.1 Tell Nova to start evacuating an instance

Because we're using rabbit as a reliable message bus, the initiator of one of 
the tasks knows that it will eventually run to completion at least once. Note 
that there's nothing to prevent the task being executed more than once per 
call, though. A task may crash before sending an ack, or may just be really 
slow. However, in both cases, for exactly the same reasons as for the 
implementation in nova client, running more than once should not race. It is 
still not idempotent, though, again for exactly the same reasons as nova client.

Also notice that, exactly as in the nova client implementation, we are not 
asserting that an instance has been evacuated. We are only asserting that we 
called nova.evacuate, which is to say that we got as far as step 2 in the 
evacuation sequence above.

In other words, in terms of robustness, calling evacuated's evacuate host is 
identical to asserting that nova client's evacuate host ran to completion at 
least once, which is quite a lot simpler to do. That's 

[openstack-dev] [horizon] Weekly Bug Report

2015-10-08 Thread Rob Cresswell (rcresswe)
Morning Horizoneers,

https://wiki.openstack.org/wiki/Horizon/WeeklyBugReport


Weekly bug report has been updated! Most of the bugs were solved
(woohoo!), or are stuck at a discussion point and have been removed until
better solutions are found. I've added new priority bugs, and there are
still some blueprints to look over.

As usual, if you feel there is something critical please add it (or shout
at me to do my job better).

Cheers,
Rob



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-doc-tools] [bashate] liberty rc releases

2015-10-08 Thread Dimitri John Ledkov
On 8 October 2015 at 11:32, Dimitri John Ledkov
 wrote:
> Heya,
>
> Looks like openstack-doc-tools has staged updated pbr requirements in
> its git tree, but there is no liberty RC release for it?
>
> Similarly, bashate has old pbr global requirements; thus, when
> liberty's pbr is packaged with openstack-doc-tools from master, things
> fail when running openstack-doc-tools, as a downgraded pbr is attempted
> to be installed for bashate.
>
> Please release openstack-doc-tools & bashate versions that are installable &
> testable with pbr 1.8.
>

actually openstack-doc-tools 0.31 is out. So I only spot bashate as
needing a release now.

-- 
Regards,

Dimitri.
90 sleeps till Christmas, or less

https://clearlinux.org
Open Source Technology Center
Intel Corporation (UK) Ltd. - Co. Reg. #1134945 - Pipers Way, Swindon SN3 1RJ.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet][Fuel] OpenstackLib Client Provider Better Exception Handling

2015-10-08 Thread Vladimir Kuklin
Hi, folks

* Intro

Per our discussion at Meeting #54 [0], I would like to propose a uniform
approach to exception handling for all puppet-openstack providers accessing
any type of OpenStack API.

* Problem Description

While working on Fuel during deployment of multi-node HA-aware environments
we faced many intermittent operational issues, e.g.:

* 401/403 authentication failures when we were scaling OpenStack
  controllers, due to differences in hashing view between keystone instances
* 503/502/504 errors due to temporary connectivity issues
* non-idempotent operations like deletion or creation - e.g. if you are
  deleting an endpoint and someone is deleting it on another node and you
  get a 404, you should continue with success instead of failing. A 409
  Conflict error should also signal us to re-fetch the resource parameters
  and then decide what to do with them.

Obviously, it is not optimal to rerun puppet to correct such errors when we
can just handle an exception properly.

* Current State of Art

There is some exception handling, but it does not cover all the
aforementioned use cases.

* Proposed solution

Introduce a library of exception handling methods which should be the same
for all puppet openstack providers as these exceptions seem to be generic.
Then, for each of the providers we can introduce provider-specific
libraries that will inherit from this one.
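
To make the idea concrete, here is the kind of classification such a library
could encode - shown in Python for brevity, although the real thing would be
Ruby code shipped next to the providers, and every name below is invented:

    import time

    RETRYABLE = {401, 403, 502, 503, 504}  # transient auth/proxy failures
    REFETCH = {409}                        # conflict: re-read, then decide

    def run_api_call(call, operation, retries=5, delay=2):
        """Run call() (returning an HTTP status) under the uniform policy."""
        for _ in range(retries):
            status = call()
            if status < 400:
                return "ok"
            if status == 404 and operation == "delete":
                return "ok"        # already gone elsewhere: idempotent success
            if status in REFETCH:
                return "refetch"   # caller re-fetches resource parameters
            if status in RETRYABLE:
                time.sleep(delay)  # wait out the intermittent failure
                continue
            raise RuntimeError("API call failed with HTTP %d" % status)
        raise RuntimeError("still failing after %d attempts" % retries)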

Our mos-puppet team could add this to their backlog and work on it
upstream, or work on it downstream and then propose it upstream.

What do you think about that, puppet folks?

[0]
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-10-06-15.00.html

-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] naming N and O releases nowish

2015-10-08 Thread Sean Dague
On 10/07/2015 06:22 PM, Monty Taylor wrote:
> On 10/07/2015 09:24 AM, Sean Dague wrote:
>> On 10/07/2015 08:57 AM, Thierry Carrez wrote:
>>> Sean Dague wrote:
 We're starting to make plans for the next cycle. Long term plans are
 getting made for details that would happen in one or two cycles.

 As we already have the locations for the N and O summits I think we
 should do the naming polls now and have names we can use for this
 planning instead of letters. It's pretty minor but it doesn't seem like
 there is any real reason to wait and have everyone come up with working
 names that turn out to be confusing later.
>>>
>>> That sounds fair. However the release naming process currently
>>> states[1]:
>>>
>>> """
>>> The process to chose the name for a release begins once the location of
>>> the design summit of the release to be named is announced and no sooner
>>> than the opening of development of the previous release.
>>> """
>>>
>>> ...which if I read it correctly means we could pick N now, but not O. We
>>> might want to change that (again) first.
>>>
>>> [1] http://governance.openstack.org/reference/release-naming.html
>>
>> Right, it seems like we should change it so that we can do naming as
>> soon as the location is announced.
>>
>> For projects like Nova that are trying to plan things more than one
>> cycle out, having those names to hang those features on is massively
>> useful (as danpb also stated). Delaying for bureaucratic reasons just
>> seems silly. :)
> 
> So, for what it's worth, I remember discussing this when we discussed
> the current process, and the change you are proposing was one of the
> options put forward when we talked about it.
> 
> The reason for not doing all of them as soon as we know them was to keep
> a sense of ownership by the people who are actually working on the
> thing. Barcelona is a long way away and we'll all likely have rage quit
> by then, leaving the electorate for the name largely disjoint from the
> people working on the release.
> 
> Now, I hear you - and I'm not arguing that position. (In fact, I believe
> my original thought was in line with what you said here) BUT - I mostly
> want to point out that we have had this discussion, the discussion was
> not too long ago, it covered this point, and I sort of feel like if we
> have another discussion on naming process people might kill us with
> pitchforks.

That's fine. But I also think baking in an assumption that everyone will
rage quit in 2 cycles, so we shouldn't name it, seems massively
pessimistic.

I'll admit that I tuned out a bit in the last conversation because most
of the things people were arguing passionately about were things I felt
ambivalent towards. The thing I mostly care about is getting labels on
things past the next quarter so that we can reinforce that planning for
OpenStack projects isn't just about the next release, but includes big
efforts that span multiple releases.

Ok, I guess I'll propose the change, and that we start these activities
soon for the next TC meeting. And whoever the next TC class is can
address it.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-doc-tools] [bashate] liberty rc releases

2015-10-08 Thread Dimitri John Ledkov
Heya,

Looks like openstack-doc-tools has staged updated pbr requirements in
its git tree, but there is no liberty RC release for it?

Similarly, bashate has old pbr global requirements; thus, when
liberty's pbr is packaged with openstack-doc-tools from master, things
fail when running openstack-doc-tools, as a downgraded pbr is attempted
to be installed for bashate.

Please release openstack-doc-tools & bashate versions that are installable &
testable with pbr 1.8.

-- 
Regards,

Dimitri.
90 sleeps till Christmas, or less

https://clearlinux.org
Open Source Technology Center
Intel Corporation (UK) Ltd. - Co. Reg. #1134945 - Pipers Way, Swindon SN3 1RJ.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Blueprint to change (expand) traditional Ethernet interface naming schema in Fuel

2015-10-08 Thread Steven Hardy
On Thu, Oct 08, 2015 at 12:46:53PM +0300, Albert Syriy wrote:
>Hello,
>
>I would like to pay your attention to the changing interface naming
>schema, which is proposed to be implemented in Fuel [1]. In brief,
>Ethernet network interfaces may not be named as ethX, and there is a
>reported bug about it [2].
>
>There are a lot of reasons to switch to the new naming schema, not only
>because it has been used in CentOS 7 (and probably will be used in the
>next Ubuntu LTS), but because the new naming schema gives more
>predictable interface names [3]. There is a reported bug related to the
>topic [4].
>
>I suspect that changing the interface naming schema may impact the
>current Fuel code, manifests and tests, because hard-coded Ethernet
>interface names (like eth*) should be removed from the code.
>
>Any comments on the blueprint?
>
>[1] https://blueprints.launchpad.net/fuel/+spec/new-network-interfaces-naming-schema
>[2] https://bugs.launchpad.net/fuel/+bug/1494223
>[3] http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/
>[4] https://bugs.launchpad.net/mos/+bug/1487044

You might be interested to look at the os-net-config tool - we faced this
exact same issue with TripleO, and solved it via os-net-config, which
provides abstractions for network configuration, including mapping device
aliases (e.g. "nic1") to real NIC names (e.g. "em1" or whatever).

https://github.com/openstack/os-net-config

Although it was developed by TripleO folks, it's a standalone tool and
there's no reason why it can't be consumed by any other deployment
solution.

Here's some examples of how it works:

https://github.com/openstack/os-net-config/blob/master/etc/os-net-config/samples/interface.yaml

https://github.com/openstack/os-net-config/blob/master/etc/os-net-config/samples/bond_mapped.yaml

https://github.com/openstack/os-net-config/blob/master/etc/os-net-config/samples/mapping.yaml

Basically the "name" of the interface can either be the biosdevname of an
actual NIC, or you can use "nic1" etc., and os-net-config uses a sorted list
of the system names.

If you require more control than that, and/or you want to avoid the risk
that the mapping changes (e.g. if the link goes down, because atm it looks
only for link-up devices), you can specify an explicit mapping by either
device name or MAC address:

https://github.com/openstack/os-net-config/blob/master/etc/os-net-config/samples/mapping.yaml
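
The mapping file from that last sample boils down to something like this
(the device name and MAC are illustrative; a value can be either a system
interface name or a MAC address):

    interface_mapping:
      nic1: em1
      nic2: 00:50:56:2f:ab:cd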

Personally I think it'd be great to see more collaboration on these sorts
of common requirements, vs reinventing different solutions to the same
problems in the various deployment orientated projects :)

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] naming N and O releases nowish

2015-10-08 Thread Daniel P. Berrange
On Wed, Oct 07, 2015 at 02:57:59PM +0200, Thierry Carrez wrote:
> Sean Dague wrote:
> > We're starting to make plans for the next cycle. Long term plans are
> > getting made for details that would happen in one or two cycles.
> > 
> > As we already have the locations for the N and O summits I think we
> > should do the naming polls now and have names we can use for this
> > planning instead of letters. It's pretty minor but it doesn't seem like
> > there is any real reason to wait and have everyone come up with working
> > names that turn out to be confusing later.
> 
> That sounds fair. However the release naming process currently states[1]:
> 
> """
> The process to choose the name for a release begins once the location of
> the design summit of the release to be named is announced and no sooner
> than the opening of development of the previous release.
> """
> 
> ...which if I read it correctly means we could pick N now, but not O. We
> might want to change that (again) first.

Since changing the naming process may take non-negligible time, could
we parallelize, so we can at least press ahead with picking a name for
N asap which is permitted by current rules.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova-compute][nova][libvirt] Extending Nova-Compute for image prefetching

2015-10-08 Thread Alberto Geniola
Hi all,

I'm considering extending the Nova-Compute API in order to provide
image-prefetching capabilities to OpenStack.

In order to allow image prefetching, I ended up with the need to add three
different APIs on the nova-compute nodes:

  1. Trigger an image prefetching

  2. List prefetched images

  3. Delete a prefetched image



Regarding point 1, I saw that I can re-use the libvirt driver function
_create_image() to trigger the image prefetching. However, this approach
will not store any information about the fetched image locally. This leads
to an issue: how do I retrieve a list of already-fetched images? A quick
and simple possibility would be having a local file storing information
about the fetched images. Would that be acceptable? Is there any best
practice in the OpenStack community?
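
One simple shape for such a local file, covering all three operations
(the manifest path and schema here are invented for illustration):

    import json
    import os

    MANIFEST = "/var/lib/nova/prefetch/manifest.json"  # hypothetical path

    def list_prefetched():
        """Return {image_id: {"path": ...}} for all prefetched images."""
        if not os.path.exists(MANIFEST):
            return {}
        with open(MANIFEST) as f:
            return json.load(f)

    def record_prefetched(image_id, path):
        """Remember that an image has been prefetched to a local path."""
        data = list_prefetched()
        data[image_id] = {"path": path}
        with open(MANIFEST, "w") as f:
            json.dump(data, f)

    def delete_prefetched(image_id):
        """Drop an image from the manifest and remove its backing file."""
        data = list_prefetched()
        entry = data.pop(image_id, None)
        if entry and os.path.exists(entry["path"]):
            os.remove(entry["path"])  # drop the cached image itself
        with open(MANIFEST, "w") as f:
            json.dump(data, f)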



Any ideas?


Ty,

Al.

-- 
Dott. Alberto Geniola

  albertogeni...@gmail.com
  +39-346-6271105
  https://www.linkedin.com/in/albertogeniola

Web: http://www.hw4u.it
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Scheduler proposal

2015-10-08 Thread Jeremy Stanley
On 2015-10-08 00:37:31 -0700 (-0700), Clint Byrum wrote:
> Excerpts from Maish Saidel-Keesing's message of 2015-10-08 00:14:55 -0700:
[...]
> > By adding in a new Database solution (Cassandra) we are now up to three
> > different database solutions in use in OpenStack
> > 
> > MySQL (practically everything)
> > MongoDB (Ceilometer)
> > Cassandra.
[...]
> 
> Just because they both say they're databases doesn't mean they're even
> remotely similar.

The DNS is a database too. For that matter, so are filesystems.
Different kinds of databases are useful for different kinds of
tasks, and no one database is ideal for everything.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Blueprint to change (expand) traditional Ethernet interface naming schema in Fuel

2015-10-08 Thread Albert Syriy
Hello,

I would like to pay your attention to the changing interface naming schema,
which is proposed to be implemented in Fuel [1]. In brief, Ethernet network
interfaces may not be named as ethX, and there is a reported bug about it [2].

There are a lot of reasons to switch to the new naming schema, not only
because it has been used in CentOS 7 (and probably will be used in the next
Ubuntu LTS), but because the new naming schema gives more predictable
interface names [3]. There is a reported bug related to the topic [4].

I suspect that changing the interface naming schema may impact the current
Fuel code, manifests and tests, because hard-coded Ethernet interface names
(like eth*) should be removed from the code.

Any comments on the blueprint?

[1]
https://blueprints.launchpad.net/fuel/+spec/new-network-interfaces-naming-schema
[2] https://bugs.launchpad.net/fuel/+bug/1494223
[3]
http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/
[4] https://bugs.launchpad.net/mos/+bug/1487044

With Best Regards,

Albert Syriy,

Software Engineer,
Mirantis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] How to verify a service is correctly setup in heat template

2015-10-08 Thread Qiao,Liyong

hi Magnum hackers:

Recently we upgraded to the fedora atomic-5 image, but the docker (1.7.1) in 
that image doesn't work well; see [1].

When I used that image to create a swarm bay, magnum told me that the bay 
was usable; actually, the swarm-master and swarm-agent services were not 
running correctly, so the bay was not usable.

I proposed a fix [2] to check every service's status (using systemctl 
status) before triggering a signal. Andrew Melton felt that checking is not 
reliable, so he proposed fix [3], but fix [3] does not work because 
additional signals are ignored, since in the heat template the default 
signal count=1. Please refer to [4] for more information.

So my question is: why can [2] not work well? Is my understanding wrong in 
https://bugs.launchpad.net/magnum/+bug/1502329/comments/5?

Is there any other better way to get an asynchronous signal?
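
For context, the count behaviour comes from the wait condition in the bay
templates; a minimal sketch of the relevant resources (names and values are
illustrative, not the actual magnum templates):

    master_wait_handle:
      type: OS::Heat::WaitConditionHandle

    master_wait_condition:
      type: OS::Heat::WaitCondition
      depends_on: swarm_master
      properties:
        handle: {get_resource: master_wait_handle}
        timeout: 6000
        # With the default count of 1, the condition completes on the first
        # signal; any later signal (e.g. a FAILURE sent once services are
        # found dead) no longer affects the stack.
        count: 1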

[1] https://bugs.launchpad.net/magnum/+bug/1499607
[2] https://review.openstack.org/#/c/228762/
[3] https://review.openstack.org/#/c/230639/
[4] https://bugs.launchpad.net/magnum/+bug/1502329

Thanks.

--
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] naming N and O releases nowish

2015-10-08 Thread Sean Dague
On 10/08/2015 06:59 AM, Daniel P. Berrange wrote:
> On Wed, Oct 07, 2015 at 02:57:59PM +0200, Thierry Carrez wrote:
>> Sean Dague wrote:
>>> We're starting to make plans for the next cycle. Long term plans are
>>> getting made for details that would happen in one or two cycles.
>>>
>>> As we already have the locations for the N and O summits I think we
>>> should do the naming polls now and have names we can use for this
>>> planning instead of letters. It's pretty minor but it doesn't seem like
>>> there is any real reason to wait and have everyone come up with working
>>> names that turn out to be confusing later.
>>
>> That sounds fair. However the release naming process currently states[1]:
>>
>> """
>> The process to choose the name for a release begins once the location of
>> the design summit of the release to be named is announced and no sooner
>> than the opening of development of the previous release.
>> """
>>
>> ...which if I read it correctly means we could pick N now, but not O. We
>> might want to change that (again) first.
> 
> Since changing the naming process may take non-negligible time, could
> we parallelize, so we can at least press ahead with picking a name for
> N asap which is permitted by current rules.

Agreed. I believe that Monty and Jim signed up for shepherding this
after the last naming rules change. I've added it to the TC agenda for
next week to kickstart the process.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [cinder] [all] The future of Cinder API v1

2015-10-08 Thread Sean Dague
Just to follow up, there was a discussion at the TC meeting on this, and
given how close we are to summit we're proposing we have a cross project
session there about it - http://odsreg.openstack.org/cfp/details/27

We'll try to get that scheduled in a way that it will not conflict with
operator sessions, so we can have operators in the room for it as well.

For folks that can't make it to summit, don't worry, we'll take that
discussion as a seed and bring the results back to the list / gerrit.

On 10/01/2015 11:05 AM, Ivan Kolodyazhny wrote:
> Sean,
> 
> Thanks for bringing this topic to TC meeting.
> 
> Regards,
> Ivan Kolodyazhny,
> Web Developer,
> http://blog.e0ne.info/,
> http://notacash.com/,
> http://kharkivpy.org.ua/
> 
On Thu, Oct 1, 2015 at 1:43 PM, Sean Dague wrote:
> 
> This is now queued up for discussion this week -
> https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda
> 
> On 10/01/2015 06:22 AM, Sean Dague wrote:
> > Some of us are actively watching the thread / participating. I'll make
> > sure it gets on the TC agenda in the near future.
> >
> > I think most of the recommendations are quite good, especially on the
> > client support front for clients / tools within our community.
> >
> > On 09/30/2015 10:37 PM, Matt Fischer wrote:
> >> Thanks for summarizing this Mark. What's the best way to get feedback
> >> about this to the TC? I'd love to see some of the items which I think
> >> are common sense for anyone who can't just blow away devstack and
> start
> >> over to get added for consideration.
> >>
> >> On Tue, Sep 29, 2015 at 11:32 AM, Mark Voelker wrote:
> >>
> >>
> >> Mark T. Voelker
> >>
> >>
> >>
> >> > On Sep 29, 2015, at 12:36 PM, Matt Fischer wrote:
> >> >
> >> >
> >> >
> >> > I agree with John Griffith. I don't have any empirical
> evidences
> >> to back
> >> > my "feelings" on that one but it's true that we weren't
> enable to
> >> enable
> >> > Cinder v2 until now.
> >> >
> >> > Which makes me wonder: When can we actually deprecate an API
> >> version? I
> >> > *feel* we are fast to jump on the deprecation when the
> replacement
> >> isn't
> >> > 100% ready yet for several versions.
> >> >
> >> > --
> >> > Mathieu
> >> >
> >> >
> >> > I don't think it's too much to ask that versions can't be
> >> deprecated until the new version is 100% working, passing all
> tests,
> >> and the clients (at least python-xxxclients) can handle it
> without
> >> issues. Ideally I'd like to also throw in the criteria that
> >> devstack, rally, tempest, and other services are all using and
> >> exercising the new API.
> >> >
> >> > I agree that things feel rushed.
> >>
> >>
> >> FWIW, the TC recently created an
> assert:follows-standard-deprecation
> >> tag.  Ivan linked to a thread in which Thierry asked for input on
> >> it, but FYI the final language as it was approved last week
> [1] is a
> >> bit different than originally proposed.  It now requires one
> release
> >> plus 3 linear months of
> deprecated-but-still-present-in-the-tree as
> >> a minimum, and recommends at least two full stable releases for
> >> significant features (an entire API version would undoubtedly
> fall
> >> into that bucket).  It also requires that a migration path
> will be
> >> documented.  However to Matt’s point, it doesn’t contain any
> >> language that says specific things like:
> >>
> >> In the case of major API version deprecation:
> >> * $oldversion and $newversion must both work with
> >> [cinder|nova|whatever]client and openstackclient during the
> >> deprecation period.
> >> * It must be possible to run $oldversion and $newversion
> >> concurrently on the servers to ensure end users don’t have to
> switch
> >> overnight.
> >> * Devstack uses $newversion by default.
> >> * $newversion works in Tempest/Rally/whatever else.
> >>
> >> What it *does* do is require that a thread be started here on
> >> openstack-operators [2] so that operators can provide
> feedback.  I
> >> would hope that feedback like “I can’t get clients to use it so
> >> please don’t remove it yet” would be taken into account by
> projects,
> >> which 

[openstack-dev] [neutron][testing][all] fixtures 1.4.0 breaking py34 jobs

2015-10-08 Thread Ihar Hrachyshka
Hi all,

just a heads up that today fixtures 1.4.0 broke the neutron py34 gate [1] and we 
needed to patch some logging code to overcome it [2]. The failures were 
triggered by a patch that started to raise logging exceptions for incorrect 
format strings (which is fine), but it also started to raise exceptions from 
stdlib logging code; apparently until 3.5 it had a bug in the case where 
someone uses LOG.exception() in a context where no exception was actually 
raised.

More details about why I think the issue is in the Python interpreter are 
available in the commit message of [2].

I agree that using LOG.exception in such a context is wrong, but I still 
wanted to notify others about the potential issue, and a way to fix it.

[1]: https://launchpad.net/bugs/1504053
[2]: https://review.openstack.org/#/c/232265/
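
To make the failure mode concrete, here is the problematic pattern and the
safe one (a minimal sketch, not the actual neutron code):

    import logging

    LOG = logging.getLogger(__name__)

    # Problematic on Python 3.4: LOG.exception() outside an "except" block
    # asks stdlib logging to format a non-existent active exception, which
    # errors out inside the logging machinery (surfaced as a failure by
    # fixtures 1.4.0's stricter logging checks; Python 3.5 copes with it).
    LOG.exception("something went wrong")

    # The fix: when no exception is being handled, log a plain error.
    LOG.error("something went wrong")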

Ihar


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-doc-tools] [bashate] liberty rc releases

2015-10-08 Thread Dimitri John Ledkov
On 8 October 2015 at 11:51, Andreas Jaeger  wrote:
> On 2015-10-08 12:32, Dimitri John Ledkov wrote:
>>
>> Heya,
>>
>> Looks like openstack-doc-tools has staged updated pbr requirements in
>> its git tree, but there is no liberty RC release for it?
>
>
> Dimitri,
>
> We released openstack-doc-tools yesterday, is there anything else to do?
>

Noticed just now, hence the email a minute ago... >_< but this is
email, hence ping-pong delay.

bashate still needs a refresh of global requirements and a liberty
release it seems.

Regards,

Dimitri.

> Andreas
>
>> Similarly, bashate has old pbr global requirements; thus, when
>> liberty's pbr is packaged with openstack-doc-tools from master, things
>> fail when running openstack-doc-tools, as a downgraded pbr is attempted
>> to be installed for bashate.
>>
>> Please release openstack-doc-tools & bashate versions that are installable &
>> testable with pbr 1.8.
>>
>
>
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Regards,

Dimitri.
90 sleeps till Christmas, or less

https://clearlinux.org
Open Source Technology Center
Intel Corporation (UK) Ltd. - Co. Reg. #1134945 - Pipers Way, Swindon SN3 1RJ.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] Reviews needed: openstack-ansible-security

2015-10-08 Thread Major Hayden
Hey folks,

Now that the openstack-ansible-security role has been added to OpenStack, we're 
in need of some reviews[1]!

Many of these reviews are fairly easy to do as they involve a task or two plus 
a small amount of documentation.  Some reviews involve only documentation.  You 
can refer to each STIG requirement quickly using the STIG Viewer[2].  It's a 
great way for new folks to get started with reviews. ;)

Feel free to ask me any questions about any of the patches.  I'm in 
#openstack-ansible on Freenode as 'mhayden'.

[1] 
https://review.openstack.org/#/q/status:open+project:openstack/openstack-ansible-security,n,z
[2] https://www.stigviewer.com/stig/red_hat_enterprise_linux_6/

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] We should move strutils.mask_password back into oslo-incubator

2015-10-08 Thread Matt Riedemann



On 10/8/2015 4:21 AM, Julien Danjou wrote:

On Wed, Oct 07 2015, Matt Riedemann wrote:


2. Backport the oslo.utils change to a stable branch, release it as a patch
release, bump minimum required version in stable g-r and then backport the nova
change and depend on the backported oslo.utils stable release - which also
makes it a dependent library version bump for any packagers/distros that have
already frozen libraries for their stable releases, which is kind of not fun.


You should not need to bump the minimum version in g-r. The minimum
version there should be the minimal version to have working code.

If you start bumping dependencies or dependencies of dependencies each
time they release because a bug or a security issue is fixed, it's going
to be a never-ending, useless job.

When you're an operator, you know you need to always run the latest
stable version of the things you have in prod' to have all the fixes.
That's common good sense.



I don't know how many operators are tracking patch releases of 
dependencies on stable branches unless there is a new minimum 
requirement on those, especially if they aren't getting their updates 
from a distro provider. So while nova wouldn't be broken w/o the patched 
oslo.utils on stable, the OSSA wouldn't be fixed in that case.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] naming N and O releases nowish

2015-10-08 Thread Barrett, Carol L
Monty - Thanks for the background, it brings a viewpoint I hadn't considered.

From a roadmap point of view, as we're working toward communicating the 
direction for OpenStack project development across 3 releases (Liberty, 
Mitaka, N-Release), I think it would be better to have a name for N, rather 
than using N-Release.

Thanks
Carol

-Original Message-
From: Monty Taylor [mailto:mord...@inaugust.com] 
Sent: Wednesday, October 07, 2015 3:22 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] naming N and O releases nowish

On 10/07/2015 09:24 AM, Sean Dague wrote:
> On 10/07/2015 08:57 AM, Thierry Carrez wrote:
>> Sean Dague wrote:
>>> We're starting to make plans for the next cycle. Long term plans are 
>>> getting made for details that would happen in one or two cycles.
>>>
>>> As we already have the locations for the N and O summits I think we 
>>> should do the naming polls now and have names we can use for this 
>>> planning instead of letters. It's pretty minor but it doesn't seem 
>>> like there is any real reason to wait and have everyone come up with 
>>> working names that turn out to be confusing later.
>>
>> That sounds fair. However the release naming process currently states[1]:
>>
>> """
>> The process to chose the name for a release begins once the location 
>> of the design summit of the release to be named is announced and no 
>> sooner than the opening of development of the previous release.
>> """
>>
>> ...which if I read it correctly means we could pick N now, but not O. 
>> We might want to change that (again) first.
>>
>> [1] http://governance.openstack.org/reference/release-naming.html
>
> Right, it seems like we should change it so that we can do naming as 
> soon as the location is announced.
>
> For projects like Nova that are trying to plan things more than one 
> cycle out, having those names to hang those features on is massively 
> useful (as danpb also stated). Delaying for bureaucratic reasons just 
> seems silly. :)

So, for what it's worth, I remember discussing this when we discussed the 
current process, and the change you are proposing was one of the options put 
forward when we talked about it.

The reason for not doing all of them as soon as we know them was to keep a 
sense of ownership by the people who are actually working on the thing. 
Barcelona is a long way away and we'll all likely have rage quit by then, 
leaving the electorate for the name largely disjoint from the people working on 
the release.

Now, I hear you - and I'm not arguing that position. (In fact, I believe my 
original thought was in line with what you said here) BUT - I mostly want to 
point out that we have had this discussion, the discussion was not too long 
ago, it covered this point, and I sort of feel like if we have another 
discussion on naming process people might kill us with pitchforks.

Monty


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-08 Thread Thierry Carrez
Maish Saidel-Keesing wrote:
> Operational overhead has a cost - maintaining 3 different database
> tools, backing them up, providing HA, etc. has operational cost.
> 
> This is not to say that this cannot be overseen, but it should be taken
> into consideration.
> 
> And *if* they can be consolidated into an agreed solution across the
> whole of OpenStack - that would be highly beneficial (IMHO).

Agreed, and that ties into the similar discussion we recently had about
picking a common DLM. Ideally we'd only add *one* general dependency and
use it for locks / leader election / syncing status around.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cloudkitty] Now that we are in the Big Tent what's the future?

2015-10-08 Thread Stéphane Albert
Hi everyone,

As you might have guessed based on the title CloudKitty is now part of
the Big Tent. It's not new as the decision was taken one week ago, but
some people might not be aware of this.

Thanks to our Big Tent integration we are lucky enough to get two design
summit slots.

The first slot is Wednesday, October 28 at 4:40pm. It's a working
session in the Tachibana room, which has a capacity of 8 people. Our
goal is to plan the future support of gnocchi in CloudKitty. Since some
internal parts are subject to change in the next cycle we want to be
sure it will fit perfectly with gnocchi to minimize future efforts. If
you are working on gnocchi or willing to help us on that, join us.

The second slot is Thursday, October 29 at 5:20pm. It's a fishbowl
session in the Kotobuki room, which is way bigger and can accommodate 55
people. We will review the pending blueprints, discuss of future changes
and the scope of this cycle. We would love to have people with new
ideas, use cases or wanting to help us on this project.
As we can have way more people in the room, feel free to join us and
discuss with us the future of the project.

We'll plan agendas for the design summit during the next IRC meeting. If
you want to add specific points join us next Monday (12th) at 14:00 UTC.

Cheers

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] review priorities etherpad

2015-10-08 Thread James Slagle
At the TripleO meeting this week, we talked about using an etherpad
to help get some organization around reviews for the high priority
themes in progress.

I started on one: https://etherpad.openstack.org/p/tripleo-review-priorities

And I subjectively added a few things :). Feel free to add more stuff.
Personally, I like seeing it organized by "feature" or theme instead
of git repo, but we can work out whatever seems best.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call for contributors

2015-10-08 Thread John Belamaric
I can write something up on the pluggable IPAM configuration for the Advanced 
Configuration section. How does the docs release schedule move in relation to 
the code release? I need an idea of when this needs to be done in order to make 
sure I'll have the time.

John

On Oct 7, 2015, at 8:11 PM, Edgar Magana wrote:

Hello,

I would like to invite everybody to become an active contributor for the 
OpenStack Networking Guide: http://docs.openstack.org/networking-guide/

During the Liberty cycle we made a lot of progress, and we feel that the guide 
is ready for even more contributions and for the team around it to be 
formalized a bit more.
The first thing I want to propose is a regular meeting over IRC to discuss 
progress and to welcome new contributors. This is the same process that other 
guides, such as the operators guide, currently follow.

The networking guide is based on this ToC: 
https://wiki.openstack.org/wiki/NetworkingGuide/TOC
The contribution process is the same as for the rest of the OpenStack docs 
under the openstack-manuals git repo: 
https://github.com/openstack/openstack-manuals/tree/master/doc/networking-guide/source

Please respond to this thread and let me know if you can allocate some time to 
help us make this guide a rock star like the other ones. Based on the 
responses, I will propose a couple of time slots for the IRC meeting that work 
for everybody if possible; this is why it is very important to let me know 
your time zone.

I am really looking forward to increasing the number of contributors to this guide.

Thanks in advance!

Edgar Magana
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call for contributors

2015-10-08 Thread John Belamaric
Lucky me :)

> On Oct 8, 2015, at 10:19 AM, Andreas Jaeger  wrote:
> 
> On 2015-10-08 16:07, John Belamaric wrote:
>> I can write something up on the pluggable IPAM configuration for the
>> Advanced Configuration section. How does the docs release schedule move
>> in relation to the code release? I need an idea of when this needs to be
>> done in order to make sure I'll have the time.
> 
> 
> The Networking Guide is continuously updated, so you can write anytime ;)
> 
> Andreas
> -- 
> Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>   HRB 21284 (AG Nürnberg)
>GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [nova] live migration in Mitaka

2015-10-08 Thread Pavel Boldin
Here you go: https://launchpad.net/~pboldin/+archive/ubuntu/libvirt-python

Use it, but please keep in mind that this is a draft reupload.
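
In case it helps, here is a rough sketch of what the call can look like with
the patched bindings; the domain name, destination URI, and flag choice below
are made up for illustration:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')  # hypothetical domain

    # With the patched libvirt-python, the list-typed parameter takes a
    # Python list of device names ('vdb', not '/dev/vdb') instead of the
    # comma-separated string that virsh expects.
    params = {
        libvirt.VIR_MIGRATE_PARAM_MIGRATE_DISKS: ['vdb', 'vdc'],
    }
    flags = (libvirt.VIR_MIGRATE_LIVE |
             libvirt.VIR_MIGRATE_PEER2PEER |
             libvirt.VIR_MIGRATE_NON_SHARED_INC)
    dom.migrateToURI3('qemu+tcp://dest-host/system', params, flags)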

On Fri, Oct 2, 2015 at 11:38 PM, Mathieu Gagné  wrote:

> On 2015-10-02 4:18 PM, Pavel Boldin wrote:
> >
> > You have to pass device names from /dev/, e.g., if a VM has an
> > ephemeral disk attached at /dev/vdb you need to pass in 'vdb'. The
> > format expected by migrate_disks is "<disk1>,<disk2>,...".
> >
> >
> > This is the format expected by the `virsh' utility and will not work in
> > Python.
> >
> > The libvirt-python has now support for passing lists to a parameter [1].
> >
> > [1]
> >
> http://libvirt.org/git/?p=libvirt-python.git;a=commit;h=9896626b8277e2ffba1523d2111c96b08fb799e8
> >
>
> Thanks for the info. I was banging my head against the wall, trying to
> understand why it didn't accept my list of strings.
>
> Now the next challenge is with Ubuntu packages: only python-libvirt
> 1.2.15 is available in Ubuntu Wily. :-/
>
> --
> Mathieu
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call for contributors

2015-10-08 Thread Andreas Jaeger

On 2015-10-08 16:07, John Belamaric wrote:

I can write something up on the pluggable IPAM configuration for the
Advanced Configuration section. How does the docs release schedule move
in relation to the code release? I need an idea of when this needs to be
done in order to make sure I'll have the time.



The Networking Guide is continuously updated, so you can write anytime ;)

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] We should move strutils.mask_password back into oslo-incubator

2015-10-08 Thread Jeremy Stanley
On 2015-10-08 08:58:06 -0500 (-0500), Matt Riedemann wrote:
[...]
> I don't know how many operators are tracking patch releases of
> dependencies on stable branches unless there is a new minimum
> requirement on those, especially if they aren't getting their
> updates from a distro provider. So while nova wouldn't be broken
> w/o the patched oslo.utils on stable, the OSSA wouldn't be fixed
> in that case.

The OSSA will link to https://review.openstack.org/220620 as part of
the stable/liberty fix and mention something along the lines of
"included in an upcoming oslo.utils 2.5.1 release" (in which case
operators _should_ check whether they are running a new enough
version of the library).
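
A quick way to check, using plain setuptools metadata (nothing oslo-specific):

    import pkg_resources

    # Prints the installed oslo.utils version; once the fix is released,
    # stable/liberty deployments should report 2.5.1 or later.
    print(pkg_resources.get_distribution('oslo.utils').version)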
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] python-tripleoclient has unknown requirement

2015-10-08 Thread Andreas Jaeger

Current requirements job fails on python-tripleoclient with:
'ipaddress' is not in global-requirements.txt

Could you get the requirement into global-requirements or replace it, please?

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-08 Thread Joshua Harlow
On Thu, 8 Oct 2015 10:43:01 -0400
Monty Taylor  wrote:

> On 10/08/2015 09:01 AM, Thierry Carrez wrote:
> > Maish Saidel-Keesing wrote:
> >> Operational overhead has a cost - maintaining 3 different database
> >> tools, backing them up, providing HA, etc. has operational cost.
> >>
> >> This is not to say that this cannot be overseen, but it should be
> >> taken into consideration.
> >>
> >> And *if* they can be consolidated into an agreed solution across
> >> the whole of OpenStack - that would be highly beneficial (IMHO).
> >
> > Agreed, and that ties into the similar discussion we recently had
> > about picking a common DLM. Ideally we'd only add *one* general
> > dependency and use it for locks / leader election / syncing status
> > around.
> >
> 
> ++
> 
> All of the proposed DLM tools can fill this space successfully. There
> is definitely not a need for multiple.

On this point, and just thinking out loud: if we consider saving
compute_node information into, say, a node in said DLM backend (for
example a znode in zookeeper[1]), this information would be updated
periodically by that compute_node *itself* (it would, say, contain
information about what VMs are running on it, what their utilization is,
and so on).

For example the following layout could be used:

/nova/compute_nodes/

 data could be:

{
vms: [],
memory_free: XYZ,
cpu_usage: ABC,
memory_used: MNO,
...
}

Now if we imagine each/all schedulers having watches
on /nova/compute_nodes/ ([2] consul and etcd have equivalent concepts
afaik), then when a compute_node updates that information a push
notification (the watch being triggered) will be sent to the
scheduler(s), and the scheduler(s) could then update a local in-memory
cache of the data about all the hypervisors that can be selected from
for scheduling. This avoids any reading of a large set of data in the
first place (besides an initial read-once on startup to read the
initial list + set up the watches); in a way it's similar to push
notifications. Then when scheduling a VM -> hypervisor there isn't any
need to query anything but the local in-memory representation that the
scheduler is maintaining (and updating as watches are triggered)...

So this is why I was wondering about what capabilities of cassandra are
being used here, because the above I think are unique capabilities of
DLM-like systems (zookeeper, consul, etcd) that could be advantageous
here...

[1]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#sc_zkDataModel_znodes

[2]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkWatches
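
To make that concrete, a minimal scheduler-side sketch with kazoo (a zookeeper
client), using the hypothetical znode layout and JSON payload from above; the
connection details are placeholders:

    import json

    from kazoo.client import KazooClient

    NODES_PATH = '/nova/compute_nodes'

    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()
    zk.ensure_path(NODES_PATH)

    # Local in-memory view of every hypervisor, kept fresh by the watches.
    hypervisors = {}
    watched = set()

    def watch_node(node):
        @zk.DataWatch('%s/%s' % (NODES_PATH, node))
        def _on_change(data, stat):
            if data is None:
                # znode deleted: the compute node went away.
                hypervisors.pop(node, None)
            else:
                hypervisors[node] = json.loads(data.decode('utf-8'))

    @zk.ChildrenWatch(NODES_PATH)
    def _on_children(children):
        for node in children:
            if node not in watched:
                watched.add(node)
                watch_node(node)

    # A scheduling decision then only consults the local cache, e.g.:
    #   best = max(hypervisors.items(), key=lambda kv: kv[1]['memory_free'])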


> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-08 Thread Joshua Harlow

Joshua Harlow wrote:

On Thu, 8 Oct 2015 10:43:01 -0400
Monty Taylor  wrote:


On 10/08/2015 09:01 AM, Thierry Carrez wrote:

Maish Saidel-Keesing wrote:

Operational overhead has a cost - maintaining 3 different database
tools, backing them up, providing HA, etc. has operational cost.

This is not to say that this cannot be overseen, but it should be
taken into consideration.

And *if* they can be consolidated into an agreed solution across
the whole of OpenStack - that would be highly beneficial (IMHO).

Agreed, and that ties into the similar discussion we recently had
about picking a common DLM. Ideally we'd only add *one* general
dependency and use it for locks / leader election / syncing status
around.


++

All of the proposed DLM tools can fill this space successfully. There
is definitely not a need for multiple.


On this point, and just thinking out loud: if we consider saving
compute_node information into, say, a node in said DLM backend (for
example a znode in zookeeper[1]), this information would be updated
periodically by that compute_node *itself* (it would, say, contain
information about what VMs are running on it, what their utilization is,
and so on).

For example the following layout could be used:

/nova/compute_nodes/

  data could be:

{
 vms: [],
 memory_free: XYZ,
 cpu_usage: ABC,
 memory_used: MNO,
 ...
}

Now if we imagine each/all schedulers having watches
on /nova/compute_nodes/ ([2] consul and etcd have equivalent concepts
afaik), then when a compute_node updates that information a push
notification (the watch being triggered) will be sent to the
scheduler(s), and the scheduler(s) could then update a local in-memory
cache of the data about all the hypervisors that can be selected from
for scheduling. This avoids any reading of a large set of data in the
first place (besides an initial read-once on startup to read the
initial list + set up the watches); in a way it's similar to push
notifications. Then when scheduling a VM -> hypervisor there isn't any
need to query anything but the local in-memory representation that the
scheduler is maintaining (and updating as watches are triggered)...

So this is why I was wondering about what capabilities of cassandra are
being used here, because the above I think are unique capabilities of
DLM-like systems (zookeeper, consul, etcd) that could be advantageous
here...

[1]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#sc_zkDataModel_znodes

[2]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkWatches




And here's a final super-awesomeness,

Use the same existence of that znode + information (perhaps using 
ephemeral znodes or equivalent) to determine if a hypervisor is 'alive' 
or 'dead', thus removing the need to do queries and periodic writes to 
the nova database to determine if a hypervisor's nova-compute service is 
alive or dead (with reads via 
https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/db.py#L33 
and other similar code scattered in nova)...
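
And a compute-side sketch of that liveness part, under the same hypothetical
layout (kazoo again; the state values are dummies):

    import json
    import socket

    from kazoo.client import KazooClient

    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()

    # An ephemeral znode vanishes automatically when this process's
    # zookeeper session dies, so its mere existence doubles as the
    # liveness check; no periodic heartbeat rows in the nova database.
    path = '/nova/compute_nodes/%s' % socket.gethostname()
    state = {'vms': [], 'memory_free': 2048, 'cpu_usage': 0.1,
             'memory_used': 1024}
    zk.create(path, json.dumps(state).encode('utf-8'),
              ephemeral=True, makepath=True)

    # Periodic resource updates are then plain writes to the same znode:
    #   zk.set(path, json.dumps(state).encode('utf-8'))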



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Reviews needed: openstack-ansible-security

2015-10-08 Thread Clark, Robert Graham
It might be worth re-posting this with a [Security] tag. 

I know a number of us from the Security project have been quietly keeping tabs 
on this; it seems like great work. We didn't want to wade in because clearly 
things were already moving with some good momentum and there's no need for us 
to try and own everything security-related. 

However, now that you're asking for reviews, I'll make sure this gets discussed 
at today's weekly Security meeting [r1], and hopefully we'll get some reviews 
flowing.

[r1] https://etherpad.openstack.org/p/security-20151008-irc

Cheers
-Rob


> -Original Message-
> From: Major Hayden [mailto:ma...@mhtx.net]
> Sent: 08 October 2015 15:27
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [openstack-ansible] Reviews needed: 
> openstack-ansible-security
> 
> Hey folks,
> 
> Now that the openstack-ansible-security role has been added to OpenStack, 
> we're in need of some reviews[1]!
> 
> Many of these reviews are fairly easy to do as they involve a task or two 
> plus a small amount of documentation.  Some reviews involve only
> documentation.  You can refer to each STIG requirement quickly using the STIG 
> Viewer[2].  It's a great way for new folks to get started with
> reviews. ;)
> 
> Feel free to ask me any questions about any of the patches.  I'm in 
> #openstack-ansible on Freenode as 'mhayden'.
> 
> [1] 
> https://review.openstack.org/#/q/status:open+project:openstack/openstack-ansible-security,n,z
> [2] https://www.stigviewer.com/stig/red_hat_enterprise_linux_6/
> 
> --
> Major Hayden
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] We should move strutils.mask_password back into oslo-incubator

2015-10-08 Thread Ihar Hrachyshka
> On 08 Oct 2015, at 16:51, Matt Riedemann  wrote:
> 
> 
> 
> On 10/8/2015 9:25 AM, Jeremy Stanley wrote:
>> On 2015-10-08 08:58:06 -0500 (-0500), Matt Riedemann wrote:
>> [...]
>>> I don't know how many operators are tracking patch releases of
>>> dependencies on stable branches unless there is a new minimum
>>> requirement on those, especially if they aren't getting their
>>> updates from a distro provider. So while nova wouldn't be broken
>>> w/o the patched oslo.utils on stable, the OSSA wouldn't be fixed
>>> in that case.
>> 
>> The OSSA will link to https://review.openstack.org/220620 as part of
>> the stable/liberty fix and mention something along the lines of
>> "included in an upcoming oslo.utils 2.5.1 release" (in which case
>> operators _should_ check whether they are running a new enough
>> version of the library).
>> 
> 
> OK, that works for me. I'll end this thread and just move forward with the 
> necessary changes for #2 w/o bumping a minimum required version of oslo.utils 
> in g-r on stable.


One of the reasons why you don't want to bump the minimum version for a CVE is 
that a lot of distributions choose to cherry-pick just that CVE fix and not 
rebase on top of an unknown, previously untested version, even if it ships from 
stable branches. In that case, the library's pbr-generated version stays the 
same, and a version bump would break them (of course that's assuming they 
consider requirements.txt versions in their packaging).

Ihar


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] Plan to port Swift to Python 3

2015-10-08 Thread Victor Stinner

Hi,

Good news: we made good progress over the last few weeks on porting Swift to 
Python 3. A few changes were merged, and all dependencies now work on Python 
3. We only need two more simple changes to have a working python34 check job:


* "py3: Update pbr and dnspython requirements"
  https://review.openstack.org/#/c/217423/
* "py3: Add py34 test environment to tox"
  https://review.openstack.org/#/c/199034/

With these changes, it will be possible to make the python34 check job 
voting to avoid Python 3 regressions. It's very important to avoid 
regressions, so we cannot go backward again in Python 3 support.


On IRC, it was said that it's better to merge Python 3 changes at the 
beginning of the Mitaka cycle, because Python 3 requires a lot of small 
changes which can likely introduce (subtle) bugs, and it's better to 
catch them early during the development cycle.


John Dickinson prefers small, incremental changes, whereas clayg seems 
to prefer giant patches that fix all Python 3 issues at once, to avoid 
conflicts with other (non-Python 3) changes. (Sorry if I didn't 
summarize the discussion we had yesterday correctly.)


The problem is that it's hard to fix "all" Python 3 issues in a single 
patch: the patch would be super giant and just impossible to review. 
It's also annoying to have to write dozens of small patches: we lose 
time on merge conflicts, rebasing, random gate failures, etc.


I proposed a first patch series of 6 changes to fix a lot of simple 
Python 3 issues "at once":


* "py3: Replace unicode with six.text_type"
  https://review.openstack.org/#/c/232476/

* "py3: Replace urllib imports with six.moves.urllib"
  https://review.openstack.org/#/c/232536/

* "py3: Use six.reraise() to reraise an exception"
  https://review.openstack.org/#/c/232537/

* "py3: Replace gen.next() with next(gen)"
  https://review.openstack.org/#/c/232538/

* "Replace itertools.ifilter with six.moves.filter"
  https://review.openstack.org/#/c/232539/

* "py3: Replace basestring with six.string_types"
  https://review.openstack.org/#/c/232540/

The overall diff is impressive: "61 files changed, 233 insertions(+), 
189 deletions(-)" ... but each change is quite simple. It's only one 
pattern replaced with a different pattern. For example, replace 
"unicode" with "six.text_type" (and add "import six" if needed). So 
these changes should be easy to review.
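
To illustrate how mechanical each pattern is, here is a made-up before/after
(not an actual Swift hunk):

    import six
    from six.moves.urllib.parse import quote

    name = u'container'

    # Python 2 only:   isinstance(name, unicode)
    # Portable:        six.text_type is unicode on py2 and str on py3.
    if isinstance(name, six.text_type):
        name = name.encode('utf-8')

    # urllib imports likewise move to six.moves, which maps to urllib on
    # py2 and urllib.parse on py3:
    print(quote(name))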


With a working (and voting?) python34 check job and these 6 changes, it 
will be (much) easier to work on porting Swift to Python 3. Following 
patches will be validated by the python34 check job, shorter and 
restricted to a few files.


Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-08 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2015-10-08 08:38:57 -0700:
> Joshua Harlow wrote:
> > On Thu, 8 Oct 2015 10:43:01 -0400
> > Monty Taylor  wrote:
> >
> >> On 10/08/2015 09:01 AM, Thierry Carrez wrote:
> >>> Maish Saidel-Keesing wrote:
>  Operational overhead has a cost - maintaining 3 different database
>  tools, backing them up, providing HA, etc. has operational cost.
> 
>  This is not to say that this cannot be overseen, but it should be
>  taken into consideration.
> 
>  And *if* they can be consolidated into an agreed solution across
>  the whole of OpenStack - that would be highly beneficial (IMHO).
> >>> Agreed, and that ties into the similar discussion we recently had
> >>> about picking a common DLM. Ideally we'd only add *one* general
> >>> dependency and use it for locks / leader election / syncing status
> >>> around.
> >>>
> >> ++
> >>
> >> All of the proposed DLM tools can fill this space successfully. There
> >> is definitely not a need for multiple.
> >
> > On this point, and just thinking out loud: if we consider saving
> > compute_node information into, say, a node in said DLM backend (for
> > example a znode in zookeeper[1]), this information would be updated
> > periodically by that compute_node *itself* (it would, say, contain
> > information about what VMs are running on it, what their utilization is,
> > and so on).
> >
> > For example the following layout could be used:
> >
> > /nova/compute_nodes/
> >
> >   data could be:
> >
> > {
> >  vms: [],
> >  memory_free: XYZ,
> >  cpu_usage: ABC,
> >  memory_used: MNO,
> >  ...
> > }
> >
> > Now if we imagine each/all schedulers having watches
> > on /nova/compute_nodes/ ([2] consul and etcd have equivalent concepts
> > afaik), then when a compute_node updates that information a push
> > notification (the watch being triggered) will be sent to the
> > scheduler(s), and the scheduler(s) could then update a local in-memory
> > cache of the data about all the hypervisors that can be selected from
> > for scheduling. This avoids any reading of a large set of data in the
> > first place (besides an initial read-once on startup to read the
> > initial list + set up the watches); in a way it's similar to push
> > notifications. Then when scheduling a VM -> hypervisor there isn't any
> > need to query anything but the local in-memory representation that the
> > scheduler is maintaining (and updating as watches are triggered)...
> >
> > So this is why I was wondering about what capabilities of cassandra are
> > being used here, because the above I think are unique capabilities of
> > DLM-like systems (zookeeper, consul, etcd) that could be advantageous
> > here...
> >
> > [1]
> > https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#sc_zkDataModel_znodes
> >
> > [2]
> > https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkWatches
> >
> >
> 
> And here's a final super-awesomeness,
> 
> Use the same existence of that znode + information (perhaps using 
> ephemeral znodes or equivalent) to determine if a hypervisor is 'alive' 
> or 'dead', thus removing the need to do queries and periodic writes to 
> the nova database to determine if a hypervisor's nova-compute service is 
> alive or dead (with reads via 
> https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/db.py#L33
> and other similar code scattered in nova)...
> 

^^ THIS is the kind of architectural thinking I'd like to see us do more
of.

This isn't "hey I have a better database" it is "I have a way to reduce
the most common operations to O(1) complexity".

Ed, for all of the promise of your experiment, I'd actually rather see
time spent on Josh's idea above. In fact, I might spend time on Josh's
idea above. :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] We should move strutils.mask_password back into oslo-incubator

2015-10-08 Thread Matt Riedemann



On 10/8/2015 9:25 AM, Jeremy Stanley wrote:

On 2015-10-08 08:58:06 -0500 (-0500), Matt Riedemann wrote:
[...]

I don't know how many operators are tracking patch releases of
dependencies on stable branches unless there is a new minimum
requirement on those, especially if they aren't getting their
updates from a distro provider. So while nova wouldn't be broken
w/o the patched oslo.utils on stable, the OSSA wouldn't be fixed
in that case.


The OSSA will link to https://review.openstack.org/220620 as part of
the stable/liberty fix and mention something along the lines of
"included in an upcoming oslo.utils 2.5.1 release" (in which case
operators _should_ check whether they are running a new enough
version of the library).



OK, that works for me. I'll end this thread and just move forward with 
the necessary changes for #2 w/o bumping a minimum required version of 
oslo.utils in g-r on stable.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call for contributors

2015-10-08 Thread Edgar Magana
Awesome Nate!  Are you located in Philadelphia?

Edgar

From: "Johnston, Nate"
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, October 7, 2015 at 8:50 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call 
for contributors

I can definitely help.

—N.

On Oct 7, 2015, at 8:11 PM, Edgar Magana wrote:

Hello,

I would like to invite everybody to become an active contributor for the 
OpenStack Networking Guide: http://docs.openstack.org/networking-guide/

During the Liberty cycle we made a lot of progress, and we feel that the guide 
is ready for even more contributions and for the team around it to be 
formalized a bit more.
The first thing I want to propose is a regular meeting over IRC to discuss 
progress and to welcome new contributors. This is the same process that other 
guides, such as the operators guide, currently follow.

The networking guide is based on this ToC: 
https://wiki.openstack.org/wiki/NetworkingGuide/TOC
The contribution process is the same as for the rest of the OpenStack docs 
under the openstack-manuals git repo: 
https://github.com/openstack/openstack-manuals/tree/master/doc/networking-guide/source

Please respond to this thread and let me know if you can allocate some time to 
help us make this guide a rock star like the other ones. Based on the 
responses, I will propose a couple of time slots for the IRC meeting that work 
for everybody if possible; this is why it is very important to let me know 
your time zone.

I am really looking forward to increasing the number of contributors to this guide.

Thanks in advance!

Edgar Magana
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call for contributors

2015-10-08 Thread Daniel Mellado
I could try to allocate some time for this; I think it's definitely worth the
effort!

El 08/10/15 a las 02:11, Edgar Magana escribió:
> Hello,
>
> I would like to invite everybody to become an active contributor for
> the OpenStack Networking
> Guide: http://docs.openstack.org/networking-guide/
>
> During the Liberty cycle we made a lot of progress, and we feel that
> the guide is ready for even more contributions and for the team around
> it to be formalized a bit more.
> The first thing I want to propose is a regular meeting
> over IRC to discuss progress and to welcome new contributors. This
> is the same process that other guides, such as the operators guide,
> currently follow.
>
> The networking guide is based on this
> ToC: https://wiki.openstack.org/wiki/NetworkingGuide/TOC
> The contribution process is the same as for the rest of the OpenStack
> docs under the openstack-manuals git
> repo: 
> https://github.com/openstack/openstack-manuals/tree/master/doc/networking-guide/source
>
> Please respond to this thread and let me know if you can allocate
> some time to help us make this guide a rock star like the other ones.
> Based on the responses, I will propose a couple of time slots for the
> IRC meeting that work for everybody if possible; this is why it is
> very important to let me know your time zone.
>
> I am really looking forward to increasing the number of contributors to this guide. 
>
> Thanks in advance!
>
> Edgar Magana
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call for contributors

2015-10-08 Thread Johnston, Nate
No, I am in Reston, VA.

—N.

On Oct 8, 2015, at 11:06 AM, Edgar Magana wrote:

Awesome Nate!  Are you located in Philadelphia?

Edgar

From: "Johnston, Nate"
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, October 7, 2015 at 8:50 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call 
for contributors

I can definitely help.

—N.

On Oct 7, 2015, at 8:11 PM, Edgar Magana wrote:

Hello,

I would like to invite everybody to become an active contributor for the 
OpenStack Networking Guide: http://docs.openstack.org/networking-guide/

During the Liberty cycle we made a lot of progress, and we feel that the guide 
is ready for even more contributions and for the team around it to be 
formalized a bit more.
The first thing I want to propose is a regular meeting over IRC to discuss 
progress and to welcome new contributors. This is the same process that other 
guides, such as the operators guide, currently follow.

The networking guide is based on this ToC: 
https://wiki.openstack.org/wiki/NetworkingGuide/TOC
The contribution process is the same as for the rest of the OpenStack docs 
under the openstack-manuals git repo: 
https://github.com/openstack/openstack-manuals/tree/master/doc/networking-guide/source

Please respond to this thread and let me know if you can allocate some time to 
help us make this guide a rock star like the other ones. Based on the 
responses, I will propose a couple of time slots for the IRC meeting that work 
for everybody if possible; this is why it is very important to let me know 
your time zone.

I am really looking forward to increasing the number of contributors to this guide.

Thanks in advance!

Edgar Magana
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-08 Thread Monty Taylor

On 10/08/2015 09:01 AM, Thierry Carrez wrote:

Maish Saidel-Keesing wrote:

Operational overhead has a cost - maintaining 3 different database
tools, backing them up, providing HA, etc. has operational cost.

This is not to say that this cannot be overseen, but it should be taken
into consideration.

And *if* they can be consolidated into an agreed solution across the
whole of OpenStack - that would be highly beneficial (IMHO).


Agreed, and that ties into the similar discussion we recently had about
picking a common DLM. Ideally we'd only add *one* general dependency and
use it for locks / leader election / syncing status around.



++

All of the proposed DLM tools can fill this space successfully. There is 
definitely not a need for multiple.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] We should move strutils.mask_password back into oslo-incubator

2015-10-08 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2015-10-07 14:38:07 -0500:
> Here's why:
> 
> https://review.openstack.org/#/c/220622/
> 
> That's marked as fixing an OSSA which means we'll have to backport the 
> fix in nova but it depends on a change to strutils.mask_password in 
> oslo.utils, which required a release and a minimum version bump in 
> global-requirements.
> 
> To backport the change in nova, we either have to:
> 
> 1. Copy mask_password out of oslo.utils and add it to nova in the 
> backport or,
> 
> 2. Backport the oslo.utils change to a stable branch, release it as a 
> patch release, bump minimum required version in stable g-r and then 
> backport the nova change and depend on the backported oslo.utils stable 
> release - which also makes it a dependent library version bump for any 
> packagers/distros that have already frozen libraries for their stable 
> releases, which is kind of not fun.

Bug fix releases do not generally require a minimum version bump. The
API hasn't changed, and there's nothing new in the library in this case,
so it's a documentation issue to ensure that users update to the new
release. All we should need to do is backport the fix to the appropriate
branch of oslo.utils and release a new version from that branch that is
compatible with the same branch of nova.

Doug

> 
> So I'm thinking this is one of those things that should ultimately live 
> in oslo-incubator so it can live in the respective projects. If 
> mask_password were in oslo-incubator, we'd have just fixed and 
> backported it there and then synced to nova on master and stable 
> branches, no dependent library version bumps required.
> 
> Plus I miss the good old days of reviewing oslo-incubator 
> syncs...(joking of course).
> 
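
For readers who haven't used the helper, this is roughly what it does
(examples along the lines of the oslo.utils docstrings; the exact set of
matched keys depends on the release):

    from oslo_utils import strutils

    # Values for keys like password/adminPass/auth_token are replaced
    # before the message is logged.
    print(strutils.mask_password("'adminPass' : 'TL0EfN33'"))
    # "'adminPass' : '***'"

    # The replacement marker is configurable:
    print(strutils.mask_password("'adminPass' : 'TL0EfN33'", secret='xxx'))
    # "'adminPass' : 'xxx'"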

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-08 Thread Kevin L. Mitchell
On Wed, 2015-10-07 at 23:17 -0600, Chris Friesen wrote:
> Why is it inevitable?

Well, I would say that this is probably a consequence of the CAP[1]
theorem.

> Theoretically if the DB knew about what resources were originally available 
> and 
> what resources have been consumed, then it should be able to allocate 
> resources 
> race-free (possibly with some retries involved if racing against other 
> schedulers updating the DB, but that would be internal to the scheduler 
> itself).

The problem is, it can't.  The scheduler may be making the decision at
the same time that an update from a compute node is in flight, meaning
that the scheduler is missing (at least) one piece of information.  When
you include a database, that just makes the possibility of missing an
in-flight update worse, because you also have to factor in the latency
of the database update as well.  Also, we have to factor in the
possibility that there are multiple schedulers in play, which further
increases the chance that information critical to the scheduling
decision is still in flight.  If you employ some sort of locking to try
to mitigate all this, you've just effectively thrown away the
scalability that deploying multiple schedulers was supposed to buy you.

[1] https://en.wikipedia.org/wiki/CAP_theorem
-- 
Kevin L. Mitchell 
Rackspace


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-08 Thread Ed Leafe
On Oct 8, 2015, at 8:01 AM, Thierry Carrez  wrote:

>> Operational overhead has a cost - maintaining 3 different database
>> tools, backing them up, providing HA, etc. has operational cost.
>> 
>> This is not to say that this cannot be overseen, but it should be taken
>> into consideration.
>> 
>> And *if* they can be consolidated into an agreed solution across the
>> whole of OpenStack - that would be highly beneficial (IMHO).
> 
> Agreed, and that ties into the similar discussion we recently had about
> picking a common DLM. Ideally we'd only add *one* general dependency and
> use it for locks / leader election / syncing status around.

Oh, yes, sorry, I left that out of this particular post, as it had been 
discussed at length back in July. But yes, introducing a new dependency has a 
high cost, and needs to be justified before anyone would ever consider taking 
on that added cost. That was in my original email [0] back in July:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
At this point I'm sure that most of you are filled with thoughts on
how this won't work, or how much trouble it will be to switch, or how
much more of a pain it will be, or how you hate non-relational DBs, or
any of a zillion other negative thoughts. FWIW, I have them too. But
instead of ranting, I would ask that we acknowledge for now that:

a) it will be disruptive and painful to switch something like this at
this point in Nova's development
b) it would have to provide *significant* improvement to make such a
change worthwhile

So what I'm asking from all of you is to help define the second part:
what we would want improved, and how to measure those benefits. In
other words, what results would you have to see in order to make you
reconsider your initial "nah, this'll never work" reaction, and start
to think that this is will be a worthwhile change to make to Nova.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Whether we make this type of change, or some other type of change, or keep 
things the way they are, having the data to justify that decision is always 
important.

-- Ed Leafe

[0] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069593.html



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Security] Introducing Killick PKI

2015-10-08 Thread Chivers, Doug
Hi All,

At a previous OpenStack Security Project IRC meeting, we briefly discussed a 
lightweight traditional PKI using the Anchor validation functionality, for use 
in internal deployments, as an alternative to things like MS ADCS. To take this 
further, I have drafted a spec, which is in the security-specs repo, and would 
appreciate feedback:

https://review.openstack.org/#/c/231955/

Regards

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-08 Thread Ed Leafe
On Oct 8, 2015, at 10:24 AM, Joshua Harlow  wrote:



> Now if we imagine each/all schedulers having watches
> on /nova/compute_nodes/ ([2] consul and etcd have equivalent concepts
> afaik), then when a compute_node updates that information a push
> notification (the watch being triggered) will be sent to the
> scheduler(s), and the scheduler(s) could then update a local in-memory
> cache of the data about all the hypervisors that can be selected from
> for scheduling. This avoids any reading of a large set of data in the
> first place (besides an initial read-once on startup to read the
> initial list + set up the watches); in a way it's similar to push
> notifications. Then when scheduling a VM -> hypervisor there isn't any
> need to query anything but the local in-memory representation that the
> scheduler is maintaining (and updating as watches are triggered)...

You've hit upon the problem with the current design: multiple, and potentially 
out-of-sync copies of the data. What you're proposing doesn't really sound all 
that different than the current design, which has the compute nodes send the 
updates in their state to the scheduler both on a scheduled task, and in 
response to changes. The impetus for the Cassandra proposal was to eliminate 
this duplication, and have the resources being scheduled and the scheduler all 
working with the same data.

-- Ed Leafe







signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-08 Thread Ed Leafe
On Oct 8, 2015, at 11:03 AM, Kevin L. Mitchell  
wrote:

>> Theoretically if the DB knew about what resources were originally available 
>> and
>> what resources have been consumed, then it should be able to allocate 
>> resources
>> race-free (possibly with some retries involved if racing against other
>> schedulers updating the DB, but that would be internal to the scheduler 
>> itself).
> 
> The problem is, it can't.  The scheduler may be making the decision at
> the same time that an update from a compute node is in flight, meaning
> that the scheduler is missing (at least) one piece of information.  When
> you include a database, that just makes the possibility of missing an
> in-flight update worse, because you also have to factor in the latency
> of the database update as well.  Also, we have to factor in the
> possibility that there are multiple schedulers in play, which further
> worsens the possibility of in-flight information critical to the
> scheduling decision.  If you employ some sort of locking to try to
> mitigate all this, you've just effectively thrown away the scalability
> that deploying multiple schedulers was supposed to buy you.

Yes, the multiple scheduler part is very problematic. Not only could an update 
from the compute node not be received yet, there could also be updates from 
other schedulers that aren't caught. One of the most problematic use cases is 
requests for several similar VMs being received in a short period of time, and 
all scheduling processes handling them picking the same host. In the Cassandra 
scenario, the first would "win", and others would fail their attempt to update 
the resource with the claim, forcing them to select a different host without 
having to first go through the fail/retry cycle of the current design.
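
A sketch of that claim step in Python; both list_hosts and try_claim are
stand-ins for whatever the backing store provides (e.g. a Cassandra
lightweight transaction for the conditional write), and nothing here is
actual nova code:

    import random

    def schedule(request, list_hosts, try_claim, max_attempts=5):
        """Pick a host and atomically claim resources on it.

        try_claim(host, request) must be a conditional write that succeeds
        for exactly one concurrent writer; losers re-read and pick again
        instead of going through a full boot-fail/retry cycle.
        """
        for _ in range(max_attempts):
            candidates = [h for h in list_hosts()
                          if h['memory_free'] >= request['memory']]
            if not candidates:
                raise RuntimeError('no valid host found')
            host = random.choice(candidates)
            if try_claim(host, request):
                return host  # our conditional update won the race
            # Another scheduler claimed this host first; retry with
            # fresher data.
        raise RuntimeError('too many claim conflicts')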

-- Ed Leafe







signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] A few questions on configuring DevStack for Neutron

2015-10-08 Thread Sean M. Collins
Please see my response here:

http://lists.openstack.org/pipermail/openstack-dev/2015-October/076251.html

In the future, do not create multiple threads since responses will get
lost
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Hongbin Lu
Hi team,

I want to move the discussion in the review below to here, so that we can get 
more feedback

https://review.openstack.org/#/c/232175/

In summary, Magnum recently added support for specifying the memory size of 
containers. The specification of the memory size is optional, and the COE won't 
reserve any memory for containers with an unspecified memory size. The debate 
is whether we should document this optional parameter in the quickstart guide. 
Below are the positions of both sides:

Pros:

* It is good practice to always specify the memory size, because 
containers with an unspecified memory size won't have a QoS guarantee.

* The in-development autoscaling feature [1] will query the memory size 
of each container to estimate the residual capacity and trigger scaling 
accordingly. Containers with an unspecified memory size will be treated as 
taking 0 memory, which negatively affects the scaling decision.

Cons:

* The quickstart guide should be kept as simple as possible, so it is 
not a good idea to have the optional parameter in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-08 Thread Ed Leafe
On Oct 8, 2015, at 10:54 AM, Clint Byrum  wrote:

> ^^ THIS is the kind of architectural thinking I'd like to see us do more
> of.

Agreed. If nothing else, I'm glad that I was able to get people thinking about 
new approaches.

> This isn't "hey I have a better database" it is "I have a way to reduce
> the most common operations to O(1) complexity".
> 
> Ed, for all of the promise of your experiment, I'd actually rather see
> time spent on Josh's idea above. In fact, I might spend time on Josh's
> idea above. :)

Cool! I don't really care if my particular ideas are selected; I just want to 
make OpenStack better.


-- Ed Leafe







signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Glance] Liberty RC2 available

2015-10-08 Thread Thierry Carrez
Hello everyone,

(Note:
Those are the last of the release-candidate respins for common bugs and
translations updates. In the coming week leading to final release, only
major regressions or significant install/upgrade issues will trigger a
release candidate respin.)

Due to a number of release-critical issues spotted in Nova and Glance
during RC1 testing (as well as last-minute translations imports), new
release candidates were created for Liberty. The list of RC2 fixes, as
well as RC2 tarballs are available at:

https://launchpad.net/nova/liberty/liberty-rc2
https://launchpad.net/glance/liberty/liberty-rc2

Unless new release-critical issues are found that warrant a last-minute
release candidate respin, these tarballs will be formally released as
final "Liberty" versions in a week. You are therefore strongly
encouraged to test and validate these tarballs !

Alternatively, you can directly test the stable/liberty branch at:
http://git.openstack.org/cgit/openstack/nova/log/?h=stable/liberty
http://git.openstack.org/cgit/openstack/glance/log/?h=stable/liberty

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/nova/+filebug
or
https://bugs.launchpad.net/glance/+filebug

and tag it *liberty-rc-potential* to bring it to the release crew's
attention.

Thanks!

-- 
Thierry Carrez (ttx)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call for contributors

2015-10-08 Thread Duarte, Adolfo
I’m in

From: Somanchi Trinath [mailto:trinath.soman...@freescale.com]
Sent: Thursday, October 08, 2015 4:19 AM
To: OpenStack Development Mailing List (not for usage questions); 
openstack-operat...@lists.openstack.org; openstack-d...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call 
for contributors

Hi-

Count me too.

-
Trinath

From: Edgar Magana [mailto:edgar.mag...@workday.com]
Sent: Thursday, October 08, 2015 5:42 AM
To: OpenStack Development Mailing List (not for usage questions); 
openstack-operat...@lists.openstack.org; openstack-d...@lists.openstack.org
Subject: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call for 
contributors

Hello,

I would like to invite everybody to become an active contributor for the 
OpenStack Networking Guide: http://docs.openstack.org/networking-guide/

During the Liberty cycle we made a lot of progress and we feel that the guide 
is ready to have even more contributions and formalize a bit more the team 
around it.
The first thing that I want to propose is to have a regular meeting over IRC to 
discuss the progress and to welcome new contributors. This is the same process 
that other guides like the operators one are following currently.

The networking guide is based on this ToC: 
https://wiki.openstack.org/wiki/NetworkingGuide/TOC
Contribution process is the same that the rest of the OpenStack docs under the 
openstack-manuals git repo: 
https://github.com/openstack/openstack-manuals/tree/master/doc/networking-guide/source

Please respond to this thread and let me know if you can allocate some time 
to help us make this guide a rock star like the other ones. Based on the 
responses, I will propose a couple of times for the IRC meeting that could 
accommodate everybody if possible, which is why it is very important to let 
me know your time zone.

I am really looking forward to increasing the number of contributors to this guide.

Thanks in advance!

Edgar Magana
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-08 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Joshua Harlow's message of 2015-10-08 08:38:57 -0700:

Joshua Harlow wrote:

On Thu, 8 Oct 2015 10:43:01 -0400
Monty Taylor   wrote:


On 10/08/2015 09:01 AM, Thierry Carrez wrote:

Maish Saidel-Keesing wrote:

Operational overhead has a cost - maintaining 3 different database
tools, backing them up, providing HA, etc. has operational cost.

This is not to say that this cannot be overseen, but it should be
taken into consideration.

And *if* they can be consolidated into an agreed solution across
the whole of OpenStack - that would be highly beneficial (IMHO).

Agreed, and that ties into the similar discussion we recently had
about picking a common DLM. Ideally we'd only add *one* general
dependency and use it for locks / leader election / syncing status
around.


++

All of the proposed DLM tools can fill this space successfully. There
is definitely not a need for multiple.

On this point, and just thinking out loud. If we consider saving
compute_node information into say a node in said DLM backend (for
example a znode in zookeeper[1]); this information would be updated
periodically by that compute_node *itself* (it would say contain
information about what VMs are running on it, what their utilization is
and so-on).

For example the following layout could be used:

/nova/compute_nodes/

   data could be:

{
  vms: [],
  memory_free: XYZ,
  cpu_usage: ABC,
  memory_used: MNO,
  ...
}

Now if we imagine each/all schedulers having watches
on /nova/compute_nodes/ ([2] consul and etc.d have equivalent concepts
afaik) then when a compute_node updates that information a push
notification (the watch being triggered) will be sent to the
scheduler(s) and the scheduler(s) could then update a local in-memory
cache of the data about all the hypervisors that can be selected from
for scheduling. This avoids any reading of a large set of data in the
first place (besides an initial read-once on startup to read the
initial list + setup the watches); in a way its similar to push
notifications. Then when scheduling a VM ->   hypervisor there isn't any
need to query anything but the local in-memory representation that the
scheduler is maintaining (and updating as watches are triggered)...

So this is why I was wondering about what capabilities of cassandra are
being used here; because the above I think are unique capabilities of
DLM-like systems (zookeeper, consul, etcd) that could be advantageous
here...

[1]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#sc_zkDataModel_znodes

[2]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkWatches
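
To make the watch idea concrete, here is a minimal sketch using kazoo (the
Python zookeeper client); the /nova/compute_nodes layout and field names are
just the hypothetical example above, not anything nova ships today:

    import json

    from kazoo.client import KazooClient

    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()

    cache = {}      # scheduler-local view: hypervisor -> latest report
    tracked = set()

    def track(node):
        path = '/nova/compute_nodes/' + node

        @zk.DataWatch(path)
        def on_update(data, stat):
            if data is None:
                # znode deleted -> hypervisor gone, drop it
                cache.pop(node, None)
            else:
                cache[node] = json.loads(data.decode('utf-8'))

    @zk.ChildrenWatch('/nova/compute_nodes')
    def on_membership(children):
        for node in children:
            if node not in tracked:
                tracked.add(node)
                track(node)

After the initial read, scheduling decisions only ever touch the local
cache; zookeeper pushes deltas as they happen.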



And here's a final super-awesomeness,

Use the same existence of that znode + information (perhaps using
ephemeral znodes or equivalent) to determine if a hypervisor is 'alive'
or 'dead', thus removing the need to do queries and periodic writes to
the nova database to determine if a hypervisor's nova-compute service is
alive or dead (with reads via
https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/db.py#L33
and other similar code scattered in nova)...
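
A sketch of the compute-node side of that liveness trick (kazoo again, and
again assuming the purely hypothetical layout from above):

    import json
    import socket

    from kazoo.client import KazooClient

    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()

    report = {'vms': [], 'memory_free': 2048, 'cpu_usage': 0.1}

    # ephemeral=True: the znode disappears by itself when this process's
    # session dies, so 'znode exists' doubles as 'nova-compute is alive'
    zk.create('/nova/compute_nodes/' + socket.gethostname(),
              json.dumps(report).encode('utf-8'),
              ephemeral=True, makepath=True)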



^^ THIS is the kind of architectural thinking I'd like to see us do more
of.

This isn't "hey I have a better database" it is "I have a way to reduce
the most common operations to O(1) complexity".

Ed, for all of the promise of your experiment, I'd actually rather see
time spent on Josh's idea above. In fact, I might spend time on Josh's
idea above. :)


Go for it!

We (at yahoo) are also brainstorming this idea (or something like it), 
and as we hit more performance issues pushing the 1000+ hypervisors in a 
single cluster (no cell/s) (one of our many cluster/s) we will start 
adjusting (and hopefully more blogging, upstreaming and all that) what 
needs to be fixed/tweaked/altered to continue to push these boundaries.


Collab. and all that is welcome too of course :)

P.S.

The DLM spec @ https://review.openstack.org/#/c/209661/ (rendered nicely 
at 
http://docs-draft.openstack.org/61/209661/29/check/gate-openstack-specs-docs/2ff62fa//doc/build/html/specs/chronicles-of-a-dlm.html) 
mentions 'Such a consensus being built will also influence the future 
functionality and capabilities of OpenStack at large so we need to be 
especially careful, thoughtful, and explicit here.'


This statement was really targeted at cases like this, when we (as a 
community) choose a DLM solution we affect the larger capabilities of 
openstack, not just for locking but for scheduling (and likely for other 
functionality I can't even think of/predict...)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack 

[openstack-dev] Error Installing KILO release on powerpc

2015-10-08 Thread Rahul Arora
Hi Team

I am trying to run the OpenStack Kilo release on my powerpc platform. I am able
to cross-compile all the Kilo-related packages using the Yocto framework. But
while running the following command I am getting the error messages below.

keystone-manage db_sync

Traceback (most recent call last):
  File "/usr/bin/keystone-manage", line 30, in 
from keystone import cli
  File "/usr/lib/python2.7/site-packages/keystone/cli.py", line 27, in

from keystone.common import sql
  File "/usr/lib/python2.7/site-packages/keystone/common/sql/__init__.py",
line 15, in 
from keystone.common.sql.core import *  # noqa
  File "/usr/lib/python2.7/site-packages/keystone/common/sql/core.py", line
28, in 
from oslo_db.sqlalchemy import session as db_session
  File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/session.py",
line 300, in 
from oslo_db.sqlalchemy import utils
  File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/utils.py", line
36, in 
from sqlalchemy import inspect
ImportError: cannot import name inspect

I am using below version of keystone.

*Version: 2015.1.0*

Please revert for any extra/further information.

Please help me out to nail down this issue.

Thanks for the help in advance.


--

Regards

Rahul Arora
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call for contributors

2015-10-08 Thread Anthony Chow
Please count me in

(Pacific time zone).

On Thu, Oct 8, 2015 at 10:36 AM, Duarte, Adolfo 
wrote:

> I’m in
>
>
>
> *From:* Somanchi Trinath [mailto:trinath.soman...@freescale.com]
> *Sent:* Thursday, October 08, 2015 4:19 AM
> *To:* OpenStack Development Mailing List (not for usage questions);
> openstack-operat...@lists.openstack.org;
> openstack-d...@lists.openstack.org
> *Subject:* Re: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide
> - Call for contributors
>
>
>
> Hi-
>
>
>
> Count me too.
>
>
>
> -
>
> Trinath
>
>
>
> *From:* Edgar Magana [mailto:edgar.mag...@workday.com
> ]
> *Sent:* Thursday, October 08, 2015 5:42 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>;
> openstack-operat...@lists.openstack.org;
> openstack-d...@lists.openstack.org
> *Subject:* [openstack-dev] [OpenStack-docs][Neutron] Networking Guide -
> Call for contributors
>
>
>
> Hello,
>
>
>
> I would like to invite everybody to become an active contributor for the
> OpenStack Networking Guide: http://docs.openstack.org/networking-guide/
>
>
>
> During the Liberty cycle we made a lot of progress and we feel that the
> guide is ready to have even more contributions and formalize a bit more the
> team around it.
>
> The first thing that I want to propose is to have a regular meeting over
> IRC to discuss the progress and to welcome new contributors. This is the
> same process that other guides like the operators one are following
> currently.
>
>
>
> The networking guide is based on this ToC:
> https://wiki.openstack.org/wiki/NetworkingGuide/TOC
>
> Contribution process is the same that the rest of the OpenStack docs under
> the openstack-manuals git repo:
> https://github.com/openstack/openstack-manuals/tree/master/doc/networking-guide/source
>
>
>
> Please respond to this thread and let me know if you can allocate some
> time to help us make this guide a rock star like the other ones. Based on
> the responses, I will propose a couple of times for the IRC meeting that
> could accommodate everybody if possible, which is why it is very important
> to let me know your time zone.
>
>
>
> I am really looking forward to increasing the number of contributors to this guide.
>
>
>
> Thanks in advance!
>
>
>
> Edgar Magana
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] Introducing Killick PKI

2015-10-08 Thread Adam Young

On 10/08/2015 12:50 PM, Chivers, Doug wrote:

Hi All,

At a previous OpenStack Security Project IRC meeting, we briefly discussed a 
lightweight traditional PKI using the Anchor validation functionality, for use 
in internal deployments, as an alternative to things like MS ADCS. To take this 
further, I have drafted a spec, which is in the security-specs repo, and would 
appreciate feedback:

https://review.openstack.org/#/c/231955/

Regards

Doug

How is this better than Dogtag/FreeIPA?




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] [heat] Mistral Workflow resource type - resource signal handling

2015-10-08 Thread ELISHA, Moshe (Moshe)
Hi,

I would like to propose a change in the behavior of the OS::Mistral::Workflow 
resource signal.

CURRENT:
The OS::Mistral::Workflow resource type is expecting the following request body 
on resource signal request:

{
  "input": {
...
  },
  "params": {
...
  }
}

The input section is optional; if it exists, it will be passed to the workflow 
execution as input.
The params section is also optional; if it exists, it will be passed to the 
workflow execution as parameters.

The problem this approach creates is that external systems often send a 
predefined body that you cannot control, which is obviously not in the format 
the resource expects.
So you basically have no way to pass the information from the request body to 
the workflow execution.


SUGGESTION:
OS::Mistral::Workflow will treat the root of the JSON request body as input 
parameters.
That way you will be able to use external systems by making sure your WF inputs 
are aligned with what the external system sends.

For example, if you try to put the WF alarm_url as a Ceilometer alarm action - 
Ceilometer will send a request similar to:

{
 "severity": "low",
 "alarm_name": "my-alarm",
 "current": "insufficient data",
 "alarm_id": "895fe8c8-3a6e-48bf-b557-eede3e7f4bbd",
 "reason": "1 datapoints are unknown",
 "reason_data": {
   "count": 1,
   "most_recent": null,
   "type": "threshold",
   "disposition": "unknown"
 },
 "previous": "ok"
}

The WF could get this info as input if it will be defined like so:

  my_workflow:
type: OS::Mistral::Workflow
properties:
  input:
current: !!null
alarm_id: !!null
reason: !!null
previous: !!null
severity: !!null
alarm_name: !!null
reason_data: !!null


The (least used) "params" section can be passed in a custom HTTP header, and 
OS::Mistral::Workflow will read it from the header and pass it to the WF 
execution.
Remember, we are trying to solve the problem where you can't influence the 
request format - so in any case the signal will not get the params in the 
request body.
If the user's WF must receive params, the user will always be able to 
create a wrapper WF with only inputs that starts the original WF with inputs 
and params.

To introduce this change in a backward-compatible way, I suggest adding a 
property "params_alarm_http_header_name" to OS::Mistral::Workflow. If it is 
null, the params are expected to be in the body as today.
If not null - the request should have a header with that name and the value 
will be a string representing a JSON dict.
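
For illustration (the property name comes from this proposal and does not
exist anywhere yet), a template would then look like:

  my_workflow:
    type: OS::Mistral::Workflow
    properties:
      params_alarm_http_header_name: X-Mistral-Params
      input:
        alarm_id: !!null
        reason: !!null

and a caller that needs params would send them in the X-Mistral-Params
header as a JSON dict, while the body stays whatever the external system
sends.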

I would really like to hear your opinion and comments.

Thanks.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [neutron] A larger batch of questions about configuring DevStack to use Neutron

2015-10-08 Thread Mike Spreitzer
> From: "Sean M. Collins" 
...
> 
> On Tue, Oct 06, 2015 at 11:25:03AM EDT, Mike Spreitzer wrote:
> > [Sorry, but I do not know if the thundering silence is because these 
> > questions are too hard, too easy, grossly off-topic, or simply because 
> > nobody cares.]
> 
> You sent your first e-mail on a Saturday. I saw it and flagged it for
> reply, but have not had a chance yet. It's only Tuesday. I do care and
> your questions are important. I will say though that it's a little
> difficult to answer your e-mail because of formatting and your thoughts
> seem to jump around. This is not intended as a personal criticism, it's
> just a little difficult to follow your e-mail in order to reply.

Thanks, I am glad somebody cares.  I used different subject lines because 
I suspected that I did not flag them correctly.  I see now that I was 
just too impatient.

..
> > In the section 
> > http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-a-single-interface
> > there is a helpful display of localrc contents.  It says, among other 
> > things,
> > 
> >OVS_PHYSICAL_BRIDGE=br-ex
> >PUBLIC_BRIDGE=br-ex
> > 
> > In the next top-level section, 
> > http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-multiple-interfaces
> > , there is no display of revised localrc contents and no mention of 
> > changing either bridge setting.  That is an oversight, right?
> 
> No, this is deliberate. Each section is meant to be independent, since
> each networking configuration and corresponding DevStack configuration
> is different. Of course, this may need to be explicitly stated in the
> guide, so there is always room for improvement.

I am not quite sure I understand your answer.  Is the intent that I can 
read only one section, ignore all the others, and that will tell me how to 
use DevStack to produce that section's configuration?  If so then it would 
be good if each section had a display of all the necessary localrc 
contents.  OTOH, if some sections build on other sections, it would be 
good if the dependent sections display the localrc contents that differ. 
Right now, the section at 
http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-multiple-interfaces
 
does not display any localrc contents at all.
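
For what it's worth, here is roughly what I would have expected that 
section to display (values are illustrative only; the interface names and 
address ranges obviously depend on the deployment):

    HOST_IP=10.0.0.2
    PUBLIC_INTERFACE=eth1
    OVS_PHYSICAL_BRIDGE=br-ex
    PUBLIC_BRIDGE=br-ex
    FLOATING_RANGE=203.0.113.0/24
    PUBLIC_NETWORK_GATEWAY=203.0.113.1
    Q_FLOATING_ALLOCATION_POOL=start=203.0.113.100,end=203.0.113.200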

> For example, there needs
> to be some editing done for that doc - the part about disabling the
> firewall is just dropped in the middle of the doc and breaks the flow -
> among other things. This is obviously not helpful to a new reader and we
> need to fix.
> 
> 
> > I am guessing I need to set OVS_PHYSICAL_BRIDGE and PUBLIC_BRIDGE to 
> > different values, and the exhibited `ovs-vsctl` commands in this section 
> > apply to $OVS_PHYSICAL_BRIDGE.  Is that right?  Are there other revisions 
> > I need to make to localrc?
> 
> No, this is not correct.
> 
> What does your networking layout look like on the DevStack node that you
> are trying to configure?

I have started over, from exactly the picture drawn at the start of the 
doc.  That has produced a configuration that mostly works.  However, I 
tried creating a VM connected to the public network, and that failed for 
lack of a Neutron DHCP server there.  I am going to work out how to change 
that, and am willing to contribute an update to this doc.  Would you want 
that in this section --- in which case this section needs to specify that 
the provider DOES NOT already have DHCP service on the hardware network 
--- or as a new section?

> 
> 
> > 
> > Looking at 
> > http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html (or, in 
> > former days, the doc now preserved at 
> > http://docs.ocselected.org/openstack-manuals/kilo/networking-guide/content/under_the_hood_openvswitch.html
> > ) I see the name br-ex used for $PUBLIC_BRIDGE --- not $OVS_PHYSICAL_BRIDGE, 
> > right?  Wouldn't it be less confusing if 
> > http://docs.openstack.org/developer/devstack/guides/neutron.html used a 
> > name other than "br-ex" for the exhibited commands that apply to 
> > $OVS_PHYSICAL_BRIDGE?
> 
> No, this is deliberate - br-ex is the bridge that is used for external
> network traffic - such as floating IPs and public IP address ranges. On
> the network node, a physical interface is attached to br-ex so that
> traffic will flow.
> 
> PUBLIC_BRIDGE is a carryover from DevStack's Nova-Network support and is
> used in some places, with OVS_PHYSICAL_BRIDGE being used by DevStack's
> Neutron support, for the Open vSwitch driver specifically. They are two
> variables that for the most part serve the same purpose. Frankly,
> DevStack has a lot of problems with configuration knobs, and
> PUBLIC_BRIDGE and OVS_PHYSICAL_BRIDGE is just a symptom.

Ah, thanks, that helps.  But I am still confused.  When using Neutron with 
two interfaces, there will be a bridge for each.  I have learned that 
DevStack will automatically create one 

Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Adrian Otto
Thanks Hongbin, for raising this for discussion. There is a middle ground that 
we can reach. We can collect a set of “best practices”, and place them together 
in a document. Some of them will be operational best practices for cloud 
operators, and some of them will be for end users. We can make callouts to them 
in the quick start, so our newcomers know where to look for them, but this will 
help us to keep the quickstart concise. The practice of selecting a memory 
limit would be one of the best practices that we can call out to.

Adrian

On Oct 8, 2015, at 9:00 AM, Hongbin Lu 
> wrote:

Hi team,

I want to move the discussion in the review below to here, so that we can get 
more feedback

https://review.openstack.org/#/c/232175/

In summary, Magnum recently added support for specifying the memory size of 
containers. The specification of the memory size is optional, and the COE won’t 
reserve any memory for containers with unspecified memory size. The debate 
is whether we should document this optional parameter in the quickstart guide. 
Below are the positions of both sides:

Pros:
• It is a good practice to always specify the memory size, because 
containers with unspecified memory size won’t have a QoS guarantee.
• The in-development autoscaling feature [1] will query the memory size 
of each container to estimate the residual capacity and triggers scaling 
accordingly. Containers with unspecified memory size will be treated as taking 
0 memory, which negatively affects the scaling decision.
Cons:
• The quickstart guide should be kept as simple as possible, so it is 
not a good idea to have the optional parameter in the guide.
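
For reference, the option in question is used like this (container and bay 
names are illustrative; the size string follows the Docker-style syntax):

    magnum container-create --name my-container --image cirros \
        --bay my-bay --memory 512m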

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [nova] live migration in Mitaka

2015-10-08 Thread Mathieu Gagné
Hi Pavel,

On 2015-10-08 9:39 AM, Pavel Boldin wrote:
> Here you go: https://launchpad.net/~pboldin/+archive/ubuntu/libvirt-python
> 
> Use it, but please keep in mind that this is a draft reupload.
> 

Thank you for your work. I'm sure a lot of people will benefit from it.
Live migration is a cornerstone of our operational excellence, and
anything that can improve it is welcome.

I see the package is built against Wily. Will it work against Trusty? I
would say yes. It depends on libvirt0 (>= 1.2.16-2ubuntu11) which is
already available in the liberty repo of UCA.

Meanwhile, I managed to package 1.2.20 for test purposes for both Trusty
and Precise (not without raging =) as I needed to move on.

On a side note, I feel libvirt is such an important piece that it would
warrant its own official PPA (both libvirt and libvirt-python).

I added comments to the review [1] because the change doesn't work in
its current state. I tested the patch against Kilo + Trusty + libvirt
1.2.20.

My main challenge at the moment is backporting the patch further to
Icehouse+Precise so I can move instances out of precise nodes and
upgrade them to trusty. So far, and these are very early results, my
instances are stuck in "MIGRATING" state although they were completely
migrated to the destination. (domain is running on destination, removed
from source, ping is responding and I can SSH fine)

[1] https://review.openstack.org/#/c/227278/

-- 
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] python-tripleoclient has unknown requirement

2015-10-08 Thread James Slagle
On Thu, Oct 8, 2015 at 10:55 AM, Andreas Jaeger  wrote:
> Current requirements job fails on python-tripleoclient with:
> 'ipaddress' is not in global-requirements.txt
>
> Could you get the requirement in global-requirements or replace it, please?

I've submitted a patch for global-requirements.txt:

https://review.openstack.org/232694

>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [congress] Symantec's security group management policies

2015-10-08 Thread Su Zhang
Hello,

I've implemented a set of security group management policies and already
put them into our use case doc.
Let me know if you guys have any comments. My policy set is called "Security
Group Management".
You can find the use case doc at:
https://docs.google.com/document/d/1ExDmT06vDZjzOPePYBqojMRfXodvsk0R8nRkX-zrkSw/edit#heading=h.6z1ggtfrzg3n

Thanks,

--
Su Zhang
Senior Software Engineer
Symantec Corporation
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [Congress] Tokyo sessions

2015-10-08 Thread Tim Hinrichs
Rui is right.  Below I expanded on the short description Rui provided above.

Discussions with external teams: OPNFV, Monasca
Integration with other projects: congress gating (nova, neutron, etc.),
devstack plugin, keystone
Distributed architecture and additional features for M

Tim


On Wed, Oct 7, 2015 at 8:03 PM Rui Chen  wrote:

> In my memory, there are 4 topics about OPNFV, congress gating,
> distributed arch, Monasca.
>
> Some details in IRC meeting log
>
> http://eavesdrop.openstack.org/meetings/congressteammeeting/2015/congressteammeeting.2015-10-01-00.01.log.html
>
> 2015-10-08 9:48 GMT+08:00 zhangyali (D) :
>
>> Hi Tim,
>>
>>
>>
>> Thanks for sharing the meeting information. But does the meeting have
>> some topics scheduled? I think it’s better to know what we are going to
>> talk about. Thanks so much!
>>
>>
>>
>> Yali
>>
>>
>>
>> *From:* Tim Hinrichs [mailto:t...@styra.com]
>> *Sent:* October 2, 2015 2:52
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* [openstack-dev] [Congress] Tokyo sessions
>>
>>
>>
>> Hi all,
>>
>>
>>
>> We just got a tentative assignment for our meeting times in Tokyo.  Our 3
>> meetings are scheduled back-to-back-to-back on Wed afternoon from
>> 2:00-4:30p.  I don't think there's much chance of getting the meetings
>> moved, but does anyone have a hard conflict?
>>
>>
>>
>> Here's our schedule for Wed:
>>
>>
>>
>> Wed 11:15-12:45 HOL
>>
>> Wed 2:00-2:40 Working meeting
>>
>> Wed 2:50-3:30 Working meeting
>>
>> Wed 3:40-4:20 Working meeting
>>
>>
>>
>> Tim
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Nominating two new core reviewers

2015-10-08 Thread Jim Rollenhagen
Hi all,

I've been thinking a lot about Ironic's core reviewer team and how we might
make it better.

I'd like to grow the team more through trust and mentoring. We should be
able to promote someone to core based on a good knowledge of *some* of
the code base, and trust them not to +2 things they don't know about. I'd
also like to build a culture of mentoring non-cores on how to review, in
preparation for adding them to the team. Through these pieces, I'm hoping
we can have a few rounds of core additions this cycle.

With that said...

I'd like to nominate Vladyslav Drok (vdrok) for the core team. His reviews
have been super high quality, and the quantity is ever-increasing. He's
also started helping out with some smaller efforts (full tempest, for
example), and I'd love to see that continue with larger efforts.

I'd also like to nominate John Villalovos (jlvillal). John has been
reviewing a ton of code and making a real effort to learn everything,
and keep track of everything going on in the project.

Ironic cores, please reply with your vote; provided feedback is positive,
I'd like to make this official next week sometime. Thanks!

// jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Backport policy for Liberty

2015-10-08 Thread Robert Collins
On 9 October 2015 at 08:47, Steven Dake (stdake)  wrote:
> Kolla operators and developers,
>
> The general consensus of the Core Reviewer team for Kolla is that we should
> embrace a liberal backport policy for the Liberty release.  An example of
> liberal -> We add a new server service to Ansible, we would backport the
> feature to liberty.  This is a break from the typical OpenStack
> backport policy.  It also creates a whole bunch more work and has potential
> to introduce regressions in the Liberty release.
>
> Given these realities I want to put on hold any liberal backporting until
> after Summit.  I will schedule a fishbowl session for a backport policy
> discussion where we will decide as a community what type of backport policy
> we want.  The delivery required before we introduce any liberal backporting
> policy then should be a description of that backport policy discussion at
> Summit distilled into an RST file in our git repository.
>
> If you have any questions, comments, or concerns, please chime in on the
> thread.

I'll try to get to that session - I'm drafting
https://review.openstack.org/#/c/226157/ at the moment, which, while
it's aimed at clients and libraries, is at least in the same discussion
space.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] py26 support in python-muranoclient

2015-10-08 Thread Vahid S Hashemian
Hello,

I am wondering if there is any near-term plan for removing the py26 
support from the client project (python-muranoclient).
For the tosca support blueprint, python-muranoclient will become dependent 
on the tosca-parser project and will expect tosca-parser to support py26 (it 
currently does not support py26).

So the options are:
1. support py26 in tosca-parser
2. wait until py26 support is phased out in python-muranoclient (only if 
it's happening soon)

Thanks.
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Error Installing KILO release on powerpc

2015-10-08 Thread Tony Breeds
On Thu, Oct 08, 2015 at 11:08:06PM +0530, Rahul Arora wrote:
> Hi Team
> 
> I am trying to run the OpenStack Kilo release on my powerpc platform. I am able
> to cross-compile all the Kilo-related packages using the Yocto framework. But
> while running the following command I am getting the error messages below.

Wait, what?  Cross compile?  You'll need to explain more about your platform /
setup as that really shouldn't be needed.

Are you installing from git, distro packages or something else?

Having said all that, powerpc support is somewhat spotty.  It certainly works
but the build/setup experience will be a little more painful than normal.

> keystone-manage db_sync
> 
> Traceback (most recent call last):
>   File "/usr/bin/keystone-manage", line 30, in 
> from keystone import cli
>   File "/usr/lib/python2.7/site-packages/keystone/cli.py", line 27, in
> 
> from keystone.common import sql
>   File "/usr/lib/python2.7/site-packages/keystone/common/sql/__init__.py",
> line 15, in 
> from keystone.common.sql.core import *  # noqa
>   File "/usr/lib/python2.7/site-packages/keystone/common/sql/core.py", line
> 28, in 
> from oslo_db.sqlalchemy import session as db_session
>   File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/session.py",
> line 300, in 
> from oslo_db.sqlalchemy import utils
>   File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/utils.py", line
> 36, in 
> from sqlalchemy import inspect
> ImportError: cannot import name inspect

Seems like you just missed a dependency.
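
A quick way to check (assumption on my part: sqlalchemy.inspect only exists
from SQLAlchemy 0.8 onward, and Kilo's requirements want a newer version than
that anyway):

    # run this with the same interpreter keystone uses
    import sqlalchemy
    print(sqlalchemy.__version__)
    from sqlalchemy import inspect   # ImportError here == install too old

If that import fails, rebuild/upgrade your python-sqlalchemy package.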

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-08 Thread Lucas Alvares Gomes
Hi,

On Thu, Oct 8, 2015 at 10:47 PM, Jim Rollenhagen  
wrote:
> Hi all,
>
> I've been thinking a lot about Ironic's core reviewer team and how we might
> make it better.
>
> I'd like to grow the team more through trust and mentoring. We should be
> able to promote someone to core based on a good knowledge of *some* of
> the code base, and trust them not to +2 things they don't know about. I'd
> also like to build a culture of mentoring non-cores on how to review, in
> preparation for adding them to the team. Through these pieces, I'm hoping
> we can have a few rounds of core additions this cycle.
>

Agreed, give people good choices and they will make good decisions!

> With that said...
>
> I'd like to nominate Vladyslav Drok (vdrok) for the core team. His reviews
> have been super high quality, and the quantity is ever-increasing. He's
> also started helping out with some smaller efforts (full tempest, for
> example), and I'd love to see that continue with larger efforts.
>
> I'd also like to nominate John Villalovos (jlvillal). John has been
> reviewing a ton of code and making a real effort to learn everything,
> and keep track of everything going on in the project.
>
> Ironic cores, please reply with your vote; provided feedback is positive,
> I'd like to make this official next week sometime. Thanks!
>

Vladyslav and John have been active members of the Ironic community for a
good while now and they are doing a great job reviewing new code,
providing feedback and catching errors; both have a lot of potential
and will be a great addition to the team.

+2 for both and welcome on board!

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Vikas Choudhary
In my opinion, there should be a more detailed document explaining the
importance of commands and options.
Though --memory is an important attribute, since the objective of the
quickstart is to get the user a minimal working system in minimal time, it
seems better to skip this option in the quickstart.


-Vikas

On Fri, Oct 9, 2015 at 1:47 AM, Egor Guz  wrote:

> Adrian,
>
> I agree with Steve, otherwise it’s hard to find a balance on what should go
> into the quickstart guide (e.g. many operators worry about CPU or I/O instead
> of memory).
> Also I believe auto-scaling deserves its own detailed document.
>
> —
> Egor
>
> From: Adrian Otto >
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org >>
> Date: Thursday, October 8, 2015 at 13:04
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org >>
> Subject: Re: [openstack-dev] [magnum] Document adding --memory option to
> create containers
>
> Steve,
>
> I agree with the concept of a simple quickstart doc, but there also needs
> to be a comprehensive user guide, which does not yet exist. In the absence
> of the user guide, the quick start is the void where this stuff is starting
> to land. We simply need to put together a magnum reference document, and
> start moving content into that.
>
> Adrian
>
> On Oct 8, 2015, at 12:54 PM, Steven Dake (stdake)  > wrote:
>
> Quickstart guide should be dead dead dead dead simple.  The goal of the
> quickstart guide isn’t to teach people best practices around Magnum.  It is
> to get a developer operational to give them that sense of feeling that
> Magnum can be worked on.  The goal of any quickstart guide should be to
> encourage the thinking that a person involving themselves with the project
> the quickstart guide represents is a good use of the person’s limited time
> on the planet.
>
> Regards
> -steve
>
>
> From: Hongbin Lu >
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org >>
> Date: Thursday, October 8, 2015 at 9:00 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org >>
> Subject: [openstack-dev] [magnum] Document adding --memory option to
> create containers
>
> Hi team,
>
> I want to move the discussion in the review below to here, so that we can
> get more feedback
>
> https://review.openstack.org/#/c/232175/
>
> In summary, Magnum recently added support for specifying the memory size
> of containers. The specification of the memory size is optional, and the
> COE won’t reserve any memory for containers with unspecified memory
> size. The debate is whether we should document this optional parameter in
> the quickstart guide. Below are the positions of both sides:
>
> Pros:
> · It is a good practice to always specify the memory size,
> because containers with unspecified memory size won’t have a QoS guarantee.
> · The in-development autoscaling feature [1] will query the memory
> size of each container to estimate the residual capacity and triggers
> scaling accordingly. Containers with unspecified memory size will be
> treated as taking 0 memory, which negatively affects the scaling decision.
> Cons:
> · The quickstart guide should be kept as simple as possible, so it
> is not a good idea to have the optional parameter in the guide.
>
> Thoughts?
>
> [1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] Introducing Killick PKI

2015-10-08 Thread Chivers, Doug
Very lightweight, automatic certificate security policy enforcement. 

Doug

> On 8 Oct 2015, at 18:48, Adam Young  wrote:
> 
>> On 10/08/2015 12:50 PM, Chivers, Doug wrote:
>> Hi All,
>> 
>> At a previous OpenStack Security Project IRC meeting, we briefly discussed a 
>> lightweight traditional PKI using the Anchor validation functionality, for 
>> use in internal deployments, as an alternative to things like MS ADCS. To 
>> take this further, I have drafted a spec, which is in the security-specs 
>> repo, and would appreciate feedback:
>> 
>> https://review.openstack.org/#/c/231955/
>> 
>> Regards
>> 
>> Doug
> How is this better than Dogtag/FreeIPA?
> 
> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Adrian Otto
Steve,

I agree with the concept of a simple quickstart doc, but there also needs to be 
a comprehensive user guide, which does not yet exist. In the absence of the 
user guide, the quick start is the void where this stuff is starting to land. 
We simply need to put together a magnum reference document, and start moving 
content into that.

Adrian

On Oct 8, 2015, at 12:54 PM, Steven Dake (stdake) 
> wrote:

Quickstart guide should be dead dead dead dead simple.  The goal of the 
quickstart guide isn’t to teach people best practices around Magnum.  It is to 
get a developer operational to give them that sense of feeling that Magnum can 
be worked on.  The goal of any quickstart guide should be to encourage the 
thinking that a person involving themselves with the project the quickstart 
guide represents is a good use of the person’s limited time on the planet.

Regards
-steve


From: Hongbin Lu >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Thursday, October 8, 2015 at 9:00 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [magnum] Document adding --memory option to create 
containers

Hi team,

I want to move the discussion in the review below to here, so that we can get 
more feedback

https://review.openstack.org/#/c/232175/

In summary, Magnum recently added support for specifying the memory size of 
containers. The specification of the memory size is optional, and the COE won’t 
reserve any memory for containers with unspecified memory size. The debate 
is whether we should document this optional parameter in the quickstart guide. 
Below are the positions of both sides:

Pros:
· It is a good practice to always specify the memory size, because 
containers with unspecified memory size won’t have a QoS guarantee.
· The in-development autoscaling feature [1] will query the memory size 
of each container to estimate the residual capacity and triggers scaling 
accordingly. Containers with unspecified memory size will be treated as taking 
0 memory, which negatively affects the scaling decision.
Cons:
· The quickstart guide should be kept as simple as possible, so it is 
not a good idea to have the optional parameter in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova-compute][nova][libvirt] Extending Nova-Compute for image prefetching

2015-10-08 Thread Michael Still
I think I'd rephrase your definition of pre-fetched to be honest --
something more like "images on this hypervisor node without a currently
running instance". So, your operations would become:

 - trigger an image prefetching
 - list unused base images (and perhaps when they were last used)
 - delete an unused image

All of that would need to tie into the image cache management code so that
its not stomping on your images. In fact, you're probably best of adding
all of this as tweaks to the image cache manager anyways.
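
If you do end up keeping local state for the list operation, a small
manifest file next to the image cache would do; a sketch (none of these
helpers exist in nova today, and the path is illustrative):

    import json
    import os
    import time

    MANIFEST = '/var/lib/nova/instances/_base/prefetched.json'

    def _load():
        if os.path.exists(MANIFEST):
            with open(MANIFEST) as f:
                return json.load(f)
        return {}

    def record_prefetch(image_id):
        images = _load()
        images[image_id] = {'fetched_at': time.time()}
        with open(MANIFEST, 'w') as f:
            json.dump(images, f)

    def list_prefetched():
        return sorted(_load())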

One question though -- who is calling these APIs? Are you adding a central
service to orchestrate these calls?

Michael



On Thu, Oct 8, 2015 at 10:50 PM, Alberto Geniola 
wrote:

> Hi all,
>
> I'm considering extending the Nova-Compute API in order to provide
> image-prefetching capabilities to OS.
>
> In order to allow image prefetching, I ended up with the need to add three
> different APIs on the nova-compute nodes:
>
>   1. Trigger an image prefetching
>
>   2. List prefetched images
>
>   3. Delete a prefetched image
>
>
>
> Regarding point 1, I saw that I can re-use the libvirt driver function
> _create_image() to trigger the image prefetching. However, this approach
> will not store any information about the fetched image locally. This leads
> to an issue: how do I retrieve a list of already fetched images? A quick
> and simple possibility would be having a local file, storing information
> about the fetched images. Would it be acceptable? Is there any best
> practice in OS community?
>
>
>
> Any ideas?
>
>
> Ty,
>
> Al.
>
> --
> Dott. Alberto Geniola
>
>   albertogeni...@gmail.com
>   +39-346-6271105
>   https://www.linkedin.com/in/albertogeniola
>
> Web: http://www.hw4u.it
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-08 Thread Ruby Loo
Looking forward to more cores!!!

On 8 October 2015 at 17:47, Jim Rollenhagen  wrote:

> Hi all,
>
> ...

I'd like to nominate Vladyslav Drok (vdrok) for the core team. His reviews
> have been super high quality, and the quantity is ever-increasing. He's
> also started helping out with some smaller efforts (full tempest, for
> example), and I'd love to see that continue with larger efforts.
>

+2


> I'd also like to nominate John Villalovos (jlvillal). John has been
> reviewing a ton of code and making a real effort to learn everything,
> and keep track of everything going on in the project.
>

+2

--ruby
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Plugins related functionality in Fuel Client

2015-10-08 Thread Roman Prykhodchenko
Folks,

it’s time to speak about Fuel Plugins and the way they are managed.

Currently we have some methods in Fuel Client that allow installing, removing 
and doing some other things to plugins. Everything looks great except that this 
functionality requires Fuel Client to be installed on the master node and to be 
run as the root user. It’s time for us to grow up and realize that nothing 
can require Fuel Client to be installed on a specific computer, and of course we 
cannot require root permissions for any actions.
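
For reference, this is the kind of operation in question (the plugin file
name is illustrative), which today must be run on the master node as root:

    [root@fuel ~]# fuel plugins --install fuel-plugin-example-1.0-1.0.0-1.noarch.rpm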

I’d like to move all that code to Nailgun, utilizing mules and hide it behind 
Nailgun’s API as soon as possible. For that I filed a bug [1] and I’d like to 
ask Fuel Enhancements subgroup of developers to take a close look at it.


1. https://bugs.launchpad.net/fuel/+bug/1504338


- romcheg



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] A few questions on configuring DevStack for Neutron

2015-10-08 Thread Christopher Aedo
On Thu, Oct 8, 2015 at 9:38 AM, Sean M. Collins  wrote:
> Please see my response here:
>
> http://lists.openstack.org/pipermail/openstack-dev/2015-October/076251.html
>
> In the future, do not create multiple threads since responses will get
> lost

Yep, thank you Sean - saw your response yesterday and was going to
follow-up this thread with a "please ignore" and a link to the other
thread.  I opted not to in hopes of reducing the noise but I think
your note here is correct and will close the loop for anyone who
happens across only this thread.

(Secretly though I hope this thread somehow becomes never-ending like
the "don't -1 for a long commit message" thread!)

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][testing][all] fixtures 1.4.0 breaking py34 jobs

2015-10-08 Thread Robert Collins
On 9 October 2015 at 01:10, Ihar Hrachyshka  wrote:
> Hi all,
>
> just a heads up that today fixtures 1.4.0 broke neutron py34 gate [1] and we 
> needed to patch some logging code to overcome it [2]. The failures were 
> triggered by a patch that started to raise logging exceptions for incorrect 
> format strings (which is fine), but it also started to raise exceptions from 
> stdlib logging code, and apparently till 3.5 it had a bug for the case when 
> someone uses LOG.exception() in a context where no exception was actually 
> raised.
>
> More details about why I think the issue is in python interpreter are 
> available in [2] commit message.
>
> I agree that using LOG.exception in such context is wrong, but still I wanted 
> to notify other about potential issue, and a way to fix it.
>
> [1]: https://launchpad.net/bugs/1504053

Yeah - so the situation was that Ironic noticed they had bad logging
happen in prod and it wasn't caught in test. Tracking that down
pointed at the way logging eats all errors, and a patch from John
Villalovos to change fixtures to expose those errors.

Nova has had a local thing to detect bad strings for a while, but this
was a systemic fix - we're sorry about the firedrill :/.
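
For anyone wondering what the misuse looks like, a minimal reproduction
(behaviour of the second call varies with the stdlib; before Python 3.5 it
can raise from inside logging itself):

    import logging

    logging.basicConfig()
    LOG = logging.getLogger(__name__)

    try:
        raise ValueError('boom')
    except ValueError:
        LOG.exception('fine: there is an active exception to format')

    # wrong: no active exception, so sys.exc_info() is (None, None, None)
    LOG.exception('problematic: nothing to format')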

The constraints stuff for unit tests is in the last stages of PoC -
basically we need the Neutron patches for constraints-enabled
docs and flake8 runs, and then we can enable it in project-config...
at which point we can point at that as the template and encourage
wider adoption. Constraints jobs would have allowed back-pressure on
picking up the new release (e.g. if we had neutron unit test jobs
voting on requirements changes).

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] New API Guidelines Ready for Cross Project Review

2015-10-08 Thread Everett Toews
Hi All,

The following API guidelines are ready for cross project review. They will be 
merged on Oct. 16 if there's no further feedback.

1. Adds an API documentation guideline document
https://review.openstack.org/#/c/214817/

2. Add http400 for nonexistent resource
https://review.openstack.org/#/c/221163/

Cheers,
Everett
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [Fuel] 8.0 Region name support / Multi-DC

2015-10-08 Thread Andrew Woodward
Adam,

Fuel does support multiple PXE networks via the nodegroup/multiple clusters
feature; however, in the geo-diverse case this would receive limited use as
it's mostly useful for a spine-and-leaf network topology. As you noted, in
the geo-diverse case you would typically deploy an env for the keystone /
glance cluster, and then in separate envs (most likely with different Fuel
nodes) you would deploy the individual regions.

On Wed, Oct 7, 2015 at 3:28 AM Adam Heczko  wrote:

> Hi, although I haven't been participating in this story since the very
> beginning, let me add my 2 cents.
> For scalability purposes, Nova favors the 'cells' construct rather than
> 'regions'.
> Regions, as the name suggests, deal with geographically dispersed data
> centre locations.
> In regards to Fuel architecture, since Fuel supports only one PXE network,
> it is IMO unable to deploy multi-region clouds.
> Fuel uses the 'environments' construct, but again it fits neither 'region'
> nor 'cell', since a Fuel 'environment' deploys just another cluster (with
> its own set of controllers, computes etc.) over the shared PXE network.
> It is probably quite feasible to add 'cells' capability to Fuel, maybe
> through the Fuel-plugins mechanism, which could decouple nova-scheduler and
> related roles from the 'main' controller role.
> For true multi-region capability, it would be required to operate
> multi-cobbler Fuel instances / multiple PXE networks with appropriate
> 'region' names provided.
> An initial approach to it would probably be to deploy multiple Fuel
> instances (one Fuel per region) and then bind them together through a
> RESTful API / operate at scale through the API, at least when it comes to
> Keystone and Galera cluster configuration.
> There are several approaches to multi-region; maybe a good one would be a
> plugin allowing selection of a remote data centre Galera cluster as a
> replication partner.
> I'm not sure at this moment how HA would be operated this way, since
> Keystone utilizes memcached for various operations. Would multi-region
> memcached memory states also be synchronized?
> So a multi-region DC could raise a lot of related problems.
>
> Regards,
>
> A.
>
>
>
> On Wed, Oct 7, 2015 at 11:49 AM, Roman Sokolkov 
> wrote:
>
>> Sheena, thanks. I agree with Chris that full Multi-DC is a different-scale
>> task.
>>
>> For now, Services just need one tiny step from Fuel/Product in favor of
>> supporting current Multi-DC deployment architectures (i.e. shared
>> Keystone).
>>
>> Andrew, Ruslan, Mike,
>>
>> i've created tiny blueprint
>> https://blueprints.launchpad.net/fuel/+spec/expose-region-name-to-ui
>>
>> We just need to expose already existing functionality to UI.
>>
>> Can someone pickup this blueprint? And/Or reassign to appropriate team.
>>
>> Thanks
>>
>> On Fri, Oct 2, 2015 at 7:41 PM, Sheena Gregson 
>> wrote:
>>
>>> Forwarding since Chris isn’t subscribed.
>>>
>>>
>>>
>>> *From:* Chris Clason [mailto:ccla...@mirantis.com]
>>> *Sent:* Friday, October 02, 2015 6:30 PM
>>> *To:* Sheena Gregson ; OpenStack Development
>>> Mailing List (not for usage questions) <
>>> openstack-dev@lists.openstack.org>
>>> *Subject:* Re: [openstack-dev] [Fuel] 8.0 Region name support / Multi-DC
>>>
>>>
>>>
>>> We are doing some technology evaluations with the intent of publishing
>>> reference architectures at various scale points (500, 1500, 2000 etc). Part
>>> of this work will be to determine how to best partition the nodes in to
>>> regions based on scale limits of OpenStack components and workload
>>> characteristics. The work we are doing increased in scope significantly, so
>>> the first RA will be coming at the end of Q1 or early Q2.
>>>
>>>
>>>
>>> We do plan on using some components of Fuel for our testing but the main
>>> purpose is path finding. The work we do will eventually make it into Fuel,
>>> but we are going to run in front of it a bit.
>>>
>>>
>>>
>>> On Fri, Oct 2, 2015 at 9:19 AM Sheena Gregson 
>>> wrote:
>>>
>>> Plans for multi-DC: my understanding is that we are working on
>>> developing a whitepaper in Q4 that will provide a possible OpenStack
>>> multi-DC configuration, but I do not know whether or not we intend to
>>> include Fuel in the scope of this work (my guess would be no).  Chris – I
>>> copied you in case you wanted to comment here.
>>>
>>>
>>>
>>> Regarding specifying region names in UI, is it possible to specify
>>> region names in API?  And (apologies for my ignorance on this one) what is
>>> the relative equivalence to environments in Fuel (e.g. 1 environment : many
>>> regions, 1 environment == 1 region)?
>>>
>>>
>>>
>>> *From:* Roman Sokolkov [mailto:rsokol...@mirantis.com]
>>> *Sent:* Friday, October 02, 2015 5:26 PM
>>> *To:* OpenStack Development Mailing List (not for usage questions) <
>>> openstack-dev@lists.openstack.org>
>>> *Subject:* [openstack-dev] [Fuel] 8.0 Region name support / Multi-DC

Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Egor Guz
Adrian,

I agree with Steve; otherwise it’s hard to find the balance of what should go
into the quick start guide (e.g. many operators worry about CPU or I/O instead
of memory).
Also, I believe auto-scaling deserves its own detailed document.

—
Egor

From: Adrian Otto
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, October 8, 2015 at 13:04
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Document adding --memory option to create
containers

Steve,

I agree with the concept of a simple quickstart doc, but there also needs to be 
a comprehensive user guide, which does not yet exist. In the absence of the 
user guide, the quick start is the void where this stuff is starting to land. 
We simply need to put together a magnum reference document, and start moving 
content into that.

Adrian

On Oct 8, 2015, at 12:54 PM, Steven Dake (stdake) wrote:

Quickstart guide should be dead dead dead dead simple.  The goal of the 
quickstart guide isn’t to teach people best practices around Magnum.  It is to
get a developer operational and give them the sense that Magnum can
be worked on.  The goal of any quickstart guide should be to encourage the 
thinking that a person involving themselves with the project the quickstart 
guide represents is a good use of the person’s limited time on the planet.

Regards
-steve


From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, October 8, 2015 at 9:00 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [magnum] Document adding --memory option to create 
containers

Hi team,

I want to move the discussion in the review below to here, so that we can get 
more feedback

https://review.openstack.org/#/c/232175/

In summary, Magnum recently added support for specifying the memory size of
containers. The specification of the memory size is optional, and the COE won’t
reserve any memory for containers with an unspecified memory size. The debate
is whether we should document this optional parameter in the quickstart guide.
Below are the positions of both sides:

Pros:
· It is good practice to always specify the memory size, because
containers with an unspecified memory size won’t have a QoS guarantee.
· The in-development autoscaling feature [1] will query the memory size
of each container to estimate the residual capacity and trigger scaling
accordingly. Containers with an unspecified memory size will be treated as
taking 0 memory, which negatively affects the scaling decision.
Cons:
· The quickstart guide should be kept as simple as possible, so it is 
not a good idea to have the optional parameter in the guide.
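
For concreteness, the documented example might look something like this
(a sketch only -- the exact flag syntax should be checked against the
client change in the review above, and the container/image/bay names
here are made up):

    magnum container-create --name web-container \
                            --image cirros \
                            --bay swarmbay \
                            --memory 512m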

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call for contributors

2015-10-08 Thread Edgar Magana
Daniel,

In which time zone are you located?

Thanks for volunteering.

Edgar

From: Daniel Mellado
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, October 8, 2015 at 8:06 AM
To: 
"openstack-dev@lists.openstack.org"
Subject: Re: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call 
for contributors

I could try to allocate some time for this, I think it's deff worth the effort!

On 08/10/15 at 02:11, Edgar Magana wrote:
Hello,

I would like to invite everybody to become an active contributor for the 
OpenStack Networking Guide:  
http://docs.openstack.org/networking-guide/

During the Liberty cycle we made a lot of progress, and we feel that the guide
is ready for even more contributions and for formalizing the team around it a
bit more.
The first thing I want to propose is to have a regular meeting over IRC to
discuss progress and to welcome new contributors. This is the same process
that other guides, such as the operators guide, currently follow.

The networking guide is based on this ToC:
https://wiki.openstack.org/wiki/NetworkingGuide/TOC
The contribution process is the same as for the rest of the OpenStack docs,
under the openstack-manuals git repo:
https://github.com/openstack/openstack-manuals/tree/master/doc/networking-guide/source
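
For anyone new to the docs workflow, it is the standard Gerrit flow --
roughly the following (a sketch; the exact tox target name should be
checked against the repo's tox.ini):

    git clone https://github.com/openstack/openstack-manuals
    cd openstack-manuals
    # edit files under doc/networking-guide/source/
    tox -e checkbuild    # build and verify the guides locally
    git review           # submit the change to Gerrit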

Please respond to this thread and let me know if you could allocate some time
to help us make this guide a rock star like the other ones. Based on the
responses, I will propose a couple of time slots for the IRC meeting that could
accommodate everybody if possible; this is why it is very important to let me
know your time zone.

I am really looking forward to increasing the number of contributors to this
guide.

Thanks in advance!

Edgar Magana



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-08 Thread Ed Leafe
On Oct 8, 2015, at 1:38 PM, Ian Wells  wrote:

>> You've hit upon the problem with the current design: multiple, and 
>> potentially out-of-sync copies of the data.
> 
> Arguably, this is the *intent* of the current design, not a problem with it.

It may have been the intent, but that doesn't mean that we are where we need to 
be.

> The data can never be perfect (ever) so go with 'good enough' and run with 
> it, and deal with the corner cases.

It is the definition of "good enough" that is problematic.

> Truth be told, storing that data in MySQL is secondary to the correct 
> functioning of the scheduler.

I have no problem with MySQL (well, I do, but that's not relevant to this 
discussion). My issue is that the current system poorly replicates its data 
from MySQL to the places where it is needed.

> The one thing it helps with is when the scheduler restarts - it stands a 
> chance of making sensible decisions before it gets its full picture back.  
> (This is all very like route distribution protocols, you know: make the best 
> decision on the information you have to hand, assuming the rest of the system 
> will deal with your mistakes.  And hold times, and graceful restart, and…)

Yes, this is all well and good. My focus is on improving the information in 
hand when making that best decision.

> Is there any reason why the duplication (given it's not a huge amount of data 
> - megabytes, not gigabytes) is a problem?  Is there any reason why 
> inconsistency is a problem?

I'm sure that many of the larger deployments may have issues with the amount of 
data that must be managed in-memory by so many different parts of the system. 
Inconsistency is a problem, but one that has workarounds. The primary issue is 
scalability: with the current design, increasing the number of scheduler 
processes increases the raciness of the system.

> I do sympathise with your point in the following email where you have 5 VMs 
> scheduled by 5 schedulers to the same host, but consider:
> 
> 1. if only one host suits the 5 VMs this results in the same behaviour: 1 VM 
> runs, the rest don't.  There's more work to discover that but arguably less 
> work than maintaining a consistent database.

True, but in a large scale deployment this is an extremely rare case.

> 2. if many hosts suit the 5 VMs then this is *very* unlucky, because we 
> should be choosing a host at random from the set of suitable hosts and that's 
> a huge coincidence - so this is a tiny corner case that we shouldn't be 
> designing around

Here is where we differ in our understanding. With the current system of 
filters and weighers, 5 schedulers getting requests for identical VMs and 
having identical information are *expected* to select the same host. It is not 
a tiny corner case; it is the most likely result for the current system design. 
By catching this situation early (in the scheduling process) we can avoid 
multiple RPC round-trips to handle the fail/retry mechanism.
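
To make the point concrete, here is a toy sketch (invented host names and
weights, not Nova code) of why deterministic filter-and-weigh logic sends
every scheduler with the same view to the same host:

    # Toy model of deterministic filter-and-weigh scheduling (not Nova code).
    hosts = [
        {'name': 'host1', 'free_ram_mb': 4096},
        {'name': 'host2', 'free_ram_mb': 8192},
        {'name': 'host3', 'free_ram_mb': 2048},
    ]

    def pick_host(hosts, ram_required):
        # Filter: keep only hosts that can fit the request.
        suitable = [h for h in hosts if h['free_ram_mb'] >= ram_required]
        # Weigh: prefer the most free RAM. Deterministic, so every
        # scheduler with the same view returns the same winner.
        return max(suitable, key=lambda h: h['free_ram_mb'])

    # Five schedulers, same state, same request: five identical answers,
    # so host2 receives five concurrent claims it may not be able to honor.
    print([pick_host(hosts, 1024)['name'] for _ in range(5)])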

> The worst case, is, however
> 
> 3. we attempt to pick the optimal host, and the optimal host for all 5 VMs is 
> the same despite there being other less perfect choices out there.  That 
> would get you a stampeding herd and a bunch of retries.
> 
> I admit that the current system does not solve well for (3).

IMO, this is identical to (2).


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][elections] Magnum PTL Election Conclusion and Results

2015-10-08 Thread Tony Breeds
Hello all,
Voting has now closed in the Magnum PTL election.  Thanks again to Adrian
and Hongbin for running.

Please join me in extending congratulations to Adrian on retaining his role as PTL.

The results can be seen here:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_306b9309f2f39e38

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] naming N and O releases nowish

2015-10-08 Thread Robert Collins
On 9 October 2015 at 00:53, Sean Dague  wrote:
> On 10/08/2015 06:59 AM, Daniel P. Berrange wrote:
>> On Wed, Oct 07, 2015 at 02:57:59PM +0200, Thierry Carrez wrote:
>>> Sean Dague wrote:
 We're starting to make plans for the next cycle. Long term plans are
 getting made for details that would happen in one or two cycles.

 As we already have the locations for the N and O summits I think we
 should do the naming polls now and have names we can use for this
 planning instead of letters. It's pretty minor but it doesn't seem like
 there is any real reason to wait and have everyone come up with working
 names that turn out to be confusing later.
>>>
>>> That sounds fair. However the release naming process currently states[1]:
>>>
>>> """
>>> The process to choose the name for a release begins once the location of
>>> the design summit of the release to be named is announced and no sooner
>>> than the opening of development of the previous release.
>>> """
>>>
>>> ...which if I read it correctly means we could pick N now, but not O. We
>>> might want to change that (again) first.
>>
>> Since changing the naming process may take non-negligible time, could
>> we parallelize, so we can at least press ahead with picking a name for
>> N asap which is permitted by current rules.
>
> Agreed. I believe that Monty and Jim signed up for shepherding this
> after the last naming rules change. I've added it to the TC agenda for
> next week to kickstart the process.

FWIW I don't think 2.5K developers are going to disappear without some
other major problems that will be much more pressing than the name we
choose :)

I'm +1 on picking once the venue is settled [or perhaps further out
but that's a different discussion].

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] PTL & Component Leads elections

2015-10-08 Thread Sergey Lukjanov
The voting period has ended, and so we have an officially elected Fuel PTL - DB.
Congrats!

Poll results & details -
http://civs.cs.cornell.edu/cgi-bin/results.pl?num_winners=1=E_b79041aa56684ec0

Let's start proposing candidates for the component lead positions!

On Wed, Sep 30, 2015 at 8:47 PM, Sergey Lukjanov 
wrote:

> Hi folks,
>
> I've just set up the voting system and you should start receiving an email
> with the topic "Poll: Fuel PTL Elections Fall 2015".
>
> NOTE: Please, don't forward this email, it contains *personal* unique
> token for the voting.
>
> Thanks.
>
> On Wed, Sep 30, 2015 at 3:28 AM, Vladimir Kuklin 
> wrote:
>
>> +1 to Igor. Do we have the voting system set up?
>>
>> On Wed, Sep 30, 2015 at 4:35 AM, Igor Kalnitsky 
>> wrote:
>>
>>> > * September 29 - October 8: PTL elections
>>>
>>> So, it's in progress. Where can I vote? I didn't receive any emails.
>>>
>>> On Mon, Sep 28, 2015 at 7:31 PM, Tomasz Napierala
>>>  wrote:
>>> >> On 18 Sep 2015, at 04:39, Sergey Lukjanov 
>>> wrote:
>>> >>
>>> >>
>>> >> Time line:
>>> >>
>>> >> PTL elections
>>> >> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL
>>> position
>>> >> * September 29 - October 8: PTL elections
>>> >
>>> > Just a reminder that we have a deadline for candidates today.
>>> >
>>> > Regards,
>>> > --
>>> > Tomasz 'Zen' Napierala
>>> > Product Engineering - Poland
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Yours Faithfully,
>> Vladimir Kuklin,
>> Fuel Library Tech Lead,
>> Mirantis, Inc.
>> +7 (495) 640-49-04
>> +7 (926) 702-39-68
>> Skype kuklinvv
>> 35bk3, Vorontsovskaya Str.
>> Moscow, Russia,
>> www.mirantis.com 
>> www.mirantis.ru
>> vkuk...@mirantis.com
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.
>



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-08 Thread Monty Taylor

On 10/08/2015 08:39 PM, Robert Collins wrote:

This is a bugbear that keeps cropping up and biting us. I'm hoping we
can figure out a permanent fix.

The problem that occurs is the result of a few interacting things:
  - requests has very very specific versions of urllib3 it works with.
So specific they aren't always released yet.

  - Linux vendors often unbundle urllib3 from requests and then apply
what patches were needed to their urllib3; while not updating their
requests package dependencies to reflect this.

  - we use urllib3 in some places and requests in others (but we don't
mix them up)

  - if for any reason we have a distro-altered requests + a
pip-installed urllib3, requests will [usually] break... see the 'not
always released yet' key thing above.

Now, there are lots of places this last thing can happen; they all
depend on us having a dependency on requests that is compatible with
the version installed by the distro, but a urllib3 dependency that
triggers an upgrade of just urllib3. When constraints are in use, the
requests version has to match the distro requests version exactly, but
that will happen from time to time.

e.g.

  - dsvm test jobs where the base image already has python-requests
installed in it


We're working hard to get to the point where this one goes away, fwiw.


  - virtualenvs where system-site-packages are enabled


These make the easter bunny have sad.


There are a few strategies that have been proposed to fix this. AIUI they are:
  - make sure none of our testing environments include distro requests packages.


yes!


  - make our requirements be tightly matched to what requests needs to
deal with the unbundling

  - teach pip how to identify and avoid this situation by always
upgrading requests (even if that's just a re-install of the version
from PyPI) when the installed requests is a distro installed version
**and** urllib3 is being modified.

  - get the distros to stop un-vendoring urllib3


The first one addresses the situation for the CI gate but doesn't
avoid developers getting bitten on their local machines. And
installing any distro thing that uses requests would re-instate the
potential for breakage. So while it's not harmful, I don't think it's
sufficient to make this go away.

The second is trivially insufficient - anytime requests' vendored
urllib3 is not precisely identical to a released urllib3, it becomes
impossible to satisfy that via dependency version pinning - the only
way to satisfy it is with the urllib3 in the distro that has whatever
change was needed included.

The third approach will require some negotiation I suspect - because
it's aesthetically wrong: from an upstream perspective urllib3 is
independent of requests, and vice-versa, but from a distro perspective
they are tightly coupled, no variation permitted.

The fourth approach meets the stone wall of 'but security' and 'no
redundancy permitted' - I don't have the energy to try and get through
the near-religious mindset I've encountered there before, though hey -
if Fedora and Debian and Ubuntu folk are all interested in figuring
out a sustainable way forward, that would be great: please don't feel
cut out, I'm just not expecting anything.

If there are other approaches, great - please throw them up here.


I've got nothing. I'll continue hacking on #1 just because GATE. But I 
agree, it's necessary but not sufficient.


Thanks for the writeup.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-08 Thread Matt Riedemann



On 10/8/2015 7:57 PM, Monty Taylor wrote:

On 10/08/2015 08:39 PM, Robert Collins wrote:

This is a bugbear that keeps cropping up and biting us. I'm hoping we
can figure out a permanent fix.

The problem that occurs is the result of a few interacting things:
  - requests has very very specific versions of urllib3 it works with.
So specific they aren't always released yet.

  - Linux vendors often unbundle urllib3 from requests and then apply
what patches were needed to their urllib3; while not updating their
requests package dependencies to reflect this.

  - we use urllib3 in some places and requests in others (but we don't
mix them up)

  - if for any reason we have a distro-altered requests + a
pip-installed urllib3, requests will [usually] break... see the 'not
always released yet' key thing above.

Now, there are lots of places this last thing can happen; they all
depend on us having a dependency on requests that is compatible with
the version installed by the distro, but a urllib3 dependency that
triggers an upgrade of just urllib3. When constraints are in use, the
requests version has to match the distro requests version exactly, but
that will happen from time to time.

e.g.

  - dsvm test jobs where the base image already has python-requests
installed in it


We're working hard to get to the point where this one goes away, fwiw.


  - virtualenvs where system-site-packages are enabled


These make the easter bunny have sad.


There are a few strategies that have been proposed to fix this. AIUI
they are:
  - make sure none of our testing environments include distro requests
packages.


yes!


  - make our requirements be tightly matched to what requests needs to
deal with the unbundling

  - teach pip how to identify and avoid this situation by always
upgrading requests (even if that's just a re-install of the version
from PyPI) when the installed requests is a distro installed version
**and** urllib3 is being modified.

  - get the distros to stop un-vendoring urllib3


The first one addresses the situation for the CI gate but doesn't
avoid developers getting bitten on their local machines. And
installing any distro thing that uses requests would re-instate the
potential for breakage. So while it's not harmful, I don't think it's
sufficient to make this go away.

The second is trivially insufficient - anytime requests' vendored
urllib3 is not precisely identical to a released urllib3, it becomes
impossible to satisfy that via dependency version pinning - the only
way to satisfy it is with the urllib3 in the distro that has whatever
change was needed included.

The third approach will require some negotiation I suspect - because
it's aesthetically wrong: from an upstream perspective urllib3 is
independent of requests, and vice-versa, but from a distro perspective
they are tightly coupled, no variation permitted.

The fourth approach meets the stone wall of 'but security' and 'no
redundancy permitted' - I don't have the energy to try and get through
the near-religious mindset I've encountered there before, though hey -
if Fedora and Debian and Ubuntu folk are all interested in figuring
out a sustainable way forward, that would be great: please don't feel
cut out, I'm just not expecting anything.

If there are other approaches, great - please throw them up here.


I've got nothing. I'll continue hacking on #1 just because GATE. But I
agree, it's necessary but not sufficient.

Thanks for the writeup.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



FYI, related change that is triggering the conversation:

https://review.openstack.org/#/c/213310/

And there are related bugs in there with more details on other ways this 
fails for people outside the gate system.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Ton Ngo
We should reserve time at the next summit to discuss putting together a
detailed user guide, laying down a skeleton so contributors can start
filling in different parts.
Otherwise, as we have observed, everything falls into the quick start guide.
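
As a strawman to seed that discussion (section names invented here, not
an agreed structure), the skeleton could start as a simple toctree in
the docs tree:

    User Guide
    ==========

    .. toctree::
       :maxdepth: 1

       overview
       baymodels
       bays
       containers
       scaling
       troubleshooting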
Ton Ngo,



From:   "Qiao,Liyong" 
To: openstack-dev@lists.openstack.org
Date:   10/08/2015 06:32 PM
Subject:Re: [openstack-dev] [magnum] Document adding --memory option to
create containers



+1, we can add a more detailed explanation of --memory in the Magnum
CLI documentation instead of the quick start.

Eli.

On 2015-10-09 07:45, Vikas Choudhary wrote:
  In my opinion, there should be a more detailed document explaining the
  importance of commands and options.
  Though --memory is an important attribute, since the objective of the
  quickstart is to get a user to a minimal working system in minimal
  time, it seems better to skip this option in the quickstart.


  -Vikas

  On Fri, Oct 9, 2015 at 1:47 AM, Egor Guz 
  wrote:
Adrian,

I agree with Steve; otherwise it’s hard to find the balance of what
should go into the quick start guide (e.g. many operators worry about
CPU or I/O instead of memory).
Also, I believe auto-scaling deserves its own detailed document.

—
Egor

From: Adrian Otto 
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" 
Date: Thursday, October 8, 2015 at 13:04
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [magnum] Document adding --memory
option to create containers

Steve,

I agree with the concept of a simple quickstart doc, but there also
needs to be a comprehensive user guide, which does not yet exist.
In the absence of the user guide, the quick start is the void where
this stuff is starting to land. We simply need to put together a
magnum reference document, and start moving content into that.

Adrian

On Oct 8, 2015, at 12:54 PM, Steven Dake (stdake) wrote:

Quickstart guide should be dead dead dead dead simple. The goal of
the quickstart guide isn’t to teach people best practices around
Magnum. It is to get a developer operational and give them the
sense that Magnum can be worked on. The goal of any
quickstart guide should be to encourage the thinking that a person
involving themselves with the project the quickstart guide
represents is a good use of the person’s limited time on the
planet.

Regards
-steve


From: Hongbin Lu 
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" 
Date: Thursday, October 8, 2015 at 9:00 AM
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: [openstack-dev] [magnum] Document adding --memory option
to create containers

Hi team,

I want to move the discussion in the review below to here, so that
we can get more feedback

https://review.openstack.org/#/c/232175/

In summary, Magnum recently added support for specifying the
memory size of containers. The specification of the memory size is
optional, and the COE won’t reserve any memory for containers
with an unspecified memory size. The debate is whether we should
document this optional parameter in the quickstart guide. Below
are the positions of both sides:

Pros:
· It is good practice to always specify the memory
size, because containers with an unspecified memory size won’t have
a QoS guarantee.
· The in-development autoscaling feature [1] will query the
memory size of each container to estimate the residual capacity and
trigger scaling accordingly. Containers with an unspecified memory
size will be treated as taking 0 memory, which negatively affects
the scaling decision.
Cons:
· The quickstart guide should be kept as simple as
possible, so it is not a good idea to have the optional parameter
in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-08 Thread Roman Prykhodchenko
Folks,

Since we’ve reached a consensus here, I’d like to invite you to review the
patch [1], which replaces py.test with testr without making debugging or
running specific tests harder. Please also note that it has a dependency which
needs to be reviewed and merged first.

1. https://review.openstack.org/#/c/227895
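
For reference, the testr side of such a change is small -- a minimal
.testr.conf along the lines of what other OpenStack projects carry (the
discovery path below is an assumption; adjust it to the actual repo
layout in the patch):

    [DEFAULT]
    test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
        ${PYTHON:-python} -m subunit.run discover -t ./ ./nailgun/test \
        $LISTOPT $IDOPTION
    test_id_option=--load-list $IDFILE
    test_list_option=--list

Individual tests can still be run and debugged directly, e.g. with
testr run <test id regex>, or with python -m testtools.run for a
single test module.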


- romcheg


> On 7 Oct 2015, at 14:41, Roman Prykhodchenko wrote:
> 
> Michał,
> 
> some comments in-line
> 
>>> - testrepository and related components are used in OpenStack Infra
>>> environment for much more tasks than just running tests
>> 
>> If by "more tasks" you mean parallel testing, py.test also has a
>> possibility to do that by pytest-xdist.
> 
> As Monty mentioned, it’s not only about testing, it’s more about deeper
> integration with OpenStack Infra.
> 
> 
>>> - py.test won’t be added to global-requirements so there always be a chance
>>> of another dependency hell
>> 
>> As Igor Kalnitsky said, py.test doesn't have much requirements.
>> https://github.com/pytest-dev/pytest/blob/master/setup.py#L58
>> It's only argparse, which already is in global requirements without
>> any version pinned.
> 
> It’s not only about py.test; there is a current objective of pinning all
> requirements to global-requirements, because we hit big problems with
> that every release.
> 
>> 
>> Cheers,
>> Michal
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Requests + urllib3 + distro packages

2015-10-08 Thread Robert Collins
This is a bugbear that keeps cropping up and biting us. I'm hoping we
can figure out a permanent fix.

The problem that occurs is the result of a few interacting things:
 - requests has very very specific versions of urllib3 it works with.
So specific they aren't always released yet.

 - Linux vendors often unbundle urllib3 from requests and then apply
what patches were needed to their urllib3; while not updating their
requests package dependencies to reflect this.

 - we use urllib3 in some places and requests in others (but we don't
mix them up)

 - if for any reason we have a distro-altered requests + a
pip-installed urllib3, requests will [usually] break... see the 'not
always released yet' key thing above.
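
A quick way to see which situation a given machine is in (assuming a
requests that still ships the requests.packages shim):

    # Compare requests' urllib3 with the standalone one. On a
    # distro-unbundled requests the two are the same module object;
    # with the upstream bundle they are distinct copies.
    import requests.packages.urllib3 as requests_urllib3
    import urllib3
    print(requests_urllib3 is urllib3)

If that prints True, pip-upgrading urllib3 changes the code requests
actually runs.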

Now, there are lots of places this last thing can happen; they all
depend on us having a dependency on requests that is compatible with
the version installed by the distro, but a urllib3 dependency that
triggers an upgrade of just urllib3. When constraints are in use, the
requests version has to match the distro requests version exactly, but
that will happen from time to time.

e.g.

 - dsvm test jobs where the base image already has python-requests
installed in it

 - virtualenvs where system-site-packages are enabled


There are a few strategies that have been proposed to fix this. AIUI they are:
 - make sure none of our testing environments include distro requests packages.

 - make our requirements be tightly matched to what requests needs to
deal with the unbundling

 - teach pip how to identify and avoid this situation by always
upgrading requests (even if that's just a re-install of the version
from PyPI) when the installed requests is a distro installed version
**and** urllib3 is being modified.

 - get the distros to stop un-vendoring urllib3


The first one addresses the situation for the CI gate but doesn't
avoid developers getting bitten on their local machines. And
installing any distro thing that uses requests would re-instate the
potential for breakage. So while it's not harmful, I don't think it's
sufficient to make this go away.

The second is trivially insufficient - anytime requests' vendored
urllib3 is not precisely identical to a released urllib3, it becomes
impossible to satisfy that via dependency version pinning - the only
way to satisfy it is with the urllib3 in the distro that has whatever
change was needed included.
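
Concretely, the pinning that strategy demands looks like this (version
numbers here are purely illustrative):

    # requirements.txt -- only satisfiable when the bundled copy is
    # byte-identical to a released urllib3; when requests vendors an
    # unreleased snapshot, no pin on PyPI can match it.
    requests==2.7.0
    urllib3==1.10.4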

The third approach will require some negotiation I suspect - because
it's aesthetically wrong: from an upstream perspective urllib3 is
independent of requests, and vice-versa, but from a distro perspective
they are tightly coupled, no variation permitted.

The fourth approach meets the stone wall of 'but security' and 'no
redundancy permitted' - I don't have the energy to try and get through
the near-religious mindset I've encountered there before, though hey -
if Fedora and Debian and Ubuntu folk are all interested in figuring
out a sustainable way forward, that would be great: please don't feel
cut out, I'm just not expecting anything.

If there are other approaches, great - please throw them up here.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Qiao,Liyong
+1, we can add a more detailed explanation of --memory in the Magnum
CLI documentation instead of the quick start.


Eli.

On 2015-10-09 07:45, Vikas Choudhary wrote:
In my opinion, there should be a more detailed document explaining the
importance of commands and options.
Though --memory is an important attribute, since the objective of the
quickstart is to get a user to a minimal working system in minimal
time, it seems better to skip this option in the quickstart.



-Vikas

On Fri, Oct 9, 2015 at 1:47 AM, Egor Guz wrote:


Adrian,

I agree with Steve; otherwise it’s hard to find the balance of what
should go into the quick start guide (e.g. many operators worry about
CPU or I/O instead of memory).
Also, I believe auto-scaling deserves its own detailed document.

—
Egor

From: Adrian Otto
Reply-To: "OpenStack Development Mailing List (not for usage
questions)"
Date: Thursday, October 8, 2015 at 13:04
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Document adding --memory
option to create containers

Steve,

I agree with the concept of a simple quickstart doc, but there
also needs to be a comprehensive user guide, which does not yet
exist. In the absence of the user guide, the quick start is the
void where this stuff is starting to land. We simply need to put
together a magnum reference document, and start moving content
into that.

Adrian

On Oct 8, 2015, at 12:54 PM, Steven Dake (stdake) wrote:

Quickstart guide should be dead dead dead dead simple. The goal of
the quickstart guide isn’t to teach people best practices around
Magnum.  It is to get a developer operational and give them the
sense that Magnum can be worked on.  The goal of any
quickstart guide should be to encourage the thinking that a person
involving themselves with the project the quickstart guide
represents is a good use of the person’s limited time on the planet.

Regards
-steve


From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage
questions)"
Date: Thursday, October 8, 2015 at 9:00 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [magnum] Document adding --memory option
to create containers

Hi team,

I want to move the discussion in the review below to here, so that
we can get more feedback

https://review.openstack.org/#/c/232175/

In summary, Magnum recently added support for specifying the
memory size of containers. The specification of the memory size is
optional, and the COE won’t reserve any memory for containers
with an unspecified memory size. The debate is whether we should
document this optional parameter in the quickstart guide. Below
are the positions of both sides:

Pros:
· It is good practice to always specify the memory
size, because containers with an unspecified memory size won’t have
a QoS guarantee.
· The in-development autoscaling feature [1] will query
the memory size of each container to estimate the residual
capacity and trigger scaling accordingly. Containers with an
unspecified memory size will be treated as taking 0 memory, which
negatively affects the scaling decision.
Cons:
· The quickstart guide should be kept as simple as
possible, so it is not a good idea to have the optional parameter
in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

