[openstack-dev] [releases] oslo.policy 0.3.2 (kilo)

2015-04-13 Thread Doug Hellmann
We are eager to announce the release of:

oslo.policy 0.3.2: RBAC policy enforcement library for OpenStack

This release is part of the kilo series.

For more details, please see the git log history below and:

http://launchpad.net/oslo.policy/+milestone/0.3.2

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.policy

Changes in oslo.policy 0.3.1..0.3.2
---

4c8f38c 2015-04-06 14:48:36 +0000 Avoid reloading policy files in policy.d for 
every call
a4185ef 2015-04-06 14:48:35 +0000 set defaultbranch for reviews

Diffstat (except docs and test files)
-

.gitreview   |  1 +
oslo_policy/policy.py| 28 ++---
3 files changed, 80 insertions(+), 3 deletions(-)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][neutron] --config-dir vs. --config-file

2015-04-13 Thread Matthew Thode
The loading seems to be in sorted order, so we can do 1.conf, 2.conf, etc.

https://github.com/openstack/oslo.config/blob/1.9.3/oslo_config/cfg.py#L1265-L1268
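
For readers who want to check this behaviour, a minimal self-contained sketch (assuming oslo.config ~1.9.x; the file names are illustrative) showing that files inside a --config-dir are parsed in sorted order, with later files overriding earlier ones:

    # Files inside a --config-dir are parsed in sorted (lexical) order, so a
    # value in 90-overrides.conf wins over the same value in 10-base.conf.
    import os
    import tempfile

    from oslo_config import cfg

    conf_dir = tempfile.mkdtemp()
    with open(os.path.join(conf_dir, '10-base.conf'), 'w') as f:
        f.write('[DEFAULT]\ndebug = false\n')
    with open(os.path.join(conf_dir, '90-overrides.conf'), 'w') as f:
        f.write('[DEFAULT]\ndebug = true\n')

    conf = cfg.ConfigOpts()
    conf.register_opt(cfg.BoolOpt('debug', default=False))
    conf(args=['--config-dir', conf_dir])
    print(conf.debug)  # True -- the later (sorted) file took precedence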


On 04/13/2015 02:45 PM, Kevin Benton wrote:
 What is the order of priority between the same option defined in two
 files with --config-dir? 
 
 With '--config-file' args it seemed that the latter ones
 took priority over the earlier ones. So an admin previously had the
 ability to abuse that by putting all of the desired global settings in
 one of the earlier loaded configs and then adding some node-specific
 overrides to the ones loaded later.
 
 Will there still be the ability to do that with RDO?
 
 On Mon, Apr 13, 2015 at 8:25 AM, Ihar Hrachyshka ihrac...@redhat.com
 mailto:ihrac...@redhat.com wrote:
 
 Hi,
 
 RDO/master (aka Delorean) moved neutron l3 agent to this configuration
 scheme, configuring l3 (and vpn) agent with --config-dir [1][2][3].
 
 We also provided a way to configure neutron services without ever
 touching a single configuration file from the package [4] where each
 service has a config-dir located under
 /etc/neutron/conf.d/<service-name> that can be populated by *.conf
 files that will be automatically read by services during startup.
 
 All other distributions are welcome to follow the path. Please don't
 introduce your own alternative to /etc/neutron/conf.d/... directory to
 avoid unneeded platform dependent differences in deployment tools.
 
 As for devstack, it's not really feasible to introduce such a change
 there (at least from my perspective), so it's downstream only.
 
 [1]:
 https://github.com/openstack-packages/neutron/blob/f20-master/openstack-neutron.spec#L602
 [2]:
 https://github.com/openstack-packages/neutron/blob/f20-master/neutron-l3-agent.service#L8
 [3]:
 https://github.com/openstack-packages/neutron-vpnaas/blob/f20-master/openstack-neutron-vpnaas.spec#L97
 [4]: https://review.gerrithub.io/#/c/229162/
 
 Thanks,
 /Ihar
 
 On 03/13/2015 03:11 PM, Ihar Hrachyshka wrote:
 Hi all,
 
 (I'm starting a new [packaging] tag in this mailing list to reach
 out people who are packaging our software in distributions and
 whatnot.)
 
 Neutron vendor split [1] introduced situations where the set of
 configuration files for L3/VPN agent is not stable and depends on
 which packages are installed in the system. Specifically,
 fwaas_driver.ini file is now shipped in neutron_fwaas tarball
 (openstack-neutron-fwaas package in RDO), and so
 --config-file=/etc/neutron/fwaas_driver.ini argument should be
 passed to L3/VPN agent *only* when the new package with the file is
 installed.
 
 In devstack, we solve the problem by dynamically generating CLI
 arguments list based on which services are configured in
 local.conf [2]. It's not a viable approach in proper distribution
 packages though, where we usually hardcode arguments [3] in our
 service manifests (systemd unit files, in case of RDO).
 
 The immediate solution to solve the issue would be to use
 --config-dir argument that is also provided to us by oslo.config
 instead of --config-file, and put auxiliary files there [4] (those
 may be just symbolic links to actual files).
 
 I initially thought to put the directory under /etc/neutron/, but
 then realized we may be interested in keeping it out of user sight
 while it only references stock (upstream) configuration files.
 
 But then a question arises: whether it's useful just for this
 particular case? Maybe there is value in using --config-dir outside
 of it? And in that case, maybe the approach should be replicated to
 other services?
 
 AFAIU --config-dir could actually be useful to configure services.
 Now instead of messing with configuration files that are shipped
 with packages (and handling .rpmnew files [5] that are generated on
 upgrade when local changes to those files are detected), users (or
 deployment/installation tools) could instead drop a *.conf file in
 that configuration directory, being sure their stock configuration
 file is always current, and no .rpmnew files are there to manually
 solve conflicts.
 
 We can also use two --config-dir arguments, one for stock/upstream
 configuration files, located out of /etc/neutron/, and another one
 available for population with user configuration files, under
 /etc/neutron/. This is similar to how we put settings considered to
 be 'sane distro defaults' in neutron-dist.conf file that is not
 available for modification [6][7].
 
 Of course users would still be able to set up their deployment the
 old way. In that case, nothing will change for them. So the
 approach is backwards compatible.
 
 I wonder whether the idea seems reasonable and actually useful for
 people. If so, we may want to come up with some packaging
 standards (on where to put those config-dir(s), how to name them,
 how to maintain symbolic links inside them) to avoid more work for
 deployment tools.

[openstack-dev] [Group-Based-Policy] Fixing backward incompatible unnamed constraints removal

2015-04-13 Thread Ivar Lazzaro
Hello Team,

As per discussion in the latest GBP meeting [0] I'm hunting down all the
backward incompatible changes made on DB migrations regarding the removal
of unnamed constraints.
In this report [1] you can find the list of affected commits.

The problem is that some of the affected commits are already backported to
Juno, and others will be [2], so I was wondering what the plan is for
backporting the compatibility fix to stable/juno.
I see two possibilities:

1) We backport [2] as is (with the broken migration), but we cut the new
stable release only once [3] is merged and back ported. This has the
advantage of having a cleaner backport tree in which all the changes in
master are cherry-picked without major changes.

2) We split [3] into multiple patches, and we only backport those that fix
commits that are already in Juno. Patches like [2] will be changed to
accommodate the fixed migration *before* being merged into the stable
branch. This will avoid intra-release code breakage (which is an issue for
people installing GBP directly from code).

Please share your thoughts, Thanks,
Ivar.

[0]
http://eavesdrop.openstack.org/meetings/networking_policy/2015/networking_policy.2015-04-09-18.00.log.txt
[1] https://bugs.launchpad.net/group-based-policy/+bug/1443606
[2] https://review.openstack.org/#/c/170972/
[3] https://review.openstack.org/#/c/173051/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][releases] OpenStack 2014.2.3 released

2015-04-13 Thread Adam Gandelman
Correction. Also included in the list of released projects for 2014.2.3,
Sahara: https://launchpad.net/sahara/juno/2014.2.3

Apologies,
Adam

On Mon, Apr 13, 2015 at 10:30 AM, Adam Gandelman gandelma...@gmail.com
wrote:

 Hello everyone,

 The OpenStack Stable Maintenance team is happy to announce the release
 of the 2014.2.3 stable Juno release.  We have been busy reviewing and
 accepting backported bugfixes to the stable/juno branches according
 to the criteria set at:

 https://wiki.openstack.org/wiki/StableBranch

 A total of 109 bugs have been fixed across all projects. These
 updates to Juno are intended to be low risk with no
 intentional regressions or API changes. The list of bugs, tarballs and
 other milestone information for each project may be found on Launchpad:

 https://launchpad.net/ceilometer/juno/2014.2.3
 https://launchpad.net/cinder/juno/2014.2.3
 https://launchpad.net/glance/juno/2014.2.3
 https://launchpad.net/heat/juno/2014.2.3
 https://launchpad.net/horizon/juno/2014.2.3
 https://launchpad.net/keystone/juno/2014.2.3
 https://launchpad.net/nova/juno/2014.2.3
 https://launchpad.net/neutron/juno/2014.2.3
 https://launchpad.net/trove/juno/2014.2.3

 Release notes may be found on the wiki:

 https://wiki.openstack.org/wiki/ReleaseNotes/2014.2.3

 The freeze on the stable/juno branches will be lifted today as we
 begin working toward the 2014.2.4 release.

 Thanks,
 Adam


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Regarding neutron bug # 1432582

2015-04-13 Thread Sudipto Biswas

Thanks, I have got a patchset out for review.
I have removed the exception that was being thrown back to the agent and 
have reduced the fix to just logging a meaningful message in the neutron 
server logs.
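
For context, a hedged sketch of the kind of skew check and log hint being discussed; the threshold and names are illustrative, not the actual patch:

    # Illustrative only: warn when an agent's reported heartbeat time is
    # further from the server clock than the liveness window, so operators
    # get a hint to check NTP instead of a silent "agent missing".
    import datetime

    AGENT_DOWN_TIME = 75  # seconds, illustrative default


    def check_clock_skew(agent_name, reported_at):
        skew = abs((datetime.datetime.utcnow() - reported_at).total_seconds())
        if skew > AGENT_DOWN_TIME:
            print("Clock skew of %.0fs detected for agent %s; check NTP on "
                  "the agent host, its heartbeats may be treated as stale."
                  % (skew, agent_name))


    check_clock_skew('neutron-openvswitch-agent@compute-1',
                     datetime.datetime.utcnow() - datetime.timedelta(hours=5))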

Appreciate your comments on the same.

Thanks,
Sudipto
On Monday 13 April 2015 11:56 AM, Kevin Benton wrote:
I would like to see some form of this merged at least as an error 
message. If a server has a bad CMOS battery and suffers a power 
outage, its clock could easily be several years behind. In that 
scenario, the NTP daemon could refuse to sync due to a sanity check.


On Wed, Apr 8, 2015 at 10:46 AM, Sudipto Biswas 
sbisw...@linux.vnet.ibm.com mailto:sbisw...@linux.vnet.ibm.com wrote:


Hi Guys, I'd really appreciate your feedback on this.

Thanks,
Sudipto


On Monday 30 March 2015 12:11 PM, Sudipto Biswas wrote:

Someone from my team had installed the OS on baremetal with a
wrong 'date'. When this node was added to the Openstack controller,
the logs from the neutron-agent on the compute node showed 'AMQP
connected', but the 'neutron agent-list' command would not list this
agent at all.

I could figure out the problem when the neutron-server debug
logs were enabled and they vaguely pointed at the rejection of AMQP
connections due to a timestamp mismatch. The neutron-server was
treating these requests as stale due to the timestamp of the node
being behind the neutron-server. However, there's no good way to
detect this if the agent runs on a node which is ahead of time.

I recently raised a bug here:
https://bugs.launchpad.net/neutron/+bug/1432582

And tried to resolve this with the review:
https://review.openstack.org/#/c/165539/

It went through quite a few +2s after 15-odd patch sets, but we
still are not on common ground w.r.t. addressing this situation.

My fix tries to log better and throw an exception back to the
neutron agent on FIRST-time boot of the agent, for better detection
of the problem.

I would like to get your thoughts on this fix: whether it seems
legit to have the fix per the patch, OR could you suggest an
approach to tackle this, OR suggest just abandoning the change.




__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Kevin Benton


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] ironic-specs open for Liberty!

2015-04-13 Thread Devananda van der Veen
Jay,

Thanks for staying on top of updating & opening Liberty specs, and for all
your help with specs during Kilo!

-Deva

On Fri, Apr 10, 2015 at 10:40 AM Jay Faulkner j...@jvf.cc wrote:

 Hi,

 Just a note to let you know Liberty specs are open for Ironic.

 Template Changes
 
 There are two minor changes to the spec template for Liberty:
  - State Machine Impact is now a full section, where changes to Ironic’s
 State Machine should be called out.
  - In the REST API Impact section, submitters should indicate if the
 microversion should be incremented as part of the change.


 Kilo Specs
 —
 If you had a spec approved for Kilo that didn’t get implemented, please
 put in a merge request moving the spec from kilo-archive/ to liberty/, and
 ensure your spec complies with the new template by adding a State Machine
 Impact section and indicating any microversion changes needed in
 the REST API Impact section.


 Backlog Specs
 ———
 As a reminder; Ironic does still employ a spec backlog for desired
 features that aren’t ready for implementation due to time, priority, or
 dependencies. If there are any specs currently in the backlog you’d like to
 propose for Liberty, please move them from backlog/ to liberty/ and add the
 additional required information for a full spec.


 Thanks to everyone for a successful Kilo cycle, and I’m looking forward to
 seeing the new slate of ideas for Liberty.


 -
 Jay Faulkner
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Kilo RC1 available

2015-04-13 Thread Ben Swartzlander

Hello everyone,

We have our release candidate for the Manila Kilo release. The RC1 
tarball, as well as lists of last-minute features and fixed bugs since 
kilo-3, are available at:


https://launchpad.net/manila/kilo/kilo-rc1

Unless release-critical issues are found that warrant a release 
candidate respin, this RC1 will be formally released as the 2015.1.0 
final version on April 30. You are therefore strongly encouraged to test 
and validate the tarball!


Alternatively, you can directly test the proposed/kilo branches at:
https://github.com/openstack/manila/tree/proposed/kilo

If you find an issue that could be considered release-critical, please 
file it at:


https://bugs.launchpad.net/manila/+filebug

and tag it *kilo-rc-potential* to bring it to the core team's attention.

Note that the master branch of Manila is now open for Liberty 
development, and feature freeze restriction no longer applies there.


-Ben Swartzlander

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Query on adding new table to cinder DB

2015-04-13 Thread Ivan Kolodyazhny
Hi Deepak,

Your steps look good for me except #3.1 - add unit-tests for new migrations

Regards,
Ivan Kolodyazhny

On Mon, Apr 13, 2015 at 8:20 PM, Deepak Shetty dpkshe...@gmail.com wrote:

 Hi Stackers,
 As part of my WIP work for implementing
 https://blueprints.launchpad.net/nova/+spec/volume-snapshot-improvements
 I am required to add a new table to cinder (snapshot_admin_metadata) and I
 was looking for some inputs on what the steps are to add a new table to an
 existing DB

 From what I know:

 1) Create a new migration script at
 cinder/db/sqlalchemy/migrate_repo/versions

 2) Implement the upgrade and downgrade methods

 3) Create your model inside cinder/db/sqlalchemy/models.py

 4) Sync DB using cinder-manage db sync

 Are these steps correct?
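
(A hedged sketch of what steps #1-#3 could look like for such a table, in the sqlalchemy-migrate style cinder used at the time; the migration file name and column set here are only illustrative, not the blueprint's actual schema.)

    # cinder/db/sqlalchemy/migrate_repo/versions/0XX_add_snapshot_admin_metadata.py
    # Illustrative migration sketch only.
    from sqlalchemy import (Column, DateTime, ForeignKey, Integer, MetaData,
                            String, Table)


    def upgrade(migrate_engine):
        meta = MetaData()
        meta.bind = migrate_engine
        # Load the referenced table so the ForeignKey can resolve.
        Table('snapshots', meta, autoload=True)

        snapshot_admin_metadata = Table(
            'snapshot_admin_metadata', meta,
            Column('id', Integer, primary_key=True, nullable=False),
            Column('snapshot_id', String(36), ForeignKey('snapshots.id'),
                   nullable=False),
            Column('key', String(255)),
            Column('value', String(255)),
            Column('created_at', DateTime),
            Column('updated_at', DateTime),
            Column('deleted_at', DateTime),
            Column('deleted', Integer),
            mysql_engine='InnoDB')
        snapshot_admin_metadata.create()


    def downgrade(migrate_engine):
        meta = MetaData()
        meta.bind = migrate_engine
        Table('snapshot_admin_metadata', meta, autoload=True).drop()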

 thanx,
 deepak

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Neutron] The specs process, effective operators feedback and product management

2015-04-13 Thread Maru Newby

 On Apr 10, 2015, at 11:04 AM, Boris Pavlovic bo...@pavlovic.me wrote:
 
 Hi, 
 
 I believe that specs are too detailed and too dev oriented for managers, 
 operators and devops. 
 They actually don't want/have time to write/read all the stuff in specs and 
 that's why the communication between the dev & operators communities is broken. 
 
 I would recommend to think about simpler approaches like making mechanism for 
 proposing features/changes in projects. 
 Like we have in Rally:  
 https://rally.readthedocs.org/en/latest/feature_requests.html
 
 This is similar to specs but concentrates more on WHAT rather than HOW. 

+1

I think the line between HOW and WHAT is too often blurred in Neutron.  Unless 
we’re able to improve our ability to communicate at an appropriate level of 
abstraction with non-dev stakeholders, meeting their needs will continue to be 
a struggle.


Maru
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Andrey Skedzinskiy for fuel-qa(devops) core

2015-04-13 Thread Igor Kalnitsky
+1.

On Mon, Apr 13, 2015 at 4:09 PM, Sergii Golovatiuk
sgolovat...@mirantis.com wrote:
 Strong +1

 Nastya forgot to mention Andrey's participation in the Ubuntu 14.04 feature.
 With Andrey's help the feature went smooth and easy ;)


 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Mon, Apr 13, 2015 at 12:37 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 +1

 On Mon, Apr 13, 2015 at 11:37 AM, Alexander Kislitsky
 akislit...@mirantis.com wrote:

 Andrey shows great attention to detail. +1 for him.

 On Mon, Apr 13, 2015 at 11:22 AM, Anastasia Urlapova
 aurlap...@mirantis.com wrote:

 Guys,
 I would like to nominate Andrey Skedzinskiy[1] for
 fuel-qa[2]/fuel-devops[3] core team.

 Andrey is one of the strongest reviewers, under his watchful eye are
 such features as:
 - upgrade/rollback master node
 - collect usage information
 - OS patching
 - UI tests
 and others

 Please vote for Andrey!


 Nastya.

 [1]http://stackalytics.com/?project_type=stackforge&user_id=asledzinskiy
 [2]https://github.com/stackforge/fuel-qa
 [3]https://github.com/stackforge/fuel-devops


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 35bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com
 www.mirantis.ru
 vkuk...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - Joining the team - interested in a Debian Developer and experienced Python and Network programmer?

2015-04-13 Thread Joe Mcbride
My apologies to the list. This was not intended to be broadcast.



From: Joe Mcbride
Sent: Monday, April 13, 2015 12:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] - Joining the team - interested in a 
Debian Developer and experienced Python and Network programmer?

Hi Matt,
Our team at Rackspace is looking to add a developer, focused on building out 
and deploying Designate (DNSaaS for Openstack). When we go live, we expect to 
have the largest public deployment, so scaling and migration challenges will be 
particularly interesting technical problems to solve.

Best of luck on getting into the Neutron fun.

__
Joe McBride
Rackspace Cloud DNS
I’m hiring a software developer 
https://gist.github.com/joeracker/d49030cef6001a8f94d0



From: Matt Grant m...@mattgrant.net.nz
Sent: Thursday, April 9, 2015 2:13 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] - Joining the team - interested in a Debian 
Developer and experienced Python and Network programmer?

Hi!

I am just wondering what the story is about joining the neutron team.
Could you tell me if you are looking for new contributors?

Previously I have programmed OSPFv2 in Zebra/Quagga, and worked as a
router developer for Allied Telesyn.  I also have extensive Python
programming experience, having worked on the DNS Management System.

I have been experimenting with IPv6 since 2008 on my own home network,
and I am currently installing a Juno Openstack cluster to learn how
things tick.

Have you guys ever figured out how to do a hybrid L3 North/South Neutron
router that propagates tenant routes and networks into OSPF/BGP via a
routing daemon, and uses floating MAC addresses/costed flow rules via
OVS to fail over to a hot standby router? There are practical use cases
for such a thing in smaller deployments.

I have a single standalone example working by turning off
neutron-l3-agent network namespace support, and importing the connected
interface and static routes into Bird and Birdv6. The AMPQ connection
back to the neutron-server is via the upstream interface and is secured
via transport mode IPSEC (just easier than bothering with https/SSL).
Bird looks easier to run from neutron, as it is a single process, than a
multi-process Quagga implementation.  Incidentally, I am running this in
an LXC container.

Could someone please point me in the right direction?  I would love to
be in Vancouver :-)

Best Regards,

--
Matt Grant,  Debian and Linux Systems Administration and Consulting
Mobile: 021 0267 0578
Email: m...@mattgrant.net.nz


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] [Murano] Mistral devstack installation is failing in murano gate job

2015-04-13 Thread Dmitri Zimine
My 2c: 

Yes Mistral moved to YAQL 1.0 based on Murano team recommendations :)

some questions/comments before we decide how to proceed: 

1) Let’s clarify the impact: this problem doesn’t affect Murano directly; but 
it impacts the Murano-Congress-Mistral initiative, correct? 
Is this a voting gate? What exactly is impacted? Are there any simpler 
workarounds? 

2) on YAQL readiness:
Mistral moved to YAQL 1.0 because of 1) power, 2) upcoming docs and 3) compatibility.

We target to claim Mistral DSL “complete” in Kilo. YAQL is a big part of DSL 
from the user standpoint.
Changing YAQL makes users migrate their workflows.
Thus we want to stick to a version of YAQL which will be documented and used 
long term. 

If YAQL 1.0 is not ready in Kilo we should revert, no questions. 
If it is ready, and comes with documentation - would it be good for Murano 
users if Murano moves to it?

3) given that YAQL 0.2 is supported for another cycle (.5 year) and users of 
both Mistral and Murano are using it,
are there any plans to add documentation to it? It is the lack of docs on 0.2 that 
is the biggest reason to push forward. 
(Does this sound like an invitation to cheat and offer no docs for 1.0 in kilo 
to convince Mistral to stay on 0.2?)

DZ 

On Apr 13, 2015, at 6:13 AM, Serg Melikyan smelik...@mirantis.com wrote:

 Hi Nikolay & Filip,
 
 indeed, the root cause of the issue is that Murano & Mistral use different
 versions of the yaql library. Murano installs yaql 0.2.4 and overrides
 1.0.0b2 already installed and expected by Mistral.
 
 We decided that we are not going to switch to the yaql 1.0.0 in Kilo
 since we already finished Kilo development and working on bug-fixes
 and releasing RC. This gate may only be fixed if Mistral reverts
 1.0.0 support in Kilo :'(
 
 Nikolay, what do you think about migrating to YAQL 1.0.0 in the next
 release? I know that it was me who proposed that the Mistral team adopt yaql
 1.0.0, and I am sorry, I didn't realize all the consequences of moving
 Mistral to yaql 1.0.0 while the Murano team lives with yaql 0.2.4.
 
 We need to work on packaging and supporting yaql in Ubuntu/CentOS in
 order to add this library to the global-requirements and to avoid this
 kind of issue in the future.
 
 On Mon, Apr 13, 2015 at 3:58 PM, Nikolay Makhotkin
 nmakhot...@mirantis.com wrote:
 
 We are facing an issue with Mistral devstack installation in our gate job 
 testing murano-congress-mistral integration (policy enforcement) [1] . 
 Mistral devstack scripts are failing with following import error [2]
 
 
 Hi, Filip!
 
 Recently Mistral has moved to the new YAQL, and it seems this dependency is 
 missing (yaql 1.0, currently yaql 1.0.0b2)
 
 I think the root of the problem is that Murano and Mistral have different yaql 
 versions installed.
 
 --
 Best Regards,
 Nikolay
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] [Murano] Mistral devstack installation is failing in murano gate job

2015-04-13 Thread Stan Lagun
1) yaql 1.0 is not a drop-in replacement for yaql 0.2, and only one version
can be installed on any given system, unless we use virtualenv, Docker or
anything else to isolate applications. So if Murano and Mistral use
different yaql versions they will be unable to live together on the same host

2) Currently we observe such impact on devstack tests, but in general it
will mean Murano and Mistral cannot be installed on the same DevStack or be
together in some OpenStack distribution like Mirantis OpenStack, at least
the way they are deployed currently

3) yaql 1.0 is in beta status and is ready exactly to that degree. We don't
expect any breaking changes anymore but it may still contain some bugs

4) Murano will move to yaql 1.0. We just didn't manage to do that in time
before FF and it is too late to do that in Kilo

5) Generally we should have documentation for both versions. But the fact
is that at the moment we don't even have documentation for 1.0, which is of
a higher priority for us. So once again I suggest contributing rather than
waiting for somebody else

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

sla...@mirantis.com

On Mon, Apr 13, 2015 at 9:19 PM, Dmitri Zimine dzim...@stackstorm.com
wrote:

 My 2c:

 Yes Mistral moved to YAQL 1.0 based on Murano team recommendations :)

 some questions/comments before we decide how to proceed:

 1) Let’s clarify the impact: this problem doesn’t affect Murano directly;
 but it impacts Murano-Congress-Mistral initiative, correct?
 Is this a voting gate? What exactly is impacted? Are there any simpler
 workarounds?

 2) on YAQL readiness:
 Mistral moved to YAQL it because 1) power 2) upcoming docs and 3)
 compatibility.

 We target to claim Mistral DSL “complete” in Kilo. YAQL is a big part of
 DSL from the user standpoint.
 Changing YAQL makes users migrate their workflows.
 Thus we want to stick to a version of YAQL which will be documented and
 used long term.

 If YAQL 1.0 is not ready in Kilo we should revert no questions.
 If it is ready, and comes with documentation - would it be good for Murano
 users if Murano moves to it?

 3) given that YAQL 0.2 is supported for another cycle (.5 year) and users
 of both Mistral and Murano are using it,
 are there any plans to add documentation to it? It is the lack of docs on
 0.2 is the biggest reason to push forward.
 (Does this sound like an invitation to cheat and offer no docs for 1.0 in
 kilo to convince Mistral to stay on 0.2?)

 DZ

 On Apr 13, 2015, at 6:13 AM, Serg Melikyan smelik...@mirantis.com wrote:

  Hi Nikolay & Filip,
 
  indeed, root cause of the issue is that Murano & Mistral use different
  version of yaql library. Murano installs yaql 0.2.4 and overrides
  1.0.0b2 already installed and expected by Mistral.
 
  We decided that we are not going to switch to the yaql 1.0.0 in Kilo
  since we already finished Kilo development and working on bug-fixes
  and releasing RC. This gate may be fixed if only Mistral will revert
  1.0.0 support in Kilo :'(
 
  Nikolay, what do you think about migrating to YAQL 1.0.0 in the next
  release? I know that it was me who proposed Mistral team to adopt yaql
  1.0.0, and I am sorry, I didn't realize all consequences of moving
  Mistral to yaql 1.0.0 and Murano team living with yaql 0.2.4.
 
  We need to work on packaging and supporting yaql in Ubuntu/CentOS in
  order to add this library to the global-requirements and to avoid this
  kind of issues in the future.
 
  On Mon, Apr 13, 2015 at 3:58 PM, Nikolay Makhotkin
  nmakhot...@mirantis.com wrote:
 
  We are facing an issue with Mistral devstack installation in our gate
 job testing murano-congress-mistral integration (policy enforcement) [1] .
 Mistral devstack scripts are failing with following import error [2]
 
 
  Hi, Filip!
 
  Recently Mistral has moved to new YAQL, and it seems this dependency is
 missed (yaql 1.0, currently yaql 1.0.0b2)
 
  I think the root of problem is that Murano and Mistral have different
 yaql versions installed.
 
  --
  Best Regards,
  Nikolay
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
  http://mirantis.com | smelik...@mirantis.com
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

[openstack-dev] [releases] oslo.db 1.8.0 (liberty)

2015-04-13 Thread Doug Hellmann
We are excited to announce the release of:

oslo.db 1.8.0: Oslo Database library

For more details, please see the git log history below and:

http://launchpad.net/oslo.db/+milestone/1.8.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.db

Changes in oslo.db 1.7.1..1.8.0
---

9e004bd 2015-04-10 15:25:56 +0300 Sanity check after migration
af9a99b 2015-04-09 17:43:08 +0300 Add filters for DBDataError exception
198a9e7 2015-04-07 14:47:07 -0700 Add pypi download + version badges
f94046b 2015-04-07 18:50:05 +0200 exc_filters: support for ForeignKey error on 
delete
e95e8ef 2015-04-04 02:04:41 -0400 Standardize setup.cfg summary for oslo libs
124239c 2015-04-03 16:29:00 +0200 Handle CHECK constraint integrity in 
PostgreSQL
3522ef7 2015-03-26 17:36:03 +0100 Catch DBDuplicateError in MySQL if primary 
key is binary
1792c9f 2015-03-21 06:01:13 +0000 Imported Translations from Transifex
2982693 2015-03-21 00:16:54 +0000 Updated from global requirements
ec9b645 2015-03-19 06:01:12 +0000 Imported Translations from Transifex
7bb0356 2015-03-18 10:43:57 +0200 Provide working SQLA_VERSION attribute
02aeda2 2015-03-17 14:17:41 +0300 Avoid excessing logging of RetryRequest 
exception
74b539b 2015-03-13 13:44:54 +0300 Fixed bug in InsertFromSelect columns order
ebbf23d 2015-03-12 12:45:31 -0400 Add process guards + invalidate to the 
connection pool
e0baed6 2015-03-05 14:06:59 +0000 Implement generic update-on-match feature

Diffstat (except docs and test files)
-

README.rst |  11 +-
.../locale/en_GB/LC_MESSAGES/oslo.db-log-error.po  |  18 +-
.../en_GB/LC_MESSAGES/oslo.db-log-warning.po   |  25 +-
.../locale/fr/LC_MESSAGES/oslo.db-log-warning.po   |  23 +-
oslo.db/locale/oslo.db-log-warning.pot |  23 +-
oslo_db/api.py |   6 +-
oslo_db/exception.py   |  25 +
oslo_db/sqlalchemy/compat/utils.py |  12 +-
oslo_db/sqlalchemy/exc_filters.py  |  44 +-
oslo_db/sqlalchemy/migration.py|  10 +-
oslo_db/sqlalchemy/session.py  |  22 +
oslo_db/sqlalchemy/update_match.py | 508 +
oslo_db/sqlalchemy/utils.py|  57 ++-
.../old_import_api/sqlalchemy/test_exc_filters.py  |  49 ++
.../sqlalchemy/test_migration_common.py|   3 +-
requirements.txt   |   8 +-
setup.cfg  |   2 +-
test-requirements-py2.txt  |   6 +-
test-requirements-py3.txt  |   6 +-
25 files changed, 1432 insertions(+), 87 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index e3384db..8350f43 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9,3 +9,3 @@ iso8601>=0.1.9
-oslo.i18n>=1.3.0  # Apache-2.0
-oslo.config>=1.9.0  # Apache-2.0
-oslo.utils>=1.2.0   # Apache-2.0
+oslo.i18n>=1.5.0,<1.6.0  # Apache-2.0
+oslo.config>=1.9.3,<1.10.0  # Apache-2.0
+oslo.utils>=1.4.0,<1.5.0   # Apache-2.0
@@ -14 +14 @@ sqlalchemy-migrate>=0.9.5
-stevedore>=1.1.0  # Apache-2.0
+stevedore>=1.3.0,<1.4.0  # Apache-2.0
diff --git a/test-requirements-py2.txt b/test-requirements-py2.txt
index 6ff3a1b..24c3b46 100644
--- a/test-requirements-py2.txt
+++ b/test-requirements-py2.txt
@@ -15,2 +15,2 @@ sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
-oslosphinx>=2.2.0  # Apache-2.0
-oslotest>=1.2.0  # Apache-2.0
+oslosphinx>=2.5.0,<2.6.0  # Apache-2.0
+oslotest>=1.5.1,<1.6.0  # Apache-2.0
@@ -19 +19 @@ testtools>=0.9.36,!=1.2.0
-tempest-lib>=0.3.0
+tempest-lib>=0.4.0
diff --git a/test-requirements-py3.txt b/test-requirements-py3.txt
index 4290cc6..6ca989c 100644
--- a/test-requirements-py3.txt
+++ b/test-requirements-py3.txt
@@ -14,2 +14,2 @@ sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
-oslosphinx>=2.2.0  # Apache-2.0
-oslotest>=1.2.0  # Apache-2.0
+oslosphinx>=2.5.0,<2.6.0  # Apache-2.0
+oslotest>=1.5.1,<1.6.0  # Apache-2.0
@@ -19 +19 @@ testtools>=0.9.36,!=1.2.0
-tempest-lib>=0.3.0
+tempest-lib>=0.4.0
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][neutron] --config-dir vs. --config-file

2015-04-13 Thread Kevin Benton
What is the order of priority between the same option defined in two files
with --config-dir?

With '--config-file' args it seemed that the latter ones took
priority over the earlier ones. So an admin previously had the ability to
abuse that by putting all of the desired global settings in one of the
earlier loaded configs and then adding some node-specific overrides to the
ones loaded later.

Will there still be the ability to do that with RDO?

On Mon, Apr 13, 2015 at 8:25 AM, Ihar Hrachyshka ihrac...@redhat.com
wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi,

 RDO/master (aka Delorean) moved neutron l3 agent to this configuration
 scheme, configuring l3 (and vpn) agent with --config-dir [1][2][3].

 We also provided a way to configure neutron services without ever
 touching a single configuration file from the package [4] where each
 service has a config-dir located under
 /etc/neutron/conf.d/<service-name> that can be populated by *.conf
 files that will be automatically read by services during startup.

 All other distributions are welcome to follow the path. Please don't
 introduce your own alternative to /etc/neutron/conf.d/... directory to
 avoid unneeded platform dependent differences in deployment tools.

 As for devstack, it's not really feasible to introduce such a change
 there (at least from my perspective), so it's downstream only.

 [1]:
 https://github.com/openstack-packages/neutron/blob/f20-master/openstack-neutron.spec#L602
 [2]:
 https://github.com/openstack-packages/neutron/blob/f20-master/neutron-l3-agent.service#L8
 [3]:
 https://github.com/openstack-packages/neutron-vpnaas/blob/f20-master/openstack-neutron-vpnaas.spec#L97
 [4]: https://review.gerrithub.io/#/c/229162/

 Thanks,
 /Ihar

 On 03/13/2015 03:11 PM, Ihar Hrachyshka wrote:
  Hi all,
 
  (I'm starting a new [packaging] tag in this mailing list to reach
  out people who are packaging our software in distributions and
  whatnot.)
 
  Neutron vendor split [1] introduced situations where the set of
  configuration files for L3/VPN agent is not stable and depends on
  which packages are installed in the system. Specifically,
  fwaas_driver.ini file is now shipped in neutron_fwaas tarball
  (openstack-neutron-fwaas package in RDO), and so
  --config-file=/etc/neutron/fwaas_driver.ini argument should be
  passed to L3/VPN agent *only* when the new package with the file is
  installed.
 
  In devstack, we solve the problem by dynamically generating CLI
  arguments list based on which services are configured in
  local.conf [2]. It's not a viable approach in proper distribution
  packages though, where we usually hardcode arguments [3] in our
  service manifests (systemd unit files, in case of RDO).
 
  The immediate solution to solve the issue would be to use
  --config-dir argument that is also provided to us by oslo.config
  instead of --config-file, and put auxiliary files there [4] (those
  may be just symbolic links to actual files).
 
  I initially thought to put the directory under /etc/neutron/, but
  then realized we may be interested in keeping it out of user sight
  while it only references stock (upstream) configuration files.
 
  But then a question arises: whether it's useful just for this
  particular case? Maybe there is value in using --config-dir outside
  of it? And in that case, maybe the approach should be replicated to
  other services?
 
  AFAIU --config-dir could actually be useful to configure services.
  Now instead of messing with configuration files that are shipped
  with packages (and handling .rpmnew files [5] that are generated on
  upgrade when local changes to those files are detected), users (or
  deployment/installation tools) could instead drop a *.conf file in
  that configuration directory, being sure their stock configuration
  file is always current, and no .rpmnew files are there to manually
  solve conflicts.
 
  We can also use two --config-dir arguments, one for stock/upstream
  configuration files, located out of /etc/neutron/, and another one
  available for population with user configuration files, under
  /etc/neutron/. This is similar to how we put settings considered to
  be 'sane distro defaults' in neutron-dist.conf file that is not
  available for modification [6][7].
 
  Of course users would still be able to set up their deployment the
  old way. In that case, nothing will change for them. So the
  approach is backwards compatible.
 
  I wonder whether the idea seems reasonable and actually useful for
  people. If so, we may want to come up with some packaging
  standards (on where to put those config-dir(s), how to name them,
  how to maintain symbolic links inside them) to avoid more work for
  deployment tools.
 
  [1]:
  https://blueprints.launchpad.net/neutron/+spec/core-vendor-decompositi
 on
 
 
 [2]:
  http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/neutron#
 n393
 
 
 [3]:
  

Re: [openstack-dev] [packaging][neutron] --config-dir vs. --config-file

2015-04-13 Thread Dimitri John Ledkov
Hello,

For Clear Linux* for Intel Architecture we do not allow packaging
things in /etc; instead we leave /etc completely empty and for
user/admin modifications only.
Typically we achieve this by moving sane distro defaults to be
compiled-in defaults, or read from alternative locations somewhere
under /usr.
This is similar to e.g. how udev reads from /usr/lib & /etc (ditto
systemd units, the XDG Freedesktop spec, etc.).

Integration-wise, it helps a lot if there is a conf.d-like directory
somewhere under /usr & under /etc, such that both packages
and the user can integrate things.

I'll need to look more into this, but e.g. support for
/usr/share/neutron/conf.d/*.conf or
/usr/share/openstack/neutron/*.conf would be useful to us and other
distributions as well.

Shipping things in /etc is a pain on both dpkg & rpm based
distributions, as config file handling is complex and has many corner
cases; hence in the past we all had to do transitions of stock
config from /etc to /usr (e.g. udev rules). Please keep
/etc for _only_ user-created configurations and changes, without any
stock configuration, documentation, or defaults shipped there.

Regards,

Dimitri.

ps sorry for loss of context, only recently subscribed, don't have
full access to the thread and hence the ugly top-post reply, sorry
about that.

On 13 April 2015 at 09:25, Ihar Hrachyshka ihrac...@redhat.com wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi,

 RDO/master (aka Delorean) moved neutron l3 agent to this configuration
 scheme, configuring l3 (and vpn) agent with --config-dir [1][2][3].

 We also provided a way to configure neutron services without ever
 touching a single configuration file from the package [4] where each
 service has a config-dir located under
 /etc/neutron/conf.d/service-name that can be populated by *.conf
 files that will be automatically read by services during startup.

 All other distributions are welcome to follow the path. Please don't
 introduce your own alternative to /etc/neutron/conf.d/... directory to
 avoid unneeded platform dependent differences in deployment tools.

 As for devstack, it's not really feasible to introduce such a change
 there (at least from my perspective), so it's downstream only.

 [1]:
 https://github.com/openstack-packages/neutron/blob/f20-master/openstack-
 neutron.spec#L602
 [2]:
 https://github.com/openstack-packages/neutron/blob/f20-master/neutron-l3
 - -agent.service#L8
 [3]:
 https://github.com/openstack-packages/neutron-vpnaas/blob/f20-master/ope
 nstack-neutron-vpnaas.spec#L97
 [4]: https://review.gerrithub.io/#/c/229162/

 Thanks,
 /Ihar

 On 03/13/2015 03:11 PM, Ihar Hrachyshka wrote:
 Hi all,

 (I'm starting a new [packaging] tag in this mailing list to reach
 out people who are packaging our software in distributions and
 whatnot.)

 Neutron vendor split [1] introduced situations where the set of
 configuration files for L3/VPN agent is not stable and depends on
 which packages are installed in the system. Specifically,
 fwaas_driver.ini file is now shipped in neutron_fwaas tarball
 (openstack-neutron-fwaas package in RDO), and so
 --config-file=/etc/neutron/fwaas_driver.ini argument should be
 passed to L3/VPN agent *only* when the new package with the file is
 installed.

 In devstack, we solve the problem by dynamically generating CLI
 arguments list based on which services are configured in
 local.conf [2]. It's not a viable approach in proper distribution
 packages though, where we usually hardcode arguments [3] in our
 service manifests (systemd unit files, in case of RDO).

 The immediate solution to solve the issue would be to use
 --config-dir argument that is also provided to us by oslo.config
 instead of --config-file, and put auxiliary files there [4] (those
 may be just symbolic links to actual files).

 I initially thought to put the directory under /etc/neutron/, but
 then realized we may be interested in keeping it out of user sight
 while it only references stock (upstream) configuration files.

 But then a question arises: whether it's useful just for this
 particular case? Maybe there is value in using --config-dir outside
 of it? And in that case, maybe the approach should be replicated to
 other services?

 AFAIU --config-dir could actually be useful to configure services.
 Now instead of messing with configuration files that are shipped
 with packages (and handling .rpmnew files [5] that are generated on
 upgrade when local changes to those files are detected), users (or
 deployment/installation tools) could instead drop a *.conf file in
 that configuration directory, being sure their stock configuration
 file is always current, and no .rpmnew files are there to manually
 solve conflicts).

 We can also use two --config-dir arguments, one for stock/upstream
 configuration files, located out of /etc/neutron/, and another one
 available for population with user configuration files, under
 /etc/neutron/. This is similar to how we put settings considered to

Re: [openstack-dev] [all] Problems with keystoneclient stable branch (and maybe yours too)

2015-04-13 Thread Doug Hellmann
Excerpts from Brant Knudson's message of 2015-04-12 20:14:00 -0500:
 There were several problems with the keystoneclient stable/juno branch that
 have been or are in the process of being fixed since its creation.
 Hopefully this note will be useful to other projects that create stable
 branches for their libraries.

Thanks for documenting these, Brant.

 1) Unit tests didn't pass with earlier packages
 
 The supported versions of several of the packages in requirements.txt in
 the stable branch are in the process of being capped[0], so that the tests
 are now running with older versions of the packages. Since we don't
 normally test with the older packages we didn't know that the
 keystoneclient unit tests don't actually pass with the old version of the
 package. This is fixed by correcting the tests to work with the older
 versions of the packages.[1][2]
 
 [0] https://review.openstack.org/#/c/172220/
 [1] https://review.openstack.org/#/c/172655/
 [2] https://review.openstack.org/#/c/172256/
 
 It would be great if we were testing with the minimum versions of the
 packages that we say we support somehow since that would have caught this.

OK, this was unexpected but does make sense. We have requests for
minimum version testing periodically for lots of other reasons, so we
should add this one to the list in case it is finally long enough to
attract someone to work on the problem.

 2) Incorrect cap in requirements.txt
 
  python-keystoneclient in stable/juno was capped at <=1.1.0, and 1.1.0 is
 the version tagged for the stable branch. When you create a review in
 stable/juno it installs python-keystoneclient and now the system has got a
  version like 1.1.0.post1, which is >1.1.0, so now python-keystoneclient
 doesn't match the requirements and swift-proxy fails to start (swift-proxy
 is very good at catching this problem for whatever reason). The cap should
  have been <1.2.0 so that we can propose patches and also make fix releases
 (1.1.1, 1.1.2, etc.).[3]
 
 [3] https://review.openstack.org/#/c/172718/
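
(A hedged illustration of the version-ordering pitfall described above, using pkg_resources; the version strings are the ones from the example.)

    # A locally-built sdist reports a post-release version, which no longer
    # satisfies a "<=1.1.0" cap but does satisfy "<1.2.0".
    from pkg_resources import parse_version

    installed = parse_version('1.1.0.post1')
    print(installed <= parse_version('1.1.0'))  # False -> breaks the old cap
    print(installed < parse_version('1.2.0'))   # True  -> a <1.2.0 cap works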

Approved.

 
 I tried to recap all of the clients but that didn't pass Jenkins, probably
 because one or more clients didn't use semver correctly and have
 requirements updates in a micro release.[4]
 
 [4] https://review.openstack.org/#/c/172719/

Did you literally update them all, or only the ones that looked like
they might be wrong? It looks like those caps came from the cap.py
script in the repository, which makes me wonder if we were just too
aggressive with defining what the cap should be.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] VMware CI

2015-04-13 Thread Matt Riedemann



On 4/12/2015 12:23 AM, Gary Kotton wrote:

Hi,
Can a core please take a look at
https://review.openstack.org/#/c/171037. The CI is broken due to
commit e7ae5bb7fbdd5b79bde8937958dd0a645554a5f0.
Thanks
Gary


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The VMware NSX CI passed on patch set 15 of 
https://review.openstack.org/#/c/136935/ so why did it break the CI 
post-merge?


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Consistent variable documentation for diskimage-builder elements

2015-04-13 Thread Dan Prince
On Tue, 2015-04-07 at 21:06 +, Gregory Haynes wrote:
 Hello,
 
 I'd like to propose a standard for consistently documenting our
 diskimage-builder elements. I have pushed a review which transforms the
 apt-sources element to this format[1][2]. Essentially, I'd like to move
 in the direction of making all our element README.rst's contain a
 subsection called Environment Variables with a Definition List[3] where
 each entry is the environment variable. Under that environment variable
 we will have a field list[4] with Required, Default, Description, and
 optionally Example.
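
(Roughly, the proposed README section would look something like the sketch below; the variable and field values here are only illustrative, see [1][2] for the real apt-sources change.)

    Environment Variables
    ---------------------

    DIB_APT_SOURCES
      :Required: No
      :Default: None (the image keeps its stock sources.list)
      :Description: Path to a sources.list file to install into the image.
      :Example: ``DIB_APT_SOURCES=~/mysources.list``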
 
 The goal here is that rather than users being presented with a wall of
 text that they need to dig through to remember the name of a variable,
 there is a quick way for them to get the information they need. It also
 should help us to remember to document the vital bits of information for
 each variable we use.
 
 Thoughts?

I like the direction of the cleanup. +2

I do wonder how we'll enforce consistency in making sure future changes
adhere to the new format. It would be nice to have a CI check on these
things so people don't constantly need to debate the correct syntax,
etc.

Dan

 
 Cheers,
 Greg
 
 1 - https://review.openstack.org/#/c/171320/
 2 - 
 http://docs-draft.openstack.org/20/171320/1/check/gate-diskimage-builder-docs/d3bdf04//doc/build/html/elements/apt-sources/README.html
 3 - 
 http://docutils.sourceforge.net/docs/user/rst/quickref.html#definition-lists
 4 - http://docutils.sourceforge.net/docs/user/rst/quickref.html#field-lists
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3] IPAM alternate refactoring

2015-04-13 Thread Kevin Benton
I think removing all occurrences of create_port inside of another
transaction is something we should be doing for a couple of reasons.

First, it's a recipe for the cherished lock wait timeout deadlocks
because create_port makes yielding calls. These are awful to troubleshoot
and are pretty annoying for users (request takes ~60 seconds and then blows
up).

Second, create_port in ML2 expects the transaction to be committed to the
DB by the time it's done with pre-commit phase, which we break by opening a
parent transaction before calling it so the failure handling semantics may
be messed up.
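
A minimal, self-contained sketch of the ordering being advocated here (all names are illustrative stubs, not ML2/IPAM code): do the external allocation before opening the DB transaction, keep the transaction free of yielding calls, and undo the allocation explicitly if the DB write fails.

    import contextlib


    class FakeIpamDriver(object):
        """Stand-in for an external IPAM backend."""

        def allocate(self, fixed_ips):
            print('allocated %s externally' % fixed_ips)   # may yield/block
            return {'ips': fixed_ips}

        def deallocate(self, allocation):
            print('rolled back %s externally' % allocation)


    @contextlib.contextmanager
    def db_transaction():
        print('BEGIN')
        yield
        print('COMMIT')


    def create_port(ipam_driver, port):
        allocation = ipam_driver.allocate(port['fixed_ips'])  # before the tx
        try:
            with db_transaction():          # short, no yielding calls inside
                print('INSERT port row for %s' % allocation['ips'])
        except Exception:
            ipam_driver.deallocate(allocation)  # undo the external change
            raise


    create_port(FakeIpamDriver(), {'fixed_ips': ['10.0.0.5']})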



On Mon, Apr 13, 2015 at 9:48 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 Have we found the last of them?  I wonder.  I suppose any higher level
 service like a router that needs to create ports under the hood (under
 the API) will have this problem.  The DVR fip namespace creation comes
 to mind.  It will create a port to use as the external gateway port
 for that namespace.  This could spring up in the context of another
 create_port, I think (VM gets new port bound to a compute host where a
  fip namespace needs to spring into existence).

 Carl

 On Mon, Apr 13, 2015 at 10:24 AM, John Belamaric
 jbelama...@infoblox.com wrote:
  Thanks Pavel. I see an additional case in L3_NAT_dbonly_mixin, where it
  starts the transaction in create_router, then eventually gets to
  create_port:
 
  create_router (starts tx)
-self._update_router_gw_info
-_create_gw_port
-_create_router_gw_port
-create_port(plugin)
 
  So that also would need to be unwound.
 
  On 4/13/15, 10:44 AM, Pavel Bondar pbon...@infoblox.com wrote:
 
 Hi,
 
 I made some investigation on the topic[1] and see several issues along the
 way.
 
 1. Plugin's create_port() is wrapped up in a top-level transaction for the
 create floating ip case[2], so it becomes more complicated to do IPAM
 calls outside the main db transaction.
 
 - for create floating ip case transaction is initialized on
 create_floatingip level:
 create_floatingip(l3_db)-create_port(plugin)-create_port(db_base)
 So IPAM call should be added into create_floatingip to be outside db
 transaction
 
 - for usual port create transaction is initialized on plugin's
 create_port level, and John's change[1] cover this case:
 create_port(plugin)-create_port(db_base)
 
 Create floating ip work-flow involves calling plugin's create_port,
 so IPAM code inside of it should be executed only when it is not wrapped
 into top level transaction.
 
 2. There is an open question about error handling.
 Should we use taskflow to manage IPAM calls to external systems,
 or is a simple exception-based model enough to handle rollback actions on
 third-party systems in case the main db transaction fails?
 
 [1] https://review.openstack.org/#/c/172443/
 [2] neutron/db/l3_db.py: line 905
 
 Thanks,
 Pavel
 
 On 10.04.2015 21:04, openstack-dev-requ...@lists.openstack.org wrote:
  L3 Team,
 
  I have put up a WIP [1] that provides an approach that shows the ML2
 create_port method refactored to use the IPAM driver prior to initiating
 the database transaction. Details are in the commit message - this is
 really just intended to provide a strawman for discussion of the
 options. The actual refactor here is only about 40 lines of code.
 
  [1] https://review.openstack.org/#/c/172443/
 
 
  Thanks,
  John
 
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: [neutron] Neutron scaling datapoints?

2015-04-13 Thread Daniel Comnea
My $2 cents:

I like the 3rd party backend; however, instead of ZK wouldn't Consul [1] fit
better due to being lighter / having out-of-the-box multi-DC awareness?

Dani

[1] Consul - https://www.consul.io/


On Mon, Apr 13, 2015 at 9:51 AM, Wangbibo wangb...@huawei.com wrote:

  Hi Kevin,



 Totally agree with you that heartbeat from each agent is something that we
 cannot eliminate currently. Agent status depends on it, and further
 scheduler and HA depends on agent status.



 I proposed a Liberty spec for introducing open framework/pluggable agent
 status drivers.[1][2]  It allows us to use some other 3rd party backend
 to monitor agent status, such as zookeeper, memcached. Meanwhile, it
 guarantees backward compatibility so that users could still use db-based
 status monitoring mechanism as their default choice.



 Based on that, we may do further optimization on the issues Attila and you
 mentioned. Thanks.



 [1] BP  -
 https://blueprints.launchpad.net/neutron/+spec/agent-group-and-status-drivers

 [2] Liberty Spec proposed - https://review.openstack.org/#/c/168921/



 Best,

 Robin









 *From:* Kevin Benton [mailto:blak...@gmail.com]
 *Sent:* 11 April 2015 12:35
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] Neutron scaling datapoints?



 Which periodic updates did you have in mind to eliminate? One of the few
 remaining ones I can think of is sync_routers but it would be great if you
 can enumerate the ones you observed because eliminating overhead in agents
 is something I've been working on as well.



 One of the most common is the heartbeat from each agent. However, I don't
 think we can eliminate them, because they are used to determine if the
 agents are still alive for scheduling purposes. Did you have something else
 in mind to determine if an agent is alive?



 On Fri, Apr 10, 2015 at 2:18 AM, Attila Fazekas afaze...@redhat.com
 wrote:

 I'm 99.9% sure, for scaling above 100k managed nodes,
 we do not really need to split the openstack into multiple smaller openstacks,
 or use a significant number of extra controller machines.

 The problem is openstack is using the right tools, SQL/AMQP/(zk),
 but in the wrong way.

 For example.:
 Periodic updates can be avoided almost in all cases

 The new data can be pushed to the agent just when it is needed.
 The agent can know when the AMQP connection becomes unreliable (queue or
 connection loss),
 and needs to do a full sync.
 https://bugs.launchpad.net/neutron/+bug/1438159

 Also, when the agents get some notification, they start asking for details
 via the
 AMQP - SQL. Why do they not know it already, or get it with the
 notification?


 - Original Message -
  From: Neil Jerram neil.jer...@metaswitch.com

  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Thursday, April 9, 2015 5:01:45 PM
  Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
 
  Hi Joe,
 
  Many thanks for your reply!
 
  On 09/04/15 03:34, joehuang wrote:
   Hi, Neil,
  
From theoretic, Neutron is like a broadcast domain, for example,
enforcement of DVR and security group has to touch each regarding host
where there is VM of this project resides. Even using SDN controller,
 the
touch to regarding host is inevitable. If there are plenty of
 physical
hosts, for example, 10k, inside one Neutron, it's very hard to
 overcome
the broadcast storm issue under concurrent operation, that's the
bottleneck for scalability of Neutron.
 
  I think I understand that in general terms - but can you be more
  specific about the broadcast storm?  Is there one particular message
  exchange that involves broadcasting?  Is it only from the server to
  agents, or are there 'broadcasts' in other directions as well?
 
  (I presume you are talking about control plane messages here, i.e.
  between Neutron components.  Is that right?  Obviously there can also be
  broadcast storm problems in the data plane - but I don't think that's
  what you are talking about here.)
 
   We need layered architecture in Neutron to solve the broadcast domain
   bottleneck of scalability. The test report from OpenStack cascading
 shows
   that through layered architecture Neutron cascading, Neutron can
   supports up to million level ports and 100k level physical hosts. You
 can
   find the report here:
  
 http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers
 
  Many thanks, I will take a look at this.
 
   Neutron cascading also brings extra benefit: One cascading Neutron
 can
   have many cascaded Neutrons, and different cascaded Neutron can
 leverage
   different SDN controller, maybe one is ODL, the other one is
 OpenContrail.
  
   Cascading Neutron---
/ \
   --cascaded Neutron--   --cascaded Neutron-
   |  |
   

Re: [openstack-dev] [neutron][lbaas] adding lbaas core

2015-04-13 Thread Kyle Mestery
On Mon, Apr 13, 2015 at 3:39 PM, Doug Wiegley doug...@parksidesoftware.com
wrote:

 Hi all,

 I'd like to nominate Philip Toohill as a neutron-lbaas core. Good guy, did
 a bunch of work on the ref impl for lbaasv2, and I'll let the
 numbers[1] speak for themselves.

 Existing lbaas cores, please vote.  All three of us.  :-)

 +1


 [1] http://stackalytics.com/report/contribution/neutron-lbaas/30

 Thanks,
 doug



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstackclient] osc slowness

2015-04-13 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2015-04-13 07:15:57 -0400:

 So, under the current model I think we're paying a pretty high strategy
 tax in OSC use in devstack. It's adding minutes of time in a normal run.
 I don't know all the internals of OSC and what can be done to make it
 better. But I think that as a CLI we should be as responsive as
 possible.  1s seems like it should be target for at least all the
 keystone operations. I do think this is one of the places (like
 rootwrap) where load time is something to not ignore.

I *believe* the time is scanning the plugins. It doesn't actually
load them, but it has to look through all of the entry point
registries to find what commands are available. I originally built
cliff (the framework under OSC) this way because I thought we would
put the commands in separate repositories.

Since we aren't doing that for the vast majority of them, we can
change the implementation of cliff to support hard-coded commands
more easily, and to have it only scan the entry points for commands
that aren't in that hard-coded list. We would need to load them all
to generate help output and the tab-completion instructions, but I
think it's OK to take a bit of a penalty in those cases.
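
For anyone curious where that time goes, the scan is essentially this
(the entry-point group name below is illustrative, not the exact one
cliff uses):

    import pkg_resources

    def find_command(name, group='openstack.cli'):
        # iter_entry_points() walks the entry-point metadata of every
        # installed distribution for the given group; with many plugins
        # installed this is what adds up at startup.
        for ep in pkg_resources.iter_entry_points(group):
            if ep.name == name:
                return ep.load()  # import only the matching command class
        raise KeyError(name)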

I plan to work on this during liberty.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas] adding lbaas core

2015-04-13 Thread Doug Wiegley
Hi all,

I'd like to nominate Philip Toohill as a neutron-lbaas core. Good guy, did a 
bunch of work on the ref impl for lbaasv2, and I'll let the numbers[1] 
speak for themselves.

Existing lbaas cores, please vote.  All three of us.  :-)

[1] http://stackalytics.com/report/contribution/neutron-lbaas/30

Thanks,
doug



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] adding lbaas core

2015-04-13 Thread Brandon Logan
+1


From: Kyle Mestery mest...@mestery.com
Sent: Monday, April 13, 2015 3:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] adding lbaas core

On Mon, Apr 13, 2015 at 3:39 PM, Doug Wiegley 
doug...@parksidesoftware.commailto:doug...@parksidesoftware.com wrote:
Hi all,

I'd like to nominate Philip Toohill as a neutron-lbaas core. Good guy, did a 
bunch of work on the ref impl for lbaasv2, and I'll let the numbers[1] 
speak for themselves.

Existing lbaas cores, please vote.  All three of us.  :-)

+1

[1] http://stackalytics.com/report/contribution/neutron-lbaas/30

Thanks,
doug



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [releases] oslo.db 1.8.0 (liberty)

2015-04-13 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2015-04-13 15:28:22 -0400:
 We are excited to announce the release of:
 
 oslo.db 1.8.0: Oslo Database library

The upload job failed for the sdist so some installations using older
versions of pip may have also failed. Users with newer versions of pip
should have seen the wheel and been able to use it, so probably didn't
notice any issues at all.

fungi ran the upload by hand and the file is available on PyPI and our
mirrors now.

Doug


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday April 14th at 19:00 UTC

2015-04-13 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday April 14th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

In case you missed it or would like a refresher, meeting logs and
minutes from our last meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-04-07-19.03.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-04-07-19.03.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-04-07-19.03.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][barbican] default certificate manager

2015-04-13 Thread Brandon Logan
I'm of the opinion, which may not be the popular opinion, that barbican is the 
secret store for openstack.  It is in openstack, it is meant to be used by 
other openstack services.  v1 lives in the same code base as v2.  Version 
transitions such as these are going to end up having requirements only for one 
version.  I don't think that is a bad thing as v1 will eventually be 
deprecated.  I am not, however, a packager so I do not know the pains you have 
nor the perspective.  Sounds like you are okay with leaving it in, which is my 
preference, but I can obviously be swayed.

Thanks,
Brandon

From: Ihar Hrachyshka ihrac...@redhat.com
Sent: Monday, April 13, 2015 9:40 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron][lbaas][barbican] default certificate 
manager

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 04/10/2015 09:18 PM, Brandon Logan wrote:
 Hi Ihar, I'm not against the lazy loading solution, just wondering
 what the real issue is here.  Is your problem with this that
 python-barbicanclient needs to be in the requirements.txt?  Or is
 the problem that v1 will import it even though it isn't used?


I package neutron for RDO, so I use requirements.txt as a suggestion.
My main problem was that python-barbicanclient was not packaged for
RDO when I started looking into the issue, but now that it's packaged
in Fedora [1], the issue is not that significant to me. Of course it's
a wasted dependency installed for nothing (plus its own dependencies),
but that's not a disaster, and if upstream team thinks it's the right
thing to do, let it be so, and I'm happy to abandon the change.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1208454

/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBAgAGBQJVK9VFAAoJEC5aWaUY1u57dCkH/R73ECDlHVl2ocBWfTk4BEqi
R8j/wpCCSz3x9uffWR9F8mJoqEnvekIvTtoaHaleiVfZTAhGRDRoxT7nOuMBFBDp
ynmeJEicualeiAFX1z6//KA4L6y5hqGaV71axCRmAT/c0P5fuK08WIMBOkzQRyuo
JmJbej5pOOlDRos0+PJd2+7qxAVU2CAuVBrJIVsJoG4zuISNDalxeOIaYKHU0+Tu
/r7bztTrjkbcs6jiHrvv8MugsivrV1hGEBDsIVgC/Fsgy19f0X2aEjbh7G6lioab
Vm6G+fDCFJVVQ6Xbc9qQPs1geRrocVAb7ZGeuhT/RdoMFTxBR8EJnPqWHXkYWuA=
=O4Ll
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Consistent variable documentation for diskimage-builder elements

2015-04-13 Thread Clint Byrum
Excerpts from Dan Prince's message of 2015-04-13 14:07:28 -0700:
 On Tue, 2015-04-07 at 21:06 +, Gregory Haynes wrote:
  Hello,
  
  I'd like to propose a standard for consistently documenting our
  diskimage-builder elements. I have pushed a review which transforms the
  apt-sources element to this format[1][2]. Essentially, I'd like to move
  in the direction of making all our element README.rst's contain a sub
  section called Environment Variables with a Definition List[3] where
  each entry is the environment variable. Under that environment variable
  we will have a field list[4] with Required, Default, Description, and
  optionally Example.
  
  The goal here is that rather than users being presented with a wall of
  text that they need to dig through to remember the name of a variable,
  there is a quick way for them to get the information they need. It also
  should help us to remember to document the vital bits of information for
  each variable we use.
  
  Thoughts?
 
 I like the direction of the cleanup. +2
 
 I do wonder how we'll enforce consistency in making sure future changes
 adhere to the new format. It would be nice to have a CI check on these
 things so people don't constantly need to debate the correct syntax,
 etc.

I agree Dan, which is why I'd like to make sure these are machine
readable and consistent. I think it would actually make sense to make
our argument isolation efforts utilize this format, as that would make
sure that these are consistent with the code as well.
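
For anyone who hasn't opened the review, a rough sketch of the proposed
layout (the variable name and values below are invented):

    Environment Variables
    ---------------------

    DIB_EXAMPLE_MIRROR
      :Required: No
      :Default: http://archive.ubuntu.com/ubuntu
      :Description: Package mirror to use while building the image.
      :Example: DIB_EXAMPLE_MIRROR=http://mirror.example.com/ubuntu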

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][barbican] default certificate manager

2015-04-13 Thread Doug Wiegley

 On Apr 13, 2015, at 3:38 PM, Brandon Logan brandon.lo...@rackspace.com 
 wrote:
 
 I'm of the opinion, which may not be the popular opinion, that barbican is 
 the secret store for openstack.  It is in openstack, it is meant to be used 
 by other openstack services.  v1 lives in the same code base as v2.  Version 
 transitions such as these are going to end up having requirements only for 
 one version.  I don't think that is a bad thing as v1 will eventually 
 be deprecated.  I am not, however, a packager so I do not know the pains you 
 have nor the perspective.  Sounds like you are okay with leaving it in, which 
 is my preference, but I can obviously be swayed.

And by eventually, I believe Brandon meant Liberty.

Thanks,
doug

 
 Thanks,
 Brandon
 
 From: Ihar Hrachyshka ihrac...@redhat.com
 Sent: Monday, April 13, 2015 9:40 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron][lbaas][barbican] default certificate 
 manager
 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 On 04/10/2015 09:18 PM, Brandon Logan wrote:
 Hi Ihar, I'm not against the lazy loading solution, just wondering
 what the real issue is here.  Is your problem with this that
 python-barbicanclient needs to be in the requirements.txt?  Or is
 the problem that v1 will import it even though it isn't used?
 
 
 I package neutron for RDO, so I use requirements.txt as a suggestion.
 My main problem was that python-barbicanclient was not packaged for
 RDO when I started looking into the issue, but now that it's packaged
 in Fedora [1], the issue is not that significant to me. Of course it's
 a wasted dependency installed for nothing (plus its own dependencies),
 but that's not a disaster, and if upstream team thinks it's the right
 thing to do, let it be so, and I'm happy to abandon the change.
 
 [1]: https://bugzilla.redhat.com/show_bug.cgi?id=1208454
 
 /Ihar
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v2
 
 iQEcBAEBAgAGBQJVK9VFAAoJEC5aWaUY1u57dCkH/R73ECDlHVl2ocBWfTk4BEqi
 R8j/wpCCSz3x9uffWR9F8mJoqEnvekIvTtoaHaleiVfZTAhGRDRoxT7nOuMBFBDp
 ynmeJEicualeiAFX1z6//KA4L6y5hqGaV71axCRmAT/c0P5fuK08WIMBOkzQRyuo
 JmJbej5pOOlDRos0+PJd2+7qxAVU2CAuVBrJIVsJoG4zuISNDalxeOIaYKHU0+Tu
 /r7bztTrjkbcs6jiHrvv8MugsivrV1hGEBDsIVgC/Fsgy19f0X2aEjbh7G6lioab
 Vm6G+fDCFJVVQ6Xbc9qQPs1geRrocVAb7ZGeuhT/RdoMFTxBR8EJnPqWHXkYWuA=
 =O4Ll
 -END PGP SIGNATURE-
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-13 Thread Robert Collins
On 13 April 2015 at 22:04, Thierry Carrez thie...@openstack.org wrote:
 This observation led to yet more IRC discussion and eventually
 https://etherpad.openstack.org/p/stable-omg-deps

 In short, the proposal is that we:
  - stop trying to use install_requires to reproduce exactly what
 works, and instead use it to communicate known constraints (> X, Y is
 broken etc).
  - use a requirements.txt file we create *during* CI to capture
 exactly what worked, and also capture the dpkg and rpm versions of
 packages that were present when it worked, and so on. So we'll build a
 git tree where its history is an audit trail of exactly what worked
 for everything that passed CI, formatted to make it really really easy
 for other people to consume.

 I totally agree that we need to stop trying to provide two different
 sets of dependency information (known good deps, known bad deps) using
 the same dataset.

 If I understand you correctly, today we provide a requirements.txt and
 generate an install_requires from it, and in the new world order we
 would provide an install_requires with known-bad info in it and
 generate a known-good requirements.txt (during CI) from it.

Yes, with two clarifying points: the known-good has to live in a
different repo from the project, because we only discover that during
CI, after the commits have been made. Secondly, the install_requires
will be delivered via setup.cfg in the project tree.
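
To make the two artifacts concrete, roughly (package name and versions
below are invented):

    # setup.cfg install_requires: known-bad constraints only
    oslo.config>=1.9.0,!=1.9.2   # 1.9.2 hypothetically broken

    # known-good list, generated during CI and kept in a separate repo
    oslo.config==1.9.3
    # plus the dpkg/rpm versions present when this passed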

 Questions:

 How would global-requirements evolve in that picture ? Would we have
 some global-install-requires thing to replace it ?

I think global-requirements today is (by necessity) mostly known-bad,
and doesn't need to change much. It needs to learn how to reference
setup.cfg metadata as well/rather than {test-}requirements{-pyNN}.txt.
There's a separate discussion we had a few weeks back about
consolidating the non-install-requires we have into setup.cfg with
appropriate tags, which we'll want to do at the same time.

 Distro packagers today rely on requirements.txt (and global
 -requirements) to determine what version of libraries they need to
 package. Would they just rely on install_requires instead ? Where is
 that information provided ? setup.cfg ?

Yes. Project + global-requirements is a good combination. They might
want to reference the known-good exact lists as an additional data
source.

 How does this proposal affect stable branches ? In order to keep the
 breakage there under control, we now have stable branches for all the
 OpenStack libraries and cap accordingly[1]. We planned to cap all other
 libraries to the version that was there when the stable branch was
 cut.  Where would we do those cappings in the new world order ? In
 install_requires ? Or should we not do that anymore ?

 [1]
 http://specs.openstack.org/openstack/openstack-specs/specs/library-stable-branches.html

I don't think there's a hard and fast answer here. Whats proposed
there should work fine.

On the one hand, semver tells us when *a* backwards compat break
happens, but it doesn't tell us if *that break affects user X*. For
instance, the general migration pattern we expect is:
 - introduce new API  V>=2.3.0
 - migrate all our users V~=2.3
 - deprecate old API V~=2.3
 - gc deprecated code at some future date V>=3.0

In fact, I'd say we're hoping to never have a supported release broken
by that process... so capping just creates artificial version
conflicts which we have to resolve by issuing updates to say that
actually the new major version is still compatible with this new
release...

OTOH there will eventually be releases of our libraries that do break
prior releases of our servers/clients - and when that happens capped
requirements will actually be useful, but only to people running
unsupported releases :).

OTGH if we do deliberately break supported releases in our libraries,
then the capping process is absolutely essential.

Personally, I'd be more worried about the versions of our dependencies
that *aren't* coordinated with our projects, because if they aren't
capped, (and they're doing semver) we're less likely to find out the
easy way (in advance :)) about issues.

But that then swings back around to known good vs known bad. One way
of looking at that is that safe capping requires several items of
data:
- what version to use with ~= - I'm not sure that using the exact
version we got is correct. e.g. with semver, if 1.2.3 is known-good,
we should use ~=1.2 (i.e. >=1.2, ==1.*), but with date based its
harder to predict what will indicate a breaking version :). And of
course for non-semver, 1.2.3 doesn't tell us whether 1.3 will be
breaking, or even 1.2.4.
- a known good version to base our cap on

If we generated the first item and stored it somewhere, then when we
generate known-good == lists from CI, we could also generate a
known-good capped list, (e.g. transforming 1.2.3 to ~=1.2 for semver
projects). We could in principle add that to our tarball releases of
projects, even though we can't 

Re: [openstack-dev] Re: [neutron] Neutron scaling datapoints?

2015-04-13 Thread Joshua Harlow

Did the following get addressed?

https://aphyr.com/posts/316-call-me-maybe-etcd-and-consul

Seems like quite a few things got raised in that post about etcd/consul.

Maybe they are fixed, idk...

https://aphyr.com/posts/291-call-me-maybe-zookeeper though worked as 
expected (and without issue)...


I quote:

'''
Recommendations

Use Zookeeper. It’s mature, well-designed, and battle-tested. Because 
the consequences of its connection model and linearizability properties 
are subtle, you should, wherever possible, take advantage of tested 
recipes and client libraries like Curator, which do their best to 
correctly handle the complex state transitions associated with session 
and connection loss.

'''

Daniel Comnea wrote:

My $2 cents:

I like the 3rd party backend however instead of ZK wouldn't Consul [1]
fit better due to lighter/ out of box multi DC awareness?

Dani

[1] Consul - https://www.consul.io/


On Mon, Apr 13, 2015 at 9:51 AM, Wangbibo wangb...@huawei.com
mailto:wangb...@huawei.com wrote:

Hi Kevin,


Totally agree with you that heartbeat from each agent is something
that we cannot eliminate currently. Agent status depends on it, and
further scheduler and HA depends on agent status.


I proposed a Liberty spec for introducing open framework/pluggable
agent status drivers.[1][2]  It allows us to use some other 3rd
party backend to monitor agent status, such as zookeeper, memcached.
Meanwhile, it guarantees backward compatibility so that users could
still use db-based status monitoring mechanism as their default
choice.


Based on that, we may do further optimization on the issues Attila and
you mentioned. Thanks.


[1] BP  -

https://blueprints.launchpad.net/neutron/+spec/agent-group-and-status-drivers

[2] Liberty Spec proposed - https://review.openstack.org/#/c/168921/


Best,

Robin





*From:* Kevin Benton [mailto:blak...@gmail.com
mailto:blak...@gmail.com]
*Sent:* 11 April 2015 12:35
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [neutron] Neutron scaling datapoints?


Which periodic updates did you have in mind to eliminate? One of the
few remaining ones I can think of is sync_routers but it would be
great if you can enumerate the ones you observed because eliminating
overhead in agents is something I've been working on as well.


One of the most common is the heartbeat from each agent. However, I
don't think we can eliminate them, because they are used to
determine if the agents are still alive for scheduling purposes. Did
you have something else in mind to determine if an agent is alive?


On Fri, Apr 10, 2015 at 2:18 AM, Attila Fazekas afaze...@redhat.com
mailto:afaze...@redhat.com wrote:

I'm 99.9% sure, for scaling above 100k managed node,
we do not really need to split the openstack to multiple smaller
openstack,
or use significant number of extra controller machine.

The problem is openstack using the right tools SQL/AMQP/(zk),
but in a wrong way.

For example.:
Periodic updates can be avoided almost in all cases

The new data can be pushed to the agent just when it needed.
The agent can know when the AMQP connection become unreliable (queue
or connection loose),
and needs to do full sync.
https://bugs.launchpad.net/neutron/+bug/1438159

Also the agents when gets some notification, they start asking for
details via the
AMQP - SQL. Why they do not know it already or get it with the
notification ?


- Original Message -
  From: Neil Jerram neil.jer...@metaswitch.com
mailto:neil.jer...@metaswitch.com

  To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
mailto:openstack-dev@lists.openstack.org
  Sent: Thursday, April 9, 2015 5:01:45 PM
  Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?

  Hi Joe,

  Many thanks for your reply!

  On 09/04/15 03:34, joehuang wrote:
   Hi, Neil,
  
From theoretic, Neutron is like a broadcast domain, for example,
enforcement of DVR and security group has to touch each
regarding host
where there is VM of this project resides. Even using SDN
controller, the
   touch to regarding host is inevitable. If there are plenty of
physical
hosts, for example, 10k, inside one Neutron, it's very hard to
overcome
the broadcast storm issue under concurrent operation, that's the
bottleneck for scalability of Neutron.

  I think I understand that in general terms - but can you be more
  specific about the broadcast storm?  Is there one particular message
 

Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-13 Thread Matthew Thode
I think what we are trying to do is two separate things.

One is to define the dependencies that packagers use.  This would likely
be minimum versions with caps that are known to fail (not assumed).

The second is to define a set of verifiably known working deps.  This
would likely need an update mechanism so it doesn't stagnate.

What could then be done is to merge the two sources by default or
packagers can just remove the auto-capped file.  Separating it out
allows us to keep track of known bad versions as well.  If I see a cap
on a lib at this point I assume it was a protectionist measure, not
because it was actually a bug or something.

As it is now, it is almost impossible to package 2014.2.3 because at
least my distro has removed a bunch of old libraries that were not
needed because the caps were not there.

For kilo packaging will likely be fine because we can lock the deps from
the start so versions we need are not removed.  Our distro at least
allows the package manager to choose which version of a package to
install to meet the requirements.

On 04/13/2015 05:04 PM, Robert Collins wrote:
 On 13 April 2015 at 22:04, Thierry Carrez thie...@openstack.org wrote:
 This observation led to yet more IRC discussion and eventually
 https://etherpad.openstack.org/p/stable-omg-deps

 In short, the proposal is that we:
  - stop trying to use install_requires to reproduce exactly what
 works, and instead use it to communicate known constraints (> X, Y is
 broken etc).
  - use a requirements.txt file we create *during* CI to capture
 exactly what worked, and also capture the dpkg and rpm versions of
 packages that were present when it worked, and so on. So we'll build a
 git tree where its history is an audit trail of exactly what worked
 for everything that passed CI, formatted to make it really really easy
 for other people to consume.

 I totally agree that we need to stop trying to provide two different
 sets of dependency information (known good deps, known bad deps) using
 the same dataset.

 If I understand you correctly, today we provide a requirements.txt and
 generate an install_requires from it, and in the new world order we
 would provide an install_requires with known-bad info in it and
 generate a known-good requirements.txt (during CI) from it.
 
 Yes, with two clarifying points: the known-good has to live in a
 different repo from the project, because we only discover that during
 CI, after the commits have been made. Secondly, the install_requires
 will be delivered via setup.cfg in the project tree.
 
 Questions:

 How would global-requirements evolve in that picture ? Would we have
 some global-install-requires thing to replace it ?
 
 I think global-requirements today is (by necessity) mostly known-bad,
 and doesn't need to change much. It needs to learn how to reference
 setup.cfg metadata as well/rather than {test-}requirements{-pyNN}.txt.
 There's a separate discussion we had a few weeks back about
 consolidating the non-install-requires we have into setup.cfg with
 appropriate tags, which we'll want to do at the same time.
 
 Distro packagers today rely on requirements.txt (and global
 -requirements) to determine what version of libraries they need to
 package. Would they just rely on install_requires instead ? Where is
 that information provided ? setup.cfg ?
 
 Yes. Project + global-requirements is a good combination. They might
 want to reference the known-good exact lists as an additional data
 source.
 
 How does this proposal affect stable branches ? In order to keep the
 breakage there under control, we now have stable branches for all the
 OpenStack libraries and cap accordingly[1]. We planned to cap all other
 libraries to the version that was there when the stable branch was
 cut.  Where would we do those cappings in the new world order ? In
 install_requires ? Or should we not do that anymore ?

 [1]
 http://specs.openstack.org/openstack/openstack-specs/specs/library-stable-branches.html
 
 I don't think there's a hard and fast answer here. Whats proposed
 there should work fine.
 
 On the one hand, semver tells us when *a* backwards compat break
 happens, but it doesn't tell us if *that break affects user X*. For
 instance, the general migration pattern we expect is:
  - introduce new API  V>=2.3.0
  - migrate all our users V~=2.3
  - deprecate old API V~=2.3
  - gc deprecated code at some future date V>=3.0
 
 In fact, I'd say we're hoping to never have a supported release broken
 by that process... so capping just creates artificial version
 conflicts which we have to resolve by issuing updates to say that
 actually the new major version is still compatible with this new
 release...
 
 OTOH there will eventually be releases of our libraries that do break
 prior releases of our servers/clients - and when that happens capped
 requirements will actually be useful, but only to people running
 unsupported releases :).
 
 OTGH if we do deliberately break supported releases 

[openstack-dev] etherpad.openstack.org upgraded

2015-04-13 Thread Clark Boylan
Just letting everyone know I just upgraded etherpad.openstack.org to the
latest etherpad-lite version to address CVE-2015-3297.

If you see any javascript load errors you may need to do a hard refresh
of your etherpads (sorry about this, I will have to figure out a way to
invalidate cached js automagically for you next time).

This should give us plenty of time to burn the server in before next
months summit. If you do notice any performance weirdness or other bugs
please do let us know so we can get them fixed prior to the summit.

Thank you for your patience,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [doc] what's happened to api documents?

2015-04-13 Thread henry hly
Thanks a lot, henry :)

On Mon, Apr 13, 2015 at 6:57 PM, Henry Gessau ges...@cisco.com wrote:
 On Mon, Apr 13, 2015, henry hly henry4...@gmail.com wrote:
 http://developer.openstack.org/api-ref-networking-v2.html

 The above api document seems lost most of the content, leaving only
 port, network, subnet?

 In the navigation bar on the left there is a link to the rest of the Neutron
 API, which is implemented as extensions:
 http://developer.openstack.org/api-ref-networking-v2-ext.html


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstackclient] osc slowness

2015-04-13 Thread Dean Troyer
On Mon, Apr 13, 2015 at 5:20 PM, Doug Hellmann d...@doughellmann.com
wrote:

 Excerpts from Sean Dague's message of 2015-04-13 07:15:57 -0400:
 I *believe* the time is scanning the plugins. It doesn't actually
 load them, but it has to look through all of the entry point
 registries to find what commands are available. I originally built
 cliff (the framework under OSC) this way because I thought we would
 put the commands in separate repositories.


FWIW, as things grow, commands for projects outside layers 1 & some of 2 are
in external repos.  Some of the time here is due to doing all imports up
front rather than as required; I've proposed
https://review.openstack.org/173098 as the first step to fix this.


 Since we aren't doing that for the vast majority of them, we can
 change the implementation of cliff to support hard-coded commands
 more easily, and to have it only scan the entry points for commands
 that aren't in that hard-coded list. We would need to load them all
 to generate help output and the tab-completion instructions, but I
 think it's OK to take a bit of a penalty in those cases.


If we do the above the entry point scan is maybe two orders of magnitude
faster without the forced imports.

I am also working on at least one deferred import inside cliff itself, cmd2
is pokey...
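
The deferral itself is mechanical; a sketch of the pattern (not the
actual cliff patch):

    # A module-level 'import cmd2' is paid on every CLI invocation,
    # even for non-interactive commands. Moving it into the only code
    # path that needs it defers the cost:
    class InteractiveApp(object):
        def interact(self):
            import cmd2  # deferred: only imported when the shell starts
            shell = cmd2.Cmd()
            shell.cmdloop()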

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo][clients] Let's speed up start of OpenStack libs and clients by optimizing imports with profimp

2015-04-13 Thread Robert Collins
On 9 April 2015 at 00:59, Doug Hellmann d...@doughellmann.com wrote:

  Another data point on how slow our libraries/CLIs can be:
 
  $ time openstack -h
  snip
  real0m2.491s
  user0m2.378s
  sys 0m0.111s


 pbr should be snappy - taking 100ms to get the version is wrong.

 I have always considered pbr a packaging/installation time tool, and not
 something that would be used at runtime. Why are we using pbr to get the
 version of an installed package, instead of asking pkg_resources?

Why do you make that sound like an either-or?

pbr *does* ask pkg_resources.
And if the thing isn't installed, we have to figure the version out ourselves.

We can either have that if-then-else code in one place, e.g. pbr, or
we can have it in many places, and suffer code copyitis.
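
For reference, the two lookups side by side (a sketch; 'python-novaclient'
is just an example package name):

    import pkg_resources
    from pbr import version

    # Works only when the package is actually installed:
    v1 = pkg_resources.get_distribution('python-novaclient').version

    # pbr wraps the same pkg_resources lookup and, per the above, also
    # handles the editable / not-installed case:
    v2 = version.VersionInfo('python-novaclient').version_string()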

The 100ms issue above is either:
 - the package isn't installed, so we're falling back to complex code
 - a bug.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo][clients] Let's speed up start of OpenStack libs and clients by optimizing imports with profimp

2015-04-13 Thread Robert Collins
On 9 April 2015 at 01:12, Flavio Percoco fla...@redhat.com wrote:
 On 08/04/15 08:59 -0400, Doug Hellmann wrote:

 I have always considered pbr a packaging/installation time tool, and not
 something that would be used at runtime. Why are we using pbr to get the
 version of an installed package, instead of asking pkg_resources?


 Just wanted to +1 the above.

 I've also considered pbr a packaging/install tool. Furthermore, I
 believe having it as a runtime requirement makes packagers life more
 complicated because that means pbr will obviously need to be added as
 a runtime requirement for that package.

pbr is a consolidation of a bunch of packaging / build related things
we had as copy-paste in the past. Some of those are purely build time,
others, like having a version present for editable or not installed
packages, is not.

If we want to make a hard separation, and have a pbr_runtime separate
package, we can definitely do that.

But there should be utterly no difficulty in having pbr packaged in
distros - its packaged in Ubuntu, for instance.

Also we've been adding features to make it more aligned with distro
needs - I'd love it if the conversation focused on what that needs,
rather than 'ripping it out' - since I really loathe the copy-paste
hell that I fear that will lead to.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron scaling datapoints?

2015-04-13 Thread joehuang
Tooz provides a mechanism for grouping agents and for agent status/liveness 
management, but multiple coordinator services may be required in a large-scale 
deployment, especially at the 100k-node level. We can't assume that only one 
coordinator service is enough to manage all nodes, which means tooz may need 
to support multiple coordination backends.

And Nova already supports several segregation concepts, for example Cells, 
Availability Zones, Host Aggregates. Where will the coordination backend 
reside? How do we group agents? It's weird to put the coordinator in 
availability zone (AZ) 1 but all managed agents in AZ 2: if AZ 1 is powered 
off, then all agents in AZ 2 lose management. Do we need a segregation concept 
for agents, or reuse the Nova concepts, or build a mapping between them? 
Especially if multiple coordination backends will work under one Neutron.
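
For readers who haven't looked at tooz, the liveness/grouping part is roughly
this (the backend URL and group name are illustrative, and how to partition
agents across several such coordinators is exactly the open question above):

    from tooz import coordination

    coord = coordination.get_coordinator('kazoo://127.0.0.1:2181',
                                         b'l3-agent-on-host-1')
    coord.start()
    try:
        coord.create_group(b'neutron-l3-agents').get()
    except coordination.GroupAlreadyExist:
        pass
    coord.join_group(b'neutron-l3-agents').get()
    # Called periodically by the agent; the backend expires members that
    # stop heartbeating, which is the liveness signal.
    coord.heartbeat()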

Best Regards
Chaoyi Huang ( Joe Huang )

-Original Message-
From: Joshua Harlow [mailto:harlo...@outlook.com] 
Sent: Monday, April 13, 2015 11:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?

joehuang wrote:
 Hi, Kevin and Joshua,

 As my understanding, Tooz only addresses the issue of agent status 
 management, but how to solve the concurrent dynamic load impact on 
 large scale ( for example 100k managed nodes with the dynamic load 
 like security goup rule update, routers_updated, etc )

Yes, that is correct, let's not confuse status/liveness management with 
updates... since IMHO they are two very different things (the latter can be 
eventually consistent while the liveness 'question' probably should not 
be...).


 And one more question is, if we have 100k managed nodes, how to do the 
 partition? Or all nodes will be managed by one Tooz service, like 
 Zookeeper? Can Zookeeper manage 100k nodes status?

I can get u some data/numbers from some studies I've seen, but what u are 
talking about is highly specific as to what u are doing with zookeeper... There 
is no one solution for all the things IMHO; choose what's best from your 
tool-belt for each problem...


 Best Regards

 Chaoyi Huang ( Joe Huang )

 *From:*Kevin Benton [mailto:blak...@gmail.com]
 *Sent:* Monday, April 13, 2015 3:52 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] Neutron scaling datapoints?

Timestamps are just one way (and likely the most primitive), using 
redis
 (or memcache) key/value and expiry are another (and letting memcache 
 or redis expire using its own internal algorithms), using zookeeper 
 ephemeral nodes[1] are another... The point being that its backend 
 specific and tooz supports varying backends.

 Very cool. Is the backend completely transparent so a deployer could 
 choose a service they are comfortable maintaining, or will that change 
 the properties WRT to resiliency of state on node restarts, partitions, etc?

 The Nova implementation of Tooz seemed pretty straight-forward, 
 although it looked like it had pluggable drivers for service management 
 already.
 Before I dig into it much further I'll file a spec on the Neutron side 
 to see if I can get some other cores onboard to do the review work if 
 I push a change to tooz.

 On Sun, Apr 12, 2015 at 9:38 AM, Joshua Harlow harlo...@outlook.com 
 mailto:harlo...@outlook.com wrote:

 Kevin Benton wrote:

 So IIUC tooz would be handling the liveness detection for the agents.
 That would be nice to get ride of that logic in Neutron and just 
 register callbacks for rescheduling the dead.

 Where does it store that state, does it persist timestamps to the DB 
 like Neutron does? If so, how would that scale better? If not, who 
 does a given node ask to know if an agent is online or offline when 
 making a scheduling decision?


 Timestamps are just one way (and likely the most primitive), using 
 redis (or memcache) key/value and expiry are another (and letting 
 memcache or redis expire using its own internal algorithms), using 
 zookeeper ephemeral nodes[1] are another... The point being that its 
 backend specific and tooz supports varying backends.


 However, before (what I assume is) the large code change to implement 
 tooz, I would like to quantify that the heartbeats are actually a 
 bottleneck. When I was doing some profiling of them on the master 
 branch a few months ago, processing a heartbeat took an order of 
 magnitude less time (50ms) than the 'sync routers' task of the l3 
 agent (~300ms). A few query optimizations might buy us a lot more 
 headroom before we have to fall back to large refactors.


 Sure, always good to avoid prematurely optimizing things...

 Although this is relevant for u I think anyway:

 https://review.openstack.org/#/c/138607/ (same thing/nearly same in nova)...

 https://review.openstack.org/#/c/172502/ (a WIP implementation of the 
 latter).

 [1]
 

Re: [openstack-dev] [neutron][L3] IPAM alternate refactoring

2015-04-13 Thread Kevin Benton
The thing is, is that you *should* be able to call core_plugin.create_port
in a transaction.

Well it depends on what you mean by that. If you mean create_port should be
part of the same transaction, I disagree because it leads to either
inconsistency or a loss of veto power for drivers with external backends.

With the current code, if you enclose create_port in a transaction and then
have a failure in the parent transaction after the port is created, the DB
creation will be rolled back but nothing will inform the backend to release
the resources it allocated for the port.

If we switch to a notification system like you described where
notifications are deferred until after create_port is complete, we just end
up removing the ability for backends to block a create_port call if
necessary. That's a pretty significant change because callers will think
they have successfully created a port when not all of the relevant systems
have confirmed it.

This is going to become even more pronounced if procedures to allocate IP
addresses and whatever else for the port result in calls to external
servers.

In the hack you showed, wouldn't it be easier to just have a way to
register extra DB operations to be performed on port_create? Something like
a run-time defined mechanism driver with only a create port pre-commit
method.


We're still left with questions such as: What happens if I commit a
mega-transaction and then all (Or even more complicated, one) of the
notifications fails, but this isn't a new problem.

This is why I think we shouldn't just rely on the DB to make
mega-transactions. It doesn't really work with us calling out to other
systems. We need a more generic system to manage flows of tasks that each
have rollback mechanisms so the semantics of rolling back large operations are
handled in a database independent manner. If only such a system existed. ;-)
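
(Presumably a nod to taskflow, which Pavel also raised. A toy sketch of what
such a flow could look like for port creation - ipam_driver and do_db_create
are invented placeholders, while the taskflow usage itself is the standard
pattern:)

    import taskflow.engines
    from taskflow.patterns import linear_flow
    from taskflow import task

    class AllocateIp(task.Task):
        default_provides = 'allocation'

        def execute(self, port_data):
            # External system call, done outside any DB transaction.
            return ipam_driver.allocate(port_data)

        def revert(self, result, **kwargs):
            # Runs automatically when a later task in the flow fails.
            # (Real code would also check that 'result' is not a failure.)
            ipam_driver.deallocate(result)

    class CreatePortDb(task.Task):
        def execute(self, port_data, allocation):
            # The DB transaction lives here, after the external call.
            return do_db_create(port_data, allocation)

    flow = linear_flow.Flow('create-port').add(AllocateIp(), CreatePortDb())
    taskflow.engines.run(flow, store={'port_data': {'name': 'example'}})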

On Mon, Apr 13, 2015 at 3:50 PM, Assaf Muller amul...@redhat.com wrote:



 - Original Message -
  I think removing all occurrences of create_port inside of another
 transaction
  is something we should be doing for a couple of reasons.

 The issues you're pointing out are very much real. It's a *huge* pain to
 workaround
 this issue and you can look for an example here:

 https://github.com/openstack/neutron/blob/master/neutron/db/l3_hamode_db.py#L303

 The thing is, is that you *should* be able to call core_plugin.create_port
 in a
 transaction. I think that the correct thing to do is to eliminate the
 issue with
 create_port, and not work around the issue with awful patterns such as the
 one
 in the link above. There's a few different acute issues with that pattern:
 1) We have no automated way to tell if create_port is being called in a
 transaction
or not, currently it's left up to reviewers to spot such occurrences
 and prevent
them from being merged.
 2) The mental load it adds to read that code is not trivial.
 3) Transactions are awesome... I'd very much like to group up
 core_plugin.create_port
and create_ha_port_binding in a single transaction and avoid having to
 deal with
edge cases manually.
 4) Sometimes you can't use the try/except/manual cleanup approach (If you
 delete a resource
in transaction A, then transaction B fails, good luck re-creating the
 resource you already
deleted).

 The better long term approach would be to introduce a framework at the API
 layer that queues
 up notifications (Both HTTP to vendor servers and RPC to agents) at the
 start of an API or RPC call.
 You're then free to use a single huge transaction (Fun!), and finally all
 queued up notifications
 will be sent for you automagically. That's the simplest approach, I
 haven't thought this through
 and I'm sure there will be issues but it should be possible. We're still
 left with questions such
 as: What happens if I commit a mega-transaction and then all (Or even more
 complicated, one) of
 the notifications fails, but this isn't a new problem.

 
  First, it's a recipe for the cherished lock wait timeout deadlocks
 because
  create_port makes yielding calls. These are awful to troubleshoot and are
  pretty annoying for users (request takes ~60 seconds and then blows up).
 
  Second, create_port in ML2 expects the transaction to be committed to
 the DB
  by the time it's done with pre-commit phase, which we break by opening a
  parent transaction before calling it so the failure handling semantics
 may
  be messed up.
 
 
 
  On Mon, Apr 13, 2015 at 9:48 AM, Carl Baldwin  c...@ecbaldwin.net 
 wrote:
 
 
  Have we found the last of them? I wonder. I suppose any higher level
  service like a router that needs to create ports under the hood (under
  the API) will have this problem. The DVR fip namespace creation comes
  to mind. It will create a port to use as the external gateway port
  for that namespace. This could spring up in the context of another
  create_port, I think (VM gets new port bound to a compute host where a
  fip 

Re: [openstack-dev] [all] Problems with keystoneclient stable branch (and maybe yours too)

2015-04-13 Thread gordon chung

 2) Incorrect cap in requirements.txt

 python-keystoneclient in stable/juno was capped at <=1.1.0, and 1.1.0 is
 the version tagged for the stable branch. When you create a review in
 stable/juno it installs python-keystoneclient and now the system has got a
 version like 1.1.0.post1, which is >1.1.0, so now python-keystoneclient
 doesn't match the requirements and swift-proxy fails to start (swift-proxy
 is very good at catching this problem for whatever reason). The cap should
 have been <1.2.0 so that we can propose patches and also make fix releases
 (1.1.1, 1.1.2, etc.).[3]

 [3] https://review.openstack.org/#/c/172718/

 Approved.

we have the same issue for ceilometerclient for both icehouse[1] and juno[2], i 
put up requirement patches for each [3][4]

[1] https://review.openstack.org/#/c/173085/
[2] https://review.openstack.org/#/c/173086/
[3] https://review.openstack.org/#/c/173149/
[4] https://review.openstack.org/#/c/173148/



 I tried to recap all of the clients but that didn't pass Jenkins, probably
 because one or more clients didn't use semver correctly and have
 requirements updates in a micro release.[4]

 [4] https://review.openstack.org/#/c/172719/

 Did you literally update them all, or only the ones that looked like
 they might be wrong? It looks like those caps came from the cap.py
 script in the repository, which makes me wonder if we were just too
 aggressive with defining what the cap should be.


don't know about others but full disclosure, we didn't use SEMVER correctly. :\

cheers,
gord


  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3] IPAM alternate refactoring

2015-04-13 Thread Assaf Muller


- Original Message -
 I think removing all occurrences of create_port inside of another transaction
 is something we should be doing for a couple of reasons.

The issues you're pointing out are very much real. It's a *huge* pain to 
workaround
this issue and you can look for an example here:
https://github.com/openstack/neutron/blob/master/neutron/db/l3_hamode_db.py#L303

The thing is, is that you *should* be able to call core_plugin.create_port in a
transaction. I think that the correct thing to do is to eliminate the issue with
create_port, and not work around the issue with awful patterns such as the one
in the link above. There's a few different acute issues with that pattern:
1) We have no automated way to tell if create_port is being called in a 
transaction
   or not, currently it's left up to reviewers to spot such occurrences and 
prevent
   them from being merged.
2) The mental load it adds to read that code is not trivial.
3) Transactions are awesome... I'd very much like to group up 
core_plugin.create_port
   and create_ha_port_binding in a single transaction and avoid having to deal 
with
   edge cases manually.
4) Sometimes you can't use the try/except/manual cleanup approach (If you 
delete a resource
   in transaction A, then transaction B fails, good luck re-creating the 
resource you already
   deleted).

The better long term approach would be to introduce a framework at the API 
layer that queues
up notifications (Both HTTP to vendor servers and RPC to agents) at the start 
of an API or RPC call.
You're then free to use a single huge transaction (Fun!), and finally all 
queued up notifications
will be sent for you automagically. That's the simplest approach, I haven't 
thought this through
and I'm sure there will be issues but it should be possible. We're still left 
with questions such
as: What happens if I commit a mega-transaction and then all (Or even more 
complicated, one) of
the notifications fails, but this isn't a new problem.

 
 First, it's a recipe for the cherished lock wait timeout deadlocks because
 create_port makes yielding calls. These are awful to troubleshoot and are
 pretty annoying for users (request takes ~60 seconds and then blows up).
 
 Second, create_port in ML2 expects the transaction to be committed to the DB
 by the time it's done with pre-commit phase, which we break by opening a
 parent transaction before calling it so the failure handling semantics may
 be messed up.
 
 
 
 On Mon, Apr 13, 2015 at 9:48 AM, Carl Baldwin  c...@ecbaldwin.net  wrote:
 
 
 Have we found the last of them? I wonder. I suppose any higher level
 service like a router that needs to create ports under the hood (under
 the API) will have this problem. The DVR fip namespace creation comes
 to mind. It will create a port to use as the external gateway port
 for that namespace. This could spring up in the context of another
 create_port, I think (VM gets new port bound to a compute host where a
 fip namespace needs to spring in to existence).
 
 Carl
 
 On Mon, Apr 13, 2015 at 10:24 AM, John Belamaric
  jbelama...@infoblox.com  wrote:
  Thanks Pavel. I see an additional case in L3_NAT_dbonly_mixin, where it
  starts the transaction in create_router, then eventually gets to
  create_port:
  
  create_router (starts tx)
  ->self._update_router_gw_info
  ->_create_gw_port
  ->_create_router_gw_port
  ->create_port(plugin)
  
  So that also would need to be unwound.
  
  On 4/13/15, 10:44 AM, Pavel Bondar  pbon...@infoblox.com  wrote:
  
 Hi,
  
 I made some investigation on the topic[1] and see several issues on this
 way.
  
 1. Plugin's create_port() is wrapped up in top level transaction for
 create floating ip case[2], so it becomes more complicated to do IPAM
 calls outside main db transaction.
  
 - for create floating ip case transaction is initialized on
 create_floatingip level:
 create_floatingip(l3_db)->create_port(plugin)->create_port(db_base)
 So IPAM call should be added into create_floatingip to be outside db
 transaction
  
 - for usual port create transaction is initialized on plugin's
 create_port level, and John's change[1] cover this case:
 create_port(plugin)->create_port(db_base)
  
 Create floating ip work-flow involves calling plugin's create_port,
 so IPAM code inside of it should be executed only when it is not wrapped
 into top level transaction.
  
 2. Error handling is still an open question.
 Should we use taskflow to manage IPAM calls to external systems?
 Or is a simple exception-based model enough to handle rollback actions on
 third-party systems in case the main db transaction fails?
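 
 For the second option, a minimal sketch of the exception-based model
 (the ipam_driver and helper names here are hypothetical, not the actual
 Neutron IPAM interface under review):

# Rough sketch of the exception-based rollback model (hypothetical names).
def create_port_with_external_ipam(context, plugin, port_data, ipam_driver):
    # 1. Talk to the external IPAM backend *before* opening the DB transaction.
    allocation = ipam_driver.allocate(port_data)
    try:
        # 2. Do the Neutron DB work in its own transaction.
        with context.session.begin(subtransactions=True):
            port = plugin._create_port_in_db(context, port_data, allocation)
    except Exception:
        # 3. If the DB transaction fails, roll back the external allocation.
        ipam_driver.deallocate(allocation)
        raise
    return port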
  
 [1] https://review.openstack.org/#/c/172443/
 [2] neutron/db/l3_db.py: line 905
  
 Thanks,
 Pavel
  
 On 10.04.2015 21:04, openstack-dev-requ...@lists.openstack.org wrote:
  L3 Team,
  
  I have put up a WIP [1] that provides an approach that shows the ML2
 create_port method refactored to use the IPAM driver prior to initiating
 the database transaction. Details 

Re: [openstack-dev] [qa] official clients and tempest

2015-04-13 Thread Matthew Treinish
On Thu, Apr 09, 2015 at 11:05:10AM +0900, Ken'ichi Ohmichi wrote:
 2015-04-09 4:14 GMT+09:00 Sean Dague s...@dague.net:
  On 04/08/2015 02:58 PM, David Kranz wrote:
  On 04/08/2015 02:36 PM, Matthew Treinish wrote:
  On Wed, Apr 08, 2015 at 01:08:03PM -0400, David Kranz wrote:
  Since tempest no longer uses the official clients as a literal code
  dependency, except for the cli tests which are being removed, the clients
  have been dropping from requirements.txt. But when debugging issues
  uncovered by tempest, or when debugging tempest itself, it is useful to 
  use
  the cli to check various things. I think it would be a good service to 
  users
  of tempest to include the client libraries when tempest is installed on a
  machine. Is there a reason to not do this?
 
  Umm, so that is not what requirements.txt is for, we should only put what 
  is
  required to run the tempest in the requirements file. It's a package 
  dependencies
  list, not a list of everything you find useful for developing tempest 
  code.
  I was more thinking of users of tempest than developers of tempest,
  though it is useful to both.
  But we can certainly say that this is an issue for those who provide
  tempest to users.
 
  I'm in agreement with Matt here. Tempest requirements should be what
  Tempest actually requires.
 
  Installing the CLI is pretty easy, it's package installed in any Linux
  distro. apt-get, yum, or even pip install and you are off and running.
 
  I don't think having Tempest side effect dragging in the CLI tools is
  useful. Those should instead be something people install themselves.
 
 requirements.txt needs to contain necessary packages only for
 deploying Tempest as Matthew said.
 but David's idea is interesting. Official clients are easy to use, and
 comparing the results of both Tempest and the official clients is a nice
 way to debug.
 Since David's mail, I have another idea for debugging problems:
 
   How about adding a command-line option that switches the official
 clients' API functions to tempest-lib's service clients in the future?
 
 We are working on tempest-lib's service clients to migrate tests from
 Tempest to the projects' repositories. These service clients will handle
 REST API operations, and they would be useful for debugging because
 their code is based on Tempest, which we use for gate problems.
 If the official clients have the option, we can reproduce Tempest's
 operations more easily when facing/debugging problems.
 

So we've discussed this before, in Paris and a bit since, and building a cli, or
other clients on top of the tempest clients is definitely doable. Especially
after the service clients start to move into tempest-lib it would not be
difficult. Although, I really don't think we want to get into that game for the
tempest clients, at least as far as the official clients are concerned. There is
still some value in having separate client implementations to keep ourselves
honest in the APIs. I've talked with Dean about doing this with OSC before, and
we keep coming back to having the distinct implementation for testing, to ensure
we don't code around bugs.

That being said, if people want to do their own as a separate client, it
wouldn't be very hard to do after the migration is started. I really don't feel
like having multiple client implementations really hurts OpenStack; it would
probably just help make the APIs better in the long run.

As an aside, I actually used to have a couple of scripts lying around to use a
older version of the tempest clients to do some basic tasks against a cloud. It
worked well for what it was, and I liked it because I was far more familiar with
that code and debugging failures when they occurred.

-Matt Treinish


pgp9Kp92QQZmY.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova-scheduler] Scheduler sub-group (gantt) meeting agenda 4/14

2015-04-13 Thread Dugger, Donald D
Meeting on #openstack-meeting at 1500 UTC (9:00AM MDT)



1)  Vancouver design summit - more thoughts?

2)  Opens



(Light agenda this week, could be a quick meeting)

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - Joining the team - interested in a Debian Developer and experienced Python and Network programmer?

2015-04-13 Thread Joe Mcbride
Hi Matt,
Our team at Rackspace is looking to add a developer, focused on building out 
and deploying Designate (DNSaaS for Openstack). When we go live, we expect to 
have the largest public deployment, so scaling and migration challenges will be 
particularly interesting technical problems to solve.

Best of luck on getting into the Neutron fun.

__
Joe McBride
Rackspace Cloud DNS
I’m hiring a software developer 
https://gist.github.com/joeracker/d49030cef6001a8f94d0



From: Matt Grant m...@mattgrant.net.nz
Sent: Thursday, April 9, 2015 2:13 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] - Joining the team - interested in a Debian 
Developer and experienced Python and Network programmer?

Hi!

I am just wondering what the story is about joining the neutron team.
Could you tell me if you are looking for new contributors?

Previously I have programmed OSPFv2 in Zebra/Quagga, and worked as a
router developer for Allied Telesyn.  I also have extensive Python
programming experience, having worked on the DNS Management System.

I have been experimenting with IPv6 since 2008 on my own home network,
and I am currently installing a Juno Openstack cluster to learn how
things tick.

Have you guys ever figured out how to do a hybrid L3 North/South Neutron
router that propagates tenant routes and networks into OSPF/BGP via a
routing daemon, and uses floating MAC addresses/costed flow rules via
OVS to fail over to a hot standby router? There are practical use cases
for such a thing in smaller deployments.

I have a single stand alone example working by turning off
neutron-l3-agent network name space support, and importing the connected
interface and static routes into Bird and Birdv6. The AMQP connection
back to the neutron-server is via the upstream interface and is secured
via transport mode IPSEC (just easier than bothering with https/SSL).
Bird looks easier to run from neutron as they are single process than a
multi process Quagga implementation.  Incidentally, I am running this in
an LXC container.

Could some one please point me in the right direction.  I would love to
be in Vancouver :-)

Best Regards,

--
Matt Grant,  Debian and Linux Systems Administration and Consulting
Mobile: 021 0267 0578
Email: m...@mattgrant.net.nz


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] This week's meeting

2015-04-13 Thread Sean M. Collins
I am on PTO - if someone else wishes to chair the weekly meeting please feel 
free to do so.
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Query on adding new table to cinder DB

2015-04-13 Thread Deepak Shetty
Hi Stackers,
As part of my WIP work for implementing
https://blueprints.launchpad.net/nova/+spec/volume-snapshot-improvements I
am required to add a new table to cinder (snapshot_admin_metadata) and I
was looking for some inputs on whats are the steps to add a new table to
existing DB

From what I know:

1) Create a new migration script at
cinder/db/sqlalchemy/migrate_repo/versions

2) Implement the upgrade and downgrade methods

3) Create your model inside cinder/db/sqlalchemy/models.py

4) Sync DB using cinder-manage db sync

Are these steps correct?
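
For reference, a rough sketch of what steps 1-3 usually look like in a
sqlalchemy-migrate based repo (the file name and column set below are
illustrative only, not the blueprint's actual schema):

# e.g. cinder/db/sqlalchemy/migrate_repo/versions/0xx_add_snapshot_admin_metadata.py
from sqlalchemy import Column, DateTime, ForeignKey, Integer
from sqlalchemy import MetaData, String, Table


def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    # Load the existing snapshots table so the foreign key can resolve.
    Table('snapshots', meta, autoload=True)
    snapshot_admin_metadata = Table(
        'snapshot_admin_metadata', meta,
        Column('created_at', DateTime),
        Column('updated_at', DateTime),
        Column('deleted_at', DateTime),
        Column('deleted', Integer),
        Column('id', Integer, primary_key=True, nullable=False),
        Column('snapshot_id', String(36), ForeignKey('snapshots.id'),
               nullable=False),
        Column('key', String(255)),
        Column('value', String(255)),
        mysql_engine='InnoDB')
    snapshot_admin_metadata.create()


def downgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    Table('snapshot_admin_metadata', meta, autoload=True).drop()

The corresponding model class (step 3) and the db API accessors would go in
cinder/db/sqlalchemy/models.py and cinder/db/api.py as usual.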

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][releases] OpenStack 2014.2.3 released

2015-04-13 Thread Adam Gandelman
Hello everyone,

The OpenStack Stable Maintenance team is happy to announce the release
of the 2014.2.3 stable Juno release.  We have been busy reviewing and
accepting backported bugfixes to the stable/juno branches according
to the criteria set at:

https://wiki.openstack.org/wiki/StableBranch

A total of 109 bugs have been fixed across all projects. These
updates to Juno are intended to be low risk with no
intentional regressions or API changes. The list of bugs, tarballs and
other milestone information for each project may be found on Launchpad:

https://launchpad.net/ceilometer/juno/2014.2.3
https://launchpad.net/cinder/juno/2014.2.3
https://launchpad.net/glance/juno/2014.2.3
https://launchpad.net/heat/juno/2014.2.3
https://launchpad.net/horizon/juno/2014.2.3
https://launchpad.net/keystone/juno/2014.2.3
https://launchpad.net/nova/juno/2014.2.3
https://launchpad.net/neutron/juno/2014.2.3
https://launchpad.net/trove/juno/2014.2.3

Release notes may be found on the wiki:

https://wiki.openstack.org/wiki/ReleaseNotes/2014.2.3

The freeze on the stable/juno branches will be lifted today as we
begin working toward the 2014.2.4 release.

Thanks,
Adam
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - Joining the team - interested in a Debian Developer and experienced Python and Network programmer?

2015-04-13 Thread Carl Baldwin
Hi, I'm getting back from a little time off over the weekend.

Artem Dmytrenko and Jaume Devesa have done great work [1] with me over
the last year figuring out how to integrate routing protocols with
Neutron.  We have had to exhibit some patience as this work has not
yet bubbled to the top of the team's overall priority list.  However,
I think it is an important part of Neutron's future.  I've started
putting up some blueprints to get this back on track.  The first one
here [2] lays some ground work.

I invite you to come discuss more at our L3 meeting on Thursdays at
1500 UTC [3].  There is a bit more information about BGP/dynamic
routing on the team page.

Carl

[1] https://review.openstack.org/#/c/125401/
[2] https://review.openstack.org/#/c/172244/
[3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam

On Thu, Apr 9, 2015 at 8:26 AM, Mathieu Rohon mathieu.ro...@gmail.com wrote:
 Hi Matt,

 Jaume did awesome work proposing and implementing a framework for
 announcing public IP with a BGP speaker [1].
 Unfortunately, the spec hasn't been merged in kilo. Hope it will be
 resubmitted in L.
 Your proposal seems to be a mix of Jaume's proposal and the HA router design?

 We also play with a BGP speaker (BagPipe[3], derived from ExaBGP, written in
 python) for IPVPN attachment [2].

 [1]https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing
 [2]https://launchpad.net/bgpvpn
 [3]https://github.com/Orange-OpenSource/bagpipe-bgp

 On Thu, Apr 9, 2015 at 3:54 PM, Kyle Mestery mest...@mestery.com wrote:

 On Thu, Apr 9, 2015 at 2:13 AM, Matt Grant m...@mattgrant.net.nz wrote:

 Hi!

 I am just wondering what the story is about joining the neutron team.
 Could you tell me if you are looking for new contributors?

 We're always looking for someone new to participate! Thanks for reaching
 out!


 Previously I have programmed OSPFv2 in Zebra/Quagga, and worked as a
 router developer for Allied Telesyn.  I also have extensive Python
 programming experience, having worked on the DNS Management System.

 Sounds like you have extensive experience programming network elements. :)


 I have been experimenting with IPv6 since 2008 on my own home network,
 and I am currently installing a Juno Openstack cluster to learn how
 things tick.

 Great, this will give you an overview of things.


 Have you guys ever figured out how to do a hybrid L3 North/South Neutron
 router that propagates tenant routes and networks into OSPF/BGP via a
 routing daemon, and uses floating MAC addresses/costed flow rules via
 OVS to fail over to a hot standby router? There are practical use cases
 for such a thing in smaller deployments.

 BGP integration with L3 is something we'll look at again for Liberty. Carl
 Baldwin leads the L3 work in Neutron, and would be a good person to sync
 with on this work item. I suspect he may be looking for people to help
 integrate the BGP work in Liberty, this may be a good place for you to jump
 in.

 I have a single stand alone example working by turning off
 neutron-l3-agent network name space support, and importing the connected
 interface and static routes into Bird and Birdv6. The AMQP connection
 back to the neutron-server is via the upstream interface and is secured
 via transport mode IPSEC (just easier than bothering with https/SSL).
 Bird looks easier to run from neutron as they are single process than a
 multi process Quagga implementation.  Incidentally, I am running this in
 an LXC container.

 Nice!


 Could some one please point me in the right direction.  I would love to
 be in Vancouver :-)

 If you're not already on #openstack-neutron on Freenode, jump in there.
 Plenty of helpful people abound. Since you're in New Zealand, I would
 suggest reaching out to Akihiro Motoki (amotoki) on IRC, as he's in Japan
 and closer to your timezone.

 Thanks!
 Kyle

 Best Regards,

 --
 Matt Grant,  Debian and Linux Systems Administration and Consulting
 Mobile: 021 0267 0578
 Email: m...@mattgrant.net.nz



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)

[openstack-dev] [Fuel] Nominate Andrey Skedzinskiy for fuel-qa(devops) core

2015-04-13 Thread Anastasia Urlapova
Guys,
I would like to nominate Andrey Skedzinskiy[1] for
fuel-qa[2]/fuel-devops[3] core team.

Andrey is one of the strongest reviewers, under his watchful eye are such
features as:
- upgrade/rollback master node
- collect usage information
- OS patching
- UI tests
and others

Please vote for Andrey!


Nastya.

[1]http://stackalytics.com/?project_type=stackforgeuser_id=asledzinskiy
[2]https://github.com/stackforge/fuel-qa
[3]https://github.com/stackforge/fuel-devops
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron scaling datapoints?

2015-04-13 Thread joehuang

-Original Message-
From: Attila Fazekas [mailto:afaze...@redhat.com] 
Sent: Monday, April 13, 2015 3:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?


- Original Message -
 From: joehuang joehu...@huawei.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Sunday, April 12, 2015 1:20:48 PM
 Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
 
 
 
 Hi, Kevin,
 
 
 
 I assumed that all agents are connected to the same IP address of 
 RabbitMQ, then the connections will exceed the port range limitation.
 
https://news.ycombinator.com/item?id=1571300

TCP connections are identified by the (src ip, src port, dest ip, dest port) 
tuple.

The server doesn't need multiple IPs to handle  65535 connections. All the 
server connections to a given IP are to the same port. For a given client, the 
unique key for an http connection is (client-ip, PORT, server-ip, 80). The only 
number that can vary is PORT, and that's a value on the client. So, the client 
is limited to 65535 connections to the server. But, a second client could also 
have another 65K connections to the same server-ip:port.


[[joehuang]] Sorry, it has been a long time since I wrote socket-based apps; I may have made a 
mistake about the HTTP server spawning a thread to handle each new connection. I'll 
check again.

 
 For a RabbitMQ cluster, for sure the client can connect to any one 
 member in the cluster, but in this case, the client has to be designed 
 in a fail-safe
 manner: the client should be aware of cluster member failures, and 
 reconnect to another surviving member. No such mechanism has been 
 implemented yet.
 
 
 
 Other way is to use LVS or DNS based like load balancer, or something else.
 If you put one load balancer ahead of a cluster, then we have to take 
 care of the port number limitation, there are so many agents will 
 require connection concurrently, 100k level, and the requests can not be 
 rejected.
 
 
 
 Best Regards
 
 
 
 Chaoyi Huang ( joehuang )
 
 
 
 From: Kevin Benton [blak...@gmail.com]
 Sent: 12 April 2015 9:59
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
 
 The TCP/IP stack keeps track of connections as a combination of IP + 
 TCP port. The two byte port limit doesn't matter unless all of the 
 agents are connecting from the same IP address, which shouldn't be the 
 case unless compute nodes connect to the rabbitmq server via one IP 
 address running port address translation.
 
 Either way, the agents don't connect directly to the Neutron server, 
 they connect to the rabbit MQ cluster. Since as many Neutron server 
 processes can be launched as necessary, the bottlenecks will likely 
 show up at the messaging or DB layer.
 
 On Sat, Apr 11, 2015 at 6:46 PM, joehuang  joehu...@huawei.com  wrote:
 
 
 
 
 
 As Kevin was talking about agents, I want to remind that in the TCP/IP stack, 
 a port ( not a Neutron Port ) is a two-byte field, i.e. ports range from 
 0 ~ 65535, supporting a maximum of 64k port numbers.
 
 
 
  above 100k managed node  means more than 100k L2 agents/L3 
 agents... will be alive under Neutron.
 
 
 
 Want to know the detail design how to support 99.9% possibility for 
 scaling Neutron in this way, and PoC and test would be a good support for 
 this idea.
 
 
 
 I'm 99.9% sure, for scaling above 100k managed node, we do not really 
 need to split the openstack to multiple smaller openstack, or use 
 significant number of extra controller machine.
 
 
 
 Best Regards
 
 
 
 Chaoyi Huang ( joehuang )
 
 
 
 From: Kevin Benton [ blak...@gmail.com ]
 Sent: 11 April 2015 12:34
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
 
 Which periodic updates did you have in mind to eliminate? One of the 
 few remaining ones I can think of is sync_routers but it would be 
 great if you can enumerate the ones you observed because eliminating 
 overhead in agents is something I've been working on as well.
 
 One of the most common is the heartbeat from each agent. However, I 
 don't think we can't eliminate them because they are used to determine 
 if the agents are still alive for scheduling purposes. Did you have 
 something else in mind to determine if an agent is alive?
 
 On Fri, Apr 10, 2015 at 2:18 AM, Attila Fazekas  afaze...@redhat.com 
 
 wrote:
 
 
 I'm 99.9% sure, for scaling above 100k managed node, we do not really 
 need to split the openstack to multiple smaller openstack, or use 
 significant number of extra controller machine.
 
 The problem is openstack using the right tools SQL/AMQP/(zk), but in a 
 wrong way.
 
 For example.:
 Periodic updates can be avoided in almost all cases
 
 The new data can be pushed to the agent just when it is needed.
 The agent can know when the AMQP connection 

Re: [openstack-dev] [oslo] eventlet 0.17.3 is now fully Python 3 compatible

2015-04-13 Thread Victor Stinner
 Worth noting we've already switched to using PyMySQL in nodepool,
 storyboard and some of the subunit2sql tooling. It's been working
 out great so far.

Great. Did you notice a performance regression? Mike wrote that PyMySQL is much 
slower than MySQL-Python.

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-13 Thread Miguel Angel Ajo Pelayo

 On 13/4/2015, at 3:53, Robert Collins robe...@robertcollins.net wrote:
 
 On 13 April 2015 at 13:09, Robert Collins robe...@robertcollins.net wrote:
 On 13 April 2015 at 12:53, Monty Taylor mord...@inaugust.com wrote:
 
 What we have in the gate is the thing that produces the artifacts that
 someone installing using the pip tool would get. Shipping anything with
 those artifacts other that a direct communication of what we tested is
 just mean to our end users.
 
 Actually its not.
 
 What we test is point in time. At 2:45 UTC on Monday installing this
 git ref of nova worked.
 
 Noone can reconstruct that today.
 
 I entirely agree with the sentiment you're expressing, but we're not
 delivering that sentiment today.
 
 This observation led to yet more IRC discussion and eventually
 https://etherpad.openstack.org/p/stable-omg-deps
 
 In short, the proposal is that we:
 - stop trying to use install_requires to reproduce exactly what
 works, and instead use it to communicate known constraints ( X, Y is
 broken etc).
 - use a requirements.txt file we create *during* CI to capture
 exactly what worked, and also capture the dpkg and rpm versions of
 packages that were present when it worked, and so on. So we'll build a
 git tree where its history is an audit trail of exactly what worked
 for everything that passed CI, formatted to make it really really easy
 for other people to consume.
 

That sounds like a very neat idea, this way we could look back, and backtrack
to discover which package version change breaks the system.
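
A rough sketch of what that CI capture step could look like (assuming a
Debian-based worker; the commands and file names are illustrative, not an
agreed format):

# Rough sketch: record the exact versions present when the CI run passed.
import subprocess


def capture_known_good():
    # Exact Python dependency versions that passed this CI run.
    with open('known-good-requirements.txt', 'w') as out:
        out.write(subprocess.check_output(
            ['pip', 'freeze'], universal_newlines=True))
    # Distro package versions present at the same time.
    with open('known-good-dpkg.txt', 'w') as out:
        out.write(subprocess.check_output(
            ['dpkg-query', '-W', '-f=${Package} ${Version}\n'],
            universal_newlines=True))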


Miguel Angel Ajo




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron scaling datapoints?

2015-04-13 Thread Attila Fazekas




- Original Message -
 From: joehuang joehu...@huawei.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Sunday, April 12, 2015 1:20:48 PM
 Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
 
 
 
 Hi, Kevin,
 
 
 
 I assumed that all agents are connected to the same IP address of RabbitMQ, then
 the connections will exceed the port range limitation.
 
https://news.ycombinator.com/item?id=1571300

TCP connections are identified by the (src ip, src port, dest ip, dest port) 
tuple.

The server doesn't need multiple IPs to handle  65535 connections. All the 
server connections to a given IP are to the same port. For a given client, the 
unique key for an http connection is (client-ip, PORT, server-ip, 80). The only 
number that can vary is PORT, and that's a value on the client. So, the client 
is limited to 65535 connections to the server. But, a second client could also 
have another 65K connections to the same server-ip:port.

 
 For a RabbitMQ cluster, for sure the client can connect to any one member
 in the cluster, but in this case, the client has to be designed in a fail-safe
 manner: the client should be aware of cluster member failures, and
 reconnect to another surviving member. No such mechanism has been implemented
 yet.
 
 
 
 Other way is to use LVS or DNS based like load balancer, or something else.
 If you put one load balancer ahead of a cluster, then we have to take care
 of the port number limitation, there are so many agents will require
 connection concurrently, 100k level, and the requests can not be rejected.
 
 
 
 Best Regards
 
 
 
 Chaoyi Huang ( joehuang )
 
 
 
 From: Kevin Benton [blak...@gmail.com]
 Sent: 12 April 2015 9:59
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
 
 The TCP/IP stack keeps track of connections as a combination of IP + TCP
 port. The two byte port limit doesn't matter unless all of the agents are
 connecting from the same IP address, which shouldn't be the case unless
 compute nodes connect to the rabbitmq server via one IP address running port
 address translation.
 
 Either way, the agents don't connect directly to the Neutron server, they
 connect to the rabbit MQ cluster. Since as many Neutron server processes can
 be launched as necessary, the bottlenecks will likely show up at the
 messaging or DB layer.
 
 On Sat, Apr 11, 2015 at 6:46 PM, joehuang  joehu...@huawei.com  wrote:
 
 
 
 
 
 As Kevin was talking about agents, I want to remind that in the TCP/IP stack, a port (
 not a Neutron Port ) is a two-byte field, i.e. ports range from 0 ~ 65535,
 supporting a maximum of 64k port numbers.
 
 
 
  above 100k managed node  means more than 100k L2 agents/L3 agents... will
 be alive under Neutron.
 
 
 
 Want to know the detail design how to support 99.9% possibility for scaling
 Neutron in this way, and PoC and test would be a good support for this idea.
 
 
 
 I'm 99.9% sure, for scaling above 100k managed node,
 we do not really need to split the openstack to multiple smaller openstack,
 or use significant number of extra controller machine.
 
 
 
 Best Regards
 
 
 
 Chaoyi Huang ( joehuang )
 
 
 
 From: Kevin Benton [ blak...@gmail.com ]
 Sent: 11 April 2015 12:34
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
 
 Which periodic updates did you have in mind to eliminate? One of the few
 remaining ones I can think of is sync_routers but it would be great if you
 can enumerate the ones you observed because eliminating overhead in agents
 is something I've been working on as well.
 
 One of the most common is the heartbeat from each agent. However, I don't
 think we can eliminate them because they are used to determine if the
 agents are still alive for scheduling purposes. Did you have something else
 in mind to determine if an agent is alive?
 
 On Fri, Apr 10, 2015 at 2:18 AM, Attila Fazekas  afaze...@redhat.com 
 wrote:
 
 
 I'm 99.9% sure, for scaling above 100k managed node,
 we do not really need to split the openstack to multiple smaller openstack,
 or use significant number of extra controller machine.
 
 The problem is openstack using the right tools SQL/AMQP/(zk),
 but in a wrong way.
 
 For example.:
 Periodic updates can be avoided in almost all cases
 
 The new data can be pushed to the agent just when it is needed.
 The agent can know when the AMQP connection becomes unreliable (queue or
 connection loss),
 and needs to do a full sync.
 https://bugs.launchpad.net/neutron/+bug/1438159
 
 Also, when the agents get some notification, they start asking for details
 via the
 AMQP -> SQL. Why do they not know it already, or get it with the notification?
 
 
 - Original Message -
  From: Neil Jerram  neil.jer...@metaswitch.com 
  To: OpenStack Development Mailing List (not for usage questions) 
  

Re: [openstack-dev] [Fuel] Nominate Andrey Skedzinskiy for fuel-qa(devops) core

2015-04-13 Thread Alexander Kislitsky
Andrey shows great attention to detail. +1 for him.

On Mon, Apr 13, 2015 at 11:22 AM, Anastasia Urlapova aurlap...@mirantis.com
 wrote:

 Guys,
 I would like to nominate Andrey Skedzinskiy[1] for
 fuel-qa[2]/fuel-devops[3] core team.

 Andrey is one of the strongest reviewers, under his watchful eye are such
 features as:
 - upgrade/rollback master node
 - collect usage information
 - OS patching
 - UI tests
 and others

 Please vote for Andrey!


 Nastya.

 [1]http://stackalytics.com/?project_type=stackforgeuser_id=asledzinskiy
 [2]https://github.com/stackforge/fuel-qa
 [3]https://github.com/stackforge/fuel-devops

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] In loving memory of Chris Yeoh

2015-04-13 Thread wu jiang
What bad news.. Chris helped me a lot, we lost a mentor and friend.
May God bless his/her soul.

WingWJ

On Mon, Apr 13, 2015 at 1:03 PM, Gary Kotton gkot...@vmware.com wrote:

 Hi,
 I am very saddened to read this. Not only will Chris be missed on a
 professional level but on a personal level. He was a real mensh
 (http://www.thefreedictionary.com/mensh). He was always helpful and
 supportive. Wishing his family a long life.
 Thanks
 Gary

 On 4/13/15, 4:33 AM, Michael Still mi...@stillhq.com wrote:

 Hi, as promised I now have details of a charity for people to donate
 to in Chris' memory:
 
 
 
 https://urldefense.proofpoint.com/v2/url?u=http-3A__participate.freetobrea
 the.org_site_TR-3Fpx-3D1582460-26fr-5Fid-3D2710-26pg-3Dpersonal-23.VSscH5S
 Ud90d=AwIGaQc=Sqcl0Ez6M0X8aeM67LKIiDJAXVeAw-YihVMNtXt-uEsr=VlZxHpZBmzzk
 WT5jqz9JYBk8YTeq9N3-diTlNj4GyNcm=IFwED7YYaddl7JbqZ5OLChF6gtEGxYkxfFHwjWRm
 sD8s=B3EgunFqBdY8twmv-iJ7G7xvKZ4Th48oB4HKSv2uGKge=
 
 In the words of the family:
 
 We would prefer that people donate to lung cancer research in lieu of
 flowers. Lung cancer has the highest mortality rate out of all the
 cancers, and the lowest funding out of all the cancers. There is a
 stigma attached that lung cancer is a smoker's disease, and that
 sufferers deserve their fate. They bring it on through lifestyle
 choice. Except that Chris has never smoked in his life, like a
 surprisingly large percentage of lung cancer sufferers. These people
 suffer for the incorrect beliefs of the masses, and those that are
 left behind are equally innocent. We shouldn't be doing this now. He
 shouldn't be gone. We need to do more to fix this. There will be
 charity envelopes available at the funeral, or you can choose your
 preferred research to fund, should you wish to do so. You have our
 thanks.
 
 Michael
 
 On Wed, Apr 8, 2015 at 2:49 PM, Michael Still mi...@stillhq.com wrote:
  It is my sad duty to inform the community that Chris Yeoh passed away
 this
  morning. Chris leaves behind a daughter Alyssa, aged 6, who I hope will
  remember Chris as the clever and caring person that I will remember him
 as.
  I haven't had a chance to confirm with the family if they want flowers
 or a
  donation to a charity. As soon as I know those details I will reply to
 this
  email.
 
  Chris worked on open source for a very long time, with OpenStack being
 just
  the most recent in a long chain of contributions. He worked tirelessly
 on
  his contributions to Nova, including mentoring other developers. He was
  dedicated to the cause, with a strong vision of what OpenStack could
 become.
  He even named his cat after the project.
 
  Chris might be the only person to have ever sent an email to his
 coworkers
  explaining what his code review strategy would be after brain surgery.
 It
  takes phenomenal strength to carry on in the face of that kind of
 adversity,
  but somehow he did. Frankly, I think I would have just sat on the beach.
 
  Chris was also a contributor to the Linux Standards Base (LSB), where he
  helped improve the consistency and interoperability between Linux
  distributions. He ran the 'Hackfest' programming contests for a number
 of
  years at Australia's open source conference -- linux.conf.au. He
 supported
  local Linux user groups in South Australia and Canberra, including
  involvement at installfests and speaking at local meetups. He competed
 in a
  programming challenge called Loki Hack, and beat out the world to win
 the
  event[1].
 
  Alyssa's memories of her dad need to last her a long time, so we've
 decided
  to try and collect some fond memories of Chris to help her along the
 way. If
  you feel comfortable doing so, please contribute a memory or two at
 
 
 https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.google.com_form
 s_d_1kX-2DePqAO7Cuudppwqz1cqgBXAsJx27GkdM-2DeCZ0c1V8_viewformd=AwIGaQc=
 Sqcl0Ez6M0X8aeM67LKIiDJAXVeAw-YihVMNtXt-uEsr=VlZxHpZBmzzkWT5jqz9JYBk8YTe
 q9N3-diTlNj4GyNcm=IFwED7YYaddl7JbqZ5OLChF6gtEGxYkxfFHwjWRmsD8s=iihsaOMe
 lNeIR3VZapWKjr5KLgMQArZ3nifKDo1yy8oe=
 
  Chris was humble, helpful and honest. The OpenStack and broader Open
 Source
  communities are poorer for his passing.
 
  Michael
 
  [1]
 
 https://urldefense.proofpoint.com/v2/url?u=http-3A__www.lokigames.com_hac
 k_d=AwIGaQc=Sqcl0Ez6M0X8aeM67LKIiDJAXVeAw-YihVMNtXt-uEsr=VlZxHpZBmzzkW
 T5jqz9JYBk8YTeq9N3-diTlNj4GyNcm=IFwED7YYaddl7JbqZ5OLChF6gtEGxYkxfFHwjWRm
 sD8s=9SJI7QK-jzCsVUN2hTXSthqiXNEbq2Fvl9JqQiX9tfoe=
 
 
 
 --
 Rackspace Australia
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 

[openstack-dev] Re: [neutron] Neutron scaling datapoints?

2015-04-13 Thread Wangbibo
Hi Kevin,

Totally agree with you that heartbeat from each agent is something that we 
cannot eliminate currently. Agent status depends on it, and further scheduler 
and HA depends on agent status.

I proposed a Liberty spec for introducing open framework/pluggable agent status 
drivers.[1][2]  It allows us to use some other 3rd party backend to monitor 
agent status, such as zookeeper, memcached. Meanwhile, it guarantees backward 
compatibility so that users could still use db-based status monitoring 
mechanism as their default choice.

Based on that, we may do further optimization on the issues Attila and you 
mentioned. Thanks.

[1] BP  -  
https://blueprints.launchpad.net/neutron/+spec/agent-group-and-status-drivers
[2] Liberty Spec proposed - https://review.openstack.org/#/c/168921/

Best,
Robin




From: Kevin Benton [mailto:blak...@gmail.com]
Sent: 11 April 2015 12:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?

Which periodic updates did you have in mind to eliminate? One of the few 
remaining ones I can think of is sync_routers but it would be great if you can 
enumerate the ones you observed because eliminating overhead in agents is 
something I've been working on as well.

One of the most common is the heartbeat from each agent. However, I don't think 
we can eliminate them because they are used to determine if the agents are 
still alive for scheduling purposes. Did you have something else in mind to 
determine if an agent is alive?

On Fri, Apr 10, 2015 at 2:18 AM, Attila Fazekas 
afaze...@redhat.commailto:afaze...@redhat.com wrote:
I'm 99.9% sure, for scaling above 100k managed node,
we do not really need to split the openstack to multiple smaller openstack,
or use significant number of extra controller machine.

The problem is openstack using the right tools SQL/AMQP/(zk),
but in a wrong way.

For example.:
Periodic updates can be avoided in almost all cases

The new data can be pushed to the agent just when it is needed.
The agent can know when the AMQP connection becomes unreliable (queue or 
connection loss),
and needs to do a full sync.
https://bugs.launchpad.net/neutron/+bug/1438159

Also, when the agents get some notification, they start asking for details via 
the
AMQP -> SQL. Why do they not know it already, or get it with the notification?


- Original Message -
 From: Neil Jerram 
 neil.jer...@metaswitch.commailto:neil.jer...@metaswitch.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
 Sent: Thursday, April 9, 2015 5:01:45 PM
 Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?

 Hi Joe,

 Many thanks for your reply!

 On 09/04/15 03:34, joehuang wrote:
  Hi, Neil,
 
   From theoretic, Neutron is like a broadcast domain, for example,
   enforcement of DVR and security group has to touch each regarding host
   where there is VM of this project resides. Even using SDN controller, the
   touch to regarding host is inevitable. If there are plenty of physical
   hosts, for example, 10k, inside one Neutron, it's very hard to overcome
   the broadcast storm issue under concurrent operation, that's the
   bottleneck for scalability of Neutron.

 I think I understand that in general terms - but can you be more
 specific about the broadcast storm?  Is there one particular message
 exchange that involves broadcasting?  Is it only from the server to
 agents, or are there 'broadcasts' in other directions as well?

 (I presume you are talking about control plane messages here, i.e.
 between Neutron components.  Is that right?  Obviously there can also be
 broadcast storm problems in the data plane - but I don't think that's
 what you are talking about here.)

  We need layered architecture in Neutron to solve the broadcast domain
  bottleneck of scalability. The test report from OpenStack cascading shows
  that through layered architecture Neutron cascading, Neutron can
  supports up to million level ports and 100k level physical hosts. You can
  find the report here:
  http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers

 Many thanks, I will take a look at this.

  Neutron cascading also brings extra benefit: One cascading Neutron can
  have many cascaded Neutrons, and different cascaded Neutron can leverage
  different SDN controller, maybe one is ODL, the other one is OpenContrail.
 
  Cascading Neutron---
   / \
  --cascaded Neutron--   --cascaded Neutron-
  |  |
  -ODL--   OpenContrail
 
 
  And furthermore, if using Neutron cascading in multiple data centers, the
  DCI controller (Data center inter-connection controller) can also be used
  under cascading Neutron, to provide NaaS ( network as a service ) across
  data 

[openstack-dev] [magnum] About cleaning unused container images

2015-04-13 Thread 449171342
From now on, magnum has container create and delete APIs. The container create 
API will pull the docker image from the docker-registry, but the container delete API 
doesn't delete the image. The image remains even though no container uses it any more. 
Would it be better if we could clean up the image in some way?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] About cleaning unused container images

2015-04-13 Thread Jay Lau
Interesting topic. Pulling an image is time consuming, so someone might not
want the image deleted; but in some cases, if the image is not
used, then it is better to remove it from disk to release space. You may
want to send out an email to the [openstack][magnum] ML to get more feedback ;-)
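
One possible approach, sketched with the docker-py client (illustrative only;
a real cleanup would have to live somewhere in Magnum itself, and matching by
repo tag is an assumption):

# Rough sketch (not Magnum code): remove images no container still references.
import docker

client = docker.Client()

# Image names referenced by any container, running or stopped.
used = set(c.get('Image') for c in client.containers(all=True))

for image in client.images():
    tags = image.get('RepoTags') or []
    if not any(tag in used for tag in tags):
        # No container references this image any more; reclaim the space.
        client.remove_image(image['Id'])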

2015-04-13 14:51 GMT+08:00 449171342 449171...@qq.com:



 From now on, magnum has container create and delete APIs. The container create 
 API will pull the docker image from the docker-registry, but the container delete API 
 doesn't delete the image. The image remains even though no container uses it any more. 
 Would it be better if we could clean up the image in some way?
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Regarding neutron bug # 1432582

2015-04-13 Thread Kevin Benton
I would like to see some form of this merged at least as an error message.
If a server has a bad CMOS battery and suffers a power outage, its clock
could easily be several years behind. In that scenario, the NTP daemon
could refuse to sync due to a sanity check.
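
For context, the kind of check under discussion amounts to something like
this (a sketch only, not the code in the review; the threshold is an
arbitrary example):

# Rough sketch of a clock-skew sanity check between agent and server.
import datetime

MAX_SKEW = datetime.timedelta(seconds=60)


def check_clock_skew(agent_timestamp):
    skew = abs(datetime.datetime.utcnow() - agent_timestamp)
    if skew > MAX_SKEW:
        raise RuntimeError(
            "Agent clock differs from server clock by %s; heartbeats may be "
            "treated as stale. Check NTP/date on both hosts." % skew)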

On Wed, Apr 8, 2015 at 10:46 AM, Sudipto Biswas sbisw...@linux.vnet.ibm.com
 wrote:

 Hi Guys, I'd really appreciate your feedback on this.

 Thanks,
 Sudipto


 On Monday 30 March 2015 12:11 PM, Sudipto Biswas wrote:

 Someone from my team had installed the OS on baremetal with a wrong 'date'
 When this node was added to the Openstack controller, the logs from the
 neutron-agent on the compute node showed - AMQP connected. But the
 neutron
 agent-list command would not list this agent at all.

 I could figure out the problem when the neutron-server debug logs were
 enabled
 and it vaguely pointed at the rejection of AMQP connections due to a
 timestamp
 miss match. The neutron-server was treating these requests as stale due
 to the
 timestamp of the node being behind the neutron-server. However, there's no
 good way to detect this if the agent runs on a node which is ahead of
 time.

 I recently raised a bug here: https://bugs.launchpad.net/
 neutron/+bug/1432582

 And tried to resolve this with the review:
 https://review.openstack.org/#/c/165539/

 It went through quite a few +2s after 15-odd patch sets, but we still are
 not
 on common ground w.r.t. addressing this situation.

 My fix tries to log better and throw up an exception to the neutron agent
 on
 FIRST time boot of the agent for better detection of the problem.

 I would like to get your thoughts on this fix. Whether this seems legit
 to have
 the fix per the patch OR could you suggest an approach to tackle this OR
 suggest
 just abandoning the change.



 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron scaling datapoints?

2015-04-13 Thread Attila Fazekas




- Original Message -
 From: Kevin Benton blak...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Sunday, April 12, 2015 4:17:29 AM
 Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
 
 
 
 So IIUC tooz would be handling the liveness detection for the agents. That
 would be nice to get rid of that logic in Neutron and just register
 callbacks for rescheduling the dead.
 
 Where does it store that state, does it persist timestamps to the DB like
 Neutron does? If so, how would that scale better? If not, who does a given
 node ask to know if an agent is online or offline when making a scheduling
 decision?
 
You might find interesting the proposed solution in this bug:
https://bugs.launchpad.net/nova/+bug/1437199

 However, before (what I assume is) the large code change to implement tooz, I
 would like to quantify that the heartbeats are actually a bottleneck. When I
 was doing some profiling of them on the master branch a few months ago,
 processing a heartbeat took an order of magnitude less time (50ms) than the
 'sync routers' task of the l3 agent (~300ms). A few query optimizations
 might buy us a lot more headroom before we have to fall back to large
 refactors.
 Kevin Benton wrote:
 
 
 
 One of the most common is the heartbeat from each agent. However, I
 don't think we can't eliminate them because they are used to determine
 if the agents are still alive for scheduling purposes. Did you have
 something else in mind to determine if an agent is alive?
 
 Put each agent in a tooz[1] group; have each agent periodically heartbeat[2],
 have whoever needs to schedule read the active members of that group (or use
 [3] to get notified via a callback), profit...
 
 Pick from your favorite (supporting) driver at:
 
 http://docs.openstack.org/developer/tooz/compatibility.html
 
 [1] http://docs.openstack.org/developer/tooz/compatibility.html#grouping
 [2] https://github.com/openstack/tooz/blob/0.13.1/tooz/coordination.py#L315
 [3] http://docs.openstack.org/developer/tooz/tutorial/group_membership.html#watching-group-changes
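 
 A minimal sketch of how that could look from an agent's point of view (the
 driver URL, group and member ids are placeholders; see the tooz docs above
 for the real API details):

# Rough sketch of the tooz-based idea; ids and backend are examples only.
from tooz import coordination

coordinator = coordination.get_coordinator('zake://', b'l2-agent-compute-01')
coordinator.start()

group = b'neutron-agents'
try:
    coordinator.create_group(group).get()
except coordination.GroupAlreadyExist:
    pass
coordinator.join_group(group).get()

# Each agent calls this periodically instead of writing a DB timestamp.
coordinator.heartbeat()

# The scheduling side asks for live members instead of comparing timestamps.
alive_agents = coordinator.get_members(group).get()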
 
 
 __ __ __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-13 Thread Thierry Carrez
Robert Collins wrote:
 On 13 April 2015 at 13:09, Robert Collins robe...@robertcollins.net wrote:
 On 13 April 2015 at 12:53, Monty Taylor mord...@inaugust.com wrote:

 What we have in the gate is the thing that produces the artifacts that
 someone installing using the pip tool would get. Shipping anything with
 those artifacts other that a direct communication of what we tested is
 just mean to our end users.

 Actually its not.

 What we test is point in time. At 2:45 UTC on Monday installing this
 git ref of nova worked.

 Noone can reconstruct that today.

 I entirely agree with the sentiment you're expressing, but we're not
 delivering that sentiment today.
 
 This observation led to yet more IRC discussion and eventually
 https://etherpad.openstack.org/p/stable-omg-deps
 
 In short, the proposal is that we:
  - stop trying to use install_requires to reproduce exactly what
 works, and instead use it to communicate known constraints ( X, Y is
 broken etc).
  - use a requirements.txt file we create *during* CI to capture
 exactly what worked, and also capture the dpkg and rpm versions of
 packages that were present when it worked, and so on. So we'll build a
 git tree where its history is an audit trail of exactly what worked
 for everything that passed CI, formatted to make it really really easy
 for other people to consume.

I totally agree that we need to stop trying to provide two different
sets of dependency information (known good deps, known bad deps) using
the same dataset.

If I understand you correctly, today we provide a requirements.txt and
generate an install_requires from it, and in the new world order we
would provide a install_requires with known-bad info in it and
generate a known-good requirements.txt (during CI) from it.

Questions:

How would global-requirements evolve in that picture ? Would we have
some global-install-requires thing to replace it ?

Distro packagers today rely on requirements.txt (and global
-requirements) to determine what version of libraries they need to
package. Would they just rely on install_requires instead ? Where is
that information provided ? setup.cfg ?

How does this proposal affect stable branches ? In order to keep the
breakage there under control, we now have stable branches for all the
OpenStack libraries and cap accordingly[1]. We planned to cap all other
libraries to the version that was there when the stable branch was
cut.  Where would we do those cappings in the new world order ? In
install_requires ? Or should we not do that anymore ?

[1]
http://specs.openstack.org/openstack/openstack-specs/specs/library-stable-branches.html

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [doc] what's happened to api documents?

2015-04-13 Thread henry hly
http://developer.openstack.org/api-ref-networking-v2.html

The above api document seems lost most of the content, leaving only
port, network, subnet?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Sebastian Kalinowski for fuel-web/python-fuelclient core

2015-04-13 Thread Evgeniy L
+1

On Fri, Apr 10, 2015 at 1:35 PM, Roman Prykhodchenko m...@romcheg.me wrote:

 +1. Sebastian does great job in reviews!

  10 квіт. 2015 о 12:05 Igor Kalnitsky ikalnit...@mirantis.com
 написав(ла):
 
  Hi Fuelers,
 
  I'd like to nominate Sebastian Kalinowski for both the fuel-web-core
  [1] and python-fuelclient-core [2] teams. Sebastian's doing a really
  good review with detailed feedback and he's a regular participant in
  IRC. I believe that having him among the cores we will increase our
  overall performance.
 
  Fuel Cores, please reply back with +1/-1.
 
  Thanks,
  Igor
 
  [1]:
 http://stackalytics.com/?project_type=stackforgemodule=fuel-webrelease=kilo
  [2]:
 http://stackalytics.com/?project_type=stackforgemodule=python-fuelclientrelease=kilo
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstackclient] osc slowness

2015-04-13 Thread Sean Dague
While I was working on the grenade refactor I was considering using
openstack client for some resource create / testing. Doing so made me
realize that osc is sluggish. From what I can tell due to the way it
loads the world, there is a minimum 1.5s overhead on every command
execution. For instance, openstack server list takes a solid extra
second over nova list in my environment.

I wrote a little tool to figure out how much time we're spending in
openstack client - https://review.openstack.org/#/c/172713/

On a randomly selected dsvm-full run from master it's about 4.5 minutes.
Now, that being said, there are a bunch of REST calls it's making, so
it's not all OSC's fault. However there is a lot of time lost to that
reload the world issue. Especially when we are making accounts.

For instance, the create accounts section of Keystone setup:
https://github.com/openstack-dev/devstack/blob/master/stack.sh#L968-L1016

Now takes 3.5 minutes in master -
http://logs.openstack.org/13/172713/1/check/check-tempest-dsvm-full/d3b0b8e/logs/devstacklog.txt.gz

2015-04-12 12:37:40.997 | + echo_summary 'Starting Keystone'
2015-04-12 12:41:06.833 | + echo_summary 'Configuring and starting Horizon'

The same chunk in Icehouse took just over 1 minute -
http://logs.openstack.org/28/165928/2/check/check-tempest-dsvm-full/f0b3e07/logs/devstacklog.txt.gz

2015-04-10 15:59:08.699 | + echo_summary 'Starting Keystone'
2015-04-10 16:00:00.313 | + echo_summary 'Configuring and starting Horizon'

In master we do create a few more accounts as well, again, it's not all
OSC, however OSC is definitely adding to it.

A really great comparison between OSC and Keystone commands is provided
by the ec2 user creation:

Icehouse:
http://logs.openstack.org/28/165928/2/check/check-tempest-dsvm-full/f0b3e07/logs/devstacklog.txt.gz#_2015-04-10_16_01_07_148

Master:
http://logs.openstack.org/13/172713/1/check/check-tempest-dsvm-full/d3b0b8e/logs/devstacklog.txt.gz#_2015-04-12_12_43_19_655

The keystone version of the commands take ~ 500ms, the OSC versions 1700ms.


So, under the current model I think we're paying a pretty high strategy
tax in OSC use in devstack. It's adding minutes of time in a normal run.
I don't know all the internals of OSC and what can be done to make it
better. But I think that as a CLI we should be as responsive as
possible.  1s seems like it should be the target for at least all the
keystone operations. I do think this is one of the places (like
rootwrap) where load time is something to not ignore.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TripleO: CI down... SSL cert expired

2015-04-13 Thread Derek Higgins

On 11/04/15 14:02, Dan Prince wrote:

Looks like our SSL certificate has expired for the currently active CI
cloud. We are working on getting a new one generated and installed.
Until then CI jobs won't get processed.


A new cert has been installed in the last few minutes and ZUUL has 
started kicking off new jobs so we should be through the backlog soon.


At this week's meeting we'll discuss putting something in place to ensure 
we are ahead of this the next time.


Derek



Dan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [doc] what's happened to api documents?

2015-04-13 Thread Henry Gessau
On Mon, Apr 13, 2015, henry hly henry4...@gmail.com wrote:
 http://developer.openstack.org/api-ref-networking-v2.html
 
 The above api document seems lost most of the content, leaving only
 port, network, subnet?

In the navigation bar on the left there is a link to the rest of the Neutron
API, which is implemented as extensions:
http://developer.openstack.org/api-ref-networking-v2-ext.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] [Cinderclient] All patches getting -1 on Cinderclient

2015-04-13 Thread Gorka Eguileor
Hi all,

Currently all patches in Cinderclient are getting -1 from Jenkins
because gate-tempest-dsvm-neutron-src-python-cinderclient-juno is
failing.

I opened an LP bug [1] on this, but basically the issue comes from Heat's
requirements cap on Cinderclient [2] to an upper bound of 1.1.1, while the
current version is reported as 1.1.1.post100.
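
(For illustration: with a PEP 440 aware pip/setuptools, a .post release sorts
above its base version, so it falls outside such a cap. The snippet below only
demonstrates that ordering; it is not part of the actual gate job.)

import pkg_resources

new = pkg_resources.parse_version('1.1.1.post100')
cap = pkg_resources.parse_version('1.1.1')
print(new > cap)    # True: the post-release is newer than the cap
print(new <= cap)   # False: so an upper bound of 1.1.1 rejects it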

So if you're getting -1 from that job, remember it's not you. ;-)


Cheers,
Gorka.

[1] https://bugs.launchpad.net/tempest/+bug/1442086
[2] https://github.com/openstack/heat/blob/stable/juno/requirements.txt#L25


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Andrey Skedzinskiy for fuel-qa(devops) core

2015-04-13 Thread Vladimir Kuklin
+1

On Mon, Apr 13, 2015 at 11:37 AM, Alexander Kislitsky 
akislit...@mirantis.com wrote:

 Andrey shows great attention to detail. +1 for him.

 On Mon, Apr 13, 2015 at 11:22 AM, Anastasia Urlapova 
 aurlap...@mirantis.com wrote:

 Guys,
 I would like to nominate Andrey Skedzinskiy[1] for
 fuel-qa[2]/fuel-devops[3] core team.

 Andrey is one of the strongest reviewers, under his watchful eye are such
 features as:
 - upgrade/rollback master node
 - collect usage information
 - OS patching
 - UI tests
 and others

 Please vote for Andrey!


 Nastya.

 [1]http://stackalytics.com/?project_type=stackforge&user_id=asledzinskiy
 [2]https://github.com/stackforge/fuel-qa
 [3]https://github.com/stackforge/fuel-devops

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Sebastian Kalinowski for fuel-web/python-fuelclient core

2015-04-13 Thread Dmitry Pyzhov
+1

On Mon, Apr 13, 2015 at 2:07 PM, Evgeniy L e...@mirantis.com wrote:

 +1

 On Fri, Apr 10, 2015 at 1:35 PM, Roman Prykhodchenko m...@romcheg.me
 wrote:

 +1. Sebastian does a great job in reviews!

  On 10 Apr 2015, at 12:05, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:
 
  Hi Fuelers,
 
   I'd like to nominate Sebastian Kalinowski for both the fuel-web-core
   [1] and python-fuelclient-core [2] teams. Sebastian does really good
   reviews with detailed feedback and he's a regular participant in
   IRC. I believe that having him among the cores will increase our
   overall performance.
 
  Fuel Cores, please reply back with +1/-1.
 
  Thanks,
  Igor
 
   [1]:
  http://stackalytics.com/?project_type=stackforge&module=fuel-web&release=kilo
   [2]:
  http://stackalytics.com/?project_type=stackforge&module=python-fuelclient&release=kilo
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3] IPAM alternate refactoring

2015-04-13 Thread Pavel Bondar
Hi,

I did some investigation on the topic [1] and I see several issues along
the way.

1. The plugin's create_port() is wrapped in a top-level transaction for
the create floating ip case [2], so it becomes more complicated to do IPAM
calls outside the main db transaction.

- for the create floating ip case the transaction is initialized at the
create_floatingip level:
create_floatingip(l3_db)->create_port(plugin)->create_port(db_base)
So the IPAM call should be added into create_floatingip to be outside the
db transaction

- for the usual port create the transaction is initialized at the plugin's
create_port level, and John's change [1] covers this case:
create_port(plugin)->create_port(db_base)

The create floating ip work-flow involves calling the plugin's create_port,
so the IPAM code inside of it should be executed only when it is not
wrapped in a top-level transaction.

2. There is an open question about error handling.
Should we use taskflow to manage IPAM calls to external systems?
Or is a simple exception-based model enough to handle rollback actions on
third-party systems in case the main db transaction fails?
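
For illustration, a minimal sketch of the exception-based approach (the
names below are made up for the example and are not actual Neutron or IPAM
driver code; it only shows the ordering and rollback pattern):

import contextlib

class FakeIpamDriver(object):
    """Stand-in for an external IPAM backend."""
    def allocate(self, subnet_id):
        print('external allocation on %s' % subnet_id)
        return '10.0.0.5'

    def deallocate(self, subnet_id, ip):
        print('rolling back external allocation of %s' % ip)

@contextlib.contextmanager
def db_transaction():
    """Stand-in for the main db transaction (e.g. session.begin())."""
    yield

def create_port(ipam, subnet_id):
    # 1. Talk to the external IPAM system *before* opening the db transaction.
    ip = ipam.allocate(subnet_id)
    try:
        # 2. Do the db work inside the (now shorter) transaction.
        with db_transaction():
            pass  # insert port + ip allocation rows here
    except Exception:
        # 3. If the db transaction fails, undo the external allocation.
        ipam.deallocate(subnet_id, ip)
        raise
    return ip

create_port(FakeIpamDriver(), 'subnet-1')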

[1] https://review.openstack.org/#/c/172443/
[2] neutron/db/l3_db.py: line 905

Thanks,
Pavel

On 10.04.2015 21:04, openstack-dev-requ...@lists.openstack.org wrote:
 L3 Team,
 
 I have put up a WIP [1] that provides an approach that shows the ML2 
 create_port method refactored to use the IPAM driver prior to initiating the 
 database transaction. Details are in the commit message - this is really just 
 intended to provide a strawman for discussion of the options. The actual 
 refactor here is only about 40 lines of code.
 
 [1] https://review.openstack.org/#/c/172443/
 
 
 Thanks,
 John


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][barbican] default certificate manager

2015-04-13 Thread Ihar Hrachyshka

On 04/10/2015 09:18 PM, Brandon Logan wrote:
 Hi Ihar, I'm not against the lazy loading solution, just wondering
 what the real issue is here.  Is your problem with this that
 python-barbicanclient needs to be in the requirements.txt?  Or is
 the problem that v1 will import it even though it isn't used?
 

I package neutron for RDO, so I use requirements.txt as a suggestion.
My main problem was that python-barbicanclient was not packaged for
RDO when I started looking into the issue, but now that it's packaged
in Fedora [1], the issue is not that significant to me. Of course it's
a wasted dependency installed for nothing (plus its own dependencies),
but that's not a disaster, and if upstream team thinks it's the right
thing to do, let it be so, and I'm happy to abandon the change.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1208454

/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Proposed Liberty release schedule

2015-04-13 Thread Thierry Carrez
Hi everyone,

Although we'll discuss in Vancouver future changes in the release
schedule/model, for Liberty we'll still use a default 6-month cycle
with intermediary milestones.

Looking at the date for the Tokyo summit (Oct 27-30), that leaves two
options for the Liberty release date: Oct 15 or Oct 8. October 8 would
result in a *very* short cycle (23 weeks), so I think October 15 is the
best option.

Working backward from there, that would place liberty-3 (feature freeze)
at September 3. We could consider making the pre-release period one week
shorter, but we made full use of the 6 weeks we have between FF and
release in the last few cycles, so I think it makes more sense to keep that.

Working backward again, that places liberty-2 at July 30 and liberty-1
either June 18 or June 25. June 18 might be a bit too close to the
summit, and therefore a bit useless, so I guess June 25 is slightly better.

In summary, here is the proposed Liberty common schedule:

liberty-1: June 25th
liberty-2: July 30th
liberty-3: September 3rd
final release: October 15th

Let me know if you see issues with it, like crazy holidays somewhere
that would ruin it.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Is there any way to put the driver backend error message to the horizon

2015-04-13 Thread Alex Meade
I think there is a lot to discuss here and I would love to push for a
solution implemented in Liberty. I have a proposed summit session on this
topic (Asynchronous Error Reporting). I also discussed this briefly at the
Kilo summit. I will work on formalizing some of these ideas and hopefully
we can pick a path forward at the summit.

Keep the discussion going :) I will try to organize everyone's thoughts.

-Alex

On Mon, Apr 13, 2015 at 10:00 AM, Erlon Cruz sombra...@gmail.com wrote:

 I like Duncan's idea. To have a dash in horizon where admin can see error
 events. It can hide backend details from tenants and  would save the time
 of browsing through logs seeking for the operations that  caused errors
 (the request id also should be logged in the metadata to allow further
 investigation). We have notice this problem while ago, and at the time we
 found this bug[1] about the same problem.

 [1] https://bugs.launchpad.net/horizon/+bug/1352516


 On Fri, Apr 10, 2015 at 5:26 PM, gordon chung g...@live.ca wrote:

  I'd say events are *more* useful in that workflow, not less, as long as
  they contain enough context. For example, the user creates a volume,
  tries to attach it which fails for some config error, so the user
  deletes it. With an event based model, the admin now has an error event
  in their queue. If we used a db field then the error status is
  potentially revived by the successful delete.

 +1

 Nova currently emits a good set of events and errors and we've found it
 especially useful to debug / do postmortem analysis by collecting these
 notifications and being able to view the entire workflow. we've found quite
 a few occasions where the error popups presented in Horizon are not the
 real error but just the last/wrapped error.

 there are various consumers that already collate these error
 notifications from Nova and i don't think it's much of a change if any to
 collect error notifications from Cinder. i don't think there's any change
 from Ceilometer POV -- just publish to error topic.

 cheers,

 gord

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Sebastian Kalinowski for fuel-web/python-fuelclient core

2015-04-13 Thread Sergii Golovatiuk
+1

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Mon, Apr 13, 2015 at 1:09 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 +1

 On Mon, Apr 13, 2015 at 2:07 PM, Evgeniy L e...@mirantis.com wrote:

 +1

 On Fri, Apr 10, 2015 at 1:35 PM, Roman Prykhodchenko m...@romcheg.me
 wrote:

 +1. Sebastian does a great job in reviews!

  On 10 Apr 2015, at 12:05, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:
 
  Hi Fuelers,
 
   I'd like to nominate Sebastian Kalinowski for both the fuel-web-core
   [1] and python-fuelclient-core [2] teams. Sebastian does really good
   reviews with detailed feedback and he's a regular participant in
   IRC. I believe that having him among the cores will increase our
   overall performance.
 
  Fuel Cores, please reply back with +1/-1.
 
  Thanks,
  Igor
 
   [1]:
  http://stackalytics.com/?project_type=stackforge&module=fuel-web&release=kilo
   [2]:
  http://stackalytics.com/?project_type=stackforge&module=python-fuelclient&release=kilo
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Several nominations for fuel project cores

2015-04-13 Thread Dmitry Pyzhov
Hi,

1) I want to nominate Vladimir Sharshov to fuel-astute core. We badly need
more core reviewers here. At the moment Vladimir is one of the main
contributors and reviewers in astute.

2) I want to nominate Alexander Kislitsky to fuel-stats core. He is the
lead of this feature and one of the main authors in this repo.

3) I want to nominate Dmitry Shulyak to fuel-web and fuel-ostf cores. He is
one of the main contributors and reviewers in both repos.

Core reviewers, please reply with +1/-1 for each nomination.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [grenade] module upgrade refactor progress

2015-04-13 Thread Sean Dague
While we now have devstack external plugins, grenade (our upgrade
testing framework) was really monolithic. It grew out of a last minute
set of test scripts for Folsom that discovered a number of our database
migrations didn't work with real data in them, and that nova compute had
the annoying habit of killing off VMs when it went down. It has grown in
scope since then, but very organically, and not always clearly at times.

The first step in making this pluggable externally is separating out
everything that's really global vs. what's per service.

That was mostly done last week (with one important part missing,
resource survival testing). The top of the current unmerged stack is
here - https://review.openstack.org/#/c/172648/

== New Structure ==

The crux of this is that all the project specific code now lives in:

grenade/
projects/
10_keystone/.
20_ceilometer/
30_swift/


The current (in flux) interface is as follows:

* settings - similar to the devstack plugin, this is a place for initial
setup. So far this has been useful to register things you'd like grenade
to do for you. For instance

 more projects/10_keystone/settings
register_project_for_upgrade keystone
register_db_to_save keystone

Tells us we should register this directory for upgrade (it does a little
magic when it does that). And to save off a database.

* upgrade.sh - the service upgrade script, which is expected to upgrade &
restart the service. It is basically what upgrade-$foo was previously.
In the current patch stream ``upgrade.sh`` is also responsible for doing
a service sanity check once done.

The following functions are provided to help with that:

- ensure_services_started
- ensure_logs_exist

The project also supports a local from-juno/ from-kilo/ within-juno/
directory structure just like we did before. It's just in the service
directory.

* shutdown.sh - the service down script, which is also responsible for
doing a sanity check that the service is actually down.

The following functions are provided to help with that:

- ensure_services_stopped


== Resource Survival ==

This is still in flight, and here is a preview of where this is headed
this week. One of the important things that grenade does is ensure that
resources (like functioning VMs) survive the upgrade unscathed. Mostly
because, once upon a time they did not.

This started as a simple shell script. That broke at some point and no
one noticed (though, we didn't have regressions, so that's at least
something). Last summer we rebuilt that tool as python using the Tempest
clients (javelin2). As that was ending we realized that basically we'd
just recreated ansible in the small (our yaml file and theirs are way
too close to assume otherwise). This also meant we created a new
coupling with Tempest, and a new global coupling of 1 tool that needed
to understand all projects. So we created a new bottleneck.

Grenade is going to get out of the business of dictating a tool. Instead
it's going to dictate an interface:

It will look something like this (exact names in flux, we'll see how the
code evolves).

resources.sh [create|verify_noapi|verify|destroy]

- pre shutdown
   - create - make some stuff that we think might not survive upgrades
(i.e. more than just db records)
   - verify - make sure that stuff is working

- post shutdown
   - verify_noapi - make sure that stuff is working, with checks that
work without any API services up.

- post upgrade
  - verify - make sure stuff is still running

- post grenade
  - destroy - delete everything so we don't leave crud everywhere

The verify_noapi is currently a hole in our testing, and something Clark
brought up in Darmstadt last summer. It's a good hole to fill.

There will also be some convenience functions provided to store/fetch
persistent data so that grenade can keep track of things like instance
ids / ip addresses and such for resource scripts.

== Upgrade Order ==

This remains one of the last sticking points. Today our upgrade.sh
iterates every project in a specific order and does both the upgrade and
restart at the same time.

The good thing about this is it is simple, and more closely follows a
'rolling-ish' upgrade pattern. The problem is dependency management.
Especially when we talk about libraries from one project injecting into
others that aren't in requirements.txt. Like ceilometermiddleware,
ironicclient.

I'm starting to think we should upgrade / restart as separate steps,
because it will largely get rid of the dependency ordering issues. But
that's up for grabs.

== External API ==

An external plugin definition, similar to the devstack one, is coming.
But not until the rest of this settles out. My hope is it will exist by
Vancouver so we can do an External Plugins in Devstack / Grenade Design
Summit session there. Both as a forum to ask questions about the
existing structure, as well as a discussion of what should move into
external plugins for both 

Re: [openstack-dev] [Fuel] Nominate Andrey Skedzinskiy for fuel-qa(devops) core

2015-04-13 Thread Sergey Vasilenko
+1
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Andrey Skedzinskiy for fuel-qa(devops) core

2015-04-13 Thread Sergii Golovatiuk
Strong +1

Nastya forgot to mention Andrey's participation in the Ubuntu 14.04 feature.
With Andrey's help the feature went smooth and easy ;)


--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Mon, Apr 13, 2015 at 12:37 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 +1

 On Mon, Apr 13, 2015 at 11:37 AM, Alexander Kislitsky 
 akislit...@mirantis.com wrote:

 Andrey shows great attention to detail. +1 for him.

 On Mon, Apr 13, 2015 at 11:22 AM, Anastasia Urlapova 
 aurlap...@mirantis.com wrote:

 Guys,
 I would like to nominate Andrey Skedzinskiy[1] for
 fuel-qa[2]/fuel-devops[3] core team.

 Andrey is one of the strongest reviewers, under his watchful eye are
 such features as:
 - upgrade/rollback master node
 - collect usage information
 - OS patching
 - UI tests
 and others

 Please vote for Andrey!


 Nastya.

 [1]http://stackalytics.com/?project_type=stackforge&user_id=asledzinskiy
 [2]https://github.com/stackforge/fuel-qa
 [3]https://github.com/stackforge/fuel-devops


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 35bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] [Murano] Mistral devstack installation is failing in murano gate job

2015-04-13 Thread Nikolay Makhotkin

 We are facing an issue with Mistral devstack installation in our gate job
 testing murano-congress-mistral integration (policy enforcement) [1] .
 Mistral devstack scripts are failing with following import error [2]


Hi, Filip!

Recently Mistral has moved to the new YAQL, and it seems this dependency is
missing (yaql 1.0, currently yaql 1.0.0b2).

I think the root of the problem is that Murano and Mistral have different yaql
versions installed.

-- 
Best Regards,
Nikolay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Liberty specs are now open

2015-04-13 Thread Matthew Gilliard
 Dumb question from me, is there an easy way to get a view that filters out 
 specs that haven't been re-submitted against the liberty directory?

You can use a regex in the files: filter, so I think you mean:

https://review.openstack.org/#/q/status:open+project:openstack/nova-specs+file:%255E.*liberty.*,n,z

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Is there any way to put the driver backend error message to the horizon

2015-04-13 Thread Erlon Cruz
I like Duncan's idea of having a dashboard in Horizon where the admin can
see error events. It can hide backend details from tenants and would save
the time of browsing through logs looking for the operations that caused
errors (the request id should also be logged in the metadata to allow
further investigation). We noticed this problem a while ago, and at the
time we found this bug [1] about the same problem.

[1] https://bugs.launchpad.net/horizon/+bug/1352516


On Fri, Apr 10, 2015 at 5:26 PM, gordon chung g...@live.ca wrote:

  I'd say events are *more* useful in that workflow, not less, as long as
  they contain enough context. For example, the user creates a volume,
  tries to attach it which fails for some config error, so the user
  deletes it. With an event based model, the admin now has an error event
  in their queue. If we used a db field then the error status is
  potentially revived by the successful delete.

 +1

 Nova currently emits a good set of events and errors and we've found it
 especially useful to debug / do postmortem analysis by collecting these
 notifications and being able to view the entire workflow. we've found quite
 a few occasions where the error popups presented in Horizon are not the
 real error but just the last/wrapped error.

 there are various consumers that already collate these error notifications
 from Nova and i don't think it's much of a change if any to collect error
 notifications from Cinder. i don't think there's any change from Ceilometer
 POV -- just publish to error topic.

 cheers,

 gord

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Andrey Skedzinskiy for fuel-qa(devops) core

2015-04-13 Thread Evgeniy L
+1

On Mon, Apr 13, 2015 at 1:37 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 +1

 On Mon, Apr 13, 2015 at 11:37 AM, Alexander Kislitsky 
 akislit...@mirantis.com wrote:

 Andrey shows great attention to detail. +1 for him.

 On Mon, Apr 13, 2015 at 11:22 AM, Anastasia Urlapova 
 aurlap...@mirantis.com wrote:

 Guys,
 I would like to nominate Andrey Skedzinskiy[1] for
 fuel-qa[2]/fuel-devops[3] core team.

 Andrey is one of the strongest reviewers, under his watchful eye are
 such features as:
 - upgrade/rollback master node
 - collect usage information
 - OS patching
 - UI tests
 and others

 Please vote for Andrey!


 Nastya.

 [1]http://stackalytics.com/?project_type=stackforge&user_id=asledzinskiy
 [2]https://github.com/stackforge/fuel-qa
 [3]https://github.com/stackforge/fuel-devops


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 35bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] [Murano] Mistral devstack installation is failing in murano gate job

2015-04-13 Thread Filip Blaha

Hello

We are facing an issue with the Mistral devstack installation in our gate
job testing murano-congress-mistral integration (policy enforcement) [1].
Mistral devstack scripts are failing with the following import error [2]:


2015-04-12 14:06:25.236 | Traceback (most recent call last):
2015-04-12 14:06:25.236 |   File "/opt/stack/new/mistral/tools/sync_db.py", line 20, in <module>
2015-04-12 14:06:25.236 |     from mistral.services import action_manager
2015-04-12 14:06:25.236 |   File "/opt/stack/new/mistral/mistral/services/action_manager.py", line 25, in <module>
2015-04-12 14:06:25.236 |     from mistral.services import actions
2015-04-12 14:06:25.236 |   File "/opt/stack/new/mistral/mistral/services/actions.py", line 17, in <module>
2015-04-12 14:06:25.236 |     from mistral.workbook import parser as spec_parser
2015-04-12 14:06:25.236 |   File "/opt/stack/new/mistral/mistral/workbook/parser.py", line 20, in <module>
2015-04-12 14:06:25.236 |     from mistral.workbook.v2 import actions as actions_v2
2015-04-12 14:06:25.236 |   File "/opt/stack/new/mistral/mistral/workbook/v2/actions.py", line 18, in <module>
2015-04-12 14:06:25.236 |     from mistral.workbook.v2 import base
2015-04-12 14:06:25.236 |   File "/opt/stack/new/mistral/mistral/workbook/v2/base.py", line 15, in <module>
2015-04-12 14:06:25.236 |     from mistral.workbook import base
2015-04-12 14:06:25.236 |   File "/opt/stack/new/mistral/mistral/workbook/base.py", line 23, in <module>
2015-04-12 14:06:25.236 |     from mistral import expressions as expr
2015-04-12 14:06:25.236 |   File "/opt/stack/new/mistral/mistral/expressions.py", line 22, in <module>
2015-04-12 14:06:25.237 |     from yaql.language import exceptions as yaql_exc
2015-04-12 14:06:25.237 | ImportError: No module named language


Does anyone know what could be the cause?

[1] 
https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/murano.yaml#L42
[2] 
http://logs.openstack.org/04/171504/6/check/gate-murano-congress-devstack-dsvm/3b2d7e1/logs/devstacklog.txt.gz


Regards
Filip

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] eventlet 0.17.3 is now fully Python 3 compatible

2015-04-13 Thread Jeremy Stanley
On 2015-04-13 04:03:49 -0400 (-0400), Victor Stinner wrote:
 Great. Did you notice a performance regression?

Nope. Worth noting, we implemented it primarily for its lack of
compiled extensions, and to a lesser extent because it supports Python 3.x.
I suspect if we do later run into any unexpected performance
issues... well, it's pure Python. We have lots of people who can
help.

 Mike wrote that PyMySQL is much slower than MySQL-Python.

I don't recall him saying that specifically. Also last I recall he
admitted he hadn't actually tested under the sorts of load we would
drive in a production OpenStack service--merely performed some
fairly artificial benchmarks looping known-expensive operations that
may not ultimately reflect places in our codebase where introducing
any sort of slowdown would be noticeable compared to other
operations being performed.

Chances are the Project Infrastructure systems will continue
incrementally switching to PyMySQL mainly because it's easier to
install and works on a broader variety of platforms.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] [Murano] Mistral devstack installation is failing in murano gate job

2015-04-13 Thread Serg Melikyan
Hi Nikolay & Filip,

indeed, the root cause of the issue is that Murano & Mistral use different
versions of the yaql library. Murano installs yaql 0.2.4, which overrides
the 1.0.0b2 already installed and expected by Mistral.

We decided that we are not going to switch to yaql 1.0.0 in Kilo
since we have already finished Kilo development and are working on bug
fixes and releasing the RC. This gate can only be fixed if Mistral
reverts 1.0.0 support in Kilo :'(

Nikolay, what do you think about migrating to YAQL 1.0.0 in the next
release? I know that it was me who proposed that the Mistral team adopt
yaql 1.0.0, and I am sorry, I didn't realize all the consequences of
Mistral moving to yaql 1.0.0 while the Murano team lives with yaql 0.2.4.

We need to work on packaging and supporting yaql in Ubuntu/CentOS in
order to add this library to the global requirements and to avoid this
kind of issue in the future.

On Mon, Apr 13, 2015 at 3:58 PM, Nikolay Makhotkin
nmakhot...@mirantis.com wrote:

 We are facing an issue with Mistral devstack installation in our gate job 
 testing murano-congress-mistral integration (policy enforcement) [1] . 
 Mistral devstack scripts are failing with following import error [2]


 Hi, Filip!

 Recently Mistral has moved to new YAQL, and it seems this dependency is 
 missed (yaql 1.0, currently yaql 1.0.0b2)

 I think the root of problem is that Murano and Mistral have different yaql 
 versions installed.

 --
 Best Regards,
 Nikolay

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3] IPAM alternate refactoring

2015-04-13 Thread Carl Baldwin
On Mon, Apr 13, 2015 at 8:44 AM, Pavel Bondar pbon...@infoblox.com wrote:
 Hi,

 I made some investigation on the topic[1] and see several issues on this
 way.

 1. Plugin's create_port() is wrapped up in top level transaction for
 create floating ip case[2], so it becomes more complicated to do IPAM
 calls outside main db transaction.

Is it time to look at breaking the bond between a floating IP and a port?
I think the only reason that a port is created to back a floating IP
is IPAM, because IP addresses are only reserved by creating a port
with them.

I'm sure this won't be very easy but I think it is worth a look to see
what will be involved.

 - for create floating ip case transaction is initialized on
 create_floatingip level:
 create_floatingip(l3_db)-create_port(plugin)-create_port(db_base)
 So IPAM call should be added into create_floatingip to be outside db
 transaction

Ditto.

 - for usual port create transaction is initialized on plugin's
 create_port level, and John's change[1] cover this case:
 create_port(plugin)-create_port(db_base)

 Create floating ip work-flow involves calling plugin's create_port,
 so IPAM code inside of it should be executed only when it is not wrapped
 into top level transaction.

 2. It is opened question about error handling.
 Should we use taskflow to manage IPAM calls to external systems?
 Or simple exception based model is enough to handle rollback actions on
 third party systems in case of failing main db transaction.

Yes, error handling could be problematic.  I think there will always
be the possibility of having an inconsistent state between the two
systems.  We should consider such failure modes and have a way to
clean up.  Is this cleanup the sort of thing that taskflow provides?

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3] IPAM alternate refactoring

2015-04-13 Thread Carl Baldwin
Have we found the last of them?  I wonder.  I suppose any higher level
service like a router that needs to create ports under the hood (under
the API) will have this problem.  The DVR fip namespace creation comes
to mind.  It will create a port to use as the external gateway port
for that namespace.  This could spring up in the context of another
create_port, I think (a VM gets a new port bound to a compute host where a
fip namespace needs to spring into existence).

Carl

On Mon, Apr 13, 2015 at 10:24 AM, John Belamaric
jbelama...@infoblox.com wrote:
 Thanks Pavel. I see an additional case in L3_NAT_dbonly_mixin, where it
 starts the transaction in create_router, then eventually gets to
 create_port:

 create_router (starts tx)
   -self._update_router_gw_info
   -_create_gw_port
   -_create_router_gw_port
   -create_port(plugin)

 So that also would need to be unwound.

 On 4/13/15, 10:44 AM, Pavel Bondar pbon...@infoblox.com wrote:

Hi,

I made some investigation on the topic[1] and see several issues on this
way.

1. Plugin's create_port() is wrapped up in top level transaction for
create floating ip case[2], so it becomes more complicated to do IPAM
calls outside main db transaction.

- for create floating ip case transaction is initialized on
create_floatingip level:
create_floatingip(l3_db)-create_port(plugin)-create_port(db_base)
So IPAM call should be added into create_floatingip to be outside db
transaction

- for usual port create transaction is initialized on plugin's
create_port level, and John's change[1] cover this case:
create_port(plugin)-create_port(db_base)

Create floating ip work-flow involves calling plugin's create_port,
so IPAM code inside of it should be executed only when it is not wrapped
into top level transaction.

2. It is opened question about error handling.
Should we use taskflow to manage IPAM calls to external systems?
Or simple exception based model is enough to handle rollback actions on
third party systems in case of failing main db transaction.

[1] https://review.openstack.org/#/c/172443/
[2] neutron/db/l3_db.py: line 905

Thanks,
Pavel

On 10.04.2015 21:04, openstack-dev-requ...@lists.openstack.org wrote:
 L3 Team,

 I have put up a WIP [1] that provides an approach that shows the ML2
create_port method refactored to use the IPAM driver prior to initiating
the database transaction. Details are in the commit message - this is
really just intended to provide a strawman for discussion of the
options. The actual refactor here is only about 40 lines of code.

 [1] https://review.openstack.org/#/c/172443/


 Thanks,
 John


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting minutes - 04/13/2015

2015-04-13 Thread Renat Akhmerov
Thanks for joining us today at #openstack-mistral

Meeting minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-04-13-16.20.html
 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-04-13-16.20.html
Meeting log: 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-04-13-16.20.log.html
 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-04-13-16.20.log.html

The next meeting will be held on Apr 20, same place and time.

Renat Akhmerov
@ Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting reminder - 04/13/2015

2015-04-13 Thread Renat Akhmerov
Hi,

This is a reminder about our team meeting today at 16.20 UTC at 
#openstack-meeting.

Agenda:
Review AIs
Current Status
RC1 progress
Open Discussion

Thanks

Renat Akhmerov
@ Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][glanceclient][cinderclient] Problems with juno check jobs

2015-04-13 Thread stuart . mclaren

Hi Gorka,

Glance is seeing something very similar [3].

I've updated the two bugs ([1],[3]) with some extra info.
Both issues seem to have started around April 7th.

Would anyone from infra be able to take a quick look?

Thanks,

-Stuart

[3] https://bugs.launchpad.net/glance/+bug/1442682


Date: Mon, 13 Apr 2015 13:35:42 +0200
From: Gorka Eguileor gegui...@redhat.com
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Heat] [Cinderclient] All patches getting -1
on  Cinderclient
Message-ID: 20150413113541.ga23...@mail.corp.redhat.com
Content-Type: text/plain; charset=us-ascii

Hi all,

Currently all patches in Cinderclient are getting -1 from Jenkins
because gate-tempest-dsvm-neutron-src-python-cinderclient-juno is
failing.

I opened a LP bug [1] on this, but basically the issue comes from Heat's
requirements cap on Cinderclient [2] to an upper bound of 1.1.1 when
current version is reported as 1.1.1.post100.

So if you're getting -1 from that job, remember it's not you. ;-)


Cheers,
Gorka.

[1] https://bugs.launchpad.net/tempest/+bug/1442086
[2] https://github.com/openstack/heat/blob/stable/juno/requirements.txt#L25


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][database][quotas] reservations table ??

2015-04-13 Thread Attila Fazekas




- Original Message -
 From: Kevin L. Mitchell kevin.mitch...@rackspace.com
 To: openstack-dev@lists.openstack.org
 Sent: Friday, April 10, 2015 5:47:26 PM
 Subject: Re: [openstack-dev] [nova][database][quotas] reservations table ??
 
 On Fri, 2015-04-10 at 02:38 -0400, Attila Fazekas wrote:
   I noticed the nova DB has a reservations table with an expire field (+24h)
   and a periodic task
   in the scheduler (60 sec) for expiring the otherwise not deleted records [2].
   
   Both the table and the observed operations are strange.
   
   What are this table and its operations trying to solve?
   Why is it needed?
   Why was this solution chosen?
 
 It might help to know that this is reservations for the quota system.
 The basic reason that this exists is because of parallelism: say the
 user makes a request to boot a new instance, and that new instance would
 fill their quota.  Nova begins processing the request, but while it's
 doing so, the user makes a second (or third, fourth, fifth, etc.)
 request.  With a reservation, we can count the first request against
 their quota and reject the extra requests; without a reservation, we
 have no way of knowing that nova is already processing a request, and so
 could allow the user to vastly exceed their quota.
 
Just the very existence of the `expire` makes the solution very suspicious.

As I see it, the operations do not ensure parallel-safe quota enforcement
at resource creation and are based on stale data (observed with wireshark).

It is based on data originating from a different transaction,
even without SELECT .. WITH SHARED LOCK.

When moving the delta to/from reservations the service puts a lock
(SELECT .. FOR UPDATE) on all quota_usages rows related to the same tenant;
this is the only safety mechanism I saw.
Alone it is not enough.

No quota-related table is touched in the same transaction
in which the instance state is changed (or the instance is created). :(

---
The reservations table is not really needed.

What is really needed is doing the quota_usages changes
and the resource state changes in the same transaction!

Transactions are all-or-nothing constructs;
nothing can happen that needs any `expire` mechanism.

The transaction really needs to ensure it does the state change.
That can mean just reading the row with SELECT .. FOR UPDATE
for an existing record (for example: an instance).

The transaction also needs to ensure the quota check happens
on non-stale data - SELECT .. WITH SHARED LOCK for
- quota limit queries
- calculating the actual number of things, or just reading the
  values from quota_usages

In most cases, the quota check and update can be merged into a single
UPDATE statement, and it can happen fully on the DB side, without the
service actually fetching any quota-related information.

The mysql UPDATE statement with the right expressions and sub-queries
automatically places the minimum required locks and does the update
only when allowed.

The number of changed rows returned by the UPDATE indicates whether the
quota was successfully allocated (passed the check) or not.

When it is not successful, just ROLLBACK and tell the user something about
the `Out of Quota` issue.
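
To make the idea concrete, here is a toy, self-contained sketch of that
single-UPDATE pattern. The schema is deliberately simplified and is not the
real nova tables; SQLAlchemy and SQLite are used here only to keep the
example short and runnable:

import sqlalchemy as sa

engine = sa.create_engine('sqlite://')
meta = sa.MetaData()
quota_usages = sa.Table(
    'quota_usages', meta,
    sa.Column('project_id', sa.String(64), primary_key=True),
    sa.Column('resource', sa.String(64), primary_key=True),
    sa.Column('in_use', sa.Integer, nullable=False),
    sa.Column('hard_limit', sa.Integer, nullable=False))
meta.create_all(engine)

with engine.begin() as conn:
    conn.execute(quota_usages.insert().values(
        project_id='demo', resource='instances', in_use=9, hard_limit=10))

def consume(conn, project, resource, delta):
    # Single UPDATE: bump the counter only if it stays within the limit.
    # The rowcount tells us whether the quota check passed.
    stmt = quota_usages.update().where(sa.and_(
        quota_usages.c.project_id == project,
        quota_usages.c.resource == resource,
        quota_usages.c.in_use + delta <= quota_usages.c.hard_limit,
    )).values(in_use=quota_usages.c.in_use + delta)
    return conn.execute(stmt).rowcount == 1

with engine.begin() as conn:
    print(consume(conn, 'demo', 'instances', 1))  # True: 9 -> 10
    print(consume(conn, 'demo', 'instances', 1))  # False: out of quota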
  
It is recommended to put the quota check close to the end of the transaction,
in order to minimize the lock hold time on the quota_usages table.

In the end we will not lock quota_usages twice (as we do now),
we do not leave behind 4 virtually deleted rows in a `bonus` table,
we do not use one extra transaction and 8 extra UPDATEs per instance create,
and consistency is ensured.


  PS.:
  Is the uuid in the table referenced by anything?
 
 Once the operation that allocated the reservation completes, it either
 rolls back the reservation (in the case of failure) or it commits the
 reservation (updating a cache quota usages table).  This involves
 updating the reservation table to delete the reservation, and a UUID
 helps match up the specific row.  (Or rows; most operations involve more
 than one quota and thus more than one row.)  The expiration logic is to
 deal with the case that the operation never completed because nova
 crashed in the middle, and provides a stop-gap measure to ensure that
 the usage isn't counted against the user forever.

Just to confirm, the same UUID exists only in the reservations table,
and temporarily in one worker's memory?


 --
 Kevin L. Mitchell kevin.mitch...@rackspace.com
 Rackspace
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


PS.:
The `Refresh` is also a strange thing in this context.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

[openstack-dev] [releases] tooz 0.13.2

2015-04-13 Thread Doug Hellmann
We are jubilant to announce the release of:

tooz 0.13.2: Coordination library for distributed systems.

This release is part of the stable/kilo series.

For more details, please see the git log history below and:

http://launchpad.net/python-tooz/+milestone/0.13.2

Please report issues through launchpad:

http://bugs.launchpad.net/python-tooz/

Changes in tooz 0.13.1..0.13.2
--

5362af3 2015-04-06 14:53:28 + set defaultbranch for reviews
01859a0 2015-03-25 15:01:24 +0100 Use a sentinel connection pool to manage 
failover
889f86c 2015-03-25 15:01:13 +0100 fix mysql driver url parsing
49220c6 2015-03-25 15:00:28 +0100 Avoid re-using the same timeout for further 
watcher ops

Diffstat (except docs and test files)
-

.gitreview   |  1 +
setup-mysql-env.sh   | 20 --
tooz/drivers/mysql.py|  4 +--
tooz/drivers/redis.py| 23 
5 files changed, 110 insertions(+), 10 deletions(-)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][glanceclient][cinderclient] Problems with juno check jobs

2015-04-13 Thread Feodor Tersin
I proposed https://review.openstack.org/#/c/172522/ to fix this for all
projects whose versions are restricted by global requirements.

On Mon, Apr 13, 2015 at 5:55 PM, stuart.mcla...@hp.com wrote:

 Hi Gorka,

 Glance is seeing something very similar [3].

 I've updated the two bugs ([1],[3]) with some extra info.
 Both issues seem to have started around April 7th.

 Would anyone from infra be able to take a quick look?

 Thanks,

 -Stuart

 [3] https://bugs.launchpad.net/glance/+bug/1442682

  Date: Mon, 13 Apr 2015 13:35:42 +0200
 From: Gorka Eguileor gegui...@redhat.com
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Heat] [Cinderclient] All patches getting -1
 on  Cinderclient
 Message-ID: 20150413113541.ga23...@mail.corp.redhat.com
 Content-Type: text/plain; charset=us-ascii

 Hi all,

 Currently all patches in Cinderclient are getting -1 from Jenkins
 because gate-tempest-dsvm-neutron-src-python-cinderclient-juno is
 failing.

 I opened a LP bug [1] on this, but basically the issue comes from Heat's
 requirements cap on Cinderclient [2] to an upper bound of 1.1.1 when
 current version is reported as 1.1.1.post100.

 So if you're getting -1 from that job, remember it's not you. ;-)


 Cheers,
 Gorka.

 [1] https://bugs.launchpad.net/tempest/+bug/1442086
 [2] https://github.com/openstack/heat/blob/stable/juno/
 requirements.txt#L25


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3] IPAM alternate refactoring

2015-04-13 Thread John Belamaric
Thanks Pavel. I see an additional case in L3_NAT_dbonly_mixin, where it
starts the transaction in create_router, then eventually gets to
create_port:

create_router (starts tx)
  ->self._update_router_gw_info
  ->_create_gw_port
  ->_create_router_gw_port
  ->create_port(plugin)

So that also would need to be unwound.

On 4/13/15, 10:44 AM, Pavel Bondar pbon...@infoblox.com wrote:

Hi,

I made some investigation on the topic[1] and see several issues on this
way.

1. Plugin's create_port() is wrapped up in top level transaction for
create floating ip case[2], so it becomes more complicated to do IPAM
calls outside main db transaction.

- for create floating ip case transaction is initialized on
create_floatingip level:
create_floatingip(l3_db)-create_port(plugin)-create_port(db_base)
So IPAM call should be added into create_floatingip to be outside db
transaction

- for usual port create transaction is initialized on plugin's
create_port level, and John's change[1] cover this case:
create_port(plugin)-create_port(db_base)

Create floating ip work-flow involves calling plugin's create_port,
so IPAM code inside of it should be executed only when it is not wrapped
into top level transaction.

2. It is opened question about error handling.
Should we use taskflow to manage IPAM calls to external systems?
Or simple exception based model is enough to handle rollback actions on
third party systems in case of failing main db transaction.

[1] https://review.openstack.org/#/c/172443/
[2] neutron/db/l3_db.py: line 905

Thanks,
Pavel

On 10.04.2015 21:04, openstack-dev-requ...@lists.openstack.org wrote:
 L3 Team,
 
 I have put up a WIP [1] that provides an approach that shows the ML2
create_port method refactored to use the IPAM driver prior to initiating
the database transaction. Details are in the commit message - this is
really just intended to provide a strawman for discussion of the
options. The actual refactor here is only about 40 lines of code.
 
 [1] https://review.openstack.org/#/c/172443/
 
 
 Thanks,
 John


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Is there any way to put the driver backend error message to the horizon

2015-04-13 Thread Duncan Thomas
George

What has been said is that:
1) With an async API, there is no error returned to the client on the request.
e.g. for a create, the request returns success well before the backend has
been contacted about the request. There is no path back to the client with
which to send an error.

2) Quite often there is a desire for the admin to see error messages, but
not the tenant - this is especially true for managed / public clouds.

On 13 April 2015 at 18:21, George Peristerakis gperi...@redhat.com wrote:

  Hi Liu,

 I'm not familiar with the error you are trying to show, but here's how
 Horizon typically works. In the case of cinder, we have a wrapper around
 python-cinderclient which, if the client raises an exception with a valid
 message, will by default display the exception message. The message
 can also be overridden in the translation file. So a good start is to look
 in python-cinderclient and see if you could produce a more meaningful
 message.


 Cheers.
 George


 On 10/04/15 06:16 AM, liuxinguo wrote:

 Hi,

 When we create a volume in the horizon, there may occurrs some errors at the 
 driver
 backend, and the in horizon we just see a error in the volume status.

 So is there any way to put the error information to the horizon so users can 
 know what happened exactly just from the horizon?
 Thanks,
 Liu




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][neutron] --config-dir vs. --config-file

2015-04-13 Thread Matthew Thode
We already do this somewhat in gentoo (at least for some daemon
initialization stuff) in /etc/conf.d/$DAEMON_NAME.conf.  Adding a
--config-dir option to that would be very simple.  Gentoo at least will
also make the first --config-dir option (/etc/neutron) optional, since
we have some users who would like that level of separation.

In the meantime, do we install configs to those locations by default?
I'm not seeing that as a subdir of etc in the neutron repo.

On 04/13/2015 10:25 AM, Ihar Hrachyshka wrote:
 Hi,
 
 RDO/master (aka Delorean) moved neutron l3 agent to this configuration
 scheme, configuring l3 (and vpn) agent with --config-dir [1][2][3].
 
 We also provided a way to configure neutron services without ever
 touching a single configuration file from the package [4] where each
 service has a config-dir located under
 /etc/neutron/conf.d/service-name that can be populated by *.conf
 files that will be automatically read by services during startup.
 
 All other distributions are welcome to follow the path. Please don't
 introduce your own alternative to /etc/neutron/conf.d/... directory to
 avoid unneeded platform dependent differences in deployment tools.
 
 As for devstack, it's not really feasible to introduce such a change
 there (at least from my perspective), so it's downstream only.
 
 [1]:
 https://github.com/openstack-packages/neutron/blob/f20-master/openstack-
 neutron.spec#L602
 [2]:
 https://github.com/openstack-packages/neutron/blob/f20-master/neutron-l3
 -agent.service#L8
 [3]:
 https://github.com/openstack-packages/neutron-vpnaas/blob/f20-master/ope
 nstack-neutron-vpnaas.spec#L97
 [4]: https://review.gerrithub.io/#/c/229162/
 
 Thanks,
 /Ihar
 
 On 03/13/2015 03:11 PM, Ihar Hrachyshka wrote:
 Hi all,
 
 (I'm starting a new [packaging] tag in this mailing list to reach
 out people who are packaging our software in distributions and
 whatnot.)
 
 Neutron vendor split [1] introduced situations where the set of
 configuration files for L3/VPN agent is not stable and depends on
 which packages are installed in the system. Specifically,
 fwaas_driver.ini file is now shipped in neutron_fwaas tarball
 (openstack-neutron-fwaas package in RDO), and so
 --config-file=/etc/neutron/fwaas_driver.ini argument should be
 passed to L3/VPN agent *only* when the new package with the file is
 installed.
 
 In devstack, we solve the problem by dynamically generating CLI
 arguments list based on which services are configured in
 local.conf [2]. It's not a viable approach in proper distribution
 packages though, where we usually hardcode arguments [3] in our
 service manifests (systemd unit files, in case of RDO).
 
 The immediate solution would be to use the --config-dir argument that
 oslo.config also provides, instead of --config-file, and put auxiliary
 files there [4] (those may be just symbolic links to the actual files).
 
 I initially thought to put the directory under /etc/neutron/, but
 then realized we may be interested in keeping it out of user sight
 while it only references stock (upstream) configuration files.
 
 But then a question arises: whether it's useful just for this
 particular case? Maybe there is value in using --config-dir outside
 of it? And in that case, maybe the approach should be replicated to
 other services?
 
 AFAIU --config-dir could actually be useful for configuring services.
 Instead of messing with configuration files that are shipped
 with packages (and handling .rpmnew files [5] that are generated on
 upgrade when local changes to those files are detected), users (or
 deployment/installation tools) could drop a *.conf file into
 that configuration directory, being sure their stock configuration
 file is always current and that no .rpmnew files are left to resolve
 manually.
 
 We can also use two --config-dir arguments, one for stock/upstream
 configuration files located outside /etc/neutron/, and another one
 available for population with user configuration files under
 /etc/neutron/. This is similar to how we put settings considered to
 be 'sane distro defaults' in the neutron-dist.conf file that is not
 available for modification [6][7].
 
 Of course users would still be able to set up their deployment the
 old way. In that case, nothing will change for them. So the
 approach is backwards compatible.
 
 I wonder whether the idea seems reasonable and actually useful for
 people. If so, we may want to come up with some packaging
 standards (on where to put those config-dir(s), how to name them,
 how to maintain symbolic links inside them) to avoid more work for
 deployment tools.
 
 [1]: https://blueprints.launchpad.net/neutron/+spec/core-vendor-decomposition
 [2]: http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/neutron#n393
 [3]: https://github.com/openstack-packages/neutron/blob/f20-master/neutron-l3-agent.service#L8
 [4]: https://review.gerrithub.io/#/c/218562/
 [5]:
 

Re: [openstack-dev] [packaging][neutron] --config-dir vs. --config-file

2015-04-13 Thread Ihar Hrachyshka

On 04/13/2015 05:42 PM, Matthew Thode wrote:
 We already do this somewhat in Gentoo (at least for some daemon
 initialization stuff) in /etc/conf.d/$DAEMON_NAME.conf.  Adding a
 --config-dir option to that would be very simple.  Gentoo, at least,
 will also make the first --config-dir option (/etc/neutron)
 optional, since we have some users who would like that
 level of separation.

I am not sure you want to pass the whole of /etc/neutron as --config-dir
to your services, since the directory may contain lots of files that
are irrelevant to the service in question. That's why RDO went with a
per-service directory plus the global /etc/neutron/neutron.conf.
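
For illustration, here is a minimal oslo.config sketch of how a service ends
up combining the global file with a per-service drop-in directory (the
option name and paths are illustrative, not something a package installs):

# Sketch only; the option and file paths are examples.
from oslo_config import cfg

CONF = cfg.ConfigOpts()
CONF.register_opts([cfg.StrOpt('interface_driver')])

CONF(['--config-file', '/etc/neutron/neutron.conf',
      '--config-file', '/etc/neutron/l3_agent.ini',
      '--config-dir', '/etc/neutron/conf.d/neutron-l3-agent'])

# Later sources win for the same option, and the *.conf files inside a
# --config-dir are read in sorted order, so a drop-in such as
# conf.d/neutron-l3-agent/99-local.conf can override stock defaults
# without the package-owned files ever being edited.
print(CONF.interface_driver)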

 
 In the meantime, do we install configs to those locations by
 default? I'm not seeing them as a subdir of etc in the neutron
 repo.
 

If you're asking about /etc/neutron/conf.d/, then no, so far it's RDO only.
As for the l3 agent config dirs used to load advanced services'
configuration files on demand, we just link from there to the appropriate
configuration files located in the default /etc/neutron/... locations:

https://github.com/openstack-packages/neutron/blob/f20-master/openstack-neutron.spec#L604

/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] ironic-discoverd release status update

2015-04-13 Thread Dmitry Tantsur

Hi all!

This time I'm trying to roughly follow the OpenStack release procedures, so 
ironic-discoverd just got a stable/1.1 branch, which is the equivalent of an 
RC. I'm proud to say that the upcoming 1.1.0 release (scheduled for Apr 30, 
just like the other projects) is mostly about polishing existing features 
[1]. We got a devstack plugin [2] working, so everyone can now try in-band 
inspection without too much effort. Thanks to everyone who participated!


Dmitry

[1] https://bugs.launchpad.net/ironic-discoverd/+milestone/1.1.0
[2] https://etherpad.openstack.org/p/DiscoverdDevStack

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstackclient] osc slowness

2015-04-13 Thread Boris Pavlovic
Sean,

Nice work on this. So now it's clear that the startup time of the libraries
matters.

One way to improve this is to use https://github.com/boris-42/profimp, which
allows tracing any Python import,
and then to avoid importing modules when they are not required.
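
As a hedged illustration of the second part (not how openstackclient is
actually structured), a command can defer a heavy client import until it is
really needed:

# Illustration only: defer the import so that unrelated commands never
# pay for it.  The cinderclient module is just an example of a heavy
# dependency.
_volume_client = None

def _get_volume_client():
    global _volume_client
    if _volume_client is None:
        # Imported lazily; a module-level import would run on every
        # invocation, whether or not the command touches this service.
        from cinderclient.v2 import client as cinder_client
        _volume_client = cinder_client
    return _volume_client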

Btw, I've already seen a few patches that improve performance:

  https://review.openstack.org/#/c/170851/

  https://review.openstack.org/#/c/164066/



Best regards,
Boris Pavlovic

On Mon, Apr 13, 2015 at 2:15 PM, Sean Dague s...@dague.net wrote:

 While I was working on the grenade refactor I was considering using
 openstack client for some resource creation / testing. Doing so made me
 realize that osc is sluggish. From what I can tell, due to the way it
 loads the world there is a minimum 1.5s overhead on every command
 execution. For instance, openstack server list takes a solid extra
 second over nova list in my environment.
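
 A rough way to see the pure load-the-world cost, independent of any REST
 traffic, is to time commands that make no API calls at all (just a sketch;
 it assumes both CLIs are installed and the absolute numbers vary by machine):

 # Rough sketch: --help makes no REST calls, so the difference is
 # (approximately) pure startup/import cost.
 import os
 import subprocess
 import time

 with open(os.devnull, 'w') as devnull:
     for cmd in (['openstack', '--help'], ['nova', '--help']):
         start = time.time()
         subprocess.call(cmd, stdout=devnull, stderr=devnull)
         print('%s: %.2fs' % (cmd[0], time.time() - start))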

 I wrote a little tool to figure out how much time we're spending in
 openstack client - https://review.openstack.org/#/c/172713/

 On a randomly selected dsvm-full run from master it's about 4.5 minutes.
 Now, that being said, there are a bunch of REST calls it's making, so
 it's not all OSC's fault. However, there is a lot of time lost to that
 load-the-world issue, especially when we are creating accounts.

 For instance, the create accounts section of Keystone setup:
 https://github.com/openstack-dev/devstack/blob/master/stack.sh#L968-L1016

 Now takes 3.5 minutes in master -

 http://logs.openstack.org/13/172713/1/check/check-tempest-dsvm-full/d3b0b8e/logs/devstacklog.txt.gz

 2015-04-12 12:37:40.997 | + echo_summary 'Starting Keystone'
 2015-04-12 12:41:06.833 | + echo_summary 'Configuring and starting Horizon'

 The same chunk in Icehouse took just over 1 minute -

 http://logs.openstack.org/28/165928/2/check/check-tempest-dsvm-full/f0b3e07/logs/devstacklog.txt.gz

 2015-04-10 15:59:08.699 | + echo_summary 'Starting Keystone'
 2015-04-10 16:00:00.313 | + echo_summary 'Configuring and starting Horizon'

 In master we do create a few more accounts as well; again, it's not all
 OSC, but OSC is definitely adding to it.

 A really great comparison between OSC and Keystone commands is provided
 by the ec2 user creation:

 Icehouse:

 http://logs.openstack.org/28/165928/2/check/check-tempest-dsvm-full/f0b3e07/logs/devstacklog.txt.gz#_2015-04-10_16_01_07_148

 Master:

 http://logs.openstack.org/13/172713/1/check/check-tempest-dsvm-full/d3b0b8e/logs/devstacklog.txt.gz#_2015-04-12_12_43_19_655

 The keystone versions of the commands take ~500ms, the OSC versions ~1700ms.


 So, under the current model I think we're paying a pretty high strategy
 tax in OSC use in devstack. It's adding minutes of time to a normal run.
 I don't know all the internals of OSC and what can be done to make it
 better. But I think that as a CLI we should be as responsive as
 possible; < 1s seems like it should be the target for at least all the
 keystone operations. I do think this is one of the places (like
 rootwrap) where load time is something not to ignore.

 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

