Re: [openstack-dev] mox3 WAS Re: [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2015-03-27 Thread Julien Danjou
On Fri, Mar 27 2015, Alan Pevec wrote:

 What's the status of the migration to mock? Can mox and mox3 be dropped
 from global-requirements?

I'm working on removing mox in favor of mox3 to reduce our
dependencies a bit. But it takes a lot of time, especially because almost no
reviewers care, so the patches stay in review for… ever.
Example: https://review.openstack.org/#/c/147476/

So talking about a mox-to-mock conversion is completely utopian, IMHO.
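
For readers following the thread, a minimal, self-contained sketch of the
two styles being discussed (the Driver class and lookup() helper are
invented for illustration; the mox3 calls shown are its standard
record/replay API):

    from unittest import mock

    from mox3 import mox


    class Driver(object):
        def get_info(self, name):
            raise NotImplementedError()


    def lookup(driver, name):
        return driver.get_info(name)


    def test_with_mox():
        m = mox.Mox()
        driver = Driver()
        m.StubOutWithMock(driver, 'get_info')
        driver.get_info('vm1').AndReturn({'state': 'running'})  # record
        m.ReplayAll()                                           # replay
        assert lookup(driver, 'vm1') == {'state': 'running'}
        m.VerifyAll()                                           # verify
        m.UnsetStubs()


    def test_with_mock():
        driver = Driver()
        with mock.patch.object(driver, 'get_info',
                               return_value={'state': 'running'}) as gi:
            assert lookup(driver, 'vm1') == {'state': 'running'}
            gi.assert_called_once_with('vm1')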

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info




Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Thierry Carrez
Kyle Mestery wrote:
 On Thu, Mar 26, 2015 at 7:58 PM, Russell Bryant rbry...@redhat.com wrote:
 
 On 03/26/2015 06:31 PM, Michael Still wrote:
  Hi,
 
  I thought it would be a good idea to send out a status update for the
  migration from nova-network to Neutron, as there hasn't been as much
  progress as we'd hoped for in Kilo. There are a few issues which have
  been slowing progress down.
 
 Thanks for writing up the status!
 
  First off, creating an all-encompassing turn-key upgrade is probably
  not possible. This was also never a goal of this effort -- to quote
  the spec for this work, “Consequently, the process described here is
  not a ‘one size fits all’ automated push-button tool but a series of
  steps that should be obvious to automate and customise to meet local
  needs” [1]. The variety of deployment and configuration options
  available makes a turn-key migration very hard to write, and possibly
  impossible to test. We therefore have opted for writing migration
  tools, which allow operators to plug components together in the way
  that makes sense for their deployment and then migrate using those.
 
 Yes, I'm quite convinced that it will end up being a fairly custom
 effort for virtually all deployments complex enough that just starting
 over or a cold migration isn't an option.
 
  However, talking to operators at the Ops Summit, it has become clear
  that some operators simply aren't interested in moving to Neutron --
  largely because they either aren't interested in tenant networks, or
  have corporate network environments that make deploying Neutron very
  hard.
 
 I totally get point #1: “nova-network has fewer features, but I don't
 need the rest, and nova-network is rock solid for me.”
 
 I'm curious about the second point about Neutron being more difficult to
 deploy than nova-network.  That's interesting because it actually seems
 like Neutron is more flexible when it comes to integration with existing
 networks.  Do you know any more details?  If not, perhaps those with
 that concern could fill in with some detail here?
 
  So, even if we provide migration tools, it is still likely that
  we will end up with loyal nova-network users who aren't interested in
  moving. From the Nova perspective, the end goal of all of this effort
  is to delete the nova-network code, and if we can't do that because
  some people simply don't want to move, then what is gained by putting
  a lot of effort into migration tooling?
 
 To me it comes down to the reasons people don't want to move.  I'd like
 to dig into exactly why people don't want to use Neutron.  If there are
 legitimate reasons why nova-network will work better, then Neutron has
 not met parity and we're certainly not ready to deprecate nova-network.
 
 I still think getting down to a single networking project should be the
 end goal.  The confusion around networking choices has been detrimental
 to OpenStack.
 
  Therefore, there is some talk about spinning nova-network out into its
  own project where it could continue to live on and be better
  maintained than the current Nova team is able to do. However, this is
  a relatively new idea and we haven't had a chance to determine how
  feasible it is given where we are in the release cycle. I assume that
  if we did this, we would need to find a core team passionate about
  maintaining nova-network, and we would still need to provide some
  migration tooling for operators who are keen to move to Neutron.
  However, that migration tooling would be less critical than it is now.
 
 From a purely technical perspective, it seems like quite a bit of work.
  It reminds me of “we'll just split the scheduler out”, and we see how
 long that's taking in practice. I really think all of that effort is
 better spent just improving Neutron.
 
 From a community perspective, I'm not thrilled about long term
 fragmentation for such a fundamental piece of our stack.  So, I'd really
 like to dig into the current state of gaps between Neutron and
 nova-network.  If there were no real gaps, there would be no sensible
 argument to keep the 2nd option.
 
 I agree with Russell here. After talking to a few folks, my sense is
 there is still a misunderstanding between people running nova-network
 and those developing Neutron. I realize not everyone wants tenant
 networks, and I think we can look at the use case for that and see how
 to map it to what Neutron has, and fill in any missing gaps. There are
 some discussions started already to see how we can fill those gaps.

Part of it is corner (or simplified) use cases not being optimally
served by Neutron, and I think Neutron could more aggressively address
those. But the other part is ignorance and convenience: “that Neutron
thing is a scary beast, last time I looked into it I couldn't make sense
of it, and nova-network just works for me.”

Re: [openstack-dev] [Heat] Decoupling Heat integration tests from Heat tree

2015-03-27 Thread Pavlo Shchelokovskyy
Hi,

On Thu, Mar 26, 2015 at 10:26 PM, Zane Bitter zbit...@redhat.com wrote:

 On 26/03/15 10:38, Pavlo Shchelokovskyy wrote:

 Hi all,

 following IRC discussion here is a summary of what I propose can be done
 in this regard, in the order of increased decoupling:

 1) make a separate requirements.txt for integration tests and modify the
 tox job to use it. The code of these tests is pretty much decoupled
 already, not using any modules from the main heat tree. The actual
 dependencies are mostly api clients and test framework. Making this
 happen should decrease the time needed to setup the tox env and thus
 speed up the test run somewhat.


 +1

 2) provide a separate distutils setup.py/setup.cfg to ease packaging
 and installing this test suite to run it against an already deployed
 cloud (especially scenario tests seem to be valuable in this regard).


 +1

 3) move the integration tests to a separate repo and use it as a git
 submodule in the main tree. The main reasons not to do it, as far as I've
 gathered, are not being able to land a code change and its test in the same
 (or dependent) commits, and reduced reviewer attention on a separate
 repo.


 -0

 I'm not sure what the advantage is here, and there are a bunch of
 downsides (basically, I agree with Ryan). Unfortunately I missed the IRC
 discussion, can you elaborate on how decoupling to this degree might help
 us?


Presumably this could enable more streamlined packaging and publishing of
the test suite (e.g. to PyPI). But I agree, right now it is not really
needed given the downsides; I just brought it up as an extreme separation
case to collect more opinions.

Given the feedback we have in the thread, I will proceed with the first
point, as this should have an immediate benefit for the duration of the test
job and already help those who want to package the test suite
separately. The distutils stuff can be added later.
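
For illustration, the tox side of the first point might look something
like the fragment below (a sketch only; the env name and requirements
path are assumptions, not the actual Heat layout):

    [testenv:integration]
    # install only the integration-test requirements instead of the
    # full main-tree requirements.txt
    deps = -r{toxinidir}/heat_integrationtests/requirements.txt
    commands = python -m unittest discover heat_integrationtests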

Best regards,

Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com


Re: [openstack-dev] [infra] What do people think of YAPF (like gofmt, for python)?

2015-03-27 Thread Sean Dague
On 03/26/2015 06:46 PM, Robert Collins wrote:
 On 27 March 2015 at 09:14, Ryan Brown rybr...@redhat.com wrote:
 
 Ooof, that's huge. If we can configure it to be less aggressive I love
 the *idea* of having everything formatted semantically, but that's a
 pretty major burden for everyone involved.
 
 It's huge today. It wouldn't be if we did it :).
 
 I suggest that given the other aspects of our code review system, we
 might treat this like translations as a long term thing - that is
 setup a job somewhere to run the autoformatter against trunk and
 submit the result as a patchset.
 
 To get over the initial hump, I'd have a human break the patch up into
 per-file changes or some such.
 
 A variation would be to have a config file saying which files are
 covered, and slowly dial that up, one file at a time.
 
 Once everything is covered, and we're dealing with commits that change
 things async (via the job above), then we can talk about how to help
 developers submit code using it in the first place.

Honestly, is there a problem here that really needs to be solved? I've
been a little confused about this thread because I thought we were all
actively calling people out for nitpicking irrelevant style issues.

I feel like building a large system to avoid not being human to each
other is completely dysfunctional.

In the Nova tree we have many cross-cutting efforts that need to touch a
lot of things (v3 renames, mox -> mock). This seems like giant churn for
no real gain.

I'm not convinced the answer to whitespace fights is a tool. I think
it's “stop being silly”: actively call out cores when they are being
silly, and tell new folks that nitpicking is completely unhelpful in
review (and only going to raise the ire of core teams, not ingratiate
yourself).
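
For context, the per-file job Robert sketches above could be quite small;
a rough sketch (the whitelist contents and the use of yapf's --in-place
flag are illustrative, not an existing job):

    import subprocess

    # the "config file saying which files are covered" idea: dial this
    # list up one file at a time
    COVERED_FILES = [
        "nova/utils.py",
    ]

    for path in COVERED_FILES:
        # yapf rewrites each covered file in place; a follow-up
        # `git diff` becomes the proposed patchset
        subprocess.check_call(["yapf", "--in-place", path])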

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [Heat] Decoupling Heat integration tests from Heat tree

2015-03-27 Thread Anastasia Kuznetsova
Hello,

As a QA engineer, I like the idea of making the integration tests more
independent, with the ability to package them and run them against any
deployed cloud; it would be very useful.
But I assume that creating a separate repository is not needed and it is
enough to just collect all functional/integration tests in a separate
folder, as now.

Best regards,
Anastasia Kuznetsova

On Fri, Mar 27, 2015 at 6:18 AM, Angus Salkeld asalk...@mirantis.com
wrote:

 On Fri, Mar 27, 2015 at 6:26 AM, Zane Bitter zbit...@redhat.com wrote:

 On 26/03/15 10:38, Pavlo Shchelokovskyy wrote:

 Hi all,

 following IRC discussion here is a summary of what I propose can be done
 in this regard, in the order of increased decoupling:

 1) make a separate requirements.txt for integration tests and modify the
 tox job to use it. The code of these tests is pretty much decoupled
 already, not using any modules from the main heat tree. The actual
 dependencies are mostly api clients and test framework. Making this
 happen should decrease the time needed to setup the tox env and thus
 speed up the test run somewhat.


 +1

 2) provide a separate distutils setup.py/setup.cfg to ease packaging
 and installing this test suite to run it against an already deployed
 cloud (especially scenario tests seem to be valuable in this regard).


 +1

 3) move the integration tests to a separate repo and use it as a git
 submodule in the main tree. The main reasons not to do it, as far as I've
 gathered, are not being able to land a code change and its test in the same
 (or dependent) commits, and reduced reviewer attention on a separate
 repo.


 -0

 I'm not sure what the advantage is here, and there are a bunch of
 downsides (basically, I agree with Ryan). Unfortunately I missed the IRC
 discussion, can you elaborate on how decoupling to this degree might help
 us?


 I think the overall goal is to make it easier for an operator to run tests
 against their cloud to make sure
 everything is working. We should really have a common approach to this so
 they don't have to do something
 different for each project. Any opinions from the QA team?

 Maybe have it as its own package; then you can install it and run
 something like:
 os-functional-tests-run package-name auth args here

 -A




 cheers,
 Zane.

  What do you think about it? Please share your comments.

 Best regards,

 Pavlo Shchelokovskyy
 Software Engineer
 Mirantis Inc
 www.mirantis.com


 


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Sean Dague
On 03/27/2015 05:22 AM, Thierry Carrez wrote:
snip
 Part of it is corner (or simplified) use cases not being optimally
 served by Neutron, and I think Neutron could more aggressively address
 those. But the other part is ignorance and convenience: that Neutron
 thing is a scary beast, last time I looked into it I couldn't make sense
 of it, and nova-network just works for me.
 
 That is why during the Ops Summit we discussed coming up with a
 migration guide that would explain the various ways you can use Neutron
 to cover nova-network use cases, demystify a few dark areas, and outline
 the step-by-step manual process you can go through to migrate from one
 to the other.
 
 We found a dev/ops volunteer for writing that migration guide but he was
 unfortunately not allowed to spend time on this. I heard we have new
 volunteers, but I'll let them announce what their plans are, rather than
 put words in their mouth.
 
 This migration guide can happen even if we follow the nova-net spinout
 plan (for the few who want to migrate to Neutron), so this is a
 complementary solution rather than an alternative. Personally I doubt
 there would suddenly be enough people interested in nova-net development
 to successfully spin it out and maintain it. I also agree with Russell
 that long-term fragmentation at this layer of the stack is generally not
 desirable.

I think if you boil everything down, you end up with 3 really important
differences.

1) neutron is a fleet of services (it's very microservice) and every
service requires multiple and different config files. Just configuring
the fleet is a beast if it's not devstack (and even if it is).

2) neutron assumes that the primary thing you are interested in is
tenant-secured self-service networks. This is actually explicitly not
interesting to a lot of deployments for policy, security, or political
reasons/restrictions.

3) neutron's open source backend defaults to OVS (largely because of #2).
OVS is its own complicated engine that you need to learn to debug. While
Linux bridge has challenges, it's also something that anyone who's
worked with Linux and virtualization for the last 10 years has some
experience with.

(Also, the devstack setup code for neutron is a rat's nest, as it was
mostly not paid attention to. This means it's been of zero help in
explaining anything to people trying to do neutron. For better or worse,
devstack is our executable manual for a lot of these things.)

So that being said, I think we need to talk about “minimum viable
neutron” as a model and figure out how far away that is from n-net. This
week at the QA Sprint, Dean, Sean Collins, and I have spent some time
hashing it out, hopefully with something to show by the end of the week.
This will be the new devstack code for neutron (the old lib/neutron is
moved to lib/neutron-legacy).

The default setup will be provider networks (which means no tenant
isolation). For that you should only need neutron-api, -dhcp, and -l2.
So #1 is made a bunch better. #2 is not a thing at all. And for #3 we'd
like to revert back to linux bridge for the base case (though the first
code will probably be OVS because that's the happy path today).

Mixin #1: NEUTRON_BRIDGE_WITH=OVS

The first optional layer is the flip from linuxbridge -> OVS. That becomes
one bite-sized thing to flip over once you understand it.

Mixin #2: self service networks

This will be off in the default case, but can be enabled later.

... and turtles all the way up.
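
To make that concrete, a hypothetical minimal local.conf under this model
might look like the sketch below (q-svc/q-dhcp/q-agt are today's devstack
service names; NEUTRON_BRIDGE_WITH is the knob proposed above, not an
existing variable):

    [[local|localrc]]
    # minimum viable neutron: provider networks, no tenant isolation
    disable_service n-net
    enable_service q-svc q-dhcp q-agt

    # Mixin #1 (opt-in): flip from the linux bridge default to OVS
    #NEUTRON_BRIDGE_WITH=OVS

    # Mixin #2 (opt-in): self-service tenant networks, off by default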


Provider networks w/ Linux bridge are really close to the simplicity on
the wire people expected with n-net. The last real difference is
floating IPs. And the problem here was best captured by Sean Collins on
Wednesday: floating IPs in nova are overloaded. They are both elastic IPs,
and they are also how you get public addresses in a default environment.
Dean shared that that dual purpose is entirely due to constraints of the
first NASA cloud, which only had a /26 of routable IPs. In neutron this
is just different: you don't need floating IPs to have public addresses.
But the mental model has stuck.


Anyway, while I'm not sure this is going to solve everyone's issues, I
think it's a useful exercise anyway for devstack's neutron support to
revert to a minimum viable neutron for learning purposes, and let you
layer on complexity manually over time. And I'd be really curious whether a
n-net -> provider network side step (still on linux bridge) would
actually be a more reasonable transition for most environments.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [Fuel] Nominate Irina Povolotskaya for fuel-docs core

2015-03-27 Thread Igor Zinovik
+1

On 26 March 2015 at 19:26, Fabrizio Soppelsa fsoppe...@mirantis.com wrote:
 +1 definitely


 On 03/25/2015 10:10 PM, Dmitry Borodaenko wrote:

 Fuelers,

 I'd like to nominate Irina Povolotskaya for the fuel-docs-core team.
 She has contributed thousands of lines of documentation to Fuel over
 the past several months, and has been a diligent reviewer:


  http://stackalytics.com/?user_id=ipovolotskaya&release=all&project_type=all&module=fuel-docs

 I believe it's time to grant her core reviewer rights in the fuel-docs
 repository.

 Core reviewer approval process definition:
 https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess






-- 
Igor Zinovik
Deployment Engineer at Mirantis, Inc
izino...@mirantis.com



Re: [openstack-dev] mox3 WAS Re: [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2015-03-27 Thread Alan Pevec
 I'm working on removing mox in favor of mox3 to reduce our dependencies a bit.

Then mox3 should be imported into the openstack namespace, as suggested by
Chuck in Dec 2013.
It looks like the last known source repository is
https://github.com/emonty/pymox, with the last commit from Aug 2013. Or did
it move somewhere else?


Cheers,
Alan



[openstack-dev] [Fuel] fuel-dev-tools repo

2015-03-27 Thread Przemyslaw Kaminski
Hello,

In accordance with the consensus that was reached on the ML I've set up
the fuel-dev-tools repository [1]. It will be the target repo to merge
my 2 private repos [2] and [3] (I don't think it's necessary to set up 2
different repos for this now). The core reviewers are the fuel-core
group. I needed core permissions to set things up and merged a
Cookiecutter patchset [4] to test things. After that I revoked my core
permissions leaving only the fuel-core team.

P.

[1] https://github.com/stackforge/fuel-dev-tools
[2] https://github.com/stackforge/fuel-dev-tools
[3] https://github.com/CGenie/vagrant-fuel-dev
[4] https://review.openstack.org/#/c/167968/



Re: [openstack-dev] [Fuel] fuel-dev-tools repo

2015-03-27 Thread Przemyslaw Kaminski
Sorry, I meant

[2] https://github.com/CGenie/fuel-utils/

P.

On 03/27/2015 08:34 AM, Przemyslaw Kaminski wrote:
 Hello,
 
 In accordance with the consensus that was reached on the ML I've set up
 the fuel-dev-tools repository [1]. It will be the target repo to merge
 my 2 private repos [2] and [3] (I don't think it's necessary to set up 2
 different repos for this now). The core reviewers are the fuel-core
 group. I needed core permissions to set things up and merged a
 Cookiecutter patchset [4] to test things. After that I revoked my core
 permissions leaving only the fuel-core team.
 
 P.
 
 [1] https://github.com/stackforge/fuel-dev-tools
 [2] https://github.com/stackforge/fuel-dev-tools
 [3] https://github.com/CGenie/vagrant-fuel-dev
 [4] https://review.openstack.org/#/c/167968/
 



Re: [openstack-dev] [swift] swift memory usage in centos7 devstack jobs

2015-03-27 Thread Alan Pevec
 I'll spend a bit more time on this -- I haven't determined if it's
 centos or swift specific yet -- but in the meantime, beware of
 recent pyOpenSSL.

But how come that same recent pyOpenSSL doesn't consume more memory on Ubuntu?


Alan



[openstack-dev] Is yaml-devel still needed for Devstack

2015-03-27 Thread Adam Young
I recently got Devstack to run on RHEL. In doing so, I had to hack
around the dependency on yaml-devel (I just removed it from devstack's
required packages).


There is no yaml-devel in EPEL or the main repos for RHEL7.1/Centos7.

Any idea what the right approach is to this moving forward?  Is this 
something that is going to bite us in RDO packaging?


The dependency is a general one: 
http://git.openstack.org/cgit/openstack-dev/devstack/tree/files/rpms/general#n25


So I don't know what actually needs it.  I find it interesting that 
Fedora does not seem to have it, either, but  I've had no problem 
running devstack on Fedora 21.  Can we remove this dependency, or at 
least move it closer to where it is needed?




Re: [openstack-dev] [Heat] Decoupling Heat integration tests from Heat tree

2015-03-27 Thread Ryan Brown
On 03/27/2015 06:57 AM, Pavlo Shchelokovskyy wrote:
 Hi,
 
 On Thu, Mar 26, 2015 at 10:26 PM, Zane Bitter zbit...@redhat.com
 mailto:zbit...@redhat.com wrote:
 
 [snip]
 
  3) move the integration tests to a separate repo and use it as a git
  submodule in the main tree. The main reasons not to do it, as far as
  I've gathered, are not being able to land a code change and its test
  in the same (or dependent) commits, and reduced reviewer attention on
  a separate repo.
 
 
 -0
 
 I'm not sure what the advantage is here, and there are a bunch of
 downsides (basically, I agree with Ryan). Unfortunately I missed the
 IRC discussion, can you elaborate on how decoupling to this degree
 might help us?
 
 
  Presumably this could enable more streamlined packaging and publishing
  of the test suite (e.g. to PyPI). But I agree, right now it is not really
  needed given the downsides; I just brought it up as an extreme
  separation case to collect more opinions.
 
  Given the feedback we have in the thread, I will proceed with the first
  point, as this should have an immediate benefit for the duration of the
  test job and already help those who want to package the test suite
  separately. The distutils stuff can be added later.
 
 Best regards, 
 Pavlo Shchelokovskyy

If we only do 1 and 2, not 3, we get all the benefits (separate package,
streamlined publishing, etc.) without having to deal with the submodule
disadvantages I (and you) mentioned earlier.

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] Is yaml-devel still needed for Devstack

2015-03-27 Thread Sean Dague
On 03/27/2015 09:01 AM, Adam Young wrote:
 I recently got Devstack to run on RHEL.  In doing so, I had to hack
 around the dependency on yaml-devel (I just removed it from devstack's
 required packages)
 
 There is no yaml-devel in EPEL or the main repos for RHEL7.1/Centos7.
 
 Any idea what the right approach is to this moving forward?  Is this
 something that is going to bite us in RDO packaging?
 
 The dependency is a general one:
 http://git.openstack.org/cgit/openstack-dev/devstack/tree/files/rpms/general#n25
 
 
 So I don't know what actually needs it.  I find it interesting that
 Fedora does not seem to have it, either, but  I've had no problem
 running devstack on Fedora 21.  Can we remove this dependency, or at
 least move it closer to where it is needed?

PyYAML will use libyaml to C-accelerate YAML parsing. It's not strictly
required, but there may be performance implications.
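
Concretely, applications opt into the accelerated parser and fall back to
the pure-Python one when the C extension isn't available; the usual
pattern looks roughly like:

    import yaml

    try:
        # only importable when PyYAML was built against libyaml
        from yaml import CSafeLoader as Loader
    except ImportError:
        # pure-Python fallback: same results, slower parsing
        from yaml import SafeLoader as Loader

    print(yaml.load("a: [1, 2, 3]", Loader=Loader))  # {'a': [1, 2, 3]}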

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] Is yaml-devel still needed for Devstack

2015-03-27 Thread Lars Kellogg-Stedman
On Fri, Mar 27, 2015 at 09:01:12AM -0400, Adam Young wrote:
 I recently got Devstack to run on RHEL.  In doing so, I had to hack around
 the dependency on yaml-devel (I just removed it from devstack's required
 packages)
 
 There is no yaml-devel in EPEL or the main repos for RHEL7.1/Centos7.

Fedora and CentOS (7) both have libyaml and libyaml-devel.  I wonder
if this is just a package naming issue in devstack?  libyaml-devel is
used by PyYAML to build C extensions, although PyYAML will also
operate without it.

-- 
Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/





Re: [openstack-dev] [Fuel] fuel-dev-tools repo

2015-03-27 Thread Roman Prykhodchenko
IIRC in this thread we agreed to use separate core groups for different
repositories:
http://lists.openstack.org/pipermail/openstack-dev/2015-January/055111.html
Why not follow that approach in this case?

 On 27 Mar 2015, at 09:01, Przemyslaw Kaminski pkamin...@mirantis.com wrote:
 
 Sorry, I meant
 
 [2] https://github.com/CGenie/fuel-utils/
 
 P.
 
 On 03/27/2015 08:34 AM, Przemyslaw Kaminski wrote:
 Hello,
 
 In accordance with the consensus that was reached on the ML I've set up
 the fuel-dev-tools repository [1]. It will be the target repo to merge
 my 2 private repos [2] and [3] (I don't think it's necessary to set up 2
 different repos for this now). The core reviewers are the fuel-core
 group. I needed core permissions to set things up and merged a
 Cookiecutter patchset [4] to test things. After that I revoked my core
 permissions leaving only the fuel-core team.
 
 P.
 
 [1] https://github.com/stackforge/fuel-dev-tools
 [2] https://github.com/stackforge/fuel-dev-tools
 [3] https://github.com/CGenie/vagrant-fuel-dev
 [4] https://review.openstack.org/#/c/167968/
 
 





Re: [openstack-dev] Is yaml-devel still needed for Devstack

2015-03-27 Thread Ryan Brown
On 03/27/2015 09:04 AM, Sean Dague wrote:
 On 03/27/2015 09:01 AM, Adam Young wrote:
 I recently got Devstack to run on RHEL.  In doing so, I had to hack
 around the dependency on yaml-devel (I just removed it from devstack's
 required packages)

 There is no yaml-devel in EPEL or the main repos for RHEL7.1/Centos7.

 Any idea what the right approach is to this moving forward?  Is this
 something that is going to bite us in RDO packaging?

 The dependency is a general one:
 http://git.openstack.org/cgit/openstack-dev/devstack/tree/files/rpms/general#n25


 So I don't know what actually needs it.  I find it interesting that
 Fedora does not seem to have it, either, but  I've had no problem
 running devstack on Fedora 21.  Can we remove this dependency, or at
 least move it closer to where it is needed?
 
 pyyaml will use libyaml to c accelerate yaml parsing. It's not strictly
 required, but there may be performance implications.

Since it's a soft requirement, should we patch devstack to reflect that?

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.



[openstack-dev] [Horizon][Sahara] FFE Request for bp data-processing-edit-templates

2015-03-27 Thread Chad Roberts
I'd like to request a FFE for a Data Processing (Sahara) feature.

The backend and client lib work are already in for Kilo.

The 2 patches are here:  
https://review.openstack.org/#/q/status:open+project:openstack/horizon+branch:master+topic:bp/data-processing-edit-templates,n,z

Thanks,
Chad




Re: [openstack-dev] Fwd: PCI passthrough of 40G ethernet interface

2015-03-27 Thread jacob jacob
After update to latest firmware and using version 1.2.37 of i40e
driver, things are looking better with PCI passthrough.

]# ethtool -i eth3
driver: i40e
version: 1.2.37
firmware-version: f4.33.31377 a1.2 n4.42 e1930
bus-info: 0000:00:07.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes


There are still issues running dpdk 1.8.0 from the VM using the PCI
passthrough devices, and it looks like it puts the devices in a bad state.
The i40e driver will not bind after this happens, and a host reboot is
required to recover.
I'll post further updates as i make progress.
Thanks for all the help.

On Thu, Mar 26, 2015 at 8:50 PM, yongli he yongli...@intel.com wrote:
 On 2015-03-11 22:15, jacob jacob wrote:
 Hi, jacob,

   We now find that przemyslaw.czesnowicz has the same NIC; hope it will
 help a little bit.

 Yongli He


 -- Forwarded message --
 From: jacob jacob opstk...@gmail.com
 Date: Tue, Mar 10, 2015 at 6:00 PM
 Subject: PCI passthrough of 40G ethernet interface
 To: openst...@lists.openstack.org



 Hi,
 I'm interested in finding out if anyone has successfully tested PCI
 passthrough functionality for 40G interfaces in Openstack(KVM hypervisor).

 I am trying this out on a Fedora 21 host  with Fedora 21 VM
 image.(3.18.7-200.fc21.x86_64)

 Was able to successfully test PCI passthrough of 10 G interfaces:
   Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network
 Connection (rev 01)

 With 40G interface testing, the PCI device is passed through to the VM but
 data transfer is failing.
 0a:00.1 Ethernet controller: Intel Corporation Ethernet Controller XL710
 for 40GbE QSFP+ (rev 01)

 Tried this with both the i40e driver and latest dpdk driver but no luck so
 far.

 With the i40e driver, the data transfer fails.
  Relevant dmesg output:
  [   11.544088] i40e :00:05.0 eth1: NIC Link is Up 40 Gbps Full Duplex,
 Flow Control: None
 [   11.689178] i40e :00:06.0 eth2: NIC Link is Up 40 Gbps Full Duplex,
 Flow Control: None
 [   16.704071] [ cut here ]
 [   16.705053] WARNING: CPU: 1 PID: 0 at net/sched/sch_generic.c:303
 dev_watchdog+0x23e/0x250()
 [   16.705053] NETDEV WATCHDOG: eth1 (i40e): transmit queue 1 timed out
 [   16.705053] Modules linked in: cirrus ttm drm_kms_helper i40e drm ppdev
 serio_raw i2c_piix4 virtio_net parport_pc ptp virtio_balloon
 crct10dif_pclmul pps_core parport pvpanic crc32_pclmul ghash_clmulni_intel
 virtio_blk crc32c_intel virtio_pci virtio_ring virtio ata_generic pata_acpi
 [   16.705053] CPU: 1 PID: 0 Comm: swapper/1 Not tainted
 3.18.7-200.fc21.x86_64 #1
 [   16.705053] Hardware name: Fedora Project OpenStack Nova, BIOS
 1.7.5-20140709_153950- 04/01/2014
 [   16.705053]   2e5932b294d0c473 88043fc83d48
 8175e686
 [   16.705053]   88043fc83da0 88043fc83d88
 810991d1
 [   16.705053]  88042958f5c0 0001 88042865f000
 0001
 [   16.705053] Call Trace:
 [   16.705053]  IRQ  [8175e686] dump_stack+0x46/0x58
 [   16.705053]  [810991d1] warn_slowpath_common+0x81/0xa0
 [   16.705053]  [81099245] warn_slowpath_fmt+0x55/0x70
 [   16.705053]  [8166e62e] dev_watchdog+0x23e/0x250
 [   16.705053]  [8166e3f0] ? dev_graft_qdisc+0x80/0x80
 [   16.705053]  [810fd52a] call_timer_fn+0x3a/0x120
 [   16.705053]  [8166e3f0] ? dev_graft_qdisc+0x80/0x80
 [   16.705053]  [810ff692] run_timer_softirq+0x212/0x2f0
 [   16.705053]  [8109d7a4] __do_softirq+0x124/0x2d0
 [   16.705053]  [8109db75] irq_exit+0x125/0x130
 [   16.705053]  [817681d8] smp_apic_timer_interrupt+0x48/0x60
 [   16.705053]  [817662bd] apic_timer_interrupt+0x6d/0x80
 [   16.705053]  EOI  [811005c8] ? hrtimer_start+0x18/0x20
 [   16.705053]  [8105ca96] ? native_safe_halt+0x6/0x10
 [   16.705053]  [810f81d3] ? rcu_eqs_enter+0xa3/0xb0
 [   16.705053]  [8101ec7f] default_idle+0x1f/0xc0
 [   16.705053]  [8101f64f] arch_cpu_idle+0xf/0x20
 [   16.705053]  [810dad35] cpu_startup_entry+0x3c5/0x410
 [   16.705053]  [8104a2af] start_secondary+0x1af/0x1f0
 [   16.705053] ---[ end trace 7bda53aeda558267 ]---
 [   16.705053] i40e :00:05.0 eth1: tx_timeout recovery level 1
 [   16.705053] i40e :00:05.0: i40e_vsi_control_tx: VSI seid 519 Tx ring
 0 disable timeout
 [   16.744198] i40e :00:05.0: i40e_vsi_control_tx: VSI seid 520 Tx ring
 64 disable timeout
 [   16.779322] i40e :00:05.0: i40e_ptp_init: added PHC on eth1
 [   16.791819] i40e :00:05.0: PF 40 attempted to control timestamp mode
 on port 1, which is owned by PF 1
 [   16.933869] i40e :00:05.0 eth1: NIC Link is Up 40 Gbps Full Duplex,
 Flow Control: None
 [   18.853624] SELinux: initialized (dev tmpfs, type tmpfs), uses transition
 SIDs
 [   22.720083] i40e :00:05.0 eth1: tx_timeout recovery level 2
 [   

[openstack-dev] [keystone][fernet] Fernet tokens sync

2015-03-27 Thread Boris Bobrov
Hello,

As you know, keystone introduced non-persistent tokens in kilo -- Fernet
tokens. These tokens use Fernet keys that are rotated from time to time. A
great description of key rotation and replication can be found at [0] and [1]
(thanks, lbragstad). In an HA setup there are multiple nodes with Keystone, and
that requires key replication. How do we do that with the new Fernet tokens?

Please keep in mind that the solution should be HA -- there should not be any
master server pushing keys to slave servers, because the master server might go
down.

I can see some ways to do that.

1. Mount some distributed network file system to /etc/keystone/fernet-keys/ 
(the directory, where keys are) and leave syncronization and dealing with race 
conditions to it. This solution will not require any changes to existing code.

Are there any mature filesystems for that?

2. Use a queue of staged keys. It would mean that a new staging key will be
generated only if there are no other staging keys in the queue. Example:

Suppose we have keystone setup on 2 servers.

I. In the beginning they have keys 0, 1, 2.

II. Rotation happens on keystone-1: 0 becomes 3, 1 is removed. Before
generating a new 0, check that there are no keys in the queue. There are no
keys in the queue, so generate one and push it to keystone-2's queue.

III. Rotation happens on keystone-2: 0 becomes 3, 1 is removed. Before
generating a new 0, check that there are no keys in the queue. There is a key
from keystone-1, so use it as the new 0.

Thanks to Alexander Makarov for the idea.

How do we store this queue? Should we use some backend, rely on creation time 
or something else?

This way requires changes to keystone code.

3. Store keys in backend completely and use well-known sync mechanisms. This 
would require some changes to keystone code too.
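
To make option 2 concrete, here is a rough sketch of the rotation step
described in II/III. The shared queue object and its pop()/push_to_peers()
methods are hypothetical; the numbered-file layout under
/etc/keystone/fernet-keys/ follows the rotation description above.

    import os

    from cryptography.fernet import Fernet

    KEY_REPO = '/etc/keystone/fernet-keys'


    def rotate(queue):
        keys = sorted(int(name) for name in os.listdir(KEY_REPO))
        # promote the staging key (0) to be the new primary (max + 1)
        os.rename(os.path.join(KEY_REPO, '0'),
                  os.path.join(KEY_REPO, str(keys[-1] + 1)))
        # drop the oldest secondary key (key 1 in the example above)
        os.remove(os.path.join(KEY_REPO, str(keys[1])))
        # before generating a new 0, check the queue
        if queue:
            # a peer already staged a key -- use it as the new 0
            staged = queue.pop()
        else:
            # nothing staged: generate a key and share it so peers
            # reuse it on their next rotation
            staged = Fernet.generate_key()
            queue.push_to_peers(staged)
        with open(os.path.join(KEY_REPO, '0'), 'wb') as f:
            f.write(staged)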

-- 
Best regards,
Boris Bobrov



[openstack-dev] [Horizon][Sahara] FFE Request for bp/sahara-shell-action-form

2015-03-27 Thread Ethan Gafford
I'm requesting a FFE for a Data Processing (Sahara) feature.

The job type that this form supports is merged for Kilo on the backend.

The patch (which has no unmerged dependencies) is here: 
https://review.openstack.org/#/c/160510/

Thank you,
Ethan



Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Tim Bell
From the stats 
(http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014),


- 43% of production clouds use OVS (the default for Neutron)
- 30% of production clouds are Nova network based
- 15% of production clouds use linux bridge

There is therefore a significant share of the OpenStack production user 
community who are interested in a simple provider network linux bridge based 
solution.

I think it is important to make an attractive cloud solution where deployers
can look at the balance of function and their skills and choose the appropriate
combinations.

Whether a simple network model should be the default is a different question
from whether there should be a simple option. Personally, one of the most
regular complaints I get is the steep learning curve for a new deployment. If
we could make it so that people can do it as a series of steps (by making a
path to add OVS) rather than a large leap, I think that would be attractive.

BTW, CERN is on nova-network with 3,200 hypervisors across 2 sites and we're
interested in moving to Neutron to stay mainstream. The CERN network is set up as
a series of IP services which correspond to broadcast domains. A typical IP 
service is around 200-500 servers with a set of top of the rack switches and 
one or two router uplinks. An IP address is limited to an IP service. We then 
layer a secondary set of IP networks on the hypervisors on the access switches 
which are allocated to VMs. We change router and switch vendor on average every 
5 years as part of public procurement and therefore generic solutions are 
required. Full details of the CERN network can be found at 
http://indico.cern.ch/event/327056/contribution/0/material/slides/0.pdf.

Tim

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: 27 March 2015 17:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to 
Neutron migration work

No, no. Most OpenStack deployments are neutron based with OVS because it's the
default these days.

There have been all sorts of warnings to folks for years saying that if you
start with nova-network, there will be pain for you later. Hopefully, that has
scared away most new folks from doing it. Most of the existing folks are there
because they started before Neutron was up to speed. That's a different problem.

So I would expect the number of folks needing to go from nova-network to
neutron to be a small number of clouds, not a big number. Changing the defaults
now to favor that small minority of clouds seems like an odd choice.

Really, I don't think finding the right solution to migrate those still using 
nova-network to neutron has anything to do with what the default out of the box 
experience for new clouds should be...

Having linuxbridge be the default for folks moving from nova-network to neutron
might make much more sense than saying everyone should by default get
linuxbridge.

Thanks,
Kevin

From: Dean Troyer [dtro...@gmail.com]
Sent: Friday, March 27, 2015 9:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to 
Neutron migration work
On Fri, Mar 27, 2015 at 10:48 AM, Assaf Muller amul...@redhat.com wrote:
Looking at the latest user survey, OVS looks to be 3 times as popular as
Linux bridge for production deployments. Having LB as the default seems
like an odd choice. You also wouldn't want to change the default before
LB is tested at the gate.

Simple things need to be simple to accomplish, and defaults MUST be simple to 
use.

LB's support requirements are very simple compared to OVS.  This is an 
achievable first step away from nova-net and once conquered the second step 
becomes less overwhelming.  Look at the success of swallowing the entire 
elephant at once that we've seen in the last $TOO_MANY years.

dt

--

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] Volunteers Needed for OpenStack Booth at PyCon

2015-03-27 Thread Ed Leafe
On Mar 26, 2015, at 4:42 PM, Stefano Maffulli stef...@openstack.org wrote:

 Details of the show, schedule and peak times on the show floor are on
 https://etherpad.openstack.org/p/pycon-2015-booth
 
 If you are interested in helping, please add your name to the etherpad
 in the time slot you'd be available.

I took one, and might be there for more.


-- Ed Leafe









Re: [openstack-dev] What's Up Doc? March 26, 2015 [all]

2015-03-27 Thread Steve Martinelli
Anne Gentle annegen...@justwriteclick.com wrote on 03/26/2015 08:37:13 
PM:

 Install Guides updates
 -
 We've got a spec ready for the changes to the Install Guides now 
 published at: 
 http://specs.openstack.org/openstack/docs-specs/specs/kilo/installguide-kilo.html
 I'm sure the keystone team will rejoice to see changes to support 
 the Identity v3 API by default. Also the openstack CLI will 
 substitute for keystone CLI commands.

As someone who works on keystone and openstack CLI, this was a double win!
Matt has been awesome to work with, too. Thanks for getting this done.
The keystone team is rejoicing. \o/

 
 Anne
 -- 
 Anne Gentle
 annegen...@justwriteclick.com
 


Thanks,

Steve Martinelli
OpenStack Keystone Core


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Sean M. Collins
Let's also keep in mind that the ML2 Plugin has *both* openvswitch and
linuxbridge mechanism drivers enabled[1]. If I understand things
correctly, this means this discussion shouldn't turn into a debate about
which mechanism everyone prefers, since *both* are enabled.

There is one thing that we do in DevStack currently, where we select the
openvswitch agent[2] by default - I don't know what impact that has when
you want to use linuxbridge as the mechanism. I have to do some more
research; ideally we'd be able to run both OVS and linux bridge
mechanisms by default, so that people who want OVS get OVS, linuxbridge
people get linuxbridge, and we can all live happily ever after. :)


[1]: 
https://github.com/openstack-dev/devstack/blob/master/lib/neutron_plugins/ml2#L25
[2]: 
https://github.com/openstack-dev/devstack/blob/master/lib/neutron_plugins/ml2#L21
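
Concretely, the devstack default referenced in [1] amounts to an
ml2_conf.ini fragment along these lines (only the relevant option shown):

    [ml2]
    # both mechanism drivers enabled, so either agent can bind ports
    mechanism_drivers = openvswitch,linuxbridge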

-- 
Sean M. Collins



Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Jay Pipes
On Fri, Mar 27, 2015 at 09:31:39AM +1100, Michael Still wrote:
 Hi,
 
 I thought it would be a good idea to send out a status update for the
 migration from nova-network to Neutron, as there hasn't been as much
 progress as we'd hoped for in Kilo. There are a few issues which have
 been slowing progress down.
 
 First off, creating an all-encompassing turn-key upgrade is probably
 not possible. This was also never a goal of this effort -- to quote
 the spec for this work, “Consequently, the process described here is
 not a ‘one size fits all’ automated push-button tool but a series of
 steps that should be obvious to automate and customise to meet local
 needs” [1]. The variety of deployment and configuration options
 available makes a turn-key migration very hard to write, and possibly
 impossible to test. We therefore have opted for writing migration
 tools, which allow operators to plug components together in the way
 that makes sense for their deployment and then migrate using those.

As Russell mentioned in an earlier response on this thread, the fact is
that most migrations from nova-net to Neutron would require custom work
to make it happen. Adding documentation on how to do migration from
nova-net to Neutron is a grand idea, but I suspect it will ultimately
fall short of the needs of the (very few) operators that would attempt
such a thing (as opposed to a cold migration from older nova-net zones
to newer greenfield Neutron zones).

 However, talking to operators at the Ops Summit, it has become clear
 that some operators simply aren't interested in moving to Neutron --
 largely because they either aren't interested in tenant networks, or
 have corporate network environments that make deploying Neutron very
 hard. So, even if we provide migration tools, it is still likely that
 we will end up with loyal nova-network users who aren't interested in
 moving. From the Nova perspective, the end goal of all of this effort
 is to delete the nova-network code

Actually, IMO, the end goal should be to provide our end users with the
most stable, simple to deploy and operate, and scalable network as a
service product. The goal shouldn't be the separation or deletion of the
nova-network code -- just as is true that the goal of the Gantt project
should not be the split of the nova-scheduler itself, but rather to
provide the most stable, intuitive and easy-to-use placement engine for
OpenStack end users.

, and if we can't do that because
 some people simply don't want to move, then what is gained by putting
 a lot of effort into migration tooling?

As Sean mentioned (I think), if Neutron was attractive to nova-network
deployers as an alternative handler of cloud network servicing, then
there would be more value in spending time on the nova-network to
Neutron migration.

But, there's the rub. Neutron *isn't* attractive to this set of people
because:

a) It doesn't provide for automatic (sub)net allocation for a user or
tenant in the same way that nova-network just Gets This Done for a user
that wants to launch an instance. As I think Kevin Fox mentioned, a
cloud admin should be able to easily set up a bunch of networks usable
by tenants, and Nova should be able to ask Neutron to just do the
needful and wire up a subnet for use by the instance, without the user
needing to create a subnet or a router object, or wire up the
connectivity themselves. I complained about this very problem (of not
having automated subnet and IP assignments) nearly *two years ago*:

http://lists.openstack.org/pipermail/openstack-dev/2013-July/011981.html

and was told by Neutron core team members that they weren't really
interested in changing Neutron to be more like Nova's network
auto-service behaviours.

b) It is way more complicated to deploy Neutron than nova-network (even
nova-network in multihost mode). While the myriad vendor plugins for L2
and L3 provide nice flexibility, they add much complexity to the deployment
of Neutron. Just ask Thomas Goirand, who is currently working on
packaging the Neutron vendor mechanism drivers for Debian, about that
complexity.

c) There's been no demonstration that data plane performance of
nova-network with linux bridging can be beaten by the open source
Neutron SDN solutions. Not having any reliable and transparent
benchmarking that compares the huge matrix of network topologies,
overlay providers, and data plane options is a major reason for the lack
of uptake of Neutron by all but the bravest greenfield deployments.

 Therefore, there is some talk about spinning nova-network out into its
 own project where it could continue to live on and be better
 maintained than the current Nova team is able to do. However, this is
 a relatively new idea and we haven't had a chance to determine how
 feasible it is given where we are in the release cycle. I assume that
 if we did this, we would need to find a core team passionate about
 maintaining nova-network, and we would still need to provide some
 migration 

Re: [openstack-dev] [keystone][fernet] Fernet tokens sync

2015-03-27 Thread Jay Pipes
On Fri, Mar 27, 2015 at 11:48:29AM -0400, David Stanek wrote:
 On Fri, Mar 27, 2015 at 10:14 AM, Boris Bobrov bbob...@mirantis.com wrote:
 
  As you know, keystone introduced non-persistent tokens in kilo -- Fernet
  tokens. These tokens use Fernet keys, that are rotated from time to time. A
  great description of key rotation and replication can be found on [0] and
  [1]
  (thanks, lbragstad). In HA setup there are multiple nodes with Keystone and
  that requires key replication. How do we do that with new Fernet tokens?
 
  Please keep in mind that the solution should be HA -- there should not be
  any
  master server, pushing keys to slave servers, because master server
  might go
  down.
 
 
 In my test environment I was using ansible to sync the keys across multiple
 nodes. Keystone should probably provide some guidance around this process,
 but I don't think it should deal with the actual syncing. I think that's
 better left to an installation's existing configuration management tools.

Agreed. This is the same reason why I don't support building
replication functionality into Glance, either. There are lots of external
tools that can do this kind of thing, from shared filesystems to
BitTorrent, to using Ansible to orchestrate stuff...

The best solution is one we don't have to write ourselves.

Best,
-jay



Re: [openstack-dev] [Fuel] Nominate Irina Povolotskaya for fuel-docs core

2015-03-27 Thread Anastasia Urlapova
+ 10

On Fri, Mar 27, 2015 at 4:28 AM, Igor Zinovik izino...@mirantis.com wrote:

 +1

 On 26 March 2015 at 19:26, Fabrizio Soppelsa fsoppe...@mirantis.com
 wrote:
  +1 definitely
 
 
  On 03/25/2015 10:10 PM, Dmitry Borodaenko wrote:
 
  Fuelers,
 
  I'd like to nominate Irina Povolotskaya for the fuel-docs-core team.
  She has contributed thousands of lines of documentation to Fuel over
  the past several months, and has been a diligent reviewer:
 
 
 
  http://stackalytics.com/?user_id=ipovolotskaya&release=all&project_type=all&module=fuel-docs
 
  I believe it's time to grant her core reviewer rights in the fuel-docs
  repository.
 
  Core reviewer approval process definition:
  https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
 
 
 
 



 --
 Igor Zinovik
 Deployment Engineer at Mirantis, Inc
 izino...@mirantis.com




Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Sean M. Collins
On Fri, Mar 27, 2015 at 01:17:48PM EDT, Jay Pipes wrote:
 But, there's the rub. Neutron *isn't* attractive to this set of people
 because:
 
 a) It doesn't provide for automatic (sub)net allocation for a user or
 tenant in the same way that nova-network just Gets This Done for a user
 that wants to launch an instance. As I think Kevin Fox mentioned, a
 cloud admin should be able to easily set up a bunch of networks usable
 by tenants, and Nova should be able to ask Neutron to just do the
  needful and wire up a subnet for use by the instance, without the user
  needing to create a subnet or a router object, or wire up the
  connectivity themselves. I complained about this very problem (of not
 having automated subnet and IP assignments) nearly *two years ago*:
 
 http://lists.openstack.org/pipermail/openstack-dev/2013-July/011981.html
 
 and was told by Neutron core team members that they weren't really
 interested in changing Neutron to be more like Nova's network
 auto-service behaviours.

I can't speak for others, but I think the subnet allocation API is a
first step towards fixing that[1]. 

On the IPv6 side - I am adamant[2] that it should not require complex
operations since protocols like Prefix Delegation should make
provisioning networking dead simple to the user - similar to how Comcast
deploys IPv6 for residential customers - just plug in. This will be a
big part of my speaking session with Carl[3].

[1]: 
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/subnet-allocation.html
[2]: http://lists.openstack.org/pipermail/openstack-dev/2015-March/059329.html
[3]: 
https://openstacksummitmay2015vancouver.sched.org/event/085f7141a451efc531430dc15d886bb2#.VQyLY0aMVE5

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Fox, Kevin M
I think the main disconnect comes from this:

Is NaaS a critical feature of the cloud, or not? nova-network asserts no. The 
neutron team asserts yes, and neutron is being developed with that in mind 
currently. This is a critical assertion that should be discussed.

With my app developer hat on, I tend to agree with NaaS being a requirement for 
a very useful cloud.  Living without it is much like living in the times before 
VMs as a service were a thing. It really hurts to build non-trivial apps 
without it.

As a cloud provider, you always need to consider what's the best thing for your 
customers, not yourself. I think the extra pain to set up NaaS has been worth it 
on every cloud I've deployed/used.

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Friday, March 27, 2015 10:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to 
Neutron migration work

On Fri, Mar 27, 2015 at 09:31:39AM +1100, Michael Still wrote:
 Hi,

 I thought it would be a good idea to send out a status update for the
 migration from nova-network to Neutron, as there hasn't been as much
 progress as we'd hoped for in Kilo. There are a few issues which have
 been slowing progress down.

 First off, creating an all encompassing turn key upgrade is probably
 not possible. This was also never a goal of this effort -- to quote
 the spec for this work, Consequently, the process described here is
 not a “one size fits all” automated push-button tool but a series of
 steps that should be obvious to automate and customise to meet local
 needs [1]. The variety of deployment and configuration options
 available makes a turn key migration very hard to write, and possibly
 impossible to test. We therefore have opted for writing migration
 tools, which allow operators to plug components together in the way
 that makes sense for their deployment and then migrate using those.

As Russell mentioned in an earlier response on this thread, the fact is
that most migrations from nova-net to Neutron would require custom work
to make it happen. Adding documentation on how to do migration from
nova-net to Neutron is a grand idea, but I suspect it will ultimately
fall short of the needs of the (very few) operators that would attempt
such a thing (as opposed to a cold migration from older nova-net zones
to newer greenfield Neutron zones).

 However, talking to operators at the Ops Summit, it has become clear
 that some operators simply aren't interested in moving to Neutron --
 largely because they either aren't interested in tenant networks, or
 have corporate network environments that make deploying Neutron very
 hard. So, even if we provide migration tools, it is still likely that
 we will end up with loyal nova-network users who aren't interested in
 moving. From the Nova perspective, the end goal of all of this effort
 is to delete the nova-network code

Actually, IMO, the end goal should be to provide our end users with the
most stable, simple to deploy and operate, and scalable network as a
service product. The goal shouldn't be the separation or deletion of the
nova-network code -- just as is true that the goal of the Gantt project
should not be the split of the nova-scheduler itself, but rather to
provide the most stable, intuitive and easy-to-use placement engine for
OpenStack end users.

, and if we can't do that because
 some people simply don't want to move, then what is gained by putting
 a lot of effort into migration tooling?

As Sean mentioned (I think), if Neutron was attractive to nova-network
deployers as an alternative handler of cloud network servicing, then
there would be more value in spending time on the nova-network to
Neutron migration.

But, there's the rub. Neutron *isn't* attractive to this set of people
because:

a) It doesn't provide for automatic (sub)net allocation for a user or
tenant in the same way that nova-network just Gets This Done for a user
that wants to launch an instance. As I think Kevin Fox mentioned, a
cloud admin should be able to easily set up a bunch of networks usable
by tenants, and Nova should be able to ask Neutron to just do the
needful and wire up a subnet for use by the instance without the user
needing to create a subnet, a router object, or wiring up the
connectivity themselves. I complained about this very problem (of not
having automated subnet and IP assignments) nearly *two years ago*:

http://lists.openstack.org/pipermail/openstack-dev/2013-July/011981.html

and was told by Neutron core team members that they weren't really
interested in changing Neutron to be more like Nova's network
auto-service behaviours.

b) It is way more complicated to deploy Neutron than nova-network (even
nova-network in multihost mode). While the myriad vendor plugins for L2
and L3 offer nice flexibility, they add much complexity to the deployment
of Neutron. Just ask Thomas 

Re: [openstack-dev] [keystone] What's Up Doc? March 26, 2015 [all]

2015-03-27 Thread Morgan Fainberg


 On Mar 27, 2015, at 09:29, Brant Knudson b...@acm.org wrote:
 
 
 
 On Thu, Mar 26, 2015 at 7:37 PM, Anne Gentle annegen...@justwriteclick.com 
 wrote:
 Here's the latest news installment from docsland.
 
 Install Guides updates
 -
 We've got a spec ready for the changes to the Install Guides now published 
 at: 
 http://specs.openstack.org/openstack/docs-specs/specs/kilo/installguide-kilo.html
 I'm sure the keystone team will rejoice to see changes to support the 
 Identity v3 API by default. Also the openstack CLI will substitute for 
 keystone CLI commands.
 
 I, for one, am rejoicing in this moment.
 
  - Brant

And there was much rejoicing!


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Dean Troyer
On Fri, Mar 27, 2015 at 11:35 AM, Fox, Kevin M kevin@pnnl.gov wrote:

  So I would expect the number of folks needing to go from nova-network to
 neutron to be a small number of clouds, not a big number. Changing the
 defaults now to favor that small minority of clouds, seems like an odd
 choice.


This is not the default for deployments, except for the ignorant people
using DevStack for deployment, who have already self-selected for
failure by doing that in the first place.


 Really, I don't think finding the right solution to migrate those still
 using nova-network to neutron has anything to do with what the default out
 of the box experience for new clouds should be...


Honestly, I don't give a rat's a$$ about the migrations.  But I do care
about the knowledge required to mentally shift from nova-net to neutron.
And we have failed there.  DevStack has totally failed there to make
Neutron usable without having to learn far too much to just get a dev cloud.

Having linuxbridge be the default for folks moving from nova-network to
 neutron might make much more sense than saying everyone should by default
 get linuxbridge.


The complaint was about DevStack using LB as its default.  I DO NOT want
the overhead of OVS on my development VMs when I am not doing any
network-related work. I am not alone here.

Simple things MUST be simple to do.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][TripleO] API validation breaks physical/flat network creation

2015-03-27 Thread Dan Prince
We had a breaking API validation change land a week ago which prevents our
TripleO physical/flat network from being created. As such we would
request a fast revert here to fix things:

https://review.openstack.org/#/c/168207/1

It took us a bit longer than normal to identify this regression and push
the subsequent revert due to some other unrelated CI breakages this last
week.

Thanks,

Dan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Sean Dague
On 03/27/2015 11:48 AM, Assaf Muller wrote:
 
 
 - Original Message -
 On 03/27/2015 05:22 AM, Thierry Carrez wrote:
 snip
 Part of it is corner (or simplified) use cases not being optimally
 served by Neutron, and I think Neutron could more aggressively address
 those. But the other part is ignorance and convenience: that Neutron
 thing is a scary beast, last time I looked into it I couldn't make sense
 of it, and nova-network just works for me.

 That is why during the Ops Summit we discussed coming up with a
 migration guide that would explain the various ways you can use Neutron
 to cover nova-network use cases, demystify a few dark areas, and outline
 the step-by-step manual process you can go through to migrate from one
 to the other.

 We found a dev/ops volunteer for writing that migration guide but he was
 unfortunately not allowed to spend time on this. I heard we have new
 volunteers, but I'll let them announce what their plans are, rather than
 put words in their mouth.

 This migration guide can happen even if we follow the nova-net spinout
 plan (for the few who want to migrate to Neutron), so this is a
 complementary solution rather than an alternative. Personally I doubt
 there would suddenly be enough people interested in nova-net development
 to successfully spin it out and maintain it. I also agree with Russell
 that long-term fragmentation at this layer of the stack is generally not
 desirable.

 I think if you boil everything down, you end up with 3 really important
 differences.

 1) neutron is a fleet of services (it's very micro-service) and every
 service requires multiple and different config files. Just configuring
 the fleet is a beast if it is not devstack (and even if it is)
 
 2) neutron assumes the primary interesting thing to you is tenant-secured
 self-service networks. This is actually explicitly not interesting to a
 lot of deployments for policy, security, political reasons/restrictions.
 
 3) neutron's open source backend defaults to OVS (largely because #2). OVS
 is its own complicated engine that you need to learn to debug. While
 Linux bridge has challenges, it's also something that anyone who's
 worked with Linux & Virtualization for the last 10 years has some
 experience with.
 
 (also, the devstack setup code for neutron is a rat's nest, as it was
 mostly not paid attention to. This means it's been of zero help in explaining
 anything to people trying to do neutron. For better or worse devstack is
 our executable manual for a lot of these things)

 so that being said, I think we need to talk about minimum viable
 neutron as a model and figure out how far away that is from n-net. This
 week at the QA Sprint, Dean, Sean Collins, and I have spent some time
 hashing it out, hopefully with something to show the end of the week.
 This will be the new devstack code for neutron (the old lib/neutron is
 moved to lib/neutron-legacy).

 Default setup will be provider networks (which means no tenant
 isolation). For that you should only need neutron-api, -dhcp, and -l2.
 So #1 is made a bunch better. #2 not a thing at all. And for #3 we'd
 like to revert back to linux bridge for the base case (though first code
 will probably be OVS because that's the happy path today).

 
 Looking at the latest user survey, OVS looks to be 3 times as popular as
 Linux bridge for production deployments. Having LB as the default seems
 like an odd choice. You also wouldn't want to change the default before
 LB is tested at the gate.

Sure, actually testing defaults is presumed here. I didn't think it
needed to be called out separately.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to handle vendor-specific API microversions?

2015-03-27 Thread Chris Friesen

On 03/24/2015 11:10 AM, Chris Friesen wrote:

On 03/24/2015 07:42 AM, Sean Dague wrote:

On 03/24/2015 09:11 AM, Jeremy Stanley wrote:

On 2015-03-23 22:34:17 -0600 (-0600), Chris Friesen wrote:

How would that be expected to work for things where it's
fundamentally just a minor extension to an existing nova API?
(Exposing additional information as part of nova show, for
example.)


Conversely, how do you recommend users of your environment reconcile
the difference in nova show output compared to what they get from
the other OpenStack environments they're using? How do you propose
to address the need for client libraries to cater to your divergent
API returning different numbers of parameters for certain methods?


We had been trying to control things properly via the extensions mechanism so
that the changes could be documented/controlled.

As for clients, if the properties in the response are named, then simply adding
a new property to a response message body shouldn't be a problem--clients could
just ignore properties that they don't understand.
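
To sketch what that tolerant-client behaviour looks like in practice (the
response body and the vendor-added property below are invented purely for
illustration, not from any real deployment):

    import json

    # Invented example payload: a nova show-style response with one
    # hypothetical vendor-added property alongside the standard fields.
    body = json.loads('{"server": {"id": "abc123", "status": "ACTIVE",'
                      ' "vendor:server_group": "grp-1"}}')

    server = body['server']
    # A client that only reads the fields it knows about never notices
    # the extra property; dict.get() also tolerates absent fields.
    known = {f: server.get(f) for f in ('id', 'name', 'status')}
    print(known)  # the vendor property is simply never read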


I think these conversations work better in the concrete than the abstract.

Chris, what additional attributes are you exposing on nova show which
are critical for your installation? Can we figure out a way to
generically support whatever that is?


In some cases it might be something that could conceivably go in upstream, but
hasn't yet.  This might be something as simple as having nova show display the
server group that an instance is in, or it might be a bugfix that hasn't been
merged upstream yet (like https://review.openstack.org/#/c/16306, for example)
or it might be per-instance control over things that upstream currently only
allows control over at the image/flavor level.  Some of these might take a
release or two to get merged (and there's no guarantee that they would ever be
merged) but customers want the functionality in the meantime.

In other cases the change is unlikely to ever be merged upstream, either because
it's too domain-specific or the solution is messy or even proprietary.


Haven't seen any responses to this.

As I see it, nova is really pushing for interoperability, but what is a vendor 
supposed to do when they have customers asking for extensions to the existing 
behaviour, and they want it in a month rather than the 6-9 months it might take 
to push upstream?  (Assuming it's something that upstream is even interested in.)


I think it would be better to have an explicit method of declaring/versioning 
vendor-specific extensions (even if it's not used at all by the core Nova API) 
than to have each vendor winging it on their own.


That way you would still get interoperability of the core Nova API (allowing 
customers to use multiple cloud vendors as long as they stick to the core API) 
while still giving a well-defined way to provide customized behaviour.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Assaf Muller


- Original Message -
 On 03/27/2015 05:22 AM, Thierry Carrez wrote:
 snip
  Part of it is corner (or simplified) use cases not being optimally
  served by Neutron, and I think Neutron could more aggressively address
  those. But the other part is ignorance and convenience: that Neutron
  thing is a scary beast, last time I looked into it I couldn't make sense
  of it, and nova-network just works for me.
  
  That is why during the Ops Summit we discussed coming up with a
  migration guide that would explain the various ways you can use Neutron
  to cover nova-network use cases, demystify a few dark areas, and outline
  the step-by-step manual process you can go through to migrate from one
  to the other.
  
  We found a dev/ops volunteer for writing that migration guide but he was
  unfortunately not allowed to spend time on this. I heard we have new
  volunteers, but I'll let them announce what their plans are, rather than
  put words in their mouth.
  
  This migration guide can happen even if we follow the nova-net spinout
  plan (for the few who want to migrate to Neutron), so this is a
  complementary solution rather than an alternative. Personally I doubt
  there would suddenly be enough people interested in nova-net development
  to successfully spin it out and maintain it. I also agree with Russell
  that long-term fragmentation at this layer of the stack is generally not
  desirable.
 
 I think if you boil everything down, you end up with 3 really important
 differences.
 
 1) neutron is a fleet of services (it's very micro-service) and every
 service requires multiple and different config files. Just configuring
 the fleet is a beast if it is not devstack (and even if it is)
 
 2) neutron assumes the primary interesting thing to you is tenant-secured
 self-service networks. This is actually explicitly not interesting to a
 lot of deployments for policy, security, political reasons/restrictions.
 
 3) neutron's open source backend defaults to OVS (largely because #2). OVS
 is its own complicated engine that you need to learn to debug. While
 Linux bridge has challenges, it's also something that anyone who's
 worked with Linux & Virtualization for the last 10 years has some
 experience with.
 
 (also, the devstack setup code for neutron is a rat's nest, as it was
 mostly not paid attention to. This means it's been of zero help in explaining
 anything to people trying to do neutron. For better or worse devstack is
 our executable manual for a lot of these things)
 
 so that being said, I think we need to talk about minimum viable
 neutron as a model and figure out how far away that is from n-net. This
 week at the QA Sprint, Dean, Sean Collins, and I have spent some time
 hashing it out, hopefully with something to show the end of the week.
 This will be the new devstack code for neutron (the old lib/neutron is
 moved to lib/neutron-legacy).
 
 Default setup will be provider networks (which means no tenant
 isolation). For that you should only need neutron-api, -dhcp, and -l2.
 So #1 is made a bunch better. #2 not a thing at all. And for #3 we'd
 like to revert back to linux bridge for the base case (though first code
 will probably be OVS because that's the happy path today).
 

Looking at the latest user survey, OVS looks to be 3 times as popular as
Linux bridge for production deployments. Having LB as the default seems
like an odd choice. You also wouldn't want to change the default before
LB is tested at the gate.

 Mixin #1: NEUTRON_BRIDGE_WITH=OVS
 
 First optional layer being the flip from linuxbridge -> ovs. That becomes
 one bite-sized thing to flip over once you understand it.
 
 Mixin #2: self service networks
 
 This will be off in the default case, but can be enabled later.
 
 ... and turtles all the way up.
 
 
 Provider networks w/ Linux bridge are really close to the simplicity on
 the wire people expected with n-net. The last real difference is
 floating IPs. And the problem here was best captured by Sean Collins on
 Wed, Floating ips in nova are overloaded. They are both elastic ips, but
 they are also how you get public addresses in a default environment.
 Dean shared that that dual purpose is entirely due to constraints of the
 first NASA cloud which only had a /26 of routable IPs. In neutron this
 is just different, you don't need floating ips to have public addresses.
 But the mental model has stuck.
 
 
 Anyway, while I'm not sure this is going to solve everyone's issues, I
 think it's a useful exercise anyway for devstack's neutron support to
 revert to a minimum viable neutron for learning purposes, and let you
 layer on complexity manually over time. And I'd be really curious if an
 n-net -> provider network side step (still on linux bridge) would
 actually be a more reasonable transition for most environments.
 
   -Sean
 
 --
 Sean Dague
 http://dague.net
 
 

Re: [openstack-dev] [keystone][fernet] Fernet tokens sync

2015-03-27 Thread David Stanek
On Fri, Mar 27, 2015 at 10:14 AM, Boris Bobrov bbob...@mirantis.com wrote:

 As you know, keystone introduced non-persistent tokens in kilo -- Fernet
 tokens. These tokens use Fernet keys, that are rotated from time to time. A
 great description of key rotation and replication can be found on [0] and
 [1]
 (thanks, lbragstad). In HA setup there are multiple nodes with Keystone and
 that requires key replication. How do we do that with new Fernet tokens?

 Please keep in mind that the solution should be HA -- there should not be
 any
 master server, pushing keys to slave servers, because master server
 might go
 down.


In my test environment I was using ansible to sync the keys across multiple
nodes. Keystone should probably provide some guidance around this process,
but I don't think it should deal with the actual syncing. I think that's
better left to an installation's existing configuration management tools.


-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Mohammad Banikazemi


Sean Dague s...@dague.net wrote on 03/27/2015 07:11:18 AM:

 From: Sean Dague s...@dague.net
 To: openstack-dev@lists.openstack.org
 Date: 03/27/2015 07:12 AM
 Subject: Re: [openstack-dev] [Nova][Neutron] Status of the nova-
 network to Neutron migration work

 On 03/27/2015 05:22 AM, Thierry Carrez wrote:
 snip
  Part of it is corner (or simplified) use cases not being optimally
  served by Neutron, and I think Neutron could more aggressively address
  those. But the other part is ignorance and convenience: that Neutron
  thing is a scary beast, last time I looked into it I couldn't make sense
  of it, and nova-network just works for me.
 
  That is why during the Ops Summit we discussed coming up with a
  migration guide that would explain the various ways you can use Neutron
  to cover nova-network use cases, demystify a few dark areas, and outline
  the step-by-step manual process you can go through to migrate from one
  to the other.
 
  We found a dev/ops volunteer for writing that migration guide but he was
  unfortunately not allowed to spend time on this. I heard we have new
  volunteers, but I'll let them announce what their plans are, rather than
  put words in their mouth.
 
  This migration guide can happen even if we follow the nova-net spinout
  plan (for the few who want to migrate to Neutron), so this is a
  complementary solution rather than an alternative. Personally I doubt
  there would suddenly be enough people interested in nova-net development
  to successfully spin it out and maintain it. I also agree with Russell
  that long-term fragmentation at this layer of the stack is generally not
  desirable.

 I think if you boil everything down, you end up with 3 really important
 differences.

 1) neutron is a fleet of services (it's very micro-service) and every
 service requires multiple and different config files. Just configuring
 the fleet is a beast if it is not devstack (and even if it is)
 
 2) neutron assumes the primary interesting thing to you is tenant-secured
 self-service networks. This is actually explicitly not interesting to a
 lot of deployments for policy, security, political reasons/restrictions.
 
 3) neutron's open source backend defaults to OVS (largely because #2). OVS
 is its own complicated engine that you need to learn to debug. While
 Linux bridge has challenges, it's also something that anyone who's
 worked with Linux & Virtualization for the last 10 years has some
 experience with.
 
 (also, the devstack setup code for neutron is a rat's nest, as it was
 mostly not paid attention to. This means it's been of zero help in explaining
 anything to people trying to do neutron. For better or worse devstack is
 our executable manual for a lot of these things)

 so that being said, I think we need to talk about minimum viable
 neutron as a model and figure out how far away that is from n-net. This
 week at the QA Sprint, Dean, Sean Collins, and I have spent some time
 hashing it out, hopefully with something to show the end of the week.
 This will be the new devstack code for neutron (the old lib/neutron is
 moved to lib/neutron-legacy).

 Default setup will be provider networks (which means no tenant
 isolation). For that you should only need neutron-api, -dhcp, and -l2.
 So #1 is made a bunch better. #2 not a thing at all. And for #3 we'd
 like to revert back to linux bridge for the base case (though first code
 will probably be OVS because that's the happy path today).


Are you suggesting that for the common use cases that will use the default
setup, the external network connectivity doesn't matter much?

 Mixin #1: NEUTRON_BRIDGE_WITH=OVS

 First optional layer being the flip from linuxbridge -> ovs. That becomes
 one bite-sized thing to flip over once you understand it.

 Mixin #2: self service networks

 This will be off in the default case, but can be enabled later.

 ... and turtles all the way up.


 Provider networks w/ Linux bridge are really close to the simplicity on
 the wire people expected with n-net. The last real difference is
 floating IPs. And the problem here was best captured by Sean Collins on
 Wed, Floating ips in nova are overloaded. They are both elastic ips, but
 they are also how you get public addresses in a default environment.
 Dean shared that that dual purpose is entirely due to constraints of the
 first NASA cloud which only had a /26 of routable IPs. In neutron this
 is just different, you don't need floating ips to have public addresses.
 But the mental model has stuck.


 Anyway, while I'm not sure this is going to solve everyone's issues, I
 think it's a useful exercise anyway for devstack's neutron support to
 revert to a minimum viable neutron for learning purposes, and let you
 layer on complexity manually over time. And I'd be really curious if an
 n-net -> provider network side step (still on linux bridge) would
 actually be a more reasonable transition for most environments.

-Sean

 --
 Sean Dague
 http://dague.net


Re: [openstack-dev] [keystone][fernet] Fernet tokens sync

2015-03-27 Thread Boris Bobrov
On Friday 27 March 2015 17:14:28 Boris Bobrov wrote:
 Hello,
 
 As you know, keystone introduced non-persistent tokens in kilo -- Fernet
 tokens. These tokens use Fernet keys, that are rotated from time to time. A
 great description of key rotation and replication can be found on [0] and
 [1] (thanks, lbragstad). In HA setup there are multiple nodes with
 Keystone and that requires key replication. How do we do that with new
 Fernet tokens?
 
 Please keep in mind that the solution should be HA -- there should not be
 any master server, pushing keys to slave servers, because master server
 might go down.

 [...]

[0] and [1] in the mail are:

[0]: http://lbragstad.com/?p=133
[1]: http://lbragstad.com/?p=156

After some discussion in #openstack-keystone it seems that key rotation 
should not be a frequent procedure and that the 15 minutes in the blog post was 
just an example for the sake of simple math.
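
For anyone doing that math themselves, here is a rough sketch of the
sizing relationship described in those posts; treat the "+ 2" (one slot
for the staging key, one for the current primary) as an assumption to
verify against [0] and [1], not gospel:

    import math

    token_lifetime = 4 * 3600    # example: tokens valid for 4 hours
    rotation_period = 24 * 3600  # example: rotate keys once a day

    # Old primary keys must stay in the repository until the last token
    # they signed has expired, plus the staging key and current primary.
    max_active_keys = int(math.ceil(token_lifetime /
                                    float(rotation_period))) + 2
    print(max_active_keys)       # -> 3 for these example numbers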


-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Fox, Kevin M
The floating-IP-only-on-external-networks thing has always been a little odd to 
me...

Floating IPs are very important to ensure a user can switch out one instance 
with another and keep 'state' consistent (the other piece being cinder 
volumes). But why can't you do this on a provider network? It really is the 
same thing. You can force the fixed IP to whatever you want, but it's a 
completely different mechanism.


On the subject of "we don't need the rest of user-defined networking, just 
provider networks", I'd add this:

One of the things I see as a long-term benefit of cloud is a catalog of open 
source cloud applications. As a user, you go to the catalog, search for... trac 
for example, and hit launch. Easy, done...

As a developer of such templates, it's a real pain to have to deal with neutron 
networking vs nova networking, let alone the many different ways of configuring 
neutron. On top of that, one of the great features of NaaS is that you can push 
isolation to the network layer and not have to deal so much with 
authentication. Take ElasticSearch for example. It has no concept of 
authentication since it is a backend service. You put it on its own network that 
only the webservers can get to. But that means you can't write a template that 
will work on anything but proper NaaS securely.

So, short term, you're not wanting to deal with the complication of a more 
featureful neutron, but you're really just pushing the complication to the cloud 
users/app developers, slowing down development of cloud apps, and therefore 
your users' experience is diminished since their selection of apps is restricted 
with all sorts of caveats: "this application works only if your service 
provider set up NaaS". Really, the way I see it, it's the cloud admin's job to 
deal with complication so that the end users don't have to. It's one of the 
things that makes being a cloud user so great. A few skilled cloud admins can 
make it possible for many, many less experienced folks to do amazing things on 
top. The cloud and cloud admin hide all the complexity from the user.

Let's reduce the fragmentation as much as we can here. It will actually make the 
app ecosystem and user experience much better in the long run.

Thanks,
Kevin

From: Sean Dague [s...@dague.net]
Sent: Friday, March 27, 2015 4:11 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to 
Neutron migration work

On 03/27/2015 05:22 AM, Thierry Carrez wrote:
snip
 Part of it is corner (or simplified) use cases not being optimally
 served by Neutron, and I think Neutron could more aggressively address
 those. But the other part is ignorance and convenience: that Neutron
 thing is a scary beast, last time I looked into it I couldn't make sense
 of it, and nova-network just works for me.

 That is why during the Ops Summit we discussed coming up with a
 migration guide that would explain the various ways you can use Neutron
 to cover nova-network use cases, demystify a few dark areas, and outline
 the step-by-step manual process you can go through to migrate from one
 to the other.

 We found a dev/ops volunteer for writing that migration guide but he was
 unfortunately not allowed to spend time on this. I heard we have new
 volunteers, but I'll let them announce what their plans are, rather than
 put words in their mouth.

 This migration guide can happen even if we follow the nova-net spinout
 plan (for the few who want to migrate to Neutron), so this is a
 complementary solution rather than an alternative. Personally I doubt
 there would suddenly be enough people interested in nova-net development
 to successfully spin it out and maintain it. I also agree with Russell
 that long-term fragmentation at this layer of the stack is generally not
 desirable.

I think if you boil everything down, you end up with 3 really important
differences.

1) neutron is a fleet of services (it's very micro-service) and every
service requires multiple and different config files. Just configuring
the fleet is a beast if it is not devstack (and even if it is)

2) neutron assumes the primary interesting thing to you is tenant-secured
self-service networks. This is actually explicitly not interesting to a
lot of deployments for policy, security, political reasons/restrictions.

3) neutron's open source backend defaults to OVS (largely because #2). OVS
is its own complicated engine that you need to learn to debug. While
Linux bridge has challenges, it's also something that anyone who's
worked with Linux & Virtualization for the last 10 years has some
experience with.

(also, the devstack setup code for neutron is a rat's nest, as it was
mostly not paid attention to. This means it's been of zero help in explaining
anything to people trying to do neutron. For better or worse devstack is
our executable manual for a lot of these things)

so that being said, I think we need to talk about 

Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Mark Voelker
Inline…


On Mar 27, 2015, at 11:48 AM, Assaf Muller amul...@redhat.com wrote:

 
 
 - Original Message -
 On 03/27/2015 05:22 AM, Thierry Carrez wrote:
 snip
 Part of it is corner (or simplified) use cases not being optimally
 served by Neutron, and I think Neutron could more aggressively address
 those. But the other part is ignorance and convenience: that Neutron
 thing is a scary beast, last time I looked into it I couldn't make sense
 of it, and nova-network just works for me.
 
 That is why during the Ops Summit we discussed coming up with a
 migration guide that would explain the various ways you can use Neutron
 to cover nova-network use cases, demystify a few dark areas, and outline
 the step-by-step manual process you can go through to migrate from one
 to the other.
 
 We found a dev/ops volunteer for writing that migration guide but he was
 unfortunately not allowed to spend time on this. I heard we have new
 volunteers, but I'll let them announce what their plans are, rather than
 put words in their mouth.
 
 This migration guide can happen even if we follow the nova-net spinout
 plan (for the few who want to migrate to Neutron), so this is a
 complementary solution rather than an alternative. Personally I doubt
 there would suddenly be enough people interested in nova-net development
 to successfully spin it out and maintain it. I also agree with Russell
 that long-term fragmentation at this layer of the stack is generally not
 desirable.
 
 I think if you boil everything down, you end up with 3 really important
 differences.
 
 1) neutron is a fleet of services (it's very micro-service) and every
 service requires multiple and different config files. Just configuring
 the fleet is a beast if it is not devstack (and even if it is)
 
 2) neutron assumes the primary interesting thing to you is tenant-secured
 self-service networks. This is actually explicitly not interesting to a
 lot of deployments for policy, security, political reasons/restrictions.
 
 3) neutron's open source backend defaults to OVS (largely because #2). OVS
 is its own complicated engine that you need to learn to debug. While
 Linux bridge has challenges, it's also something that anyone who's
 worked with Linux & Virtualization for the last 10 years has some
 experience with.
 
 (also, the devstack setup code for neutron is a rat's nest, as it was
 mostly not paid attention to. This means it's been of zero help in explaining
 anything to people trying to do neutron. For better or worse devstack is
 our executable manual for a lot of these things)
 
 so that being said, I think we need to talk about minimum viable
 neutron as a model and figure out how far away that is from n-net. This
 week at the QA Sprint, Dean, Sean Collins, and I have spent some time
 hashing it out, hopefully with something to show the end of the week.
 This will be the new devstack code for neutron (the old lib/neutron is
 moved to lib/neutron-legacy).
 
 Default setup will be provider networks (which means no tenant
 isolation). For that you should only need neutron-api, -dhcp, and -l2.
 So #1 is made a bunch better. #2 not a thing at all. And for #3 we'd
 like to revert back to linux bridge for the base case (though first code
 will probably be OVS because that's the happy path today).
 
 
 Looking at the latest user survey, OVS looks to be 3 times as popular as

3x as popular *with existing Neutron users* though.  Not people that are still 
running nova-net.  I think we have to bear in mind here that when we’re looking 
at user survey results we’re looking at a general audience of OpenStack users, 
and what we’re trying to solve on this thread is a specific subset of that 
audience.  The very fact that those people are still running nova-net may be a 
good indicator that they don’t find the Neutron choices that lots of other 
people have made to be a good fit for their particular use cases (else they’d 
have switched by now).  We got some reinforcement of this idea during 
discussion at the Operator’s Midcycle Meetup in Philadelphia: the feedback from 
nova-net users that I heard was that OVS+Neutron was too complicated and too 
hard to debug compared to what they have today, hence they didn’t find it a 
compelling option.  

Linux Bridge is, in the eyes of many folks in that room, a simpler model in 
terms of operating and debugging, so I think it's likely a very reasonable fit 
for this group of users.  However, in the interest of ensuring that those operators 
have a chance to chime in here, I’ve added openstack-operators to the thread.

At Your Service,

Mark T. Voelker


 Linux bridge for production deployments. Having LB as the default seems
 like an odd choice. You also wouldn't want to change the default before
 LB is tested at the gate.
 
 Mixin #1: NEUTRON_BRIDGE_WITH=OVS
 
  First optional layer being the flip from linuxbridge -> ovs. That becomes
  one bite-sized thing to flip over once you understand it.
 
 Mixin #2: self service networks
 
 This will 

Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Fox, Kevin M
No, no. Most OpenStack deployments are neutron-based with OVS because it's the 
default these days.

There have been all sorts of warnings to folks for years saying if you start with 
nova-network, there will be pain for you later. Hopefully, that has scared away 
most new folks from doing it. Most of the existing folks are there because they 
started before Neutron was up to speed. That's a different problem.

So I would expect the number of folks needing to go from nova-network to 
neutron to be a small number of clouds, not a big number. Changing the defaults 
now to favor that small minority of clouds seems like an odd choice.

Really, I don't think finding the right solution to migrate those still using 
nova-network to neutron has anything to do with what the default out of the box 
experience for new clouds should be...

Having linuxbridge be the default for folks moving from nova-network to neutron 
might make much more sense than saying everyone should by default get 
linuxbridge.

Thanks,
Kevin

From: Dean Troyer [dtro...@gmail.com]
Sent: Friday, March 27, 2015 9:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to 
Neutron migration work

On Fri, Mar 27, 2015 at 10:48 AM, Assaf Muller amul...@redhat.com wrote:
Looking at the latest user survey, OVS looks to be 3 times as popular as
Linux bridge for production deployments. Having LB as the default seems
like an odd choice. You also wouldn't want to change the default before
LB is tested at the gate.

Simple things need to be simple to accomplish, and defaults MUST be simple to 
use.

LB's support requirements are very simple compared to OVS.  This is an 
achievable first step away from nova-net and once conquered the second step 
becomes less overwhelming.  Look at the success of swallowing the entire 
elephant at once that we've seen in the last $TOO_MANY years.

dt

--

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Sean M. Collins
On Fri, Mar 27, 2015 at 11:11:42AM EDT, Mohammad Banikazemi wrote:
 Are you suggesting that for the common use cases that will use the default
 setup, the external network connectivity doesn't matter much?

No, if anything the reverse. The default will have external connectivity
by default, by using provider networks or flat networking.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [os-ansible-deployment] Nominating Nolan Brubaker for core team

2015-03-27 Thread Dave Wilde
+1

Dave Wilde
Software Engineer III - RCBOPS
Rackspace - the open cloud company


From: Ian Cordasco ian.corda...@rackspace.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: March 25, 2015 at 10:48:34
To: Hugh Saunders h...@wherenow.org, OpenStack 
Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [os-ansible-deployment] Nominating Nolan Brubaker 
for core team

+1

On 3/25/15, 10:36, Hugh Saunders h...@wherenow.org wrote:

Great proposal, Nolan will be an asset to the core team.


+1

--
Hugh Saunders



On 25 March 2015 at 15:24, Kevin Carter
kevin.car...@rackspace.com wrote:

Greetings,

I would like to nominate Nolan Brubaker (palendae on IRC) for the
os-ansible-deployment-core team. Nolan has been involved with the project
for the last few months and has been an active reviewer with solid
reviews. IMHO, I think he is ready to receive core
 powers on the repository.

References:
 [ https://review.openstack.org/#/q/project:stackforge/os-ansible-deployment+reviewer:%22nolan+brubaker%253Cnolan.brubaker%2540rackspace.com%253E%22,n,z ]

Please respond with +1/-1s or any other concerns.

As a reminder, we are using the voting process outlined at
[ https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess ] to
add members to our core team.

—

Kevin Carter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][fernet] Fernet tokens sync

2015-03-27 Thread Morgan Fainberg
Matt,

The idea is you have a staging key (next key) and you generate that, and sync 
it out. Once it is synced out you can rotate to it as needed. All keys on the 
server are valid for token validation. Only the active key is used for a 
given keystone to issue a token.
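
To make the ordering concrete, here is a minimal sketch of that flow. It
assumes the conventional keystone layout (keys as numbered files in
/etc/keystone/fernet-keys, index 0 being the staging key and the highest
index the primary); in practice keystone-manage handles rotation, and
purging of old keys per max_active_keys is omitted here:

    import os
    from cryptography.fernet import Fernet

    KEY_REPO = '/etc/keystone/fernet-keys'

    def rotate(key_repo=KEY_REPO):
        """Promote the staging key to primary, then stage a fresh key."""
        next_primary = max(int(name) for name in os.listdir(key_repo)) + 1
        # The old staging key (0) becomes the new primary key. Because it
        # was already synced everywhere, every node can validate tokens
        # it signs the moment it is promoted.
        os.rename(os.path.join(key_repo, '0'),
                  os.path.join(key_repo, str(next_primary)))
        # Generate a fresh staging key; sync it to all nodes *before* the
        # next rotation promotes it.
        with open(os.path.join(key_repo, '0'), 'wb') as f:
            f.write(Fernet.generate_key())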

Lance has some ansible stuff he put together for syncing the keys: 
https://github.com/lbragstad/revolver

--Morgan

Sent via mobile

 On Mar 27, 2015, at 09:02, Matt Fischer m...@mattfischer.com wrote:
 
 Do the keys all need to be changed at once in a cluster? If so, that makes it 
 difficult for puppet, at least with how we do puppet deployments.
 
 Also, David can you share your ansible script for this?
 
 On Fri, Mar 27, 2015 at 9:48 AM, David Stanek dsta...@dstanek.com wrote:
 
 On Fri, Mar 27, 2015 at 10:14 AM, Boris Bobrov bbob...@mirantis.com wrote:
 As you know, keystone introduced non-persistent tokens in kilo -- Fernet
 tokens. These tokens use Fernet keys, that are rotated from time to time. A
 great description of key rotation and replication can be found on [0] and 
 [1]
 (thanks, lbragstad). In HA setup there are multiple nodes with Keystone and
 that requires key replication. How do we do that with new Fernet tokens?
 
 Please keep in mind that the solution should be HA -- there should not be 
 any
 master server, pushing keys to slave servers, because master server might 
 go
 down.
 
 In my test environment I was using ansible to sync the keys across multiple 
 nodes. Keystone should probably provide some guidance around this process, 
 but I don't think it should deal with the actual syncing. I think that's 
 better left to an installation's existing configuration management tools.
 
 
 -- 
 David
 blog: http://www.traceback.org
 twitter: http://twitter.com/dstanek
 www: http://dstanek.com
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][fernet] Fernet tokens sync

2015-03-27 Thread Matt Fischer
Do the keys all need to be changed at once in a cluster? If so, that makes
it difficult for puppet, at least with how we do puppet deployments.

Also, David can you share your ansible script for this?

On Fri, Mar 27, 2015 at 9:48 AM, David Stanek dsta...@dstanek.com wrote:


 On Fri, Mar 27, 2015 at 10:14 AM, Boris Bobrov bbob...@mirantis.com
 wrote:

 As you know, keystone introduced non-persistent tokens in kilo -- Fernet
 tokens. These tokens use Fernet keys, that are rotated from time to time.
 A
 great description of key rotation and replication can be found on [0] and
 [1]
 (thanks, lbragstad). In HA setup there are multiple nodes with Keystone
 and
 that requires key replication. How do we do that with new Fernet tokens?

 Please keep in mind that the solution should be HA -- there should not be
 any
 master server, pushing keys to slave servers, because master server
 might go
 down.


 In my test environment I was using ansible to sync the keys across
 multiple nodes. Keystone should probably provide some guidance around this
 process, but I don't think it should deal with the actual syncing. I think
 that's better left to an installation's existing configuration management
 tools.


 --
 David
 blog: http://www.traceback.org
 twitter: http://twitter.com/dstanek
 www: http://dstanek.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] What's Up Doc? March 26, 2015 [all]

2015-03-27 Thread Brant Knudson
On Thu, Mar 26, 2015 at 7:37 PM, Anne Gentle annegen...@justwriteclick.com
wrote:

 Here's the latest news installment from docsland.

 Install Guides updates
 -
 We've got a spec ready for the changes to the Install Guides now published
 at:

 http://specs.openstack.org/openstack/docs-specs/specs/kilo/installguide-kilo.html
 I'm sure the keystone team will rejoice to see changes to support the
 Identity v3 API by default. Also the openstack CLI will substitute for
 keystone CLI commands.


I, for one, am rejoicing in this moment.

 - Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [libvirt] The risk of hanging when shutdown instance.

2015-03-27 Thread Chris Friesen

On 03/26/2015 07:44 PM, Rui Chen wrote:

Yes, you are right, but we found our instance hangs at the first dom.shutdown()
call; if dom.shutdown() doesn't return, there is no chance to execute
dom.destroy(), right?


Correct.  The code is written assuming dom.shutdown() cannot block indefinitely.

The libvirt docs at 
https://libvirt.org/html/libvirt-libvirt-domain.html#virDomainShutdown say 
"...this command returns as soon as the shutdown request is issued rather than 
blocking until the guest is no longer running."


If dom.shutdown() blocks indefinitely, then that's a libvirt bug.
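
Given that contract, a defensive caller can treat shutdown() as a request
and enforce its own grace period. A rough sketch, not Nova's actual code,
and of course no timeout in the caller helps if shutdown() itself blocks
as reported:

    import time
    import libvirt

    def graceful_shutdown(name, timeout=60, uri='qemu:///system'):
        conn = libvirt.open(uri)
        try:
            dom = conn.lookupByName(name)
            dom.shutdown()  # should return once the request is issued
            deadline = time.time() + timeout
            while time.time() < deadline:
                if not dom.isActive():
                    return True   # guest shut down cleanly
                time.sleep(1)
            dom.destroy()         # hard power-off after the grace period
            return False
        finally:
            conn.close()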

Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Dean Troyer
On Fri, Mar 27, 2015 at 10:48 AM, Assaf Muller amul...@redhat.com wrote:

 Looking at the latest user survey, OVS looks to be 3 times as popular as
 Linux bridge for production deployments. Having LB as the default seems
 like an odd choice. You also wouldn't want to change the default before
 LB is tested at the gate.


Simple things need to be simple to accomplish, and defaults MUST be simple
to use.

LB's support requirements are very simple compared to OVS.  This is an
achievable first step away from nova-net and once conquered the second step
becomes less overwhelming.  Look at the success of swallowing the entire
elephant at once that we've seen in the last $TOO_MANY years.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Volunteers Needed for OpenStack Booth at PyCon

2015-03-27 Thread Flavio Percoco

On 26/03/15 14:42 -0700, Stefano Maffulli wrote:

Volunteers Needed for OpenStack Booth

We are looking for 3 more knowledgeable volunteers who can staff the
OpenStack booth in shifts (see the booth schedule below).  If you are
available and have at least one year of experience contributing to or
using OpenStack, please email ridolfoden...@gmail.com. We have free
sponsor passes available.

Details of the show, schedule and peak times on the show floor are on
https://etherpad.openstack.org/p/pycon-2015-booth

If you are interested in helping, please add your name to the etherpad
in the time slot you'd be available.


o/ Signed up for one of the slots. Will probably do more!



Thanks,
Stef

PyCon Event details:
Location: Palais des Congres Montreal Convention Center in Montreal,
Canada
Show Dates: April 8-16, 2015
Exhibit Hall Dates: April 9-11, 2015




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


pgpzEiCtdktRT.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to handle vendor-specific API microversions?

2015-03-27 Thread Russell Bryant
On 03/27/2015 03:03 PM, Chris Friesen wrote:
 On 03/27/2015 12:44 PM, Dan Smith wrote:
 To quote John from an earlier email in this thread:

 It's worth noting, we do have the experimental flag:
 
 The first header specifies the version number of the API which was
 executed. Experimental is only returned if the operator has made a
 modification to the API behaviour that is non standard. This is only
 intended to be a transitional mechanism while some functionality used
 by cloud operators is upstreamed and it will be removed within a small
 number of releases.
 

 So if you have an extension that gets accepted upstream you can use the
 experimental flag until you migrate to the upstream version of the
 extension.

 Yes, but please note the last sentence in the quoted bit. This is to
 help people clean their dirty laundry. Going forward, you shouldn't
 expect to deliver features to your customers via this path.

 That is *not* what I would call interoperability, this is exactly what
 we do not want.

 +1.
 
 So for the case where a customer really wants some functionality, and
 wants it *soon* rather than waiting for it to get merged upstream, what
 is the recommended implementation path for a vendor?
 
 And what about stuff that's never going to get merged upstream because
 it's too specialized or too messy or depends on proprietary stuff?
 
 I ask this as an employee of a vendor that provides some modifications
 that customers seem to find useful (using the existing extensions
 mechanism to control them) and we want to do the right thing here.  Some
 of the modifications could make sense upstream and we are currently
 working on pushing those, but it's not at all clear how we're supposed
 to handle the above scenarios once the existing extension code gets
 removed.

This is why Red Hat's development approach is "upstream first."  You
really screw yourself if you ship something before it goes upstream,
*especially* when the changes are user visible.

I'm really not interested in the never going upstream case at all.
It's damaging to users and the OpenStack ecosystem overall.  That view
seems pretty widely shared in this thread so far.

However, I think the question of backporting a feature that has landed
upstream that Steve Gordon just raised is a good case to consider.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] heat delete woes in Juno

2015-03-27 Thread Matt Fischer
Pavlo,

Here is a link to one of the stacks. It is fairly simple, just some
routers/nets/subnets. The description is a bit odd perhaps, but legal. I've
changed the template to not point at IPs or internal DNS.

http://paste.ubuntu.com/10690759/

I created and deleted this in a loop about 5 times and it finally failed to
delete on the last run. Now that it is stuck in DELETE_FAILED no amount of
deleting will help. I'm concerned that a template this simple can get stuck
like this.

I will have stack_abandon enabled next week, as it just landed in Puppet
(https://review.openstack.org/#/c/168157/), and will plan on trying it then.


On Thu, Mar 26, 2015 at 12:40 PM, Pavlo Shchelokovskyy 
pshchelokovs...@mirantis.com wrote:

 Hi Matt,

 if it is feasible/appropriate, could you provide us with the templates
 for stacks that show this behavior (try to get them with heat
 template-show stack-name-or-id)? This would help us to test and
 understand the problem better.

 And yes, just the day before I was contacted by one of my colleagues who
 seems to experience similar problems with a Juno-based OpenStack deployment
 (though I have not had a chance to look into the issue yet).

 Best regards,

 Pavlo Shchelokovskyy
 Software Engineer
 Mirantis Inc
 www.mirantis.com

 On Thu, Mar 26, 2015 at 8:17 PM, Matt Fischer m...@mattfischer.com
 wrote:

 Nobody on the operators list had any ideas on this, so re-posting here.

 We've been having some issues with heat delete-stack in Juno. The issues
 generally fall into three categories:

 1) it takes multiple calls to heat to delete a stack. Presumably due
 to heat being unable to figure out the ordering on deletion and resources
 being in use.

 2) undeletable stacks. Stacks that refuse to delete get stuck in
 DELETE_FAILED state. In this case, they show up in stack-list and
 stack-show, yet resource-list and stack-delete deny their existence. This
 means I can't easily be sure whether they have any real resources.

 3) As a corollary to item 1, stacks for which heat can never unwind the
 dependencies, which stay in DELETE_IN_PROGRESS forever.

 Does anyone have any work-arounds for these or recommendations on
 cleanup? My main worry is removing a stack from the database that is still
 consuming the customer's resources. I also don't just want to remove stacks
 from the database and leave orphaned records in the DB.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to handle vendor-specific API microversions?

2015-03-27 Thread Steve Gordon
- Original Message -
 From: Chris Friesen chris.frie...@windriver.com
 To: openstack-dev@lists.openstack.org
 
 On 03/27/2015 12:44 PM, Dan Smith wrote:
  To quote John from an earlier email in this thread:
 
  It's worth noting, we do have the experimental flag:
  
  The first header specifies the version number of the API which was
  executed. Experimental is only returned if the operator has made a
  modification to the API behaviour that is non standard. This is only
  intended to be a transitional mechanism while some functionality used
  by cloud operators is upstreamed and it will be removed within a small
  number of releases.
  
 
  So if you have an extension that gets accepted upstream you can use the
  experimental flag until you migrate to the upstream version of the
  extension.
 
  Yes, but please note the last sentence in the quoted bit. This is to
  help people clean their dirty laundry. Going forward, you shouldn't
  expect to deliver features to your customers via this path.
 
  That is *not* what I would call interoperability, this is exactly what
  we do not want.
 
  +1.
 
 So for the case where a customer really wants some functionality, and wants
 it
 *soon* rather than waiting for it to get merged upstream, what is the
 recommended implementation path for a vendor?

Well, before all else the key is to at least propose it in the community and 
see what the appetite for it is. I think part of the problem here is that we're 
still discussing this mostly in the abstract: although you provided some 
high-level examples in response to Sean, the only link was to a review that 
merged the same day it was proposed (albeit in 2012). I'm interested in whether 
there is a specific proposal you can link to that you put forward in the past 
and that wasn't accepted or was held up, or whether you are working from a 
preset assumption here?

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Bug tagging practice

2015-03-27 Thread Dmitry Pyzhov
Clarification: the numbers include only open bugs on 6.1. We have about 15
module-volumes bugs on 7.0, many bugs on the 'next' milestone, and some
number of bugs in progress.

On Fri, Mar 27, 2015 at 9:40 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 Guys,

 I've tried to sort more than 200 bugs on my team. I've tried several
 approaches to this issue, and here is the solution.

 First of all, assigning bugs to teams is great. Totally. Awesome. Let's
 keep using it.

 Second. I have my own one-more-launchpad-parser:
 https://github.com/dmi-try/launchpad-report
 I have no time to add multithreading to it, so it takes more than 5 hours
 to do its job, but it is 100% suitable for me and works just great. It takes
 every single fuel and mos bug, checks every single task for this bug,
 analyses it and gives me a CSV report. It notices every single missed triage
 and fix action on every single milestone, so I even know that we have some
 unfinished backports on 4.x branches. It shows every tag, every bug creation
 date and bug update date (in the dev version). I'm looking forward to seeing
 these functions in our web tool, because it is bad to have several tools for
 one task.

 Third. Our 'nailgun' and 'ui' tags are useless. Almost every bug can be
 tied to some component or to some feature. So I've introduced a lot of
 feature-* and module-* tags for my team and we will evaluate them. You can
 find all new tags later in this email.

 Fourth. We do have low-hanging-fruit
 https://bugs.launchpad.net/fuel/+bugs?field.tag=low-hanging-fruit tag
 and it is great. I've also added tech-debt
 https://bugs.launchpad.net/fuel/+bugs?field.tag=tech-debt tag in order
 to group bugs that are not related to the user experience or functionality.
 And I've added feature
 https://bugs.launchpad.net/fuel/+bugs?field.tag=feature tag for
 complicated requests that need to be properly designed. Some of them are
 not even close to being real bugs, but almost every request speaks to a
 user's pain, so it is rude to close them. That's why we have:

 Fifth. 'next https://launchpad.net/fuel/+milestone/next' milestone.
 Bugs in this milestone cannot be fixed with our bugfixing process. We do
 need proper prioritization for them in our backlog.

 Our feature and module tags, with the number of bugs per tag:

  feature-advanced-networking 2
  feature-bonding 3
  feature-client 1
  feature-deadlocks 1
  feature-demo-site 2
  feature-hardware-change 5
  feature-image-based 13
  feature-logging 4
  feature-mongo 2
  feature-multi-l2 3
  feature-native-provisioning 6
  feature-plugins 5
  feature-progress-bar 2
  feature-redeployment 4
  feature-remote-repos 2
  feature-reset-env 5
  feature-security 3
  feature-simple-mode 1
  feature-stats 9
  feature-stop-deployment 3
  feature-upgrade 8
  feature-validation 9
  module-amqp 1
  module-build 2
  module-client 13
  module-fuelmenu 1
  module-master-node-installation 2
  module-nailgun 1
  module-nailgun-agent 1
  module-netcheck 11
  module-networks 8
  module-ostf 19
  module-serialization 4
  module-shotgun 16
  module-tasks 13
  module-volumes 8

 I'm going to add these tags to our triaging process and assign an owner
 for each tag.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to handle vendor-specific API microversions?

2015-03-27 Thread Dean Troyer
On Fri, Mar 27, 2015 at 12:28 PM, Chris Friesen chris.frie...@windriver.com
 wrote:

 As I see it, nova is really pushing for interoperability, but what is a
 vendor supposed to do when they have customers asking for extensions to the
 existing behaviour, and they want it in a month rather than the 6-9 months
 it might take to push upstream?  (Assuming it's something that upstream is
 even interested in.)


Vendors with the need to further isolate themselves from the interoperable
community are free to build their own API endpoints and provide clients to
communicate with them (which of course is exactly what we (should) not want
to happen).  Only the most trivial of 'extensions' does not require
client-side support to be usable, so some subset of the N clients and
libraries in use also needs to be changed.

dt

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to handle vendor-specific API microversions?

2015-03-27 Thread Chris Friesen

On 03/27/2015 12:44 PM, Dan Smith wrote:

To quote John from an earlier email in this thread:

It's worth noting, we do have the experimental flag:

The first header specifies the version number of the API which was
executed. Experimental is only returned if the operator has made a
modification to the API behaviour that is non standard. This is only
intended to be a transitional mechanism while some functionality used
by cloud operators is upstreamed and it will be removed within a small
number of releases.


So if you have an extension that gets accepted upstream you can use the
experimental flag until you migrate to the upstream version of the
extension.


Yes, but please note the last sentence in the quoted bit. This is to
help people clean their dirty laundry. Going forward, you shouldn't
expect to deliver features to your customers via this path.


That is *not* what I would call interoperability, this is exactly what
we do not want.


+1.


So for the case where a customer really wants some functionality, and wants it 
*soon* rather than waiting for it to get merged upstream, what is the 
recommended implementation path for a vendor?


And what about stuff that's never going to get merged upstream because it's too 
specialized or too messy or depends on proprietary stuff?


I ask this as an employee of a vendor that provides some modifications that 
customers seem to find useful (using the existing extensions mechanism to 
control them) and we want to do the right thing here.  Some of the modifications 
could make sense upstream and we are currently working on pushing those, but 
it's not at all clear how we're supposed to handle the above scenarios once the 
existing extension code gets removed.
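
For concreteness, my understanding is that a client would detect an
operator-modified API along these lines (a sketch; the endpoint and token are
hypothetical, and I'm assuming the experimental marker is returned in the
version header as John's quoted text suggests):

import requests

# Hypothetical values.
NOVA_ENDPOINT = 'http://127.0.0.1:8774/v2.1/<tenant-id>'
TOKEN = '<keystone-token>'

resp = requests.get(
    NOVA_ENDPOINT + '/servers',
    headers={'X-Auth-Token': TOKEN,
             'X-OpenStack-Nova-API-Version': '2.3'})
version = resp.headers.get('X-OpenStack-Nova-API-Version', '')
if 'experimental' in version:
    # Operator-modified, non-standard behaviour; don't assume it is
    # portable across clouds.
    print('Warning: experimental API version: %s' % version)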


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Bug tagging practice

2015-03-27 Thread Dmitry Pyzhov
Guys,

I've tried to sort more than 200 bugs on my team. I've tried several
approaches to this issue, and here is the solution.

First of all, assigning bugs to teams is great. Totally. Awesome. Let's
keep using it.

Second. I have my own one-more-launchpad-parser:
https://github.com/dmi-try/launchpad-report
I have no time to add multithreading to it, so it takes more than 5 hours
to do its job, but it is 100% suitable for me and works just great. It takes
every single fuel and mos bug, checks every single task for this bug,
analyses it and gives me a CSV report. It notices every single missed triage
and fix action on every single milestone, so I even know that we have some
unfinished backports on 4.x branches. It shows every tag, every bug creation
date and bug update date (in the dev version). I'm looking forward to seeing
these functions in our web tool, because it is bad to have several tools for
one task.
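
If anyone wants to hack on something similar, the core of such a report is
small with launchpadlib (a rough sketch assuming anonymous read-only access;
the tag and status values are just examples):

from launchpadlib.launchpad import Launchpad

# Anonymous access is enough for reporting.
lp = Launchpad.login_anonymously('bug-report', 'production')
fuel = lp.projects['fuel']

# Example: all open bugs carrying one of the new tags.
open_statuses = ['New', 'Confirmed', 'Triaged', 'In Progress']
for task in fuel.searchTasks(tags=['module-ostf'], status=open_statuses):
    print('%s %s %s' % (task.bug.id, task.status, task.bug.title))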

Third. Our 'nailgun' and 'ui' tags are useless. Almost every bug can be
tied to some component or to some feature. So I've introduced a lot of
feature-* and module-* tags for my team and we will evaluate them. You can
find all new tags later in this email.

Fourth. We do have low-hanging-fruit
https://bugs.launchpad.net/fuel/+bugs?field.tag=low-hanging-fruit tag and
it is great. I've also added tech-debt
https://bugs.launchpad.net/fuel/+bugs?field.tag=tech-debt tag in order to
group bugs that are not related to the user experience or functionality.
And I've added feature
https://bugs.launchpad.net/fuel/+bugs?field.tag=feature tag for
complicated requests that need to be properly designed. Some of them are
not even close to being real bugs, but almost every request speaks to a
user's pain, so it is rude to close them. That's why we have:

Fifth. 'next https://launchpad.net/fuel/+milestone/next' milestone. Bugs
in this milestone cannot be fixed with our bugfixing process. We do need
proper prioritization for them in our backlog.

Our feature and module tags, with the number of bugs per tag:

 feature-advanced-networking 2
 feature-bonding 3
 feature-client 1
 feature-deadlocks 1
 feature-demo-site 2
 feature-hardware-change 5
 feature-image-based 13
 feature-logging 4
 feature-mongo 2
 feature-multi-l2 3
 feature-native-provisioning 6
 feature-plugins 5
 feature-progress-bar 2
 feature-redeployment 4
 feature-remote-repos 2
 feature-reset-env 5
 feature-security 3
 feature-simple-mode 1
 feature-stats 9
 feature-stop-deployment 3
 feature-upgrade 8
 feature-validation 9
 module-amqp 1
 module-build 2
 module-client 13
 module-fuelmenu 1
 module-master-node-installation 2
 module-nailgun 1
 module-nailgun-agent 1
 module-netcheck 11
 module-networks 8
 module-ostf 19
 module-serialization 4
 module-shotgun 16
 module-tasks 13
 module-volumes 8

I'm going to add these tags to our triaging process and assign an owner for
each tag.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to handle vendor-specific API microversions?

2015-03-27 Thread Dan Smith
 To quote John from an earlier email in this thread:
 
 It's worth noting, we do have the experimental flag:
 
 The first header specifies the version number of the API which was
 executed. Experimental is only returned if the operator has made a
 modification to the API behaviour that is non standard. This is only
 intended to be a transitional mechanism while some functionality used
 by cloud operators is upstreamed and it will be removed within a small
 number of releases.
 
 
 So if you have an extension that gets accepted upstream you can use the
 experimental flag until you migrate to the upstream version of the
 extension.

Yes, but please note the last sentence in the quoted bit. This is to
help people clean their dirty laundry. Going forward, you shouldn't
expect to deliver features to your customers via this path.

 That is *not* what I would call interoperability, this is exactly what
 we do not want.

+1.

--Dan



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to handle vendor-specific API microversions?

2015-03-27 Thread Steve Gordon
- Original Message -
 From: Chris Friesen chris.frie...@windriver.com
 To: openstack-dev@lists.openstack.org
 

 Haven't seen any responses to this.
 
 As I see it, nova is really pushing for interoperability, but what is a
 vendor
 supposed to do when they have customers asking for extensions to the existing
 behaviour, and they want it in a month rather than the 6-9 months it might
 take
 to push upstream?  (Assuming it's something that upstream is even interested
 in.)
 
 I think it would be better to have an explicit method of declaring/versioning
 vendor-specific extensions (even if it's not used at all by the core Nova
 API)
 than to have each vendor winging it on their own.

In this scenario each vendor is still really winging it, as it removes the 
impetus for them to bring the relevant use cases and resulting requirements to 
the community and ultimately design/deliver an interoperable resolution, 
instead encouraging the continued addition of proprietary extensions. Arguably 
the delays seen on some features are in fact exacerbated by this kind of 
behaviour: if certain vendors or their users are not participating in 
advocating the use case, then it's not clear to the rest of the community why 
it should be a priority.

Now, all of that said, I agree there is a pitfall here that will potentially 
impact vendors and some operators negatively [1]: on its face, this makes it 
more challenging to take a feature that has landed on master and backport it 
to an earlier release. These feature backports of course aren't in scope for 
the stable branches [2], but this is one reason frequently cited as to why 
some operators prefer to roll their own packaging, and is also something the 
distros do from time to time (or, at least in the interests of full 
disclosure, I can think of some instances where we have). I would note that I 
am not advocating a change in the policy here, just outlining something I have 
been thinking about lately.

-Steve

[1] https://etherpad.openstack.org/p/PHL-ops-packaging
[2] https://wiki.openstack.org/wiki/StableBranch#Appropriate_Fixes

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to handle vendor-specific API microversions?

2015-03-27 Thread Chris Friesen

On 03/27/2015 01:40 PM, Steve Gordon wrote:

- Original Message -

From: Chris Friesen chris.frie...@windriver.com



So for the case where a customer really wants some functionality, and
wants it *soon* rather than waiting for it to get merged upstream, what is
the recommended implementation path for a vendor?


Well, before all else the key is to at least propose it in the community and
see what the appetite for it is. I think part of the problem here is that
we're still discussing this mostly in the abstract: although you provided
some high-level examples in response to Sean, the only link was to a review
that merged the same day it was proposed (albeit in 2012). I'm interested in
whether there is a specific proposal you can link to that you put forward in
the past and that wasn't accepted or was held up, or whether you are working
from a preset assumption here?


Whoops...I had meant to link to https://review.openstack.org/163060 and 
managed to miss the last character.  My bad.  The API change I was talking about 
has now been split out to https://review.openstack.org/168418.


I haven't proposed any features (with spec/blueprint) for Kilo or earlier.  I'm 
planning on proposing some for the L release.  (Some are already in for review, 
though I realize they're not going to get attention until Kilo is out.)


I may be making invalid assumptions about how long it takes to get things done, 
but if so, it's coloured by past experience.


Some examples:

I proposed a one-line trivial change in April of last year and it took almost 2 
months before anyone even looked at it.


I reported https://bugs.launchpad.net/nova/+bug/1213224 in 2013 and it hasn't 
been fixed.


I opened https://bugs.launchpad.net/nova/+bug/1289064 over a year ago, 
proposed a fix (which admittedly had flaws), then handed it off to someone else, 
then it bounced around a few other people and still isn't resolved.


I opened https://bugs.launchpad.net/nova/+bug/1284719 over a year ago and it's 
not yet resolved.


I opened https://bugs.launchpad.net/nova/+bug/1298690 a year ago and it hasn't 
been touched.



Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] swift memory usage in centos7 devstack jobs

2015-03-27 Thread Ian Wienand

On 03/27/2015 08:47 PM, Alan Pevec wrote:

But how come that same recent pyOpenSSL doesn't consume more memory
on Ubuntu?


Because we don't use it in CI; I believe the packaged version is
installed before devstack runs on our Ubuntu CI VMs.  It's probably a
dependency of some base package there, or something we've
pre-installed.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] Floating IP traffic statistics meters

2015-03-27 Thread Megh Bhatt
Hello,
I’d like to request an exemption for the following to go into the Kilo release.

This work is crucial for:
Cloud operators need to be able to bill customers based on floating IP traffic 
statistics. 

Status of the work:
In summary the patch only introduces 4 new meters - 
ip.floating.transmit.packets, ip.floating.transmit.bytes, 
ip.floating.receive.packets, ip.floating.receive.bytes and adds 2 new functions 
to the neutron_client - a) a function to get the list of all floating IPs and 
b) a function to get information about a specific port.
- The patch necessary for this is already submitted for the review - 
https://review.openstack.org/#/c/166491/
- The document impact patch has already been reviewed and is waiting for the 
ceilometer commit to go through - https://review.openstack.org/#/c/166489/

Thanks

Megh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Group-Based-Policy] Service Chain Instance ownership

2015-03-27 Thread Sumit Naiksatam
Hi Ivar, Thanks for bringing this up and my apologies for the late
response (although I noticed that you already provided a fix, so
thankfully you were not blocked ;-)). As discussed during the GBP IRC
meeting, my suggestion would also be to use the first option, and
create the service chain instances as admin. I agree that ideally it
would be nice for the users to at least be able to see the created
service chain instances, but that might require some kind of
resource-level access control. We should deal with it as a separate
enhancement/feature.
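
To make the first option concrete, the chain mapping code would perform the
create under an elevated context, roughly as below (a sketch only; the plugin
object and the create call are illustrative names, not the exact GBP driver
code):

def create_chain_instance(sc_plugin, context, sci_attrs):
    # Always create the service chain instance as admin, so ownership
    # does not depend on which tenant's action triggered the implicit
    # chain. context.elevated() is the standard neutron context API.
    admin_context = context.elevated()
    return sc_plugin.create_servicechain_instance(
        admin_context, {'servicechain_instance': sci_attrs})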

Thanks,
~Sumit.



On Thu, Mar 19, 2015 at 3:13 PM, Ivar Lazzaro ivarlazz...@gmail.com wrote:
 Hello Folks,

 [tl;dr]

 On implicit chains, the Service Chain Instance ownership in GBP is
 inconsistent, depending on the actor triggering the chain. A possible solution
 is to have all implicit SCIs owned by an admin, or by the provider of the
 chain. Any suggestion is welcome.

 [boringpostwithexampleandstuff]

 I've recently filed a bug [0] regarding Service Chain Instance ownership, and
 I would like to get some advice on how to proceed with it.

 Let's take the following final state as an example:

 PTG-A ---PRS-Chain---PTG-B

 PTG A is providing a PRS with a redirect action, which is consumed by PTG-B.
 Reaching this state triggers an implicit SCI creation based on the policy
 action.
 The above state can be reached in multiple ways, some of them are:

 - PTG-A provides PRS-Chain (which is already consumed by PTG-B);
 - PTG-B consumes PRS-Chain (which is already provided by PTG-A);
 - Redirect action is added to PRS-Chain (which is already provided and
 consumed).

 Depending on how that state is reached, in today's implementation the tenant
 owning the SCI may be ultimately different! This is definitely a problem,
 especially when PTG-A and PTG-B are owned by different tenants.
 If having inconsistent behavior on the same state isn't bad enough, another
 issue is that whoever triggers the SCI deletion should also own the SCI or
 will get an exception! And this is definitely not part of the intent.
 In short, we need to decide who has to own the chain instances (and with
 them, the network services themselves). There are two main choices:

 - An Admin owns them all. This will not impact the users' quota, and makes
 it easier to bridge different tenants' networks (when needed/applicable).
 The downside (?) is that the user will never see the SCI and will never be
 able to control the services without admin permissions;

 - The Provider is always the owner. This solution is trickier as far as
 quotas are concerned, especially when the services are created using VMs
 (NFV). Does the provider need to care about that? Why has my VM limit
 suddenly dropped to 0 now that I'm providing this cursed PRS? On the upside,
 the provider can control and modify the service itself if he needs to.

 I personally am a fan of the first option: the user should only care about
 the Intent, and not about these kinds of details. But I would like to have
 some insight from the community, especially from other projects that may
 have had this issue and can *provide* (ahah) a bit of their wisdom :)

 Thanks,
 Ivar.

 [0] https://bugs.launchpad.net/group-based-policy/+bug/1432816

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer][FFE] Floating IP traffic statistics meters

2015-03-27 Thread Megh Bhatt
Hello,
Apologies for the double post, forgot to include FFE in the subject:

I’d like to request an exemption for the following to go into the Kilo release.

This work is crucial for:
Cloud operators need to be able to bill customers based on floating IP traffic 
statistics. 

Why does this need an FFE?
It’s officially a new feature, adding 4 new meters.

Status of the work:
In summary the patch only introduces 4 new meters - 
ip.floating.transmit.packets, ip.floating.transmit.bytes, 
ip.floating.receive.packets, ip.floating.receive.bytes and adds 2 new functions 
to the neutron_client - a) a function to get the list of all floating IPs and 
b) a function to get information about a specific port.
- The patch necessary for this is already submitted for the review - 
https://review.openstack.org/#/c/166491/
- The document impact patch has already been reviewed and is waiting for the 
ceilometer commit to go through - https://review.openstack.org/#/c/166489/
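
Once this lands, billing code could consume the new meters via
python-ceilometerclient along these lines (a sketch; the credentials are
placeholders and the query is illustrative):

from ceilometerclient import client

# Placeholder credentials.
ceilo = client.get_client(
    '2',
    os_username='admin', os_password='<password>',
    os_tenant_name='admin',
    os_auth_url='http://127.0.0.1:5000/v2.0')

# Hourly transmit-byte statistics for one floating IP, suitable as
# billing input; resource_id is the floating IP's UUID.
stats = ceilo.statistics.list(
    meter_name='ip.floating.transmit.bytes',
    q=[{'field': 'resource_id', 'op': 'eq',
        'value': '<floating-ip-uuid>'}],
    period=3600)
for s in stats:
    print('%s %s' % (s.period_start, s.max))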

Thanks

Megh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Magnum is now in the openstack git namespace

2015-03-27 Thread Steven Dake (stdake)
For those folks that weren’t aware of the scheduled outage today in gerrit to 
handle the renames, the magnum repository has moved to the openstack namespace 
\o/ :)

I pull from

git pull http://github.com/openstack/magnum

But you can also pull from the openstack git server

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit maintenance concluded

2015-03-27 Thread Jeremy Stanley
On 2015-03-28 02:30:58 +0000 (+0000), Everett Toews wrote:
 How is a repo determined to be unmaintained/abandoned?

In this particular case its authors requested the change. See
https://review.openstack.org/167387 for details.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gerrit maintenance concluded

2015-03-27 Thread Jeremy Stanley
Our maintenance has concluded successfully without incident and the
accompanying Gerrit outage was roughly 30 minutes as predicted.

We moved 5 repositories to new Git namespaces:

stackforge/bindep
-> openstack-infra/bindep
stackforge/gnocchi
-> openstack/gnocchi
stackforge/magnum
-> openstack/magnum
stackforge/os-client-config
-> openstack/os-client-config
stackforge/python-magnumclient
-> openstack/python-magnumclient

We renamed 2 repositories:

stackforge/fuel-tasks-validator
-> stackforge/fuel-tasklib
stackforge/xstatic-angular-irdragndrop
-> stackforge/xstatic-angular-lrdragndrop

We retired 6 unmaintained/abandoned repositories:

stackforge/cookbook-monasca-agent
-> stackforge-attic/cookbook-monasca-agent
stackforge/cookbook-monasca-api
-> stackforge-attic/cookbook-monasca-api
stackforge/cookbook-monasca-notification
-> stackforge-attic/cookbook-monasca-notification
stackforge/cookbook-monasca-persister
-> stackforge-attic/cookbook-monasca-persister
stackforge/cookbook-monasca-schema
-> stackforge-attic/cookbook-monasca-schema
stackforge/cookbook-monasca-thresh
-> stackforge-attic/cookbook-monasca-thresh

I've uploaded these .gitreview updates and request the respective
core reviewers approve them as soon as possible to make things
easier on your contributors:

bindep - https://review.openstack.org/168525
fuel-tasklib - https://review.openstack.org/168526
gnocchi - https://review.openstack.org/168527
magnum - https://review.openstack.org/168528
os-client-config - https://review.openstack.org/168529
python-magnumclient - https://review.openstack.org/168530
xstatic-angular-lrdragndrop - https://review.openstack.org/168531

Developers will either need to re-clone a new copy of the
repository, or manually update their remotes with something like:

git remote set-url origin https://git.openstack.org/$ORG/$PROJECT
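
If you have several of these checked out, a short script over the mapping
above saves some typing (a sketch in Python; the source directory is a
placeholder):

import os
import subprocess

# Old-name -> new-name mapping from the list above.
RENAMES = {
    'stackforge/bindep': 'openstack-infra/bindep',
    'stackforge/gnocchi': 'openstack/gnocchi',
    'stackforge/magnum': 'openstack/magnum',
    'stackforge/os-client-config': 'openstack/os-client-config',
    'stackforge/python-magnumclient': 'openstack/python-magnumclient',
}

SRC = os.path.expanduser('~/src')  # placeholder checkout directory
for old, new in RENAMES.items():
    repo = os.path.join(SRC, os.path.basename(old))
    if os.path.isdir(os.path.join(repo, '.git')):
        subprocess.check_call(
            ['git', 'remote', 'set-url', 'origin',
             'https://git.openstack.org/' + new], cwd=repo)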

For users of Gertty, James Blair has provided the following example
recipe to rename a repository in its database...

sqlite3 ~/.gertty.db "update project
  set name='openstack-infra/bindep'
  where name='stackforge/bindep'"
sqlite3 ~/.gertty.db "update change
  set id = replace(
    id, 'stackforge%2Fbindep',
    'openstack-infra%2Fbindep')
  where id like 'stackforge%%2Fbindep%'"

Make sure to rename any associated on-disk repository directories in
gertty's git-root as well.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Steve Wormley
So, I figured I'd weigh in on this as an employee of a nova-network-using
company.

Nova-network allowed us to do a couple things simply.

1. Attach openstack networks to our existing VLANs using our existing
firewall/gateway and allow easy access to hardware such as database servers
and storage on the same VLAN.
2. Floating IPs managed at each compute node(multi-host) and via the
standard nova API calls.
3. Access to our instances via their private IP addresses from inside the
company (see 1).

Our forklift replacement to Neutron (as we know we can't 'migrate') is in
the following state.
Item 2 meant we couldn't use pure provider VLAN networks, so we had to wait
for DVR VLAN support to work.

Now that that works, I had to go in and convince Neutron to let me
configure my own gateways as the next hop instead of the central SNAT
gateway's assigned IP. This also required making it so the distributed L3
agents could do ARP for the 'real' gateway on the subnet.

Item 3 works fine until a floating IP is assigned. For nova-network this
was trivial: connection-tracked routing sent packets that reached an
instance via its private IP back out the private VLAN, and everything else
via the assigned public IP.

Neutron, OVS and the various veth connections between them mean I can't
use packet marking between instances and the router namespace. Between that
and a whole bunch of other things, we had to borrow some IP header bits to
track where a packet came in, so that if a response to that connection hit
the DVR router it could be sent back out the private network.

And for the next week I get to try to make this all Python code so we can
actually finally test it without hand-crafted iptables and OVS rules.

For our model, most of the Neutron features are wasted, but since we've been
told that nova-network is going away, we're going to figure out how to make
Neutron work going forward.

-Steve Wormley
Not really speaking for my employer
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Magnum is now in the openstack git namespace

2015-03-27 Thread Adrian Otto
I have updated the Magnum project wiki pages for new contributors with revised 
instructions so they reflect the new repo location.

Adrian

On Mar 27, 2015, at 8:27 PM, Steven Dake (stdake) std...@cisco.com wrote:

For those folks that weren’t aware of the scheduled outage today in gerrit to 
handle the renames, the magnum repository has moved to the openstack namespace 
\o/ :)

I pull from

git pull http://github.com/openstack/magnum

But you can also pull from the openstack git server

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev