Re: [openstack-dev] [Glance] Recall for previous iscsi backend BP

2014-11-19 Thread Flavio Percoco

On 19/11/14 15:21 +0800, henry hly wrote:

In the previous BP [1], support for an iSCSI backend was introduced into
Glance. However, it was abandoned because of the Cinder backend
replacement.

The reason is that all storage backend details should be hidden by
Cinder, not exposed to other projects. However, with more and more
interest in converged storage like Ceph, it's necessary to expose
the storage backend to Glance as well as to Cinder.

An example is that when transferring bits between a volume and an image,
we can utilize advanced storage offload capabilities like linked clone
to do a very fast instant copy. Maybe we need more general Glance
backend location support, not only for iSCSI.



[1] https://blueprints.launchpad.net/glance/+spec/iscsi-backend-store


Hey Henry,

This blueprint has been superseded by one proposing a Cinder store
for Glance. The Cinder store is, unfortunately, in a sorry state.
Short story: it's not fully implemented.

I truly think Glance is not the place where you'd have an iSCSI store;
that's Cinder's field, and the best way to achieve what you want is by
having a fully implemented Cinder store that doesn't rely on Cinder's
API but has access to the volumes.

Unfortunately, this is not possible now and I don't think it'll be
possible until L (or even M?).

FWIW, I think the use case you've mentioned is useful and it's
something we have in our TODO list.

Cheers,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] Dynamic VM Consolidation Agent as part of Nova

2014-11-19 Thread Mehdi Sheikhalishahi
Where do you suggest as the best place for such a service?

On Tue, Nov 18, 2014 at 11:49 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Tue, Nov 18, 2014 at 7:40 AM, Mehdi Sheikhalishahi 
 mehdi.alish...@gmail.com wrote:

 Hi,

 I would like to bring Dynamic VM Consolidation capability into Nova. That
 is I would like to check compute nodes status periodically (let's say every
 15 minutes) and consolidate VMs if there is any opportunity to turn off
 some compute nodes.

 Any hints on how to get into this development process as part of nova?


 While I like the idea of having dynamic VM consolidation capabilities
 somewhere in OpenStack, it doesn't belong in Nova. This service should
 live outside of Nova and just consume Nova's REST APIs. If there is some
 piece of information that this service would need that isn't made available
 via the REST API, we can fix that.



 Thanks,
 Mehdi



Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-19 Thread Matthias Runge
On 19/11/14 05:25, Richard Jones wrote:
 I've just had a long discussion with #infra folk about the
 global-requirements thing, which deviated (quite naturally) into a
 discussion about packaging (and their thoughts were in line with where
 Radomir and I are heading).
 
 In their view, bower components don't need to be in global-requirements:
 
 - there are no other projects that use bower components, so we don't
 need to ensure cross-project compatibility
 - we can vet new versions of bower components as part of standard
 Horizon change review
 
That sounds good to me! Thanks for doing this!

Matthias




Re: [openstack-dev] [Glance] Recall for previous iscsi backend BP

2014-11-19 Thread Duncan Thomas
I think that having a stand-alone (client of Cinder) rich data streaming
service (HTTP put/get with offset support, which can be used for
conventional Glance plus volume upload/download directly), with rich
data-source semantics exposed so that it can be used in an optimal way
by/for Nova, need not wait on the Cinder roadmap to be realised, and is
ultimately the right way to progress this.

Certain features may need to wait for Cinder features (e.g. read-only
multi-attach is not available yet), but the basic framework could be
written right now, I think.

On 19 November 2014 10:00, Flavio Percoco fla...@redhat.com wrote:

 On 19/11/14 15:21 +0800, henry hly wrote:

 In the previous BP [1], support for an iSCSI backend was introduced into
 Glance. However, it was abandoned because of the Cinder backend
 replacement.

 The reason is that all storage backend details should be hidden by
 Cinder, not exposed to other projects. However, with more and more
 interest in converged storage like Ceph, it's necessary to expose
 the storage backend to Glance as well as to Cinder.

 An example is that when transferring bits between a volume and an image,
 we can utilize advanced storage offload capabilities like linked clone
 to do a very fast instant copy. Maybe we need more general Glance
 backend location support, not only for iSCSI.



 [1] https://blueprints.launchpad.net/glance/+spec/iscsi-backend-store


 Hey Henry,

 This blueprint has been superseded by one proposing a Cinder store
 for Glance. The Cinder store is, unfortunately, in a sorry state.
 Short story: it's not fully implemented.

 I truly think Glance is not the place where you'd have an iSCSI store;
 that's Cinder's field, and the best way to achieve what you want is by
 having a fully implemented Cinder store that doesn't rely on Cinder's
 API but has access to the volumes.

 Unfortunately, this is not possible now and I don't think it'll be
 possible until L (or even M?).

 FWIW, I think the use case you've mentioned is useful and it's
 something we have in our TODO list.

 Cheers,
 Flavio

 --
 @flaper87
 Flavio Percoco





-- 
Duncan Thomas


Re: [openstack-dev] [Glance] Recall for previous iscsi backend BP

2014-11-19 Thread henry hly
Hi Flavio,

Thanks for the information about the Cinder store. Yet I have a small
concern about the Cinder backend: suppose Cinder and Glance both use Ceph
as their store, and Cinder can do an instant copy to Glance via a Ceph clone
(maybe not now but some time later), what information would be stored
in Glance? Obviously the volume UUID is not a good choice, because after
the volume is deleted the image can't be referenced any more. The better
choice is for the cloned Ceph object URI to also be stored in the Glance
location, letting both Glance and Cinder see the backend store details.

However, although this really makes sense for a Ceph-like all-in-one store,
I'm not sure whether an iSCSI backend can be used the same way.
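
For reference, these are roughly the location formats involved today (shown
only as an illustration of what a backend-aware location could look like,
not as a proposal):

    cinder://<volume-uuid>                        (current Cinder store location)
    rbd://<fsid>/<pool>/<image-name>/<snapshot>   (direct RBD/Ceph store location)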

On Wed, Nov 19, 2014 at 4:00 PM, Flavio Percoco fla...@redhat.com wrote:
 On 19/11/14 15:21 +0800, henry hly wrote:

 In the previous BP [1], support for an iSCSI backend was introduced into
 Glance. However, it was abandoned because of the Cinder backend
 replacement.

 The reason is that all storage backend details should be hidden by
 Cinder, not exposed to other projects. However, with more and more
 interest in converged storage like Ceph, it's necessary to expose
 the storage backend to Glance as well as to Cinder.

 An example is that when transferring bits between a volume and an image,
 we can utilize advanced storage offload capabilities like linked clone
 to do a very fast instant copy. Maybe we need more general Glance
 backend location support, not only for iSCSI.



 [1] https://blueprints.launchpad.net/glance/+spec/iscsi-backend-store


 Hey Henry,

 This blueprint has been superseded by one proposing a Cinder store
 for Glance. The Cinder store is, unfortunately, in a sorry state.
 Short story: it's not fully implemented.

 I truly think Glance is not the place where you'd have an iSCSI store;
 that's Cinder's field, and the best way to achieve what you want is by
 having a fully implemented Cinder store that doesn't rely on Cinder's
 API but has access to the volumes.

 Unfortunately, this is not possible now and I don't think it'll be
 possible until L (or even M?).

 FWIW, I think the use case you've mentioned is useful and it's
 something we have in our TODO list.

 Cheers,
 Flavio

 --
 @flaper87
 Flavio Percoco



Re: [openstack-dev] [Glance][stable] Review request

2014-11-19 Thread Flavio Percoco

Please abstain from sending review requests to the mailing list

http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html

Thanks,
Flavio


Hi All,

On 18/11/14 17:42 +, Kekane, Abhishek wrote:


Greetings!!!



Can anyone please review this patch [1].

It requires one more +2 to get merged in stable/juno.



We want to use stable/juno in a production environment and this patch will fix
the blocker bug [2] for the restrict-download-image feature.

Please do the needful.



[1] https://review.openstack.org/#/c/133858/

[2] https://bugs.launchpad.net/glance/+bug/1387973





Thank You in advance.



Abhishek Kekane





--
@flaper87
Flavio Percoco




Re: [openstack-dev] Dynamic VM Consolidation Agent as part of Nova

2014-11-19 Thread Stuart Fox
I'd be interested in the opposite: the ability to rebalance the VMs across
the fleet to even out load. Agreed, it should live outside of Nova though.

BR,
Stuart
On Nov 19, 2014 12:15 AM, Mehdi Sheikhalishahi mehdi.alish...@gmail.com
wrote:

 Where do you suggest as the best place for such a service?

 On Tue, Nov 18, 2014 at 11:49 PM, Joe Gordon joe.gord...@gmail.com
 wrote:



 On Tue, Nov 18, 2014 at 7:40 AM, Mehdi Sheikhalishahi 
 mehdi.alish...@gmail.com wrote:

 Hi,

 I would like to bring Dynamic VM Consolidation capability into Nova.
 That is I would like to check compute nodes status periodically (let's say
 every 15 minutes) and consolidate VMs if there is any opportunity to turn
 off some compute nodes.

 Any hints on how to get into this development process as part of nova?


 While I like the idea of having dynamic VM consolidation capabilities
 somewhere in OpenStack, it doesn't belong in Nova. This service should
 live outside of Nova and just consume Nova's REST APIs. If there is some
 piece of information that this service would need that isn't made available
 via the REST API, we can fix that.



 Thanks,
 Mehdi



Re: [openstack-dev] [Glance] Poll for change in weekly meeting time.

2014-11-19 Thread Flavio Percoco

On 18/11/14 18:55 +, Nikhil Komawar wrote:

We've had a few responses to this poll; however, they do not seem to cover the
entire set of developers, including many of the cores and developers who are
going to be actively working on the features this cycle.

Based on the responses received, I'd like to propose a (unified - no
alternating) time of 14:30 UTC on Thursdays (channel(s) TBD). This change will
help the vast majority of developers who prefer the current earlier time slot,
as well as reduce the no-shows in the later one.

If you're not happy with the proposal, please vote and/or reach out
before this week's meeting on Thursday. Please let me know if you have any
concerns.


/me is happy


Thanks,
-Nikhil
━━━
From: Nikhil Komawar
Sent: Friday, October 31, 2014 4:41 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Glance] Poll for change in weekly meeting time.

Hi all,

It was noticed in the past few meetings that participation in the
alternating time-slot of Thursday at 20 UTC (also called the later time slot)
was low. With the growing interest in Glance from developers in eastern
time zones and their involvement in meetings, please treat this email as a
proposal to move all meetings to an earlier time-slot.

Here's a poll [0] to find out which time-slots work best for everyone, as well
as to gauge interest in removing the alternating time-slot aspect of the schedule.

Please be empathetic in your votes: try to suggest all possible options that
would work for you, and note the changes in your timezone due to daylight
saving time ending. Please let me know if you have any more questions.

[0] http://doodle.com/nwc26k8satuyvvmz

Thanks,
-Nikhil






--
@flaper87
Flavio Percoco




[openstack-dev] Tempest basic

2014-11-19 Thread Vineet Menon
Hi,

I'm trying to run a single Tempest test on OpenStack, but things aren't
working as expected.

My pwd is the Tempest root.

This is what I'm trying to do: 'nosetests -v
tempest.scenario.test_minimum_basic.py', but it throws an error.
I tried 'nosetests -v tempest/scenario/test_minimum_basic.py' as well, but
again errors are thrown.

I'm following '
https://docs.google.com/presentation/d/1M3XhAco_0u7NZQn3Gz53z9VOHHrkQBzEs5gt43ZvhOc/edit#slide=id.gcc7522_3_13'
presentation as a guide.


Regards,

Vineet Menon


Re: [openstack-dev] Tempest basic

2014-11-19 Thread Anna Kamyshnikova
Hi!

Try nosetests -sv tempest.scenario.test_minimum_basic

Regards,
Ann


On Wed, Nov 19, 2014 at 12:16 PM, Vineet Menon mvineetme...@gmail.com
wrote:

 Hi,

 I'm trying to run a single Tempest test on OpenStack, but things aren't
 working as expected.

 My pwd is the Tempest root.

 This is what I'm trying to do: 'nosetests -v
 tempest.scenario.test_minimum_basic.py', but it throws an error.
 I tried 'nosetests -v tempest/scenario/test_minimum_basic.py' as well, but
 again errors are thrown.

 I'm following '
 https://docs.google.com/presentation/d/1M3XhAco_0u7NZQn3Gz53z9VOHHrkQBzEs5gt43ZvhOc/edit#slide=id.gcc7522_3_13'
 presentation as a guide.


 Regards,

 Vineet Menon




Re: [openstack-dev] Dynamic VM Consolidation Agent as part of Nova

2014-11-19 Thread Mehdi Sheikhalishahi
If we define a parameter like 'consolidation degree' and consolidate based
on that factor, we can achieve both objectives with a single implementation.
There is an attempt at this by the OpenStack Neat project, but it is not
integrated into OpenStack.
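
As an illustration only, here is a minimal sketch of such an external check
built purely on python-novaclient; the credentials, threshold and period
below are placeholders, not a proposed design:

    import time

    from novaclient import client

    # Placeholder credentials; a real service would read these from its own
    # configuration rather than hard-coding them.
    nova = client.Client('2', 'admin', 'secret', 'admin',
                         auth_url='http://controller:5000/v2.0')

    CONSOLIDATION_DEGREE = 2   # placeholder: "few enough VMs to move away"
    PERIOD = 15 * 60           # check every 15 minutes

    while True:
        # Only public REST API calls are used, so this can live outside Nova.
        for hyp in nova.hypervisors.list():
            if hyp.running_vms == 0:
                print('%s is idle: candidate for power-off'
                      % hyp.hypervisor_hostname)
            elif hyp.running_vms <= CONSOLIDATION_DEGREE:
                print('%s is under-used: candidate for consolidation'
                      % hyp.hypervisor_hostname)
                # live-migrating its remaining VMs (nova.servers.live_migrate)
                # and powering the node off would follow here
        time.sleep(PERIOD)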

On Wed, Nov 19, 2014 at 10:03 AM, Stuart Fox stu...@demonware.net wrote:

  I'd be interested in the opposite: the ability to rebalance the VMs
  across the fleet to even out load. Agreed, it should live outside of Nova though.

 BR,
 Stuart
 On Nov 19, 2014 12:15 AM, Mehdi Sheikhalishahi mehdi.alish...@gmail.com
 wrote:

 Where do you suggest as the best place for such a service?

 On Tue, Nov 18, 2014 at 11:49 PM, Joe Gordon joe.gord...@gmail.com
 wrote:



 On Tue, Nov 18, 2014 at 7:40 AM, Mehdi Sheikhalishahi 
 mehdi.alish...@gmail.com wrote:

 Hi,

 I would like to bring Dynamic VM Consolidation capability into Nova.
 That is I would like to check compute nodes status periodically (let's say
 every 15 minutes) and consolidate VMs if there is any opportunity to turn
 off some compute nodes.

 Any hints on how to get into this development process as part of nova?


 While I like the idea of having dynamic VM consolidation capabilities
 somewhere in OpenStack, it doesn't belong in Nova. This service should
 live outside of Nova and just consume Nova's REST APIs. If there is some
 piece of information that this service would need that isn't made available
 via the REST API, we can fix that.



 Thanks,
 Mehdi



Re: [openstack-dev] [Neutron][L2 Agent][Debt] Bootstrapping an L2 agent debt repayment task force

2014-11-19 Thread Rossella Sblendido
My name is also on the etherpad; I'd like to work on improving RPC and
ovslib. I am willing to take care of the BP; if somebody else is
interested, we can do it together.

cheers,

Rossella


On 11/19/2014 12:01 AM, Kevin Benton wrote:
 My name is already on the etherpad in the RPC section, but I'll
 reiterate here that I'm very interested in optimizing a lot of the
 expensive ongoing communication between the L2 agent and the server on
 the message bus.
 
 On Tue, Nov 18, 2014 at 9:12 AM, Carl Baldwin c...@ecbaldwin.net
 mailto:c...@ecbaldwin.net wrote:
 
 At the recent summit, we held a session about debt repayment in the
 Neutron agents [1].  Some work was identified for the L2 agent.  We
 had a discussion in the Neutron meeting today about bootstrapping that
 work.
 
 The first order of business will be to generate a blueprint
 specification for the work similar, in purpose, to the one that is
 under discussion for the L3 agent [3].  I personally am at or over
 capacity for BP writing this cycle.  We need a volunteer to take this
 on coordinating with others who have been identified on the etherpad
 for L2 agent work (you know who you are) and other volunteers who have
 yet to be identified.
 
 This task force will use the weekly Neutron meeting, the ML, and IRC
 to coordinate efforts.  But first, we need to bootstrap the task
 force.  If you plan to participate, please reply to this email and
 describe how you will contribute, especially if you are willing to be
 the lead author of a BP.  I will reconcile this with the etherpad to
 see where gaps have been left.
 
 I am planning to contribute as a core reviewer of blueprints and code
 submissions only.
 
 Carl
 
 [1] https://etherpad.openstack.org/p/kilo-neutron-agents-technical-debt
 [2]
 
 http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-11-18-14.02.html
 [3] https://review.openstack.org/#/c/131535/
 
 
 
 
 
 -- 
 Kevin Benton
 
 


[openstack-dev] Who maintains the iCal meeting data?

2014-11-19 Thread Tony Breeds
Hi All,
I was going to make an iCal feed for all the openstack meetings.

I discovered I was waaay behind the times and one exists and is linked from

https://wiki.openstack.org/wiki/Meetings

With some of the post Paris changes it's a little out of date.  I'm really
happy to help maintain it if that's a thing someone can do ;P

So whom do I poke?
Should that information be slightly more visible?

Yours Tony.




Re: [openstack-dev] [Fuel] Waiting for Haproxy backends

2014-11-19 Thread Vladimir Kuklin
Hi everyone

Actually, we changed a lot in 5.1 HA and there are some changes in 6.0
as well. Right now we are using an asymmetric cluster and location
constraints to control resources. We started using XML diffs as the most
reliable and supported approach, as it does not depend on the pcs/crmsh
implementation. Regarding corosync 2.x, we are looking forward to moving to
it, but it did not fit our 6.0 release timeframe. We will surely move to
the pacemaker plugin and corosync 2.x in the 6.1 release, as they should fix
lots of our problems.

On Wed, Nov 19, 2014 at 3:47 AM, Andrew Woodward xar...@gmail.com wrote:

 On Wed, Nov 12, 2014 at 4:10 AM, Aleksandr Didenko
 adide...@mirantis.com wrote:
  HI,
 
  in order to make sure some critical Haproxy backends are running (like
 mysql
  or keystone) before proceeding with deployment, we use execs like [1] or
  [2].

 We used to do the API waiting in the puppet resource providers
 consuming them [4] which tends to be very effective (unless it never
 comes up) as it doesn't care what is in-between the resource and the
 API it's attempting to use. This way works for everything except mysql
 because other services depend on it.

 
  We're currently working on a minor improvements of those execs, but
 there is

 really, we should not use these execs, they are bad and we need to be
 doing proper response validation like in [4] instead of just
 using the simple (and often wrong) haproxy health check

  another approach - we can replace those execs with puppet resource
 providers
  and move all the iterations/loops/timeouts logic there. Also we should
 fail

 yes, this will become the most reliable method. I'm partially still on
 the fence about which provider we are modifying. In the service provider,
 we could identify the check method (i.e. HTTP 200 from a specific URL)
 and the start check, and the provider would block until the check
 passes or a timeout is reached. (I'm still on the fence about whether to
 do this for the haproxy provider or for each of the OpenStack API
 services. I'm leaning towards each API, since this will allow the check
 to work regardless of haproxy, and should let it also work with
 refresh.)
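
(For illustration only, the kind of blocking readiness check being described,
sketched here in plain Python rather than as the actual Puppet provider; the
URL and timeout values are placeholders:)

    import time
    import urllib2

    def wait_for_api(url, timeout=600, interval=5):
        """Block until `url` answers HTTP 200, or fail after `timeout` seconds."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                if urllib2.urlopen(url, timeout=interval).getcode() == 200:
                    return
            except Exception:
                pass  # not up yet, keep retrying
            time.sleep(interval)
        raise RuntimeError('%s did not come up within %d seconds' % (url, timeout))

    # e.g. wait_for_api('http://127.0.0.1:5000/v2.0/')  # keystone; placeholder URL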

  catalog compilation/run if those resource providers are not able to
 ensure
  needed Haproxy backends are up and running. Because there is no point to
  proceed with deployment if keystone is not running, for example.
 
  If no one objects, I can start implementing this for Fuel-6.1. We can
  address it as a part of pacemaker improvements BP [3] or create a new BP.

 unless we are fixing the problem with pacemaker it should have its own
 spec, possibly w/o a blueprint

 
  [1]
 
 https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/osnailyfacter/manifests/cluster_ha.pp#L551-L572
  [2]
 
 https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/openstack/manifests/ha/mysqld.pp#L28-L33
  [3] https://blueprints.launchpad.net/fuel/+spec/pacemaker-improvements
 
  Regards,
  Aleksandr Didenko
 
 
 

 [4]
 https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/neutron/lib/puppet/provider/neutron.rb#L83-116

 --
 Andrew
 Mirantis
 Ceph community





-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com
www.mirantis.ru
vkuk...@mirantis.com


Re: [openstack-dev] [Neutron][L2 Agent][Debt] Bootstrapping an L2 agent debt repayment task force

2014-11-19 Thread marios
On 19/11/14 11:34, Rossella Sblendido wrote:
 My name is also on the etherpad, I'd like to work on improving RPC and
 ovslib. I am willing to take care of the BP, if somebody else is
 interested we can do it together.

sure I can help - (let's sync up on irc to split out the sections and
get an etherpad going which we can then just paste into a spec - we can
just use the l3 one as a template?)

thanks, marios

 
 cheers,
 
 Rossella
 
 
 On 11/19/2014 12:01 AM, Kevin Benton wrote:
 My name is already on the etherpad in the RPC section, but I'll
 reiterate here that I'm very interested in optimizing a lot of the
 expensive ongoing communication between the L2 agent and the server on
 the message bus.

 On Tue, Nov 18, 2014 at 9:12 AM, Carl Baldwin c...@ecbaldwin.net
 mailto:c...@ecbaldwin.net wrote:

 At the recent summit, we held a session about debt repayment in the
 Neutron agents [1].  Some work was identified for the L2 agent.  We
 had a discussion in the Neutron meeting today about bootstrapping that
 work.

 The first order of business will be to generate a blueprint
 specification for the work similar, in purpose, to the one that is
 under discussion for the L3 agent [3].  I personally am at or over
 capacity for BP writing this cycle.  We need a volunteer to take this
 on coordinating with others who have been identified on the etherpad
 for L2 agent work (you know who you are) and other volunteers who have
 yet to be identified.

 This task force will use the weekly Neutron meeting, the ML, and IRC
 to coordinate efforts.  But first, we need to bootstrap the task
 force.  If you plan to participate, please reply to this email and
 describe how you will contribute, especially if you are willing to be
 the lead author of a BP.  I will reconcile this with the etherpad to
 see where gaps have been left.

 I am planning to contribute as a core reviewer of blueprints and code
 submissions only.

 Carl

 [1] https://etherpad.openstack.org/p/kilo-neutron-agents-technical-debt
 [2]
 
 http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-11-18-14.02.html
 [3] https://review.openstack.org/#/c/131535/





 -- 
 Kevin Benton




Re: [openstack-dev] [nova] Do we need to add xml support for new API extensions on v2 API ?

2014-11-19 Thread Christopher Yeoh
On Wed, 19 Nov 2014 13:11:40 +0800
Chen CH Ji jiche...@cn.ibm.com wrote:

 
 Hi
  I saw that removing v2 XML support was proposed
  several days ago.

  For new API extensions, do we need to add XML now and
  remove it later, or should we only support JSON? Thanks

I don't think any new additions to the API should include XML support.

Regards,

Chris

 
 Best Regards!
 
 Kevin (Chen) Ji 纪 晨
 
 Engineer, zVM Development, CSTL
 Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
 Phone: +86-10-82454158
 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
 District, Beijing 100193, PRC



[openstack-dev] [api] API-WG meeting (note time change this week)

2014-11-19 Thread Christopher Yeoh
Hi,

We have moved to alternating times each week for the API WG meeting so
people from other timezones can attend. Since this is an odd week,
the meeting will be on Thursday at 1600 UTC. Details here:

https://wiki.openstack.org/wiki/Meetings/API-WG

The Google iCal feed hasn't been updated yet, but that's not surprising
since the wiki page was only updated a few hours ago.

Regards,

Chris




[openstack-dev] Installing multinode(control node)

2014-11-19 Thread Chhavi Kant/TVM/TCS
Hi,

I want to set up a multi-node OpenStack installation and I need some guidance on
which services I need to enable when installing the control node.
Attached is the localrc.

-- 
Thanks & Regards

Chhavi Kant



Re: [openstack-dev] [Ironic] Cleaning up spec review queue.

2014-11-19 Thread Dmitry Tantsur

On 11/18/2014 06:13 PM, Chris K wrote:

Hi all,

In an effort to keep the Ironic specs review queue as up to date as
possible, I have identified several specs that were proposed in the Juno
cycle and have not been updated to reflect the changes to the current
Kilo cycle.

I would like to set a deadline to either update them to reflect the Kilo
cycle or abandon them if they are no longer relevant.
If there are no objections I will abandon any specs on the list below
that have not been updated to reflect the Kilo cycle after the end of
the next Ironic meeting (Nov. 24th 2014).

Below is the list of specs I have identified that would be affected:
https://review.openstack.org/#/c/107344 - *Generic Hardware Discovery Bits*

Killed it with fire :D


https://review.openstack.org/#/c/102557 - *Driver for NetApp storage arrays*
https://review.openstack.org/#/c/108324 - *DRAC hardware discovery*

Imre, are you going to work on it?


https://review.openstack.org/#/c/103065 - *Design spec for iLO driver
for firmware settings*
https://review.openstack.org/#/c/108646 - *Add HTTP GET support for
vendor_passthru API*

This one is replaced by Lucas' work.


https://review.openstack.org/#/c/94923 - *Make the REST API fully
asynchronous*
https://review.openstack.org/#/c/103760 - *iLO Management Driver for
firmware update*
https://review.openstack.org/#/c/110217 - *Cisco UCS Driver*
https://review.openstack.org/#/c/96538 - *Add console log support*
https://review.openstack.org/#/c/100729 - *Add metric reporting spec.*
https://review.openstack.org/#/c/101122 - *Firmware setting design spec.*
https://review.openstack.org/#/c/96545 - *Reset service processor*

This list may also be found on this etherpad:
https://etherpad.openstack.org/p/ironic-juno-specs-to-be-removed

If you believe one of the above specs should not be abandoned, please
update the spec to reflect the current Kilo cycle, or let us know that an
update is forthcoming.

Please feel free to reply to this email, I will also bring this topic up
at the next meeting to ensure we have as much visibility as possible
before abandoning the old specs.

Thank you,
Chris Krelle
IRC: NobodyCam




Re: [openstack-dev] [Ironic] Cleaning up spec review queue.

2014-11-19 Thread Imre Farkas

On 11/19/2014 12:07 PM, Dmitry Tantsur wrote:

On 11/18/2014 06:13 PM, Chris K wrote:

Hi all,

In an effort to keep the Ironic specs review queue as up to date as
possible, I have identified several specs that were proposed in the Juno
cycle and have not been updated to reflect the changes to the current
Kilo cycle.

I would like to set a deadline to either update them to reflect the Kilo
cycle or abandon them if they are no longer relevant.
If there are no objections I will abandon any specs on the list below
that have not been updated to reflect the Kilo cycle after the end of
the next Ironic meeting (Nov. 24th 2014).

Below is the list of specs I have identified that would be affected:
https://review.openstack.org/#/c/107344 - *Generic Hardware Discovery
Bits*

Killed it with fire :D


https://review.openstack.org/#/c/102557 - *Driver for NetApp storage
arrays*
https://review.openstack.org/#/c/108324 - *DRAC hardware discovery*

Imre, are you going to work on it?


I think it's replaced by Lucas' proposal: 
https://review.openstack.org/#/c/125920

I will discuss it with him and abandon one of them.

Imre



Re: [openstack-dev] [barbican] Secret store API validation

2014-11-19 Thread Kelsey, Timothy John


On 18/11/2014 21:07, Nathan Reller rellerrel...@yahoo.com wrote:

 It seems we need to add some validation to the process

Yes, we are planning to add some validation checks in Kilo. I would
submit a bug report for this.

The big part of the issue is that we need to be clearer about the
expected input types to the API as well as the SecretStores. This was
a big topic of discussion at the summit. I hope to have a spec out
soon that will address this issue.

-Nate



OK, I'll file a bug and look forward to reviewing your spec. Thanks Nate.







Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-19 Thread Paul Michali (pcm)
I like the definition.


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




On Nov 18, 2014, at 10:10 PM, Sumit Naiksatam sumitnaiksa...@gmail.com wrote:

 On Tue, Nov 18, 2014 at 4:44 PM, Mohammad Hanif mha...@brocade.com wrote:
 I agree with Paul as advanced services go beyond just L4-L7.  Today, VPNaaS
 deals with L3 connectivity but belongs in advanced services.  Where does
 Edge-VPN work belong?  We need a broader definition for advanced services
 area.
 
 
 So the following definition is being proposed to capture the broader
 context and complement Neutron's current mission statement:
 
 To implement services and associated libraries that provide
 abstractions for advanced network functions beyond basic L2/L3
 connectivity and forwarding.
 
 What do people think?
 
 Thanks,
 —Hanif.
 
 From: Paul Michali (pcm) p...@cisco.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Tuesday, November 18, 2014 at 4:08 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into
 separate repositories
 
 On Nov 18, 2014, at 6:36 PM, Armando M. arma...@gmail.com wrote:
 
 Mark, Kyle,
 
 What is the strategy for tracking the progress and all the details about
 this initiative? Blueprint spec, wiki page, or something else?
 
 One thing I personally found useful about the spec approach adopted in [1],
 was that we could quickly and effectively incorporate community feedback;
 having said that I am not sure that the same approach makes sense here,
 hence the question.
 
 Also, what happens for experimental efforts that are neither L2-3 nor L4-7
 (e.g. TaaS or NFV related ones?), but they may still benefit from this
 decomposition (as it promotes better separation of responsibilities)? Where
 would they live? I am not sure we made any particular progress of the
 incubator project idea that was floated a while back.
 
 
 Would it make sense to define the advanced services repo as being for
 services that are beyond basic connectivity and routing? For example, VPN
 can be L2 and L3. Seems like restricting to L4-L7 may cause some confusion
 as to what’s in and what’s out.
 
 
 Regards,
 
 PCM (Paul Michali)
 
 MAIL …..…. p...@cisco.com
 IRC ……..… pc_m (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
 
 
 
 Cheers,
 Armando
 
 [1] https://review.openstack.org/#/c/134680/
 
 On 18 November 2014 15:32, Doug Wiegley do...@a10networks.com wrote:
 
 Hi,
 
 so the specs repository would continue to be shared during the Kilo
 cycle.
 
 One of the reasons to split is that these two teams have different
 priorities and velocities.  Wouldn’t that be easier to track/manage as
 separate launchpad projects and specs repos, irrespective of who is
 approving them?
 
 Thanks,
 doug
 
 
 
 On Nov 18, 2014, at 10:31 PM, Mark McClain m...@mcclain.xyz wrote:
 
 All-
 
 Over the last several months, the members of the Networking Program have
 been discussing ways to improve the management of our program.  When the
 Quantum project was initially launched, we envisioned a combined service
 that included all things network related.  This vision served us well in the
 early days as the team mostly focused on building out layers 2 and 3;
 however, we’ve run into growth challenges as the project started building
 out layers 4 through 7.  Initially, we thought that development would float
 across all layers of the networking stack, but the reality is that the
 development concentrates around either layer 2 and 3 or layers 4 through 7.
 In the last few cycles, we’ve also discovered that these concentrations have
 different velocities and a single core team forces one to match the other to
 the detriment of the one forced to slow down.
 
 Going forward we want to divide the Neutron repository into two separate
 repositories lead by a common Networking PTL.  The current mission of the
 program will remain unchanged [1].  The split would be as follows:
 
 Neutron (Layer 2 and 3)
 - Provides REST service and technology agnostic abstractions for layer 2
 and layer 3 services.
 
 Neutron Advanced Services Library (Layers 4 through 7)
 - A python library which is co-released with Neutron
 - The advance service library provides controllers that can be configured
 to manage the abstractions for layer 4 through 7 services.
 
 Mechanics of the split:
 - Both repositories are members of the same program, so the specs
 repository would continue to be shared during the Kilo cycle.  The PTL and
 the drivers team will retain approval responsibilities they now share.
 - The split would occur around Kilo-1 (subject to coordination of the
 Infra and Networking 

[openstack-dev] [tempest] request to review bug 1321617

2014-11-19 Thread Harshada Kakad
Hi All,

Could someone please review the bug
https://bugs.launchpad.net/tempest/+bug/1321617


The test cases related to quota usage (test_show_quota_usage,
test_cinder_quota_defaults and test_cinder_quota_show)
use the tenant name; ideally they should use the tenant id, as quotas
require the tenant UUID and not the tenant name.

Cinder quota-show ideally requires a tenant_id to show quotas.
There is a bug in Cinder (bug 1307475) where, if we give a
non-existent tenant_id, Cinder still shows the quota and
returns 200 (OK). Hence, these test cases should use
tenant_id and not tenant_name.


Here is the link for review : https://review.openstack.org/#/c/95087/


-- 
*Regards,*
*Harshada Kakad*
**
*Sr. Software Engineer*
*C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune – 411013,
India*
*Mobile-9689187388*
*Email-Id : harshada.ka...@izeltech.com harshada.ka...@izeltech.com*
*website : www.izeltech.com http://www.izeltech.com*



Re: [openstack-dev] Who maintains the iCal meeting data?

2014-11-19 Thread Thierry Carrez
Tony Breeds wrote:
 I was going to make an iCal feed for all the openstack meetings.
 
 I discovered I was waaay behind the times and one exists and is linked from
 
 https://wiki.openstack.org/wiki/Meetings
 
 With some of the post Paris changes it's a little out of date.  I'm really
 happy to help maintain it if that's a thing someone can do ;P
 
 So whom do I poke?
 Should that information be slightly more visible?

The iCal is currently maintained by Anne (annegentle) and myself. In
parallel, a small group is building a gerrit-powered agenda so that we
can describe meetings in YAML and check for conflicts automatically, and
build the ics automatically rather than manually.

That should still take a few weeks before we can migrate to that, though,
so in the meantime if you volunteer to keep the .ics up to date with
changes to the wiki page, that would be of great help! It's maintained
as a google calendar, I can add you to the ACL there if you send me your
google email.

Regards,

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] Dynamic Policy

2014-11-19 Thread Henry Nash
Hi Adam,

So, a comprehensive write-up...although I'm not sure we have made the case for
why we need a complete rewrite of how policy is managed.  We seem to have
leapt into a solution without looking at other possible solutions to the
problems we are trying to solve.  Here's a start at an alternative approach:

Problem 1: The current services don't use the centralised policy store/fetch of
Keystone, meaning that a) policy file management is hard, and b) we can't
support the policy-per-endpoint style of working.
Solution: Let's get the other services using it!  No code changes required in
Keystone.  The fact that we haven't succeeded before just means we haven't
tried hard enough.

Problem 2: Different domains want to be able to create their own roles which 
are more meaningful to their users...but our roles are global and are 
directly linked to the rules in the policy file - something only a cloud 
operator is going to want to own.
Solution: Have some kind of domain-scoped role-group (maybe just called 
domain-roles?) that a domain owner can define, that maps to a set of 
underlying roles that a policy file understands (see: 
https://review.openstack.org/#/c/133855/). [As has been pointed out, what we 
are really doing with this is finally doing real RBAC, where what we call roles 
today are really capabilities and domain-roles are really just roles].  As this 
evolves, cloud providers could slowly migrate to the position where each 
service API is effectively a role (i.e. a capability), and at the domain level
there exists an abstraction that maps what makes sense for the users of that
domain onto the underlying capabilities. No code changes...this just uses policy
files as they are today (plus domain-groups) - and tokens as they are too. And I
think that level of functionality would satisfy a lot of people. Eventually (as
pointed out by samuelmz) the policy file could even simply become the
definition of the service capabilities (and whether each capability is open,
closed or is a role)...maybe just registered and stored in the service
entity in the Keystone DB (allowing dynamic service registration). My point being
that we really didn't require much code change (nor really any conceptual
changes) to get to this end point...and certainly no rewriting of policy/token
formats etc.  [In reality, this last point would cause problems with token size
(since a broad admin capability would need a lot of capabilities), so some kind
of collection of capabilities would be required.]

Problem 3: A cloud operator wants to be able to enable resellers to white label 
her services (who in turn may resell to others) - so needs some kind of 
inheritance model so that service level agreements can be supported by policy 
(e.g. let the reseller give the support expert from the cloud provider
access to their projects).
Solution: We already have hierarchical inheritance in the works...so that we 
would allow a reseller to assign roles to a user/group from the parent onto 
their own domain/project. Further, domain-roles are just another thing that can 
(optionally) be inherited and used in this fashion.

My point about all the above is that, while what you have laid out is a
great set of steps, I don't think we have conceptual agreement as to whether
that path is the only way we could go to solve our problems.

Henry
On 18 Nov 2014, at 23:40, Adam Young ayo...@redhat.com wrote:

 There is a lot of discussion about policy.  I've attempted to pull the 
 majority of the work into a single document that explains the process in a 
 step-by-step manner:
 
 
 http://adam.younglogic.com/2014/11/dynamic-policy-in-keystone/
 
 Its really long, so I won't bother reposting the whole article here.  
 Instead, I will post the links to the topic on Gerrit.
 
 https://review.openstack.org/#/q/topic:dynamic-policy,n,z
 
 
 There is one additional review worth noting:
 
 https://review.openstack.org/#/c/133855/
 
 Which is for private groups of roles  specific to a domain.  This is 
 related, but not part of the critical path for the things I wrote above.
 


[openstack-dev] [glance] Parallels loopback disk format support

2014-11-19 Thread Maxim Nestratov

Greetings,

In the scope of these changes [1], I would like to add a new image format
to Glance. For this purpose a blueprint [2] was created, and I
would really appreciate it if someone from the Glance team could review this
proposal.


[1] https://review.openstack.org/#/c/111335/
[2] https://blueprints.launchpad.net/glance/+spec/pcs-support

Best,

Maxim Nestratov,
Lead Software Developer,
Parallels




Re: [openstack-dev] Quota management and enforcement across projects

2014-11-19 Thread Sylvain Bauza


Le 18/11/2014 20:05, Doug Hellmann a écrit :

On Nov 17, 2014, at 7:18 PM, Kevin L. Mitchell kevin.mitch...@rackspace.com 
wrote:


On Mon, 2014-11-17 at 18:48 -0500, Doug Hellmann wrote:

I’ve spent a bit of time thinking about the resource ownership issue.
The challenge there is we don’t currently have any libraries that
define tables in the schema of an application. I think that’s a good
pattern to maintain, since it avoids introducing a lot of tricky
issues like how to manage migrations for the library, how to ensure
they are run by the application, etc. The fact that this common quota
thing needs to store some data in a schema that it controls says to me
that it is really an app and not a library. Making the quota manager
an app solves the API definition issue, too, since we can describe a
generic way to configure quotas and other applications can then use
that API to define specific rules using the quota manager’s API.

I don’t know if we need a new application or if it would make sense
to, as with policy, add quota management features to keystone. A
single well-defined app has some appeal, but there’s also a certain
amount of extra ramp-up time needed to go that route that we wouldn’t
need if we added the features directly to keystone.

I'll also point out that it was largely because of the storage needs
that I chose to propose Boson[1] as a separate app, rather than as a
library.  Further, the dimensions over which quota-covered resources
needed to be tracked seemed to me to be complicated enough that it would
be better to define a new app and make it support that one domain well,
which is why I didn't propose it as something to add to Keystone.
Consider: nova has quotas that are applied by user, other quotas that
are applied by tenant, and even some quotas on what could be considered
sub-resources—a limit on the number of security group rules per security
group, for instance.

My current feeling is that, if we can figure out a way to make the quota
problem into an acceptable library, that will work; it would probably
have to maintain its own database separate from the client app and have
features for automatically managing the schema, since we couldn't
necessarily rely on the client app to invoke the proper juju there.  If,
on the other hand, that ends up failing, then the best route is probably
to begin by developing a separate app, like Boson, as a PoC; then, after
we have some idea of just how difficult it is to actually solve the
problem, we can evaluate whether it makes sense to actually fold it into
a service like Keystone, or whether it should stand on its own.

(Personally, I think Boson should be created and should stand on its
own, but I also envision using it for purposes outside of OpenStack…)

Thanks for mentioning Boson again. I’m embarrassed that I completely forgot 
about the fact that you mentioned this at the summit.

I’ll have to look at the proposal more closely before I comment in any detail, 
but I take it as a good sign that we’re coming back around to the idea of 
solving this with an app instead of a library.


I assume I'm really late in the thread, so I can just sit and give +1 to
this direction: IMHO, quotas need to be managed through a CRUD interface,
which implies an app, as it sounds unreasonable to extend each
consumer app's API.

That said, back to Blazar, I would just like to emphasize that Blazar is
not trying to address the quota enforcement level, but rather to provide a
centralized endpoint for managing reservations.
Consequently, Blazar can also be considered a consumer of this quota
system, whether it's a library or a separate REST API.

Last thing, I don't think that a quota application necessarily means that
quota enforcement should be managed through external calls to this
app. I can rather see an external system able to set, for each project, a
local view of what should be enforced locally. If operators don't want
to deploy that quota management project, it's up to them to address the
heterogeneous setups for each project.
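
Purely as a strawman of the shape such a thing could take (every name below is
hypothetical; this is not Boson's API nor anything that exists today):

    class QuotaEngine(object):
        """Hypothetical per-service enforcement front-end.

        Limits are pushed in by a central quota service; reservations are
        checked and recorded locally, in tables owned by this library.
        """

        def set_limit(self, project_id, resource, limit):
            """Store the centrally-decided limit for local enforcement."""

        def reserve(self, project_id, resource, delta):
            """Raise OverQuota if usage + delta would exceed the limit,
            otherwise return a reservation id to commit or roll back."""

        def commit(self, reservation_id):
            """Turn a reservation into recorded usage."""

        def rollback(self, reservation_id):
            """Release a reservation without recording usage."""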


My 2 cts (too),
-Sylvain


Doug


Just my $.02…

[1] https://wiki.openstack.org/wiki/Boson
--
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace




[openstack-dev] [all][oslo.messaging] Multi-versioning RPC Service API support

2014-11-19 Thread Denis Makogon
Hello Stackers.

When I was browsing through the bugs of oslo.messaging [1] I found one
[2] that is pretty interesting (it's as old as the universe), but it doesn't
seem like a bug, more like a blueprint.

Digging into the code of oslo.messaging, I've found that, at least for now,
there's no way to launch a single service that is able to handle
multiple versions (actually it can, if the manager implementation can handle
requests for different RPC API versions).

So, I'd like to understand whether this is still valid. And if it is, I'd
like to collect use cases from all projects and see if oslo.messaging can
handle such a case.

But, as a first step to understanding a multi-versioning/multi-manager
strategy for RPC services, I want to clarify a few things. The current code maps
a single version to a list of RPC service endpoint implementations, so here
comes the question:

- Does a set of endpoints represent a single RPC API version cap?

If so, how should we represent multi-versioning? We could follow the
existing pattern, where each RPC API version cap gets its own set of
endpoints. To give some implementation details: for now 'endpoints' is a
list of classes for a single version cap, but if we were to support multiple
version caps, 'endpoints' would become a dictionary that contains pairs of
'version_cap' - 'endpoints'. This type of multi-versioning seems to be the
easiest.
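
For a concrete reference point, this is roughly how a single-version service
wires up its endpoints today; a minimal sketch assuming the public
oslo.messaging RPC server API, with the topic, namespace and version numbers
invented for illustration:

    from oslo.config import cfg
    from oslo import messaging   # oslo_messaging in later releases

    class ComputeAPIV1(object):
        # each endpoint advertises the version (and namespace) it implements
        target = messaging.Target(namespace='compute', version='1.3')

        def ping(self, ctxt, who):
            return 'v1 pong for %s' % who

    class ComputeAPIV2(object):
        target = messaging.Target(namespace='compute', version='2.0')

        def ping(self, ctxt, who):
            return {'pong': who}

    transport = messaging.get_transport(cfg.CONF)
    server_target = messaging.Target(topic='my-service', server='host-1')
    # 'endpoints' is a flat list today; the dispatcher picks the first endpoint
    # whose target version is compatible with the version the caller asked for.
    server = messaging.get_rpc_server(transport, server_target,
                                      [ComputeAPIV1(), ComputeAPIV2()],
                                      executor='blocking')
    server.start()
    server.wait()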


Thoughts/Suggestion?


[1] https://bugs.launchpad.net/oslo.messaging

[2] https://bugs.launchpad.net/oslo.messaging/+bug/1050374


Kind regards,

Denis Makogon


Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-19 Thread Doug Hellmann

On Nov 18, 2014, at 6:11 PM, Sachi King sachi.k...@anchor.com.au wrote:

 On Wednesday, November 12, 2014 02:06:02 PM Doug Hellmann wrote:
 During our “Graduation Schedule” summit session we worked through the list 
 of modules remaining the in the incubator. Our notes are in the etherpad 
 [1], but as part of the Write it Down” theme for Oslo this cycle I am also 
 posting a summary of the outcome here on the mailing list for wider 
 distribution. Let me know if you remembered the outcome for any of these 
 modules differently than what I have written below.
 
 Doug
 
 
 
 Deleted or deprecated modules:
 
 funcutils.py - This was present only for python 2.6 support, but it is no 
 longer used in the applications. We are keeping it in the stable/juno branch 
 of the incubator, and removing it from master 
 (https://review.openstack.org/130092)
 
 hooks.py - This is not being used anywhere, so we are removing it. 
 (https://review.openstack.org/#/c/125781/)
 
 quota.py - A new quota management system is being created 
 (https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and should 
 replace this, so we will keep it in the incubator for now but deprecate it.
 
 crypto/utils.py - We agreed to mark this as deprecated and encourage the use 
 of Barbican or cryptography.py (https://review.openstack.org/134020)
 
 cache/ - Morgan is going to be working on a new oslo.cache library as a 
 front-end for dogpile, so this is also deprecated 
 (https://review.openstack.org/134021)
 
 apiclient/ - With the SDK project picking up steam, we felt it was safe to 
 deprecate this code as well (https://review.openstack.org/134024).
 
 xmlutils.py - This module was used to provide a security fix for some XML 
 modules that have since been updated directly. It was removed. 
 (https://review.openstack.org/#/c/125021/)
 
 
 
 Graduating:
 
 oslo.context:
 - Dims is driving this
 - https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
 - includes:
  context.py
 
 oslo.service:
 
 During the Oslo graduation schedule meet up someone was mentioning they'd 
 be willing to help out as a contact for questions during this process.
 Can anyone put me in contact with that person or remember who he was?

I don’t know if it was me, but I’ll volunteer now. :-)

dhellmann on freenode, or this email address, are the best way to reach me. I’m 
in the US Eastern time zone.

Doug

 
 - Sachi is driving this
 - https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
 - includes:
  eventlet_backdoor.py
  loopingcall.py
  periodic_task.py
  request_utils.py
  service.py
  sslutils.py
  systemd.py
  threadgroup.py
 
 oslo.utils:
 - We need to look into how to preserve the git history as we import these 
 modules.
 - includes:
  fileutils.py
  versionutils.py
 
 
 
 Remaining untouched:
 
 scheduler/ - Gantt probably makes this code obsolete, but it isn’t clear 
 whether Gantt has enough traction yet so we will hold onto these in the 
 incubator for at least another cycle.
 
 report/ - There’s interest in creating an oslo.reports library containing 
 this code, but we haven’t had time to coordinate with Solly about doing that.
 
 
 
 Other work:
 
 We will continue the work on oslo.concurrency and oslo.log that we started 
 during Juno.
 
 [1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-11-19 Thread Endre Karlson
All I can say at the moment is that usage and quota management is a crappy
thing to do in OpenStack. Every service has its own way of doing it, both
in the clients and the APIs. +n for making an effort to standardize this
in a way that is consistent across projects.

2014-11-19 14:33 GMT+01:00 Sylvain Bauza sba...@redhat.com:


 Le 18/11/2014 20:05, Doug Hellmann a écrit :

  On Nov 17, 2014, at 7:18 PM, Kevin L. Mitchell 
 kevin.mitch...@rackspace.com wrote:

  On Mon, 2014-11-17 at 18:48 -0500, Doug Hellmann wrote:

 I’ve spent a bit of time thinking about the resource ownership issue.
 The challenge there is we don’t currently have any libraries that
 define tables in the schema of an application. I think that’s a good
 pattern to maintain, since it avoids introducing a lot of tricky
 issues like how to manage migrations for the library, how to ensure
 they are run by the application, etc. The fact that this common quota
 thing needs to store some data in a schema that it controls says to me
 that it is really an app and not a library. Making the quota manager
 an app solves the API definition issue, too, since we can describe a
 generic way to configure quotas and other applications can then use
 that API to define specific rules using the quota manager’s API.

 I don’t know if we need a new application or if it would make sense
 to, as with policy, add quota management features to keystone. A
 single well-defined app has some appeal, but there’s also a certain
 amount of extra ramp-up time needed to go that route that we wouldn’t
 need if we added the features directly to keystone.

 I'll also point out that it was largely because of the storage needs
 that I chose to propose Boson[1] as a separate app, rather than as a
 library.  Further, the dimensions over which quota-covered resources
 needed to be tracked seemed to me to be complicated enough that it would
 be better to define a new app and make it support that one domain well,
 which is why I didn't propose it as something to add to Keystone.
 Consider: nova has quotas that are applied by user, other quotas that
 are applied by tenant, and even some quotas on what could be considered
 sub-resources—a limit on the number of security group rules per security
 group, for instance.

 My current feeling is that, if we can figure out a way to make the quota
 problem into an acceptable library, that will work; it would probably
 have to maintain its own database separate from the client app and have
 features for automatically managing the schema, since we couldn't
 necessarily rely on the client app to invoke the proper juju there.  If,
 on the other hand, that ends up failing, then the best route is probably
 to begin by developing a separate app, like Boson, as a PoC; then, after
 we have some idea of just how difficult it is to actually solve the
 problem, we can evaluate whether it makes sense to actually fold it into
 a service like Keystone, or whether it should stand on its own.

 (Personally, I think Boson should be created and should stand on its
 own, but I also envision using it for purposes outside of OpenStack…)

 Thanks for mentioning Boson again. I’m embarrassed that I completely
 forgot about the fact that you mentioned this at the summit.

 I’ll have to look at the proposal more closely before I comment in any
 detail, but I take it as a good sign that we’re coming back around to the
 idea of solving this with an app instead of a library.


 I assume I'm really late in the thread so I can just sit and give +1 to
 this direction : IMHO, quotas need to managed thanks to a CRUD interface
 which implies to get an app, as it sounds unreasonable to extend each
 consumer app API.

 That said, back to Blazar, I just would like to emphasize that Blazar is
 not trying to address the quota enforcement level, but rather provide a
 centralized endpoint for managing reservations.
 Consequently, Blazar can also be considered as a consumer of this quota
 system, whatever it's in a library or on a separate REST API.

 Last thing, I don't think that a quota application necessarily means that
 quotas enforcement should be managed thanks to external calls to this app.
 I can rather see an external system able to set for each project a local
 view of what should be enforced locally. If operators don't want to deploy
 that quota management project, it's up to them to address the heterogeneous
 setups for each project.

 My 2 cts (too),
 -Sylvain


  Doug

  Just my $.02…

 [1] https://wiki.openstack.org/wiki/Boson
 --
 Kevin L. Mitchell kevin.mitch...@rackspace.com
 Rackspace


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 

Re: [openstack-dev] Quota management and enforcement across projects

2014-11-19 Thread Doug Hellmann

On Nov 19, 2014, at 8:33 AM, Sylvain Bauza sba...@redhat.com wrote:

 
 Le 18/11/2014 20:05, Doug Hellmann a écrit :
 On Nov 17, 2014, at 7:18 PM, Kevin L. Mitchell 
 kevin.mitch...@rackspace.com wrote:
 
 On Mon, 2014-11-17 at 18:48 -0500, Doug Hellmann wrote:
 I’ve spent a bit of time thinking about the resource ownership issue.
 The challenge there is we don’t currently have any libraries that
 define tables in the schema of an application. I think that’s a good
 pattern to maintain, since it avoids introducing a lot of tricky
 issues like how to manage migrations for the library, how to ensure
 they are run by the application, etc. The fact that this common quota
 thing needs to store some data in a schema that it controls says to me
 that it is really an app and not a library. Making the quota manager
 an app solves the API definition issue, too, since we can describe a
 generic way to configure quotas and other applications can then use
 that API to define specific rules using the quota manager’s API.
 
 I don’t know if we need a new application or if it would make sense
 to, as with policy, add quota management features to keystone. A
 single well-defined app has some appeal, but there’s also a certain
 amount of extra ramp-up time needed to go that route that we wouldn’t
 need if we added the features directly to keystone.
 I'll also point out that it was largely because of the storage needs
 that I chose to propose Boson[1] as a separate app, rather than as a
 library.  Further, the dimensions over which quota-covered resources
 needed to be tracked seemed to me to be complicated enough that it would
 be better to define a new app and make it support that one domain well,
 which is why I didn't propose it as something to add to Keystone.
 Consider: nova has quotas that are applied by user, other quotas that
 are applied by tenant, and even some quotas on what could be considered
 sub-resources—a limit on the number of security group rules per security
 group, for instance.
 
 My current feeling is that, if we can figure out a way to make the quota
 problem into an acceptable library, that will work; it would probably
 have to maintain its own database separate from the client app and have
 features for automatically managing the schema, since we couldn't
 necessarily rely on the client app to invoke the proper juju there.  If,
 on the other hand, that ends up failing, then the best route is probably
 to begin by developing a separate app, like Boson, as a PoC; then, after
 we have some idea of just how difficult it is to actually solve the
 problem, we can evaluate whether it makes sense to actually fold it into
 a service like Keystone, or whether it should stand on its own.
 
 (Personally, I think Boson should be created and should stand on its
 own, but I also envision using it for purposes outside of OpenStack…)
 Thanks for mentioning Boson again. I’m embarrassed that I completely forgot 
 about the fact that you mentioned this at the summit.
 
 I’ll have to look at the proposal more closely before I comment in any 
 detail, but I take it as a good sign that we’re coming back around to the 
 idea of solving this with an app instead of a library.
 
 I assume I'm really late in the thread so I can just sit and give +1 to this 
 direction : IMHO, quotas need to managed thanks to a CRUD interface which 
 implies to get an app, as it sounds unreasonable to extend each consumer app 
 API.
 
 That said, back to Blazar, I just would like to emphasize that Blazar is not 
 trying to address the quota enforcement level, but rather provide a 
 centralized endpoint for managing reservations.
 Consequently, Blazar can also be considered as a consumer of this quota 
 system, whatever it's in a library or on a separate REST API.
 
 Last thing, I don't think that a quota application necessarily means that 
 quotas enforcement should be managed thanks to external calls to this app. I 
 can rather see an external system able to set for each project a local view 
 of what should be enforced locally. If operators don't want to deploy that 
 quota management project, it's up to them to address the heterogeneous setups 
 for each project.

I’m not sure what this means. You want the new service to be optional? How 
would apps written against the service find and manage quota data if the 
service isn’t there?

Doug

 
 My 2 cts (too),
 -Sylvain
 
 Doug
 
 Just my $.02…
 
 [1] https://wiki.openstack.org/wiki/Boson
 -- 
 Kevin L. Mitchell kevin.mitch...@rackspace.com
 Rackspace
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] [all][oslo.messaging] Multi-versioning RPC Service API support

2014-11-19 Thread Doug Hellmann

On Nov 19, 2014, at 8:49 AM, Denis Makogon dmako...@mirantis.com wrote:

 Hello Stackers.
 
 
 
 When i was browsing through bugs of oslo.messaging [1] i found one 
 [2] pretty interesting (it’s old as universe), but it doesn’t seem like a 
 bug, mostly like a blueprint. 
 Digging into code of oslo.messaging i’ve found that, at least, for now, 
 there’s no way launch single service that would be able to handle 
 multiple versions (actually it can if manager implementation can handle 
 request for different RPC API versions).
 
 So, i’d like to understand if it’s still valid? And if it is i’d like 
 to collect use cases from all projects and see if oslo.messaging can handle 
 such case.
 But, as first step to understanding multi-versioning/multi-managers strategy 
 for RPC services, i want to clarify few things. Current code maps 
 single version to a list of RPC service endpoints implementation, so here 
 comes question:
 
 - Does a set of endpoints represent single RPC API version cap?
 
 If that’s it, how should we represent multi-versioning? If we’d 
 follow existing pattern: each RPC API version cap represents its own set of 
 endpoints,
 let me provide some implementation details here, for now ‘endpoints’ is a 
 list of classes for a single version cap, but if we’d support multiple version
 caps ‘endpoints’ would become a dictionary that contains pairs of 
 ‘version_cap’-’endpoints’. This type of multi-versioning seems to be the 
 easiest.
 
 
 Thoughts/Suggestion?

The dispatcher [1] supports endpoints with versions, and searches for a 
compatible endpoint for incoming requests. I’ll go ahead and close the ticket. 
There are lots of others open and valid, so you might want to start looking at 
some that aren’t quite so old if you’re looking for something to contribute. 
Drop by #openstack-oslo on freenode if you want to chat about any of them.
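
For illustration, here is a rough sketch of a single service process exposing
two versioned endpoints (class and method names are made up, not taken from any
real project):

from oslo.config import cfg
from oslo import messaging

class ComputeAPIv1(object):
    # Requests asking for versions 1.0 through 1.3 are dispatched here.
    target = messaging.Target(version='1.3')

    def build_instance(self, ctxt, instance_id):
        return 'built %s with the v1 manager' % instance_id

class ComputeAPIv2(object):
    # Requests asking for 2.x land on this endpoint instead.
    target = messaging.Target(version='2.1')

    def build_instance(self, ctxt, instance_id, flavor=None):
        return 'built %s with the v2 manager' % instance_id

transport = messaging.get_transport(cfg.CONF)
server_target = messaging.Target(topic='compute', server='host-1')
# One server, both versions: the dispatcher picks whichever endpoint is
# version-compatible with the version requested by the client.
server = messaging.get_rpc_server(transport, server_target,
                                  [ComputeAPIv1(), ComputeAPIv2()])
server.start()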

Thanks!
Doug

[1] 
http://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo/messaging/rpc/dispatcher.py#n153

 
 
 [1] https://bugs.launchpad.net/oslo.messaging
 [2] https://bugs.launchpad.net/oslo.messaging/+bug/1050374
 
 
 Kind regards,
 Denis Makogon
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo.messaging] Multi-versioning RPC Service API support

2014-11-19 Thread Russell Bryant
On 11/19/2014 09:18 AM, Doug Hellmann wrote:
 
 On Nov 19, 2014, at 8:49 AM, Denis Makogon dmako...@mirantis.com
 mailto:dmako...@mirantis.com wrote:
 
 Hello Stackers.



 When i was browsing through bugs of oslo.messaging [1] i found
 one [2] pretty interesting (it’s old as universe), but it doesn’t seem
 like a bug, mostly like a blueprint.
 Digging into code of oslo.messaging i’ve found that, at least, for
 now, there’s no way launch single service that would be able to handle
 multiple versions (actually it can if manager implementation can
 handle request for different RPC API versions).

 So, i’d like to understand if it’s still valid? And if it is
 i’d like to collect use cases from all projects and see if
 oslo.messaging can handle such case.
 But, as first step to understanding multi-versioning/multi-managers
 strategy for RPC services, i want to clarify few things. Current code
 maps
 single version to a list of RPC service endpoints implementation, so
 here comes question:

 -Does a set of endpoints represent single RPC API version cap?

 If that’s it, how should we represent multi-versioning? If
 we’d follow existing pattern: each RPC API version cap represents its
 own set of endpoints,
 let me provide some implementation details here, for now ‘endpoints’
 is a list of classes for a single version cap, but if we’d support
 multiple version
 caps ‘endpoints’ would become a dictionary that contains pairs of
 ‘version_cap’-’endpoints’. This type of multi-versioning seems to be
 the easiest.


 Thoughts/Suggestion?
 
 The dispatcher [1] supports endpoints with versions, and searches for a
 compatible endpoint for incoming requests. I’ll go ahead and close the
 ticket. There are lots of others open and valid, so you might want to
 start looking at some that aren’t quite so old if you’re looking for
 something to contribute. Drop by #openstack-oslo on freenode if you want
 to chat about any of them.
 
 Thanks!
 Doug
 
 [1] 
 http://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo/messaging/rpc/dispatcher.py#n153

In particular, each endpoint can have an associated namespace, which
effectively allows separate APIs to be separately versioned since a
request comes in and identifies the namespace it is targeting.

Services can also separate versions by just using multiple topics.
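
For example (a minimal sketch -- the 'baseapi' namespace and 'compute' topic
are just placeholders):

from oslo.config import cfg
from oslo import messaging

class BaseAPI(object):
    # Only calls explicitly targeted at the 'baseapi' namespace reach this
    # endpoint, so it can be versioned independently of the main API.
    target = messaging.Target(namespace='baseapi', version='1.1')

    def ping(self, ctxt, arg):
        return {'reply': arg}

# Client side: prepare() selects the namespace and version for the call.
transport = messaging.get_transport(cfg.CONF)
client = messaging.RPCClient(transport, messaging.Target(topic='compute'))
cctxt = client.prepare(namespace='baseapi', version='1.1')
reply = cctxt.call({}, 'ping', arg='hello')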

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-11-19 Thread Sylvain Bauza


Le 19/11/2014 15:06, Doug Hellmann a écrit :

On Nov 19, 2014, at 8:33 AM, Sylvain Bauza sba...@redhat.com wrote:


Le 18/11/2014 20:05, Doug Hellmann a écrit :

On Nov 17, 2014, at 7:18 PM, Kevin L. Mitchell kevin.mitch...@rackspace.com 
wrote:


On Mon, 2014-11-17 at 18:48 -0500, Doug Hellmann wrote:

I’ve spent a bit of time thinking about the resource ownership issue.
The challenge there is we don’t currently have any libraries that
define tables in the schema of an application. I think that’s a good
pattern to maintain, since it avoids introducing a lot of tricky
issues like how to manage migrations for the library, how to ensure
they are run by the application, etc. The fact that this common quota
thing needs to store some data in a schema that it controls says to me
that it is really an app and not a library. Making the quota manager
an app solves the API definition issue, too, since we can describe a
generic way to configure quotas and other applications can then use
that API to define specific rules using the quota manager’s API.

I don’t know if we need a new application or if it would make sense
to, as with policy, add quota management features to keystone. A
single well-defined app has some appeal, but there’s also a certain
amount of extra ramp-up time needed to go that route that we wouldn’t
need if we added the features directly to keystone.

I'll also point out that it was largely because of the storage needs
that I chose to propose Boson[1] as a separate app, rather than as a
library.  Further, the dimensions over which quota-covered resources
needed to be tracked seemed to me to be complicated enough that it would
be better to define a new app and make it support that one domain well,
which is why I didn't propose it as something to add to Keystone.
Consider: nova has quotas that are applied by user, other quotas that
are applied by tenant, and even some quotas on what could be considered
sub-resources—a limit on the number of security group rules per security
group, for instance.

My current feeling is that, if we can figure out a way to make the quota
problem into an acceptable library, that will work; it would probably
have to maintain its own database separate from the client app and have
features for automatically managing the schema, since we couldn't
necessarily rely on the client app to invoke the proper juju there.  If,
on the other hand, that ends up failing, then the best route is probably
to begin by developing a separate app, like Boson, as a PoC; then, after
we have some idea of just how difficult it is to actually solve the
problem, we can evaluate whether it makes sense to actually fold it into
a service like Keystone, or whether it should stand on its own.

(Personally, I think Boson should be created and should stand on its
own, but I also envision using it for purposes outside of OpenStack…)

Thanks for mentioning Boson again. I’m embarrassed that I completely forgot 
about the fact that you mentioned this at the summit.

I’ll have to look at the proposal more closely before I comment in any detail, 
but I take it as a good sign that we’re coming back around to the idea of 
solving this with an app instead of a library.

I assume I'm really late in the thread so I can just sit and give +1 to this 
direction : IMHO, quotas need to managed thanks to a CRUD interface which 
implies to get an app, as it sounds unreasonable to extend each consumer app 
API.

That said, back to Blazar, I just would like to emphasize that Blazar is not 
trying to address the quota enforcement level, but rather provide a centralized 
endpoint for managing reservations.
Consequently, Blazar can also be considered as a consumer of this quota system, 
whatever it's in a library or on a separate REST API.

Last thing, I don't think that a quota application necessarily means that quotas 
enforcement should be managed thanks to external calls to this app. I can 
rather see an external system able to set for each project a local view of what 
should be enforced locally. If operators don't want to deploy that quota 
management project, it's up to them to address the heterogeneous setups for each 
project.

I’m not sure what this means. You want the new service to be optional? How 
would apps written against the service find and manage quota data if the 
service isn’t there?


My bad. Let me rephrase it. I'm seeing this service as providing added 
value for managing quotas by ensuring consistency across all projects. 
But as I said, I'm also thinking that the quota enforcement still has to 
be done at the customer project level.


So, I can imagine a client (or a Facade if you prefer) providing quota 
resources to the customer app, which could be either fetched (through some 
caching) from the service, or directly taken from the existing quota DB.


In order to do that, I could imagine these steps:
 #1 : customer app makes use of oslo.quota for managing its own quota 
resources
 #2 : the external app 

Re: [openstack-dev] [Fuel] Power management in Cobbler

2014-11-19 Thread Vladimir Kozhukalov
I am absolutely -1 for using Cobbler for that. Lately, the Ironic guys have become
much more open to adopting new features (at least if they are implemented
in terms of Ironic drivers). Currently, it looks like we are probably able
to deliver a zero-step Fuel Ironic driver by 6.1. Ironic already has working
IPMI stuff and they don't oppose ssh-based power management any more.
Personally, I'd prefer to focus our efforts on the Ironic stuff, keeping in mind
that Cobbler will be removed in the near future.

Vladimir Kozhukalov

On Wed, Nov 5, 2014 at 7:28 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 I am +1 for using cobbler as power management before we merge Ironic-based
 stuff. It is essential part also for our HA and stop
 provisioning/deployment mechanism.

 On Tue, Nov 4, 2014 at 1:00 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Not long time ago we discussed necessity of power management feature in
 Fuel.

 What is your opinion on power management support in Cobbler, i took a
 look at documentation [1] and templates [2] that  we have right now.
 And it actually looks like we can make use of it.

 The only issue is that power address that cobbler system is configured
 with is wrong.
  Because the provisioning serializer uses the one reported by bootstrap, but it can
 be easily fixed.

  Of course, another question is a separate network for power management, but
  we can live with
 admin for now.

 Please share your opinions on this matter. Thanks

 [1] http://www.cobblerd.org/manuals/2.6.0/4/5_-_Power_Management.html
 [2] http://paste.openstack.org/show/129063/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Parallels loopback disk format support

2014-11-19 Thread Nikhil Komawar
Hi Maxim,

Thanks for showing interest in this aspect. Like nova-specs, Glance also needs 
a spec to be created for discussion related to the blueprint. 

Please try to create one here [1]. Additionally you may join us at the meeting 
[2] if you feel stuck or need clarifications.

[1] https://github.com/openstack/glance-specs
[2] https://wiki.openstack.org/wiki/Meetings#Glance_Team_meeting

Thanks,
-Nikhil


From: Maxim Nestratov [mnestra...@parallels.com]
Sent: Wednesday, November 19, 2014 8:27 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [glance] Parallels loopback disk format support

Greetings,

In scope of these changes [1], I would like to add a new image format
into glance. For this purpose there was created a blueprint [2] and
would really appreciate if someone from glance team could review this
proposal.

[1] https://review.openstack.org/#/c/111335/
[2] https://blueprints.launchpad.net/glance/+spec/pcs-support

Best,

Maxim Nestratov,
Lead Software Developer,
Parallels


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][stable] Review request

2014-11-19 Thread Nikhil Komawar
This patch is a bit different as it is not a straightforward cherry-pick from 
the commit in master (that fixes the bug). So, it seemed like a good idea to 
start the discussion.

Nonetheless, my bad for the miscommunication: the email came across as just a 
review request, while the intent was a (possible) discussion/clarification.
 
Thanks,
-Nikhil


From: Flavio Percoco [fla...@redhat.com]
Sent: Wednesday, November 19, 2014 3:59 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][stable] Review request

Please abstain from sending review requests to the mailing list.

http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html

Thanks,
Flavio

Hi All,
On 18/11/14 17:42 +, Kekane, Abhishek wrote:

Greetings!!!



Can anyone please review this patch [1].

It requires one more +2 to get merged in stable/juno.



We want to use stable/juno in production environment and this patch will fix
the blocker bug [1] for restrict download image feature.

Please do the needful.



[1] https://review.openstack.org/#/c/133858/

[2] https://bugs.launchpad.net/glance/+bug/1387973





Thank You in advance.



Abhishek Kekane


__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV][Telco] Telco Working Group meeting minutes (2014-11-19)

2014-11-19 Thread Steve Gordon
Hi all,

Please find the minutes and logs for the Telco Working Group meeting held at 1400 
UTC on Wednesday the 19th of November at the following locations:

* Meeting ended Wed Nov 19 15:02:01 2014 UTC.  Information about MeetBot at 
http://wiki.debian.org/MeetBot . (v 0.1.4)
* Minutes:
http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-11-19-14.02.html
* Minutes (text): 
http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-11-19-14.02.txt
* Log:
http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-11-19-14.02.log.html

Action items:

sgordon_ to investigate current state of storyboard and report back
sgordon_ to update wiki structure and provide a draft template for 
discussion
sgordon_ to kick off M/L discussion about template for use cases
jannis_rake-reve to create glossary page on wiki
sgordon_ to lock in meeting time for next week on monday barring further 
feedback
mkoderer to kick off mailing list thread to get wider feedback
sgordon_ to follow up on vlan trunking


Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Power management in Cobbler

2014-11-19 Thread Tomasz Napierala

 On 19 Nov 2014, at 16:10, Vladimir Kozhukalov vkozhuka...@mirantis.com 
 wrote:
 
 I am absolutely -1 for using Cobbler for that. Lastly, Ironic guys became 
 much more open for adopting new features (at least if they are implemented in 
 terms of Ironic drivers). Currently, it looks like we are  probably able to 
 deliver zero step Fuel Ironic driver by 6.1. Ironic already has working IPMI 
 stuff and they don't oppose ssh based power management any more. Personally, 
 I'd prefer to focus our efforts towards  Ironic stuff and keeping in mind 
 that Cobbler will be removed in the nearest future. 

I know that due to time constraints it would be easier to go with Cobbler, but 
I also think we should stay closer to the community and switch to Ironic as soon 
as possible. 

Regards,
-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] v3 integrated tests taking a lot longer?

2014-11-19 Thread Jay Pipes

On 11/18/2014 06:48 PM, Matt Riedemann wrote:

I just started noticing today that the v3 integrated api samples tests
seem to be taking a lot longer than the other non-v3 integrated api
samples tests. On my 4 VCPU, 4 GB RAM VM some of those tests are taking
anywhere from 15-50+ seconds, while the non-v3 tests are taking less
than a second.

Has something changed recently in how the v3 API code is processed that
might have caused this?  With microversions or jsonschema validation
perhaps?

I was thinking it was oslo.db 1.1.0 at first since that was a recent
update but given the difference in times between v3 and non-v3 api
samples tests I'm thinking otherwise.


Heya,

I've been stung in the past by running either tox or run_tests.sh while 
active in a virtualenv. The speed goes from ~2 seconds per API sample 
test to ~15 seconds per API sample test...


Not sure if this is what is causing your problem, but worth a check.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Error message when Neutron network is running out of IP addresses

2014-11-19 Thread Matt Riedemann



On 11/18/2014 5:27 PM, Vishvananda Ishaya wrote:

It looks like this has not been reported so a bug would be great. It
looks like it might be as easy as adding the NoMoreFixedIps exception to
the list where FixedIpLimitExceeded is caught in nova/network/manager.py

Vish

On Nov 18, 2014, at 8:13 AM, Edgar Magana edgar.mag...@workday.com
mailto:edgar.mag...@workday.com wrote:


Hello Community,

When a network subnet runs out of IP addresses a request to create a
VM on that network fails with the Error message: No valid host was
found. There are not enough hosts available.
In the nova logs the error message is: NoMoreFixedIps: No fixed IP
addresses available for network:

Obviously, this is not the desirable behavior, is there any work in
progress to change it or I should open a bug to properly propagate the
right error message.

Thanks,

Edgar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Except this is neutron right?  In that case nova.network.neutronv2.api 
needs to translate the NeutronClientException to a NovaException and 
raise that back up so the compute manager can tell the scheduler it blew 
up in setting up networking.


When you open a bug, please provide a stacktrace so we know where you're 
hitting this in the neutronv2 API.
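
For reference, the kind of translation I'm describing would look roughly like
this (the helper name and the exact neutronclient exception are assumptions,
not the actual neutronv2 code):

from neutronclient.common import exceptions as neutron_client_exc

from nova import exception

def create_port_or_raise(neutron_client, instance, network_id):
    # Illustrative helper only -- not the real nova.network.neutronv2.api path.
    body = {'port': {'network_id': network_id,
                     'device_id': instance['uuid']}}
    try:
        return neutron_client.create_port(body)
    except neutron_client_exc.IpAddressGenerationFailureClient:
        # Translate the neutron client error into a NovaException so the
        # compute manager can report a meaningful network failure instead
        # of the generic "No valid host was found".
        raise exception.NoMoreFixedIps(net=network_id)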


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Error message when Neutron network is running out of IP addresses

2014-11-19 Thread Edgar Magana
Ok, I will open a bug and commit a patch!  :-)

Edgar

From: Vishvananda Ishaya vishvana...@gmail.commailto:vishvana...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, November 18, 2014 at 3:27 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][Neutron] Error message when Neutron network 
is running out of IP addresses

It looks like this has not been reported so a bug would be great. It looks like 
it might be as easy as adding the NoMoreFixedIps exception to the list where 
FixedIpLimitExceeded is caught in nova/network/manager.py

Vish

On Nov 18, 2014, at 8:13 AM, Edgar Magana 
edgar.mag...@workday.commailto:edgar.mag...@workday.com wrote:

Hello Community,

When a network subnet runs out of IP addresses a request to create a VM on that 
network fails with the Error message: No valid host was found. There are not 
enough hosts available.
In the nova logs the error message is: NoMoreFixedIps: No fixed IP addresses 
available for network:

Obviously, this is not the desirable behavior, is there any work in progress to 
change it or I should open a bug to properly propagate the right error message.

Thanks,

Edgar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] We lost some commits during upstream puppet manifests merge

2014-11-19 Thread Vladimir Kuklin
Fuelers

I am writing because we had a really sad incident - we noticed that after we
merged the upstream keystone module we lost modifications (Change-Id:
Idfe4b54caa0d96a93e93bfff12d8b6216f83e2f1
https://review.openstack.org/#/q/Idfe4b54caa0d96a93e93bfff12d8b6216f83e2f1,n,z)
for the memcached dogpile driver, which are crucial for us. And here I can see 2
problems:

1) how can we ensure that we did not lose anything else?
2) how can we ensure that this will never happen again?

Sadly, it seems that the first question implies that we recheck all the
upstream merge/adaptation commits by hand and check that we did not lose
anything.

Regarding question number 2 we do already have established process for
upstream code merge:
http://docs.mirantis.com/fuel-dev/develop/module_structure.html#contributing-to-existing-fuel-library-modules.
It seems that this process had  not been established when keystone code was
reviewed. I see two ways here:

1) We should enforce the code review workflow and specifically say that
upstream merges can be accepted only after we have 2 '+2s' from core
reviewers, after they recheck that the corresponding change does not introduce
any regressions.
2) We should speed up development of a modular testing framework that
will check that the corresponding change affects only particular pieces. It
seems much easier if we split deployment into stages (oh my, I am again
talking about the granular deployment feature) so that each particular commit
affects only one of the stages and we can see the difference and catch
regressions earlier.





-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Power management in Cobbler

2014-11-19 Thread Matthew Mosesohn
Tomasz, Vladimir, others,

The way I see it is we need a way to discover the corresponding IPMI
address for a given node for out-of-band power management. The
ultimate ipmitool command is going to be exactly the same whether it
comes from Cobbler or Ironic, and all we need to do is feed
information to the appropriate utility when it comes to power
management. If it's the same command, it doesn't matter who does it.
Ironic of course is a better option, but I'm not sure where we are
with discovering ipmi IP addresses or prompting admins to enter this
data for every node. Without this step, neither Cobbler nor Ironic is
capable of handling this task.
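
To illustrate, a minimal sketch of the command either tool ends up issuing
(address and credentials made up):

import subprocess

def set_power_state(ipmi_address, user, password, state):
    # state is one of: 'status', 'on', 'off', 'cycle'
    cmd = ['ipmitool', '-I', 'lanplus',
           '-H', ipmi_address, '-U', user, '-P', password,
           'chassis', 'power', state]
    return subprocess.check_output(cmd)

# Whether Cobbler or Ironic drives it, the BMC sees the same request:
print(set_power_state('10.20.0.42', 'ADMIN', 'secret', 'status'))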

Best Regards,
Matthew Mosesohn

On Wed, Nov 19, 2014 at 7:38 PM, Tomasz Napierala
tnapier...@mirantis.com wrote:

 On 19 Nov 2014, at 16:10, Vladimir Kozhukalov vkozhuka...@mirantis.com 
 wrote:

 I am absolutely -1 for using Cobbler for that. Lastly, Ironic guys became 
 much more open for adopting new features (at least if they are implemented 
 in terms of Ironic drivers). Currently, it looks like we are  probably able 
 to deliver zero step Fuel Ironic driver by 6.1. Ironic already has working 
 IPMI stuff and they don't oppose ssh based power management any more. 
 Personally, I'd prefer to focus our efforts towards  Ironic stuff and 
 keeping in mind that Cobbler will be removed in the nearest future.

 I know that due to time constraints we would be better to go with Cobbler, 
 but I also think we should be closer to the community and switch to Ironic as 
 soon as possible.

 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-11-19 Thread Doug Hellmann

On Nov 19, 2014, at 9:51 AM, Sylvain Bauza sba...@redhat.com wrote:

 
 Le 19/11/2014 15:06, Doug Hellmann a écrit :
 On Nov 19, 2014, at 8:33 AM, Sylvain Bauza sba...@redhat.com wrote:
 
 Le 18/11/2014 20:05, Doug Hellmann a écrit :
 On Nov 17, 2014, at 7:18 PM, Kevin L. Mitchell 
 kevin.mitch...@rackspace.com wrote:
 
 On Mon, 2014-11-17 at 18:48 -0500, Doug Hellmann wrote:
 I’ve spent a bit of time thinking about the resource ownership issue.
 The challenge there is we don’t currently have any libraries that
 define tables in the schema of an application. I think that’s a good
 pattern to maintain, since it avoids introducing a lot of tricky
 issues like how to manage migrations for the library, how to ensure
 they are run by the application, etc. The fact that this common quota
 thing needs to store some data in a schema that it controls says to me
 that it is really an app and not a library. Making the quota manager
 an app solves the API definition issue, too, since we can describe a
 generic way to configure quotas and other applications can then use
 that API to define specific rules using the quota manager’s API.
 
 I don’t know if we need a new application or if it would make sense
 to, as with policy, add quota management features to keystone. A
 single well-defined app has some appeal, but there’s also a certain
 amount of extra ramp-up time needed to go that route that we wouldn’t
 need if we added the features directly to keystone.
 I'll also point out that it was largely because of the storage needs
 that I chose to propose Boson[1] as a separate app, rather than as a
 library.  Further, the dimensions over which quota-covered resources
 needed to be tracked seemed to me to be complicated enough that it would
 be better to define a new app and make it support that one domain well,
 which is why I didn't propose it as something to add to Keystone.
 Consider: nova has quotas that are applied by user, other quotas that
 are applied by tenant, and even some quotas on what could be considered
 sub-resources—a limit on the number of security group rules per security
 group, for instance.
 
 My current feeling is that, if we can figure out a way to make the quota
 problem into an acceptable library, that will work; it would probably
 have to maintain its own database separate from the client app and have
 features for automatically managing the schema, since we couldn't
 necessarily rely on the client app to invoke the proper juju there.  If,
 on the other hand, that ends up failing, then the best route is probably
 to begin by developing a separate app, like Boson, as a PoC; then, after
 we have some idea of just how difficult it is to actually solve the
 problem, we can evaluate whether it makes sense to actually fold it into
 a service like Keystone, or whether it should stand on its own.
 
 (Personally, I think Boson should be created and should stand on its
 own, but I also envision using it for purposes outside of OpenStack…)
 Thanks for mentioning Boson again. I’m embarrassed that I completely 
 forgot about the fact that you mentioned this at the summit.
 
 I’ll have to look at the proposal more closely before I comment in any 
 detail, but I take it as a good sign that we’re coming back around to the 
 idea of solving this with an app instead of a library.
 I assume I'm really late in the thread so I can just sit and give +1 to 
 this direction : IMHO, quotas need to managed thanks to a CRUD interface 
 which implies to get an app, as it sounds unreasonable to extend each 
 consumer app API.
 
 That said, back to Blazar, I just would like to emphasize that Blazar is 
 not trying to address the quota enforcement level, but rather provide a 
 centralized endpoint for managing reservations.
 Consequently, Blazar can also be considered as a consumer of this quota 
 system, whatever it's in a library or on a separate REST API.
 
 Last thing, I don't think that a quota application necessarly means that 
 quotas enforcement should be managed thanks to external calls to this app. 
 I can rather see an external system able to set for each project a local 
 view of what should be enforced locally. If operators don't want to deploy 
 that quota management project, it's up to them to address the hetergenous 
 setups for each project.
 I’m not sure what this means. You want the new service to be optional? How 
 would apps written against the service find and manage quota data if the 
 service isn’t there?
 
 My bad. Let me rephrase it. I'm seeing this service as providing added value 
 for managing quotas by ensuring consistency across all projects. But as I 
 said, I'm also thinking that the quota enforcement has still to be done at 
 the customer project level.

Oh, yes, that is true. I envision the API for the new service having a call 
that means “try to consume X units of a given quota” and that it would return 
information about whether that can be done. The apps would have to define what 

Re: [openstack-dev] WSME 0.6.2 released

2014-11-19 Thread Doug Hellmann
Version 0.6.2 was incorrectly configured to build universal wheels, leading to 
installation errors under Python 3 because of a dependency on ipaddr that only 
works on Python 2 (there is a version of the module already in the standard 
library for Python 3).

To fix this, fungi removed the bad wheel from PyPI and our mirror. I just 
tagged version 0.6.3 with the wheel build settings changed so that the wheels 
are only built for Python 2. This release was just uploaded to PyPI and should 
hit the mirror fairly soon.

Sorry for the issues,
Doug

On Nov 18, 2014, at 10:01 AM, Doug Hellmann d...@doughellmann.com wrote:

 The WSME development team has released version 0.6.2, which includes several 
 bug fixes.
 
 $ git log --oneline --no-merges 0.6.1..0.6.2
 2bb9362 Fix passing Dict/Array based UserType as params
 ea9f71d Document next version changes
 4e68f96 Allow non-auto-registered complex type
 292c556 Make the flask adapter working with flask.ext.restful
 c833702 Avoid Sphinx 1.3x in the tests
 6cb0180 Doc: status= - status_code=
 4441ca7 Minor documentation edits
 2c29787 Fix tox configuration.
 26a6acd Add support for manually specifying supported content types in 
 @wsmeexpose.
 7cee58b Fix broken sphinx tests.
 baa816c fix errors/warnings in tests
 2e1863d Use APIPATH_MAXLEN from the right module
 
 Please report issues through launchpad https://launchpad.net/wsme or the 
 #wsme channel on IRC.
 
 Doug
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Error message when Neutron network is running out of IP addresses

2014-11-19 Thread Edgar Magana
Nova People,

In Havana the behavior of this issue was different: basically, the VM was 
successfully created, and after trying multiple times to get an IP its state was 
changed to ERROR.
This behavior is different in Juno (I am currently testing Icehouse), but I am not 
able to find the bug fix.
As an operator this is really important; can anyone help me find the fix 
for the behavior change I just described?

Thanks,

Edgar

From: Vishvananda Ishaya vishvana...@gmail.commailto:vishvana...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, November 18, 2014 at 3:27 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][Neutron] Error message when Neutron network 
is running out of IP addresses

It looks like this has not been reported so a bug would be great. It looks like 
it might be as easy as adding the NoMoreFixedIps exception to the list where 
FixedIpLimitExceeded is caught in nova/network/manager.py

Vish

On Nov 18, 2014, at 8:13 AM, Edgar Magana 
edgar.mag...@workday.commailto:edgar.mag...@workday.com wrote:

Hello Community,

When a network subnet runs out of IP addresses a request to create a VM on that 
network fails with the Error message: No valid host was found. There are not 
enough hosts available.
In the nova logs the error message is: NoMoreFixedIps: No fixed IP addresses 
available for network:

Obviously, this is not the desirable behavior, is there any work in progress to 
change it or I should open a bug to properly propagate the right error message.

Thanks,

Edgar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally]Rally created tenant and users

2014-11-19 Thread Ajay Kalambur (akalambu)
Hi
Is there a way to specify that the Rally-created tenants and users are created 
with admin privileges?
Currently they are created using the member role, and hence some admin operations are not 
allowed.
I want to specify that the created accounts have admin access.
Ajay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.db][nova] NovaObject.save() needs its own DB transaction

2014-11-19 Thread Matthew Booth
We currently have a pattern in Nova where all database code lives in
db/sqla/api.py[1]. Database transactions are only ever created or used
in this module. This was an explicit design decision:
https://blueprints.launchpad.net/nova/+spec/db-session-cleanup .

However, it presents a problem when we consider NovaObjects, and
dependencies between them. For example, take Instance.save(). An
Instance has relationships with several other object types, one of which
is InstanceInfoCache. Consider the following code, which is amongst what
happens in spawn():

instance = Instance.get_by_uuid(uuid)
instance.vm_state = vm_states.ACTIVE
instance.info_cache.network_info = new_nw_info
instance.save()

instance.save() does (simplified):
  self.info_cache.save()
  self._db_save()

Both of these saves happen in separate db transactions. This has at
least 2 undesirable effects:

1. A failure can result in an inconsistent database. i.e. info_cache
having been persisted, but instance.vm_state not having been persisted.

2. Even in the absence of a failure, an external reader can see the new
info_cache but the old instance.

This is one example, but there are lots. We might convince ourselves
that the impact of this particular case is limited, but there will be
others where it isn't. Confidently assuring ourselves of a limited
impact also requires a large amount of context which not many
maintainers will have. New features continue to add to the problem,
including numa topology and pci requests.

I don't think we can reasonably remove the cascading save() above due to
the deliberate design of objects. Objects don't correspond directly to
their datamodels, so save() does more work than just calling out to the
DB. We need a way to allow cascading object saves to happen within a
single DB transaction. This will mean:

1. A change will be persisted either entirely or not at all in the event
of a failure.

2. A reader will see either the whole change or none of it.

We are not talking about crossing an RPC boundary. The single database
transaction only makes sense within the context of a single RPC call.
This will always be the case when NovaObject.save() cascades to other
object saves.

Note that we also have a separate problem, which is that the DB api's
internal use of transactions is wildly inconsistent. A single db api
call can result in multiple concurrent db transactions from the same
thread, and all the deadlocks that implies. This needs to be fixed, but
it doesn't require changing our current assumption that DB transactions
live only within the DB api.

Note that there is this recently approved oslo.db spec to make
transactions more manageable:

https://review.openstack.org/#/c/125181/11/specs/kilo/make-enginefacade-a-facade.rst,cm

Again, while this will be a significant benefit to the DB api, it will
not solve the problem of cascading object saves without allowing
transaction management at the level of NovaObject.save(): we need to
allow something to call a db api with an existing session, and we need
to allow something to pass an existing db transaction to NovaObject.save().

An obvious precursor to that is removing N309 from hacking, which
specifically tests for db apis which accept a session argument. We then
need to consider how NovaObject.save() should manage and propagate db
transactions.

I think the following pattern would solve it:

@remotable
def save():
session = insert magic here
try:
r = self._save(session)
session.commit() (or reader/writer magic from oslo.db)
return r
except Exception:
session.rollback() (or reader/writer magic from oslo.db)
raise

@definitelynotremotable
def _save(session):
previous contents of save() move here
session is explicitly passed to db api calls
cascading saves call object._save(session)

Whether we wait for the oslo.db updates or not, we need something like
the above. We could implement this today by exposing
db.sqla.api.get_session().
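
As a concrete, purely illustrative rendering of that pattern for the Instance
example above -- note that a session-accepting instance_update() does not exist
today, which is exactly the N309 change being proposed -- it could look like:

from nova.db.sqlalchemy import api as db_api
from nova.objects import base

class Instance(base.NovaObject):

    @base.remotable
    def save(self, context):
        session = db_api.get_session()
        with session.begin():
            # Everything below shares one DB transaction: either the whole
            # cascading save is committed, or none of it is.
            self._save(context, session)

    def _save(self, context, session):
        if 'info_cache' in self.obj_what_changed():
            # Cascading saves receive the same session/transaction.
            self.info_cache._save(context, session)
        # Hypothetical session-accepting db api call:
        db_api.instance_update(context, self.uuid,
                               self.obj_get_changes(), session=session)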

Thoughts?

Matt

[1] At a slight tangent, this looks like an artifact of some premature
generalisation a few years ago. It seems unlikely that anybody is going
to rewrite the db api using an ORM other than sqlalchemy, so we should
probably ditch it and promote it to db/api.py.
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-19 Thread Fox, Kevin M
Perhaps they are there to support older browsers?

Thanks,
Kevin

From: Matthias Runge [mru...@redhat.com]
Sent: Wednesday, November 19, 2014 12:27 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Horizon] the future of angularjs development in 
Horizon

On 18/11/14 14:48, Thomas Goirand wrote:


 And then, does selenium continues to work for testing Horizon? If so,
 then the solution could be to send the .dll and .xpi files in non-free,
 and remove them from Selenium in main.

Yes, it still works; that leaves the question, why they are included in
the tarball at all.

In Fedora, we do not distribute .dll or selenium xpi files with selenium
at all.

Matthias


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Power management in Cobbler

2014-11-19 Thread Fox, Kevin M
Would net booting a minimal discovery image work? You usually can dump ipmi 
network information from the host.
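
Something like this from inside the discovery image, for instance (assumes the
in-band IPMI driver is loaded and the BMC uses LAN channel 1):

import subprocess

def local_ipmi_address(channel='1'):
    # Reads the BMC's LAN configuration through the local/in-band interface.
    out = subprocess.check_output(['ipmitool', 'lan', 'print', channel])
    for line in out.splitlines():
        if line.startswith('IP Address') and 'Source' not in line:
            return line.split(':', 1)[1].strip()

print(local_ipmi_address())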

Thanks,
Kevin

From: Matthew Mosesohn [mmoses...@mirantis.com]
Sent: Wednesday, November 19, 2014 7:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Bogdan Dobrelya
Subject: Re: [openstack-dev] [Fuel] Power management in Cobbler

Tomasz, Vladimir, others,

The way I see it is we need a way to discover the corresponding IPMI
address for a given node for out-of-band power management. The
ultimate ipmitool command is going to be exactly the same whether it
comes from Cobbler or Ironic, and all we need to do is feed
information to the appropriate utility when it comes to power
management. If it's the same command, it doesn't matter who does it.
Ironic of course is a better option, but I'm not sure where we are
with discovering ipmi IP addresses or prompting admins to enter this
data for every node. Without this step, neither Cobbler nor Ironic is
capable of handling this task.

Best Regards,
Matthew Mosesohn

On Wed, Nov 19, 2014 at 7:38 PM, Tomasz Napierala
tnapier...@mirantis.com wrote:

 On 19 Nov 2014, at 16:10, Vladimir Kozhukalov vkozhuka...@mirantis.com 
 wrote:

 I am absolutely -1 for using Cobbler for that. Lastly, Ironic guys became 
 much more open for adopting new features (at least if they are implemented 
 in terms of Ironic drivers). Currently, it looks like we are  probably able 
 to deliver zero step Fuel Ironic driver by 6.1. Ironic already has working 
 IPMI stuff and they don't oppose ssh based power management any more. 
 Personally, I'd prefer to focus our efforts towards  Ironic stuff and 
 keeping in mind that Cobbler will be removed in the nearest future.

 I know that due to time constraints we would be better off going with Cobbler, 
 but I also think we should stay closer to the community and switch to Ironic as 
 soon as possible.

 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-19 Thread Kyle Mestery
On Tue, Nov 18, 2014 at 5:36 PM, Armando M. arma...@gmail.com wrote:
 Mark, Kyle,

 What is the strategy for tracking the progress and all the details about
 this initiative? Blueprint spec, wiki page, or something else?

We're in the process of writing a spec for this now, but we first
wanted community feedback. Also, it's on the TC agenda for next week I
believe, so once we get signoff from the TC, we'll propose the spec.

Thanks,
Kyle

 One thing I personally found useful about the spec approach adopted in [1],
 was that we could quickly and effectively incorporate community feedback;
 having said that I am not sure that the same approach makes sense here,
 hence the question.

 Also, what happens for experimental efforts that are neither L2-3 nor L4-7
 (e.g. TaaS or NFV related ones?), but they may still benefit from this
 decomposition (as it promotes better separation of responsibilities)? Where
 would they live? I am not sure we made any particular progress of the
 incubator project idea that was floated a while back.

 Cheers,
 Armando

 [1] https://review.openstack.org/#/c/134680/

 On 18 November 2014 15:32, Doug Wiegley do...@a10networks.com wrote:

 Hi,

  so the specs repository would continue to be shared during the Kilo
  cycle.

 One of the reasons to split is that these two teams have different
 priorities and velocities.  Wouldn’t that be easier to track/manage as
 separate launchpad projects and specs repos, irrespective of who is
 approving them?

 Thanks,
 doug



 On Nov 18, 2014, at 10:31 PM, Mark McClain m...@mcclain.xyz wrote:

 All-

 Over the last several months, the members of the Networking Program have
 been discussing ways to improve the management of our program.  When the
 Quantum project was initially launched, we envisioned a combined service
 that included all things network related.  This vision served us well in the
 early days as the team mostly focused on building out layers 2 and 3;
 however, we’ve run into growth challenges as the project started building
 out layers 4 through 7.  Initially, we thought that development would float
 across all layers of the networking stack, but the reality is that the
 development concentrates around either layer 2 and 3 or layers 4 through 7.
 In the last few cycles, we’ve also discovered that these concentrations have
 different velocities and a single core team forces one to match the other to
 the detriment of the one forced to slow down.

 Going forward we want to divide the Neutron repository into two separate
 repositories led by a common Networking PTL.  The current mission of the
 program will remain unchanged [1].  The split would be as follows:

 Neutron (Layer 2 and 3)
 - Provides REST service and technology agnostic abstractions for layer 2
 and layer 3 services.

 Neutron Advanced Services Library (Layers 4 through 7)
 - A python library which is co-released with Neutron
 - The advanced service library provides controllers that can be configured
 to manage the abstractions for layer 4 through 7 services.

 Mechanics of the split:
 - Both repositories are members of the same program, so the specs
 repository would continue to be shared during the Kilo cycle.  The PTL and
 the drivers team will retain approval responsibilities they now share.
 - The split would occur around Kilo-1 (subject to coordination of the
 Infra and Networking teams). The timing is designed to enable the proposed
 REST changes to land around the time of the December development sprint.
 - The core team for each repository will be determined and proposed by
 Kyle Mestery for approval by the current core team.
 - The Neutron Server and the Neutron Adv Services Library would be
 co-gated to ensure that incompatibilities are not introduced.
 - The Advanced Service Library would be an optional dependency of Neutron,
 so integrated cross-project checks would not be required to enable it during
 testing.
 - The split should not adversely impact operators and the Networking
 program should maintain standard OpenStack compatibility and deprecation
 cycles.

 This proposal to divide into two repositories achieved a strong consensus
 at the recent Paris Design Summit and it does not conflict with the current
 governance model or any proposals circulating as part of the ‘Big Tent’
 discussion.

 Kyle and mark

 [1]
 https://git.openstack.org/cgit/openstack/governance/plain/reference/programs.yaml
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-19 Thread Kyle Mestery
On Tue, Nov 18, 2014 at 5:32 PM, Doug Wiegley do...@a10networks.com wrote:
 Hi,

 so the specs repository would continue to be shared during the Kilo cycle.

 One of the reasons to split is that these two teams have different
 priorities and velocities.  Wouldn’t that be easier to track/manage as
 separate launchpad projects and specs repos, irrespective of who is
 approving them?

My thinking here is that the specs repo is shared (at least initially)
because the projects are under one umbrella, and we want them to work
closely together initially. This keeps everyone in the loop. Once
things mature, we can look at reevaluating this. Does that make sense?

Thanks,
Kyle

 Thanks,
 doug



 On Nov 18, 2014, at 10:31 PM, Mark McClain m...@mcclain.xyz wrote:

 All-

 Over the last several months, the members of the Networking Program have
 been discussing ways to improve the management of our program.  When the
 Quantum project was initially launched, we envisioned a combined service
 that included all things network related.  This vision served us well in the
 early days as the team mostly focused on building out layers 2 and 3;
 however, we’ve run into growth challenges as the project started building
 out layers 4 through 7.  Initially, we thought that development would float
 across all layers of the networking stack, but the reality is that the
 development concentrates around either layer 2 and 3 or layers 4 through 7.
 In the last few cycles, we’ve also discovered that these concentrations have
 different velocities and a single core team forces one to match the other to
 the detriment of the one forced to slow down.

 Going forward we want to divide the Neutron repository into two separate
 repositories led by a common Networking PTL.  The current mission of the
 program will remain unchanged [1].  The split would be as follows:

 Neutron (Layer 2 and 3)
 - Provides REST service and technology agnostic abstractions for layer 2 and
 layer 3 services.

 Neutron Advanced Services Library (Layers 4 through 7)
 - A python library which is co-released with Neutron
 - The advanced service library provides controllers that can be configured to
 manage the abstractions for layer 4 through 7 services.

 Mechanics of the split:
 - Both repositories are members of the same program, so the specs repository
 would continue to be shared during the Kilo cycle.  The PTL and the drivers
 team will retain approval responsibilities they now share.
 - The split would occur around Kilo-1 (subject to coordination of the Infra
 and Networking teams). The timing is designed to enable the proposed REST
 changes to land around the time of the December development sprint.
 - The core team for each repository will be determined and proposed by Kyle
 Mestery for approval by the current core team.
 - The Neutron Server and the Neutron Adv Services Library would be co-gated
 to ensure that incompatibilities are not introduced.
 - The Advanced Service Library would be an optional dependency of Neutron, so
 integrated cross-project checks would not be required to enable it during
 testing.
 - The split should not adversely impact operators and the Networking program
 should maintain standard OpenStack compatibility and deprecation cycles.

 This proposal to divide into two repositories achieved a strong consensus at
 the recent Paris Design Summit and it does not conflict with the current
 governance model or any proposals circulating as part of the ‘Big Tent’
 discussion.

 Kyle and mark

 [1]
 https://git.openstack.org/cgit/openstack/governance/plain/reference/programs.yaml
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-19 Thread Ivar Lazzaro
While I agree that a unified endpoint could be a good solution for now, I
think that the easiest way of doing this would be by implementing it as an
external Neutron service.

Using python entry_points, the advanced service extensions can be loaded in
Neutron just like we do today (using neutron.conf).

We will basically have a new project for which Neutron will be a dependency
(not the other way around!) so that any module of Neutron can be
imported/used just like the new code was living within Neutron itself.

As far as UTs are concerned, Neutron will also be in the test-requirements
for the new project, which means that any existing UT framework in Neutron
today can be easily reused by the new services.

This is compliant with the requirement that Neutron stays the only
endpoint, giving the ability to the user to load the new services when she
wants by configuring Neutron alone, while separating the concerns more
easily and clearly.
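
As a rough sketch of the mechanism (the entry point group name below is an
assumption made for the example; the real loading would go through
neutron.manager and the service_plugins option in neutron.conf):

# Illustrative only: resolving an out-of-tree service plugin through a
# Python entry point.  The group name 'neutron.service_plugins' is an
# assumption for this example, not necessarily what Neutron ships today.
import pkg_resources


def load_service_plugin(name, group='neutron.service_plugins'):
    for ep in pkg_resources.iter_entry_points(group=group, name=name):
        plugin_cls = ep.load()   # imports the external package here;
                                 # Neutron is just one of its dependencies
        return plugin_cls()
    raise RuntimeError('service plugin %r not found in group %r'
                       % (name, group))

The external package would advertise its plugin class under that group in its
own packaging metadata, so enabling it stays a pure configuration change on the
Neutron side.
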
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Power management in Cobbler

2014-11-19 Thread Tomasz Napierala

 On 19 Nov 2014, at 17:56, Fox, Kevin M kevin@pnnl.gov wrote:
 
 Would net booting a minimal discovery image work? You usually can dump ipmi 
 network information from the host.
 

To boot from the minimal ISO (which is what we do now) you still need to tell the 
host to do it. This is where IPMI discovery is needed I guess.

Regards,
-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db][nova] NovaObject.save() needs its own DB transaction

2014-11-19 Thread Boris Pavlovic
Matthew,


LOL ORM on top of another ORM 

https://img.neoseeker.com/screenshots/TW92aWVzL0RyYW1h/inception_image33.png



Best regards,
Boris Pavlovic

On Wed, Nov 19, 2014 at 8:46 PM, Matthew Booth mbo...@redhat.com wrote:

 We currently have a pattern in Nova where all database code lives in
 db/sqla/api.py[1]. Database transactions are only ever created or used
 in this module. This was an explicit design decision:
 https://blueprints.launchpad.net/nova/+spec/db-session-cleanup .

 However, it presents a problem when we consider NovaObjects, and
 dependencies between them. For example, take Instance.save(). An
 Instance has relationships with several other object types, one of which
 is InstanceInfoCache. Consider the following code, which is amongst what
 happens in spawn():

 instance = Instance.get_by_uuid(uuid)
 instance.vm_state = vm_states.ACTIVE
 instance.info_cache.network_info = new_nw_info
 instance.save()

 instance.save() does (simplified):
   self.info_cache.save()
   self._db_save()

 Both of these saves happen in separate db transactions. This has at
 least 2 undesirable effects:

 1. A failure can result in an inconsistent database. i.e. info_cache
 having been persisted, but instance.vm_state not having been persisted.

 2. Even in the absence of a failure, an external reader can see the new
 info_cache but the old instance.

 This is one example, but there are lots. We might convince ourselves
 that the impact of this particular case is limited, but there will be
 others where it isn't. Confidently assuring ourselves of a limited
 impact also requires a large amount of context which not many
 maintainers will have. New features continue to add to the problem,
 including numa topology and pci requests.

 I don't think we can reasonably remove the cascading save() above due to
 the deliberate design of objects. Objects don't correspond directly to
 their datamodels, so save() does more work than just calling out to the
 DB. We need a way to allow cascading object saves to happen within a
 single DB transaction. This will mean:

 1. A change will be persisted either entirely or not at all in the event
 of a failure.

 2. A reader will see either the whole change or none of it.

 We are not talking about crossing an RPC boundary. The single database
 transaction only makes sense within the context of a single RPC call.
 This will always be the case when NovaObject.save() cascades to other
 object saves.

 Note that we also have a separate problem, which is that the DB api's
 internal use of transactions is wildly inconsistent. A single db api
 call can result in multiple concurrent db transactions from the same
 thread, and all the deadlocks that implies. This needs to be fixed, but
 it doesn't require changing our current assumption that DB transactions
 live only within the DB api.

 Note that there is this recently approved oslo.db spec to make
 transactions more manageable:


 https://review.openstack.org/#/c/125181/11/specs/kilo/make-enginefacade-a-facade.rst,cm

 Again, while this will be a significant benefit to the DB api, it will
 not solve the problem of cascading object saves without allowing
 transaction management at the level of NovaObject.save(): we need to
 allow something to call a db api with an existing session, and we need
 to allow something to pass an existing db transaction to NovaObject.save().

 An obvious precursor to that is removing N309 from hacking, which
 specifically tests for db apis which accept a session argument. We then
 need to consider how NovaObject.save() should manage and propagate db
 transactions.

 I think the following pattern would solve it:

 @remotable
 def save():
 session = insert magic here
 try:
 r = self._save(session)
 session.commit() (or reader/writer magic from oslo.db)
 return r
 except Exception:
 session.rollback() (or reader/writer magic from oslo.db)
 raise

 @definitelynotremotable
 def _save(session):
 previous contents of save() move here
 session is explicitly passed to db api calls
 cascading saves call object._save(session)

 Whether we wait for the oslo.db updates or not, we need something like
 the above. We could implement this today by exposing
 db.sqla.api.get_session().

 Thoughts?

 Matt

 [1] At a slight tangent, this looks like an artifact of some premature
 generalisation a few years ago. It seems unlikely that anybody is going
 to rewrite the db api using an ORM other than sqlalchemy, so we should
 probably ditch it and promote it to db/api.py.
 --
 Matthew Booth
 Red Hat Engineering, Virtualisation Team

 Phone: +442070094448 (UK)
 GPG ID:  D33C3490
 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___

Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Mike Bayer

 On Nov 18, 2014, at 1:38 PM, Eugene Nikanorov enikano...@mirantis.com wrote:
 
 Hi neutron folks,
 
 There is an ongoing effort to refactor some neutron DB logic to be compatible 
 with galera/mysql which doesn't support locking (with_lockmode('update')).
 
 Some code paths that used locking in the past were rewritten to retry the 
 operation if they detect that an object was modified concurrently.
 The problem here is that all DB operations (CRUD) are performed in the scope 
 of some transaction that makes complex operations to be executed in atomic 
 manner.
 For mysql the default transaction isolation level is 'REPEATABLE READ' which 
 means that once the code issue a query within a transaction, this query will 
 return the same result while in this transaction (e.g. the snapshot is taken 
 by the DB during the first query and then reused for the same query).
 In other words, the retry logic like the following will not work:
 
 def allocate_obj():
 with session.begin(subtrans=True):
  for i in xrange(n_retries):
   obj = session.query(Model).filter_by(filters)
   count = session.query(Model).filter_by(id=obj.id).update({'allocated': True})
   if count:
       return obj
 
 since methods like allocate_obj() are usually called from within another 
 transaction, we can't simply put the transaction under the 'for' loop to fix 
 the issue.

has this been confirmed?  the point of systems like repeatable read is not just 
that you read the “old” data, it’s also to ensure that updates to that data 
either proceed or fail explicitly; locking is also used to prevent concurrent 
access that can’t be reconciled.  A lower isolation removes these advantages.  

I ran a simple test in two MySQL sessions as follows:

session 1:

mysql create table some_table(data integer) engine=innodb;
Query OK, 0 rows affected (0.01 sec)

mysql insert into some_table(data) values (1);
Query OK, 1 row affected (0.00 sec)

mysql begin;
Query OK, 0 rows affected (0.00 sec)

mysql select data from some_table;
+--+
| data |
+--+
|1 |
+--+
1 row in set (0.00 sec)


session 2:

mysql begin;
Query OK, 0 rows affected (0.00 sec)

mysql update some_table set data=2 where data=1;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

then back in session 1, I ran:

mysql update some_table set data=3 where data=1;

this query blocked;  that’s because session 2 has placed a write lock on the 
table.  this is the effect of repeatable read isolation.

while it blocked, I went to session 2 and committed the in-progress transaction:

mysql commit;
Query OK, 0 rows affected (0.00 sec)

then session 1 unblocked, and it reported, correctly, that zero rows were 
affected:

Query OK, 0 rows affected (7.29 sec)
Rows matched: 0  Changed: 0  Warnings: 0

the update had not taken place, as was stated by “Rows matched: 0”:

mysql select * from some_table;
+--+
| data |
+--+
|1 |
+--+
1 row in set (0.00 sec)

the code in question would do a retry at this point; it is checking the number 
of rows matched, and that number is accurate.
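
To illustrate, the rowcount-checking retry being discussed looks roughly like
this (Model and its 'allocated' column are stand-ins rather than real Neutron
models; each attempt commits, so the next attempt starts a new transaction and
gets a fresh snapshot):

# Illustrative compare-and-swap retry; Model/'allocated' are stand-ins.
from sqlalchemy import create_engine, Column, Integer, Boolean
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Model(Base):
    __tablename__ = 'allocatable'
    id = Column(Integer, primary_key=True)
    allocated = Column(Boolean, default=False)


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)


def allocate_obj(n_retries=3):
    session = Session()
    for _ in range(n_retries):
        obj = (session.query(Model)
               .filter_by(allocated=False)
               .first())
        if obj is None:
            return None
        # compare-and-swap: only succeeds if nobody else got there first
        count = (session.query(Model)
                 .filter_by(id=obj.id, allocated=False)
                 .update({'allocated': True},
                         synchronize_session=False))
        # committing ends the transaction, so the next attempt sees a
        # fresh snapshot even under REPEATABLE READ
        session.commit()
        if count:
            return obj
    return None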

if our code did *not* block at the point of our UPDATE, then it would have 
proceeded, and the other transaction would have overwritten what we just did, 
when it committed.   I don’t know that read committed is necessarily any better 
here.

now perhaps, with Galera, none of this works correctly.  That would be a 
different issue in which case sure, we should use whatever isolation is 
recommended for Galera.  But I’d want to potentially peg it to the fact that 
Galera is in use, or not.

would love also to hear from Jay Pipes on this since he literally wrote the 
book on MySQL ! :)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Power management in Cobbler

2014-11-19 Thread Fox, Kevin M
I meant, either go into the BIOS and set the node to netboot, or hit F12 (or 
whatever) and netboot. The netbooted discovery image should be able to gather 
all the rest of the bits? The minimal ISO could also be just gPXE or something 
like that and would do the same as above. Then you don't have to update the ISO 
every time you enhance the discovery process.

Hmmm... Some bmc's do dhcp though out of the box. I guess if you watched for 
dhcp leases and then tried to contact them over ipmi with a few default 
username/passwords, you'd probably get a fair number of them without much 
effort, if they are preconfigured.

In our experience though, we usually get nodes in that we have to configure 
netboot in the bios, then the next easiest step is to install and then 
configure the bmc via the installed linux. You can manually set up the bmc 
username/password/ip/whatever but it's work. Most of the bmc's we've seen have 
ipmi over the network disabled by default. :/ So in that case, the former, 
netboot the box, load a discovery image, and have it configure the bmc all in 
one go would be nicer I think.
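
For what it's worth, the BMC-dumping step could be as small as something like
the sketch below on the discovery image (purely illustrative; it assumes
ipmitool and the ipmi kernel modules are present, and is not Fuel's actual
discovery agent):

# Illustrative only: read the BMC's LAN settings from inside a netbooted
# discovery image.  Assumes ipmitool and the ipmi_si/ipmi_devintf modules
# are available on the image.
import subprocess


def get_bmc_lan_info(channel=1):
    output = subprocess.check_output(
        ['ipmitool', 'lan', 'print', str(channel)],
        universal_newlines=True)
    info = {}
    for line in output.splitlines():
        if ':' not in line:
            continue
        key, _, value = line.partition(':')
        info[key.strip()] = value.strip()
    # typical keys include 'IP Address', 'Subnet Mask' and 'MAC Address'
    return info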

Thanks,
Kevin

From: Tomasz Napierala [tnapier...@mirantis.com]
Sent: Wednesday, November 19, 2014 9:28 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Bogdan Dobrelya
Subject: Re: [openstack-dev] [Fuel] Power management in Cobbler

 On 19 Nov 2014, at 17:56, Fox, Kevin M kevin@pnnl.gov wrote:

 Would net booting a minimal discovery image work? You usually can dump ipmi 
 network information from the host.


To boot from the minimal ISO (which is what we do now) you still need to tell the 
host to do it. This is where IPMI discovery is needed I guess.

Regards,
--
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db][nova] NovaObject.save() needs its own DB transaction

2014-11-19 Thread Mike Bayer

 On Nov 19, 2014, at 11:46 AM, Matthew Booth mbo...@redhat.com wrote:
 
 We currently have a pattern in Nova where all database code lives in
 db/sqla/api.py[1]. Database transactions are only ever created or used
 in this module. This was an explicit design decision:
 https://blueprints.launchpad.net/nova/+spec/db-session-cleanup .
 
 However, it presents a problem when we consider NovaObjects, and
 dependencies between them. For example, take Instance.save(). An
 Instance has relationships with several other object types, one of which
 is InstanceInfoCache. Consider the following code, which is amongst what
 happens in spawn():
 
 instance = Instance.get_by_uuid(uuid)
 instance.vm_state = vm_states.ACTIVE
 instance.info_cache.network_info = new_nw_info
 instance.save()
 
 instance.save() does (simplified):
  self.info_cache.save()
  self._db_save()
 
 Both of these saves happen in separate db transactions.
 

 I don't think we can reasonably remove the cascading save() above due to
 the deliberate design of objects. Objects don't correspond directly to
 their datamodels, so save() does more work than just calling out to the
 DB. We need a way to allow cascading object saves to happen within a
 single DB transaction.

So this is actually part of what https://review.openstack.org/#/c/125181/ aims 
to solve. If it isn’t going to achieve this (and I think I see what the 
problem is), we need to fix it.

 
 Note that we also have a separate problem, which is that the DB api's
 internal use of transactions is wildly inconsistent. A single db api
 call can result in multiple concurrent db transactions from the same
 thread, and all the deadlocks that implies. This needs to be fixed, but
 it doesn't require changing our current assumption that DB transactions
 live only within the DB api.
 
 Note that there is this recently approved oslo.db spec to make
 transactions more manageable:
 
 https://review.openstack.org/#/c/125181/11/specs/kilo/make-enginefacade-a-facade.rst,cm
 
 Again, while this will be a significant benefit to the DB api, it will
 not solve the problem of cascading object saves without allowing
 transaction management at the level of NovaObject.save(): we need to
 allow something to call a db api with an existing session, and we need
 to allow something to pass an existing db transaction to NovaObject.save().

OK so here is why EngineFacade as described so far doesn’t work, because if it 
is like this:

def some_api_operation ->

    novaobject1.save() ->

       @writer
       def do_some_db_thing()

    novaobject2.save() ->

       @writer
       def do_some_other_db_thing()

then yes, those two @writer calls aren’t coordinated.   So yes, I think 
something that ultimately communicates the same meaning as @writer needs to be 
at the top:

@something_that_invokes_writer_without_exposing_db_stuff
def some_api_operation ->

# … etc

If my decorator is not clear enough, let me clarify that a decorator that is 
present at the API/ nova objects layer will interact with the SQL layer through 
some form of dependency injection, and not any kind of explicit import; that 
is, when the SQL layer is invoked, it registers some kind of state onto the 
@something_that_invokes_writer_without_exposing_db_stuff system that causes its 
“cleanup”, in this case the commit(), to occur at the end of that topmost 
decorator.
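
To make the shape of that concrete, a very rough sketch (every name here is
hypothetical; the real EngineFacade work in oslo.db may end up looking quite
different):

# Hypothetical sketch of the dependency injection described above; none
# of these names are real oslo.db APIs.
import functools
import threading

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

_Session = sessionmaker(bind=create_engine('sqlite://'))  # stand-in engine
_context = threading.local()


def api_writer(func):
    """Top-level decorator: one transaction spans the whole API call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if getattr(_context, 'session', None) is not None:
            return func(*args, **kwargs)      # join the enclosing txn
        session = _Session()
        _context.session = session
        try:
            result = func(*args, **kwargs)
            session.commit()                  # the "cleanup" happens here
            return result
        except Exception:
            session.rollback()
            raise
        finally:
            _context.session = None
            session.close()
    return wrapper


def current_session():
    """Called by the SQL layer; it never opens its own transaction."""
    session = getattr(_context, 'session', None)
    if session is None:
        raise RuntimeError('no enclosing writer context')
    return session

With something like that, both novaobject1.save() and novaobject2.save() write
through current_session(), and the commit happens exactly once at the end of
the decorated API operation.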


 I think the following pattern would solve it:
 
 @remotable
 def save():
session = insert magic here
try:
r = self._save(session)
session.commit() (or reader/writer magic from oslo.db)
return r
except Exception:
session.rollback() (or reader/writer magic from oslo.db)
raise
 
 @definitelynotremotable
 def _save(session):
previous contents of save() move here
session is explicitly passed to db api calls
cascading saves call object._save(session)

so again with EngineFacade rewrite, the @definitelynotremotable system should 
also interact such that if @writer is invoked internally, an error is raised, 
just the same as when @writer is invoked within @reader.


 
 Whether we wait for the oslo.db updates or not, we need something like
 the above. We could implement this today by exposing
 db.sqla.api.get_session().

EngineFacade is hoped to be ready for Kilo and obviously Nova is very much 
hoped to be my first customer for integration. It would be great if folks 
want to step up and help implement it, or at least take hold of a prototype I 
can build relatively quickly and integration test it and/or work on a real nova 
integration.

 
 Thoughts?
 
 Matt
 
 [1] At a slight tangent, this looks like an artifact of some premature
 generalisation a few years ago. It seems unlikely that anybody is going
 to rewrite the db api using an ORM other than sqlalchemy, so we should
 probably ditch it and promote it to db/api.py.

funny you should mention that as this has already happened and it is 

Re: [openstack-dev] [oslo.db][nova] NovaObject.save() needs its own DB transaction

2014-11-19 Thread Mike Bayer

 On Nov 19, 2014, at 12:59 PM, Boris Pavlovic bpavlo...@mirantis.com wrote:
 
 Matthew, 
 
 
 LOL ORM on top of another ORM 
 
 https://img.neoseeker.com/screenshots/TW92aWVzL0RyYW1h/inception_image33.png

I know where you stand on this Boris, but I fail to see how this is a 
productive contribution to the discussion.  Leo Dicaprio isn’t going to solve 
our issue here and I look forward to iterating on what we have today.




 
 
 
 Best regards,
 Boris Pavlovic 
 
 On Wed, Nov 19, 2014 at 8:46 PM, Matthew Booth mbo...@redhat.com wrote:
 We currently have a pattern in Nova where all database code lives in
 db/sqla/api.py[1]. Database transactions are only ever created or used
 in this module. This was an explicit design decision:
 https://blueprints.launchpad.net/nova/+spec/db-session-cleanup .
 
 However, it presents a problem when we consider NovaObjects, and
 dependencies between them. For example, take Instance.save(). An
 Instance has relationships with several other object types, one of which
 is InstanceInfoCache. Consider the following code, which is amongst what
 happens in spawn():
 
 instance = Instance.get_by_uuid(uuid)
 instance.vm_state = vm_states.ACTIVE
 instance.info_cache.network_info = new_nw_info
 instance.save()
 
 instance.save() does (simplified):
   self.info_cache.save()
   self._db_save()
 
 Both of these saves happen in separate db transactions. This has at
 least 2 undesirable effects:
 
 1. A failure can result in an inconsistent database. i.e. info_cache
 having been persisted, but instance.vm_state not having been persisted.
 
 2. Even in the absence of a failure, an external reader can see the new
 info_cache but the old instance.
 
 This is one example, but there are lots. We might convince ourselves
 that the impact of this particular case is limited, but there will be
 others where it isn't. Confidently assuring ourselves of a limited
 impact also requires a large amount of context which not many
 maintainers will have. New features continue to add to the problem,
 including numa topology and pci requests.
 
 I don't think we can reasonably remove the cascading save() above due to
 the deliberate design of objects. Objects don't correspond directly to
 their datamodels, so save() does more work than just calling out to the
 DB. We need a way to allow cascading object saves to happen within a
 single DB transaction. This will mean:
 
 1. A change will be persisted either entirely or not at all in the event
 of a failure.
 
 2. A reader will see either the whole change or none of it.
 
 We are not talking about crossing an RPC boundary. The single database
 transaction only makes sense within the context of a single RPC call.
 This will always be the case when NovaObject.save() cascades to other
 object saves.
 
 Note that we also have a separate problem, which is that the DB api's
 internal use of transactions is wildly inconsistent. A single db api
 call can result in multiple concurrent db transactions from the same
 thread, and all the deadlocks that implies. This needs to be fixed, but
 it doesn't require changing our current assumption that DB transactions
 live only within the DB api.
 
 Note that there is this recently approved oslo.db spec to make
 transactions more manageable:
 
 https://review.openstack.org/#/c/125181/11/specs/kilo/make-enginefacade-a-facade.rst,cm
 
 Again, while this will be a significant benefit to the DB api, it will
 not solve the problem of cascading object saves without allowing
 transaction management at the level of NovaObject.save(): we need to
 allow something to call a db api with an existing session, and we need
 to allow something to pass an existing db transaction to NovaObject.save().
 
 An obvious precursor to that is removing N309 from hacking, which
 specifically tests for db apis which accept a session argument. We then
 need to consider how NovaObject.save() should manage and propagate db
 transactions.
 
 I think the following pattern would solve it:
 
 @remotable
 def save():
 session = insert magic here
 try:
 r = self._save(session)
 session.commit() (or reader/writer magic from oslo.db)
 return r
 except Exception:
 session.rollback() (or reader/writer magic from oslo.db)
 raise
 
 @definitelynotremotable
 def _save(session):
 previous contents of save() move here
 session is explicitly passed to db api calls
 cascading saves call object._save(session)
 
 Whether we wait for the oslo.db updates or not, we need something like
 the above. We could implement this today by exposing
 db.sqla.api.get_session().
 
 Thoughts?
 
 Matt
 
 [1] At a slight tangent, this looks like an artifact of some 

[openstack-dev] [QA] Meeting Thursday November 20th at 17:00 UTC

2014-11-19 Thread Matthew Treinish
Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, November 20th at 17:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

It's also worth noting that a few weeks ago we started having a regular
dedicated Devstack topic during the meetings. So if anyone is interested in
Devstack development please join the meetings to be a part of the discussion.

To help people figure out what time 17:00 UTC is in other timezones tomorrow's
meeting will be at:

12:00 EST
02:00 JST
03:30 ACDT
18:00 CET
11:00 CST
9:00 PST

-Matt Treinish


pgpHiRreaoc7D.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db][nova] NovaObject.save() needs its own DB transaction

2014-11-19 Thread Dan Smith
 However, it presents a problem when we consider NovaObjects, and
 dependencies between them.

I disagree with this assertion, because:

 For example, take Instance.save(). An
 Instance has relationships with several other object types, one of which
 is InstanceInfoCache. Consider the following code, which is amongst what
 happens in spawn():
 
 instance = Instance.get_by_uuid(uuid)
 instance.vm_state = vm_states.ACTIVE
 instance.info_cache.network_info = new_nw_info
 instance.save()
 
 instance.save() does (simplified):
   self.info_cache.save()
   self._db_save()
 
 Both of these saves happen in separate db transactions.

This has always been two DB calls, and for a while recently, it was two
RPCs, each of which did one call.

 This has at least 2 undesirable effects:
 
 1. A failure can result in an inconsistent database. i.e. info_cache
 having been persisted, but instance.vm_state not having been persisted.
 
 2. Even in the absence of a failure, an external reader can see the new
 info_cache but the old instance.

I think you might want to pick a different example. We update the
info_cache all the time asynchronously, due to time has passed and
other non-user-visible reasons.

 New features continue to add to the problem,
 including numa topology and pci requests.

NUMA and PCI information are now created atomically with the instance
(or at least, passed to SQLA in a way I expect does the insert as a
single transaction). We don't yet do that in save(), I think because we
didn't actually change this information after creation until recently.

Definitely agree that we should not save the PCI part without the base
instance part.

 I don't think we can reasonably remove the cascading save() above due to
 the deliberate design of objects. Objects don't correspond directly to
 their datamodels, so save() does more work than just calling out to the
 DB. We need a way to allow cascading object saves to happen within a
 single DB transaction. This will mean:
 
 1. A change will be persisted either entirely or not at all in the event
 of a failure.
 
 2. A reader will see either the whole change or none of it.

This is definitely what we should strive for in cases where the updates
are related, but as I said above, for things (like info cache) where it
doesn't matter, we should be fine.

 Note that there is this recently approved oslo.db spec to make
 transactions more manageable:
 
 https://review.openstack.org/#/c/125181/11/specs/kilo/make-enginefacade-a-facade.rst,cm
 
 Again, while this will be a significant benefit to the DB api, it will
 not solve the problem of cascading object saves without allowing
 transaction management at the level of NovaObject.save(): we need to
 allow something to call a db api with an existing session, and we need
 to allow something to pass an existing db transaction to NovaObject.save().

I don't agree that we need to be concerned about this at the
NovaObject.save() level. I do agree that Instance.save() needs to have a
relationship to its sub-objects that facilitates atomicity (where
appropriate), and that such a pattern can be used for other such
hierarchies.

 An obvious precursor to that is removing N309 from hacking, which
 specifically tests for db apis which accept a session argument. We then
 need to consider how NovaObject.save() should manage and propagate db
 transactions.

Right, so I believe that we had more consistent handling of transactions
in the past. We had a mechanism for passing around the session between
chained db/api methods to ensure they happened atomically. I think Boris
led the charge to eliminate that, culminating with the hacking rule you
mentioned.

Maybe getting back to the justification for removing that facility would
help us understand the challenges we face going forward?

 [1] At a slight tangent, this looks like an artifact of some premature
 generalisation a few years ago. It seems unlikely that anybody is going
 to rewrite the db api using an ORM other than sqlalchemy, so we should
 probably ditch it and promote it to db/api.py.

We've had a few people ask about it, in terms of rewriting some or all
of our DB API to talk to a totally non-SQL backend. Further, AFAIK, RAX
rewrites a few of the DB API calls to use raw SQL queries for
performance (or did, at one point).

I'm quite happy to have the implementation of Instance.save() make use
of primitives to ensure atomicity where appropriate. I don't think
that's something that needs or deserves generalization at this point,
and I'm not convinced it needs to be in the save method itself. Right
now we update several things atomically by passing something to db/api
that gets turned into properly-related SQLA objects. I think we could do
the same for any that we're currently cascading separately, even if the
db/api update method uses a transaction to ensure safety.
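
As a sketch of that last point (toy models, not Nova's real schema):

# Sketch only: a db/api-style update that persists the instance row and
# its info-cache row in one transaction.  The models are toy stand-ins.
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import sessionmaker, relationship
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Instance(Base):
    __tablename__ = 'instances'
    id = Column(Integer, primary_key=True)
    uuid = Column(String(36), unique=True)
    vm_state = Column(String(255))
    info_cache = relationship('InstanceInfoCache', uselist=False)


class InstanceInfoCache(Base):
    __tablename__ = 'instance_info_caches'
    id = Column(Integer, primary_key=True)
    instance_id = Column(Integer, ForeignKey('instances.id'))
    network_info = Column(String(2048))


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)


def instance_update_with_cache(uuid, vm_state, network_info):
    """Both rows change in the same transaction, or neither does."""
    session = Session()
    try:
        instance = session.query(Instance).filter_by(uuid=uuid).one()
        instance.vm_state = vm_state
        if instance.info_cache is None:
            instance.info_cache = InstanceInfoCache()
        instance.info_cache.network_info = network_info
        session.commit()
        return instance
    except Exception:
        session.rollback()
        raise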

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev 

Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Ryan Moats
I was waiting for this because I think I may have a slightly different (and
outside of the box) view on how to approach a solution to this.

Conceptually (at least in my mind) there isn't a whole lot of difference
between how the example below (i.e. updates from two concurrent threads) is
handled
and how/if neutron wants to support a multi-master database scenario (which
in turn lurks in the background when one starts thinking/talking about
multi-region support).

If neutron wants to eventually support multi-master database scenarios, I
see two ways to go about it:

1) Defer multi-master support to the database itself.
2) Take responsibility for managing the conflict resolution inherent in
multi-master scenarios itself.

The first approach is certainly simpler in the near term, but it has the
down side of restricting the choice of databases to those that have solved
multi-master and further, may lead to code bifurcation based on possibly
different solutions to the conflict resolution scenarios inherent in
multi-master.

The second approach is certainly more complex as neutron assumes more
responsibility for its own actions, but it has the advantage that (if done
right) would be transparent to the underlying databases (with all that
implies)

My reason for asking this question here is that if the community wants to
consider #2, then these problems are the place to start crafting that
solution - if we solve the conflicts inherent with the two concurrent
thread scenarios, then I think we will find that we've solved the
multi-master problem essentially for free.

Ryan Moats

Mike Bayer mba...@redhat.com wrote on 11/19/2014 12:05:35 PM:

 From: Mike Bayer mba...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 11/19/2014 12:05 PM
 Subject: Re: [openstack-dev] [Neutron] DB: transaction isolation and
 related questions

 On Nov 18, 2014, at 1:38 PM, Eugene Nikanorov enikano...@mirantis.com
wrote:

 Hi neutron folks,

 There is an ongoing effort to refactor some neutron DB logic to be
 compatible with galera/mysql which doesn't support locking
 (with_lockmode('update')).

 Some code paths that used locking in the past were rewritten to
 retry the operation if they detect that an object was modified
concurrently.
 The problem here is that all DB operations (CRUD) are performed in
 the scope of some transaction that makes complex operations to be
 executed in atomic manner.
 For mysql the default transaction isolation level is 'REPEATABLE
 READ' which means that once the code issue a query within a
 transaction, this query will return the same result while in this
 transaction (e.g. the snapshot is taken by the DB during the first
 query and then reused for the same query).
 In other words, the retry logic like the following will not work:

 def allocate_obj():
 with session.begin(subtrans=True):
  for i in xrange(n_retries):
   obj = session.query(Model).filter_by(filters)
   count = session.query(Model).filter_by(id=obj.id).update({'allocated': True})
   if count:
       return obj

 since methods like allocate_obj() are usually called from within
 another transaction, we can't simply put the transaction under the 'for'
 loop to fix the issue.

 has this been confirmed?  the point of systems like repeatable read
 is not just that you read the “old” data, it’s also to ensure that
 updates to that data either proceed or fail explicitly; locking is
 also used to prevent concurrent access that can’t be reconciled.  A
 lower isolation removes these advantages.

 I ran a simple test in two MySQL sessions as follows:

 session 1:

 mysql create table some_table(data integer) engine=innodb;
 Query OK, 0 rows affected (0.01 sec)

 mysql insert into some_table(data) values (1);
 Query OK, 1 row affected (0.00 sec)

 mysql begin;
 Query OK, 0 rows affected (0.00 sec)

 mysql select data from some_table;
 +--+
 | data |
 +--+
 |1 |
 +--+
 1 row in set (0.00 sec)

 session 2:

 mysql begin;
 Query OK, 0 rows affected (0.00 sec)

 mysql update some_table set data=2 where data=1;
 Query OK, 1 row affected (0.00 sec)
 Rows matched: 1  Changed: 1  Warnings: 0

 then back in session 1, I ran:

 mysql update some_table set data=3 where data=1;

 this query blocked;  that’s because session 2 has placed a write
 lock on the table.  this is the effect of repeatable read isolation.

 while it blocked, I went to session 2 and committed the in-progress
 transaction:

 mysql commit;
 Query OK, 0 rows affected (0.00 sec)

 then session 1 unblocked, and it reported, correctly, that zero rows
 were affected:

 Query OK, 0 rows affected (7.29 sec)
 Rows matched: 0  Changed: 0  Warnings: 0

 the update had not taken place, as was stated by “Rows matched: 0”:

 mysql select * from some_table;
 +--+
 | data |
 +--+
 |1 |
 +--+
 1 row in set (0.00 sec)

 the code in question would do a retry at 

Re: [openstack-dev] [glance] Parallels loopback disk format support

2014-11-19 Thread Maxim Nestratov

Hi Nikhil,

Thank you for your response and advice. I'm currently creating the spec 
and will publish a review for it shortly.


Best,
Maxim

19.11.2014 18:15, Nikhil Komawar wrote:

Hi Maxim,

Thanks for showing interest in this aspect. Like nova-specs, Glance also needs 
a spec to be created for discussion related to the blueprint.

Please try to create one here [1]. Additionally you may join us at the meeting 
[2] if you feel stuck or need clarifications.

[1] https://github.com/openstack/glance-specs
[2] https://wiki.openstack.org/wiki/Meetings#Glance_Team_meeting

Thanks,
-Nikhil


From: Maxim Nestratov [mnestra...@parallels.com]
Sent: Wednesday, November 19, 2014 8:27 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [glance] Parallels loopback disk format support

Greetings,

In scope of these changes [1], I would like to add a new image format
into glance. For this purpose there was created a blueprint [2] and
would really appreciate if someone from glance team could review this
proposal.

[1] https://review.openstack.org/#/c/111335/
[2] https://blueprints.launchpad.net/glance/+spec/pcs-support

Best,

Maxim Nestratov,
Lead Software Developer,
Parallels


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [policy] [congress] Protocol for Congress -- Enactor

2014-11-19 Thread Gregory Lebovitz
anyone read this? comments?

On Sat, Nov 1, 2014 at 11:13 AM, Gregory Lebovitz gregory.i...@gmail.com
wrote:

 Summary from IRC chat 10/14/2014 on weekly meeting [1] [2]

 Topic:  Declarative Language for Congress — Enactor/Enforcer

 Question: Shall we specify a declarative language for communicating policy
 configured in Congress to enactors / enforcement systems

 Hypothesis (derived at conclusion of discussion):
  - Specifying a declarative protocol and framework for describing policy,
 with extensible attribute/value fields described in a base ontology plus
 additional affinity ontologies, is what is needed sooner rather than later,
 so it can be achieved as an end-state before too many Enactors dive into
 one-offs.
  - We could achieve that specification once we know the right structure

 Discussion:

- Given the following framework:
- Elements:
  - Congress - The policy description point, a place where:
 - (a) policy inputs are collected
 - (b) collected policy inputs are integrated
 - (c) policy is defined
 - (d) declares policy intent to enforcing / enacting systems
 - (e) observes state of environment, noting policy violations
  - Feeders - provides policy inputs to Congress
  - Enactors / Enforcers - receives policy declarations from
  Congress and enacts / enforces the policy according to its 
 capabilities
 - E.g. Nova for VM placement, Neutron for interface
 connectivity, FWaaS for access control, etc.

 What will the protocol be for the Congress — Enactors / Enforcers?


 thinrichs: we've been assuming that Congress will leverage
 whatever the Enactors (policy engines) and Feeders (and more generally
 datacenter services) that exist are using. For basic datacenter services,
 we had planned on teaching Congress what their API is and what it does. So
 there's no new protocol there—we'd just use HTTP or whatever the service
 expects. For Enactors, there are 2 pieces: (1) what policy does Congress
 push and (2) what protocol does it use to do that? We don't know the answer
 to (1) yet.  (2) is less important, I think. For (2) we could use opflex,
 for example, or create a new one. (1) is hard because the Enactors likely
 have different languages that they understand. I’m not aware of anyone
 thinking about (2). I’m not thinking about (2) b/c I don't know the answer
 to (1). The *really* hard thing to understand IMO is how these Enactors
 should cooperate (in terms of the information they exchange and the
 functionality they provide).  The bits they use to wrap the messages they
 send while cooperating is a lower-level question.

 jasonsb  glebo: feel the need to clarify (2)

 glebo: if we come out strongly with a framework spec that identifies
 a protocol for (2), and make it clear that Congress participants, including
 several data center Feeders and Enactors, are in consensus, then the other
 Feeders  Enactors will line up, in order to be useful in the modern
 deployments. Either that, or they will remain isolated from the
 new environment, or their customers will have to create custom connectors
 to the new environment. It seems that we have 2 options. (a) Congress
 learns any language spoken by Feeders and Enactors, or (b) specifies a
 single protocol for Congress — Enactors policy declarations, including a
 highly adaptable public registry(ies) for defining the meaning of content
 blobs in those messages. For (a) Congress would get VERY bloated with an
 abstraction layer, modules, semantics and state for each different language
 it needed to speak. And there would be 10s of these languages. For (b),
 there would be one way to structure messages that were constructed of blobs
 in (e.g.) some sort of Type/Length/Value (TLV) method, where the Types and
 Values were specified in some Internet registry.

 jasonsb: Could we attack this from the opposite direction? E.g. if
 Congress wanted to provide an operational dashboard to show if things are
 in compliance, it would be better served by receiving the state and stats
 from the Enactors in a single protocol. Could a dashboard like this be a
 carrot to lure the various players into a single protocol for Congress —
 Enactor?

 glebo  jasonsb: If Congress has to give Enactors precise instructions on
 what to do, then Congress will bloat, having to have intelligence about
 each Enactor type, and hold its state and such. If Congress can deliver
 generalized policy declarations, and the Enactor is responsible for
 interpreting it, and applying it, and gathering and analyzing the state so
 that it knows how to react, then the intelligence and state that it is
 specialized in knowing will live in the Enactor. A smaller Congress is
 better, and this provides cleaner “layering” of the problem space overall.

 thinrichs: would love to see a single (2) language, but doesn’t see that
 as a practical solution in the short term, 

Re: [openstack-dev] [api] API-WG meeting (note time change this week)

2014-11-19 Thread Everett Toews
On Nov 19, 2014, at 4:56 AM, Christopher Yeoh cbky...@gmail.com wrote:

 Hi,
 
 We have moved to alternating times each week for the API WG meeting so
 people from other timezones can attend. Since this is an odd week 
 the meeting will be Thursday UTC 1600. Details here:
 
 https://wiki.openstack.org/wiki/Meetings/API-WG
 
 The google ical feed hasn't been updated yet, but thats not surprising
 since the wiki page was only updated a few hours ago.

I see on the Meetings [1] page the link to the Google Calendar iCal feed.

1. Do you know the link to the Google Calendar itself, not just the iCal feed?

2. Do you know if there is a way to subscribe to only the API WG meeting from 
that calendar?

Thanks,
Everett


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Cross-Project Liaison for the API Working Group

2014-11-19 Thread Everett Toews
On Nov 16, 2014, at 4:59 PM, Christopher Yeoh cbky...@gmail.com wrote:

 My 2c is we should say "The liaison should be the PTL or whomever they 
 delegate to be their representative" and not mention anything about the 
 person needing to be a core developer. It removes any ambiguity about who 
 ultimately decides who the liaison is (the PTL) without saying that they have 
 to do the work themselves.

Sure. Go ahead and change it to that.

Everett


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Summit summary

2014-11-19 Thread Everett Toews
Does anybody know what happened to the Etherpad? It’s completely blank now!!!

If you check the Timeslider, it appears that it only ever existed on Nov. 15. 
Bizarre.

Everett


On Nov 14, 2014, at 5:05 PM, Everett Toews everett.to...@rackspace.com wrote:

 Hi All,
 
 Here’s a summary of what happened at the Summit from the API Working Group 
 perspective.
 
 Etherpad: https://etherpad.openstack.org/p/kilo-crossproject-api-wg
 
 The 2 design summit sessions on Tuesday were very well attended, maybe 100ish 
 people I’m guessing. I got the impression there were developers from a 
 diverse set of projects just from the people who spoke up during the session. 
 We spent pretty much all of these 2 sessions discussing the working group 
 itself.
 
 Some action items of note:
 
 - Update the wiki page [1] with the decisions made during the discussion
 - Add an additional meeting time [2] to accommodate EU time
 - Email the WG about the Nova (and Neutron?) API microversions effort and how 
   it might be a strategy for moving forward with API changes
 
 Review the rest of the action items in the etherpad to get a better picture.
 
 The follow up session on Thursday (last slot of the day) was attended by 
 about half the people of the Tuesday sessions. We reviewed what happened on 
 Tuesday and then got to work. We ran through the workflow of creating a 
 guideline. We basically did #1 and #2 of How to Contribute [3] but instead of 
 first taking notes on the API Improvement in the wiki we just discussed it in 
 the session. We then submitted the patch for a new guideline [4].
 
 As you can see there’s still a lot of work to be done in that review. It may 
 even be that we need a fresh start with it. But it was a good exercise for 
 everyone present to walk through the process together for the first time. I 
 think it really helped put everyone on the same page for working together as 
 a group.
 
 Thanks,
 Everett
 
 [1] https://wiki.openstack.org/wiki/API_Working_Group
 [2] https://wiki.openstack.org/wiki/Meetings/API-WG
 [3] https://wiki.openstack.org/wiki/API_Working_Group#How_to_Contribute
 [4] https://review.openstack.org/#/c/133087/
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Summit summary

2014-11-19 Thread melanie witt
On Nov 19, 2014, at 11:38, Everett Toews everett.to...@rackspace.com wrote:

 Does anybody know what happened to the Etherpad? It’s completely blank now!!!
 
 If you check the Timeslider, it appears that it only ever existed on Nov. 15. 
 Bizarre.

I see it as blank now too, however I can see all of the previous revisions and 
content when I drag the timeslider back.

melanie (melwitt)






signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API-WG meeting (note time change this week)

2014-11-19 Thread Anne Gentle
On Wed, Nov 19, 2014 at 1:34 PM, Everett Toews everett.to...@rackspace.com
wrote:

 On Nov 19, 2014, at 4:56 AM, Christopher Yeoh cbky...@gmail.com wrote:

  Hi,
 
  We have moved to alternating times each week for the API WG meeting so
  people from other timezones can attend. Since this is an odd week
  the meeting will be Thursday UTC 1600. Details here:
 
  https://wiki.openstack.org/wiki/Meetings/API-WG
 
  The google ical feed hasn't been updated yet, but thats not surprising
  since the wiki page was only updated a few hours ago.

 I see on the Meetings [1] page the link to the Google Calendar iCal feed.

 1. Do you know the link to the Google Calendar itself, not just the iCal
 feed?


All I can find is:
https://www.google.com/calendar/embed?src=bj05mroquq28jhud58esggqmh4%40group.calendar.google.com&ctz=America/Chicago

 (Calendar ID: bj05mroquq28jhud58esggq...@group.calendar.google.com)



 2. Do you know if there is a way to subscribe to only the API WG meeting
 from that calendar?


I don't think so, it's an ical feed for all OpenStack meetings. I've added
this alternating Thursday one to the OpenStack calendar. Thierry and I have
permissions, and as noted in this thread[1], he's working on automation.

Still, what I have to do is set my own calendar items for the meetings that
matter to me.

Anne

1.
http://lists.openstack.org/pipermail/openstack-dev/2014-November/051036.html


 Thanks,
 Everett


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Waiting for Haproxy backends

2014-11-19 Thread Sergii Golovatiuk
Hi crew,

Please see my inline comments.

Hi Everyone,

 I was reading the blueprints mentioned here and thought I'd take the
 opportunity to introduce myself and ask a few questions.
 For those that don't recognise my name, Pacemaker is my baby - so I take a
 keen interest helping people have a good experience with it :)

 A couple of items stood out to me (apologies if I repeat anything that is
 already well understood):

 * Operations with CIB utilizes almost 100% of CPU on the Controller

  We introduced a new CIB algorithm in 1.1.12 which is O(2) faster/less
 resource hungry than prior versions.
  I would be interested to hear your experiences with it if you are able to
 upgrade to that version.


Our team is aware of that. That's a really nice improvement. Thank you very
much for that. We've prepared all the packages, though we have feature freeze.
Pacemaker 1.1.12 will be added in the next release.


 * Corosync shutdown process takes a lot of time

  Corosync (and Pacemaker) can shut down incredibly quickly.
  If corosync is taking a long time, it will be because it is waiting for
 pacemaker, and pacemaker is almost always waiting for for one of the
 clustered services to shut down.


As part of this improvement we have an idea to split the signalling layer
(corosync) and the resource management layer (pacemaker) by specifying

service {
   name: pacemaker
   ver:  1
}

and create an upstart script to set start ordering. That will allow us to:

1. Create some notifications in puppet for pacemaker
2. Restart and manage corosync and pacemaker independently
3. Use respawn in upstart to restart corosync or pacemaker


 * Current Fuel Architecture is limited to Corosync 1.x and Pacemaker 1.x

  Corosync 2 is really the way to go.
  Is there something in particular that is holding you back?
  Also, out of interest, are you using cman or the pacemaker plugin?


We use almost standard corosync 1.x and pacemaker from CentOS 6.5 and
Ubuntu 12.04. However, we've prepared corosync 2.x and pacemaker 1.1.12
packages. Also we have updated puppet manifests on review. As was said
above, we can't just add them at the end of the development cycle.



 *  Diff operations against the Corosync CIB require saving data to a file
   rather than keeping all data in memory

  Can someone clarify this one for me?


That's our implementation for puppet. We can't just use shadow in a
distributed environment, so we run


  Also, I notice that the corosync init script has been modified to
 set/unset maintenance-mode with cibadmin.
  Any reason not to use crm_attribute instead?  You might find its a less
 fragile solution than a hard-coded diff.


Can you give a particular line where you see that?

* Debug process of OCF scripts is not unified and requires a lot of actions
  from the Cloud Operator

  Two things to mention here... the first is crm_resource
 --force-(start|stop|check) which queries the cluster for the resource's
 definition but runs the command directly.

 Combined with -V, this means that you get to see everything the agent is
 doing.


We write many of our own OCF scripts. We just need to see how an OCF script
behaves. ocf_tester is not enough for our cases. I'll try whether
crm_resource -V --force-start is better.



  Also, pacemaker now supports the ability for agents to emit specially
 formatted error messages that are stored in the cib and can be shown back
 to users.
  This can make things much less painful for admins. Look for
 PCMK_OCF_REASON_PREFIX in the upstream resource-agents project.


Thank you for the tip.



 * Openstack services are not managed by Pacemaker


The general idea is to have all OpenStack services under pacemaker control
rather than having both upstart and pacemaker. It will be very handy for
operators to see the status of all services from one console. Also it will
give us the flexibility to have more complex service verification checks in
the monitor function.



  Oh?

 * Compute nodes aren't in the Pacemaker cluster, hence, are lacking a viable
  control plane for their compute/nova services.

  pacemaker-remoted might be of some interest here.

 http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Pacemaker_Remote/index.html


 * Creating and committing shadows not only adds constant pain with
 dependencies and unneeded complexity but also rewrites cluster attributes
 and even other changes if you mess up with ordering and it’s really hard to
 debug it.

  Is this still an issue?  I'm reasonably sure this is specific to the way
 crmsh uses shadows.
  Using the native tools it should be possible to commit only the delta, so
 any other changes that occur while you're updating the shadow would not be
 an issue, and existing attributes wouldn't be rewritten.


We are on the way to replacing pcs and crm with native tools in the puppet
service provider.



 * Restarting resources by Puppet’s pacemaker service provider restarts
 them even if they are running on other nodes and it sometimes impacts the
 cluster.

  Not available yet, but upstream there is now a smart 

Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Jay Pipes
Hi Eugene, please see comments inline. But, bottom line, is that setting 
the transaction isolation level to READ_COMMITTED should be avoided.


On 11/18/2014 01:38 PM, Eugene Nikanorov wrote:

Hi neutron folks,

There is an ongoing effort to refactor some neutron DB logic to be
compatible with galera/mysql which doesn't support locking
(with_lockmode('update')).

Some code paths that used locking in the past were rewritten to retry
the operation if they detect that an object was modified concurrently.
The problem here is that all DB operations (CRUD) are performed in the
scope of some transaction that makes complex operations to be executed
in atomic manner.


Yes. The root of the problem in Neutron is that the session object is 
passed through all of the various plugin methods and the 
session.begin(subtransactions=True) is used all over the place, when in 
reality many things should not need to be done in long-lived 
transactional containers.



For mysql the default transaction isolation level is 'REPEATABLE READ'
which means that once the code issue a query within a transaction, this
query will return the same result while in this transaction (e.g. the
snapshot is taken by the DB during the first query and then reused for
the same query).


Correct.

However note that the default isolation level in PostgreSQL is READ 
COMMITTED, though it is important to point out that PostgreSQL's READ 
COMMITTED isolation level does *NOT* allow one session to see changes 
committed during query execution by concurrent transactions.


It is a common misunderstanding that MySQL's READ COMMITTED isolation 
level is the same as PostgreSQL's READ COMMITTED isolation level. It is 
not. PostgreSQL's READ COMMITTED isolation level is actually most 
closely similar to MySQL's REPEATABLE READ isolation level.


I bring this up because the proposed solution of setting the isolation 
level to READ COMMITTED will not work like you think it will on 
PostgreSQL. Regardless, please see below as to why setting the isolation 
level to READ COMMITTED is not the appropriate solution to this problem 
anyway...



In other words, the retry logic like the following will not work:

def allocate_obj():
    with session.begin(subtrans=True):
        for i in xrange(n_retries):
            obj = session.query(Model).filter_by(filters)
            count = session.query(Model).filter_by(
                id=obj.id).update({'allocated': True})
            if count:
                return obj

since usually methods like allocate_obj() is called from within another
transaction, we can't simply put transaction under 'for' loop to fix the
issue.


Exactly. The above code, from here:

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/helpers.py#L98

has no chance of working at all under the existing default isolation 
levels for either MySQL or PostgreSQL. If another session updates the 
same row in between the time the first session began and the UPDATE 
statement in the first session starts, then the first session will 
return 0 rows affected. It will continue to return 0 rows affected for 
each loop, as long as the same transaction/session is still in effect, 
which in the code above, is the case.
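
For illustration only, here is a minimal, self-contained sketch of the
separate-transactions alternative suggested further below (the Model table,
the session factory and allocate_obj() here are placeholders, not the actual
Neutron code): each attempt gets its own short-lived session and transaction,
so a retry really does see rows committed by concurrent sessions:

    from sqlalchemy import Boolean, Column, Integer, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Model(Base):
        # stand-in for an allocatable resource table
        __tablename__ = 'allocations'
        id = Column(Integer, primary_key=True)
        allocated = Column(Boolean, default=False, nullable=False)

    def allocate_obj(session_factory, n_retries=10):
        # one short transaction per attempt instead of one long-lived one
        for _ in range(n_retries):
            session = session_factory()
            try:
                obj = session.query(Model).filter_by(allocated=False).first()
                if obj is None:
                    return None                 # nothing left to allocate
                obj_id = obj.id
                count = (session.query(Model)
                         .filter_by(id=obj_id, allocated=False)
                         .update({'allocated': True},
                                 synchronize_session=False))
                if count:
                    session.commit()            # we won the race
                    return obj_id
                session.rollback()              # lost the race; re-read fresh data
            finally:
                session.close()
        return None

    if __name__ == '__main__':
        engine = create_engine('sqlite://')     # any backend works for the sketch
        Base.metadata.create_all(engine)
        print(allocate_obj(sessionmaker(bind=engine)))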



The particular issue here is
https://bugs.launchpad.net/neutron/+bug/1382064 with the proposed fix:
https://review.openstack.org/#/c/129288

So far the solution proven by tests is to change transaction isolation
level for mysql to be 'READ COMMITTED'.
The patch suggests changing the level for particular transaction where
issue occurs (per sqlalchemy, it will be reverted to engine default once
transaction is committed)
This isolation level allows the code above to see different result in
each iteration.


Not for PostgreSQL, see above. You would need to set the level to READ 
*UNCOMMITTED* to get that behaviour for PostgreSQL, and setting to READ 
UNCOMMITTED is opening up the code to a variety of other issues and 
should be avoided.



At the same time, any code that relies that repeated query under same
transaction gives the same result may potentially break.

So the question is: what do you think about changing the default
isolation level to READ COMMITTED for mysql project-wise?
It is already so for postgress, however we don't have much concurrent
test coverage to guarantee that it's safe to move to a weaker isolation
level.


PostgreSQL READ COMMITTED is the same as MySQL's REPEATABLE READ. :) So, 
no, it doesn't work for PostgreSQL either.


The design of the Neutron plugin code's interaction with the SQLAlchemy 
session object is the main problem here. Instead of doing all of this 
within a single transactional container, the code should instead be 
changed to perform the SELECT statements in separate transactions/sessions.


That means not using the session parameter supplied to the 
neutron.plugins.ml2.drivers.helpers.TypeDriverHelper.allocate_partially_specified_segment() 
method, and instead performing 

Re: [openstack-dev] [OpenStack-dev][Nova] Migration stuck - resize/migrating

2014-11-19 Thread Solly Ross
Indeed.  Ensure you have SSH access between compute nodes (I'm working on some 
code to remove this requirement, but it may be a while before it gets merged).

Also, if you can, could you post logs somewhere with the 'debug' config option 
enabled?  I might be able to spot something quickly, since I've been working on 
the related code recently.

Best Regards,
Solly Ross

- Original Message -
 From: Vishvananda Ishaya vishvana...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, November 18, 2014 4:07:31 PM
 Subject: Re: [openstack-dev] [OpenStack-dev][Nova] Migration stuck -  
 resize/migrating
 
 Migrate/resize uses scp to copy files back and forth with the libvirt driver.
 This shouldn’t be necessary with shared storage, but it may still need ssh
 configured between the user that nova is running as in order to complete the
 migration. It is also possible that there is a bug in the code path dealing
 with shared storage, although I would have expected you to see a traceback
 somewhere.
 
 Vish
 
 On Nov 11, 2014, at 1:10 AM, Eduard Matei  eduard.ma...@cloudfounders.com 
 wrote:
 
 
 
 
 Hi,
 
 I'm testing our cinder volume driver in the following setup:
 - 2 nodes, ubuntu, devstack juno (2014.2.1)
 - shared storage (common backend), our custom software solution + cinder
 volume on shared storage
 - 1 instance running on node 1, /instances directory on shared storage
 - kvm, libvirt (with live migration flags)
 
 Live migration of instance between nodes works perfectly.
 Migrate simply blocks. The instance is in status Resize/Migrate, no errors in
 n-cpu or n-sch, and it stays like that for over 8 hours (all night). I
 thought it was copying the disk, but it's a 20GB sparse file with approx.
 200 mb of data, and the nodes have 1Gbps link, so it should be a couple of
 seconds.
 
 Any difference between live migration and migration?
 As i said, we use a shared filesystem-like storage solution so the volume
 files and the instance files are visible on both nodes, so no data needs
 copying.
 
 I know it's tricky to debug since we use a custom cinder driver, but does anyone
 have any ideas where to start looking?
 
 Thanks,
 Eduard
 
 --
 Eduard Biceri Matei, Senior Software Developer
 www.cloudfounders.com
 | eduard.ma...@cloudfounders.com
 
 CloudFounders, The Private Cloud Software Company
 Disclaimer:
 This email and any files transmitted with it are confidential and intended
 solely for the use of the individual or entity to whom they are addressed.
 If you are not the named addressee or an employee or agent responsible for
 delivering this message to the named addressee, you are hereby notified that
 you are not authorized to read, print, retain, copy or disseminate this
 message or any part of it. If you have received this email in error we
 request you to notify us by reply e-mail and to delete all electronic files
 of the message. If you are not the intended recipient you are notified that
 disclosing, copying, distributing or taking any action in reliance on the
 contents of this information is strictly prohibited.
 E-mail transmission cannot be guaranteed to be secure or error free as
 information could be intercepted, corrupted, lost, destroyed, arrive late or
 incomplete, or contain viruses. The sender therefore does not accept
 liability for any errors or omissions in the content of this message, and
 shall have no liability for any loss or damage suffered by the user, which
 arise as a result of e-mail transmission.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Mike Bayer

 On Nov 19, 2014, at 1:49 PM, Ryan Moats rmo...@us.ibm.com wrote:
 
 I was waiting for this because I think I may have a slightly different (and 
 outside of the box) view on how to approach a solution to this.
 
 Conceptually (at least in my mind) there isn't a whole lot of difference 
 between how the example below (i.e. updates from two concurrent threads) is 
 handled
 and how/if neutron wants to support a multi-master database scenario (which 
 in turn lurks in the background when one starts thinking/talking about 
 multi-region support).
 
 If neutron wants to eventually support multi-master database scenarios, I see 
 two ways to go about it:
 
 1) Defer multi-master support to the database itself.
 2) Take responsibility for managing the conflict resolution inherent in 
 multi-master scenarios itself.
 
 The first approach is certainly simpler in the near term, but it has the down 
 side of restricting the choice of databases to those that have solved 
 multi-master and further, may lead to code bifurcation based on possibly 
 different solutions to the conflict resolution scenarios inherent in 
 multi-master.
 
 The second approach is certainly more complex as neutron assumes more 
 responsibility for its own actions, but it has the advantage that (if done 
 right) would be transparent to the underlying databases (with all that 
 implies)
 
multi-master is a very advanced use case so I don’t see why it would be 
unreasonable to require a multi-master vendor database.   Reinventing a complex 
system like that in the application layer is an unnecessary reinvention.

As far as working across different conflict resolution scenarios, while there 
may be differences across backends, these differences will be much less 
significant compared to the differences against non-clustered backends in which 
we are inventing our own multi-master solution.   I doubt a home rolled 
solution would insulate us at all from “code bifurcation” as this is already a 
fact of life in targeting different backends even without any implication of 
clustering.   Even with simple things like transaction isolation, we see that 
different databases have different behavior, and if you look at the logic in 
oslo.db inside of 
https://github.com/openstack/oslo.db/blob/master/oslo/db/sqlalchemy/exc_filters.py
 
https://github.com/openstack/oslo.db/blob/master/oslo/db/sqlalchemy/exc_filters.py
 you can see an example of just how complex it is to just do the most 
rudimental task of organizing exceptions into errors that mean the same thing.


 My reason for asking this question here is that if the community wants to 
 consider #2, then these problems are the place to start crafting that 
 solution - if we solve the conflicts inherent with the two concurrent 
 thread scenarios, then I think we will find that we've solved the 
 multi-master problem essentially for free”.
 

Maybe I’m missing something: if we learn how to write out a row such that a 
concurrent transaction against the same row doesn’t throw us off, where does 
the part where that data is replicated to databases running concurrently on 
other IP numbers, in a way that is atomic, come out of that effort “for free” ?   A 
home-rolled “multi master” scenario would have to start with a system that has 
multiple create_engine() calls, since we need to communicate directly to 
multiple database servers. From there it gets really crazy.  Where’s all that ?




 
 Ryan Moats
 
 Mike Bayer mba...@redhat.com wrote on 11/19/2014 12:05:35 PM:
 
  From: Mike Bayer mba...@redhat.com
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
  Date: 11/19/2014 12:05 PM
  Subject: Re: [openstack-dev] [Neutron] DB: transaction isolation and
  related questions
  
  On Nov 18, 2014, at 1:38 PM, Eugene Nikanorov enikano...@mirantis.com 
  wrote:
  
  Hi neutron folks,
  
  There is an ongoing effort to refactor some neutron DB logic to be 
  compatible with galera/mysql which doesn't support locking 
  (with_lockmode('update')).
  
  Some code paths that used locking in the past were rewritten to 
  retry the operation if they detect that an object was modified concurrently.
  The problem here is that all DB operations (CRUD) are performed in 
  the scope of some transaction that makes complex operations to be 
  executed in atomic manner.
  For mysql the default transaction isolation level is 'REPEATABLE 
  READ' which means that once the code issue a query within a 
  transaction, this query will return the same result while in this 
  transaction (e.g. the snapshot is taken by the DB during the first 
  query and then reused for the same query).
  In other words, the retry logic like the following will not work:
  
  def allocate_obj():
      with session.begin(subtrans=True):
          for i in xrange(n_retries):
              obj = session.query(Model).filter_by(filters)
              count = session.query(Model).filter_by(id=obj.id
  

Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Mike Bayer

 On Nov 19, 2014, at 2:58 PM, Jay Pipes jaypi...@gmail.com wrote:
 
 
 In other words, the retry logic like the following will not work:
 
 def allocate_obj():
     with session.begin(subtrans=True):
         for i in xrange(n_retries):
             obj = session.query(Model).filter_by(filters)
             count = session.query(Model).filter_by(
                 id=obj.id).update({'allocated': True})
             if count:
                 return obj
 
 since usually methods like allocate_obj() is called from within another
 transaction, we can't simply put transaction under 'for' loop to fix the
 issue.
 
 Exactly. The above code, from here:
 
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/helpers.py#L98
 
 has no chance of working at all under the existing default isolation levels 
 for either MySQL or PostgreSQL. If another session updates the same row in 
 between the time the first session began and the UPDATE statement in the 
 first session starts, then the first session will return 0 rows affected. It 
 will continue to return 0 rows affected for each loop, as long as the same 
 transaction/session is still in effect, which in the code above, is the case.

oh, because it stays a zero, right.  yeah I didn’t understand that that was the 
failure case before.  should have just pinged you on IRC to answer the question 
without me wasting everyone’s time! :)

 
 The design of the Neutron plugin code's interaction with the SQLAlchemy 
 session object is the main problem here. Instead of doing all of this within 
 a single transactional container, the code should instead be changed to 
 perform the SELECT statements in separate transactions/sessions.
 
 That means not using the session parameter supplied to the 
 neutron.plugins.ml2.drivers.helpers.TypeDriverHelper.allocate_partially_specified_segment()
  method, and instead performing the SQL statements in separate transactions.
 
 Mike Bayer's EngineFacade blueprint work should hopefully unclutter the 
 current passing of a session object everywhere, but until that hits, it 
 should be easy enough to simply ensure that you don't use the same session 
 object over and over again, instead of changing the isolation level.

OK, but EngineFacade was all about unifying broken-up transactions into one big 
transaction.   I’ve never been partial to the “retry something inside of a 
transaction” approach; I usually prefer to have the API method raise and retry 
its whole series of operations all over again.  How do you propose to reconcile 
EngineFacade’s transaction-unifying behavior with 
separate-transaction-per-SELECT (and wouldn’t that need to include the UPDATE 
as well?)  Did you see it as having the “one main transaction” with separate 
“ad-hoc, out of band” transactions as needed?
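
(For concreteness only, one possible shape of such an “ad-hoc, out of band”
transaction, reusing the placeholder Model and session factory from the sketch
earlier in the thread; this is an illustration, not an EngineFacade API:

    def allocate_out_of_band(session_factory, obj_id):
        # the caller's long-lived session/transaction is left untouched;
        # only the racy allocation runs in its own short transaction
        ad_hoc = session_factory()
        try:
            count = (ad_hoc.query(Model)
                     .filter_by(id=obj_id, allocated=False)
                     .update({'allocated': True},
                             synchronize_session=False))
            ad_hoc.commit()
            return bool(count)
        except Exception:
            ad_hoc.rollback()
            raise
        finally:
            ad_hoc.close()

The caller would keep its own session open for the rest of the API operation
and call something like this only for the contended row.)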




 
 All the best,
 -jay
 
 Your feedback is appreciated.
 
 Thanks,
 Eugene.
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Ian Wells
On 19 November 2014 11:58, Jay Pipes jaypi...@gmail.com wrote:

 Some code paths that used locking in the past were rewritten to retry

 the operation if they detect that an object was modified concurrently.
 The problem here is that all DB operations (CRUD) are performed in the
 scope of some transaction that makes complex operations to be executed
 in atomic manner.


 Yes. The root of the problem in Neutron is that the session object is
 passed through all of the various plugin methods and the
 session.begin(subtransactions=True) is used all over the place, when in
 reality many things should not need to be done in long-lived transactional
 containers.


I think the issue is one of design, and it's possible what we discussed at
the summit may address some of this.

At the moment, Neutron's a bit confused about what it is.  Some plugins
treat a call to Neutron as the period of time in which an action should be
completed - the 'atomicity' thing.  This is not really compatible with a
distributed system and it's certainly not compatible with the principle of
eventual consistency that Openstack is supposed to follow.  Some plugins
treat the call as a change to desired networking state, and the action on
the network is performed asynchronously to bring the network state into
alignment with the state of the database.  (Many plugins do a bit of both.)

When you have a plugin that's decided to be synchronous, then there are
cases where the DB lock is held for a technically indefinite period of
time.  This is basically broken.

What we said at the summit is that we should move to an entirely async
model for the API, which in turn gets us to the 'desired state' model for
the DB.  DB writes would take one of two forms:

- An API call has requested that the data be updated, which it can do
immediately - the DB transaction takes as long as it takes to write the DB
consistently, and can hold locks on referenced rows to main consistency
providing the whole operation remains brief
- A network change has completed and the plugin wants to update an object's
state - again, the DB transaction contains only DB ops and nothing else and
should be quick.

Now, if we moved to that model, DB locks would be very very brief for the
sort of queries we'd need to do.  Setting aside the joys of Galera (and I
believe we identified that using one Galera node and doing all writes
through it worked just fine, though we could probably distribute read-only
transactions across all of them in the future), would there be any need for
transaction retries in that scenario?  I would have thought that DB locking
would be just fine as long as there was nothing but DB operations for the
period a transaction was open, and thus significantly changing the DB
lock/retry model now is a waste of time because it's a problem that will go
away.

Does that theory hold water?

-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Ryan Moats



Ian Wells ijw.ubu...@cack.org.uk wrote on 11/19/2014 02:33:40 PM:

[snip]

 When you have a plugin that's decided to be synchronous, then there
 are cases where the DB lock is held for a technically indefinite
 period of time.  This is basically broken.

A big +1 to this statement

Ryan Moats___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Ryan Moats


 Mike Bayer mba...@redhat.com wrote on 11/19/2014 02:10:18 PM:

  From: Mike Bayer mba...@redhat.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Date: 11/19/2014 02:11 PM
  Subject: Re: [openstack-dev] [Neutron] DB: transaction isolation and
  related questions
 
  On Nov 19, 2014, at 1:49 PM, Ryan Moats rmo...@us.ibm.com wrote:
 
  I was waiting for this because I think I may have a slightly
  different (and outside of the box) view on how to approach a solution
to this.
 
  Conceptually (at least in my mind) there isn't a whole lot of
  difference between how the example below (i.e. updates from two
  concurrent threads) is handled
  and how/if neutron wants to support a multi-master database scenario
  (which in turn lurks in the background when one starts thinking/
  talking about multi-region support).
 
  If neutron wants to eventually support multi-master database
  scenarios, I see two ways to go about it:
 
  1) Defer multi-master support to the database itself.
  2) Take responsibility for managing the conflict resolution inherent
  in multi-master scenarios itself.
 
  The first approach is certainly simpler in the near term, but it has
  the down side of restricting the choice of databases to those that
  have solved multi-master and further, may lead to code bifurcation
  based on possibly different solutions to the conflict resolution
  scenarios inherent in multi-master.
  The second approach is certainly more complex as neutron assumes
  more responsibility for its own actions, but it has the advantage
  that (if done right) would be transparent to the underlying
  databases (with all that implies)

 multi-master is a very advanced use case so I don’t see why it would
 be unreasonable to require a multi-master vendor database.
 Reinventing a complex system like that in the application layer is
 an unnecessary reinvention.

 As far as working across different conflict resolution scenarios,
 while there may be differences across backends, these differences
 will be much less significant compared to the differences against
 non-clustered backends in which we are inventing our own multi-
 master solution.   I doubt a home rolled solution would insulate us
 at all from “code bifurcation” as this is already a fact of life in
 targeting different backends even without any implication of
 clustering.   Even with simple things like transaction isolation, we
 see that different databases have different behavior, and if you
 look at the logic in oslo.db inside of https://github.com/openstack/
 oslo.db/blob/master/oslo/db/sqlalchemy/exc_filters.py you can see an
 example of just how complex it is to just do the most rudimental
 task of organizing exceptions into errors that mean the same thing.

I didn't say it was unreasonable, I only point out that there is an
alternative for consideration.

BTW, I view your examples from oslo as helping make my argument for
me (and I don't think that was your intent :) )

  My reason for asking this question here is that if the community
  wants to consider #2, then these problems are the place to start
  crafting that solution - if we solve the conflicts inherent with the
  two concurrent thread scenarios, then I think we will find that
  we've solved the multi-master problem essentially for free”.

 Maybe I’m missing something, if we learn how to write out a row such
 that a concurrent transaction against the same row doesn’t throw us
 off, where is the part where that data is replicated to databases
 running concurrently on other IP numbers in a way that is atomic
 come out of that effort “for free” ?   A home-rolled “multi master”
 scenario would have to start with a system that has multiple
 create_engine() calls, since we need to communicate directly to
multiple database servers. From there it gets really crazy.  Where’s all
that?

Boiled down, what you are talking about here w.r.t. concurrent
transactions is really conflict resolution, which is the hardest
part of implementing multi-master (as a side note, using locking in
this case is the equivalent of option #1).

All I wished to point out is that there are other ways to solve the
conflict resolution that could then be leveraged into a multi-master
scenario.

As for the parts that I glossed over, once conflict resolution is
separated out, replication turns into a much simpler problem with
well understood patterns and so I view that part as coming
for free.

Ryan___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] v3 integrated tests taking a lot longer?

2014-11-19 Thread Matt Riedemann



On 11/19/2014 9:40 AM, Jay Pipes wrote:

On 11/18/2014 06:48 PM, Matt Riedemann wrote:

I just started noticing today that the v3 integrated api samples tests
seem to be taking a lot longer than the other non-v3 integrated api
samples tests. On my 4 VCPU, 4 GB RAM VM some of those tests are taking
anywhere from 15-50+ seconds, while the non-v3 tests are taking less
than a second.

Has something changed recently in how the v3 API code is processed that
might have caused this?  With microversions or jsonschema validation
perhaps?

I was thinking it was oslo.db 1.1.0 at first since that was a recent
update but given the difference in times between v3 and non-v3 api
samples tests I'm thinking otherwise.


Heya,

I've been stung in the past by running either tox or run_tests.sh while
active in a virtualenv. The speed goes from ~2 seconds per API sample
test to ~15 seconds per API sample test...

Not sure if this is what is causing your problem, but worth a check.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



So...

Run: 13101 in 21466.460047 sec

...

mriedem@ubuntu:~/git/nova$ du -sh .testrepository/
1002M   .testrepository/

I removed that and I'm running again now.

We remove pyc files on each run; is there any reason why we couldn't 
also remove .testrepository on every run?


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Recall for previous iscsi backend BP

2014-11-19 Thread Alex Meade
Hey Henry/Folks,

I think it could make sense for Glance to store the volume UUID; the idea
is that no matter where an image is stored it should be *owned* by Glance
and not deleted out from under it. But that is more of a single-tenant vs.
multi-tenant cinder store question.

It makes sense for Cinder to at least abstract all of the block storage
needs. Glance and any other service should reuse Cinder's ability to talk to
certain backends. It would be wasted effort to reimplement Cinder drivers
as Glance stores. I do agree with Duncan that a great way to solve these
issues is a third party transfer service, which others and I in the Glance
community have discussed at numerous summits (since San Diego).

-Alex



On Wed, Nov 19, 2014 at 3:40 AM, henry hly henry4...@gmail.com wrote:

 Hi Flavio,

 Thanks for your information about the Cinder store. Yet I have a little
 concern about the Cinder backend: suppose cinder and glance both use Ceph
 as the store; then if cinder can do an instant copy to glance via ceph clone
 (maybe not now but some time later), what information would be stored
 in glance? Obviously the volume UUID is not a good choice, because after
 the volume is deleted the image can't be referenced. The best choice is
 that the cloned ceph object URI also be stored in the glance location, letting
 both glance and cinder see the backend store details.

 However, although it really makes sense for a Ceph-like all-in-one store,
 I'm not sure if an iscsi backend can be used the same way.

 On Wed, Nov 19, 2014 at 4:00 PM, Flavio Percoco fla...@redhat.com wrote:
  On 19/11/14 15:21 +0800, henry hly wrote:
 
  In the Previous BP [1], support for iscsi backend is introduced into
  glance. However, it was abandoned because of Cinder backend
  replacement.
 
  The reason is that all storage backend details should be hidden by
  cinder, not exposed to other projects. However, with more and more
  interest in Converged Storage like Ceph, it's necessary to expose
  storage backend to glance as well as cinder.
 
  An example  is that when transferring bits between volume and image,
  we can utilize advanced storage offload capability like linked clone
  to do very fast instant copy. Maybe we need a more general glance
  backend location support not only with iscsi.
 
 
 
  [1] https://blueprints.launchpad.net/glance/+spec/iscsi-backend-store
 
 
  Hey Henry,
 
  This blueprint has been superseeded by one proposing a Cinder store
  for Glance. The Cinder store is, unfortunately, in a sorry state.
  Short story, it's not fully implemented.
 
  I truly think Glance is not the place where you'd have an iscsi store,
  that's Cinder's field and the best way to achieve what you want is by
  having a fully implemented Cinder store that doesn't rely on Cinder's
  API but has access to the volumes.
 
  Unfortunately, this is not possible now and I don't think it'll be
  possible until L (or even M?).
 
  FWIW, I think the use case you've mentioned is useful and it's
  something we have in our TODO list.
 
  Cheers,
  Flavio
 
  --
  @flaper87
  Flavio Percoco
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Waiting for Haproxy backends

2014-11-19 Thread Jay Pipes

On 11/18/2014 07:25 PM, Andrew Woodward wrote:

On Tue, Nov 18, 2014 at 3:18 PM, Andrew Beekhof abeek...@redhat.com wrote:

* Openstack services are not managed by Pacemaker

  Oh?


fuel doesn't (currently) set up API services in pacemaker


Nor should it, IMO. Other than the Neutron dhcp-agent, all OpenStack 
services that run on a controller node are completely stateless. 
Therefore, I don't see any reason to use corosync/pacemaker for 
management of these resources. haproxy should just spread the HTTP 
request load evenly across all API services and things should be fine, 
allowing haproxy's http healthcheck monitoring to handle the simple 
service status checks.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Clint Byrum
Excerpts from Mike Bayer's message of 2014-11-19 10:05:35 -0800:
 
  On Nov 18, 2014, at 1:38 PM, Eugene Nikanorov enikano...@mirantis.com 
  wrote:
  
  Hi neutron folks,
  
  There is an ongoing effort to refactor some neutron DB logic to be 
  compatible with galera/mysql which doesn't support locking 
  (with_lockmode('update')).
  
  Some code paths that used locking in the past were rewritten to retry the 
  operation if they detect that an object was modified concurrently.
  The problem here is that all DB operations (CRUD) are performed in the 
  scope of some transaction that makes complex operations to be executed in 
  atomic manner.
  For mysql the default transaction isolation level is 'REPEATABLE READ' 
  which means that once the code issue a query within a transaction, this 
  query will return the same result while in this transaction (e.g. the 
  snapshot is taken by the DB during the first query and then reused for the 
  same query).
  In other words, the retry logic like the following will not work:
  
  def allocate_obj():
      with session.begin(subtrans=True):
          for i in xrange(n_retries):
              obj = session.query(Model).filter_by(filters)
              count = session.query(Model).filter_by(
                  id=obj.id).update({'allocated': True})
              if count:
                  return obj
  
  since usually methods like allocate_obj() is called from within another 
  transaction, we can't simply put transaction under 'for' loop to fix the 
  issue.
 
 has this been confirmed?  the point of systems like repeatable read is not 
 just that you read the “old” data, it’s also to ensure that updates to that 
 data either proceed or fail explicitly; locking is also used to prevent 
 concurrent access that can’t be reconciled.  A lower isolation removes these 
 advantages.  
 

Yes this is confirmed and fails reliably on Galera based systems.

 I ran a simple test in two MySQL sessions as follows:
 
 session 1:
 
 mysql> create table some_table(data integer) engine=innodb;
 Query OK, 0 rows affected (0.01 sec)

 mysql> insert into some_table(data) values (1);
 Query OK, 1 row affected (0.00 sec)

 mysql> begin;
 Query OK, 0 rows affected (0.00 sec)

 mysql> select data from some_table;
 +------+
 | data |
 +------+
 |    1 |
 +------+
 1 row in set (0.00 sec)
 
 
 session 2:
 
 mysql> begin;
 Query OK, 0 rows affected (0.00 sec)

 mysql> update some_table set data=2 where data=1;
 Query OK, 1 row affected (0.00 sec)
 Rows matched: 1  Changed: 1  Warnings: 0
 
 then back in session 1, I ran:
 
 mysql> update some_table set data=3 where data=1;
 
 this query blocked;  that’s because session 2 has placed a write lock on the 
 table.  this is the effect of repeatable read isolation.

With Galera this session might happen on another node. There is no
distributed lock, so this would not block...

 
 while it blocked, I went to session 2 and committed the in-progress 
 transaction:
 
 mysql> commit;
 Query OK, 0 rows affected (0.00 sec)
 
 then session 1 unblocked, and it reported, correctly, that zero rows were 
 affected:
 
 Query OK, 0 rows affected (7.29 sec)
 Rows matched: 0  Changed: 0  Warnings: 0
 
 the update had not taken place, as was stated by “Rows matched: 0”:
 
 mysql> select * from some_table;
 +------+
 | data |
 +------+
 |    1 |
 +------+
 1 row in set (0.00 sec)
 
 the code in question would do a retry at this point; it is checking the 
 number of rows matched, and that number is accurate.
 
 if our code did *not* block at the point of our UPDATE, then it would have 
 proceeded, and the other transaction would have overwritten what we just did, 
 when it committed.   I don’t know that read committed is necessarily any 
 better here.
 
 now perhaps, with Galera, none of this works correctly.  That would be a 
 different issue in which case sure, we should use whatever isolation is 
 recommended for Galera.  But I’d want to potentially peg it to the fact that 
 Galera is in use, or not.
 
 would love also to hear from Jay Pipes on this since he literally wrote the 
 book on MySQL ! :)

What you missed is that with Galera the commit that happened last would
be rolled back. This is a reality in many scenarios on SQL databases and
should be handled _regardless_ of Galera. It is a valid way to handle
deadlocks on single node DBs as well (pgsql will do this sometimes).

One simply cannot rely on multi-statement transactions to always succeed.
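
For illustration, a hand-rolled sketch of that "retry the whole operation"
pattern (the decorator and create_something() below are made-up placeholders,
not an oslo.db API; DBDeadlock is what oslo.db raises for backend deadlock
errors, which is also how Galera reports certification failures):

    import functools
    import time

    from oslo.db import exception as db_exc


    def retry_on_deadlock(max_retries=5, interval=0.5):
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                attempt = 0
                while True:
                    try:
                        return func(*args, **kwargs)
                    except db_exc.DBDeadlock:
                        attempt += 1
                        if attempt > max_retries:
                            raise
                        time.sleep(interval)  # back off, then redo every read/write
            return wrapper
        return decorator


    @retry_on_deadlock()
    def create_something(context):
        # open a session, do all the SELECT/INSERT/UPDATEs, commit; if the
        # commit is rolled back, the decorator replays the whole function
        # with fresh reads instead of retrying inside the dead transaction
        pass

oslo.db's own wrap_db_retry helper covers much the same ground, if the version
in use has it.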

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] v3 integrated tests taking a lot longer?

2014-11-19 Thread Jay Pipes

On 11/19/2014 04:00 PM, Matt Riedemann wrote:

On 11/19/2014 9:40 AM, Jay Pipes wrote:

On 11/18/2014 06:48 PM, Matt Riedemann wrote:

I just started noticing today that the v3 integrated api samples tests
seem to be taking a lot longer than the other non-v3 integrated api
samples tests. On my 4 VCPU, 4 GB RAM VM some of those tests are taking
anywhere from 15-50+ seconds, while the non-v3 tests are taking less
than a second.

Has something changed recently in how the v3 API code is processed that
might have caused this?  With microversions or jsonschema validation
perhaps?

I was thinking it was oslo.db 1.1.0 at first since that was a recent
update but given the difference in times between v3 and non-v3 api
samples tests I'm thinking otherwise.


Heya,

I've been stung in the past by running either tox or run_tests.sh while
active in a virtualenv. The speed goes from ~2 seconds per API sample
test to ~15 seconds per API sample test...

Not sure if this is what is causing your problem, but worth a check.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



So...

Run: 13101 in 21466.460047 sec

...

mriedem@ubuntu:~/git/nova$ du -sh .testrepository/
1002M   .testrepository/

I removed that and I'm running again now.

We remove pyc files on each run, is there any reason why we couldn't
also remove .testrepository on every run?


Well, having some history is sometimes useful (for instance when you 
want to do: ./run_tests.sh -V --failing to execute only the tests that 
failed during the last run), so I think having a separate flag (-R) to 
run_tests.sh would be fine.


But, then again I just learned that run_tests.sh is apparently 
deprecated. Shame :(


-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Mike Bayer

 On Nov 19, 2014, at 3:47 PM, Ryan Moats rmo...@us.ibm.com wrote:
 
  
 BTW, I view your examples from oslo as helping make my argument for
 me (and I don't think that was your intent :) )
 

I disagree with that as IMHO the differences in producing MM in the app layer 
against arbitrary backends (Postgresql vs. DB2 vs. MariaDB vs. ???)  will incur 
a lot more “bifurcation” than a system that targets only a handful of existing 
MM solutions.  The example I referred to in oslo.db is dealing with distinct, 
non MM backends.   That level of DB-specific code and more is a given if we are 
building a MM system against multiple backends generically.

It’s not possible to say which approach would be better or worse at the level 
of “how much database specific application logic do we need”, though in my 
experience, no matter what one is trying to do, the answer is always, “tons”; 
we’re dealing not just with databases but also Python drivers that have a vast 
amount of differences in behaviors, at every level. On top of all of that, 
hand-rolled MM adds just that much more application code to be developed and 
maintained, which also claims it will do a better job than mature (ish?) 
database systems designed to do the same job against a specific backend.



 
   My reason for asking this question here is that if the community 
   wants to consider #2, then these problems are the place to start 
   crafting that solution - if we solve the conflicts inherent with the
    two concurrent thread scenarios, then I think we will find that 
   we've solved the multi-master problem essentially for free”.
   
  Maybe I’m missing something, if we learn how to write out a row such
  that a concurrent transaction against the same row doesn’t throw us 
  off, where is the part where that data is replicated to databases 
  running concurrently on other IP numbers in a way that is atomic 
  come out of that effort “for free” ?   A home-rolled “multi master” 
  scenario would have to start with a system that has multiple 
  create_engine() calls, since we need to communicate directly to 
  multiple database servers. From there it gets really crazy.  Where’s all
  that?
 
 Boiled down, what you are talking about here w.r.t. concurrent
 transactions is really conflict resolution, which is the hardest
 part of implementing multi-master (as a side note, using locking in
 this case is the equivalent of option #1).  
 
 All I wished to point out is that there are other ways to solve the
 conflict resolution that could then be leveraged into a multi-master
 scenario.
 
 As for the parts that I glossed over, once conflict resolution is
 separated out, replication turns into a much simpler problem with
 well understood patterns and so I view that part as coming
 for free.
 
 Ryan
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3] Reminder: Meeting Thursday at 1500 UTC

2014-11-19 Thread Carl Baldwin
The Neutron L3 team will meet [1] tomorrow at the regular time.  I'd
like to discuss the progress of the functional tests for the L3 agent
to see how we can get that on track.  I don't think we need to wait
for the BP to merge before we get something going.

We will likely not have a meeting next week for the Thanksgiving
holiday in the US.

Carl

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Waiting for Haproxy backends

2014-11-19 Thread Andrew Beekhof

 On 20 Nov 2014, at 6:55 am, Sergii Golovatiuk sgolovat...@mirantis.com 
 wrote:
 
 Hi crew,
 
 Please see my inline comments.
 
 Hi Everyone,
 
 I was reading the blueprints mentioned here and thought I'd take the 
 opportunity to introduce myself and ask a few questions.
 For those that don't recognise my name, Pacemaker is my baby - so I take a 
 keen interest helping people have a good experience with it :)
 
 A couple of items stood out to me (apologies if I repeat anything that is 
 already well understood):
 
 * Operations with CIB utilizes almost 100% of CPU on the Controller
 
  We introduced a new CIB algorithm in 1.1.12 which is O(2) faster/less 
 resource hungry than prior versions.
  I would be interested to hear your experiences with it if you are able to 
 upgrade to that version.
  
 Our team is aware of that. That's really nice improvement. Thank you very 
 much for that. We've prepared all packages, though we have feature freeze. 
 Pacemaker 1.1.12 will be added to next release.
  
 * Corosync shutdown process takes a lot of time
 
  Corosync (and Pacemaker) can shut down incredibly quickly.
  If corosync is taking a long time, it will be because it is waiting for 
 pacemaker, and pacemaker is almost always waiting for for one of the 
 clustered services to shut down.
 
 As part of improvement we have idea to split signalling layer (corosync) and 
 resource management (pacemaker) layers by specifying
 service { 
name: pacemaker
ver:  1
 }
 
 and create upstart script to set start ordering. That will allow us
 
 1. Create some notifications in puppet for pacemaker
 2. Restart and manage corosync and pacemaker independently
 3. Use respawn in upstart to restart corosync or pacemaker
 
 
 * Current Fuel Architecture is limited to Corosync 1.x and Pacemaker 1.x
 
  Corosync 2 is really the way to go.
  Is there something in particular that is holding you back?
  Also, out of interest, are you using cman or the pacemaker plugin?
 
 We use almost standard corosync 1.x and pacemaker from CentOS 6.5

Please be aware that the plugin is not long for this world on CentOS.
It was already removed once (in 6.4 betas) and is not even slightly tested at 
RH and about the only ones using it upstream are SUSE.

http://blog.clusterlabs.org/blog/2013/pacemaker-on-rhel6-dot-4/ has some 
relevant details.
The short version is that I would really encourage a transition to CMAN (which 
is really just corosync 1.x plus a more mature and better tested plugin from 
the corosync people).
See http://clusterlabs.org/quickstart-redhat.html , its really quite painless.

 and Ubuntu 12.04. However, we've prepared corosync 2.x and pacemaker 1.1.12 
 packages. Also we have update puppet manifests on review. As was said above, 
 we can't just add at the end of development cycle.

Yep, makes sense.

  
 
 *  Diff operations against Corosync CIB require to save data to file rather
   than keep all data in memory
 
  Can someone clarify this one for me?
  
 That's our implementation for puppet. We can't just use shadow on distributed 
 environment, so we run 
 
  Also, I notice that the corosync init script has been modified to set/unset 
 maintenance-mode with cibadmin.
  Any reason not to use crm_attribute instead?  You might find its a less 
 fragile solution than a hard-coded diff.
  
 Can you give a particular line where you see that?  

I saw it in one of the bugs:
   https://bugs.launchpad.net/fuel/+bug/1340172

Maybe it is no longer accurate

 
 * Debug process of OCF scripts is not unified requires a lot of actions from
  Cloud Operator
 
  Two things to mention here... the first is crm_resource 
 --force-(start|stop|check) which queries the cluster for the resource's 
 definition but runs the command directly. 
  Combined with -V, this means that you get to see everything the agent is 
 doing.
 
 We write many own OCF scripts. We just need to see how OCF script behaves. 
 ocf_tester is not enough for our cases.

Agreed. ocf_tester is more for out-of-cluster regression testing, not really 
good for debugging a running cluster.

 I'll try if crm_resource -V --force-start is better.
  
 
  Also, pacemaker now supports the ability for agents to emit specially 
 formatted error messages that are stored in the cib and can be shown back to 
 users.
  This can make things much less painful for admins. Look for 
 PCMK_OCF_REASON_PREFIX in the upstream resource-agents project.
 
 Thank you for tip. 
 
 
 * Openstack services are not managed by Pacemaker
 
 The general idea to have all openstack services under pacemaker control 
 rather than having upstart and pacemaker. It will be very handy for operators 
 to see the status of all services from one console. Also it will give us 
 flexibility to have more complex service verification checks in monitor 
 function.
  
 
  Oh?
 
 * Compute nodes aren't in Pacemaker cluster, hence, are lacking a viable
  control plane for their's compute/nova services.
 
  

Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Eugene Nikanorov
Wow, lots of feedback in a matter of hours.

First of all, reading postgres docs I see that READ COMMITTED is the same
as for mysql, so it should address the issue we're discussing:

*Read Committed* is the default isolation level in PostgreSQL. When a
transaction uses this isolation level, a SELECT query (without a FOR
UPDATE/SHARE clause) *sees only data committed before the query began (not
before TX began - Eugene)*; it never sees either uncommitted data or
changes committed during query execution by concurrent transactions. In
effect, a SELECT query sees a snapshot of the database as of the instant
the query begins to run. However, SELECT does see the effects of previous
updates executed within its own transaction, even though they are not yet
committed. *Also note that two successive **SELECT commands can see
different data, even though they are within a single transaction, if other
transactions commit changes during execution of the first SELECT. *
http://www.postgresql.org/docs/8.4/static/transaction-iso.html

So, in my opinion, unless neutron code has parts that rely on 'repeatable
read' transaction isolation level (and I believe such code is possible, I
haven't inspected it closely yet), switching to READ COMMITTED is fine for
mysql.
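
For reference, a minimal sketch of scoping READ COMMITTED to one connection
rather than changing the engine-wide default (the execution_options() call is
plain SQLAlchemy; the URL and the query are placeholders, and this is not the
code from the review above):

    from sqlalchemy import create_engine, text

    engine = create_engine('mysql://user:pass@127.0.0.1/neutron')  # placeholder

    conn = engine.connect().execution_options(
        isolation_level='READ COMMITTED')   # affects only this connection
    try:
        with conn.begin():
            # repeated reads here can see rows committed by concurrent
            # transactions, so a retry loop observes fresh data
            conn.execute(text('SELECT 1'))
    finally:
        conn.close()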

On the multi-master scenario: it is not really an advanced use case. It is
basic; we need to consider it as such and build the architecture with
respect to this fact.
The retry approach fits well here; however, it either requires a proper
isolation level or a redesign of the whole DB access layer.

Also, thanks Clint for the clarification about the example scenario described
by Mike Bayer.
Initially the issue was discovered with concurrent tests on a multi-master
environment with galera as the DB backend.

Thanks,
Eugene

On Thu, Nov 20, 2014 at 12:20 AM, Mike Bayer mba...@redhat.com wrote:


 On Nov 19, 2014, at 3:47 PM, Ryan Moats rmo...@us.ibm.com wrote:

 
 BTW, I view your examples from oslo as helping make my argument for
 me (and I don't think that was your intent :) )


 I disagree with that as IMHO the differences in producing MM in the app
 layer against arbitrary backends (Postgresql vs. DB2 vs. MariaDB vs. ???)
  will incur a lot more “bifurcation” than a system that targets only a
 handful of existing MM solutions.  The example I referred to in oslo.db is
 dealing with distinct, non MM backends.   That level of DB-specific code
 and more is a given if we are building a MM system against multiple
 backends generically.

 It’s not possible to say which approach would be better or worse at the
 level of “how much database specific application logic do we need”, though
 in my experience, no matter what one is trying to do, the answer is always,
 “tons”; we’re dealing not just with databases but also Python drivers that
 have a vast amount of differences in behaviors, at every level.On top
 of all of that, hand-rolled MM adds just that much more application code to
 be developed and maintained, which also claims it will do a better job than
 mature (ish?) database systems designed to do the same job against a
 specific backend.




   My reason for asking this question here is that if the community
   wants to consider #2, then these problems are the place to start
   crafting that solution - if we solve the conflicts inherent with the
   two concurrent thread scenarios, then I think we will find that 
   we've solved the multi-master problem essentially for free”.
 
  Maybe I’m missing something: if we learn how to write out a row such
  that a concurrent transaction against the same row doesn’t throw us
  off, where does the part where that data is replicated to databases
  running concurrently on other IP numbers, in a way that is atomic,
  come out of that effort “for free”?   A home-rolled “multi master”
  scenario would have to start with a system that has multiple
  create_engine() calls, since we need to communicate directly with
  multiple database servers. From there it gets really crazy.  Where’s all
  that?

 Boiled down, what you are talking about here w.r.t. concurrent
 transactions is really conflict resolution, which is the hardest
 part of implementing multi-master (as a side note, using locking in
 this case is the equivalent of option #1).

 All I wished to point out is that there are other ways to solve the
 conflict resolution that could then be leveraged into a multi-master
 scenario.

 As for the parts that I glossed over, once conflict resolution is
 separated out, replication turns into a much simpler problem with
 well understood patterns and so I view that part as coming
 for free.

 Ryan
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova] v3 integrated tests taking a lot longer?

2014-11-19 Thread Andrew Laski


On 11/19/2014 04:16 PM, Jay Pipes wrote:

On 11/19/2014 04:00 PM, Matt Riedemann wrote:

On 11/19/2014 9:40 AM, Jay Pipes wrote:

On 11/18/2014 06:48 PM, Matt Riedemann wrote:

I just started noticing today that the v3 integrated api samples tests
seem to be taking a lot longer than the other non-v3 integrated api
samples tests. On my 4 VCPU, 4 GB RAM VM some of those tests are taking
anywhere from 15-50+ seconds, while the non-v3 tests are taking less
than a second.

Has something changed recently in how the v3 API code is processed that
might have caused this?  With microversions or jsonschema validation
perhaps?

I was thinking it was oslo.db 1.1.0 at first since that was a recent
update but given the difference in times between v3 and non-v3 api
samples tests I'm thinking otherwise.


Heya,

I've been stung in the past by running either tox or run_tests.sh while
active in a virtualenv. The speed goes from ~2 seconds per API sample
test to ~15 seconds per API sample test...

Not sure if this is what is causing your problem, but worth a check.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



So...

Run: 13101 in 21466.460047 sec

...

mriedem@ubuntu:~/git/nova$ du -sh .testrepository/
1002M   .testrepository/

I removed that and I'm running again now.

We remove pyc files on each run, is there any reason why we couldn't
also remove .testrepository on every run?


Well, having some history is sometimes useful (for instance when you 
want to do: ./run_tests.sh -V --failing to execute only the tests that 
failed during the last run), so I think having a separate flag (-R) to 
run_tests.sh would be fine.


Testrepository also uses its history of test run times to try to group 
tests so that each thread takes about the same amount of time to run.




But, then again I just learned that run_tests.sh is apparently 
deprecated. Shame :(


-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Parallels loopback disk format support

2014-11-19 Thread Fei Long Wang
IIUC, the blueprint just wants to add a new image format, with no code
change in Glance, is that right? If that's the case, I'm wondering if we
really need a blueprint/spec, because the image format can be configured
in glance-api.conf. Please correct me if I missed anything. Cheers.
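
For reference, the oslo.config pattern that turns the accepted formats into
configuration rather than code looks roughly like this (a sketch from memory,
not copied from the Glance tree; the option and group names should be
double-checked against the release you run):

    # 2014-era namespace import; newer releases use `from oslo_config import cfg`
    from oslo.config import cfg

    image_format_opts = [
        cfg.ListOpt('disk_formats',
                    default=['ami', 'ari', 'aki', 'vhd', 'vmdk',
                             'raw', 'qcow2', 'vdi', 'iso'],
                    help='Supported values for the disk_format image attribute'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(image_format_opts, group='image_format')

    # A deployment that wants an extra format would then just extend the
    # disk_formats list in glance-api.conf instead of patching the code.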


On 20/11/14 02:27, Maxim Nestratov wrote:
 Greetings,

 In the scope of these changes [1], I would like to add a new image format
 into glance. For this purpose a blueprint [2] was created, and I would
 really appreciate it if someone from the glance team could review this
 proposal.

 [1] https://review.openstack.org/#/c/111335/
 [2] https://blueprints.launchpad.net/glance/+spec/pcs-support

 Best,

 Maxim Nestratov,
 Lead Software Developer,
 Parallels


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Who maintains the iCal meeting data?

2014-11-19 Thread Tony Breeds
On Wed, Nov 19, 2014 at 01:24:03PM +0100, Thierry Carrez wrote:

 The iCal is currently maintained by Anne (annegentle) and myself. In
 parallel, a small group is building a gerrit-powered agenda so that we
 can describe meetings in YAML and check for conflicts automatically, and
 build the ics automatically rather than manually.

Sounds good.
 
 That should still take a few weeks before we can migrate to that though,
 so in the mean time if you volunteer to keep the .ics up to date with
 changes to the wiki page, that would be of great help! It's maintained
 as a google calendar, I can add you to the ACL there if you send me your
 google email.

Done in a private email.

Yours Tony.


pgpe3V_V44aXA.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Mike Bayer

 On Nov 19, 2014, at 4:14 PM, Clint Byrum cl...@fewbar.com wrote:
 
 
 One simply cannot rely on multi-statement transactions to always succeed.

agree, but the thing you want is that the transaction either succeeds or 
explicitly fails, the latter hopefully in such a way that a retry can be added 
which has a chance at succeeding, if needed.  We have transaction replay logic 
in place in nova for example based on known failure conditions like concurrency 
exceptions, and this replay logic works, because it starts a new transaction.   
In this specific case, since it’s looping within a transaction where the data 
won’t change, it’ll never succeed, and the retry mechanism is useless.   But 
the isolation mode change won’t really help here as pointed out by Jay; 
discrete transactions have to be used instead.
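
In rough form that replay pattern looks like the sketch below (a simplified
illustration, not the actual nova or oslo.db helper; real code matches
specific deadlock/duplicate-entry exceptions rather than OperationalError in
general):

    import functools
    import time

    from sqlalchemy import exc as sa_exc

    def retry_db_operation(max_retries=5, delay=0.5):
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                for attempt in range(max_retries):
                    try:
                        # fn() is expected to open, use and commit its own
                        # session, so every retry runs in a brand new
                        # transaction and sees freshly committed data.
                        return fn(*args, **kwargs)
                    except sa_exc.OperationalError:
                        if attempt == max_retries - 1:
                            raise
                        time.sleep(delay)
            return wrapper
        return decorator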


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Eugene Nikanorov
 But the isolation mode change won’t really help here as pointed out by
Jay; discrete transactions have to be used instead.
I still think it will, per the postgres documentation (which might look
confusing, but still...).
It actually helps for mysql; that was confirmed. For postgres it appears to
be the same.

Thanks,
Eugene.

On Thu, Nov 20, 2014 at 12:56 AM, Mike Bayer mba...@redhat.com wrote:


  On Nov 19, 2014, at 4:14 PM, Clint Byrum cl...@fewbar.com wrote:
 
 
  One simply cannot rely on multi-statement transactions to always succeed.

 agree, but the thing you want is that the transaction either succeeds or
 explicitly fails, the latter hopefully in such a way that a retry can be
 added which has a chance at succeeding, if needed.  We have transaction
 replay logic in place in nova for example based on known failure conditions
 like concurrency exceptions, and this replay logic works, because it starts
 a new transaction.   In this specific case, since it’s looping within a
 transaction where the data won’t change, it’ll never succeed, and the retry
 mechanism is useless.   But the isolation mode change won’t really help
 here as pointed out by Jay; discrete transactions have to be used instead.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] mid-cycle meet-up planning ...

2014-11-19 Thread Jay S. Bryant

All,

For those of you who weren't able to make the Kilo meet-up in Paris, I 
wanted to send out a note regarding Cinder's Kilo mid-cycle meet-up.


IBM has offered to host it in warm, sunny Austin, Texas.  The planned 
dates are January 27, 28 and 29, 2015.


I have put together an etherpad with the current plan and will be 
keeping the etherpad updated as we continue to firm up the details: 
https://etherpad.openstack.org/p/cinder-kilo-midcycle-meetup


I need to have a good idea of how many people are planning to participate 
sooner rather than later, so that I can make sure we have a big enough 
room.  So, if you think you are going to be able to make it, please add 
your name to the 'Planned Attendees' list.


Again, we will also use Google Hangout to virtually include those who 
cannot be physically present.  I have a space in the etherpad to include 
your name if you wish to join that way.


I look forward to another successful meet-up with all of you!

Jay
(jungleboyj)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas] Abandon Old LBaaS V2 Review

2014-11-19 Thread Brandon Logan
Evgeny,
Since change sets got moved to the feature branch, this review has
remained on master.  It needs to be abandoned:

https://review.openstack.org/#/c/109849/

Thanks,
Brandon

On Mon, 2014-11-17 at 12:31 -0800, Stephen Balukoff wrote:
 Awesome!
 
 On Mon, Nov 10, 2014 at 9:10 AM, Susanne Balle sleipnir...@gmail.com
 wrote:
 Works for me. Susanne
 
 On Mon, Nov 10, 2014 at 10:57 AM, Brandon Logan
 brandon.lo...@rackspace.com wrote:
 https://wiki.openstack.org/wiki/Meetings#LBaaS_meeting
 
 That is updated for lbaas and advanced services with
 the new times.
 
 Thanks,
 Brandon
 
 On Mon, 2014-11-10 at 11:07 +, Doug Wiegley wrote:
  #openstack-meeting-4
 
 
   On Nov 10, 2014, at 10:33 AM, Evgeny Fedoruk
 evge...@radware.com wrote:
  
   Thanks,
   Evg
  
   -Original Message-
   From: Doug Wiegley [mailto:do...@a10networks.com]
   Sent: Friday, November 07, 2014 9:04 PM
   To: OpenStack Development Mailing List
   Subject: [openstack-dev] [neutron][lbaas] meeting
 day/time change
  
   Hi all,
  
   Neutron LBaaS meetings are now going to be
 Tuesdays at 16:00 UTC.
  
   Safe travels.
  
   Thanks,
   Doug
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
  
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
  
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
 -- 
 Stephen Balukoff 
 Blue Box Group, LLC 
 (800)613-4305 x807
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-19 Thread Thomas Goirand
On 11/19/2014 04:27 PM, Matthias Runge wrote:
 On 18/11/14 14:48, Thomas Goirand wrote:
 

  And then, does selenium continue to work for testing Horizon? If so,
 then the solution could be to send the .dll and .xpi files in non-free,
 and remove them from Selenium in main.

  Yes, it still works; that leaves the question of why they are included in
 the tarball at all.
 
 In Fedora, we do not distribute .dll or selenium xpi files with selenium
 at all.
 
 Matthias

Thanks for letting me know. I have opened a bug against the current
selenium package in non-free, to ask to have it uploaded in Debian main,
without the .xpi file. Let's see how it goes.

Cheers,

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] v3 integrated tests taking a lot longer?

2014-11-19 Thread Matt Riedemann



On 11/19/2014 3:34 PM, Andrew Laski wrote:


On 11/19/2014 04:16 PM, Jay Pipes wrote:

On 11/19/2014 04:00 PM, Matt Riedemann wrote:

On 11/19/2014 9:40 AM, Jay Pipes wrote:

On 11/18/2014 06:48 PM, Matt Riedemann wrote:

I just started noticing today that the v3 integrated api samples tests
seem to be taking a lot longer than the other non-v3 integrated api
samples tests. On my 4 VCPU, 4 GB RAM VM some of those tests are taking
anywhere from 15-50+ seconds, while the non-v3 tests are taking less
than a second.

Has something changed recently in how the v3 API code is processed that
might have caused this?  With microversions or jsonschema validation
perhaps?

I was thinking it was oslo.db 1.1.0 at first since that was a recent
update but given the difference in times between v3 and non-v3 api
samples tests I'm thinking otherwise.


Heya,

I've been stung in the past by running either tox or run_tests.sh while
active in a virtualenv. The speed goes from ~2 seconds per API sample
test to ~15 seconds per API sample test...

Not sure if this is what is causing your problem, but worth a check.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



So...

Run: 13101 in 21466.460047 sec

...

mriedem@ubuntu:~/git/nova$ du -sh .testrepository/
1002M   .testrepository/

I removed that and I'm running again now.

We remove pyc files on each run, is there any reason why we couldn't
also remove .testrepository on every run?


Well, having some history is sometimes useful (for instance when you
want to do: ./run_tests.sh -V --failing to execute only the tests that
failed during the last run), so I think having a separate flag (-R) to
run_tests.sh would be fine.


Testrepository also uses its history of test run times to try to group
tests so that each thread takes about the same amount of time to run.



But, then again I just learned that run_tests.sh is apparently
deprecated. Shame :(

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Turns out that my huge .testrepository is apparently not the issue, so 
I'll press on.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-11-19 Thread Blair Bethwaite
On 20 November 2014 05:25,  openstack-dev-requ...@lists.openstack.org wrote:
 --

 Message: 24
 Date: Wed, 19 Nov 2014 10:57:17 -0500
 From: Doug Hellmann d...@doughellmann.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Quota management and enforcement across
 projects
 Message-ID: 13f4f7a1-d4ec-4d14-a163-d477a4fd9...@doughellmann.com
 Content-Type: text/plain; charset=windows-1252


 On Nov 19, 2014, at 9:51 AM, Sylvain Bauza sba...@redhat.com wrote:
 My bad. Let me rephrase it. I'm seeing this service as providing added value 
 for managing quotas by ensuring consistency across all projects. But as I 
 said, I'm also thinking that the quota enforcement has still to be done at 
 the customer project level.

 Oh, yes, that is true. I envision the API for the new service having a call 
 that means “try to consume X units of a given quota” and that it would return 
 information about whether that can be done. The apps would have to define 
 what quotas they care about, and make the appropriate calls.
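
For concreteness, such a call might look roughly like the sketch below
(purely hypothetical; no such service or client exists today, and the
endpoint, path and field names are all invented for illustration):

    import requests

    class QuotaClient(object):
        def __init__(self, endpoint, token):
            self.endpoint = endpoint.rstrip("/")
            self.headers = {"X-Auth-Token": token}

        def try_consume(self, project_id, resource, amount):
            """Ask the central service to reserve `amount` units of `resource`.

            The consuming project (nova, cinder, ...) decides what to do when
            the reservation is refused, e.g. raise its own over-quota error.
            """
            resp = requests.post(
                self.endpoint + "/v1/reservations",
                headers=self.headers,
                json={"project_id": project_id,
                      "resource": resource,
                      "amount": amount},
            )
            resp.raise_for_status()
            # e.g. {"granted": true, "remaining": 7}
            return resp.json()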

For actions initiated directly through core OpenStack service APIs
(Nova, Cinder, Neutron, etc - anything using Keystone policy),
shouldn't quota-enforcement be handled by Keystone? To me this is just
a subset of authz, and OpenStack already has a well established
service for such decisions.

It sounds like the idea here is to provide something generic that
could be used outside of OpenStack? I worry that might be premature
scope creep that detracts from the outcome.

-- 
Cheers,
~Blairo

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] APIImpact flag for specs

2014-11-19 Thread Everett Toews

On Nov 13, 2014, at 2:06 PM, Everett Toews 
everett.to...@rackspace.com wrote:

On Nov 12, 2014, at 10:45 PM, Angus Salkeld 
asalk...@mirantis.com wrote:

On Sat, Nov 1, 2014 at 6:45 AM, Everett Toews 
everett.to...@rackspace.com wrote:
Hi All,

Chris Yeoh started the use of an APIImpact flag in commit messages for specs in 
Nova. It adds a requirement for an APIImpact flag in the commit message for a 
proposed spec if it proposes changes to the REST API. This will make it much 
easier for people such as the API Working Group who want to review API changes 
across OpenStack to find and review proposed API changes.
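
For anyone who hasn't seen the convention, a spec commit message carrying the
flag looks roughly like this (illustrative only; the summary and blueprint
name are made up):

    Add frobnicate action to the servers API

    This spec proposes a new action on the servers resource and therefore
    changes the REST API.

    APIImpact
    Implements: blueprint example-frobnicate-api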

For example, specifications with the APIImpact flag can be found with the 
following query:

https://review.openstack.org/#/q/status:open+project:openstack/nova-specs+message:apiimpact,n,z

Chris also proposed a similar change to many other projects and I did the rest. 
Here’s the complete list if you’d like to review them.

Barbican: https://review.openstack.org/131617
Ceilometer: https://review.openstack.org/131618
Cinder: https://review.openstack.org/131620
Designate: https://review.openstack.org/131621
Glance: https://review.openstack.org/131622
Heat: https://review.openstack.org/132338
Ironic: https://review.openstack.org/132340
Keystone: https://review.openstack.org/132303
Neutron: https://review.openstack.org/131623
Nova: https://review.openstack.org/#/c/129757
Sahara: https://review.openstack.org/132341
Swift: https://review.openstack.org/132342
Trove: https://review.openstack.org/132346
Zaqar: https://review.openstack.org/132348

There are even more projects in stackforge that could use a similar change. If 
you know of a project in stackforge that would benefit from using an APIImpact 
flag in its specs, please propose the change and let us know here.


I seem to have missed this, I'll place my review comment here too.

I like the general idea of getting a more consistent/better API. But is 
reviewing every spec across all projects just going to introduce a new 
non-scalable bottleneck into our workflow (given the increasing move away from 
this approach: moving functional tests to projects, getting projects to do more 
of their own docs, etc.)? Wouldn't a better approach be to have an API liaison 
in each project who can keep track of new guidelines and catch potential 
problems?

I see a new section has been added here: 
https://wiki.openstack.org/wiki/CrossProjectLiaisons

Isn't that enough?

I replied in the review. We’ll continue the discussion there.

The cross project liaisons are a big help, but the APIImpact flag lets the API WG 
automate discovery of API-changing specs. It's just one more tool in the box to 
help us find changes that impact the API.

Note that the patch says nothing about requiring a review from someone 
associated with the API WG. If you add the APIImpact flag and nobody comes 
along to review it, continue on as normal.

The API WG is not intended to be a gatekeeper of every change to every API. As 
you say, that doesn't scale. We don't want to be a bottleneck. However, tools 
such as the APIImpact flag can help us be more effective.

(Angus suggested I give my review comment a bit more visibility. I agree :)

Everett

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API-WG meeting (note time change this week)

2014-11-19 Thread Christopher Yeoh
On Wed, 19 Nov 2014 19:34:44 +
Everett Toews everett.to...@rackspace.com wrote:
 
 2. Do you know if there is a way to subscribe to only the API WG
 meeting from that calendar?

I haven't been able to find a way to do that. Fortunately for me most
of the openstack meetings end up being between 12am and 8am so it
doesn't actually clutter up my calendar view ;-)

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] LP/review cleanup day

2014-11-19 Thread Angus Salkeld
Hi all

As an action from our meeting I'd like to announce a cleanup day on the 2nd
of December.

What does this mean?

1) We have been noticing a lot of old and potentially out-of-date bugs that
need some attention (re-test/triage/mark invalid). Also, we have 97 bugs
in progress; I wonder if that is real? Maybe some have partial fixes and have
been left in progress.

2) We probably need to do a manual abandon on some really old reviews
so they don't clutter up the review queue.

3) We have a lot of out-of-date blueprints that basically need deleting.
We need to go through and agree on a list of them to kill.

Regards
Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

