Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-11 Thread Kenichi Oomichi

> -----Original Message-----
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Tuesday, September 09, 2014 4:29 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Kilo Cycle Goals Exercise
> 
> > 3. Another long-term topic is standardizing our APIs so that we use
> > consistent terminology and formatting (I think we have at least 3 forms
> > of errors returned now?). I’m not sure we have anyone ready to drive
> > this, yet, so I don’t think it’s something to consider for Kilo.
> 
> +10
> 
> Frankly, I believe this should be our #1 priority from a cross-project
> perspective.
> 
> The inconsistencies in the current APIs (even within the same project's
> APIs!) are just poor form, and since our REST APIs are the very first
> impression that we give to the outside developer community, it really is
> incumbent on us to make sure they are explicit, free of side-effects,
> well-documented, consistent, easy-to-use, and hide implementation
> details thoroughly.

+1

REST API consistency is important across all OpenStack projects.
The list in my mind is:
 * API URL/attribute naming
 * HTTP status codes on success
 * Behavior on invalid input (HTTP status code, message in the response)
It will be difficult to apply these to every project, but it would be
worthwhile for improving quality from the viewpoint of the outside world.
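To make the inconsistency concrete, here is a purely illustrative sketch
(the field names below are invented for illustration, not taken from any
real project) of two divergent error bodies next to one possible common
shape that projects could converge on:

    # Hypothetical error bodies, for illustration only.
    service_a_error = {"badRequest": {"code": 400,
                                      "message": "Invalid volume size"}}
    service_b_error = {"error": {"title": "Bad Request",
                                 "description": "Invalid volume size"}}

    # One possible common format:
    common_error = {"error": {"code": 400,
                              "message": "Invalid volume size",
                              "details": "size must be a positive integer"}}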

Thanks
Ken'ichi Ohmichi




Re: [openstack-dev] [Horizon]Dev environment default login

2014-09-11 Thread Richard Jones
It needs to point at a devstack or OpenStack installation for login to work.

On 12 September 2014 15:50, Rajdeep Dua  wrote:

> Hi,
> I have set up a local dev environment with a custom dashboard using the
> instructions below
>
> http://docs.openstack.org/developer/horizon/topics/tutorial.html
>
> Horizon was started using ./run_tests.sh --runserver 0.0.0.0:8877
> What is the default password for the admin login?
>
> Note: it is not pointing to any OpenStack installation yet
>
> Thanks
> Rajdeep
>
>
>


[openstack-dev] [Horizon]Dev environment default login

2014-09-11 Thread Rajdeep Dua
Hi,
I have set up a local dev environment with a custom dashboard using the
instructions below

http://docs.openstack.org/developer/horizon/topics/tutorial.html

Horizon was started using ./run_tests.sh --runserver 0.0.0.0:8877
What is the default password for the admin login?

Note: it is not pointing to any OpenStack installation yet

Thanks
Rajdeep


Re: [openstack-dev] [Neutron][Architecture]Suggestions for the third vendors' plugin and driver

2014-09-11 Thread Kevin Benton
> Maybe I missed something, but what's the solution?

There isn't one yet. That's why it's going to be discussed at the summit.

> I think we should release a workable version.

Definitely. But that doesn't have anything to do with it living in the same
repository. Putting it in a different repo gives new contributors who want to
become core developers a smaller code base to learn, in addition to a clear
separation between plugins and core code.

> Besides the user experience, the open source drivers are also used for
developing and verifying new features, even in small-scale cases.

Sure, but this also isn't affected by the code being in a separate repo.

> The community should, and need only, focus on the Neutron core and provide
a framework for vendors' devices.

I agree, but without the open source drivers being separated as well, it's
very difficult for the framework for external drivers to be stable enough
to be useful.

On Thu, Sep 11, 2014 at 7:24 PM, Germy Lure  wrote:

> Some comments inline.
>
> BR,
> Germy
>
> On Thu, Sep 11, 2014 at 5:47 PM, Kevin Benton  wrote:
>
>> This has been brought up several times already and I believe is going to
>> be discussed at the Kilo summit.
>>
> Maybe I missed something, but what's the solution?
>
>
>> I agree that reviewing third party patches eats community time. However,
>> claiming that the community pays 46% of its energy to maintain
>> vendor-specific code doesn't make any sense. LOC in the repo has very
>> little to do with ongoing required maintenance. Assuming the APIs for the
>> plugins stay consistent, there should be few 'maintenance' changes required
>> to a plugin once it's in the tree. If there are that many changes to
>> plugins just to keep them operational, that means Neutron is far too
>> unstable to support drivers living outside of the tree anyway.
>>
> Yes, you are right: "Neutron is far too unstable to support drivers living
> outside of the tree anyway". So I think this is really the important point.
> The community should focus on standardizing the NB & SB APIs and on
> introducing and improving new features, NOT on wasting energy introducing
> and maintaining vendor-specific code.
>
>>
>> On a related note, if we are going to pull plugins/drivers out of
>> Neutron, I think all of them should be removed, including the OVS and
>> LinuxBridge ones. There is no reason for them to be there if Neutron has
>> stable enough internal APIs to eject the 3rd party plugins from the repo.
>> They should be able to live in a separate neutron-opensource-drivers repo
>> or something along those lines. This will free up significant amounts of
>> developer/reviewer cycles for neutron to work on the API refactor, task
>> based workflows, performance improvements for the DB operations, etc.
>>
> I think we should release a workable version. Users can experience the
> functions powered by built-in components, and they can replace them with
> releases from the vendors they work with. The community should not
> work on vendors' code.
>
>>
>> If the open source drivers stay in the tree and the others are removed,
>> there is little incentive to keep the internal APIs stable and 3rd party
>> drivers sitting outside of the tree will break on every refactor or data
>> structure change. If that's the way we want to treat external driver
>> developers, let's be explicit about it and just post warnings that 3rd
>> party drivers can break at any point and that the onus is on the external
>> developers to learn what changed and react to it. At some point they will
>> stop bothering with Neutron completely in their deployments and mimic its
>> public API.
>>
> Besides the user experience, the open source drivers are also used for
> developing and verifying new features, even in small-scale cases.
>
>>
>> A clear separation of the open source drivers/plugins and core Neutron
>> would give a much better model for 3rd party driver developers to follow
>> and would enforce a stable internal API in the Neutron core.
>>
> The community should, and need only, focus on the Neutron core and provide
> a framework for vendors' devices. Vendors just need to adapt to the Neutron
> API and focus on their code's quality. If not, I think the architecture is
> not right. Everyone should only carry their own monkey.
>
>>
>>
>>
>> On Thu, Sep 11, 2014 at 1:54 AM, Germy Lure  wrote:
>>
>>> Hi stackers,
>>>
>>> According to my statistics (J2), the LOC of vendors' plugins and drivers
>>> is about 102K, while the whole Neutron tree is 220K.
>>> That is to say, the community has paid and is paying over 46% of its
>>> energy to maintain vendors' code. If we take mails, bugs,
>>> BPs and so on into consideration, this percentage will be even higher.
>>>
>>> Most of this code is just plugins and drivers implementing almost the
>>> same functions. Every vendor submits a plugin, and the community does
>>> the same thing, over and over. Meaningless. I think it's time to move
>>> them out.
>>> Let's focus on improving those

Re: [openstack-dev] [Neutron][Architecture]Suggestions for the third vendors' plugin and driver

2014-09-11 Thread Germy Lure
Some comments inline.

BR,
Germy

On Thu, Sep 11, 2014 at 5:47 PM, Kevin Benton  wrote:

> This has been brought up several times already and I believe is going to
> be discussed at the Kilo summit.
>
Maybe I missed something, but what's the solution?


> I agree that reviewing third party patches eats community time. However,
> claiming that the community pays 46% of its energy to maintain
> vendor-specific code doesn't make any sense. LOC in the repo has very
> little to do with ongoing required maintenance. Assuming the APIs for the
> plugins stay consistent, there should be few 'maintenance' changes required
> to a plugin once it's in the tree. If there are that many changes to
> plugins just to keep them operational, that means Neutron is far too
> unstable to support drivers living outside of the tree anyway.
>
Yes, you are right: "Neutron is far too unstable to support drivers living
outside of the tree anyway". So I think this is really the important point.
The community should focus on standardizing the NB & SB APIs and on
introducing and improving new features, NOT on wasting energy introducing
and maintaining vendor-specific code.

>
> On a related note, if we are going to pull plugins/drivers out of Neutron,
> I think all of them should be removed, including the OVS and LinuxBridge
> ones. There is no reason for them to be there if Neutron has stable enough
> internal APIs to eject the 3rd party plugins from the repo. They should be
> able to live in a separate neutron-opensource-drivers repo or something
> along those lines. This will free up significant amounts of
> developer/reviewer cycles for neutron to work on the API refactor, task
> based workflows, performance improvements for the DB operations, etc.
>
I think we should release a workable version. Users can experience the
functions powered by built-in components, and they can replace them with
releases from the vendors they work with. The community should not
work on vendors' code.

>
> If the open source drivers stay in the tree and the others are removed,
> there is little incentive to keep the internal APIs stable and 3rd party
> drivers sitting outside of the tree will break on every refactor or data
> structure change. If that's the way we want to treat external driver
> developers, let's be explicit about it and just post warnings that 3rd
> party drivers can break at any point and that the onus is on the external
> developers to learn what changed and react to it. At some point they will
> stop bothering with Neutron completely in their deployments and mimic its
> public API.
>
Besides the user experience, the open source drivers are also used for
developing and verifying new features, even in small-scale cases.

>
> A clear separation of the open source drivers/plugins and core Neutron
> would give a much better model for 3rd party driver developers to follow
> and would enforce a stable internal API in the Neutron core.
>
The community should, and need only, focus on the Neutron core and provide
a framework for vendors' devices. Vendors just need to adapt to the Neutron
API and focus on their code's quality. If not, I think the architecture is
not right. Everyone should only carry their own monkey.

>
>
>
> On Thu, Sep 11, 2014 at 1:54 AM, Germy Lure  wrote:
>
>> Hi stackers,
>>
>> According to my statistics (J2), the LOC of vendors' plugins and drivers
>> is about 102K, while the whole Neutron tree is 220K.
>> That is to say, the community has paid and is paying over 46% of its
>> energy to maintain vendors' code. If we take mails, bugs,
>> BPs and so on into consideration, this percentage will be even higher.
>>
>> Most of this code is just plugins and drivers implementing almost the
>> same functions. Every vendor submits a plugin, and the community does
>> the same thing, over and over. Meaningless. I think it's time to move
>> them out.
>> Let's focus on improving those existing but still weak features, and on
>> introducing important and interesting new features.
>>
>> My suggestions now:
>> 1. Monopolized plugins
>>   1) The community only standardizes the NB API and keeps the built-ins,
>> such as the ML2, OVS and Linux bridge plugins.
>>   2) Vendors maintain their plugins locally.
>>   3) Users get Neutron from the community and a plugin from a vendor on
>> demand.
>> 2. Service plugins
>>   1) The community standardizes the SB API and keeps the open source
>> drivers (iptables, Openswan, etc.) as built-ins.
>>   2) Vendors only provide drivers, not plugins. And those drivers need
>> not be delivered to the community.
>>   3) As above, users can get code on demand from vendors or just use the
>> open source drivers.
>> 3. ML2 plugin
>>   1) As with service and monopolized plugins, the community just keeps the
>> open source implementations as built-ins.
>>   2) L2-population should be kept.
>>
>> I am very happy to discuss this further.
>>
>> vendors' code stat. table(excluding built-in plugins and drivers)
>> 
>> Path

Re: [openstack-dev] Supporting Javascript clients calling OpenStack APIs

2014-09-11 Thread Richard Jones
On 12 September 2014 09:24, Richard Jones  wrote:

> On 12 September 2014 07:50, Adam Young  wrote:
>
>> So, let's have these two approaches work in parallel. The proxy will get
>> things going while we work out the CORS approach.
>>
>
> I will look at submitting my middleware for inclusion anyway then.
>

Submitted to oslo.middleware https://review.openstack.org/#/c/120964/


 Richard


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-11 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-09-11 15:21:26 -0700:
> On 09/09/14 19:56, Clint Byrum wrote:
> > Excerpts from Samuel Merritt's message of 2014-09-09 16:12:09 -0700:
> >> On 9/9/14, 12:03 PM, Monty Taylor wrote:
> >>> On 09/04/2014 01:30 AM, Clint Byrum wrote:
>  Excerpts from Flavio Percoco's message of 2014-09-04 00:08:47 -0700:
> > Greetings,
> >
> > Last Tuesday the TC held the first graduation review for Zaqar. During
> > the meeting some concerns arose. I've listed those concerns below with
> > some comments hoping that it will help starting a discussion before the
> > next meeting. In addition, I've added some comments about the project
> > stability at the bottom and an etherpad link pointing to a list of use
> > cases for Zaqar.
> >
> 
>  Hi Flavio. This was an interesting read. As somebody whose attention has
>  recently been drawn to Zaqar, I am quite interested in seeing it
>  graduate.
> 
> > # Concerns
> >
> > - Concern on operational burden of requiring NoSQL deploy expertise to
> > the mix of openstack operational skills
> >
> > For those of you not familiar with Zaqar, it currently supports 2 nosql
> > drivers - MongoDB and Redis - and those are the only 2 drivers it
> > supports for now. This will require operators willing to use Zaqar to
> > maintain a new (?) NoSQL technology in their system. Before expressing
> > our thoughts on this matter, let me say that:
> >
> >   1. By removing the SQLAlchemy driver, we basically removed the chance
> > for operators to use an already deployed "OpenStack-technology"
> >   2. Zaqar won't be backed by any AMQP based messaging technology for
> > now. Here's[0] a summary of the research the team (mostly done by
> > Victoria) did during Juno
> >   3. We (OpenStack) used to require Redis for the zmq matchmaker
> >   4. We (OpenStack) also use memcached for caching and as the oslo
> > caching lib becomes available - or a wrapper on top of dogpile.cache -
> > Redis may be used in place of memcached in more and more deployments.
> >   5. Ceilometer's recommended storage driver is still MongoDB, although
> > Ceilometer now has support for sqlalchemy. (Please correct me if I'm
> > wrong).
> >
> > That being said, it's obvious we already, to some extent, promote some
> > NoSQL technologies. However, for the sake of the discussion, let's assume
> > we don't.
> >
> > I truly believe, with my OpenStack (not Zaqar's) hat on, that we can't
> > keep avoiding these technologies. NoSQL technologies have been around
> > for years and we should be prepared - including OpenStack operators - to
> > support these technologies. Not every tool is good for all tasks - one
> > of the reasons we removed the sqlalchemy driver in the first place -
> > therefore it's impossible to keep a homogeneous environment for all
> > services.
> >
> 
 I wholeheartedly agree that non-traditional storage technologies that
>  are becoming mainstream are good candidates for use cases where SQL
>  based storage gets in the way. I wish there wasn't so much FUD
>  (warranted or not) about MongoDB, but that is the reality we live in.
> 
> > With this, I'm not suggesting to ignore the risks and the extra burden
> > this adds but, instead of attempting to avoid it completely by not
> > evolving the stack of services we provide, we should probably work on
> > defining a reasonable subset of NoSQL services we are OK with
> > supporting. This will help making the burden smaller and it'll give
> > operators the option to choose.
> >
> > [0] http://blog.flaper87.com/post/marconi-amqp-see-you-later/
> >
> >
> > - Concern on should we really reinvent a queue system rather than
> > piggyback on one
> >
> > As mentioned in the meeting on Tuesday, Zaqar is not reinventing message
> > brokers. Zaqar provides a service akin to SQS from AWS with an OpenStack
> > flavor on top. [0]
> >
> 
>  I think Zaqar is more like SMTP and IMAP than AMQP. You're not really
>  trying to connect two processes in real time. You're trying to do fully
>  asynchronous messaging with fully randomized access to any message.
> 
>  Perhaps somebody should explore whether the approaches taken by large
>  scale IMAP providers could be applied to Zaqar.
> 
>  Anyway, I can't imagine writing a system to intentionally use the
>  semantics of IMAP and SMTP. I'd be very interested in seeing actual use
>  cases for it, apologies if those have been posted before.
> >>>
> >>> It seems like you're EITHER describing something called XMPP that has at
> >>> least one open source scalable backend called ejabberd. OR, you've
> >>> actually hit the nail on the 

Re: [openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

2014-09-11 Thread Lu, Lianhao
Definitely +1 from me

Lianhao

> -----Original Message-----
> From: Julien Danjou [mailto:jul...@danjou.info]
> Sent: Thursday, September 11, 2014 9:25 PM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core
> 
> Hi,
> 
> Dina has been doing great work and has been very helpful during the
> Juno cycle; her help is very valuable. She's been doing a lot of
> reviews and has been very active in our community.
> 
> I'd like to propose that we add Dina Belova to the ceilometer-core
> group, as I'm convinced it'll help the project.
> 
> Please, dear ceilometer-core members, reply with your votes!
> 
> --
> Julien Danjou
> // Free Software hacker
> // http://julien.danjou.info



Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-09-11 Thread Jamie Lennox


- Original Message -
> From: "Travis S Tripp" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Friday, 12 September, 2014 10:30:53 AM
> Subject: [openstack-dev] masking X-Auth-Token in debug output - proposed 
> consistency
> 
> 
> 
> Hi All,
> 
> 
> 
> I’m just helping with bug triage in Glance and we’ve got a bug to update how
> tokens are redacted in the glanceclient [1]. It says to update to whatever
> cross-project approach is agreed upon and references this thread:
> 
> 
> 
> http://lists.openstack.org/pipermail/openstack-dev/2014-June/037345.html
> 
> 
> 
> I just went through the thread and as best as I can tell there wasn’t a
> conclusion in the ML. However, if we are going to do anything, IMO the
> thread leans toward {SHA1}, with Morgan Fainberg dissenting.
> However, he references a patch that was ultimately abandoned.
> 
> 
> 
> If there was a conclusion to this, please let me know so I can update and
> work on closing this bug.

We handle this in the keystoneclient Session object by just printing REDACTED
or something similar. The problem with using a SHA1 is that, for backwards
compatibility, we often use the SHA1 of a PKI token as if it were a UUID token,
and so the hash is still sensitive data. There is work in keystone by
morganfainberg (which I think was merged) to add a new audit_id which will be
able to identify a token across calls without exposing any sensitive
information. We will support this in the session when available.

The best I can say for standardization is that when glanceclient adopts the
session it will be handled the same way as all the other clients, and
improvements can happen there without you having to worry about it.
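For illustration, a minimal sketch of the masking approach described above
(this is not the actual keystoneclient code, just the idea):

    SENSITIVE_HEADERS = ('X-Auth-Token', 'X-Subject-Token')

    def safe_headers(headers):
        # Return a copy of the request headers that is safe to log.
        safe = dict(headers)
        for name in SENSITIVE_HEADERS:
            if name in safe:
                # A plain placeholder avoids the PKI problem noted above:
                # the SHA1 of a PKI token is itself usable as a token ID,
                # so even a {SHA1} digest is sensitive data.
                safe[name] = 'REDACTED'
        return safe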


Jamie

 
> [1] https://bugs.launchpad.net/python-glanceclient/+bug/1329301
> 
> 
> 
> Thanks,
> Travis
> 



Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-09-11 Thread Morgan Fainberg
Hi Travis,

By and large we have addressed this in the Session code within Keystoneclient 
via the function here (and other similar cases): 
https://github.com/openstack/python-keystoneclient/blob/01cabf6bbbee8b5340295f3be5e1fa7111387e7d/keystoneclient/session.py#L126-L131

If/when Glanceclient is moved to consuming the session code, it should help
alleviate the issues with printing the token IDs in the logs themselves.

Along with the changes for the session code, all tokens issued from Keystone
(Juno and beyond) will also include audit_id fields that are safe to use in
logging (they are part of the token data). There are two elements to the
audit_ids field: the first (which will always exist) is the local token's
audit_id (audit ids are randomly generated and should be considered as globally
unique as a UUID). The second element will exist if the token has ever been
part of a rescope (exchange of a token for another token of a different scope,
e.g. changing to a new project/tenant); it is the audit_id of the first token
in the chain (unique for the entire chain of tokens).
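As a concrete sketch of the structure described above (the id values here
are made up):

    # A freshly issued token carries a single audit id:
    token = {'audit_ids': ['VcxU_Y6eToqeeFn0UoXU3w']}

    # After a rescope, the new token lists its own audit id first,
    # followed by the audit id of the first token in the chain:
    rescoped = {'audit_ids': ['qIOZzydgRbSs1nJ4KbKU7A',
                              'VcxU_Y6eToqeeFn0UoXU3w']}

Logging those values identifies a token (and its chain) across calls without
exposing anything that can be replayed as a credential.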

I don’t believe we’re exposing the audit_ids yet to the services behind the 
auth_token middleware nor using them for logging in cases such as the above 
linked logging function. I would like to eventually see the audit_ids used 
(where they exist) for logging cases like this.

I’m sure Jamie Lennox can chime in and provide a bit more insight as to the 
status of converting Glanceclient to using session as I know he’s been working 
on the client front in this regard. I hope that sometime within the K 
development cycle timeline we will be converting the logging over to audit_ids 
where possible (but that has not been 100% decided on).

Cheers,
Morgan

—
Morgan Fainberg


-----Original Message-----
From: Tripp, Travis S 
Reply: OpenStack Development Mailing List (not for usage questions) 
>
Date: September 11, 2014 at 17:35:30
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject:  [openstack-dev] masking X-Auth-Token in debug output - proposed 
consistency

> Hi All,
>  
> I'm just helping with bug triage in Glance and we've got a bug to update how 
> tokens are redacted  
> in the glanceclient [1]. It says to update to whatever cross-project approach 
> is agreed  
> upon and references this thread:
>  
> http://lists.openstack.org/pipermail/openstack-dev/2014-June/037345.html  
>  
> I just went through the thread and as best as I can tell there wasn't a 
> conclusion in the  
> ML. However, if we are going to do anything, IMO the thread leans toward 
> {SHA1},  
> with Morgan Fainberg dissenting. However, he references a patch that was 
> ultimately  
> abandoned.
>  
> If there was a conclusion to this, please let me know so I can update and 
> work on closing  
> this bug.
>  
> [1] https://bugs.launchpad.net/python-glanceclient/+bug/1329301
>  
> Thanks,
> Travis




Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-11 Thread Jamie Lennox


- Original Message -
> From: "Steven Hardy" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Friday, 12 September, 2014 12:21:52 AM
> Subject: Re: [openstack-dev] [all] [clients] [keystone] lack of retrying 
> tokens leads to overall OpenStack fragility
> 
> On Wed, Sep 10, 2014 at 08:46:45PM -0400, Jamie Lennox wrote:
> > 
> > - Original Message -
> > > From: "Steven Hardy" 
> > > To: "OpenStack Development Mailing List (not for usage questions)"
> > > 
> > > Sent: Thursday, September 11, 2014 1:55:49 AM
> > > Subject: Re: [openstack-dev] [all] [clients] [keystone] lack of retrying
> > > tokens leads to overall OpenStack fragility
> > > 
> > > On Wed, Sep 10, 2014 at 10:14:32AM -0400, Sean Dague wrote:
> > > > Going through the untriaged Nova bugs, and there are a few on a similar
> > > > pattern:
> > > > 
> > > > Nova operation in progress takes a while
> > > > Crosses keystone token expiration time
> > > > Timeout thrown
> > > > Operation fails
> > > > Terrible 500 error sent back to user
> > > 
> > > We actually have this exact problem in Heat, which I'm currently trying
> > > to
> > > solve:
> > > 
> > > https://bugs.launchpad.net/heat/+bug/1306294
> > > 
> > > Can you clarify, is the issue either:
> > > 
> > > 1. Create novaclient object with username/password
> > > 2. Do series of operations via the client object which eventually fail
> > > after $n operations due to token expiry
> > > 
> > > or:
> > > 
> > > 1. Create novaclient object with username/password
> > > 2. Some really long operation which means token expires in the course of
> > > the service handling the request, blowing up and 500-ing
> > > 
> > > If the former, then it does sound like a client, or usage-of-client bug,
> > > although note if you pass a *token* vs username/password (as is currently
> > > done for glance and heat in tempest, because we lack the code to get the
> > > token outside of the shell.py code..), there's nothing the client can do,
> > > because you can't request a new token with longer expiry with a token...
> > > 
> > > However if the latter, then it seems like not really a client problem to
> > > solve, as it's hard to know what action to take if a request failed
> > > part-way through and thus things are in an unknown state.
> > > 
> > > This issue is a hard problem, which can possibly be solved by
> > > switching to a trust scoped token (service impersonates the user), but
> > > then you're effectively bypassing token expiry via delegation which sits
> > > uncomfortably with me (despite the fact that we may have to do this in
> > > heat to solve the aforementioned bug)
> > > 
> > > > It seems like we should have a standard pattern that on token
> > > > expiration
> > > > the underlying code at least gives one retry to try to establish a new
> > > > token to complete the flow, however as far as I can tell *no* clients
> > > > do
> > > > this.
> > > 
> > > As has been mentioned, using sessions may be one solution to this, and
> > > AFAIK session support (where it doesn't already exist) is getting into
> > > various clients via the work being carried out to add support for v3
> > > keystone by David Hu:
> > > 
> > > https://review.openstack.org/#/q/owner:david.hu%2540hp.com,n,z
> > > 
> > > I see patches for Heat (currently gating), Nova and Ironic.
> > > 
> > > > I know we had to add that into Tempest because tempest runs can exceed
> > > > 1
> > > > hr, and we want to avoid random fails just because we cross a token
> > > > expiration boundary.
> > > 
> > > I can't claim great experience with sessions yet, but AIUI you could do
> > > something like:
> > > 
> > > from keystoneclient.auth.identity import v3
> > > from keystoneclient import session
> > > from keystoneclient.v3 import client
> > > 
> > > auth = v3.Password(auth_url=OS_AUTH_URL,
> > >username=USERNAME,
> > >password=PASSWORD,
> > >project_id=PROJECT,
> > >user_domain_name='default')
> > > sess = session.Session(auth=auth)
> > > ks = client.Client(session=sess)
> > > 
> > > And if you can pass the same session into the various clients tempest
> > > creates then the Password auth-plugin code takes care of reauthenticating
> > > if the token cached in the auth plugin object is expired, or nearly
> > > expired:
> > > 
> > > https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/auth/identity/base.py#L120
> > > 
> > > So in the tempest case, it seems like it may be a case of migrating the
> > > code creating the clients to use sessions instead of passing a token or
> > > username/password into the client object?
> > > 
> > > That's my understanding of it atm anyway, hopefully jamielennox will be
> > > along soon with more details :)
> > > 
> > > Steve
> > 
> > 
> > By clients here are you referring to the CLIs or the python libraries?
> > Implementation is at different points wit

[openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-09-11 Thread Tripp, Travis S
Hi All,

I'm just helping with bug triage in Glance and we've got a bug to update how 
tokens are redacted in the glanceclient [1].  It says to update to whatever 
cross-project approach is agreed upon and references this thread:

http://lists.openstack.org/pipermail/openstack-dev/2014-June/037345.html

I just went through the thread and as best as I can tell there wasn't a 
conclusion in the ML.  However, if we are going to do anything, IMO the thread 
leans toward {SHA1}, with Morgan Fainberg dissenting.  However, he 
references a patch that was ultimately abandoned.

If there was a conclusion to this, please let me know so I can update and work 
on closing this bug.

[1] https://bugs.launchpad.net/python-glanceclient/+bug/1329301

Thanks,
Travis


Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-11 Thread Jamie Lennox


- Original Message -
> From: "Sean Dague" 
> To: openstack-dev@lists.openstack.org
> Sent: Thursday, 11 September, 2014 9:44:43 PM
> Subject: Re: [openstack-dev] [all] [clients] [keystone] lack of retrying 
> tokens leads to overall OpenStack fragility
> 
> On 09/10/2014 08:46 PM, Jamie Lennox wrote:
> > 
> > - Original Message -
> >> From: "Steven Hardy" 
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >> 
> >> Sent: Thursday, September 11, 2014 1:55:49 AM
> >> Subject: Re: [openstack-dev] [all] [clients] [keystone] lack of retrying
> >> tokens leads to overall OpenStack fragility
> >>
> >> On Wed, Sep 10, 2014 at 10:14:32AM -0400, Sean Dague wrote:
> >>> Going through the untriaged Nova bugs, and there are a few on a similar
> >>> pattern:
> >>>
> >>> Nova operation in progress takes a while
> >>> Crosses keystone token expiration time
> >>> Timeout thrown
> >>> Operation fails
> >>> Terrible 500 error sent back to user
> >>
> >> We actually have this exact problem in Heat, which I'm currently trying to
> >> solve:
> >>
> >> https://bugs.launchpad.net/heat/+bug/1306294
> >>
> >> Can you clarify, is the issue either:
> >>
> >> 1. Create novaclient object with username/password
> >> 2. Do series of operations via the client object which eventually fail
> >> after $n operations due to token expiry
> >>
> >> or:
> >>
> >> 1. Create novaclient object with username/password
> >> 2. Some really long operation which means token expires in the course of
> >> the service handling the request, blowing up and 500-ing
> >>
> >> If the former, then it does sound like a client, or usage-of-client bug,
> >> although note if you pass a *token* vs username/password (as is currently
> >> done for glance and heat in tempest, because we lack the code to get the
> >> token outside of the shell.py code..), there's nothing the client can do,
> >> because you can't request a new token with longer expiry with a token...
> >>
> >> However if the latter, then it seems like not really a client problem to
> >> solve, as it's hard to know what action to take if a request failed
> >> part-way through and thus things are in an unknown state.
> >>
> >> This issue is a hard problem, which can possibly be solved by
> >> switching to a trust scoped token (service impersonates the user), but
> >> then you're effectively bypassing token expiry via delegation which sits
> >> uncomfortably with me (despite the fact that we may have to do this in
> >> heat to solve the aforementioned bug)
> >>
> >>> It seems like we should have a standard pattern that on token expiration
> >>> the underlying code at least gives one retry to try to establish a new
> >>> token to complete the flow, however as far as I can tell *no* clients do
> >>> this.
> >>
> >> As has been mentioned, using sessions may be one solution to this, and
> >> AFAIK session support (where it doesn't already exist) is getting into
> >> various clients via the work being carried out to add support for v3
> >> keystone by David Hu:
> >>
> >> https://review.openstack.org/#/q/owner:david.hu%2540hp.com,n,z
> >>
> >> I see patches for Heat (currently gating), Nova and Ironic.
> >>
> >>> I know we had to add that into Tempest because tempest runs can exceed 1
> >>> hr, and we want to avoid random fails just because we cross a token
> >>> expiration boundary.
> >>
> >> I can't claim great experience with sessions yet, but AIUI you could do
> >> something like:
> >>
> >> from keystoneclient.auth.identity import v3
> >> from keystoneclient import session
> >> from keystoneclient.v3 import client
> >>
> >> auth = v3.Password(auth_url=OS_AUTH_URL,
> >>username=USERNAME,
> >>password=PASSWORD,
> >>project_id=PROJECT,
> >>user_domain_name='default')
> >> sess = session.Session(auth=auth)
> >> ks = client.Client(session=sess)
> >>
> >> And if you can pass the same session into the various clients tempest
> >> creates then the Password auth-plugin code takes care of reauthenticating
> >> if the token cached in the auth plugin object is expired, or nearly
> >> expired:
> >>
> >> https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/auth/identity/base.py#L120
> >>
> >> So in the tempest case, it seems like it may be a case of migrating the
> >> code creating the clients to use sessions instead of passing a token or
> >> username/password into the client object?
> >>
> >> That's my understanding of it atm anyway, hopefully jamielennox will be
> >> along soon with more details :)
> >>
> >> Steve
> > 
> > 
> > By clients here are you referring to the CLIs or the python libraries?
> > Implementation is at different points with each.
> > 
> > Sessions will handle automatically reauthenticating and retrying a request,
> > however it relies on the service throwing a 401 Unauthenticated error. If
> > a service is returning a 500 (or a timeout?) th

Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

2014-09-11 Thread Boris Pavlovic
Kurt,

> Speaking generally, I’d like to see the project bake this in over time as
> part of the CI process. It’s definitely useful information not just for
> the developers but also for operators in terms of capacity planning. We’ve
> talked as a team about doing this with Rally (and in fact, some work has
> been started there), but it may be useful to also run a large-scale test
> on a regular basis (at least per milestone).


I believe we will be able to generate distributed load of at least
20k rps in the K cycle. We've done a lot of work in this direction during
Juno, but there is still a lot to do.

So you'll be able to use the same tool for gates, local usage and
large-scale tests.

Best regards,
Boris Pavlovic



On Fri, Sep 12, 2014 at 3:17 AM, Kurt Griffiths <
kurt.griffi...@rackspace.com> wrote:

> On 9/11/14, 2:11 PM, "Devananda van der Veen" 
> wrote:
>
> >OK - those resource usages sound better. At least you generated enough
> >load to saturate the uWSGI process CPU, which is a good point to look
> >at performance of the system.
> >
> >At that peak, what was the:
> >- average msgs/sec
> >- min/max/avg/stdev time to [post|get|delete] a message
>
> To be honest, it was a quick test and I didn’t note the exact metrics
> other than eyeballing them to see that they were similar to the results
> that I published for the scenarios that used the same load options (e.g.,
> I just re-ran some of the same test scenarios).
>
> Some of the metrics you mention aren’t currently reported by zaqar-bench,
> but could be added easily enough. In any case, I think zaqar-bench is
> going to end up being mostly useful to track relative performance gains or
> losses on a patch-by-patch basis, and also as an easy way to smoke-test
> both python-marconiclient and the service. For large-scale testing and
> detailed metrics, other tools (e.g., Tsung, JMeter) are better for the
> job, so I’ve been considering using them in future rounds.
>
> >Is that 2,181 msg/sec total, or per-producer?
>
> That metric was a combined average rate for all producers.
>
> >
> >I'd really like to see the total throughput and latency graphed as #
> >of clients increases. Or if graphing isn't your thing, even just post
> >a .csv of the raw numbers and I will be happy to graph it.
> >
> >It would also be great to see how that scales as you add more Redis
> >instances until all the available CPU cores on your Redis host are in
> >Use.
>
> Yep, I’ve got a long list of things like this that I’d like to see in
> future rounds of performance testing (and I welcome anyone in the
> community with an interest to join in), but I have to balance that effort
> with a lot of other things that are on my plate right now.
>
> Speaking generally, I’d like to see the project bake this in over time as
> part of the CI process. It’s definitely useful information not just for
> the developers but also for operators in terms of capacity planning. We’ve
> talked as a team about doing this with Rally (and in fact, some work has
> been started there), but it may be useful to also run a large-scale test
> on a regular basis (at least per milestone). Regardless, I think it would
> be great for the Zaqar team to connect with other projects (at the
> summit?) who are working on perf testing to swap ideas, collaborate on
> code/tools, etc.
>
> --KG
>
>


Re: [openstack-dev] [solum] pep8 - splitting expressions

2014-09-11 Thread Ed Leafe
On Sep 11, 2014, at 5:05 PM, Kevin L. Mitchell  
wrote:

> I'd suggest trying:
> 
>res = amodel.Assemblies(
>uri=common.ASSEM_URI_STR % pecan.request.host_url,
>name='Solum_CAMP_assemblies',
>type='assemblies',
>description=common.ASSEM_DESC_STR,
>assembly_links=a_links,
>parameter_definitions_uri=common.ASSEM_PARAM_STR %
>pecan.request.host_url)
> 
> By moving the first argument to a line by itself, pep8 can be satisfied
> by indenting the following lines by 4 spaces.

When not using visual indentation, the standard is to indent two levels (i.e.,
8 spaces) to distinguish the continuation from a typical block. But yes, I much
prefer this to creating temporary names to accommodate the visual indentation
style.
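For illustration, the call from Kevin's example written with the two-level
(8-space) hanging indent described above:

    res = amodel.Assemblies(
            uri=common.ASSEM_URI_STR % pecan.request.host_url,
            name='Solum_CAMP_assemblies',
            type='assemblies',
            description=common.ASSEM_DESC_STR,
            assembly_links=a_links,
            parameter_definitions_uri=common.ASSEM_PARAM_STR %
                pecan.request.host_url)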


-- Ed Leafe









Re: [openstack-dev] On an API proxy from baremetal to ironic

2014-09-11 Thread James Penick
We manage a fairly large nova-baremetal installation at Yahoo. And while we've 
developed tools to hit the nova-bm API, we're planning to move to ironic 
without any support for the nova BM API. Definitely no interest in the proxy 
API from our end. 
Sometimes you just need to let a thing die. 
-James
 :)= 

On Wednesday, September 10, 2014 12:51 PM, Ben Nemec wrote:


On 09/10/2014 02:26 PM, Dan Smith wrote:
>> 1) Is this tested anywhere?  There are no unit tests in the patch
>> and it's not clear to me that there would be any Tempest coverage
>> of this code path.  Providing this and having it break a couple
>> of months down the line seems worse than not providing it at all.
>> This is obviously fixable though.
> 
> AFAIK, baremetal doesn't have any tempest-level testing at all
> anyway. However, I don't think our proxy code breaks, like, ever. I
> expect that unit tests for this stuff are plenty sufficient.

Right, but this would actually be running against Ironic, which does
have Tempest testing.  It might require some client changes to be able
to hit a Baremetal API instead of Ironic though.

> 
>> 2) If we think maintaining compatibility for existing users is
>> that important, why aren't we proxying everything?  Is it too 
>> difficult/impossible due to the differences between Baremetal
>> and Ironic?  And if they're that different, does it still make
>> sense to allow one to look like the other?  As it stands, this
>> isn't going to let deployers use their existing tools without
>> modification anyway.
> 
> Ideally we'd proxy everything, based on our current API
> guarantees. However, I think the compromise of just the show/index
> stuff came about because it would be extremely easy to do, provide
> some measure of continuity, and provide us a way to return
> something nicer for the create/update operations than a 500. It
> seemed like a completely fair and practical balance.

Fair enough.  I'm still not crazy about it, but since it already
exists and you say these interfaces don't require much maintenance I
guess that takes care of my major concerns.

-Ben





Re: [openstack-dev] Supporting Javascript clients calling OpenStack APIs

2014-09-11 Thread Richard Jones
On 11 September 2014 18:00, Robert Collins 
wrote:

> FWIW I'm very much in favour of having a single host API - I was
> looking at doing that in Apache for TripleO deployments anyway, due to
> the better SSL deployment characteristics. We then would register the
> actual single host endpoint in publicURL.
>

Yep, basically.



> How would that work for multiple regions with javascript - can you
> switch hosts fairly easily ?
>

I'm not confident I'm completely across the implications of regions, but I
believe there's nothing about them that makes anything planned here break.


 Richard


Re: [openstack-dev] Supporting Javascript clients calling OpenStack APIs

2014-09-11 Thread Richard Jones
On 12 September 2014 07:50, Adam Young  wrote:

>  On 09/11/2014 03:15 AM, Richard Jones wrote:
>
>  [This is Horizon-related but affects every service in OpenStack, hence no
> filter in the subject]
>
>  I would like for OpenStack to support browser-based Javascript API
> clients.
> Currently this is not possible because of cross-origin resource blocking in
> Javascript clients - that is, given some Javascript hosted on
> "https://horizon.company.com/"; you cannot, for example, call from that
> Javascript code to an API on "https://apis.company.com:5000/v2.0/tokens";
> to
> authenticate with Keystone.
>
>  There are three solutions to this problem:
>
>  1. the Horizon solution, in which those APIs are proxied by a very thick
>layer of additional Python API, plus some Python view code with some
>Javascript on the top only calling the Horizon view code,
> 2. add CORS support to all the OpenStack APIs though a new WSGI middleware
>(for example oslo.middleware.cors) and configured into each of the API
>services individually since they all exist on different "origin"
>host:port combinations, or
> 3. a new web service that proxies all the APIs and serves the static
>    Javascript (etc) content from the one origin (host). APIs are then
>    served from new URL roots "/name/" where the name is from the
>    serviceCatalog entry. Static content can be served from "/static/".
>    The serviceCatalog from keystone will be rewritten on the fly to point
>    the API publicURLs at the new service. Requests are no longer
>    cross-origin.
>
>  I have implemented options 2 and 3 as an exercise to see how horrid each
> one is.
>
>
> I don't think these are mutually exclusive.  I can see people wanting
> either in some deployments.
>

I think I agree :)


== CORS Middleware ==
>
>  For those wanting a bit of background, I have written up a spec for oslo
> that
> talks about how this could work: https://review.openstack.org/#/c/119485/
>
>  The middleware option results in a reasonably nice bit of middleware.
> It's short and relatively easy to test. The big problem with it comes in
> configuring it in all the APIs. The configuration for the middleware takes
> two forms:
>
>  1. hooking oslo.middleware.cors into the WSGI pipeline (there's more than
>one in each API),
> 2. adding the CORS configuration itself for the middleware in the API's
> main
>configuration file (eg. keystone.conf or nova.conf).
>
>  So for each service, that's two configuration files *and* the kicker is
> that the paste configuration file is non-trivially different in almost
> every case.
>
> This is one reason I thought that it should be done by auth_token
> middleware.  The other reason is that I don't think we want to blanket
> accept CORS from everywhere, but instead we should do so based on the
> service catalog.
>

It's very important to understand that CORS is entirely advisory. Nothing
is required to pay any attention to it. Modern browsers do, of course, but
in the absence of a browser initiating a CORS conversation (by sending an
Origin header) the CORS middleware should do nothing whatsoever. A GET
request will still return the body of data requested - it's just that the
browser will see the CORS header and block an application from seeing that
data. Security through access controls, XSS protections, etc. must be
handled by other mechanisms.
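To make that concrete, here is a deliberately minimal sketch of the idea
(not the proposed oslo.middleware.cors code; a real implementation also
needs to handle preflight OPTIONS requests, allowed methods/headers, and
configuration; the allowed origin below is a made-up example):

    ALLOWED_ORIGINS = ['https://horizon.company.com']

    class CORSMiddleware(object):
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            origin = environ.get('HTTP_ORIGIN')

            def cors_start_response(status, headers, exc_info=None):
                if origin in ALLOWED_ORIGINS:
                    # Advisory only: the body is returned either way; the
                    # browser decides whether the page may read it.
                    headers.append(('Access-Control-Allow-Origin', origin))
                return start_response(status, headers, exc_info)

            return self.app(environ, cors_start_response)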


> For a POC deployment, for a small company, all-in-one, what you are doing
> should be fine, but then, if you were running all of your services that
> way, in one web server, you wouldn't need CORS either.
>

This isn't possible in the current OpenStack model - all the APIs run on
different ports, which count as different origins for cross-origin resource
issues (origin is defined as {protocol, host, port}).


> So, let's have these two approaches work in parallel. The proxy will get
> things going while we work out the CORS approach.
>

I will look at submitting my middleware for inclusion anyway then.


 Richard


Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

2014-09-11 Thread Kurt Griffiths
On 9/11/14, 2:11 PM, "Devananda van der Veen" 
wrote:

>OK - those resource usages sound better. At least you generated enough
>load to saturate the uWSGI process CPU, which is a good point to look
>at performance of the system.
>
>At that peak, what was the:
>- average msgs/sec
>- min/max/avg/stdev time to [post|get|delete] a message

To be honest, it was a quick test and I didn’t note the exact metrics
other than eyeballing them to see that they were similar to the results
that I published for the scenarios that used the same load options (e.g.,
I just re-ran some of the same test scenarios).

Some of the metrics you mention aren’t currently reported by zaqar-bench,
but could be added easily enough. In any case, I think zaqar-bench is
going to end up being mostly useful to track relative performance gains or
losses on a patch-by-patch basis, and also as an easy way to smoke-test
both python-marconiclient and the service. For large-scale testing and
detailed metrics, other tools (e.g., Tsung, JMeter) are better for the
job, so I’ve been considering using them in future rounds.

>Is that 2,181 msg/sec total, or per-producer?

That metric was a combined average rate for all producers.

>
>I'd really like to see the total throughput and latency graphed as #
>of clients increases. Or if graphing isn't your thing, even just post
>a .csv of the raw numbers and I will be happy to graph it.
>
>It would also be great to see how that scales as you add more Redis
>instances until all the available CPU cores on your Redis host are in
>Use.

Yep, I’ve got a long list of things like this that I’d like to see in
future rounds of performance testing (and I welcome anyone in the
community with an interest to join in), but I have to balance that effort
with a lot of other things that are on my plate right now.

Speaking generally, I’d like to see the project bake this in over time as
part of the CI process. It’s definitely useful information not just for
the developers but also for operators in terms of capacity planning. We’ve
talked as a team about doing this with Rally (and in fact, some work has
been started there), but it may be useful to also run a large-scale test
on a regular basis (at least per milestone). Regardless, I think it would
be great for the Zaqar team to connect with other projects (at the
summit?) who are working on perf testing to swap ideas, collaborate on
code/tools, etc.

--KG




Re: [openstack-dev] [Cinder] Request for J3 FFE - add reset-state function for backups

2014-09-11 Thread Jay Bryant
It isn't a huge change. I am OK with it if we can get the issues
addressed, especially Duncan's concern.
On Sep 11, 2014 12:17 PM, "Mike Perez"  wrote:

> On 12:23 Tue 09 Sep , yunling wrote:
> > Hi Cinder Folks, I would like to request an FFE for adding a reset-state
> > function for backups [1][2]. The spec for adding a reset-state function
> > for backups has been reviewed and merged [2]. The code changes have been
> > well tested and are not very complex [3]. I would appreciate any
> > consideration for an FFE. Thanks,
>
> It looks like the current review has some comments that are waiting to be
> addressed now.
>
> --
> Mike Perez
>


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-11 Thread Jay Pipes

On 09/11/2014 04:09 PM, Zane Bitter wrote:

Swift is the current exception here, but one could argue, and people
have[2], that Swift is also the only project that actually conforms to
our stated design tenets for OpenStack. I'd struggle to tell the Zaqar
folks they've done the Wrong Thing... especially when abandoning the
RDBMS driver was done largely at the direction of the TC iirc.




[2] http://blog.linux2go.dk/2013/08/30/openstack-design-tenets-part-2/


No offense to Soren, who wrote some interesting and poignant things, nor 
to the Swift developers, who continue to produce excellent work, but 
Swift is object storage. It is a data plane system with a small API 
surface, a very limited functional domain, and a small, inflexible 
storage schema (which is perfectly fine for its use cases). Its needs
for a relational database are nearly non-existent. It replicates a
SQLite database around using rsync [1]. Try doing that with a schema of 
any complexity and you will quickly find the limitations of such a strategy.


If Nova was to take Soren's advice and implement its data-access layer 
on top of Cassandra or Riak, we would just end up re-inventing SQL Joins 
in Python-land. I've said it before, and I'll say it again. In Nova at 
least, the SQL schema is complex because the problem domain is complex. 
That means lots of relations, lots of JOINs, and that means the best way 
to query for that data is via an RDBMS.
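To illustrate, here is a schematic (not actual Nova code) of what happens
to even the simplest relational query when the store cannot join;
store.get_all() is a hypothetical accessor on some non-relational backend:

    # A client-side join, re-invented in Python:
    flavors_by_id = {f['id']: f for f in store.get_all('flavors')}
    instances = []
    for inst in store.get_all('instances'):
        # Stitch each instance to its flavor by hand.
        inst['flavor'] = flavors_by_id.get(inst['flavor_id'])
        instances.append(inst)

An RDBMS answers the same question with a single server-side "SELECT ...
FROM instances JOIN flavors ..."; now multiply that by the dozens of
relations in Nova's schema.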


And I say that knowing just how *poor* some of the queries are in Nova!

For projects like Swift, Zaqar, even Keystone, Glance and Cinder, a 
non-RDBMS solution might be a perfectly reasonable solution for the 
underlying data storage and access layer (and for the record, I never 
said that Zaqar should or should not use an RDBMS for its storage). For 
complex control plane software like Nova, though, an RDBMS is the best 
tool for the job given the current lay of the land in open source data 
storage solutions matched with Nova's complex query and transactional 
requirements.


Folks in these other programs have actually, you know, thought about 
these kinds of things and had serious discussions about alternatives. It 
would be nice to have someone acknowledge that instead of snarky 
comments implying everyone else "has it wrong".


Going back in my hole,
-jay

[1] 
https://github.com/openstack/swift/blob/master/swift/common/db_replicator.py#L232




Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-09-11 Thread Chris Friesen

On 09/11/2014 04:22 PM, Joe Cropper wrote:

I would be a little wary about the DB level locking for stuff like that
— it’s certainly doable, but also comes at the expense of things
behaving ever-so-slightly different from DBMS to DBMS.  Perhaps there
are multiple “logical efforts” here—i.e., adding some APIs and cleaning
up existing code.


I think you could actually do it without locking.  Pick a host as we do 
now, write it into the database, then check whether you hit a race and 
if so then clear that host from the database and go back to the beginning.


Basically the same algorithm that we do now, but all contained within 
the scheduler code.
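A schematic of that retry loop (pick_host, claim_host_in_db,
detect_conflict and clear_host_in_db are hypothetical helpers, not actual
scheduler code):

    while True:
        host = pick_host(request_spec)          # choose a host as we do now
        claim_host_in_db(instance_uuid, host)   # record the choice
        if not detect_conflict(instance_uuid, host):
            break                               # no race; the claim stands
        # Lost the race: release the claim and try again.
        clear_host_in_db(instance_uuid, host)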


Chris



Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-09-11 Thread Joe Cropper
I would be a little wary about the DB level locking for stuff like that — it’s 
certainly doable, but also comes at the expense of things behaving
ever-so-slightly differently from DBMS to DBMS. Perhaps there are multiple
“logical efforts” here—i.e., adding some APIs and cleaning up existing code.

In any case, I’ve started a blueprint on this [1] and we can continue iterating 
in the nova-spec once kilo opens up.  Thanks all for the good discussion on 
this thus far.

[1] https://blueprints.launchpad.net/nova/+spec/dynamic-server-groups

- Joe
On Sep 11, 2014, at 5:04 PM, Chris Friesen  wrote:

> On 09/11/2014 03:01 PM, Jay Pipes wrote:
>> On 09/11/2014 04:51 PM, Matt Riedemann wrote:
>>> On 9/10/2014 6:00 PM, Russell Bryant wrote:
 On 09/10/2014 06:46 PM, Joe Cropper wrote:
> Hmm, not sure I follow the concern, Russell.  How is that any different
> from putting a VM into the group when it’s booted as is done today?
>  This simply defers the ‘group insertion time’ to some time after
> the VM’s been initially spawned, so I’m not sure this creates any more
> race conditions than what’s already there [1].
> 
> [1] Sure, the to-be-added VM could be in the midst of a migration or
> something, but that would be pretty simple to check make sure its task
> state is None or some such.
 
 The way this works at boot is already a nasty hack.  It does policy
 checking in the scheduler, and then has to re-do some policy checking at
 launch time on the compute node.  I'm afraid of making this any worse.
 In any case, it's probably better to discuss this in the context of a
 more detailed design proposal.
 
>>> 
>>> This [1] is the hack you're referring to right?
>>> 
>>> [1]
>>> http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2014.2.b3#n1297
>>> 
>> 
>> That's the hack *I* had in the back of my mind.
> 
> I think that's the only boot hack related to server groups.
> 
> I was thinking that it should be possible to deal with the race more cleanly 
> by recording the selected compute node in the database at the time of 
> scheduling.  As it stands, the host is implicitly encoded in the compute node 
> to which we send the boot request and nobody else knows about it.
> 
> Chris
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-11 Thread Zane Bitter

On 09/09/14 19:56, Clint Byrum wrote:

Excerpts from Samuel Merritt's message of 2014-09-09 16:12:09 -0700:

On 9/9/14, 12:03 PM, Monty Taylor wrote:

On 09/04/2014 01:30 AM, Clint Byrum wrote:

Excerpts from Flavio Percoco's message of 2014-09-04 00:08:47 -0700:

Greetings,

Last Tuesday the TC held the first graduation review for Zaqar. During
the meeting some concerns arose. I've listed those concerns below with
some comments hoping that it will help starting a discussion before the
next meeting. In addition, I've added some comments about the project
stability at the bottom and an etherpad link pointing to a list of use
cases for Zaqar.



Hi Flavio. This was an interesting read. As somebody whose attention has
recently been drawn to Zaqar, I am quite interested in seeing it
graduate.


# Concerns

- Concern on operational burden of requiring NoSQL deploy expertise to
the mix of openstack operational skills

For those of you not familiar with Zaqar, it currently supports 2 nosql
drivers - MongoDB and Redis - and those are the only 2 drivers it
supports for now. This will require operators willing to use Zaqar to
maintain a new (?) NoSQL technology in their system. Before expressing
our thoughts on this matter, let me say that:

  1. By removing the SQLAlchemy driver, we basically removed the chance
for operators to use an already deployed "OpenStack-technology"
  2. Zaqar won't be backed by any AMQP based messaging technology for
now. Here's[0] a summary of the research the team (mostly done by
Victoria) did during Juno
  3. We (OpenStack) used to require Redis for the zmq matchmaker
  4. We (OpenStack) also use memcached for caching and as the oslo
caching lib becomes available - or a wrapper on top of dogpile.cache -
Redis may be used in place of memcached in more and more deployments.
  5. Ceilometer's recommended storage driver is still MongoDB, although
Ceilometer now has support for sqlalchemy. (Please correct me if I'm
wrong).

That being said, it's obvious we already, to some extent, promote some
NoSQL technologies. However, for the sake of the discussion, let's assume
we don't.

I truly believe, with my OpenStack (not Zaqar's) hat on, that we can't
keep avoiding these technologies. NoSQL technologies have been around
for years and we should be prepared - including OpenStack operators - to
support these technologies. Not every tool is good for all tasks - one
of the reasons we removed the sqlalchemy driver in the first place -
therefore it's impossible to keep a homogeneous environment for all
services.



I wholeheartedly agree that non-traditional storage technologies that
are becoming mainstream are good candidates for use cases where SQL
based storage gets in the way. I wish there wasn't so much FUD
(warranted or not) about MongoDB, but that is the reality we live in.


With this, I'm not suggesting to ignore the risks and the extra burden
this adds but, instead of attempting to avoid it completely by not
evolving the stack of services we provide, we should probably work on
defining a reasonable subset of NoSQL services we are OK with
supporting. This will help making the burden smaller and it'll give
operators the option to choose.

[0] http://blog.flaper87.com/post/marconi-amqp-see-you-later/


- Concern on should we really reinvent a queue system rather than
piggyback on one

As mentioned in the meeting on Tuesday, Zaqar is not reinventing message
brokers. Zaqar provides a service akin to SQS from AWS with an OpenStack
flavor on top. [0]



I think Zaqar is more like SMTP and IMAP than AMQP. You're not really
trying to connect two processes in real time. You're trying to do fully
asynchronous messaging with fully randomized access to any message.

Perhaps somebody should explore whether the approaches taken by large
scale IMAP providers could be applied to Zaqar.

Anyway, I can't imagine writing a system to intentionally use the
semantics of IMAP and SMTP. I'd be very interested in seeing actual use
cases for it, apologies if those have been posted before.


It seems like you're EITHER describing something called XMPP that has at
least one open source scalable backend called ejabberd. OR, you've
actually hit the nail on the head with bringing up SMTP and IMAP but for
some reason that feels strange.

SMTP and IMAP already implement every feature you've described, as well
as retries/failover/HA and a fully end to end secure transport (if
installed properly). If you don't actually set them up to run as a public
messaging interface but just as a cloud-local exchange, then you could
get by with very low overhead for a massive throughput - it can very
easily be run on a single machine for Sean's simplicity, and could just
as easily be scaled out using well known techniques for public cloud
sized deployments?

So why not use existing daemons that do this? You could still use the
REST API you've got, but instead of writing it to a mongo backend and
trying to implement all of

Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-11 Thread Zane Bitter

On 09/09/14 15:03, Monty Taylor wrote:

On 09/04/2014 01:30 AM, Clint Byrum wrote:

Excerpts from Flavio Percoco's message of 2014-09-04 00:08:47 -0700:

Greetings,

Last Tuesday the TC held the first graduation review for Zaqar. During
the meeting some concerns arose. I've listed those concerns below with
some comments hoping that it will help starting a discussion before the
next meeting. In addition, I've added some comments about the project
stability at the bottom and an etherpad link pointing to a list of use
cases for Zaqar.



Hi Flavio. This was an interesting read. As somebody whose attention has
recently been drawn to Zaqar, I am quite interested in seeing it
graduate.


# Concerns

- Concern on operational burden of requiring NoSQL deploy expertise to
the mix of openstack operational skills

For those of you not familiar with Zaqar, it currently supports 2 nosql
drivers - MongoDB and Redis - and those are the only 2 drivers it
supports for now. This will require operators willing to use Zaqar to
maintain a new (?) NoSQL technology in their system. Before expressing
our thoughts on this matter, let me say that:

 1. By removing the SQLAlchemy driver, we basically removed the chance
for operators to use an already deployed "OpenStack-technology"
 2. Zaqar won't be backed by any AMQP based messaging technology for
now. Here's[0] a summary of the research the team (mostly done by
Victoria) did during Juno
 3. We (OpenStack) used to require Redis for the zmq matchmaker
 4. We (OpenStack) also use memcached for caching and as the oslo
caching lib becomes available - or a wrapper on top of dogpile.cache -
Redis may be used in place of memcached in more and more deployments.
 5. Ceilometer's recommended storage driver is still MongoDB, although
Ceilometer now has support for sqlalchemy. (Please correct me if I'm
wrong).

That being said, it's obvious we already, to some extent, promote some
NoSQL technologies. However, for the sake of the discussion, let's assume
we don't.

I truly believe, with my OpenStack (not Zaqar's) hat on, that we can't
keep avoiding these technologies. NoSQL technologies have been around
for years and we should be prepared - including OpenStack operators - to
support these technologies. Not every tool is good for all tasks - one
of the reasons we removed the sqlalchemy driver in the first place -
therefore it's impossible to keep a homogeneous environment for all
services.



I wholeheartedly agree that non-traditional storage technologies that
are becoming mainstream are good candidates for use cases where SQL
based storage gets in the way. I wish there wasn't so much FUD
(warranted or not) about MongoDB, but that is the reality we live in.


With this, I'm not suggesting to ignore the risks and the extra burden
this adds but, instead of attempting to avoid it completely by not
evolving the stack of services we provide, we should probably work on
defining a reasonable subset of NoSQL services we are OK with
supporting. This will help making the burden smaller and it'll give
operators the option to choose.

[0] http://blog.flaper87.com/post/marconi-amqp-see-you-later/


- Concern on should we really reinvent a queue system rather than
piggyback on one

As mentioned in the meeting on Tuesday, Zaqar is not reinventing message
brokers. Zaqar provides a service akin to SQS from AWS with an OpenStack
flavor on top. [0]



I think Zaqar is more like SMTP and IMAP than AMQP. You're not really
trying to connect two processes in real time. You're trying to do fully
asynchronous messaging with fully randomized access to any message.

Perhaps somebody should explore whether the approaches taken by large
scale IMAP providers could be applied to Zaqar.

Anyway, I can't imagine writing a system to intentionally use the
semantics of IMAP and SMTP. I'd be very interested in seeing actual use
cases for it, apologies if those have been posted before.


It seems like you're EITHER describing something called XMPP that has at
least one open source scalable backend called ejabberd. OR, you've
actually hit the nail on the head with bringing up SMTP and IMAP but for
some reason that feels strange.

SMTP and IMAP already implement every feature you've described, as well
as retries/failover/HA and a fully end to end secure transport (if
installed properly). If you don't actually set them up to run as a public
messaging interface but just as a cloud-local exchange, then you could
get by with very low overhead for a massive throughput - it can very
easily be run on a single machine for Sean's simplicity, and could just
as easily be scaled out using well known techniques for public cloud
sized deployments?

So why not use existing daemons that do this? You could still use the
REST API you've got, but instead of writing it to a mongo backend and
trying to implement all of the things that already exist in SMTP/IMAP -
you could just have them front to it. You could even bypass normal
del
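
For what it's worth, the "front the REST API onto SMTP/IMAP" idea above 
sketches out surprisingly small with nothing but the Python stdlib. The 
hosts, credentials and the flag-based ack mapping below are all made up 
for illustration:

    import email
    import imaplib
    import smtplib
    from email.mime.text import MIMEText

    def enqueue(queue_addr, body):
        # "Producing" a message is just mail sent to the queue's address;
        # assumes a local MTA is listening on port 25.
        msg = MIMEText(body)
        msg['To'] = queue_addr
        msg['Subject'] = 'queued-message'
        smtp = smtplib.SMTP('localhost')
        smtp.sendmail('producer@localhost', [queue_addr], msg.as_string())
        smtp.quit()

    def claim_one(user, password):
        # "Claiming" maps to fetching an UNSEEN message; IMAP flags give
        # claim-like semantics (\Seen ~ claimed, \Deleted + expunge ~ ack).
        conn = imaplib.IMAP4('localhost')
        conn.login(user, password)
        conn.select('INBOX')
        _, data = conn.search(None, 'UNSEEN')
        ids = data[0].split()
        if not ids:
            return None
        msg_id = ids[0].decode()
        _, msg_data = conn.fetch(msg_id, '(RFC822)')
        conn.store(msg_id, '+FLAGS', r'\Deleted')  # acknowledge
        conn.expunge()
        conn.logout()
        return email.message_from_bytes(msg_data[0][1]).get_payload()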

Re: [openstack-dev] [solum] pep8 - splitting expressions

2014-09-11 Thread Kevin L. Mitchell
On Tue, 2014-09-09 at 12:05 -0700, Gilbert Pilz wrote:
> I have a question with regards to splitting expressions in order to
> conform to the pep8 line-length restriction. I have the following bit
> of code:
> 
> 
> res = amodel.Assemblies(uri=common.ASSEM_URI_STR % pecan.request.host_url,
>                         name='Solum_CAMP_assemblies',
>                         type='assemblies',
>                         description=common.ASSEM_DESC_STR,
>                         assembly_links=a_links,
>                         parameter_definitions_uri=common.ASSEM_PARAM_STR %
>                             pecan.request.host_url)

I'd suggest trying:

    res = amodel.Assemblies(
        uri=common.ASSEM_URI_STR % pecan.request.host_url,
        name='Solum_CAMP_assemblies',
        type='assemblies',
        description=common.ASSEM_DESC_STR,
        assembly_links=a_links,
        parameter_definitions_uri=common.ASSEM_PARAM_STR %
        pecan.request.host_url)

By moving the first argument to a line by itself, pep8 can be satisfied
by indenting the following lines by 4 spaces.
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-09-11 Thread Chris Friesen

On 09/11/2014 03:01 PM, Jay Pipes wrote:

On 09/11/2014 04:51 PM, Matt Riedemann wrote:

On 9/10/2014 6:00 PM, Russell Bryant wrote:

On 09/10/2014 06:46 PM, Joe Cropper wrote:

Hmm, not sure I follow the concern, Russell.  How is that any different
from putting a VM into the group when it’s booted as is done today?
  This simply defers the ‘group insertion time’ to some time after
the VM’s been initially spawned, so I’m not sure this creates any more
race conditions than what’s already there [1].

[1] Sure, the to-be-added VM could be in the midst of a migration or
something, but that would be pretty simple to check make sure its task
state is None or some such.


The way this works at boot is already a nasty hack.  It does policy
checking in the scheduler, and then has to re-do some policy checking at
launch time on the compute node.  I'm afraid of making this any worse.
In any case, it's probably better to discuss this in the context of a
more detailed design proposal.



This [1] is the hack you're referring to right?

[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2014.2.b3#n1297



That's the hack *I* had in the back of my mind.


I think that's the only boot hack related to server groups.

I was thinking that it should be possible to deal with the race more 
cleanly by recording the selected compute node in the database at the 
time of scheduling.  As it stands, the host is implicitly encoded in the 
compute node to which we send the boot request and nobody else knows 
about it.


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Supporting Javascript clients calling OpenStack APIs

2014-09-11 Thread Adam Young

On 09/11/2014 03:15 AM, Richard Jones wrote:

[This is Horizon-related but affects every service in OpenStack, hence no
filter in the subject]

I would like for OpenStack to support browser-based Javascript API clients.
Currently this is not possible because of cross-origin resource blocking in
Javascript clients - that is, given some Javascript hosted on
"https://horizon.company.com/" you cannot, for example, call from that
Javascript code to an API on "https://apis.company.com:5000/v2.0/tokens" to
authenticate with Keystone.

There are three solutions to this problem:

1. the Horizon solution, in which those APIs are proxied by a very thick
   layer of additional Python API, plus some Python view code with some
   Javascript on the top only calling the Horizon view code,
2. add CORS support to all the OpenStack APIs through a new WSGI middleware
   (for example oslo.middleware.cors) configured into each of the API
   services individually, since they all exist on different "origin"
   host:port combinations, or
3. a new web service that proxies all the APIs and serves the static
   Javascript (etc) content from the one origin (host). APIs are then served
   from new URL roots "/name/" where the name is from the serviceCatalog
   entry. Static content can be served from "/static/". The serviceCatalog
   from keystone will be rewritten on the fly to point the API publicURLs at
   the new service. Requests are no longer cross-origin.

I have implemented options 2 and 3 as an exercise to see how horrid each
one is.


I don't think these are mutually exclusive.  I can see people wanting
either in some deployments.

== CORS Middleware ==

For those wanting a bit of background, I have written up a spec for oslo
that talks about how this could work:
https://review.openstack.org/#/c/119485/

The middleware option results in a reasonably nice bit of middleware. It's
short and relatively easy to test. The big problem with it comes in
configuring it in all the APIs. The configuration for the middleware takes
two forms:

1. hooking oslo.middleware.cors into the WSGI pipeline (there's more than
   one in each API),
2. adding the CORS configuration itself for the middleware in the API's
   main configuration file (eg. keystone.conf or nova.conf).

So for each service, that's two configuration files *and* the kicker is
that the paste configuration file is non-trivially different in almost
every case.

This is one reason I thought that it should be done by auth_token
middleware.  The other reason is that I don't think we want to blanket
accept CORS from everywhere, but instead we should do so based on the
service catalog.


This is for the non-trivial deployment case like MOC:

http://www.bu.edu/hic/projects/massachusetts-open-cloud/

Where one "Horizon" instance is going to have to talk to multiple, 
non-trusted instances for each of the services.  CORS should only be 
acceptable between services in the same service catalog.  Yes, I realize 
this is not security enforcement; it is just one step in the strategy.
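
To make the comparison concrete, the middleware itself is only a
screenful.  A minimal sketch, assuming a static allowed-origins list
rather than the service-catalog lookup described above (a real
oslo.middleware.cors would presumably pull this from oslo.config):

    class CORSMiddleware(object):
        def __init__(self, app, allowed_origins):
            self.app = app
            self.allowed_origins = set(allowed_origins)

        def __call__(self, environ, start_response):
            origin = environ.get('HTTP_ORIGIN')

            def cors_start_response(status, headers, exc_info=None):
                if origin in self.allowed_origins:
                    headers.append(('Access-Control-Allow-Origin', origin))
                    headers.append(('Access-Control-Allow-Methods',
                                    'GET, POST, PUT, DELETE, OPTIONS'))
                    headers.append(('Access-Control-Allow-Headers',
                                    'Content-Type, X-Auth-Token'))
                return start_response(status, headers, exc_info)

            # Answer preflight requests without hitting the API itself.
            if environ.get('REQUEST_METHOD') == 'OPTIONS' and origin:
                cors_start_response('200 OK', [('Content-Length', '0')])
                return [b'']
            return self.app(environ, cors_start_response)

The paste pipeline hook - the other half of the configuration burden -
would then look roughly like this (filter name and factory path are
hypothetical):

    [filter:cors]
    paste.filter_factory = oslo.middleware.cors:filter_factory

    [pipeline:main]
    pipeline = cors sizelimit authtoken ... service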


For a POC deployment, for a small company, all-in-one, what you are 
doing should be fine, but then, if you were running all of your 
services that way, in one web server, you wouldn't need CORS either.


So, let's have these two approaches work in parallel.  The proxy will get 
things going while we work out the CORS approach.




That's a lot of work, and confusing for deployers. Configuration management
tools can ease *some* of this burden (the *.conf files) but those paste
files are a bit of a mess :(

Once the config change is in place, it works (well, except for an issue I
ran into relating to oslo.middleware.sizelimit which I'll go into in
another place).

The implementation hasn't been pushed up for review as I'm not sure it
should be. I can do this if people wish me to.


== New Single-Point API Service ==

Actually, this is not horrid in any way - unless that publicURL rewriting
gives you the heebie-jeebies.

It works, and offers us some nice additional features like being able to
host the service behind SSL without needing to get a bazillion
certificates. And maybe load balancing. And maybe API access filtering.

I note that https://openrepose.org already exists to be *something* like
this, but it's not *precisely* what I'm proposing. Also Java euwww ;)
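
The publicURL rewriting itself is tiny.  Roughly this, assuming the
keystone v2.0 token response shape (proxy_base, the single-origin
service's own root URL, is hypothetical):

    def rewrite_catalog(token_response, proxy_base):
        # Point every endpoint at "/<name>/" on the one origin; the proxy
        # keeps the original publicURL internally for forwarding.
        catalog = token_response['access']['serviceCatalog']
        for service in catalog:
            for endpoint in service.get('endpoints', []):
                endpoint['publicURL'] = '%s/%s/' % (
                    proxy_base.rstrip('/'), service['name'])
        return token_response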


So, I propose that the idea of CORS-in-all-the-things be put aside as
unworkable.

I intend to pursue the single-point API service that I have described as a
way of moving forward in prototyping a pure-Javascript OpenStack 
Dashboard.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

2014-09-11 Thread Angus Salkeld
On Thu, Sep 11, 2014 at 11:24 PM, Julien Danjou  wrote:

> Hi,
>
> Dina has been doing a great work and has been very helpful during the
> Juno cycle and her help is very valuable. She's been doing a lot of
> reviews and has been very active in our community.
>
> I'd like to propose that we add Dina Belova to the ceilometer-core
> group, as I'm convinced it'll help the project.
>
> Please, dear ceilometer-core members, reply with your votes!
>

+1


>
> --
> Julien Danjou
> // Free Software hacker
> // http://julien.danjou.info
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-11 Thread Dmitry Borodaenko
Roman,

In the workflow with 2 package repositories per 1 fuel branch set
(e.g. icehouse & juno for 6.0, or juno & kilo later in 6.0 release
cycle), we should aim to keep packages targeted for both repositories
as close as possible, so the default mode of operation should be to put
new packages in both repositories.

Use cases:

1. We need new packages for Juno, we don't expect them to break
Icehouse. We should put the packages in both repos: if there's impact on
Icehouse we want to know sooner rather than later.

2. We need to add a patch to a package. If it's not a backport of
bugfix from Juno, most likely the bug is present in both Icehouse and
Juno, and so should be the fix.

3. We need to make a packaging change (init script change, add a
missing file, modify dependencies, etc.). Unless a change is
explicitly Juno specific (e.g. config option that doesn't exist in
Icehouse, or versioned dependency incompatible with Icehouse), it
should be applied to both releases.

The flow should be pretty much the same as the master->stable rule we have
in git: apply the change to Juno, backport to Icehouse.

-DmitryB




On Thu, Sep 11, 2014 at 12:22 PM, Roman Vyalov  wrote:
> Mike,
> 2 jobs for Icehouse and Juno mean 2 different repositories with packages for
> Fuel 6.0. This can be a problem for the current osci workflow.
> For example: we need to build new packages. Which repository must we put
> the packages in? Icehouse or/and Juno?
> What if new packages break the Icehouse repository but are required for Juno...?
>
> On Wed, Sep 10, 2014 at 12:39 AM, Mike Scherbakov 
> wrote:
>>
>> Aleksandra,
>> you've got us exactly right. Fuel CI for OSTF can wait a bit longer, but
>> "4 fuel-library tests" should happen right after we create stable/5.1. Also,
>> for Fuel CI for OSTF - I don't think it's actually necessary to support <5.0
>> envs.
>>
>> Your questions:
>>
>> Create jobs for both Icehouse and Juno, but it doesn't make sense to do
>> staging for Juno till it starts to pass deployment in HA mode. Once it
>> passes deployment in HA, staging should be enabled. Then, once it passes
>> OSTF - we extend criteria, and pass only those mirrors which also pass OSTF
>> phase
>> Once Juno starts to pass BVT with OSTF check enabled, I think we can
>> disable Icehouse checks. Not sure about fuel-library tests on Fuel CI with
>> Icehouse - we might want to continue using them.
>>
>> Thanks,
>>
>> On Wed, Sep 10, 2014 at 12:22 AM, Aleksandra Fedorova
>>  wrote:
>>>
>>> > Our Fuel CI can do 4 builds against puppet modules: 2 voting, with
>>> > Icehouse packages; 2 non-voting, with Juno packages.
>>> > Then, I'd suggest to create ISO with 2 releases (Icehouse, Juno)
>>> > actually before Juno becomes stable. We will be able to run 2 sets of BVTs
>>> > (against Icehouse and Juno), and it means that we will be able to see 
>>> > almost
>>> > immediately if something in nailgun/astute/puppet integration broke. For
>>> > Juno builds it's going to be all red initially.
>>>
>>> Let me rephrase:
>>>
>>> We keep one Fuel master branch for two OpenStack releases. And we make
>>> sure that Fuel master code is compatible with both of them. And we use
>>> current release (Icehouse) as a reference for test results of upcoming
>>> release, till we obtain a stable enough reference point in Juno itself.
>>> Moreover we'd like to have OSTF code running on all previous Fuel releases.
>>>
>>> Changes to CI workflow look as follows:
>>>
>>> Nightly builds:
>>>   1) We build two mirrors: one for Icehouse and one for Juno.
>>>   2) From each mirror we build Fuel ISO using exactly the same fuel
>>> master branch code.
>>>   3) Then we run BVT tests on both (using the same fuel-main code for
>>> system tests).
>>>   4) If Icehouse BVT tests pass, we deploy both ISO images (even with
>>> failed Juno tests) onto Fuel CI.
>>>
>>> On Fuel CI we should run:
>>>   - 4 fuel-library tests (revert master node, inject fuel-library code in
>>> master node and run deployment):
>>> 2 (ubuntu and centos) voting Icehouse tests and 2 non-voting
>>> Juno tests
>>>   - 5 OSTF tests (revert deployed environment, inject OSTF code into
>>> master node, run OSTF):
>>> voting on 4.1, 5.0, 5.1, master/icehouse and non-voting on
>>> master/Juno
>>>   - other tests, which don't use prebuilt environment, work as before
>>>
>>> The major action point here would be OSTF tests, as we don't yet have a
>>> working implementation of injecting OSTF code into deployed environment. And
>>> we don't run any tests on old environments.
>>>
>>>
>>> Questions:
>>>
>>> 1) How should we test mirrors?
>>>
>>> Current master mirrors go through the 4-hour test cycle involving Fuel
>>> ISO build:
>>>   1. we build temporary mirror
>>>   2. build custom iso from it
>>>   3. run two custom bvt jobs
>>>   4. if they pass we move mirror to stable and switch to it for our
>>> "primary" fuel_master_iso
>>>
>>> Should we test only Icehouse mirrors, or both, but ignoring again failed
>>> BVT for Juno? Maybe we should enable these tests only later in release
>>> cycle, say, after SCF?

Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-09-11 Thread Jay Pipes

On 09/11/2014 04:51 PM, Matt Riedemann wrote:

On 9/10/2014 6:00 PM, Russell Bryant wrote:

On 09/10/2014 06:46 PM, Joe Cropper wrote:

Hmm, not sure I follow the concern, Russell.  How is that any different
from putting a VM into the group when it’s booted as is done today?
  This simply defers the ‘group insertion time’ to some time after
the VM’s been initially spawned, so I’m not sure this creates any more race
conditions than what’s already there [1].

[1] Sure, the to-be-added VM could be in the midst of a migration or
something, but that would be pretty simple to check make sure its task
state is None or some such.


The way this works at boot is already a nasty hack.  It does policy
checking in the scheduler, and then has to re-do some policy checking at
launch time on the compute node.  I'm afraid of making this any worse.
In any case, it's probably better to discuss this in the context of a
more detailed design proposal.



This [1] is the hack you're referring to right?

[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2014.2.b3#n1297


That's the hack *I* had in the back of my mind.

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-09-11 Thread Matt Riedemann



On 9/10/2014 6:00 PM, Russell Bryant wrote:

On 09/10/2014 06:46 PM, Joe Cropper wrote:

Hmm, not sure I follow the concern, Russell.  How is that any different
from putting a VM into the group when it’s booted as is done today?
  This simply defers the ‘group insertion time’ to some time after
the VM’s been initially spawned, so I’m not sure this creates any more race
conditions than what’s already there [1].

[1] Sure, the to-be-added VM could be in the midst of a migration or
something, but that would be pretty simple to check make sure its task
state is None or some such.


The way this works at boot is already a nasty hack.  It does policy
checking in the scheduler, and then has to re-do some policy checking at
launch time on the compute node.  I'm afraid of making this any worse.
In any case, it's probably better to discuss this in the context of a
more detailed design proposal.



This [1] is the hack you're referring to right?

[1] 
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2014.2.b3#n1297


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-11 Thread Mike Scherbakov
What would be your suggestions then on the issue? We want to avoid breaking
our fuel-library and other code without knowing about it, so it's not an
option to simply forget about compatibility with Icehouse packages.

And I'm actually +1 for introducing Kilo CI jobs as soon as we can, so as to
keep our CI cycle very close to the upstream.

On Thu, Sep 11, 2014 at 11:22 PM, Roman Vyalov  wrote:

> Mike,
> 2 jobs for Icehouse and Juno mean 2 different repositories with packages
> for Fuel 6.0. This can be a problem for the current osci workflow.
> For example: we need to build new packages. Which repository must we put
> the packages in? Icehouse or/and Juno?
> What if new packages break the Icehouse repository but are required for Juno...?
>
> On Wed, Sep 10, 2014 at 12:39 AM, Mike Scherbakov <
> mscherba...@mirantis.com> wrote:
>
>> Aleksandra,
>> you've got us exactly right. Fuel CI for OSTF can wait a bit longer, but "4
>> fuel-library tests" should happen right after we create stable/5.1. Also,
>> for Fuel CI for OSTF - I don't think it's actually necessary to support
>> <5.0 envs.
>>
>> Your questions:
>>
>>1. Create jobs for both Icehouse and Juno, but it doesn't make sense
>>to do staging for Juno till it starts to pass deployment in HA mode. Once
>>it passes deployment in HA, staging should be enabled. Then, once it 
>> passes
>>OSTF - we extend criteria, and pass only those mirrors which also pass 
>> OSTF
>>phase
>>2. Once Juno starts to pass BVT with OSTF check enabled, I think we
>>can disable Icehouse checks. Not sure about fuel-library tests on Fuel CI
>>with Icehouse - we might want to continue using them.
>>
>> Thanks,
>>
>> On Wed, Sep 10, 2014 at 12:22 AM, Aleksandra Fedorova <
>> afedor...@mirantis.com> wrote:
>>
>>> > Our Fuel CI can do 4 builds against puppet modules: 2 voting, with
>>> Icehouse packages; 2 non-voting, with Juno packages.
>>> > Then, I'd suggest to create ISO with 2 releases (Icehouse, Juno)
>>> actually before Juno becomes stable. We will be able to run 2 sets of BVTs
>>> (against Icehouse and Juno), and it means that we will be able to see
>>> almost immediately if something in nailgun/astute/puppet integration broke.
>>> For Juno builds it's going to be all red initially.
>>>
>>> Let me rephrase:
>>>
>>> We keep one Fuel master branch for two OpenStack releases. And we make
>>> sure that Fuel master code is compatible with both of them. And we use
>>> current release (Icehouse) as a reference for test results of upcoming
>>> release, till we obtain a stable enough reference point in Juno itself.
>>> Moreover we'd like to have OSTF code running on all previous Fuel releases.
>>>
>>> Changes to CI workflow look as follows:
>>>
>>> Nightly builds:
>>>   1) We build two mirrors: one for Icehouse and one for Juno.
>>>   2) From each mirror we build Fuel ISO using exactly the same fuel
>>> master branch code.
>>>   3) Then we run BVT tests on both (using the same fuel-main code for
>>> system tests).
>>>   4) If Icehouse BVT tests pass, we deploy both ISO images (even with
>>> failed Juno tests) onto Fuel CI.
>>>
>>> On Fuel CI we should run:
>>>   - 4 fuel-library tests (revert master node, inject fuel-library code
>>> in master node and run deployment):
>>> 2 (ubuntu and centos) voting Icehouse tests and 2 non-voting
>>> Juno tests
>>>   - 5 OSTF tests (revert deployed environment, inject OSTF code into
>>> master node, run OSTF):
>>> voting on 4.1, 5.0, 5.1, master/icehouse and non-voting on
>>> master/Juno
>>>   - other tests, which don't use prebuilt environment, work as before
>>>
>>> The major action point here would be OSTF tests, as we don't yet have a
>>> working implementation of injecting OSTF code into deployed environment.
>>> And we don't run any tests on old environments.
>>>
>>>
>>> Questions:
>>>
>>> 1) How should we test mirrors?
>>>
>>> Current master mirrors go through the 4-hour test cycle involving Fuel
>>> ISO build:
>>>   1. we build temporary mirror
>>>   2. build custom iso from it
>>>   3. run two custom bvt jobs
>>>   4. if they pass we move mirror to stable and switch to it for our
>>> "primary" fuel_master_iso
>>>
>>> Should we test only Icehouse mirrors, or both, but ignoring again failed
>>> BVT for Juno? Maybe we should enable these tests only later in release
>>> cycle, say, after SCF?
>>>
>>> 2) It is not clear to me when and how we will switch from supporting
>>> two releases back to one.
>>> Should we add one more milestone to our release process? The "Switching
>>> point", when we disable and remove Icehouse tasks and move to Juno
>>> completely? I guess it should happen before next SCF?
>>>
>>>
>>>
>>> On Tue, Sep 9, 2014 at 9:52 PM, Mike Scherbakov <
>>> mscherba...@mirantis.com> wrote:
>>>
 > What we need to achieve that is have 2 build series based on Fuel
 master: one with Icehouse packages, and one with Juno, and, as Mike
 proposed, keep our manifests backwards compatible with Icehouse.

Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-11 Thread Zane Bitter

On 04/09/14 08:14, Sean Dague wrote:


I've been one of the consistent voices concerned about a hard
requirement on adding NoSQL into the mix. So I'll explain that thinking
a bit more.

I feel like, previously, when the TC has made an integration decision it
has been about evaluating the project applying for integration, and
whether it met some specific criteria it was told about some time in the
past. I think that's the wrong approach. It's a locally optimized
approach that fails to ask the more interesting question.

Is OpenStack better as a whole if this is a mandatory component of
OpenStack? Better being defined as technically better (more features,
less janky code work arounds, less unexpected behavior from the stack).
Better from the sense of easier or harder to run an actual cloud by our
Operators (taking into account what kinds of moving parts they are now
expected to manage). Better from the sense of a better user experience
in interacting with OpenStack as whole. Better from a sense that the
OpenStack release will experience less bugs, less unexpected cross
project interactions, an a greater overall feel of consistency so that
the OpenStack API feels like one thing.

https://dague.net/2014/08/26/openstack-as-layers/


I don't want to get off-topic here, but I want to state before this 
becomes the de-facto starting point for a layering discussion that I 
don't accept this model at all. It is not based on any analysis 
whatsoever but appears to be entirely arbitrary - a collection of 
individual prejudices arranged visually.


On a hopefully more constructive note, I believe there are at least two 
analyses that _would_ produce interesting data here:


1) Examine the dependencies, both hard and optional, between projects 
and enumerate the things you lose when ignoring each optional one.
2) Analyse projects based on the type of user consuming the service - 
e.g. Nova is mostly used (directly or indirectly via e.g. Heat and/or 
Horizon) by actual, corporeal persons, while Zaqar is used by both 
persons (to set up queues) and services (which actually send and receive 
messages) - of both OpenStack and applications. I believe, BTW that this 
analysis will uncover a lot of missing features in Keystone[1].


What you can _not_ produce is a linear model of the different types of 
clouds for different use cases, because different organisations have 
wildly differing needs.



One of the interesting qualities of Layers 1 & 2 is they all follow an
AMQP + RDBMS pattern (excepting swift). You can have a very effective
IaaS out of that stack. They are the things that you can provide pretty
solid integration testing on (and if you look at where everything stood
before the new TC mandates on testing / upgrade that was basically what
was getting integration tested). (Also note, I'll accept Barbican is
probably in the wrong layer, and should be a Layer 2 service.)


Swift is the current exception here, but one could argue, and people 
have[2], that Swift is also the only project that actually conforms to 
our stated design tenets for OpenStack. I'd struggle to tell the Zaqar 
folks they've done the Wrong Thing... especially when abandoning the 
RDBMS driver was done largely at the direction of the TC iirc.


Speaking of Swift, I would really love to see it investigated as a 
potential storage backend for Zaqar. If it proves to have the right 
guarantees (and durability is the crucial one, so it sounds promising) 
then that has the potential to smooth over a lot of the deployment problem.



While large shops can afford to have a dedicated team to figure out how
to make mongo or redis HA, provide monitoring, have a DR plan for when a
hurricane requires them to flip datacenters, that basically means
OpenStack heads further down the path of "only for the big folks". I
don't want OpenStack to be only for the big folks, I want OpenStack to
be for all sized folks. I really do want to have all the local small
colleges around here have OpenStack clouds, because it's something that
people believe they can do and manage. I know the people that work in
these places; they all come out to the LUG I run. We've talked about
this. OpenStack is basically seen as too complex for them to use as it
stands, and that pains me a ton.


This is a great point, and one that we definitely have to keep in mind.

It's also worth noting that small organisations also get the most 
benefit. Rather than having to stand up a cluster of reliable message 
brokers (large organisations are much more likely to need this kind of 
flexibility anyway) - potentially one cluster per application - they can 
have their IT department deploy e.g. a single Redis cluster and have 
messaging handled for every application in their cloud with all the 
benefits of multitenancy.


Part of the move to the cloud is inevitably going to mean organisational 
changes in a lot of places, where the operations experts will 
increasingly focus on maintaining the cloud itself, rath

[openstack-dev] [qa] Tempest Bug triage

2014-09-11 Thread David Kranz
So we had a Bug Day this week and the results were a bit disappointing 
due to lack of participation. We went from 124 New bugs to 75. There 
were also many cases where bugs referred to logs that no longer existed. 
This suggests that we really need to keep up with bug triage in real 
time. Since bug triage should involve the Core review team, we propose 
to rotate the responsibility of triaging bugs weekly. I put up an 
etherpad here https://etherpad.openstack.org/p/qa-bug-triage-rotation 
and I hope the tempest core review team will sign up. Given our size, 
this should involve signing up once every two months or so. I took next 
week.


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-11 Thread Roman Vyalov
Mike,
2 jobs for Icehouse and Juno mean 2 different repositories with packages for
Fuel 6.0. This can be a problem for the current osci workflow.
For example: we need to build new packages. Which repository must we put
the packages in? Icehouse or/and Juno?
What if new packages break the Icehouse repository but are required for Juno...?

On Wed, Sep 10, 2014 at 12:39 AM, Mike Scherbakov 
wrote:

> Aleksandra,
> you've got us exactly right. Fuel CI for OSTF can wait a bit longer, but "4
> fuel-library tests" should happen right after we create stable/5.1. Also,
> for Fuel CI for OSTF - I don't think it's actually necessary to support
> <5.0 envs.
>
> Your questions:
>
>1. Create jobs for both Icehouse and Juno, but it doesn't make sense
>to do staging for Juno till it starts to pass deployment in HA mode. Once
>it passes deployment in HA, staging should be enabled. Then, once it passes
>OSTF - we extend criteria, and pass only those mirrors which also pass OSTF
>phase
>2. Once Juno starts to pass BVT with OSTF check enabled, I think we
>can disable Icehouse checks. Not sure about fuel-library tests on Fuel CI
>with Icehouse - we might want to continue using them.
>
> Thanks,
>
> On Wed, Sep 10, 2014 at 12:22 AM, Aleksandra Fedorova <
> afedor...@mirantis.com> wrote:
>
>> > Our Fuel CI can do 4 builds against puppet modules: 2 voting, with
>> Icehouse packages; 2 non-voting, with Juno packages.
>> > Then, I'd suggest to create ISO with 2 releases (Icehouse, Juno)
>> actually before Juno becomes stable. We will be able to run 2 sets of BVTs
>> (against Icehouse and Juno), and it means that we will be able to see
>> almost immediately if something in nailgun/astute/puppet integration broke.
>> For Juno builds it's going to be all red initially.
>>
>> Let me rephrase:
>>
>> We keep one Fuel master branch for two OpenStack releases. And we make
>> sure that Fuel master code is compatible with both of them. And we use
>> current release (Icehouse) as a reference for test results of upcoming
>> release, till we obtain a stable enough reference point in Juno itself.
>> Moreover we'd like to have OSTF code running on all previous Fuel releases.
>>
>> Changes to CI workflow look as follows:
>>
>> Nightly builds:
>>   1) We build two mirrors: one for Icehouse and one for Juno.
>>   2) From each mirror we build Fuel ISO using exactly the same fuel
>> master branch code.
>>   3) Then we run BVT tests on both (using the same fuel-main code for
>> system tests).
>>   4) If Icehouse BVT tests pass, we deploy both ISO images (even with
>> failed Juno tests) onto Fuel CI.
>>
>> On Fuel CI we should run:
>>   - 4 fuel-library tests (revert master node, inject fuel-library code in
>> master node and run deployment):
>> 2 (ubuntu and centos) voting Icehouse tests and 2 non-voting
>> Juno tests
>>   - 5 OSTF tests (revert deployed environment, inject OSTF code into
>> master node, run OSTF):
>> voting on 4.1, 5.0, 5.1, master/icehouse and non-voting on
>> master/Juno
>>   - other tests, which don't use prebuilt environment, work as before
>>
>> The major action point here would be OSTF tests, as we don't yet have a
>> working implementation of injecting OSTF code into deployed environment.
>> And we don't run any tests on old environments.
>>
>>
>> Questions:
>>
>> 1) How should we test mirrors?
>>
>> Current master mirrors go through the 4-hour test cycle involving Fuel
>> ISO build:
>>   1. we build temporary mirror
>>   2. build custom iso from it
>>   3. run two custom bvt jobs
>>   4. if they pass we move mirror to stable and switch to it for our
>> "primary" fuel_master_iso
>>
>> Should we test only Icehouse mirrors, or both, but ignoring again failed
>> BVT for Juno? Maybe we should enable these tests only later in release
>> cycle, say, after SCF?
>>
>> 2) It is not clear to me when and how we will switch from supporting two
>> releases back to one.
>> Should we add one more milestone to our release process? The "Switching
>> point", when we disable and remove Icehouse tasks and move to Juno
>> completely? I guess it should happen before next SCF?
>>
>>
>>
>> On Tue, Sep 9, 2014 at 9:52 PM, Mike Scherbakov > > wrote:
>>
>>> > What we need to achieve that is have 2 build series based on Fuel
>>> master: one with Icehouse packages, and one with Juno, and, as Mike
>>> proposed, keep our manifests backwards compatible with Icehouse.
>>> Exactly. Our Fuel CI can do 4 builds against puppet modules: 2 voting,
>>> with Icehouse packages; 2 non-voting, with Juno packages.
>>>
>>> Then, I'd suggest to create ISO with 2 releases (Icehouse, Juno)
>>> actually before Juno becomes stable. We will be able to run 2 sets of BVTs
>>> (against Icehouse and Juno), and it means that we will be able to see
>>> almost immediately if something in nailgun/astute/puppet integration broke.
>>> For Juno builds it's going to be all red initially.
>>>
>>> Another suggestion would be to lower green switch in BVTs f

Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Anastasia Urlapova
QA-agree.

--
nurla

On Thu, Sep 11, 2014 at 6:28 PM, Mike Scherbakov 
wrote:

> > Mike, I just want to say that if a feature isn't ready for production use
> and we have no other choice, we should provide detailed limitations and
> examples of proper use.
> Fully agree, such features should become experimental. We should have this
> information in release notes.
>
> Basically, Patching of OpenStack becomes such a feature, unfortunately. We still
> have bugs, and there is no guarantee that we won't find more.
>
> So, let's add "experimental" tag to issues around Zabbix & Patching of
> OpenStack.
>
> On Thu, Sep 11, 2014 at 6:19 PM, Anastasia Urlapova <
> aurlap...@mirantis.com> wrote:
>
>> Mike, I just want to say that if a feature isn't ready for production use
>> and we have no other choice, we should provide detailed limitations and
>> examples of proper use.
>>
>> On Thu, Sep 11, 2014 at 5:58 PM, Tomasz Napierala <
>> tnapier...@mirantis.com> wrote:
>>
>>>
>>> On 11 Sep 2014, at 09:19, Mike Scherbakov 
>>> wrote:
>>>
>>> > Hi all,
>>> > what about using "experimental" tag for experimental features?
>>> >
>>> > After we implemented feature groups [1], we can divide our features
>>> and declare complex features, or those which don't get enough QA resources in
>>> the dev cycle, as experimental. It would mean that those are
>>> not production-ready features.
>>> > Letting them go live in experimental mode allows early adopters to
>>> give them a try and bring feedback to the development team.
>>> >
>>> > I think we should not count bugs for HCF criteria if they affect only
>>> experimental feature(s). At the moment, we have Zabbix as an experimental
>>> feature, and Patching of OpenStack [2] is under consideration: if today QA
>>> doesn't approve it as ready for production use, we have no other
>>> choice. All deadlines passed, and we need to get 5.1 finally out.
>>> >
>>> > Any objections / other ideas?
>>>
>>> +1
>>>
>>> --
>>> Tomasz 'Zen' Napierala
>>> Sr. OpenStack Engineer
>>> tnapier...@mirantis.com
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Mike Scherbakov
> #mihgen
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

2014-09-11 Thread Devananda van der Veen
On Wed, Sep 10, 2014 at 6:09 PM, Kurt Griffiths
 wrote:
> On 9/10/14, 3:58 PM, "Devananda van der Veen" 
> wrote:
>
>>I'm going to assume that, for these benchmarks, you configured all the
>>services optimally.
>
> Sorry for any confusion; I am not trying to hide anything about the setup.
> I thought I was pretty transparent about the way uWSGI, MongoDB, and Redis
> were configured. I tried to stick to mostly default settings to keep
> things simple, making it easier for others to reproduce/verify the results.
>
> Is there further information about the setup that you were curious about
> that I could provide? Was there a particular optimization that you didn’t
> see that you would recommend?
>

Nope.

>>I'm not going to question why you didn't run tests
>>with tens or hundreds of concurrent clients,
>
> If you review the different tests, you will note that a couple of them
> used at least 100 workers. That being said, I think we ought to try higher
> loads in future rounds of testing.
>

Perhaps I misunderstand what "2 processes with 25 gevent workers"
means - I think this means you have two _processes_ which are using
greenthreads and eventlet, and so each of those two python processes
is swapping between 25 coroutines. From a load generation standpoint,
this is not the same as having 100 concurrent client _processes_.
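
To make the distinction concrete (hit_api below is a stand-in for the
benchmark's request loop, not anyone's real code):

    import multiprocessing

    from gevent.pool import Pool

    def hit_api():
        pass  # placeholder: a request loop against the Zaqar endpoint

    def gevent_worker_process(n_coroutines=25):
        # One OS process, 25 cooperative coroutines: they interleave on
        # network I/O but share a single CPU and a single interpreter.
        pool = Pool(n_coroutines)
        for _ in range(n_coroutines):
            pool.spawn(hit_api)
        pool.join()

    if __name__ == '__main__':
        # "2 processes with 25 gevent workers": at most ~2 CPUs of
        # client-side load...
        procs = [multiprocessing.Process(target=gevent_worker_process)
                 for _ in range(2)]
        # ...vs. 100 true client processes, each independently scheduled:
        # procs = [multiprocessing.Process(target=hit_api)
        #          for _ in range(100)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()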

>>or why you only ran the
>>tests for 10 seconds.
>
> In Round 1 I did mention that i wanted to do a followup with a longer
> duration. However, as I alluded to in the preamble for Round 2, I kept
> things the same for the redis tests to compare with the mongo ones done
> previously.
>
> We’ll increase the duration in the next round of testing.
>

Sure - consistency between tests is good. But I don't believe that a
10-second benchmark is ever enough to suss out service performance.
Lots of things only appear after high load has been applied for a
period of time as, e.g., caches fill up, though this leads to my next
point below...

>>Instead, I'm actually going to question how it is that, even with
>>relatively beefy dedicated hardware (128 GB RAM in your storage
>>nodes), Zaqar peaked at around 1,200 messages per second.
>
> I went back and ran some of the tests and never saw memory go over ~20M
> (as observed with redis-top) so these same results should be obtainable on
> a box with a lot less RAM.

Whoa. So, that's a *really* important piece of information which was,
afaict, missing from your previous email(s). I hope you can understand
how, with the information you provided ("the Redis server has 128GB
RAM") I was shocked at the low performance.

> Furthermore, the tests only used 1 CPU on the
> Redis host, so again, similar results should be achievable on a much more
> modest box.

You described fairly beefy hardware but didn't utilize it fully -- I
was expecting your performance test to attempt to stress the various
components of a Zaqar installation and, at least in some way, attempt
to demonstrate what the capacity of a Zaqar deployment might be on the
hardware you have available. Thus my surprise at the low numbers. If
that wasn't your intent (and given the CPU/RAM usage your tests
achieved, it's not what you achieved) then my disappointment in those
performance numbers is unfounded.

But I hope you can understand, if I'm looking at a service benchmark
to gauge how well that service might perform in production, seeing
expensive hardware perform disappointingly slowly is not a good sign.

>
> FWIW, I went back and ran a couple scenarios to get some more data points.
> First, I did one with 50 producers and 50 observers. In that case, the
> single CPU on which the OS scheduled the Redis process peaked at 30%. The
> second test I did was with 50 producers + 5 observers + 50 consumers
> (which claim messages and delete them rather than simply page through
> them). This time, Redis used 78% of its CPU. I suppose this should not be
> surprising because the consumers do a lot more work than the observers.
> Meanwhile, load on the web head was fairly high; around 80% for all 20
> CPUs. This tells me that python and/or uWSGI are working pretty hard to
> serve these requests, and there may be some opportunities to optimize that
> layer. I suspect there are also some opportunities to reduce the number of
> Redis operations and roundtrips required to claim a batch of messages.
>

OK - those resource usages sound better. At least you generated enough
load to saturate the uWSGI process CPU, which is a good point to look
at performance of the system.

At that peak, what was the:
- average msgs/sec
- min/max/avg/stdev time to [post|get|delete] a message
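
Something along these lines, computed from per-request latency samples
collected in the client (sketch; the names are made up):

    import statistics

    def summarize(latencies, duration_secs):
        # latencies: per-request wall times in seconds for one operation
        # type (post, get, or delete), gathered over the whole run.
        return {
            'msgs_per_sec': len(latencies) / float(duration_secs),
            'min': min(latencies),
            'max': max(latencies),
            'avg': statistics.mean(latencies),
            'stdev': statistics.stdev(latencies),
        }

    # e.g. summarize(post_latencies, 600) for a 10-minute run, reported
    # separately for the post, observe, and claim+delete paths.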

> The other thing to consider is that in these first two rounds I did not
> test increasing amounts of load (number of clients performing concurrent
> requests) and graph that against latency and throughput. Out of curiosity,
> I just now did a quick test to compare the messages enqueued with 50
> producers + 5 observers + 50 consumers vs. adding anot

Re: [openstack-dev] [Zaqar] Zaqar graduation (round 2) [was: Comments on the concerns arose during the TC meeting]

2014-09-11 Thread Clint Byrum
Excerpts from Flavio Percoco's message of 2014-09-11 04:14:30 -0700:
> On 09/10/2014 03:45 PM, Gordon Sim wrote:
> > On 09/10/2014 01:51 PM, Thierry Carrez wrote:
> >> I think we do need, as Samuel puts it, "some sort of durable
> >> message-broker/queue-server thing". It's a basic application building
> >> block. Some claim it's THE basic application building block, more useful
> >> than database provisioning. It's definitely a layer above pure IaaS, so
> >> if we end up splitting OpenStack into layers this clearly won't be in
> >> the inner one. But I think "IaaS+" basic application building blocks
> >> belong in OpenStack one way or another. That's the reason I supported
> >> Designate ("everyone needs DNS") and Trove ("everyone needs DBs").
> >>
> >> With that said, I think yesterday there was a concern that Zaqar might
> >> not fill the "some sort of durable message-broker/queue-server thing"
> >> role well. The argument goes something like: if it was a queue-server
> >> then it should actually be built on top of Rabbit; if it was a
> >> message-broker it should be built on top of postfix/dovecot; the current
> >> architecture is only justified because it's something in between, so
> >> it's broken.
> > 
> > What is the distinction between a message broker and a queue server? To
> > me those terms both imply something broadly similar (message broker
> > perhaps being a little bit more generic). I could see Zaqar perhaps as
> > somewhere between messaging and data-storage.
> 
> I agree with Gordon here. I really don't know how to say this without
> creating more confusion. Zaqar is a messaging service. Messages are the
> most important entity in Zaqar. This, however, does not forbid anyone from
> using Zaqar as a queue. It has the required semantics, it guarantees FIFO
> and other queuing-specific patterns. This doesn't mean Zaqar is trying
> to do something outside its scope, it comes for free.
> 

It comes with a huge cost actually, so saying it comes for free is a
misrepresentation. It is a side effect of developing a superset of
queueing. But that superset is only useful to a small number of your
stated use cases. Many of your use cases (including the one I've been
involved with, Heat pushing metadata to servers) are entirely served by
the much simpler, much lighter weight, pure queueing service.

> Is Zaqar being optimized as a *queuing* service? I'd say no. Our goal is
> to optimize Zaqar for delivering messages and supporting different
> messaging patterns.
> 

Awesome! Just please don't expect people to get excited about it for
the lighter weight queueing workloads that you've claimed as use cases.

I totally see Horizon using it to keep events for users. I see Heat
using it for stack events as well. I would bet that Trove would benefit
from being able to communicate messages to users.

But I think in between Zaqar and the backends will likely be a lighter
weight queue-only service that the users can just subscribe to when they
don't want an inbox. And I think that lighter weight queue service is
far more important for OpenStack than the full blown random access
inbox.

I think the reason such a thing has not appeared is because we were all
sort of running into "but Zaqar is already incubated". Now that we've
fleshed out the difference, I think those of us that need a lightweight
multi-tenant queue service should add it to OpenStack.  Separately. I hope
that doesn't offend you and the rest of the excellent Zaqar developers. It
is just a different thing.

> Should we remove all the semantics that allow people to use Zaqar as a
> queue service? I don't think so either. Again, the semantics are there
> because Zaqar is using them to do its job. Whether other folks may/may
> not use Zaqar as a queue service is out of our control.
> 
> This doesn't mean the project is broken.
> 

No, definitely not broken. It just isn't actually necessary for many of
the stated use cases.
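
For reference, the queue-style usage in question looks roughly like this
against Zaqar's v1 HTTP API (a sketch from memory - the endpoint is made
up, and the paths and payload shapes should be checked against the actual
API spec):

    import json
    import uuid

    import requests

    HOST = 'http://zaqar.example.com:8888'      # hypothetical endpoint
    HEADERS = {'Client-ID': str(uuid.uuid4()),  # v1 requires a client UUID
               'Content-Type': 'application/json'}

    # Produce: append messages; paging preserves FIFO order per queue.
    requests.post(HOST + '/v1/queues/jobs/messages', headers=HEADERS,
                  data=json.dumps([{'ttl': 300,
                                    'body': {'task': 'resize'}}]))

    # Consume: a claim hides messages from other workers until the claim
    # TTL expires or the messages are deleted (i.e. acknowledged).
    resp = requests.post(HOST + '/v1/queues/jobs/claims?limit=5',
                         headers=HEADERS,
                         data=json.dumps({'ttl': 60, 'grace': 60}))
    if resp.status_code == 201:        # 204 would mean nothing to claim
        for msg in resp.json():
            # ... process msg['body'] ...
            # msg['href'] carries the claim_id query parameter
            requests.delete(HOST + msg['href'], headers=HEADERS)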

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Chris Friesen

On 09/11/2014 12:02 PM, Dan Prince wrote:


Maybe I'm impatient (I totally am!) but I see much of the review
slowdown as a result of the feedback loop times increasing over the
years. OpenStack has some really great CI and testing but I think our
focus on not breaking things actually has us painted into a corner. We
are losing our agility and the review process is paying the price. At
this point I think splitting out the virt drivers would be more of a
distraction than a help.


I think the only solution to feedback loop times increasing is to scale 
the review process, which I think means giving more people 
responsibility for a smaller amount of code.


I don't think it's strictly necessary to split the code out into a 
totally separate repo, but I do think it would make sense to have 
changes that are entirely contained within a virt driver be reviewed 
only by developers of that virt driver rather than requiring review by 
the project as a whole.  And they should only have to pass a subset of 
the CI testing--that way they wouldn't be held up by gating bugs in 
other areas.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Armando M.
On 10 September 2014 22:23, Russell Bryant  wrote:
> On 09/10/2014 10:35 PM, Armando M. wrote:
>> Hi,
>>
>> I devoured this thread; it was so interesting and full of
>> insights. It's not news that we've been pondering this in the
>> Neutron project for the past and existing cycle or so.
>>
>> Likely, this effort is going to take more than two cycles, and would
>> require a very focused team of people working closely together to
>> address this (most likely the core team members plus a few other folks
>> interested).
>>
>> One question I was unable to get a clear answer was: what happens to
>> existing/new bug fixes and features? Would the codebase go in lockdown
>> mode, i.e. not accepting anything else that isn't specifically
>> targeting this objective? Just using NFV as an example, I can't
>> imagine having changes supporting NFV still being reviewed and merged
>> while this process takes place...it would be like shooting at a moving
>> target! If we did go into lockdown mode, what happens to all the
>> corporate-backed agendas that aim at delivering new value to
>> OpenStack?
>
> Yes, I imagine a temporary slow-down on new feature development makes
> sense.  However, I don't think it has to be across the board.  Things
> should be considered case by case, like usual.

Aren't we trying to move away from the 'usual'? Considering things on
a case by case basis still requires review cycles, etc. Keeping the
status quo would mean prolonging the exact pain we're trying to
address.

>
> For example, a feature that requires invasive changes to the virt driver
> interface might have a harder time during this transition, but a more
> straightforward feature isolated to the internals of a driver might be
> fine to let through.  Like anything else, we have to weigh cost/benefit.
>
>> Should we relax what goes into the stable branches, i.e. considering
>> having  a Juno on steroids six months from now that includes some of
>> the features/fixes that didn't land in time before this process kicks
>> off?
>
> No ... maybe I misunderstand the suggestion, but I definitely would not
> be in favor of a Juno branch with features that haven't landed in master.
>

I was thinking of the bold move of having Kilo (and beyond)
developments solely focused on this transition. Until this is
complete, nothing would be merged that is not directly pertaining this
objective. At the same time, we'd still want pending features/fixes
(and possibly new features) to land somewhere stable-ish. I fear that
doing so in master, while stuff is churned up and moved out into
external repos, will make this whole task harder than it already is.

Thanks,
Armando

> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Dan Prince
On Thu, 2014-09-04 at 11:24 +0100, Daniel P. Berrange wrote:
> Position statement
> ==
> 
> Over the past year I've increasingly come to the conclusion that
> Nova is heading for (or probably already at) a major crisis. If
> steps are not taken to avert this, the project is likely to lose
> a non-trivial amount of talent, both regular code contributors and
> core team members. That includes myself. This is not good for
> Nova's long term health and so should be of concern to anyone
> involved in Nova and OpenStack.
> 
> For those who don't want to read the whole mail, the executive
> summary is that the nova-core team is an unfixable bottleneck
> in our development process with our current project structure.
> The only way I see to remove the bottleneck is to split the virt
> drivers out of tree and let them all have their own core teams
> in their area of code, leaving current nova core to focus on
> all the common code outside the virt driver impls. I nonetheless
> urge people to read the whole mail.
> 


I've always referred to the virt/driver.py API as an internal API
meaning there are no guarantees about it being preserved across
releases. I'm not saying this is correct... just that it is what we've
got.  While OpenStack attempts to do a good job at stabilizing its
public API's we haven't done the same for internal API's. It is actually
quite painful to be out of tree at this point as I've seen with the
Ironic driver being out of the Nova tree. (really glad that is back in
now!)

So because we haven't designed things to be split out in this regard we
can't just go and do it. 

I tinkered with some numbers... not sure if this helps or hurts my
stance but here goes. By my calculation this is the number of commits
we've made that touched each virt driver tree for the last several
releases plus stuff done to-date in Juno.

Created using a command like this in each virt directory for each
release: git log origin/stable/havana..origin/stable/icehouse
--no-merges --pretty=oneline . | wc -l
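
For anyone who wants to reproduce or extend these numbers, something
along these lines automates the counting (a sketch; run from the root of
a nova checkout, and it assumes the driver code lives under nova/virt/):

#!/usr/bin/env python
# Count non-merge commits touching each virt driver between two branches.
import subprocess

DRIVERS = ['baremetal', 'hyperv', 'libvirt', 'vmwareapi', 'xenapi']
RANGES = [('origin/stable/havana', 'origin/stable/icehouse'),
          ('origin/stable/icehouse', 'origin/master')]

def count(rev_range, path):
    out = subprocess.check_output(
        ['git', 'log', rev_range, '--no-merges',
         '--pretty=oneline', '--', path])
    return len(out.splitlines())

for old, new in RANGES:
    rev_range = '%s..%s' % (old, new)
    print(rev_range)
    total = 0
    for driver in DRIVERS:
        n = count(rev_range, 'nova/virt/%s' % driver)
        total += n
        print(' %s: %d' % (driver, n))
    print('   * total for above: %d' % total)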

essex => folsom:

 baremetal: 26
 hyperv: 9
 libvirt: 222
 vmwareapi: 18
 xenapi: 164
   * total for above: 439

folsom => grizzly:

 baremetal: 83
 hyperv: 58
 libvirt: 254
 vmwareapi: 59
 xenapi: 126
   * total for above: 580

grizzly => havana:

 baremetal: 48
 hyperv: 55
 libvirt: 157
 vmwareapi: 105
 xenapi: 123
   * total for above: 488

havana => icehouse:

 baremetal: 45
 hyperv: 42
 libvirt: 212
 vmwareapi: 121
 xenapi: 100
   * total for above: 520

icehouse => master:

 baremetal: 26
 hyperv: 32
 libvirt: 188
 vmwareapi: 121
 xenapi: 71
   * total for above: 438

---

A couple of things jump out at me from the numbers:

 - drivers that are being deprecated (baremetal) still have lots of
changes. Some of these changes are valid bug fixes for the driver but a
majority of them are actually related to internal cleanups and interface
changes. This goes towards the fact that Nova isn't mature enough to do
a split like this yet.

 - the number of commits landed isn't growing *that* much across releases
in the virt driver trees. Presumably we think we were doing a better job
2 years ago? But the number of changes in the virt trees is largely the
same... perhaps this is because people aren't submitting stuff because
they are frustrated though?

---

For comparison here are the total number of commits for each Nova
release (includes the above commits):

essex -> folsom: 1708
folsom -> grizzly: 2131
grizzly -> havana: 2188
havana -> icehouse: 1696
icehouse -> master: 1493

---

So say around 30% of the commits for a given release touch the virt
driver trees (e.g. 438 of the 1493 commits so far in Juno, about 29%)...
and many of them aren't specifically related to the virt drivers, but are
rather general Nova internal cleanups because the interfaces aren't
stable.

And while splitting Nova virt drivers might help out some I'm not sure
it helps the general Nova issue in that we have more reviews with less
of the good ones landing. Nova is a weird beast at the moment and just
splitting things like this is probably going to harm as much as it helps
(like we saw with Ironic) unless we stabilize the APIs... and even then
I'm skeptical of death by a million tiny sub-projects. I'm just not
convinced this is the number #1 pain point around Nova reviews. What
about the other 70%?

For me a lot of the frustration with reviews is around test/gate time,
pushing things through, rechecks, etc... and if we break something it
takes just as much time to get the revert in. The last point (the
ability to revert code quickly) is a really important one as it
sometimes takes days to get a simple (obvious) revert landed. This
leaves groups like TripleO who have their own CI and 3rd party testing
systems which also capable of finding many critical issues in the
difficult position of having to revert/cherry pick critical changes for
days at a time in order to keep things running.

Maybe I'm impatient (I totally am!) but I see much of the review
slowdown as a result of the feedback loop times increasing over the
years.

[openstack-dev] [Trove] Cluster implementation is grabbing instance's guts

2014-09-11 Thread Tim Simpson
Hi everyone,

I was looking through the clustering code today and noticed a lot of it is 
grabbing what I'd call the guts of the instance models code.

The best example is here: 
https://github.com/openstack/trove/commit/06196fcf67b27f0308381da192da5cc8ae65b157#diff-a4d09d28bd2b650c2327f5d8d81be3a9R89

In the "_all_instances_ready" function, I would have expected 
trove.instance.models.load_any_instance to be called for each instance ID and 
its status to be checked.

Instead, the service_status is being checked directly. That is a big mistake. 
For now it works, but in general it leaks the concern of "what is an instance 
status?" into code outside of the instance class itself.

For an example of why this is bad, look at the method 
"_instance_ids_with_failures." The code is checking for failures by seeing if 
the service status is failed. What if the Nova server or Cinder volume have 
tanked instead? The code won't work as expected.

It could be we need to introduce another status besides BUILD to instance 
statuses, or we need to introduce a new internal property to the SimpleInstance 
base class that we can check. But whatever we do, we should add this extra logic to 
the instance class itself rather than put it in the clustering models code.
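
Concretely, I'm thinking of something like this (a self-contained sketch;
apart from SimpleInstance and service_status every name here is made up,
and the real statuses would of course come from the Nova/Cinder
resources):

class SimpleInstance(object):
    def __init__(self, iid, service_status, server_status, volume_status):
        self.id = iid
        self.service_status = service_status  # guest service status
        self.server_status = server_status    # Nova server status
        self.volume_status = volume_status    # Cinder volume status

    @property
    def is_ready(self):
        # Every dependent resource must be healthy, not just the guest.
        return (self.service_status == 'ACTIVE' and
                self.server_status == 'ACTIVE' and
                self.volume_status in (None, 'in-use'))

    @property
    def has_failed(self):
        # Catches Nova/Cinder failures that service_status alone misses.
        return 'FAILED' in (self.service_status,
                            self.server_status,
                            self.volume_status)

def _all_instances_ready(instances):
    return all(i.is_ready for i in instances)

def _instance_ids_with_failures(instances):
    return [i.id for i in instances if i.has_failed]

# One instance's guest looks fine, but its Nova server has tanked:
instances = [SimpleInstance(1, 'ACTIVE', 'ACTIVE', 'in-use'),
             SimpleInstance(2, 'ACTIVE', 'FAILED', 'in-use')]
print(_all_instances_ready(instances))         # False
print(_instance_ids_with_failures(instances))  # [2]

The clustering code then never needs to know which underlying resource
made an instance ready or failed.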

This is a minor nitpick but I think we should fix it before too much time 
passes.

Thanks,

Tim
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-11 Thread Stephen Wong
I agree with Kevin. Like in Juno, we as a subteam will be shooting for
option 1 (again) for Kilo - ideally we can land in Kilo, and we will work
closely with the community to try to accomplish that. In the meantime, we
need a repo to iterate on our implementation, provide packages (Juno
based) to early adopters, and be as transparent as if the code were on
gerrit. With option 2 never picking up momentum when Bob suggested it on
the ML, option 3 being more of an idea discussed by several cores during
the mid-cycle meetup, and option 4 currently in a holding pattern without
any detail but tons of concern raised on the ML --- option 5 (stackforge)
seems like the best available option at this point.

Thanks,
- Stephen

On Thu, Sep 11, 2014 at 10:02 AM, Kevin Benton  wrote:

> Thanks. This is good writeup.
>
> >Of course this all assumes there is consensus that we should proceed
> with GBP, that we should continue by iterating the currently proposed
> design and code, and that GBP should eventually become part of Neutron.
> These assumptions may still be the real issues :-( .
>
> Unfortunately I think this is the real root cause. Most of the people that
> worked on GBP definitely want to see it merged into Neutron and are in
> general agreement there. However, some of the other cores disagreed and now
> GBP is sitting in limbo. IIUC, this thread was started to just get GBP to
> some location where it can be developed on and tested that isn't a big
> string of rejected gerrit patches.
>
> >Does the above make some sense? What have I missed?
>
> Option 1 is great, but I don't see how the same thing that happened in
> Juno would be avoided.
>
> Option 2 is also good, but that idea didn't seem to catch on. If this
> option is on the table, this seems like the best way to go.
>
> Option 3 sounded like it brought up a lot of tooling (gerrit) issues with
> regard to how the merging workflow would work.
>
> Option 4 is unknown until the incubator details are hashed out.
>
> Option 5 is stackforge. I see this as a better place just to do what is
> already being done right now. You're right that patches would occur without
> core reviewers, but that's essentially what's happening now since nothing
> is getting merged.
>
>
>
>
> On Thu, Sep 11, 2014 at 7:57 AM, Robert Kukura 
> wrote:
>
>>
>> On 9/10/14, 6:54 PM, Kevin Benton wrote:
>>
>> Being in the incubator won't help with this if it's a different repo as
>> well.
>>
>> Agreed.
>>
>> Given the requirement for GBP to intercept API requests, the potential
>> couplings between policy drivers, ML2 mechanism drivers, and even service
>> plugins (L3 router), and the fact Neutron doesn't have a stable [service]
>> plugin API, along with the goal to eventually merge GBP into Neutron, I'd
>> rank the options as follows in descending order:
>>
>> 1) Merge the GBP patches to the neutron repo early in Kilo and iterate,
>> just like we had planned for Juno ;-) .
>>
>> 2) Like 1, but with the code initially in a "preview" subtree to clarify
>> its level of stability and support, and to facilitate packaging it as an
>> optional component.
>>
>> 3) Like 1, but merge to a feature branch in the neutron repo and iterate
>> there.
>>
>> 4) Develop in an official neutron-incubator repo, with neutron core
>> reviews of each GBP patch.
>>
>> 5) Develop in StackForge, without neutron core reviews.
>>
>>
>> Here's how I see these options in terms of the various considerations
>> that have come up during this discussion:
>>
>> * Options 1, 2 and 3 most easily support whatever coupling is needed with
>> the rest of Neutron. Options 4 and 5 would sometimes require synchronized
>> changes across repos since dependencies aren't in terms of stable
>> interfaces.
>>
>> * Options 1, 2 and 3 provide a clear path to eventually graduate GBP into
>> a fully supported Neutron feature, without loss of git history. Option 4
>> would have some hope of eventually merging into the neutron repo due to the
>> code having already had core reviews. With option 5, reviewing and merging
>> a complete GBP implementation from StackForge into the neutron repo would
>> be a huge effort, with significant risk that reviewers would want design
>> changes not practical to make at that stage.
>>
>> * Options 1 and 2 take full advantage of existing review, CI, packaging
>> and release processes and mechanisms. All the other options require extra
>> work to put these in place.
>>
>> * Options 1 and 2 can easily make GBP consumable by early adopters
>> through normal channels such as devstack and OpenStack distributions. The
>> other options all require the operator or the packager to pull GBP code
>> from a different source than the base Neutron code.
>>
>> * Option 1 relies on the historical understanding that new Neutron
>> extension APIs are not initially considered stable, and incompatible
>> changes can occur in future releases. Options 2, 3 and 4 make this
>> explicit. Option 5 really has nothing to do with Neutron.
>>
>>

Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Sean Dague
On 09/11/2014 11:14 AM, Gary Kotton wrote:
> 
> 
> On 9/11/14, 4:30 PM, "Sean Dague"  wrote:
> 
>> On 09/11/2014 09:09 AM, Gary Kotton wrote:
>>>
>>>
>>> On 9/11/14, 2:55 PM, "Thierry Carrez"  wrote:
>>>
 Sean Dague wrote:
> [...]
> Why don't we start with "let's clean up the virt interface and make it
> more sane", as I don't think there is any disagreement there. If it's
> going to take a cycle, it's going to take a cycle anyway (it will
> probably take 2 cycles, realistically, we always underestimate these
> things, remember when no-db-compute was going to be 1 cycle?). I don't
> see the need to actually decide here and now that the split is clearly
> at least 7 - 12 months away. A lot happens in the intervening time.

 Yes, that sounds like the logical next step. We can't split drivers
 without first doing that anyway. I still think "people need smaller
 areas of work", as Vish eloquently put it. I still hope that
 refactoring
 our test architecture will let us reach the same level of quality with
 only a fraction of the tests being run at the gate, which should
 address
 most of the harm you see in adding additional repositories. But I agree
 there is little point in discussing splitting virt drivers (or anything
 else, really) until the internal interface below that potential split
 is
 fully cleaned up and it becomes an option.
>>>
>>> How about we start to try and patch gerrit to provide +2 permissions for
>>> people who can be assigned 'driver core' status. This is something that is
>>> relevant to Nova and Neutron and I guess Cinder too.
>>
>> If you think that's the right solution, I'd say go and investigate it
>> with folks that understand enough gerrit internals to be able to figure
>> out how hard it would be. Start a conversation in #openstack-infra to
>> explore it.
>>
>> My expectation is that there is more complexity there than you give it
>> credit for. That being said one of the biggest limitations we've had on
>> gerrit changes is we've effectively only got one community member, Kai,
>> who does any of that. If other people, or teams, were willing to dig in
>> and own things like this, that might be really helpful.
> 
> What about what Radoslav suggested? Having a background task running -
> that can set a flag indicating that the code has been approved by the
> driver ‘maintainers’. This can be something that driver CI should run -
> that is, driver code can only be approved if it has X +1’s from the driver
> maintainers and a +1 from the driver CI.

There is a ton of complexity and open questions with that approach as
well, largely, again because people are designing systems based on
gerrit from the hip without actually understanding gerrit.

If someone wants to devote time to that kind of system and architecture,
they should engage the infra team to understand what can and can't be
done here. And take that on as a Kilo cycle goal. It would be useful,
but there is no 'simply' about it.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-11 Thread Mandeep Dhami
I agree with Kevin. Any option in-tree or in-incubator would need core
review time, and the cores are already oversubscribed with nova parity
issues (for Juno). So the only option for continuing collaboration on
experimenting with policy-based networking on current OpenStack is
StackForge (option 5).

So the summary is: we develop in StackForge for the Juno-based code,
keep our options open, and review this as a community again during the
Kilo summit.

Regards,
Mandeep



On Thu, Sep 11, 2014 at 10:02 AM, Kevin Benton  wrote:

> Thanks. This is good writeup.
>
> >Of course this all assumes there is consensus that we should proceed
> with GBP, that we should continue by iterating the currently proposed
> design and code, and that GBP should eventually become part of Neutron.
> These assumptions may still be the real issues :-( .
>
> Unfortunately I think this is the real root cause. Most of the people that
> worked on GBP definitely want to see it merged into Neutron and are in
> general agreement there. However, some of the other cores disagreed and now
> GBP is sitting in limbo. IIUC, this thread was started to just get GBP to
> some location where it can be developed on and tested that isn't a big
> string of rejected gerrit patches.
>
> >Does the above make some sense? What have I missed?
>
> Option 1 is great, but I don't see how the same thing that happened in
> Juno would be avoided.
>
> Option 2 is also good, but that idea didn't seem to catch on. If this
> option is on the table, this seems like the best way to go.
>
> Option 3 sounded like it brought up a lot of tooling (gerrit) issues with
> regard to how the merging workflow would work.
>
> Option 4 is unknown until the incubator details are hashed out.
>
> Option 5 is stackforge. I see this as a better place just to do what is
> already being done right now. You're right that patches would occur without
> core reviewers, but that's essentially what's happening now since nothing
> is getting merged.
>
>
>
>
> On Thu, Sep 11, 2014 at 7:57 AM, Robert Kukura 
> wrote:
>
>>
>> On 9/10/14, 6:54 PM, Kevin Benton wrote:
>>
>> Being in the incubator won't help with this if it's a different repo as
>> well.
>>
>> Agreed.
>>
>> Given the requirement for GBP to intercept API requests, the potential
>> couplings between policy drivers, ML2 mechanism drivers, and even service
>> plugins (L3 router), and the fact Neutron doesn't have a stable [service]
>> plugin API, along with the goal to eventually merge GBP into Neutron, I'd
>> rank the options as follows in descending order:
>>
>> 1) Merge the GBP patches to the neutron repo early in Kilo and iterate,
>> just like we had planned for Juno ;-) .
>>
>> 2) Like 1, but with the code initially in a "preview" subtree to clarify
>> its level of stability and support, and to facilitate packaging it as an
>> optional component.
>>
>> 3) Like 1, but merge to a feature branch in the neutron repo and iterate
>> there.
>>
>> 4) Develop in an official neutron-incubator repo, with neutron core
>> reviews of each GBP patch.
>>
>> 5) Develop in StackForge, without neutron core reviews.
>>
>>
>> Here's how I see these options in terms of the various considerations
>> that have come up during this discussion:
>>
>> * Options 1, 2 and 3 most easily support whatever coupling is needed with
>> the rest of Neutron. Options 4 and 5 would sometimes require synchronized
>> changes across repos since dependencies aren't in terms of stable
>> interfaces.
>>
>> * Options 1, 2 and 3 provide a clear path to eventually graduate GBP into
>> a fully supported Neutron feature, without loss of git history. Option 4
>> would have some hope of eventually merging into the neutron repo due to the
>> code having already had core reviews. With option 5, reviewing and merging
>> a complete GBP implementation from StackForge into the neutron repo would
>> be a huge effort, with significant risk that reviewers would want design
>> changes not practical to make at that stage.
>>
>> * Options 1 and 2 take full advantage of existing review, CI, packaging
>> and release processes and mechanisms. All the other options require extra
>> work to put these in place.
>>
>> * Options 1 and 2 can easily make GBP consumable by early adopters
>> through normal channels such as devstack and OpenStack distributions. The
>> other options all require the operator or the packager to pull GBP code
>> from a different source than the base Neutron code.
>>
>> * Option 1 relies on the historical understanding that new Neutron
>> extension APIs are not initially considered stable, and incompatible
>> changes can occur in future releases. Options 2, 3 and 4 make this
>> explicit. Option 5 really has nothing to do with Neutron.
>>
>> * Option 5 allows rapid iteration by the GBP team, without waiting for
>> core review. This is essential during experimentation and prototyping, but
>> at least some participants consider the GBP implementation to be well
>> beyond that phase.

Re: [openstack-dev] [Cinder] Request for J3 Feature Freeze Exception

2014-09-11 Thread Duncan Thomas
Mike

This FFE request was withdrawn. I updated the etherpad but didn't mail
the list, sorry.

On 11 September 2014 18:07, Mike Perez  wrote:
> On 19:32 Fri 05 Sep , David Pineau wrote:
>> So I asked Duncan what could be done, learned about the FFE, and I am
>> now humbly asking you guys to give us a last chance to get in for
>> Juno. I was told that if it was possible the last delay would be next
>> week, and believe me, we're doing everything we can on our side to be
>> able to meet that.
>
> As given in the comments [1], there will be a better chance for an exception
> with this after cert results are provided.
>
> [1] - https://review.openstack.org/#/c/110236/
>
> --
> Mike Perez
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Request for J3 FFE - add reset-state function for backups

2014-09-11 Thread Mike Perez
On 12:23 Tue 09 Sep , yunling wrote:
> Hi Cinder Folks, I would like to request an FFE for the add reset-state
> function for backups [1][2]. The spec for it has been reviewed and
> merged [2]. These code changes have been well tested and are not very
> complex [3]. I would appreciate any consideration for an FFE. Thanks,

It looks like the current review has some comments that are waiting to be
addressed now.

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Request for J3 Feature Freeze Exception

2014-09-11 Thread Mike Perez
On 19:32 Fri 05 Sep , David Pineau wrote:
> So I asked Duncan what could be done, learned about the FFE, and I am
> now humbly asking you guys to give us a last chance to get in for
> Juno. I was told that if it was possible the last delay would be next
> week, and believe me, we're doing everything we can on our side to be
> able to meet that.

As given in the comments [1], there will be a better chance for an exception
with this after cert results are provided.

[1] - https://review.openstack.org/#/c/110236/

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-11 Thread Kevin Benton
Thanks. This is good writeup.

>Of course this all assumes there is consensus that we should proceed with
GBP, that we should continue by iterating the currently proposed design and
code, and that GBP should eventually become part of Neutron. These
assumptions may still be the real issues :-( .

Unfortunately I think this is the real root cause. Most of the people that
worked on GBP definitely want to see it merged into Neutron and are in
general agreement there. However, some of the other cores disagreed and now
GBP is sitting in limbo. IIUC, this thread was started to just get GBP to
some location where it can be developed on and tested that isn't a big
string of rejected gerrit patches.

>Does the above make some sense? What have I missed?

Option 1 is great, but I don't see how the same thing that happened in Juno
would be avoided.

Option 2 is also good, but that idea didn't seem to catch on. If this
option is on the table, this seems like the best way to go.

Option 3 sounded like it brought up a lot of tooling (gerrit) issues with
regard to how the merging workflow would work.

Option 4 is unknown until the incubator details are hashed out.

Option 5 is stackforge. I see this as a better place just to do what is
already being done right now. You're right that patches would occur without
core reviewers, but that's essentially what's happening now since nothing
is getting merged.




On Thu, Sep 11, 2014 at 7:57 AM, Robert Kukura 
wrote:

>
> On 9/10/14, 6:54 PM, Kevin Benton wrote:
>
> Being in the incubator won't help with this if it's a different repo as
> well.
>
> Agreed.
>
> Given the requirement for GBP to intercept API requests, the potential
> couplings between policy drivers, ML2 mechanism drivers, and even service
> plugins (L3 router), and the fact Neutron doesn't have a stable [service]
> plugin API, along with the goal to eventually merge GBP into Neutron, I'd
> rank the options as follows in descending order:
>
> 1) Merge the GBP patches to the neutron repo early in Kilo and iterate,
> just like we had planned for Juno ;-) .
>
> 2) Like 1, but with the code initially in a "preview" subtree to clarify
> its level of stability and support, and to facilitate packaging it as an
> optional component.
>
> 3) Like 1, but merge to a feature branch in the neutron repo and iterate
> there.
>
> 4) Develop in an official neutron-incubator repo, with neutron core
> reviews of each GBP patch.
>
> 5) Develop in StackForge, without neutron core reviews.
>
>
> Here's how I see these options in terms of the various considerations that
> have come up during this discussion:
>
> * Options 1, 2 and 3 most easily support whatever coupling is needed with
> the rest of Neutron. Options 4 and 5 would sometimes require synchronized
> changes across repos since dependencies aren't in terms of stable
> interfaces.
>
> * Options 1, 2 and 3 provide a clear path to eventually graduate GBP into
> a fully supported Neutron feature, without loss of git history. Option 4
> would have some hope of eventually merging into the neutron repo due to the
> code having already had core reviews. With option 5, reviewing and merging
> a complete GBP implementation from StackForge into the neutron repo would
> be a huge effort, with significant risk that reviewers would want design
> changes not practical to make at that stage.
>
> * Options 1 and 2 take full advantage of existing review, CI, packaging
> and release processes and mechanisms. All the other options require extra
> work to put these in place.
>
> * Options 1 and 2 can easily make GBP consumable by early adopters through
> normal channels such as devstack and OpenStack distributions. The other
> options all require the operator or the packager to pull GBP code from a
> different source than the base Neutron code.
>
> * Option 1 relies on the historical understanding that new Neutron
> extension APIs are not initially considered stable, and incompatible
> changes can occur in future releases. Options 2, 3 and 4 make this
> explicit. Option 5 really has nothing to do with Neutron.
>
> * Option 5 allows rapid iteration by the GBP team, without waiting for
> core review. This is essential during experimentation and prototyping, but
> at least some participants consider the GBP implementation to be well
> beyond that phase.
>
> * Options 3, 4, and 5 potentially decouple the GBP release schedule from
> the Neutron release schedule. With options 1 or 2, GBP snapshots would be
> included in all normal Neutron releases. With any of the options, the GBP
> team, vendors, or distributions would be able to back-port arbitrary
> snapshots of GBP to a branch off the stable/juno branch (in the neutron
> repo itself or in a clone) to allow early adopters to use GBP with
> Juno-based OpenStack distributions.
>
>
> Does the above make some sense? What have I missed?
>
> Of course this all assumes there is consensus that we should proceed with
> GBP, that we should continue by iterating the currently proposed design
> and code, and that GBP should eventually become part of Neutron.

[openstack-dev] [Neutron] Allow for per-subnet dhcp options

2014-09-11 Thread Jonathan Proulx
Hi All,

I'm hoping to get this blueprint
https://blueprints.launchpad.net/neutron/+spec/dhcp-options-per-subnet
some love... it seems it's been hanging around since January, so my
assumption is it's not going anywhere.

As a private cloud operator I make heavy use of VLAN-based provider
networks to plug VMs into existing datacenter networks.

Some of these are jumbo frame networks and some use the standard 1500
MTU, so I really want to specify the MTU per subnet, but there is
currently no way to do this.  I can set it globally in dnsmasq.conf or
per port using extra-dhcp-opt, neither of which really does what I need.
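
For reference, the per-port workaround looks roughly like this with
python-neutronclient (a sketch; the credentials, port id and MTU value
are placeholders, and DHCP option 26 is the interface-MTU option dnsmasq
serves):

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://keystone:5000/v2.0')

# Set the interface MTU (DHCP option 26) on one port at a time --
# exactly the thing that should be settable once per subnet.
neutron.update_port('PORT_UUID', {
    'port': {
        'extra_dhcp_opts': [
            {'opt_name': '26', 'opt_value': '9000'},
        ],
    },
})

Doing that for every port on a jumbo-frame subnet is exactly the pain
point a per-subnet option would remove.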

Given that extra-dhcp-opt is implemented per port, it seems to me that
making a similar implementation per subnet would not be a difficult
task for someone familiar with the code.

I'm not that person but if you are, then you can be my Neutron hero
for the next release cycle :)

-Jon

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] anyone using RabbitMQ with active/active mirrored queues?

2014-09-11 Thread Abel Lopez
Yes, not sure why the HA guide says that.
The only problems I've run into were around cluster upgrades. If you're
running 3.2+ you'll likely have a better experience.

Set ha_queues in all your configs and list all your rabbit hosts (I don't
use a VIP, as heartbeats weren't working when I did this).
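
Roughly like this (a sketch; host names are placeholders, and the exact
option names can vary by release, so check your oslo.messaging docs):

# In nova.conf, cinder.conf, neutron.conf, etc.:
[DEFAULT]
rabbit_hosts = rabbit1:5672,rabbit2:5672,rabbit3:5672
rabbit_ha_queues = True

# And on the RabbitMQ side, mirror all queues across the cluster
# (RabbitMQ 3.x policy syntax):
rabbitmqctl set_policy ha-all ".*" '{"ha-mode":"all"}'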

On Wednesday, September 10, 2014, Chris Friesen 
wrote:

> Hi,
>
> I see that the OpenStack high availability guide is still recommending the
> active/standby method of configuring RabbitMQ.
>
> Has anyone tried using active/active with mirrored queues as recommended
> by the RabbitMQ developers?  If so, what problems did you run into?
>
> Thanks,
> Chris
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

2014-09-11 Thread Ildikó Váncsa
Hi,

+1 from me too, thanks for all the hard work so far.

Best Regards,
Ildikó

-Original Message-
From: Julien Danjou [mailto:jul...@danjou.info] 
Sent: Thursday, September 11, 2014 3:25 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

Hi,

Dina has been doing a great work and has been very helpful during the Juno 
cycle and her help is very valuable. She's been doing a lot of reviews and has 
been very active in our community.

I'd like to propose that we add Dina Belova to the ceilometer-core group, as 
I'm convinced it'll help the project.

Please, dear ceilometer-core members, reply with your votes!

--
Julien Danjou
// Free Software hacker
// http://julien.danjou.info

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] anyone using RabbitMQ with active/active mirrored queues?

2014-09-11 Thread Chris Friesen

On 09/11/2014 12:50 AM, Jesse Pretorius wrote:

On 10 September 2014 17:20, Chris Friesen wrote:

I see that the OpenStack high availability guide is still
recommending the active/standby method of configuring RabbitMQ.

Has anyone tried using active/active with mirrored queues as
recommended by the RabbitMQ developers?  If so, what problems did
you run into?



I would recommend that you ask this question on the openstack-operators
list, as you'll likely get more feedback.


Thanks for the suggestion, will do.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting Sep 11 1800 UTC

2014-09-11 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140911T18

P.S. I'm on vacation this week, so, Andrew Lazarev will chair the meeting.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] referencing the index of a ResourceGroup

2014-09-11 Thread Steven Hardy
On Thu, Sep 11, 2014 at 10:06:01AM -0500, Jason Greathouse wrote:
>My mistake about the mailing list: the OpenStack Heat wiki page
>(https://wiki.openstack.org/wiki/Heat) only lists the "dev" list. I will
>make sure to ask future usage questions on the other one.

No worries, we should update the wiki by the sounds of it.

Since you're not the first person to ask this question this week, I wrote a
quick blog post with some more info:

http://hardysteven.blogspot.co.uk/2014/09/using-heat-resourcegroup-resources.html
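
The short version: put the related resources in one nested template and
multiply that with a ResourceGroup. A condensed sketch (the image,
flavor and size values are placeholders):

# server_with_volume.yaml - one Server+Volume+Attachment unit
heat_template_version: 2013-05-23
resources:
  server:
    type: OS::Nova::Server
    properties: {image: fedora-20, flavor: m1.small}
  volume:
    type: OS::Cinder::Volume
    properties: {size: 10}
  attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: {get_resource: server}
      volume_id: {get_resource: volume}

# parent template - multiply the unit as a group
heat_template_version: 2013-05-23
resources:
  group:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      resource_def:
        type: server_with_volume.yaml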

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread James Bottomley
On Thu, 2014-09-11 at 16:20 +0100, Duncan Thomas wrote:
> On 11 September 2014 15:35, James Bottomley
>  wrote:
> 
> > OK, so look at a concrete example: in 2002, the Linux kernel went with
> > bitkeeper precisely because we'd reached the scaling limit of a single
> > integration point, so we took the kernel from a single contributing team
> > to a bunch of them.  This was expanded with git in 2005 and leads to the
> > hundreds of contributing teams we have today.
> 
> 
> One thing the kernel has that Openstack doesn't, that alters the way
> this model plays out, is a couple of very strong, forthright and frank
> personalities at the top who are pretty well respected. Both Andrew
> and Linus (and others) regularly if not frequently rip into ideas
> quite scathingly, even after they have passed other barriers and
> gauntlets and just say no to things. Openstack has nothing of this
> sort, and there is no evidence that e.g. the TC can, should or desire
> to fill this role.

Linus is the court of last appeal.  It's already a team negotiation
failure if stuff bubbles up to him.  The somewhat abrasive response
you'll get if you're being stupid acts as strong downward incentive on
the teams to sort out their own API squabbles *before* they get this
type of visibility.

The whole point of open source is aligning the structures with the
desire to fix it yourself.  In an ideal world, everything would get
sorted at the local level and nothing would bubble up.  Of course, the
world isn't ideal, so you need some court of last appeal, but it doesn't
have to be an individual ... it just has to be something that's
daunting, to encourage local settlement, and decisive.

Every process has to have something like this anyway.  If there's no
process way of sorting out intractable disputes, they go on for ever and
damage the project.

James



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] referencing the index of a ResourceGroup

2014-09-11 Thread Mike Spreitzer
Steven Hardy  wrote on 09/11/2014 04:21:18 AM:

> On Wed, Sep 10, 2014 at 04:44:01PM -0500, Jason Greathouse wrote:
> >    I'm trying to find a way to create a set of servers and attach a new
> >volume to each server. 
> >...
> 
> Basically creating lots of resource groups for related things is the wrong
> pattern.  You need to create one nested stack template containing the
> related things (Server, Volume and VolumeAttachment in this case), and use
> ResourceGroup to multiply them as a unit.
> 
> I answered a similar question here on the openstack general ML recently
> (which for future reference may be a better ML for usage questions like
> this, as it's not really development discussion):
> 
> http://lists.openstack.org/pipermail/openstack/2014-September/009216.html
> 
> Here's another example which I used in a summit demo, which I think
> basically does what you need?
> 
> https://github.com/hardys/demo_templates/tree/master/juno_summit_intro_to_heat/example3_server_with_volume_group

There is also an example of exactly this under review.  See 
https://review.openstack.org/#/c/97366/

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Duncan Thomas
On 11 September 2014 15:35, James Bottomley
 wrote:

> OK, so look at a concrete example: in 2002, the Linux kernel went with
> bitkeeper precisely because we'd reached the scaling limit of a single
> integration point, so we took the kernel from a single contributing team
> to a bunch of them.  This was expanded with git in 2005 and leads to the
> hundreds of contributing teams we have today.


One thing the kernel has that Openstack doesn't, that alters the way
this model plays out, is a couple of very strong, forthright and frank
personalities at the top who are pretty well respected. Both Andrew
and Linus (and others) regularly if not frequently rip into ideas
quite scathingly, even after they have passed other barriers and
gauntlets and just say no to things. Openstack has nothing of this
sort, and there is no evidence that e.g. the TC can, should or desire
to fill this role.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-11 Thread Jeremy Stanley
On 2014-09-11 01:27:23 -0400 (-0400), Russell Bryant wrote:
[...]
> But seriously, we should probably put out a more official notice about
> this once Kilo opens up.

It's probably worth carrying in the release notes for all Juno
servers... "This is the last release of OpenStack with official
support for Python 2.6-based platforms."

Of course we're still supporting it on the Juno stable branch for
its lifetime (probably something like a year depending on what the
stable branch managers feel they can provide), and in all involved
clients and libraries until Juno reaches end of support. So don't
get all excited that 2.6 is "going away entirely" in a couple
months.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Gary Kotton


On 9/11/14, 4:30 PM, "Sean Dague"  wrote:

>On 09/11/2014 09:09 AM, Gary Kotton wrote:
>> 
>> 
>> On 9/11/14, 2:55 PM, "Thierry Carrez"  wrote:
>> 
>>> Sean Dague wrote:
 [...]
 Why don't we start with "let's clean up the virt interface and make it
 more sane", as I don't think there is any disagreement there. If it's
 going to take a cycle, it's going to take a cycle anyway (it will
 probably take 2 cycles, realistically, we always underestimate these
 things, remember when no-db-compute was going to be 1 cycle?). I don't
 see the need to actually decide here and now that the split is clearly
 at least 7 - 12 months away. A lot happens in the intervening time.
>>>
>>> Yes, that sounds like the logical next step. We can't split drivers
>>> without first doing that anyway. I still think "people need smaller
>>> areas of work", as Vish eloquently put it. I still hope that
>>>refactoring
>>> our test architecture will let us reach the same level of quality with
>>> only a fraction of the tests being run at the gate, which should
>>>address
>>> most of the harm you see in adding additional repositories. But I agree
>>> there is little point in discussing splitting virt drivers (or anything
>>> else, really) until the internal interface below that potential split
>>>is
>>> fully cleaned up and it becomes an option.
>> 
>> How about we start to try and patch gerrit to provide +2 permissions for
>> people who can be assigned 'driver core' status. This is something that is
>> relevant to Nova and Neutron and I guess Cinder too.
>
>If you think that's the right solution, I'd say go and investigate it
>with folks that understand enough gerrit internals to be able to figure
>out how hard it would be. Start a conversation in #openstack-infra to
>explore it.
>
>My expectation is that there is more complexity there than you give it
>credit for. That being said one of the biggest limitations we've had on
>gerrit changes is we've effectively only got one community member, Kai,
>who does any of that. If other people, or teams, were willing to dig in
>and own things like this, that might be really helpful.

What about what Radoslav suggested? Having a background task running -
that can set a flag indicating that the code has been approved by the
driver ‘maintainers’. This can be something that driver CI should run -
that is, driver code can only be approved if it has X +1’s from the driver
maintainers and a +1 from the driver CI.


>
>   -Sean
>
>-- 
>Sean Dague
>http://dague.net
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Licensing issue with using JSHint in build

2014-09-11 Thread Solly Ross
Thanks!  ESLint looks interesting.  I'm curious to see what it
says about the Horizon source.  I'll keep it in mind for future
personal projects and the like.

Best Regards,
Solly Ross

- Original Message -
> From: "Martin Geisler" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Thursday, September 11, 2014 3:20:56 AM
> Subject: Re: [openstack-dev] [Horizon] Licensing issue with using JSHint
> in build
> 
> Solly Ross  writes:
> 
> Hi,
> 
> I recently began using using ESLint for all my JavaScript linting:
> 
>   http://eslint.org/
> 
> It has nice documentation, a normal license, and you can easily write
> new rules for it.
> 
> > P.S. Here's hoping that the JSHint devs eventually find a way to
> > remove that line from the file -- according to
> > https://github.com/jshint/jshint/issues/1234, not much of the original
> > remains.
> 
> I don't think it matters how much of the original code remains -- what
> matters is that any rewrite is a derived work. Otherwise Debian and
> others could have made the license pure MIT long ago.
> 
> --
> Martin Geisler
> 
> http://google.com/+MartinGeisler
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] referencing the index of a ResourceGroup

2014-09-11 Thread Jason Greathouse
My mistake about the mailing list: the OpenStack Heat wiki page (
https://wiki.openstack.org/wiki/Heat) only lists the "dev" list. I will
make sure to ask future usage questions on the other one.

Thank you for the response and example. This is what I was missing.

On Thu, Sep 11, 2014 at 3:21 AM, Steven Hardy  wrote:

> On Wed, Sep 10, 2014 at 04:44:01PM -0500, Jason Greathouse wrote:
> >I'm trying to find a way to create a set of servers and attach a new
> >volume to each server.
> >I first tried to use block_device_mapping but that requires an
> existing
> >snapshot or volume and the deployment would fail when Rackspace
> >intermittently timed out trying to create the new volume from a
> >snapshot.
> >I'm now trying with 3 ResourceGroups: OS::Cinder::Volume to build
> volumes
> >followed by OS::Nova::Server and then trying to attach the volumes
> >with  OS::Cinder::VolumeAttachment.
>
> Basically creating lots of resource groups for related things is the wrong
> pattern.  You need to create one nested stack template containing the
> related things (Server, Volume and VolumeAttachment in this case), and use
> ResourceGroup to multiply them as a unit.
>
> I answered a similar question here on the openstack general ML recently
> (which for future reference may be a better ML for usage questions like
> this, as it's not really development discussion):
>
> http://lists.openstack.org/pipermail/openstack/2014-September/009216.html
>
> Here's another example which I used in a summit demo, which I think
> basically does what you need?
>
>
> https://github.com/hardys/demo_templates/tree/master/juno_summit_intro_to_heat/example3_server_with_volume_group
>
> Steve.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
*Jason Greathouse*
Sr. Systems Engineer

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

2014-09-11 Thread Eoghan Glynn


> Hi,
> 
> Dina has been doing a great work and has been very helpful during the
> Juno cycle and her help is very valuable. She's been doing a lot of
> reviews and has been very active in our community.
> 
> I'd like to propose that we add Dina Belova to the ceilometer-core
> group, as I'm convinced it'll help the project.
> 
> Please, dear ceilometer-core members, reply with your votes!

A definite +1 from me.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-11 Thread Robert Kukura


On 9/10/14, 6:54 PM, Kevin Benton wrote:
Being in the incubator won't help with this if it's a different repo 
as well.

Agreed.

Given the requirement for GBP to intercept API requests, the potential 
couplings between policy drivers, ML2 mechanism drivers, and even 
service plugins (L3 router), and the fact Neutron doesn't have a stable 
[service] plugin API, along with the goal to eventually merge GBP into 
Neutron, I'd rank the options as follows in descending order:


1) Merge the GBP patches to the neutron repo early in Kilo and iterate, 
just like we had planned for Juno ;-) .


2) Like 1, but with the code initially in a "preview" subtree to clarify 
its level of stability and support, and to facilitate packaging it as an 
optional component.


3) Like 1, but merge to a feature branch in the neutron repo and iterate 
there.


4) Develop in an official neutron-incubator repo, with neutron core 
reviews of each GBP patch.


5) Develop in StackForge, without neutron core reviews.


Here's how I see these options in terms of the various considerations 
that have come up during this discussion:


* Options 1, 2 and 3 most easily support whatever coupling is needed 
with the rest of Neutron. Options 4 and 5 would sometimes require 
synchronized changes across repos since dependencies aren't in terms of 
stable interfaces.


* Options 1, 2 and 3 provide a clear path to eventually graduate GBP 
into a fully supported Neutron feature, without loss of git history. 
Option 4 would have some hope of eventually merging into the neutron 
repo due to the code having already had core reviews. With option 5, 
reviewing and merging a complete GBP implementation from StackForge into 
the neutron repo would be a huge effort, with significant risk that 
reviewers would want design changes not practical to make at that stage.


* Options 1 and 2 take full advantage of existing review, CI, packaging 
and release processes and mechanisms. All the other options require 
extra work to put these in place.


* Options 1 and 2 can easily make GBP consumable by early adopters 
through normal channels such as devstack and OpenStack distributions. 
The other options all require the operator or the packager to pull GBP 
code from a different source than the base Neutron code.


* Option 1 relies on the historical understanding that new Neutron 
extension APIs are not initially considered stable, and incompatible 
changes can occur in future releases. Options 2, 3 and 4 make this 
explicit. Option 5 really has nothing to do with Neutron.


* Option 5 allows rapid iteration by the GBP team, without waiting for 
core review. This is essential during experimentation and prototyping, 
but at least some participants consider the GBP implementation to be 
well beyond that phase.


* Options 3, 4, and 5 potentially decouple the GBP release schedule from 
the Neutron release schedule. With options 1 or 2, GBP snapshots would 
be included in all normal Neutron releases. With any of the options, the 
GBP team, vendors, or distributions would be able to back-port arbitrary 
snapshots of GBP to a branch off the stable/juno branch (in the neutron 
repo itself or in a clone) to allow early adopters to use GBP with 
Juno-based OpenStack distributions.



Does the above make some sense? What have I missed?

Of course this all assumes there is consensus that we should proceed 
with GBP, that we should continue by iterating the currently proposed 
design and code, and that GBP should eventually become part of Neutron. 
These assumptions may still be the real issues :-( . If we can't agree on 
whether GBP is in an experimentation/rapid-prototyping phase vs. an 
almost-ready-to-beta-test phase, I don't see how we can get consensus on 
the next steps for its development.


-Bob


On Wed, Sep 10, 2014 at 7:22 AM, Robert Kukura wrote:



On 9/9/14, 7:51 PM, Jay Pipes wrote:

On 09/09/2014 06:57 PM, Kevin Benton wrote:

Hi Jay,

The main component that won't work without direct
integration is
enforcing policy on calls directly to Neutron and calls
between the
plugins inside of Neutron. However, that's only one
component of GBP.
All of the declarative abstractions, rendering of policy,
etc can be
experimented with here in the stackforge project until the
incubator is
figured out.


OK, thanks for the explanation Kevin, that helps!

I'll add that there is likely to be a close coupling between ML2
mechanism drivers and corresponding GBP policy drivers for some of
the back-end integrations. These will likely share local state
such as connections to controllers, and may interact with each
other as part of processing core and GBP API requests.
Development, review, and packaging of these would be facilitated
by having them on the same branch.

Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread James Bottomley
On Thu, 2014-09-11 at 07:36 -0400, Sean Dague wrote:
> >>> b) The conflict Dan is speaking of is around the current situation where 
> >>> we
> >>> have a limited core review team bandwidth and we have to pick and choose
> >>> which virt driver-specific features we will review. This leads to bad
> >>> feelings and conflict.
> >>
> >> The way this worked in the past is we had cores who were subject
> >> matter experts in various parts of the code -- there is a clear set of
> >> cores who "get" xen or libivrt for example and I feel like those
> >> drivers get reasonable review times. What's happened though is that
> >> we've added a bunch of drivers without adding subject matter experts
> >> to core to cover those drivers. Those newer drivers therefore have a
> >> harder time getting things reviewed and approved.
> > 
> > FYI, for Juno at least I really don't consider that even the libvirt
> > driver got acceptable review times in any sense. The pain of waiting
> > for reviews in libvirt code I've submitted this cycle is what prompted
> > me to start this thread. All the virt drivers are suffering way more
> > than they should be, but those without core team representation suffer
> > to an even greater degree.  And this is ignoring the point Jay & I
> > were making about how the use of a single team means that there is
> > always contention for feature approval, so much work gets cut right
> > at the start even if maintainers of that area felt it was valuable
> > and worth taking.
> 
> I continue to not understand how N non overlapping teams makes this any
> better. You have to pay the integration cost somewhere. Right now we're
> trying to pay it 1 patch at a time. This model means the integration
> units get much bigger, and with less common ground.

OK, so look at a concrete example: in 2002, the Linux kernel went with
bitkeeper precisely because we'd reached the scaling limit of a single
integration point, so we took the kernel from a single contributing team
to a bunch of them.  This was expanded with git in 2005 and leads to the
hundreds of contributing teams we have today.

The reason this scales nicely is precisely because the integration costs
are lower.  However, there are a couple of principles that really assist
us getting there.  The first is internal API management: an Internal API
is a contract between two teams (may be more, but usually two).  If
someone wants to change this API they have to negotiate between the two
(or more) teams.  This naturally means that only the affected components
review this API change, but *only* they need to review it, so it doesn't
bubble up to the whole kernel community.  The second is automation:
linux-next and the zero day test programme build and smoke test an
integration of all our development trees.  If one team does something
that impacts another in their development tree, this system gives us
immediate warning.  Basically we run continuous integration, so when
Linus does his actual integration pull, everything goes smoothly (that's
how we integrate all the 300 or so trees for a kernel release in about
ten days).  We also now have a lot of review automation (checkpatch.pl
for instance), but that's independent of the number of teams

In this model the scaling comes from the local reviews and integration.
The more teams the greater the scaling.  The factor which obstructs
scaling is the internal API ... it usually doesn't make sense to
separate a component where there's no API between the two pieces ...
however, if you think there should be, separating and telling the teams
to figure it out is a great way to generate the API.   The point here is
that since an API is a contract, forcing people to negotiate and abide
by the contract tends to make them think much more carefully about it.
Internal API moves from being a global issue to being a local one.

By the way, the extra link work is actually time well spent because it
means the link APIs are negotiated by teams with use cases not just
designed by abstract architecture.  The greater the link pain the
greater the indication that there's an API problem and the greater the
pressure on the teams either end to fix it.  Once the link pain is
minimised, the API is likely a good one.

> Look at how much active work in crossing core teams we've had to do to
> make any real progress on the neutron replacing nova-network front. And
> how slow that process is. I think you'll see that hugely show up here.

Well, as I said, separating the components leads to API negotiation
between the teams. Because of the API negotiation, taking one thing and
making it two does cause more work, and it's visible work because the
two new teams get to do the API negotiation which didn't exist before.
The trick to getting the model to scale is the network effect.  The
scaling comes by splitting out into high numbers of teams (say N) the
added work comes in the links (the API contracts) between the N teams.
If the network is star shaped (ev

Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Mike Scherbakov
> Mike, I just want to say: if a feature isn't ready for production use
and we have no other choice, we should provide detailed limitations and
examples of proper use.
Fully agreed; such features should become experimental. We should have this
information in the release notes.

Basically, Patching of OpenStack becomes such a feature, unfortunately. We
still have bugs, and there is no guarantee that we won't find more.

So, let's add the "experimental" tag to issues around Zabbix & Patching of
OpenStack.

On Thu, Sep 11, 2014 at 6:19 PM, Anastasia Urlapova 
wrote:

> Mike, I just want to say: if a feature isn't ready for production use and
> we have no other choice, we should provide detailed limitations and
> examples of proper use.
>
> On Thu, Sep 11, 2014 at 5:58 PM, Tomasz Napierala  > wrote:
>
>>
>> On 11 Sep 2014, at 09:19, Mike Scherbakov 
>> wrote:
>>
>> > Hi all,
>> > what about using "experimental" tag for experimental features?
>> >
>> > After we implemented feature groups [1], we can divide our features and
>> for complex features, or those which don't get enough QA resources in the
>> dev cycle, we can declare as experimental. It would mean that those are not
>> production ready features.
>> > Giving them live still in experimental mode allows early adopters to
>> give a try and bring a feedback to the development team.
>> >
>> > I think we should not count bugs for HCF criteria if they affect only
>> experimental feature(s). At the moment, we have Zabbix as experimental
>> feature, and Patching of OpenStack [2] is under consideration: if today QA
>> doesn't approve it to be as ready for production use, we have no other
>> choice. All deadlines passed, and we need to get 5.1 finally out.
>> >
>> > Any objections / other ideas?
>>
>> +1
>>
>> --
>> Tomasz 'Zen' Napierala
>> Sr. OpenStack Engineer
>> tnapier...@mirantis.com
>>
>>
>>
>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-11 Thread Steven Hardy
On Wed, Sep 10, 2014 at 08:46:45PM -0400, Jamie Lennox wrote:
> 
> - Original Message -
> > From: "Steven Hardy" 
> > To: "OpenStack Development Mailing List (not for usage questions)" 
> > 
> > Sent: Thursday, September 11, 2014 1:55:49 AM
> > Subject: Re: [openstack-dev] [all] [clients] [keystone] lack of retrying 
> > tokens leads to overall OpenStack fragility
> > 
> > On Wed, Sep 10, 2014 at 10:14:32AM -0400, Sean Dague wrote:
> > > Going through the untriaged Nova bugs, and there are a few on a similar
> > > pattern:
> > > 
> > > Nova operation in progress takes a while
> > > Crosses keystone token expiration time
> > > Timeout thrown
> > > Operation fails
> > > Terrible 500 error sent back to user
> > 
> > We actually have this exact problem in Heat, which I'm currently trying to
> > solve:
> > 
> > https://bugs.launchpad.net/heat/+bug/1306294
> > 
> > Can you clarify, is the issue either:
> > 
> > 1. Create novaclient object with username/password
> > 2. Do series of operations via the client object which eventually fail
> > after $n operations due to token expiry
> > 
> > or:
> > 
> > 1. Create novaclient object with username/password
> > 2. Some really long operation which means token expires in the course of
> > the service handling the request, blowing up and 500-ing
> > 
> > If the former, then it does sound like a client, or usage-of-client bug,
> > although note if you pass a *token* vs username/password (as is currently
> > done for glance and heat in tempest, because we lack the code to get the
> > token outside of the shell.py code..), there's nothing the client can do,
> > because you can't request a new token with longer expiry with a token...
> > 
> > However if the latter, then it seems like not really a client problem to
> > solve, as it's hard to know what action to take if a request failed
> > part-way through and thus things are in an unknown state.
> > 
> > This issue is a hard problem, which can possibly be solved by
> > switching to a trust scoped token (service impersonates the user), but then
> > you're effectively bypassing token expiry via delegation which sits
> > uncomfortably with me (despite the fact that we may have to do this in heat
> > to solve the aforementioned bug)
> > 
> > > It seems like we should have a standard pattern that on token expiration
> > > the underlying code at least gives one retry to try to establish a new
> > > token to complete the flow, however as far as I can tell *no* clients do
> > > this.
> > 
> > As has been mentioned, using sessions may be one solution to this, and
> > AFAIK session support (where it doesn't already exist) is getting into
> > various clients via the work being carried out to add support for v3
> > keystone by David Hu:
> > 
> > https://review.openstack.org/#/q/owner:david.hu%2540hp.com,n,z
> > 
> > I see patches for Heat (currently gating), Nova and Ironic.
> > 
> > > I know we had to add that into Tempest because tempest runs can exceed 1
> > > hr, and we want to avoid random fails just because we cross a token
> > > expiration boundary.
> > 
> > I can't claim great experience with sessions yet, but AIUI you could do
> > something like:
> > 
> > from keystoneclient.auth.identity import v3
> > from keystoneclient import session
> > from keystoneclient.v3 import client
> > 
> > auth = v3.Password(auth_url=OS_AUTH_URL,
> >                    username=USERNAME,
> >                    password=PASSWORD,
> >                    project_id=PROJECT,
> >                    user_domain_name='default')
> > sess = session.Session(auth=auth)
> > ks = client.Client(session=sess)
> > 
> > And if you can pass the same session into the various clients tempest
> > creates then the Password auth-plugin code takes care of reauthenticating
> > if the token cached in the auth plugin object is expired, or nearly
> > expired:
> > 
> > https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/auth/identity/base.py#L120
> > 
> > So in the tempest case, it seems like it may be a case of migrating the
> > code creating the clients to use sessions instead of passing a token or
> > username/password into the client object?
> > 
> > That's my understanding of it atm anyway, hopefully jamielennox will be 
> > along
> > soon with more details :)
> > 
> > Steve
> 
> 
> By clients here are you referring to the CLIs or the python libraries? 
> Implementation is at different points with each. 

I think for both heat and tempest we're talking about the python libraries
(Client objects).

> Sessions will handle automatically reauthenticating and retrying a request, 
> however it relies on the service throwing a 401 Unauthenticated error. If a 
> service is returning a 500 (or a timeout?) then there isn't much that a 
> client can/should do for that because we can't assume that trying again with 
> a new token will solve anything. 

Hmm, I was hoping it would reauthenticate based on the auth_ref
will_expire_soon, as it would fi

Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Davanum Srinivas
Rados,

personally, I'd want a human to do the +W. Also the criteria would
include a 3), which is the CI for the driver, if applicable.

On Thu, Sep 11, 2014 at 9:53 AM, Radoslav Gerganov  wrote:
> On 09/11/2014 04:30 PM, Sean Dague wrote:
>>
>> On 09/11/2014 09:09 AM, Gary Kotton wrote:
>>>
>>>
>>>
>>> On 9/11/14, 2:55 PM, "Thierry Carrez"  wrote:
>>>
 Sean Dague wrote:
>
> [...]
> Why don't we start with "let's clean up the virt interface and make it
> more sane", as I don't think there is any disagreement there. If it's
> going to take a cycle, it's going to take a cycle anyway (it will
> probably take 2 cycles, realistically, we always underestimate these
> things, remember when no-db-compute was going to be 1 cycle?). I don't
> see the need to actually decide here and now that the split is clearly
> at least 7 - 12 months away. A lot happens in the intervening time.


 Yes, that sounds like the logical next step. We can't split drivers
 without first doing that anyway. I still think "people need smaller
 areas of work", as Vish eloquently put it. I still hope that refactoring
 our test architecture will let us reach the same level of quality with
 only a fraction of the tests being run at the gate, which should address
 most of the harm you see in adding additional repositories. But I agree
 there is little point in discussing splitting virt drivers (or anything
 else, really) until the internal interface below that potential split is
 fully cleaned up and it becomes an option.
>>>
>>>
>>> How about we start to try and patch gerrit to provide +2 permissions for
>>> people who can be assigned 'driver core' status. This is something that is
>>> relevant to Nova and Neutron and I guess Cinder too.
>>
>>
>> If you think that's the right solution, I'd say go and investigate it
>> with folks that understand enough gerrit internals to be able to figure
>> out how hard it would be. Start a conversation in #openstack-infra to
>> explore it.
>>
>> My expectation is that there is more complexity there than you give it
>> credit for. That being said one of the biggest limitations we've had on
>> gerrit changes is we've effectively only got one community member, Kai,
>> who does any of that. If other people, or teams, were willing to dig in
>> and own things like this, that might be really helpful.
>
>
> I don't think we need to modify gerrit to support this functionality. We can
> simply have a gerrit job (similar to the existing CI jobs) which is run on
> every patch set and checks if:
> 1) the changes are only under /nova/virt/XYZ and /nova/tests/virt/XYZ
> 2) it has two +1s from maintainers of driver XYZ
>
> if the above conditions are met, the job will post W+1 for this patchset.
> Does that make sense?
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Anastasia Urlapova
Mike, I just want to say: if a feature isn't ready for production use and
we have no other choice, we should provide detailed limitations and
examples of proper use.

On Thu, Sep 11, 2014 at 5:58 PM, Tomasz Napierala 
wrote:

>
> On 11 Sep 2014, at 09:19, Mike Scherbakov 
> wrote:
>
> > Hi all,
> > what about using "experimental" tag for experimental features?
> >
> > After we implemented feature groups [1], we can divide our features and
> for complex features, or those which don't get enough QA resources in the
> dev cycle, we can declare as experimental. It would mean that those are not
> production ready features.
> > Giving them live still in experimental mode allows early adopters to
> give a try and bring a feedback to the development team.
> >
> > I think we should not count bugs for HCF criteria if they affect only
> experimental feature(s). At the moment, we have Zabbix as experimental
> feature, and Patching of OpenStack [2] is under consideration: if today QA
> doesn't approve it to be as ready for production use, we have no other
> choice. All deadlines passed, and we need to get 5.1 finally out.
> >
> > Any objections / other ideas?
>
> +1
>
> --
> Tomasz 'Zen' Napierala
> Sr. OpenStack Engineer
> tnapier...@mirantis.com
>
>
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][db] Need help resolving a strange error with db connections in tests

2014-09-11 Thread Anna Kamyshnikova
Hello everyone!

I'm working on implementing a test in Neutron that checks that models are
synchronized with the database state [1] [2]. This is a very important change,
as big changes to the database structure were made during the Juno cycle.

I had been working on it for quite a long time, but about three weeks ago a
strange error appeared [3]; the output when using AssertionPool is shown in
[4]. The problem is that somehow there is more than one connection to the
database from each test. I tried to use locks from lockutils, but it didn't
help. At the db meeting we decided to add a TestCase for just the ML2 plugin
for starters, and then continue working on this strange error; that is why
there are two change requests [1] and [2]. But I found out that somehow even
the single test case fails with the same error [5] from time to time.

I would appreciate any suggestions about what could be done in this case. It
is very important to get at least [1] merged in Juno.

[1] - https://review.openstack.org/76520

[2] - https://review.openstack.org/120040

[3] - http://paste.openstack.org/show/110158/

[4] - http://paste.openstack.org/show/110159/

[5] -
http://logs.openstack.org/20/76520/68/check/gate-neutron-python27/63938f9/testr_results.html.gz
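For reference, a minimal sketch of wiring in SQLAlchemy's AssertionPool to
catch the extra connection (the DSN below is a placeholder); AssertionPool
permits only one checked-out connection at a time and raises as soon as a
second one is requested:

    from sqlalchemy import create_engine
    from sqlalchemy.pool import AssertionPool

    engine = create_engine('mysql://user:password@localhost/neutron',
                           poolclass=AssertionPool)
    conn = engine.connect()
    # engine.connect()  # a second checkout raises AssertionError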

Regards,

Ann
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-11 Thread Duncan Thomas
On 11 September 2014 03:17, Angus Lees  wrote:

> (As inspired by eg kerberos)
> 2. Ensure at some environmental/top layer that the advertised token lifetime
> exceeds the timeout set on the request, before making the request.  This
> implies (since there's no special handling in place) failing if the token was
> expired earlier than expected.

We've a related problem in cinder (cinder-backup uses the user's token
to talk to swift, and the backup can easily take longer than the token
expiry time) which could not be solved by this, since the time the
backup takes is unknown (compression, service and resource contention,
etc. can alter the time by multiple orders of magnitude).
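For context, the up-front guard described in (2) would look something like
the sketch below, assuming auth_ref is a keystoneclient AccessInfo for the
user's token; the cinder-backup case is exactly the one where
expected_duration cannot be supplied:

    def check_token_outlives(auth_ref, expected_duration):
        # will_expire_soon() treats the token as stale if it expires
        # within stale_duration seconds from now
        if auth_ref.will_expire_soon(stale_duration=expected_duration):
            raise RuntimeError('token expires within %ss, refusing to start'
                               % expected_duration)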

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Tomasz Napierala

On 11 Sep 2014, at 09:19, Mike Scherbakov  wrote:

> Hi all,
> what about using "experimental" tag for experimental features?
> 
> After we implemented feature groups [1], we can divide our features, and 
> complex features, or those which don't get enough QA resources in the dev 
> cycle, we can declare as experimental. It would mean that those are not 
> production-ready features.
> Making them live, still in experimental mode, allows early adopters to give 
> them a try and bring feedback to the development team.
> 
> I think we should not count bugs towards HCF criteria if they affect only 
> experimental feature(s). At the moment, we have Zabbix as an experimental 
> feature, and Patching of OpenStack [2] is under consideration: if today QA 
> doesn't approve it as ready for production use, we have no other 
> choice. All deadlines have passed, and we need to get 5.1 out at last.
> 
> Any objections / other ideas?

+1

-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Duncan Thomas
On 11 September 2014 12:36, Sean Dague  wrote:

> I continue to not understand how N non overlapping teams makes this any
> better. You have to pay the integration cost somewhere. Right now we're
> trying to pay it 1 patch at a time. This model means the integration
> units get much bigger, and with less common ground.
>
> Look at how much active work in crossing core teams we've had to do to
> make any real progress on the neutron replacing nova-network front. And
> how slow that process is. I think you'll see that hugely show up here.

Cinder has also suffered extreme latency trying to make changes to the
nova<->cinder interface, to a sufficient degree that work is under
consideration to move the interface to give cinder more control over
parts of it.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Radoslav Gerganov

On 09/11/2014 04:30 PM, Sean Dague wrote:

On 09/11/2014 09:09 AM, Gary Kotton wrote:



On 9/11/14, 2:55 PM, "Thierry Carrez"  wrote:


Sean Dague wrote:

[...]
Why don't we start with "let's clean up the virt interface and make it
more sane", as I don't think there is any disagreement there. If it's
going to take a cycle, it's going to take a cycle anyway (it will
probably take 2 cycles, realistically, we always underestimate these
things, remember when no-db-compute was going to be 1 cycle?). I don't
see the need to actually decide here and now that the split is clearly
at least 7 - 12 months away. A lot happens in the intervening time.


Yes, that sounds like the logical next step. We can't split drivers
without first doing that anyway. I still think "people need smaller
areas of work", as Vish eloquently put it. I still hope that refactoring
our test architecture will let us reach the same level of quality with
only a fraction of the tests being run at the gate, which should address
most of the harm you see in adding additional repositories. But I agree
there is little point in discussing splitting virt drivers (or anything
else, really) until the internal interface below that potential split is
fully cleaned up and it becomes an option.


How about we start to try and patch gerrit to provide +2 permissions for
people who can be assigned 'driver core' status. This is something that is
relevant to Nova and Neutron and I guess Cinder too.


If you think that's the right solution, I'd say go and investigate it
with folks that understand enough gerrit internals to be able to figure
out how hard it would be. Start a conversation in #openstack-infra to
explore it.

My expectation is that there is more complexity there than you give it
credit for. That being said one of the biggest limitations we've had on
gerrit changes is we've effectively only got one community member, Kai,
who does any of that. If other people, or teams, were willing to dig in
and own things like this, that might be really helpful.


I don't think we need to modify gerrit to support this functionality. We 
can simply have a gerrit job (similar to the existing CI jobs) which is 
run on every patch set and checks if:

1) the changes are only under /nova/virt/XYZ and /nova/tests/virt/XYZ
2) it has two +1s from maintainers of driver XYZ

if the above conditions are met, the job will post W+1 for this 
patchset. Does that make sense?
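
As a sketch, both checks are simple enough to express as pure logic over
what such a job would fetch from the gerrit API (the changed file list and
the +1 voters); the MAINTAINERS mapping and the names in it are hypothetical:

    # hypothetical per-driver maintainer registry
    MAINTAINERS = {'vmwareapi': {'rgerganov', 'garyk'}}

    def touches_only_driver(files, driver):
        # condition 1: every changed file is under the driver's directories
        allowed = ('nova/virt/%s/' % driver, 'nova/tests/virt/%s/' % driver)
        return all(f.startswith(allowed) for f in files)

    def ready_for_w_plus_one(files, plus_one_voters, driver):
        # condition 2: at least two +1s from that driver's maintainers
        approvals = set(plus_one_voters) & MAINTAINERS.get(driver, set())
        return touches_only_driver(files, driver) and len(approvals) >= 2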



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Mike Scherbakov
> +1, absolutely agree, but we should determine the number of allowed bugs for
experimental features against severity.
Anastasia, can you please give an example? I think we should not count them
at all. Experimental features, if they are isolated, can be in any state.
Maybe it is just the very beginning of the development cycle.

On Thu, Sep 11, 2014 at 5:20 PM, Vladimir Kuklin 
wrote:

> +1
>
> On Thu, Sep 11, 2014 at 5:05 PM, Anastasia Urlapova <
> aurlap...@mirantis.com> wrote:
>
>> > I think we should not count bugs for HCF criteria if they affect only
>> > experimental feature(s).
>>
>> +1, absolutely agree, but we should determine count of allowed bugs for
>> experimental features against severity.
>>
>> On Thu, Sep 11, 2014 at 2:13 PM, Nikolay Markov 
>> wrote:
>>
>>> Probably, even "experimental feature" should at least pretend to be
>>> working, anyway, or it shouldn't be publically announced. But I think
>>> it's important to describe limitation of this features (or mark some
>>> of them as "untested") and I think list of known issues with links to
>>> most important bugs is a good approach. And tags will just make things
>>> simpler.
>>>
>>> On Thu, Sep 11, 2014 at 1:05 PM, Igor Kalnitsky 
>>> wrote:
>>> >> May be we can use tag per feature, for example "zabbix"
>>> >
>>> > Tags are ok, but I still think that we can mention at least some
>>> > significant bugs. For example, if some feature doesn't work in some
>>> > deployment mode (e.g. simple, with ceilometer, etc) we can at least
>>> > notify users so they even don't try.
>>> >
>>> > Another opinions?
>>> >
>>> >
>>> > On Thu, Sep 11, 2014 at 11:45 AM, Mike Scherbakov
>>> >  wrote:
>>> >>> if we point somewhere about knowing issues in those experimental
>>> features
>>> >> there are might be dozens of bugs.
>>> >> May be we can use tag per feature, for example "zabbix", so it will
>>> be easy
>>> >> to search in LP all open bugs regarding Zabbix feature?
>>> >>
>>> >> On Thu, Sep 11, 2014 at 12:11 PM, Igor Kalnitsky <
>>> ikalnit...@mirantis.com>
>>> >> wrote:
>>> >>>
>>> >>> > I think we should not count bugs for HCF criteria if they affect
>>> only
>>> >>> > experimental feature(s).
>>> >>>
>>> >>> +1, I'm totally agree with you - it makes no sense to count
>>> >>> experimental bugs as HCF criteria.
>>> >>>
>>> >>> > Any objections / other ideas?
>>> >>>
>>> >>> I think it would be great for customers if we point somewhere about
>>> >>> knowing issues in those experimental features. IMHO, it should help
>>> >>> them to understand what's wrong in case of errors and may prevent bug
>>> >>> duplication in LP.
>>> >>>
>>> >>>
>>> >>> On Thu, Sep 11, 2014 at 10:19 AM, Mike Scherbakov
>>> >>>  wrote:
>>> >>> > Hi all,
>>> >>> > what about using "experimental" tag for experimental features?
>>> >>> >
>>> >>> > After we implemented feature groups [1], we can divide our
>>> features and
>>> >>> > for
>>> >>> > complex features, or those which don't get enough QA resources in
>>> the
>>> >>> > dev
>>> >>> > cycle, we can declare as experimental. It would mean that those
>>> are not
>>> >>> > production ready features.
>>> >>> > Giving them live still in experimental mode allows early adopters
>>> to
>>> >>> > give a
>>> >>> > try and bring a feedback to the development team.
>>> >>> >
>>> >>> > I think we should not count bugs for HCF criteria if they affect
>>> only
>>> >>> > experimental feature(s). At the moment, we have Zabbix as
>>> experimental
>>> >>> > feature, and Patching of OpenStack [2] is under consideration: if
>>> today
>>> >>> > QA
>>> >>> > doesn't approve it to be as ready for production use, we have no
>>> other
>>> >>> > choice. All deadlines passed, and we need to get 5.1 finally out.
>>> >>> >
>>> >>> > Any objections / other ideas?
>>> >>> >
>>> >>> > [1]
>>> >>> >
>>> >>> >
>>> https://github.com/stackforge/fuel-specs/blob/master/specs/5.1/feature-groups.rst
>>> >>> > [2] https://blueprints.launchpad.net/fuel/+spec/patch-openstack
>>> >>> > --
>>> >>> > Mike Scherbakov
>>> >>> > #mihgen
>>> >>> >
>>> >>> >
>>> >>> > ___
>>> >>> > OpenStack-dev mailing list
>>> >>> > OpenStack-dev@lists.openstack.org
>>> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >>> >
>>> >>>
>>> >>> ___
>>> >>> OpenStack-dev mailing list
>>> >>> OpenStack-dev@lists.openstack.org
>>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Mike Scherbakov
>>> >> #mihgen
>>> >>
>>> >>
>>> >> ___
>>> >> OpenStack-dev mailing list
>>> >> OpenStack-dev@lists.openstack.org
>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >>
>>> >
>>> > ___
>>> > OpenStack-dev mailing list
>>> > OpenStack-dev@lists.openstack.org
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Sean Dague
On 09/11/2014 09:09 AM, Gary Kotton wrote:
> 
> 
> On 9/11/14, 2:55 PM, "Thierry Carrez"  wrote:
> 
>> Sean Dague wrote:
>>> [...]
>>> Why don't we start with "let's clean up the virt interface and make it
>>> more sane", as I don't think there is any disagreement there. If it's
>>> going to take a cycle, it's going to take a cycle anyway (it will
>>> probably take 2 cycles, realistically, we always underestimate these
>>> things, remember when no-db-compute was going to be 1 cycle?). I don't
>>> see the need to actually decide here and now that the split is clearly
>>> at least 7 - 12 months away. A lot happens in the intervening time.
>>
>> Yes, that sounds like the logical next step. We can't split drivers
>> without first doing that anyway. I still think "people need smaller
>> areas of work", as Vish eloquently put it. I still hope that refactoring
>> our test architecture will let us reach the same level of quality with
>> only a fraction of the tests being run at the gate, which should address
>> most of the harm you see in adding additional repositories. But I agree
>> there is little point in discussing splitting virt drivers (or anything
>> else, really) until the internal interface below that potential split is
>> fully cleaned up and it becomes an option.
> 
> How about we start to try and patch gerrit to provide +2 permissions for
> people who can be assigned 'driver core' status. This is something that is
> relevant to Nova and Neutron and I guess Cinder too.

If you think that's the right solution, I'd say go and investigate it
with folks that understand enough gerrit internals to be able to figure
out how hard it would be. Start a conversation in #openstack-infra to
explore it.

My expectation is that there is more complexity there than you give it
credit for. That being said one of the biggest limitations we've had on
gerrit changes is we've effectively only got one community member, Kai,
who does any of that. If other people, or teams, were willing to dig in
and own things like this, that might be really helpful.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

2014-09-11 Thread Nadya Privalova
May I be the first? :) Big +1 from me. Thanks Dina!

On Thu, Sep 11, 2014 at 5:24 PM, Julien Danjou  wrote:

> Hi,
>
> Dina has been doing great work and has been very helpful during the
> Juno cycle; her help is very valuable. She's been doing a lot of
> reviews and has been very active in our community.
>
> I'd like to propose that we add Dina Belova to the ceilometer-core
> group, as I'm convinced it'll help the project.
>
> Please, dear ceilometer-core members, reply with your votes!
>
> --
> Julien Danjou
> // Free Software hacker
> // http://julien.danjou.info
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

2014-09-11 Thread Julien Danjou
Hi,

Dina has been doing great work and has been very helpful during the
Juno cycle; her help is very valuable. She's been doing a lot of
reviews and has been very active in our community.

I'd like to propose that we add Dina Belova to the ceilometer-core
group, as I'm convinced it'll help the project.

Please, dear ceilometer-core members, reply with your votes!

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding Nejc Saje to ceilometer-core

2014-09-11 Thread Nadya Privalova
I'm in :)
+1

On Thu, Sep 11, 2014 at 4:58 PM, gordon chung  wrote:

> > Nejc has been doing great work and has been very helpful during the
>
> > Juno cycle and his help is very valuable.
>
> > I'd like to propose that we add Nejc Saje to the ceilometer-core group.
>
> can we minus because he makes me look bad? /sarcasm
>
> +1 for core.
>
> cheers,
> *gord*
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugable solution for running abstract commands on nodes

2014-09-11 Thread Vladimir Kuklin
Let's not create architectural leaks here. Let there be only tasks, but
let's create a really simple task template that the user will be able to
easily fill in with just the command itself.

On Thu, Sep 11, 2014 at 4:17 PM, Evgeniy L  wrote:

> Hi,
>
> In most cases it will be much easier for plugin developers or Fuel users
> to just write the command they want to run on nodes,
> instead of describing some abstract task which doesn't carry
> any additional information/logic and looks like unnecessary complexity.
>
> But for complicated cases the user will have to write some code for tasklib.
>
> Thanks,
>
> On Wed, Sep 10, 2014 at 8:10 PM, Dmitriy Shulyak 
> wrote:
>
>> Hi,
>>
>> you described a transport mechanism for running commands based on facts; we
>> have another one, which stores
>> all business logic in Nailgun and only provides the orchestrator with a set
>> of tasks to execute. This is not a problem.
>>
>> I am talking about the API for the plugin writer/developer, and how to
>> implement it to be more "friendly".
>>
>> On Wed, Sep 10, 2014 at 6:46 PM, Aleksandr Didenko > > wrote:
>>
>>> Hi,
>>>
>>> as for execution of arbitrary code across the OpenStack cluster - I was
>>> thinking of mcollective + fact filters:
>>>
>>> 1) we need to start using mcollective facts [0] [2] - we don't
>>> use/configure this currently
>>> 2) use mcollective execute_shell_command agent (or any other agent) with
>>> fact filter [1]
>>>
>>> So, for example, if we have mcollective fact called "node_roles":
>>> node_roles: "compute ceph-osd"
>>>
>>> Then we can execute shell cmd on all compute nodes like this:
>>>
>>> mco rpc execute_shell_command execute cmd="/some_script.sh" -F
>>> "node_role=/compute/"
>>>
>>> Of course, we can use more complicated filters to run commands more
>>> precisely.
>>>
>>> [0]
>>> https://projects.puppetlabs.com/projects/mcollective-plugins/wiki/FactsFacterYAML
>>> [1]
>>> https://docs.puppetlabs.com/mcollective/reference/ui/filters.html#fact-filters
>>> [2] https://docs.puppetlabs.com/mcollective/reference/plugins/facts.html
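As a sketch (assuming the YAML facts plugin from [2] is configured),
generating such facts could be as simple as dumping the node's roles into
the facts file:

    import yaml

    def write_mco_facts(roles, path='/etc/mcollective/facts.yaml'):
        # e.g. roles=['compute', 'ceph-osd'] -> node_roles: compute ceph-osd
        with open(path, 'w') as f:
            yaml.safe_dump({'node_roles': ' '.join(roles)}, f,
                           default_flow_style=False)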
>>>
>>>
>>> On Wed, Sep 10, 2014 at 6:04 PM, Dmitriy Shulyak 
>>> wrote:
>>>
 Hi folks,

 Some of you may know that there is ongoing work to achieve a kind of
 data-driven orchestration
 for Fuel. If this is new to you, please get familiar with the spec:

 https://review.openstack.org/#/c/113491/

 Knowing that running an arbitrary command on nodes will probably be the
 most used type of
 orchestration extension, I want to discuss our solution for this
 problem.

 A plugin writer will need to do two things:

 1. Provide a custom task.yaml (I am using /etc/puppet/tasks, but this is
 completely configurable,
 we just need to reach agreement)

   /etc/puppet/tasks/echo/task.yaml

   with the following content:

type: exec
cmd: echo 1

 2. Provide the control plane with orchestration metadata

 /etc/fuel/tasks/echo_task.yaml

 controller:
   - task: echo
     description: Simple echo for you
     priority: 1000
 compute:
   - task: echo
     description: Simple echo for you
     priority: 1000

 This is done in order to separate the concerns of orchestration logic and
 tasks.

 From the plugin writer's perspective it is far more usable to provide the
 exact command in the orchestration metadata itself, like:

 /etc/fuel/tasks/echo_task.yaml

 controller:
   - task: echo
     description: Simple echo for you
     priority: 1000
     cmd: echo 1
     type: exec

 compute:
   - task: echo
     description: Simple echo for you
     priority: 1000
     cmd: echo 1
     type: exec

 I would prefer to stick with the first, because there are benefits to
 using one interface for all task executors (puppet, exec, maybe chef),
 which will improve the debugging and development process.

 So my question is: is the first good enough? Or is the second an essential
 type of plugin to support?

 If you want additional implementation details check:
 https://review.openstack.org/#/c/118311/
 https://review.openstack.org/#/c/113226/
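
 For illustration, a rough sketch (not the actual tasklib code) of why a
 single task.yaml contract is attractive: one dispatcher can execute any
 task type through the same interface:

    import subprocess
    import yaml

    def run_task(task_dir):
        # every executor consumes the same task.yaml contract
        with open('%s/task.yaml' % task_dir) as f:
            task = yaml.safe_load(f)
        if task['type'] == 'exec':
            return subprocess.call(task['cmd'], shell=True)
        raise NotImplementedError('unsupported task type: %s' % task['type'])

    # run_task('/etc/puppet/tasks/echo')  # would run "echo 1"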




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Vladimir Kuklin
+1

On Thu, Sep 11, 2014 at 5:05 PM, Anastasia Urlapova 
wrote:

> > I think we should not count bugs for HCF criteria if they affect only
> > experimental feature(s).
>
> +1, absolutely agree, but we should determine count of allowed bugs for
> experimental features against severity.
>
> On Thu, Sep 11, 2014 at 2:13 PM, Nikolay Markov 
> wrote:
>
>> Probably, even "experimental feature" should at least pretend to be
>> working, anyway, or it shouldn't be publically announced. But I think
>> it's important to describe limitation of this features (or mark some
>> of them as "untested") and I think list of known issues with links to
>> most important bugs is a good approach. And tags will just make things
>> simpler.
>>
>> On Thu, Sep 11, 2014 at 1:05 PM, Igor Kalnitsky 
>> wrote:
>> >> May be we can use tag per feature, for example "zabbix"
>> >
>> > Tags are ok, but I still think that we can mention at least some
>> > significant bugs. For example, if some feature doesn't work in some
>> > deployment mode (e.g. simple, with ceilometer, etc) we can at least
>> > notify users so they even don't try.
>> >
>> > Another opinions?
>> >
>> >
>> > On Thu, Sep 11, 2014 at 11:45 AM, Mike Scherbakov
>> >  wrote:
>> >>> if we point somewhere about knowing issues in those experimental
>> features
>> >> there are might be dozens of bugs.
>> >> May be we can use tag per feature, for example "zabbix", so it will be
>> easy
>> >> to search in LP all open bugs regarding Zabbix feature?
>> >>
>> >> On Thu, Sep 11, 2014 at 12:11 PM, Igor Kalnitsky <
>> ikalnit...@mirantis.com>
>> >> wrote:
>> >>>
>> >>> > I think we should not count bugs for HCF criteria if they affect
>> only
>> >>> > experimental feature(s).
>> >>>
>> >>> +1, I'm totally agree with you - it makes no sense to count
>> >>> experimental bugs as HCF criteria.
>> >>>
>> >>> > Any objections / other ideas?
>> >>>
>> >>> I think it would be great for customers if we point somewhere about
>> >>> knowing issues in those experimental features. IMHO, it should help
>> >>> them to understand what's wrong in case of errors and may prevent bug
>> >>> duplication in LP.
>> >>>
>> >>>
>> >>> On Thu, Sep 11, 2014 at 10:19 AM, Mike Scherbakov
>> >>>  wrote:
>> >>> > Hi all,
>> >>> > what about using "experimental" tag for experimental features?
>> >>> >
>> >>> > After we implemented feature groups [1], we can divide our features
>> and
>> >>> > for
>> >>> > complex features, or those which don't get enough QA resources in
>> the
>> >>> > dev
>> >>> > cycle, we can declare as experimental. It would mean that those are
>> not
>> >>> > production ready features.
>> >>> > Giving them live still in experimental mode allows early adopters to
>> >>> > give a
>> >>> > try and bring a feedback to the development team.
>> >>> >
>> >>> > I think we should not count bugs for HCF criteria if they affect
>> only
>> >>> > experimental feature(s). At the moment, we have Zabbix as
>> experimental
>> >>> > feature, and Patching of OpenStack [2] is under consideration: if
>> today
>> >>> > QA
>> >>> > doesn't approve it to be as ready for production use, we have no
>> other
>> >>> > choice. All deadlines passed, and we need to get 5.1 finally out.
>> >>> >
>> >>> > Any objections / other ideas?
>> >>> >
>> >>> > [1]
>> >>> >
>> >>> >
>> https://github.com/stackforge/fuel-specs/blob/master/specs/5.1/feature-groups.rst
>> >>> > [2] https://blueprints.launchpad.net/fuel/+spec/patch-openstack
>> >>> > --
>> >>> > Mike Scherbakov
>> >>> > #mihgen
>> >>> >
>> >>> >
>> >>> > ___
>> >>> > OpenStack-dev mailing list
>> >>> > OpenStack-dev@lists.openstack.org
>> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>> >
>> >>>
>> >>> ___
>> >>> OpenStack-dev mailing list
>> >>> OpenStack-dev@lists.openstack.org
>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> Mike Scherbakov
>> >> #mihgen
>> >>
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> --
>> Best regards,
>> Nick Markov
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-

Re: [openstack-dev] global-reqs on tooz pulls in worrisome transitive dep

2014-09-11 Thread Julien Danjou
On Tue, Sep 09 2014, Matt Riedemann wrote:

> I noticed this change [1] today for global-requirements to require tooz [2] 
> for
> a ceilometer blueprint [3].
>
> The sad part is that tooz requires pymemcache [4] which is, from what I can
> tell, a memcached client that is not the same as python-memcached [5].
>
> Note that python-memcached is listed in global-requirements already [6].

You're not going to control the full list of dependencies of the things
we use in OpenStack, so this kind of situation is going to arise anyway.

> The problem I have with this is it doesn't appear that RHEL/Fedora package
> pymemcache (they do package python-memcached).  I see that openSUSE builds
> separate packages for each.  It looks like Ubuntu also has separate packages.
>
> My question is, is this a problem?  I'm assuming RDO will just have to package
> python-pymemcache themselves but what about people not using RDO (SOL? Don't
> care? Other?).
>
> Reverting the requirements change would probably mean reverting the ceilometer
> blueprint (or getting a version of tooz out that works with python-memcached
> which is probably too late for that right now).  Given the point in the 
> schedule
> that seems pretty drastic.

python-memcached is a terrible memcache client, which does not support
Python 3. pymemcache is way better than python-memcached, and everybody
should switch to it. When we started tooz from scratch a year ago, there
was no point starting to use a non-Python 3 compatible and "crappy"
memcache client.

pymemcache shouldn't be a problem to package anyway. :)
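
For anyone weighing the switch, basic pymemcache usage looks like the
sketch below (against a local memcached; note that values come back as
bytes):

    from pymemcache.client import Client

    client = Client(('127.0.0.1', 11211))
    client.set('some_key', 'some_value')
    print(client.get('some_key'))  # b'some_value'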

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Error in deploying ironicon Ubuntu 12.04

2014-09-11 Thread Jim Rollenhagen


On September 11, 2014 3:52:59 AM PDT, Lucas Alvares Gomes 
 wrote:
>Oh, it's because Precise doesn't have the docker.io package [1] (nor
>"docker").
>
>AFAIK the -infra team is now using Trusty in the gate, so it won't be a
>problem. But if you think that we should still support Ironic DevStack
>with Precise, please file a bug about it so the Ironic team can take a
>look at it.
>
>[1]
>http://packages.ubuntu.com/search?suite=trusty§ion=all&arch=any&keywords=docker.io&searchon=names
>
>Cheers,
>Lucas
>
>On Thu, Sep 11, 2014 at 11:12 AM, Peeyush 
>wrote:
>> Hi all,
>>
>> I have been trying to deploy Openstack-ironic on a Ubuntu 12.04 VM.
>> I encountered the following error:
>>
>> 2014-09-11 10:08:11.166 | Reading package lists...
>> 2014-09-11 10:08:11.471 | Building dependency tree...
>> 2014-09-11 10:08:11.475 | Reading state information...
>> 2014-09-11 10:08:11.610 | E: Unable to locate package docker.io
>> 2014-09-11 10:08:11.610 | E: Couldn't find any package by regex
>'docker.io'
>> 2014-09-11 10:08:11.611 | + exit_trap
>> 2014-09-11 10:08:11.612 | + local r=100
>> 2014-09-11 10:08:11.612 | ++ jobs -p
>> 2014-09-11 10:08:11.612 | + jobs=
>> 2014-09-11 10:08:11.612 | + [[ -n '' ]]
>> 2014-09-11 10:08:11.612 | + kill_spinner
>> 2014-09-11 10:08:11.613 | + '[' '!' -z '' ']'
>> 2014-09-11 10:08:11.613 | + [[ 100 -ne 0 ]]
>> 2014-09-11 10:08:11.613 | + echo 'Error on exit'
>> 2014-09-11 10:08:11.613 | Error on exit
>> 2014-09-11 10:08:11.613 | + [[ -z /opt/stack ]]
>> 2014-09-11 10:08:11.613 | + ./tools/worlddump.py -d /opt/stack
>> 2014-09-11 10:08:11.655 | + exit 100
>>
>> I tried to make it work on a separate machine, but got the same
>error.
>> I understand that it could be because script is looking for docker.io
>> package,
>> but I guess only docker package is available. I tried to install
>docker.io,
>> but couldn't
>> find it.
>>
>> Can you please help me out to resolve this?

Ouch. I added this as a dependency in devstack for building IPA. 

As Lucas said, it works fine on 14.04. On 12.04, if you are using Ironic with 
the PXE driver (the default), you can likely remove that line from 
devstack/files/apts/ironic. I won't promise that everything will work after 
that, but the chances are good. 

// jim
>>
>> Thanks,
>>
>> --
>> Peeyush Gupta
>> gpeey...@linux.vnet.ibm.com
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Andrew Laski


On 09/10/2014 07:23 PM, Michael Still wrote:

On Thu, Sep 11, 2014 at 8:11 AM, Jay Pipes  wrote:


a) Sorting out the common code is already accounted for in Dan B's original
proposal -- it's a prerequisite for the split.

It's a big prerequisite though. I think we're talking about a release
worth of work to get that right. I don't object to us doing that work,
but I think we need to be honest about how long it's going to take. It
will also make the core of nova less agile, as we'll find it hard to
change the hypervisor driver interface over time. Do we really think
it's ready to be stable?


I don't.  For a long time now I've wanted to split the gigantic spawn() 
method in the virt api into more discrete steps.  I think there's some 
opportunity for doing some steps in parallel and the potential to have 
failures reported earlier and handled better.  But I've been sitting on 
it because I wanted to use 'tasks' as a way to address the 
parallelization and that work hasn't happened yet.  But this work would 
be introducing new calls which would be used based on some sort of 
capability query to the driver, so I don't think this work is 
necessarily hindered by stabilizing the interface.


I also think the migration/resize methods could use some analysis before 
making a determination that they are what we want in a stable interface.




As an alternative approach...

What if we pushed most of the code for a driver into a library?
Imagine a library which controls the low level operations of a
hypervisor -- create a vm, attach a NIC, etc. Then the driver would
become a shim around that which was relatively thin, but owned the
interface into the nova core. The driver handles the nova specific
things like knowing how to create a config drive, or how to
orchestrate with cinder, but hands over all the hypervisor operations
to the library. If we found a bug in the library we just pin our
dependency on the version we know works whilst we fix things.

In fact, the driver inside nova could be a relatively generic "library
driver", and we could have multiple implementations of the library,
one for each hypervisor.

This would make testing nova easier too, because we know how to mock
libraries already.

Now, that's kind of what we have in the hypervisor driver API now.
What I'm proposing is that the point where we break out of the nova
code base should be closer to the hypervisor than what that API
presents.
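
Purely as an illustration of the shape of that shim (hypervisor_lib and its
whole API are hypothetical here, and the spawn() signature is abbreviated):

    import hypervisor_lib  # hypothetical library, pinned separately

    class LibraryDriver(object):
        """Thin in-tree shim: Nova-specific orchestration stays here,
        low-level hypervisor operations live in the external library."""

        def __init__(self, virtapi):
            self.virtapi = virtapi
            self.hv = hypervisor_lib.connect()  # hypothetical API

        def spawn(self, context, instance, image_meta, network_info=None,
                  block_device_info=None):
            # config drive creation, cinder orchestration etc. would stay
            # in the shim; the library only knows the hypervisor
            vm = self.hv.create_vm(instance.uuid, instance.memory_mb,
                                   instance.vcpus)
            for vif in network_info or []:
                self.hv.attach_nic(vm, vif)
            self.hv.power_on(vm)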


b) The conflict Dan is speaking of is around the current situation where we
have a limited core review team bandwidth and we have to pick and choose
which virt driver-specific features we will review. This leads to bad
feelings and conflict.

The way this worked in the past is we had cores who were subject
matter experts in various parts of the code -- there is a clear set of
cores who "get" xen or libvirt for example and I feel like those
drivers get reasonable review times. What's happened though is that
we've added a bunch of drivers without adding subject matter experts
to core to cover those drivers. Those newer drivers therefore have a
harder time getting things reviewed and approved.

That said, a heap of cores have spent time reviewing vmware driver
code this release, so its obviously not as simple as I describe above.


c) It's the impact to the CI and testing load that I see being the biggest
benefit to the split-out driver repos. Patches proposed to the XenAPI driver
shouldn't have the Hyper-V CI tests run against the patch. Likewise, running
libvirt unit tests in the VMWare driver repo doesn't make a whole lot of
sense, and all of these tests add a not-insignificant load to the overall
upstream and external CI systems. The long wait time for tests to come back
means contributors get frustrated, since many reviewers tend to wait until
Jenkins returns some result before they review. All of this leads to
increased conflict that would be somewhat ameliorated by having separate
code repos for the virt drivers.

It is already possible to filter CI runs to specific paths in the
code. We just didn't choose to do that for policy reasons. We could
change that right now with a trivial tweak to each CI system's zuul
config.

Michael




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-11 Thread David Kranz

On 09/11/2014 07:32 AM, Eoghan Glynn wrote:



As you all know, there has recently been several very active discussions
around how to improve assorted aspects of our development process. One idea
that was brought up is to come up with a list of cycle goals/project
priorities for Kilo [0].

To that end, I would like to propose an exercise as discussed in the TC
meeting yesterday [1]:
Have anyone interested (especially TC members) come up with a list of what
they think the project wide Kilo cycle goals should be and post them on this
thread ...

Here's my list of high-level cycle goals, for consideration ...


1. Address our usability debts

With some justification, we've been saddled with the perception
of not caring enough about the plight of users and operators. The
frustrating thing is that much of this is very fixable, *if* we take
time out from the headlong rush to add features. Achievable things
like documentation completeness, API consistency, CLI intuitiveness,
logging standardization, would all go a long way here.

These things are of course all not beyond the wit of man, but we
need to take the time out to actually do them. This may involve
a milestone, or even longer, where we accept that the rate of
feature addition will be deliberately slowed down.


2. Address the drags on our development velocity

Despite the Trojan efforts of the QA team, the periodic brownouts
in the gate are having a serious impact on our velocity. Over the
past few cycles, we've seen the turnaround time for patch check/
verification spike up unacceptably long multiple times, mostly
around the milestones.

Whatever we can do to smoothen out these spikes, whether it be
moving much of the Tempest coverage into the project trees, or
switching focus onto post-merge verification as suggested by
Sean on this thread, or even considering some more left-field
approaches such as staggered milestones, we need to grasp this
nettle as a matter of urgency.

Further back in the pipeline, the effort required to actually get
something shepherded through review is steadily growing. To the
point that we need to consider some radical approaches that
retain the best of our self-organizing model, while setting more
reasonable & reliable expectations for patch authors, and making
it more likely that narrow domain expertise is available to review
their contributions in timely way. For the larger projects, this
is likely to mean something different (along the lines of splits
or sub-domains) than it does for the smaller projects.


3. Address the long-running "what's in and what's out" questions

The way some of the discussions about integration and incubation
played out this cycle have made me sad. Not all of these discussions
have been fully supported by the facts on the ground IMO. And not
all of the issues that have been held up as justifications for
whatever course of exclusion or inclusion would IMO actually be
solved in that way.

I think we need to move the discussion around a new concept of
layering, or redefining what it means to be "in the tent", to a
more constructive and collaborative place than heretofore.


4. Address the fuzziness in cross-service interactions

In a semi-organic way, we've gone and built ourselves a big ol'
service-oriented architecture. But without necessarily always
following the strong contracts, loose coupling, discoverability,
and autonomy that a SOA approach implies.

We need to take the time to go back and pay down some of the debt
that has accreted over multiple cycles around these
cross-service interactions. The most pressing of these would
include finally biting the bullet on the oft-proposed but never
delivered-upon notion of stabilizing notifications behind a
well-defined contract. Also, the more recently advocated notions
of moving away from coarse-grained versioning of the inter-service
APIs, and supporting better introspection and discovery of
capabilities.

+1
IMO, almost all of the other ills discussed recently derive from this 
single failure.


 -David

by end of day Wednesday, September 10th.

Oh, yeah, and impose fewer arbitrary deadlines ;)

Cheers,
Eoghan


After which time we can
begin discussing the results.
The goal of this exercise is to help us see if our individual world views
align with the greater community, and to get the ball rolling on a larger
discussion of where as a project we should be focusing more time.


best,
Joe Gordon

[0]
http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html
[1]
http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-02-20.04.log.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Gary Kotton


On 9/11/14, 2:55 PM, "Thierry Carrez"  wrote:

>Sean Dague wrote:
>> [...]
>> Why don't we start with "let's clean up the virt interface and make it
>> more sane", as I don't think there is any disagreement there. If it's
>> going to take a cycle, it's going to take a cycle anyway (it will
>> probably take 2 cycles, realistically, we always underestimate these
>> things, remember when no-db-compute was going to be 1 cycle?). I don't
>> see the need to actually decide here and now that the split is clearly
>> at least 7 - 12 months away. A lot happens in the intervening time.
>
>Yes, that sounds like the logical next step. We can't split drivers
>without first doing that anyway. I still think "people need smaller
>areas of work", as Vish eloquently put it. I still hope that refactoring
>our test architecture will let us reach the same level of quality with
>only a fraction of the tests being run at the gate, which should address
>most of the harm you see in adding additional repositories. But I agree
>there is little point in discussing splitting virt drivers (or anything
>else, really) until the internal interface below that potential split is
>fully cleaned up and it becomes an option.

How about we start to try and patch Gerrit to provide +2 permissions for
people who can be assigned 'driver core' status. This is something that is
relevant to Nova and Neutron, and I guess Cinder too.

Thanks
Gary

>
>-- 
>Thierry Carrez (ttx)
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Experimental features and how they affect HCF

2014-09-11 Thread Anastasia Urlapova
> I think we should not count bugs for HCF criteria if they affect only
> experimental feature(s).

+1, absolutely agree, but we should determine the count of allowed bugs for
experimental features by severity.

On Thu, Sep 11, 2014 at 2:13 PM, Nikolay Markov 
wrote:

> Probably, even an "experimental feature" should at least pretend to be
> working, anyway, or it shouldn't be publicly announced. But I think
> it's important to describe the limitations of these features (or mark some
> of them as "untested"), and I think a list of known issues with links to
> the most important bugs is a good approach. And tags will just make things
> simpler.
>
> On Thu, Sep 11, 2014 at 1:05 PM, Igor Kalnitsky 
> wrote:
> >> Maybe we can use a tag per feature, for example "zabbix"
> >
> > Tags are ok, but I still think that we can mention at least some
> > significant bugs. For example, if some feature doesn't work in some
> > deployment mode (e.g. simple, with ceilometer, etc) we can at least
> > notify users so they don't even try.
> >
> > Other opinions?
> >
> >
> > On Thu, Sep 11, 2014 at 11:45 AM, Mike Scherbakov
> >  wrote:
> >>> if we point somewhere about known issues in those experimental
> >>> features
> >> there might be dozens of bugs.
> >> Maybe we can use a tag per feature, for example "zabbix", so it will be
> >> easy to search in LP for all open bugs regarding the Zabbix feature?
> >>
> >> On Thu, Sep 11, 2014 at 12:11 PM, Igor Kalnitsky <
> ikalnit...@mirantis.com>
> >> wrote:
> >>>
> >>> > I think we should not count bugs for HCF criteria if they affect only
> >>> > experimental feature(s).
> >>>
> >>> +1, I totally agree with you - it makes no sense to count
> >>> experimental bugs as HCF criteria.
> >>>
> >>> > Any objections / other ideas?
> >>>
> >>> I think it would be great for customers if we point somewhere about
> >>> known issues in those experimental features. IMHO, it should help
> >>> them to understand what's wrong in case of errors and may prevent bug
> >>> duplication in LP.
> >>>
> >>>
> >>> On Thu, Sep 11, 2014 at 10:19 AM, Mike Scherbakov
> >>>  wrote:
> >>> > Hi all,
> >>> > what about using "experimental" tag for experimental features?
> >>> >
> >>> > After we implemented feature groups [1], we can divide our features,
> >>> > and complex features, or those which don't get enough QA resources
> >>> > in the dev cycle, we can declare as experimental. It would mean that
> >>> > those are not production-ready features.
> >>> > Letting them live in experimental mode still allows early adopters
> >>> > to give them a try and bring feedback to the development team.
> >>> >
> >>> > I think we should not count bugs for HCF criteria if they affect only
> >>> > experimental feature(s). At the moment, we have Zabbix as an
> >>> > experimental feature, and Patching of OpenStack [2] is under
> >>> > consideration: if today QA doesn't approve it as ready for
> >>> > production use, we have no other choice. All deadlines passed, and
> >>> > we need to get 5.1 finally out.
> >>> >
> >>> > Any objections / other ideas?
> >>> >
> >>> > [1]
> >>> >
> >>> >
> https://github.com/stackforge/fuel-specs/blob/master/specs/5.1/feature-groups.rst
> >>> > [2] https://blueprints.launchpad.net/fuel/+spec/patch-openstack
> >>> > --
> >>> > Mike Scherbakov
> >>> > #mihgen
> >>> >
> >>> >
> >>> > ___
> >>> > OpenStack-dev mailing list
> >>> > OpenStack-dev@lists.openstack.org
> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> >
> >>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >>
> >> --
> >> Mike Scherbakov
> >> #mihgen
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Best regards,
> Nick Markov
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding Nejc Saje to ceilometer-core

2014-09-11 Thread gordon chung
> Nejc has been doing great work and has been very helpful during the
> Juno cycle and his help is very valuable.

> I'd like to propose that we add Nejc Saje to the ceilometer-core group.

can we minus because he makes me look bad? /sarcasm
+1 for core.
cheers,
gord
  ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Triage Bug Day Today!

2014-09-11 Thread Flavio Percoco
On 09/11/2014 02:28 PM, Cindy Pallares wrote:
> Hi Folks!
> 
> Glance is having its bug triage day today! Please help out if you can.
> You can check out the tasks here:
> 
> http://etherpad.openstack.org/p/glancebugday
> 
> Also here are some handy links to the untriaged bugs in glance and the
> client:
> 
> https://bugs.launchpad.net/glance/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on
> 
> 
> https://bugs.launchpad.net/python-glanceclient/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on
> 
> 
> 

Awesome,

Thanks for organizing this, Cindy.
Flavio


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Triage Bug Day Today!

2014-09-11 Thread Cindy Pallares

Hi Folks!

Glance is having its bug triage day today! Please help out if you can. 
You can check out the tasks here:


http://etherpad.openstack.org/p/glancebugday

Also here are some handy links to the untriaged bugs in glance and the 
client:


https://bugs.launchpad.net/glance/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on

https://bugs.launchpad.net/python-glanceclient/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on



-Cindy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] rss for specs

2014-09-11 Thread Sergey Lukjanov
Hi folks,

you can now subscribe to the specs RSS feed -
http://specs.openstack.org/openstack/sahara-specs/rss

Thanks to Doug Hellmann for implementing it.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugable solution for running abstract commands on nodes

2014-09-11 Thread Evgeniy L
Hi,

In most cases it will be much easier for plugin developers or Fuel users
to just write the command they want to run on the nodes, instead of
describing some abstract task which doesn't carry any additional
information/logic and looks like unnecessary complexity.
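
For illustration only, the simple case could be handled by an executor as
small as the following sketch; the file layout and the type/cmd keys follow
the task.yaml proposal quoted below, while the function name and error
handling are invented:

import subprocess

import yaml


def run_task(task_dir):
    # Load the task description, e.g. /etc/puppet/tasks/echo/task.yaml
    with open('%s/task.yaml' % task_dir) as f:
        task = yaml.safe_load(f)

    # The simple case: a raw shell command with no extra logic
    if task.get('type') == 'exec':
        return subprocess.call(task['cmd'], shell=True)

    # Anything more complicated would be dispatched to tasklib here
    raise NotImplementedError('unsupported task type: %s' % task.get('type'))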

But for complicated cases the user will have to write some code for tasklib.

Thanks,

On Wed, Sep 10, 2014 at 8:10 PM, Dmitriy Shulyak 
wrote:

> Hi,
>
> you described a transport mechanism for running commands based on facts; we
> have another one, which stores all business logic in Nailgun and only
> provides the orchestrator with a set of tasks to execute. This is not a
> problem.
>
> I am talking about the API for the plugin writer/developer, and how to
> implement it to be more "friendly".
>
> On Wed, Sep 10, 2014 at 6:46 PM, Aleksandr Didenko 
> wrote:
>
>> Hi,
>>
>> as for execution of arbitrary code across the OpenStack cluster - I was
>> thinking of mcollective + fact filters:
>>
>> 1) we need to start using mcollective facts [0] [2] - we don't
>> use/configure these currently
>> 2) use mcollective execute_shell_command agent (or any other agent) with
>> fact filter [1]
>>
>> So, for example, if we have mcollective fact called "node_roles":
>> node_roles: "compute ceph-osd"
>>
>> Then we can execute shell cmd on all compute nodes like this:
>>
>> mco rpc execute_shell_command execute cmd="/some_script.sh" -F
>> "node_role=/compute/"
>>
>> Of course, we can use more complicated filters to run commands more
>> precisely.
>>
>> [0]
>> https://projects.puppetlabs.com/projects/mcollective-plugins/wiki/FactsFacterYAML
>> [1]
>> https://docs.puppetlabs.com/mcollective/reference/ui/filters.html#fact-filters
>> [2] https://docs.puppetlabs.com/mcollective/reference/plugins/facts.html
>>
>>
>> On Wed, Sep 10, 2014 at 6:04 PM, Dmitriy Shulyak 
>> wrote:
>>
>>> Hi folks,
>>>
>>> Some of you may know that there is ongoing work to achieve a kind of
>>> data-driven orchestration
>>> for Fuel. If this is new to you, please get familiar with spec:
>>>
>>> https://review.openstack.org/#/c/113491/
>>>
>>> Knowing that running an arbitrary command on nodes will probably be the
>>> most used type of orchestration extension, I want to discuss our
>>> solution to this problem.
>>>
>>> Plugin writer will need to do two things:
>>>
>>> 1. Provide a custom task.yaml (I am using /etc/puppet/tasks, but this is
>>> completely configurable,
>>> we just need to reach agreement)
>>>
>>>   /etc/puppet/tasks/echo/task.yaml
>>>
>>>   with next content:
>>>
>>>type: exec
>>>cmd: echo 1
>>>
>>> 2. Provide control plane with orchestration metadata
>>>
>>> /etc/fuel/tasks/echo_task.yaml
>>>
>>> controller:
>>>  -
>>>   task: echo
>>>   description: Simple echo for you
>>>   priority: 1000
>>> compute:
>>> -
>>>   task: echo
>>>   description: Simple echo for you
>>>   priority: 1000
>>>
>>> This is done in order to separate concerns of orchestration logic and
>>> tasks.
>>>
>>> From the plugin writer's perspective it is far more usable to provide the
>>> exact command in the orchestration metadata itself, like:
>>>
>>> /etc/fuel/tasks/echo_task.yaml
>>>
>>> controller:
>>>  -
>>>   task: echo
>>>   description: Simple echo for you
>>>   priority: 1000
>>>   cmd: echo 1
>>>   type: exec
>>>
>>> compute:
>>> -
>>>  task: echo
>>>   description: Simple echo for you
>>>   priority: 1000
>>>   cmd: echo 1
>>>   type: exec
>>>
>>> I would prefer to stick to the first, because there are benefits to using
>>> one interface across all task executors (puppet, exec, maybe chef), which
>>> will improve the debugging and development process.
>>>
>>> So my question is: is the first good enough, or is the second an
>>> essential type of plugin to support?
>>>
>>> If you want additional implementation details check:
>>> https://review.openstack.org/#/c/118311/
>>> https://review.openstack.org/#/c/113226/
>>>
>>>
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL] Re: SSL in Fuel.

2014-09-11 Thread Simon Pasquier
Hi,

On Thu, Sep 11, 2014 at 1:03 PM, Sebastian Kalinowski <
skalinow...@mirantis.com> wrote:

> I have some topics for [1] that I want to discuss:
>
> 1) Should we allow users to turn SSL on/off for Fuel master?
> I think we should, since some users may not care about SSL and
> enabling it will just make them unhappy (like warnings in browsers,
> expiring certs).
>
>
Definitely +1. I think that Tomasz mentioned somewhere that HTTP should be
kept as the default.


> 2) Will we allow users (in the first iteration) to use their own certs?
> If we will (which I think we should, and other people also seem to
> share this point of view), we have some options for that:
>  A) Add information to the docs on where to upload your own certificate on
> the master node (no UI) - less work, but requires a little more action from
> users
>  B) A simple form in the UI where the user will be able to paste his certs -
> a little bit more work, but user friendly
> Are there any reasons we shouldn't do that?
>
>
Option A is enough. If there is enough time to implement option B, that's
cool but this should not be a blocker.


> 3) How will we manage cert expiration?
> Stanislaw proposed that we should show the user a notification that will
> tell them about cert expiration. We could check that in a cron job.
> I think that we should also allow the user to generate a new cert in Fuel
> when the old one expires.
>

As long as the user cannot upload a certificate, we don't need to care
about this point but it should be mentioned in the doc.
And to avoid this problem, Fuel should generate certificates that expire in
many years (e.g. >= 10).
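
For the cron-based expiry check mentioned above, a minimal sketch could look
like the following; the certificate path, the 30-day threshold, and the bare
print() standing in for a real Fuel notification are all assumptions, not
existing Fuel code:

import datetime
import subprocess

CERT = '/etc/nginx/nginx.crt'  # assumed location of the master node cert

# "openssl x509 -enddate -noout" prints e.g.
# "notAfter=Sep 11 12:00:00 2024 GMT"
out = subprocess.check_output(
    ['openssl', 'x509', '-enddate', '-noout', '-in', CERT]).decode()
not_after = datetime.datetime.strptime(
    out.strip().split('=', 1)[1], '%b %d %H:%M:%S %Y %Z')

if not_after - datetime.datetime.utcnow() < datetime.timedelta(days=30):
    # Here Fuel would raise a UI notification instead of printing
    print('certificate expires soon: %s' % not_after)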

BR

Simon

>
> I'll also remove the part about adding cert validation in the fuel agent,
> since it would require a significant amount of work and it's not essential
> for the first iteration.
>
> Best,
> Sebastian
>
>
> [1] https://blueprints.launchpad.net/fuel/+spec/fuel-ssl-endpoints
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Bringing back auto-abandon

2014-09-11 Thread Ryan Brown
On 09/10/2014 06:32 PM, James E. Blair wrote:
> James Polley  writes:
> Incidentally, that is the query in the "Wayward Changes" section of the
> "Review Inbox" dashboard (thanks Sean!); for nova, you can see it here:
> 
>   
> https://review.openstack.org/#/projects/openstack/nova,dashboards/important-changes:review-inbox-dashboard
> 
> The key here is that there are a lot of changes in a lot of different
> states, and one query isn't going to do everything that everyone wants
> it to do.  Gerrit has a _very_ powerful query language that can actually
> help us make sense of all the changes we have in our system without
> externalizing the cost of that onto contributors in the form of
> forced-abandoning of changes.  Dashboards can help us share the
> knowledge of how to get the most out of it.
> 
>   https://review.openstack.org/Documentation/user-dashboards.html
>   https://review.openstack.org/Documentation/user-search.html
> 
> -Jim
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Also if you don't feel existing dashboards scratch your project's
particular itch, there's always gerrit-dash-creator[1] to help you make
one that fits your needs.

[1]: https://github.com/stackforge/gerrit-dash-creator

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Thierry Carrez
Sean Dague wrote:
> [...]
> Why don't we start with "let's clean up the virt interface and make it
> more sane", as I don't think there is any disagreement there. If it's
> going to take a cycle, it's going to take a cycle anyway (it will
> probably take 2 cycles, realistically, we always underestimate these
> things, remember when no-db-compute was going to be 1 cycle?). I don't
> see the need to actually decide here and now that the split is clearly
> at least 7 - 12 months away. A lot happens in the intervening time.

Yes, that sounds like the logical next step. We can't split drivers
without first doing that anyway. I still think "people need smaller
areas of work", as Vish eloquently put it. I still hope that refactoring
our test architecture will let us reach the same level of quality with
only a fraction of the tests being run at the gate, which should address
most of the harm you see in adding additional repositories. But I agree
there is little point in discussing splitting virt drivers (or anything
else, really) until the internal interface below that potential split is
fully cleaned up and it becomes an option.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-11 Thread Sean Dague
On 09/10/2014 11:55 AM, Steven Hardy wrote:
> On Wed, Sep 10, 2014 at 10:14:32AM -0400, Sean Dague wrote:
>> Going through the untriaged Nova bugs, and there are a few on a similar
>> pattern:
>>
>> Nova operation in progress takes a while
>> Crosses keystone token expiration time
>> Timeout thrown
>> Operation fails
>> Terrible 500 error sent back to user
> 
> We actually have this exact problem in Heat, which I'm currently trying to
> solve:
> 
> https://bugs.launchpad.net/heat/+bug/1306294
> 
> Can you clarify, is the issue either:
> 
> 1. Create novaclient object with username/password
> 2. Do series of operations via the client object which eventually fail
> after $n operations due to token expiry
> 
> or:
> 
> 1. Create novaclient object with username/password
> 2. Some really long operation which means token expires in the course of
> the service handling the request, blowing up and 500-ing

From what I can tell of the Nova bugs, both are issues. Honestly, it
would probably be really telling to set up a test env with 10s token
timeouts and see how crazy it broke. I expect that our expiration logic,
and how our components react to it, is actually a lot less coherent than
we believe.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-11 Thread Sean Dague
On 09/10/2014 08:46 PM, Jamie Lennox wrote:
> 
> - Original Message -
>> From: "Steven Hardy" 
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Sent: Thursday, September 11, 2014 1:55:49 AM
>> Subject: Re: [openstack-dev] [all] [clients] [keystone] lack of retrying 
>> tokens leads to overall OpenStack fragility
>>
>> On Wed, Sep 10, 2014 at 10:14:32AM -0400, Sean Dague wrote:
>>> Going through the untriaged Nova bugs, and there are a few on a similar
>>> pattern:
>>>
>>> Nova operation in progress takes a while
>>> Crosses keystone token expiration time
>>> Timeout thrown
>>> Operation fails
>>> Terrible 500 error sent back to user
>>
>> We actually have this exact problem in Heat, which I'm currently trying to
>> solve:
>>
>> https://bugs.launchpad.net/heat/+bug/1306294
>>
>> Can you clarify, is the issue either:
>>
>> 1. Create novaclient object with username/password
>> 2. Do series of operations via the client object which eventually fail
>> after $n operations due to token expiry
>>
>> or:
>>
>> 1. Create novaclient object with username/password
>> 2. Some really long operation which means token expires in the course of
>> the service handling the request, blowing up and 500-ing
>>
>> If the former, then it does sound like a client, or usage-of-client bug,
>> although note if you pass a *token* vs username/password (as is currently
>> done for glance and heat in tempest, because we lack the code to get the
>> token outside of the shell.py code..), there's nothing the client can do,
>> because you can't request a new token with longer expiry with a token...
>>
>> However if the latter, then it seems like not really a client problem to
>> solve, as it's hard to know what action to take if a request failed
>> part-way through and thus things are in an unknown state.
>>
>> This issue is a hard problem, which can possibly be solved by
>> switching to a trust scoped token (service impersonates the user), but then
>> you're effectively bypassing token expiry via delegation which sits
>> uncomfortably with me (despite the fact that we may have to do this in heat
>> to solve the aforementioned bug)
>>
>>> It seems like we should have a standard pattern that on token expiration
>>> the underlying code at least gives one retry to try to establish a new
>>> token to complete the flow, however as far as I can tell *no* clients do
>>> this.
>>
>> As has been mentioned, using sessions may be one solution to this, and
>> AFAIK session support (where it doesn't already exist) is getting into
>> various clients via the work being carried out to add support for v3
>> keystone by David Hu:
>>
>> https://review.openstack.org/#/q/owner:david.hu%2540hp.com,n,z
>>
>> I see patches for Heat (currently gating), Nova and Ironic.
>>
>>> I know we had to add that into Tempest because tempest runs can exceed 1
>>> hr, and we want to avoid random fails just because we cross a token
>>> expiration boundary.
>>
>> I can't claim great experience with sessions yet, but AIUI you could do
>> something like:
>>
>> from keystoneclient.auth.identity import v3
>> from keystoneclient import session
>> from keystoneclient.v3 import client
>>
>> auth = v3.Password(auth_url=OS_AUTH_URL,
>>username=USERNAME,
>>password=PASSWORD,
>>project_id=PROJECT,
>>user_domain_name='default')
>> sess = session.Session(auth=auth)
>> ks = client.Client(session=sess)
>>
>> And if you can pass the same session into the various clients tempest
>> creates then the Password auth-plugin code takes care of reauthenticating
>> if the token cached in the auth plugin object is expired, or nearly
>> expired:
>>
>> https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/auth/identity/base.py#L120
>>
>> So in the tempest case, it seems like it may be a case of migrating the
>> code creating the clients to use sessions instead of passing a token or
>> username/password into the client object?
>>
>> That's my understanding of it atm anyway, hopefully jamielennox will be along
>> soon with more details :)
>>
>> Steve
> 
> 
> By clients here are you referring to the CLIs or the python libraries? 
> Implementation is at different points with each. 
> 
> Sessions will handle automatically reauthenticating and retrying a request, 
> however it relies on the service throwing a 401 Unauthenticated error. If a 
> service is returning a 500 (or a timeout?) then there isn't much that a 
> client can/should do for that because we can't assume that trying again with 
> a new token will solve anything. 
> 
> At the moment we have keystoneclient, novaclient, cinderclient, neutronclient 
> and then a number of the smaller projects with support for sessions. That 
> obviously doesn't mean that existing users of that code have transitioned to 
> the newer way though. David Hu has been working on using this code within the 
> existing CLIs. I have prot
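
For illustration, the session-sharing pattern described above might look
roughly like the following sketch; it assumes a novaclient version recent
enough to accept a session argument, and OS_AUTH_URL, USERNAME, PASSWORD and
PROJECT are placeholders as in Steve's snippet:

from keystoneclient.auth.identity import v3
from keystoneclient import session
from novaclient import client as nova_client

auth = v3.Password(auth_url=OS_AUTH_URL,
                   username=USERNAME,
                   password=PASSWORD,
                   project_id=PROJECT,
                   user_domain_name='default')
sess = session.Session(auth=auth)

# Any client constructed with this session gets re-authentication for
# free: when a request comes back 401, the auth plugin fetches a fresh
# token and the session retries the request.
nova = nova_client.Client('2', session=sess)
servers = nova.servers.list()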

Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Sean Dague
On 09/11/2014 05:18 AM, Daniel P. Berrange wrote:
> On Thu, Sep 11, 2014 at 09:23:34AM +1000, Michael Still wrote:
>> On Thu, Sep 11, 2014 at 8:11 AM, Jay Pipes  wrote:
>>
>>> a) Sorting out the common code is already accounted for in Dan B's original
>>> proposal -- it's a prerequisite for the split.
>>
>> Its a big prerequisite though. I think we're talking about a release
>> worth of work to get that right. I don't object to us doing that work,
>> but I think we need to be honest about how long its going to take. It
>> will also make the core of nova less agile, as we'll find it hard to
>> change the hypervisor driver interface over time. Do we really think
>> its ready to be stable?
> 
> Yes, in my proposal I explicitly said we'd need to have Kilo
> for all the prep work to clean up the virt API, before only
> doing the split in Lx.
> 
> The actual nova/virt/driver.py has been more stable over the
> past few releases than I thought it would be. In terms of APIs
> we've not really modified existing APIs, mostly added new ones.
> Where we did modify existing APIs, we could have easily taken
> the approach of adding a new API in parallel and deprecating
> the old entry point to maintain compat.
> 
> The big change which isn't visible directly is the conversion
> of internal nova code to use objects. Finishing this conversion
> is clearly a pre-requisite to any such split, since we'd need
> to make sure all data passed into the nova virt APIs as parameters
> is stable & well defined. 
> 
>> As an alternative approach...
>>
>> What if we pushed most of the code for a driver into a library?
>> Imagine a library which controls the low level operations of a
>> hypervisor -- create a vm, attach a NIC, etc. Then the driver would
>> become a shim around that which was relatively thin, but owned the
>> interface into the nova core. The driver handles the nova specific
>> things like knowing how to create a config drive, or how to
>> orchestrate with cinder, but hands over all the hypervisor operations
>> to the library. If we found a bug in the library we just pin our
>> dependancy on the version we know works whilst we fix things.
>>
>> In fact, the driver inside nova could be a relatively generic "library
>> driver", and we could have multiple implementations of the library,
>> one for each hypervisor.
> 
> I don't think that particularly solves the problem, particularly
> the ones you are most concerned about above of API stability. The
> naive impl of any "library" for the virt driver would pretty much
> mirror the nova virt API. The virt driver impls would thus have to
> do the job of taking the Nova objects passed in as parameters and
> turning them into something "stable" to pass to the library. Except
> now instead of us only having to figure out a stable API in one
> place, every single driver has to reinvent the wheel defining their
> own stable interface & objects. I'd also be concerned that ongoing
> work on drivers is still going to require a lot of patches to Nova
> to update the shims all the time, so we're still going to contend
> for resources fairly heavily.
> 
>>> b) The conflict Dan is speaking of is around the current situation where we
>>> have a limited core review team bandwidth and we have to pick and choose
>>> which virt driver-specific features we will review. This leads to bad
>>> feelings and conflict.
>>
>> The way this worked in the past is we had cores who were subject
>> matter experts in various parts of the code -- there is a clear set of
>> cores who "get" xen or libvirt for example and I feel like those
>> drivers get reasonable review times. What's happened though is that
>> we've added a bunch of drivers without adding subject matter experts
>> to core to cover those drivers. Those newer drivers therefore have a
>> harder time getting things reviewed and approved.
> 
> FYI, for Juno at least I really don't consider that even the libvirt
> driver got acceptable review times in any sense. The pain of waiting
> for reviews in libvirt code I've submitted this cycle is what prompted
> me to start this thread. All the virt drivers are suffering way more
> than they should be, but those without core team representation suffer
> to an even greater degree.  And this is ignoring the point Jay & I
> were making about how the use of a single team means that there is
> always contention for feature approval, so much work gets cut right
> at the start even if maintainers of that area felt it was valuable
> and worth taking.

I continue to not understand how N non-overlapping teams make this any
better. You have to pay the integration cost somewhere. Right now we're
trying to pay it 1 patch at a time. This model means the integration
units get much bigger, and with less common ground.

Look at how much active work in crossing core teams we've had to do to
make any real progress on the neutron replacing nova-network front. And
how slow that process is. I think you'll see that hugely show u
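
To make the "library driver" idea above concrete, here is a purely
hypothetical sketch; hypervisor_lib and its connect/create_vm API are
invented for illustration, and only the nova.virt.driver.ComputeDriver base
class and the Juno-era spawn signature are real:

import hypervisor_lib  # hypothetical external library, not a real package

from nova.virt import driver


class LibraryDriver(driver.ComputeDriver):
    """Thin shim: owns the Nova-facing interface and the Nova-specific
    orchestration, and delegates the low-level hypervisor operations to
    an external, independently released library."""

    def __init__(self, virtapi):
        super(LibraryDriver, self).__init__(virtapi)
        self.lib = hypervisor_lib.connect()

    def spawn(self, context, instance, image_meta, injected_files,
              admin_password, network_info=None, block_device_info=None):
        # Nova-specific concerns (config drive, cinder orchestration)
        # would stay here in the shim; the library only knows how to
        # "create a vm, attach a NIC, etc."
        self.lib.create_vm(name=instance.uuid,
                           image=image_meta,
                           nics=network_info)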

Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-11 Thread Eoghan Glynn


> As you all know, there has recently been several very active discussions
> around how to improve assorted aspects of our development process. One idea
> that was brought up is to come up with a list of cycle goals/project
> priorities for Kilo [0].
> 
> To that end, I would like to propose an exercise as discussed in the TC
> meeting yesterday [1]:
> Have anyone interested (especially TC members) come up with a list of what
> they think the project wide Kilo cycle goals should be and post them on this
> thread ...

Here's my list of high-level cycle goals, for consideration ...


1. Address our usability debts

With some justification, we've been saddled with the perception
of not caring enough about the plight of users and operators. The
frustrating thing is that much of this is very fixable, *if* we take
time out from the headlong rush to add features. Achievable things
like documentation completeness, API consistency, CLI intuitiveness,
logging standardization, would all go a long way here.

These things are of course all not beyond the wit of man, but we
need to take the time out to actually do them. This may involve
a milestone, or even longer, where we accept that the rate of
feature addition will be deliberately slowed down. 


2. Address the drags on our development velocity

Despite the Trojan efforts of the QA team, the periodic brownouts
in the gate are having a serious impact on our velocity. Over the
past few cycles, we've seen the turnaround time for patch check/
verification spike up unacceptably long multiple times, mostly
around the milestones.

Whatever we can do to smooth out these spikes, whether it be
moving much of the Tempest coverage into the project trees, or
switching focus onto post-merge verification as suggested by
Sean on this thread, or even considering some more left-field
approaches such as staggered milestones, we need to grasp this
nettle as a matter of urgency.

Further back in the pipeline, the effort required to actually get
something shepherded through review is steadily growing. To the
point that we need to consider some radical approaches that
retain the best of our self-organizing model, while setting more
reasonable & reliable expectations for patch authors, and making
it more likely that narrow domain expertise is available to review
their contributions in a timely way. For the larger projects, this
is likely to mean something different (along the lines of splits
or sub-domains) than it does for the smaller projects.


3. Address the long-running "what's in and what's out" questions

The way some of the discussions about integration and incubation 
played out this cycle has made me sad. Not all of these discussions
have been fully supported by the facts on the ground IMO. And not
all of the issues that have been held up as justifications for
whatever course of exclusion or inclusion would IMO actually be
solved in that way.

I think we need to move the discussion around a new concept of
layering, or redefining what it means to be "in the tent", to a
more constructive and collaborative place than heretofore.


4. Address the fuzziness in cross-service interactions

In a semi-organic way, we've gone and built ourselves a big ol'
service-oriented architecture. But without necessarily always
following the strong contracts, loose coupling, discoverability,
and autonomy that a SOA approach implies.

We need to take the time to go back and pay down some of the debt
that has accreted over multiple cycles around these
cross-service interactions. The most pressing of these would
include finally biting the bullet on the oft-proposed but never
delivered-upon notion of stabilizing notifications behind a
well-defined contract. Also, the more recently advocated notions
of moving away from coarse-grained versioning of the inter-service
APIs, and supporting better introspection and discovery of
capabilities.

> by end of day Wednesday, September 10th.

Oh, yeah, and impose fewer arbitrary deadlines ;)

Cheers,
Eoghan

> After which time we can
> begin discussing the results.
> The goal of this exercise is to help us see if our individual world views
> align with the greater community, and to get the ball rolling on a larger
> discussion of where as a project we should be focusing more time.
> 
> 
> best,
> Joe Gordon
> 
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html
> [1]
> http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-02-20.04.log.html
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

