Re: [openstack-dev] Disaster Recovery for OpenStack - call for stakeholder - discussion reminder

2014-03-20 Thread Deepak Shetty
Hi List,
I was looking at the etherpad and the March 19 notes, and have a few questions:

1) How is the "DR middleware" (depicted in Ron's YouTube video) different
from the "replication agent" (noted in the March 19 etherpad notes)? Are
they the same, and if not, how/why do they differ?

2) Maybe a dumb question, but still: why do we need to worry about syncing
metadata separately? If all the storage used across OpenStack services (and
in a typical case it might be just one backend, say GlusterFS) is being
replicated during DR, wouldn't the metadata be replicated too? Why do we
need to treat it as a separate entity?

thanx,
deepak



On Wed, Mar 19, 2014 at 2:11 PM, Ronen Kat  wrote:

> For those who are interested we will discuss the disaster recovery
> use-cases and how to proceed toward the Juno summit on March 19 at 17:00
> UTC (invitation below)
>
>
>
> Call-in:
> *https://www.teleconference.att.com/servlet/glbAccess?process=1&accessCode=6406941&accessNumber=1809417783#C2*
> Passcode: 6406941
>
> Etherpad:
> https://etherpad.openstack.org/p/juno-disaster-recovery-call-for-stakeholders
> Wiki: https://wiki.openstack.org/wiki/DisasterRecovery
>
> Regards,
> __
> Ronen I. Kat, PhD
> Storage Research
> *IBM Research - Haifa*
> Phone: +972.3.7689493
> Email: ronen...@il.ibm.com
>
>
>
>
> From:"Luohao (brian)" 
> To:"OpenStack Development Mailing List (not for usage questions)"
> ,
> Date:14/03/2014 03:59 AM
> Subject:Re: [openstack-dev] Disaster Recovery for OpenStack -
> call for stakeholder
> --
>
>
>
> 1.  fsfreeze with vss has been added to qemu upstream, see
> http://lists.gnu.org/archive/html/qemu-devel/2013-02/msg01963.html for
> usage.
> 2.  libvirt allows a client to send any commands to qemu-ga, see
> http://wiki.libvirt.org/page/Qemu_guest_agent
> 3.  linux fsfreeze is not equivalent to windows fsfreeze+vss. Linux
> fsfreeze offers fs consistency only, while windows vss allows agents like
> sqlserver to register their plugins to flush their cache to disk when a
> snapshot occurs.
> 4.  my understanding is xenserver does not support fsfreeze+vss now,
> because xenserver normally does not use block backend in qemu.
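[To make points 1-3 above concrete, here is a minimal, hedged sketch of triggering a guest filesystem freeze/thaw from a management host. It assumes the libvirt-python bindings expose virDomainFSFreeze/FSThaw as fsFreeze()/fsThaw(), that the guest runs qemu-guest-agent, and that the domain name is hypothetical:]

    # Illustrative only: quiesce a guest's filesystems via libvirt + qemu-ga.
    # On Windows guests the agent coordinates with VSS writers; on Linux it
    # only freezes the filesystems (point 3 above).
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('my-guest')   # hypothetical domain name

    dom.fsFreeze()                        # freeze all mounted guest filesystems
    try:
        # ... take the storage-level snapshot here ...
        pass
    finally:
        dom.fsThaw()                      # always thaw, even if the snapshot fails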
>
> -Original Message-
> From: Bruce Montague 
> [mailto:bruce_monta...@symantec.com]
>
> Sent: Thursday, March 13, 2014 10:35 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for
> stakeholder
>
> Hi, about OpenStack and VSS. Does anyone have experience with the qemu
> project's implementation of VSS support? They appear to have a within-guest
> agent, qemu-ga, that perhaps can work as a VSS requestor. Does it also work
> with KVM? Does qemu-ga work with libvirt (can VSS quiesce be triggered via
> libvirt)? I think there was an effort for qemu-ga to use fsfreeze as an
> equivalent to VSS on Linux systems, was that done?  If so, could an
> OpenStack API provide a generic quiesce request that would then get passed
> to libvirt? (Also, the XenServer VSS support seems different from
> qemu/KVM's; is this true? Can it also be accessed through libvirt?)
>
> Thanks,
>
> -bruce
>
> -Original Message-
> From: Alessandro Pilotti 
> [mailto:apilo...@cloudbasesolutions.com
> ]
> Sent: Thursday, March 13, 2014 6:49 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for
> stakeholder
>
> Those use cases are very important for enterprise requirements,
> but there's an important missing piece in the current OpenStack APIs:
> support for application consistent backups via Volume Shadow Copy (or other
> solutions) at the instance level, including differential / incremental
> backups.
>
> VSS can be seamlessly added to the Nova Hyper-V driver (it's included with
> the free Hyper-V Server) with e.g. vSphere and XenServer supporting it as
> well (quiescing) and with the option for third party vendors to add drivers
> for their solutions.
>
> A generic Nova backup / restore API supporting those features is quite
> straightforward to design. The main question at this stage is if the
> OpenStack community wants to support those use cases or not. Cinder
> backup/restore support [1] and volume replication [2] are surely a great
> starting point in this direction.
>
> Alessandro
>
> [1] https://review.openstack.org/#/c/69351/
> [2] https://review.openstack.org/#/c/64026/
>
>
> > On 12/mar/2014, at 20:45, "Bruce Montague" 
> wrote:
> >
> >
> > Hi, regarding the call to create a list of disaster recovery (DR) use
> cases (
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/028859.html), 
> the following list sketches some speculative OpenStack DR use cases.
> These use cases do not reflect any specific product behavior

Re: [openstack-dev] OpenStack/GSoC

2014-03-20 Thread Csubák Dániel
Hi Dims!

I have submitted my proposal to the Google GSoC site and I will upload it
to the OpenStack GSoC wiki soon.
Could you please review it for me?
Please inform me if anything else is needed.

Thank you,
Dániel Csubák (csuby)


2014-03-18 16:19 GMT+01:00 Davanum Srinivas :

> Dear Students,
>
> Student application deadline is on Friday, March 21 [1]
>
> Once you finish the application process on the Google GSoC site.
> Please reply back to this thread to confirm that all the materials are
> ready to review.
>
> thanks,
> dims
>
> [1] http://www.google-melange.com/gsoc/events/google/gsoc2014
>
> --
> Davanum Srinivas :: http://davanum.wordpress.com
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][PCI] one use case make the flavor/extra-info based solution to be right choice

2014-03-20 Thread yongli he
HI, all

With Juno approaching, the PCI passthrough discussion is open again: the
group-based versus the flavor/extra-information based solution. There is a
use case which the group-based solution cannot support well.

Please take it into consideration and choose the flavor/extra-information
based solution.


Problems with the group-based approach:

I: It exposes many details of the underlying grouping to the user, who then
carries the burden of dealing with them; and in an OpenStack system the
group names can get messy (refer to II).
--
II: The group-based solution cannot support even a simple use case:

A user wants a faster NIC from Intel to join a virtual network.

Suppose the tenant's physical network name is "phy1". Then the 'group'
style solution cannot meet this simple use case, because:

1) The group name must be 'phy1', otherwise Neutron cannot fill in the PCI
request; Neutron only has the physical network name to work with.
(This assumes "phy1" does not bother the user; if it did, the user would
see group names like "intel_phy1" or "cisco_v1_phy1".)

2) Because there is only one property in the PCI stats pool, the user loses
the chance to choose the version or model of the PCI device, so a simple
request like "intel-NIC" or "1G_NIC" cannot be expressed (see the sketch
below).
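[A purely hypothetical illustration of the contrast described in point 2 - this is not actual Nova or Neutron syntax, just a sketch of the two request shapes in Python:]

    # Group-based: the only selectable key is the group name, which has to
    # match the physical network name that Neutron knows about.
    group_request = {"group": "phy1", "count": 1}

    # Flavor/extra-info based: the request can combine the physical network
    # with additional device attributes such as vendor or link speed.
    extra_info_request = {
        "physical_network": "phy1",
        "vendor_id": "8086",     # e.g. an Intel NIC
        "link_speed": "10G",
        "count": 1,
    }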


Regards
Yongli He

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] API schema location

2014-03-20 Thread Christopher Yeoh
On Tue, 18 Mar 2014 19:52:27 +0100
"Koderer, Marc"  wrote:

> > Am I missing something or are these schemas being added now just a
> > subset of what is being used for negative testing? Why can't we
> > either add the extra negative test info around the new test
> > validation patches and get the double benefit. Or have the negative
> > test schemas just extend these new schemas being added?
> 
> Yes, the api_schema files should theoretically be a
> subset of the negative test schemas.
> But I don't think that extending them will be possible:
> 
> if you have a property definition like this:
> 
> "properties": {
>     "minRam": {"type": "integer"}
> }
> 
> how can you extend it to:
> 
> "properties": {
>     "minRam": {
>         "type": "integer",
>         "results": {
>             "gen_none": 400,
>             "gen_string": 400
>         }
>     }
> }
> 
> This is the reason why I am unsure how inheritance can solve
> something here.

I think this is an example of how we can do some sharing of schema
definitions when there is sufficient commonality to justify it.

Chris
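[A minimal sketch of what such sharing could look like in practice, assuming a simple deep-merge helper in Python rather than JSON Schema inheritance; the names BASE_SCHEMA and NEGATIVE_EXTRA are illustrative only:]

    import copy

    BASE_SCHEMA = {
        "properties": {
            "minRam": {"type": "integer"},
        },
    }

    NEGATIVE_EXTRA = {
        "properties": {
            "minRam": {
                "results": {"gen_none": 400, "gen_string": 400},
            },
        },
    }

    def deep_merge(base, extra):
        # Recursively merge 'extra' into a copy of 'base'.
        merged = copy.deepcopy(base)
        for key, value in extra.items():
            if isinstance(value, dict) and isinstance(merged.get(key), dict):
                merged[key] = deep_merge(merged[key], value)
            else:
                merged[key] = value
        return merged

    # The negative-test schema keeps both "type" and "results" for minRam.
    negative_schema = deep_merge(BASE_SCHEMA, NEGATIVE_EXTRA)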

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Mark McLoughlin
On Thu, 2014-03-20 at 01:28 +, Joshua Harlow wrote:
> Proxying from yahoo's open source director (since he wasn't initially
> subscribed to this list, afaik he now is) on his behalf.
> 
> From Gil Yehuda (Yahoo’s Open Source director).
> 
> I would urge you to avoid creating a dependency between Openstack code
> and any AGPL project, including MongoDB. MongoDB is licensed in a very
> strange manner that is prone to creating unintended licensing mistakes
> (a lawyer’s dream). Indeed, MongoDB itself presents Apache licensed
> drivers – and thus technically, users of those drivers are not
> impacted by the AGPL terms. MongoDB Inc. is in the unique position to
> license their drivers this way (although they appear to violate the
> AGPL license) since MongoDB is not going to sue themselves for their
> own violation. However, others in the community who create MongoDB drivers
> are licensing those drivers under the Apache and MIT licenses – which
> does pose a problem.
> 
> Why? The AGPL considers 'Corresponding Source' to be defined as “the
> source code for shared libraries and dynamically linked subprograms
> that the work is specifically designed to require, such as by intimate
> data communication or control flow between those subprograms and other
> parts of the work." Database drivers *are* work that is designed to
> require by intimate data communication or control flow between those
> subprograms and other parts of the work. So anyone using MongoDB with
> any other driver now invites an unknown --  that one court case, one
> judge, can read the license under its plain meaning and decide that
> AGPL terms apply as stated. We have no way to know how far they apply
> since this license has not been tested in court yet.
> Despite all the FAQs MongoDB puts on their site indicating they don't
> really mean to assert the license terms, normally when you provide a
> license, you mean those terms. If they did not mean those terms, they
> would not use this license. I hope they intended to do something good
> (to get contributions back without impacting applications using their
> database) but, even good intentions have unintended consequences.
> Companies with deep enough pockets to be lawsuit targets, and
> companies who want to be good open source citizens face the problem
> that using MongoDB anywhere invites the future risk of legal
> catastrophe. A simple development change in an open source project can
> change the economics drastically. This is simply unsafe and unwise.
> 
> OpenStack's ecosystem is fueled by the interests of many commercial
> ventures who wish to cooperate in the open source manner, but then
> leverage commercial opportunities they hope to create. I suggest that
> using MongoDB anywhere in this project will result in a loss of
> opportunity -- real or perceived, that would outweigh the benefits
> MongoDB itself provides.
> 
> tl;dr version: If you want to use MongoDB in your company, that's your
> call. Please don't turn anyone who uses OpenStack components into
> unsuspecting MongoDB users. Instead, decouple the database from the
> project. It's not worth the legal risk, nor the impact on the
> "Apache-ness" of this project.

Thanks for that, Josh and Gil.

Rather than cross-posting, I think this MongoDB/AGPLv3 discussion should
continue on the legal-discuss mailing list:

  http://lists.openstack.org/pipermail/legal-discuss/2014-March/thread.html#174

Bear in mind that we (OpenStack, as a project and community) need to
judge whether this is a credible concern or not. If some users said they
were only willing to deploy Apache licensed code in their organization,
we would dismiss that notion pretty quickly. Is this AGPLv3 concern
sufficiently credible that OpenStack needs to take it into account when
making important decisions? That's what I'm hoping to get to in the
legal-discuss thread.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Alternating meeting time for more TZ friendliness

2014-03-20 Thread jang
On Wed, 19 Mar 2014, Sullivan, Jon Paul wrote:

> > From: James Slagle [mailto:james.sla...@gmail.com]
> > Sent: 18 March 2014 19:58
> > Subject: [openstack-dev] [TripleO] Alternating meeting time for more TZ
> > friendliness
> > 
> > Our current meeting time is Tuesdays at 19:00 UTC.  I think this works
> > ok for most folks in and around North America.
> > 
> > It was proposed during today's meeting to see if there is interest is an
> > alternating meeting time every other week so that we can be a bit more
> > friendly to those folks that currently can't attend.
> > If that interests you, speak up :).
> 
> Speaking up! :D

Also interested.

> Tuesdays at 14:00 UTC on #openstack-meeting-alt is available.

Relatively speaking, that's actually sociable.

-- 
Update your address books: j...@ioctl.org  http://ioctl.org/jan/
perl -e 's?ck?t??print:perl==pants if $_="Just Another Perl Hacker\n"'

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Monty Taylor

On 03/20/2014 01:30 AM, Radcliffe, Mark wrote:

The problem with AGPL is that the scope is very uncertain and the
determination of the consequences is very fact intensive. I was the
chair of the User Committee in developing the GPLv3 and I am therefore
quite familiar with the legal issues. The incorporation of AGPLv3
code into the OpenStack Project is a significant decision and should not
be made without input from the Foundation. At a minimum, the
inclusion of AGPLv3 code means that the OpenStack Project is no longer
solely an Apache v2 licensed project because AGPLv3 code cannot be
licensed under the Apache v2 License. Moreover, the inclusion of such
code is inconsistent with the current CLA provisions.

This code can be included but it is an important decision that should
be made carefully.


I agree - but in this case, I think that we're not talking about 
including AGPL code in OpenStack as much as we are talking about using 
an Apache2 driver that would talk to a server that is AGPL ... if the 
deployer chooses to install the AGPL software. I don't think we're 
suggesting that downloading or installing openstack itself would involve 
downloading or installing AGPL code.



-Original Message-
From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: Wednesday, March 19, 2014 2:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: legal-disc...@lists.openstack.org
Subject: Re: [legal-discuss] [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

Its my understanding that the only case the A in the AGPL would kick
in is if the cloud provider made a change to MongoDB and exposed the
MongoDB instance to users. Then the users would have to be able to
download the changed code. Since Marconi's in front, the user is
Marconi, and wouldn't ever want to download the source. As far as I
can tell, in this use case, the AGPL'ed MongoDB is not really any
different from the GPL'ed MySQL in footprint here. MySQL is
acceptable, so why isn't MongoDB?

It would be good to get legal's official take on this. It would be a
shame to make major architectural decisions based on license
assumptions that turn out not to be true. I'm cc-ing them.

Thanks, Kevin

From: Chris Friesen [chris.frie...@windriver.com]
Sent: Wednesday, March 19, 2014 2:24 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

On 03/19/2014 02:24 PM, Fox, Kevin M wrote:

Can someone please give more detail into why MongoDB being AGPL is
a problem? The drivers that Marconi uses are Apache2 licensed,
MongoDB is separated by the network stack and MongoDB is not
exposed to the Marconi users so I don't think the 'A' part of the
GPL really kicks in at all since the MongoDB "user" is the cloud
provider, not the cloud end user?


Even if MongoDB was exposed to end-users, would that be a problem?

Obviously the source to MongoDB would need to be made available
(presumably it already is) but does the AGPL licence "contaminate"
the Marconi stuff?  I would have thought that would fall under "mere
aggregation".

Chris

___ OpenStack-dev mailing
list OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___ legal-discuss mailing
list legal-disc...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/legal-discuss


___ legal-discuss mailing
list legal-disc...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/legal-discuss




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat]Heat use as a standalone component for Cloud Managment over multi IAAS

2014-03-20 Thread Thomas Spatzier
Just out of curiosity: what is the purpose of project "warm"? From the wiki
page and the sample it looks pretty much like what Heat is doing.
And "warm" is almost "HOT" so could you imagine your use cases can just be
addressed by Heat using HOT templates?

Regards,
Thomas

sahid  wrote on 18/03/2014 12:56:47:

> From: sahid 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 18/03/2014 12:59
> Subject: Re: [openstack-dev] [Heat]Heat use as a standalone
> component for Cloud Managment over multi IAAS
>
> Sorry for the late of this response,
>
> I'm currently working on a project called Warm.
> https://wiki.openstack.org/wiki/Warm
>
> It is used as a standalone client and tries to deploy small OpenStack
> environments from YAML templates. You can find some samples here:
> https://github.com/sahid/warm-templates
>
> s.
>
> - Original Message -
> From: "Charles Walker" 
> To: openstack-dev@lists.openstack.org
> Sent: Wednesday, February 26, 2014 2:47:44 PM
> Subject: [openstack-dev] [Heat]Heat use as a standalone component
> for Cloud Managment over multi IAAS
>
> Hi,
>
>
> I am trying to deploy a proprietary application made by my company on the
> cloud. The prerequisite for this is to have an IAAS, which can be either a
> public cloud or a private cloud (OpenStack is an option for a private
> IAAS).
>
>
> The first prototype I made was based on a homemade Python orchestrator and
> Apache libcloud to interact with the IAAS (AWS, Rackspace and GCE).
>
> The orchestrator part is Python code reading a template file which
> contains the info needed to deploy my application. This template file
> indicates the number of VMs and the scripts associated with each VM type to
> install it.
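[A hypothetical sketch of the kind of template file described above, expressed as a Python structure purely for illustration - this is not the actual format used:]

    # Illustrative only: a homemade deployment template listing, per VM type,
    # how many instances to boot and which install script to run on each.
    TEMPLATE = {
        "application": "my-app",
        "vms": [
            {"type": "web", "count": 2, "install_script": "install_web.sh"},
            {"type": "db",  "count": 1, "install_script": "install_db.sh"},
        ],
    }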
>
>
> Now I was trying to have a look on existing open source tool to do the
> orchestration part. I find JUJU (https://juju.ubuntu.com/) or HEAT (
> https://wiki.openstack.org/wiki/Heat).
>
> I am investigating deeper HEAT and also had a look on
> https://wiki.openstack.org/wiki/Heat/DSL which mentioned:
>
> *"Cloud Service Provider* - A service entity offering hosted cloud
services
> on OpenStack or another cloud technology. Also known as a Vendor."
>
>
> I think HEAT in its current version will not match my requirements, but I
> have the feeling that it is going to evolve and could cover my needs.
>
>
> I would like to know if it would be possible to use HEAT as a standalone
> component in the future (without Nova and other OpenStack modules)? The goal
> would be to deploy an application from a template file on multiple cloud
> service (like AWS, GCE).
>
>
> Any feedback from people working on HEAT could help me.
>
>
> Thanks, Charles.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-20 Thread Flavio Percoco

On 19/03/14 11:00 -0400, Doug Hellmann wrote:




On Wed, Mar 19, 2014 at 7:31 AM, Thierry Carrez  wrote:

   Kurt Griffiths wrote:
   > Kudos to Balaji for working so hard on this. I really appreciate his
   candid feedback on both frameworks.

   Indeed, that analysis is very much appreciated.

   From the Technical Committee perspective, we put a high weight on a
   factor that was not included in the report results: consistency and
   convergence between projects we commonly release in an integrated manner
   every 6 months. There was historically a lot of deviation, but as we add
   more projects that deviation is becoming more costly. We want developers
   to be able to jump from one project to another easily, and we want
   convergence from an operators perspective. 



   Individual projects are obviously allowed to pick the best tool in their
   toolbox. But the TC may also decide to let projects live out of the
   "integrated release" if we feel they would add too much divergence in.


As Thierry points out, an important aspect of being in the integrated release
is being aligned with the rest of the community. The evaluation gives
"community" considerations the lowest weight among the criteria considered.
Does that ranking reflect the opinion of the entire Marconi team? If so, what
benefits do you see to being integrated?


Doug, I'm sure your intentions are good but let's try to avoid this
kind of question. I don't think the Marconi team has shown any kind
of closed development since the project was kicked off. We've
worked hard to integrate with OpenStack's standards, the overall
community and with other projects as well. We've had sessions for
cross-project integrations, we've had discussions with other teams
that led us to improve Marconi and ease our collaboration with the
whole community. Hard work has been put on integrating with our CI
system. The team has participated in upstream discussions. The team
has also strictly followed the upstream release cycle - even before
Marconi was incubated - and finally the team is here discussing pros
and cons of the framework that has been chosen. Not to mention the
discussions we had about Falcon and Pecan that you pointed out below.

The evaluation of Pecan vs Falcon is purely technical - none of us is
trying to hide that - the reason being that our choice of falcon was
purely technical. There has never been in our team a will of going
against the community and after these last months of hard work,
completely focused on the integration requirements, I think the team's
intentions should be more than clear.

Just a side note, I'm very proud of what the team has done so far and
how the community around Marconi has grown 'til now and I'm also proud
of the work the team has done to integrate with the OpenStack
community.

Back to the discussion, I understand the choice of using Falcon
instead of Pecan has an impact in the community that was not
considered in the evaluation and I'm very sorry for that. However,
lets remember that this evaluation was done by someone completely
*new* to the 3 communities (OpenStack's, Falcon's and Pecan's) because
it was meant to be purely technical and not biased.

In order to fulfill this requirement in the evaluation, folks from the
OpenStack community need to chime in - as we're doing now - and help
fill in the pros and cons of having one framework to rule OpenStack, a
set of frameworks, or, even better, evaluate these needs on a
case-by-case basis.

Again, I may be wrong but I haven't seen the needs, pros and cons of
standardizing the libraries across projects written anywhere. I'm sure
there are pros and cons community-wise and I think this case of study
(Pecan and / or Falcon) will be useful for future cases like this one.

That said and considering that this matters to the whole community,
I'd like to invite folks to help us create a list of pros and cons
of having just one common library as opposed to having a limited set
of supported libraries being used across projects. Whatever comes out
from this discussion could be used as a base to evaluate future cases
like this and it'll obviously be integrated into the evaluation being
discussed.

If this was already discussed in emails, meetings and / or there's
something written somewhere, I'd really appreciate a link pointing to
such resource that could be used to complete the evaluation.

[snip]


   > After reviewing the report below, I would recommend that Marconi
   > continue using Falcon for the v1.1 API and then re-evaluate Pecan for
   > v2.0 or possibly look at using swob.

   The report (and your email below) makes a compelling argument that
   Falcon is a better match for Marconi's needs (or for a data-plane API)
   than Pecan currently is. My question would be, can Pecan be improved to
   also cover Marconi's use case ? Could we have the best of both worlds
   (an appropriate tool *and* convergence) ?


We had several conversations with Kurt and Flavio in Hong Kong about

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Miguel Angel Ajo



On 03/19/2014 10:54 PM, Joe Gordon wrote:




On Wed, Mar 19, 2014 at 9:25 AM, Miguel Angel Ajo <majop...@redhat.com> wrote:



An update on the changes required to have a
py->C++ compiled rootwrap as a mitigation PoC for Havana/Icehouse.


https://github.com/mangelajo/shedskin.rootwrap/commit/e4167a6491dfbc71e2d0f6e28ba93bc8a1dd66c0



The current translation output is included.

It looks doable (I've killed almost 80% of the translation problems),
but there are two big stumbling blocks:

1) As Joe said, no support for Subprocess (we're interested in popen),
I'm using a dummy os.system() for the test.

2) No logging support.

I'm not sure how complicated it would be to get those modules
implemented for shedskin.


Before sorting out whether we can get those working under shedskin, any
preliminary performance numbers from neutron when using this?



Sure, totally agree.

I'm trying to put up a conversion without 1 & 2, to run a benchmark on
it, and then I'll post the results.

I suppose, we couldn't use it in neutron itself without Popen support
(not sure) but at least I could get an estimate out of the previous
numbers and the new ones.

Best,
Miguel Ángel.



On 03/18/2014 09:14 AM, Miguel Angel Ajo wrote:

Hi Joe, thank you very much for the positive feedback,

 I plan to spend a day during this week on the
shedskin-compatibility
for rootwrap (I'll branch it, and tune/cut down as necessary) to
make
it compile under shedskin [1] : nothing done yet.

 It's a short-term alternative until we can have a rootwrap
agent,
together with its integration in neutron (for Juno). As for the
compiled rootwrap, if it works, and if it does look good
(security wise)
then we'd have a solution for Icehouse/Havana.

help in [1] is really  welcome ;-) I'm available in
#openstack-neutron
as 'ajo'.

 Best regards,
Miguel Ángel.

[1] https://github.com/mangelajo/shedskin.rootwrap


On 03/18/2014 12:48 AM, Joe Gordon wrote:




On Tue, Mar 11, 2014 at 1:46 AM, Miguel Angel Ajo Pelayo <mangel...@redhat.com> wrote:


 I have included on the etherpad the option to write a sudo
 plugin (or several), specific to OpenStack.


 And this is a test with shedskin, I suppose that in
more complicated
 dependency scenarios it should perform better.

 [majopela@redcylon tmp]$ cat <<EOF > test.py
  > import sys
  > print "hello world"
  > sys.exit(0)
  > EOF

 [majopela@redcylon tmp]$ time python test.py
 hello world

 real0m0.016s
 user0m0.015s
 sys 0m0.001s



This looks very promising!

A few gotchas:

* Very limited library support
https://code.google.com/p/shedskin/wiki/docs#Library_Limitations

* no logging
* no six
* no subprocess

* no *args support
*

https://code.google.com/p/shedskin/wiki/docs#Python_Subset_Restrictions



that being said I did a quick comparison with great results:

$ cat tmp.sh
#!/usr/bin/env bash
echo $0 $@
ip a

$ time ./tmp.sh  foo bar> /dev/null

real0m0.009s
user0m0.003s
sys 0m0.006s



$ cat tmp.py
#!/usr/bin/env python
import os
import sys

print sys.argv
print os.system("ip a")

$ time ./tmp.py  foo bar > /dev/null

min:
real0m0.016s
user0m0.004s
sys 0m0.012s

max:
real0m0.038s
user0m0.016s
sys 0m0.020s



shedskin  tmp.py && make


$ time ./tmp  foo bar > /dev/null

real0m0.010s
user0m0.007s
sys 0m0.002s



Based on these results I think a deeper dive into making rootwrap
support shedskin is worthwhile.





 [majopela@redcylon tmp]$ shedskin test.py
 *** SHED SKIN Python-to-C++ Compiler 0.9.4 ***
 

Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-20 Thread Nadya Privalova
Hi all,
First of all, thanks for your suggestions!

To summarize the discussions here:
1. We are not going to install Mongo (because "it's wrong"?)
2. The idea of spawning several collectors is suspicious (btw there is a
patch that runs several collectors: https://review.openstack.org/#/c/79962/.)

Let's try to get back to original problem. All these solutions were
suggested to solve the problem of high load on Ceilometer. AFAIK, Tempest's
goal is to test projects` interactions, not performance testing. The
perfect tempest's behaviour would be "start ceilometer only for Ceilometer
tests". From one hand it will allow not to load db during other tests, from
the other hand "projects` interactions" will be tested because during
Ceilometer test we create volums, images and instances. But I'm afraid that
this scenario is not possible technically.
There is one more idea. Make Ceilometer able to monitor not all messages
but filtered set of messages. But anyway this is a new feature and cannot
be added right now.

Tempest guys, if you have any thoughts about the first suggestion "start
ceilometer only for Ceilometer tests" please share.

Thanks,
Nadya




On Thu, Mar 20, 2014 at 3:23 AM, Sean Dague  wrote:

> On 03/19/2014 06:09 PM, Doug Hellmann wrote:
> > The ceilometer collector is meant to scale horizontally. Have you tried
> > configuring the test environment to run more than one copy, to process
> > the notifications more quickly?
>
> The ceilometer collector is already one of the top running processes on
> the box -
>
> http://logs.openstack.org/82/81282/2/check/check-tempest-dsvm-full/693dc3b/logs/dstat.txt.gz
>
>
> Often consuming > 1/2 a core (25% == 1 core in that run, as can been
> seen when qemu boots and pegs one).
>
> So while we could spin up more collectors, I think it's unreasonable
> that the majority of our cpu has to be handed over to the metric
> collector to make it function responsively. I thought the design point
> was that this was low impact.
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-20 Thread Sean Dague
On 03/20/2014 05:49 AM, Nadya Privalova wrote:
> Hi all,
> First of all, thanks for your suggestions!
> 
> To summarize the discussions here:
> > 1. We are not going to install Mongo (because "it's wrong"?)

We are not going to install Mongo "not from base distribution", because
we don't do that for things that aren't python. Our assumption is
dependent services come from the base OS.

That being said, being an integrated project means you have to be able
to function, sanely, on an sqla backend, as that will always be part of
your gate.

> 2. Idea about spawning several collectors is suspicious (btw there is a
> patch that run several collectors:
> https://review.openstack.org/#/c/79962/ .)

Correct, given that the collector is already one of the most expensive
processes in a devstack run.

> Let's try to get back to original problem. All these solutions were
> suggested to solve the problem of high load on Ceilometer. AFAIK,
> Tempest's goal is to test projects` interactions, not performance
> testing. The perfect tempest's behaviour would be "start ceilometer only
> for Ceilometer tests". From one hand it will allow not to load db during
> other tests, from the other hand "projects` interactions" will be tested
> because during Ceilometer test we create volums, images and instances.
> But I'm afraid that this scenario is not possible technically.
> There is one more idea. Make Ceilometer able to monitor not all messages
> but filtered set of messages. But anyway this is a new feature and
> cannot be added right now.
> 
> Tempest guys, if you have any thoughts about first suggestion "start
> ceilometer only for Ceilometer tests" please share.

The point of the gate is that it's integrated and testing the
interaction between projects. Ceilometer can be tested on its own in
ceilometer unit tests, or by creating ceilometer functional tests that
only run on the ceilometer jobs.

While I agree that Tempest's job is not to test performance, we do have
to give some basic sanity checking here that the software is running in
some performance profile that we believe is basically usable.

Based on the latest dstat results, I think that's a dubious assessment.
The answer on the collector side has to be something other than
horizontal scaling. Because we're talking about the collector being the
3rd highest utilized process on the box right now (we should write a
dstat plugin to give us cumulative data, just haven't gotten there yet).

So right now, I think performance analysis for ceilometer on sqla is
important, really important. Not just horizontal scaling, but actual
performance profiling.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Blueprint Review request for OpenStack Neutron Port Templates Network Policy.

2014-03-20 Thread Kumar, Vinod (HP Networking)
Hi All,

My colleagues (CCed) and I invite the community to review the OpenStack
Neutron Port Templates Network Policy blueprint submitted recently.
Please review the same and feel free to suggest, comment and ask questions.

Link to Open Stack blueprint page
https://blueprints.launchpad.net/neutron/+spec/neutron-port-template-policy

Thanks
Vinod
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [QA] Slow Ceilometer resource_list CLI command

2014-03-20 Thread Sampath Priyankara
Hi,

  Re-architecting the schema might fix most of the performance issues of
resource_list.
  We also need to do some work to improve the performance of meter-list.
  Will Gordon's blueprint cover both aspects?
  https://blueprints.launchpad.net/ceilometer/+spec/big-data-sql
  
Sampath
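[For context on the pagination gap discussed in the quoted thread below, here is a minimal, hypothetical sketch of marker/limit pagination for a resource listing in SQLAlchemy. The Resource model and column names are assumptions for illustration, not the actual Ceilometer schema:]

    from sqlalchemy import Column, String, create_engine
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class Resource(Base):
        __tablename__ = 'resource'
        id = Column(String(255), primary_key=True)

    def list_resources(session, marker=None, limit=100):
        # Return at most 'limit' resources, starting after 'marker'.
        query = session.query(Resource).order_by(Resource.id)
        if marker is not None:
            query = query.filter(Resource.id > marker)
        return query.limit(limit).all()

    # Example: page through all resources 100 at a time.
    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    with Session(engine) as session:
        page = list_resources(session)
        while page:
            # process the page here ...
            page = list_resources(session, marker=page[-1].id)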

-Original Message-
From: Neal, Phil [mailto:phil.n...@hp.com] 
Sent: Wednesday, March 19, 2014 12:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] [QA] Slow Ceilometer resource_list
CLI command


> -Original Message-
> From: Tim Bell [mailto:tim.b...@cern.ch]
> Sent: Monday, March 17, 2014 2:04 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Ceilometer] [QA] Slow Ceilometer 
> resource_list CLI command
> 
> 
> At CERN, we've had similar issues when enabling telemetry. Our 
> resource-list times out after 10 minutes when the proxies for HA 
> assume there is no answer coming back. Keystone instances per cell 
> have helped the situation a little so we can collect the data but 
> there was a significant increase in load on the API endpoints.
> 
> I feel that some reference for production scale validation would be 
> beneficial as part of TC approval to leave incubation in case there 
> are issues such as this to be addressed.
> 
> Tim
> 
> > -Original Message-
> > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > Sent: 17 March 2014 20:25
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [Ceilometer] [QA] Slow Ceilometer
> resource_list CLI command
> >
> ...
> >
> > Yep. At AT&T, we had to disable calls to GET /resources without any 
> > filters
> on it. The call would return hundreds of thousands of
> > records, all being JSON-ified at the Ceilometer API endpoint, and 
> > the result
> would take minutes to return. There was no default limit
> > on the query, which meant every single records in the database was
> returned, and on even a semi-busy system, that meant
> > horrendous performance.
> >
> > Besides the problem that the SQLAlchemy driver doesn't yet support
> pagination [1], the main problem with the get_resources() call is
> > the underlying databases schema for the Sample model is wacky, and
> forces the use of a dependent subquery in the WHERE clause
> > [2] which completely kills performance of the query to get resources.
> >
> > [1]
> >
> https://github.com/openstack/ceilometer/blob/master/ceilometer/storage
> /
> impl_sqlalchemy.py#L436
> > [2]
> >
> https://github.com/openstack/ceilometer/blob/master/ceilometer/storage
> /
> impl_sqlalchemy.py#L503
> >
> > > The cli tests are supposed to be quick read-only sanity checks of 
> > > the cli functionality and really shouldn't ever be on the list of 
> > > slowest tests for a gate run.
> >
> > Oh, the test is readonly all-right. ;) It's just that it's reading 
> > hundreds of
> thousands of records.
> >
> > >  I think there was possibly a performance regression recently in 
> > > ceilometer because from I can tell this test used to normally take ~60
sec.
> > > (which honestly is probably too slow for a cli test too) but it is 
> > > currently much slower than that.
> > >
> > > From logstash it seems there are still some cases when the 
> > > resource list takes as long to execute as it used to, but the 
> > > majority of runs take a
> long time:
> > > http://goo.gl/smJPB9
> > >
> > > In the short term I've pushed out a patch that will remove this 
> > > test from gate
> > > runs: https://review.openstack.org/#/c/81036 But, I thought it 
> > > would be good to bring this up on the ML to try and figure out 
> > > what changed or why this is so slow.
> >
> > I agree with removing the test from the gate in the short term. 
> > Medium to
> long term, the root causes of the problem (that GET
> > /resources has no support for pagination on the query, there is no 
> > default
> for limiting results based on a since timestamp, and that
> > the underlying database schema is non-optimal) should be addressed.

Gordon has introduced a blueprint
https://blueprints.launchpad.net/ceilometer/+spec/big-data-sql with some
fixes for individual queries but +1 to the point of looking at
re-architecting the schema as an approach to fixing performance. We've also
seen some gains here at HP using batch writes as well but have temporarily
tabled that work in favor of getting a better-performing schema in place.
- Phil

> >
> > Best,
> > -jay
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@list

Re: [openstack-dev] [nova] Backwards incompatible API changes

2014-03-20 Thread Thierry Carrez
Christopher Yeoh wrote:
> The patch was merged in October (just after Icehouse opened) and so has
> been used in clouds that do CD for quite a while. After some discussion
> on IRC I think we'll end up having to leave this backwards incompatible
> change in there - given there are most likely users who now rely on
> both sets of behaviour there is no good way to get out of this
> situation. I've added a note to the Icehouse release notes.

I still think reverting before release is an option we should consider.
My point is, yes we broke it back in October for people doing CD (and
they might by now have gotten used to it), if we let this to release
we'll then break it for everyone else.

We put a high emphasis into guaranteeing backward compatibility between
releases. I think there would be more damage done if we let this sail to
release, compared to the damage of reverting CD followers to pre-October
behavior...

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Review request: A blueprint on OpenStack Neutron Port Templates Framework

2014-03-20 Thread Kamat, Maruti Haridas
Hi All,

My colleagues (CCed) and I invite the community to review the OpenStack
Neutron Port Templates Framework blueprint submitted recently.
Please review the same and feel free to suggest, comment and ask questions.

Link to Open Stack blueprint page
https://blueprints.launchpad.net/neutron/+spec/neutron-port-template-framework

Thanks
Maruti



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Thierry Carrez
Monty Taylor wrote:
> On 03/20/2014 01:30 AM, Radcliffe, Mark wrote:
>> The problem with AGPL is that the scope is very uncertain and the
>> determination of the  consequences are very fact intensive. I was the
>> chair of the User Committee in developing the GPLv3 and I am therefor
>> quite familiar with the legal issues.  The incorporation of AGPLv3
>> code Into OpenStack Project  is a significant decision and should not
>> be made without input from the Foundation. At a minimum, the
>> inclusion of APLv3 code means that the OpenStack Project is no longer
>> solely an Apache v2 licensed project because AGPLv3 code cannot be
>> licensed under Apache v. 2 License.  Moreover, the inclusion of such
>> code is inconsistent with the current CLA provisions.
>>
>> This code can be included but it is an important decision that should
>> be made carefully.
> 
> I agree - but in this case, I think that we're not talking about
> including AGPL code in OpenStack as much as we are talking about using
> an Apache2 driver that would talk to a server that is AGPL ... if the
> deployer chooses to install the AGPL software. I don't think we're
> suggesting that downloading or installing openstack itself would involve
> downloading or installing AGPL code.

Yes, the issue here is more... a large number of people want to stay
away from AGPL. Should the TC consider adding to the OpenStack
integrated release a component that requires AGPL software to be
installed alongside it ? It's not really a legal issue (hence me
stopping the legal-issues cross-posting).

This was identified early on as a concern with Marconi and the SQLA
support was added to counter that concern. The question then becomes:
how usable is this SQLA option actually? If it's sluggish, unusable in
production or if it doesn't fully support the proposed Marconi API, then
I think we still have that concern.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-20 Thread Nadya Privalova
Sean, thanks for the analysis.
JFYI, I did some initial profiling; it's described here:
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg19030.html.


On Thu, Mar 20, 2014 at 2:15 PM, Sean Dague  wrote:

> On 03/20/2014 05:49 AM, Nadya Privalova wrote:
> > Hi all,
> > First of all, thanks for your suggestions!
> >
> > To summarize the discussions here:
> > 1. We are not going to install Mongo (because "it's wrong"?)
>
> We are not going to install Mongo "not from base distribution", because
> we don't do that for things that aren't python. Our assumption is
> dependent services come from the base OS.
>
> That being said, being an integrated project means you have to be able
> to function, sanely, on an sqla backend, as that will always be part of
> your gate.
>
> > 2. Idea about spawning several collectors is suspicious (btw there is a
> > patch that run several collectors:
> > https://review.openstack.org/#/c/79962/ .)
>
> Correct, given that the collector is already one of the most expensive
> processes in a devstack run.
>
> > Let's try to get back to original problem. All these solutions were
> > suggested to solve the problem of high load on Ceilometer. AFAIK,
> > Tempest's goal is to test projects` interactions, not performance
> > testing. The perfect tempest's behaviour would be "start ceilometer only
> > for Ceilometer tests". From one hand it will allow not to load db during
> > other tests, from the other hand "projects` interactions" will be tested
> > because during Ceilometer test we create volums, images and instances.
> > But I'm afraid that this scenario is not possible technically.
> > There is one more idea. Make Ceilometer able to monitor not all messages
> > but filtered set of messages. But anyway this is a new feature and
> > cannot be added right now.
> >
> > Tempest guys, if you have any thoughts about first suggestion "start
> > ceilometer only for Ceilometer tests" please share.
>
> The point of the gate is that it's integrated and testing the
> interaction between projects. Ceilometer can be tested on it's own in
> ceilometer unit tests, or by creating ceilometer functional tests that
> only run on the ceilometer jobs.
>
> While I agree that Tempest's job is not to test performance, we do have
> to give some basic sanity checking here that the software is running in
> some performance profile that we believe is base usable.
>
> Based on the latest dstat results, I think that's a dubious assessment.
> The answer on the collector side has to be something other than
> horizontal scaling. Because we're talking about the collector being the
> 3rd highest utilized process on the box right now (we should write a
> dstat plugin to give us cumulative data, just haven't gotten there yet).
>
> So right now, I think performance analysis for ceilometer on sqla is
> important, really important. Not just horizontal scaling, but actual
> performance profiling.
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Review request: A blueprint on OpenStack Neutron audit logging enhancement

2014-03-20 Thread Kamat, Maruti Haridas
Hi All,

I would like you to go through a blueprint I submitted recently and provide
early comments, so that it covers all the use cases and anything else which
will improve the blueprint.

Link to Open Stack blueprint page
https://blueprints.launchpad.net/neutron/+spec/neutron-auditlogging

Please review the same and feel free to suggest, comment and ask questions.

Thanks
Maruti




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Monty Taylor

On 03/20/2014 05:31 AM, Miguel Angel Ajo wrote:



On 03/19/2014 10:54 PM, Joe Gordon wrote:




On Wed, Mar 19, 2014 at 9:25 AM, Miguel Angel Ajo <majop...@redhat.com> wrote:



An advance on the changes that it's requiring to have a
py->c++ compiled rootwrap as a mitigation POC for havana/icehouse.


https://github.com/mangelajo/shedskin.rootwrap/commit/e4167a6491dfbc71e2d0f6e28ba93bc8a1dd66c0





The current translation output is included.


The C++ code it's generating is inefficient and subject to some issues. 
I'd be a bit concerned about it and would want to make some fixes.


Also - I'm sure you know this, but it's labeled as experimental. As 
something that is going to run as root, I'd want to actually audit all 
of the code it produces. WHICH MEANS - we'd be auditing/examining C++ code. 
If we're going to do that, I'd rather just convert it to C++ by hand, 
audit that, and deal with it. Either way, we'll need a substantial 
amount of infra support - such as multi-platform compilation testing, 
valgrind, etc.


Not a blocker - but things to think about. If you're really serious 
about bringing C++ into the mix - let's just do it.


(ps. if we're going to do that, we could also just write a sudo plugin)


It looks like doable (almost killed 80% of the translation problems),
but there are two big stones:

1) As Joe said, no support for Subprocess (we're interested in
popen),
I'm using a dummy os.system() for the test.

2) No logging support.

I'm not sure on how complicated could be getting those modules
implemented for shedkin.


Before sorting out of we can get those working under shedskin, any
preliminary performance numbers from neutron when using this?



Sure, totally agree.

I'm trying to put up a conversion without 1 & 2, to run a benchmark on
it, and then I'll post the results.

I suppose, we couldn't use it in neutron itself without Popen support
(not sure) but at least I could get an estimate out of the previous
numbers and the new ones.

Best,
Miguel Ángel.



On 03/18/2014 09:14 AM, Miguel Angel Ajo wrote:

Hi Joe, thank you very much for the positive feedback,

 I plan to spend a day during this week on the
shedskin-compatibility
for rootwrap (I'll branch it, and tune/cut down as necessary) to
make
it compile under shedskin [1] : nothing done yet.

 It's a short-term alternative until we can have a rootwrap
agent,
together with it's integration in neutron (for Juno). As, for the
compiled rootwrap, if it works, and if it does look good
(security wise)
then we'd have a solution for Icehouse/Havana.

help in [1] is really  welcome ;-) I'm available in
#openstack-neutron
as 'ajo'.

 Best regards,
Miguel Ángel.

[1] https://github.com/mangelajo/shedskin.rootwrap


On 03/18/2014 12:48 AM, Joe Gordon wrote:




On Tue, Mar 11, 2014 at 1:46 AM, Miguel Angel Ajo Pelayo <mangel...@redhat.com> wrote:


 I have included on the etherpad, the option to write
a sudo
 plugin (or several), specific for openstack.


 And this is a test with shedskin, I suppose that in
more complicated
 dependecy scenarios it should perform better.

  [majopela@redcylon tmp]$ cat <<EOF > test.py
   > import sys
   > print "hello world"
   > sys.exit(0)
   > EOF

 [majopela@redcylon tmp]$ time python test.py
 hello world

 real0m0.016s
 user0m0.015s
 sys 0m0.001s



This looks very promising!

A few gotchas:

* Very limited library support

https://code.google.com/p/shedskin/wiki/docs#Library_Limitations


* no logging
* no six
* no subprocess

* no *args support
*

https://code.google.com/p/shedskin/wiki/docs#Python_Subset_Restrictions




that being said I did a quick comparison with great results:

$ cat tmp.sh
#!/usr/bin/env bash
echo $0 $@
ip a

$ time ./tmp.sh  foo bar> /dev/null

real0m0.009s
user0m0.003s
sys 0m0.006s



$ cat tmp.py
#!/usr/bin/env python
import os
import sys

 

[openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-20 Thread Malini Kamalambal
Hello all,

I have been working on adding tests in Tempest for Marconi, for the last few 
months.
While there are many amazing people to work with, the process has been more 
difficult than I expected.

A couple of pain points and suggestions to make the process easier for myself & 
future contributors.

1. The QA requirements for a project to graduate needs details beyond the 
"Project must have a *basic* devstack-gate job set up"
2. The scope of Tempest needs clarification  - what tests should be in Tempest 
vs. in the individual projects? Or should they be in both tempest and the 
project?

See details below.

1. There is little documentation on graduation requirement from a QA 
perspective beyond 'Project must have a basic devstack-gate job set up'.

As a result, I hear different interpretations on what a basic devstack gate job 
is.
This topic was discussed in one of the QA meetings a few weeks back [1].
Based on the discussion there, having a basic job - such as one that will let 
us know 'if a keystone change broke marconi' - was good enough.
My efforts in getting Marconi meet graduation requirements w.r.t Tempest was 
based on the above discussion.

However, my conversations with the TC during Marconi's graduation review led 
me to believe that these requirements aren't yet formalized.
We were told that we needed to have more test coverage in tempest, & having 
them elsewhere (i.e. functional tests in the Marconi project itself) was not 
good enough.

I will never debate the value of having good test coverage - after all I define 
myself professionally as a QA ;)
I am proud of the unit and functional test suites & the test coverage we have 
in Marconi [2].
Marconi team is continuing its efforts in this direction.
We are looking forward to adding more tests in Tempest and making sure Marconi 
is in par with the community standards.

But what frustrates me is that the test requirements seem to evolve, catching  
new contributors by surprise.

It will really help to have these requirements documented in detail - answering 
at least the following questions,
a. What tests are needed to graduate - API, Scenario, CLI?
b. How much coverage is good enough to graduate?

That will make sure that contributors focus their time & energy in the right 
direction.
I am willing to lead the effort to document the QA-level graduation 
requirements for a project and help solidify them.

2. Clarify the scope of Tempest  - what tests should be in Tempest vs in the 
individual projects ?

It sounds like the scope of tempest is to make sure that,
a. Projects are functionally tested (AND)
b. Openstack components (a.k.a projects) do not have integration issues.

Assuming my understanding is correct, does it make sense to have the project 
specific functional tests in Tempest?
Troubleshooting failures related to project specific functionality requires 
deep understanding of the individual projects.
Isn't it better to leave it to the individual projects to make sure that they 
are functional?
That will help the contributors to Tempest spend their time on what only 
Tempest can do -i.e. identify integration issues.


My hope is that we can clearly outline these requirements as soon as possible.
Looking forward to hearing your thoughts!!

Best Regards,
Malini

[1] 
http://eavesdrop.openstack.org/meetings/qa/2014/qa.2014-02-13-17.00.log.html   
(See discussion starting with timestamp 17:51:57)
[2] https://github.com/openstack/marconi/tree/master/tests

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Review request: A blueprint on OpenStack Neutron Port Templates Framework

2014-03-20 Thread Sylvain Bauza
Hi,

As said here [1], please don't send review requests directly to the ML.


[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html


2014-03-20 12:08 GMT+01:00 Kamat, Maruti Haridas :

>  Hi All,
>
>  My fellow friends (CCed) and I invite the community to review the OpenStack
> Neutron Port Templates Framework blueprint submitted recently.
> Please review the same and feel free to suggest, comment and ask questions.
>
> Link to the OpenStack blueprint page
>
> *https://blueprints.launchpad.net/neutron/+spec/neutron-port-template-framework*
>
> Thanks
> Maruti
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Mark McLoughlin
On Wed, 2014-03-19 at 12:37 -0700, Devananda van der Veen wrote:
> Let me start by saying that I want there to be a constructive discussion
> around all this. I've done my best to keep my tone as non-snarky as I could
> while still clearly stating my concerns. I've also spent a few hours
> reviewing the current code and docs. Hopefully this contribution will be
> beneficial in helping the discussion along.

Thanks, I think it does.

> For what it's worth, I don't have a clear understanding of why the Marconi
> developer community chose to create a new queue rather than an abstraction
> layer on top of existing queues. While my lack of understanding there isn't
> a technical objection to the project, I hope they can address this in the
> aforementioned FAQ.
> 
> The reference storage implementation is MongoDB. AFAIK, no integrated
> projects require an AGPL package to be installed, and from the discussions
> I've been part of, that would be a show-stopper if Marconi required
> MongoDB. As I understand it, this is why sqlalchemy support was required
> when Marconi was incubated. Saying "Marconi also supports SQLA" is
> disingenuous because it is a second-class citizen, with incomplete API
> support, is clearly not the recommended storage driver, and is going to be
> unusable at scale (I'll come back to this point in a bit).
> 
> Let me ask this. Which back-end is tested in Marconi's CI? That is the
> back-end that matters right now. If that's Mongo, I think there's a
> problem. If it's SQLA, then I think Marconi should declare any features
> which SQLA doesn't support to be optional extensions, make SQLA the
> default, and clearly document how to deploy Marconi at scale with a SQLA
> back-end.
> 
> 
> Then there's the db-as-a-queue antipattern, and the problems that I have
> seen result from this in the past... I'm not the only one in the OpenStack
> community with some experience scaling MySQL databases. Surely others have
> their own experiences and opinions on whether a database (whether MySQL or
> Mongo or Postgres or ...) can be used in such a way _at_scale_ and not fall
> over from resource contention. I would hope that those members of the
> community would chime into this discussion at some point. Perhaps they'll
> even disagree with me!
> 
> A quick look at the code around claim (which, it seems, will be the most
> commonly requested action) shows why this is an antipattern.
> 
> The MongoDB storage driver for claims requires _four_ queries just to get a
> message, with a serious race condition (but at least it's documented in the
> code) if multiple clients are claiming messages in the same queue at the
> same time. For reference:
> 
> https://github.com/openstack/marconi/blob/master/marconi/queues/storage/mongodb/claims.py#L119
> 
> The SQLAlchemy storage driver is no better. It's issuing _five_ queries
> just to claim a message (including a query to purge all expired claims
> every time a new claim is created). The performance of this transaction
> under high load is probably going to be bad...
> 
> https://github.com/openstack/marconi/blob/master/marconi/queues/storage/sqlalchemy/claims.py#L83
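
As an illustration of the "several statements per claim" point, here is a
minimal hypothetical sketch (an invented schema and plain DB-API calls, not
Marconi's actual sqlalchemy driver linked above) of what claiming messages
against relational tables tends to look like, including the window between the
SELECT and the UPDATE where two concurrent claimers can race:

import sqlite3
import time
import uuid

# Invented, illustration-only schema; NOT Marconi's real tables.
SCHEMA = """
CREATE TABLE IF NOT EXISTS messages (id INTEGER PRIMARY KEY, queue TEXT,
    body TEXT, expires REAL, claim_id TEXT);
CREATE TABLE IF NOT EXISTS claims (id TEXT PRIMARY KEY, queue TEXT,
    expires REAL);
"""

def claim_messages(conn, queue, limit, claim_ttl):
    now = time.time()
    claim_id = uuid.uuid4().hex
    cur = conn.cursor()
    # 1. purge expired claims
    cur.execute("DELETE FROM claims WHERE expires < ?", (now,))
    # 2. record the new claim
    cur.execute("INSERT INTO claims (id, queue, expires) VALUES (?, ?, ?)",
                (claim_id, queue, now + claim_ttl))
    # 3. find unclaimed, unexpired messages (a concurrent claimer can see
    #    the same rows here - this is the race window)
    cur.execute("SELECT id FROM messages WHERE queue = ? AND claim_id IS NULL "
                "AND expires > ? ORDER BY id LIMIT ?", (queue, now, limit))
    ids = [row[0] for row in cur.fetchall()]
    if ids:
        # 4. mark the selected messages as claimed
        marks = ",".join("?" * len(ids))
        cur.execute("UPDATE messages SET claim_id = ? WHERE id IN (%s)" % marks,
                    [claim_id] + ids)
    # 5. read back the claimed message bodies
    cur.execute("SELECT id, body FROM messages WHERE claim_id = ?", (claim_id,))
    messages = cur.fetchall()
    conn.commit()
    return claim_id, messages

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    print(claim_messages(conn, "demo", 10, 300))

Wrapping steps 1-5 in one transaction helps, but the row selection itself still
needs row locking (e.g. SELECT ... FOR UPDATE) or an atomic update to be safe
under concurrent claimers.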
> 
> Lastly, it looks like the Marconi storage drivers assume the storage
> back-end to be infinitely scalable. AFAICT, the mongo storage driver
> supports mongo's native sharding -- which I'm happy to see -- but the SQLA
> driver does not appear to support anything equivalent for other back-ends,
> eg. MySQL. This relegates any deployment using the SQLA backend to the
> scale of "only what one database instance can handle". It's unsuitable for
> any large-scale deployment. Folks who don't want to use Mongo are likely to
> use MySQL and will be promptly bitten by Marconi's lack of scalability with
> this back end.
> 
> While there is a lot of room to improve the messaging around what/how/why,
> and I think a FAQ will be very helpful, I don't think that Marconi should
> graduate this cycle because:
> (1) support for a non-AGPL-backend is a legal requirement [*] for Marconi's
> graduation;
> (2) deploying Marconi with sqla+mysql will result in an incomplete and
> unscalable service.
> 
> It's possible that I'm wrong about the scalability of Marconi with sqla +
> mysql. If anyone feels that this is going to perform blazingly fast on a
> single mysql db backend, please publish a benchmark and I'll be very happy
> to be proved wrong. To be meaningful, it must have a high concurrency of
> clients creating and claiming messages with (num queues) << (num clients)
> << (num messages), and all clients polling on a reasonably short interval,
> based on whatever the recommended client-rate-limit is. I'd like the test
> to be repeated with both Mongo and SQLA back-ends on the same hardware for
> comparison.

My guess (and it's just a guess) is that the Marconi developers almost
wish their SQLA driver didn't exist after reading your email because of
the confusion it's causing. My understanding is that the SQLA driver is
not intended for production usage.

Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-20 Thread Tim Bell

+1 for performance analysis to understand what needs to be optimised. Metering 
should be light-weight.

For those of us running in production, we don't have an option to turn 
ceilometer off some of the time. That we are not able to run through the gate 
tests hints that there are optimisations that are needed.

For example, turning on ceilometer caused a 16x increase in our Nova API call 
rate, see 
http://openstack-in-production.blogspot.ch/2014/03/cern-cloud-architecture-update-for.html
 for details.

Tim

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: 20 March 2014 11:16
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer 
> tempest testing in gate
> 
...
> 
> While I agree that Tempest's job is not to test performance, we do have to 
> give some basic sanity checking here that the software is running in some
> performance profile that we believe is base usable.
>
> Based on the latest dstat results, I think that's a dubious assessment.
> The answer on the collector side has to be something other than horizontal 
> scaling. Because we're talking about the collector being the 3rd highest
> utilized process on the box right now (we should write a dstat plugin to give 
> us cumulative data, just haven't gotten there yet).
>
> So right now, I think performance analysis for ceilometer on sqla is 
> important, really important. Not just horizontal scaling, but actual
> performance profiling.
> 
>   -Sean
> 
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Backwards incompatible API changes

2014-03-20 Thread Christopher Yeoh
On Thu, 20 Mar 2014 11:51:26 +0100
Thierry Carrez  wrote:
> 
> I still think reverting before release is an option we should
> consider. My point is, yes we broke it back in October for people
> doing CD (and they might by now have gotten used to it), if we let
> this to release we'll then break it for everyone else.
> 
> We put a high emphasis into guaranteeing backward compatibility
> between releases. I think there would be more damage done if we let
> this sail to release, compared to the damage of reverting CD
> followers to pre-October behavior...

So I'm more than happy for there to be more discussion about what we
should do. I can only see two rather bad options and see the
arguments against both, but it's not really clear to me which one is the
least bad.

I can't make the Nova meeting this week because it's the middle of
the night for me, but I've put it on the agenda, so perhaps there will
be more discussion about it then.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-20 Thread Nadya Privalova
Tim, yep. If you use one DB for Ceilometer and Nova, then Nova's performance
may be affected. I've seen this issue.
Will start profiling ASAP.


On Thu, Mar 20, 2014 at 3:59 PM, Tim Bell  wrote:

>
> +1 for performance analysis to understand what needs to be optimised.
> Metering should be light-weight.
>
> For those of us running in production, we don't have an option to turn
> ceilometer off some of the time. That we are not able to run through the
> gate tests hints that there are optimisations that are needed.
>
> For example, turning on ceilometer caused a 16x increase in our Nova API
> call rate, see
> http://openstack-in-production.blogspot.ch/2014/03/cern-cloud-architecture-update-for.html
> for details.
>
> Tim
>
> > -Original Message-
> > From: Sean Dague [mailto:s...@dague.net]
> > Sent: 20 March 2014 11:16
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer
> tempest testing in gate
> >
> ...
> >
> > While I agree that Tempest's job is not to test performance, we do have
> to give some basic sanity checking here that the software is running in some
> > performance profile that we believe is base usable.
> >
> > Based on the latest dstat results, I think that's a dubious assessment.
> > The answer on the collector side has to be something other than
> horizontal scaling. Because we're talking about the collector being the 3rd
> highest
> > utilized process on the box right now (we should write a dstat plugin to
> give us cumulative data, just haven't gotten there yet).
> >
> > So right now, I think performance analysis for ceilometer on sqla is
> important, really important. Not just horizontal scaling, but actual
> > performance profiling.
> >
> >   -Sean
> >
> > --
> > Sean Dague
> > Samsung Research America
> > s...@dague.net / sean.da...@samsung.com
> > http://dague.net
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack/GSoC

2014-03-20 Thread Artem Shepelev

On 03/18/2014 07:19 PM, Davanum Srinivas wrote:

Dear Students,

Student application deadline is on Friday, March 21 [1]

Once you finish the application process on the Google GSoC site.
Please reply back to this thread to confirm that all the materials are
ready to review.

thanks,
dims

[1] http://www.google-melange.com/gsoc/events/google/gsoc2014


Hello!

I have just submitted my proposal.
If there are any questions or offers, I'm ready to listen to them and do
what I can.


P.S.: For other students and mentors: it is not possible to edit
proposals after the deadline (March 21, 19:00 UTC).


--
Best regards,
Artem Shepelev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Flavio Percoco

On 20/03/14 09:09 +, Mark McLoughlin wrote:

On Wed, 2014-03-19 at 12:37 -0700, Devananda van der Veen wrote:

Let me start by saying that I want there to be a constructive discussion
around all this. I've done my best to keep my tone as non-snarky as I could
while still clearly stating my concerns. I've also spent a few hours
reviewing the current code and docs. Hopefully this contribution will be
beneficial in helping the discussion along.


Thanks, I think it does.


Very helpful, Thanks!




For what it's worth, I don't have a clear understanding of why the Marconi
developer community chose to create a new queue rather than an abstraction
layer on top of existing queues. While my lack of understanding there isn't
a technical objection to the project, I hope they can address this in the
aforementioned FAQ.

The reference storage implementation is MongoDB. AFAIK, no integrated
projects require an AGPL package to be installed, and from the discussions
I've been part of, that would be a show-stopper if Marconi required
MongoDB. As I understand it, this is why sqlalchemy support was required
when Marconi was incubated. Saying "Marconi also supports SQLA" is
disingenuous because it is a second-class citizen, with incomplete API
support, is clearly not the recommended storage driver, and is going to be
unusable at scale (I'll come back to this point in a bit).

Let me ask this. Which back-end is tested in Marconi's CI? That is the
back-end that matters right now. If that's Mongo, I think there's a
problem. If it's SQLA, then I think Marconi should declare any features
which SQLA doesn't support to be optional extensions, make SQLA the
default, and clearly document how to deploy Marconi at scale with a SQLA
back-end.


Then there's the db-as-a-queue antipattern, and the problems that I have
seen result from this in the past... I'm not the only one in the OpenStack
community with some experience scaling MySQL databases. Surely others have
their own experiences and opinions on whether a database (whether MySQL or
Mongo or Postgres or ...) can be used in such a way _at_scale_ and not fall
over from resource contention. I would hope that those members of the
community would chime into this discussion at some point. Perhaps they'll
even disagree with me!

A quick look at the code around claim (which, it seems, will be the most
commonly requested action) shows why this is an antipattern.

The MongoDB storage driver for claims requires _four_ queries just to get a
message, with a serious race condition (but at least it's documented in the
code) if multiple clients are claiming messages in the same queue at the
same time. For reference:

https://github.com/openstack/marconi/blob/master/marconi/queues/storage/mongodb/claims.py#L119

The SQLAlchemy storage driver is no better. It's issuing _five_ queries
just to claim a message (including a query to purge all expired claims
every time a new claim is created). The performance of this transaction
under high load is probably going to be bad...

https://github.com/openstack/marconi/blob/master/marconi/queues/storage/sqlalchemy/claims.py#L83

Lastly, it looks like the Marconi storage drivers assume the storage
back-end to be infinitely scalable. AFAICT, the mongo storage driver
supports mongo's native sharding -- which I'm happy to see -- but the SQLA
driver does not appear to support anything equivalent for other back-ends,
eg. MySQL. This relegates any deployment using the SQLA backend to the
scale of "only what one database instance can handle". It's unsuitable for
any large-scale deployment. Folks who don't want to use Mongo are likely to
use MySQL and will be promptly bitten by Marconi's lack of scalability with
this back end.

While there is a lot of room to improve the messaging around what/how/why,
and I think a FAQ will be very helpful, I don't think that Marconi should
graduate this cycle because:
(1) support for a non-AGPL-backend is a legal requirement [*] for Marconi's
graduation;
(2) deploying Marconi with sqla+mysql will result in an incomplete and
unscalable service.

It's possible that I'm wrong about the scalability of Marconi with sqla +
mysql. If anyone feels that this is going to perform blazingly fast on a
single mysql db backend, please publish a benchmark and I'll be very happy
to be proved wrong. To be meaningful, it must have a high concurrency of
clients creating and claiming messages with (num queues) << (num clients)
<< (num messages), and all clients polling on a reasonably short interval,
based on whatever the recommended client-rate-limit is. I'd like the test
to be repeated with both Mongo and SQLA back-ends on the same hardware for
comparison.
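
A rough load generator along those lines could be as small as the sketch below.
To be clear, the v1 paths, headers and payloads in it are my assumptions about
the Marconi HTTP API rather than verified details, and the endpoint is
hypothetical; treat it as a starting point, not a benchmark harness.

import json
import time
import uuid
from multiprocessing.pool import ThreadPool

import requests

BASE = "http://localhost:8888/v1"   # hypothetical Marconi endpoint
NUM_QUEUES = 10                     # (num queues) << (num clients)
NUM_CLIENTS = 100                   # (num clients) << (num messages)
POLL_INTERVAL = 0.5                 # stand-in for the recommended client rate limit
DURATION = 60                       # seconds

def worker(n):
    # Assumed v1 conventions: Client-ID header, JSON message batches, claims.
    headers = {"Client-ID": str(uuid.uuid4()),
               "Content-Type": "application/json"}
    queue = "%s/queues/bench-%d" % (BASE, n % NUM_QUEUES)
    requests.put(queue, headers=headers)
    posted = claimed = 0
    deadline = time.time() + DURATION
    while time.time() < deadline:
        # each client both produces and consumes: post a batch, then claim
        body = json.dumps([{"ttl": 300, "body": {"i": i}} for i in range(10)])
        r = requests.post(queue + "/messages", data=body, headers=headers)
        if r.status_code == 201:
            posted += 10
        r = requests.post(queue + "/claims?limit=10",
                          data=json.dumps({"ttl": 300, "grace": 60}),
                          headers=headers)
        if r.status_code == 201:
            claimed += len(r.json())
        time.sleep(POLL_INTERVAL)
    return posted, claimed

if __name__ == "__main__":
    results = ThreadPool(NUM_CLIENTS).map(worker, range(NUM_CLIENTS))
    print("posted=%d claimed=%d" % tuple(sum(col) for col in zip(*results)))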


My guess (and it's just a guess) is that the Marconi developers almost
wish their SQLA driver didn't exist after reading your email because of
the confusion it's causing. My understanding is that the SQLA driver is
not intended for production usage.


Yeah, pretty much the feeling now! :D

In 

Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Mark McLoughlin
On Thu, 2014-03-20 at 12:07 +0100, Thierry Carrez wrote:
> Monty Taylor wrote:
> > On 03/20/2014 01:30 AM, Radcliffe, Mark wrote:
> >> The problem with AGPL is that the scope is very uncertain and the
> >> determination of the  consequences are very fact intensive. I was the
> >> chair of the User Committee in developing the GPLv3 and I am therefor
> >> quite familiar with the legal issues.  The incorporation of AGPLv3
> >> code Into OpenStack Project  is a significant decision and should not
> >> be made without input from the Foundation. At a minimum, the
> >> inclusion of APLv3 code means that the OpenStack Project is no longer
> >> solely an Apache v2 licensed project because AGPLv3 code cannot be
> >> licensed under Apache v. 2 License.  Moreover, the inclusion of such
> >> code is inconsistent with the current CLA provisions.
> >>
> >> This code can be included but it is an important decision that should
> >> be made carefully.
> > 
> > I agree - but in this case, I think that we're not talking about
> > including AGPL code in OpenStack as much as we are talking about using
> > an Apache2 driver that would talk to a server that is AGPL ... if the
> > deployer chooses to install the AGPL software. I don't think we're
> > suggesting that downloading or installing openstack itself would involve
> > downloading or installing AGPL code.
> 
> Yes, the issue here is more... a large number of people want to stay
> away from AGPL. Should the TC consider adding to the OpenStack
> integrated release a component that requires AGPL software to be
> installed alongside it ? It's not really a legal issue (hence me
> stopping the legal-issues cross-posting).

We need to understand the reasons "people want to stay away from the
AGPL". Those reasons appear to be legal reasons, and not necessarily
well founded. I think legal-discuss can help tease those out.

I don't (yet) accept that there's a reasonable enough concern for the
OpenStack project to pander to.

I'm no fan of the AGPL, but we need to be able to articulate any policy
decision we make here beyond "some people don't like the AGPL".

For example, if we felt the AGPL fears weren't particularly well founded
then we could make a policy decision that projects should have an
abstraction that would allow those with AGPL fears to add support for
another technology ... but that the project wouldn't be required to do
so itself before graduating.

> This was identified early on as a concern with Marconi and the SQLA
> support was added to counter that concern. The question then becomes,
> how usable this SQLA option actually is ? If it's sluggish, unusable in
> production or if it doesn't fully support the proposed Marconi API, then
> I think we still have that concern.

I understood that a future Redis driver was what the Marconi team had in
mind to address this concern and the SQLA driver wasn't intended for
production use.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Tue, Mar 18, 2014 at 7:38 PM, Yuriy Taraday  wrote:

> I'm aiming at ~100 new lines of code for daemon. Of course I'll use some
> batteries included with Python stdlib but they should be safe already.
> It should be rather easy to audit them.
>

Here's my take on this: https://review.openstack.org/81798

Benchmark included showed on my machine these numbers (average over 100
iterations):

Running 'ip a':
  ip a :   4.565ms
 sudo ip a :  13.744ms
   sudo rootwrap conf ip a : 102.571ms
daemon.run('ip a') :   8.973ms
Running 'ip netns exec bench_ns ip a':
  sudo ip netns exec bench_ns ip a : 162.098ms
sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
 daemon.run('ip netns exec bench_ns ip a') : 129.876ms

So it looks like running daemon is actually faster than running "sudo".
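
For those who haven't opened the review yet, the shape of the idea is roughly
the sketch below: a stripped-down illustration using
multiprocessing.managers.BaseManager over a UNIX socket, with an invented
socket path and authkey. It is not the code in the change above, which among
other things still has to validate every command against the rootwrap filters.

import os
import subprocess
from multiprocessing.managers import BaseManager

SOCKET_PATH = "/var/run/rootwrap-demo.sock"   # hypothetical path
AUTHKEY = b"change-me"                        # shared secret, illustration only

class CommandRunner(object):
    def run(self, cmd):
        # Real rootwrap would check cmd against its filter files here.
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        return proc.returncode, out, err

class RootwrapManager(BaseManager):
    pass

RootwrapManager.register('runner', callable=CommandRunner)

def serve():
    # Run once as root (e.g. started via sudo when the agent starts).
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    manager = RootwrapManager(address=SOCKET_PATH, authkey=AUTHKEY)
    server = manager.get_server()
    server.serve_forever()

def client():
    # Unprivileged callers connect and get a proxy to CommandRunner.
    manager = RootwrapManager(address=SOCKET_PATH, authkey=AUTHKEY)
    manager.connect()
    return manager.runner()

if __name__ == '__main__':
    serve()

An unprivileged caller would then do something like
rc, out, err = client().run(['ip', 'a']) and pay only the proxy round trip
instead of a full sudo plus rootwrap start-up.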

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] [Ceilometer] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Steve Gordon
- Original Message -
> Monty Taylor wrote:
> > On 03/20/2014 01:30 AM, Radcliffe, Mark wrote:
> >> The problem with AGPL is that the scope is very uncertain and the
> >> determination of the  consequences are very fact intensive. I was the
> >> chair of the User Committee in developing the GPLv3 and I am therefor
> >> quite familiar with the legal issues.  The incorporation of AGPLv3
> >> code Into OpenStack Project  is a significant decision and should not
> >> be made without input from the Foundation. At a minimum, the
> >> inclusion of APLv3 code means that the OpenStack Project is no longer
> >> solely an Apache v2 licensed project because AGPLv3 code cannot be
> >> licensed under Apache v. 2 License.  Moreover, the inclusion of such
> >> code is inconsistent with the current CLA provisions.
> >>
> >> This code can be included but it is an important decision that should
> >> be made carefully.
> > 
> > I agree - but in this case, I think that we're not talking about
> > including AGPL code in OpenStack as much as we are talking about using
> > an Apache2 driver that would talk to a server that is AGPL ... if the
> > deployer chooses to install the AGPL software. I don't think we're
> > suggesting that downloading or installing openstack itself would involve
> > downloading or installing AGPL code.
> 
> Yes, the issue here is more... a large number of people want to stay
> away from AGPL. Should the TC consider adding to the OpenStack
> integrated release a component that requires AGPL software to be
> installed alongside it ? It's not really a legal issue (hence me
> stopping the legal-issues cross-posting).
> 
> This was identified early on as a concern with Marconi and the SQLA
> support was added to counter that concern. The question then becomes,
> how usable this SQLA option actually is ? If it's sluggish, unusable in
> production or if it doesn't fully support the proposed Marconi API, then
> I think we still have that concern.

This is somewhat tangential but what's the recommended/default backend for 
Telemetry these days? I ask because in the documentation we currently still 
appear to concentrate on using MongoDB [1]. Are all of the backends - or at 
least those with 'Yes' in all columns in the developer docs [2] - first class 
citizens at this point?

Thanks,

Steve

[1] 
http://docs.openstack.org/havana/install-guide/install/yum/content/ceilometer-install.html
[2] http://docs.openstack.org/developer/ceilometer/install/dbreco.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Sean Dague
On 03/20/2014 08:36 AM, Mark McLoughlin wrote:
> On Thu, 2014-03-20 at 12:07 +0100, Thierry Carrez wrote:
>> Monty Taylor wrote:
>>> On 03/20/2014 01:30 AM, Radcliffe, Mark wrote:
 The problem with AGPL is that the scope is very uncertain and the
 determination of the  consequences are very fact intensive. I was the
 chair of the User Committee in developing the GPLv3 and I am therefor
 quite familiar with the legal issues.  The incorporation of AGPLv3
 code Into OpenStack Project  is a significant decision and should not
 be made without input from the Foundation. At a minimum, the
 inclusion of APLv3 code means that the OpenStack Project is no longer
 solely an Apache v2 licensed project because AGPLv3 code cannot be
 licensed under Apache v. 2 License.  Moreover, the inclusion of such
 code is inconsistent with the current CLA provisions.

 This code can be included but it is an important decision that should
 be made carefully.
>>>
>>> I agree - but in this case, I think that we're not talking about
>>> including AGPL code in OpenStack as much as we are talking about using
>>> an Apache2 driver that would talk to a server that is AGPL ... if the
>>> deployer chooses to install the AGPL software. I don't think we're
>>> suggesting that downloading or installing openstack itself would involve
>>> downloading or installing AGPL code.
>>
>> Yes, the issue here is more... a large number of people want to stay
>> away from AGPL. Should the TC consider adding to the OpenStack
>> integrated release a component that requires AGPL software to be
>> installed alongside it ? It's not really a legal issue (hence me
>> stopping the legal-issues cross-posting).
> 
> We need to understand the reasons "people want to stay away from the
> AGPL". Those reasons appear to be legal reasons, and not necessarily
> well founded. I think legal-discuss can help tease those out.
> 
> I don't (yet) accept that there's a reasonable enough concern for the
> OpenStack project to pander to.
> 
> I'm no fan of the AGPL, but we need to be able to articulate any policy
> decision we make here beyond "some people don't like the AGPL".
> 
> For example, if we felt the AGPL fears weren't particularly well founded
> then we could make a policy decision that projects should have an
> abstraction that would allow those with AGPL fears to add support for
> another technology ... but that the project wouldn't be required to do
> so itself before graduating.
> 
>> This was identified early on as a concern with Marconi and the SQLA
>> support was added to counter that concern. The question then becomes,
>> how usable this SQLA option actually is ? If it's sluggish, unusable in
>> production or if it doesn't fully support the proposed Marconi API, then
>> I think we still have that concern.
> 
> I understood that a future Redis driver was what the Marconi team had in
> mind to address this concern and the SQLA driver wasn't intended for
> production use.

This is a little problematic from a deployer requirement perspective.
Today we say to all deployers: To run all of OpenStack:

 * You will need to run and maintain a relational database environment
(and probably cluster it for scale / availability)
 * You will need to run and maintain a queue system (rabbit, qpid, zmq)
 * You will need to run and maintain a web server (apache)

Deciding to integrate something into OpenStack that requires a base piece
of software that is outside of that list is a pretty big deal.

Forget license for a moment, just specifying that you have to run and
maintain and monitor a nosql environment in addition to a RDBMS is
definitely adding substantial burden to OpenStack deployers.

Big public cloud shops, that's probably fine for them. However, OpenStack
as a site-level deployment? Having to add expertise in NoSQL engine
lifecycle management is a real cost. And we had better be explicit about it
from a project-wide stance if that's what we're saying.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-20 Thread Christian Berendt
On 03/20/2014 01:27 PM, Nadya Privalova wrote:
> Tim, yep. If you use one db for Ceilometer and Nova then nova's
> performance may be affected.

If I understood it correctly, the problem is not the higher load produced
directly by Ceilometer on the database. The problem is that the
Ceilometer compute agent makes a lot of Nova API calls, and this results
in a higher load on the nova-api services. Tim mentioned a factor of 16.

Christian.

-- 
Christian Berendt
Cloud Computing Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Blueprint Review request for OpenStack Neutron Port Templates Network Policy.

2014-03-20 Thread Kyle Mestery
On Thu, Mar 20, 2014 at 5:21 AM, Kumar, Vinod (HP Networking) <
vinod.kum...@hp.com> wrote:

>  Hi All,
>
>
>
> My fellow friends (CCed) and I invite the community to review the OpenStack
> Neutron Port Templates Network Policy blueprint submitted recently.
>
> Please review the same and feel free to suggest, comment and ask questions.
>
>
>
> Link to the OpenStack blueprint page
>
> https://blueprints.launchpad.net/neutron/+spec/neutron-port-template-policy
>
>
>
Hi Vinod:

Your blueprint looks like it has a large amount of overlap with the
existing Group Based
Policy effort in Neutron which has been going on for 6 months now. Have you
looked at
the Group Based Policy Blueprint? We have been meeting weekly since the
Summit in
Hong Kong as well. I'd encourage you to look at all the work we've done,
including the
prototype code, and join us in our weekly meetings.

Thanks!
Kyle

[1]
https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction
[2] https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy


>  Thanks
>
> Vinod
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Review request: A blueprint on OpenStack Neutron Port Templates Framework

2014-03-20 Thread Kyle Mestery
Also, before undertaking large blueprints like this, I'd highly encourage
folks to look
at existing work in the community. The Neutron community has been working on
Group Based Policy since last fall. There was a design summit session in
Hong Kong
around this as well, and we've been meeting weekly on IRC since then.
Please see
my response to the other Neutron policy email here [1]. I'd encourage both
this BP and
the other one to look at the work already being done in this area and see
where there is
overlap.

Thanks,
Kyle

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030554.html


On Thu, Mar 20, 2014 at 6:37 AM, Sylvain Bauza wrote:

> Hi,
>
> As said here [1], please don't send review requests directly to the ML.
>
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html
>
>
> 2014-03-20 12:08 GMT+01:00 Kamat, Maruti Haridas :
>
>>  Hi All,
>>
>> Me and my fellow friends (CCed) invite the community to review OpenStack
>> Neutron Port Templates Framework blueprint submitted recently.
>> Please review the same and feel free to suggest, comment and ask
>> questions.
>>
>> Link to Open Stack blueprint page
>>
>> *https://blueprints.launchpad.net/neutron/+spec/neutron-port-template-framework*
>>
>> Thanks
>> Maruti
>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack/GSoC

2014-03-20 Thread Davanum Srinivas
Team,

Here's what I see in the system so far.

Mentors:
ybudupi
blint
boris_42
coroner
cppcabrera
sriramhere
arnaudleg
greghaynes
hughsaunders
julim
ddutta (Organization Administrator)
dims (Organization Administrator)

Projects:
Cross-services Scheduler project. Artem Shepelev Artem Shepelev
Proposal for Implementing an application-level FWaaS driver (Zorp) Dániel Csubák
Openstack-OSLO :Add a New Backend to Oslo.Cache sai krishna
How to detect network anomalies from telemetry data within Openstack mst89
OpenStack/Marconi: Py3k support Nataliia
Develop a benchmarking suite and new storage backends to OpenStack
Marconi Prashanth Raghu
Developing Benchmarks for Virtual Machines of OpenStack with Rally
Tzanetos Balitsaris

Mentors, if you don't see your id, please send me a connection request
in the google gsoc site.
Students, if you don't see your proposal above, please hop onto
#openstack-gsoc and #gsoc and get help making sure we can see it.

thanks,
dims

On Thu, Mar 20, 2014 at 8:41 AM, Artem Shepelev
 wrote:
> On 03/18/2014 07:19 PM, Davanum Srinivas wrote:
>
> Dear Students,
>
> Student application deadline is on Friday, March 21 [1]
>
> Once you finish the application process on the Google GSoC site.
> Please reply back to this thread to confirm that all the materials are
> ready to review.
>
> thanks,
> dims
>
> [1] http://www.google-melange.com/gsoc/events/google/gsoc2014
>
> Hello!
>
> I have just submitted my proposal.
> If there are any questions or offers I'm ready to listen them and do what I
> can.
>
> P.S.: For other students and mentors: there is no possibility to edit
> proposals after the deadline (March 21, 19.00 UTC)
>
> --
> Best regards,
> Artem Shepelev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-20 Thread Tim Bell
We're using a dedicated MongoDB instance for ceilometer and a distinct DB for 
each of the Nova cells.

Tim

From: Nadya Privalova [mailto:nprival...@mirantis.com]
Sent: 20 March 2014 13:27
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer 
tempest testing in gate

Tim, yep. If you use one db for Ceilometer and Nova then nova's performance may 
be affected. I've seen this issue.
Will start profiling ASAP.

On Thu, Mar 20, 2014 at 3:59 PM, Tim Bell <tim.b...@cern.ch> wrote:

+1 for performance analysis to understand what needs to be optimised. Metering 
should be light-weight.

For those of us running in production, we don't have an option to turn 
ceilometer off some of the time. That we are not able to run through the gate 
tests hints that there are optimisations that are needed.

For example, turning on ceilometer caused a 16x increase in our Nova API call 
rate, see 
http://openstack-in-production.blogspot.ch/2014/03/cern-cloud-architecture-update-for.html
 for details.

Tim

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: 20 March 2014 11:16
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer 
> tempest testing in gate
>
...
>
> While I agree that Tempest's job is not to test performance, we do have to 
> give some basic sanity checking here that the software is running in some
> performance profile that we believe is base usable.
>
> Based on the latest dstat results, I think that's a dubious assessment.
> The answer on the collector side has to be something other than horizontal 
> scaling. Because we're talking about the collector being the 3rd highest
> utilized process on the box right now (we should write a dstat plugin to give 
> us cumulative data, just haven't gotten there yet).
>
> So right now, I think performance analysis for ceilometer on sqla is 
> important, really important. Not just horizontal scaling, but actual
> performance profiling.
>
>   -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / 
> sean.da...@samsung.com
> http://dague.net
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Miguel Angel Ajo


   Wow Yuriy, amazing and fast :-), benchmarks included ;-)

   The daemon solution only adds 4.5ms, good work. I'll add some 
comments in a while.


   Recently I talked with another engineer at Red Hat (working
on ovirt/vdsm), and they have something like this daemon, and they
are using BaseManager too.

   In our last conversation he told me that the BaseManager has
a couple of bugs & race conditions that won't be fixed for Python 2.x.
I'm waiting for details on those bugs, and I'll post them to the thread
as soon as I have them.


   If this is coupled to Neutron in a way that it can be accepted for
Icehouse (we're killing a performance bug), or that at least it can
be backported, you'd be covering both the short- & long-term needs.


Best,
Miguel Ángel.




On 03/20/2014 01:41 PM, Yuriy Taraday wrote:

On Tue, Mar 18, 2014 at 7:38 PM, Yuriy Taraday <yorik@gmail.com> wrote:

I'm aiming at ~100 new lines of code for daemon. Of course I'll use
some batteries included with Python stdlib but they should be safe
already.
It should be rather easy to audit them.


Here's my take on this: https://review.openstack.org/81798

Benchmark included showed on my machine these numbers (average over 100
iterations):

Running 'ip a':
   ip a :   4.565ms
  sudo ip a :  13.744ms
sudo rootwrap conf ip a : 102.571ms
 daemon.run('ip a') :   8.973ms
Running 'ip netns exec bench_ns ip a':
   sudo ip netns exec bench_ns ip a : 162.098ms
 sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
  daemon.run('ip netns exec bench_ns ip a') : 129.876ms

So it looks like running daemon is actually faster than running "sudo".

--

Kind regards, Yuriy.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hierarchicical Multitenancy Discussion

2014-03-20 Thread Vinod Kumar Boppanna
Hi,

As discussed in the last meeting, I have changed the POC for Quota Management in
a Hierarchical Multitenancy setup. Please check the following links:

Twiki Page ->  https://wiki.openstack.org/wiki/POC_for_QuotaManagement#API_URLs

Code (diff) -> 
https://github.com/vinodkumarboppanna/POC-For-Quotas/commit/4147173a415108e9496d6265522ea57bec5d1409

If you have any comments, please send them to me.

Thanks & Regards,
Vinod Kumar Boppanna
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][PCI] one use case make the flavor/extra-info based solution to be right choice

2014-03-20 Thread Robert Li (baoli)
Hi Yongli,

I'm very glad that you brought this up and revived our discussion on PCI
passthrough and its application to networking. The use case you brought up
is:

   user wants a FASTER NIC from INTEL to join a virtual
network.

By FASTER, I guess that you mean that the user is allowed to select a
particular vNIC card. Therefore, the above statement can be translated
into the following requests for a PCI device:
. Intel vNIC
. 1G or 10G or ?
. network to join

First of all, I'm not sure that in a cloud environment a user would care about
the vendor or card type. 1G or 10G doesn't have anything to do with the
bandwidth a user would get. But I guess a cloud provider may have the
incentive to do so for other reasons, and want to provide its users with
such choice. In any case, let's assume it's a valid use case.

With the initial PCI group proposal, we have one tag and you can tag the
Intel device with its group name, for example, "Intel_1G_phy1",
"Intel_10G_phy1". When requesting a particular device, the user can say:
pci_group="Intel_1G_phy1", or pci_group="Intel_10G_phy1", or if the user
don't care 1G or 10G, pci_group="Intel_1G_phy1" OR "Intel_10G_phy1".

I would also think that it's possible to have two tags on a networking
device with the above use case in mind: a group tag, and a network tag.
For example, a device can be tagged with pci_group="Intel_1G",
network="phy1". When requesting a networking device, the network tag can
be derived from the nic that's being requested.

As you can see, an admin defines the devices once on the compute nodes,
and doesn't need to do anything on top of that. It's simple and easy to
use.

My initial comments to the flavor/extra-info based solution are about the
PCI stats management and scheduling. Your latest patch seems to have
answered some of my original questions. However, your implementation seems
to deviate from (or I should say have clarified) the original proposal
https://docs.google.com/document/d/1vadqmurlnlvZ5bv3BlUbFeXRS_wh-dsgi5plSjimWjU/edit,
which doesn't provide detailed explanation on those.

Here, let me extract the comment I provided to this patch
https://review.openstack.org/#/c/63267/:

'''
I'd like to take an analogy with a database table. Assuming a device table
with columns of device properties (such as product_id, etc), designated as
P and extra attributes as E. So it would look something like T-columns
= (P1, P2, ..., E1, E2, ..).
A pci_flavor_attrs is a subset of T-columns. With that, the entire device
table will be REDUCED to a smaller stats pool table. For example, if
pci_flavor_attrs is (P1, P2, E1, E2), then the stats pool table will look
like: S-columns = (P1, P2, E1, E2, COUNT). In the worst case, S-columns =
T-columns. Although a well educated admin wouldn't do that.
Therefore, requesting a PCI device is like doing a DB search based on the
stats pool table. And the search criteria is based on a combination of the
S-columns (for example, by way of nova flavor).
The admin can decide to define any extra attributes, and devices may be
tagged with different extra attributes. It's possible that many extra
attributes are defined, but some devices may be tagged with one. However,
all the extra attributes have to have corresponding columns in the stats
pool table.
I can see there are many ways to use such an interface. it also means it
could easily lead to misuse. An admin may define a lot of attributes,
later he may find it's not enough based on how he used it, and adding new
attributes or deleting attributes may not be a fun thing at all (due to
the fixed pci_flavor_attrs configuration), let alone how to do that in a
working cloud.
'''

Imagine in a cloud that supports PCI passthrough on various classes of PCI
cards (by class, I mean the linux pci device class). Examples are video,
crypto, networking, storage, etc. The pci_flavor_attrs needs to be defined
on EVERY node, and has to accommodate attributes from ALL of these classes
of cards. However, an attribute for one class of cards may not be
applicable to other classes of cards. However, the stats group are keyed
on pci_flavor_attrs, and PCI flavors can be defined with any attributes
from pci_flavor_attrs. Thus, it really lacks the level of abstraction that
clearly defines the usage and semantics. It's up to a well educated admin
to use it properly, and it's not easy to manage. Therefore, I believe it
requires further work.
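
To make the table analogy concrete, here is a small self-contained sketch of
the reduction being described. The device list, attribute names and values are
invented for illustration, and this is not Nova's actual PCI stats code:

from collections import Counter

# T-columns: device properties (P*) plus admin-defined extra attributes (E*).
devices = [
    {"vendor_id": "8086", "product_id": "10fb", "group": "Intel_10G", "network": "phy1"},
    {"vendor_id": "8086", "product_id": "10fb", "group": "Intel_10G", "network": "phy1"},
    {"vendor_id": "8086", "product_id": "1520", "group": "Intel_1G",  "network": "phy1"},
    {"vendor_id": "8086", "product_id": "1520", "group": "Intel_1G",  "network": "phy2"},
]

# S-columns: the subset chosen by pci_flavor_attrs. Devices with equal values
# for these columns collapse into a single stats row with a COUNT.
pci_flavor_attrs = ("vendor_id", "group", "network")

def build_stats_pool(devs, attrs):
    pool = Counter(tuple(d.get(a) for a in attrs) for d in devs)
    return [dict(zip(attrs, key), count=n) for key, n in pool.items()]

def schedule(pool, request):
    # A flavor/request is just a filter over the S-columns plus a count.
    for entry in pool:
        if all(entry.get(k) == v for k, v in request.items()) and entry["count"] > 0:
            entry["count"] -= 1
            return entry
    return None

if __name__ == "__main__":
    pool = build_stats_pool(devices, pci_flavor_attrs)
    print(pool)
    # e.g. request an Intel 1G device on physical network phy1
    print(schedule(pool, {"group": "Intel_1G", "network": "phy1"}))

The point of the example is that once pci_flavor_attrs is fixed, every extra
attribute an admin ever wants to schedule on has to appear in that tuple on
every node, which is exactly the management problem described above.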

I think that practical use cases would really help us find the right
solution, and provide the optimal interface to the admin/user. So let's
keep the discussion going.

thanks,
Robert

On 3/20/14 4:22 AM, "yongli he"  wrote:

>Hi all,
>
>Because of Juno, the PCI discussion is open again: group-based vs.
>flavor/extra-information-based solutions. There is a use case which the
>group-based solution
>cannot support well.
>
>Please consider it, and choose the flavor/extra-information-based
>solution.
>
>
>Groups problem:
>
>I: exposed may

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Miguel Angel Ajo




On 03/20/2014 12:32 PM, Monty Taylor wrote:

On 03/20/2014 05:31 AM, Miguel Angel Ajo wrote:



On 03/19/2014 10:54 PM, Joe Gordon wrote:




On Wed, Mar 19, 2014 at 9:25 AM, Miguel Angel Ajo <majop...@redhat.com> wrote:



An advance on the changes that it's requiring to have a
py->c++ compiled rootwrap as a mitigation POC for havana/icehouse.


https://github.com/mangelajo/shedskin.rootwrap/commit/e4167a6491dfbc71e2d0f6e28ba93bc8a1dd66c0







The current translation output is included.


Updated changes:
https://github.com/mangelajo/shedskin.rootwrap/commit/f4bd8d6efac9d3a972e686b42a47996f32ce0464

Some results of the compiled version (it's a different machine, so note 
the rootwrap runtime difference).


# time neutron-rootwrap /etc/neutron/rootwrap.conf
/usr/bin/neutron-rootwrap: No command specified

real0m0.133s
user0m0.106s
sys 0m0.023s

[root@rdo-storm rootwrap]# time ./cmd /etc/neutron/rootwrap.conf
./cmd: No command specified

real0m0.003s
user0m0.003s
sys 0m0.001s


[root@rdo-storm rootwrap]# time ./cmd /etc/neutron/rootwrap.conf ip 
netns exec bench_ns ip a

37: lo:  mtu 16436 qdisc noop state DOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

real0m0.025s
user0m0.003s
sys 0m0.011s
[root@rdo-storm rootwrap]# time neutron-rootwrap 
/etc/neutron/rootwrap.conf ip netns exec bench_ns ip a

37: lo:  mtu 16436 qdisc noop state DOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

real0m0.164s
user0m0.124s
sys 0m0.029s





The C++ code it's generating is inefficient and subject to some issues.
I'd be a bit concerned about it and would want to make some fixes.


Well, the inefficiencies should not be a problem for us, as long as the
final solution performs as we need.



Also - I'm sure you know this, but it's labeled as experimental. As
something that is going to run as root, I'd want to actually audit all
of the code it does. WHICH MEANS - we'd be auditing/examining C++ code.


Sure, I didn't miss that, and I think the same way you do. At this
moment, what I'm trying is a proof of concept, something we can
play around with and do real measurements on, see what the code looks like,
whether it can be audited or not, etc.


If we're going to do that, I'd rather just convert it to C++ by hand,
audit that, and deal with it. Either way, we'll need a substantial
amount of infra support - such as multi-platform compilation testing,
valgrind, etc.

Not a blocker - but things to think about. If you're really serious
about bringing C++ into the mix - let's just do it.


As I see it, it's a solution aimed at the short term, but IMHO, with
the speed Yuriy is working on the daemon, maybe his solution can make
it for Icehouse without being very code-intrusive.


(ps. if we're going to do that, we could also just write a sudo plugin)


Yes, but a sudo plugin / sudo rules brings back the maintenance
problem we already had with sudo rules.


Ideally, if we needed to write/translate in C/C++, I'd go for something
rootwrap-compatible, so innovation could be kept on the Python rootwrap
side and translated later.



It looks doable (I've almost killed 80% of the translation
problems),
but there are two big stones:

1) As Joe said, no support for Subprocess (we're interested in
popen),
I'm using a dummy os.system() for the test.

2) No logging support.

I'm not sure on how complicated could be getting those modules
implemented for shedkin.


Before sorting out if we can get those working under shedskin, any
preliminary performance numbers from neutron when using this?



Sure, totally agree.

I'm trying to put up a conversion without 1 & 2, to run a benchmark on
it, and then I'll post the results.

I suppose, we couldn't use it in neutron itself without Popen support
(not sure) but at least I could get an estimate out of the previous
numbers and the new ones.

Best,
Miguel Ángel.



On 03/18/2014 09:14 AM, Miguel Angel Ajo wrote:

Hi Joe, thank you very much for the positive feedback,

 I plan to spend a day during this week on the
shedskin-compatibility
for rootwrap (I'll branch it, and tune/cut down as necessary) to
make
it compile under shedskin [1] : nothing done yet.

 It's a short-term alternative until we can have a rootwrap
agent,
together with its integration in neutron (for Juno). As for
the
compiled rootwrap, if it works, and if it does look good
(security wise)
then we'd have a solution for Icehouse/Havana.

help in [1] is really  welcome ;-) I'm available in
#openstack-neutron
as 'ajo'.

 Best regards,
Miguel Ángel.

[1] https://github.com/mangelajo/shedskin.rootwrap


Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-20 Thread Stephen Balukoff
Hi y'all!

It's good to be back, eh!

On Wed, Mar 19, 2014 at 11:35 PM, Eugene Nikanorov
wrote:

>
>>- Active/Passive Failover
>>   - I think this is solved with multiple pools.
>>
>> The multiple pools support that is coming with L7 rules is to support
>> content-switching based on L7 HTTP information (URL, headers, etc.). There
>> is no support today for an active vs. passive pool.
>>
> I'm not sure that's the priority. It depends on if this is widely
> supported among vendors.
>

A commercial load balancer that doesn't have high availability features? Is
there really such a thing still being sold in 2014? ;)

Also, Jorge-- thanks for creating that page! I've made a few additions to
it as well that I'd love to see prioritized.

Stephen




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Flavio Percoco

On 20/03/14 08:52 -0400, Sean Dague wrote:

On 03/20/2014 08:36 AM, Mark McLoughlin wrote:

On Thu, 2014-03-20 at 12:07 +0100, Thierry Carrez wrote:

Monty Taylor wrote:

On 03/20/2014 01:30 AM, Radcliffe, Mark wrote:

The problem with AGPL is that the scope is very uncertain and the
determination of the  consequences are very fact intensive. I was the
chair of the User Committee in developing the GPLv3 and I am therefor
quite familiar with the legal issues.  The incorporation of AGPLv3
code Into OpenStack Project  is a significant decision and should not
be made without input from the Foundation. At a minimum, the
inclusion of APLv3 code means that the OpenStack Project is no longer
solely an Apache v2 licensed project because AGPLv3 code cannot be
licensed under Apache v. 2 License.  Moreover, the inclusion of such
code is inconsistent with the current CLA provisions.

This code can be included but it is an important decision that should
be made carefully.


I agree - but in this case, I think that we're not talking about
including AGPL code in OpenStack as much as we are talking about using
an Apache2 driver that would talk to a server that is AGPL ... if the
deployer chooses to install the AGPL software. I don't think we're
suggesting that downloading or installing openstack itself would involve
downloading or installing AGPL code.


Yes, the issue here is more... a large number of people want to stay
away from AGPL. Should the TC consider adding to the OpenStack
integrated release a component that requires AGPL software to be
installed alongside it ? It's not really a legal issue (hence me
stopping the legal-issues cross-posting).


We need to understand the reasons "people want to stay away from the
AGPL". Those reasons appear to be legal reasons, and not necessarily
well founded. I think legal-discuss can help tease those out.

I don't (yet) accept that there's a reasonable enough concern for the
OpenStack project to pander to.

I'm no fan of the AGPL, but we need to be able to articulate any policy
decision we make here beyond "some people don't like the AGPL".

For example, if we felt the AGPL fears weren't particularly well founded
then we could make a policy decision that projects should have an
abstraction that would allow those with AGPL fears to add support for
another technology ... but that the project wouldn't be required to do
so itself before graduating.


This was identified early on as a concern with Marconi and the SQLA
support was added to counter that concern. The question then becomes,
how usable this SQLA option actually is ? If it's sluggish, unusable in
production or if it doesn't fully support the proposed Marconi API, then
I think we still have that concern.


I understood that a future Redis driver was what the Marconi team had in
mind to address this concern and the SQLA driver wasn't intended for
production use.


This is a little problematic from a deployer requirement perspective.
Today we say to all deployers: To run all of OpenStack:

* You will need to run and maintain a relational database environment
(and probably cluster it for scale / availability)
* You will need to run and maintain a queue system (rabbit, qpid, zmq)
* You will need to run and maintain a web server (apache)

Deciding to integrate something to OpenStack that requires a base piece
of software that is outside of that list is a pretty big deal.

Forget license for a moment, just specifying that you have to run and
maintain and monitor a nosql environment in addition to a RDBMS is
definitely adding substantial burden to OpenStack deployers.

Big public cloud shops, that's probably fine for them. However OpenStack
as a site level deploy ? Having to add expertise in nosql engine
lifecycle management is a real cost. And we better be explicit about it
from a project wide stance if that's what we're saying.


I agree this is quite an issue but I also think that pretending that
we'll be able to let OpenStack grow with a minimum set of databases,
brokers and web servers is a bit unrealistic. The set of supported
technologies won't be able to fulfill the needs of all the
yet-to-be-discovered *amazing* projects.

I think we'll need to either relax this constraint or create some sort
of separate group of projects that are still official but not
considered when we say "Deploy OpenStack". This group will allow some
limited sort of divergence when it comes to the technology
requirements.

I'm sure there's more to consider here. For instance, CI issues, CD
issues etc.

Cheers,
Flavio

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Malini Kamalambal

Let me start by saying that I want there to be a constructive discussion around 
all this. I've done my best to keep my tone as non-snarky as I could while 
still clearly stating my concerns. I've also spent a few hours reviewing the 
current code and docs. Hopefully this contribution will be beneficial in 
helping the discussion along.

For what it's worth, I don't have a clear understanding of why the Marconi 
developer community chose to create a new queue rather than an abstraction 
layer on top of existing queues. While my lack of understanding there isn't a 
technical objection to the project, I hope they can address this in the 
aforementioned FAQ.

The reference storage implementation is MongoDB. AFAIK, no integrated projects 
require an AGPL package to be installed, and from the discussions I've been 
part of, that would be a show-stopper if Marconi required MongoDB. As I 
understand it, this is why sqlalchemy support was required when Marconi was 
incubated. Saying "Marconi also supports SQLA" is disingenuous because it is a 
second-class citizen with incomplete API support, is clearly not the 
recommended storage driver, and is going to be unusable at scale (I'll come 
back to this point in a bit).

Let me ask this. Which back-end is tested in Marconi's CI? That is the back-end 
that matters right now. If that's Mongo, I think there's a problem. If it's 
SQLA, then I think Marconi should declare any features which SQLA doesn't 
support to be optional extensions, make SQLA the default, and clearly document 
how to deploy Marconi at scale with a SQLA back-end.


"[drivers]
storage = mongodb

[drivers:storage:mongodb]
uri = mongodb://localhost:27017/marconi



http://logs.openstack.org/94/81094/2/check/check-tempest-dsvm-marconi/c006285/logs/etc/marconi/marconi.conf.txt.gz



On a related note I see that marconi has no gating integration tests.
https://review.openstack.org/#/c/81094/2


But then again that is documented in 
https://wiki.openstack.org/wiki/Marconi/Incubation/Graduation#Legal_requirements
We have a devstack-gate job running and will be making it voting this week.


Of the non-gating integration test job, I only see one marconi test being run: 
tempest.api.queuing.test_queues.TestQueues.test_create_queue
 
http://logs.openstack.org/94/81094/2/check/check-tempest-dsvm-marconi/c006285/logs/testr_results.html.gz
"


I have a separate thread started on the graduation gating requirements w.r.t 
Tempest.
The single test we have on Tempest was a result of the one-liner requirement 
'Project must have a basic devstack-gate job set up'.
The subsequent discussion in the openstack qa meeting led me to believe that the 
'basic' job we have is good enough.
Please refer to the email 'Graduation Requirements + Scope of Tempest' for more 
details regarding this.

But that does not mean that 'the single tempest test' is all we have to verify 
the Marconi functionality.
We have had a robust test suite (unit & functional tests – with lots of 
positive & negative test scenarios) for a very long time in Marconi.
See 
http://logs.openstack.org/33/81033/2/check/gate-marconi-python27/35822df/testr_results.html.gz
These tests are run against a sqlite backend.
The gating tests have been using sqlalchemy driver ever since we have had it.
Hope that clarifies !

- Malini





Then there's the db-as-a-queue antipattern, and the problems that I have seen 
result from this in the past... I'm not the only one in the OpenStack community 
with some experience scaling MySQL databases. Surely others have their own 
experiences and opinions on whether a database (whether MySQL or Mongo or 
Postgres or ...) can be used in such a way _at_scale_ and not fall over from 
resource contention. I would hope that those members of the community would 
chime into this discussion at some point. Perhaps they'll even disagree with me!

A quick look at the code around claim (which, it seems, will be the most 
commonly requested action) shows why this is an antipattern.

The MongoDB storage driver for claims requires _four_ queries just to get a 
message, with a serious race condition (but at least it's documented in the 
code) if multiple clients are claiming messages in the same queue at the same 
time. For reference:
  
https://github.com/openstack/marconi/blob/master/marconi/queues/storage/mongodb/claims.py#L119
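For illustration only (this is not Marconi's code, and the names below are
made up), the shape of that race looks like this:

# Hedged illustration: a claim assembled from separate read and write steps
# races when two workers hit the same queue. "store" is a toy in-memory
# stand-in for any document or SQL backend.
store = {"messages": [{"id": 1, "claimed": False}]}

def naive_claim():
    # Query 1: read unclaimed messages
    candidates = [m for m in store["messages"] if not m["claimed"]]
    # <-- a second worker can run the same read here and see the same rows
    # Query 2: mark them claimed -- both workers now believe they won
    for m in candidates:
        m["claimed"] = True
    return [m["id"] for m in candidates]

# A single atomic server-side operation (find_and_modify in MongoDB, or one
# UPDATE ... RETURNING in SQL) closes that window; stitching the claim
# together from several separate queries cannot.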

The SQLAlchemy storage driver is no better. It's issuing _five_ queries just to 
claim a message (including a query to purge all expired claims every time a new 
claim is created). The performance of this transaction under high load is 
probably going to be bad...
  
https://github.com/openstack/marconi/blob/master/marconi/queues/storage/sqlalchemy/claims.py#L83

Lastly, it looks like the Marconi storage drivers assume the storage back-end 
to be infinitely scalable. AFAICT, the mongo storage driver supports mongo's 
native sharding -- which I'm happy to see -- but the SQLA driver does not 
appear to support anything equiv

Re: [openstack-dev] [nova] Backwards incompatible API changes

2014-03-20 Thread Solly Ross
I concur.  I suspect people/organizations who are doing CD *probably* won't mind
such a change as much as the people who use the versioned releases will mind
backwards-incompatibility.  Correct me if I'm wrong, but doing CD requires a
certain willingness to roll with the punches, so to speak, whereas people using
versioned releases are less likely to be as flexible.

Best Regards,
Solly Ross

- Original Message -
From: "Thierry Carrez" 
To: openstack-dev@lists.openstack.org
Sent: Thursday, March 20, 2014 6:51:26 AM
Subject: Re: [openstack-dev] [nova] Backwards incompatible API changes

Christopher Yeoh wrote:
> The patch was merged in October (just after Icehouse opened) and so has
> been used in clouds that do CD for quite a while. After some discussion
> on IRC I think we'll end up having to leave this backwards incompatible
> change in there - given there are most likely users who now rely on
> both sets of behaviour there is no good way to get out of this
> situation. I've added a note to the Icehouse release notes.

I still think reverting before release is an option we should consider.
My point is, yes, we broke it back in October for people doing CD (and
they might by now have gotten used to it), but if we let this go to release
we'll then break it for everyone else.

We put a high emphasis into guaranteeing backward compatibility between
releases. I think there would be more damage done if we let this sail to
release, compared to the damage of reverting CD followers to pre-October
behavior...

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Chuck Thier
>
>
> I agree this is quite an issue but I also think that pretending that
> we'll be able to let OpenStack grow with a minimum set of databases,
> brokers and web servers is a bit unrealistic. The set of supported
> technologies won't be able to fulfill the needs of all the
> yet-to-be-discovered *amazing* projects.
>

Or continue to ostracize current *amazing* projects. ;)

There has long been a rift in the Openstack community around the
implementation details of swift.   I know someone mentioned this earlier, but I
want to focus on the fact that Swift (like marconi) is a very different
service.  The API *is* the product.  With something like Nova, the API can
be down, but users can still use their VM's.  For swift, if the API is
down, the whole product is down.  We have a very different set of
constraints that we have to work with, which is why we often have to take
very different approaches.  There absolutely can't be a one-size-fits-all
solution.

If we are going to be so strict about what an Openstack project uses, are
we then by the same token going to kick swift out of Openstack because it
will *never* use Pecan?  And I say that not because I think Pecan is a bad
tool, just not the right tool for swift.

--
Chuck
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] python-keystoneclient unit tests only if python-memcache is installed

2014-03-20 Thread Thomas Goirand
Hi,

I've noticed that if there's python-memcache installed, building the
python-keystoneclient (last version: 0.6.0) will produce some unit tests
errors (see below the first 2 errors, there's 7 of them in total). If I
apt-get purge python-memcache, the errors don't show.

I'm worried that this is a symptom of a problem with some backends.
Can someone confirm if there's a real problem or if I should just
declare a build-conflict?

Cheers,

Thomas Goirand (zigo)

==
FAIL:
keystoneclient.tests.test_auth_token_middleware.v2AuthTokenMiddlewareTest.test_encrypt_cache_data
keystoneclient.tests.test_auth_token_middleware.v2AuthTokenMiddlewareTest.test_encrypt_cache_data
--
testtools.testresult.real._StringException: Traceback (most recent call
last):
  File
"/home/zigo/sources/openstack/icehouse/python-keystoneclient/build-area/python-keystoneclient-0.6.0/keystoneclient/tests/test_auth_token_middleware.py",
line 784, in test_encryp
self.assertEqual(self.middleware._cache_get(token), data[0])
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line
321, in assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line
406, in assertThat
raise mismatch_error
MismatchError: None != 'this_data'


==
FAIL:
keystoneclient.tests.test_auth_token_middleware.v2AuthTokenMiddlewareTest.test_no_memcache_protection

keystoneclient.tests.test_auth_token_middleware.v2AuthTokenMiddlewareTest.test_no_memcache_protection
--
testtools.testresult.real._StringException: Traceback (most recent call
last):
  File
"/home/zigo/sources/openstack/icehouse/python-keystoneclient/build-area/python-keystoneclient-0.6.0/keystoneclient/tests/test_auth_token_middleware.py",
line 815, in test_no_mem
self.assertEqual(self.middleware._cache_get(token), data[0])
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line
321, in assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line
406, in assertThat
raise mismatch_error
MismatchError: None != 'this_data'

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Display images/icon in Horizon Tables

2014-03-20 Thread Dave Johnston
Hi,


I'm creating a custom panel for horizon.  I have developed a table that 
displays some information, and I would like to add a 'icon' to each row.

Ideally, I want to be able to specify the URL of a remote image, and have it 
displayed using the <img> tag.


Can this be achieved with filters, or some other mechanism ?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Tue, Mar 11, 2014 at 12:58 AM, Carl Baldwin  wrote:
>
> https://etherpad.openstack.org/p/neutron-agent-exec-performance
>

I've added info on how we can speed up work with namespaces by setting
namespaces ourselves using setns() without the "ip netns exec" overhead.
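For the curious, a minimal sketch of what calling setns() directly from Python
can look like (an illustration under assumptions -- Linux, glibc >= 2.14,
namespaces created by "ip netns add" under /var/run/netns -- not the code on
the etherpad):

import ctypes
import os

libc = ctypes.CDLL("libc.so.6", use_errno=True)
CLONE_NEWNET = 0x40000000  # from <sched.h>

def enter_netns(name):
    # Switch the calling process into the named network namespace.
    # Requires CAP_SYS_ADMIN, i.e. this is what the privileged daemon does.
    fd = os.open("/var/run/netns/%s" % name, os.O_RDONLY)
    try:
        if libc.setns(fd, CLONE_NEWNET) != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
    finally:
        os.close(fd)

# After enter_netns("some_ns"), subsequent commands run inside the namespace
# without paying the "ip netns exec" fork/exec cost on every call.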

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Rick Jones

On 03/20/2014 05:41 AM, Yuriy Taraday wrote:

On Tue, Mar 18, 2014 at 7:38 PM, Yuriy Taraday wrote:

I'm aiming at ~100 new lines of code for daemon. Of course I'll use
some batteries included with Python stdlib but they should be safe
already.
It should be rather easy to audit them.


Here's my take on this: https://review.openstack.org/81798

Benchmark included showed on my machine these numbers (average over 100
iterations):

Running 'ip a':
   ip a :   4.565ms
  sudo ip a :  13.744ms
sudo rootwrap conf ip a : 102.571ms
 daemon.run('ip a') :   8.973ms
Running 'ip netns exec bench_ns ip a':
   sudo ip netns exec bench_ns ip a : 162.098ms
 sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
  daemon.run('ip netns exec bench_ns ip a') : 129.876ms

So it looks like running daemon is actually faster than running "sudo".


Interesting result.  Which versions of sudo and ip and with how many 
interfaces on the system?


For consistency's sake (however foolish it may be) and purposes of 
others being able to reproduce results and all that, stating the number 
of interfaces on the system and versions and such would be a Good Thing.
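For anyone wanting to reproduce numbers of this shape (average wall-clock time
over repeated runs), a minimal harness could look like the sketch below; it is
only an illustration assuming iproute2 and sudo are installed, not the
benchmark attached to the review:

import os
import subprocess
import time

def average_ms(cmd, iterations=100):
    # Average wall-clock time of running "cmd", output discarded.
    with open(os.devnull, "wb") as devnull:
        start = time.time()
        for _ in range(iterations):
            subprocess.call(cmd, stdout=devnull, stderr=devnull)
    return (time.time() - start) * 1000.0 / iterations

for cmd in (["ip", "a"], ["sudo", "ip", "a"]):
    print("%-10s : %8.3fms" % (" ".join(cmd), average_ms(cmd)))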


happy benchmarking,

rick jones

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-20 Thread David Kranz

On 03/20/2014 06:15 AM, Sean Dague wrote:

On 03/20/2014 05:49 AM, Nadya Privalova wrote:

Hi all,
First of all, thanks for your suggestions!

To summarize the discussions here:
1. We are not going to install Mongo (because "it's wrong" ?)

We are not going to install Mongo "not from base distribution", because
we don't do that for things that aren't python. Our assumption is
dependent services come from the base OS.

That being said, being an integrated project means you have to be able
to function, sanely, on an sqla backend, as that will always be part of
your gate.
This is a claim I think needs a bit more scrutiny if by "sanely" you 
mean "performant". It seems we have an integrated project that no one 
would deploy using the sql db driver we have in the gate. Is anyone 
doing that?  Is having a scalable sql back end a goal of ceilometer?


More generally, if there is functionality that is of great importance to 
any cloud deployment (and we would not integrate it if we didn't think 
it was) that cannot be deployed at scale using sqla, are we really going 
to say it should not be a part of OpenStack because we refuse, for 
whatever reason, to run it in our gate using a driver that would 
actually be used? And if we do demand an sqla backend, how much time 
should we spend trying to optimize it if no one will really use it? 
Though the slow heat job is a little different because the slowness 
comes directly from running real use cases, perhaps we should just set 
up a "slow ceilometer" job if the sql version is too slow for its budget 
in the main job.


It seems like there is a similar thread, at least in part, about this 
around marconi.


 -David








___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-20 Thread Jorge Miramontes
Thanks for the input. I too was thinking "IP Access Control" could be solved 
with the firewall service in Neutron. To clarify what I mean check out our 
current API docs on this feature 
here.

Cheers,
--Jorge

From: Eugene Nikanorov <enikano...@mirantis.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Thursday, March 20, 2014 1:35 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

Hi folks, my comments inlined:


On Thu, Mar 20, 2014 at 6:13 AM, Youcef Laribi <youcef.lar...@citrix.com> wrote:
Jorge,

Thanks for taking the time to put up a requirements list. Some comments below:

  *   Static IP Addresses
 *   Our current Cloud Load Balancing (CLB) offering utilizes static IP 
addresses which is something our customers really like, especially when setting 
up DNS. AWS for example, gives you an A record which you CNAME to.
This should also already be addressed, as you can today specify the VIP’s IP 
address explicitly on creation. We do not have DNS-based support for LB like in 
AWS ELB though.
Right, it's already there. Probably that's why it confused me :)

  *   Active/Passive Failover
 *   I think this is solved with multiple pools.
The multiple pools support that is coming with L7 rules is to support 
content-switching based on L7 HTTP information (URL, headers, etc.). There is 
no support today for an active vs. passive pool.
I'm not sure that's the priority. It depends on if this is widely supported 
among vendors.


  *   IP Access Control
 *   Our current CLB offering allows the user to restrict access through 
their load balancer by blacklisting/whitelisting cidr blocks and even 
individual ip addresses. This is just a basic security feature.
Is this controlling access to the VIP’s IP address or to pool members IP 
addresses? There is also a Firewall service in Neutron. Could this feature 
better fit in that service?
Agree, it's better to utilize what fwaas has to offer.

Eugene.



Youcef

From: Jorge Miramontes 
[mailto:jorge.miramon...@rackspace.com]
Sent: Wednesday, March 19, 2014 11:44 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

Oleg, thanks for the updates.

Eugene, High/Medium/Low is fine with me. I really just wanted to find a way to 
rank even amongst all of 'X' priorities. As people start adding more items we 
may need more columns to add things such as this, links to blueprints (per 
Ryan's idea), etc. In terms of the requirements marked with a '?' I can try to 
clarify here:


  *   Static IP Addresses

 *   Our current Cloud Load Balancing (CLB) offering utilizes static IP 
addresses which is something our customers really like, especially when setting 
up DNS. AWS for example, gives you an A record which you CNAME to.

  *   Active/Passive Failover

 *   I think this is solved with multiple pools.

  *   IP Access Control

 *   Our current CLB offering allows the user to restrict access through 
their load balancer by blacklisting/whitelisting cidr blocks and even 
individual ip addresses. This is just a basic security feature.

Cheers,
--Jorge

From: Eugene Nikanorov <enikano...@mirantis.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Wednesday, March 19, 2014 7:32 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

Hi Jorge,

Thanks for taking care of the page. I've added priorities, although I'm not 
sure we need precise priority weights.
Those features that still have '?' need further clarification.

Thanks,
Eugene.


On Wed, Mar 19, 2014 at 11:18 AM, Oleg Bondarev <obonda...@mirantis.com> wrote:
Hi Jorge,

Thanks for taking care of this and bringing it all together! This will be 
really useful for LBaaS discussions.
I updated the wiki to include L7 rules support and also marking already 
implemented requirements.

Thanks,
Oleg

On Wed, Mar 19, 2014 at 2:57 AM, Jorge Miramontes <jorge.miramon...@rackspace.com> wrote:
Hey Neutron LBaaS folks,

Per last week's IRC meeting I have created a preliminary requirements &
use case wiki page. I requested adding such a page since there appears to
be a lot of new interest in load balancing and feel that we need a
structured way to align everyone's interest in the project. Furthermore,
it appears that understanding everyone's requirements and use cases will
aid in the current object model discussion we all have been having. That
being said, this 

Re: [openstack-dev] Display images/icon in Horizon Tables

2014-03-20 Thread Lyle, David
This is easy to accomplish by creating a custom template for a cell.  In your 
table, instead of providing a data field for the column, provide a method name, 
and have that method load a custom HTML template.

Here is an example of this method (without an image): 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/tables.py#L653

But adding an <img> in the HTML template works fine.
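A minimal sketch of what that can look like (the class, template path and
field names below are made up for illustration, not taken from horizon):

from django.template.loader import render_to_string

from horizon import tables

def render_icon(datum):
    # "datum" is the row object; the template is assumed to contain the
    # <img> tag, e.g. pointing at a URL field on the row.
    return render_to_string("mypanel/_icon.html", {"datum": datum})

class IconTable(tables.DataTable):
    # Passing a callable instead of a field name makes horizon call it
    # with the row object to produce the cell contents.
    icon = tables.Column(render_icon, verbose_name="Icon")
    name = tables.Column("name", verbose_name="Name")

    class Meta:
        name = "icons"
        verbose_name = "My Items"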

David

From: Dave Johnston [mailto:dave.johns...@owmobility.com] 
Sent: Thursday, March 20, 2014 9:20 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Display images/icon in Horizon Tables

Hi,

I'm creating a custom panel for horizon.  I have developed a table that 
displays some information, and I would like to add a 'icon' to each row.  
Ideally, I want to be able to specify the URL of a remote image, and have it 
displayed using the <img> tag.

Can this be achieved with filters, or some other mechanism ?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Russell Bryant
On 03/20/2014 11:11 AM, Chuck Thier wrote:
> 
> I agree this is quite an issue but I also think that pretending that
> we'll be able to let OpenStack grow with a minimum set of databases,
> brokers and web servers is a bit unrealistic. The set of supported
> technologies won't be able to fulfill the needs of all the
> yet-to-be-discovered *amazing* projects.
> 
> 
> Or continue to ostracize current *amazing* projects. ;) 
> 
> There has long been a rift in the Openstack community around the
> implementation details of swift.   I know someone mentioned earlier, but
> I want to focus on the fact that Swift (like marconi) is a very
> different service.  The API *is* the product.  With something like Nova,
> the API can be down, but users can still use their VM's.  For swift, if
> the API is down, the whole product is down.  We have a very different
> set of constraints that we have to work with, and thus why we often have
> to take very different approaches.  There absolutely can't be a one fit
> solution for all.
> 
> If we are going to be so strict about what an Openstack project uses,
> are we then by the same token going to kick swift out of Openstack
> because it will *never* use Pecan?  And I say that not because I think
> Pecan is a bad tool, just not the right tool for swift.

I don't think we'd kick out an existing project over this point.  We've
already said that we don't expect existing projects to migrate an
existing API.  It's a movement to standardize for new APIs.  If swift
were building a new API, I do think it would be good to into this in
more detail.  For the existing one, I think it's fine.

Swift has more "different than everything else" issues than just library
choices.  It's a real problem IMO, but I'd rather separate that
discussion from this thread.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] [Nova] use Keystone V3 token to volume attachment

2014-03-20 Thread Matt Riedemann



On 3/19/2014 10:02 AM, Matthew Treinish wrote:

On Wed, Mar 19, 2014 at 09:35:34AM -0500, Matt Riedemann wrote:



On 3/19/2014 2:48 AM, Shao Kai SK Li wrote:

Hello:

  I am working on this
patch(https://review.openstack.org/#/c/77524/) to fix bugs about volume
attach failure with keystone V3 token.

  Just wonder, is there some blue prints or plans in Juno to address
keystone V3 support in nova ?

  Thanks you in advance.


Best Regards~~~

Li, Shaokai


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I have this on the nova meeting agenda for tomorrow [1].  I would
think at a minimum this means running compute tests in Tempest
against a keystone v3 backend.  I'm not sure what the current state
of Tempest is regarding keystone v3.  Note that this isn't the only
thing that made it into nova in Icehouse related to keystone v3 [2].


On the tempest side there are some dedicated keystone v3 api tests, I'm not
sure how well things are covered there though. On using keystone v3 for auth
for the other tests tempest doesn't quite support that yet. Andrea Frittoli is
working on a bp to get this working:

https://blueprints.launchpad.net/tempest/+spec/multi-keystone-api-version-tests

But, at this point it will probably end up being early Juno thing before this
can be enabled everywhere in tempest.

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Furthermore Russell talked to Dolph in IRC and Dolph created this 
blueprint for planning the path forward from keystone v2 to v3:


https://blueprints.launchpad.net/keystone/+spec/document-v2-to-v3-transition

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-20 Thread Russell Bryant
We recently discussed the idea of using gerrit to review blueprint
specifications [1].  There was a lot of support for the idea so we have
proceeded with putting this together before the start of the Juno
development cycle.

We now have a new project set up, openstack/nova-specs.  You submit
changes to it just like any other project in gerrit.  Find the README
and a template for specifications here:

  http://git.openstack.org/cgit/openstack/nova-specs/tree/README.rst

  http://git.openstack.org/cgit/openstack/nova-specs/tree/template.rst

The blueprint process wiki page has also been updated to reflect that we
will be using this for Nova:

  https://wiki.openstack.org/wiki/Blueprints#Nova

Note that *all* Juno blueprints, including ones that were previously
approved, must go through this new process.  This will help ensure that
blueprints previously approved still make sense, as well as ensure that
all Juno specs follow a more complete and consistent format.

Before the flood of spec reviews start, we would really like to get
feedback on the content of the spec template.  It includes things like
"deployer impact" which could use more input.  Feel free to provide
feedback on list, or just suggest updates via proposed changes in gerrit.

I suspect this process will evolve a bit throughout Juno, but I'm very
excited about the positive impact it is likely to have on our overall
result.

Thanks!

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-March/029232.html

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] python-keystoneclient unit tests only if python-memcache is installed

2014-03-20 Thread Dolph Mathews
Yes, those tests are conditionally executed if
https://pypi.python.org/pypi/python-memcached/ is installed and if so,
memcached is assumed to be accessible on localhost. Unfortunately the test
suite doesn't have a sanity check for that assumption, so the
test failures aren't particularly helpful.
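For what it's worth, a hedged sketch of the kind of sanity check that could
turn these into skips rather than failures (illustrative only, not the actual
keystoneclient test code; class and key names are made up):

import testtools

try:
    import memcache  # provided by the python-memcached package
except ImportError:
    memcache = None

class MemcacheBackedTest(testtools.TestCase):
    def setUp(self):
        super(MemcacheBackedTest, self).setUp()
        if memcache is None:
            self.skipTest("python-memcached is not installed")
        client = memcache.Client(["127.0.0.1:11211"])
        # set() returns a falsy value when the daemon is unreachable
        if not client.set("keystoneclient-sanity", "ok"):
            self.skipTest("memcached is not running on localhost:11211")

    def test_cache_roundtrip(self):
        # the real tests would exercise auth_token middleware caching here
        client = memcache.Client(["127.0.0.1:11211"])
        client.set("k", "this_data")
        self.assertEqual("this_data", client.get("k"))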


On Thu, Mar 20, 2014 at 10:10 AM, Thomas Goirand  wrote:

> Hi,
>
> I've noticed that if there's python-memcache installed, building the
> python-keystoneclient (last version: 0.6.0) will produce some unit tests
> errors (see below the first 2 errors, there's 7 of them in total). If I
> apt-get purge python-memcache, the errors don't show.
>
> I'm worried that this is the symptoms of a problem with some backends.
> Can someone confirm if there's a real problem or if I should just
> declare a build-conflict?
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> ==
> FAIL:
>
> keystoneclient.tests.test_auth_token_middleware.v2AuthTokenMiddlewareTest.test_encrypt_cache_data
>
> keystoneclient.tests.test_auth_token_middleware.v2AuthTokenMiddlewareTest.test_encrypt_cache_data
> --
> testtools.testresult.real._StringException: Traceback (most recent call
> last):
>   File
>
> "/home/zigo/sources/openstack/icehouse/python-keystoneclient/build-area/python-keystoneclient-0.6.0/keystoneclient/tests/test_auth_token_middleware.py",
> line 784, in test_encryp
> self.assertEqual(self.middleware._cache_get(token), data[0])
>   File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line
> 321, in assertEqual
> self.assertThat(observed, matcher, message)
>   File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line
> 406, in assertThat
> raise mismatch_error
> MismatchError: None != 'this_data'
>
>
> ==
> FAIL:
>
> keystoneclient.tests.test_auth_token_middleware.v2AuthTokenMiddlewareTest.test_no_memcache_protection
>
>
> keystoneclient.tests.test_auth_token_middleware.v2AuthTokenMiddlewareTest.test_no_memcache_protection
> --
> testtools.testresult.real._StringException: Traceback (most recent call
> last):
>   File
>
> "/home/zigo/sources/openstack/icehouse/python-keystoneclient/build-area/python-keystoneclient-0.6.0/keystoneclient/tests/test_auth_token_middleware.py",
> line 815, in test_no_mem
> self.assertEqual(self.middleware._cache_get(token), data[0])
>   File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line
> 321, in assertEqual
> self.assertThat(observed, matcher, message)
>   File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line
> 406, in assertThat
> raise mismatch_error
> MismatchError: None != 'this_data'
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] [Nova] use Keystone V3 token to volume attachment

2014-03-20 Thread Lance Bragstad

Conversation from #openstack-dev starting at 2014-03-20T15:09:49

http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2014-03-20.log


On 3/20/2014 10:47 AM, Matt Riedemann wrote:



On 3/19/2014 10:02 AM, Matthew Treinish wrote:

On Wed, Mar 19, 2014 at 09:35:34AM -0500, Matt Riedemann wrote:



On 3/19/2014 2:48 AM, Shao Kai SK Li wrote:

Hello:

  I am working on this
patch(https://review.openstack.org/#/c/77524/) to fix bugs about 
volume

attach failure with keystone V3 token.

  Just wonder, is there some blue prints or plans in Juno to 
address

keystone V3 support in nova ?

  Thanks you in advance.


Best Regards~~~

Li, Shaokai


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I have this on the nova meeting agenda for tomorrow [1].  I would
think at a minimum this means running compute tests in Tempest
against a keystone v3 backend.  I'm not sure what the current state
of Tempest is regarding keystone v3.  Note that this isn't the only
thing that made it into nova in Icehouse related to keystone v3 [2].


On the tempest side there are some dedicated keystone v3 api tests, 
I'm not
sure how well things are covered there though. On using keystone v3 
for auth
for the other tests tempest doesn't quite support that yet. Andrea 
Frittoli is

working on a bp to get this working:

https://blueprints.launchpad.net/tempest/+spec/multi-keystone-api-version-tests 



But, at this point it will probably end up being early Juno thing 
before this

can be enabled everywhere in tempest.

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Furthermore Russell talked to Dolph in IRC and Dolph created this 
blueprint for planning the path forward from keystone v2 to v3:


https://blueprints.launchpad.net/keystone/+spec/document-v2-to-v3-transition 





--
Lance Bragstad
ldbra...@linux.vnet.ibm.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Thu, Mar 20, 2014 at 7:28 PM, Rick Jones  wrote:

> On 03/20/2014 05:41 AM, Yuriy Taraday wrote:
>
>> Benchmark included showed on my machine these numbers (average over 100
>>  iterations):
>>
>> Running 'ip a':
>>ip a :   4.565ms
>>   sudo ip a :  13.744ms
>> sudo rootwrap conf ip a : 102.571ms
>>  daemon.run('ip a') :   8.973ms
>> Running 'ip netns exec bench_ns ip a':
>>sudo ip netns exec bench_ns ip a : 162.098ms
>>  sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
>>   daemon.run('ip netns exec bench_ns ip a') : 129.876ms
>>
>> So it looks like running daemon is actually faster than running "sudo".
>>
>
> Interesting result.  Which versions of sudo and ip and with how many
> interfaces on the system?
>

Here are the numbers:

% sudo -V
Sudo version 1.8.6p7
Sudoers policy plugin version 1.8.6p7
Sudoers file grammar version 42
Sudoers I/O plugin version 1.8.6p7
% ip -V
ip utility, iproute2-ss130221
% ip a | grep '^[^ ]' | wc -l
5


> For consistency's sake (however foolish it may be) and purposes of others
> being able to reproduce results and all that, stating the number of
> interfaces on the system and versions and such would be a Good Thing.
>

Ok, I'll add them to benchmark output.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Prefix delegation - API Attributes

2014-03-20 Thread Collins, Sean
Hi,

Anthony Veiga and I did a small bit of whiteboarding this morning to
sketch out what a prefix delegation would look like in the Neutron API.

https://wiki.openstack.org/wiki/Neutron/IPv6/PrefixDelegation

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat]Heat use as a standalone component for Cloud Managment over multi IAAS

2014-03-20 Thread sahid
warm is just another client, like the ones we have for the cli. It does
not claim to do what Heat can. It should be useful for preparing templates
to be reused in different OpenStack environments without using shell
scripts or python.

When I said standalone client, I mean there is no need to install services
in your OpenStack cloud to use it.

Regards,
s.

- Original Message -
From: "Thomas Spatzier" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Thursday, March 20, 2014 9:44:22 AM
Subject: Re: [openstack-dev] [Heat]Heat use as a standalone component for Cloud 
Managment over multi IAAS

Just out of curiosity: what is the purpose of project "warm"? From the wiki
page and the sample it looks pretty much like what Heat is doing.
And "warm" is almost "HOT" so could you imagine your use cases can just be
addressed by Heat using HOT templates?

Regards,
Thomas

sahid  wrote on 18/03/2014 12:56:47:

> From: sahid 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 18/03/2014 12:59
> Subject: Re: [openstack-dev] [Heat]Heat use as a standalone
> component for Cloud Managment over multi IAAS
>
> Sorry for the late of this response,
>
> I'm currently working on a project called Warm.
> https://wiki.openstack.org/wiki/Warm
>
> It is used as a standalone client and tries to deploy small OpenStack
> environments from Yaml templates. You can find some samples here:
> https://github.com/sahid/warm-templates
>
> s.
>
> - Original Message -
> From: "Charles Walker" 
> To: openstack-dev@lists.openstack.org
> Sent: Wednesday, February 26, 2014 2:47:44 PM
> Subject: [openstack-dev] [Heat]Heat use as a standalone component
> for Cloud Managment over multi IAAS
>
> Hi,
>
>
> I am trying to deploy the proprietary application made in my company on
the
> cloud. The pre requisite for this is to have a IAAS which can be either a
> public cloud or private cloud (openstack is an option for a private
IAAS).
>
>
> The first prototype I made was based on a homemade python orchestrator
and
> apache libCloud to interact with IAAS (AWS and Rackspace and GCE).
>
> The orchestrator part is a python code reading a template file which
> contains the info needed to deploy my application. This template file
> indicates the number of VM and the scripts associated to each VM type to
> install it.
>
>
> Now I was trying to have a look on existing open source tool to do the
> orchestration part. I find JUJU (https://juju.ubuntu.com/) or HEAT (
> https://wiki.openstack.org/wiki/Heat).
>
> I am investigating deeper HEAT and also had a look on
> https://wiki.openstack.org/wiki/Heat/DSL which mentioned:
>
> *"Cloud Service Provider* - A service entity offering hosted cloud
services
> on OpenStack or another cloud technology. Also known as a Vendor."
>
>
> I think HEAT as its actual version will not match my requirement but I
have
> the feeling that it is going to evolve and could cover my needs.
>
>
> I would like to know if it would be possible to use HEAT as a standalone
> component in the future (without Nova and other Ostack modules)? The goal
> would be to deploy an application from a template file on multiple cloud
> service (like AWS, GCE).
>
>
> Any feedback from people working on HEAT could help me.
>
>
> Thanks, Charles.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Kurt Griffiths
> The incorporation of AGPLv3 code Into OpenStack Project  is a
>significant decision

To be clear, Marconi does not incorporate any AGPL code itself; pymongo is
Apache2 licensed.

Concerns over AGPL were raised when Marconi was incubated, and I totally
respect that some folks are not comfortable with deploying something like
MongoDB that is AGPL-licensed. Those discussions precipitated the work we
have been doing on SQLAlchemy and Redis drivers. In fact, the sqla driver
was one of the graduation requirements put in place by the TC. Now, if
people want to use something else than what the Marconi team is already
working on, we are more than happy to have them contribute and help us
evolve the driver interface as needed.

On the subject of minimizing the number of different backends that
operators need to manage, some relief may be found by making the backends
for projects more customizable. For example, as we move more projects to
using the oslo caching library, that will give operators an opportunity to
migrate from memcached to, say, Redis. And if OpenStack Service X
(Marconi, for example) supports Redis as a backing store, now the operator
can reuse their Redis infrastructure and know-how.

The software industry has been moving towards hybrid NoSQL+SQL
architectures for a long time now, in order to create best-fit solutions;
I think we’ll see more OpenStack projects following this model in the
future, not less, and so we need to work out a happy path for supporting
these kinds of operating environments.

Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Thu, Mar 20, 2014 at 5:41 PM, Miguel Angel Ajo wrote:

>
>Wow Yuriy, amazing and fast :-), benchmarks included ;-)
>
>The daemon solution only adds 4.5ms, good work. I'll add some comments
> in a while.
>
>Recently I talked with another engineer in Red Hat (working
> in ovirt/vdsm), and they have something like this daemon, and they
> are using BaseManager too.
>
>In our last conversation he told me that the BaseManager has
> a couple of bugs & race conditions that won't be fixed for python2.x,
> I'm waiting for details on those bugs, I'll post them to the thread
> as soon as I have the details.
>

Looking at the log of managers.py and connection.py I don't see any significant
changes landed after 2.7.6 was released (Nov 10). So it looks like those
bugs should be fixed in 2.7.

   If this is coupled to neutron in a way that it can be accepted for
> Icehouse (we're killing a performance bug), or at least so that it can
> be backported, you'd be covering both the short & long term needs.
>

As I said on the meeting I plan to provide change request to Neutron with
some integration with this patch.
I'm also going to engage people involved in rootwrap about my change
request.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-20 Thread Sean Dague
On 03/20/2014 11:35 AM, David Kranz wrote:
> On 03/20/2014 06:15 AM, Sean Dague wrote:
>> On 03/20/2014 05:49 AM, Nadya Privalova wrote:
>>> Hi all,
>>> First of all, thanks for your suggestions!
>>>
>>> To summarize the discussions here:
>>> 1. We are not going to install Mongo (because "it's wrong" ?)
>> We are not going to install Mongo "not from base distribution", because
>> we don't do that for things that aren't python. Our assumption is
>> dependent services come from the base OS.
>>
>> That being said, being an integrated project means you have to be able
>> to function, sanely, on an sqla backend, as that will always be part of
>> your gate.
> This is a claim I think needs a bit more scrutiny if by "sanely" you
> mean "performant". It seems we have an integrated project that no one
> would deploy using the sql db driver we have in the gate. Is any one
> doing that?  Is having a scalable sql back end a goal of ceilometer?
> 
> More generally, if there is functionality that is of great importance to
> any cloud deployment (and we would not integrate it if we didn't think
> it was) that cannot be deployed at scale using sqla, are we really going
> to say it should not be a part of OpenStack because we refuse, for
> whatever reason, to run it in our gate using a driver that would
> actually be used? And if we do demand an sqla backend, how much time
> should we spend trying to optimize it if no one will really use it?
> Though the slow heat job is a little different because the slowness
> comes directly from running real use cases, perhaps we should just set
> up a "slow ceilometer" job if the sql version is too slow for its budget
> in the main job.
> 
> It seems like there is a similar thread, at least in part, about this
> around marconi.

We required a non mongo backend to graduate ceilometer. So I don't think
it's too much to ask that it actually works.

If the answer is that it will never work and it was a checkbox with no
intent to make it work, then it should be deprecated and removed from
the tree in Juno, with a big WARNING that you shouldn't ever use that
backend. Like Nova now does with all the virt drivers that aren't tested
upstream.

Shipping in tree code that you don't want people to use is bad for
users. Either commit to making it work, or deprecate it and remove it.

I don't see this as the same issue as the slow heat job. Heat,
architecturally, is going to be slow. It spins up real OSes and does
real things to them. There is no way that's ever going to be fast, and
the dedicated job was a recognition that to support this level of
services in OpenStack we need to give them more breathing room.

Architecturally Ceilometer should not be this expensive. We've got some
data showing it to be aberrant from where we believe it should be. We
should fix that.

Once we get a base OS in the gate that lets us direct install mongo from
base packages, we can also do that. Or someone can 3rd party it today.
Then we'll even have comparative results to understand the differences.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Rick Jones

On 03/20/2014 09:07 AM, Yuriy Taraday wrote:

On Thu, Mar 20, 2014 at 7:28 PM, Rick Jones <rick.jon...@hp.com> wrote:
Interesting result.  Which versions of sudo and ip and with how many
interfaces on the system?


Here are the numbers:

% sudo -V
Sudo version 1.8.6p7
Sudoers policy plugin version 1.8.6p7
Sudoers file grammar version 42
Sudoers I/O plugin version 1.8.6p7
% ip -V
ip utility, iproute2-ss130221
% ip a | grep '^[^ ]' | wc -l
5

For consistency's sake (however foolish it may be) and purposes of
others being able to reproduce results and all that, stating the
number of interfaces on the system and versions and such would be a
Good Thing.


Ok, I'll add them to benchmark output.


Since there are only five interfaces on the system, it likely doesn't 
make much of a difference in your specific benchmark but the 
top-of-trunk version of sudo has the fix/enhancement to allow one to 
tell it via sudo.conf to not grab the list of interfaces on the system.


Might be worthwhile though to take the interface count out to 2000 or 
more in the name of doing things at scale.  Namespace count as well.


happy benchmarking,

rick jones


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-20 Thread Eugene Nikanorov
>
>>>- Active/Passive Failover
>>>   - I think this is solved with multiple pools.
>>>
>>> The multiple pools support that is coming with L7 rules is to support
>>> content-switching based on L7 HTTP information (URL, headers, etc.). There
>>> is no support today for an active vs. passive pool.
>>>
>> I'm not sure that's the priority. It depends on if this is widely
>> supported among vendors.
>>
>
> A commercial load balancer that doesn't have high availability features?
> Is there really such a thing still being sold in 2014? ;)
>
I might be missing something fundamental here, but we're talking about
'additional' HA at pool level? Why not just add nodes to the pool?


>
> Also, Jorge-- thanks for creating that page! I've made a few additions to
> it as well that I'd love to see prioritized.
>


>
> Stephen
>
>
>
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Sean Dague
On 03/20/2014 12:29 PM, Kurt Griffiths wrote:
>> The incorporation of AGPLv3 code Into OpenStack Project  is a
>> significant decision
> 
> To be clear, Marconi does not incorporate any AGPL code itself; pymongo is
> Apache2 licensed.
> 
> Concerns over AGPL were raised when Marconi was incubated, and I totally
> respect that some folks are not comfortable with deploying something like
> MongoDB that is AGPL-licensed. Those discussions precipitated the work we
> have been doing on SQLAlchemy and Redis drivers. In fact, the sqla driver
> was one of the graduation requirements put in place but the TC. Now, if
> people want to use something else than what the Marconi team is already
> working on, we are more than happy to have them contribute and help us
> evolve the driver interface as needed.
> 
> On the subject of minimizing the number of different backends that
> operators need to manage, some relief may be found by making the backends
> for projects more customizable. For example, as we move more projects to
> using the oslo caching library, that will give operators an opportunity to
> migrate from memcached to, say, Redis. And if OpenStack Service X
> (Marconi, for example) supports Redis as a backing store, now the operator
> can reuse their Redis infrastructure and know-how.

Yep, that's definitely goodness.

> The software industry has been moving towards hybrid NoSQL+SQL
> architectures for a long time now, in order to create best-fit solutions;
> I think we’ll see more OpenStack projects following this model in the
> future, not less, and so we need to work out a happy path for supporting
> these kinds of operating environments.

Absolutely, but let's at least be deliberate about it. Also, the burden,
and thus concern, would go way down if the service actually used heat
and/or nova to deploy its backend as well, like Savanna and Trove do. It
seems like we've got all this infrastructure in OpenStack already to
deploy and manage things on computes programmatically. It would be nice
to reuse that for application centric services.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Display images/icon in Horizon Tables

2014-03-20 Thread Dave Johnston
Thanks,

That worked perfectly.

I defined a method like this:

# (relies on "from django import template" being in scope)
def get_image_url(stack):
    template_name = 'openwave/stack_catalogue/_icon.html'
    context = {"stack": stack}
    return template.loader.render_to_string(template_name, context)

Stack is my class that has a field called image_icon.

With in the _icon.html template, I simply wrote:




From: Lyle, David 
Sent: 20 March 2014 15:45
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Display images/icon in Horizon Tables

This is easy to accomplish by creating a custom template for a cell.  In your 
table, instead of providing a data field for the column, provide a method name, 
have that method load a custom HTML template.

Here is an example of this method (without an image): 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/tables.py#L653

But adding an <img> in the HTML template works fine.

David

From: Dave Johnston [mailto:dave.johns...@owmobility.com]
Sent: Thursday, March 20, 2014 9:20 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Display images/icon in Horizon Tables

Hi,

I'm creating a custom panel for horizon.  I have developed a table that 
displays some information, and I would like to add a 'icon' to each row.
Ideally, I want to be able to specify the URL of a remote image, and have it 
displayed using the <img> tag.

Can this be achieved with filters, or some other mechanism ?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Thu, Mar 20, 2014 at 8:23 PM, Rick Jones  wrote:

> On 03/20/2014 09:07 AM, Yuriy Taraday wrote:
>
>> On Thu, Mar 20, 2014 at 7:28 PM, Rick Jones > > wrote:
>> Interesting result.  Which versions of sudo and ip and with how many
>> interfaces on the system?
>>
>>
>> Here are the numbers:
>>
>> % sudo -V
>> Sudo version 1.8.6p7
>> Sudoers policy plugin version 1.8.6p7
>> Sudoers file grammar version 42
>> Sudoers I/O plugin version 1.8.6p7
>> % ip -V
>> ip utility, iproute2-ss130221
>> % ip a | grep '^[^ ]' | wc -l
>> 5
>>
>> For consistency's sake (however foolish it may be) and purposes of
>> others being able to reproduce results and all that, stating the
>> number of interfaces on the system and versions and such would be a
>> Good Thing.
>>
>>
>> Ok, I'll add them to benchmark output.
>>
>
> Since there are only five interfaces on the system, it likely doesn't make
> much of a difference in your specific benchmark but the top-of-trunk
> version of sudo has the fix/enhancement to allow one to tell it via
> sudo.conf to not grab the list of interfaces on the system.
>
> Might be worthwhile though to take the interface count out to 2000 or more
> in the name of doing things at scale.  Namespace count as well.


Given that this benchmark is created to show that my changes are worth
doing and they already show that my approach is almost 2x faster than sudo,
slowing down sudo will only enhance this difference. I don't think we
should add this to the benchmark itself.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-20 Thread David Kranz

On 03/20/2014 12:31 PM, Sean Dague wrote:

On 03/20/2014 11:35 AM, David Kranz wrote:

On 03/20/2014 06:15 AM, Sean Dague wrote:

On 03/20/2014 05:49 AM, Nadya Privalova wrote:

Hi all,
First of all, thanks for your suggestions!

To summarize the discussions here:
1. We are not going to install Mongo (because "it's wrong" ?)

We are not going to install Mongo "not from base distribution", because
we don't do that for things that aren't python. Our assumption is
dependent services come from the base OS.

That being said, being an integrated project means you have to be able
to function, sanely, on an sqla backend, as that will always be part of
your gate.

This is a claim I think needs a bit more scrutiny if by "sanely" you
mean "performant". It seems we have an integrated project that no one
would deploy using the sql db driver we have in the gate. Is any one
doing that?  Is having a scalable sql back end a goal of ceilometer?

More generally, if there is functionality that is of great importance to
any cloud deployment (and we would not integrate it if we didn't think
it was) that cannot be deployed at scale using sqla, are we really going
to say it should not be a part of OpenStack because we refuse, for
whatever reason, to run it in our gate using a driver that would
actually be used? And if we do demand an sqla backend, how much time
should we spend trying to optimize it if no one will really use it?
Though the slow heat job is a little different because the slowness
comes directly from running real use cases, perhaps we should just set
up a "slow ceilometer" job if the sql version is too slow for its budget
in the main job.

It seems like there is a similar thread, at least in part, about this
around marconi.

We required a non mongo backend to graduate ceilometer. So I don't think
it's too much to ask that it actually works.

If the answer is that it will never work and it was a checkbox with no
intent to make it work, then it should be deprecated and removed from
the tree in Juno, with a big WARNING that you shouldn't ever use that
backend. Like Nova now does with all the virt drivers that aren't tested
upstream.

Shipping in tree code that you don't want people to use is bad for
users. Either commit to making it work, or deprecate it and remove it.

I don't see this as the same issue as the slow heat job. Heat,
architecturally, is going to be slow. It spins up real OSes and does
real thinks to them. There is no way that's ever going to be fast, and
the dedicated job was a recognition that to support this level of
services in OpenStack we need to give them more breathing room.
Peace. I specifically noted that difference in my original comment. And 
for that reason the heat slow job may not be temporary.


Architecturally Ceilometer should not be this expensive. We've got some
data showing it to be aberrant from where we believe it should be. We
should fix that.
There are plenty of cases where we have had code that passes gate tests 
with acceptable performance but falls over in real deployment. I'm just 
saying that having a driver that works ok in the gate but does not work 
for real deployments is of no more value than not having it at all. 
Maybe less value.
How do you propose to solve the problem of getting more ceilometer tests 
into the gate in the short run? As a practical measure I don't see why 
it is so bad to have a separate job until the complex issue of whether 
it is possible to have a real-world performant sqla backend is resolved. 
Or did I miss something and it has already been determined that sqla 
could be used for large-scale deployments if we just fixed our code?


Once we get a base OS in the gate that lets us direct install mongo from
base packages, we can also do that. Or someone can 3rd party it today.
Then we'll even have comparative results to understand the differences.

Yes. Do you know which base OS's are candidates for that?

 -David



-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Vishvananda Ishaya

On Mar 20, 2014, at 5:52 AM, Sean Dague  wrote:

> On 03/20/2014 08:36 AM, Mark McLoughlin wrote:
>> On Thu, 2014-03-20 at 12:07 +0100, Thierry Carrez wrote:
>>> Monty Taylor wrote:
 On 03/20/2014 01:30 AM, Radcliffe, Mark wrote:
> The problem with AGPL is that the scope is very uncertain and the
> determination of the consequences is very fact-intensive. I was the
> chair of the User Committee in developing the GPLv3 and I am therefore
> quite familiar with the legal issues.  The incorporation of AGPLv3
> code into the OpenStack Project is a significant decision and should not
> be made without input from the Foundation. At a minimum, the
> inclusion of AGPLv3 code means that the OpenStack Project is no longer
> solely an Apache v2 licensed project, because AGPLv3 code cannot be
> licensed under the Apache v2 License.  Moreover, the inclusion of such
> code is inconsistent with the current CLA provisions.
> 
> This code can be included but it is an important decision that should
> be made carefully.
 
 I agree - but in this case, I think that we're not talking about
 including AGPL code in OpenStack as much as we are talking about using
 an Apache2 driver that would talk to a server that is AGPL ... if the
 deployer chooses to install the AGPL software. I don't think we're
 suggesting that downloading or installing openstack itself would involve
 downloading or installing AGPL code.
>>> 
>>> Yes, the issue here is more... a large number of people want to stay
>>> away from AGPL. Should the TC consider adding to the OpenStack
>>> integrated release a component that requires AGPL software to be
>>> installed alongside it ? It's not really a legal issue (hence me
>>> stopping the legal-issues cross-posting).
>> 
>> We need to understand the reasons "people want to stay away from the
>> AGPL". Those reasons appear to be legal reasons, and not necessarily
>> well founded. I think legal-discuss can help tease those out.
>> 
>> I don't (yet) accept that there's a reasonable enough concern for the
>> OpenStack project to pander to.
>> 
>> I'm no fan of the AGPL, but we need to be able to articulate any policy
>> decision we make here beyond "some people don't like the AGPL".
>> 
>> For example, if we felt the AGPL fears weren't particularly well founded
>> then we could make a policy decision that projects should have an
>> abstraction that would allow those with AGPL fears to add support for
>> another technology ... but that the project wouldn't be required to do
>> so itself before graduating.
>> 
>>> This was identified early on as a concern with Marconi and the SQLA
>>> support was added to counter that concern. The question then becomes,
>>> how usable this SQLA option actually is ? If it's sluggish, unusable in
>>> production or if it doesn't fully support the proposed Marconi API, then
>>> I think we still have that concern.
>> 
>> I understood that a future Redis driver was what the Marconi team had in
>> mind to address this concern and the SQLA driver wasn't intended for
>> production use.
> 
> This is a little problematic from a deployer requirement perspective.
> Today we say to all deployers: To run all of OpenStack:
> 
> * You will need to run and maintain a relational database environment
> (and probably cluster it for scale / availability)
> * You will need to run and maintain a queue system (rabbit, qpid, zmq)
> * You will need to run and maintain a web server (apache)

(Don’t forget load balancers and/or proxies)

> 
> Deciding to integrate something to OpenStack that requires a base piece
> of software that is outside of that list is a pretty big deal.
> 
> Forget license for a moment, just specifying that you have to run and
> maintain and monitor a nosql environment in addition to a RDBMS is
> definitely adding substantial burden to OpenStack deployers.

This is an excellent point that I want to emphasize. Maintaining these things
in production is non-trivial; it is the burden of maintenance that has
prevented some of us from integrating Mongo. It is especially troublesome that
there are not options on the noSQL side. If you are already running redis
or cassandra or couch or riak in production, you have to install mongo as well.
Maintaining a noSQL db is a problem, but maintaining a SPECIFIC noSQL db with
no option to replace it with another makes the problem worse.

Vish 
> 
> Big public cloud shops, that's probably fine for them. However OpenStack
> as a site level deploy ? Having to add expertise in nosql engine
> lifecycle management is a real cost. And we better be explicit about it
> from a project wide stance if that's what we're saying.
> 
>   -Sean
> 
> -- 
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi

Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Gil Yehuda
>To be clear, Marconi does not incorporate any AGPL code itself; pymongo is
Apache2 licensed.

Understood, but here's the rub. Someone else is going to want to build on this 
(which is the point of this open source project). Whereas 'pymongo' is Apache 
licensed, since the copyright holder, MongoDB Inc., declared it as such, the 
authors of the other community drivers (for other language bindings and 
features of MongoDB, etc.) are also releasing drivers under the Apache or 
BSD licenses too (thinking that's OK to do since no one is telling them 
otherwise). That community is unaware of their legal obligations when creating 
drivers to an AGPL database. Thus if one of those community drivers gets 
intertwined in a court case clarifying their license to be infringing on the 
AGPL terms, we've inadvertently impacted our community. This is a credible risk 
that is difficult for OpenStack to abate, since the problem lies with the way a 
different community chose to operate.

There are three interconnected issues here:
1. The confusion that MongoDB has created in Open Source licensing due to the 
asymmetric control they have on licensing terms.
2. The diligence of OpenStack to remain careful with OpenStack's CLA 
compliance and Apache-friendly terms.
3. The pragmatics of the effect MongoDB would have on OpenStack's economic 
viability and legal risks at large.

The first problem is out of scope for this list. But I think people who rely 
upon Open Source for their business ought to understand what MongoDB is doing 
to open source software. The second is, to your point, the issue in this 
conversation. As long as OpenStack only uses Apache licensed code >>from 
MongoDB Inc.<< and diligently avoids using any open source contributions from 
any community contributor to the MongoDB ecosystem, then you remain compliant 
with your CLA. But you will have to exclude the rest of the MongoDB community 
(which goes against the spirit of Open Source -- back to the problem #1, which 
is out of scope). As for #3, I think the foundation needs to weigh in on the 
pragmatics here, since this has an economic and legal impact to the entire 
endeavor, not just to persisting data in one component of the project.

Gil Yehuda
Sr. Director Of Open Source, Open Standards

-Original Message-
From: Kurt Griffiths [mailto:kurt.griffi...@rackspace.com] 
Sent: Thursday, March 20, 2014 9:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue 
implementation vs a provisioning API?

> The incorporation of AGPLv3 code into the OpenStack Project is a 
>significant decision



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-20 Thread Youcef Laribi
Stephen,

I don’t think the active/passive pools feature is referring to the HA of 
load balancers. This is about the ability to divide the list of members 
servicing load-balanced requests into 2 groups: the first one is active and the 
second one is passive (or a backup pool). If all the members in the first pool 
are down, then the passive pool’s members begin serving traffic for that VIP.

Youcef

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Thursday, March 20, 2014 7:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

Hi y'all!

It's good to be back, eh!

On Wed, Mar 19, 2014 at 11:35 PM, Eugene Nikanorov 
mailto:enikano...@mirantis.com>> wrote:

  *   Active/Passive Failover

 *   I think this is solved with multiple pools.
The multiple pools support that is coming with L7 rules is to support 
content-switching based on L7 HTTP information (URL, headers, etc.). There is 
no support today for an active vs. passive pool.
I'm not sure that's the priority. It depends on if this is widely supported 
among vendors.

A commercial load balancer that doesn't have high availability features? Is 
there really such a thing still being sold in 2014? ;)

Also, Jorge-- thanks for creating that page! I've made a few additions to it as 
well that I'd love to see prioritized.

Stephen




--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-20 Thread Jorge Miramontes
The use case from our customers has been mostly for database (MySql) load 
balancing. If the master goes down then they want another master/slave on 
standby ready to receive traffic. In the simplest case, I think Neutron can 
achieve this with 2 pools with 1 node each. If pool #1 goes down then pool #2 
becomes active. We currently solve this with the notion of primary and 
secondary nodes. If all primary nodes go down then secondary nodes become 
active.

Cheers,
--Jorge

From: Eugene Nikanorov mailto:enikano...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, March 20, 2014 11:35 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki



  *   Active/Passive Failover
 *   I think this is solved with multiple pools.
The multiple pools support that is coming with L7 rules is to support 
content-switching based on L7 HTTP information (URL, headers, etc.). There is 
no support today for an active vs. passive pool.
I'm not sure that's the priority. It depends on if this is widely supported 
among vendors.

A commercial load balancer that doesn't have high availability features? Is 
there really such a thing still being sold in 2014? ;)
I might be missing something fundamental here, but we're talking about 
'additional' HA at pool level? Why not just add nodes to the pool?


Also, Jorge-- thanks for creating that page! I've made a few additions to it as 
well that I'd love to see prioritized.


Stephen




--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] horizon PyPi distribution missing

2014-03-20 Thread Paul Belanger
On Mon, Mar 17, 2014 at 5:22 AM, Timur Sufiev  wrote:
> It depends on openstack_dashboard, namely on
> openstack_dashboard.settings. So it is fine from Murano's point of
> view to have openstack_dashboard published on PyPi. Many thanks for
> considering my request :).
>
We are having this issue too; for now we have worked around it with
pip switches that allow downloading from an external source, but
hopefully we'll get horizon and openstack_dashboard split out in the
near future.

Like you, we've built a dashboard atop of horizon, mostly because of
the existing keystone integration.  Everything else we do is external
to OpenStack.

-- 
Paul Belanger | PolyBeacon, Inc.
Jabber: paul.belan...@polybeacon.com | IRC: pabelanger (Freenode)
Github: https://github.com/pabelanger | Twitter: https://twitter.com/pabelanger

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How does the role stuff work in real production deployment?

2014-03-20 Thread Ali, Haneef
Hi

How does the keystone role stuff work in a real production deployment?  Every 
service treats a user who has a role named "admin" as an admin.
So basically I can't have a separate admin user for keystone, nova, and swift.
Isn't this a security issue?  Given a project, how do I isolate the admin for 
each service?

Thanks
Haneef
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-20 Thread Eugene Nikanorov
Right, it could be solved with pool member weights and an appropriate
scheduling algorithm.
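
To make that concrete, here is a minimal sketch in Python. It is illustrative
only, not Neutron LBaaS code: the Member class, the weight-0-means-backup
convention and pick_member() are assumptions made for the example, not the
actual API.

    # Illustrative sketch (not Neutron LBaaS code): active/passive behaviour
    # expressed through member weights plus a scheduler that only falls back
    # to weight-0 ("backup") members when no weighted ("primary") member is up.
    import random

    class Member(object):
        def __init__(self, address, weight, alive=True):
            self.address = address
            self.weight = weight   # assumption: weight 0 marks a backup member
            self.alive = alive     # health-monitor result

    def pick_member(members):
        """Weighted-random pick among live primaries; fall back to backups."""
        primaries = [m for m in members if m.alive and m.weight > 0]
        if primaries:
            point = random.uniform(0, sum(m.weight for m in primaries))
            for m in primaries:
                point -= m.weight
                if point <= 0:
                    return m
        backups = [m for m in members if m.alive and m.weight == 0]
        return random.choice(backups) if backups else None

    pool = [Member('10.0.0.10', 5), Member('10.0.0.11', 5),
            Member('10.0.0.20', 0)]          # passive/backup node
    print(pick_member(pool).address)         # one of the two primaries
    pool[0].alive = pool[1].alive = False
    print(pick_member(pool).address)         # 10.0.0.20 takes over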

Eugene.


On Thu, Mar 20, 2014 at 9:21 PM, Jorge Miramontes <
jorge.miramon...@rackspace.com> wrote:

>   The use case from our customers has been mostly for database (MySql)
> load balancing. If the master goes down then they want another master/slave
> on standby ready to receive traffic. In the simplest case, I think Neutron
> can achieve this with 2 pools with 1 node each. If pool #1 goes down then
> pool #2 becomes active. We currently solve this with the notion of primary
> and secondary nodes. If all primary nodes go down then secondary nodes
> become active.
>
>  Cheers,
> --Jorge
>
>   From: Eugene Nikanorov 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, March 20, 2014 11:35 AM
>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki
>
>
>
- Active/Passive Failover
   - I think this is solved with multiple pools.

  The multiple pools support that is coming with L7 rules is to support
 content-switching based on L7 HTTP information (URL, headers, etc.). There
 is no support today for an active vs. passive pool.

>>>  I'm not sure that's the priority. It depends on if this is widely
>>> supported among vendors.
>>>
>>
>>  A commercial load balancer that doesn't have high availability
>> features? Is there really such a thing still being sold in 2014? ;)
>>
> I might be missing something fundamental here, but we're talking about
> 'additional' HA at pool level? Why not just add nodes to the pool?
>
>
>>
>>  Also, Jorge-- thanks for creating that page! I've made a few additions
>> to it as well that I'd love to see prioritized.
>>
>
>
>>
>>  Stephen
>>
>>
>>
>>
>>  --
>> Stephen Balukoff
>> Blue Box Group, LLC
>> (800)613-4305 x807
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-20 Thread Solly Ross
IMHO, I feel like many of the things in the "Implementation" section should 
just be in the
launchpad BP, and not in the git-tracked spec.  While I think having
the BP in Gerrit is a great idea, I feel like the details of the 
"Implementation" section
(assignee, etc) will lead to "trivial" changes that will be a burden on the 
people who need
to approve changes and will distract from "substantial" changes (changes to the 
design parts
of the blueprint, etc).

Best Regards,
Solly Ross

- Original Message -
From: "Russell Bryant" 
To: "OpenStack Development Mailing List" 
Sent: Thursday, March 20, 2014 11:49:37 AM
Subject: [openstack-dev] [Nova] Updates to Juno blueprint review process

We recently discussed the idea of using gerrit to review blueprint
specifications [1].  There was a lot of support for the idea so we have
proceeded with putting this together before the start of the Juno
development cycle.

We now have a new project set up, openstack/nova-specs.  You submit
changes to it just like any other project in gerrit.  Find the README
and a template for specifications here:

  http://git.openstack.org/cgit/openstack/nova-specs/tree/README.rst

  http://git.openstack.org/cgit/openstack/nova-specs/tree/template.rst

The blueprint process wiki page has also been updated to reflect that we
will be using this for Nova:

  https://wiki.openstack.org/wiki/Blueprints#Nova

Note that *all* Juno blueprints, including ones that were previously
approved, must go through this new process.  This will help ensure that
blueprints previously approved still make sense, as well as ensure that
all Juno specs follow a more complete and consistent format.

Before the flood of spec reviews starts, we would really like to get
feedback on the content of the spec template.  It includes things like
"deployer impact" which could use more input.  Feel free to provide
feedback on list, or just suggest updates via proposed changes in gerrit.

I suspect this process will evolve a bit throughout Juno, but I'm very
excited about the positive impact it is likely to have on our overall
result.

Thanks!

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-March/029232.html

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] horizon PyPi distribution missing

2014-03-20 Thread Akihiro Motoki
As a workaround you can use a tarball in requirements.txt (or
test-requirements.txt).
Each version of Horizon/openstack_dashboard is available at
http://tarballs.openstack.org/horizon/.
Each tarball is regenerated every time the corresponding branch or tag is updated.
If you would like to use horizon I-3 as a requirement, please add the
following line to your requirements.txt:

http://tarballs.openstack.org/horizon/horizon-2014.1.b3.tar.gz

Thanks,
Akihiro

On Fri, Mar 21, 2014 at 2:22 AM, Paul Belanger
 wrote:
> On Mon, Mar 17, 2014 at 5:22 AM, Timur Sufiev  wrote:
>> It depends on openstack_dashboard, namely on
>> openstack_dashboard.settings. So it is fine from Murano's point of
>> view to have openstack_dashboard published on PyPi. Many thanks for
>> considering my request :).
>>
> We are having this issue too, for now we have worked around it with
> pip switches allowing to download from an external source, but
> hopefully we'll get horizon and openstack_dashboard split out in the
> near future.
>
> Like you, we've built a dashboard atop of horizon, mostly because of
> the existing keystone integration.  Everything else we do is external
> to OpenStack.
>
> --
> Paul Belanger | PolyBeacon, Inc.
> Jabber: paul.belan...@polybeacon.com | IRC: pabelanger (Freenode)
> Github: https://github.com/pabelanger | Twitter: 
> https://twitter.com/pabelanger
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-20 Thread Youcef Laribi
Jorge,

Just to clarify, is this a feature to control which client IP addresses can 
access the VIP?

Thanks,
Youcef

From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Thursday, March 20, 2014 8:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

Thanks for the input. I too was thinking "IP Access Control" could be solved 
with the firewall service in Neutron. To clarify what I mean check out our 
current API docs on this feature 
here.

Cheers,
--Jorge

From: Eugene Nikanorov mailto:enikano...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, March 20, 2014 1:35 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

Hi folks, my comments inlined:

On Thu, Mar 20, 2014 at 6:13 AM, Youcef Laribi 
mailto:youcef.lar...@citrix.com>> wrote:
Jorge,

Thanks for taking the time to put up a requirements list. Some comments below:

  *   Static IP Addresses

 *   Our current Cloud Load Balancing (CLB) offering utilizes static IP 
addresses which is something our customers really like, especially when setting 
up DNS. AWS for example, gives you an A record which you CNAME to.
This should also already be addressed, as you can today specify the VIP's IP 
address explicitly on creation. We do not have DNS-based support for LB like in 
AWS ELB though.
Right, it's already there. Probably that's why it confused me :)

  *   Active/Passive Failover

 *   I think this is solved with multiple pools.
The multiple pools support that is coming with L7 rules is to support 
content-switching based on L7 HTTP information (URL, headers, etc.). There is 
no support today for an active vs. passive pool.
I'm not sure that's the priority. It depends on if this is widely supported 
among vendors.


  *   IP Access Control

 *   Our current CLB offering allows the user to restrict access through 
their load balancer by blacklisting/whitelisting cidr blocks and even 
individual ip addresses. This is just a basic security feature.
Is this controlling access to the VIP's IP address or to pool members IP 
addresses? There is also a Firewall service in Neutron. Could this feature 
better fit in that service?
Agree, it's better to utilize what fwaas has to offer.
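
For readers who have not seen the feature, here is a small illustrative sketch
of the whitelist/blacklist semantics described above, using Python's stdlib
ipaddress module. The rule layout and the assumption that deny rules take
precedence over allow rules are made up for the example, not taken from the
CLB or Neutron APIs.

    # Illustrative sketch only -- not CLB or Neutron code. Rules are CIDR
    # blocks or single addresses; deny rules are assumed to win over allows.
    import ipaddress

    DENY = [ipaddress.ip_network(u'10.0.0.0/24')]
    ALLOW = [ipaddress.ip_network(u'10.0.0.0/8'),
             ipaddress.ip_network(u'192.168.1.17/32')]

    def client_allowed(client_ip):
        addr = ipaddress.ip_address(client_ip)
        if any(addr in net for net in DENY):
            return False
        return any(addr in net for net in ALLOW)

    print(client_allowed(u'10.0.0.5'))   # False: caught by the deny block
    print(client_allowed(u'10.1.2.3'))   # True: allowed by 10.0.0.0/8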

Eugene.



Youcef

From: Jorge Miramontes 
[mailto:jorge.miramon...@rackspace.com]
Sent: Wednesday, March 19, 2014 11:44 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

Oleg, thanks for the updates.

Eugene, High/Medium/Low is fine with me. I really just wanted to find a way to 
rank even amongst all of 'X' priorities. As people start adding more items we 
may need more columns to add things such as this, links to blueprints (per 
Ryan's idea), etc. In terms of the requirements marked with a '?' I can try to 
clarify here:


  *   Static IP Addresses

 *   Our current Cloud Load Balancing (CLB) offering utilizes static IP 
addresses which is something our customers really like, especially when setting 
up DNS. AWS for example, gives you an A record which you CNAME to.

  *   Active/Passive Failover

 *   I think this is solved with multiple pools.

  *   IP Access Control

 *   Our current CLB offering allows the user to restrict access through 
their load balancer by blacklisting/whitelisting cidr blocks and even 
individual ip addresses. This is just a basic security feature.

Cheers,
--Jorge

From: Eugene Nikanorov mailto:enikano...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, March 19, 2014 7:32 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

Hi Jorge,

Thanks for taking care of the page. I've added priorities, although I'm not 
sure we need precise priority weights.
Those features that still have '?' need further clarification.

Thanks,
Eugene.


On Wed, Mar 19, 2014 at 11:18 AM, Oleg Bondarev 
mailto:obonda...@mirantis.com>> wrote:
Hi Jorge,

Thanks for taking care of this and bringing it all together! This will be 
really useful for LBaaS discussions.
I updated the wiki to include L7 rules support and also marking already 
implemented requirements.

Thanks,
Oleg

On Wed, Mar 19, 2014 at 2:57 AM, Jorge Miramontes 
mailto:jorge.miramon...@rackspace.com>> wrote:
Hey Neutron LBaaS folks,

Per last week's IRC meeting I have created a preliminary requirements &
use case wiki page. I req

[openstack-dev] [nova] instances stuck with task_state of REBOOTING

2014-03-20 Thread Chris Friesen
I'm running a havana install, and during some testing I've managed to 
get the system into a state where two instances are up and running but 
are reporting a task_state of REBOOTING.


I can see the nova-api logs showing the soft-reboot request.  I don't 
see a corresponding nova-compute log indicating a successful reboot or 
an error.  I see a subsequent nova-api log for a requested soft-reboot 
that gets a 409 response, presumably because it thinks it's still rebooting.


Has anyone seen something like this before?

I'm a bit surprised that there isn't an audit that cleans this up. 
They've been sitting like this for hours.  Does that count as a bug? 
Presumably a power state of "RUNNING" should be enough to clear the 
task_state.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-20 Thread Matthew Treinish
On Thu, Mar 20, 2014 at 11:35:15AM +, Malini Kamalambal wrote:
> Hello all,
> 
> I have been working on adding tests in Tempest for Marconi, for the last few 
> months.
> While there are many amazing people to work with, the process has been more 
> difficult than I expected.
> 
> Couple of pain-points and suggestions to make the process easier for myself & 
> future contributors.
> 
> 1. The QA requirements for a project to graduate need details beyond the 
> "Project must have a *basic* devstack-gate job set up"
> 2. The scope of Tempest needs clarification  - what tests should be in 
> Tempest vs. in the individual projects? Or should they be in both tempest and 
> the project?
> 
> See details below.
> 
> 1. There is little documentation on graduation requirement from a QA 
> perspective beyond 'Project must have a basic devstack-gate job set up'.
> 
> As a result, I hear different interpretations on what a basic devstack gate 
> job is.
> This topic was discussed in one of the QA meetings a few weeks back [1].
> Based on the discussion there, having a basic job - such as one that will let 
> us know 'if a keystone change broke marconi' was  good enough.
> My efforts in getting Marconi meet graduation requirements w.r.t Tempest was 
> based on the above discussion.
> 
> However, my conversations with the TC during Marconi's graduation review 
> led me to believe that these requirements aren't yet formalized.
> We were told that we needed to have more test coverage in tempest, & having 
> them elsewhere (i.e. functional tests in the Marconi project itself) was not 
> good enough.

So having only looked at the Marconi ML thread and not the actual TC meeting
minutes I might be missing the whole picture. But, from what I saw when I looked
at both a marconi commit and a tempest commit, there is no gating marconi
devstack-gate job on marconi commits. It's only non-voting in the check 
pipeline.
Additionally, there isn't a non-voting job on tempest or devstack-gate. For
example, look at how savanna has its tempest jobs set up, and this is what 
marconi
needs to have.

> 
> I will never debate the value of having good test coverage - after all I 
> define myself professionally as a QA ;)
> I am proud of the unit and functional test suites & the test coverage we have 
> in Marconi [2].
> Marconi team is continuing its efforts in this direction.
> We are looking forward to adding more tests in Tempest and making sure 
> Marconi is in par with the community standards.
> 
> But what frustrates me is that the test requirements seem to evolve, catching 
>  new contributors by surprise.
> 
> It will really help to have these requirements documented in detail - 
> answering at least the following questions,
> a. What tests are needed to graduate - API, Scenario, CLI?
> b. How much coverage is good enough to graduate?
> 
> That will make sure that contributors focus their time & energy in the right 
> direction.
> I am willing to lead the effort to document the QA-level graduation 
> requirements for a project and help solidify them.

Testing contributions will always be an iterative process. The actual test
coverage doesn't matter as much up front. The graduation requirement as I
understood it was just to have the glue in place and to verify that everything
runs. As long as there is steady contribution and interaction from the marconi
community with tempest IMO that matters far more then actually having complete
coverage upfront.

> 
> 2. Clarify the scope of Tempest  - what tests should be in Tempest vs in the 
> individual projects ?
> 
> It sounds like the scope of tempest is to make sure that,
> a. Projects are functionally tested (AND)
> b. Openstack components (a.k.a projects) do not have integration issues.
> 
> Assuming my understanding is correct, does it make sense to have the project 
> specific functional tests in Tempest?
> Troubleshooting failures related to project specific functionality requires 
> deep understanding of the individual projects.
> Isn't it better to leave it to the individual projects to make sure that they 
> are functional?
> That will help the contributors to Tempest spend their time on what only 
> Tempest can do -i.e. identify integration issues.

What do you mean by project specific functional testing? What makes debugging
a marconi failure in a tempest gate job any more involved than debugging a
nova or neutron failure? Part of the point of having an integrated gate is
saying that the project works well with all the others in OpenStack. IMO that's
not just in project functionality but also in community. When there is an issue
with a gate job everyone comes together to work on it. For example if you have
a keystone patch that breaks a marconi test in check there is open communication
about what happened and how to fix it.

That being said there are certain cases where having a project specific
functional test makes sense. For example swift has a functional test job th

Re: [openstack-dev] [nova] instances stuck with task_state of REBOOTING

2014-03-20 Thread Solly Ross
Hi Chris,
Are you in a position to determine whether or not this happens with the 
latest master code?
Either way, it definitely looks like a bug.

If you could give more specific reproduction instructions, that would be most 
useful.

Best Regards,
Solly Ross

- Original Message -
From: "Chris Friesen" 
To: openstack-dev@lists.openstack.org
Sent: Thursday, March 20, 2014 1:59:39 PM
Subject: [openstack-dev] [nova] instances stuck with task_state of REBOOTING

I'm running a havana install, and during some testing I've managed to 
get the system into a state where two instances are up and running but 
are reporting a task_state of REBOOTING.

I can see the nova-api logs showing the soft-reboot request.  I don't 
see a corresponding nova-compute log indicating a successful reboot or 
an error.  I see a subsequent nova-api log for a requested soft-reboot 
that gets a 409 response, presumably because it thinks it's still rebooting.

Has anyone seen something like this before?

I'm a bit surprised that there isn't an audit that cleans this up. 
They've been sitting like this for hours.  Does that count as a bug? 
Presumably a power state of "RUNNING" should be enough to clear the 
task_state.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-20 Thread Stephen Balukoff
Aah! Yes-- you are correct. I was thinking at the load balancer level, not
the pool level.  (Maybe we can clarify this distinction in the requirements
doc?)

But I would also be surprised if there are load balancers today that don't
intrinsically offer "active / backup pool membership" as a standard feature.

Eugene: This is somewhat similar to weighting, but not exactly the same
thing.

Stephen


On Thu, Mar 20, 2014 at 10:17 AM, Youcef Laribi wrote:

> Stephen,
>
>
>
> I don’t think the active/passive pools feature is referring to the HA of
> load balancers. This is about the ability to divide the list of members
> servicing load-balanced requests into 2 groups: the first one is active and
> the second one is passive (or a backup pool). If all the members in the
> first pool are down, then the passive pool’s members begin serving traffic
> for that VIP.
>
>
>
> Youcef
>




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Backwards incompatible API changes

2014-03-20 Thread David Kranz

On 03/20/2014 11:05 AM, Solly Ross wrote:

I concur.  I suspect people/organizations who are doing CD *probably* won't mind
such a change as much as the people who use the versioned releases will mind
backwards-incompatibility.  Correct me if I'm wrong, but doing CD requires a
certain willingness to roll with the punches, so to speak, whereas people using
versioned releases are less likely to be as flexible.

Best Regards,
Solly Ross
It looks like there was no existing unit test for what horizon ended up 
doing, where a red-flagging change to the test would have been needed. 
There is obviously also no tempest test. But I hope that folks doing CD 
would not roll with this sort of punch if they find it, but push back 
immediately to revert the change unless it had gone through whatever api 
change review process we come up with. I presume that it simply was not 
noticed since it is perhaps a bit of an obscure api point. Since 
OpenStack currently advertises 6 month cycle releases and stable apis, 
it would be best to revert it IMO.



 -David

- Original Message -
From: "Thierry Carrez" 
To: openstack-dev@lists.openstack.org
Sent: Thursday, March 20, 2014 6:51:26 AM
Subject: Re: [openstack-dev] [nova] Backwards incompatible API changes

Christopher Yeoh wrote:

The patch was merged in October (just after Icehouse opened) and so has
been used in clouds that do CD for quite a while. After some discussion
on IRC I think we'll end up having to leave this backwards incompatible
change in there - given there are most likely users who now rely on
both sets of behaviour there is no good way to get out of this
situation. I've added a note to the Icehouse release notes.

I still think reverting before release is an option we should consider.
My point is, yes we broke it back in October for people doing CD (and
they might by now have gotten used to it), if we let this to release
we'll then break it for everyone else.

We put a high emphasis into guaranteeing backward compatibility between
releases. I think there would be more damage done if we let this sail to
release, compared to the damage of reverting CD followers to pre-October
behavior...




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Stan Lagun
Kurt,

Your point is that a NoSQL solution may be required for an innovative project,
and that it is MongoDB. But what if another amazing project comes along that
needs CouchDB, neo4j, Riak, (put your favorite NoSQL DB here)? It would be in
the same position, because everyone would say: hey, we already have NoSQL in
OpenStack, so you have to use MongoDB, which is not fair. But it is also obvious
that OpenStack cannot demand that cloud operators maintain MySQL, MongoDB,
CouchDB, neo4j, etc. simultaneously.

I hate to say this (I happen to be a MongoDB fan), but the only way we can
introduce external dependencies to OpenStack is by building technology that
makes it possible for the project itself to be responsible for deployment and
maintenance of that dependency (the DBMS), rather than the cloud operator. It
seems to me that the right way to introduce MongoDB is to invest in projects
like TripleO, Fuel, Murano, Heat and Ironic.


On Thu, Mar 20, 2014 at 9:09 PM, Gil Yehuda  wrote:

> >To be clear, Marconi does not incorporate any AGPL code itself; pymongo is
> Apache2 licensed.
>
> Understood, but here's the rub. Someone else is going to want to build on
> this (which is the point of this open source project). Whereas 'pymongo' is
> Apache licensed, since the copyright holder, MongoDB Inc. declared it as
> such, the authors of the other community drivers (for other language
> bindings and features of MongoDB, etc.) are also releasing drivers under
> the Apache or BSD licenses too (thinking that's OK to do since no one is
> telling them otherwise). That community is unaware of their legal
> obligations when creating drivers to an AGPL database. Thus if one of those
> community drivers gets intertwined in a court case clarifying their license
> to be infringing on the AGPL terms, we've inadvertently impacted our
> community. This is a credible risk that is difficult for OpenStack to
> abate, since the problem lies with the way a different community chose to
> operate.
>
> There are three interconnected issues here:
> 1. The confusion that MongoDB has created in Open Source licensing due to
> the asymmetric control they have on licensing terms.
> 2. The diligence of OpenStack to remain careful with OpenStack's CLA
> compliance and Apache-friendly terms.
> 3. The pragmatics of the effect MongoDB would have on OpenStack's
> economic viability and legal risks at large.
>
> The first problem is out of scope for this list. But I think people who
> rely upon Open Source for their business ought to understand what MongoDB
> is doing to open source software. The second is, to your point, the issue
> in this conversation. As long as OpenStack only uses Apache licensed code
> >>from MongoDB Inc.<< and diligently avoids using any open source
> contributions from any community contributor to the MongoDB ecosystem, then
> you remain compliant with your CLA. But you will have to exclude the rest of
> the MongoDB community (which goes against the spirit of Open Source -- back
> to the problem #1, which is out of scope). As for #3, I think the
> foundation needs to weigh in on the pragmatics here, since this has an
> economic and legal impact to the entire endeavor, not just to persisting
> data in one component of the project.
>
> Gil Yehuda
> Sr. Director Of Open Source, Open Standards
>
> -Original Message-
> From: Kurt Griffiths [mailto:kurt.griffi...@rackspace.com]
> Sent: Thursday, March 20, 2014 9:29 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a
> queue implementation vs a provisioning API?
>
> > The incorporation of AGPLv3 code Into OpenStack Project  is a
> >significant decision
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Backwards incompatible API changes

2014-03-20 Thread Solly Ross
Sorry if my meaning was unclear.  I think we should revert as well.

Best Regards,
Solly Ross

- Original Message -
From: "David Kranz" 
To: openstack-dev@lists.openstack.org
Sent: Thursday, March 20, 2014 2:20:42 PM
Subject: Re: [openstack-dev] [nova] Backwards incompatible API changes

On 03/20/2014 11:05 AM, Solly Ross wrote:
> I concur.  I suspect people/organizations who are doing CD *probably* won't 
> mind
> such a change as much as the people who use the versioned releases will mind
> backwards-incompatibility.  Correct me if I'm wrong, but doing CD requires a
> certain willingness to roll with the punches, so to speak, whereas people 
> using
> versioned releases are less likely to be as flexible.
>
> Best Regards,
> Solly Ross
It looks like there was no existing unit test for what horizon ended up 
doing, where a red-flagging change to the test would have been needed. 
There is obviously also no tempest test. But I hope that folks doing CD 
would not roll with this sort of punch if they find it, but push back 
immediately to revert the change unless it had gone through whatever api 
change review process we come up with. I presume that it simply was not 
noticed since it is perhaps a bit of an obscure api point. Since 
OpenStack currently advertises 6 month cycle releases and stable apis, 
it would be best to revert it IMO.


  -David
> - Original Message -
> From: "Thierry Carrez" 
> To: openstack-dev@lists.openstack.org
> Sent: Thursday, March 20, 2014 6:51:26 AM
> Subject: Re: [openstack-dev] [nova] Backwards incompatible API changes
>
> Christopher Yeoh wrote:
>> The patch was merged in October (just after Icehouse opened) and so has
>> been used in clouds that do CD for quite a while. After some discussion
>> on IRC I think we'll end up having to leave this backwards incompatible
>> change in there - given there are most likely users who now rely on
>> both sets of behaviour there is no good way to get out of this
>> situation. I've added a note to the Icehouse release notes.
> I still think reverting before release is an option we should consider.
> My point is, yes we broke it back in October for people doing CD (and
> they might by now have gotten used to it), if we let this to release
> we'll then break it for everyone else.
>
> We put a high emphasis into guaranteeing backward compatibility between
> releases. I think there would be more damage done if we let this sail to
> release, compared to the damage of reverting CD followers to pre-October
> behavior...
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] instances stuck with task_state of REBOOTING

2014-03-20 Thread Chris Friesen

On 03/20/2014 12:06 PM, Solly Ross wrote:

Hi Chris,
Are you in the position to determine whether or not this happens with the 
latest master code?
Either way, it definitely looks like a bug.


Unfortunately not right now, working towards a deadline.


If you could give more specific reproduction instructions, that would be most 
useful.


If I could give more specific reproduction instructions, I could track 
it down and fix it.  :)   I do know that they were soft-rebooting 
instances and rebooting the controllers at around the same time.


The fact that there are no success or error logs in nova-compute.log 
makes me wonder if we somehow got stuck in self.driver.reboot().


Also, I'm kind of wondering what would happen if nova-compute was 
running reboot_instance() and we rebooted the controller at the same 
time.  reboot_instance() could time out trying to update the instance 
with the new power state and a task_state of None.  Later on in 
_sync_power_states() we would update the power_state, but nothing would 
update the task_state.  I don't think this is what happened to us though 
since I'd expect to see logs of the timeout.


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Amit Gandhi
If we limited OpenStack projects to just one database, is that database (e.g. 
MySQL) going to be the best storage backend for the job?  Or are there 
cases where other technologies such as Redis, MongoDB, Cassandra, CouchDB, etc. 
make more sense?

Marconi has a pluggable storage driver model which allows these other storage 
drivers to be implemented (Redis is on the books).  The operator can then make 
their own informed choice on which backend makes the most sense for them based 
on their needs.

The alternative is that OpenStack projects limit themselves to just one option 
(to reduce the deployment stack operators have to be concerned with – for 
example, only MySQL backends allowed), which may (and likely will) result in 
reduced performance/features/experience. To me that would be an injustice to 
the users of that cloud.

How do the backends used relate to the amount/type/churn of data?  Is the 
blessed database ideal for that job, or are there more scalable options? I’m not 
saying you can’t scale MySQL, but DBs like Mongo/Cassandra/etc. are better 
equipped for it (personal opinion).

I agree that investments in projects like Heat etc will reduce the burden on 
operators that deploy Openstack.


Amit Gandhi
Senior Manager, Rackspace.



From: Stan Lagun mailto:sla...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, March 20, 2014 at 2:23 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [legal-discuss] [Marconi] Why is marconi a queue 
implementation vs a provisioning API?

Kurt,

Your point is that a NoSQL solution may be required for an innovative project,
and that it is MongoDB. But what if another amazing project comes along that
needs CouchDB, neo4j, Riak, (put your favorite NoSQL DB here)? It would be in
the same position, because everyone would say: hey, we already have NoSQL in
OpenStack, so you have to use MongoDB, which is not fair. But it is also obvious
that OpenStack cannot demand that cloud operators maintain MySQL, MongoDB,
CouchDB, neo4j, etc. simultaneously.

I hate to say this (I happen to be a MongoDB fan), but the only way we can
introduce external dependencies to OpenStack is by building technology that
makes it possible for the project itself to be responsible for deployment and
maintenance of that dependency (the DBMS), rather than the cloud operator. It
seems to me that the right way to introduce MongoDB is to invest in projects
like TripleO, Fuel, Murano, Heat and Ironic.


On Thu, Mar 20, 2014 at 9:09 PM, Gil Yehuda 
mailto:gyeh...@yahoo-inc.com>> wrote:
>To be clear, Marconi does not incorporate any AGPL code itself; pymongo is
Apache2 licensed.

Understood, but here's the rub. Someone else is going to want to build on this 
(which is the point of this open source project). Whereas 'pymongo' is Apache 
licensed, since the copyright holder, MongoDB Inc., declared it as such, the 
authors of the other community drivers (for other language bindings and 
features of MongoDB, etc.) are also releasing drivers under the Apache or 
BSD licenses too (thinking that's OK to do since no one is telling them 
otherwise). That community is unaware of their legal obligations when creating 
drivers to an AGPL database. Thus if one of those community drivers gets 
intertwined in a court case clarifying their license to be infringing on the 
AGPL terms, we've inadvertently impacted our community. This is a credible risk 
that is difficult for OpenStack to abate, since the problem lies with the way a 
different community chose to operate.

There are three interconnected issues here:
1. The confusion that MongoDB has created in Open Source licensing due to the 
asymmetric control they have on licensing terms.
2. The diligence of OpenStack to remain careful with OpenStack's CLA 
compliance and Apache-friendly terms.
3. The pragmatics of the effect MongoDB would have on OpenStack's economic 
viability and legal risks at large.

The first problem is out of scope for this list. But I think people who rely 
upon Open Source for their business ought to understand what MongoDB is doing 
to open source software. The second is, to your point, the issue in this 
conversation. As long as OpenStack only uses Apache licensed code >>from 
MongoDB Inc.<< and diligently avoids using any open source contributions from 
any community contributor to the MongoDB ecosystem, then you remain compliant 
with your CLA. But you will have to exclude the rest of the MongoDB community 
(which goes against the spirit of Open Source -- back to the problem #1, which 
is out of scope). As for #3, I think the foundation needs to weigh in on the 
pragmatics here, since this has an economic and legal impact to the entire 
endeavor, not just to persisting data in one component of the project.

Gil Yehuda
Sr. Director Of Open Source, Open Standards

-Original Message-
Fro

Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-20 Thread Clint Byrum
Excerpts from Flavio Percoco's message of 2014-03-19 03:01:19 -0700:
> FWIW, I think there's a value on having an sqlalchemy driver. It's
> helpful for newcomers, it integrates perfectly with the gate and I
> don't want to impose other folks what they should or shouldn't use in
> production. Marconi may be providing a data API but it's still
> non-opinionated and it wants to support other drivers - or at least provide
> a nice way to implement them. Working on sqlalchemy instead of amqp (or
> redis) was decided in the incubation meeting.
> 
> But again, It's an optional driver that we're talking about here. As
> of now, our recommended driver is mongodb's and as I already mentioned
> in this email, we'll start working on an amqp's one, which will likely
> become the recommended one. There's also support for redis.
> 
> As already mentioned, we have plans to complete the redis driver and
> write an amqp based one and let them both live in the code base.
> Having support for different storage dirvers makes marconi's sharding
> feature more valuable.
> 
> 

Just to steer this back to technical development discussions a bit:

I suggest the sqla driver be removed. It will never be useful as a queue
backend. It will confuse newcomers because they'll see the schema and
think that it will work and then use it, and then they find out that SQL
is just not suitable for queueing about the time that they're taking a
fire extinguisher to their rack.

"Just use Redis" is pretty interesting as a counter to the concerns
MongoDB's license situation. Redis, AFAIK, does not have many of the
features that make MongoDB attractive for backing a queue. The primary
one that I would cite is sharding. While MongoDB will manage sharding
for you, Redis works more like Memcached when you want to partition[1].
This is particularly problematic for an operational _storage_ product
as that means if you want to offline a node, you are going to have to
consider what kind of partitioning Marconi has used, and how it will
affect the availability and durability of the data.

All of this to say, if Marconi is going to be high scale, I agree that
SQL can't be used, and even that MongoDB, on technical abilities alone,
makes some sense. But I think what might be simpler is if Marconi just
shifted focus to make the API more like AMQP, and used AMQP on its
backend. This allows cloud operators to deploy what they're used to for
OpenStack, and would still give users something they're comfortable with
(an HTTP API) to consume it.

[1] http://redis.io/topics/partitioning
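
To illustrate the partitioning point above (a sketch, not Marconi or Redis
client code; the node and queue names are made up): with a naive hash-mod
scheme of the memcached style, taking one node out of service silently remaps
a large share of keys to different nodes, which is exactly the operational
problem for a storage service that has to keep existing queue data reachable.

    # Naive client-side partitioning: removing a node remaps keys.
    import hashlib

    def node_for(key, nodes):
        digest = int(hashlib.md5(key.encode('utf-8')).hexdigest(), 16)
        return nodes[digest % len(nodes)]

    nodes = ['redis-1', 'redis-2', 'redis-3']
    queues = ['billing', 'audit', 'notifications', 'metrics', 'builds']

    before = dict((q, node_for(q, nodes)) for q in queues)
    after = dict((q, node_for(q, nodes[:-1])) for q in queues)  # drop redis-3
    moved = [q for q in queues if before[q] != after[q]]
    print('queues whose data is now on the "wrong" node: %s' % moved)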

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] instances stuck with task_state of REBOOTING

2014-03-20 Thread Chris Friesen

On 03/20/2014 12:29 PM, Chris Friesen wrote:


The fact that there are no success or error logs in nova-compute.log
makes me wonder if we somehow got stuck in self.driver.reboot().

Also, I'm kind of wondering what would happen if nova-compute was
running reboot_instance() and we rebooted the controller at the same
time.  reboot_instance() could time out trying to update the instance
with the the new power state and a task_state of None.  Later on in
_sync_power_states() we would update the power_state, but nothing would
update the task_state.  I don't think this is what happened to us though
since I'd expect to see logs of the timeout.


Actually, looking at the logs a bit more carefully it appears that what 
happened is something like this:


We reboot the controllers.
Right after they come back up something calls compute.api.API.reboot()
That sets instance.task_state = task_states.REBOOTING and then calls 
instance.save() to update the database.

Then it calls self.compute_rpcapi.reboot_instance() which does an rpc cast.
That message gets dropped on the floor due to communication issues 
between the controller and the compute.

Now we're stuck with a task_state of REBOOTING.


I think that both of the RPC message loss scenarios are valid with 
current nova code, so we really do need an audit to clean up after this 
sort of thing.
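
To sketch what such an audit might look like (hypothetical code only -- the
instance dicts, field values and ten-minute threshold are assumptions for
illustration, not nova's actual periodic-task or object API; in nova this
would run as a periodic task and call instance.save()):

    # Hypothetical cleanup audit: clear task_state for instances that have
    # been stuck in REBOOTING for too long while their power state is RUNNING.
    import datetime

    REBOOT_TIMEOUT = datetime.timedelta(minutes=10)

    def clear_stale_reboots(instances, now=None):
        now = now or datetime.datetime.utcnow()
        cleared = []
        for inst in instances:
            stuck = (inst['task_state'] == 'rebooting' and
                     inst['power_state'] == 'running' and
                     now - inst['updated_at'] > REBOOT_TIMEOUT)
            if stuck:
                inst['task_state'] = None
                cleared.append(inst['uuid'])
        return cleared

    instances = [{'uuid': 'abc-123', 'task_state': 'rebooting',
                  'power_state': 'running',
                  'updated_at': datetime.datetime.utcnow() -
                                datetime.timedelta(hours=3)}]
    print(clear_stale_reboots(instances))   # ['abc-123']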


Chris



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-20 Thread Malini Kamalambal
Thanks Matt for your response !! It has clarified some of the 'cloudy
areas' ;)


"So having only looked at the Marconi ML thread and not the actual TC
meeting
minutes I might be missing the whole picture. But, from what I saw when I
looked
at both a marconi commit and a tempest commit, there is no gating
marconi
devstack-gate job on marconi commits. It's only non-voting in the check
pipeline.
Additionally, there isn't a non-voting job on tempest or devstack-gate. For
example, look at how savanna has its tempest jobs set up, and this is what
marconi
needs to have."


I am not dismissing the fact that Marconi does not have a voting gate job.
All we have currently is a non-voting check job in Marconi, and
experimental jobs in Tempest & devstack-gate.
So we definitely have more work to do there, starting with making the
devstack-tempest job voting in Marconi.

But what caught me by surprise is that 'the extent of gate testing -
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/queuing/te
st_queues.py'  was brought up as a concern.
We have always had an extensive functional test suite in Marconi, which
is run at gate against a marconi-server that the test spins up (this was
implemented before we had devstack integration in place; we plan
to run the functional test suite in Marconi against devstack
Marconi as well).

But my point is not to do a postmortem on everything that happened, but
rather to come up with an objective list of items that I (or anybody who
wants to meet the Tempest criteria for graduation - no way meant to imply
that Tempest work stops after graduation ;) ) need to complete. I started a
wiki page to document what would be good enough criteria to graduate from
the QA perspective.

https://etherpad.openstack.org/p/Tempest-Graduation-Criteria


It will help if the QA team can add their thoughts to this.
This can maybe become a recommendation to the TC on what the QA graduation
requirements are.

"What do you mean by project specific functional testing? What makes
debugging
a marconi failure in a tempest gate job any more involved than debugging a
nova or neutron failure? Part of the point of having an integrated gate is
saying that the project works well with all the others in OpenStack. IMO
that's
not just in project functionality but also in community. When there is an
issue
with a gate job everyone comes together to work on it. For example if you
have
a keystone patch that breaks a marconi test in check there is open
communication
about what happened and how to fix it."

'project specific functional testing' in the Marconi context is treating
Marconi as a complete system, making Marconi API calls & verifying the
response - just like an end user would, but without keystone. If one of
these tests fails, it is because there is a bug in the Marconi code, and
not because its interaction with Keystone caused it to fail.
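
A minimal, hypothetical sketch of what such a test looks like (the port,
paths, header and expected status codes are assumptions about a locally
started marconi-server, not a copy of our actual suite):

import unittest
import uuid

import requests

BASE_URL = 'http://127.0.0.1:8888/v1'  # assumed local marconi-server


class TestQueueLifecycle(unittest.TestCase):

    def test_create_and_delete_queue(self):
        queue = 'test-%s' % uuid.uuid4().hex
        headers = {'Client-ID': uuid.uuid4().hex}  # assumed client id header

        # Create the queue and verify the API response directly --
        # no keystone anywhere in the path.
        resp = requests.put('%s/queues/%s' % (BASE_URL, queue),
                            headers=headers)
        self.assertEqual(201, resp.status_code)

        resp = requests.delete('%s/queues/%s' % (BASE_URL, queue),
                               headers=headers)
        self.assertEqual(204, resp.status_code)

If a test like this fails, the bug is in Marconi itself.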

"That being said there are certain cases where having a project specific
functional test makes sense. For example swift has a functional test job
that
starts swift in devstack. But, those things are normally handled on a per
case
basis. In general if the project is meant to be part of the larger
OpenStack
ecosystem then Tempest is the place to put functional testing. That way
you know
it works with all of the other components. The thing is in openstack what
seems
like a project isolated functional test almost always involves another
project
in real use cases. (for example keystone auth with api requests)

"

One of the concerns we heard in the review was 'having the functional
tests elsewhere (i.e. within the project itself) does not count and they
have to be in Tempest'.
This has made us as a team wonder if we should migrate all our functional
tests to Tempest.
But from Matt's response, I think it is reasonable to continue on our
current path & have the functional tests in Marconi coexist along with
the tests in Tempest.


On 3/20/14 1:59 PM, "Matthew Treinish"  wrote:

>On Thu, Mar 20, 2014 at 11:35:15AM +, Malini Kamalambal wrote:
>> Hello all,
>> 
>> I have been working on adding tests in Tempest for Marconi, for the
>>last few months.
>> While there are many amazing people to work with, the process has been
>>more difficult than I expected.
>> 
>> Couple of pain-points and suggestions to make the process easier for
>>myself & future contributors.
>> 
>> 1. The QA requirements for a project to graduate need details beyond
>>the "Project must have a *basic* devstack-gate job set up"
>> 2. The scope of Tempest needs clarification  - what tests should be in
>>Tempest vs. in the individual projects? Or should they be in both
>>tempest and the project?
>> 
>> See details below.
>> 
>> 1. There is little documentation on graduation requirement from a QA
>>perspective beyond 'Project must have a basic devstack-gate job set up'.
>> 
>> As a result, I hear different interpretations on what a basic devstack
>>gate job is.
This topic was discussed in one of the QA meetings a few weeks back [1].

Re: [openstack-dev] [nova][PCI] one use case make the flavor/extra-info based solution to be right choice

2014-03-20 Thread Jay Pipes
On Thu, 2014-03-20 at 13:50 +, Robert Li (baoli) wrote:
> Hi Yongli,
> 
> I'm very glad that you bring this up and relive our discussion on PCI
> passthrough and its application on networking. The use case you brought up
> is:
> 
>user wants a FASTER NIC from INTEL to join a virtual
> network.
> 
> By FASTER, I guess that you mean that the user is allowed to select a
> particular vNIC card. Therefore, the above statement can be translated
> into the following requests for a PCI device:
> . Intel vNIC
> . 1G or 10G or ?
> . network to join
> 
> First of all, I'm not sure in a cloud environment, a user would care about
> the vendor or card type.

Correct. Nor would/should a user of the cloud know what vendor or card
type is in use on a particular compute node. At most, all a user of the
cloud would be able to select from is an instance type (flavor) that
listed some capability like "high_io_networking" or something like that,
and the mapping of what "high_io_networking" meant on the back end of
Nova would need to be done by the operator (i.e. if the tag
"high_io_networking" is on a flavor a user has asked to launch a server
with, then that tag should be translated into a set of capabilities that
is passed to the scheduler and used to determine where the instance can
be scheduled by looking at which compute nodes support that set of
capabilities).

This is what I've been babbling about with regards to "leaking
implementation through the API". What happens if, say, the operator
decides to use IBM cards (instead of or in addition to Intel ones)? If
you couple the implementation with the API, like the example above shows
("user wants a FASTER NIC from INTEL"), then you have to add more
complexity to the front-end API that a user deals with, instead of just
adding a capabilities mapping for new compute nodes that says the
"high_io_networking" tag can match these new compute nodes with IBM
cards.
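
To make that concrete, a rough operator-side sketch (the flavor name and
the extra-spec key are made up for illustration; this is not an existing
Nova convention):

from novaclient.v1_1 import client  # assumed Icehouse-era novaclient path

nova = client.Client('admin', 'secret', 'admin',
                     'http://keystone.example.com:5000/v2.0')

# The user only ever sees and picks the flavor; what the tag means on the
# back end (which PCI alias, which hosts) is entirely the operator's call.
flavor = nova.flavors.create(name='net.highio', ram=4096, vcpus=4, disk=40)
flavor.set_keys({'capabilities:high_io_networking': 'true'})

Swapping Intel cards for IBM ones then becomes a mapping change on the
back end, with no change to the user-facing API at all.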

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] [Nova] Blueprint review request for pagination sorting enhancements

2014-03-20 Thread Steven Kaufer


Please review the following blueprints; both are scoped to supporting
multiple sort keys and sort directions on the API request when retrieving
volumes and servers.

https://blueprints.launchpad.net/cinder/+spec/cinder-pagination
https://blueprints.launchpad.net/nova/+spec/nova-pagination
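
As a strawman of what such a request could look like against the servers
API (the parameter syntax below is only one possibility, not necessarily
what the blueprints will settle on):

GET /v2/{tenant_id}/servers/detail?sort_key=created_at&sort_dir=desc&sort_key=display_name&sort_dir=asc

i.e. repeated sort_key/sort_dir pairs, applied in order, so results are
sorted by creation time (newest first) with ties broken by name.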

Note that I will be updating the nova blueprint to adhere to the new
nova-specs process once that is ready.  In the meantime, the specification
wiki page has been updated with the information from the template.

Any comments, questions, and recommendations would be appreciated.

Thanks,

Steven Kaufer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] API schema location

2014-03-20 Thread Jay Pipes
On Thu, 2014-03-20 at 19:34 +1100, Christopher Yeoh wrote:
> On Tue, 18 Mar 2014 19:52:27 +0100
> "Koderer, Marc"  wrote:
> 
> > > Am I missing something or are these schemas being added now just a
> > > subset of what is being used for negative testing? Why can't we
> > > either add the extra negative test info around the new test
> > > validation patches and get the double benefit. Or have the negative
> > > test schemas just extend these new schemas being added?
> > 
> > Yes, the api_schema files should theoretically be
> > subsets of the negative test schemas.
> > But I don't think that extending them will be possible:
> > 
> > if you have a property definition like this:
> > 
> > "properties": {
> > "minRam": {  "type": "integer",}
> > 
> > how can you extend it to:
> > 
> > "properties": {
> > "minRam": {
> > "type": "integer",
> > "results": {
> > "gen_none": 400,
> > "gen_string": 400
> > }
> > 
> > This is the reason why I am unsure how inheritance can solve
> > something here.
> 
> I think this is an example of how we can do some sharing of schema
> definitions when there is sufficient commonality to justify it.

JSON-Schema, for the record, actually does support extending the
definition of the schema using the additionalProperties attribute on a
schema object.
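
One option that sidesteps schema inheritance entirely is to keep the
api_schema definition as the single source and derive the negative-test
variant from it in code, e.g. (hand-written illustration, not taken from
the tempest tree):

import copy

base_flavor_schema = {
    'type': 'object',
    'properties': {
        'minRam': {'type': 'integer'},
    },
}

# Extend a deep copy of the positive schema with the negative-test
# metadata, so the two definitions can't drift apart.
negative_flavor_schema = copy.deepcopy(base_flavor_schema)
negative_flavor_schema['properties']['minRam']['results'] = {
    'gen_none': 400,
    'gen_string': 400,
}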

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-20 Thread Sean Dague
I will agree that the TC language is not as strong as it should be (and
really should be clarified, but I don't think that's going to happen
until the release is looking solid). Honestly, though, I think Sahara is
a good example here of the level of effort that we expect. They have
actively engaged with upstream infra and used it to the full extent
possible, then wrote additional tooling to do even more and report 3rd
party results on their changes.

It's also worth noting they got there by actively participating in the
upstream community and conversations, and clearly making upstream test
integration a top priority for the cycle.

On 03/20/2014 03:13 PM, Malini Kamalambal wrote:
> Thanks Matt for your response !! It has clarified some of the 'cloudy
> areas' ;)
> 
> 
> "So having only looked at the Marconi ML thread and not the actual TC
> meeting
> minutes I might be missing the whole picture. But, from what I saw when I
> looked
> at both a marconi commit and a tempest commit is that there is no gating
> marconi
> devstack-gate job on marconi commits. It's only non-voting in the check
> pipeline.
> Additionally, there isn't a non-voting job on tempest or devstack-gate. For
> example, look at how savanna has its tempest jobs set up and this is what
> marconi
> needs to have."
> 
> 
> I am not dismissing the fact that Marconi does not have a voting gate job.
> All we have currently is a non-voting check job in Marconi, and
> experimental jobs in Tempest & devstack-gate.
> So we definitely have more work to do there, starting with making the
> devstack-tempest job voting in Marconi.
> 
> But what caught me by surprise is that 'the extent of gate testing -
> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/queuing/test_queues.py'
> was brought up as a concern.
> We have always had an extensive functional test suite in Marconi , which
> is run at gate against a marconi-server that the test spins up (This was
> implemented before we had devstack integration in place. It is in our
> plans to run the functional test suite in Marconi against devstack
> Marconi).
> 
> But my point is not do a postmortem on everything that happened, rather
> come up with an objective list of items that I (or anybody who wants to
> meet the Tempest criteria for graduation- No way meant to imply that
> Tempest work stops after graduation ;) ) need to complete. I started a
> wiki page to document what will be a good enough graduation criteria to
> graduate from the QA perspective.
> 
> https://etherpad.openstack.org/p/Tempest-Graduation-Criteria
> 
> 
> It will help if the QA team can add their thoughts to this.
> This can maybe become a recommendation to the TC on what the QA graduation
> requirements are.
> 
> "What do you mean by project specific functional testing? What makes
> debugging
> a marconi failure in a tempest gate job any more involved than debugging a
> nova or neutron failure? Part of the point of having an integrated gate is
> saying that the project works well with all the others in OpenStack. IMO
> that's
> not just in project functionality but also in community. When there is an
> issue
> with a gate job everyone comes together to work on it. For example if you
> have
> a keystone patch that breaks a marconi test in check there is open
> communication
> about what happened and how to fix it."
> 
> 'project specific functional testing' in the Marconi context is treating
> Marconi as a complete system, making Marconi API calls & verifying the
> response - just like an end user would, but without keystone. If one of
> these tests fail, it is because there is a bug in the Marconi code , and
> not because its interaction with Keystone caused it to fail.
> 
> "That being said there are certain cases where having a project specific
> functional test makes sense. For example swift has a functional test job
> that
> starts swift in devstack. But, those things are normally handled on a per
> case
> basis. In general if the project is meant to be part of the larger
> OpenStack
> ecosystem then Tempest is the place to put functional testing. That way
> you know
> it works with all of the other components. The thing is in openstack what
> seems
> like a project isolated functional test almost always involves another
> project
> in real use cases. (for example keystone auth with api requests)
> 
> "
> 
> One of the concerns we heard in the review was 'having the functional
> tests elsewhere (I.e within the project itself) does not count and they
> have to be in Tempest'.
> This has made us as a team wonder if we should migrate all our functional
> tests to Tempest.
> But from Matt's response, I think it is reasonable to continue in our
> current path & have the functional tests in Marconi coexist  along with
> the tests in Tempest.
> 
> 
> On 3/20/14 1:59 PM, "Matthew Treinish"  wrote:
> 
>> On Thu, Mar 20, 2014 at 11:35:15AM +, Malini Kamalambal wrote:
>>> Hello all,
>>>
>>> I have been working on adding tests in Tempest for Marconi, for the last few months.

Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-20 Thread Rochelle.RochelleGrober


> -Original Message-
> From: Malini Kamalambal [mailto:malini.kamalam...@rackspace.com]
> Sent: Thursday, March 20, 2014 12:13 PM
> 
> 'project specific functional testing' in the Marconi context is
> treating
> Marconi as a complete system, making Marconi API calls & verifying the
> response - just like an end user would, but without keystone. If one of
> these tests fail, it is because there is a bug in the Marconi code ,
> and
> not because its interaction with Keystone caused it to fail.
> 
> "That being said there are certain cases where having a project
> specific
> functional test makes sense. For example swift has a functional test
> job
> that
> starts swift in devstack. But, those things are normally handled on a
> per
> case
> basis. In general if the project is meant to be part of the larger
> OpenStack
> ecosystem then Tempest is the place to put functional testing. That way
> you know
> it works with all of the other components. The thing is in openstack
> what
> seems
> like a project isolated functional test almost always involves another
> project
> in real use cases. (for example keystone auth with api requests)
> 
> "
> 
> One of the concerns we heard in the review was 'having the functional
> tests elsewhere (I.e within the project itself) does not count and they
> have to be in Tempest'.
> This has made us as a team wonder if we should migrate all our
> functional
> tests to Tempest.
> But from Matt's response, I think it is reasonable to continue in our
> current path & have the functional tests in Marconi coexist  along with
> the tests in Tempest.
> 

I think that what is being asked, really, is that the functional tests could be
a single set of tests that would become a part of the tempest repository and 
that these tests would have an ENV variable as part of the configuration that 
would allow either "no Keystone" or "Keystone" or some such, if that is the 
only configuration issue that separates running the tests isolated vs. 
integrated.  The functional tests need to be as much as possible a single set 
of tests to reduce duplication and remove the likelihood of two sets getting 
out of sync with each other/development.  If they only run in the integrated 
environment, that's ok, but if you want to run them isolated to make debugging 
easier, then it should be a configuration option and a separate test job.

So, if my assumptions are correct, QA only requires functional tests for 
integrated runs, but if the project QAs/Devs want to run isolated for dev and 
devtest purposes, more power to them.  Just keep it a single set of functional 
tests and put them in the Tempest repository so that if a failure happens, 
anyone can find the test and do the debug work without digging into a separate 
project repository.

Hopefully, the tests as designed could easily take a new configuration
directive, and a short bit of work with OS QA will get the integrated FTs
working as well as the isolated ones.
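
A rough sketch of the sort of toggle I mean (the option name, group, and
helper are made up, not an existing tempest setting):

from oslo.config import cfg  # assumed Icehouse-era import path

queuing_opts = [
    cfg.BoolOpt('use_keystone',
                default=True,
                help='Authenticate against keystone when exercising the '
                     'queuing API; disable for an isolated marconi run.'),
]

CONF = cfg.CONF
CONF.register_opts(queuing_opts, group='queuing')


def auth_headers(token_provider):
    # token_provider: callable returning a keystone token for integrated
    # runs; ignored entirely for isolated (no keystone) runs.
    if not CONF.queuing.use_keystone:
        return {}
    return {'X-Auth-Token': token_provider()}

The same functional tests then run in both jobs; only the configuration
(and the test job definition) differs.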

--Rocky

> On 3/20/14 1:59 PM, "Matthew Treinish"  wrote:
> 
> >On Thu, Mar 20, 2014 at 11:35:15AM +, Malini Kamalambal wrote:
> >> Hello all,
> >>
> >> I have been working on adding tests in Tempest for Marconi, for the
> >>last few months.
> >> While there are many amazing people to work with, the process has
> been
> >>more difficult than I expected.
> >>
> >> Couple of pain-points and suggestions to make the process easier for
> >>myself & future contributors.
> >>
> >> 1. The QA requirements for a project to graduate need details
> beyond
> >>the "Project must have a *basic* devstack-gate job set up"
> >> 2. The scope of Tempest needs clarification  - what tests should be
> in
> >>Tempest vs. in the individual projects? Or should they be in both
> >>tempest and the project?
> >>
> >> See details below.
> >>
> >> 1. There is little documentation on graduation requirement from a QA
> >>perspective beyond 'Project must have a basic devstack-gate job set
> up'.
> >>
> >> As a result, I hear different interpretations on what a basic
> devstack
> >>gate job is.
> >> This topic was discussed in one of the QA meetings a few weeks back
> [1].
> >> Based on the discussion there, having a basic job - such as one that
> >>will let us know 'if a keystone change broke marconi' was  good
> enough.
> >> My efforts in getting Marconi meet graduation requirements w.r.t
> >>Tempest was based on the above discussion.
> >>
> >> However, my conversations with the TC during Marconi's graduation
> >>review  lead me to believe that these requirements aren't yet
> formalized.
> >> We were told that we needed to have more test coverage in tempest, &
> >>having them elsewhere (i.e. functional tests in the Marconi project
> >>itself) was not good enough.
> >
> >So having only looked at the Marconi ML thread and not the actual TC
> >meeting
> >minutes I might be missing the whole picture. But, from what I saw
> when I
> >looked
> >at both a marconi commit and a tempest commit is that there is no
> gating marconi devstack-gate job on marconi commits.

Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-20 Thread Dolph Mathews
On Thu, Mar 20, 2014 at 10:49 AM, Russell Bryant  wrote:

> We recently discussed the idea of using gerrit to review blueprint
> specifications [1].  There was a lot of support for the idea so we have
> proceeded with putting this together before the start of the Juno
> development cycle.
>
> We now have a new project set up, openstack/nova-specs.  You submit
> changes to it just like any other project in gerrit.  Find the README
> and a template for specifications here:
>
>   http://git.openstack.org/cgit/openstack/nova-specs/tree/README.rst
>
>   http://git.openstack.org/cgit/openstack/nova-specs/tree/template.rst


This is great! This is the same basic process we've used for API-impacting
changes in keystone and it has worked really well for us, and we're eager
to adopt the same thing on a more general level.

The process seems overly complicated to me, however. As a blueprint
proposer, I find it odd that I have to propose my blueprint as part of
approved/ -- why not just have a single directory to file things away that
have been implemented? Is it even necessary to preserve them? (why not just
git rm when implemented?) Gerrit already provides a permalink (to the
review).


>
>
> The blueprint process wiki page has also been updated to reflect that we
> will be using this for Nova:
>
>   https://wiki.openstack.org/wiki/Blueprints#Nova
>
> Note that *all* Juno blueprints, including ones that were previously
> approved, must go through this new process.  This will help ensure that
> blueprints previously approved still make sense, as well as ensure that
> all Juno specs follow a more complete and consistent format.
>
> Before the flood of spec reviews start, we would really like to get
> feedback on the content of the spec template.  It includes things like
> "deployer impact" which could use more input.  Feel free to provide
> feedback on list, or just suggest updates via proposed changes in gerrit.
>
> I suspect this process to evolve a bit throughout Juno, but I'm very
> excited about the positive impact it is likely to have on our overall
> result.
>
> Thanks!
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/029232.html
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-20 Thread Sean Dague
On 03/20/2014 01:01 PM, David Kranz wrote:
> On 03/20/2014 12:31 PM, Sean Dague wrote:
>> On 03/20/2014 11:35 AM, David Kranz wrote:
>>> On 03/20/2014 06:15 AM, Sean Dague wrote:
 On 03/20/2014 05:49 AM, Nadya Privalova wrote:
> Hi all,
> First of all, thanks for your suggestions!
>
> To summarize the discussions here:
> 1. We are not going to install Mongo (because "it's wrong"?)
 We are not going to install Mongo "not from base distribution", because
 we don't do that for things that aren't python. Our assumption is
 dependent services come from the base OS.

 That being said, being an integrated project means you have to be able
 to function, sanely, on an sqla backend, as that will always be part of
 your gate.
>>> This is a claim I think needs a bit more scrutiny if by "sanely" you
>>> mean "performant". It seems we have an integrated project that no one
>>> would deploy using the sql db driver we have in the gate. Is any one
>>> doing that?  Is having a scalable sql back end a goal of ceilometer?
>>>
>>> More generally, if there is functionality that is of great importance to
>>> any cloud deployment (and we would not integrate it if we didn't think
>>> it was) that cannot be deployed at scale using sqla, are we really going
>>> to say it should not be a part of OpenStack because we refuse, for
>>> whatever reason, to run it in our gate using a driver that would
>>> actually be used? And if we do demand an sqla backend, how much time
>>> should we spend trying to optimize it if no one will really use it?
>>> Though the slow heat job is a little different because the slowness
>>> comes directly from running real use cases, perhaps we should just set
>>> up a "slow ceilometer" job if the sql version is too slow for its budget
>>> in the main job.
>>>
>>> It seems like there is a similar thread, at least in part, about this
>>> around marconi.
>> We required a non mongo backend to graduate ceilometer. So I don't think
>> it's too much to ask that it actually works.
>>
>> If the answer is that it will never work and it was a checkbox with no
>> intent to make it work, then it should be deprecated and removed from
>> the tree in Juno, with a big WARNING that you shouldn't ever use that
>> backend. Like Nova now does with all the virt drivers that aren't tested
>> upstream.
>>
>> Shipping in tree code that you don't want people to use is bad for
>> users. Either commit to making it work, or deprecate it and remove it.
>>
>> I don't see this as the same issue as the slow heat job. Heat,
>> architecturally, is going to be slow. It spins up real OSes and does
>> real things to them. There is no way that's ever going to be fast, and
>> the dedicated job was a recognition that to support this level of
>> services in OpenStack we need to give them more breathing room.
> Peace. I specifically noted that difference in my original comment. And
> for that reason the heat slow job may not be temporary.
>>
>> Architecturally Ceilometer should not be this expensive. We've got some
>> data showing it to be aberrant from where we believe it should be. We
>> should fix that.
> There are plenty of cases where we have had code that passes gate tests
> with acceptable performance but falls over in real deployment. I'm just
> saying that having a driver that works ok in the gate but does not work
> for real deployments is of no more value than not having it at all.
> Maybe less value.
> How do you propose to solve the problem of getting more ceilometer tests
> into the gate in the short-run? As a practical measure I don't see why
> it is so bad to have a separate job until the complex issue of whether
> it is possible to have a real-world performant sqla backend is resolved.
> Or did I miss something and it has already been determined that sqla
> could be used for large-scale deployments if we just fixed our code?

I think right now the ball is back in the ceilometer court to do some
performance profiling, and let's see what comes of that. I don't think
we're getting more tests in before the release in any real way.

>> Once we get a base OS in the gate that lets us direct install mongo from
>> base packages, we can also do that. Or someone can 3rd party it today.
>> Then we'll even have comparative results to understand the differences.
> Yes. Do you know which base OS's are candidates for that?

Ubuntu 14.04 will have a sufficient level of Mongo, so some time in the
Juno cycle we should have it in the gate.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-20 Thread Joe Gordon
On Thu, Mar 20, 2014 at 1:19 PM, Rochelle.RochelleGrober <
rochelle.gro...@huawei.com> wrote:

>
>
> > -Original Message-
> > From: Malini Kamalambal [mailto:malini.kamalam...@rackspace.com]
> > Sent: Thursday, March 20, 2014 12:13 PM
> >
> > 'project specific functional testing' in the Marconi context is
> > treating
> > Marconi as a complete system, making Marconi API calls & verifying the
> > response - just like an end user would, but without keystone. If one of
> > these tests fail, it is because there is a bug in the Marconi code ,
> > and
> > not because its interaction with Keystone caused it to fail.
> >
> > "That being said there are certain cases where having a project
> > specific
> > functional test makes sense. For example swift has a functional test
> > job
> > that
> > starts swift in devstack. But, those things are normally handled on a
> > per
> > case
> > basis. In general if the project is meant to be part of the larger
> > OpenStack
> > ecosystem then Tempest is the place to put functional testing. That way
> > you know
> > it works with all of the other components. The thing is in openstack
> > what
> > seems
> > like a project isolated functional test almost always involves another
> > project
> > in real use cases. (for example keystone auth with api requests)
> >
> > "
> >
> > One of the concerns we heard in the review was 'having the functional
> > tests elsewhere (I.e within the project itself) does not count and they
> > have to be in Tempest'.
> > This has made us as a team wonder if we should migrate all our
> > functional
> > tests to Tempest.
> > But from Matt's response, I think it is reasonable to continue in our
> > current path & have the functional tests in Marconi coexist  along with
> > the tests in Tempest.
> >
>
>
While there are always exceptions to this rule, in general I think functional
tests belong in tempest for all of the reasons discussed above.



> I think that what is being asked, really is that the functional tests
> could be a single set of tests that would become a part of the tempest
> repository and that these tests would have an ENV variable as part of the
> configuration that would allow either "no Keystone" or "Keystone" or some
> such, if that is the only configuration issue that separates running the
> tests isolated vs. integrated.  The functional tests need to be as much as
> possible a single set of tests to reduce duplication and remove the
> likelihood of two sets getting out of sync with each other/development.  If
> they only run in the integrated environment, that's ok, but if you want to
> run them isolated to make debugging easier, then it should be a
> configuration option and a separate test job.
>
> So, if my assumptions are correct, QA only requires functional tests for
> integrated runs, but if the project QAs/Devs want to run isolated for dev
> and devtest purposes, more power to them.  Just keep it a single set of
> functional tests and put them in the Tempest repository so that if a
> failure happens, anyone can find the test and do the debug work without
> digging into a separate project repository.
>
> Hopefully, the tests as designed could easily take a new configuration
> directive and a short bit of work with OS QA will get the integrated FTs
> working as well as the isolated ones.
>
> --Rocky
>
> > On 3/20/14 1:59 PM, "Matthew Treinish"  wrote:
> >
> > >On Thu, Mar 20, 2014 at 11:35:15AM +, Malini Kamalambal wrote:
> > >> Hello all,
> > >>
> > >> I have been working on adding tests in Tempest for Marconi, for the
> > >>last few months.
> > >> While there are many amazing people to work with, the process has
> > been
> > >>more difficult than I expected.
> > >>
> > >> Couple of pain-points and suggestions to make the process easier for
> > >>myself & future contributors.
> > >>
> > >> 1. The QA requirements for a project to graduate need details
> > beyond
> > >>the "Project must have a *basic* devstack-gate job set up"
> > >> 2. The scope of Tempest needs clarification  - what tests should be
> > in
> > >>Tempest vs. in the individual projects? Or should they be in both
> > >>tempest and the project?
> > >>
> > >> See details below.
> > >>
> > >> 1. There is little documentation on graduation requirement from a QA
> > >>perspective beyond 'Project must have a basic devstack-gate job set
> > up'.
> > >>
> > >> As a result, I hear different interpretations on what a basic
> > devstack
> > >>gate job is.
> > >> This topic was discussed in one of the QA meetings a few weeks back
> > [1].
> > >> Based on the discussion there, having a basic job - such as one that
> > >>will let us know 'if a keystone change broke marconi' was  good
> > enough.
> > >> My efforts in getting Marconi meet graduation requirements w.r.t
> > >>Tempest was based on the above discussion.
> > >>
> > >> However, my conversations with the TC during Marconi's graduation
> > >>review  lead me to believe that these requirements aren't yet
> > formalized.
