Re: [openstack-dev] [Solum] Choice of API infrastructure

2013-11-03 Thread Noorul Islam K M
Russell Bryant rbry...@redhat.com writes:

 On 11/02/2013 11:54 AM, Adrian Otto wrote:

 Noorul,
 
 I agree that key decisions should be tracked in blueprints. This is the
 one for this decision which was made in our 2013-10-18 public meeting.
 Jay's submission is consistent with the direction indicated by the team.
 
 https://blueprints.launchpad.net/solum/+spec/rest-api-base
 
 Transcript log:
 http://irclogs.solum.io/2013/solum.2013-10-08-16.01.log.html

 Heh, not much discussion there.  :-)

 Here's my take ... Pecan+WSME has been pushed as the thing to
 standardize on across most OpenStack APIs.  Ceilometer (and maybe
 others?) are already using it.  Others, such as Nova, are planning to
 use it this cycle. [1][2]

 I care less about the particular choice and more about consistency.  It
 brings a lot of value, such as making it a lot easier for developers to
 jump around between the OpenStack projects.  Can we first at least agree
 that there is value in standardizing on *something* for most OpenStack APIs?

 I understand that there may be cases where the needs for an API justify
 being different.  Marconi being more of a data-plane API vs
 control-plane means that performance concerns are much higher, for example.

 If we agree that consistency is good, does Solum have needs that make it
 different than the majority of OpenStack APIs?  IMO, it does not.

 Can someone lay out a case for why all OpenStack projects should be
 using Falcon, if that's what you think Solum should use?

 Also, is anyone willing to put up the equivalent of Jay's review [3],
 but with Pecan+WSME, to help facilitate the discussion?
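
For concreteness, a minimal sketch of what such a Pecan+WSME controller could
look like (illustrative only, not any submitted code; the Assembly resource and
its fields are invented, following the wsme/pecan pattern Ceilometer used at
the time):

    # Illustrative sketch only -- not any submitted patch. The Assembly
    # resource and its fields are invented for the example.
    from pecan import rest
    from wsme import types as wtypes
    import wsmeext.pecan as wsme_pecan

    class Assembly(wtypes.Base):
        """API-layer representation of a (hypothetical) Solum assembly."""
        uuid = wtypes.text
        name = wtypes.text

    class AssemblyController(rest.RestController):
        @wsme_pecan.wsexpose(Assembly, wtypes.text)
        def get_one(self, uuid):
            # Look up and return a single assembly (stubbed here).
            return Assembly(uuid=uuid, name='example')

        @wsme_pecan.wsexpose([Assembly])
        def get_all(self):
            # Return the list of assemblies (stubbed here).
            return []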


I will work on this and submit one.

Regards,
Noorul

 [1]
 http://icehousedesignsummit.sched.org/event/b2680d411aa7f5d432438a435ac21fee
 [2]
 http://icehousedesignsummit.sched.org/event/4a7316a4f5c6f783e362cbba2644bae2
 [3] https://review.openstack.org/#/c/55040/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler]The database access in the scheduler filters

2013-11-03 Thread Russell Bryant
On 11/01/2013 06:39 AM, Jiang, Yunhong wrote:
 I noticed several filters (AggregateMultiTenancyIsolation, ram_filter, 
 type_filter, AggregateInstanceExtraSpecsFilter) have DB access in 
 host_passes(). Some even access the DB on each invocation.
 
 Just curious if this is considered a performance issue? With a cloud of 10k 
 nodes, 60 VMs per node, and a 3-hour VM life cycle, it will have more than 1 
 million DB accesses per second. Not a small number IMHO.

On a somewhat related note, here's an idea that would be pretty easy to
implement.

What if we added some optional metadata to scheduler filters to let them
indicate where in the order of filters they should run?

The filters you're talking about here we would probably want to run
last.  Other filters that could potentially efficiently eliminate a
large number of hosts should be run first.
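
A rough sketch of the idea (the run_filter_priority attribute is invented
here; no such metadata exists on the filter classes today):

    # Rough sketch of the proposed ordering metadata; the attribute name
    # 'run_filter_priority' is invented for illustration.
    class BaseHostFilter(object):
        # Lower values run earlier; default sits in the middle.
        run_filter_priority = 100

        def host_passes(self, host_state, filter_properties):
            raise NotImplementedError()

    class CoreFilter(BaseHostFilter):
        run_filter_priority = 10    # cheap; can eliminate many hosts early

    class AggregateInstanceExtraSpecsFilter(BaseHostFilter):
        run_filter_priority = 1000  # hits the DB; run after cheap filters

    def ordered_filters(filters):
        # Sort the configured filters by their declared priority.
        return sorted(filters, key=lambda f: f.run_filter_priority)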

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Choice of API infrastructure

2013-11-03 Thread Angus Salkeld

On 03/11/13 11:26 +0800, Russell Bryant wrote:

On 11/02/2013 11:54 AM, Adrian Otto wrote:

Noorul,

I agree that key decisions should be tracked in blueprints. This is the
one for this decision which was made in our 2013-10-18 public meeting.
Jay's submission is consistent with the direction indicated by the team.

https://blueprints.launchpad.net/solum/+spec/rest-api-base

Transcript log:
http://irclogs.solum.io/2013/solum.2013-10-08-16.01.log.html


Heh, not much discussion there.  :-)

Here's my take ... Pecan+WSME has been pushed as the thing to
standardize on across most OpenStack APIs.  Ceilometer (and maybe
others?) are already using it.  Others, such as Nova, are planning to
use it this cycle. [1][2]


+1 for Pecan+WSME



I care less about the particular choice and more about consistency.  It
brings a lot of value, such as making it a lot easier for developers to
jump around between the OpenStack projects.  Can we first at least agree
that there is value in standardizing on *something* for most OpenStack APIs?

I understand that there may be cases where the needs for an API justify
being different.  Marconi being more of a data-plane API vs
control-plane means that performance concerns are much higher, for example.

If we agree that consistency is good, does Solum have needs that make it
different than the majority of OpenStack APIs?  IMO, it does not.

Can someone lay out a case for why all OpenStack projects should be
using Falcon, if that's what you think Solum should use?

Also, is anyone willing to put up the equivalent of Jay's review [3],
but with Pecan+WSME, to help facilitate the discussion?

[1]
http://icehousedesignsummit.sched.org/event/b2680d411aa7f5d432438a435ac21fee
[2]
http://icehousedesignsummit.sched.org/event/4a7316a4f5c6f783e362cbba2644bae2
[3] https://review.openstack.org/#/c/55040/

--
Russell Bryant



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Manual to build an application on open stack

2013-11-03 Thread Denis Makogon
I'd suggest you take a look at OpenStack Trove for the backend deployment.
Also, I've seen a REST-based OpenStack client written in Java on GitHub.
If you need any help, feel free to ask the community. Regards, Denis M.


2013/11/3 Krishanu Dhar rony.k...@gmail.com

 Thanks Denis. So, this is what I am planning.

 An application: I was thinking of writing it in Java because I'm not that
 good in Python. It could be accessed either via a web browser, or an app
 could be downloaded to a desktop.
 A brief set of functionalities:
 1. A DB backend to store information retrieved from devices in the
 network. (MySQL, maybe)
 2. Add devices to it for management (e.g. switch, storage array, server)
 3. Perform tasks on these devices (e.g. create, delete, modify logical
 objects).
 On Nov 2, 2013 4:32 PM, Denis Makogon dmako...@mirantis.com wrote:

 Ok, I got it. I'd suggest you describe the whole concept of your
 application (if it is not a commercial secret) and let the community lead
 you in the right direction.
 There are simple rules for developing with OpenStack:
 1. Python based.
 2. Client/server interaction
 3. REST (a minimal client sketch follows below)
 4. Test coverage: in-tree tests or Tempest.
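
To make rules 2 and 3 concrete, here is a minimal sketch of a client speaking
REST to OpenStack; the endpoint URL and credentials are placeholders, and the
token request body follows the Keystone v2.0 API:

    # Minimal sketch: authenticate against Keystone v2.0 and reuse the
    # token for subsequent REST calls. Endpoint and credentials are
    # placeholders.
    import json
    import requests

    KEYSTONE_URL = 'http://controller:5000/v2.0'  # placeholder endpoint

    def get_token(username, password, tenant_name):
        body = {'auth': {'tenantName': tenant_name,
                         'passwordCredentials': {'username': username,
                                                 'password': password}}}
        resp = requests.post(KEYSTONE_URL + '/tokens',
                             data=json.dumps(body),
                             headers={'Content-Type': 'application/json'})
        resp.raise_for_status()
        access = resp.json()['access']
        return access['token']['id'], access['serviceCatalog']

    token, catalog = get_token('demo', 'secret', 'demo')
    # Every later call to a service endpoint carries the token, e.g.:
    #   requests.get(nova_url + '/servers',
    #                headers={'X-Auth-Token': token})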


 2013/11/2 Krishanu Dhar rony.k...@gmail.com

 Sorry for being vague in my previous email. Thanks for the link, I'll go
 through it.

 I am trying to build an app that resides in the cloud and would perform
 some basic storage management operations.


 On Sat, Nov 2, 2013 at 3:07 PM, Denis Makogon dmako...@mirantis.com wrote:

 It depends on what kind of application you are building. There are
 several ways of developing something on top of OpenStack services.
 I hope this wiki page helps you a bit:
 https://wiki.openstack.org/wiki/Documentation


 2013/11/2 Krishanu Dhar rony.k...@gmail.com

  Hi,

 I want to build an application on OpenStack and would like some help on
 how to go about it. Since I am absolutely new to the stack, I was
 wondering if there is a developer's manual or something of that sort.

 All I have done so far is start the installation of DevStack. I am
 typing this email while the installation progresses in the background.

 Thanks for your time and help.
 krish

 --
 Krishanu





 --
 Krishanu



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Network topologies

2013-11-03 Thread Robert Collins
On 3 November 2013 17:36, Edgar Magana emag...@plumgrid.com wrote:
 Zen, Kevin, Aaron, Salvatore et al,

 Thank you so much for taking the time to review this proposal. I
 couldn't get a time slot in either the Neutron or the Heat track to have a deep
 discussion about it. My conclusions are the following:

I'm sorry if I managed to miss something, but what problem are you
trying to solve? I don't mean the technical problem - that seems clear
[provide store/restore of a network definition] - but the user story:
what are people doing, or trying to do, that will become
easier/better/possible with this thing implemented?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Glance] Support of v1 and v2 glance APIs in Nova

2013-11-03 Thread Joe Gordon
On Nov 1, 2013 6:46 PM, John Garbutt j...@johngarbutt.com wrote:

 On 29 October 2013 16:11, Eddie Sheffield eddie.sheffi...@rackspace.com wrote:
 
  John Garbutt j...@johngarbutt.com said:
 
  Going back to Joe's comment:
   Can both of these cases be covered by configuring the keystone catalog?
  +1
 
   If both v1 and v2 are present, pick v2; otherwise just pick what is in
   the catalogue. That seems cool. Not quite sure how multiple glance
   endpoints work in the keystone catalog, but it should work, I assume.

   We hard-code nova right now, and so we probably want to keep that route too?
 
   Nova doesn't use the catalog from Keystone when talking to Glance.
   There is a config value, glance_api_servers, which defines a list of Glance
   servers that gets randomized and cycled through. I assume that's what
   you're referring to with "we hard code nova". But currently there's nowhere
   in this path (internal nova to glance) where the keystone catalog is
   available.

  Yes. I was not very clear. I am proposing we change that. We could try
  to shoehorn the multiple glance nodes into the keystone catalog, then cache
  that in the context, but maybe that doesn't make sense. This is a
  separate change really.

FYI: we cache the cinder endpoints from the keystone catalog in the context
already, so doing something like that with glance won't be without
precedent.


 But clearly, we can't drop the direct configuration of glance servers
 for some time either.

   I think some of the confusion may be that glanceclient at the
   programmatic client level doesn't talk to keystone. That happens higher
   up, at the CLI level, which doesn't come into play here.
 
  From: Russell Bryant rbry...@redhat.com
  On 10/17/2013 03:12 PM, Eddie Sheffield wrote:
   Might I propose a compromise?

   1) For the VERY short term, keep the config value and get the change
   otherwise reviewed and hopefully accepted.

   2) Immediately file two blueprints:
      - python-glanceclient - expose a way to discover available versions
      - nova - depends on the glanceclient bp, allowing autodiscovery of
        the glance version and making the config value optional (though
        not deprecated/removed)
 
   Supporting both seems reasonable.  At least then *most* people don't
   need to worry about it and it just works, but the override is there if
   necessary, since multiple people seem to be expressing a desire to have
   it available.
 
  +1
 
  Can we just do this all at once?  Adding this to glanceclient doesn't
  seem like a huge task.
 
  I worry about us never getting the full solution, but it seems to have
  got complicated.
 
   The glanceclient side is done, as far as allowing access to the list of
   available API versions on a given server. It's getting Nova to use this
   info that's a bit sticky.

  Hmm, OK. Could we not just cache the detected version, to reduce the
  impact of that decision?
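
A sketch of that caching idea (the get_supported_versions call is a
placeholder name, since exposing version discovery in glanceclient is exactly
the blueprint under discussion):

    # Sketch of caching the detected API version per glance server.
    # 'get_supported_versions' is a placeholder name; exposing version
    # discovery in glanceclient is the blueprint being discussed.
    _VERSION_CACHE = {}

    def detect_version(client, server):
        if server not in _VERSION_CACHE:
            versions = client.get_supported_versions(server)  # placeholder
            # Prefer v2 when both are available, as suggested above.
            _VERSION_CACHE[server] = 2 if 2 in versions else 1
        return _VERSION_CACHE[server]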

   On 28 October 2013 15:13, Eddie Sheffield eddie.sheffi...@rackspace.com wrote:
   So...I've been working on this some more and hit a bit of a snag. The
   Glanceclient change was easy, but I see now that doing this in nova will
   require a pretty huge change in the way things work. Currently, the API
   version is grabbed from the config value, the appropriate driver is
   instantiated, and calls go through that. The problem comes in that the
   actual glance server isn't communicated with until very late in the
   process. Nothing sees the servers at the level where the driver is
   determined. Also there isn't a single glance server but a list of them,
   and in the event of certain communication failures the list is cycled
   through until success or a number of retries has passed.

   So to change this to auto-configuration will require turning this upside
   down, cycling through the servers at a higher level, choosing the
   appropriate driver for that server, and handling retries at that same
   level.

   Doable, but a much larger task than I first was thinking.

   Also, I don't really want the added overhead of getting the api versions
   before every call, so I'm thinking that going through the list of servers
   at startup, discovering the versions then, and caching that somehow would
   be helpful as well.

   Thoughts?
 
   I do worry about that overhead. But with Joe's comment, does it not
   just boil down to caching the keystone catalog in the context?

   I am not a fan of all the glance-specific code we have in nova; moving
   more of that into glanceclient can only be a good thing. For the
   XenServer integration, for efficiency reasons, we need glance to talk
   from dom0, so it has dom0 making the final HTTP call. So we would need a
   way of extracting that info from the glance client. But that seems
   better than having that code in nova.
 
   I know in Glance we've largely taken the view that the client should be
   as thin and lightweight as possible so users of the client can make use
   of it however they best see fit. There was an earlier patch that would
   have moved 

Re: [openstack-dev] [nova][scheduler]The database access in the scheduler filters

2013-11-03 Thread Russell Bryant
On 11/03/2013 03:12 PM, Russell Bryant wrote:
 On 11/01/2013 06:39 AM, Jiang, Yunhong wrote:
  I noticed several filters (AggregateMultiTenancyIsolation, ram_filter, 
  type_filter, AggregateInstanceExtraSpecsFilter) have DB access in 
  host_passes(). Some even access the DB on each invocation.

  Just curious if this is considered a performance issue? With a cloud of 
  10k nodes, 60 VMs per node, and a 3-hour VM life cycle, it will have more 
  than 1 million DB accesses per second. Not a small number IMHO.
 
 On a somewhat related note, here's an idea that would be pretty easy to
 implement.
 
 What if we added some optional metadata to scheduler filters to let them
 indicate where in the order of filters they should run?
 
 The filters you're talking about here we would probably want to run
 last.  Other filters that could potentially efficiently eliminate a
 large number of hosts should be run first.
 

https://review.openstack.org/55072

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Shared network between specific tenants, but not all tenants?

2013-11-03 Thread Salvatore Orlando
This blueprint is related (the concept stands for any network, not just
external ones):
https://blueprints.launchpad.net/neutron/+spec/sharing-model-for-external-networks

You can read the blueprint and spec pages for some context.
The bottom line is that it would be great to have some feedback on whether you
reckon this is:
i) an RBAC problem, where access control should be enhanced in order to
delegate access to network resources, thus enabling a tenant to select which
tenants a network should be shared with (a purely hypothetical payload for
this interpretation is sketched below); or
ii) a network topology problem, meaning that we don't really care
whether there's a single shared network segment or not, as long as there is
a way to provide a CIDR which is routable by different tenants.
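
To make interpretation (i) concrete, a purely hypothetical request body; no
such Neutron resource exists, and all names here are invented:

    # Purely hypothetical body for interpretation (i); the
    # 'network_share' resource and its fields are invented.
    share_request = {
        'network_share': {
            'network_id': 'NET_UUID',
            'target_tenant': 'TENANT_B_ID',  # tenant to share with
        }
    }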

Regards,
Salvatore


On 29 October 2013 22:20, Mike Wilson geekinu...@gmail.com wrote:

 +1

 I also have tenants asking for this :-). I'm interested to see a blueprint.

 -Mike


 On Tue, Oct 29, 2013 at 1:24 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 10/29/2013 02:25 PM, Justin Hammond wrote:

 We have been considering this and have some notes on our concept, but we
 haven't made a blueprint for it. I will speak amongst my group and find
 out what they think of making it more public.


 OK, cool, glad to know I'm not the only one with tenants asking for this
 :)

 Looking forward to a possible blueprint on this.

 Best,
 -jay


  On 10/29/13 12:26 PM, Jay Pipes jaypi...@gmail.com wrote:

  Hi Neutron devs,

 Are there any plans to support networks that are shared/routed only
 between certain tenants (not all tenants)?

 Thanks,
 -jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [summit] Youtube stream from Design Summit?

2013-11-03 Thread James E. Blair
Stefano Maffulli stef...@openstack.org writes:

 This led us to give up trying for Hong Kong. We searched for 
 alternatives (see the infra meeting logs a few months back and my 
 personal experiments [1] based on UDS experience) and the only solid 
 one was to use a land line to call in the crucial people that *have to* 
 be in a session. In the end Thierry and I made a judgment call to drop 
 that too because we didn't hear anyone demanding it, and setting them up 
 in a reliable way for all sessions required a lot of effort (which in 
 the past we felt went wasted anyway).

 We decided to let moderators pull in those 1-3 people ad hoc with 
 their preferred methods.

Indeed, we have an Asterisk server that we set up with the intent that
it should be able to support a lot (we tested at least 100) of simultaneous
attendees for remote summit participation.  But since Stefano and
Thierry knew there was no interest, we did not do the final prep needed
to support the summit.

If people really are interested in remote participation, we would be
happy to help with that using open source tools that are accessible to a
large number of people over a wide range of technology.  Please register
your interest with the track leads, Stefano, or the infrastructure team
well before the next summit and it can happen.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Logging Blueprint Approval

2013-11-03 Thread Adrian Otto
Russell,

On Nov 2, 2013, at 9:18 PM, Russell Bryant rbry...@redhat.com wrote:

 On 11/02/2013 01:42 AM, Adrian Otto wrote:
 Team,
 
 We have a blueprint for logging architecture within Solum, which is an 
 important feature to aid in debugging at runtime:
 
 https://blueprints.launchpad.net/solum/+spec/logging
 
 The Implementation field of the blueprint says Needs Code Review, but
 I didn't see a patch anywhere.  Did I miss it?

I just corrected that on the BP. Sorry, that was set incorrectly.

 On the implementation side, is this going to make use of the logging
 code in Oslo [1]?
 
 If it doesn't provide for everything you want, let's improve it!  That
 will allow the enhancements to be consumed by all other OpenStack
 projects, as well.
 
 [1]
 http://git.openstack.org/cgit/openstack/oslo-incubator/tree/openstack/common/log.py

Excellent idea. I put the link into the BP. I know that Paul Montgomery took an 
interest in this one, and may have some code ready to drop soon.

Since I heard nothing to the contrary, I am also marking this as approved now 
so we can move forward on implementation. This is something we'd like to have 
in Solum early.
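
For reference, the usual oslo-incubator logging pattern, assuming log.py is
synced into the solum tree under solum/openstack/common/:

    # Typical oslo-incubator logging usage, assuming the module is synced
    # into the project tree under solum/openstack/common/.
    from solum.openstack.common import log as logging

    LOG = logging.getLogger(__name__)

    def create_application(name):
        # Illustrative call site; the function itself is invented.
        LOG.info('Creating application %s', name)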

Thanks,

Adrian


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Choice of API infrastructure

2013-11-03 Thread Jay Pipes

On 11/02/2013 11:26 PM, Russell Bryant wrote:

On 11/02/2013 11:54 AM, Adrian Otto wrote:

Noorul,

I agree that key decisions should be tracked in blueprints. This is the
one for this decision which was made in our 2013-10-18 public meeting.
Jay's submission is consistent with the direction indicated by the team.

https://blueprints.launchpad.net/solum/+spec/rest-api-base

Transcript log:
http://irclogs.solum.io/2013/solum.2013-10-08-16.01.log.html


Heh, not much discussion there.  :-)


Agreed. I actually didn't know anything about the discussion -- I wasn't 
at the meeting. I just figured I would throw some example code up to 
Gerrit that shows how Falcon can be used for the API plumbing. Like I 
mentioned in a previous email, I believe it's much easier to discuss 
things when there is sample code...



Here's my take ... Pecan+WSME has been pushed as the thing to
standardize on across most OpenStack APIs.  Ceilometer (and maybe
others?) are already using it.  Others, such as Nova, are planning to
use it this cycle. [1][2]


I've used both actually, and I've come to prefer Falcon because of its 
simplicity, and specifically because of the following things:

* Its lack of integration with WSME, which I don't care for. I don't 
care for WSME because I believe it tries to put the model at the view 
layer instead of where it belongs, at the model layer.
* It doesn't need a configuration file, specifically a configuration 
file that is a Python file (as Pecan's is) as opposed to an .ini file.
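
For comparison, a minimal example of the Falcon style under discussion (the
resource and route are invented, not taken from Jay's review; falcon.API()
was the WSGI entry point at the time):

    # Minimal example of the Falcon style under discussion; the resource
    # and route are invented, not taken from Jay's review.
    import json
    import falcon

    class AssembliesResource(object):
        def on_get(self, req, resp):
            # Falcon dispatches GET requests to on_get().
            resp.body = json.dumps({'assemblies': []})
            resp.status = falcon.HTTP_200

    application = falcon.API()  # the WSGI app; no config file required
    application.add_route('/v1/assemblies', AssembliesResource())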



I care less about the particular choice and more about consistency.  It
brings a lot of value, such as making it a lot easier for developers to
jump around between the OpenStack projects.  Can we first at least agree
that there is value in standardizing on *something* for most OpenStack APIs?


I completely understand the need for consistency. I pushed my patch as 
an example of how to do things the Falcon way. While I would prefer 
Falcon over Pecan (and certainly over Pecan+WSME), I will respect the 
push towards consistency if that's what is most important.


That said, I also believe that the projects in Stackforge should be 
laboratories of experimentation, and these projects may serve as a good 
playground for various implementations of things. I remind the reader 
that over time, the development community has standardized on various 
things, only to find a better implementation in an incubated project. 
Pecan+WSME is actually an example of such experimentation turning into 
an accepted standard.


Best,
-jay


I understand that there may be cases where the needs for an API justify
being different.  Marconi being more of a data-plane API vs
control-plane means that performance concerns are much higher, for example.

If we agree that consistency is good, does Solum have needs that make it
different than the majority of OpenStack APIs?  IMO, it does not.

Can someone lay out a case for why all OpenStack projects should be
using Falcon, if that's what you think Solum should use?

Also, is anyone willing to put up the equivalent of Jay's review [3],
but with Pecan+WSME, to help facilitate the discussion?

[1]
http://icehousedesignsummit.sched.org/event/b2680d411aa7f5d432438a435ac21fee
[2]
http://icehousedesignsummit.sched.org/event/4a7316a4f5c6f783e362cbba2644bae2
[3] https://review.openstack.org/#/c/55040/




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Choice of API infrastructure

2013-11-03 Thread Adrian Otto

On Nov 3, 2013, at 6:54 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 11/02/2013 11:26 PM, Russell Bryant wrote:
 On 11/02/2013 11:54 AM, Adrian Otto wrote:
 Noorul,
 
 I agree that key decisions should be tracked in blueprints. This is the
 one for this decision which was made in our 2013-10-18 public meeting.
 Jay's submission is consistent with the direction indicated by the team.
 
 https://blueprints.launchpad.net/solum/+spec/rest-api-base
 
 Transcript log:
 http://irclogs.solum.io/2013/solum.2013-10-08-16.01.log.html
 
 Heh, not much discussion there.  :-)
 
 Agreed. I actually didn't know anything about the discussion -- I wasn't at 
 the meeting. I just figured I would throw some example code up to Gerrit that 
 shows how Falcon can be used for the API plumbing. Like I mentioned in a 
 previous email, I believe it's much easier to discuss things when there is 
 sample code...
 
 Here's my take ... Pecan+WSME has been pushed as the thing to
 standardize on across most OpenStack APIs.  Ceilometer (and maybe
 others?) are already using it.  Others, such as Nova, are planning to
 use it this cycle. [1][2]
 
 I've used both actually, and I've come to prefer Falcon because of its 
 simplicity, and specifically because of the following things:
 
 * Its lack of integration with WSME, which I don't care for. I don't care 
 for WSME because I believe it tries to put the model at the view layer 
 instead of where it belongs, at the model layer.
 * It doesn't need a configuration file, specifically a configuration file 
 that is a Python file (as Pecan's is) as opposed to an .ini file.
 
 I care less about the particular choice and more about consistency.  It
 brings a lot of value, such as making it a lot easier for developers to
 jump around between the OpenStack projects.  Can we first at least agree
 that there is value in standardizing on *something* for most OpenStack APIs?
 
 I completely understand the need for consistency. I pushed my patch as an 
 example of how to do things the Falcon way. While I would prefer Falcon over 
 Pecan (and certainly over Pecan+WSME), I will respect the push towards 
 consistency if that's what is most important.
 
 That said, I also believe that the projects in Stackforge should be 
 laboratories of experimentation, and these projects may serve as a good 
 playground for various implementations of things. I remind the reader that 
 over time, the development community has standardized on various things, only 
 to find a better implementation in an incubated project. Pecan+WSME is 
 actually an example of such experimentation turning into an accepted standard.
 
 Best,
 -jay
 
 I understand that there may be cases where the needs for an API justify
 being different.  Marconi being more of a data-plane API vs
 control-plane means that performance concerns are much higher, for example.
 
 If we agree that consistency is good, does Solum have needs that make it
 different than the majority of OpenStack APIs?  IMO, it does not.
 
 Can someone lay out a case for why all OpenStack projects should be
 using Falcon, if that's what you think Solum should use?
 
 Also, is anyone willing to put up the equivalent of Jay's review [3],
 but with Pecan+WSME, to help facilitate the discussion?
 
 [1]
 http://icehousedesignsummit.sched.org/event/b2680d411aa7f5d432438a435ac21fee
 [2]
 http://icehousedesignsummit.sched.org/event/4a7316a4f5c6f783e362cbba2644bae2
 [3] https://review.openstack.org/#/c/55040/
 

I added this subject to our next meeting agenda: 
https://wiki.openstack.org/wiki/Meetings/Solum

Adrian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Choice of API infrastructure

2013-11-03 Thread Clayton Coleman


 On Nov 3, 2013, at 9:02 AM, Jay Pipes jaypi...@gmail.com wrote:
 
 On 11/02/2013 11:26 PM, Russell Bryant wrote:
 On 11/02/2013 11:54 AM, Adrian Otto wrote:
 Noorul,
 
 I agree that key decisions should be tracked in blueprints. This is the
 one for this decision which was made in our 2013-10-18 public meeting.
 Jay's submission is consistent with the direction indicated by the team.
 
 https://blueprints.launchpad.net/solum/+spec/rest-api-base
 
 Transcript log:
 http://irclogs.solum.io/2013/solum.2013-10-08-16.01.log.html
 
 Heh, not much discussion there.  :-)
 
 Agreed. I actually didn't know anything about the discussion -- I wasn't at 
 the meeting. I just figured I would throw some example code up to Gerrit that 
 shows how Falcon can be used for the API plumbing. Like I mentioned in a 
 previous email, I believe it's much easier to discuss things when there is 
 sample code...
 
 Here's my take ... Pecan+WSME has been pushed as the thing to
 standardize on across most OpenStack APIs.  Ceilometer (and maybe
 others?) are already using it.  Others, such as Nova, are planning to
 use it this cycle. [1][2]
 
 I've used both actually, and I've come to prefer Falcon because of its 
 simplicity, and specifically because of the following things:
 
 * Its lack of integration with WSME, which I don't care for. I don't care 
 for WSME because I believe it tries to put the model at the view layer 
 instead of where it belongs, at the model layer.

Well, there are domain models, and API models.  Being able to map the domain 
model to a slightly different representation (although wsme isn't necessarily 
the best example of this), or to introduce virtual fields that exist solely for 
the benefit of a client, has been useful for us in practice over the years. 

The biggest benefit of the API model, from my perspective, is separating the 
concern of validating fields and reporting those validations consistently and 
correctly to a client from the underlying persistence translations.  Tying 
JSON serialization too closely to a domain model will cause other coupling 
problems later on.

If I understand correctly from looking at the nova code, this mapping (and 
other domain model actions) is a goal of the 'objects' work, so if we end up 
versioning our domain model, not just the API, there's less of a need for 
anything more than a light API translation.

Just my 2 cents: I generally prefer having a light API model on top of my 
domain model for anything more complicated than a simple CRUD resource.
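
A generic sketch of the light API model described above; all class and field
names are invented for illustration:

    # Generic sketch of a light API model over a domain model; all names
    # are invented for illustration.
    class Server(object):
        """Domain model: what persistence cares about."""
        def __init__(self, uuid, name, state):
            self.uuid, self.name, self.state = uuid, name, state

    class ServerView(object):
        """API model: validation plus a 'virtual' field for clients."""
        @staticmethod
        def validate(body):
            # Validation errors are reported consistently to the client,
            # independent of how the domain model is persisted.
            if not body.get('name'):
                raise ValueError("'name' is required")

        @staticmethod
        def render(server):
            return {'id': server.uuid,
                    'name': server.name,
                    # virtual field derived for the client's benefit:
                    'is_active': server.state == 'ACTIVE'}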

 * It doesn't need a configuration file, specifically a configuration file 
 that is a Python file (as Pecan's is) as opposed to an .ini file.
 
 I care less about the particular choice and more about consistency.  It
 brings a lot of value, such as making it a lot easier for developers to
 jump around between the OpenStack projects.  Can we first at least agree
 that there is value in standardizing on *something* for most OpenStack APIs?
 
 I completely understand the need for consistency. I pushed my patch as an 
 example of how to do things the Falcon way. While I would prefer Falcon over 
 Pecan (and certainly over Pecan+WSME), I will respect the push towards 
 consistency if that's what is most important.
 
 That said, I also believe that the projects in Stackforge should be 
 laboratories of experimentation, and these projects may serve as a good 
 playground for various implementations of things. I remind the reader that 
 over time, the development community has standardized on various things, only 
 to find a better implementation in an incubated project. Pecan+WSME is 
 actually an example of such experimentation turning into an accepted standard.
 
 Best,
 -jay
 
 I understand that there may be cases where the needs for an API justify
 being different.  Marconi being more of a data-plane API vs
 control-plane means that performance concerns are much higher, for example.
 
 If we agree that consistency is good, does Solum have needs that make it
 different than the majority of OpenStack APIs?  IMO, it does not.
 
 Can someone lay out a case for why all OpenStack projects should be
 using Falcon, if that's what you think Solum should use?
 
 Also, is anyone willing to put up the equivalent of Jay's review [3],
 but with Pecan+WSME, to help facilitate the discussion?
 
 [1]
 http://icehousedesignsummit.sched.org/event/b2680d411aa7f5d432438a435ac21fee
 [2]
 http://icehousedesignsummit.sched.org/event/4a7316a4f5c6f783e362cbba2644bae2
 [3] https://review.openstack.org/#/c/55040/
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Network topologies

2013-11-03 Thread Salvatore Orlando
On 3 November 2013 17:15, Robert Collins robe...@robertcollins.net wrote:

 On 3 November 2013 17:36, Edgar Magana emag...@plumgrid.com wrote:
  Zen, Kevin, Aaron, Salvatore et al,
 
   Thank you so much for taking the time to review this proposal. I
   couldn't get a time slot in either the Neutron or the Heat track to have
   a deep discussion about it. My conclusions are the following:

  I'm sorry if I managed to miss something, but what problem are you
  trying to solve? I don't mean the technical problem - that seems clear
  [provide store/restore of a network definition], but the user story:
  what are people doing, or trying to do, that will become
  easier/better/possible with this thing implemented?


That was actually one of my questions as well, and an answer to it will
probably be very helpful in understanding where this API should be placed.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Locking and ZooKeeper - a space oddysey

2013-11-03 Thread Steven Dake

Sandy,

Apologies for not responding earlier, I am on vacation ATM. Responses 
inline.


On 10/31/2013 08:13 AM, Sandy Walsh wrote:


On 10/30/2013 08:08 PM, Steven Dake wrote:

On 10/30/2013 12:20 PM, Sandy Walsh wrote:

On 10/30/2013 03:10 PM, Steven Dake wrote:

I will -2 any patch that adds zookeeper as a dependency to Heat.

Certainly any distributed locking solution should be plugin based and
optional. Just as a database-oriented solution could be the default
plugin.

Sandy,

Even if it is optional, some percentage of the userbase will enable it
and expect the Heat community to debug and support it.

But, that's the nature of every openstack project. I don't support
HyperV in Nova or HBase in Ceilometer. The implementers deal with that
support. I can help guide someone to those people but have no intentions
of standing up those environments.


The HyperV scenario is different from the Heat scenario.  If someone 
uses HyperV in Nova, they are pretty much on their own because of their 
choice to support an MS-based virt infrastructure.  HyperV is not a 
choice many OpenStack deployments (if any...) will make.


In the case of Heat, the default choice will be zookeeper, because it 
doesn't suffer from the deadlock problem.  Unfortunately these users will 
not recognize the problems that come with zookeeper (HA, scalability, 
security, documentation, a different runtime environment) until it is too 
late to alter the decision that was made ("We must have zookeeper!!").


I really don't think most people will *use* Nova with HyperV even though 
it is optional.  I do believe, however, that if zookeeper were 
optional it would become the default way to deploy Heat. This 
naturally leads to the Heat developers dealing with the ensuing chaos 
for one very specific problem that can be solved using alternative methods.


When users of OpenStack choose a db (postgres or mysql) or an amqp (qpid 
or rabbit), the entire OpenStack community is able to respond with 
support, rather than one program (heat), as in the hypothetical case of 
Zookeeper.



Re: the Java issue, we already have optional components in other
languages. I know Java is a different league of pain, but if it's an
optional component and left as a choice of the deployer, should we care?

-S

PS As an aside, what are your issues with ZK?



I realize zookeeper exists for a reason.  But unfortunately Zookeeper is
a server, rather than an in-process library.  This means someone needs
to figure out how to document, scale, secure, and provide high
availability for this component.

Yes, that's why we would use it. Same goes for rabbit and mysql.


The pain is not worth the gain.  A better solution would be to avoid 
locking altogether.



This is extremely challenging for the
two server infrastructure components OpenStack server processes depend
on today (AMQP, SQL).  If the entire OpenStack community saw value in
biting the bullet, accepting zookeeper as a dependency, and taking on
this work, I might be more amenable.

Why do other services need to agree on adopting ZK? If some Heat users
need it, they can use it. Nova shouldn't care.

And the Heat devs are the only folks with any responsibility to support 
it in this scenario.  If we are going to bring the pain, we should share 
it equally between programs :)



What we are talking about in the
review, however, is that the Heat team bite that bullet, which is a big
addition to the scope of work we already execute for the ability to gain
a distributed lock.  I would expect there are simpler approaches to
solve the problem without dragging the baggage of a new server component
into the OpenStack deployment.

Yes, there probably are, and alternatives are good. But, as others have
attested, ZK is tried and true. Why not support it also?
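
For concreteness, a ZooKeeper distributed lock via the kazoo client looks
roughly like this; the hosts and lock path are placeholders:

    # What the proposed distributed lock looks like with the kazoo
    # client; hosts and the lock path are placeholders.
    from kazoo.client import KazooClient

    # This connection is the extra server component that would need to be
    # deployed, secured, and made highly available.
    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()

    lock = zk.Lock('/heat/locks/STACK_UUID', identifier='engine-1')
    with lock:   # blocks until the lock is acquired
        pass     # perform the stack operation under the lock
    zk.stop()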


Choices are fundamental in life, and I generally disagree with limiting 
choices for people, including our target user base. However, I'd prefer 
we investigate choices that do not negatively impact one group of folks 
(the Heat developers and the various downstreams) in a profound way 
before we give up and say it's too hard.


As an example of the thought processes that went on after zookeeper was 
proposed, heat-core devs were talking about introducing a zookeeper 
tempest gate for Heat.  In my mind, if we gate it, we support it.  This 
leads to the natural conclusion that it must be securi-fied, HA-ified, 
documenti-fied, and scale-ified.  Groan.


See how one optional feature spins out of control?


Using zookeeper as is suggested in the review is far different from the
way Nova uses Zookeeper.  With the Nova use case, Nova still operates
just dandy without zookeeper.  With zookeeper in the Heat use case, it
essentially becomes the default way people are expected to deploy Heat.

Why, if it's a plugin?

explained above.

What I would prefer is taskflow over AMQP, to leverage existing server
infrastructure (that has already been documented, scaled, secured, and
HA-ified).

Re: [openstack-dev] [OpenStack-Infra] [summit] Youtube stream from Design Summit?

2013-11-03 Thread Jeremy Stanley
On 2013-11-01 18:32:51 -0400 (-0400), Nick Chase wrote:
[...]
 (Not sure what the difference would be from IRC, actually, though.)

Or, for that matter, the chat panel built into Etherpad itself.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [summit] Youtube stream from Design Summit?

2013-11-03 Thread David Medberry
On Fri, Nov 1, 2013 at 3:25 PM, Stefano Maffulli stef...@openstack.org wrote:

 This led us to give up trying for Hong Kong. We searched for
 alternatives (see the infra meeting logs a few months back and my
 personal experiments [1] based on UDS experience) and the only solid
 one was to use a land line to call in the crucial people that *have to*
 be in a session. In the end Thierry and I made a judgment call to drop
 that too because we didn't hear anyone demanding it, and setting them up
 in a reliable way for all sessions required a lot of effort (which in
 the past we felt went wasted anyway).


Ah, bad planning on my part. I was kind of counting on the remote access
just getting better and better.

I'll just have to play catch-up afterwards, as I'm unable to be there this
week.


-- 
-dave
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] How to fix the race condition issue between deleting and soft reboot?

2013-11-03 Thread Wangpan
Hi all,

I have a question about fixing a race condition between delete and soft 
reboot. The issue is:
1. If we soft reboot an instance and then delete it, the instance may not be 
deleted and may remain stuck in the 'deleting' task state. This is because of 
the bug below, which I fixed several months ago (just for the libvirt driver):
https://bugs.launchpad.net/nova/+bug/213
2. The other issue is that if the instance is rebooted just before the files 
under the instance dir are deleted, it may become a running-but-deleted one. 
This bug is here:
https://bugs.launchpad.net/nova/+bug/1246181
I want to fix it now, and I need your advice.
The commit is here: https://review.openstack.org/#/c/54477/ ; you can post your 
advice on gerrit or mail it to me.

The ways to fix bug #2 may be these (just for the libvirt driver, in my mind):
1. Add a lock to the reboot operation like the one on the delete operation, so 
the reboot and delete operations are done in sequence (a minimal sketch follows 
this list). But on the other hand, a soft reboot may take up to 120s if the 
instance doesn't support graceful shutdown, and I think that is too long for a 
user waiting to delete an instance, so this may not be the best way.
2. Check the instance state at the end of the _cleanup method in the libvirt 
driver, and if it is still running, destroy it again.
This way is usable, but both Nikola Dipanov and I dislike this 'ugly' approach.
3. Check the instance vm state and task state in the nova db before booting in 
reboot, and if it is deleted/deleting, stop the reboot process. This accesses 
the db at the driver level, so it is an 'ugly' way, too.
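
A minimal sketch of option 1, serializing reboot against delete with
oslo-incubator's lockutils keyed on the instance uuid; the helper functions
are invented for illustration:

    # Minimal sketch of option 1: serialize reboot against delete with
    # oslo-incubator's lockutils, keyed on the instance uuid. The helper
    # functions and driver call signatures are invented for illustration.
    from nova.openstack.common import lockutils

    def reboot_instance(driver, instance):
        @lockutils.synchronized(instance['uuid'], 'nova-')
        def _locked_reboot():
            driver.reboot(instance)
        _locked_reboot()

    def delete_instance(driver, instance):
        # Same lock name, so a delete waits for an in-flight reboot
        # (and vice versa) instead of racing with it.
        @lockutils.synchronized(instance['uuid'], 'nova-')
        def _locked_delete():
            driver.destroy(instance)
        _locked_delete()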

Nikola suggests that 'maybe we can leverage task/vm states and refactor how 
reboot is done so we can back out of a reboot on a delete', but I think we 
should let users delete an instance at any time and in any state, so the delete 
operation during a 'soft reboot' should not be forbidden.

Thanks, and I'm waiting for your feedback!

2013-11-04



Wangpan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] AUTO: Xi Lun Chen is out of office (returning 2013/11/11)

2013-11-03 Thread Xi Lun Chen


I am out of the office until 2013/11/11.

Xu Han Peng/China/IBM is my backup for my team's development activities.
Xiao Xin Liu/China/IBM is my backup for management. Anything urgent, pls
call +86-13811600424.


Note: This is an automated response to your message Re: [openstack-dev]
[summit] Youtube stream from Design Summit? sent on 11/04/2013 8:16:38.

This is the only notification you will receive while this person is away.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-11-03 Thread David koo
On Wed, Oct 30, 2013 at 12:18:13AM -0400, Mike Spreitzer wrote:
 David koo david@huawei.com wrote on 10/29/2013 05:19:24 AM:

  Would it be possible to provide an alternate link to the doc? The
  GFW blocks google docs :(.

 For those of you behind the GFW, here is the main doc and some of the docs it references:

Many thanks!

Sorry for the late reply.

--
Koo
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev