Re: [Openstack] [NetStack] Quantum Service API extension proposal

2011-05-23 Thread Dan Wendlandt
Hi Ying,

Thanks for the detailed example.  You are correct, this is in line with what
I was thinking.

A "data extension" mechanism like this would let any interested party
cleanly expose additional properties for API port objects, and as Alex
mentioned, potentially for API network objects as well.  From an internal
Quantum architecture perspective, we'll have to discuss how this data gets
passed to the plugin, what validation happens at the API layer, as well as
how plugins are able to go beyond basic data extension to add new API methods
and objects.  This is what I'd like to tackle with the blueprint:
https://blueprints.launchpad.net/network-service/+spec/quantum-api-extensions

During the meeting tomorrow we can see if people are largely on the same
page, in which case we can move on to the blueprint on this.

Dan


On Mon, May 23, 2011 at 7:48 PM, Ying Liu (yinliu2) wrote:

> Hi Dan,
>
>
>
> Totally agree. “Data Extensions” is the way we can extend the
> configurations list for non-base keys.
>
> Actually, we can use this mechanism to extend the extensible configurations
> construct proposed earlier, assuming that data construct is already in the
> namespace.
>
>
>
> The extension can be something like this (the pdf and wadl files define the
> extension content):
>
>
>
> <extension
>     namespace="http://docs.rackspacecloud.com/network/api/ext/conf/v1.0"
>     alias="CSCO-CONF">
>
>     <atom:link rel="describedby" type="application/pdf"
>         href="http://docs.ciscocloud.com/network/api/ext/net-conf-2011.pdf"/>
>
>     <atom:link rel="describedby" type="application/vnd.sun.wadl+xml"
>         href="http://docs.ciscocloud.com/network/api/ext/net-conf.wadl"/>
>
>     <description>Adds the configurations to the port.</description>
> </extension>
>
>
>
>
>
> The data extension:
>
>
>
> {
>     "port" : {
>         "id" : 8,
>         "name" : "My L2 Network",
>         "created_at" : "2011-05-18 18:30:40",
>         "status" : "Active",
>         "configurations" : {
>             "CSCO-CONF:acl" : "permit ip any 209.165.201.2 255.255.255.255",
>             "vlan_segment" : "5"
>         }
>     }
> }
>
>
>
> Thus, registration, discovery and promotion can all follow
> the standard extension mechanism.  Just my understanding, please correct me
> if I missed something here.
>
>
>
> Best,
>
> Ying
>
>
>
> *From:* Dan Wendlandt [mailto:d...@nicira.com]
> *Sent:* Monday, May 23, 2011 4:54 PM
> *To:* Alex Neefus
> *Cc:* Ying Liu (yinliu2); openstack@lists.launchpad.net
> *Subject:* Re: [Openstack] [NetStack] Quantum Service API extension
> proposal
>
>
>
>
>
>
>
>
>
> On Mon, May 23, 2011 at 1:05 PM, Alex Neefus  wrote:
>
> Hi All –
>
>
>
> I wanted to lend support to this proposal; however, I don’t think we should
> be so quick to say this whole thing is an extension.
>
>
>
> Hi Alex, all,
>
>
>
> I'd like to try and level-set here for a minute, as I don't believe people
> are saying that such a mechanism itself would be an extension, but rather
> that it would be a mechanism for plugins to expose extensions.
>
>
>
> Here is the situation as I understand it:  I believe most people would feel
> that having a conf/cap/profile attribute on ports and networks in the core
> API is (at least) one reasonable way of letting plugins expose additional
> data via the Quantum API.  Where the issue of OpenStack extensions would
> come in is providing a mechanism to introduce new key-value pairs to
> something like the conf/cap/profile attribute.  I'm not an expert on API
> extensibility, but doing so seems to be a direct application of the "Data
> Extensions" portion of the OpenStack extensions proposal (see slide 29 of
> http://www.slideshare.net/RackerWilliams/openstack-extensions)
>
>
>
> The OpenStack extensions proposal focuses on standardizing several key
> questions around introducing new data, such as these key-value pairs:
>
> - How do you prevent naming conflicts between keys?
>
> - How does someone easily determine whether a Quantum instance supports a
> certain type of functionality (i.e., a certain key)?
>
> - How does one get access to documentation on the format + type of the
> "value" portion of the key-value pair? (values may be nested objects in
> complex scenarios).
>
> - How do we handle the official "promotion" of a key-value pair from an
> extension to being part of the "base"?
>
>
>
> In my opinion these all seem like good things to standardize across
> OpenStack services and hence be part of Quantum.
>
>
>
> My original response was motivated by the fact that the proposal didn't
> seem to mention using the OpenStack extension mechanism to expose non "base"
> key-value pairs in the conf/cap/profile attribute.  Based on Rick's response
> it seems like the plan is in fact to try and use OpenStack extensions, so
> I'm hoping we're largely on the same page, fingers crossed :)
>
>
>
> Dan
>
>
>
>
>
>
>
>
>
> We benefit a lot from having a standard capabilities mechanism as part of
> our core Quantum API. I like Ying’s key value method as well. I think it’s
> logical, clean and scalable.

Re: [Openstack] How to limit the total virtual processors/memory for one compute node?

2011-05-23 Thread Lorin Hochstein
Hi Huang:

You can use the simple scheduler, which allocates new instances to the host that
has the fewest instances currently running.

--scheduler_driver=nova.scheduler.simple.SimpleScheduler

More sophisticated schedulers are currently under active development, but they 
haven't made it to the trunk yet.
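
For reference, here is a minimal flag-file sketch. The limit flags are my
reading of the Cactus SimpleScheduler source and should be verified against
your installed version; note that max_gigabytes applies to volume placement,
and as far as I can tell Cactus does not enforce a per-host memory cap:

    # nova.conf (flag-file style)
    --scheduler_driver=nova.scheduler.simple.SimpleScheduler
    # SimpleScheduler stops placing instances on a host once its
    # allocated instance cores reach this limit (default 16):
    --max_cores=16
    # Per-host capacity limit used by the volume scheduler:
    --max_gigabytes=10000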

Take care,

Lorin
--
Lorin Hochstein, Computer Scientist
USC Information Sciences Institute
703.812.3710
http://www.east.isi.edu/~lorin

On May 23, 2011, at 10:00 PM, Huang Zhiteng wrote:

> Hi all,
> 
> In my setup of Cactus, I found the Nova scheduler would place a newly created 
> instance on a compute node that is already fully occupied (in terms of memory 
> or # of virtual processors), which leads to swapping and VP overcommitting.  
> That would cause serious performance issues in a busy environment.  So I was 
> wondering if there's some kind of mechanism to limit the resources one compute 
> node could use, something like the 'weight' in OpenNebula. 
> 
> I'm using Cactus (with GridDynamic's RHEL package), default scheduler policy, 
> one zone only.
> 
> Any suggestion?
> -- 
> Regards
> Huang Zhiteng
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [NetStack] Quantum Service API extension proposal

2011-05-23 Thread Ying Liu (yinliu2)
Hi Dan,

 

Totally agree. "Data Extensions" is the way we can extend the configurations
list for non-base keys.

Actually, we can use this mechanism to extend the extensible configurations
construct proposed earlier, assuming that data construct is already in the
namespace.

 

The extension can be something like this (the pdf and wadl files define the
extension content):

 

<extension
    namespace="http://docs.rackspacecloud.com/network/api/ext/conf/v1.0"
    alias="CSCO-CONF">

    <atom:link rel="describedby" type="application/pdf"
        href="http://docs.ciscocloud.com/network/api/ext/net-conf-2011.pdf"/>

    <atom:link rel="describedby" type="application/vnd.sun.wadl+xml"
        href="http://docs.ciscocloud.com/network/api/ext/net-conf.wadl"/>

    <description>Adds the configurations to the port.</description>
</extension>

 

The data extension:

 

{
    "port" : {
        "id" : 8,
        "name" : "My L2 Network",
        "created_at" : "2011-05-18 18:30:40",
        "status" : "Active",
        "configurations" : {
            "CSCO-CONF:acl" : "permit ip any 209.165.201.2 255.255.255.255",
            "vlan_segment" : "5"
        }
    }
}

 

Thus, registration, discovery and promotion can all follow
the standard extension mechanism.  Just my understanding, please correct
me if I missed something here.
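
As a concrete sketch of the discovery step, a client could list the supported
extensions before touching any extended keys. The request path and response
shape below are my assumption, modeled on the extensions draft rather than on
any released Quantum code:

    GET /v1.0/extensions

    {
        "extensions" : [
            {
                "alias" : "CSCO-CONF",
                "namespace" : "http://docs.rackspacecloud.com/network/api/ext/conf/v1.0",
                "description" : "Adds the configurations to the port."
            }
        ]
    }

A client that does not find the CSCO-CONF alias here would simply skip the
CSCO-CONF: keys.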

 

Best,

Ying

 

From: Dan Wendlandt [mailto:d...@nicira.com] 
Sent: Monday, May 23, 2011 4:54 PM
To: Alex Neefus
Cc: Ying Liu (yinliu2); openstack@lists.launchpad.net
Subject: Re: [Openstack] [NetStack] Quantum Service API extension
proposal

 

 

 

 

On Mon, May 23, 2011 at 1:05 PM, Alex Neefus  wrote:

Hi All - 

 

I wanted to lend support to this proposal; however, I don't think we
should be so quick to say this whole thing is an extension.   

 

Hi Alex, all, 

 

I'd like to try and level-set here for a minute, as I don't believe
people are saying that such a mechanism itself would be an extension,
but rather that it would be a mechanism for plugins to expose
extensions.  

 

Here is the situation as I understand it:  I believe most people would
feel that having a conf/cap/profile attribute on ports and networks in
the core API is (at least) one reasonable way of letting plugins
expose additional data via the Quantum API.  Where the issue of
OpenStack extensions would come in is providing a mechanism to introduce
new key-value pairs to something like the conf/cap/profile attribute.
I'm not an expert on API extensibility, but doing so seems to be a direct
application of the "Data Extensions" portion of the OpenStack extensions
proposal (see slide 29 of
http://www.slideshare.net/RackerWilliams/openstack-extensions)

 

The OpenStack extensions proposal focuses on standardizing several key
questions around introducing new data, such as these key-value pairs: 

- How do you prevent naming conflicts between keys?  

- How does someone easily determine whether a Quantum instance supports
a certain type of functionality (i.e., a certain key)?

- How does one get access to documentation on the format + type of the
"value" portion of the key-value pair? (values may be nested objects in
complex scenarios).  

- How do we handle the official "promotion" of a key-value pair from an
extension to being part of the "base"?  

  

In my opinion these all seem like good things to standardize across
OpenStack services and hence be part of Quantum.  

 

My original response was motivated by the fact that the proposal didn't
seem to mention using the OpenStack extension mechanism to expose non
"base" key-value pairs in the conf/cap/profile attribute.  Based on
Rick's response it seems like the plan is in fact to try and use
OpenStack extensions, so I'm hoping we're largely on the same page,
fingers crossed :)   

 

Dan

 

 

 

 

We benefit a lot from having a standard capabilities mechanism
as part of our core Quantum API. I like Ying's key value method as well.
I think it's logical, clean and scalable. I propose that basic read
access of "cap" off of our major objects: network, port, interface be
included in our first release. 

 

So in summary I would like to encourage us to add:

GET  /networks/{net_id}/conf

GET  /networks/{net_id}/ports/{port_id}/conf/

GET  {entity}/VIF/conf/

 

Each of these would return a list of keys.

 

Additionally Quantum base should support 

GET  /networks/{net_id}/conf/{key}

GET  /networks/{net_id}/ports/{port_id}/conf/{key}

GET  {entity}/VIF/conf/{key}

 

Where {key} is the name of either a standard capability or an
> extension capability. We can define an error code now to designate a
capability not supported by the plugin. (i.e. 472 - CapNotSupported)

 

Finally we don't need to standardize on every capability that
might be supported if we provide this simple mechanism. Specific
> capabilities Key,Value sets can be added later or included as vendor
specific extensions.

 

I'm happy to add this to the wiki if there is consensus.
Rick/Dan -

[Openstack] How to limit the total virtual processors/memory for one compute node?

2011-05-23 Thread Huang Zhiteng
Hi all,

In my setup of Cactus, I found the Nova scheduler would place a newly created
instance on a compute node that is already fully occupied (in terms of memory
or # of virtual processors), which leads to swapping and VP overcommitting.
 That would cause serious performance issues in a busy environment.  So I was
wondering if there's some kind of mechanism to limit the resources one compute
node could use, something like the 'weight' in OpenNebula.

I'm using Cactus (with GridDynamic's RHEL package), default scheduler
policy, one zone only.

Any suggestion?
-- 
Regards
Huang Zhiteng
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [NetStack] Quantum Service API extension proposal

2011-05-23 Thread Ying Liu (yinliu2)
Thanks, Alex.

 

I'd like to add a few points. Our proposal is not intended to provide the
whole extension mechanism for the Quantum Service. Instead, we are proposing
an extensible construct for the Quantum core API.  Thus, we can have a
flexible and scalable way to describe the port's configurations.  For
example, the Port Profile includes a set of configurations. A Port
Profile is defined once and can be associated with multiple ports. If
we need to create 30 ports with the same configuration settings, we
don't need to repeat the same configurations 30 times. 

 

We agree with the standard extension mechanism drafted by Jorge, with which
we can extend attributes, actions, headers, resources, etc.
Considering our proposal, I think there are three options:

 

1.   We keep "Port Profile", the extensible construct for the port.
It's just for the configuration extension. All other extensions go
through the standard extension mechanism. The reason is that a
configuration extension is just adding an attribute to the port. With
an extensible "Port Profile", we can quickly add it. For the extended
keys, we can require using the vendor ID as the prefix, for example
CSCO-CONF:acl (see the sketch after this list).

 

2.   We keep the "Port Profile" construct associated with the port.
But it only has a fixed set of basic keys for common configurations. All
the extensions are handled by the standard extension mechanism. With this
approach, we can still have a scalable way to configure a large network.
Plus, we won't need an extension unless we need some configuration beyond
that common configuration set. 

 

 

3.   In the core API, only the port id is associated with the port (should
any other basic attributes be associated with a port?). All the
extensions are handled by the extension mechanism.  With this option, any
additional attribute/configuration requires an extension.
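
As a sketch of what option 1 might look like on the wire (paths and field
names are illustrative only, not a settled format), a profile is defined once
and then referenced by each port:

    POST /port_profiles
    {
        "port_profile" : {
            "name" : "web-tier",
            "vlan_segment" : "5",
            "CSCO-CONF:acl" : "permit ip any 209.165.201.2 255.255.255.255"
        }
    }

    PUT /networks/{net_id}/ports/{port_id}
    { "port" : { "profile" : "web-tier" } }

Creating 30 ports with the same settings then means 30 small PUTs against one
shared profile, rather than repeating the configuration 30 times.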

 

We are fine with any of them, and I can update the wiki based on the
community's decision.

 

Best,

Ying

 

From: Alex Neefus [mailto:a...@mellanox.com] 
Sent: Monday, May 23, 2011 1:05 PM
To: Ying Liu (yinliu2); openstack@lists.launchpad.net
Cc: Rick Clark
Subject: RE: [Openstack] [NetStack] Quantum Service API extension
proposal

 

Hi All - 

 

I wanted to lend support to this proposal; however, I don't think we
should be so quick to say this whole thing is an extension.  

 

We benefit a lot from having a standard capabilities mechanism as part
of our core Quantum API. I like Ying's key value method as well. I think
it's logical, clean and scalable. I propose that basic read access of
"cap" off of our major objects: network, port, interface be included in
our first release. 

 

So in summary I would like to encourage us to add:

GET  /networks/{net_id}/conf

GET  /networks/{net_id}/ports/{port_id}/conf/

GET  {entity}/VIF/conf/

 

Each of these would return a list of keys.

 

Additionally Quantum base should support 

GET  /networks/{net_id}/conf/{key}

GET  /networks/{net_id}/ports/{port_id}/conf/{key}

GET  {entity}/VIF/conf/{key}

 

Where {key} is the name of either a standard capability or an extension
capability. We can define an error code now to designate a capability
not supported by the plugin. (i.e. 472 - CapNotSupported)

 

Finally we don't need to standardize on every capability that might be
supported if we provide this simple mechanism. Specific capabilities
Key,Value sets can be added later or included as vendor specific
extensions.

 

I'm happy to add this to the wiki if there is consensus. Rick/Dan -
Maybe this should be a topic for Tuesday's meeting. 

 

Alex

 

---

Alex Neefus

Senior System Engineer | Mellanox Technologies

(o) 617.337.3116 | (m) 201.208.5771 | (f) 617.337.3019

 

 

 

 

 

From: openstack-bounces+alex=mellanox@lists.launchpad.net
[mailto:openstack-bounces+alex=mellanox@lists.launchpad.net] On
Behalf Of Ying Liu (yinliu2)
Sent: Saturday, May 21, 2011 1:10 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] [NetStack] Quantum Service API extension proposal

 

Hi all,

 

We just posted a proposal for OpenStack Quantum Service API extension on
community wiki page at
http://wiki.openstack.org/QuantumAPIExtensions?action=AttachFile&do=view
&target=quantum_api_extension.pdf

or 

http://wiki.openstack.org/QuantumAPIExtensions?action=AttachFile&do=view
&target=quantum_api_extension.docx

 

Please review and let us know your comments/suggestions. An etherpad
page is created for API extension discussion
http://etherpad.openstack.org/uWXwqQNU4s

 

Best,

Ying

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [NetStack] Quantum Service API extension proposal

2011-05-23 Thread Dan Wendlandt
On Mon, May 23, 2011 at 1:05 PM, Alex Neefus  wrote:

> Hi All –
>
>
>
> I wanted to lend support to this proposal; however, I don’t think we should
> be so quick to say this whole thing is an extension.
>

Hi Alex, all,

I'd like to try and level-set here for a minute, as I don't believe people
are saying that such a mechanism itself would be an extension, but rather
that it would be a mechanism for plugins to expose extensions.

Here is the situation as I understand it:  I believe most people would feel
that having a conf/cap/profile attribute on ports and networks in the core
API is (at least) one reasonable way of letting plugins expose additional
data via the Quantum API.  Where the issue of OpenStack extensions would
come in is providing a mechanism to introduce new key-value pairs to
something like the conf/cap/profile attribute.  I'm not an expert on API
extensibility, but doing so seems to be a direct application of the "Data
Extensions" portion of the OpenStack extensions proposal (see slide 29 of
http://www.slideshare.net/RackerWilliams/openstack-extensions)

The OpenStack extensions proposal focuses on standardizing several key
questions around introducing new data, such as these key-value pairs:
- How do you prevent naming conflicts between keys?
- How does someone easily determine whether a Quantum instance supports a
certain type of functionality (i.e., a certain key)?
- How does one get access to documentation on the format + type of the
"value" portion of the key-value pair? (values may be nested objects in
complex scenarios).
- How do we handle the official "promotion" of a key-value pair from an
extension to being part of the "base"?

In my opinion these all seem like good things to standardize across
OpenStack services and hence be part of Quantum.

My original response was motivated by the fact that the proposal didn't seem
to mention using the OpenStack extension mechanism to expose non "base"
key-value pairs in the conf/cap/profile attribute.  Based on Rick's response
it seems like the plan is in fact to try and use OpenStack extensions, so
I'm hoping we're largely on the same page, fingers crossed :)

Dan




>
>
> We benefit a lot from having a standard capabilities mechanism as part of
> our core Quantum API. I like Ying’s key value method as well. I think it’s
> logical, clean and scalable. I propose that basic read access of “cap” off
> of our major objects: network, port, interface be included in our first
> release.
>
>
>
> So in summary I would like to encourage us to add:
>
> GET  /networks/{net_id}/conf
>
> GET  /networks/{net_id}/ports/{port_id}/conf/
>
> GET  {entity}/VIF/conf/
>
>
>
> Each of these would return a list of keys.
>
>
>
> Additionally Quantum base should support
>
> GET  /networks/{net_id}/conf/{key}
>
> GET  /networks/{net_id}/ports/{port_id}/conf/{key}
>
> GET  {entity}/VIF/conf/{key}
>
>
>
> Where {key} is the name of either a standard capability or an extension
> capability. We can define an error code now to designate a capability not
> supported by the plugin. (i.e. 472 – CapNotSupported)
>
>
>
> Finally we don’t need to standardize on every capability that might be
> supported if we provide this simple mechanism. Specific capabilities
> Key,Value sets can be added later or included as vendor specific
> extensions.
>
>
>
> I’m happy to add this to the wiki if there is consensus. Rick/Dan – Maybe
> this should be a topic for Tuesday's meeting.
>
>
>
> Alex
>
>
>
> ---
>
> Alex Neefus
>
> Senior System Engineer | Mellanox Technologies
>
> (o) 617.337.3116 | (m) 201.208.5771 | (f) 617.337.3019
>
>
>
>
>
>
>
>
>
>
>
> *From:* openstack-bounces+alex=mellanox@lists.launchpad.net [mailto:
> openstack-bounces+alex=mellanox@lists.launchpad.net] *On Behalf Of *Ying
> Liu (yinliu2)
> *Sent:* Saturday, May 21, 2011 1:10 PM
> *To:* openstack@lists.launchpad.net
> *Subject:* [Openstack] [NetStack] Quantum Service API extension proposal
>
>
>
> Hi all,
>
>
>
> We just posted a proposal for OpenStack Quantum Service API extension on
> community wiki page at
> http://wiki.openstack.org/QuantumAPIExtensions?action=AttachFile&do=view&target=quantum_api_extension.pdf
>
> or
>
>
> http://wiki.openstack.org/QuantumAPIExtensions?action=AttachFile&do=view&target=quantum_api_extension.docx
>
>
>
> Please review and let us know your comments/suggestions. An etherpad page
> is created for API extension discussion
> http://etherpad.openstack.org/uWXwqQNU4s
>
>
>
> Best,
>
> Ying
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
~~~
Dan Wendlandt
Nicira Networks, Inc.
www.nicira.com | www.openvswitch.org
Sr. Product Manager
cell: 650-906-2650
~~~
___
Mailing list: https://launchpad.net/~openstack

Re: [Openstack] [NetStack] Quantum Service API extension proposal

2011-05-23 Thread kmestery
Debo:

A tap is no different from a VM vNIC from a switch perspective; it still
contains a portion owned by Nova (the tap itself), and it connects to a port,
which is owned by Quantum. So in essence, the tap still has both a "vif" and a
"port" in this terminology.

Thanks,
Kyle

On May 23, 2011, at 4:10 PM, Debo Dutta (dedutta) wrote:

> Hi Troy
>  
> What about a tap? It's also like a port…. Should that be in Quantum?
>  
> Regards
> Debo
>  
> From: Troy Toman [mailto:troy.to...@rackspace.com] 
> Sent: Monday, May 23, 2011 2:10 PM
> To: Debo Dutta (dedutta)
> Cc: Alex Neefus; Ying Liu (yinliu2); 
> Subject: Re: [Openstack] [NetStack] Quantum Service API extension proposal
>  
> I think the idea was slightly different. We were equating a vif to a NIC in a 
> physical server. A port was equated to a switch port on a physical switch. 
> Doesn't necessarily mean they have to be different. But, there was a reason 
> we used different terminology. 
>  
> In particular, we felt the vif was something that would continue to be in the 
> server's domain and managed within Nova. A port was a construct that is owned 
> and managed by the network service (Quantum). 
>  
> Troy
>  
> On May 23, 2011, at 3:56 PM, Debo Dutta (dedutta) wrote:
> 
> 
> Quick question: it seems we are calling one end of the virtual wire a port 
> and the other a vif. Is there a reason to do that? Can we just say that 
> a wire connects 2 ports?
>  
> Also another interesting network scenario is when there is a wire connecting 
> 2 ports and you have a tap (for all sorts of scenarios). I think the 
> semantics of the tap might be  quite basic.
>  
> Regards
> debo
>  
> From: openstack-bounces+dedutta=cisco@lists.launchpad.net 
> [mailto:openstack-bounces+dedutta=cisco@lists.launchpad.net] On Behalf Of 
> Alex Neefus
> Sent: Monday, May 23, 2011 1:05 PM
> To: Ying Liu (yinliu2); openstack@lists.launchpad.net
> Subject: Re: [Openstack] [NetStack] Quantum Service API extension proposal
>  
> Hi All –
>  
> I wanted to lend support to this proposal, however I don’t think we should be 
> so quick to say this whole thing is an extension.  
>  
> We benefit a lot from having a standard capabilities mechanism as part of our 
> core Quantum API. I like Ying’s key value method as well. I think it’s 
> logical, clean and scalable. I propose that basic read access of “cap” off of 
> our major objects: network, port, interface be included in our first release.
>  
> So in summary I would like to encourage us to add:
> GET  /networks/{net_id}/conf
> GET  /networks/{net_id}/ports/{port_id}/conf/
> GET  {entity}/VIF/conf/
>  
> Each of these would return a list of keys.
>  
> Additionally Quantum base should support
> GET  /networks/{net_id}/conf/{key}
> GET  /networks/{net_id}/ports/{port_id}/conf/{key}
> GET  {entity}/VIF/conf/{key}
>  
> Where {key} is the name of either a standard capability or an extention 
> capability. We can define an error code now to designate a capability not 
> supported by the plugin. (i.e. 472 – CapNotSupported)
>  
> Finally we don’t need to standardize on every capability that might be 
> supported if we provide this simple mechanism. Specific capabilities 
> Key,Value sets can be added later but or included as vendor specific 
> extensions.
>  
> I’m happy to add this to the wiki if there is consensus. Rick/Dan – Maybe 
> this should be a topic for Tuesdays meeting.
>  
> Alex
>  
> ---
> Alex Neefus
> Senior System Engineer | Mellanox Technologies
> (o) 617.337.3116 | (m) 201.208.5771 | (f) 617.337.3019
>  
>  
>  
>  
>  
> From: openstack-bounces+alex=mellanox@lists.launchpad.net 
> [mailto:openstack-bounces+alex=mellanox@lists.launchpad.net] On Behalf Of 
> Ying Liu (yinliu2)
> Sent: Saturday, May 21, 2011 1:10 PM
> To: openstack@lists.launchpad.net
> Subject: [Openstack] [NetStack] Quantum Service API extension proposal
>  
> Hi all,
>  
> We just posted a proposal for OpenStack Quantum Service API extension on 
> community wiki page 
> athttp://wiki.openstack.org/QuantumAPIExtensions?action=AttachFile&do=view&target=quantum_api_extension.pdf
> or
> http://wiki.openstack.org/QuantumAPIExtensions?action=AttachFile&do=view&target=quantum_api_extension.docx
>  
> Please review and let us know your comments/suggestions. An etherpad page is 
> created for API extension discussionhttp://etherpad.openstack.org/uWXwqQNU4s
>  
> Best,
> Ying
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>  
>  

Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Sandy Walsh
Thanks to all for the input. I don't think we've really come to any conclusions 
for the near term.

Unless someone screams, we will be proceeding along the following lines:

1. Adding PUT /zones/server/ to create an instance that will return a 
Reservation ID (a UUID). It will also accept a num-instances parameter. 
(I'll refactor with the existing code to keep the duplication to a minimum)

2. python-novaclient will be extended to take an optional reservation ID for 
GET /servers, which will be used to list instances across zones based on 
Reservation ID
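
To make the shape concrete, the exchange would look roughly like this (paths,
status codes and field names below are illustrative, not settled):

    PUT /zones/server/
    { "server" : { ... }, "num_instances" : 3 }

    => 202 { "reservation_id" : "0f6ad4a0-...-uuid" }

    GET /servers?reservation_id=0f6ad4a0-...-uuid

    => 200 the instances created under that reservation, across zones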

None of this should affect the existing OS API or EC2 API functionality. 

We can have Feats of Strength later to decide how this should live on in an OS 
API 2.0 world.

/me listens for the screams ...

-S




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [NetStack] Quantum Service API extension proposal

2011-05-23 Thread Debo Dutta (dedutta)
Hi Troy 

 

What about a tap? It's also like a port. Should that be in Quantum?

 

Regards

Debo

 

From: Troy Toman [mailto:troy.to...@rackspace.com] 
Sent: Monday, May 23, 2011 2:10 PM
To: Debo Dutta (dedutta)
Cc: Alex Neefus; Ying Liu (yinliu2); 
Subject: Re: [Openstack] [NetStack] Quantum Service API extension
proposal

 

I think the idea was slightly different. We were equating a vif to a NIC
in a physical server. A port was equated to a switch port on a physical
switch. Doesn't necessarily mean they have to be different. But, there
was a reason we used different terminology.  

 

In particular, we felt the vif was something that would continue to be
in the server's domain and managed within Nova. A port was a construct
that is owned and managed by the network service (Quantum).  

 

Troy

 

On May 23, 2011, at 3:56 PM, Debo Dutta (dedutta) wrote:





Quick question: it seems we are calling one end of the virtual wire a
port and the other a vif. Is there a reason to do that? Can we just
say that a wire connects 2 ports?

 

Also another interesting network scenario is when there is a wire
connecting 2 ports and you have a tap (for all sorts of scenarios). I
think the semantics of the tap might be  quite basic.

 

Regards

debo

 

From: openstack-bounces+dedutta=cisco@lists.launchpad.net
[mailto:openstack-bounces+dedutta=cisco@lists.launchpad.net] On
Behalf Of Alex Neefus
Sent: Monday, May 23, 2011 1:05 PM
To: Ying Liu (yinliu2); openstack@lists.launchpad.net
Subject: Re: [Openstack] [NetStack] Quantum Service API extension
proposal

 

Hi All -

 

I wanted to lend support to this proposal; however, I don't think we
should be so quick to say this whole thing is an extension.  

 

We benefit a lot from having a standard capabilities mechanism as part
of our core Quantum API. I like Ying's key value method as well. I think
it's logical, clean and scalable. I propose that basic read access of
"cap" off of our major objects: network, port, interface be included in
our first release.

 

So in summary I would like to encourage us to add:

GET  /networks/{net_id}/conf

GET  /networks/{net_id}/ports/{port_id}/conf/

GET  {entity}/VIF/conf/

 

Each of these would return a list of keys.

 

Additionally Quantum base should support

GET  /networks/{net_id}/conf/{key}

GET  /networks/{net_id}/ports/{port_id}/conf/{key}

GET  {entity}/VIF/conf/{key}

 

Where {key} is the name of either a standard capability or an extension
capability. We can define an error code now to designate a capability
not supported by the plugin. (i.e. 472 - CapNotSupported)

 

Finally we don't need to standardize on every capability that might be
supported if we provide this simple mechanism. Specific capabilities
Key,Value sets can be added later or included as vendor specific
extensions.

 

I'm happy to add this to the wiki if there is consensus. Rick/Dan -
Maybe this should be a topic for Tuesday's meeting.

 

Alex

 

---

Alex Neefus

Senior System Engineer | Mellanox Technologies

(o) 617.337.3116 | (m) 201.208.5771 | (f) 617.337.3019

 

 

 

 

 

From: openstack-bounces+alex=mellanox@lists.launchpad.net
[mailto:openstack-bounces+alex=mellanox@lists.launchpad.net] On
Behalf Of Ying Liu (yinliu2)
Sent: Saturday, May 21, 2011 1:10 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] [NetStack] Quantum Service API extension proposal

 

Hi all,

 

We just posted a proposal for OpenStack Quantum Service API extension on
community wiki page at
http://wiki.openstack.org/QuantumAPIExtensions?action=AttachFile&do=view&target=quantum_api_extension.pdf

or

http://wiki.openstack.org/QuantumAPIExtensions?action=AttachFile&do=view
&target=quantum_api_extension.docx

 

Please review and let us know your comments/suggestions. An etherpad
page is created for API extension discussion:
http://etherpad.openstack.org/uWXwqQNU4s

 

Best,

Ying

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

 

 
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [NetStack] Quantum Service API extension proposal

2011-05-23 Thread Troy Toman
I think the idea was slightly different. We were equating a vif to a NIC in a 
physical server. A port was equated to a switch port on a physical switch. 
Doesn't necessarily mean they have to be different. But, there was a reason we 
used different terminology.

In particular, we felt the vif was something that would continue to be in the 
server's domain and managed within Nova. A port was a construct that is owned 
and managed by the network service (Quantum).

Troy

On May 23, 2011, at 3:56 PM, Debo Dutta (dedutta) wrote:

Quick question: it seems we are calling one end of the virtual wire a port and 
the other a vif. Is there a reason to do that? Can we just say that a 
wire connects 2 ports?

Also another interesting network scenario is when there is a wire connecting 2 
ports and you have a tap (for all sorts of scenarios). I think the semantics of 
the tap might be  quite basic.

Regards
debo

From: 
openstack-bounces+dedutta=cisco@lists.launchpad.net
 [mailto:openstack-bounces+dedutta=cisco@lists.launchpad.net] On Behalf Of 
Alex Neefus
Sent: Monday, May 23, 2011 1:05 PM
To: Ying Liu (yinliu2); 
openstack@lists.launchpad.net
Subject: Re: [Openstack] [NetStack] Quantum Service API extension proposal

Hi All –

I wanted to lend support to this proposal; however, I don’t think we should be 
so quick to say this whole thing is an extension.

We benefit a lot from having a standard capabilities mechanism as part of our 
core Quantum API. I like Ying’s key value method as well. I think it’s logical, 
clean and scalable. I propose that basic read access of “cap” off of our major 
objects: network, port, interface be included in our first release.

So in summary I would like to encourage us to add:
GET  /networks/{net_id}/conf
GET  /networks/{net_id}/ports/{port_id}/conf/
GET  {entity}/VIF/conf/

Each of these would return a list of keys.

Additionally Quantum base should support
GET  /networks/{net_id}/conf/{key}
GET  /networks/{net_id}/ports/{port_id}/conf/{key}
GET  {entity}/VIF/conf/{key}

Where {key} is the name of either a standard capability or an extension 
capability. We can define an error code now to designate a capability not 
supported by the plugin. (i.e. 472 – CapNotSupported)

Finally we don’t need to standardize on every capability that might be 
supported if we provide this simple mechanism. Specific capabilities Key,Value 
sets can be added later or included as vendor specific extensions.

I’m happy to add this to the wiki if there is consensus. Rick/Dan – Maybe this 
should be a topic for Tuesday's meeting.

Alex

---
Alex Neefus
Senior System Engineer | Mellanox Technologies
(o) 617.337.3116 | (m) 201.208.5771 | (f) 617.337.3019





From: 
openstack-bounces+alex=mellanox@lists.launchpad.net
 [mailto:openstack-bounces+alex=mellanox@lists.launchpad.net] On Behalf Of 
Ying Liu (yinliu2)
Sent: Saturday, May 21, 2011 1:10 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] [NetStack] Quantum Service API extension proposal

Hi all,

We just posted a proposal for OpenStack Quantum Service API extension on 
community wiki page at
http://wiki.openstack.org/QuantumAPIExtensions?action=AttachFile&do=view&target=quantum_api_extension.pdf
or
http://wiki.openstack.org/QuantumAPIExtensions?action=AttachFile&do=view&target=quantum_api_extension.docx

Please review and let us know your comments/suggestions. An etherpad page is 
created for API extension discussion: http://etherpad.openstack.org/uWXwqQNU4s

Best,
Ying
___
Mailing list: https://launchpad.net/~openstack
Post to : 
openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [NetStack] Quantum Service API extension proposal

2011-05-23 Thread Debo Dutta (dedutta)
Quick question: it seems we are calling one end of the virtual wire a
port and the other a vif. Is there a reason to do that? Can we just
say that a wire connects 2 ports?

 

Also another interesting network scenario is when there is a wire
connecting 2 ports and you have a tap (for all sorts of scenarios). I
think the semantics of the tap might be  quite basic. 

 

Regards

debo 

 

From: openstack-bounces+dedutta=cisco@lists.launchpad.net
[mailto:openstack-bounces+dedutta=cisco@lists.launchpad.net] On
Behalf Of Alex Neefus
Sent: Monday, May 23, 2011 1:05 PM
To: Ying Liu (yinliu2); openstack@lists.launchpad.net
Subject: Re: [Openstack] [NetStack] Quantum Service API extension
proposal

 

Hi All - 

 

I wanted to lend support to this proposal; however, I don't think we
should be so quick to say this whole thing is an extension.  

 

We benefit a lot from having a standard capabilities mechanism as part
of our core Quantum API. I like Ying's key value method as well. I think
it's logical, clean and scalable. I propose that basic read access of
"cap" off of our major objects: network, port, interface be included in
our first release. 

 

So in summary I would like to encourage us to add:

GET  /networks/{net_id}/conf

GET  /networks/{net_id}/ports/{port_id}/conf/

GET  {entity}/VIF/conf/

 

Each of these would return a list of keys.

 

Additionally Quantum base should support 

GET  /networks/{net_id}/conf/{key}

GET  /networks/{net_id}/ports/{port_id}/conf/{key}

GET  {entity}/VIF/conf/{key}

 

Where {key} is the name of either a standard capability or an extension
capability. We can define an error code now to designate a capability
not supported by the plugin. (i.e. 472 - CapNotSupported)

 

Finally we don't need to standardize on every capability that might be
supported if we provide this simple mechanism. Specific capabilities
Key,Value sets can be added later or included as vendor specific
extensions.

 

I'm happy to add this to the wiki if there is consensus. Rick/Dan -
Maybe this should be a topic for Tuesday's meeting. 

 

Alex

 

---

Alex Neefus

Senior System Engineer | Mellanox Technologies

(o) 617.337.3116 | (m) 201.208.5771 | (f) 617.337.3019

 

 

 

 

 

From: openstack-bounces+alex=mellanox@lists.launchpad.net
[mailto:openstack-bounces+alex=mellanox@lists.launchpad.net] On
Behalf Of Ying Liu (yinliu2)
Sent: Saturday, May 21, 2011 1:10 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] [NetStack] Quantum Service API extension proposal

 

Hi all,

 

We just posted a proposal for OpenStack Quantum Service API extension on
community wiki page at
http://wiki.openstack.org/QuantumAPIExtensions?action=AttachFile&do=view
&target=quantum_api_extension.pdf

or 

http://wiki.openstack.org/QuantumAPIExtensions?action=AttachFile&do=view
&target=quantum_api_extension.docx

 

Please review and let us know your comments/suggestions. An etherpad
page is created for API extension discussion
http://etherpad.openstack.org/uWXwqQNU4s

 

Best,

Ying

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [NetStack] Quantum Service API extension proposal

2011-05-23 Thread Alex Neefus
Hi All - 

 

I wanted to lend support to this proposal; however, I don't think we
should be so quick to say this whole thing is an extension.  

 

We benefit a lot from having a standard capabilities mechanism as part
of our core Quantum API. I like Ying's key value method as well. I think
it's logical, clean and scalable. I propose that basic read access of
"cap" off of our major objects: network, port, interface be included in
our first release. 

 

So in summary I would like to encourage us to add:

GET  /networks/{net_id}/conf

GET  /networks/{net_id}/ports/{port_id}/conf/

GET  {entity}/VIF/conf/

 

Each of these would return a list of keys.

 

Additionally Quantum base should support 

GET  /networks/{net_id}/conf/{key}

GET  /networks/{net_id}/ports/{port_id}/conf/{key}

GET  {entity}/VIF/conf/{key}

 

Where {key} is the name of either a standard capability or an extension
capability. We can define an error code now to designate a capability
not supported by the plugin. (i.e. 472 - CapNotSupported)
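
For example, the per-key lookups might behave like this (paths and response
bodies are a sketch, not a settled format):

    GET /networks/42/ports/8/conf/vlan_segment
    => 200 { "vlan_segment" : "5" }

    GET /networks/42/ports/8/conf/CSCO-CONF:acl
    => 472 CapNotSupported   (plugin does not implement that extension key)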

 

Finally we don't need to standardize on every capability that might be
supported if we provide this simple mechanism. Specific capabilities
Key,Value sets can be added later or included as vendor specific
extensions.

 

I'm happy to add this to the wiki if there is consensus. Rick/Dan -
Maybe this should be a topic for Tuesday's meeting. 

 

Alex

 

---

Alex Neefus

Senior System Engineer | Mellanox Technologies

(o) 617.337.3116 | (m) 201.208.5771 | (f) 617.337.3019

 

 

 

 

 

From: openstack-bounces+alex=mellanox@lists.launchpad.net
[mailto:openstack-bounces+alex=mellanox@lists.launchpad.net] On
Behalf Of Ying Liu (yinliu2)
Sent: Saturday, May 21, 2011 1:10 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] [NetStack] Quantum Service API extension proposal

 

Hi all,

 

We just posted a proposal for OpenStack Quantum Service API extension on
community wiki page at
http://wiki.openstack.org/QuantumAPIExtensions?action=AttachFile&do=view
&target=quantum_api_extension.pdf

or 

http://wiki.openstack.org/QuantumAPIExtensions?action=AttachFile&do=view
&target=quantum_api_extension.docx

 

Please review and let us know your comments/suggestions. An etherpad
page is created for API extension discussion
http://etherpad.openstack.org/uWXwqQNU4s

 

Best,

Ying

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Fwd: Chef Deployment System for Swift - a proposed design - feedback?

2011-05-23 Thread James W. Brinkerhoff
Andi,

That's great!  Taking a look right now

-jwb

On Mon, May 23, 2011 at 1:41 PM, andi abes  wrote:

>
>
> It took a while, but finally:
> https://github.com/dellcloudedge/openstack-swift
>
> Jay, I've added a swift-proxy-acct and swift-proxy (without account
> management).
>
> This cookbook is an advanced "leak" for swift, soon to be followed with
> a leak of a nova cookbook. The full crowbar that was mentioned is on its
> way...
>
> To use these recipes (with default settings) you just need to pick your
> storage nodes and 1 or more proxies. Then assign the appropriate roles
> (swift-storage, swift-proxy or swift-proxy-acct) using the chef ui or a knife
> command. Choose one of the nodes and assign it the swift-node-compute role,
> and the swift cluster is built (because of the async nature of multi-node
> deployments, it might require a few chef-client runs while the ring files
> are generated and pushed around).
>
> have a spin. eager to hear comments.
>
>
>
>
>
>
>
>
> On Mon, May 2, 2011 at 11:36 AM, andi abes  wrote:
>
>> Jay,
>>
>> hmmm, interesting point about account management in the proxy. Guess
>> you're suggesting that you have 2 flavors of a proxy server - one with
>> account management enabled and one without?
>>  Is the main concern here security - you'd have more controls on the
>> account management servers? Or is this about something else?
>>
>> About ring-compute:
>> so there are 2 concerns I was thinking about with rings - a) make sure the
>> ring information is consistent across all the nodes in the cluster, and b)
>> try not to lose the ring info.
>>
>> The main driver to have only 1 ring compute node was a). the main concern
>> being guaranteeing consistency of the ring data among all nodes without
>> causing too strong coupling to the underlying mechanisms used to build the
>> ring.
>> For example - if 2 new rings are created independently, then the order in
>> which disks are added to the ring should be consistent (assuming that the
>> disk/partition allocation algorithm is sensitive to ordering). Which implies
>> that the query to chef should always return data in exactly the same order.
>> It also would require that the ring building (and mandate that it will
>> never be changed) does not use any heuristics that are time or machine
>> dependent (I _think_ that right now that is the case, but I would rather not
>> depend on it).
>>
>> I was thinking that these restrictions can be avoided easily by making
>> sure that only 1 node computes the ring. To make sure that b) (don't lose
>> the ring) is addressed - the ring is copied around.
>> If the ring compute node fails, then any other node can be used to seed a
>> new compute ring without any loss.
>> Does that make sense?
>>
>>
>> Right now I'm using a snapshot deb package built from bzr266. Changing the
>> source of the bits is pretty easy... (and installing the deb includes the
>> utilities you mentioned)
>>
>>
>> Re: load balancers:
>> What you're proposing makes perfect sense. Chef is pretty modular. So the
>> swift configuration recipe focuses on setting up swift - not the whole
>> environment. It would make sense to deploy some load balancer, firewall
>> appliance etc in an environment. However, these would be add-ons to the
>> basic swift configuration.
>> A simple way to achieve this would be to have a recipe that would query
>> the chef server for all nodes which have the swift-proxy role, and add them
>> as internal addresses for the load balancer of your choice.
>> (e.g. :
>> http://wiki.opscode.com/display/chef/Search#Search-FindNodeswithaRoleintheExpandedRunList
>> )
>>
>>
>> a.
>>
>>
>>
>> On Sun, May 1, 2011 at 10:14 AM, Jay Payne  wrote:
>>
>>> Andi,
>>>
>>> This looks great.   I do have some thoughts/questions.
>>>
>>> If you are using 1.3, do you have a separate role for the management
>>> functionality in the proxy?It's not a good idea to have all your
>>> proxy servers running in management mode (unless you only have one
>>> proxy).
>>>
>>> Why only 1 ring-compute node?  If that node is lost or unavailable do
>>> you lose your ring-builder files?
>>>
>>> When I create an environment I always setup utilities like st,
>>> get-nodes, stats-report, and a simple functional test script on a
>>> server to help troubleshoot and manage the cluster(s).
>>>
>>> Are you using packages or eggs to deploy the swift code?   If your
>>> using packages, are you building them yourself or using the ones from
>>> launchpad?
>>>
>>> If you have more than three proxy servers, do you plan on using load
>>> balancers?
>>>
>>>
>>> Thanks
>>> --J
>>>
>>>
>>>
>>>
>>> On Sun, May 1, 2011 at 8:37 AM, andi abes  wrote:
>>> > Judd,
>>> >   Sorry, today I won't be around. I'd love to hear feedback and
>>> suggestions
>>> > on what I have so far ( I'm not 100% sure when I can make the fully
>>> > available, but I'm hoping this is very soon). I'm running with swift
>>> 1.3 on
>>> > ubuntu 10.10.
>>> > I'm using  the environment pattern in chef

Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Eric Day
Also keep in mind that UUIDs alone may not be sufficient. As was
discussed previously in a marathon ID rename thread, we have to
handle the case of federated zones gone bad that could purposefully
produce UUIDs that collide. We may want an extra namespace such as
"account:uuid" or "zone:uuid", but of course we need to figure out
federated account and zone details first.
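
A minimal sketch of the kind of namespacing I mean (the zone prefix and
separator are illustrative; the real scheme depends on the federated account
and zone design):

    import uuid

    def make_instance_id(zone_name):
        # Prefix the UUID with the issuing zone so two zones cannot
        # collide on the full ID, even if one replays another's UUIDs.
        return "%s:%s" % (zone_name, uuid.uuid4())

    print(make_instance_id("dfw-zone-1"))  # e.g. "dfw-zone-1:9f1c2c3a-..."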

-Eric

On Mon, May 23, 2011 at 05:28:20PM +, Jorge Williams wrote:
> +1
> 
> On May 23, 2011, at 11:54 AM, Vishvananda Ishaya wrote:
> 
> > So I think we've identified the real problem...
> > 
> > :)
> > 
> > sounds like we really need to do the UUID switchover to optimize here.
> > 
> > Vish
> > 
> > On May 23, 2011, at 9:42 AM, Jay Pipes wrote:
> > 
> >> On Mon, May 23, 2011 at 12:33 PM, Brian Schott
> >>  wrote:
> >>> Why does getting the instance id require the API to block?  I can create 
> >>> 1 or 1000 UUIDs in O(1) time in the API server and hand back 1000 
> >>> instance ids in a list of entries in the same amount of time.
> >> 
> >> Instance IDs aren't currently UUIDs :) They are auto-increment
> >> integers that are local to the zone database. And because they are
> >> currently assigned by the zone, the work of identifying the
> >> appropriate zone to place a requested instance in would need to be a
> >> synchronous operation (you can't have the instance ID until you find
> >> the zone to put it in).
> >> 
> >> -jay
> >> 
> >> ___
> >> Mailing list: https://launchpad.net/~openstack
> >> Post to : openstack@lists.launchpad.net
> >> Unsubscribe : https://launchpad.net/~openstack
> >> More help   : https://help.launchpad.net/ListHelp
> > 
> > 
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
> 
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Fwd: Chef Deployment System for Swift - a proposed design - feedback?

2011-05-23 Thread andi abes
It took a while, but finally:
https://github.com/dellcloudedge/openstack-swift

Jay, I've added a swift-proxy-acct and swift-proxy (without account
management).

This cookbook is an advanced "leak" for swift, soon to be followed with a
leak of a nova cookbook. The full crowbar that was mentioned is on its
way...

To use these recipes (with default settings) you just need to pick your
storage nodes and 1 or more proxies. Then assign the appropriate roles
(swift-storage, swift-proxy or swift-proxy-acct) using the chef ui or a knife
command. Choose one of the nodes and assign it the swift-node-compute role,
and the swift cluster is built (because of the async nature of multi-node
deployments, it might require a few chef-client runs while the ring files
are generated and pushed around).
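
(For anyone who hasn't driven chef from the command line, role assignment
with a reasonably recent knife looks roughly like this; the node names are
made up:)

knife node run_list add storage01.example.com 'role[swift-storage]'
knife node run_list add proxy01.example.com 'role[swift-proxy-acct]'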

have a spin. eager to hear comments.








On Mon, May 2, 2011 at 11:36 AM, andi abes  wrote:

> Jay,
>
> hmmm, interesting point about account management in the proxy. Guess you're
> suggesting that you have 2 flavors of a proxy server - one with account
> management enabled and one without?
>  Is the main concern here security - you'd have more controls on the
> account management servers? Or is this about something else?
>
> About ring-compute:
> so there are 2 concerns I was thinking about with rings - a) make sure the
> ring information is consistent across all the nodes in the cluster, and b)
> try not to lose the ring info.
>
> The main driver to have only 1 ring compute node was a), the main concern
> being guaranteeing consistency of the ring data among all nodes without
> causing too strong a coupling to the underlying mechanisms used to build the
> ring.
> For example - if 2 new rings are created independently, then the order in
> which disks are added to the ring should be consistent (assuming that the
> disk/partition allocation algorithm is sensitive to ordering), which implies
> that the query to chef should always return data in exactly the same order.
> It would also require that the ring building does not use any heuristics
> that are time or machine dependent, and mandate that this never changes (I
> _think_ that right now that is the case, but I would rather not depend on
> it).
>
> I was thinking that these restrictions can be avoided easily by making sure
> that only 1 node computes the ring. To make sure that b) (don't lose the
> ring) is addressed - the ring is copied around.
> If the ring compute node fails, then any other node can be used to seed a
> new ring compute node without any loss.
> Does that make sense?
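
(An aside to make the ordering concern above concrete: a minimal sketch
against the swift 1.3-era RingBuilder, with made-up device values. The point
is only that add_dev order feeds the placement algorithm, so two nodes
independently building "the same" ring are not guaranteed identical files:)

from swift.common.ring import RingBuilder

# 2^18 partitions, 3 replicas, min_part_hours=1 -- illustrative values only.
builder = RingBuilder(18, 3, 1)
for i, ip in enumerate(sorted(["10.0.0.2", "10.0.0.1"])):
    # sorted() stands in for "the chef query must return nodes in a stable
    # order"; drop it and two independent builds may diverge.
    builder.add_dev({"id": i, "zone": i + 1, "ip": ip, "port": 6000,
                     "device": "sdb1", "weight": 100.0, "meta": ""})
builder.rebalance()  # partition placement now depends on the adds above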
>
>
> Right now I'm using a snapshot deb package built from bzr266. Changing the
> source of the bits is pretty easy... (and installing the deb includes the
> utilities you mentioned)
>
>
> Re: load balancers:
> What you're proposing makes perfect sense. Chef is pretty modular. So the
> swift configuration recipe focuses on setting up swift - not the whole
> environment. It would make sense to deploy some load balancer, firewall
> appliance etc in an environment. However, these would be add-ons to the
> basic swift configuration.
> A simple way to achieve this would be to have a recipe that would query the
> chef server for all nodes which have the swift-proxy role, and add them as
> internal addresses for the load balancer of your choice.
> (e.g. :
> http://wiki.opscode.com/display/chef/Search#Search-FindNodeswithaRoleintheExpandedRunList
> )
>
>
> a.
>
>
>
> On Sun, May 1, 2011 at 10:14 AM, Jay Payne  wrote:
>
>> Andi,
>>
>> This looks great.   I do have some thoughts/questions.
>>
>> If you are using 1.3, do you have a separate role for the management
>> functionality in the proxy?  It's not a good idea to have all your
>> proxy servers running in management mode (unless you only have one
>> proxy).
>>
>> Why only 1 ring-compute node?  If that node is lost or unavailable do
>> you lose your ring-builder files?
>>
>> When I create an environment I always setup utilities like st,
>> get-nodes, stats-report, and a simple functional test script on a
>> server to help troubleshoot and manage the cluster(s).
>>
>> Are you using packages or eggs to deploy the swift code?   If you're
>> using packages, are you building them yourself or using the ones from
>> launchpad?
>>
>> If you have more than three proxy servers, do you plan on using load
>> balancers?
>>
>>
>> Thanks
>> --J
>>
>>
>>
>>
>> On Sun, May 1, 2011 at 8:37 AM, andi abes  wrote:
>> > Judd,
>> >   Sorry, today I won't be around. I'd love to hear feedback and
>> suggestions
>> > on what I have so far (I'm not 100% sure when I can make this fully
>> > available, but I'm hoping this is very soon). I'm running with swift 1.3
>> on
>> > ubuntu 10.10.
>> > I'm using the environment pattern in chef - when nodes search for their
>> > peers, a predicate compares the node[:swift][:config][:environment] to
>> > the corresponding value on the prospective peer. A "default" value is
>> > assigned to this by the default recipe's attributes, so if 

Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Jorge Williams
+1

On May 23, 2011, at 11:54 AM, Vishvananda Ishaya wrote:

> So I think we've identified the real problem...
> 
> :)
> 
> sounds like we really need to do the UUID switchover to optimize here.
> 
> Vish
> 
> On May 23, 2011, at 9:42 AM, Jay Pipes wrote:
> 
>> On Mon, May 23, 2011 at 12:33 PM, Brian Schott
>>  wrote:
>>> Why does getting the instance id require the API to block?  I can create 1 
>>> or 1000 UUIDs in order (1) time in the API server and hand back 1000 
>>> instance ids in a list of  entries in the same amount of time.
>> 
>> Instance IDs aren't currently UUIDs :) They are auto-increment
>> integers that are local to the zone database. And because they are
>> currently assigned by the zone, the work of identifying the
>> appropriate zone to place a requested instance in would need to be a
>> synchronous operation (you can't have the instance ID until you find
>> the zone to put it in).
>> 
>> -jay
>> 
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
> 
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Sandy Walsh
Changing to UUID is a great thing to do, but not sure if it solves our problem. 
We still need to differentiate between an Instance ID and a Reservation ID.

Additionally, switching to UUID has to be a 2.0 thing, since it's going to bust 
all backwards compatibility. The ability to cast to int() is a general 
assumption in RS API clients.

With respect to "multiple single-shot requests", assume 10 schedulers pick up 
10 instance requests concurrently. Their view of the world will be largely the 
same, so they will all attempt to provision to the same host, vs. a single 
request for 10 instances where the scheduler can be smart about where it 
attempts to place them.  And then there's the socket / api server load from 
1000 single-shot requests as mentioned elsewhere in this thread. 

I agree that 3 & 4 may be nice to haves. I've simply heard explicit demand for 
#4 from customers and I don't believe the delta to get there is that high.

-S

PS> You're not speaking out of turn. I need to do a better job of articulating 
the zones/dist-sched architecture ... it's underway :)


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Mark Washenberger [mark.washenber...@rackspace.com]
Sent: Monday, May 23, 2011 1:54 PM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

I'm totally on board with this as a future revision of the OS api. However it 
sounds like we need some sort of solution for 1.1.

> 1. We can't treat the InstanceID as a ReservationID since they do two 
> different
> things. InstanceID's are unique per instance and ReservationID's might span N
> instances. I don't like the idea of overloading these concepts. How is the 
> caller
> supposed to know if they're getting back a ReservationID or an InstanceID? 
> How do
> they ask for updates for each (one returns a single value, one returns a 
> list?).

Rather than overloading the two, could we just make instance-id a uuid,
make the create asynchronous, and pare down the amount of info returned in the
server create response?

> 2. We need to handle "provision N instances" so the scheduler can effectively 
> load
> balance the requests by looking at the current state of the system in a single
> view. Concurrent single-shot requests would be picked up by many different
> schedulers in many different zones and give an erratic distribution.

Are we worried about concurrent or rapid sequential requests?

Is there any way we could cut down on the erraticism by funneling these types 
of requests through a smaller set of schedulers? I'm very unfamiliar with the 
scheduler system but it seems like maybe routing choices at a higher level 
scheduler could help here.

3. and 4. sound like great features albeit ones that could wait on a future 
revision of the api.

Apologies if I'm speaking out of turn and should just read up on scheduler code!


"Sandy Walsh"  said:

> Cool, I think you all understand the concerns here:
>
> 1. We can't treat the InstanceID as a ReservationID since they do two 
> different
> things. InstanceID's are unique per instance and ReservationID's might span N
> instances. I don't like the idea of overloading these concepts. How is the 
> caller
> supposed to know if they're getting back a ReservationID or an InstanceID? 
> How do
> they ask for updates for each (one returns a single value, one returns a 
> list?).
>
> 2. We need to handle "provision N instances" so the scheduler can effectively 
> load
> balance the requests by looking at the current state of the system in a single
> view. Concurrent single-shot requests would be picked up by many different
> schedulers in many different zones and give an erratic distribution.
>
> 3. As Soren pointed out, we may want certain semantics around failure such as 
> "all
> or nothing"
>
> 4. Other Nova users have mentioned a desire for instance requests such as "has
> GPU, is in North America and has a blue sticker on the box". If we try to do 
> that
> with Flavors we need to clutter the Flavor table with most-common-denominator
> fields. We can handle this now with Zone/Host Capabilities and not have to 
> extend
> the table at all. If you look at nova/tests/scheduler/test_host_filter.py 
> you'll
> see an example of this in action. To Soren's point about "losing the ability 
> to
> rely on a fixed set of topics in the message queue for doing scheduling" this 
> is
> not the case, there are no new topics introduced. Instead there are simply 
> extra
> arguments passed into the run_instance() method of the scheduler that 
> understands
> these more complex instance reques

Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Vishvananda Ishaya
So I think we've identified the real problem...

:)

sounds like we really need to do the UUID switchover to optimize here.

Vish

On May 23, 2011, at 9:42 AM, Jay Pipes wrote:

> On Mon, May 23, 2011 at 12:33 PM, Brian Schott
>  wrote:
>> Why does getting the instance id require the API to block?  I can create 1 
>> or 1000 UUIDs in O(1) time in the API server and hand back 1000 
>> instance ids in a list of entries in the same amount of time.
> 
> Instance IDs aren't currently UUIDs :) They are auto-increment
> integers that are local to the zone database. And because they are
> currently assigned by the zone, the work of identifying the
> appropriate zone to place a requested instance in would need to be a
> synchronous operation (you can't have the instance ID until you find
> the zone to put it in).
> 
> -jay
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Mark Washenberger
I'm totally on board with this as a future revision of the OS api. However it 
sounds like we need some sort of solution for 1.1.

> 1. We can't treat the InstanceID as a ReservationID since they do two 
> different
> things. InstanceID's are unique per instance and ReservationID's might span N
> instances. I don't like the idea of overloading these concepts. How is the 
> caller
> supposed to know if they're getting back a ReservationID or an InstanceID? 
> How do
> they ask for updates for each (one returns a single value, one returns a 
> list?).

Rather than overloading the two, could we just make instance-id a uuid,
make the create asynchronous, and pare down the amount of info returned in the
server create response?

> 2. We need to handle "provision N instances" so the scheduler can effectively 
> load
> balance the requests by looking at the current state of the system in a single
> view. Concurrent single-shot requests would be picked up by many different
> schedulers in many different zones and give an erratic distribution.

Are we worried about concurrent or rapid sequential requests?

Is there any way we could cut down on the erraticism by funneling these types 
of requests through a smaller set of schedulers? I'm very unfamiliar with the 
scheduler system but it seems like maybe routing choices at a higher level 
scheduler could help here.

3. and 4. sound like great features albeit ones that could wait on a future 
revision of the api.

Apologies if I'm speaking out of turn and should just read up on scheduler code!


"Sandy Walsh"  said:

> Cool, I think you all understand the concerns here:
> 
> 1. We can't treat the InstanceID as a ReservationID since they do two 
> different
> things. InstanceID's are unique per instance and ReservationID's might span N
> instances. I don't like the idea of overloading these concepts. How is the 
> caller
> supposed to know if they're getting back a ReservationID or an InstanceID? 
> How do
> they ask for updates for each (one returns a single value, one returns a 
> list?).
> 
> 2. We need to handle "provision N instances" so the scheduler can effectively 
> load
> balance the requests by looking at the current state of the system in a single
> view. Concurrent single-shot requests would be picked up by many different
> schedulers in many different zones and give an erratic distribution.
> 
> 3. As Soren pointed out, we may want certain semantics around failure such as 
> "all
> or nothing"
> 
> 4. Other Nova users have mentioned a desire for instance requests such as "has
> GPU, is in North America and has a blue sticker on the box". If we try to do 
> that
> with Flavors we need to clutter the Flavor table with most-common-denominator
> fields. We can handle this now with Zone/Host Capabilities and not have to 
> extend
> the table at all. If you look at nova/tests/scheduler/test_host_filter.py 
> you'll
> see an example of this in action. To Soren's point about "losing the ability 
> to
> rely on a fixed set of topics in the message queue for doing scheduling" this 
> is
> not the case, there are no new topics introduced. Instead there are simply 
> extra
> arguments passed into the run_instance() method of the scheduler that 
> understands
> these more complex instance requests.
> 
> That said, I was thinking of adding a POST /zone/server command to support 
> these
> extended operations. It wouldn't affect anything currently in place and makes 
> it
> clear that this is a zone-specific operation. Existing EC2 and core OS API
> operations are performed as usual.
> 
> Likewise, we need a way to query the results of a Reservation ID request 
> without
> busting GET /servers/detail ... perhaps GET /zones/servers could do that?
> 
> The downside is that now we have two ways to create an instance that needs to 
> be
> tested, etc.
> 
> -S
> 
> 
> 
> 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Jay Pipes
On Mon, May 23, 2011 at 12:33 PM, Brian Schott
 wrote:
> Why does getting the instance id require the API to block?  I can create 1 or 
> 1000 UUIDs in O(1) time in the API server and hand back 1000 
> instance ids in a list of entries in the same amount of time.

Instance IDs aren't currently UUIDs :) They are auto-increment
integers that are local to the zone database. And because they are
currently assigned by the zone, the work of identifying the
appropriate zone to place a requested instance in would need to be a
synchronous operation (you can't have the instance ID until you find
the zone to put it in).

-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Brian Schott
Why does getting the instance id require the API to block?  I can create 1 or 
1000 UUIDs in O(1) time in the API server and hand back 1000 instance ids 
in a list of entries in the same amount of time.  I'm more concerned 
about an external user hitting the API server 1000 times to generate a 
1000-instance request, then hitting the API server 1000 more times to check the 
status until a 1000 instances return "running".  That's a lot of socket 
connections.
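
(For what it's worth, the O(1)/pre-generation point in code form -- a hedged
sketch, with cast() standing in for nova's asynchronous rpc cast rather than
any real signature:)

import uuid

def create_servers(num_instances, cast):
    # Mint ids up front in the API server; nothing here waits on zone
    # selection or a database round-trip.
    instance_ids = [str(uuid.uuid4()) for _ in range(num_instances)]
    for iid in instance_ids:
        # Fire-and-forget: a scheduler picks a host later and records the
        # instance under its pre-assigned id.
        cast("scheduler", {"method": "run_instance", "instance_id": iid})
    return instance_ids  # handed back to the caller immediately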

The ServerDetailsResponse isn't a list?  You can't just return:

[XML sample of a list of server entries -- markup stripped by the archive]

Brian Schott, CTO
Nimbis Services, Inc.
brian.sch...@nimbisservices.com





On May 23, 2011, at 11:52 AM, Ed Leafe wrote:

> On May 23, 2011, at 11:41 AM, Jorge Williams wrote:
> 
>> I don't see how that precludes anything.  Treat the instance id as the 
>> reservation id on single instance creations -- have a separate reservation 
>> id when launching multiple instances.  End of the day even if you have the 
>> capability to launch multiple instances at once you should be able to poll a 
>> specific instance for changes.  
> 
> 
>   I'm not too crazy about an API call returning one thing (instance ID) 
> for one call, and a different thing (reservation ID) for the same call, with 
> the only difference being the value of one parameter. Do we do anything like 
> that anywhere else in the API?
> 
>   Also, with distributed zones, getting an instance ID requires the api 
> to block until the host selection can be completed, and the host's zone 
> database updated with the new instance information. Granted, that's not an 
> agonizingly slow or expensive operation even if it does involve several 
> inter-zone HTTP calls, but it isn't the cleanest and most scalable design IMO.
> 
> 
> -- Ed Leafe
> 
> 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Jorge Williams

On May 23, 2011, at 11:25 AM, Sandy Walsh wrote:

From: Jorge Williams

> So this is 2.0 API stuff -- right.

Well, we need it now ... so we have to find a short term solution.

> Why not simply have a request on the server list with the reservation id as a 
> parameter.
> This can easily be supported as an extension.
>
> So GET  /servers/detail?RID=3993882
>
> I would probably call it a build ID.  That would narrow the response to only 
> those that are
> currently being built with a single request (3993882).

I'm cool with that ... why does it need to be an extension, per se? It's just 
an additional parameter which will be ignored until something goes looking for 
it.

To prevent clashes.  To detect if the feature is available -- it probably won't 
be available in our legacy system.


How about the POST /zones/server idea?


I'll have to think about it.

-S

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Sandy Walsh
From: Jorge Williams

> So this is 2.0 API stuff -- right.

Well, we need it now ... so we have to find a short term solution.

> Why not simply have a request on the server list with the reservation id as a 
> parameter.
> This can easily be supported as an extension.
>
> So GET  /servers/detail?RID=3993882
>
> I would probably call it a build ID.  That would narrow the response to only 
> those that are
> currently being built with a single request (3993882).

I'm cool with that ... why does it need to be an extension, per se? It's just 
an additional parameter which will be ignored until something goes looking for 
it.

How about the POST /zones/server idea?

-S
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Ed Leafe
On May 23, 2011, at 11:53 AM, Sandy Walsh wrote:

> Likewise, we need a way to query the results of a Reservation ID request 
> without busting GET /servers/detail ... perhaps GET /zones/servers could do 
> that?

GET /servers/reservation/  perhaps? Returns a list of instances 
similar to GET /servers.


-- Ed Leafe


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Jorge Williams

I'd like to step back and note that there won't be support for this in the 
1.1 API -- unless this is an extension.  So this is 2.0 API stuff -- right.  
Other comments inline:

On May 23, 2011, at 10:53 AM, Sandy Walsh wrote:

Cool, I think you all understand the concerns here:

1. We can't treat the InstanceID as a ReservationID since they do two different 
things. InstanceID's are unique per instance and ReservationID's might span N 
instances. I don't like the idea of overloading these concepts. How is the 
caller supposed to know if they're getting back a ReservationID or an 
InstanceID? How do they ask for updates for each (one returns a single value, 
one returns a list?).


The user doesn't even need to know that there are two concepts at all -- at 
least not in the 1.1 world.  When creating single instances they only see 
a single instance ID.

2. We need to handle "provision N instances" so the scheduler can effectively 
load balance the requests by looking at the current state of the system in a 
single view. Concurrent single-shot requests would be picked up by many 
different schedulers in many different zones and give an erratic distribution.

3. As Soren pointed out, we may want certain semantics around failure such as 
"all or nothing"

4. Other Nova users have mentioned a desire for instance requests such as "has 
GPU, is in North America and has a blue sticker on the box". If we try to do 
that with Flavors we need to clutter the Flavor table with 
most-common-denominator fields. We can handle this now with Zone/Host 
Capabilities and not have to extend the table at all. If you look at 
nova/tests/scheduler/test_host_filter.py you'll see an example of this in 
action. To Soren's point about "losing the ability to rely on a fixed set of 
topics in the message queue for doing scheduling" this is not the case, there 
are no new topics introduced. Instead there are simply extra arguments passed 
into the run_instance() method of the scheduler that understands these more 
complex instance requests.

That said, I was thinking of adding a POST /zone/server command to support 
these extended operations. It wouldn't affect anything currently in place and 
makes it clear that this is a zone-specific operation. Existing EC2 and core OS 
API operations are performed as usual.

Likewise, we need a way to query the results of a Reservation ID request 
without busting GET /servers/detail ... perhaps GET /zones/servers could do 
that?


Why not simply have a request on the server list with the reservation id as a 
parameter?  This can easily be supported as an extension.

So GET  /servers/detail?RID=3993882

I would probably call it a build ID.  That would narrow the response to only 
those that are currently being built with a single request (3993882).
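
(Client-side, that extension would be cheap to consume -- a sketch using
python-requests, with the RID parameter from above; the endpoint and auth
token handling are placeholders:)

import requests

def poll_build(endpoint, token, rid):
    # GET /servers/detail?RID=... -- narrows the listing to the servers
    # built under the given build/reservation id.
    resp = requests.get("%s/servers/detail" % endpoint,
                        params={"RID": rid},
                        headers={"X-Auth-Token": token})
    resp.raise_for_status()
    return resp.json()["servers"]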

The downside is that now we have two ways to create an instance that needs to 
be tested, etc.

-S




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Sandy Walsh
Cool, I think you all understand the concerns here:

1. We can't treat the InstanceID as a ReservationID since they do two different 
things. InstanceID's are unique per instance and ReservationID's might span N 
instances. I don't like the idea of overloading these concepts. How is the 
caller supposed to know if they're getting back a ReservationID or an 
InstanceID? How do they ask for updates for each (one returns a single value, 
one returns a list?).

2. We need to handle "provision N instances" so the scheduler can effectively 
load balance the requests by looking at the current state of the system in a 
single view. Concurrent single-shot requests would be picked up by many 
different schedulers in many different zones and give an erratic distribution.

3. As Soren pointed out, we may want certain semantics around failure such as 
"all or nothing"

4. Other Nova users have mentioned a desire for instance requests such as "has 
GPU, is in North America and has a blue sticker on the box". If we try to do 
that with Flavors we need to clutter the Flavor table with 
most-common-denominator fields. We can handle this now with Zone/Host 
Capabilities and not have to extend the table at all. If you look at 
nova/tests/scheduler/test_host_filter.py you'll see an example of this in 
action. To Soren's point about "losing the ability to rely on a fixed set of 
topics in the message queue for doing scheduling" this is not the case, there 
are no new topics introduced. Instead there are simply extra arguments passed 
into the run_instance() method of the scheduler that understands these more 
complex instance requests.
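
(For those who haven't opened that test file, the queries are roughly
operator-first lists with '$' references into host capability data; the
exact field names below are made up for illustration:)

query = ['and',
         ['>=', '$compute.host_memory_free', 8 * 1024 * 1024 * 1024],
         ['=', '$compute.has_gpu', True]]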

That said, I was thinking of adding a POST /zone/server command to support 
these extended operations. It wouldn't affect anything currently in place and 
makes it clear that this is a zone-specific operation. Existing EC2 and core OS 
API operations are performed as usual.

Likewise, we need a way to query the results of a Reservation ID request 
without busting GET /servers/detail ... perhaps GET /zones/servers could do 
that?

The downside is that now we have two ways to create an instance that needs to 
be tested, etc.

-S



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Ed Leafe
On May 23, 2011, at 11:41 AM, Jorge Williams wrote:

> I don't see how that precludes anything.  Treat the instance id as the 
> reservation id on single instance creations -- have a separate reservation id 
> when launching multiple instances.  End of the day even if you have the 
> capability to launch multiple instances at once you should be able to poll a 
> specific instance for changes.  


I'm not too crazy about an API call returning one thing (instance ID) 
for one call, and a different thing (reservation ID) for the same call, with 
the only difference being the value of one parameter. Do we do anything like 
that anywhere else in the API?

Also, with distributed zones, getting an instance ID requires the api 
to block until the host selection can be completed, and the host's zone 
database updated with the new instance information. Granted, that's not an 
agonizingly slow or expensive operation even if it does involve several 
inter-zone HTTP calls, but it isn't the cleanest and most scalable design IMO.


-- Ed Leafe


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Jorge Williams

On May 23, 2011, at 10:15 AM, Ed Leafe wrote:

> On May 23, 2011, at 10:35 AM, Jorge Williams wrote:
> 
>> If we make the instance ID a unique ID -- which we probably should.   Why 
>> not also treat it as a reservation id and generate/assign it up front?
> 
> 
>   Because that precludes the 1:M relationship of a reservation to created 
> instances. 
> 
>   If I request 100 instances, they are all created with unique IDs, but 
> with a single reservation ID. 
> 

I don't see how that precludes anything.  Treat the instance id as the 
reservation id on single instance creations -- have a separate reservation id 
when launching multiple instances.  End of the day even if you have the 
capability to launch multiple instances at once you should be able to poll a 
specific instance for changes.  

> 
> 
> -- Ed Leafe
> 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Jorge Williams

On May 23, 2011, at 10:15 AM, Jay Pipes wrote:

> /me wishes you were on IRC ;)
> 
> Discussing this with Mark Wash on IRC...
> 

I'll stop by :-)

> Basically, I'm cool with using a UUID-like pregenerated instance ID
> and returning that as a reservation ID in the 1.X API.

Cool.

> I was really
> just brainstorming about a future, request-centric 2.0 API that would
> allow for more atomic operations on the instance creation level.
> 

Okay, I'll follow up.

> Cheers!
> jay
> 
> On Mon, May 23, 2011 at 10:35 AM, Jorge Williams
>  wrote:
>> Comments inline:
>> 
>> On May 23, 2011, at 8:59 AM, Jay Pipes wrote:
>> 
>>> Hi Jorge! Comments inline :)
>>> 
>>> On Mon, May 23, 2011 at 9:42 AM, Jorge Williams
>>>  wrote:
 Hi Sandy,
 My understanding (Correct me if i'm wrong here guys) is that creating
 multiple instances with a single call is not in scope for the 1.1 API.
>>> 
>>> Actually, I don't think we *could* do this without issuing a 2.0 API.
>>> The reason is because changing POST /servers to return a reservation
>>> ID instead of the instance ID would break existing clients, and
>>> therefore a new major API version would be needed.
>> 
>> Why?  Clients just see an ID.  I'm suggesting that for single instances, the 
>> instanceID == the reservationID.
>> In the API you query based on Some ID.
>> 
>> http://my.openstack-compute.net/v1.1/2233/servers/{Some unique ID}
>> 
>>> 
 Same
 thing for changing the way in which flavors work.  Both features can be
 brought in as extensions though.
>>> 
>>> Sorry, I'm not quite sure I understand what you mean by "changing the
>>> way flavours work". Could you elaborate a bit on that?
>> 
>> Sandy was suggesting we employ a method "richer than Flavors".  I'll let him 
>> elaborate.
>> 
>>> 
 I should note that when creating single instances the instance id should
 really be equivalent to a reservation id.  That is, the create should be
 asynchronous and the instance id can be used to poll for changes.
>>> 
>>> Hmm, it's actually a bit different. In one case, you need to actually
>>> get an identifier for the instance from whatever database (zone db?)
>>> would be responsible for creating the instance. In the other case, you
>>> merely create a token/task that can then be queried for a status of
>>> the operation. In the former case, you unfortunately make the
>>> scheduler's work synchronous, since the instance identifier would need
>>> to be determined from the zone the instance would be created in. :(
>>> 
>> 
>> If we make the instance ID a unique ID -- which we probably should.   Why 
>> not also treat it as a reservation id and generate/assign it up front?
>> 
 Because
 of this, a user can create multiple instances in very rapid succession.
>>> 
>>> Not really the same as issuing a request to create 100 instances. Not
>>> only would the user interface implications be different, but you can
>>> also do all-or-nothing scheduling with a request for 100 instances
>>> versus 100 requests for a single instance. All-or-nothing allows a
>>> provider to pin a request to a specific SLA or policy. For example, if
>>> a client requests 100 instances be created with requirements X, Y, and
>>> Z, and you create 88 instances and 12 instances don't get created
>>> because there is no more available room that meets requirements X, Y,
>>> and Z, then you have failed to service the entire request...
>>> 
>> 
>> 
>> I totally understand this.  I'm just suggesting that since this is not in 
>> scope for 1.1 -- you should be able to launch individual instances as an 
>> alternative.
>> 
>> Also, keep in mind that the all-or-nothing requires a compensation when 
>> something fails.
>> 
>> 
>> 
 Additionally, the changes-since feature in the API allows a user to
 efficiently monitor the creation of multiple instances simultaneously.
>>> 
>>> Agreed, but I think that is tangential to the above discussion.
>>> 
>>> Cheers!
>>> jay
>>> 
 -jOrGe W.
 On May 23, 2011, at 7:19 AM, Sandy Walsh wrote:
 
 Hi everyone,
 We're deep into the Zone / Distributed Scheduler merges and stumbling onto
 an interesting problem.
 EC2 API has two important concepts that I don't see in OS API (1.0 or 1.1):
 - Reservation ID
 - Number of Instances to create
 Typical use case: "Create 1000 instances". The API allocates a Reservation
 ID and all the instances are created under this ID. The ID is immediately
 returned to the user who can later query on this ID to check status.
 From what I can see, the OS API only deals with single instance creation 
 and
 returns the Instance ID from the call. Both of these need to change to
 support Reservation ID's and creating N instances. The value of the
 distributed scheduler comes from being able to create N instances load
 balanced across zones.
 Anyone have any suggestions how we can support this?
 Additionally, and less important at this sta

Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Ed Leafe
On May 23, 2011, at 10:35 AM, Jorge Williams wrote:

> If we make the instance ID a unique ID -- which we probably should.   Why not 
> also treat it as a reservation id and generate/assign it up front?


Because that precludes the 1:M relationship of a reservation to created 
instances. 

If I request 100 instances, they are all created with unique IDs, but 
with a single reservation ID. 
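
(The 1:M shape in miniature -- plain Python, with an EC2-style "r-" prefix
purely for flavor:)

import uuid

def reserve(num_instances):
    # One reservation id fans out to many instance ids.
    reservation_id = "r-%s" % uuid.uuid4().hex[:8]
    instance_ids = [str(uuid.uuid4()) for _ in range(num_instances)]
    return reservation_id, instance_ids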



-- Ed Leafe


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Jay Pipes
/me wishes you were on IRC ;)

Discussing this with Mark Wash on IRC...

Basically, I'm cool with using a UUID-like pregenerated instance ID
and returning that as a reservation ID in the 1.X API. I was really
just brainstorming about a future, request-centric 2.0 API that would
allow for more atomic operations on the instance creation level.

Cheers!
jay

On Mon, May 23, 2011 at 10:35 AM, Jorge Williams
 wrote:
> Comments inline:
>
> On May 23, 2011, at 8:59 AM, Jay Pipes wrote:
>
>> Hi Jorge! Comments inline :)
>>
>> On Mon, May 23, 2011 at 9:42 AM, Jorge Williams
>>  wrote:
>>> Hi Sandy,
>>> My understanding (Correct me if i'm wrong here guys) is that creating
>>> multiple instances with a single call is not in scope for the 1.1 API.
>>
>> Actually, I don't think we *could* do this without issuing a 2.0 API.
>> The reason is because changing POST /servers to return a reservation
>> ID instead of the instance ID would break existing clients, and
>> therefore a new major API version would be needed.
>
> Why?  Clients just see an ID.  I'm suggesting that for single instances, the 
> instanceID == the reservationID.
> In the API you query based on Some ID.
>
> http://my.openstack-compute.net/v1.1/2233/servers/{Some unique ID}
>
>>
>>> Same
>>> thing for changing the way in which flavors work.  Both features can be
>>> brought in as extensions though.
>>
>> Sorry, I'm not quite sure I understand what you mean by "changing the
>> way flavours work". Could you elaborate a bit on that?
>
> Sandy was suggesting we employ a method "richer than Flavors".  I'll let him 
> elaborate.
>
>>
>>> I should note that when creating single instances the instance id should
>>> really be equivalent to a reservation id.  That is, the create should be
>>> asynchronous and the instance id can be used to poll for changes.
>>
>> Hmm, it's actually a bit different. In one case, you need to actually
>> get an identifier for the instance from whatever database (zone db?)
>> would be responsible for creating the instance. In the other case, you
>> merely create a token/task that can then be queried for a status of
>> the operation. In the former case, you unfortunately make the
>> scheduler's work synchronous, since the instance identifier would need
>> to be determined from the zone the instance would be created in. :(
>>
>
> If we make the instance ID a unique ID -- which we probably should.   Why not 
> also treat it as a reservation id and generate/assign it up front?
>
>>> Because
>>> of this, a user can create multiple instances in very rapid succession.
>>
>> Not really the same as issuing a request to create 100 instances. Not
>> only would the user interface implications be different, but you can
>> also do all-or-nothing scheduling with a request for 100 instances
>> versus 100 requests for a single instance. All-or-nothing allows a
>> provider to pin a request to a specific SLA or policy. For example, if
>> a client requests 100 instances be created with requirements X, Y, and
>> Z, and you create 88 instances and 12 instances don't get created
>> because there is no more available room that meets requirements X, Y,
>> and Z, then you have failed to service the entire request...
>>
>
>
> I totally understand this.  I'm just suggesting that since this is not in 
> scope for 1.1 -- you should be able to launch individual instances as an 
> alternative.
>
> Also, keep in mind that the all-or-nothing requires a compensation when 
> something fails.
>
>
>
>>> Additionally, the changes-since feature in the API allows a user to
>>> efficiently monitor the creation of multiple instances simultaneously.
>>
>> Agreed, but I think that is tangential to the above discussion.
>>
>> Cheers!
>> jay
>>
>>> -jOrGe W.
>>> On May 23, 2011, at 7:19 AM, Sandy Walsh wrote:
>>>
>>> Hi everyone,
>>> We're deep into the Zone / Distributed Scheduler merges and stumbling onto
>>> an interesting problem.
>>> EC2 API has two important concepts that I don't see in OS API (1.0 or 1.1):
>>> - Reservation ID
>>> - Number of Instances to create
>>> Typical use case: "Create 1000 instances". The API allocates a Reservation
>>> ID and all the instances are created under this ID. The ID is immediately
>>> returned to the user who can later query on this ID to check status.
>>> From what I can see, the OS API only deals with single instance creation and
>>> returns the Instance ID from the call. Both of these need to change to
>>> support Reservation ID's and creating N instances. The value of the
>>> distributed scheduler comes from being able to create N instances load
>>> balanced across zones.
>>> Anyone have any suggestions how we can support this?
>>> Additionally, and less important at this stage, users at the summit
>>> expressed an interest in being able to specify instances with something
>>> richer than Flavors. We have some mockups in the current host-filter code
>>> for doing this using a primitive little JSON grammar. So, let's assume the
>>> Flavo

Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Soren Hansen
2011/5/23 Sandy Walsh :
> Additionally, and less important at this stage, users at the summit
> expressed an interest in being able to specify instances with something
> richer than Flavors. We have some mockups in the current host-filter code
> for doing this using a primitive little JSON grammar. So, let's assume the
> Flavor-like query would just be a string. Thoughts?

We'd lose the ability to rely exclusively on a fixed set of topics in
the message queue for doing scheduling. With a fixed set of relatively
simple flavours, each compute node could simply subscribe to the
topics representing the flavours for which it still has capacity. As
it fills up, it could unsubscribe from the topics representing the
larger flavours.
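
(A toy illustration of that scheme; the queue-connection subscribe and
unsubscribe methods are placeholders, not carrot/kombu API:)

FLAVOR_RAM_MB = {"m1.tiny": 512, "m1.small": 2048, "m1.large": 8192}

def topics_for_capacity(free_ram_mb):
    # A node advertises only the flavours it can still host.
    return set(name for name, ram in FLAVOR_RAM_MB.items()
               if ram <= free_ram_mb)

def resubscribe(conn, current, free_ram_mb):
    wanted = topics_for_capacity(free_ram_mb)
    for topic in current - wanted:
        conn.unsubscribe(topic)  # placeholder method
    for topic in wanted - current:
        conn.subscribe(topic)    # placeholder method
    return wanted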

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Soren Hansen
2011/5/23 Mark Washenberger :
> If I understand the features correctly, their implementation in nova seems 
> straightforward. However, I am still a little curious about their necessity. 
> For load balancing, what is the difference between a single request for N 
> instances and N requests for a single instance each?

Scheduling. If you ask for 10 instances in one call, they're scheduled
as a set. The entire call either fails or succeeds and all the
instances land in the same availability zone. 10 individual requests
will cause each of the instances to be scheduled individually (partial
failures and being scattered across multiple availability zones being
the major problems).

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Jorge Williams
Comments inline:

On May 23, 2011, at 8:59 AM, Jay Pipes wrote:

> Hi Jorge! Comments inline :)
> 
> On Mon, May 23, 2011 at 9:42 AM, Jorge Williams
>  wrote:
>> Hi Sandy,
>> My understanding (Correct me if i'm wrong here guys) is that creating
>> multiple instances with a single call is not in scope for the 1.1 API.
> 
> Actually, I don't think we *could* do this without issuing a 2.0 API.
> The reason is because changing POST /servers to return a reservation
> ID instead of the instance ID would break existing clients, and
> therefore a new major API version would be needed.

Why?  Clients just see an ID.  I'm suggesting that for single instances, the 
instanceID == the reservationID.
In the API you query based on Some ID.

http://my.openstack-compute.net/v1.1/2233/servers/{Some unique ID}

> 
>> Same
>> thing for changing the way in which flavors work.  Both features can be
>> brought in as extensions though.
> 
> Sorry, I'm not quite sure I understand what you mean by "changing the
> way flavours work". Could you elaborate a bit on that?

Sandy was suggesting we employ a method "richer than Flavors".  I'll let him 
elaborate.

> 
>> I should note that when creating single instances the instance id should
>> really be equivalent to a reservation id.  That is, the create should be
>> asynchronous and the instance id can be used to poll for changes.
> 
> Hmm, it's actually a bit different. In one case, you need to actually
> get an identifier for the instance from whatever database (zone db?)
> would be responsible for creating the instance. In the other case, you
> merely create a token/task that can then be queried for a status of
> the operation. In the former case, you unfortunately make the
> scheduler's work synchronous, since the instance identifier would need
> to be determined from the zone the instance would be created in. :(
> 

If we make the instance ID a unique ID -- which we probably should.   Why not 
also treat it as a reservation id and generate/assign it up front?

>> Because
>> of this, a user can create multiple instances in very rapid succession.
> 
> Not really the same as issuing a request to create 100 instances. Not
> only would the user interface implications be different, but you can
> also do all-or-nothing scheduling with a request for 100 instances
> versus 100 requests for a single instance. All-or-nothing allows a
> provider to pin a request to a specific SLA or policy. For example, if
> a client requests 100 instances be created with requirements X, Y, and
> Z, and you create 88 instances and 12 instances don't get created
> because there is no more available room that meets requirements X, Y,
> and Z, then you have failed to service the entire request...
> 


I totally understand this.  I'm just suggesting that since this is not in scope 
for 1.1 -- you should be able to launch individual instances as an alternative.

Also, keep in mind that the all-or-nothing requires a compensation when 
something fails.



>> Additionally, the changes-since feature in the API allows a user to
>> efficiently monitor the creation of multiple instances simultaneously.
> 
> Agreed, but I think that is tangential to the above discussion.
> 
> Cheers!
> jay
> 
>> -jOrGe W.
>> On May 23, 2011, at 7:19 AM, Sandy Walsh wrote:
>> 
>> Hi everyone,
>> We're deep into the Zone / Distributed Scheduler merges and stumbling onto
>> an interesting problem.
>> EC2 API has two important concepts that I don't see in OS API (1.0 or 1.1):
>> - Reservation ID
>> - Number of Instances to create
>> Typical use case: "Create 1000 instances". The API allocates a Reservation
>> ID and all the instances are created under this ID. The ID is immediately
>> returned to the user who can later query on this ID to check status.
>> From what I can see, the OS API only deals with single instance creation and
>> returns the Instance ID from the call. Both of these need to change to
>> support Reservation ID's and creating N instances. The value of the
>> distributed scheduler comes from being able to create N instances load
>> balanced across zones.
>> Anyone have any suggestions how we can support this?
>> Additionally, and less important at this stage, users at the summit
>> expressed an interest in being able to specify instances with something
>> richer than Flavors. We have some mockups in the current host-filter code
>> for doing this using a primitive little JSON grammar. So, let's assume the
>> Flavor-like query would just be a string. Thoughts?
>> -S
>> 
>> 

Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Jay Pipes
Hi Jorge! Comments inline :)

On Mon, May 23, 2011 at 9:42 AM, Jorge Williams
 wrote:
> Hi Sandy,
> My understanding (Correct me if i'm wrong here guys) is that creating
> multiple instances with a single call is not in scope for the 1.1 API.

Actually, I don't think we *could* do this without issuing a 2.0 API.
The reason is because changing POST /servers to return a reservation
ID instead of the instance ID would break existing clients, and
therefore a new major API version would be needed.

> Same
> thing for changing the way in which flavors work.  Both features can be
> brought in as extensions though.

Sorry, I'm not quite sure I understand what you mean by "changing the
way flavours work". Could you elaborate a bit on that?

> I should note that when creating single instances the instance id should
> really be equivalent to a reservation id.  That is, the create should be
> asynchronous and the instance id can be used to poll for changes.

Hmm, it's actually a bit different. In one case, you need to actually
get an identifier for the instance from whatever database (zone db?)
would be responsible for creating the instance. In the other case, you
merely create a token/task that can then be queried for a status of
the operation. In the former case, you unfortunately make the
scheduler's work synchronous, since the instance identifier would need
to be determined from the zone the instance would be created in. :(

> Because
> of this, a user can create multiple instances in very rapid succession.

Not really the same as issuing a request to create 100 instances. Not
only would the user interface implications be different, but you can
also do all-or-nothing scheduling with a request for 100 instances
versus 100 requests for a single instance. All-or-nothing allows a
provider to pin a request to a specific SLA or policy. For example, if
a client requests 100 instances be created with requirements X, Y, and
Z, and you create 88 instances and 12 instances don't get created
because there is no more available room that meets requirements X, Y,
and Z, then you have failed to service the entire request...

> Additionally, the changes-since feature in the API allows a user to
> efficiently monitor the creation of multiple instances simultaneously.

Agreed, but I think that is tangential to the above discussion.

Cheers!
jay

> -jOrGe W.
> On May 23, 2011, at 7:19 AM, Sandy Walsh wrote:
>
> Hi everyone,
> We're deep into the Zone / Distributed Scheduler merges and stumbling onto
> an interesting problem.
> EC2 API has two important concepts that I don't see in OS API (1.0 or 1.1):
> - Reservation ID
> - Number of Instances to create
> Typical use case: "Create 1000 instances". The API allocates a Reservation
> ID and all the instances are created under this ID. The ID is immediately
> returned to the user who can later query on this ID to check status.
> From what I can see, the OS API only deals with single instance creation and
> returns the Instance ID from the call. Both of these need to change to
> support Reservation ID's and creating N instances. The value of the
> distributed scheduler comes from being able to create N instances load
> balanced across zones.
> Anyone have any suggestions how we can support this?
> Additionally, and less important at this stage, users at the summit
> expressed an interest in being able to specify instances with something
> richer than Flavors. We have some mockups in the current host-filter code
> for doing this using a primitive little JSON grammar. So, let's assume the
> Flavor-like query would just be a string. Thoughts?
> -S
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Jorge Williams
Hi Sandy,

My understanding (Correct me if i'm wrong here guys) is that creating multiple 
instances with a single call is not in scope for the 1.1 API.  Same thing for 
changing the way in which flavors work.  Both features can be brought in as 
extensions though.

I should note that when creating single instances the instance id should really 
be equivalent to a reservation id.  That is, the create should be asynchronous 
and the instance id can be used to poll for changes.  Because of this, a user 
can create multiple instances in very rapid succession.   Additionally, the 
changes-since feature in the API allows a user to efficiently monitor the 
creation of multiple instances simultaneously.
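
(Concretely, that's one poll per sweep, e.g. GET
/v1.1/{tenant_id}/servers/detail?changes-since=2011-05-23T12:00:00Z -- a
single request returns every server whose state changed after the timestamp,
however many builds are in flight. The tenant id and timestamp here are
placeholders.)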

-jOrGe W.



Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Mark Washenberger
Sandy,

If I understand the features correctly, their implementation in nova seems 
straightforward. However, I am still a little curious about their necessity. 
For load balancing, what is the difference between a single request for N 
instances and N requests for a single instance each?

"Sandy Walsh"  said:

> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
> Hi everyone,
> 
> We're deep into the Zone / Distributed Scheduler merges and stumbling onto an
> interesting problem.
> 
> EC2 API has two important concepts that I don't see in OS API (1.0 or 1.1):
> - Reservation ID
> - Number of Instances to create
> 
> Typical use case: "Create 1000 instances". The API allocates a Reservation ID 
> and
> all the instances are created until this ID. The ID is immediately returned 
> to the
> user who can later query on this ID to check status.
> 
>>From what I can see, the OS API only deals with single instance creation and
>> returns the Instance ID from the call. Both of these need to change to 
>> support
>> Reservation ID's and creating N instances. The value of the distributed 
>> scheduler
>> comes from being able to create N instances load balanced across zones.
> 
> Anyone have any suggestions how we can support this?
> 
> Additionally, and less important at this stage, users at the summit expressed 
> an
> interest in being able to specify instances with something richer than 
> Flavors. We
> have some mockups in the current host-filter code for doing this using a 
> primitive
> little JSON grammar. So, let's assume the Flavor-like query would just be a
> string. Thoughts?
> 
> -S
> 
> 
> 
> 
> Confidentiality Notice: This e-mail message (including any attached or
> embedded documents) is intended for the exclusive and confidential use of the
> individual or entity to which this message is addressed, and unless otherwise
> expressly indicated, is confidential and privileged information of Rackspace.
> Any dissemination, distribution or copying of the enclosed material is
> prohibited.
> If you receive this transmission in error, please notify us immediately by 
> e-mail
> at ab...@rackspace.com, and delete the original message.
> Your cooperation is appreciated.
> 
> 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] OpenStack API, Reservation ID's and Num Instances ...

2011-05-23 Thread Sandy Walsh
Hi everyone,

We're deep into the Zone / Distributed Scheduler merges and stumbling onto an 
interesting problem.

The EC2 API has two important concepts that I don't see in the OS API (1.0 or 1.1):
- Reservation ID
- Number of Instances to create

Typical use case: "Create 1000 instances". The API allocates a Reservation ID, 
and all the instances are created under this ID. The ID is immediately returned 
to the user, who can later query on this ID to check status.
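
Purely as a strawman, an extension could accept an instance count on the 
create call and hand back a reservation instead of a single server. Every 
name below, including the extension prefix, is invented for illustration:

POST /v1.1/servers

{
    "server" : {
        "name" : "web",
        "imageId" : 3,
        "flavorId" : 1,
        "XX-SCHED:num_instances" : 1000
    }
}

with an immediate response along the lines of:

{
    "reservation" : {
        "id" : "r-ab12cd34",
        "num_instances" : 1000
    }
}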

From what I can see, the OS API only deals with single-instance creation and 
returns the Instance ID from the call. Both of these need to change to support 
Reservation IDs and creating N instances. The value of the distributed 
scheduler comes from being able to create N instances load-balanced across 
zones.

Anyone have any suggestions how we can support this?

Additionally, and less important at this stage, users at the summit expressed 
an interest in being able to specify instances with something richer than 
Flavors. We have some mockups in the current host-filter code for doing this 
using a primitive little JSON grammar (see the sketch below). So, let's assume 
the Flavor-like query would just be a string. Thoughts?
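
To give a flavor of it, a query in that grammar might read (the operator and 
field names here follow the style of the host-filter mockups, but are 
illustrative only):

["and",
    [">=", "$compute.host_memory_free", 2048],
    [">=", "$compute.disk_available", 100]
]

i.e. "only hosts with at least 2048 MB of free memory and 100 GB of disk", 
passed through the API as an opaque string.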

-S




