Re: [Openstack] Refocusing the Lunr Project

2011-07-08 Thread Erik Carlin
From a Rackspace perspective, we do not want to expose block operations in the 
compute API.  The plan has been to expose the attach/detach as nova API 
extensions, and that still makes sense, but will there be a separate, 
independent block service and API?

Erik

From: Chuck Thier cth...@gmail.com
Date: Fri, 8 Jul 2011 13:15:56 -0500
To: Jorge Williams jorge.willi...@rackspace.com
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Refocusing the Lunr Project

Hey Jorge,

That is up to the Nova team, though I imagine it will continue down the 
road it has been progressing on (as an extension to the current OpenStack 
API).

--
Chuck

On Fri, Jul 8, 2011 at 12:37 PM, Jorge Williams 
jorge.willi...@rackspace.com wrote:
Chuck,

What does this mean in terms of APIs?  Will there be a separate Volume API?  
Will volumes be embedded in the compute API?

-jOrGe W.


On Jul 8, 2011, at 10:40 AM, Chuck Thier wrote:

Openstack Community,

Over the last few months, the Lunr team has learned many things.  This
week, it became clear to us that it would be better to integrate
with the existing Nova Volume code.  On that basis, we have
decided to narrow the focus of the Lunr Project.

Lunr will continue to focus on delivering an open commodity storage
platform that will integrate with the Nova Volume service.  This will
be accomplished by implementing a Nova Volume driver. We will work
with the Nova team, and other storage vendors, to drive the features
needed to provide a flexible volume service.

I believe that this new direction will ensure a bright future for storage
in Nova, and look forward to continuing to work with everyone in making this
possible.

Sincerely,

Chuck Thier (@creiht)
Lunr Team Lead



Re: [Openstack] Thinking about Openstack Volume API

2011-04-22 Thread Erik Carlin
To promote consistency across OpenStack APIs, I suggest we follow the same 
model as in OS compute.  That is, have a high-level entity called /flavors.  
One can query flavors to determine what types of volumes are available (based 
on SLA, performance tiers, whatever), then pass in the flavor ID during a POST 
/volumes.  Different flavors would likely be charged at different rates so 
volume usage can also include flavor for billing purposes.
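
As a rough sketch of how that flow could look (the endpoint, field names,
and flavor IDs here are hypothetical, not a settled spec):

    # Hypothetical flavor-based volume creation against a JSON API;
    # endpoint, field names, and flavor IDs are illustrative only.
    import json
    import urllib2

    BASE = "http://volumes.example.com/v1.0/12345"

    # 1. Discover available volume flavors (SLA/performance tiers).
    flavors = json.load(urllib2.urlopen(BASE + "/flavors"))

    # 2. Create a volume, passing the chosen flavor ID in the POST body.
    body = json.dumps({"volume": {"size": 100, "flavorId": 2}})
    req = urllib2.Request(BASE + "/volumes", body,
                          {"Content-Type": "application/json"})
    volume = json.load(urllib2.urlopen(req))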

Erik

From: Chuck Thier cth...@gmail.com
Date: Fri, 22 Apr 2011 20:44:18 -0500
To: Vishvananda Ishaya vishvana...@gmail.com
Cc: Openstack openstack@lists.launchpad.net
Subject: Re: [Openstack] Thinking about Openstack Volume API

Hey Vish,

Yes, we have been thinking about that a bit.  The current idea is to have 
volume types, and depending on the type, it would expect a certain amount of 
data for that type.  The scheduler would then map that type and corresponding 
data to provision the right type of storage.
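
A minimal sketch of that idea (the type names and required fields are made
up for illustration):

    # Hypothetical mapping from volume type to the extra data the
    # scheduler would expect; names are illustrative only.
    VOLUME_TYPES = {
        "standard": {"required": ["size"]},
        "high_iops": {"required": ["size", "min_iops"]},
        "replicated": {"required": ["size", "replica_count"]},
    }

    def validate_create(vol_type, params):
        """Check that a create request carries the data its type needs."""
        missing = [f for f in VOLUME_TYPES[vol_type]["required"]
                   if f not in params]
        if missing:
            raise ValueError("type %s missing: %s" % (vol_type, missing))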

--
Chuck

On Fri, Apr 22, 2011 at 6:17 PM, Vishvananda Ishaya 
vishvana...@gmail.com wrote:
This all seems reasonable to me.  Do you have a concept of how you will expose 
different SLAs within the deployment?  Is it metadata on the volume that is 
handled by the scheduler?  Or will different SLAs be at separate endpoints?

In other words, am I creating a volume with a PUT to 
provider.com/high-perf-volumes/account/volumes/, or just to 
provider.com/account/volumes/ with an X-High-Perf header?

Vish

On Apr 22, 2011, at 2:40 PM, Chuck Thier wrote:

 One of the first steps needed to help decouple volumes from Nova, is to
 define what the Openstack Volume API should look like.  I would like to start
 by discussing the main api endpoints, and discussing the interaction of
 compute attaching/detaching from volumes.

 All of the following endpoints will support basic CRUD operations similar to
 others described in the Openstack 1.1 API.

 /volumes
 Justin already has a pretty good start to this.  We will need to discuss
 what data we will need to store/display about volumes, but I will save
 that for a later discussion.

 /snapshots
 This will allow us to expose snapshot functionality from the underlying
 storage systems.

 /exports
 This will be used to expose a volume to be consumed by an external system.
 The Nova attach api call will make a call to /exports to set up a volume
 to be attached to a VM.  This will store information that is specific
 to a particular attachment (for example, CHAP authentication
 information for an iSCSI export).  This helps with decoupling volumes
 from nova, and makes the attachment process more generic so that other
 systems can easily consume the volumes service.  It is also undecided if
 this should be a publicly available api, or just used by backend services.

 The exports endpoint is the biggest change that we are proposing, so we would
 like to solicit feedback on this idea.
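
 To make the idea concrete, an export record might carry something like the
 following (purely illustrative; the actual fields were still undecided):

     # Purely illustrative shape of an export resource; real fields TBD.
     export = {
         "volume_id": "vol-0001",
         "protocol": "iscsi",
         "target_iqn": "iqn.2011-04.org.example:vol-0001",
         "target_portal": "192.168.0.10:3260",
         # CHAP credentials specific to this attachment
         "auth_method": "CHAP",
         "auth_username": "chap-user",
         "auth_password": "chap-secret",
     }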

 --
 Chuck Thier (@creiht)







Re: [Openstack] Floating IP in OpenStack API

2011-04-15 Thread Erik Carlin
Cool.  Got it.  Floating IPs, or what Amazon calls Elastic IPs.  How are you 
solving the cross L2 problem?

Erik

Sent from my iPhone

On Apr 15, 2011, at 7:28 PM, Eldar Nugaev enug...@griddynamics.com wrote:

 Hi Erik
 
 Thank you for the response!
 Yes, you are absolutely right: the OpenStack API already supports shared IP groups.
 I suppose there was some misunderstanding, because I wrote about floating IPs.
 
 I want to have an API for associating IPs from the floating IP pool with a
 particular VM.
 
 At this moment we have an implementation of #1 as a patch in our RPM repo
 http://yum.griddynamics.net/, and we are going to make a merge proposal to
 trunk.
 
 We are also going to create a blueprint for #3 and attach a branch to it.
 
 Eldar
 
 On Sat, Apr 16, 2011 at 2:34 AM, Erik Carlin erik.car...@rackspace.com 
 wrote:
 Eldar -
 
 The OpenStack API already supports sharing IPs between instances (although
 this may be an extension?).  What exact behavior are you after?  More
 important than the way in which we expose via the API is how it's
 implemented.  It's important to note that this is extremely network
 topology dependent.  Sharing IPs today requires L2 adjacency so other VMs
 can GARP for the IP.  L2 doesn't work at scale so you need another
 mechanism.  I'm pretty sure the way AWS does it is to have a separate pool
 of IPs and inject /32 routes higher up that route towards the appropriate
 VM IP.  What are your thoughts around how this would be implemented?
 
 Multiple people are working towards an independent Network as a Service
 external to nova so it may make sense to plug this requirement in there.
 
 Erik
 
 On 4/11/11 8:31 AM, Eldar Nugaev enug...@griddynamics.com wrote:
 
 Hello everyone,
 
 We are going to add the possibility of assigning floating IP addresses in
 the OpenStack API.
 Our goal is to reproduce the AWS behavior, where creating an instance
 automatically assigns any free floating IP, or to add methods to the
 OpenStack API for allocating and associating IP addresses.
 
 At this time we see three ways:
 
 1. FLAG --auto_assign_floating_ip (default=False)
 2. Optional parameter auto_assign_floating_ip in the existing create
 method
 3. OpenStack API additions for floating IPs: allocate_floating_ip,
 associate_floating_ip
 
 Which way is more suitable at this time?
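 
 A rough sketch of what #3 might look like as a nova API extension
 (the controller and method names here are hypothetical, not actual nova
 code):
 
     # Hypothetical extension controller for option 3; names are
     # illustrative, not actual nova code.
     class FloatingIPController(object):
         def __init__(self, network_api):
             self.network_api = network_api
 
         def allocate(self, context):
             """Reserve a free floating IP for the project."""
             return self.network_api.allocate_floating_ip(context)
 
         def associate(self, context, address, fixed_ip):
             """Point an allocated floating IP at an instance's fixed IP."""
             self.network_api.associate_floating_ip(context, address, fixed_ip)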
 
 --
 Eldar
 Skype: eldar.nugaev
 
 
 
 
 
 
 
 
 
 -- 
 Eldar
 Skype: eldar.nugaev






Re: [Openstack] NaaS proposal suggestion

2011-04-14 Thread Erik Carlin
Rick -

Agree with everything below. IMO, #3 should apply in general to all OS services 
(core network, block storage, load balancing, etc.)  We want things to work as 
a suite of services but each service should be independent and deployable by 
itself.  There will obviously be interface standards that will need to be 
adhered to, but that's totally doable.  The more we can make each OS service the 
canonical API and automation engine for each IaaS piece, the better.  And part 
of that is making it usable with non-OS services.

Erik

Sent from my iPhone

On Apr 14, 2011, at 1:08 PM, Rick Clark r...@openstack.org wrote:

 As many of you know there are a few Network as a Service proposals
 floating around.  All of the authors are working to combine them into
 something we all want to move forward with.  Hopefully by the summit we
 will have one blueprint to rule them all.
 
 I would like to make a couple suggestions publicly that I have been
 mentioning to everyone I talk to about NaaS.
 
 1.  NaaS should be optional
 nova's existing hypervisor-only flat and VLAN network functionality
 should stay in nova.  You should not need to bring up a separate service
 to bring up a simple test instance. This will also help us not break
 nova as we are making rapid code changes.
 
 2. All communication should be via the API.
 NaaS should not write or read directly from Novadb.  I have seen many
 diagrams that have the NaaS writing data directly to novadb.
 
 3. NaaS should be generic enough that other things can consume it.  I
 would love to see Opennebula and Eucalyptus be able to use the Openstack
 NaaS.  I know of a few sites that have both Eucalyptus and Openstack
 deployed.  It would be nice if they could share a NaaS.  I would also
 like to support applications calling NaaS to create their own shared
 network containers.
 
 Cheers,
 
 Rick
 
 Principal Engineer, Cisco Systems
 
 






Re: [Openstack] Instance IDs and Multiple Zones

2011-03-22 Thread Erik Carlin
Good discussion.  I need to understand a bit more about how cross org
boundary bursting is envisioned to work before assessing the implications
on server id format.

Say a user hits the http://servers.myos.com api on zone A, which then
calls out to http://servers.osprovider.com api in zone B, which calls out
to http://dfw.servers.rackspace.com zone C, which calls down to
http://zoned.dfw.servers.rackspace.com zone D (which would not be a public
endpoint).  

[We'll exclude authN and the network implications for now :-]

I assume the lowest zone (zone D) is responsible for assigning the id?

Does that mean there are now 4 URIs for the same exact resource (I'm
assuming a numeric server id here for a moment):

http://zoned.dfw.servers.rackspace.com/v1.1/123/servers/12345 (this would
be non-public)
http://dfw.servers.rackspace.com/v1.1/123/servers/12345
http://servers.osprovider.com/v1.1/456/servers/12345
http://servers.myos.com/v1.1/789/servers/12345

I assume then the user is only returned the URI from the high level zone
they are hitting (http://servers.myos.com/v1.1/789/servers/12345 in this
example)?  If so, that means the high level zone defines everything in the
URI except the actual server ID, which is assigned by the low level zone.
 Would users ever get returned a downstream URI they could hit directly?

Pure numeric ids will not work in a federated model at scale.  If you have
registered zone prefixes/suffixes, you will limit the total zone count
based on the number of digits you preallocate and need a registration
process to ensure uniqueness.  How many zones is enough?

You could use UUID.  If the above flow is accurate, I can only see how you
create collisions in your OWN OS deployment.  For example, if I
purposefully create a UUID collision in servers.myos.com (that I run) with
dfw.servers.rackspace.com (that Rackspace runs), it would only affect me
since the collision would only be seen in the servers.myos.com namespace.
Maybe I'm missing something, but I don't see how you could inject a
collision ID downstream - you can just shoot yourself in your own foot.
Eric Day, please jump in here if I am off.  AFAICT, same applies to dns
(which I will discuss more below).  I could just make my server ID dns
namespace collide with rackspace, but it would still only affect me in my
own URI namespace.

The other option apart from UUID is a globally unique string prefix.  If
Rackspace had 3 global API endpoints (ord, dfw, lon) each with 5 zones,
the ID would need to be something like rax:dfw:1:12345 (I would actually
want to hash the zone id 1 portion with something unique per customer so
people couldn't coordinate info about zones and target attacks, etc.).
This is obviously redundant with the Rackspace URI since we are
representing Rackspace and the region twice, e.g.
http://dfw.servers.rackspace.com/v1.1/12345/servers/rax:dfw:1:6789.

This option also means we need a mechanism for registering unique
prefixes.  We could use the same one we are proposing for API extensions,
or, as Eric pointed out, use dns, but that would REALLY get redundant,
e.g. 
http://dfw.servers.rackspace.com/v1.1/12345/servers/6789.dfw.servers.rackspace.com.

Using strings also means people could make ids whatever they want as long
as they obeyed the prefix/suffix.  So one provider could be
rax:dfw:1:12345 and another could be osprovider:8F792#@*jsn.  That is
technically not a big deal, but there is something for consistency and
simplicity.
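
For comparison, a throwaway sketch of the two ID schemes discussed above
(the prefix fields just follow the rax:dfw:1:12345 example):

    # Throwaway sketch contrasting the two ID schemes.
    import uuid

    def make_uuid_id():
        # Opaque and collision-resistant; no registration authority needed.
        return str(uuid.uuid4())

    def make_prefixed_id(provider, region, zone, local_id):
        # Globally unique only if provider prefixes are registered, and
        # it leaks deployment topology into the ID.
        return "%s:%s:%s:%s" % (provider, region, zone, local_id)

    print(make_uuid_id())                            # e.g. '1b4e28ba-...'
    print(make_prefixed_id("rax", "dfw", 1, 12345))  # 'rax:dfw:1:12345'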


The fundamental problem I see here is that the URI is intended to be the universal
resource identifier but since zone federation will create multiple URIs
for the same resource, the server id now has to be ANOTHER universal
resource identifier.

Another issue is whether you want transparency or opaqueness when you are
federating.  If you hit http://servers.myos.com, create two servers, and
the ids that come back are (assuming using dns as server ids for a moment):

http://servers.myos.com/v1.1/12345/servers/5678.servers.myos.com

http://servers.myos.com/v1.1/12345/servers/6789.dfw.servers.rackspace.com

It will be obvious in which deployment the servers live.  This will
effectively prevent whitelabel federating.  UUID would be more opaque.

Given all of the above, I think I lean towards UUID.

Would love to hear more thought and dialog on this.

Erik  



On 3/22/11 3:49 PM, Eric Day e...@oddments.org wrote:

See my previous response to Justin's email as to why UUIDs alone are
not sufficient.

-Eric

On Tue, Mar 22, 2011 at 04:06:14PM -0400, Brian Schott wrote:
 +1
 Sounds like some IPv6 discussions back when the standards were being
debated.  We could debate bit-allocation forever.  Why can't we use
UUIDs?
 
 http://tools.ietf.org/html/rfc4122
 
 
 2.  Motivation
 
 
One of the main reasons for using UUIDs is that no centralized
authority is required to administer them (although one format uses
IEEE 802 node identifiers, others do not).  As a result, generation
on demand can be 

Re: [Openstack] OpenStack Compute API for Cactus (critical!)

2011-03-01 Thread Erik Carlin
Jesse -

Understood they are separate within nova.  We're just having a semantic 
disconnect which is my fault since I put the slides together in 3 min.  
Rackspace defines a standard service as having a clear api boundary of rest 
and optionally atom interfaces.  In that model, nova is a service (comprised of 
subservices let's say) as is swift.  As you allude to below, we talked about 
moving the inter-subservice communication to http which in effect makes them 
independent services.  We want OpenStack to be a suite of independent services 
that could be deployed standalone (I just download and install image or 
network or block storage) or in combination with each other.  For that reason 
(and others), I would push for also decomposing nova into separate OS projects, 
repos, etc.  I was using the term service to indicate that separation (you 
didn't get all that from the word service :-).

Thanks for asking and pushing for clarification.

Erik

From: Jesse Andrews anotherje...@gmail.com
Date: Tue, 1 Mar 2011 10:16:32 -0600
To: Paul Voccio paul.voc...@rackspace.com
Cc: Erik Carlin erik.car...@rackspace.com, openstack@lists.launchpad.net
Subject: Re: [Openstack] OpenStack Compute API for Cactus (critical!)

pvo,

Yep.  I'm responding to the slide having 3 services, 5 endpoints (nova, 
glance, swift).

Since the number of endpoints will depend on deployment configuration.

And nova being a single repository doesn't mean it is a single service.

Jesse

On Mar 1, 2011, at 1:07 AM, Paul Voccio wrote:

Jesse,

I agree that some implementations may want to have a single endpoint. I think 
this is doable with a simple proxy that can pass requests back to each 
service's API. This can also be accomplished by having configuration variables 
in your bindings to talk to something that looks like the following:

compute=api.compute.example.com
volume=api.volume.example.com
image=api.image.example.com
network=api.network.example.com

Or for behind the proxies:

compute=api.example.com
volume=api.example.com
image=api.example.com
network=api.example.com

Maybe this is something the auth service returns?
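
For example, the auth response could carry a service catalog along these
lines (the shape is hypothetical, just to illustrate the idea):

    # Hypothetical service-catalog payload an auth service might return;
    # the exact shape was an open question in this thread.
    SERVICE_CATALOG = {
        "compute": "https://api.compute.example.com/v1.1/",
        "volume": "https://api.volume.example.com/v1.0/",
        "image": "https://api.image.example.com/v1.0/",
        "network": "https://api.network.example.com/v1.0/",
    }

    def endpoint_for(catalog, service):
        """Bindings look up an endpoint instead of hardcoding URLs."""
        return catalog[service]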


From: Jesse Andrews anotherje...@gmail.com
Date: Mon, 28 Feb 2011 19:53:01 -0800
To: Erik Carlin erik.car...@rackspace.com
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] OpenStack Compute API for Cactus (critical!)

I'm also confused: nova (compute/block/network) being in one repository 
doesn't mean it isn't three different services.

We've talked about moving the services inside nova away from reaching inside 
each other via RPC calls and toward making HTTP calls.  But they are mostly 
already designed in a way that allows them to operate independently.

And I would also say that while Rackspace may deploy with 3 endpoints, other 
OpenStack deployments might want to put multiple services behind a single endpoint.

Jesse

On Feb 28, 2011, at 3:52 PM, Erik Carlin wrote:

I was talking with Will Reese about this more.  If we are eventually going to 
decompose into independent services with separate endpoints, he thought we 
should do that now.  I like that idea better.  For cactus, we still have a 
single nova service black box but we put multiple OpenStack API endpoints on 
the front side, one for each future service.  In other words, use separate 
endpoints instead of extensions in a single endpoint to expose the current 
capabilities.  That way, it sets us on the right path and consumers don't have 
to refactor between Cactus and Diablo.  In Diablo, we decompose into 
separate services and the endpoints move with them.  It's a bit hard to 
visualize so I put together the attached pdf.  I'm assuming glance is a 
separate service and endpoint for cactus (still need to figure out per my 
message below) and swift already is.

Erik

From: Erik Carlin erik.car...@rackspace.com
Date: Mon, 28 Feb 2011 17:07:22 -0600
To: John Purrier j...@openstack.org, openstack@lists.launchpad.net
Subject: Re: [Openstack] OpenStack Compute API for Cactus (critical!)

That all sounds good.  My only question is around images.  Is glance ready to 
be an independent service (and thus have a separate API) in Cactus?

Erik

From: John Purrier j...@openstack.org
Date: Mon, 28 Feb 2011 16:53:53 -0600
To: Erik Carlin erik.car...@rackspace.com, openstack@lists.launchpad.net
Subject: RE: [Openstack] OpenStack Compute API for Cactus (critical!)

Hi Erik, today we have compute, block/volume, and network all

Re: [Openstack] OpenStack Compute API for Cactus (critical!)

2011-02-28 Thread Erik Carlin
John -

Are we just talking about compute aspects?  IMO, we should NOT be exposing 
block functionality in the OS compute API.  In Diablo, we will break out block 
into a separate service with its own OS block API.  That means for now, there 
may be functionality in nova that isn't exposed (an artifact of originally 
mimicking EC2) until we can fully decompose nova into independent services.

Erik

From: John Purrier j...@openstack.org
Date: Mon, 28 Feb 2011 14:16:20 -0600
To: openstack@lists.launchpad.net
Subject: [Openstack] OpenStack Compute API for Cactus (critical!)

Has anyone done a gap analysis against the proposed OpenStack Compute API and 
a) the implemented code, and b) the EC2 API?

It looks like we have had a breakdown in process, as the community review 
process of the proposed spec has not generated discussion of the missing 
aspects of the proposed spec.


Here is what we said on Feb 3 as the goal for Cactus:



OpenStack Compute API completed. We need to complete a working set of APIs 
that are consistent and inclusive of all the exposed functionality.

We need to *very* quickly identify the missing elements that are required in 
the OpenStack Compute API, and then discuss how we mobilize to get this work 
done for Cactus. As this is the #1 priority for this release there are 
implications on milestone dates depending on the results of this exercise. The 
1.1 spec should be complete and expose all current Nova functionality (superset 
of EC2/RS).

Dendrobates, please take the lead on this, anyone who can help please 
coordinate with Rick. Can we get a fairly complete view by EOD tomorrow? Please 
set up a wiki page to identify the gaps, I suggest 3 columns (Actual code / EC2 
/ OpenStack Compute).



Thanks,



John






Re: [Openstack] OpenStack Compute API for Cactus (critical!)

2011-02-28 Thread Erik Carlin
That all sounds good.  My only question is around images.  Is glance ready to 
be an independent service (and thus have a separate API) in Cactus?

Erik

From: John Purrier j...@openstack.org
Date: Mon, 28 Feb 2011 16:53:53 -0600
To: Erik Carlin erik.car...@rackspace.com, openstack@lists.launchpad.net
Subject: RE: [Openstack] OpenStack Compute API for Cactus (critical!)

Hi Erik, today we have compute, block/volume, and network all encompassed in 
nova. Along with image and object storage these make the whole of OpenStack 
today. The goal is to see where we are at wrt the OpenStack API 
(compute/network/volume/image) and coverage of the underlying implementation as 
well as what is available through the EC2 API today.

I would propose that volume and network APIs be exposed not through the core 
compute API, but as extensions. Once we create separate services and factor 
network and volume services out of nova, these APIs will form the core APIs 
for these services. We may also need to up-version these service APIs between 
Cactus and Diablo as they are currently under heavy discussion and design.

John

From: Erik Carlin [mailto:erik.car...@rackspace.com]
Sent: Monday, February 28, 2011 3:16 PM
To: John Purrier; openstack@lists.launchpad.net
Subject: Re: [Openstack] OpenStack Compute API for Cactus (critical!)

John -

Are we just talking about compute aspects?  IMO, we should NOT be exposing 
block functionality in the OS compute API.  In Diablo, we will break out block 
into a separate service with its own OS block API.  That means for now, there 
may be functionality in nova that isn't exposed (an artifact of originally 
mimicking EC2) until we can fully decompose nova into independent services.

Erik

From: John Purrier j...@openstack.org
Date: Mon, 28 Feb 2011 14:16:20 -0600
To: openstack@lists.launchpad.net
Subject: [Openstack] OpenStack Compute API for Cactus (critical!)

Has anyone done a gap analysis against the proposed OpenStack Compute API and 
a) the implemented code, and b) the EC2 API?

It looks like we have had a breakdown in process, as the community review 
process of the proposed spec has not generated discussion of the missing 
aspects of the proposed spec.


Here is what we said on Feb 3 as the goal for Cactus:



OpenStack Compute API completed. We need to complete a working set of APIs 
that are consistent and inclusive of all the exposed functionality.

We need to *very* quickly identify the missing elements that are required in 
the OpenStack Compute API, and then discuss how we mobilize to get this work 
done for Cactus. As this is the #1 priority for this release there are 
implications on milestone dates depending on the results of this exercise. The 
1.1 spec should be complete and expose all current Nova functionality (superset 
of EC2/RS).

Dendrobates, please take the lead on this, anyone who can help please 
coordinate with Rick. Can we get a fairly complete view by EOD tomorrow? Please 
set up a wiki page to identify the gaps, I suggest 3 columns (Actual code / EC2 
/ OpenStack Compute).



Thanks,



John








Re: [Openstack] OpenStack Compute API for Cactus (critical!)

2011-02-28 Thread Erik Carlin
Thanks, Devin, for the reiteration.  I'm for EC2 API support; I just think that 
OS owning its own API specs is key if we are to innovate and drive open, 
standard per-service interfaces.

Erik

From: Devin Carlen devin.car...@gmail.com
Date: Mon, 28 Feb 2011 19:59:38 -0800
To: Erik Carlin erik.car...@rackspace.com
Cc: John Purrier j...@openstack.org, openstack@lists.launchpad.net
Subject: Re: [Openstack] OpenStack Compute API for Cactus (critical!)

Erik,

Thanks for the clarification.  I'd just like to reiterate that official support 
for the EC2 API is something that needs to be handled in parallel, since we've 
committed to supporting it in the past.


Best,


Devin

On Feb 28, 2011, at 7:53 PM, Erik Carlin wrote:

Devin -

In a decomposed service model, OS APIs are per service, so the routing is 
straightforward.  For services that need to consume other services (e.g. the 
compute service needs an IP from the network service), the queueing and worker 
model remains the same, it's just that the network worker calls out to the 
RESTful network service API (likely the admin API).

For EC2 (and any other 3rd party API), the community is welcome to support 
them, although I see them as secondary to the canonical OS APIs themselves.  
Since the EC2 API combines a number of services, it is essentially a 
composition API.  It probably makes sense to keep it in nova (i.e. compute), but 
you are right, it would need to call out to glance, block, and network in the 
diablo timeframe.

What was attached was intended simply to show the general approach, not be a 
detailed diagram of the API flows.  Once we complete the gap analysis John has 
requested, these connections should become more clear.

Erik

From: Devin Carlen devin.car...@gmail.com
Date: Mon, 28 Feb 2011 17:44:03 -0800
To: Erik Carlin erik.car...@rackspace.com
Cc: John Purrier j...@openstack.org, openstack@lists.launchpad.net
Subject: Re: [Openstack] OpenStack Compute API for Cactus (critical!)

Your diagram is deceptively simple because it makes no distinction about how 
block API would be handled in the EC2 API, where compute and block operations 
are very closely coupled.  In order for the diagram to convey the requirements 
properly, it needs to show how compute/network/volume API requests are routed 
by both the EC2 and OpenStack API.


Devin


On Feb 28, 2011, at 3:52 PM, Erik Carlin wrote:

I was talking with Will Reese about this more.  If we are eventually going to 
decompose into independent services with separate endpoints, he thought we 
should do that now.  I like that idea better.  For cactus, we still have a 
single nova service black box but we put multiple OpenStack API endpoints on 
the front side, one for each future service.  In other words, use separate 
endpoints instead of extensions in a single endpoint to expose the current 
capabilities.  That way, it sets us on the right path and consumers don't have 
to refactor between Cactus and Diablo.  In Diablo, we decompose into 
separate services and the endpoints move with them.  It's a bit hard to 
visualize so I put together the attached pdf.  I'm assuming glance is a 
separate service and endpoint for cactus (still need to figure out per my 
message below) and swift already is.

Erik

From: Erik Carlin erik.car...@rackspace.com
Date: Mon, 28 Feb 2011 17:07:22 -0600
To: John Purrier j...@openstack.org, openstack@lists.launchpad.net
Subject: Re: [Openstack] OpenStack Compute API for Cactus (critical!)

That all sounds good.  My only question is around images.  Is glance ready to 
be an independent service (and thus have a separate API) in Cactus?

Erik

From: John Purrier j...@openstack.org
Date: Mon, 28 Feb 2011 16:53:53 -0600
To: Erik Carlin erik.car...@rackspace.com, openstack@lists.launchpad.net
Subject: RE: [Openstack] OpenStack Compute API for Cactus (critical!)

Hi Erik, today we have compute, block/volume, and network all encompassed in 
nova. Along with image and object storage these make the whole of OpenStack 
today. The goal is to see where we are at wrt the OpenStack API 
(compute/network/volume/image) and coverage of the underlying implementation as 
well as what is available through the EC2 API today.

I would propose that volume and network APIs be exposed not through the core 
compute API, but as extensions. Once we create separate services and factor 
network and volume services out of nova, these APIs will form the core APIs 
for these services. We may also need

Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Erik Carlin
The way I see it, there isn't a singular OpenStack API (even today there is 
swift, nova, and glance).  OpenStack is a suite of IaaS services, each with its own API 
– so there is a SUITE of standard OS APIs.  And each OS service should strive 
to define the canonical API for automating that particular service.  If I just 
want to run an image repo, I deploy glance.  If my SAN guy can't get storage 
provisioned fast enough, I deploy the OS block storage service (once we have 
it).  And if I want a full cloud suite, I deploy all the services.  They are 
loosely coupled and (ideally) independent building blocks.  Whether one chooses 
to front the different service endpoints with a proxy to unify them or have 
separate service endpoints is purely a deployment decision.  Either way, there 
are no competing OS APIs.  Support for 3rd party APIs (e.g. EC2) is secondary 
IMO, and to some degree, detrimental.  Standards are defined in large part by 
ubiquity.  We want OS to become ubiquitous and we want the OS APIs to become 
de facto.  Supporting additional APIs (or even variations of the same API like 
AMQP per the other thread) doesn't help us here.  I would love to see the 
community rally behind a per service standard OS REST API that we can own and 
drive.

To that end, the goal as I see it is to launch canonical OpenStack Compute 
(nova) and Image (glance) APIs with Cactus.  In Diablo, we would then work to 
introduce separate network and block storage services with REST APIs as well.  
All APIs would be independently versioned and stable.  I'm ALL for per-language 
OpenStack bindings that implement support for the entire suite of services.

Re: extensions, it's actually the technical aspects that are driving it.  There 
is a tension between standards and innovation that needs to be resolved.  In 
addition, we need to be able to support niche functionality (e.g. Rackspace may 
want to support API operations related to managed services) without imposing it 
on everyone.  These problems are not new.  We've seen the same exact thing with 
OpenGL and they have a very successful extension model that has solved this.  
Jorge studied this when he did his PhD and has designed extensions with that in 
mind.  He has a presentation on extensions here:
http://wiki.openstack.org/JorgeWilliams?action=AttachFile&do=view&target=Extensions.pdf
if you haven't seen it.  I think extensions are critically important and would 
encourage dialog amongst the community to come to a consensus on this.  Per my 
points above, I would prefer to avoid separate APIs for the same service.  
Let's see if we can get behind a per-service API that becomes THE de facto 
standard way for automating that service.

Erik

From: Justin Santa Barbara jus...@fathomdb.com
Date: Fri, 18 Feb 2011 09:57:12 -0800
To: Paul Voccio paul.voc...@rackspace.com
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] OpenStack Compute API 1.1

 How is the 1.1 api proposal breaking this?

Because if we launch an OpenStack API, the expectation is that this will be the 
OpenStack API :-)

If we support a third-party API (CloudServers or EC2), then people will 
continue to use their existing wrappers (e.g. jclouds).  Once there's an 
OpenStack API, then end-users will want to find a library for that, and we 
don't want that to be a poor experience.  To maintain a good experience, we 
either can't break the API, or we need to write and maintain a lot of proxying 
code to maintain compatibility.  We know we're not ready for the first 
commitment, and I don't think we get enough to justify the second.

 I think the proxy would make sense if you wanted to have a single api. Not 
 all service providers will, but I see this as entirely optional, not required 
 to use the services.

But then we have two OpenStack APIs?  Our ultimate end users don't use the API, 
they use a wrapper library.  They want a stable library that works and is kept 
up to date with recent changes and don't care about what's going on under the 
covers.  Wrapper library authors want an API that is (1) one API and (2) stable 
with reasonable evolution, otherwise they'll abandon their wrapper or not 
update it.

 The extensions mechanism is the biggest change, iirc.

I'm not a big fan of the extensions idea, because it feels more like a 
reflection of a management goal, rather than a technical decision (OpenStack 
is open to extensions).  Supporting separate APIs feels like a better way to do 
that.  I'm very open to be corrected here, but I think we need to see code that 
wants to use the extension API and isn't better done as a separate API.  Right 
now I haven't seen any patches, and that makes me uneasy.





On Fri, Feb 18, 2011 at 9:29 AM, Paul Voccio 
paul.voc...@rackspace.com wrote:
The spec for 1.0 

Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Erik Carlin
Whoops.  The extension presentation link was broken.  Here is a working one:
http://wiki.openstack.org/JorgeWilliams?action=AttachFile&do=view&target=Extensions.pdf


Re: [Openstack] Glance x-image-meta-type raw vs machine

2011-01-13 Thread Erik Carlin
I would just call it VMDK.  That's what VMware
(http://www.vmware.com/technical-resources/interfaces/vmdk.html) and
everyone else calls it, even though there may be extra files to support
it.  We're just naming the disk format here.

We had also talked about the IMG disk format to support AMIs but RAW is
the same thing so we are covered there.

Erik

  

On 1/13/11 9:15 AM, Jay Pipes jaypi...@gmail.com wrote:

2011/1/13 Diego Parrilla Santamaría diego.parrilla.santama...@gmail.com:
 An appliance is the combination of metadata describing the virtual machine
 plus the virtual disks. The standard format in the virtualization industry
 is OVF. Basically, it differs from VMX+VMDK(s) because it has an XML format
 that describes the virtual machine (and a little bit of its environment, like
 firewalling, policies, etc...).
 VMX is VMware-specific, and OVF is vendor-agnostic (or should be...). From
 my perspective, VMX + VMDK(s) is not an appliance format, but this is the
 kind of topic for a long discussion ;-)
 If you are looking for a simple way to describe the virtual machine
 parameters for an appliance, check the OVF specs to get some inspiration. I
 think the full OVF spec is overkill compared to the simpler approach of Nova.

Thanks for the explanation, Diego! Much appreciated.

The question arises because we are wondering what information to store
in Glance's registry that describes an image. I had proposed the
following, with additions from John Purrier:

disk_format: choices: VHD, VDI, VMDK, RAW, and QCOW2
appliance_format: choices: OVF, OVA, and AMI.
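
In model terms, that could be as simple as the following sketch (not the
actual Glance code):

    # Sketch only -- not the actual Glance model code.
    DISK_FORMATS = ('VHD', 'VDI', 'VMDK', 'RAW', 'QCOW2')
    APPLIANCE_FORMATS = ('OVF', 'OVA', 'AMI')

    def validate_image_meta(disk_format, appliance_format):
        """Reject registry entries whose formats are not in the agreed lists."""
        if disk_format not in DISK_FORMATS:
            raise ValueError("unknown disk_format: %s" % disk_format)
        if appliance_format not in APPLIANCE_FORMATS:
            raise ValueError("unknown appliance_format: %s" % appliance_format)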

Do you agree that we should put VMX in the list of appliance formats,
as Ewan Mellor suggested?

Cheers!
jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Glance x-image-meta-type raw vs machine

2011-01-10 Thread Erik Carlin
Correct me if I am wrong, but I believe AMIs use IMG, so that should be
another disk format as well (it would be the only one in the AMI appliance
format unless AWS changes it).  Is there enough variance in the virtual
disk and envelope formats over time that we want to include version
columns, or would new versions just be another format choice?

Erik

On 1/10/11 10:59 AM, John Purrier j...@openstack.org wrote:

Jay, this makes a lot of sense. For disk formats I would suggest: VHD,
VDI, VMDK, RAW, and QCOW2. For the appliance formats: OVF, OVA, and AMI.

Conversion within Glance will need to be able to handle both disk image
conversion and appliance format conversion.

John

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Monday, January 10, 2011 10:26 AM
To: John Purrier
Cc: Ewan Mellor; openstack@lists.launchpad.net
Subject: Re: [Openstack] Glance x-image-meta-type raw vs machine

And I think we need to come to an agreement on the terms used here...

What is a type of virtual image? Do we mean a *disk* image format?
Do we mean a *metadata envelope* type (OVF, AMI, etc)? Do we mean some
type of system image or image part (kernel, ramdisk, etc)?

What Glance is serving/registering is really called a *virtual
appliance*, as described in this article:
http://en.wikipedia.org/wiki/Virtual_appliance

Proposal:

Change the Image model to have the following fields, instead of the
existing type column:

disk_format -- choice between ('VHD', 'VDI', 'VMDK')
appliance_file_format -- choice between ('AMI','OVF')

Thoughts?
-jay

On Mon, Jan 10, 2011 at 11:11 AM, John Purrier j...@openstack.org wrote:
 My 2 cents... We need to define a transport-neutral specification that
allows us to encapsulate and copy/move a variety of virtual image
formats; this should be based on OVF. The envelope can contain both the
actual image as well as any required meta-data.

 The image elements specified are very AMI-specific; we should
generalize to be able to indicate the type of virtual image (i.e. AMI,
VHD, etc.). A test for POC can be a service that takes the data in the
OVF or what is stored in Glance to convert between formats. If we do
this correctly all of the required data will be available at the correct
point in the flow.

 Don't know if this is directly applicable to the discussion point
below, but it is important that we get the fundamental
design/architecture concepts in place moving forward.

 John

 -Original Message-
 From: openstack-bounces+john=openstack@lists.launchpad.net
[mailto:openstack-bounces+john=openstack@lists.launchpad.net] On
Behalf Of Jay Pipes
 Sent: Monday, January 10, 2011 9:44 AM
 To: Ewan Mellor
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Glance x-image-meta-type raw vs machine

 On Sat, Jan 1, 2011 at 7:30 PM, Ewan Mellor ewan.mel...@eu.citrix.com
wrote:
 What is the intended semantics of the Glance x-image-meta-type header
 values "raw" vs "machine"?

 When we pulled the Image model from Nova into Glance, there was a
 field image_type that was limited to the strings raw, machine,
 kernel, and ramdisk.

 I'm open to changing this or using something like a format field
 (AMI vs OVF, etc..)

 Thoughts?

 -jay












Re: [Openstack] Some insight into the number of instances Nova needs to spin up...

2010-12-29 Thread Erik Carlin
We know Amazon is highly, highly elastic.  While the number of instances launched
per day is impressive, we know that many of those instances have a short
life.  I see Guy is now teaming up with CloudKick on this report.  The EC2
instance ID enables precise measurement of instances launched, and
CloudKick provides some quantitative measure of lifetime of instances.
Last time I checked, the numbers showed something like 3% of EC2
instances launched via CK were still running (as a point of reference,
something like 80% of Rackspace cloud servers were still running).

To meet the elasticity demands of EC2, nova would need to support a high
change rate of adds/deletes (not to mention state polling, resizes, etc).
Is there a nova change rate target as well or just a physical host limit?
The 1M host limit still seems reasonable to me.  Large scale deployments
will break into regions where each region is an independent nova
deployment that each has a 1M host limit.

Erik 


On 12/29/10 10:47 AM, Jay Pipes jaypi...@gmail.com wrote:

Some insight into the number of instances being spun up *per day* on
AWS, *just in the US-East-1 region*:

http://www.jackofallclouds.com/2010/12/recounting-ec2/

Avg in 2010: 70,528 instances spun up each day
Max: 150,800 instances in a single day

Food for thought.

What do we think Nova could/can support? I know we are aiming at
supporting clouds of 1M physical hosts. Perhaps we need to re-assess?
:)

/me urges more prioritization of the openstack-ci (continuous
integration and performance/stress testing) project in Cactus...

-jay





