[Openstack] python-novaclient vs. python-openstack.compute

2011-05-17 Thread Soren Hansen
python-novaclient[0] is the client for Nova that we maintain
ourselves. It is a fork of jacobian's python-cloudservers.

python-openstack.compute is jacobian's new branch of python-cloudservers.

I wonder if there's any point in having two distinct, but very similar,
libraries to do the same thing. If not, how do we move forward?

Yielding to jacobian (or someone else external to the project) helps
keep us honest: someone outside the project would work from the API
docs to extend their client tools, and would hopefully point out any
divergence between the API docs and the actual exposed API.

However, we need client tools to exercise new features exposed in the
API, so I'm not sure we can reasonably live without a set of tools
that we maintain ourselves to expose all the new functionality.

Thoughts?

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/



[Openstack] Discussion of network service flows

2011-05-17 Thread Troy Toman
As was mentioned in the networks meeting this afternoon, we need to open up a
discussion around flows between Nova, Melange (IPAM), Quantum and Donabe. As we
are refactoring Nova for networking and designing IP and network services in
parallel, it will be important to reach agreement on how the REST calls
flow and who maintains which relationships.

I've set up an etherpad: 

http://etherpad.openstack.org/network-flows

to host the discussion.

Troy





[Openstack] testing and deploying swift?

2011-05-17 Thread Jon Slenk
hi,

So what are people's processes for tracking Swift releases on
production systems?

I'm guessing Rackspace is probably the most serious deployment to
date. If anybody there could comment on what release of Swift is being
run and how you expect to deploy newer versions, that would be fun and
educational to hear about and mull over.

If anybody working on core Swift could comment on which parts of the
system are more vs. less dangerous to muck with, that would be great,
too. For example, we're still trying to grok the implications of
significantly changing the rings (usually by expanding them), and what
even qualifies as "significant" vs. not.

thanks for sharing any experiences,
-Jon.



Re: [Openstack] Global deployment of Glance

2011-05-17 Thread Eric Day
Assuming you are using Swift for storage, the Swift ring configuration
can specify zones and a replica count, which could handle all of this
logic and bit pushing for you.
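
As a rough illustration (values, ports and device names here are
placeholders, and the part power/replica count need real capacity
planning), a three-replica ring spread across three zones might be
built along these lines:

  swift-ring-builder object.builder create 18 3 1
  swift-ring-builder object.builder add z1-10.0.0.1:6000/sdb1 100
  swift-ring-builder object.builder add z2-10.0.0.2:6000/sdb1 100
  swift-ring-builder object.builder add z3-10.0.0.3:6000/sdb1 100
  swift-ring-builder object.builder rebalance

The builder tries to place each replica in a distinct zone, so a zone
outage still leaves copies elsewhere.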

-Eric

On Tue, May 17, 2011 at 06:36:38PM +, Glen Campbell wrote:
> That's probably the easiest to implement. This would mean that each
> deployment of new images would need to be installed in each region.


Re: [Openstack] Global deployment of Glance

2011-05-17 Thread Glen Campbell
That's probably the easiest to implement. This would mean that each
deployment of new images would need to be installed in each region.

On 5/17/11 12:47 PM, "Chris Behrens"  wrote:

>
>Ignoring how it is actually implemented, I think we do want copies of
>base images in each region.  We don't want any sort of outage in one
>region to adversely affect another region.
>
>- Chris


Re: [Openstack] Global deployment of Glance

2011-05-17 Thread Chris Behrens

Ignoring how it is actually implemented, I think we do want copies of base 
images in each region.  We don't want any sort of outage in one region to 
adversely affect another region.

- Chris


On May 17, 2011, at 9:36 AM, Jay Pipes wrote:

> On Tue, May 17, 2011 at 11:59 AM, Glen Campbell
>  wrote:
>> If we are going to deploy Glance to support a global deployment of Nova, 
>> would it make sense to have replicas in different regions for better 
>> performance?
>> Or, to put it another way, is there a recommended way to keep multiple 
>> Glance installations in sync?
> 
> Hi Glen!
> 
> I think a better idea than having multiple copies of an image in
> different regions is to do two things:
> 
> a) Use a proxy caching server like Varnish or Squid to cache pieces or
> all of an image in various zones
> b) Use a highly-available storage system like Swift behind the global
> Glance server


Re: [Openstack] Global deployment of Glance

2011-05-17 Thread Chris Behrens
Each zone should definitely have glance instances, IMO.  At least two per zone 
for redundancy and networking reasons in large OpenStack installations.  
There's some work to do to support this, though.

- Chris

On May 17, 2011, at 8:59 AM, Glen Campbell wrote:

> If we are going to deploy Glance to support a global deployment of Nova, 
> would it make sense to have replicas in different regions for better 
> performance?
> 
> Or, to put it another way, is there a recommended way to keep multiple Glance 
> installations in sync?
> 
> Users doing snapshots/backups, etc., would presumably get better performance 
> if Glance was local, but how would we keep the base/shared images in sync?


Re: [Openstack] Global deployment of Glance

2011-05-17 Thread Jay Pipes
On Tue, May 17, 2011 at 11:59 AM, Glen Campbell
 wrote:
> If we are going to deploy Glance to support a global deployment of Nova, 
> would it make sense to have replicas in different regions for better 
> performance?
> Or, to put it another way, is there a recommended way to keep multiple Glance 
> installations in sync?

Hi Glen!

I think a better idea than having multiple copies of an image in
different regions is to do two things:

a) Use a proxy caching server like Varnish or Squid to cache pieces or
all of an image in various zones
b) Use a highly-available storage system like Swift behind the global
Glance server

For a) we need to complete the HTTP 1.1 Cache headers blueprint
(https://blueprints.launchpad.net/glance/+spec/caching) and for b) you
would simply use the Swift backend, configured appropriately for a
large Swift cluster.
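
For b), the glance.conf side would look roughly like this (a sketch
only -- option names are from the current Swift store code and may
differ by release; all values are placeholders):

  default_store = swift
  swift_store_auth_address = https://auth.example.com/v1.0
  swift_store_user = glance
  swift_store_key = SECRET_KEY
  swift_store_container = glance
  swift_store_create_container_on_put = True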

> Users doing snapshots/backups, etc., would presumably get better performance 
> if Glance was local, but how would we keep the base/shared images in sync?

This is actually something that Rick H and Chris McG are working on.
The basic strategy that they came up with was to add a parent ID
attribute to the image and for any snapshot image, simply refer to the
base image as the snapshot image's parent. The glance client would
check for a non-null parent_id and continue streaming parent images
for as long as it found one.

For example, let's say you have a "golden image" with the URI:
http://glance.example.com/12345. A user creates an instance with this
image and some time later, decides to do a snapshot or backup of their
running instance. The snapshotting code in the virtualization layer
produces what is essentially a differential snapshot, containing only
the bits of the running image that differ from the base golden image.
This snapshot (typically much smaller than the original image) could
be stored in the local (zone-local) Glance server with a call to POST
/images. When pushing this snapshot image to the local Glance server,
we would set the parent ID to http://glance.example.com/12345.

Let's say at some later time, the user wanted to restore from this
backup. The virtualization layer that implemented the restore call
would need to stream the backup image from the local Glance server. In
doing so, it would use the glance client class' get_image() method.
When calling this method, the glance client would first return the
snapshot image piece. Noticing the image had a parent ID, it would
continue to stream the golden image from the global image Glance
server in-line, essentially enabling us to store only the small diff
of the snapshot locally while streaming the bulk of the image master
from the global Glance server.
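
In client pseudocode, the chained streaming might look something like
this (a sketch only: I'm assuming get_image() returns a (metadata,
chunk iterator) pair and that the parent attribute is exposed as
'parent_id' -- both names are illustrative, not final):

  def stream_with_parents(client, image_id):
      # Yield the snapshot's chunks first, then walk up the
      # parent chain until we reach an image with no parent
      # (the golden image on the global Glance server).
      while image_id is not None:
          meta, chunks = client.get_image(image_id)
          for chunk in chunks:
              yield chunk
          image_id = meta.get('parent_id')

The restore path in the virt layer would then just consume this one
iterator and never need to know how many levels of parents exist.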

I'll let Rick elaborate on the above and correct any mistakes I made
in my description. :)

-jay



[Openstack] Global deployment of Glance

2011-05-17 Thread Glen Campbell
If we are going to deploy Glance to support a global deployment of Nova, would 
it make sense to have replicas in different regions for better performance?

Or, to put it another way, is there a recommended way to keep multiple Glance 
installations in sync?

Users doing snapshots/backups, etc., would presumably get better performance if 
Glance was local, but how would we keep the base/shared images in sync?



[Openstack] Reminder: OpenStack team meeting - 21:00 UTC

2011-05-17 Thread Thierry Carrez
Hello everyone,

Our weekly team meeting will take place at 21:00 UTC this Tuesday in
#openstack-meeting on IRC.

Check out how that time translates for *your* timezone:
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20110517T21

See the meeting agenda, and edit the wiki to add new topics for discussion:
http://wiki.openstack.org/Meetings

Cheers,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



[Openstack] Keystone API versioning

2011-05-17 Thread Ziad Sawalha
Gholt brings up a good point in https://github.com/khussein/keystone/issues/36.
In order to support the existing ecosystem of clients out there designed to
work against Swift and the Rackspace Auth 1.0 API (as documented here:
http://docs.rackspacecloud.com/files/api/v1/cfdevguide_d5/content/ch03s01.html),
we should make Keystone compatible with Rackspace 1.0 Auth.

I therefore propose the following:
1 - we make the first release of the Keystone API a v2.0 API (the impact is
that it will be accessed at /v2.0/tokens instead of /v1.0/tokens)
docs:
https://github.com/khussein/keystone/blob/master/docs/guide/src/docbkx/idmdevguide.xml
spec:
https://github.com/khussein/keystone/blob/master/docs/guide/src/docbkx/idm.wadl

2 - we make Keystone respond to v1.0 requests the same way Rackspace auth does
(Blueprint:
https://blueprints.launchpad.net/keystone/+spec/backward-compatibility).
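
To make the contrast concrete, the two styles look roughly like this
on the wire (illustrative shapes only -- the 1.0 headers follow the
docs linked above; the 2.0 body follows the draft spec and may change):

  1.0 (header-based):
    GET /v1.0 HTTP/1.1
    X-Auth-User: joeuser
    X-Auth-Key: 0123456789abcdef

    HTTP/1.1 204 No Content
    X-Auth-Token: eaaafd18-0fed-aaaa-bbbb-000000000000
    X-Storage-Url: https://storage.example.com/v1/AUTH_joeuser

  2.0 (resource-based):
    POST /v2.0/tokens HTTP/1.1
    Content-Type: application/json

    {"passwordCredentials": {"username": "joeuser",
                             "password": "secret"}}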

Looking for input and feedback.

Thanks,
Z

