Re: [Openstack] libvirt vs. Xen driver handling of local storage

2011-09-01 Thread Paul Voccio
Vish,

We've talked about this idea in the past and I agree this works for *nix
hosts, but a base install of Windows 2k8R2 with CloudServers is 10.7GB. If
we went with a 10GB base disk solution this obviously won't work. Even if we
went with a 20GB partition it could become a problem as users install
programs to C: and then try to do system updates that expect to have some
reasonable disk space available on C:. Providing a clean install with < 10GB
usable doesn't feel like a good customer experience. We could go with a 30GB
or 40GB fixed disk, but that doesn't sound very flexible for all users.

I wanted to include some context as to why we do it this way and how I
envisioned this working.

We have some users who wish to format their disks with other filesystems for
performance reasons or encrypt them for security reasons. Both of these are
completely valid. However, this poses a problem if we were to try to resize
those partitions for them. The easy solution is to not touch the partitions
and let users do it themselves. I think this type of solution works well for
power users and developers. It does pose a problem for less technical users,
who would resize a disk and then wonder why they don't see the extra space
they expected. This would create extra support costs that not all providers
are willing to shoulder.

To address this, we designed what we felt was a compromise and let the users
decide what they feel is the best solution. It would be an extension that
lets users define what kind of disk management they wish to use: 'manual' or
'auto'. Manual would be the hands-off approach that tells the system to
expand the disk but not touch the partition.

Auto would expand the disk along with the partition. The caveat with the
auto expand is that the filesystem has to be in a format the host
understands. If it is a FS that the compute node isn't prepared to deal
with, it errors. If the instance is set to auto and the customer requests a
resize, the compute node would mount the FS, check for the right partition
boundaries and type, and expand the disk. If the partition boundaries aren't
what the system expects, it errors. This scheme would allow users who wish
to scale vertically to inflate their instance and then deflate it. Deflating
would entail copying the data to a smaller disk, but the customers that use
this feature see it as unique and useful.
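
A rough sketch of how the 'auto' vs. 'manual' behaviour above might look for
the simple case of a raw image whose ext3/ext4 filesystem spans the whole
disk (no partition table). The disk_config parameter and the function itself
are illustrative assumptions, not the actual extension or driver code:

import subprocess

def resize_root_disk(image_path, new_size_gb, disk_config='auto'):
    # Growing the virtual disk itself is safe in both modes.
    subprocess.check_call(['qemu-img', 'resize', image_path,
                           '%dG' % new_size_gb])
    if disk_config == 'manual':
        return  # hands off: the user repartitions/grows the filesystem
    # 'auto': only touch filesystems the compute node understands.
    fs_type = subprocess.check_output(
        ['blkid', '-o', 'value', '-s', 'TYPE', image_path]).strip().decode()
    if fs_type not in ('ext3', 'ext4'):
        raise RuntimeError('cannot auto-expand filesystem type %r' % fs_type)
    subprocess.check_call(['e2fsck', '-fp', image_path])
    subprocess.check_call(['resize2fs', image_path])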

The proposal as I understand it would help in all manual situations, but not
with customers who wish to use uncommon filesystem options.

If anyone can think of other ways to handle this, I would love to discuss.

pvo


On Wed, Aug 31, 2011 at 4:24 PM, Chris Behrens
wrote:

> Vish,
>
> I think Rackspace ozone/titan has some upcoming work to do for the resizing
> for xenserver that might close some of the gap.
>
> I think we need some options (flags) if we are to synchronize libvirt/xen.
>  At some point, Rackspace also needs an API extension to support a couple
> different ways of handling resizes.  Until we get there, we at least need an
> option to keep the xenserver code working as-is for now.  I assume others
> need the current libvirt implementation to stay as well.
>
> That said, I think it's probably not too difficult to do the 'libvirt way'
> for Xen, but I don't know about it making diablo.
> Adding support into libvirt to do the 'xen way' should be easier, I'd
> think.  But I'm the opposite of you, Vish.  I don't know the libvirt layer
> as well. :)
>
> If we can FLAG the way it works... and make these options work in both
> libvirt/xen, I think we can all remain happy.
>
> - Chris
>
> On Aug 31, 2011, at 11:45 AM, Vishvananda Ishaya wrote:
>
> > Hey guys,
> >
> > We have a very annoying discrepancy between how local space is used in
> the xen driver vs the libvirt driver.  I think it is vital that this is
> rectified before the Diablo release.  We already have a few functional gaps
> between the drivers, but the fact that disks are partitioned completely
> differently between the two is very confusing to users.
> >
> > Bug is here: https://bugs.launchpad.net/nova/+bug/834189
> >
> > The libvirt driver:
> >
> > * downloads the image from glance
> > * resizes the image to 10G if it is < 10G
> > (in the case of a separate kernel and ramdisk image it extends the
> filesystem as well.  In the case of a whole-disk image it just resizes the
> file because it doesn't know enough to change the filesystem)
> > * attaches a second disk the size of local_gb to the image
> > (when using block device mapping through the ec2 api, more swap/ephemeral
> disks can be attached as volumes as well)
> >
> > The XenServer driver (I'm less familiar with this code so please correct
> me if i am wrong here):
> > * downloads the image from glance
> > * creates a vdi from the base image
> > * resizes the vdi to the size of local_gb
> >
> > The first method of resize to 10G and having separate local_gb is
> essentially the strategy taken by aws.
> >
> > Drawbacks of 

Re: [Openstack] libvirt vs. Xen driver handling of local storage

2011-09-02 Thread Paul Voccio
On Fri, Sep 2, 2011 at 8:01 AM, Soren Hansen  wrote:

> 2011/9/2 Paul Voccio :
> > Vish,
> > We've talked about this idea in the past and I agree this works for *nix
> > hosts, but a base install of Windows 2k8R2 with CloudServers is 10.7GB.
>
> Yikes.
>

Tell me about it.


>
> > If we went with a 10gb base disk solution this obviously won't work.
> > Even if we went with a 20gb partition it could become a problem as
> > users install programs to C: and then try to do system updates that
> > expect to have some reasonable disk available on C:. Providing a clean
> > install with < 10gb usable doesn't feel like a good customer
> > experience.
>
> What we do now is grow the image if it's smaller than size X. If it's
> already of size X or larger, we leave it alone.
>
> I guess we should add a check to verify that the image isn't larger than
> the disk size granted to the requested flavour so that people can't
> abuse this.
>
> > We have some users that wish to format their disk with other
> > filesystems for performance reasons or encrypt them for security
> > reasons. Both of these are completely valid. However, this poses a
> > problem if we were to try to resize these partitions for them. Easy
> > solution is don't touch the partitions and let them do it themselves.
> > I think this type of solution works well for power users and
> > developers. This does pose a problem for less technical users who
> > would resize a disk and then wonder why they don't see the extra space
> > as expected. This would create extra support costs that not all
> > providers are willing to shoulder.
>
> I think this is the difference between "a cloud" and a "VPS with an API"
> making its appearance.
>

Soren, I agree with you that this is a subtle difference. The big thing that
I'm not sure we always consider is the support cost of running Nova. If
customers were responsible for expanding the disk, there will always be some
that are unable (or unwilling) to do it. If the software can do this, you can
reduce the staff needed to handle support. IMHO, that is what we're tasked to
do: automate those support features.



> I'm all for building something on top of which someone can provide a
> VPS, but that's not the core of what Nova is. It's meant to be "a
> cloud".  It's a piece of infrastructure on top of which amazing,
> scalable technology can be built. If we document "this is how this thing
> behaves.  Deal with it" that should be fine. I'd be really sad if we
> weren't able to make the best choice because the best choice might
> surprise less technical users using it as a VPS and who haven't read the
> documentation.
>

This is why I think we can give users the choice in how they want to manage
this. You can currently use Nova to do both. You can use it entirely as a
cloud as-is and treat everything as ephemeral (even if it isn't). I agree
with you on this point, and this is how I build my apps on virtualized
infrastructure.

However, this is not what everyone actually does. Some people just want a
few developer machines to test with. If I'm working on a feature and I'm
running out of memory during a test, I think you would agree it would be an
interesting feature to increase the VM to a different size, run the test and
confirm that it was a memory problem, then resize to a smaller instance once
I'm finished. I shouldn't have to copy all the data around to do this. You
could accomplish this by taking a snapshot and launching a larger instance,
but I don't know if that will always be the case (I'm thinking of single-use
licensed software).
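
The inflate/deflate workflow above maps onto the resize and confirm calls the
compute API already exposes. A hedged example (endpoint, token and flavor ids
are made up; field names follow the CloudServers 1.0 style):

import json
import urllib2

def server_action(server_id, body):
    req = urllib2.Request(
        'http://nova.example.com/v1.0/servers/%s/action' % server_id,
        json.dumps(body),
        {'Content-Type': 'application/json', 'X-Auth-Token': 'TOKEN'})
    return urllib2.urlopen(req)

server_action('1234', {'resize': {'flavorId': 4}})    # inflate for the test
# ... run the memory-hungry test ...
server_action('1234', {'confirmResize': None})
server_action('1234', {'resize': {'flavorId': 2}})    # deflate afterwards
server_action('1234', {'confirmResize': None})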




>
> If a deployer of OpenStack thinks this demographic is particularly
> appealing, they can extend the images they offer to notify users about
> these things or perhaps even take action on their part. E.g.:
>
>  * E-mail the user telling them this is what they need to do
>  * Show a pop-up on login telling them there's unpartitioned space.
>   "Click here to extend C: to use this space" or "Click here to
>   fire up 'Microsoft Genuine Partitioning Tool 2008 XP'".
>  * "We've detected you've grown your Cloud Server. C: has been extended
>   to use this new space. Have a nice day."
>
>
Option #3 above is exactly what I'm describing.


> I don't believe this should be a core concern for Nova. Do you think we
> can get that separation of concern to work out for everyone?
>
> > To address this, we designed what we felt was a compromise and let the
> users
> > decide what they feel is the best solution. 

Re: [Openstack] libvirt vs. Xen driver handling of local storage

2011-09-02 Thread Paul Voccio
My first thought was to do a single fixed disk and never resize that disk
at all. If you need space, you have to use a volume service.

Ultimately, I don't think this is the right approach either, but it solves
the initial use case of needing more storage space.



On Fri, Sep 2, 2011 at 11:34 AM, Chris Behrens
wrote:

>
> On Sep 2, 2011, at 8:07 AM, Paul Voccio wrote:
>
> > On Fri, Sep 2, 2011 at 8:01 AM, Soren Hansen  wrote:
> > [...]
> > The potential for filesystem bugs that could bring the host down gives
> > me the heebie jeebies. I really, really don't want to mount people's
> > filesystems.
> >
> >
> > Can you explain a bit more here? I would like to understand your
> concerns. I would advocate mounting in a utility VM if you mean to protect
> from mounting instance with malicious data. We may have to do this to expand
> partitions or resize down for Windows.
>
> Mounting someone's filesystem should not be necessary if we have certain
> restrictions on the management.  I.e., we could say we will only resize the
> last filesystem in the partition table.  That would avoid needing to know
> the filesystem layout in the image (looking at /etc/fstab or updating it).
>  Not sure that's a desired restriction, however.
>
> That said, we'd still need to attach the VM disk somewhere and run fs
> resize utils... and it might still be best to do this in a utility VM.
>
> - Chris
>
> This email may include confidential information. If you received it in
> error, please delete it.
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] libvirt vs. Xen driver handling of local storage

2011-09-09 Thread Paul Voccio
Vish,

Any more thoughts on how you want to handle this? I agree we need to get in
sync. Just need to think though all the issues.

We have some work lining up to deal with the disks and I was hoping to do it
once instead of diverging then having to redo the work.

pvo

On Fri, Sep 2, 2011 at 2:20 PM, Chris Behrens
wrote:

> Yeah, I think that can be rather fair for Unix.
>
> It's just that as you pointed out... Windows is a huge pain.   Need to make
> sure there's enough space on C: and I think there are still a lot of things
> that stupidly rely on being installed on C:
>
>
> On Sep 2, 2011, at 10:32 AM, Paul Voccio wrote:
>
> > My first thought was to do a singled fixed disk and never resize that
> disk at all. If you need space, you have to use a volume service.
> >
> > Ultimately, I don't think this the right approach either, but it solves
> the initial use case of needing more storage space.
> >
> >
> >
> > On Fri, Sep 2, 2011 at 11:34 AM, Chris Behrens <
> chris.behr...@rackspace.com> wrote:
> >
> > On Sep 2, 2011, at 8:07 AM, Paul Voccio wrote:
> >
> > > On Fri, Sep 2, 2011 at 8:01 AM, Soren Hansen 
> wrote:
> > > [...]
> > > The potential for filesystem bugs that could bring the host down gives
> > > me the heebie jeebies. I really, really don't want to mount people's
> > > filesystems.
> > >
> > >
> > > Can you explain a bit more here? I would like to understand your
> concerns. I would advocate mounting in a utility VM if you mean to protect
> from mounting instance with malicious data. We may have to do this to expand
> partitions or resize down for Windows.
> >
> > Mounting someone's filesystem should not be necessary if we have certain
> restrictions on the management.  I.e., we could say we will only resize the
> last filesystem in the partition table.  That would avoid needing to know
> the filesystem layout in the image (looking at /etc/fstab or updating it).
>  Not sure that's a desired restriction, however.
> >
> > That said, we'd still need to attach the VM disk somewhere and run fs
> resize utils... and it might still be best to do this in a utility VM.
> >
> > - Chris
> >
> > This email may include confidential information. If you received it in
> error, please delete it.
> >
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
>
> This email may include confidential information. If you received it in
> error, please delete it.
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Propose to make Monsyne Dragon a nova core developer

2012-02-07 Thread Paul Voccio
+1

On Mon, Feb 6, 2012 at 4:48 PM, Matt Dietz  wrote:

>  Hey guys,
>
>  Dragon has really stepped up lately on reviewing patches into Nova, and
> has a ton of knowledge around Nova proper, so I propose he be added to Nova
> core. I think he'd be a great addition to the team.
>
>  Matt
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [SPAM] Re: Reminder: OpenStack team meeting - 21:00 UTC

2011-01-18 Thread Paul Voccio
http://maps.google.com/maps?q=diablo,+california&um=1&ie=UTF-8&hq=&hnear=Di
ablo,+CA&gl=us&ei=Mpw1TcqSCpD3gAf_05WGCw&sa=X&oi=geocode_result&ct=title&re
snum=1&ved=0CBMQ8gEwAA

Diablo, CA is near Santa Clara. I'd vote for that.



On 1/18/11 7:47 AM, "Jordan Rinke"  wrote:

>I grew up in CA. I think the reason some of those aren't on your list is
>due
>to the technical description of it being a city. The places I mentioned
>have
>road signs but Diablo for example only has around 1,000 people living in
>it
>etc. Death Valley is a desert/national park.
>
>-Original Message-
>From: Thierry Carrez [mailto:thie...@openstack.org]
>Sent: Tuesday, January 18, 2011 7:34 AM
>To: Jordan Rinke
>Cc: openstack@lists.launchpad.net
>Subject: Re: [SPAM] Re: [Openstack] Reminder: OpenStack team meeting -
>21:00
>UTC
>
>Jordan Rinke wrote:
>> Oh man there are tons of good ones in CA. Diablo, Death Valley,
>> Drytown, Discovery, Darwin etc.
>
>Where did you find those ?
>
>I built the list on http://wiki.openstack.org/ReleaseNaming based on:
>http://en.wikipedia.org/wiki/List_of_cities_in_California
>
>--
>Thierry Carrez (ttx)
>Release Manager, OpenStack
>
>
>___
>Mailing list: https://launchpad.net/~openstack
>Post to : openstack@lists.launchpad.net
>Unsubscribe : https://launchpad.net/~openstack
>More help   : https://help.launchpad.net/ListHelp





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cactus Release Preparation

2011-01-31 Thread Paul Voccio
John,

I would agree with putting deployability at the top of the list. Right now, it
is operational from a developer's point of view. I think a true operations team
would struggle supporting it at scale.

A change I might suggest in priority is moving the API up the list. While
the OS API is usable from a developer's perspective, it isn't yet in a place
where it can drive real value to the community. If we miss the Cactus release
without having a complete API, I think we run the risk of it not being relevant
in the long term.

Paul

From: John Purrier <j...@openstack.org>
Date: Mon, 31 Jan 2011 13:05:34 -0600
To: 'Thierry Carrez' <thie...@openstack.org>, <openstack@lists.launchpad.net>
Subject: Re: [Openstack] Cactus Release Preparation


I would suggest that the theme(s) for the Cactus release be:

a. Deployability. This includes consistent packaging and deployment tools 
support; but also includes good consistent documentation, approachability to 
the project (how quickly can a novice get a running system going for proof of 
concept), and deployability at larger scale (includes reference materials 
around hardware and networking choices, operational concerns, and multi-machine 
deployment orchestration).

b. Stability. Agree with both Rick and Thierry, we need to get the existing 
features stable and available for additional and larger scale testing 
environments. We will be focusing on providing additional test automation, 
beyond testing into automated functional testing. Contributors such as 
Rackspace will be setting up larger testing environments (on the order of 
hundreds of machines) to ensure that we are stable at scale, as well.

c. Reliability. Once a configuration is stood up and operational, it needs to 
run with only normal operational attention. This will mean additional attention 
to operational concerns such as longer term test runs, memory leak detection, 
working set evaluation, etc.

d. Consistency. Thierry is right on, we need to have OpenStack be consistent 
intra-project and across projects. This will include looking at scenarios that 
"break" our goals of being hypervisor agnostic, API definitions and approach, 
developer documentation, and other areas that teams might be optimizing locally 
but create a "not finished" view of the project.

e. OpenStack API completed. We need to complete a working set of API's that are 
consistent and inclusive of all the exposed functionality. The OpenStack API 
will be an amalgam of the underlying services, we need to ensure that the 
application developer experience is smooth and logical. The DirectAPI calls 
will be exposed to project developers and committers, but the public OpenStack 
API for application developers will need to be stable, repeatable, versioned, 
and extensible. Developer documentation will need to address the fact that the 
OpenStack API will consist of fixed and well known core calls, plus additional 
calls that will be introduced by services via the extension mechanisms.

Thoughts?

John

-Original Message-
From: 
openstack-bounces+john=openstack@lists.launchpad.net
 [mailto:openstack-bounces+john=openstack@lists.launchpad.net] On Behalf Of 
Thierry Carrez
Sent: Monday, January 31, 2011 2:59 AM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Cactus Release Preparation

Rick Clark wrote:

> In Bexar was a feature release.  We pushed lots of new features.  The
> focus of Nova development in Cactus is going to be testing and
> stabilization.

I wonder if we shouldn't say "consistency, testing and stabilization".
Feature work should be concentrated in areas where the resulting
software is not consistent, in covering the gaps left after a featureful
release. The different groups have been pursuing specific scenarios, but
as a project we want to make sure that the other combinations also work.
Support IPv6 on FlatManager, for example, is clearly part of that. A
complete toolset around the Openstack API, maybe have a plan to
deprecate the objectstore...

--
Thierry Carrez (ttx)
Release Manager, OpenStack

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp




Re: [Openstack] Multi-NIC support to Cactus

2011-02-02 Thread Paul Voccio
I wanted to clarify what multi-nic means to us, as we're going to implement
it.

Since we use flat networking and we assign the IPs to the VMs, it feels
like it isn't complicated. Someone please correct me if I'm
misunderstanding. We have both public and private networks for the VMs.
Since we pick and assign both of those addresses out of a range for the
public and private NICs, we are able to assign a publicly routable IP to
the "public" interface and a private RFC1918 address to the "private"
side.

Since we are using XenServer, we do this by putting those addresses in a
JSON string in xenstore and having an agent read the xenstore and
configure the NICs as necessary. Both the public and private NICs are on
single VLANs.

I know the network service is going to get complicated very quickly, but I
wanted to let everyone know that for Cactus, our use case is pretty simple.

Paul
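
As a rough sketch of the xenstore approach described above (using the XenAPI
python bindings; the 'vm-data/networking/<mac>' key layout and the exact JSON
fields are illustrative assumptions, not the precise format the agent uses):

import json
import XenAPI

def inject_network_config(session, vm_ref, mac, settings):
    # Serialize per-NIC settings to JSON and drop them into the VM's
    # xenstore data so the in-guest agent can configure the interface.
    key = 'vm-data/networking/%s' % mac.replace(':', '')
    session.xenapi.VM.remove_from_xenstore_data(vm_ref, key)
    session.xenapi.VM.add_to_xenstore_data(vm_ref, key, json.dumps(settings))

session = XenAPI.Session('https://xenserver.example.com')
session.xenapi.login_with_password('root', 'password')
vm_ref = session.xenapi.VM.get_by_name_label('instance-00000001')[0]
inject_network_config(session, vm_ref, '00:16:3e:12:34:56', {
    'label': 'public',
    'ips': [{'ip': '203.0.113.10', 'netmask': '255.255.255.0'}],
    'gateway': '203.0.113.1',
})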

On 2/2/11 9:04 AM, "masumo...@nttdata.co.jp" 
wrote:

>Hello Ewan,
>
>Thanks for your answer. Now it's clear to me.
>
>> I assume that the tenant will not be able to configure any rich network
>> topologies until network-service is done.
>
>Network model topic is big issue and multi-nic issue is also not a small
>issue if we cover "any" network topologies. I'm expecting at first
>thought - 
>1. instances can have more than 2 vifs.
>2. cloud user can decide how many vifs instance have.
>3. different vlan can be assigned to vifs.
>4. different security groups can be assigned to vifs.
>5. "vifs are assigned which physical nics" kind of thought is necessary(?)
>Those are just first shallow thought - things can go step by step.
>
>I personally feel that not only model-basis discussion but also
>functionality-based discussion may be good to accelerate better network
>model..
>
>Kindly Regards, 
>Kei Masumoto
>
>
>-Original Message-
>From: Ewan Mellor [mailto:ewan.mel...@eu.citrix.com]
>Sent: Wednesday, February 02, 2011 10:06 PM
>To: RDH 桝本 圭(ITアーキ&セキュ技術); openstack@lists.launchpad.net
>Subject: RE: Multi-NIC support to Cactus
>
>That's a good question.  Multi-NIC support could be separated out from
>the rest of the network-service blueprint, but I don't know whether it
>would be useful to do so.
>
>I assume that the tenant will not be able to configure any rich network
>topologies until network-service is done.  If that is true, what else
>would you do with multi-NIC support?  And how do you imagine that it
>would work?
>
>Thanks,
>
>Ewan.
>
>> -Original Message-
>> From: openstack-bounces+ewan.mellor=citrix@lists.launchpad.net
>> [mailto:openstack-bounces+ewan.mellor=citrix@lists.launchpad.net]
>> On Behalf Of masumo...@nttdata.co.jp
>> Sent: 02 February 2011 06:34
>> To: openstack@lists.launchpad.net
>> Subject: [Openstack] Multi-NIC support to Cactus
>> 
>> Hello,
>> 
>> Regarding to the blueprint to Cactus,
>> I found 2 blueprints that may be related to multi-NIC.
>> ( I expect instances can have multiple vnic. )
>> 
>> 1. 
>> 2. 
>> 
>> Q1. New network model topic and multi-nic support topic will be
>> discussed as different topics for Cactus.
>> A new netowk model is deferred to Diablo but multi-NIC support may be
>> included in Cactus, am I following to current discussion?
>> 
>> Q2. If so, looking back to discussions till now, multi-nic might be
>> supported to Xenserver and KVM to Cactus?
>> I know we are not sure till any blueprint be approved, my team is
>> curious to KVM multi-nic support.
>> 
>> Regards,
>> Kei Masumoto
>> 
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>___
>Mailing list: https://launchpad.net/~openstack
>Post to : openstack@lists.launchpad.net
>Unsubscribe : https://launchpad.net/~openstack
>More help   : https://help.launchpad.net/ListHelp





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Pondering multi-tenant needs in nova.

2011-02-03 Thread Paul Voccio
Diego,

Due to our networking topology, having a vlan per customer isn't really
feasible. Most switches are limited at 4k or 8k or even 32k. With more
customers than these switches can reasonably accommodate, having a single
vlan per customer either limits the portability within a cloud or limits
the scale at which you can build your cloud. Open vSwitch will alleviate
some of this pain, but until we get it in XenServer, we're somewhat stuck
on flat networking.

Paul

On 2/3/11 4:20 AM, "Diego Parrilla Santamaría"
 wrote:

>Hi Monsyne,
>
>it's a very interesting topic and I'm curious about the reason why you
>are using the Flat Networking set up. From the conversations in other
>threads it seems the Service Providers prefer different networking
>approaches: VLAN oriented basically.
>
>Regards
>Diego
>
>-
>Diego Parrilla
>nubeblog.com | nubeb...@nubeblog.com | twitter.com/nubeblog
>+34 649 94 43 29
>
>
>
>
>On Thu, Feb 3, 2011 at 2:37 AM, Monsyne Dragon 
>wrote:
>> I am sorting out some possible implementations for the
>> multi-tenant-accounting blueprint, and the related system-usage-records
>>bp,
>> and I just wanted to run this by anyone interested in such matters.
>>
>> Basically, for multitenant purposes we need to introduce the concept of
>>an
>> 'account' in nova, representing a customer,  that basically acts as a
>>label
>> for a group of resources (instances, etc), and for access control (i.e
>> customer a cannot mess w/ customer b's stuff)
>>
>> There was some confusion on how best to implement this, in relation to
>> nova's project concept.  Projects are kind of like what we want an
>>account
>> to be, but there are some associations (like one project per network)
>>which
>> are not valid for our flat networking setup.  I am kind of
>>straw-polling on
>> which is better here:
>>
>> The options are:
>> 1) Create a new 'account' concept in nova,  with an account basically
>>being
>> a subgroup of a project (providers would use a single, default project,
>>with
>> additional projects added if needed for separate brands, or resellers,
>>etc),
>> add in access control per account as well as project, and make sure
>> apis/auth specify account appropriately,  have some way for a default
>> account to used (per project) so account doesn't get in the way for
>> non-multitenant users.
>>
>> 2) having account == nova's "project", and changing the network
>> associations, etc so projects can support our model (as well as current
>> models).  Support for associating accounts (projects) together for
>> resellers, etc would either be delegated outside of nova or added later
>> (it's not a current requirement).
>>
>> In either case, accounts would be identified by name, which would  be an
>> opaque string an outside system/person would assign, and could
>>structure to
>> their needs (ie. for associating accounts with common prefixes, etc)
>>
>> --
>>
>> --
>>-Monsyne Dragon
>>work: 210-312-4190
>>mobile210-441-0965
>>google voice: 210-338-0336
>>
>>
>>
>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>
>___
>Mailing list: https://launchpad.net/~openstack
>Post to : openstack@lists.launchpad.net
>Unsubscribe : https://launchpad.net/~openstack
>More help   : https://help.launchpad.net/ListHelp





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Pondering multi-tenant needs in nova.

2011-02-03 Thread Paul Voccio
Patrick,

You're right.  I guess that's what I get for typing a response too fast.

Paul

On 2/3/11 10:03 PM, "Patrick Ancillotti"
 wrote:

>Hey Guys, 
>
>I think Paul may have gotten a bit mixed up between VLAN and CAM tables
>on switches. The VLAN part of an ethernet frame is 12 bits (0 - 4095)
>which limits it accordingly. CAM tables however are a limit within
>switching gear that lists the MAC addresses and their respective source
>and destination ports. The lower end Cisco switches like the 2960 class
>switches have a limit of around 8000 MAC addresses, whereas others higher
>in the stack such as the 4948E class have limits upwards of 55000 MAC
>addresses, although some vendors have CAM tables in the hundreds of
>thousands for even their mid range switches.
>
>That said, VLAN's are extremely limited, even with QinQ (VLANs within
>VLANs) when you're talking about 50 000 customers, or even 500 000
>customers in the same layer 2 domain, and in many cases using smaller
>layer 2 domains creates unfortunately small service areas for capacity.
>
>For more info: 
>http://en.wikipedia.org/wiki/Virtual_LAN
>http://en.wikipedia.org/wiki/CAM_Table
>
>Thanks,
>Patrick
>
>On 3 Feb 2011, at 07:47, Paul Voccio wrote:
>
>> Diego,
>> 
>> Due to our networking topology, having a vlan per customer isn't really
>> feasible. Most switches are limited at 4k or 8k or even 32k. With more
>> customers than these switches can reasonably accommodate, having a
>>single
>> vlan per customer either limits the portability within a cloud or limits
>> the scale at which you can build your cloud. Open vSwitch will alleviate
>> some of this pain, but until we get it in XenServer, we're somewhat
>>stuck
>> on flat networking.
>> 
>> Paul
>> 
>> On 2/3/11 4:20 AM, "Diego Parrilla Santamaría"
>>  wrote:
>> 
>>> Hi Monsyne,
>>> 
>>> it's a very interesting topic and I'm curious about the reason why you
>>> are using the Flat Networking set up. From the conversations in other
>>> threads it seems the Service Providers prefer different networking
>>> approaches: VLAN oriented basically.
>>> 
>>> Regards
>>> Diego
>>> 
>>> -
>>> Diego Parrilla
>>> nubeblog.com | nubeb...@nubeblog.com | twitter.com/nubeblog
>>> +34 649 94 43 29
>>> 
>>> 
>>> 
>>> 
>>> On Thu, Feb 3, 2011 at 2:37 AM, Monsyne Dragon 
>>> wrote:
>>>> I am sorting out some possible implementations for the
>>>> multi-tenant-accounting blueprint, and the related
>>>>system-usage-records
>>>> bp,
>>>> and I just wanted to run this by anyone interested in such matters.
>>>> 
>>>> Basically, for multitenant purposes we need to introduce the concept
>>>>of
>>>> an
>>>> 'account' in nova, representing a customer,  that basically acts as a
>>>> label
>>>> for a group of resources (instances, etc), and for access control (i.e
>>>> customer a cannot mess w/ customer b's stuff)
>>>> 
>>>> There was some confusion on how best to implement this, in relation to
>>>> nova's project concept.  Projects are kind of like what we want an
>>>> account
>>>> to be, but there are some associations (like one project per network)
>>>> which
>>>> are not valid for our flat networking setup.  I am kind of
>>>> straw-polling on
>>>> which is better here:
>>>> 
>>>> The options are:
>>>> 1) Create a new 'account' concept in nova,  with an account basically
>>>> being
>>>> a subgroup of a project (providers would use a single, default
>>>>project,
>>>> with
>>>> additional projects added if needed for separate brands, or resellers,
>>>> etc),
>>>> add in access control per account as well as project, and make sure
>>>> apis/auth specify account appropriately,  have some way for a default
>>>> account to used (per project) so account doesn't get in the way for
>>>> non-multitenant users.
>>>> 
>>>> 2) having account == nova's "project", and changing the network
>>>> associations, etc so projects can support our model (as well as
>>>>current
>>>> models).  Support for associating accounts (projects) together for
>>>> resellers, etc would either be delegated outside of nov

Re: [Openstack] Pondering multi-tenant needs in nova.

2011-02-07 Thread Paul Voccio
Woah, seems I missed a lot by not being around email today.

I was a bit confused as to why we would want to have nova track if an
account was being used by a reseller. In digging back through the
blueprint associated with this, it seems the idea is that the operator of
Nova (in this case Rackspace, but whoever) should track the idea of a
reseller and the accounts associated with that reseller. Nova itself would
still retain the idea of a single account and the resources associated
with that account. I guess this doesn't feel any different than another
managed service provider doing add-on business on top of an amazon,
rackspace, linode or other cloud business.

https://blueprints.launchpad.net/nova/+spec/multi-tenant-accounting,
specifically:

http://wiki.openstack.org/openstack-accounting?action=AttachFile&do=view&ta
rget=accounts.pdf

While an operator *could* implement the account id as an arbitrary string
and map inefficient queries to it as Jay mentions, I'm not sure they would
(or even should). 

Jay -- I think I understand your concerns, but are you suggesting we
implement the idea of a reseller layer in Nova? Did I miss the point?
Sorry if I'm late to the party on this one.

Pvo


On 2/7/11 8:20 PM, "Eric Day"  wrote:

>On Mon, Feb 07, 2011 at 08:50:58PM -0500, Jay Pipes wrote:
>> Eric, you and I have a database background. I know you understand that
>>this:
>
>Of course, but the first pair of queries is not as bad as a query
>for every entity ID returned, which was in one of the previous emails
>(the main thing I was trying to address).
>
>There are other indexing tricks we can do as well, but lets not bother
>pre-optimizing in email pseudo code. :)
>
>-Eric
>
>> # Executed in the "auth service" or "configuration management
>> database" as Jorge calls it:
>> SELECT entity_id FROM entities
>> WHERE user_id = 
>> 
>> # Executed in the Nova database:
>> SELECT * FROM instances
>> JOIN instance_entity_map ON instance.id=instance_entity_map.instance_id
>> WHERE instance_entity_map.entity_id in ();
>> 
>> is not the same as this:
>> 
>> # Executed in the Nova database:
>> SELECT * FROM instances
>> JOIN instance_entity_map iem ON instance.id=iem.instance_id
>> JOIN entities ON entities.entity_id = iem.entity_id
>> JOIN users ON iem.user_id =  # This last join would,
>> in practice, be a BETWEEN predicate on a self-join to the entities
>> table
>> 
>> One query on a database versus two queries (one on each database).
>> 
>> Let's not talk about distributed join flattening as if it somehow is a
>> single query when in fact it isn't.
>> 
>> -jay
>
>___
>Mailing list: https://launchpad.net/~openstack
>Post to : openstack@lists.launchpad.net
>Unsubscribe : https://launchpad.net/~openstack
>More help   : https://help.launchpad.net/ListHelp





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Pondering multi-tenant needs in nova.

2011-02-08 Thread Paul Voccio


On 2/8/11 10:30 AM, "Vishvananda Ishaya"  wrote:

>This thread is enormous, so I'm I'm going to briefly summarize the two
>options as I see them:
>
>1.  Project Id is an opaque string, and it simply represents some kind of
>collection of users.  It is the
>responsibility of external systems (authn, authz, billing, and
>monitoring) to define what the string
>means, and what the relationships between the different "groups" and
>objects are.  Nova needs a
>few plug-in points to authn and authz, but the logic relating these
>projects to users happens external
>to the nova code.  Networking concerns are isolated to the project level.
>Pros:
> * Almost no changes to existing code (supporting multiple networks per
>project isonly necessary if
>we want to support multi-tenancy in vlan mode)

Won't we still have to support multiple networks per project in flat
networking as well? You could go into multiple zones, supported by
different networking gear that will need different networks routed to
them. 

> * project code is simple and unencumbered (most small deployments don't
>need multi-tenancy)
>Cons:
> * pushing a lot of potentially useful code into external systems
> * potential for lack of code sharing between external sytsems
>
>2.  We change the project code into a more general concept of groups.
>Groups can contain other
>groups, and users and groups can be members of multiple groups.  This
>would mirror the possibilities
>available in ldap.
>Pros:
> * Greater flexibility of implementation.
> * Group implementation code is in one place, minimizing different
>implementations per component
>Cons:
> * Much more complexity in the nova code.
> * The greater complexity/flexibility won't be needed for smaller
>deployments.

This seems like it is an issue only larger deployments of Nova will
encounter. Trying to guess how someone will try to implement and bill for
resellers and groups and which group should have access to another group's
resources seems almost out of scope at the moment.


>
>Both of these options seem viable to me, but I'm actually leaning toward
>adding flexible groups into
>nova proper.  A complete authentication system IMO needs to support,
>flexible groups.  If we increase
>the flexibility of nova in this regard, it gives us a springboard to
>breaking out authz into a more complete
>separate service.  I like this more than rewriting the entire thing from
>scratch as a completely new component.
>
>Vish
>
>
>
>On Feb 8, 2011, at 8:11 AM, Jay Pipes wrote:
>
>> Hey Paul, yeah, see what happens when you take a little time away from
>>email? ;P
>> 
>> So, I'm satisfied that I've highlighted the trade-offs that come along
>> with Nova not "inherently understanding the relationships between
>> accounts".
>> 
>> Having an external system understand these account relationships is
>> fine, and the posters on this thread have done a good job explaining
>> the benefits that come along with federating the responsibility to an
>> external plugin/service, but there are some performance issues that
>> come along with it. However, as long as these inefficiencies are
>> known, I'm satisfied. :)
>> 
>> Cheers!
>> jay
>> 
>> On Mon, Feb 7, 2011 at 11:37 PM, Paul Voccio
>> wrote:
>>> Woah, seems I missed a lot by not being around email today.
>>> 
>>> I was a bit confused at to why we would want to have nova trackif an
>>> account was being used by a reseller. In digging back through the
>>> blueprint associated with this, it seems the idea is for the operator
>>>(in
>>> this case Rackspace, but whoever) of Nova should track the idea of a
>>> reseller and accounts associated with that reseller. Nova itself would
>>> still retain the idea of a single account and the resources associated
>>> with that account. I guess this doesn't feel any different than another
>>> managed service provider who is doing add-on business on top of a
>>>amazon,
>>> rackspace, linode or other cloud business.
>>> 
>>> https://blueprints.launchpad.net/nova/+spec/multi-tenant-accounting,
>>> specifically:
>>> 
>>> 
>>>http://wiki.openstack.org/openstack-accounting?action=AttachFile&do=view
>>>&ta
>>> rget=accounts.pdf
>>> 
>>> While an operator *could* implement the account id as an arbitrary
>>>string
>>> and map inefficient queries to it as Jay mentions, I'm not sure they
>>>would
>>> (or even should).
>>> 
>

[Openstack] xen server agent code in nova?

2011-02-09 Thread Paul Voccio
All,

After discussing this among some small groups that are working with XenServer, 
we've decided we probably should pull in our guest agent code so others can 
look at it and start using it and building on it. Today, this code is XenServer 
specific, but long term I think we need to figure out where to put the code in 
the Nova codebase. I might suggest /tools/guest_agent/xen_server/ but am open. 
We have Windows 2003/2008 code and Linux code so we'll need to include build 
instructions for both in the directories.

So the question I want to pose to the community is if they are interested in 
the code, where should it go and how should we move forward on extending it?

Thanks,
Paul



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] xen server agent code in nova?

2011-02-09 Thread Paul Voccio
Agree with the directory structure. I'm hoping we can get some
conversation on where to pull the configuration from, which we use xenstore
for now. I've heard some ideas around zookeeper from more than one person,
but have not fully investigated yet.

pvo

On 2/9/11 5:08 PM, "Chris Behrens"  wrote:

>
>Thanks.  I was going to pose the same question:  where should this go? :)
>
>I'm currently working on the 'xs-guest-agent' blueprint which involves
>clean up with the current Rackspace linux agent.  While this agent
>currently only supports Linux on XenServer (because of some Linux
>specific calls and communication via XenStore), some of the work I'm
>doing will make it more portable (so that it'll work on *BSD, too) and
>will modularize the communication mechanism.  This will allow the
>XenStore communication to be swapped out in favor of something else not
>Xen specific.
>
>That said, I'm not sure the main code base should live in a 'xen_server'
>subdirectory as suggested below.  Modifying the below, I'd perhaps
>suggest:
>
>/tools/guest_agent/unix for the main unix agent code base.
>Subdirectories could exist under there for different communication
>modules.
>
>Windows might be different, as it is definitely XenServer specific...
>maybe, /tools/guest_agent/windows/xen_server
>
>(As a side note, I wonder if we want move to sharing some common code
>between Windows and Unix agents which would further impact the directory
>structure :)
>
>- Chris
>
>
>On Feb 9, 2011, at 2:16 PM, Paul Voccio wrote:
>
>> All,
>> 
>> After discussing this among some small groups that are working with
>>XenServer, we've decided we probably should pull in our guest agent code
>>so others can look at it and start using it and building on it. Today,
>>this code is XenServer specific, but long term I think we need to figure
>>out where to put the code in the Nova codebase. I might suggest
>>/tools/guest_agent/xen_server/ but am open. We have Windows 2003/2008
>>code and Linux code so we'll need to include build instructions for both
>>in the directories.
>> 
>> So the question I want to pose to the community is if they are
>>interested in the code, where should it go and how should we move
>>forward on extending it?
>> 
>> Thanks,
>> Paul
>> 
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] xen server agent code in nova?

2011-02-11 Thread Paul Voccio
On one hand, I agree it could be separate. This is one of the reasons why
we've waited to push it. Unfortunately, the agent and the compute node are
so tightly coupled (the agent is directly tied to bugfixes in the compute
node) that the project leads for each are going to have to keep up with revs
of each agent against a rev of nova. It would seem like the issues between
them would be easier to track if they were all contained in the same
project.

More below.

On 2/10/11 11:40 AM, "Scott Moser"  wrote:

>On Thu, 10 Feb 2011, Thierry Carrez wrote:
>
>> Paul Voccio wrote:
>> > So the question I want to pose to the community is if they are
>> > interested in the code, where should it go and how should we move
>> > forward on extending it?
>>
>> I think it's always interesting to publish the code. I'm not convinced
>> of the value of shipping it inside Nova though.
>>
>I agree with this.  To me, the agent is a completely separate piece of
>code from nova.  The agent would be able to interact via some
>communication channel with *something*.  That something should not need to
>be openstack.

Right now, that something is Nova. There isn't some external service that
exists yet to do this. If that were the case today, I would just drop it
there. 

>
>I'm not aware of all that your guest agent does, later in this thread,
>configuring networking and setting password was mentioned.  If you ignore
>configuring of initial networking, then its really not tied at all to the
>hypervisor.  

This is precisely our long term goal. How long term is unknown yet.


>I could communicate with an agent that allows setting of root
>password via tcp/ip with that agent running in a guest kvm on my local
>machine.
>
>What I'm getting at, is that I think you should position this as a
>separate project.   Define that openstack supports communication with a
>guest agent via a known protocol that can happen over a list of transports
>(xen store, vmware host-guest link...).   Then, multiple guest agents
>that speak that protocol can arise.

No argument there, but today there are no plans to start on this (that I
know of). If there is momentum to do this, I'd be happy to seed the
project with the code.

>
>Overall, I think it both reduces the complexity of openstack and increases
>the ability for guest OS innovation by clearly defining that boundary.
>
>
>> I'm not in the POC so it's not my job to define what should be
>> considered "Openstack core" and what should not, but "compatible guest
>> agents" IMO typically warrant their own project...
>


pvo



>
>___
>Mailing list: https://launchpad.net/~openstack
>Post to : openstack@lists.launchpad.net
>Unsubscribe : https://launchpad.net/~openstack
>More help   : https://help.launchpad.net/ListHelp





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] xen server agent code in nova?

2011-02-11 Thread Paul Voccio
Below.

On 2/11/11 2:30 PM, "Scott Moser"  wrote:

>On Fri, 11 Feb 2011, Vishvananda Ishaya wrote:
>
>> Agreed.  By default lets put things into nova because it makes
>> development and visibility much easier.  As eric mentioned, we can
>> always break it out later.
>
>The stability of the API for communication between the hypervisor platform
>and an instance is very important.  The ability to quickly change it
>should not be the primary reason that you decide where to land the code.
>
>Once you're past the immediate bringup, you're going to need to maintain
>backward compatibility.  You'll have images running on openstack
>installations that have old versions of the agent, and no real option to
>modify them.  You need to get this right, and minimal is better.

One of the features of the agent is to return the features it knows about and
to return a 'not implemented' if it gets a request it can't complete. Another
feature of the agent is to be passed an option to update itself, given a
url and an md5 hash.
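
A rough sketch of those two behaviours (command names, the dispatch format
and the update path are illustrative assumptions, not the real Rackspace
agent code):

import hashlib
import urllib2

COMMANDS = {}

def command(name):
    def register(fn):
        COMMANDS[name] = fn
        return fn
    return register

@command('features')
def features(_args):
    # Let the host discover what this agent can do.
    return {'status': 0, 'features': sorted(COMMANDS)}

@command('agentupdate')
def agent_update(args):
    # Fetch a new agent build and verify it against the supplied md5.
    payload = urllib2.urlopen(args['url']).read()
    if hashlib.md5(payload).hexdigest() != args['md5sum']:
        return {'status': 1, 'message': 'md5 mismatch, refusing to update'}
    open('/tmp/agent-update.tar.gz', 'wb').write(payload)
    return {'status': 0}

def dispatch(name, args):
    handler = COMMANDS.get(name)
    if handler is None:
        return {'status': 404, 'message': 'not implemented'}
    return handler(args)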


>
>The separation would make you think about things more.  Ie, with the
>project internal, you'll have basically an internal api, that can be
>changed at will.  With it external, you'll be relying on your published
>API to be somewhat stable.
>
>I suspect that I will lose this argument, and I can't pretend that I have
>much grounds for complaint as I've not spoken about anything else.  This
>is something I belive Amazon did very well.  Other than the fact that
>their metadata service really relies on dhcp, its is entirely sufficient,
>and *very* minimal.
>
>Scott
>
>___
>Mailing list: https://launchpad.net/~openstack
>Post to : openstack@lists.launchpad.net
>Unsubscribe : https://launchpad.net/~openstack
>More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] xen server agent code in nova?

2011-02-11 Thread Paul Voccio
Scott,

I completely understand your point of view and where you are coming from. I
think your concerns are absolutely valid. That said, there isn't a required
agent feature to do anything yet. We're not designing anything behind closed
doors and tossing it over the fence for other people to deal with. Some
people were asking us for the code that we already have. As I've stated
before, it currently works for XenServer and we want to extend this to be
hypervisor agnostic and a networked service. Nothing is or will be changed
'willy-nilly'.

I'm not proposing *any* API for a hypothetical service at this point. That
needs to have its own BP, be vetted, and services built around it.

Since the code is coupled with Nova and Xen at the moment, the ask was:
should it be included, and where? As Vish stated, it's extremely simple to
pull it out and move it to a metadata project when appropriate.

I would like to continue the conversation around a metadata service and
what it could look like. I think it's a solution we can all benefit from.

Pvo



On 2/11/11 3:18 PM, "Scott Moser"  wrote:

>On Fri, 11 Feb 2011, Paul Voccio wrote:
>
>> Below.
>>
>> On 2/11/11 2:30 PM, "Scott Moser"  wrote:
>>
>> >On Fri, 11 Feb 2011, Vishvananda Ishaya wrote:
>> >
>> >> Agreed.  By default lets put things into nova because it makes
>> >> development and visibility much easier.  As eric mentioned, we can
>> >> always break it out later.
>> >
>> >The stability of the API for communication between the hypervisor
>>platform
>> >and an instance is very important.  The ability to quickly change it
>> >should not be the primary reason that you decide where to land the
>>code.
>> >
>> >Once you're past the immediate bringup, you're going to need to
>>maintain
>> >backward compatibility.  You'll have images running on openstack
>> >installations that have old versions of the agent, and no real option
>>to
>> >modify them.  You need to get this right, and minimal is better.
>>
>> One of the features of the agent is to return features is knows about
>>and
>> return a 'not implemented' if it gets a request it can't complete.
>>Another
>> feature of the agent is to be passed an option to update itself, given a
>> url and a md5 hash.
>
>So thats a required feature of all potential agents ?  It was initially
>stated that the agent was necessary to setup networking.
>I assume that 'url' above could potentially be cdrom://foo.bar.gz ,
>though.
>
>Either way, I surely hope that your argument is not suggesting that
>you can change the API willy-nilly because all the agents in existing
>guests should just be able to update themselves.
>
>Another thing that Amazon did well was their host->guest communication,
>which takes place as the "metadata service" and is entirely versioned.  Ie:
>
>$ wget http://instance-data/ -O - -q; echo
>1.0
>2007-01-19
>2007-03-01
>2007-08-29
>2007-10-10
>2007-12-15
>2008-02-01
>2008-09-01
>2009-04-04
>2011-01-01
>latest
>
>Each version is maintained indefinitely.  I realize that their metadata
>service is simple, but again, other than networking setup, I really think
>it's completely sufficient.
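
As an illustration, a guest that pins itself to one of those dated versions
keeps working even as newer versions are published. The meta-data/ path and
key names below are the EC2 conventions, used only as an example:

    import urllib.request

    # Pin to a dated version rather than "latest" so the guest's
    # expectations never change underneath it.
    VERSION = "2009-04-04"
    BASE = "http://instance-data/%s/meta-data/" % VERSION

    def metadata(key):
        return urllib.request.urlopen(BASE + key).read().decode()

    # e.g. metadata("instance-id") or metadata("local-ipv4")
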
>
>I don't see a necessity for making a lot of complex interactions between
>hypervisor and guest. All that does is make it more difficult to develop
>guests.
>
>I won't object much more, but please, please keep the guest requirements
>and expectations to a minimum.  I'm involved in the development of images
>for such a platform, and I do not want to be limited by expectations that
>the host has upon our images.




Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread Paul Voccio
Thoughts below:

From: Justin Santa Barbara mailto:jus...@fathomdb.com>>
Date: Mon, 14 Feb 2011 14:32:52 -0800
To: mailto:openstack@lists.launchpad.net>>
Subject: Re: [Openstack] OpenStack Compute API 1.1

Some thoughts...

General:


  *   Are we writing the OpenStack API, or are we writing the document for the 
next version of Cloud Servers?  In my opinion, the two need to be separate.  
For example, specifications of resource limits and rate limits, supported 
compression encodings, timeout on persistent connections, pagination, caching, 
polling and resize confirmation windows don't belong in the core OpenStack API. 
 These should be put in the CloudServers v1.1 documentation, but a different 
OpenStack provider will not impose the same limitations that Rackspace will.

I think it is fair to say the api comes with default limits. There is nothing 
in the spec or the code that says you can't alter these limits.



Metadata:


  *   The 5 item limit will probably need to be raised if we start using the 
metadata for hints etc, but this is no big deal
  *   What is the behaviour of the metadata collection update when metadata is 
already present (merge or replace)?  Can this return the new metadata values 
instead of no-return-value?
  *   Should we allow custom metadata on all items?  Should we replace some 
properties with well-known metadata?  e.g. on flavors, should the disk property 
move to openstack:disk metadata?  This way we don't need to define the exact 
set of metadata on all items for eternity (e.g. authors on extensions)
  *   Are duplicate metadata keys allowed?
  *   Can we please reserve the openstack: prefix, just like AWS reserves the 
aws: prefix

IP Addresses:


  *   Instead of just supporting a public and private network, how about 
specifying  and .  This way we can 
also support more networks e.g. SAN, private VPN networks, HPC interconnects etc

This could be a good idea. This way it would still work if a provider doesn't 
return a private network, or returns additional management networks.


  *   Is it useful to know which IPV4 addresses and IPV6 addresses map to 
network cards?  Right now if there are multiple addresses on the same network, 
the correspondence is undefined.

Not sure we'd know, depending on the network topology, where the address maps to 
a particular card, so I'm not sure I follow. If there are multiple addresses on the 
same network, the addresses could float between them, so knowing which nic they 
were originally bound to isn't important and could also be confusing.



  *   What happens when a machine has a block of addresses?  Is each address 
listed individually?  What happens in IPv6 land where a machine could well have 
a huge block?  I think we need a netmask.

Netmask makes sense.



Extensions:


  *   How are the XML schemas going to work with extension elements?  Right 
now, it's very free-form, which can cause problems with useful schemas.  Are 
the proposed schemas available?

Volumes:


  *   Volume support is core to OpenStack (and has been since launch).  This 
needs therefore to be in the core API, not in an extension.  Or if it is an 
extension then compute, images and flavors should all be in extensions also 
(which would be cool, if a little complicated.)

I think this is in preparation for the separation of apis in the future. 
Flavors would always tie to a compute api since they don't really make sense 
outside of a compute context. Glance is getting the images api, and I think 
the compute images context will eventually move there.

pvo





On Mon, Feb 14, 2011 at 11:30 AM, John Purrier 
mailto:j...@openstack.org>> wrote:

Bumping this to the top of the list. One of the key deliverables for Cactus is 
a complete and usable OpenStack Compute API. This means that using only the API 
and tools that interact with the OpenStack Compute API, Nova can be installed 
and configured; once running, all of the Nova features and functions for VM, 
Network, and Volume provisioning and management are accessible and operable 
through the API.



We need your eyes on this, to ensure that the spec is correct. Please take the 
time to review and comment, the more up-front work we do here the better the 
implementation will be.



Thanks,



John



-Original Message-
From: 
openstack-bounces+john=openstack.org@lists.launchpad.net
 
[mailto:openstack-bounces+john=openstack.org@lists.launchpad.net]
 On Behalf Of Gabe Westmaas
Sent: Wednesday, February 09, 2011 3:03 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] OpenStack API 1.1



A blueprint and proposed spec for OpenStack API 1.1 has been posted and I would 
love to get feedback on the specification.



Blueprint:

https://blueprints.launchpad.net/nova/+spec/openstack-api-1-1



Spec wiki:

http://wiki.openstack.org/OpenStackAPI_1-1



Detailed Spec:

http://wiki.openstack.org/OpenStackAPI_1-1?action=AttachFile&do=view&target=c11-devguide-20110209.pdf

Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread Paul Voccio
Justin -

Thought some more on your comments wrt images being in the 1.1 api spec. I 
agree with you that it doesn't make sense in the long term to have them in the 
compute api since the service will delegate to glance. I would 
propose that in the 1.2 or other future spec that /images move to an action on 
/compute since that’s really what is happening. I don't know that it makes 
sense to do so in 1.1 as changes are contentious enough without being a total 
rewrite.

Looking forward to your feedback,
pvo

From: Justin Santa Barbara mailto:jus...@fathomdb.com>>
Date: Mon, 14 Feb 2011 14:32:52 -0800
To: mailto:openstack@lists.launchpad.net>>
Subject: Re: [Openstack] OpenStack Compute API 1.1

Some thoughts...

General:


  *   Are we writing the OpenStack API, or are we writing the document for the 
next version of Cloud Servers?  In my opinion, the two need to be separate.  
For example, specifications of resource limits and rate limits, supported 
compression encodings, timeout on persistent connections, pagination, caching, 
polling and resize confirmation windows don't belong in the core OpenStack API. 
 These should be put in the CloudServers v1.1 documentation, but a different 
OpenStack provider will not impose the same limitations that Rackspace will.

Metadata:


  *   The 5 item limit will probably need to be raised if we start using the 
metadata for hints etc, but this is no big deal
  *   What is the behaviour of the metadata collection update when metadata is 
already present (merge or replace)?  Can this return the new metadata values 
instead of no-return-value?
  *   Should we allow custom metadata on all items?  Should we replace some 
properties with well-known metadata?  e.g. on flavors, should the disk property 
move to openstack:disk metadata?  This way we don't need to define the exact 
set of metadata on all items for eternity (e.g. authors on extensions)
  *   Are duplicate metadata keys allowed?
  *   Can we please reserve the openstack: prefix, just like AWS reserves the 
aws: prefix

IP Addresses:


  *   Instead of just supporting a public and private network, how about 
specifying  and .  This way we can 
also support more networks e.g. SAN, private VPN networks, HPC interconnects etc
  *   Is it useful to know which IPV4 addresses and IPV6 addresses map to 
network cards?  Right now if there are multiple addresses on the same network, 
the correspondence is undefined.
  *   What happens when a machine has a block of addresses?  Is each address 
listed individually?  What happens in IPv6 land where a machine could well have 
a huge block?  I think we need a netmask.

Extensions:


  *   How are the XML schemas going to work with extension elements?  Right 
now, it's very free-form, which can cause problems with useful schemas.  Are 
the proposed schemas available?

Volumes:


  *   Volume support is core to OpenStack (and has been since launch).  This 
needs therefore to be in the core API, not in an extension.  Or if it is an 
extension then compute, images and flavors should all be in extensions also 
(which would be cool, if a little complicated.)



Justin





On Mon, Feb 14, 2011 at 11:30 AM, John Purrier 
mailto:j...@openstack.org>> wrote:

Bumping this to the top of the list. One of the key deliverables for Cactus is 
a complete and usable OpenStack Compute API. This means that using only the API 
and tools that interact with the OpenStack Compute API, Nova can be installed 
and configured; once running, all of the Nova features and functions for VM, 
Network, and Volume provisioning and management are accessible and operable 
through the API.



We need your eyes on this, to ensure that the spec is correct. Please take the 
time to review and comment, the more up-front work we do here the better the 
implementation will be.



Thanks,



John



-Original Message-
From: 
openstack-bounces+john=openstack.org@lists.launchpad.net
 
[mailto:openstack-bounces+john=openstack.org@lists.launchpad.net]
 On Behalf Of Gabe Westmaas
Sent: Wednesday, February 09, 2011 3:03 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] OpenStack API 1.1



A blueprint and proposed spec for OpenStack API 1.1 has been posted and I would 
love to get feedback on the specification.



Blueprint:

https://blueprints.launchpad.net/nova/+spec/openstack-api-1-1



Spec wiki:

http://wiki.openstack.org/OpenStackAPI_1-1



Detailed Spec:

http://wiki.openstack.org/OpenStackAPI_1-1?action=AttachFile&do=view&target=c11-devguide-20110209.pdf



We'd like to finish up as much of the API implementation for cactus as 
possible, and in particular we want to make sure that we get API extensions 
correct as early as possible.  Other new features in the 1.1 spec include the 
ability to view both IPv4 and v6 address

Re: [Openstack] Queue Service

2011-02-14 Thread Paul Voccio
Eric,

Just looking at the http operations. Shouldn't the calls be around the
account then the queue?

GET /$account_id/queue/id to list all the queues
GET /$account_id/queue/id/message/id 

Am I off here? Thoughts?

pvo
  

On 2/14/11 5:07 PM, "Eric Day"  wrote:

>I've summarized the original email and added more sections for review
>and discussion here:
>
>http://wiki.openstack.org/QueueService
>
>In particular there are details on the various components of the
>queue service, a draft at what the REST operations will look like,
>and a couple brief examples.
>
>Please let me know if any clarification is needed and I'll update
>the wiki. Feedback and discussion on use cases and what you think
>the service should look like is very appreciated.
>
>Thanks,
>-Eric
>
>On Mon, Feb 14, 2011 at 09:51:42AM -0800, Eric Day wrote:
>> Hi everyone,
>> 
>> When looking at other services to include as part of OpenStack, the
>> first that comes to mind for many is a queue. A queue service can
>> not only provide a useful public cloud service, but can also provide
>> one of the building blocks for other services. I've been leading an
>> effort to research and gather requirements for a queue service and
>> I'd like to share the current state and get community feedback. I
>> expect real development to begin very soon, and would also like to
>> identify developers who will have time to dedicate to this project.
>> 
>> I'd like to note this is not an official OpenStack project yet. The
>> intention is once we have the community support and a simple
>> implementation, we will submit the project to the OpenStack Project
>> Oversight Committee for approval.
>> 
>> The reason we are initiating our own project vs using an existing one
>> is due to simplicity, modularity, and scale. Also, very few (if any)
>> existing queue systems out there were built with multi-tenant cloud
>> use cases in mind. Very few also have a simple and extensible REST
>> API. There are possible solutions to build an AMQP based service,
>> but AMQP brings complexity and a protocol not optimized for high
>> latency and intermittent connectivity.
>> 
>> The primary goals of the queue service are:
>> 
>> * Simple - Think simple REST based queues for most use cases. Easy
>>   to access and use from any language. It should not require much
>>   setup, if any, before you can start pushing messages into it.
>> 
>> * Modular API - Initially we'll focus on a simple REST API,
>>   but this will not in any way be a first-class API. It should be
>>   possible to add other protocols (AMQP, protocol buffers, Gearman,
>>   etc) for other use cases. Note that the internal service API will
>>   not always provide a 1-1 mapping with the external API, so some
>>   features with advanced protocols may be unavailable.
>> 
>> * Fast - Since this will act as a building block for other services
>>   that my drive heavy throughput, performance will have a focus. This
>>   mostly comes down to implementation language and how clients and
>>   workers interact with the broker to reduce network chatter.
>> 
>> * Multi-tenant - Support multiple accounts for the service, and since
>>   this will also be a public service for some deployments, protect
>>   against potentially malicious users.
>> 
>> * Persistent - Allow messages to optionally be persistent. For
>>   protocols that can support it, this can be an optional flag while
>>   the message is submitted. The persistent storage should also be
>>   modular so we can test various data stores and accommodate different
>>   deployment options.
>> 
>> * Zones and locality awareness - As we've been discussing in other
>>   threads, locality in cloud services is an important feature. When
>>   dealing with where messages should be processed, we need to have
>>   location awareness to process data where it exists to reduce network
>>   overhead and processing time.
>> 
>> Before diving down into implementation details, I would like to hear
>> what folks have to say about the initial requirements above. Once
>> there is something along the lines of agreement, I'll be sending out
>> other topics for discussion dealing with implementation.
>> 
>> I'm looking forward to your feedback. Thanks!
>> 
>> -Eric
>

Re: [Openstack] OpenStack Compute API 1.1

2011-02-14 Thread Paul Voccio
Thoughts below

From: Justin Santa Barbara mailto:jus...@fathomdb.com>>
Date: Mon, 14 Feb 2011 15:40:04 -0800
To: Paul Voccio mailto:paul.voc...@rackspace.com>>
Cc: "openstack@lists.launchpad.net<mailto:openstack@lists.launchpad.net>" 
mailto:openstack@lists.launchpad.net>>
Subject: Re: [Openstack] OpenStack Compute API 1.1

Ah - well, I was sort of expecting that we'd all go the other way and agree 
some core functionality, and I thought that volumes should definitely be part 
of that.  I'd hope that the core functionality would always be part of the core 
API, and I'd include images & volumes in that list.

I'm all for having the discussion. How would this work if someone didn't run a 
volume service or glance? Should the api account for that? I don't disagree that 
there should be core apis for each service, but in the long run, there may 
not be a single api. Glance already doesn't have an api in the openstack 1.1 
spec. What about Swift?




I think that building an extensible API is an ambitious proposition.  AWS seems 
to have some pretty rough edges in their API because they've built everything 
incrementally, and I would hope that we could do better, even if it does mean 
'big design up front'.

I think the block storage / volumes, networking, images and compute should all 
be part of the core API and should work well together.

Of course they all have to work well together. I do think we need to discuss 
how it works when someone isn't using these services. Is it still an OS API 
implementation then?


 We shouldn't be relying on extensions for Cactus.  In fact, I'd rather leave 
out extensions until we have a solid use case.  You may be saying that volumes 
will be our test-use case, but I think that will yield a sub-optimal API.


I see extensions doing a few things. First, they give a way for other developers 
to work on and promote additions to the api without fighting to get them into 
core at first.  Can you explain how it would yield a sub-optimal api?




With regards to the difference between the CloudServers API and the OpenStack 
API, I really do think there should be separate documents.  I'd like for the 
OpenStack API to basically just have the JSON & XML interfaces in there, and 
none of the operational stuff that Rackspace needs to do to operate a public 
cloud (such as caching).  That is important stuff, but we need to divide and 
conquer.  I'd also like to see a third document, by NASA/Anso, which describes 
a deployment profile for a private cloud (probably no caching or rate limits).  
I think the division will actually help us here

I think we would want to have the same operational aspects in both public and 
private clouds. It gives a consistent experience between what is deployed in a 
smaller implementation and what is deployed in large implementations. What 
we should do is make these levers very easy to find and tune. Maybe they are 
tuned to high defaults when deployed, but the functionality should 
ship in the api.


- I don't think anyone will argue with Rackspace's expertise on their 
deployment needs, nor with NASA's on theirs, and we can just have the core 
behavior in the OpenStack API spec.

Justin





On Mon, Feb 14, 2011 at 3:18 PM, Paul Voccio 
mailto:paul.voc...@rackspace.com>> wrote:
Justin -

Thought some more on your comments wrt images being in the 1.1 api spec. I 
agree with you that it doesn't make sense in the long term to have them in the 
compute api since the service will delegate to glance. I would 
propose that in the 1.2 or other future spec that /images move to an action on 
/compute since that’s really what is happening. I don't know that it makes 
sense to do so in 1.1 as changes are contentious enough without being a total 
rewrite.

Looking forward to your feedback,
pvo

From: Justin Santa Barbara mailto:jus...@fathomdb.com>>
Date: Mon, 14 Feb 2011 14:32:52 -0800
To: mailto:openstack@lists.launchpad.net>>
Subject: Re: [Openstack] OpenStack Compute API 1.1

Some thoughts...

General:


  *   Are we writing the OpenStack API, or are we writing the document for the 
next version of Cloud Servers?  In my opinion, the two need to be separate.  
For example, specifications of resource limits and rate limits, supported 
compression encodings, timeout on persistent connections, pagination, caching, 
polling and resize confirmation windows don't belong in the core OpenStack API. 
 These should be put in the CloudServers v1.1 documentation, but a different 
OpenStack provider will not impose the same limitations that Rackspace will.

Metadata:


  *   The 5 item limit will probably need to be raised if we start using the 
metadata for hints etc, but this is no big deal
  *   What is the behaviour of the metadata collection update when metadata is 
already present (merge or replace)?

Re: [Openstack] Queue Service

2011-02-14 Thread Paul Voccio
Maybe not. I thought more on the way home.

Dropping the account_id, would this also assume that there can be more
than one queue per account?

On 2/14/11 5:54 PM, "Eric Day"  wrote:

>Yeah, for anonymous access that would be the case. Those are not needed
>when the request comes in with OpenStack Auth headers (like nova).
>
>Do you think we should be repeating the account id in the URI when
>the auth headers are present?
>
>-Eric
>
>On Mon, Feb 14, 2011 at 11:44:58PM +, Paul Voccio wrote:
>> Eric,
>> 
>> Just looking at the http operations. Shouldn't the calls be around the
>> account then the queue?
>> 
>> GET /$account_id/queue/id to list all the queues
>> GET /$account_id/queue/id/message/id
>> 
>> Am I off here? Thoughts?
>> 
>> pvo
>>   
>> 
>> On 2/14/11 5:07 PM, "Eric Day"  wrote:
>> 
>> >I've summarized the original email and added more sections for review
>> >and discussion here:
>> >
>> >http://wiki.openstack.org/QueueService
>> >
>> >In particular there are details on the various components of the
>> >queue service, a draft at what the REST operations will look like,
>> >and a couple brief examples.
>> >
>> >Please let me know if any clarification is needed and I'll update
>> >the wiki. Feedback and discussion on use cases and what you think
>> >the service should look like is very appreciated.
>> >
>> >Thanks,
>> >-Eric
>> >
>> >On Mon, Feb 14, 2011 at 09:51:42AM -0800, Eric Day wrote:
>> >> Hi everyone,
>> >> 
>> >> When looking at other services to include as part of OpenStack, the
>> >> first that comes to mind for many is a queue. A queue service can
>> >> not only provide a useful public cloud service, but can also provide
>> >> one of the building blocks for other services. I've been leading an
>> >> effort to research and gather requirements for a queue service and
>> >> I'd like to share the current state and get community feedback. I
>> >> expect real development to begin very soon, and would also like to
>> >> identify developers who will have time to dedicate to this project.
>> >> 
>> >> I'd like to note this is not an official OpenStack project yet. The
>> >> intention is once we have the community support and a simple
>> >> implementation, we will submit the project to the OpenStack Project
>> >> Oversight Committee for approval.
>> >> 
>> >> The reason we are initiating our own project vs using an existing one
>> >> is due to simplicity, modularity, and scale. Also, very few (if any)
>> >> existing queue systems out there were built with multi-tenant cloud
>> >> use cases in mind. Very few also have a simple and extensible REST
>> >> API. There are possible solutions to build an AMQP based service,
>> >> but AMQP brings complexity and a protocol not optimized for high
>> >> latency and intermittent connectivity.
>> >> 
>> >> The primary goals of the queue service are:
>> >> 
>> >> * Simple - Think simple REST based queues for most use cases. Easy
>> >>   to access and use from any language. It should not require much
>> >>   setup, if any, before you can start pushing messages into it.
>> >> 
>> >> * Modular API - Initially we'll focus on a simple REST API,
>> >>   but this will not in any way be a first-class API. It should be
>> >>   possible to add other protocols (AMQP, protocol buffers, Gearman,
>> >>   etc) for other use cases. Note that the internal service API will
>> >>   not always provide a 1-1 mapping with the external API, so some
>> >>   features with advanced protocols may be unavailable.
>> >> 
>> >> * Fast - Since this will act as a building block for other services
>> >>   that my drive heavy throughput, performance will have a focus. This
>> >>   mostly comes down to implementation language and how clients and
>> >>   workers interact with the broker to reduce network chatter.
>> >> 
>> >> * Multi-tenant - Support multiple accounts for the service, and since
>> >>   this will also be a public service for some deployments, protect
>> >>   against potentially malicious users.
>> >> 
>> >> * Persistent - Allow messages to optionally be persistent. For
>> >>   protocols that 

Re: [Openstack] Queue Service

2011-02-14 Thread Paul Voccio
Looking at the swift docs, they reference a container like so:

 METHOD /v1/<account>/<container> HTTP/1.1

http://docs.rackspacecloud.com/files/api/v1/cf-devguide-20110112.pdf

For the openstack api, it also includes the account id in the request:

POST /v1.1/214412/images HTTP/1.1
Host: servers.api.openstack.org
Content-Type: application/json
Accept: application/xml
X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cb



This seems a bit different than what you're suggesting. What am I missing?
Shouldn't the account id be in the request with the auth headers to stay
in line with the other specs?
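
For comparison, a rough sketch of the two styles being discussed; the host,
token and paths are made up for illustration:

    import http.client

    def get(path):
        # One authenticated GET against a hypothetical queue endpoint.
        conn = http.client.HTTPConnection("queue.api.example.com")
        conn.request("GET", path, headers={"X-Auth-Token": "example-token"})
        resp = conn.getresponse()
        body = resp.read()
        conn.close()
        return resp.status, body

    # Account id in the URI, in line with the Swift and Compute examples above:
    get("/v1.0/214412/queues")

    # Versus relying on the auth token alone to identify the account:
    get("/v1.0/queues")
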


On 2/14/11 6:56 PM, "Eric Day"  wrote:

>On Tue, Feb 15, 2011 at 12:49:01AM +, Paul Voccio wrote:
>> Dropping the account_id, would this also assume that there can be more
>> than one queue per account?
>
>Yeah. Think of the queue name as an extra namespace layer much like
>a swift container, except you don't create or delete them, they just
>exist when there is an active message in it.
>
>-Eric
>
>> On 2/14/11 5:54 PM, "Eric Day"  wrote:
>> 
>> >Yeah, for anonymous access that would be the case. Those are not needed
>> >when the request comes in with OpenStack Auth headers (like nova).
>> >
>> >Do you think we should be repeating the account id in the URI when
>> >the auth headers are present?
>> >
>> >-Eric
>> >
>> >On Mon, Feb 14, 2011 at 11:44:58PM +, Paul Voccio wrote:
>> >> Eric,
>> >> 
>> >> Just looking at the http operations. Shouldn't the calls be around
>>the
>> >> account then the queue?
>> >> 
>> >> GET /$account_id/queue/id to list all the queues
>> >> GET /$account_id/queue/id/message/id
>> >> 
>> >> Am I off here? Thoughts?
>> >> 
>> >> pvo
>> >>   
>> >> 
>> >> On 2/14/11 5:07 PM, "Eric Day"  wrote:
>> >> 
>> >> >I've summarized the original email and added more sections for
>>review
>> >> >and discussion here:
>> >> >
>> >> >http://wiki.openstack.org/QueueService
>> >> >
>> >> >In particular there are details on the various components of the
>> >> >queue service, a draft at what the REST operations will look like,
>> >> >and a couple brief examples.
>> >> >
>> >> >Please let me know if any clarification is needed and I'll update
>> >> >the wiki. Feedback and discussion on use cases and what you think
>> >> >the service should look like is very appreciated.
>> >> >
>> >> >Thanks,
>> >> >-Eric
>> >> >
>> >> >On Mon, Feb 14, 2011 at 09:51:42AM -0800, Eric Day wrote:
>> >> >> Hi everyone,
>> >> >> 
>> >> >> When looking at other services to include as part of OpenStack,
>>the
>> >> >> first that comes to mind for many is a queue. A queue service can
>> >> >> not only provide a useful public cloud service, but can also
>>provide
>> >> >> one of the building blocks for other services. I've been leading
>>an
>> >> >> effort to research and gather requirements for a queue service and
>> >> >> I'd like to share the current state and get community feedback. I
>> >> >> expect real development to begin very soon, and would also like to
>> >> >> identify developers who will have time to dedicate to this
>>project.
>> >> >> 
>> >> >> I'd like to note this is not an official OpenStack project yet.
>>The
>> >> >> intention is once we have the community support and a simple
>> >> >> implementation, we will submit the project to the OpenStack
>>Project
>> >> >> Oversight Committee for approval.
>> >> >> 
>> >> >> The reason we are initiating our own project vs using an existing
>>one
>> >> >> is due to simplicity, modularity, and scale. Also, very few (if
>>any)
>> >> >> existing queue systems out there were built with multi-tenant
>>cloud
>> >> >> use cases in mind. Very few also have a simple and extensible REST
>> >> >> API. There are possible solutions to build an AMQP based service,
>> >> >> but AMQP brings complexity and a protocol not optimized for high
>> >> >> latency and intermittent connectivity.

Re: [Openstack] Review days for nova-core members

2011-02-16 Thread Paul Voccio
Have we considered pruning and expanding the core team to help speed the
reviews along? There are some people who are no longer day-to-day active
in Nova, and some who are active that could help in this process.

On 2/16/11 3:54 PM, "Soren Hansen"  wrote:

>2011/2/16 Jay Pipes :
>> Lots of coding and bug fixing has been done in the past weeks. As a
>> result, we've got a big backlog of code reviews to do.
>>
>> If you have some cycles, please do participate:
>>
>> https://code.launchpad.net/nova/+activereviews
>>
>> Nova-core members, remember, it's you responsibility to do code
>>reviews! :)
>
>Yes!
>
>I've considered if nova-core people should take turns in having a review
>day.
>Their top priority for the day should be doing branch reviews and bug
>triaging.  Anything else that day would be second to this.  Every member
>of
>the team should be part of the rotation. The only member of nova-core
>exempt from this responsibility would be Jenkins. :)  If you cannot accept
>this
>responsibility, you don't get to be a member of the team. Of course
>there has to be room for holiday and being ill, but every day, people
>should know who they can bug for reviews and members of the team should
>generally be available for this rotation.
>
>The goal is to ensure that we don't drop good work on the floor due to
>bitrot and to distribute the review load better than we do now.
>
>
>-- 
>Soren Hansen
>Ubuntu Developer http://www.ubuntu.com/
>OpenStack Developer http://www.openstack.org/
>


Re: [Openstack] Review days for nova-core members

2011-02-16 Thread Paul Voccio
Not sure what the etiquette is for removing someone. Michael Gundlach is still 
listed but is no longer participating.

pvo

From: Joshua McKenty mailto:j...@piston.cc>>
Date: Wed, 16 Feb 2011 16:51:56 -0800
To: Paul Voccio mailto:paul.voc...@rackspace.com>>
Cc: Soren Hansen mailto:so...@ubuntu.com>>, Jay Pipes 
mailto:jaypi...@gmail.com>>, 
"openstack@lists.launchpad.net<mailto:openstack@lists.launchpad.net>" 
mailto:openstack@lists.launchpad.net>>
Subject: Re: [Openstack] Review days for nova-core members

Yup - we approved a new process for adding team members to core a few weeks 
ago, I haven't seen any applications yet but I may have missed one.

Joshua

On Wed, Feb 16, 2011 at 4:28 PM, Paul Voccio 
mailto:paul.voc...@rackspace.com>> wrote:
Have we considered pruning and expanding the core team to help speed the
reviews along? There are some people who are no longer day-to-day active
in Nova, and some who are active that could help in this process.

On 2/16/11 3:54 PM, "Soren Hansen" mailto:so...@ubuntu.com>> 
wrote:

>2011/2/16 Jay Pipes mailto:jaypi...@gmail.com>>:
>> Lots of coding and bug fixing has been done in the past weeks. As a
>> result, we've got a big backlog of code reviews to do.
>>
>> If you have some cycles, please do participate:
>>
>> https://code.launchpad.net/nova/+activereviews
>>
>> Nova-core members, remember, it's you responsibility to do code
>>reviews! :)
>
>Yes!
>
>I've considered if nova-core people should take turns in having a review
>day.
>Their top priority for the day should be doing branch reviews and bug
>triaging.  Anything else that day would be second to this.  Every member
>of
>the team should be part of the rotation. The only member of nova-core
>exempt from this responsibility would be Jenkins. :)  If you cannot accept
>this
>responsibility, you don't get to be a member of the team. Of course
>there has to be room for holiday and being ill, but every day, people
>should know who they can bug for reviews and members of the team should
>generally be available for this rotation.
>
>The goal is to ensure that we don't drop good work on the floor due to
>bitrot and to distribute the review load better than we do now.
>
>
>--
>Soren Hansen
>Ubuntu Developer http://www.ubuntu.com/
>OpenStack Developer http://www.openstack.org/
>


Re: [Openstack] Steps that can help stabilize Nova's trunk

2011-02-16 Thread Paul Voccio
Jay,

Thanks for throwing this out. How would we build this with Hudson? What
would a "standard deploy" of Nova even look like for integration tests?
We've also bounced the idea within our team of not allowing code commits
if the code-to-test ratio decreases, but I'm not sure if that would work
for such a large project like this one.

pvo

On 2/16/11 4:27 PM, "Jay Pipes"  wrote:

>Hey all,
>
>It's come to my attention that a number of folks are not happy that
>Nova's trunk branch (lp:nova) is, shall we say, "less than stable". :)
>
>First, before going into some suggestions on keeping trunk more
>stable, I'd like to point out that trunk is, by nature, an actively
>developed source tree. Nobody should have an expectation that they can
>simply bzr branch lp:nova and everything will magically work with a)
>their existing installations of software packages, b) whatever code
>commits they have made locally, or c) whatever specific
>hypervisor/volume/network environment that they test their local code
>with. The trunk branch is, after all, in active development.
>
>That said, there's *no* reason we can't *improve* the relative
>stability of the trunk branch to make life less stressful for
>contributors.  Here are a few suggestions on how to keep trunk a bit
>more stable for those developers who actively develop from trunk.
>
>1) Participate fully in code reviews. If you suspect a proposed branch
>merge will "mess everything up for you", then you should notify
>reviewers and developers about your concerns. Be proactive.
>
>2) If you pull trunk and something breaks, don't just complain about
>it. Log a bug immediately and talk to the reviewers/approvers of the
>patch that broke your environment. Be constructive in your criticism,
>and be clear about why the patch should have been more thoroughly or
>carefully reviewed. If you don't, we're bound to repeat mistakes.
>
>3) Help us to write functional and integration tests. It's become
>increasingly clear from the frequency of breakages in trunk (and other
>branches) that our unit tests are nowhere near sufficient to catch a
>large portion of bugs. This is to be expected. Our unit tests use
>mocks and stubs for virtually everything, and they only really test
>code interfaces, and they don't even test that very well. We're
>working on adding functional tests to Hudson that will run, as the
>unit test do, before any merge into trunk, with any failure resulting
>in a failed merge. However, we need your help to create functional
>tests and integration tests (tests that various *real* components work
>together properly).  We also need help writing test cases that ensure
>software library dependencies and other packaging issues are handled
>properly and don't break with minor patches.
>
>4) If you have a specific environment/setup that you use (Rackers and
>Anso guys, here...), then we need your assistance to set up test
>clusters that will pull trunk into a wiped test environment and ensure
>that a series of realistic calls to the Nova API are properly handled.
>I know some of you are working on getting hardware ready. We need help
>from the software teams to ensure that these environments are
>initialized with the exact setups you use.

Still working on this. We're hoping to have something built out in the
next couple of weeks. Man, someone should write some hardware emulation
stuff so I don't have to wait on real gear. ;)
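
To make the functional-test point above concrete, here is the rough shape of
a black-box test that drives a real, running endpoint rather than mocks; the
endpoint and the unauthenticated call are placeholders only:

    import json
    import unittest
    import urllib.request

    API = "http://127.0.0.1:8774/v1.0"   # a placeholder test deployment

    class ServersSmokeTest(unittest.TestCase):
        def test_list_servers_returns_json(self):
            # Unlike a unit test, this fails if the deployed service is
            # broken, misconfigured or unreachable, not just if an
            # interface changed.
            with urllib.request.urlopen(API + "/servers") as resp:
                self.assertEqual(resp.status, 200)
                json.load(resp)

    if __name__ == "__main__":
        unittest.main()
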

>
>The more testing we fire off against each potential merge into trunk,
>and the more those tests are hitting real-life deployment
>environments, the more stable trunk will become and the easier your
>life as a contributor will be.
>
>Thanks in advance for your assistance, and please don't hesitate to
>expand on any more suggestions you might have to stabilize trunk.
>
>-jay
>


Re: [Openstack] OpenStack Compute API 1.1

2011-02-17 Thread Paul Voccio
I wanted to put out into the open where we think the evolution of the apis will 
go over the next few releases. This is by no means the only way to do this, but 
I thought it would be a start to the conversation.

http://wiki.openstack.org/api_transition

I also wanted to clear up some confusion that I think came out of our email 
thread the other day. The OpenStack 1.1 API proposal is really an 
OpenStack Compute 1.1 proposal. While volumes and images are currently in, I 
think longer term they would be pulled out. The network and volume services 
should be able to scale independently of each other.

If you look at the diagram, the changes would entail moving from an amqp 
protocol to an http protocol that a worker would hit on the public/admin 
interfaces to accomplish the same work as before.
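
To make that concrete, a very rough sketch of the kind of call being
described; names, endpoints and payloads are hypothetical, not actual nova
code:

    import json
    import urllib.request

    def attach_volume_over_http(volume_api, volume_id, instance_id, token):
        # Call the volume service's public/admin HTTP interface directly...
        req = urllib.request.Request(
            "%s/volumes/%s/attach" % (volume_api, volume_id),
            data=json.dumps({"instance_id": instance_id}).encode(),
            headers={"Content-Type": "application/json",
                     "X-Auth-Token": token},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # ...where today the worker would cast a message onto the volume topic,
    # roughly: rpc.cast(context, "volume", {"method": "attach_volume",
    #                                       "args": {...}})
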

Let's keep the thread going.

Pvo


From: Justin Santa Barbara mailto:jus...@fathomdb.com>>
Date: Tue, 15 Feb 2011 11:38:37 -0800
To: Troy Toman mailto:troy.to...@rackspace.com>>
Cc: "openstack@lists.launchpad.net" 
mailto:openstack@lists.launchpad.net>>
Subject: Re: [Openstack] OpenStack Compute API 1.1

Sounds great - when the patch comes in we can discuss whether this should be an 
extension or whether scheduled snapshots / generic tasks have broader 
applicability across OpenStack (and thus would be better in the core API)

Is there a blueprint?



On Tue, Feb 15, 2011 at 11:32 AM, Troy Toman 
mailto:troy.to...@rackspace.com>> wrote:

On Feb 15, 2011, at 1:06 PM, Justin Santa Barbara wrote:


OK - so it sounds like volumes are going to be in the core API (?) - good.  
Let's get that into the API spec.  It also sounds like extensions (swift / 
glance?) are not going to be in the same API long-term.  So why do we have the 
extensions mechanism?

Until we have an implemented use case (i.e. a patch) that uses the extensions 
element, I don't see how we can spec it out or approve it.  So if you want it 
in v1.1, we better find a team that wants to use it and write code.  If there 
is such a patch, I stand corrected and let's get it reviewed and merged.

I would actually expect that the majority of the use cases that we want in the 
API but don't _want_ to go through core would be more simply addressed by 
well-known metadata (e.g. RAID-5, multi-continent replication, HPC, HIPAA).

I don't agree that the lack of a coded patch means we can't discuss an 
extension mechanism. But, if you want a specific use case, we have at least one 
we intend to deliver. It may be more of a one-off than a general case because 
it is required to give us a reasonable transition path from our current 
codebase to Nova. But, it is not an imagined need.

In the Rackspace Cloud Servers 1.0 API, we support a concept of backup 
schedules with a series of API calls to manage them. In drafting the OpenStack 
compute API, this was something that didn't feel generally applicable or useful 
in the core API. So, you don't see it as part of the CORE API spec. That said, 
for transition purposes, we will need a way to provide this capability to our 
customers when we move to Nova. Our current plan is to do this using the 
extension mechanism in the proposed API.

If there is a better way to handle this need, then let's discuss further. But, 
I didn't want the lack of a specific example to squash the idea of extensions.

Troy Toman




Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Paul Voccio
Jay,

I understand Justin's concern: if we move /network, /images and /volume
to their own endpoints then it would be a change to the customer. I think
this could be solved by putting a proxy in front of each endpoint and
routing back to the appropriate service endpoint.

I added another image on the wiki page to describe what I'm trying to say.
http://wiki.openstack.org/api_transition

I think this might not be as bad of a transition, since the compute worker would
receive a request for a new compute node and then proxy over to the
admin or public api of the network or volume node to request information.
It would work very similarly to how the queues work now.
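
A bare-bones sketch of the proxy idea (backend addresses are made up): a
single public endpoint routing each top-level resource back to the service
that owns it.

    # Map each top-level resource to the internal service endpoint that owns it.
    BACKENDS = {
        "/servers": "http://compute.example.internal:8774",
        "/images":  "http://glance.example.internal:9292",
        "/volumes": "http://volume.example.internal:8776",
    }

    def route(path):
        for prefix, backend in BACKENDS.items():
            if path.startswith(prefix):
                return backend + path
        raise LookupError("no backend registered for %s" % path)

    # route("/servers/42") -> "http://compute.example.internal:8774/servers/42"
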

pvo

On 2/17/11 8:33 PM, "Jay Pipes"  wrote:

>Sorry, I don't view the proposed changes from AMQP to REST as being
>"customer facing API changes". Could you explain? These are internal
>interfaces, no?
>
>-jay
>
>On Thu, Feb 17, 2011 at 8:13 PM, Justin Santa Barbara
> wrote:
>> An API is for life, not just for Cactus.
>> I agree that stability is important.  I don't see how we can claim to
>> deliver 'stability' when the plan is then immediately to destabilize
>> everything with a very disruptive change soon after, including customer
>> facing API changes and massive internal re-architecting.
>>
>>
>> On Thu, Feb 17, 2011 at 4:18 PM, Jay Pipes  wrote:
>>>
>>> On Thu, Feb 17, 2011 at 6:57 PM, Justin Santa Barbara
>>>  wrote:
>>> > Pulling volumes & images out into separate services (and moving from
>>> > AMQP to
>>> > REST) sounds like a huge breaking change, so if that is indeed the
>>>plan,
>>> > let's do that asap (i.e. Cactus).
>>>
>>> Sorry, I have to disagree with you here, Justin :)  The Cactus release
>>> is supposed to be about stability and the only features going into
>>> Cactus should be to achieve API parity of the OpenStack Compute API
>>> with the Rackspace Cloud Servers API. Doing such a huge change like
>>> moving communication from AMQP to HTTP for volume and network would be
>>> a change that would likely undermine the stability of the Cactus
>>> release severely.
>>>
>>> -jay
>>
>>





Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Paul Voccio
More inline. I trimmed your agrees.

On 2/18/11 10:27 AM, "Jay Pipes"  wrote:

>
>> 5) Interested developers can get involved in only the services that
>>they care about without worrying about other services.
>
>Not quite sure how this has to do with REST vs. AMQP... AMQP is simply
>the communication protocol between internal Nova services (network,
>compute, and volume) right now. Developers can currently get involved
>in the services they want to without messing with the other services.

I think this means other services will have apis that sit on a different
endpoint than compute. To talk to them, you use the http interface instead of a
queue message. 


>
>> 6) We already have 3 APIs (nova, swift, glance), we need to do this
>>kind of integration as it is, it makes sense for us to standardize on it.
>
>Unless I'm mistaken, we're not talking about APIs. We're talking about
>protocols. AMQP vs. HTTP.

It's a bit of both. To break out into separate apis, we wouldn't use amqp to
communicate between services.

>
>> We are certainly changing the way we are doing things, but I don't
>>really think we are throwing away a lot of functionality.  As PVO
>>mentioned, things should work very similar to the way they are working
>>now.  You still have compute workers, you may still have an internal
>>queue, the only difference is that cross-service communication is now
>>happening by issuing REST calls.
>
>I guess I'm on the fence with this one. I agree that:
>
>* Having clear boundaries between services is A Good Thing
>* Having versioning in the interfaces between services is A Good Thing
>
>I'm just not convinced that services shouldn't be able to communicate
>on different protocols. REST over HTTP is a fine interface. Serialized
>messages over AMQP is similarly a fine interface. The standardization
>should occur at the *message* level, not the *protocol* level. REST
>over HTTP, combined with the Atom Publishing Protocol, has those
>messages already defined. Having standard message definitions that are
>sent via AMQP seems to me to be the "missing link" in the
>standardization process.

Wouldn't you be designing the same thing over two interfaces then? You'd
have to standardize on both amqp and http?

>
>Just some thoughts,
>jay




Re: [Openstack] OpenStack Compute API 1.1

2011-02-18 Thread Paul Voccio
The specs for 1.0 and 1.1 are pretty close. The extensions mechanism is the 
biggest change, iirc.

I think the proxy would make sense if you wanted to have a single api. Not all 
service providers will, but I see this as entirely optional, not required to use 
the services.

The push to get a completed compute api is driven by the desire to move away from the ec2 
api to something that we can guide, extend and vote on as a community. The 
sooner we do that, the better.

How is the 1.1 api proposal breaking this?

From: Justin Santa Barbara mailto:jus...@fathomdb.com>>
Date: Fri, 18 Feb 2011 09:10:19 -0800
To: Paul Voccio mailto:paul.voc...@rackspace.com>>
Cc: Jay Pipes mailto:jaypi...@gmail.com>>, 
"openstack@lists.launchpad.net<mailto:openstack@lists.launchpad.net>" 
mailto:openstack@lists.launchpad.net>>
Subject: Re: [Openstack] OpenStack Compute API 1.1

Jay: The AMQP->REST was the re-architecting I was referring to, which would not 
be customer-facing (other than likely introducing new bugs.)  Spinning off the 
services, if this is visible at the API level, is much more concerning to me.

So Paul, I think the proxy is good because it acknowledges the importance of 
keeping a consistent API.  But - if our API isn't finalized - why push it out 
at all, particularly if we're then going to have the overhead of maintaining 
another translation layer?  For Cactus, let's just support EC2 and/or 
CloudServers 1.0 API compatibility (again a translation layer, but one we 
probably have to support anyway.)  Then we can design the right OpenStack API 
at our leisure and meet all of our goals: a stable Cactus and stable APIs.  If 
anyone ends up coding to a Cactus OpenStack API, we shouldn't have them become 
second-class citizens 3 months later.

Justin





On Fri, Feb 18, 2011 at 6:31 AM, Paul Voccio 
mailto:paul.voc...@rackspace.com>> wrote:
Jay,

I understand Justin's concern: if we move /network, /images and /volume
to their own endpoints then it would be a change to the customer. I think
this could be solved by putting a proxy in front of each endpoint and
routing back to the appropriate service endpoint.

I added another image on the wiki page to describe what I'm trying to say.
http://wiki.openstack.org/api_transition

I think this might not be as bad of a transition, since the compute worker would
receive a request for a new compute node and then proxy over to the
admin or public api of the network or volume node to request information.
It would work very similarly to how the queues work now.

pvo

On 2/17/11 8:33 PM, "Jay Pipes" mailto:jaypi...@gmail.com>> 
wrote:

>Sorry, I don't view the proposed changes from AMQP to REST as being
>"customer facing API changes". Could you explain? These are internal
>interfaces, no?
>
>-jay
>
>On Thu, Feb 17, 2011 at 8:13 PM, Justin Santa Barbara
>mailto:jus...@fathomdb.com>> wrote:
>> An API is for life, not just for Cactus.
>> I agree that stability is important.  I don't see how we can claim to
>> deliver 'stability' when the plan is then immediately to destabilize
>> everything with a very disruptive change soon after, including customer
>> facing API changes and massive internal re-architecting.
>>
>>
>> On Thu, Feb 17, 2011 at 4:18 PM, Jay Pipes 
>> mailto:jaypi...@gmail.com>> wrote:
>>>
>>> On Thu, Feb 17, 2011 at 6:57 PM, Justin Santa Barbara
>>> mailto:jus...@fathomdb.com>> wrote:
>>> > Pulling volumes & images out into separate services (and moving from
>>> > AMQP to
>>> > REST) sounds like a huge breaking change, so if that is indeed the
>>>plan,
>>> > let's do that asap (i.e. Cactus).
>>>
>>> Sorry, I have to disagree with you here, Justin :)  The Cactus release
>>> is supposed to be about stability and the only features going into
>>> Cactus should be to achieve API parity of the OpenStack Compute API
>>> with the Rackspace Cloud Servers API. Doing such a huge change like
>>> moving communication from AMQP to HTTP for volume and network would be
>>> a change that would likely undermine the stability of the Cactus
>>> release severely.
>>>
>>> -jay
>>
>>




Re: [Openstack] Nova-core membership request

2011-02-19 Thread Paul Voccio
+1 

One more. 

On 2/18/11 9:15 PM, "Ed Leafe"  wrote:

>On Feb 18, 2011, at 6:07 PM, Rick Harris wrote:
>
>> Throwing my hat into the ring for nova-core membership. Eager to help
>>knock down that merge-prop backlog we have at the moment :-)
>
>+1
>
>
>-- Ed Leafe
>
>
>
>


Re: [Openstack] Should the OpenStack API re-use the EC2 credentials?

2011-02-23 Thread Paul Voccio
Justin,

I think you hit upon the reason why I think it was approved and then reverted. 
Because it hadn't been talked about in a blueprint or in a mail sent to the list 
(I think I'm up to date on the threads) and a patch simply landed, other 
alternatives weren't considered before pushing it through to begin with. I 
think we're all open to talking about how to better the auth system and make 
improvements. Dragon has already discussed some alternatives and suggestions on 
the BP page below. I think this is the right way to continue the dialog and we 
all can agree on a good way forward.

I'm confident we can figure it out.

If I missed a conversation, my apologies.

pvo

From: Vishvananda Ishaya mailto:vishvana...@gmail.com>>
Date: Wed, 23 Feb 2011 18:19:41 -0800
To: Justin Santa Barbara mailto:jus...@fathomdb.com>>
Cc: mailto:openstack@lists.launchpad.net>>
Subject: Re: [Openstack] Should the OpenStack API re-use the EC2 credentials?

Hey Justin,

Does it make any difference that the way the auth is (theoretically) supposed 
to work with the os api is that the user gets an auth token from an external 
auth server and then uses username / authtoken to actually contact the api?  I 
think it is just faked out right now to use the access_key instead of doing 
external auth, but I think the reason it works like it does is because the plan 
was to switch to external auth eventually.

Vish
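
As a rough sketch of that flow (the URLs and credentials below are placeholders, 
not the actual service addresses), a client would authenticate once and then 
reuse the token on every call:

    # Hedged illustration only: authenticate against an external auth service,
    # then call the compute API with the returned token instead of a password.
    import requests

    AUTH_URL = "https://auth.example.com/v1.0"   # placeholder auth endpoint
    API_URL = "https://api.example.com/v1.1"     # placeholder compute endpoint

    resp = requests.get(AUTH_URL, headers={
        "X-Auth-User": "username",
        "X-Auth-Key": "api_key",                 # placeholder credential
    })
    token = resp.headers["X-Auth-Token"]         # token handed back by the auth service

    # Every subsequent API call carries the token, not the shared secret.
    servers = requests.get(API_URL + "/servers",
                           headers={"X-Auth-Token": token}).json()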

On Feb 23, 2011, at 5:56 PM, Justin Santa Barbara wrote:

I previously fixed OpenStack authentication so it would use the same 
credentials as EC2.  This bugfix was just reverted, because it caused OpenStack 
API users to have to enter in different credentials (sorry!), but primarily 
because it hadn't been discussed on the mailing list.  So here goes!

Here's a blueprint: 
https://blueprints.launchpad.net/nova/+spec/authentication-consistency

Here's an overview of the problem:

EC2 uses an (api_key, api_secret) pair.  Post-revert, OpenStack uses the 
api_key(!) as the password, but a different value entirely as the username: 
(username, api_key).  The bugfix made it so that both APIs used the EC2 
credentials (api_key, api_secret) .  This did mean that anyone that had saved 
the 'bad' OpenStack credentials was unable to continue to use those 
credentials.  I also overlooked exporting the updated credentials in novarc 
(though a merge request was pending).

I actually thought originally that this was a straight-up bug, rather than a 
design 'decision', so I should definitely have flagged it better.  Again, sorry 
to those I impacted.

As things stand now, post-revert, this is probably a security flaw, because the 
EC2 API does not treat the api_key as a secret.  The EC2 API can (relatively) 
safely be run over non-SSL, because it uses signatures instead of passing the 
shared secret directly.
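
As a generic illustration of that idea (this is not the exact EC2 signing 
algorithm, just the general shape of it): the client sends its api_key plus an 
HMAC of the request, and the server recomputes the HMAC with its own copy of 
the secret.

    # Hedged sketch: the secret itself never travels over the wire, only a
    # signature computed from it.
    import base64
    import hashlib
    import hmac

    def sign(api_secret, canonical_request):
        digest = hmac.new(api_secret.encode("utf-8"),
                          canonical_request.encode("utf-8"),
                          hashlib.sha256).digest()
        return base64.b64encode(digest).decode("ascii")

    # Illustrative canonical request string; the real format is defined by the API.
    canonical = "GET\napi.example.com\n/servers\napi_key=ABC123"
    signature = sign("my_api_secret", canonical)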

This is also not very user-friendly.  Post-revert, an end-user must know 
whether any particular cloud tool uses the EC2 API or the OpenStack API, so 
that they can enter in the correct pair of credentials.  That doesn't seem like 
a good idea; I think there should be one set of credentials.

There is some discussion about the idea of having the api_key be user-friendly. 
 I don't think it buys us anything, because the api_secret is still going to be 
un-friendly, but I have no objection as long as it is done in a way that does 
not break existing users of the EC2 API.

I propose that:
 (1) the OpenStack API and EC2 credentials should be the same as each other 
(whatever they are) for the sake of our collective sanity and
 (2) we have to change the current configuration anyway for security reasons.
 (3) We should not change the EC2 credentials, because we've shipped the EC2 
API and our users have an expectation that we won't break them without good 
reason, so
 (4) we must change the credentials for users of the (non-shipped) OpenStack 
API.

Estimated user impact: I believe there are two people that will be affected, 
and it will take them ~1 minute each, so total impact ~2 minutes.

The longer we delay fixing this, the more people we break and the bigger the 
impact.  It seems that we have no choice but to do a non-backwards-compatible 
authentication change, but I believe this is OK at the moment because the 
OpenStack API is not yet stable/released - i.e. we can still make fixes without 
worrying about backwards compatibility shims. We're not in "The Old New Thing" 
land yet :-)



As an aside, I am very unhappy about the way this revert was pushed through by 
Rackspace team-members, seemingly without much consideration of alternatives.  
Perhaps we should consider changing from needing two core-approves, to needing 
one Rackspace core-approve and one non-Rackspace core-approve.


Justin




Re: [Openstack] Instances lost after reboot

2011-02-24 Thread Paul Voccio
George,

What hypervisor are you using? I'm guessing kvm, but not totally sure. I
know the behavior for XenServer is to keep the instances available
after a reboot. Can you show some examples of what you're talking about
with versions? 

Thanks,
Pvo

On 2/24/11 1:49 AM, "Thierry Carrez"  wrote:

>Brian Schott wrote:
>> How did you install and launch OpenStack?  The instances are stored in
>>a sqlite or mysql table depending on how things are configured.
>
>I think George's point is that if you reboot the compute node, you lose
>the instances that were running on it. I don't really know if this is a
>bug, or a design decision.
>
>What would be the expected behavior ?
>
>-- 
>Thierry Carrez (ttx)
>Release Manager, OpenStack
>


Re: [Openstack] test dir in trunk

2011-02-25 Thread Paul Voccio
I don't think so. Talking to her now.

On 2/25/11 3:04 PM, "Vishvananda Ishaya"  wrote:

>Looks like anne's doc branch added a dir called "test" to trunk with a
>whole bunch of stuff in it.  Was this added on purpose?
>
>Vish


Re: [Openstack] OpenStack Compute API for Cactus (critical!)

2011-02-28 Thread Paul Voccio
Jesse,

I agree that some implementations may want to have a single endpoint. I think 
this is doable with a simple proxy that can pass requests back to each service 
API. It can also be accomplished by having configuration variables in your 
bindings that point to something like the following:

compute=api.compute.example.com
volume=api.volume.example.com
image=api.image.example.com
network=api.network.example.com

Or for behind the proxies:

compute=api.example.com
volume=api.example.com
image=api.example.com
network=api.example.com

Maybe this is something the auth services return?
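
As a hedged sketch of what such a catalog might look like if the auth service 
handed it back (the shape and names here are assumptions, not a spec):

    # Illustrative only: a catalog mapping service names to endpoints. Behind a
    # proxy, every entry could simply point at https://api.example.com instead.
    catalog = {
        "compute": "https://api.compute.example.com/v1.1",
        "volume":  "https://api.volume.example.com/v1.1",
        "image":   "https://api.image.example.com/v1.1",
        "network": "https://api.network.example.com/v1.1",
    }

    def endpoint(service):
        # A client binding would resolve endpoints through a lookup like this.
        return catalog[service]

    print(endpoint("compute"))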


From: Jesse Andrews <anotherje...@gmail.com>
Date: Mon, 28 Feb 2011 19:53:01 -0800
To: Erik Carlin <erik.car...@rackspace.com>
Cc: "openstack@lists.launchpad.net" <openstack@lists.launchpad.net>
Subject: Re: [Openstack] OpenStack Compute API for Cactus (critical!)

I'm also confused, because nova (compute/block/network) being in 1 repository 
doesn't mean it isn't 3 different services.

We've talked about moving the services inside nova away from reaching inside of 
each other via RPC calls and toward making HTTP calls instead.  But they are mostly 
already designed in a way that allows them to operate independently.

And I would also say that while rackspace may deploy with 3 endpoints, 
openstack might want to deploy multiple services behind a single endpoint.

Jesse

On Feb 28, 2011, at 3:52 PM, Erik Carlin wrote:

I was talking with Will Reese about this more.  If we are eventually going to 
decompose into independent services with separate endpoints, he thought we 
should do that now.  I like that idea better.  For cactus, we still have a 
single nova service "black box" but we put multiple OpenStack API endpoints on 
the front side, one for each future service.  In other words, use separate 
endpoints instead of extensions in a single endpoint to expose the current 
capabilities.  That way, it sets us on the right path and consumers don't have 
to refactor between cactus and diablo.  In diablo, we decompose into 
separate services and the endpoints move with them.  It's a bit hard to 
visualize so I put together the attached pdf.  I'm assuming glance is a 
separate service and endpoint for cactus (still need to figure out per my 
message below) and swift already is.

Erik

From: Erik Carlin <erik.car...@rackspace.com>
Date: Mon, 28 Feb 2011 17:07:22 -0600
To: John Purrier <j...@openstack.org>, <openstack@lists.launchpad.net>
Subject: Re: [Openstack] OpenStack Compute API for Cactus (critical!)

That all sounds good.  My only question is around images.  Is glance ready to 
be an independent service (and thus have a separate API) in Cactus?

Erik

From: John Purrier <j...@openstack.org>
Date: Mon, 28 Feb 2011 16:53:53 -0600
To: Erik Carlin <erik.car...@rackspace.com>, <openstack@lists.launchpad.net>
Subject: RE: [Openstack] OpenStack Compute API for Cactus (critical!)

Hi Erik, today we have compute, block/volume, and network all encompassed in 
nova. Along with image and object storage these make the whole of OpenStack 
today. The goal is to see where we are at wrt the OpenStack API 
(compute/network/volume/image) and coverage of the underlying implementation as 
well as what is available through the EC2 API today.

I would propose that volume and network APIs be exposed not through the core 
compute API, but as extensions. Once we create separate services and factor 
network and volume services out of nova, these APIs will form the core APIs 
for these services. We may also need to up-version these service APIs between 
Cactus and Diablo as they are currently under heavy discussion and design.

John

From: Erik Carlin [mailto:erik.car...@rackspace.com]
Sent: Monday, February 28, 2011 3:16 PM
To: John Purrier; 
openstack@lists.launchpad.net
Subject: Re: [Openstack] OpenStack Compute API for Cactus (critical!)

John -

Are we just talking about compute aspects?  IMO, we should NOT be exposing 
block functionality in the OS compute API.  In Diablo, we will break out block 
into a separate service with its own OS block API.  That means for now, there 
may be functionality in nova that isn't exposed (an artifact of originally 
mimicking EC2) until we can fully decompose nova into independent services.

Erik

From: John Purrier <j...@openstack.org>
Date: Mon, 28 Feb 2011 14:16:20 -0600
To: <openstack@lists.launchpad.net>
Subject: [Openstack] OpenStack Compute API for Cactus (critical!)

Has anyone done a gap analysis against the proposed OpenStack Compute API and 
a) the implemented code, and b) the EC2 API?

It looks like we have had a breakdown in process, as the community review 
process of the proposed spec has not generated discussion of the missing 
aspects of the proposed spec.

Here is what we said on Feb 3 as the goal for Cactus:

OpenStack Compute AP

Re: [Openstack] Entities in OpenStack Auth

2011-03-01 Thread Paul Voccio
Eric,

I think that's an interesting proposal. I'll try to put something
together to visualize this.

pvo

On 3/1/11 8:14 PM, "Eric Day"  wrote:

>For that query you would, but not all. If you want to create a new
>instance for project1 you would:
>
>nova.openstack.org/v1.1/project1/servers
>
>Or if you wanted to reboot instance X in project1:
>
>nova.openstack.org/v1.1/project1/servers/X
>
>Note that the following resource is not the same as the last, since
>justin wouldn't be the owner for instance X, project1 would be:
>
>nova.openstack.org/v1.1/justin/servers/X
>
>I think searches will always have special cases with filter options,
>but for identifying a canonical URL for a resource, having the entity
>name of the owner in there seems correct.
>
>The main thing I'm trying to figure out is whether to use an extra
>entity in the path for new service URLs. Swift does and Nova does not,
>and it would be nice to have some consistency. I see the benefits of
>both, and in Swift's case it needs to for simple public URLs (where
>there is no user context).
>
>-Eric
>
>On Tue, Mar 01, 2011 at 06:00:12PM -0800, Justin Santa Barbara wrote:
>>If we're always going to pass the same user-id token (for a
>>particular
>>user), what's the value in passing it at all?  Why not get it from
>>the
>>authentication token?
>>e.g. my X-Auth-Token could look like:  "justinsb
>>project1,project2,project3 5OPr9UR2xk32K9ArAjO562e" (i.e. my
>>username,
>>projects and a crypto signature)
>>Justin
>> 
>>On Tue, Mar 1, 2011 at 5:51 PM, Eric Day  wrote:
>> 
>>  Hi Justin,
>>  On Tue, Mar 01, 2011 at 05:14:42PM -0800, Justin Santa Barbara
>>wrote:
>>  >However, what I don't understand is how I can query my
>>servers in
>>  project1
>>  >and project2 (but not those in project3). The only way I
>>could see
>>  is
>>  >doing something like this:
>>  >nova.openstack.org/v1.1/project1+project2/servers.
>>  >I agree that REST paths aren't themselves hacky in the
>>  single-project
>>  >case, but I don't yet grok the multi-project query. Of the 3
>>  options I do
>>  >grok, I see (c) as the least hacky.
>> 
>>  I would probably say use nova.openstack.org/v1.1/justin/servers
>>with
>>  one or more filter parameters in the URL or body as you mention.
>>This
>>  something to consider across all services, not just nova. AFAIK
>>  Swift doesn't support queries across multiple accounts right now,
>>  so I'd like to hear their thoughts on it as well.
>>  -Eric
>


Re: [Openstack] A single cross-zone database?

2011-03-16 Thread Paul Voccio
Sandy,

Not only is this expensive, but there is no way I can see at the moment to do 
pagination, which is what makes this really expensive. If someone asked for an 
entire list of all their instances and it was > 10,000 then I would think 
they're ok with waiting while that response is gathered and returned. However, 
since the API spec says we should be able to do pagination, this is where 
asking each zone for all its children every time gets untenable.
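
A minimal sketch of why that hurts (all names below are illustrative): even to 
build a single page of an ordered, merged listing, every child zone still has 
to be asked on every request.

    # Hedged sketch: zone.list_instances() is a stand-in for the per-zone call.
    # Assumes each zone returns its instances already sorted by id.
    import heapq

    def list_instances_page(zones, customer, offset, limit):
        per_zone = [zone.list_instances(customer) for zone in zones]  # one call per zone, every time
        merged = heapq.merge(*per_zone, key=lambda inst: inst["id"])  # ordered merge across zones
        page = []
        for i, inst in enumerate(merged):
            if i >= offset + limit:
                break
            if i >= offset:
                page.append(inst)
        return page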

Looking forward to the discussion. More below.

From: Sandy Walsh <sandy.wa...@rackspace.com>
Date: Wed, 16 Mar 2011 14:53:37 +
To: "openstack@lists.launchpad.net" <openstack@lists.launchpad.net>
Subject: [Openstack] A single cross-zone database?

Hi y'all, getting any sleep before Feature Freeze?

As you know, one of the main design tenets of OpenStack is Share Nothing 
(where possible). http://wiki.openstack.org/BasicDesignTenets

That's the mantra we've been chanting with Zones. But it does cause a problem 
with a particular Use Case:

"Show me all Customer X Instances, across all Zones."

This is an expensive request. We have to poll all zones and ask them to return 
a list of matching instances.

There has been some water cooler chat about some things we could do to make 
this more efficient in the near term. One proposal has been to assume a single 
database, replicated across zones. I'll call it SDB for short. With SDB we can 
have a join table that links Zone to Instance ... keeping a record of all 
instances across zones. Maybe it's a completely separate set of tables? Maybe 
it's a separate replicated db? The intention is to let us talk to the 
appropriate zone directly.
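
Purely as an illustration of the kind of join table described above (the 
columns are guesses, not a schema proposal):

    # Hedged sketch using SQLAlchemy: a top-level mapping of instance -> zone so
    # the parent zone can route a request straight to the owning child zone.
    from sqlalchemy import Column, Integer, MetaData, String, Table

    metadata = MetaData()

    zone_instances = Table(
        "zone_instances", metadata,
        Column("instance_id", Integer, primary_key=True),   # instance known at the top
        Column("zone_id", Integer, nullable=False),          # child zone that owns it
        Column("zone_api_url", String(255)),                 # where to send the request
    )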

Sure, there are a ton more optimizations we could make if we go further with 
SDB. We could store all the Zone capabilities in the db to make Zone selection 
faster. We could store all the customers in the db to make multi-tenant easier. 
But that's not what we're talking about here. We're talking about the bare 
minimum required to make the get_instances query fast.

Conversely, there are issues with a single DB. The largest being the 
implication it has on Bursting (Hybrid Private/Public clouds) ... a pretty 
funky feature imho.




Personally, I think the same query gains can be obtained by creating a separate 
db using off-the-shelf ETL tools to create cache/read-only db's. 
http://en.wikipedia.org/wiki/Extract,_transform,_load

Isn't the hard part keeping this in sync with what the zones have?


I was considering SDB for Zones (phase 4), but for now, I'm going to stick with 
the original plan of separate databases (1 per zone) and see what the 
performance implications are.

What are your thoughts on this issue?

... let the games begin!

-S





Re: [Openstack] A single cross-zone database?

2011-03-16 Thread Paul Voccio
Ed,

I would agree. The caching would go with the design tenet #7: Accept
eventual consistency and use it where it is appropriate.

If we're ok with accepting that the list may or may not always be up to
date and feel it's appropriate, we should be good with the caching.

pvo
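
A bare-bones sketch of that trade-off (fetch_from_all_zones is a stand-in for 
the expensive recursive query, and the TTL value is arbitrary):

    # Hedged illustration: serve a possibly-stale cached list within the TTL,
    # and only poll every zone when the cache has expired.
    import time

    _cache = {}  # customer -> (timestamp, instances)

    def cached_instances(customer, fetch_from_all_zones, ttl=60):
        now = time.time()
        hit = _cache.get(customer)
        if hit and now - hit[0] < ttl:
            return hit[1]                  # accept staleness up to ttl seconds
        instances = fetch_from_all_zones(customer)
        _cache[customer] = (now, instances)
        return instances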


On 3/16/11 11:45 AM, "Ed Leafe"  wrote:

>On Mar 16, 2011, at 12:23 PM, Paul Voccio wrote:
>
>> Not only is this expensive, but there is no way I can see at the moment
>>to do pagination, which is what makes this really expensive. If someone
>>asked for an entire list of all their instances and it was > 10,000 then
>>I would think they're ok with waiting while that response is gathered
>>and returned. However, since the API spec says we should be able to do
>>pagination, this is where asking each zone for all its children every
>>time gets untenable.
>
>This gets us into the caching issues that were discussed at the last
>summit. We could run the query and then cache the results at the
>endpoint, but this would require accepting some level of staleness of the
>results. The cache would handle the paging, and some sort of TTL would
>have to be established as a balance between performance and staleness.
>
>
>
>-- Ed Leafe
>




Re: [Openstack] A single cross-zone database?

2011-03-16 Thread Paul Voccio
For Cactus, I'm with Justin and Sandy on #1 to get something working.

Justin — you said earlier that you're not sure this is going to be a problem. 
From experience, this is a problem with trying to query all the instances 
across zones for Rackspace now. Sandy and others including myself have talked 
through this a few times to try to solve this early, but I don't think we can. 
I, like you, want to get something working with zones asap. I'd rather retreat 
from trying to solve this now to get something working. We'll obviously tackle 
the list-instances problem once something works.

From: Justin Santa Barbara <jus...@fathomdb.com>
Date: Wed, 16 Mar 2011 13:37:50 -0700
To: Ed Leafe <ed.le...@rackspace.com>
Cc: "openstack@lists.launchpad.net" <openstack@lists.launchpad.net>
Subject: Re: [Openstack] A single cross-zone database?

Seems that the person writing the code (Sandy) wants _not_ to do a single DB 
initially.  It sounds like there are back channels where a single DB is being 
pushed on Sandy.

To me, it sounds like we have these choices:

  1.  We can have a zones implementation in Cactus.  As specified in the 
blueprint, it will use recursive querying, and there will be no caching 
initially, nor will there be a single DB.
  2.  We can go 'off blueprint', and simply not have a multi-zones 
implementation in Cactus.

Given that, I don't see why we would deviate from what we've agreed (and I'm 
normally all for flexibility); let's get a baseline implementation into Cactus. 
 People that want to add caching or a single DB are then free to do so in their 
own branches, but at least those enhancements will be starting from a common 
base.  I'm not against adding caching / a single DB if it proves necessary / 
good later.

Hopefully we'll actually learn of any real-world issues with the simple 
approach by running Sandy's code, and we can discuss those facts at the design 
conference, rather than talking in hypotheticals.

Sandy: Have I got the wrong end of the stick here?  Are these our choices?

Justin



On Wed, Mar 16, 2011 at 1:13 PM, Ed Leafe <ed.le...@rackspace.com> wrote:
On Mar 16, 2011, at 3:39 PM, Justin Santa Barbara wrote:

> I agree that we could have a better marker, but I'm just going off the spec 
> at the moment.
>
> I've checked the agreed blueprint, and caching in zones is out of scope for 
> Cactus.
>
> Please propose a discussion topic for the Design Summit.

   Can we get back to the original topic? The only reason caching came up 
was as an alternative to a single DB to hold all instance information. That was 
an implementation solution suggested for multi-cluster/zones, so it is 
definitely in scope for Cactus.



-- Ed Leafe





Re: [Openstack] Instance IDs and Multiple Zones

2011-03-22 Thread Paul Voccio
I agree with the sentiment that integers aren't the way to go long term.
The current spec of the api does introduce some interesting problems to
this discussion. All can be solved. The spec calls for the api to return
an id and a password upon instance creation. This means the api isn't
asynchronous if it has to wait for the zone to create the id. Page 46
of the API Spec states the following:

"Note that when creating a server only the server ID and the admin
password are guaranteed to be returned in the request object. Additional
attributes may be retrieved by performing subsequent GETs on the server."



This creates a problem with the bursting if Z1 calls to Z2, which is a
public cloud, which has to wait for Z3-X to find out where it is going to be
placed. How would this work?

pvo

On 3/22/11 1:39 PM, "Chris Behrens"  wrote:

>
>I think Dragon got it right.  We need a zone identifier prefix on the
>IDs.  I think we need to get away from numbers.  I don't see any reason
>why they need to be numbers.  But, even if they did, you can pick very
>large numbers and reserve some bits for zone ID.
>
>- Chris
>
>
>On Mar 22, 2011, at 10:48 AM, Justin Santa Barbara wrote:
>
>> I think _if_ we want to stick with straight numbers, the following are
>>the 'traditional' choices:
>> 
>> 1) "Skipping" - so zone1 would allocate numbers 1,3,5, zone2 numbers
>>2,4,6.  Requires that you know in advance how many zones there are.
>> 2) Prefixing - so zone0 would get 0xxx, zone1 1xx.
>> 3) Central allocation - each zone would request an ID from a central
>>pool.  This might not be a bad thing, if you do want to have a quick
>>lookup table of ID -> zone.  Doesn't work if the zones aren't under the
>>same administrative control.
>> 4) Block allocation - a refinement of #3, where you get a bunch of IDs.
>> Effectively amortizes the cost of the RPC.  Probably not worth the
>>effort here.
>> 
>> (If you want central allocation without a shared database, that's also
>>possible, but requires some trickier protocols.)
>> 
>> However, I agree with Monsyne: numeric IDs have got to go.  Suppose I'm
>>a customer of Rackspace CloudServers once it is running on OpenStack,
>>and I also have a private cloud that the new Rackspace Cloud Business
>>unit has built for me.  I like both, and then I want to do cloud
>>bursting in between them, by putting an aggregating zone in front of
>>them.  I think at that stage, we're screwed unless we figure this out
>>now.  And this scenario only has one provider (Rackspace) involved!
>> 
>> We can square the circle however - if we want numbers, let's use UUIDs
>>- they're 128 bit numbers, and won't in practice collide.  I'd still
>>prefer strings though...
>> 
>> Justin
>> 
>> 
>> 
>> On Tue, Mar 22, 2011 at 9:40 AM, Ed Leafe  wrote:
>>I want to get some input from all of you on what you think is
>>the best way to approach this problem: the RS API requires that every
>>instance have a unique ID, and we are currently creating these IDs by
>>use of an auto-increment field in the instances table. The introduction
>>of zones complicates this, as each zone has its own database.
>> 
>>The two obvious solutions are a) a single, shared database and
>>b) using a UUID instead of an integer for the ID. Both of these
>>approaches have been discussed and rejected, so let's not bring them
>>back up now.
>> 
>>Given integer IDs and separate databases, the only obvious
>>choice is partitioning the numeric space so that each zone starts its
>>auto-incrementing at a different point, with enough room between
>>starting ranges to ensure that they would never overlap. This would
>>require some assumptions be made about the maximum number of instances
>>that would ever be created in a single zone in order to determine how
>>much numeric space that zone would need. I'm looking to get some
>>feedback on what would seem to be reasonable guesses to these partition
>>sizes.
>> 
>>The other concern is more aesthetic than technical: we can make
>>the numeric spaces big enough to avoid overlap, but then we'll have very
>>large ID values; e.g., 10 or more digits for an instance. Computers
>>won't care, but people might, so I thought I'd at least bring up this
>>potential objection.
>> 
>> 
>> 
>> -- Ed Leafe
>> 
>> 
>> 
>> 

Re: [Openstack] Instance IDs and Multiple Zones

2011-03-22 Thread Paul Voccio
With this, are we saying EC2API wouldn't be able to use the child zones in the 
same way as the OSAPI?

From: Vishvananda Ishaya <vishvana...@gmail.com>
Date: Tue, 22 Mar 2011 12:44:21 -0700
To: Justin Santa Barbara <jus...@fathomdb.com>
Cc: Paul Voccio <paul.voc...@rackspace.com>, "openstack@lists.launchpad.net" <openstack@lists.launchpad.net>, Chris Behrens <chris.behr...@rackspace.com>
Subject: Re: [Openstack] Instance IDs and Multiple Zones

The main issue that drove integers is backwards compatibility to the ec2_api 
and existing ec2 toolsets.  People seemed very opposed to the idea of having 
two separate ids in the database, one for ec2 and one for the underlying 
system.  If we want to move to another id scheme that doesn't fit in a 32 bit 
integer we have to provide a way for ec2 style ids to be assigned to instances, 
perhaps through a central authority that hands out unique ids.

Vish
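
A rough sketch of that mapping idea (real storage, locking and the database 
sequence are omitted; this is not how nova does it today):

    # Hedged illustration: keep a UUID as the real identifier and let one
    # authority hand out small EC2-style integers that map back to it.
    import itertools
    import uuid

    _counter = itertools.count(1)    # would be a database sequence in practice
    _ec2_to_uuid = {}

    def assign_ec2_id(instance_uuid=None):
        instance_uuid = instance_uuid or str(uuid.uuid4())
        ec2_id = next(_counter)
        _ec2_to_uuid[ec2_id] = instance_uuid
        return ec2_id, instance_uuid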

On Mar 22, 2011, at 12:30 PM, Justin Santa Barbara wrote:

The API spec doesn't seem to preclude us from doing a fully-synchronous method 
if we want to (it just reserves the option to do an async implementation).  
Obviously we should make scheduling fast, but I think we're fine doing 
synchronous scheduling.  It's still probably going to be much faster than 
CloudServers on a bad day anyway :-)

Anyone have a link to where we chose to go with integer IDs?  I'd like to 
understand why, because presumably we had a good reason.

However, if we don't have documentation of the decision, then I vote that it 
never happened, and instance ids are strings.  We've always been at war with 
Eastasia, and all ids have always been strings.

Justin




On Tue, Mar 22, 2011 at 12:20 PM, Paul Voccio <paul.voc...@rackspace.com> wrote:
I agree with the sentiment that integers aren't the way to go long term.
The current spec of the api does introduce some interesting problems to
this discussion. All can be solved. The spec calls for the api to return
an id and a password upon instance creation. This means the api isn't
asynchronous if it has to wait for the zone to create the id. Page 46
of the API Spec states the following:

"Note that when creating a server only the server ID and the admin
password are guaranteed to be returned in the request object. Additional
attributes may be retrieved by performing subsequent GETs on the server."



This creates a problem with the bursting if Z1 calls to Z2, which is a
public cloud, which has to wait for Z3-X to find out where it is going to be
placed. How would this work?

pvo

On 3/22/11 1:39 PM, "Chris Behrens" <chris.behr...@rackspace.com> wrote:

>
>I think Dragon got it right.  We need a zone identifier prefix on the
>IDs.  I think we need to get away from numbers.  I don't see any reason
>why they need to be numbers.  But, even if they did, you can pick very
>large numbers and reserve some bits for zone ID.
>
>- Chris
>
>
>On Mar 22, 2011, at 10:48 AM, Justin Santa Barbara wrote:
>
>> I think _if_ we want to stick with straight numbers, the following are
>>the 'traditional' choices:
>>
>> 1) "Skipping" - so zone1 would allocate numbers 1,3,5, zone2 numbers
>>2,4,6.  Requires that you know in advance how many zones there are.
>> 2) Prefixing - so zone0 would get 0xxx, zone1 1xx.
>> 3) Central allocation - each zone would request an ID from a central
>>pool.  This might not be a bad thing, if you do want to have a quick
>>lookup table of ID -> zone.  Doesn't work if the zones aren't under the
>>same administrative control.
>> 4) Block allocation - a refinement of #3, where you get a bunch of IDs.
>> Effectively amortizes the cost of the RPC.  Probably not worth the
>>effort here.
>>
>> (If you want central allocation without a shared database, that's also
>>possible, but requires some trickier protocols.)
>>
>> However, I agree with Monsyne: numeric IDs have got to go.  Suppose I'm
>>a customer of Rackspace CloudServers once it is running on OpenStack,
>>and I also have a private cloud that the new Rackspace Cloud Business
>>unit has built for me.  I like both, and then I want to do cloud
>>bursting in between them, by putting an aggregating zone in front of
>>them.  I think at that stage, we're screwed unless we figure this out
>>now.  And this scenario only has one provider (Rackspace) involved!
>>
>> We can square the circle however - if we want numbers, let's use UUIDs
>>- they're 128 bit numbers, and won't in practice collide.  I'd still
>>prefer strings though...
>>
>> Jus

Re: [Openstack] Instance IDs and Multiple Zones

2011-03-22 Thread Paul Voccio
Ed, 

I spoke with Jorge earlier today and this is still treated as the instance
id. That instance can fail or succeed, but the id of what you call to
retrieve that status never changes.

pvo

On 3/22/11 2:55 PM, "Ed Leafe"  wrote:

>On Mar 22, 2011, at 3:20 PM, Paul Voccio wrote:
>
>> This means the api isn't
>> asynchronous if it has to wait for the zone to create the id. From page
>>46
>> of the API Spec states the following:
>> 
>> "Note that when creating a server only the server ID and the admin
>> password are guaranteed to be returned in the request object. Additional
>> attributes may be retrieved by performing subsequent GETs on the
>>server."
>> 
>> 
>> This creates a problem with the bursting if Z1 calls to Z2, which is a
>> public cloud, which has to wait for Z3-X to find out where it is going
>>be
>> placed. How would this work?
>
>
>I thought this had been changed to return a reservation ID, which
>would then be used to get information about the instance once it had been
>created. That would allow the API to return immediately without having to
>wait for a host to be selected, an instance to be created, networking to
>be configured, etc.
>
>
>-- Ed Leafe
>




Re: [Openstack] (no subject)

2011-03-23 Thread Paul Voccio
Bittu,

I would start by reading the docs at http://docs.openstack.org/. Then I
would check out the bzr/LP tutorial that Soren put together here:
http://wiki.openstack.org/LifeWithBzrAndLaunchpad.

Once you've gone through those docs, it should get you moving in the right
direction. If you have any questions, you can usually find help in IRC
(#openstack on freenode) or on the mailing lists.

Good luck,
pvo

On 3/23/11 6:42 AM, "sedrik daimary"  wrote:

>Hi John,
>
>My name is Bittu and I have recently joined the mailing list of
>OpenStack. When I heard about OpenStack it sounded really interesting.
>So I want to contribute to OpenStack Compute, but before that I want to
>study the architecture of OpenStack Compute. Can you please give me
>advice on how to approach understanding OpenStack Compute (nova)?
>


Re: [Openstack] Instance IDs and Multiple Zones

2011-03-23 Thread Paul Voccio



>
>I don't agree at all.  There are many good reasons to preserve the
>identity of a VM even when its IP or location changes.  Billing, for
>example.  Access control.  Intrusion detection.
>
>Just because I move a VM from one place to another, why would I expect
>its identity to change?
>
>

Where do we put the boundary on the preservation of id? Within the same
deployment? Within the same zone topology? I'm not quite following the
billing aspect. If you shut one down and start another that is a problem
for billing? 

You stated earlier today:
"We have to accept that, on the scales we care about, any unique ID is
going to be incomprehensible to a human.  Rely on your presentation layer,
that's what it's there for!"

Is this really different? If the id changes, should the user care if it is
presented in the same way with the same data? Am I missing something?

pvo
>





Re: [Openstack] Instance IDs and Multiple Zones

2011-03-23 Thread Paul Voccio
Thanks for the clarification. I wasn't sure if you were actually
contradicting yourself as it seemed such an odd thing for you to do. : )

More below!


>
>I certainly didn't intend for those statements to be contradictory.  I
>don't think that they are.



>
>My view is that identity should be preserved as long as it's possible to
>do so.  A VM that moves around, gets resized, gets rebooted, etc, should
>have the same identity.
>
>By "identity" I mean that other pieces of software should be able to tell
>that it's the same thing.  A billing system should be able to say "that's
>the same VM that I saw before".  For example, if I charge my customers
>for a month of usage, even if they only run the VM for a part of that
>month, then my billing system needs to be able to say "that VM has moved
>from here to here, but it's actually the same VM, so I'm charging for one
>month, not two".  This is the current charging scheme for RHEL instances
>hosted on Rackspace Cloud
>(http://www.rackspace.com/cloud/blog/2010/08/31/red-hat-license-fee-for-ra
>ckspace-cloud-servers-changing-from-hourly-to-monthly/), not just a
>corner-case example.

I can speak to this particular example as it only charges you for the max
number of RedHat vms you run for the month. With the caveat of this is how
it was explained to me, please consider the scenario:

Launch 2, 
Terminate 2
Launch 5
Terminate 2
Launch 3 
Terminate All

You get billed for the hours plus 6 RHEL licenses since that was your
peak. In your example above, if you terminated then started another
instance, that's really 2 instances, with only one active at any time. If
you launched one with cloned data from the other one and both are active
at the same time, it's really an additional instance and the operator can
bill accordingly. I don't suppose this really matters for the point you're
making and I'll concede that.
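
Spelling out the arithmetic with a quick sketch (the event list mirrors the 
sequence above):

    # Launches add instances, terminations remove them; the bill follows the peak.
    events = [+2, -2, +5, -2, +3, -6]   # "terminate all" ends the sequence at zero

    running = peak = 0
    for delta in events:
        running += delta
        peak = max(peak, running)

    print(peak)   # -> 6, matching the six RHEL licenses in the example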

>
>You can invent similar arguments for penetration detection systems ("that
>VM is acting the way that it used to") or any other system for enforcing
>policy.
>
>If you are using some kind of location- or path-based identifier for that
>VM, then client software has to be notified of and keep track of all the
>movement of the VM.  If you have a unique identifier, then clients don't
>have to do any of this.
>
>My point about the UI was that we shouldn't worry about how complex these
>IDs should be.  We should make sure that bits of software can talk to
>each other correctly and simply, and base our ID scheme on those needs.
>Once we've figured out what ID scheme we're using, it's _trivial_ for a
>UI or CLI to turn those ugly IDs into "Paul's Apache server" and "Ewan's
>build machine".
I would agree with this.

>
>To your point about the boundary of preservation of ID, that's a good
>question.  If you ignore the security / trust issues, then the obvious
>answer is that IDs should be globally, infinitely, permanently unique.
>That's what UUIDs are for.  We can generate these randomly without any
>need for a central authority, and with no fear of collisions.  It would
>certainly be nice if my VM can leave my SoftLayer DC and arrive in my
>Rackspace DC and when it comes back I still know that it's the same VM.
>That's the OpenStack dream, right?

Is it the ID that matters or the data inside the vm? I think it's really
about the data. Consistent IDs would be nice though.

>
>I'm willing to accept that that's difficult to achieve, and I'd
>compromise on identity only being preserved within an ownership/trust
>boundary.  I really don't see why I should lose track of my VM when it
>moves from one zone to another within a given provider though.

If we were to go with UUIDs and using XenServer, I should be able to use
the uuid that it generates upon VM creation. I would almost ask your above
question for XenServer then. When I terminate and launch a VM on the same
machine, I should be able to give it the same uuid that I was just using,
but I can't. Maybe I can and I'm making it harder on myself :)

pvo


>
>Ewan.
>





Re: [Openstack] heterogeneous instance types

2011-04-01 Thread Paul Voccio
Lorin,

I think there is a lot of interest in having a container type vm next to a
fully virtualized vm. I see the distinction between services as needing
root vs non-root applications. Database, firewall, lb type services may
not need a fully virtualized vm.

pvo

On 4/1/11 10:38 AM, "Thierry Carrez"  wrote:

>Lorin Hochstein wrote:
>> We didn't have this use case in mind when we did our initial
>>implementation of heterogeneous instance types, but it's an interesting
>>idea and should be pretty straightforward to implement (says the guy at
>>ISI who probably contributed the least amount of code to the
>>heterogeneous implementation...).  Is there community interest in this
>>type of functionality?
>
>From my (limited) point of view, there is a lot of value for a provider
>to being able to offer a choice between container-type "light" VMs
>(through LXC) and virtualized-type "full" VMs (through KVM), in the same
>cloud.
>
>-- 
>Thierry Carrez (ttx)
>Release Manager, OpenStack
>


Re: [Openstack] Moving code hosting to GitHub

2011-04-08 Thread Paul Voccio
Jay,

I think this will be one of the more popular talks at the
summit. Looking forward to the discussion. I know a lot of devs will be
happy to see this.

pvo

On 4/8/11 4:21 PM, "Jay Pipes"  wrote:

>All,
>
>In an effort to speed up our code development processes, reduce the
>friction amongst existing contributors and reduce barriers to entry
>for new contributors familiar with the popular git DVCS, we (the
>OpenStack@Rackspace team) have been studying a transition of our code
>hosting from Launchpad to GitHub. We understand others would be
>proposing the same at the design summit, but we figured it would be
>good to get the discussion started earlier.
>
>GitHub has a number of major strengths when it comes to managing source
>code:
>- Contributors seem to be more familiar with, and comfortable using, git
>- The code review process on GitHub is easier to use for reviewers
>  who use the website interface and allows for fine-grained comment
>  control per line in diffs
>
>As good as the GitHub review system is, there are some deficiencies,
>such as the lack of ability to mark a request as definitively
>approved. We hope to work with the GitHub team to investigate how this
>can be rectified.
>
>Of course, there is much more to delivering a professionally released
>open source software package than just the code hosting platform. This
>is the primary interface for code contributors who are actively
>developing, but the project also needs to have processes in place for
>handling bug reports, managing distribution, packaging, translations,
>and releasing the code in an efficient manner.
>
>There are a number of things that Launchpad provides OpenStack
>projects that GitHub does not have the ability to do. Examples of
>these things include translation services, project management
>abilities, package archives for projects, and release-management
>functionality.
>
>Therefore, at this time, we are only proposing moving the code hosting
>functionality to GitHub, and not radically changing any other parts of
>the development and release process.
>
>Soren, Monty, and Thierry, who are the developers responsible for
>keeping our release management and development infrastructure in good
>shape, have identified the pieces of our existing infrastructure that
>they will have to modify. Some of these changes are small, some
>require a bit more work. They are all committed to making these
>changes and to moving us along in the process of transitioning code
>hosting over to GitHub.
>
>There will be a design summit session about this transition where the
>process will be discussed in more detail, as well as the possibility
>to migrate other parts of our infrastructure.
>
>Comments and discussion welcome.
>
>Cheers,
>-jay
>


Re: [Openstack] Proposal for Justin Santa Barbara to join Nova-Core

2011-04-15 Thread Paul Voccio
Why should they be secret?

From: Sebastian Stadil <sebast...@scalr.com>
Date: Fri, 15 Apr 2011 10:11:42 -0700
To: Soren Hansen <so...@openstack.org>
Cc: "openstack@lists.launchpad.net" <openstack@lists.launchpad.net>
Subject: Re: [Openstack] Proposal for Justin Santa Barbara to join Nova-Core

Question out of curiosity, should everyone's votes be public?

On Fri, Apr 15, 2011 at 4:24 AM, Soren Hansen <so...@openstack.org> wrote:
2011/4/12 Soren Hansen <so...@openstack.org>:
> +1 from me, too.
>
> As per the process, if no-one objects within 5 business (my
> interpretation) days, i.e. before Thursday, I'll get Justin added to
> the nova-core team.

No one objected, so this is now done. Congrats, Justin. Welcome to the team :)

--
Soren Hansen
Ubuntu Developer    http://www.ubuntu.com/
OpenStack Developer http://www.openstack.org/



Re: [Openstack] Proposal for Ed Leafe to join Nova-Core

2011-04-15 Thread Paul Voccio
+1 

On 4/15/11 2:55 PM, "Jay Pipes"  wrote:

>Hi all,
>
>Ed Leafe (dabo) has been one of those developers that has stepped up
>to the plate in code reviews and mailing list discussions. I'd like to
>propose he join nova-core.
>
>Cheers,
>jay
>


Re: [Openstack] Creating a forum

2011-05-03 Thread Paul Voccio
Are these really the "official" openstack forums? I didn't get the
impression that this was settled. Didn't we bypass some processes here?

On 5/3/11 2:49 PM, "Jordan Rinke"  wrote:

>Ladies and Gentlemen... welcome to the official OpenStack Forums!
>
>http://forums.openstack.org
>
>Work in progress so feel free to join and post up any comments about the
>forum etc.
>
>-Original Message-
>From: "Everett Toews" 
>Sent: Tuesday, May 3, 2011 1:22pm
>To: "Anne Gentle" 
>Cc: "Jordan Rinke" , openstack@lists.launchpad.net
>Subject: Re: [Openstack] Creating a forum
>
>Regarding your StackExchange questions Anne.
>
>For an Open Source StackExchange-like site OSQA (http://www.osqa.net/)
>could
>be used.
>
>For StackExchange itself it's free as in beer (
>http://area51.stackexchange.com/faq).
>
>"How much does Stack Exchange cost?
>
>Creating a Stack Exchange site is free. Using a Stack Exchange site is
>free. The Creative Commons license guarantees that questions and answers
>are
>free to access, free to use and re-use (with attribution), and free to
>share... forever."
>
>Everett
>
>On Tue, May 3, 2011 at 10:48 AM, Anne Gentle  wrote:
>
>> Hey all, thanks for asking for my input. :)
>>
>> A few months ago, I said it's too early. This month, I do sense a need
>>for
>> community support, based on questions I see on the docs site and the
>>types
>> of questions in Launchpad Answers.
>>
>> I think we're getting to a real user community and it would be good
>>timing
>> to start a forum, so I say yes, with the request that we have strong
>>guides.
>> Jordan and Ron can be our one-percenter guys, the ones who are helpful
>>and
>> responsive. We'll need other one-percenters. Vish has done a _great_ job
>> responding to Launchpad Answers. It's getting to be really helpful. But
>>it's
>> not quite a forum. And it's not about the tool, it's about being
>>responsive,
>> right?
>>
>> I don't want to weigh in too heavily on a tools discussion, because it's
>> more about the community and people than a tool. The responses here
>>seem to
>> indicate that sys admins would lean towards forums. I personally like
>>the
>> Stack Exchange style sites for building a reputation which motivates
>> participation if done well. However, OpenStack is not a big enough draw
>>for
>> them to be a "Top Network Site" like Ubuntu. And the tool is certainly
>>not
>> open source. I don't honestly know pricing or licensing or availability
>>of a
>> standalone Stack Exchange site. Does anyone have details there? That
>>info
>> might help with the tools discussion.
>>
>> My main point is that I'd like to ensure responsiveness, so we don't
>>have
>> empty restaurant syndrome in a forum-like support site. The people who
>>will
>> be most responsive to users and adopters should probably weigh in on the
>> tools discussion. Devs won't need to monitor the admin community support
>> site once we get a core group of admins running OpenStack and helping
>> others.
>>
>> So that's my current thinking.
>> Anne
>>
>>
>>
>> On Tue, May 3, 2011 at 9:36 AM, Jordan Rinke 
>>wrote:
>>
>>> Interesting because Ron very specifically mentioned being able to find
>>> useful and relevant information on the Ubuntu forums without bothering
>>> devs
>>> at the beginning of this discussion (which Soren then noted as an
>>> excellent
>>> point).
>>>
>>> We don't have an extended answer from Anne yet, but she did vote Yes on
>>> the
>>> survey (unless someone else used her name since there is no real auth).
>>>
>>> -Original Message-
>>> From: Thierry Carrez [mailto:thie...@openstack.org]
>>> Sent: Tuesday, May 03, 2011 9:27 AM
>>> To: Jordan Rinke
>>> Cc: openstack@lists.launchpad.net
>>> Subject: Re: [Openstack] Creating a forum
>>>
>>> Jordan Rinke wrote:
>>> > I think a purely QnA site misses the mark a little, that style is
>>> > great for a very specific question (and the OSQA stuff Everett
>>>linked
>>> > looks great) but I think a lot of users are lacking the knowledge to
>>> > ask a very specific question just yet. So maybe it is that we need a
>>> > place for random discussion, but that can also specifically answer a
>>> question as well.
>>>
>>> If you take Ubuntu (home to arguably the largest software-related
>>> forums in the world), the forums are completely ignored by developers,
>>> so they rely on a completely separate user community. They are a source
>>> of wrong (or outdated) technical answers and user frustration.
>>>
>>> They recently set up a stackexchange site at ask.ubuntu.com, and it is
>>>a
>>> huge success. Developers and users contribute to it, and it's a
>>>valuable
>>> and
>>> continuously-updated source of information.
>>>
>>> I don't want us to run into the same failure before realizing there is
>>>a
>>> better and more targeted tool available... Personally I would ignore
>>> forums
>>> (since they are a waste of time), but contribute to the stackexchange
>>>site
>>> (since they are an easy way to contribute reference information).

Re: [Openstack] Problem with "values" in JSON responses

2011-05-03 Thread Paul Voccio
Hi Eldar, 

There was a good discussion at the summit in regards to how this is going
to be represented. Jay Pipes and Jorge Williams were on point (I think)
for pushing this topic forward. Hopefully one of them can reply here with
their findings. 

Pvo

On 5/3/11 6:29 PM, "Eldar Nugaev"  wrote:

>Hi gents.
>
>At this moment we have a problem in OS API 1.1. Any JSON response with
>"values" doesn't meet the specification.
>Could you please provide information - why do we want to see the
>"values" field in JSON, and who is responsible for implementing this
>specification in OS API 1.1?
>
>Also, we have broken documentation on openstack.org for OS API 1.0:
>http://docs.openstack.org/cactus/openstack-compute/developer/openstack-compute-api-1.0/content/index.html
>
>-- 
>Eldar
>Skype: eldar.nugaev
>
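
For illustration, the mismatch described above appears to come down to an
extra "values" wrapper around list responses. A minimal Python sketch with
hypothetical payloads (the field names and shapes here are assumptions for
illustration, not quoted from the 1.1 spec):

# Hypothetical payloads illustrating the "values" mismatch; the shapes are
# assumptions for illustration, not taken from the OS API 1.1 spec.

# What a client written against the spec would expect for a list response:
expected = {
    "servers": [
        {"id": 1234, "name": "web-01"},
        {"id": 1235, "name": "web-02"},
    ]
}

# What is reportedly coming back: the same list wrapped in a "values" key.
actual = {
    "servers": {
        "values": [
            {"id": 1234, "name": "web-01"},
            {"id": 1235, "name": "web-02"},
        ]
    }
}

for server in expected["servers"]:
    print(server["name"])    # works: iterates the list of server objects

for server in actual["servers"]:
    print(server)            # surprises the client: yields the key "values"
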





[Openstack] Proposal for Nova Core

2011-05-10 Thread Paul Voccio
All,

I would like to nominate Dan Prince (https://launchpad.net/~dan-prince) for 
nova-core. He has been a solid contributor in terms of code, reviews and 
discussions during the summit.

Thanks,
pvo




Re: [Openstack] [nova-core] Proposal to add Brian Waldon to nova-core

2011-05-11 Thread Paul Voccio
+1

On 5/11/11 3:01 PM, "Ed Leafe"  wrote:

>On May 11, 2011, at 1:06 PM, Jay Pipes wrote:
>
>> Subject says it all. I think Brian's demonstrated that his code
>> reviews are thoughtful and thorough, and he knows the OpenStack API
>> controller stuff as well as anyone else I believe.
>
>
>Definitely! +1
>
>
>-- Ed Leafe
>
>
>
>
>





Re: [Openstack] [nova-core] Proposal to add Mark Washenberger to nova-core

2011-05-11 Thread Paul Voccio
+1

On 5/11/11 3:02 PM, "Ed Leafe"  wrote:

>On May 11, 2011, at 1:07 PM, Jay Pipes wrote:
>
>> Mark's been a very good reviewer and an invaluable resource on the API
>> side, particularly regarding serialization. I propose he join
>> nova-core.
>
>+1
>
>
>-- Ed Leafe
>
>
>
>





Re: [Openstack] [Nova] Unassigned essential Diablo specs

2011-05-18 Thread Paul Voccio
We'll probably have to tackle Instance migration (which is an
implementation detail of host evacuation). I'll update the BP today.

pvo

On 5/18/11 4:59 AM, "Thierry Carrez"  wrote:

>Hello Nova developers,
>
>In the plan that Vish came up with for Diablo[1], we have a number of
>essential specs that do not have an assignee.
>
>[1] https://blueprints.launchpad.net/nova/diablo
>
>This is an issue, since Essential specs are supposed to delay the Diablo
>release if they are not completed, and not having someone who is
>committed to delivering the feature sounds like the first step to certain
>failure. Here they are:
>
>Implement clear rest API for volumes (targeted at diablo-2, Jun 30)
>https://blueprints.launchpad.net/nova/+spec/implement-volume-api
>This is an essential piece of the decoupling of the volume code. Justin,
>you seemed to be interested ? Anyone else that could take it ?
>
>EC2 Id compatibility (targeted at diablo-3, Jul 28)
>https://blueprints.launchpad.net/nova/+spec/ec2-id-compatibilty
>The "nova-instance-referencing" work will break EC2 API, and this
>blueprint is tracking the work that needs to be done to prevent that.
>Anyone interested ?
>
>Other ("High") specs that are in the plan but unassigned:
>
>Instance Migration (targeted at diablo-2, Jun 30)
>https://blueprints.launchpad.net/nova/+spec/instance-migration
>Allow cloud admins to perform maintenance of hosts by enabling shutdown
>of an instance and moving it to a new host node. Does that fall into the
>Ozone realm, as part of service-provider-readiness?
>
>Add support for floating IPs in the OpenStack API (diablo-2, Jun 30)
>https://blueprints.launchpad.net/nova/+spec/openstack-api-floating-ips
>This should ultimately be deferred to the NaaS API, but we probably need
>some support for this until that is finalized.
>
>Also note that plan to decouple network services is partially assigned:
>Refactor Networking - assigned to Ryu Ishimoto
>Implement clear rest API for networks (diablo-3) - unassigned
>Integrate Network Services (diablo-4) - unassigned
>
>Regards,
>
>-- 
>Thierry Carrez (ttx)
>Release Manager, OpenStack





Re: [Openstack] Feedback on Portable Configuration Drive Blueprint

2011-06-19 Thread Paul Voccio
Thorsten,

I'll take a run at these since I wrote the blueprint. It was initially
more of a placeholder that was going to get filled out later, but Vish
bumped the priority before we (Ozone) were ready to move forward. I haven't
talked to Chris about his implementation and I haven't seen the
code yet. I'll take a look at the code and update the blueprint later
tonight/tomorrow.

pvo

On 6/17/11 7:03 PM, "Thorsten von Eicken"  wrote:

>We're very much looking forward to the new "portable configuration
>drive" functionality and would like to provide feedback. If this is not
>the best forum, please point me to it.
>
>The blueprint
>is: https://blueprints.launchpad.net/nova/+spec/configuration-drive
>We reviewed the initial work
>in: 
>https://github.com/ChristopherMacGown/nova/commit/47c041a642ff32085b3140475d7a2a62dcb62c1a
>
>Feedback:
>
>1. It is not always obvious how to enumerate attached devices and the
>document doesn't explicitly cover how we'd determine which attached
>device represents the configuration volume. Specifically consider
>Windows as well.
>
>2. Will the configuration drive be available at boot time on a new
>instance? Or does it appear "later" (that would be bad)?

>
>3. How does one send the configuration drive content?  What is the API
>call where we provide the configuration information and what is the
>expected format?



>
>4. It looks like the configuration content is placed into a config.json
>file, is this correct? Is that a blob passed through the API? Does it
>have to be json or is that just a convention? What is the max size?
>
>5. What are the permissions required to access the configuration drive?
>Would be ideal for the content to be root/administrator accessible only
>(because it may contain credentials). Can this be influenced via the API?
>
>6. Is this device going to be read-only?
>
>7. We need to be able to change the configuration content when
>stopping and starting instances; will this be possible?
>
>8. There is a related spec: 'instance-transport'. Is that still
>relevant? Or superseded by the configuration drive?
>
>9. Will the configuration drive contain any metadata? E.g. IP addresses,
>names of available volumes, instance ID, etc.
>
>Thanks much!
>Thorsten - CTO RightScale
>
>
>
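
Questions 1 and 4 above ask how a guest finds the drive and what the content
looks like. Below is a rough sketch of how a Linux guest might locate and
read it, assuming the drive is exposed as a block device with a known
filesystem label and a config.json at its root; the label, mount point, and
file name are assumptions, since the blueprint has not settled them (and a
Windows guest would need a different discovery path entirely).

import json
import os
import subprocess

# All of these values are assumptions for the sketch; the blueprint has not
# settled the label, the mount point, or the file name.
CONFIG_LABEL = "config-drive"
MOUNT_POINT = "/mnt/config"
CONFIG_FILE = "config.json"


def read_config_drive():
    # On Linux, udev exposes labelled filesystems under /dev/disk/by-label.
    device = os.path.join("/dev/disk/by-label", CONFIG_LABEL)
    if not os.path.exists(device):
        raise RuntimeError("no configuration drive found")

    os.makedirs(MOUNT_POINT, exist_ok=True)
    # Mount read-only; question 6 asks whether the device itself is read-only.
    subprocess.check_call(["mount", "-o", "ro", device, MOUNT_POINT])
    try:
        with open(os.path.join(MOUNT_POINT, CONFIG_FILE)) as f:
            return json.load(f)
    finally:
        subprocess.check_call(["umount", MOUNT_POINT])


if __name__ == "__main__":
    print(read_config_drive())
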





Re: [Openstack] Cross-zone instance identifiers in EC2 API - Is it worth the effort?

2011-07-11 Thread Paul Voccio
I believe we discussed this in the December timeframe. I'm still a fan of
the idea. 

On 7/11/11 2:37 PM, "Eric Day"  wrote:

>We did discuss using IPv6 addresses as IDs months ago (IRC and email),
>but I don't remember why we decided not to. It may have been due to
>current adoption. I think it was pvo who originally had the idea.
>
>-Eric
>
>On Mon, Jul 11, 2011 at 07:24:39PM +, Chris Behrens wrote:
>> 
>> On Jul 11, 2011, at 12:01 PM, Ed Leafe wrote:
>> 
>> > On Jul 11, 2011, at 2:04 PM, Eric Day wrote:
>> > 
>> >>> How is
>> >>> 
>> >>> nova--
>> >>> 
>> >>> any different than:
>> >>> 
>> >>> ----
>> >>> 
>> >>> Where // (or some subset of them) are
>>reserved/regulated?
>> >> 
>> >> Nothing, if -- is a full UUID. If we compare to
>> >> swift, the account prefix is a UUID too. The account prefix could be
>> >> fixed for a session or passed in to every request depending on how
>> >> things are decided.
>> > 
>> >
>> > 
>> >It's a shame that the ipv6 proposal was never more fully considered.
>>That would handle the uniqueness, with the added benefit of providing
>>simple zone routing via DNS, with the exact same 128-bit/32 char size.
>> 
>> I don't remember that proposal, but that's such a neat idea.  Was
>>anything discussed at all in Santa Clara regarding encoding zone
>>information in the instance identifier?  I apparently missed the
>>instance identifier discussion somehow.
>> 
>> - Chris
>> 
>> 
>
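
To make the size point above concrete: a UUID and an IPv6 address are both
128 bits, so an instance identifier could round-trip losslessly between the
two notations. A toy Python sketch (the zone-prefix split shown is made up
purely for illustration; it is not a layout anyone proposed):

import ipaddress
import uuid

# A 128-bit instance identifier, viewed both ways.
instance_id = uuid.uuid4()
as_ipv6 = ipaddress.IPv6Address(instance_id.int)

print("uuid:", instance_id)   # 32 hex chars (plus dashes)
print("ipv6:", as_ipv6)       # the same 128 bits in IPv6 notation

# If, hypothetically, the top 48 bits were reserved as a routable zone
# prefix, the zone could be recovered straight from the identifier:
zone_prefix = instance_id.int >> 80
print("zone prefix:", hex(zone_prefix))

# The round trip back to a UUID is lossless.
assert uuid.UUID(int=int(as_ipv6)) == instance_id
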