Re: [Openstack] Progress on wiki migration to Mediawiki

2013-01-21 Thread Ryan Lane
Image location is fixed and the redirects are also in. Note that the
redirects are set up for https://wiki.openstack.org, so they aren't
currently testable (though I did manually change the redirect temporarily
to test it). The redirects have the following behavior:

  wiki.openstack.org/Article -> wiki.openstack.org/wiki/Article

All articles should redirect properly, except for /wiki and /w, if they
exist.
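The mapping above (with its /wiki and /w exceptions) boils down to a small path-rewriting rule. A minimal Python sketch of that logic, purely illustrative — the real redirects live in the web server configuration:

```python
# Hypothetical sketch of the redirect behavior described above:
# old-style /Article URLs go to /wiki/Article, while paths already
# under /wiki or /w (where MediaWiki itself lives) are left alone.

EXCLUDED_PREFIXES = ("/wiki", "/w")

def redirect_target(path):
    """Return the new URL path, or None if no redirect applies."""
    if path == "/" or any(
        path == p or path.startswith(p + "/") for p in EXCLUDED_PREFIXES
    ):
        return None
    return "/wiki" + path
```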

- Ryan


On Mon, Jan 21, 2013 at 6:13 AM, Thierry Carrez wrote:

> Anne Gentle wrote:
> > - Make the landing page mo' better. (Thierry Carrez, ttx) While we won't
> > be able to have the migration make the columns on all the pages lovely,
> > he can make the first page beautious again.
>
> I pushed an optimized page at https://wiki-staging.openstack.org/wiki.
> There is still an issue with uploading images (404 on whatever you
> upload) so I could not add those. Would be great to fix that before we
> flip the switch.
>
> --
> Thierry Carrez (ttx)
> Release Manager, OpenStack
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Wiki content imported into MediaWiki - please check

2012-12-19 Thread Ryan Lane
On Wed, Dec 19, 2012 at 10:57 AM, Ryan Lane  wrote:

> My vote is to edit the pages to fix them. The conversion script is a giant
> hideous mess of perl. Overall I think we'll be able to fix most issues
> quickly in a doc sprint by editing the pages.
>
> for <> we can use:
>
> 1. An extension: http://www.mediawiki.org/wiki/Extension:SubPageList
> 2. A built-in feature: {{Special:PrefixIndex/Ceilometer}}
>
> SubPageList is more flexible in how the output is displayed and doesn't
> disable the page cache like PrefixIndex does. I'll add that extension today.
>
>
I just added the extension, and used the Ceilometer page as an example of
enabling it.

- Ryan


Re: [Openstack] Wiki content imported into MediaWiki - please check

2012-12-19 Thread Ryan Lane
On Wed, Dec 19, 2012 at 10:18 AM, Nicolas Barcet  wrote:

>
> At first glance, the Ceilometer main page [1] lost from [2]:
>
>  * last column from each table
>  * colors in tables
>  * did not convert macro <> to list sub pages
>
> [1] https://wiki-staging.openstack.org/wiki/Ceilometer
> [2] http://wiki.openstack.org/Ceilometer
>
> What's the recommended path:
>  1/ edit the pages to fix them or
>  2/ wait for the conversion macro to be improved?
>
> Given the loss of data in tables, I would hope for 2...
>
>
My vote is to edit the pages to fix them. The conversion script is a giant
hideous mess of perl. Overall I think we'll be able to fix most issues
quickly in a doc sprint by editing the pages.

for <> we can use:

1. An extension: http://www.mediawiki.org/wiki/Extension:SubPageList
2. A built-in feature: {{Special:PrefixIndex/Ceilometer}}

SubPageList is more flexible in how the output is displayed and doesn't
disable the page cache like PrefixIndex does. I'll add that extension today.

- Ryan


Re: [Openstack] Wiki content imported into MediaWiki - please check

2012-12-19 Thread Ryan Lane
On Wed, Dec 19, 2012 at 2:27 AM, Daniel P. Berrange wrote:

>
> The migration has not handled the '<<BR>>' syntax that Moin uses to
> insert line breaks. This causes a bit of a mess, eg look at
> the " Things to avoid when creating commits " section here
>
>   https://wiki-staging.openstack.org/wiki/GitCommitMessages
>
> which is full of stray '<' and '>' characters
>
>
MediaWiki uses <br />. We can fix that with a global search/replace.
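A global search/replace like that could be sketched as follows, assuming the stray characters are unconverted MoinMoin <<BR>> macros; the function name and approach are illustrative, not the actual cleanup script (in practice this would run over every page via a maintenance script or an extension):

```python
import re

def fix_moin_linebreaks(wikitext):
    """Replace leftover MoinMoin <<BR>> macros with MediaWiki's <br />."""
    return re.sub(r"<<\s*BR\s*>>", "<br />", wikitext)
```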

- Ryan


Re: [Openstack] Wiki content imported into MediaWiki - please check

2012-12-18 Thread Ryan Lane
On Tue, Dec 18, 2012 at 2:21 AM, John Garbutt wrote:

> One more thing I spotted around links.
>
> In the migrated wiki:
> [[XenServer/DevStack|XenServer and [[DevStack
>
> Clearly it's a simple fix to this:
> [[XenServer/DevStack|XenServer and DevStack]]
>
> I guess this extra link (that is obviously not valid syntax, and wasn't in
> the original page) got added when the page was imported?
>
>
Yep, that is the case. Thankfully there aren't too many of these to handle.
If we do a doc sprint the day of transition, it'll likely take 2-3 hours to
fix all major glaring issues like this.
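The mechanical part of a sprint like that can be scripted. A hypothetical sketch for the broken-link pattern quoted above, assuming the mangled links close with doubled brackets (i.e. [[A|text [[B]]]]):

```python
import re

def fix_nested_links(wikitext):
    """Drop a stray inner [[...]] inside a link label,
    e.g. [[A|text [[B]]]] -> [[A|text B]]. Illustrative only."""
    return re.sub(
        r"\[\[([^\]|]+)\|([^\[\]]*)\[\[([^\]]+)\]\]\]\]",
        r"[[\1|\2\3]]",
        wikitext,
    )
```

Already-correct links don't match the pattern, so running it twice is harmless.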

- Ryan


Re: [Openstack] Wiki content imported into MediaWiki - please check

2012-12-18 Thread Ryan Lane
> The most obvious issue is how ugly the new main page is :) The loss of
> image inclusion and columns transformed an admittedly not perfect page
> (disclaimer: I authored it) into something unreadable and very
> unwelcoming. Note that it's probably the only page that uses column
> layout, so maybe we can just special case the main page rather than
> write a generic column-handler.
>
>
I just did a basic migration; I didn't edit anything at all. It's not
simple to migrate MoinMoin revisions that have occurred since the initial
import. The plan was to make sure all the content and revisions made it
over, then to fix formatting and such after we've switched wikis.

Wiki migrations are never pretty ;).


> Image inclusion however is a bit more problematic, as it's being used in
> several pages (like
> http://wiki.openstack.org/Documentation/Translation/Status) that are now
> broken.
>
>

> The second most obvious issue is the loss of the OpenStack theme on all
> pages: it would be good to theme the wiki before we complete the
> transition.
>
>
I'd love to do this, but I'm not a front-end developer. Anyone want to
volunteer for this?


> Other issues to watch for before completing the transition:
>
> * Broken table layouts, and no more cell background colors: See examples
> at https://wiki-staging.openstack.org/wiki/GrizzlyReleaseSchedule or
> https://wiki-staging.openstack.org/wiki/Releases
>
>
Should be relatively easy to fix.


> * URL: The new wiki URL appear under a wiki/ subdirectory:
> https://wiki-staging.openstack.org/wiki/Projects -- would be great to
> make sure that once the migration is completed they can still be
> accessed from previous URLs (http://wiki.openstack.org/Projects) so that
> we don't have to change URL pointers from other sites.
>
>
This is a good example of why content should never be served directly from
/. This is doable, but it's going to suck: we're going to have to add
exceptions for a bunch of things (like index.php, api.php, /w/, /images/,
etc.). That said, links should never break, so we'll bite the bullet and
do this.


> * <<TableOfContents>>: we used that macro on a lot of pages -- but I
> guess we could abandon them
>
>
This is a simple search and replace. I can install an extension to let us
search/replace across all pages. MediaWiki creates a TOC by default if a
page has more than three headings; it can be forced with __FORCETOC__ or
suppressed with __NOTOC__.


> * Protected pages: A limited number of pages are protected in the old
> wiki (mostly governance pages, main page and list of official projects)
> to avoid random edits -- will this feature be kept ?
>
>
Yep. We can either make a group with the permissions, or give admin to the
small number of folks that need it.


>  > We're going to leave the wiki up for a couple days in this state. If
> the content is mostly agreeable and we decide to press forward, I'll
> migrate the data again and we'll replace MoinMoin.
>
> I fear we are more than just a couple of days away from being able to
> migrate content in a "mostly agreeable" way, but you're the expert :)
>
>

I meant the content being agreeable, not the style and such. Wiki
migrations often take months, not days. After the switchover we can have a
sprint to fix the most glaring issues.

That said, I'm willing to push the migration back if needed. Waiting on a
theme is worth pushing it back for.

- Ryan


[Openstack] Wiki content imported into MediaWiki - please check

2012-12-17 Thread Ryan Lane
I've just finished importing the content from the MoinMoin wiki into the
MediaWiki instance. Please check the content:

https://wiki-staging.openstack.org/wiki/Main_Page

We're using a self-signed certificate for now. We are ordering a proper
certificate, but even that cert will still appear invalid until we've
switched to the correct URL.

Also note that the migration script doesn't map from MoinMoin to MediaWiki
perfectly and we'll need to clean up some of the content manually. We'll
need to create some templates for missing features too (like columned
layout).

We're going to leave the wiki up for a couple days in this state. If the
content is mostly agreeable and we decide to press forward, I'll migrate
the data again and we'll replace MoinMoin.

- Ryan


Re: [Openstack] Upcoming wiki migration to Mediawiki

2012-12-13 Thread Ryan Lane
> There aren't any code examples in the wiki that I know of. If you have
> examples we can certainly find a way to indicate Apache 2.0 for code, I
> don't find this problematic.
>
>
Yeah, we can wrap a <source> block in a template
that also adds in license text for any code. Should be easy enough.

- Ryan


Re: [Openstack] Wikipedia page

2012-11-17 Thread Ryan Lane
On Sat, Nov 17, 2012 at 3:05 AM, Laurence Miao wrote:

> Hi Michael,
>
> I'd love to do something to make this wiki better, if I could.
> Do we have any doc/wiki task force to coordinate all the document stuff
> about OpenStack?
>
> It would be smooth/easy to have a team take care of them, and I will
> definitely join the team.
>
>
Remember to disclose any conflict of interest on the talk page and to cite
any information that's added or your changes will likely be reverted.

- Ryan


Re: [Openstack] Future of Launchpad OpenStack mailing list (this list)

2012-09-04 Thread Ryan Lane
> Of the As, Option A1 in particular is my preference.
>
> However, I've heard a lot of talk about people wanting a "users" and
> "operators" list to be merged -- with enough support for that, I would
> be happy with Option B.
>

+1

> Do we have information on the type/number of discussions that are
> "user" and not "operator"? "general" and not "user" or "operator"?
>
> I'd love to hear feedback from our community members on their views of
> what's what (at the very least, even confusion will help inform a
> decision).
>

API implementers and operators are the two classes of users I can
think of from the list's perspective. Cloud end-users would likely not
ask questions on the list.

> One thing to keep in mind is that the more divisions there are in a
> set of things which are conceptually similar, the greater amount of
> confusion that will result...
>

This is what I'm most worried about. When the IRC channels were split,
the support channel (#openstack) became a ghost town. I'd really hate
for that to happen to the lists for support as well. If most of the
developers only subscribe to the -dev list, then the support list will
probably suffer quite a bit.

- Ryan



Re: [Openstack] Nova bindings for ... PHP?

2012-09-03 Thread Ryan Lane
On Mon, Sep 3, 2012 at 8:53 AM, Anne Gentle  wrote:
> Glen Campbell is working on a PHP library here (and would welcome
> reviewers I'm sure).
>
> https://github.com/rackspacedrg/raxsdk-php/blob/master/docs/userguide/index.md
>

There's also a fairly Wikimedia-specific and incomplete implementation:





- Ryan



Re: [Openstack] A plea from an OpenStack user

2012-08-29 Thread Ryan Lane
> 4. Keystone's LDAP implementation in stable was broken. It returned no
> roles, many values were hardcoded, etc. The LDAP implementation in
> nova worked, and it looks like its code was simply ignored when auth
> was moved into keystone.
>

I did forget to mention one thing about this. The keystone devs,
especially Adam Young, were very responsive and we worked together to
fix the issues in stable and ensured they were also fixed in master. A
million thanks for the help there. Help like this makes life in the
project way easier.

The process for getting the changes into stable was kind of a pain,
but that's another email completely.

- Ryan



[Openstack] My diablo to essex upgrade process (was: A plea from an OpenStack user)

2012-08-28 Thread Ryan Lane
> It would be fascinating (for me at least :)) to know the upgrade
> process you use - how many stages you use, do you have multiple
> regions and use one/some as canaries? Does the downtime required to do
> an upgrade affect you? Do you run skewed versions (e.g. folsom nova,
> essex glance) or do you do lock-step upgrades of all the components?
>

This was a particularly difficult upgrade, since we needed to change
so many things at once.

We did a lock-step upgrade this time around. Keystone basically
required that. As far as I could tell, if you enable keystone for
nova, you must enable it for glance. Also, I know that the components
are well tested for compatibility within the same release, so I
thought it would be best to not include any extra complications.

I did my initial testing in a project within my infrastructure (hooray
for inception). After everything worked in a test setup and was
puppetized, I tested on production hardware. I'm preparing a region in
a new datacenter, so this time I used that hardware for
production-level testing. In the future we're going to set aside a
small amount of cheap-ish hardware for production-level testing.

This upgrade required an operating system upgrade as well. I took the
following steps for the actual upgrade:

1. Backed up all databases, and LDAP
2. Disabled the OpenStackManager extension in the controller's wiki
(we have a custom interface integrated with MediaWiki)
3. Turned off all openstack services
4. Made the required LDAP changes needed for Keystone's backend
5. Upgraded the controller to precise, then made required changes (via
puppet), which includes installing/configuring keystone
6. Upgraded the glance and nova databases
7. Upgraded the network node to precise, then made required changes
(via puppet) - this caused network downtime for a few minutes during
the reboot and puppet run
8. Upgraded a compute node that wasn't in use to precise, made
required changes (via puppet), and tested instance creation and
networking
9. Upgraded a compute node that was in use, rebooted a couple
instances to ensure they'd start properly and have proper networking,
then rebooted all instances on the node
10. Upgraded the remaining compute nodes and rebooted their instances

I had notes on how to rollback during various phases of the upgrade.
This was mostly moving services to different nodes.

Downtime was required because of the need to change OS releases. That
said, my environment is mostly test and development and some
semi-production uses that can handle downtime, so I didn't put a large
amount of effort into completely avoiding downtime.

> For Launchpad we've been moving more and more to a model of permitting
> temporary skew so that we can do rolling upgrades of the component
> services. That seems in-principle doable here - and could make it
> easier to smoothly transition between versions, at the cost of a
> (small) amount of attention to detail while writing changes to the
> various apis.
>

Right now it's not possible to run multiple versions of openstack
services as far as I know. It would be ideal to be able to run all
folsom and grizzly services (for instance) side-by-side while the
upgrade is occurring. At minimum it would be nice for the next release
to be able to use the old release's schema, so that upgrades can be
attempted in a way that's much easier to roll back.

- Ryan



Re: [Openstack] A plea from an OpenStack user

2012-08-28 Thread Ryan Lane
>> My plea is for the developers to think about how their changes are
>> going to affect production deployments when upgrade time comes.
>
> I for one would like to see the "ops" bug tag used more to try and track
> these issues. If an upgrade makes something harder for operations
> people, developers at the very least should create an ops bug to fix
> that so that its at least tracked.
>

This seems like a good plan for a couple reasons: it tracks them, and
even if they aren't fixed before a release, it allows us to inform
people of possible complications during the upgrade.

This still requires people to think about how their changes may affect
ops, though.

- Ryan



Re: [Openstack] A plea from an OpenStack user

2012-08-28 Thread Ryan Lane
> There was talk of trying to set up test infrastructure that would roll out 
> Essex and then upgrade it to Folsom in some automated fashion so we could 
> start learning where it breaks. Was there any forward momentum on that?
>

This would be awesome. Wrapping automated tests around upgrades would
greatly improve the situation. Most of the issues that ops runs into
during upgrades are unexpected changes, which are the same things that
will likely be hit when testing upgrades in an automated way.

- Ryan



[Openstack] A plea from an OpenStack user

2012-08-28 Thread Ryan Lane
Yesterday I spent the day finally upgrading my nova infrastructure
from diablo to essex. I've upgraded from bexar to cactus, and cactus
to diablo, and now diablo to essex. Every single upgrade is becoming
more and more difficult. It's not getting easier, at all. Here are some
of the issues I ran into:

1. Glance changed from using image numbers to uuids for images. Nova's
references to these weren't updated, and there was no automated way to do
so. I had to map the old values to the new values from glance's
database, then update them in nova.
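That mapping step amounts to joining glance's old integer ids to the new uuids and rewriting nova's image references. A hypothetical sketch of the core transform (names are illustrative; the real fix ran directly against the databases):

```python
def remap_image_refs(nova_image_refs, id_to_uuid):
    """Given nova rows as (instance_id, old_image_id) pairs and a
    mapping of old glance ids to new uuids, return updated pairs.

    Rows whose image id has no mapping are returned unchanged so
    they can be inspected by hand.
    """
    return [
        (instance_id, id_to_uuid.get(image_id, image_id))
        for instance_id, image_id in nova_image_refs
    ]
```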

2. Instance hostnames are changed every single release. In bexar and
cactus it was the ec2 style id. In diablo it was changed and hardcoded
to instance-. In essex it is hardcoded to the instance
name; the instance's ID is configurable (with a default of
instance-, but it only affects the name used in
virsh/the filesystem. I put a hack into diablo (thanks to Vish for
that hack) to fix the naming convention as to not break our production
deployment, but it only affected the hostnames in the database,
instances in virsh and on the filesystem were still named
instance-, so I had to fix all libvirt definitions and
rename a ton of files to fix this during this upgrade, since our
naming convention is the ec2-style format. The hostname change still
affected our deployment, though. It's hardcoded. I decided to simply
switch hostnames to the instance name in production, since our
hostnames are required to be unique globally; however, that changes
how our puppet infrastructure works too, since the certname is by
default based on fqdn (I changed this to use the ec2-style id). Small
changes like this have giant rippling effects in infrastructures.

3. There used to be global groups in nova. In keystone there are no
global groups. This makes performing actions on sets of instances
across tenants incredibly difficult; for instance, I did an in-place
ubuntu upgrade from lucid to precise on a compute node, and needed to
reboot all instances on that host. There's no way to do that without
database queries fed into a custom script. Also, I have to have a
management user added to every single tenant and every single
tenant-role.

4. Keystone's LDAP implementation in stable was broken. It returned no
roles, many values were hardcoded, etc. The LDAP implementation in
nova worked, and it looks like its code was simply ignored when auth
was moved into keystone.

My plea is for the developers to think about how their changes are
going to affect production deployments when upgrade time comes.

It's fine that glance changed its id structure, but the upgrade should
have handled that. If a user needs to go into the database in their
deployment to fix your change, it's broken.

The constant hardcoded hostname changes are totally unacceptable; if
you change something like this it *must* be configurable, and there
should be a warning that the default is changing.

The removal of global groups was a major usability killer for users.
The removal of the global groups wasn't necessarily the problem,
though. The problem is that there were no alternative management
methods added. There's currently no reasonable way to manage the
infrastructure.

I understand that bugs will crop up when a stable branch is released,
but the LDAP implementation in keystone was missing basic
functionality. Keystone simply doesn't work without roles. I believe
this was likely due to the fact that the LDAP backend has basically no
tests and that Keystone light was rushed in for this release. It's
imperative that new required services at least handle the
functionality they are replacing, when released.

That said, excluding the above issues, my upgrade went fairly smoothly
and this release is *way* more stable and performs *way* better, so
kudos to the community for that. Keep up the good work!

- Ryan



Re: [Openstack] multiple LDAPs in OpenStack

2012-08-20 Thread Ryan Lane
On Mon, Aug 20, 2012 at 1:52 PM, pat  wrote:
> Hello,
>
> I'm new to this list and to OpenStack. I want to ask a question: is it
> possible to use one LDAP server per tenant? I've searched the web, but
> didn't find the answer.
>

In keystone this is not currently possible.

- Ryan



Re: [Openstack] instance evacuation from a failed node (rebuild for HA)

2012-08-10 Thread Ryan Lane
> We have submitted a patch https://review.openstack.org/#/c/11086/ to address
> https://blueprints.launchpad.net/nova/+spec/rebuild-for-ha that simplifies
> recovery from a node failure by introducing an API that recreates an
> instance on *another* host (similar to the existing instance 'rebuild'
> operation). The exact semantics of this operations varies depending on the
> configuration of the instances and the underlying storage topology. For
> example, if it is a regular 'ephemeral' instance, invoking it will respawn from
> the same image on another node while retaining the same identity and
> configuration (e.g. same ID, flavor, IP, attached volumes, etc). For
> instances running off shared storage (i.e. same instance file accessible on
> the target host), the VM will be re-created and point to the same instance
> file while retaining the identity and configuration. More details are
> available at http://wiki.openstack.org/Evacuate.
>

If the instance is on shared storage, what does recreate mean? Delete
the old instance and create a new instance, using the same disk image?
Does that mean that the new instance will have a new nova/ec2 id? In
the case where DNS is being used, this would delete the old DNS entry
and create a new DNS entry. This is lossy. If shared storage is
available, the only think that likely needs to happen is for the
instance's host to be updated in the database, and a reboot issued for
the instance. That would keep everything identical, and would likely
be much faster.
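At the database level, the lighter-weight alternative suggested here is roughly a one-column update followed by a reboot request. A hypothetical sketch, assuming nova's instances table with its host column (the helper name is illustrative):

```python
def reassign_host_sql(instance_uuid, new_host):
    """Build a parameterized UPDATE that repoints an instance at a
    new compute host; a reboot of the instance would follow so the
    new host's nova-compute redefines and starts it."""
    return (
        "UPDATE instances SET host = ? WHERE uuid = ?",
        (new_host, instance_uuid),
    )
```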

- Ryan



Re: [Openstack] EC2 api and tenants

2012-08-02 Thread Ryan Lane
On Thu, Aug 2, 2012 at 1:23 PM, Mitchell Broome
 wrote:
> I'm using essex 2012.1 and I'm running into an issue with tenant
> separation using the ec2 api.  I end up having to give a user the
> 'admin' role in keystone to create instances within a tenant.  I can
> live with that but the problem is, now that the user has 'admin', they
> also see all of the instances including ones from other tenants via a
> describe_instances().
>
> If I only give them the 'Member' role, they can only see the instances
> within their default tenant but they can't create instances.  Also, if
> they only have 'Member', I'm able to create instances via horizon
> manually.
>
> I'm assuming I'm missing some combination of roles I need to set up to
> allow a user to create instances in their default tenant but not see
> other instances in other tenants.
>

So far, from what I can tell, you need to add custom roles (or
continue using sysadmin and netadmin), and add these roles to the
proper actions in policy.json.

- Ryan



Re: [Openstack] [keystone] Multi-tenants per user, authentication tokens and global roles

2012-07-27 Thread Ryan Lane
> You can use a token to get a token.  Look at the authenticate code in
> keystone/service.py
>
> Have the user initially get a non-tenant specific token.  Pass that in the
> x-auth header to POST /tokens/ along with a tenantid  and you will get a new
> one scoped to the tenant
>

Ah. This is perfect, thanks!
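The exchange described above is a plain POST to /tokens. A minimal sketch of building the request body for keystone's v2.0 API (endpoint and header handling omitted; the helper name is illustrative):

```python
import json

def rescope_request_body(unscoped_token, tenant_id):
    """Build the v2.0 POST /tokens body that trades an unscoped
    token for one scoped to the given tenant."""
    return json.dumps({
        "auth": {
            "token": {"id": unscoped_token},
            "tenantId": tenant_id,
        }
    })
```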

>> I'm using the LDAP backend. I'm assuming I'm going to have to modify
>> the authenticate method to handle this. Would doing this be enough to
>> make this work, or will I need to patch more extensively for this
>> solution?
>
>
> Tokens are not stored in LDAP.  There are separate back ends for: identity,
> tokens, and service catalog.  LDAP is only wired up for Identity.  For
> Token, the default is KVS, which is in memory only. You probably want to use
> memcached or SQL for the token back end, otherwise a reboot of the keystone
> server will lose you all the tokens.
>

I was planning on hacking in a method of pulling a long-lived token
from LDAP, but your previous comment makes that unneeded.

- Ryan



Re: [Openstack] User Friendly Development -was- Fwd: [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-27 Thread Ryan Lane
>> I do wonder if it would make sense to gather user feedback and goals before 
>> the summit, like the day (or week) before, to help provide some priorities 
>> (from their perspective) to consider going into the summit.
>
> This does seem valuable, although keep in mind that most users are a release 
> behind, so the majority of their feedback will hopefully have been handled 
> already :)
>

Unfortunately not. Every release I run into a ton of problems, patch
them locally, then push them upstream so that I won't hit the same
issue next release. I just went through this with the essex version of
keystone. Another example is that keystone's auth model is
fundamentally different from what was in nova, and completely breaks
my use case (which is likely a pretty normal use case for private
clouds). I haven't fully tested nova or glance for upgrade, so I'm
sure I'll be pushing in fixes for that too.

It would have been ideal for me to give this feedback during the
development cycle, but it's pretty difficult managing a product launch
and keeping up with the design and testing of software that's in
development (especially when it undergoes a nearly complete rewrite at
the end of the dev cycle). I feel that many end-users are in a similar
position.

Users definitely need a better mechanism to give feedback early, and
to have their current production issues handled. The start of the
design summit is good, but it would also be nice to collect feedback
after the summit as well.

- Ryan



Re: [Openstack] [keystone] Multi-tenants per user, authentication tokens and global roles

2012-07-26 Thread Ryan Lane
> Not in Essex.  When we discussed the Domains blueprint,  one issue that I
> brought up was nested groups/projects.  That would solve your problem.  It
> is not currently being developed.
>

Ok. I can deal with handling tens of thousands of tokens, but I need
some way to ensure a user doesn't need to continuously authenticate
when changing between projects. I'm totally fine saving a long-lived
token that can be used for authentication, then re-authenticating with
that token to receive other project tokens. This way the web interface can use
the long-lived token on the user's behalf for authentication between projects.

I'm using the LDAP backend. I'm assuming I'm going to have to modify
the authenticate method to handle this. Would doing this be enough to
make this work, or will I need to patch more extensively for this solution?

I definitely want to solve this legitimately for folsom or grizzly as
this completely breaks my use case (and likely the use case of most
private cloud users).

> Again, this is really a group nesting problem.  I am not sure if the domain
> blueprint would help you out here:
> https://review.openstack.org/#/c/8114/
> https://blueprints.launchpad.net/keystone/+spec/keystone-domains
> http://etherpad.openstack.org/keystone-domains
>

I can likely live with adding/removing admins from groups. I'd prefer
not to, but we require this to some extent right now anyway. I'd
definitely like to resolve this by grizzly at least, though.

- Ryan



[Openstack] [keystone] Multi-tenants per user, authentication tokens and global roles

2012-07-26 Thread Ryan Lane
I'm working on upgrading to essex, which means I need to start using
keystone. My use case seems to not fit keystone very well, though...

In my environment, one user can be a member of many projects (some
users are in up to 20-30 projects). Management of projects is done
nearly completely through the web interface, and users may work on
resources in multiple projects at the same time. Our web interface can
show all or a subset of a user's projects' resources in the same view.

In Nova, using the EC2 API, I could query all resources for a user on
their behalf using an admin user, or I could use their access/secret
key and change the tenant for requesting each project.

From what I can tell in Keystone, when a user authenticates, they get
a token directly linked with a tenant. If I want to do API calls on a
user's behalf in a tenant, I must authenticate them for that tenant.
It seems there's no way for me to make requests on a user's behalf for
multiple projects without authenticating them for every single tenant.
Is this the case? Is there any way for me to handle this? I'd really
like to avoid authenticating a user 30 times on login, then needing to
store all 30 of their tokens.

I have another issue as well. My environment is meant to be integrated
and more of a private-style cloud. We have a group of administrators
that should be able to manage all instances, networks, etc. In Nova's
auth there were global groups. In Keystone there are no global groups.
Will this ever be added into keystone? It's really annoying to need to
constantly add/remove ourselves from projects to manage them.

- Ryan



Re: [Openstack] [Keystone] Quotas: LDAP Help

2012-07-17 Thread Ryan Lane
> I haven't been thinking about quotas, so bear with me here. A few thoughts:
>
> Certain deployments might not be able to touch the LDAP backend.  I am
> thinking specifically where there is a corporate AD/LDAP server.  I tried to
> keep the scheme dependency simple enough that it could be layered onto a
> read-only scenario.  If we put quotas into LDAP,  it might break on those
> deployments.
>

Many, many deployments won't be able to. Applications should generally
assume they are read-only in regards to LDAP.

> I can see that we don't want to define them in the Nova database, as Swift
> might not have access to that, and swift is going to be one of the primary
> consumers of Quotas.  I am assuming Quantum will have them as well.
>
> As you are aware, there is no metadata storage in the LDAP driver, instead
> it is generated from the tenant and role information on the fly.  There is
> no place to store metadata in "groupOfNames", which is the lowest (common
> denominator) grouping used for Tenants.  Probably the most correct thing to
> do would be to use a "seeAlso"  that points to where the quota data is
> stored.
>

Let's try not to force things into attributes if possible.

When LDAP is used, is the SQL backend not used at all? Why not store
quota info in Keystone's SQL backend, but pull user info from LDAP,
when enabled?

We should only consider storing something in LDAP if it's going to be
reused by other applications. LDAP has a strict schema for exactly
this purpose. If the quota information isn't directly usable by other
applications we shouldn't store it in LDAP.

Many applications with an LDAP backend also have an SQL backend, and
use the SQL as primary storage for most things, and as a cache for
LDAP, if it's used. I think this is likely a sane approach here, as
well.
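The SQL-as-cache-for-read-only-LDAP pattern mentioned above can be sketched as a simple read-through cache; the dict-backed stores and field names here are stand-ins, not Keystone's actual backends:

```python
# Read-through cache sketch: user records are served from a local
# SQL-style store when present, and fetched from (read-only) LDAP on a
# miss. All storage and attribute names are illustrative.

class UserCache:
    def __init__(self, ldap_lookup):
        self._sql = {}            # stands in for the SQL backend
        self._ldap = ldap_lookup  # read-only LDAP lookup function

    def get_user(self, username):
        if username not in self._sql:
            # miss: consult LDAP once, then cache the entry locally
            self._sql[username] = self._ldap(username)
        return self._sql[username]

def fake_ldap(username):
    return {"uid": username, "cn": username.title()}

cache = UserCache(fake_ldap)
user = cache.get_user("rlane")
```

Quota data would then live only on the SQL side, with LDAP never written to.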

- Ryan



Re: [Openstack] [Nova] Networking changes and quantum

2012-07-10 Thread Ryan Lane
>> So, I should wait until L3 and try again, but in quantum?
>
>
> Yes, from talking to Vish a while back, the plan is that nova-network will
> be more less "feature frozen", with new features targeting Quantum.  We're
> at a bit of an awkward transition point right now, so probably best to
> continue to use your nova-network implementation off an internal branch for
> now, and then integrate with Quantum once the base L3 stuff is in.
>
>>
>> When do you expect this API to be available? I plan on backporting my
>> work to nova for diablo and essex, but I'd like to make sure I have
>> this upstream in the right place, and in the preferred way.
>
>
> The basic L3 and notification changes should be in during Folsom-3.
>

Great, thanks for the info, I'll add this into quantum in L3.

- Ryan



Re: [Openstack] [Nova] Networking changes and quantum

2012-07-07 Thread Ryan Lane
> L3 + Floating IPs are being added to Quantum in F-3 (got bumped from F-2).
>

So, I should wait until L3 and try again, but in quantum?

> I haven't looked at your patch in detail, but it seems like you're looking
> to notice when a floating-ip is allocated or deallocated, and use that to
> trigger a change to the configuration of an external BGP daemon.
>

Yes, on a per network-node basis. Also, rather than binding the IP to
the public network device, it should bind the IP to lo (to avoid ARP
issues). Binding the IPs to lo is likely doable without any changes,
though.
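For concreteness, binding a floating IP to lo rather than the public interface amounts to adding a /32 to the loopback device; the sketch below only builds the command (address invented), it does not run it:

```python
# Commands a network node would run to bind a floating IP to the
# loopback device, avoiding ARP conflicts on the public interface.
# The address is illustrative; nothing is executed here.

def bind_floating_ip_cmds(floating_ip):
    return [
        # /32 on lo: the host answers for the IP without ARPing for it
        ["ip", "addr", "add", "%s/32" % floating_ip, "dev", "lo"],
    ]

cmds = bind_floating_ip_cmds("203.0.113.10")
```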

> Assuming that's the case, I'd much rather we take the approach of creating a
> notification API that would let you build this functionality as an external
> component process that feeds off of notifications about floating IPs being
> allocated and deallocated.  We're looking at something similar for DHCP as
> well. This let's people implement custom functionality without having to
> modify the core code and config files.  We have a blueprint to add such a
> notification framework to Quantum (likely based on work in Nova), but at
> this point it's not clear when this will land (likely depends on whether
> it is used for DHCP improvements in F-3 or not).  If you're interested in
> helping out with it, let me know.
>

Well, I can write this as a plugin, based on the plugin code in
openstack-common. I thought this was a useful enough change to go in
core, though. If everything in quantum is going to be a plugin, I'm
more than happy to have this as a plugin.

When do you expect this API to be available? I plan on backporting my
work to nova for diablo and essex, but I'd like to make sure I have
this upstream in the right place, and in the preferred way.

- Ryan



[Openstack] [Nova] Networking changes and quantum

2012-07-06 Thread Ryan Lane
I'm trying to add support to nova for BGP announcements for floating
IP addresses (as an alternative to subnetting and ARP). The change
currently has a -1, since networking code is moving to quantum. It
seems that quantum doesn't have floating IP support, though. Where
should I be adding this code?

https://review.openstack.org/#/c/9255/

- Ryan



Re: [Openstack] Translation and Internationalization in OpenStack

2012-05-08 Thread Ryan Lane
> Tools
> 
>
> I know people have strong feelings and concerns on which tools are best and 
> which features matter most, so I've put together a comparison matrix.
>
> https://docs.google.com/spreadsheet/ccc?key=0Aqevw3Q-ErDUdFgzT3VNVXQxd095bFgzODRmajJDeVE
>
> It features our current solution (Launchpad) and the top two contenders 
> people have asked me to look at (Pootle and Transifex). The list of features 
> for comparison contains the concerns voiced at the summit session, those 
> voiced by the community to me, those voiced by the infrastructure team, and 
> my own experience working on translations for other open source projects 
> (such as Django).
>
> Having worked with all three tools, I would strongly suggest Transifex, 
> particularly given that we as a community have to do almost no work to 
> maintain it, it's the only tool that supports OpenStack as a "project hub" 
> with shared teams and management, and it offers us a strong crowdsourced 
> translation community.
>

You should also consider translatewiki (translatewiki.org). It's used
for a number of very large projects (MediaWiki and extensions used on
Wikimedia sites, OpenStreetMap, etc), and it has a large and active
translator community. For example, MediaWiki is very actively
translated in 100 languages, and has translation for roughly 350
languages total.

The translatewiki people are interested in hosting OpenStack since
Wikimedia Foundation is using OpenStack products, and translatewiki
cares deeply about our language support. In fact, they were the first
people to complain about nova's broken utf8 support, which prompted us
to push in fixes.

- Ryan



Re: [Openstack] OpenStack immaturity

2012-04-04 Thread Ryan Lane
> According to the statement of this article from Gartner
> group http://blogs.gartner.com/lydia_leong/2012/04/03/citrix-cloudstack-openstack-and-the-war-for-open-source-clouds/ Openstack is a
> highly immature platform.
> But why? What's make Openstack so immature?
>
> Any comments on that?
>
> Thank you in advance :)
>

I agree that it's an immature platform. That said, it's also a very
young platform and isn't that to be expected? There's a number of
things that need to be fixed before I'd ever recommend Nova's use in
production:

1. There's no upgrade path currently. Upgrading requires fairly
substantial downtime.
2. Live migration is broken. Utterly.
3. Every release fixes so many things that it's really important to
upgrade every time; however, only one release will likely be supported
every Ubuntu LTS release, meaning you're either stuck with a really
old (likely broken) version of nova, or you're stuck with a very
likely unstable version of Ubuntu. This will get easier over time, when
nova is more stable and has fewer bugs, but it's incredibly painful
right now.

That said, I feel OpenStack's strengths greatly outweigh its
immatureness. I ran a private cloud at my last organization using a
VMWare ESXi cluster. It was more mature, upgrades worked
appropriately, live migration was solid, etc. I had (and still have)
the choice to run VMWare for my current project and am extremely happy
with my choice of OpenStack. The flexibility provided by the platform
and my ability to contribute to its future make its immaturity a
non-concern. Every release gets closer and closer to a stability point
I'm comfortable with.

This article isn't bad news. In fact, I'd say it shows that
competitors see OpenStack as a fairly major threat. We should be
celebrating this ;).

- Ryan



Re: [Openstack] LDAP support in Keystone Light/redux

2012-02-09 Thread Ryan Lane
On Thu, Feb 9, 2012 at 3:29 AM, Adam Young  wrote:
> I've made some strides in the KSL  LDAP  implementation.  I've set up a
> github  clone with the code pushed:
>
>
> https://github.com/admiyo/keystone/tree/ldap
>
> The code is ugly,  as I'm in "Just get it working" mode.  Cleanup will
> happen prior to any attempt to merge with the Redux branch.  I've attempted
> to keep the same set of unit tests running as are used for the SQL backend.
>  The one delta is  Metadata, as I am not sure how (or even if) we want to
> reflect that in LDAP.  I've made those three unit tests no-ops for LDAP.
>
> There are still more API calls to implement, (Tenant_Modify for example) and
> then I'll test out against a live Open LDAP  instance.
>
> The one change I've made from the old config is that fields like URL  no
> longer have ldap_  in front of them,  so the config will look something like
>
> [ldap]
> url = ldap://localhost
> user = cn=Admin
> password = password
> backend_entities = ['Tenant', 'User', 'UserRoleAssociation', 'Role']
> suffix ='cn=example,cn=com'
>
>
>
> Feedback requested.
>

Looking through the code, it appears that using ldaps:// may work for
LDAPS support, but is LDAP w/ TLS going to be supported as well? Have
you tested LDAPS support?

- Ryan



Re: [Openstack] nova/puppet blueprint, and some questions

2012-01-31 Thread Ryan Lane
> Sorry for the slow response on this.  There has been a lot to do for e-3. In 
> any case, here are my thoughts on the subject. I am really not convinced that 
> configuration management needs to be part of nova at all.  This is stuff that 
> should be built on top of nova.  We have a bit of work to do cleaning up and 
> improving metadata to make this type of thing easier, but I don't see any big 
> needs for this to be in nova. A horizon plugin that handles this seems like 
> it would be much more interesting.
>

Then CLI users can't use this, other web frontends can't use it, and
everyone would very likely implement it differently. My hope was a
consistent interface for this kind of stuff, so that users would be
able to use it at different providers, and so that libraries could
implement it consistently.

- Ryan



Re: [Openstack] nova/puppet blueprint, and some questions

2012-01-27 Thread Ryan Lane
> … a standardized way to add integrated functionality. I don't believe
> nova core should be reimplementing/duplicating functionality and logic in
> other systems.
>
> The goal of interacting with the instances through a shared interface is a
> good one, I'm not against that, I just want to see less deep coupling to
> accomplish it.
>

Yes, a shared interface is what we'd like as well. My main motivation
for getting this moved into nova is that I want people to be able to
use the cli tools in addition to web tools. This was also my
motivation for wanting DNS support added ;).

>> I may be misunderstanding you, or my blueprint may be unclear.  Available,
>> Unavailable, and Default don't refer to the availability of classes on the
>> puppet master; rather, they refer to whether or not a class is made
>> available to a nova user for a given instance.  An 'available' class would
>> appear in the checklist in my screenshot.  An Unavailable class would not.
>> A 'default' class would appear, and be pre-checked.  In all three cases the
>> class is presumed to be present on the puppet master.
>
>
> I already asked this, but what keeps that in sync with the puppet master?
>

Nothing; this is user-defined and globally defined based on the
end-user's knowledge of the puppet repo. The puppet master has no way
of saying which variables are available, at minimum, since it's
possible for templates to require variables that aren't defined in
manifests.

> Personally, I'd rather see an integration that has a per user configuration
> to a puppet master that stays in sync than the RBAC per module.

Well, we aren't aiming at RBAC, but rather aiming at giving a user a
list of classes and variables available in a specific project. Think
of one project mostly wanting to use the CLI, another using horizon,
and another using OpenStackManager (my MediaWiki extension). Without
nova knowing what's available in a project there's no way for each
interface to display this information.
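As a rough illustration of the per-project model being discussed (all project and class names invented), the data each interface would need from nova might look like:

```python
# Per-project lists of puppet classes a user can pick, with some
# pre-checked defaults. This is a sketch of the proposed data model,
# not nova code.

project_classes = {
    "webproject": {
        "available": ["apache", "memcached"],   # shown, opt-in
        "default": ["base::firewall"],          # shown, pre-checked
    },
}

def classes_for_instance(project):
    cfg = project_classes.get(project, {"available": [], "default": []})
    return sorted(cfg["default"] + cfg["available"])

shown = classes_for_instance("webproject")
```

With this stored in nova, the CLI, horizon, and OpenStackManager could all render the same class list consistently.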

>>> I also think managing a site.pp is going to be inferior to providing an
>>> endpoint that can act as an external node tool for the puppet master.
>>> http://docs.puppetlabs.com/guides/external_nodes.html
>>
>> In which case nova would interact directly with the puppet master for
>> configuration purposes?  (I don't hate that idea, just asking for
>> clarification.)
>
>
> That's puppet's model. Whether you use a site.pp, or external nodes. I'm
> unclear how you want to do it. Can you explain how your system works now?
>>

As described above, we feel that using a non-centralized puppet system
is preferable (for us). Of course, we'd like to have broad support.

>>> One other point, that you might have thought of, but I don't see anywhere
>>> on the wiki is how to handle the ca/certs for the instances.
>>
>> I believe this (and your subsequent question) falls under the heading of "
>> Instances are presumed to know any puppet config info they need at creation
>> time (e.g. how to contact the puppet master). "  Important, but outside the
>> scope of this design :)
>
>
> Thinking through this is actually critical for any standardized puppet
> integration in my opinion. The solution is a prerequisite before considering
> anything else.

Yes, this is an issue. We'd need to figure this out. In the case where
you don't have a puppet master, this isn't necessary, but for it to be
generic, it would need to work. It's possible to sign through puppet's
API, though, so it should be easy enough to have the puppet driver
handle that, if needed.

>> So that I understand your terminology... are extensions like the quotas or
>> floating ips considered 'core nova'?
>
>
> There is not a bright line especially with the way things have evolved to
> now, but I would say floating IPs should definitely be core functionality.
> Quotas may be debatable, but I think it is defensible, though part of me
> feels like some of that kind of permission functionality might be better
> decoupled.
>
>> Thanks again for your input!  Clearly it would be best to hash this out at
>> the design summit, but I'm hoping to get at least a bit of coding done
>> before April :)
>
>
> I hope to be there. I do like the idea, I just want to do what's best for
> OpenStack.
>

Agreed. It may be that some form of hook system, or some other way of
extending nova without hacking core would be a more appropriate way of
handling things like this. Of course, an extensions system also makes
compatibility between vendors much more difficult.

- Ryan Lane



Re: [Openstack] Public and Private DNS

2012-01-06 Thread Ryan Lane
> EC2 has a feature where if an instance has a fixed IP of 10.0.0.1, and a
> floating IP of 1.2.3.4. All DNS lookups performed against the DNS name for
> 1.2.3.4, from within the same region, will return 10.0.0.1.
>
> E Hammond from Alestic probably explains it better :)
>
>> When an EC2 instance queries the external DNS name of an Elastic IP, the
>> EC2 DNS server returns the internal IP address of the instance to which the
>> Elastic IP address is currently assigned.
>
> While this has advantages for traffic accounting in EC2, it can additionally
> provide a fix for the inability of an OpenStack instance to reach its own
> floating IP address. (Try pinging a floating IP from the instance
> that floating IP is assigned to.. It will fail).
>
> I hope this explains it a tad better :)
>

This would indeed be a great feature to have, but I don't think that
it's something that should necessarily be handled by this blueprint.
This is more of a driver-level feature.

One way I can think of implementing this at the driver level is to do
it when floating IP addresses are associated with instances, or when a
DNS address is added to the associated IP. At this point the driver
can add an entry to another DNS server. Anycast can be configured to
let one server handle requests from the instances, and non-instance
requests can be handled by the normal DNS server.
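A driver-level hook of the kind described might look like the following sketch; the record store, hook name, and naming scheme are all invented for illustration:

```python
# Split-horizon sketch: when a floating IP is associated, record an
# internal A record mapping the public name to the fixed (private)
# address, so in-cloud resolvers return the internal IP.

internal_zone = {}  # name -> address, served only to instances

def on_floating_ip_associated(public_name, fixed_ip):
    internal_zone[public_name] = fixed_ip

on_floating_ip_associated("instance-1.example.org", "10.0.0.5")
```

External resolvers would keep answering with the floating IP from the normal zone; only the instance-facing (anycast) server consults `internal_zone`.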

- Ryan



Re: [Openstack] Hardware HA

2011-11-10 Thread Ryan Lane
> I know. That's what makes them a poor fit for "the cloud".
>

Meh. Private clouds will still use applications like this. I think
"the cloud" is great for cloud providers, but why limit nova's
usefulness to just cloud providers?

"The cloud" way of doing things pushes the responsibility of keeping
applications alive to the client. There's a lot of clients that don't
have this level of sophistication.

>> Hardware HA is useful for more than just poorly designed applications
>> though. I have a cloud instance that runs my personal website. I don't
>> want to pay for two (or more, realistically) instances just to ensure
>> that if my host dies that my site will continue to run. My provider
>> should automatically detect the hardware failure and re-launch my
>> instance on another piece of hardware; it should also notify me that
>> it happened, but that's a different story ;).
>
> I'm not sure I count that as High Availability. It's more like
> Eventual Availability. :)
>

So, this is one HA mode for VMware. There is also a newer HA mode that
is much more expensive (from the resources perspective) that keeps a
shadow copy of a virtual machine on another piece of hardware, and if
the primary instance's hardware dies, it automatically switches over
to the shadow copy.

Both modes are really useful. There's a huge level of automation
needed for doing things "the cloud way" that is completely
unnecessary. I don't want to have to monitor my instances to see if
one died due to a hardware failure, then start new ones, then pool
them, then depool the dead ones. I want my provider to handle hardware
deaths for me. If I have 200 web server instances, and 40 of them die
because they are on nodes that die, I want them to restart somewhere
else. It removes all the bullshit automation I'd need to do otherwise.
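The provider-side behavior being asked for reduces to an evacuation step: move instances off dead hosts onto live ones. A toy sketch (host and instance names invented, scheduling deliberately naive):

```python
# Toy provider-side HA: reschedule instances whose host is reported
# dead onto live hosts, round-robin. Real scheduling would consider
# capacity, affinity, etc.

def evacuate(placement, dead_hosts, live_hosts):
    """placement: instance -> host. Returns a new placement with
    instances moved off dead hosts."""
    new_placement = {}
    i = 0
    for instance, host in placement.items():
        if host in dead_hosts:
            new_placement[instance] = live_hosts[i % len(live_hosts)]
            i += 1
        else:
            new_placement[instance] = host
    return new_placement

moved = evacuate({"web1": "hostA", "web2": "hostB"}, {"hostA"}, ["hostC"])
```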

- Ryan Lane



Re: [Openstack] Hardware HA

2011-11-10 Thread Ryan Lane
> That's the whole point. For most interesting applications, "fast"
> automatic migration isn't anywhere near fast enough. Don't try to
> avoid failure. Expect it and design around it.
>

This assumes all application designers are doing this. Most web
applications do this fairly well, but most enterprise applications do
this very poorly.

Hardware HA is useful for more than just poorly designed applications
though. I have a cloud instance that runs my personal website. I don't
want to pay for two (or more, realistically) instances just to ensure
that if my host dies that my site will continue to run. My provider
should automatically detect the hardware failure and re-launch my
instance on another piece of hardware; it should also notify me that
it happened, but that's a different story ;).

- Ryan Lane



Re: [Openstack] OpenStack Satellite

2011-09-27 Thread Ryan Lane
On Tue, Sep 27, 2011 at 9:47 PM, John Dickinson  wrote:
> What benefits does an openstack-satellite project bring? Other than all using 
> some openstack component, what do these projects have in common that 
> justifies grouping them? For example, I know of many open source projects 
> that use swift (swauth, slogging, cyberduck, all the rackspace language 
> bindings, several iOS apps, a few dashboards), and I can't seem to see why 
> they would benefit by being grouped into one umbrella.
>

Agreed. This seems like something that would likely be better as a
page or pages in the wiki.

- Ryan



Re: [Openstack] Summit Talk: Information session on Zones? Any interest?

2011-04-14 Thread Ryan Lane
I am also interested in this.

On Thu, Apr 14, 2011 at 9:15 AM, Edward "koko" Konetzko
 wrote:
> On 04/14/2011 11:07 AM, Sandy Walsh wrote:
>>
>> I've been getting a lot of questions about Zones lately.
>>
>> How much interest is there for an informational session on Zones and, I
>> guess, Distributed Scheduler and roadmap?
>>
>> (pending an available slot at the summit ... things are filling up
>> quickly I gather)
>>
>> -S
>
> I would be really interested in a talk on zones, I know a few other people
> who would be too.
>



[Openstack] Returning the project for resources in the EC2 API

2011-03-10 Thread Ryan Lane
This is in regards to lp732924.

There currently isn't any simple way to know which resources are in
which project. Most resources return the project via some attribute:

* Instances: ownerId
* Addresses: instanceId
* Security Groups: ownerId

Volumes return an owner via status, but it's the user, not the project.

This makes it difficult to handle things in frontends, as certain
queries may return resources from multiple projects.

Is there any way that we can add this information to the EC2 API in a
somewhat consistent way (such as via a new attribute)? If not, is it
at least possible for all the resources to return the project in some
way I can parse out in a dirty hackish way (like using status in
volumes)?

Filters would help out here, in that I could specifically search for a
project; however, from a performance perspective, I'd like to be able
to search for project x, y, and z, then filter accordingly in the
frontend when necessary as well.
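The client-side filtering described above can be sketched as grouping resources by whichever attribute carries the project (ownerId for instances and security groups, per the list above); the response records here are invented:

```python
# Group EC2-style resource records by project, assuming each record
# exposes the project via some attribute (ownerId here). Records and
# project names are illustrative.

def group_by_project(resources, key="ownerId"):
    grouped = {}
    for r in resources:
        grouped.setdefault(r.get(key, "unknown"), []).append(r)
    return grouped

resources = [
    {"instanceId": "i-1", "ownerId": "project-x"},
    {"instanceId": "i-2", "ownerId": "project-y"},
    {"instanceId": "i-3", "ownerId": "project-x"},
]
by_project = group_by_project(resources)
```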

- Ryan Lane
