Re: [Openstack] [Nova] How common is user_data for instances?

2012-08-13 Thread Dan Prince


- Original Message -
> From: "Michael Still" 
> To: openstack@lists.launchpad.net, openstack-operat...@lists.openstack.org
> Sent: Saturday, August 11, 2012 5:12:22 AM
> Subject: [Openstack] [Nova] How common is user_data for instances?
> 
> Greetings.
> 
> I'm seeking information about how common user_data is for instances
> in
> nova. Specifically for large deployments (rackspace and HP, here's
> looking at you). What sort of costs would be associated with changing
> the data type of the user_data column in the nova database?
> 
> Bug 1035055 [1] requests that we allow user_data of more than 65,535
> bytes per instance. Note that this size is a base64 encoded version
> of
> the data, so that's only a bit under 50k of data. This is because the
> data is a sqlalchemy Text column.
> 
> We could convert to a LongText column, which allows 2^32 worth of
> data,
> but I want to understand the cost to operators of that change some
> more.
> Is user_data really common? Do you think people would start uploading
> much bigger user_data? Do you care?

Nova has configurable quotas on most things, so if we do increase the size of 
the DB column we should probably guard it with a configurable quota as well.

My preference would actually be to go the other way and not store user_data in 
the database at all. Unfortunately that may not be possible, since some images 
obtain user_data via the metadata service, which needs a way to look it up. 
Other methods of injecting metadata (disk injection, agents, and/or config 
drive) might not need it to be stored in the database, right?

As a simpler solution:

Would setting a reasonable (hopefully smaller) limit and returning an HTTP 400 
Bad Request when incoming requests exceed it be good enough to resolve this 
ticket? That way we wouldn't have to grow the DB column at all, and end users 
would be notified up front that their user_data is too large rather than having 
it silently truncated. The way I see it, user_data is really for bootstrapping 
instances... we probably don't need it to be large enough to hold an entire 
application.
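A rough sketch of that up-front guard (the names and the limit constant are hypothetical; Nova's actual API validation layer differs):

```python
import base64

# Hypothetical cap matching the current Text column: 65,535 bytes of
# base64-encoded user_data (a bit under 50k of raw data).
MAX_USER_DATA_B64 = 65535


def validate_user_data(user_data_b64):
    """Reject oversized or malformed user_data with an error the API
    layer can map to HTTP 400, instead of silently truncating in the DB."""
    if len(user_data_b64) > MAX_USER_DATA_B64:
        raise ValueError("400 Bad Request: user_data exceeds %d bytes"
                         % MAX_USER_DATA_B64)
    try:
        base64.b64decode(user_data_b64, validate=True)
    except Exception:
        raise ValueError("400 Bad Request: user_data is not valid base64")
    return user_data_b64
```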


> 
> Mikal
> 
> 1: https://bugs.launchpad.net/nova/+bug/1035055
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
> 



Re: [Openstack] [nova] Proposal to add Yun Mao to nova-core

2012-07-19 Thread Dan Prince
+1. Nice work Yun.

- Original Message -
> From: "Vishvananda Ishaya" 
> To: "Openstack (openstack@lists.launchpad.net) 
> (openstack@lists.launchpad.net)" 
> Sent: Wednesday, July 18, 2012 7:10:41 PM
> Subject: [Openstack] [nova] Proposal to add Yun Mao to nova-core
> 
> Hello Everyone!
> 
> Yun has been putting a lot of effort into cleaning up our state
> management, and has been contributing a lot to reviews[1]. I think
> he would make a great addition to nova-core.
> 
> 
> [1] https://review.openstack.org/#/dashboard/1711
> 
> 
> Vish



Re: [Openstack] [nova] Proposal to add Michael Still to nova-core

2012-07-19 Thread Dan Prince
+1. Nice work Michael!

- Original Message -
> From: "Vishvananda Ishaya" 
> To: "Openstack (openstack@lists.launchpad.net) 
> (openstack@lists.launchpad.net)" 
> Sent: Wednesday, July 18, 2012 7:13:54 PM
> Subject: [Openstack] [nova] Proposal to add Michael Still to nova-core
> 
> Hello Everyone!
> 
> Michael wrote the image cache management code, did all of the
> remaining conversions of instance_id -> instance_uuid, and has been
> contributing a lot to reviews[1]. I think he would make a great
> addition to nova-core.
> 
> 
> [1] https://review.openstack.org/#/dashboard/2271
> 
> 
> Vish



Re: [Openstack] [nova] Proposal to add Padraig Brady to nova-core

2012-07-19 Thread Dan Prince
+1 Excellent reviews and contributions as well!

- Original Message -
> From: "Vishvananda Ishaya" 
> To: "Openstack (openstack@lists.launchpad.net) 
> (openstack@lists.launchpad.net)" 
> Sent: Wednesday, July 18, 2012 7:09:14 PM
> Subject: [Openstack] [nova] Proposal to add Padraig Brady to nova-core
> 
> Hello Everyone!
> 
> Padraig has been contributing a lot of code to all parts of nova, and
> has been contributing a lot to reviews[1]. I think he would make a
> great addition to nova-core.
> 
> 
> [1] https://review.openstack.org/#/dashboard/1812
> 
> 
> Vish



Re: [Openstack] Glance/Swift tenant specific storage

2012-07-13 Thread Dan Prince
Eglynn pointed out some concurrency concerns in this review that I wanted to 
highlight:

  https://review.openstack.org/#/c/9552/

The issue is we need to provide Glance stores access to the request context. 
The current thought is that we'd like to modify Glance to create new Store 
objects for each request (thus avoiding concurrency issues). The existing 
Glance model creates singleton Store objects at startup. Before I go changing 
Glance in this regard I wanted to ask...

Does this seem reasonable?

We can still create Store instances at startup for validation purposes. 
Although now that I think of it, I find the log messages Glance generates for 
missing S3 and/or Swift configs somewhat unnecessary. Perhaps we should 
rethink this startup validation as well?
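The difference between the two models can be sketched like this (a toy stand-in, not Glance's actual store interface):

```python
class SwiftStore(object):
    """Toy stand-in for a Glance store whose state depends on the request."""

    def __init__(self, context=None):
        # Per-request state: only safe if each request has its own object.
        self.auth_token = context.get('auth_token') if context else None


# Singleton model (current Glance): one Store created at startup and
# shared by every request, so any mutable per-request state can race.
SHARED_STORE = SwiftStore()


# Per-request model (proposed): a cheap factory call per request keeps
# each request's context isolated in its own Store object.
def get_store_for_request(context):
    return SwiftStore(context=context)
```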


Eoghan: Thanks for the detailed comments on the reviews. Keep them coming.


Dan 


- Original Message -
> From: "Dan Prince" 
> To: "OpenStack Mailing List" 
> Sent: Sunday, July 8, 2012 9:59:54 PM
> Subject: Glance/Swift tenant specific storage
> 
> I started working on the Glance swift-tenant-specific-storage
> blueprint last week.
> 
> I've got a working branch in play here:
> 
>   https://github.com/dprince/glance/commits/swift_multi_tenant3
> 
> Some details on how I've done things so far:
> 
>  * Update Glance so that it uses the service catalog for each user to
>  obtain the Swift storage URL.
> 
>  * Provide backend stores access to the context. Glance Essex doesn't
>  give stores access to the RequestContext (auth token). We'll need
>  this information for tenant specific storage if we want to be able
>  to access individual swift accounts.
> 
>  * Store images in separate containers. Swift only allows individual
>  ACL's to be set per container... not per object. As such it appears
>  we'll need to store each image in a separate container in order to
>  support setting public and/or individual read/write access on each
>  image.
> 
>  * Set 'public' access for images in Swift.
> 
>  * Set 'private' read and/or write access for Glance image members
>  which have been granted access to specific images.
> 
>  * Delayed delete (scrubber) will require an authenticated context in
>  order to delete Swift images from the backend. Glance can either be
>  made to grant write access to this account (for all images) or an
>  administrative Swift account could be used to run the delayed
>  delete operation.
> 
>  * Maintain full support with the existing single tenant Glance swift
>  storage scheme.
> 
> 
> 
> I made some general implementation notes up on this wiki page as
> well:
> 
>   http://wiki.openstack.org/GlanceSwiftTenantSpecificStorage
> 
> I'm anxious to get things up for review but before I do so I wanted
> to ask if this implementation looks reasonable? Any thoughts or
> feedback?
> 
> Dan



Re: [Openstack] PEP8 checks

2012-07-09 Thread Dan Prince


- Original Message -
> From: "Dave Walker" 
> To: "Monty Taylor" 
> Cc: openstack@lists.launchpad.net, "John Garbutt" 
> Sent: Monday, July 9, 2012 6:01:19 PM
> Subject: Re: [Openstack] PEP8 checks
> 
> On Mon, Jul 02, 2012 at 08:28:04AM -0400, Monty Taylor wrote:
> > 
> > 
> > On 07/02/2012 06:46 AM, John Garbutt wrote:
> > > Hi,
> > > 
> > > I noticed I can now run the pep8 tests like this (taken from
> > > Jenkins job):
> > >   tox -v -epep8
> > >   ...
> > >   pep8: commands succeeded
> > >   congratulations :)
> > > 
> > > But the old way to run tests seems to fail:
> > >   ./run-tests.sh -p
> > >   ...
> > >   File
> > >   
> > > "/home/johngar/openstack/nova/.venv/local/lib/python2.7/site-packages/pep8.py",
> > >   line 1220, in check_logical
> > >   for result in self.run_check(check, argument_names):
> > >   TypeError: 'NoneType' object is not iterable
> > > 
> > > Is this expected?
> > > Did I just miss an email about this change?
> > 
> > I cannot reproduce this on my system. Can you run
> > "bash -x run_tests.sh -p" and pastebin the output? Also,
> > tools/with_venv.sh pep8 --version just to be sure.
> >
> 
> Hi,
> 
> The issue is that as of a recent change to upstream pep8 [1], the
> additional pep8 rules in tools/hacking.py need to be changed from
> returns to yields.. :(

This brings up a good point. Why are we following the latest pep8 release so 
closely in Nova? The latest release is barely a month old and we are already 
using it? We aren't necessarily doing the same for the other OpenStack 
projects... (nor am I suggesting that we should). So why Nova?

I'm not convinced the latest pep8 "features" provide enough benefit to justify 
bumping our pep8 baseline every month or two. In fact, it may be hurting us in 
terms of churn, extra work, back-portability of upstream patches, etc. 
Ultimately, is tracking the latest pep8 really worth it?
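For reference, the breakage Dave describes comes from check functions that still return a single result while newer pep8 iterates over whatever the check produces; the fix is to turn them into generators. A simplified illustration (not the actual tools/hacking.py checks):

```python
# Old style: a check that *returns* one result tuple, or implicitly
# returns None. Once the pep8 framework starts iterating over check
# results, the None case blows up with
# "TypeError: 'NoneType' object is not iterable".
def check_todo_old(logical_line):
    pos = logical_line.find('TODO')
    if pos >= 0:
        return pos, 'W000 TODO found'


# New style: the same check as a generator, which yields zero or more
# results and is therefore always iterable.
def check_todo_new(logical_line):
    pos = logical_line.find('TODO')
    if pos >= 0:
        yield pos, 'W000 TODO found'
```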


> 
> [1]
> https://github.com/jcrocholl/pep8/commit/b9f72b16011aac981ce9e3a47fd0ffb1d3d2e085
> 
> Kind Regards,
> 
> Dave Walker 
> Engineering Manager,
> Ubuntu Server Infrastructure
> 



Re: [Openstack] Glance/Swift tenant specific storage

2012-07-09 Thread Dan Prince


- Original Message -
> From: "David Kranz" 
> To: "Dan Prince" 
> Sent: Monday, July 9, 2012 9:15:00 AM
> Subject: Re: [Openstack] Glance/Swift tenant specific storage
> 
> Dan, I am involved in a project (not Glance) that is doing something
> similar to this though it is at an early stage. One unresolved issue
> was
> what happens when the auth_token used to access Swift expires?

If the auth_token expired, the request would fail. I'm not sure how common this 
scenario would be, however. I suppose if you expire tokens very quickly it 
could happen... otherwise it would take a really long-running Glance task to be 
able to hit this, right?


> Unless Glance stores the user's password, and in a reversible way, how does
> it
> get a new token?


It can't. The user would have to re-submit the request to upload, download, or 
access the image data with a fresh token. Note: there are still some Glance 
tasks, like delayed delete, which already require a username:password (stored 
in a config file) to work. With multi-tenant storage this would still be the case.

One thing to note: Glance isn't storing auth_tokens in this new implementation. 
It simply takes the token from the incoming request and uses it to access 
Swift accordingly. This is actually the same thing Nova does to access 
Glance... so presumably any token-expiration issue you'd hit with Glance-Swift 
integration could also happen between Nova and Glance.
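The passthrough amounts to something like this (a hypothetical helper; the real code lives in the store/context plumbing under review):

```python
def swift_auth_headers(incoming_headers):
    """Forward the caller's own token to Swift instead of storing it.

    Glance never persists or refreshes the token: if it has expired by
    the time Swift sees it, the request fails and the user must retry
    with a fresh token.
    """
    token = incoming_headers.get('X-Auth-Token')
    if token is None:
        raise LookupError("401 Unauthorized: no token on incoming request")
    return {'X-Auth-Token': token}
```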


> This seems like a keystone use case that is not
> clear
> unless I am missing something.
> 
>   -David
> 
> On 7/8/2012 9:59 PM, Dan Prince wrote:
> > I started working on the Glance swift-tenant-specific-storage
> > blueprint last week.
> >
> > I've got a working branch in play here:
> >
> >https://github.com/dprince/glance/commits/swift_multi_tenant3
> >
> > Some details on how I've done things so far:
> >
> >   * Update Glance so that it uses the service catalog for each user
> >   to obtain the Swift storage URL.
> >
> >   * Provide backend stores access to the context. Glance Essex
> >   doesn't give stores access to the RequestContext (auth token).
> >   We'll need this information for tenant specific storage if we
> >   want to be able to access individual swift accounts.
> >
> >   * Store images in separate containers. Swift only allows
> >   individual ACL's to be set per container... not per object. As
> >   such it appears we'll need to store each image in a separate
> >   container in order to support setting public and/or individual
> >   read/write access on each image.
> >
> >   * Set 'public' access for images in Swift.
> >
> >   * Set 'private' read and/or write access for Glance image members
> >   which have been granted access to specific images.
> >
> >   * Delayed delete (scrubber) will require an authenticated context
> >   in order to delete Swift images from the backend. Glance can
> >   either be made to grant write access to this account (for all
> >   images) or an administrative Swift account could be used to run
> >   the delayed delete operation.
> >
> >   * Maintain full support with the existing single tenant Glance
> >   swift storage scheme.
> >
> > 
> >
> > I made some general implementation notes up on this wiki page as
> > well:
> >
> >http://wiki.openstack.org/GlanceSwiftTenantSpecificStorage
> >
> > I'm anxious to get things up for review but before I do so I wanted
> > to ask if this implementation looks reasonable? Any thoughts or
> > feedback?
> >
> > Dan
> >
> 
> 



[Openstack] Glance/Swift tenant specific storage

2012-07-08 Thread Dan Prince
I started working on the Glance swift-tenant-specific-storage blueprint last 
week.

I've got a working branch in play here:

  https://github.com/dprince/glance/commits/swift_multi_tenant3

Some details on how I've done things so far:

 * Update Glance so that it uses the service catalog for each user to obtain 
the Swift storage URL.

 * Provide backend stores access to the context. Glance Essex doesn't give 
stores access to the RequestContext (auth token). We'll need this information 
for tenant specific storage if we want to be able to access individual swift 
accounts.

 * Store images in separate containers. Swift only allows individual ACL's to 
be set per container... not per object. As such it appears we'll need to store 
each image in a separate container in order to support setting public and/or 
individual read/write access on each image.

 * Set 'public' access for images in Swift.

 * Set 'private' read and/or write access for Glance image members which have 
been granted access to specific images.

 * Delayed delete (scrubber) will require an authenticated context in order to 
delete Swift images from the backend. Glance can either be made to grant write 
access to this account (for all images), or an administrative Swift account 
could be used to run the delayed delete operation.

 * Maintain full support for the existing single-tenant Glance Swift storage 
scheme.
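Because Swift ACLs are container-granular, each per-image container would be created with headers roughly like the following. The X-Container-Read/X-Container-Write header names and the '.r:*' referrer syntax are Swift's; the helper itself is only an illustrative sketch:

```python
def container_acl_headers(public, read_tenants=(), write_tenants=()):
    """Build Swift container ACL headers for one image's container.

    Swift only supports ACLs at container granularity, which is why
    each image gets its own container in this scheme.
    """
    headers = {}
    if public:
        # '.r:*' is Swift's "world-readable" referrer ACL.
        headers['X-Container-Read'] = '.r:*'
    elif read_tenants:
        # 'tenant:*' grants all users of that tenant read access.
        headers['X-Container-Read'] = ','.join(
            '%s:*' % t for t in read_tenants)
    if write_tenants:
        headers['X-Container-Write'] = ','.join(
            '%s:*' % t for t in write_tenants)
    return headers
```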



I made some general implementation notes up on this wiki page as well:

  http://wiki.openstack.org/GlanceSwiftTenantSpecificStorage

I'm anxious to get things up for review but before I do so I wanted to ask if 
this implementation looks reasonable? Any thoughts or feedback?

Dan



Re: [Openstack] best practices for merging common into specific projects

2012-07-03 Thread Dan Prince


- Original Message -
> From: "Russell Bryant" 
> To: andrewbog...@gmail.com
> Cc: "Andrew Bogott" , openstack@lists.launchpad.net
> Sent: Monday, July 2, 2012 3:26:56 PM
> Subject: Re: [Openstack] best practices for merging common into specific  
> projects
> 
> On 07/02/2012 03:16 PM, Andrew Bogott wrote:
> > Background:
> > 
> > The openstack-common project is subject to a standard
> > code-review
> > process (and, soon, will also have Jenkins testing gates.)  Sadly,
> > patches that are merged into openstack-common are essentially
> > orphans.
> > Bringing those changes into actual use requires yet another step, a
> > 'merge from common' patch where the code changes in common are
> > copied
> > into a specific project (e.g. nova.)
> > Merge-from-common patches are generated via an automated
> > process.
> > Specific projects express dependencies on specific common
> > components via
> > a config file, e.g. 'nova/openstack-common.conf'.  The actual file
> > copy
> > is performed by 'openstack-common/update.py,' and its behavior is
> > governed by the appropriate openstack-common.conf file.
> 
> More background:
> 
> http://wiki.openstack.org/CommonLibrary
> 
> > Questions:
> > 
> > When should changes from common be merged into other projects?
> > What should a 'merge-from-common' patch look like?
> > What code-review standards should core committers observe when
> > reviewing merge-from-common patches?
> > 
> > Proposals:
> > 
> > I.  As soon as a patch drops into common, the patch author
> > should
> > submit merge-from-common patches to all affected projects.
> > A.  (This should really be done by a bot, but that's not going
> > to
> > happen overnight)
> 
> All of the APIs in openstack-common right now are considered to be in
> incubation, meaning that breaking changes could be made.  I don't
> think
> automated merges are good for anything in incubation.
> 
> Automation would be suitable for stable APIs.  Once an API is no
> longer
> in incubation, we should be looking at how to make releases and treat
> it
> as a proper library.  The copy/paste madness should be limited to
> APIs
> still in incubation.
> 
> There are multiple APIs close or at the point where I think we should
> be
> able to commit to them.  I'll leave the specifics for a separate
> discussion, but I think moving on this front is key to reducing the
> pain
> we are seeing with code copying.
> 
> > II. In the event that I. is not observed, merge-from-common
> > patches
> > will contain bits from multiple precursor patches.  That is not
> > only OK,
> > but encouraged.
> > A.  Keeping projects in sync with common is important!
> > B.  Asking producers of merge-from-common patches to understand
> > the
> > full diff will discourage the generation of such merges.
> 
> I don't see this as much different as any other patches to nova (or
> whatever project is using common).  It should be a proper patch
> series.
>  If the person pulling it in doesn't understand the merge well enough
>  to
> produce the patch series with proper commit messages, then they are
> the
> wrong person to be doing the merge in the first place.

I went on a bit of a rant about this on IRC yesterday. While I agree a patch 
series is appropriate for many new features and bug fixes, I don't think it 
should be required for keeping openstack-common in sync. Especially since we 
don't merge tests from openstack-common, which would help verify that the person 
doing the merge doesn't mess up the order of the patchset. If we were to 
include the tests from openstack-common in each project, that could change my 
mind.

If someone wants to split openstack-common changes into patchsets, that might be 
okay in small numbers. But if you are merging, say, 5-10 changes from 
openstack-common into all the various OpenStack projects, that could translate 
into a rather large number of reviews (25+) for things that have already been 
reviewed once. For me, using patchsets to keep openstack-common in sync just 
causes thrashing of Jenkins, SmokeStack, etc. for things that have already been 
gated. It seems like an awful waste of review/CI time. In my opinion patchsets 
are the way to go for getting things into openstack-common... but not when 
syncing to projects.

Hopefully this situation is short-lived, however, and we start using a proper 
library sooner rather than later.


> 
> > III.Merge-from-common patches should be the product of a single
> > unedited run of update.py.
> 
> Disagree, see above.
> 
> > A.  If a merge-from-common patch breaks pep8 or a test in nova,
> > don't fix the patch; fix the code in common.
> 
> Agreed.
> 
> > IV.Merge-from-common patches are 'presumed justified'.  That
> > means:
> > A. Reviewers of merge-from-common patches should consider test
> > failures and pep8 breakages, and obvious functional problems.
> > B. Reviewers of merge-from-common patches should not consider
>

Re: [Openstack] Jenkins vs SmokeStack tests & Gerrit merge blockers

2012-06-28 Thread Dan Prince


- Original Message -
> From: "Monty Taylor" 
> To: openstack@lists.launchpad.net
> Sent: Thursday, June 28, 2012 11:13:28 AM
> Subject: Re: [Openstack] Jenkins vs SmokeStack tests & Gerrit merge blockers
> 
> 
> 
> On 06/28/2012 07:32 AM, Daniel P. Berrange wrote:
> > Today we face a situation where Nova GIT master fails to pass all
> > the libvirt test cases. This regression was accidentally introduced
> > by the following changeset
> > 
> >https://review.openstack.org/#/c/8778/
> > 
> > If you look at the history of that, the first SmokeStack test run
> > fails with some (presumably) transient errors, and added negative
> > karma to the change against patchset 2. If it were not for this
> > transient failure, it should have shown the regression in the
> > libvirt test case. The libvirt test case in question was one that
> > is skipped, unless libvirt is actually present on the host running
> > the tests. SmokeStack had made sure the tests would run on such a
> > host.
> > 
> > There were then further patchsets uploaded, and patchset 4 was
> > approved for merge. Jenkins ran its gate jobs and these all passed
> > successfully. I am told that Jenkins will actually run the
> > unittests
> > that are included in Nova, so I would have expected it to see the
> > flawed libvirt test case, but it didn't. I presume therefore, that
> > Jenkins is not running on a libvirt enabled host.
> 
> Kind of - it's sadly more complex than that ...
> 
> > The end result was that the broken changeset was merged to master,
> > which in turns means any other developers submitting changes
> > touching the libvirt area will get broken tests reported that
> > were not actually their own fault.
> > 
> > This leaves me with the following questions...
> > 
> >  1. Why was the recorded failure from SmokeStack not considered
> > to be a blocker for the merge of the commit by Gerrit or
> > Jenkins or any of the reviewers ?
> >
> >  2. Why did SmokeStack not get re-triggered for the later patch
> > set revisions, before it was merged ?
> 
> The answer to 1 and 2 is largely the same - SmokeStack is a community
> contributed resources and is not managed by the CI team. Dan Prince
> does
> a great job with it, but it's not a resource that we have the ability
> to
> fix should it start messing up, so we have not granted it the
> permissions to file blocking votes.

I would add that if anyone else is interested in collaborating on making 
SmokeStack better, I'm more than happy to give access. It's all open source and 
has been since Cactus.

As it is now, SmokeStack can cast a -1 vote, and hopefully this is proving to be 
useful. I'm open to suggestions.


> 
> The tests that smokestack runs could all be written such that they
> are
> run by jenkins.

I actually put quite a bit of work into maintaining an openstack_vpc job on 
Jenkins post-Cactus. When we talked about gating on this job at the Diablo 
conference, the idea didn't seem to get very far... I kind of saw that as the 
end of the line for maintaining an openstack_vpc job, and eventually it went 
away. I'm not sure who deleted it, but anyway.

The way I see it, there is value in both testing systems. Rather than 
complaining about why one system exists and/or doesn't port its tests to the 
other, why don't we build on each other's strengths? Seeing a green "Verified 
+1" from both Jenkins and SmokeStack on a review should be very encouraging... 
and if one of the two systems fails, it might warrant further investigation.


> The repos that run the jenkins tests are all in git
> and
> managed by openstack's gerrit. If there are testing profiles that it
> runs that we as a community value and want to see part of the gate,
> anyone is welcome to port them.
> 
> >  3. Why did Jenkins not ensure that the tests were run on a libvirt
> > enabled host ?
> 
> This is a different, and slightly more complex. We run tests in
> virtualenvs so that the process used to test the code can be
> consistently duplicated by all of the developers in the project. This
> is
> the reason that we no longer do ubuntu package creation as part of
> the
> gate - turns out that's really hard for a developer running on OSX to
> do
> locally on their laptop - and if Jenkins reports an blocking error in
> a
> patch, we want a developer to be able to reproduce the problem
> locally
> so that they can have a chance at fixing it.

The ability for developers to test things locally is very important. For that 
matter SmokeStack all started with a project called openstack_vpc, a project 

Re: [Openstack] AttributeError: "virDomain instance has no attribute 'reset'"

2012-06-25 Thread Dan Prince
I went ahead and implemented the fix here:

https://review.openstack.org/#/c/8943/1/nova/virt/libvirt/connection.py

Once we do this the version requirement should be back at 0.9.6 (I think).
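For anyone curious, the fallback pattern Vish described (shutting the domain down and starting it again when reset is unavailable) can be sketched like this; this is only an illustration of the idea, the real change is in nova/virt/libvirt/connection.py:

```python
def hard_reboot(virt_dom):
    """Hard-reboot a domain without requiring virDomain.reset(),
    which only exists in newer libvirt python bindings."""
    if hasattr(virt_dom, 'reset'):
        virt_dom.reset(0)
    else:
        # Older libvirt: emulate a hard reboot by destroying the
        # domain and creating (starting) it again.
        virt_dom.destroy()
        virt_dom.create()
```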

Dan

- Original Message -
> From: "Jay Pipes" 
> To: openstack@lists.launchpad.net
> Sent: Friday, June 22, 2012 12:13:07 PM
> Subject: Re: [Openstack] AttributeError: "virDomain instance has no attribute 
> 'reset'"
> 
> That's pretty much what I understood based on a conversation with
> Vish
> on IRC the other day. It's caused me to pretty much give up on 11.10
> for
> modern OpenStack (Nova) installs.
> 
> -jay
> 
> On 06/22/2012 01:23 AM, Vaze, Mandar wrote:
> > Found this bug (albeit for Fedora 16)
> > https://bugs.launchpad.net/nova/+bug/1011863
> > 
> >
> > I’ve updated the bug with my details
> >
> > Does that mean Folsom won’t be supported on Ubuntu 11.10 ?
> >
> > -Mandar
> >
> > *From:*openstack-bounces+mandar.vaze=nttdata@lists.launchpad.net
> > [mailto:openstack-bounces+mandar.vaze=nttdata@lists.launchpad.net]
> > *On Behalf Of *Vaze, Mandar
> > *Sent:* Friday, June 22, 2012 9:41 AM
> > *To:* Vishvananda Ishaya
> > *Cc:* openstack@lists.launchpad.net
> > *Subject:* Re: [Openstack] AttributeError: "virDomain instance has
> > no
> > attribute 'reset'"
> >
> > Vish,
> >
> > I’m running on Ubuntu 11.10 (GNU/Linux 3.0.0-12-server x86_64)
> >
> > When I tried to upgrade I got the following message
> >
> > libvirt-bin is already the newest version.
> >
> > libvirt0 is already the newest version.
> >
> > python-libvirt is already the newest version.
> >
> > Which package should I upgrade/install ?
> >
> > Thanks,
> >
> > -Mandar
> >
> > *From:*Vishvananda Ishaya [mailto:vishvana...@gmail.com]
> > 
> > *Sent:* Friday, June 22, 2012 12:13 AM
> > *To:* Vaze, Mandar
> > *Subject:* Re: AttributeError: "virDomain instance has no attribute
> > 'reset'"
> >
> > the reset command was only recently added to libvirt, so your
> > version is probably just too old. We have discussed adding a
> > fallback of shutting down and restarting the domain if reset is not
> > defined, but no one has implemented it yet.
> >
> > Vish
> >
> > On Jun 21, 2012, at 5:56 AM, Vaze, Mandar wrote:
> >
> > Vish,
> >
> > I recently merged my code with master after a few weeks.
> >
> > Now I’m getting the error mentioned in the subject line during
> > reboot.
> >
> > I looked at the history and this change is done in the following
> > commit
> > by you.
> >
> > https://github.com/openstack/nova/commit/ae878fc8b9761d099a4145617e4a48cbeb390623
> >
> > I also realized that you have defined reset() method in fakelibvirt
> > for
> > testing - so may be tests pass OK.
> >
> > I’m using libvirt_type=kvm
> >
> > Do I need to update any library (python or otherwise) for this
> > change ?
> >
> > Any other suggestions to troubleshoot this ?
> >
> > Thanks,
> >
> > -Mandar
> >
> > Here is relevant snippet from my debug session :
> >
> >>  /opt/stack/nova/nova/virt/libvirt/connection.py(847)_hard_reboot()
> >
> > -> virt_dom.reset(0)
> >
> > (Pdb) dir(virt_dom)
> >
> > ['ID', 'OSType', 'UUID', 'UUIDString', 'XMLDesc', '__del__',
> > '__doc__',
> > '__init__', '__module__', '_conn', '_o', 'abortJob',
> > 'attachDevice',
> > 'attachDeviceFlags', 'autostart', 'blkioParameters', 'blockInfo',
> > 'blockPeek', 'blockStats', 'connect', 'coreDump', 'create',
> > 'createWithFlags', 'destroy', 'detachDevice', 'detachDeviceFlags',
> > 'hasCurrentSnapshot', 'hasManagedSaveImage', 'info', 'injectNMI',
> > 'interfaceStats', 'isActive', 'isPersistent', 'isUpdated',
> > 'jobInfo',
> > 'managedSave', 'managedSaveRemove', 'maxMemory', 'maxVcpus',
> > 'memoryParameters', 'memoryPeek', 'memoryStats', 'migrate',
> > 'migrate2',
> > 'migrateSetMaxDowntime', 'migrateSetMaxSpeed', 'migrateToURI',
> > 'migrateToURI2', 'name', 'openConsole', 'pinVcpu', 'reboot',
> > 'resume',
> > 'revertToSnapshot', 'save', 'schedulerParameters',
> > 'schedulerParametersFlags', 'schedulerType', 'screenshot',
> > 'setAutostart', 'setBlkioParameters', 'setMaxMemory', 'setMemory',
> > 'setMemoryFlags', 'setMemoryParameters', 'setSchedulerParameters',
> > 'setSchedulerParametersFlags', 'setVcpus', 'setVcpusFlags',
> > 'shutdown',
> > 'snapshotCreateXML', 'snapshotCurrent', 'snapshotListNames',
> > 'snapshotLookupByName', 'snapshotNum', 'state', 'suspend',
> > 'undefine',
> > 'updateDeviceFlags', 'vcpus', 'vcpusFlags']
> >
> > (Pdb) type( virt_dom)
> >
> > 
> >
> > (Pdb) type(virt_dom)
> >
> > 
> >
> > (Pdb) n
> >
> > AttributeError: "virDomain instance has no attribute 'reset'"
> >
> >

Re: [Openstack] Intermittent devstack-gate failures

2012-06-14 Thread Dan Prince
Hey Jim,

Any updates or new ideas on the cause of the intermittent hangs?

I mentioned this on IRC with regards to one of the Essex failures. One 
thing I've seen with Glance recently is that both glance-api and 
glance-registry now use (and try to auto-create) the database by default. I've 
had issues when I start glance-api and glance-registry in quick sequence 
because both of them try to init the DB.

So in devstack we could run 'glance-manage db_sync' manually:

  https://review.openstack.org/#/c/8495/

And in Glance we could then default auto_db_create to False:

  https://review.openstack.org/#/c/8496/

Any chance this is the cause of the intermittent failures?
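The failure mode is easy to reproduce in miniature: two services each issuing unguarded CREATE TABLE statements against the same database will collide. A toy illustration (sqlite stands in for the real database; this is not Glance code — the real race involves two processes and a partially-created schema window):

```python
import sqlite3

def auto_create_schema(conn):
    # What each service effectively does at startup when DB
    # auto-creation is enabled (greatly simplified).
    conn.execute("CREATE TABLE images (id INTEGER PRIMARY KEY)")

conn = sqlite3.connect(":memory:")
auto_create_schema(conn)          # first service initializes the DB
try:
    auto_create_schema(conn)      # second service starts moments later
except sqlite3.OperationalError as exc:
    print("second init failed: %s" % exc)
```

Running the sync exactly once up front, before either service starts, sidesteps the race entirely.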

Dan

- Original Message -
> From: "James E. Blair" 
> To: "OpenStack Mailing List" 
> Sent: Tuesday, June 12, 2012 7:25:01 PM
> Subject: [Openstack] Intermittent devstack-gate failures
> 
> Hi,
> 
> It looks like there are intermittent, but frequent, failures in the
> devstack-gate.  This suggests a non-deterministic bug has crept into
> some piece of OpenStack software.
> 
> In this kind of situation, we certainly could keep re-approving changes
> in
> the hope that they will pass the test and merge, but it would be
> better
> to fix the underlying problem.  Simply re-approving is mostly just
> going
> to make the queue longer.
> 
> Note that the output from Jenkins has changed recently.  I've seen
> some
> people misconstrue some normal parts of the test process as errors.
>  In
> particular, this message from Jenkins is not an error:
> 
>   Looks like the node went offline during the build. Check the slave
>   log
>   for the details.
> 
> That's a normal part of the way the devstack-gate tests run, where we
> add a machine to Jenkins as a slave, run the tests, and remove it
> from
> the list of slaves before it's done.  This is to accommodate the
> one-shot nature of devstack based testing.  It doesn't interfere with
> the results.
> 
> To find out why a test failed, you should scroll up a bit to the
> devstack exercise output, which normally looks like this:
> 
> *
> SUCCESS: End DevStack Exercise: ./exercises/volumes.sh
> *
> =
> SKIP boot_from_volume
> SKIP client-env
> SKIP quantum
> SKIP swift
> PASS aggregates
> PASS bundle
> PASS client-args
> PASS euca
> PASS floating_ips
> PASS sec_groups
> PASS volumes
> =
> 
> Everything after that point is test running boilerplate.  I'll add
> some
> echo statements to that effect in the future.
> 
> Finally, it may be a little difficult to pinpoint when this started.
>  A
> number of devstack-gate tests have passed recently without actually
> running any tests, due to an issue with one of our OpenStack based
> node
> providers.  We are eating our own dogfood.
> 
> -Jim
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
> 



Re: [Openstack] Comparing roles - case (in)sensitivity

2012-06-14 Thread Dan Prince


- Original Message -
> From: "Brian Waldon" 
> To: "openstack@lists.launchpad.net (openstack@lists.launchpad.net)" 
> 
> Sent: Friday, June 8, 2012 5:50:45 PM
> Subject: [Openstack] Comparing roles - case (in)sensitivity
> 
> tl;dr - Should we compare roles as case-sensitive or
> case-insensitive? I vote case-sensitive.
> 
> This bug was recently filed in Glance:
> https://bugs.launchpad.net/glance/+bug/1010519. It points out that
> Nova and Keystone are both case-insensitive when it comes to role
> comparison, yet Glance *is* case sensitive. I'm in favor of moving
> other projects to a case-sensitive approach for two main reasons:
> 
> 1) If a role is a string, and comparing strings is inherently
> case-sensitive, then role comparison would logically be
> case-sensitive
> 2) I get to do less work
> 
> Thoughts?


My vote is that we make Glance case-insensitive.

I think I may be the one who changed Nova to be case-insensitive, as I 
had some real head-scratchers with 'admin' vs. 'Admin' at one point.

To me, roles will just get too confusing if we allow case-sensitive 
comparisons.
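For comparison, a case-insensitive check like the one Nova ended up with is essentially a one-liner (a sketch only; Nova's actual implementation lives in its context/policy code):

```python
def has_role(roles, required):
    """Case-insensitively test role membership, so 'admin' matches 'Admin'."""
    return required.lower() in (r.lower() for r in roles)

print(has_role(["Admin", "Member"], "admin"))   # True
print(has_role(["Member"], "admin"))            # False
```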

Dan


> 
> Brian Waldon
> 
> 
> 
> 



Re: [Openstack] The right way to deprecate things in nova?

2012-06-13 Thread Dan Prince


- Original Message -
> From: "Sean Dague" 
> To: openstack@lists.launchpad.net
> Sent: Wednesday, June 13, 2012 3:07:54 PM
> Subject: Re: [Openstack] The right way to deprecate things in nova?
> 
> On 06/13/2012 01:35 PM, Mark Washenberger wrote:
> 
> >> So up until this point OpenStack has been a pretty much a rip and
> >> replace model. You want to go from Diablo to Essex, shut
> >> everything
> >> down, upgrade, bring back up. When I went to change this parameter
> >> originally, the review comments included just ripping out the old
> >> function, and not deprecating it.
> >>
> >> But I think we are moving into a phase where real OpenStack
> >> deployments
> >> are going to have N and N+1 release components talking to each
> >> other. So
> >> it's probably worth getting in the habit of having a standard way
> >> to
> >> deprecate out over a release. LOG.warning messages scattered
> >> about,
> >> which may or may not be consistent, that someone might or might
> >> not
> >> remember to remove later, with or without their associated
> >> function,
> >> seems kind of error prone.
> >>
> >
> > Logging sounds like a great way to communicate to deployers and
> > operators,
> > but really doesn't seem the best way to communicate with
> > developers. So
> > my question is, are we using this mechanism to deprecate things the
> > deployers
> > can control? Or is it things that developers need to deal with? If
> > its the
> > latter (which it seems), I'd prefer that we just use our various
> > developer
> > coordinating communication channels, such as the team meetings,
> > IRC, mailing
> > list, etc.
> 
> So for deprecating some piece of Operator facing interface, I agree
> we
> can do that without anything as heavy as decorators. So how about
> this
> instead, have a user_deprecate(msg="") function.
> 
> It's a wrapper on the LOG function, with some standard formatting
> that
> makes sure all the user deprecated features have an easy grepable
> pattern in the log. Also add the fatal functionality, so that people
> can
> sniff test their system before upgrading to N+1 that they aren't
> using
> deprecated configs.
> 
> It wouldn't be a decorator, just a function that can be placed inside
> code.

I like this approach the best. No decorator... just a simple function in utils 
that logs the deprecation warning in a sane manner.
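A minimal sketch of what such a utils function could look like (the names, the CONF dict, and the fatal flag are assumptions drawn from this thread, not actual Nova code):

```python
import logging

LOG = logging.getLogger(__name__)

# Hypothetical config from the thread: make deprecations fatal so
# operators can smoke-test a deployment before upgrading to N+1.
CONF = {"fatal_deprecations": False}

class DeprecatedConfig(Exception):
    pass

def user_deprecate(msg=""):
    """Log an operator-facing deprecation in an easily greppable format."""
    full_msg = "DEPRECATED: %s" % msg
    if CONF["fatal_deprecations"]:
        raise DeprecatedConfig(full_msg)
    LOG.warning(full_msg)
```

The fixed "DEPRECATED:" prefix gives operators one pattern to grep for across all services.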


> 
>   -Sean
> 
> --
> Sean Dague
> IBM Linux Technology Center
> email: sda...@linux.vnet.ibm.com
> alt-email: slda...@us.ibm.com
> 
> 
> 



Re: [Openstack] [Swift] swift.common.client and swift CLI has moved to its own project python-swiftclient

2012-06-13 Thread Dan Prince
Okay. It looks like Swift also still depends on swiftclient. Long term it would 
be nice if we could build and unit test swift without relying on the 
swiftclient package. Could we:

 1) Move any binaries that require swiftclient into the python-swiftclient 
project.

 2) Move swift function tests into Tempest.

I know Nova Diablo had a dependency on python-novaclient which we removed in 
Essex. This seems to make the testing, packaging, and deployment much cleaner.
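For scripts and tools that must run against both the old in-tree layout and the new standalone package during the transition, one hedged approach is an import fallback (illustrative only, not official guidance):

```python
# Prefer the new standalone package; fall back to the old in-tree
# module so existing scripts keep working during the transition.
try:
    import swiftclient
except ImportError:
    try:
        import swift.common.client as swiftclient
    except ImportError:
        swiftclient = None  # neither layout is installed

if swiftclient is None:
    print("no swift client library available")
```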

Dan

- Original Message -
> From: "Chmouel Boudjnah" 
> To: openstack@lists.launchpad.net, openstack-annou...@lists.openstack.org
> Sent: Tuesday, June 12, 2012 5:41:14 PM
> Subject: [Openstack] [Swift] swift.common.client and swift CLI has moved to 
> its own project python-swiftclient
> 
> Hi everyone,
> 
> We have moved the swift.common.client library and bin/swift CLI to
> its
> own dedicated project called python-swiftclient available in github
> here  :
> 
> https://github.com/openstack/python-swiftclient
> 
> it should be totally compatible with the previous swift and the only
> change needed if you were using it before in scripts, tools or others
> will be to change the import from :
> 
> import swift.common.client
> 
> to
> 
> import swiftclient
> 
> the swift CLI has also moved there now.
> 
> Glance and devstack should be updated now to use swiftclient[1].
> 
> Documentation (i.e: manuals) has not been updated yet and we are
> hoping packagers will pick this up pretty soon.
> 
> Regards,
> Chmouel.
> 
> [1] horizon is still using python-cloudfiles and didn't use
> swift.common.client before.
> 
> 



Re: [Openstack] Intermittent devstack-gate failures

2012-06-12 Thread Dan Prince
Hey Jim,

I actually turned off SmokeStack earlier today for potentially the same reason. 
I was out for a couple of days last week and haven't quite put my finger on all 
the things that are wrong. I'm seeing about half of the functional test runs 
fail.

This issue seems to be the cause of most of my failures:

https://bugs.launchpad.net/nova/+bug/1010291

I'm also dealing with the fact that Nova now requires libvirt 0.9.7 or later (I 
think) due to some of the refactoring. This is fine but it does mean I can't 
fully test things on Fedora 16 like I used to (reboot is out for example).

That is what I've seen.

Dan

- Original Message -
> From: "James E. Blair" 
> To: "OpenStack Mailing List" 
> Sent: Tuesday, June 12, 2012 7:25:01 PM
> Subject: [Openstack] Intermittent devstack-gate failures
> 
> Hi,
> 
> It looks like there are intermittent, but frequent, failures in the
> devstack-gate.  This suggests a non-deterministic bug has crept into
> some piece of OpenStack software.
> 
> In this kind of situation, we certainly could keep re-approving changes
> in
> the hope that they will pass the test and merge, but it would be
> better
> to fix the underlying problem.  Simply re-approving is mostly just
> going
> to make the queue longer.
> 
> Note that the output from Jenkins has changed recently.  I've seen
> some
> people misconstrue some normal parts of the test process as errors.
>  In
> particular, this message from Jenkins is not an error:
> 
>   Looks like the node went offline during the build. Check the slave
>   log
>   for the details.
> 
> That's a normal part of the way the devstack-gate tests run, where we
> add a machine to Jenkins as a slave, run the tests, and remove it
> from
> the list of slaves before it's done.  This is to accommodate the
> one-shot nature of devstack based testing.  It doesn't interfere with
> the results.
> 
> To find out why a test failed, you should scroll up a bit to the
> devstack exercise output, which normally looks like this:
> 
> *
> SUCCESS: End DevStack Exercise: ./exercises/volumes.sh
> *
> =
> SKIP boot_from_volume
> SKIP client-env
> SKIP quantum
> SKIP swift
> PASS aggregates
> PASS bundle
> PASS client-args
> PASS euca
> PASS floating_ips
> PASS sec_groups
> PASS volumes
> =
> 
> Everything after that point is test running boilerplate.  I'll add
> some
> echo statements to that effect in the future.
> 
> Finally, it may be a little difficult to pinpoint when this started.
>  A
> number of devstack-gate tests have passed recently without actually
> running any tests, due to an issue with one of our OpenStack based
> node
> providers.  We are eating our own dogfood.
> 
> -Jim
> 
> 



Re: [Openstack] The right way to deprecate things in nova?

2012-06-12 Thread Dan Prince


- Original Message -
> From: "Sean Dague" 
> To: openstack@lists.launchpad.net
> Sent: Tuesday, June 12, 2012 3:50:45 PM
> Subject: [Openstack] The right way to deprecate things in nova?
> 
> I'm in the process of deprecating the old way that we do virt drivers
> so
> that it's fully dynamic -
> https://blueprints.launchpad.net/nova/+spec/virt-driver-cleanup
> 
> The way the code current exists in master is that a LOG.error is
> emitted
> when the deprecated method is hit. I set it to error level to make
> sure
> it got noticed, as it will require a user configuration change
> post-Folsom when the old option is removed. This seems very ad-hoc.
> 
> Yesterday I had a conversation with markmc on IRC about this, and he
> suggested an approach where we have a config option that makes
> deprecation fatal, which could be forced on to ensure cleanliness.
> This
> could be done either as a decorator or as a regular function.
> 
> It also turns out there already are some deprecation functions, which
> dprince pointed out to me today on IRC, because he was in process of
> removing them from nova because they weren't used -
> https://review.openstack.org/#/c/8412/.
> 
> Here's my current suggested path forward, which I'd like comments on:
>   * keep the existing nova.utils deprecation functions (don't remove
>   them)

My take is: why keep a 200-300 line set of functions and tests (a small 
framework) to log messages about code we want to get rid of? As of today we 
aren't even making use of it and I'm not convinced peppering more decorators 
all over the place is the best idea. I suppose I have a slight preference for 
simply logging things here.

>   * add the fatal config option, and associated unit tests to make
>   sure
> it works correctly. This would be helpful for people to ensure they
> weren't depending on deprecated functions towards the end of a
> release.

I'm not opposed to this, but it seems like grepping log files is also a fine 
tool. Presumably this would be off by default.

>   * possibly move them to nova.common as they might make for good
> openstack-common material down the road
>   * use this instead of the direct LOG.error in get_connection

Why not just log it as a simple WARNING and be done with it: 
https://review.openstack.org/#/c/8411/

> 
> This would have the side effect of making the message warning level,
> instead of error level, which I think is fine at this point.
> 
>   -Sean
> 
> --
> Sean Dague
> IBM Linux Technology Center
> email: sda...@linux.vnet.ibm.com
> alt-email: slda...@us.ibm.com
> 
> 
> 



Re: [Openstack] nova.projects table problems

2012-06-01 Thread Dan Prince


- Original Message -
> From: "Michael Still" 
> To: openstack@lists.launchpad.net
> Sent: Thursday, May 31, 2012 8:03:37 PM
> Subject: Re: [Openstack] nova.projects table problems
> 
> On 01/06/12 09:54, Michael Still wrote:
> > Has anyone else experienced something like this? Its happened to me
> > twice now (on the same machine) and I have no idea what is causing
> > it:
> > 
> > mysql> show tables like 'projects';
> > +---+
> > | Tables_in_nova (projects) |
> > +---+
> > | projects  |
> > +---+
> > 1 row in set (0.00 sec)
> > 
> > mysql> describe projects;
> > ERROR 1146 (42S02): Table 'nova.projects' doesn't exist
> > 
> > mysql> repair table projects;
> > +---++--+-+
> > | Table | Op | Msg_type | Msg_text
> > ||
> > +---++--+-+
> > | nova.projects | repair | Error| Table 'nova.projects' doesn't
> > | exist |
> > | nova.projects | repair | status   | Operation failed
> > ||
> > +---++--+-+
> > 2 rows in set (0.00 sec)
> > 
> > I'm a bit confused.
> 
> This turns out to be
> https://bugs.launchpad.net/ubuntu/+source/nova/+bug/975085 /
> https://bugs.launchpad.net/nova/+bug/993663 I think.


Hi Michael,

Yes. I think this is the same issue we fixed in 993663. The dns_domains table 
was in a very bad state during the Essex release cycle because we essentially 
had a latin1 table with an fkey to a utf8 projects table. MySQL doesn't even 
allow you to do such a thing with a fresh schema (something I noticed when I 
did the database compaction for Folsom).

See the comments in this Folsom migration which should fix the issue:

  
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/migrate_repo/versions/096_recreate_dns_domains.py#L27

You could hotfix any existing systems by creating a SQL script to do something 
similar (drop and recreate the dns_domains table as UTF8 with a shorter 
'domain' column.).
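A hedged sketch of such a hotfix (the column list here is an assumption based on Nova's models at the time — verify against the 096 migration and your actual schema before running anything against production):

```python
# Illustrative only: drop and recreate dns_domains as UTF8 with a
# shorter 'domain' primary key, mirroring what the Folsom migration does.
HOTFIX_SQL = """
DROP TABLE IF EXISTS dns_domains;
CREATE TABLE dns_domains (
    created_at DATETIME,
    updated_at DATETIME,
    deleted_at DATETIME,
    deleted TINYINT(1),
    domain VARCHAR(255) NOT NULL,
    scope VARCHAR(255),
    availability_zone VARCHAR(255),
    project_id VARCHAR(255),
    PRIMARY KEY (domain)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
"""

print(HOTFIX_SQL)
```

Dropping the table loses any existing rows, so dump dns_domains first if you have data in it.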

Also, I'm going to work up a patch so we can backport this to Essex without 
adding a new migration (something we typically don't do for stable releases). I 
hope to have this done soon.

Dan

> 
> Mikal
> 
> 



Re: [Openstack] [Nova] Removal of Deprecated Auth

2012-05-14 Thread Dan Prince


- Original Message -
> From: "Brian Waldon" 
> To: "openstack (openstack@lists.launchpad.net)" 
> 
> Sent: Monday, May 7, 2012 6:09:53 PM
> Subject: [Openstack] [Nova] Removal of Deprecated Auth
> 
> I wanted to send out a heads-up to let everyone know that I am now
> removing all the old auth code from Nova
> (https://blueprints.launchpad.net/nova/+spec/remove-deprecated-auth).
> This will definitely be disruptive if you depend on this old code
> and haven't yet taken the steps towards using Keystone. If you
> aren't sure if you depend on it, the best way to tell is to look for
> the 'auth_strategy' flag - it will be set to 'deprecated'.
> 
> I did want to give a rough outline of how I'm planning to get all
> this done:
> 
> 1) Remove AuthMiddleware. At this point, the API won't be able to use
> the deprecated auth strategy, but you can still use the nova-manage
> commands - https://review.openstack.org/#/c/7215/
> 2) Remove nova-manage commands. The data won't have been migrated out
> yet, but you won't have any interface to it through Nova. I'm
> waiting on an OK from Dan Prince here, as I don't want to break
> SmokeStack!


OK. I put in the required Chef/Puppet changes today. Things should be a go with 
regards to SmokeStack. Thanks for the heads up.

Dan


> 3) Remove DB junk. This includes any db api methods and the actual
> removal of the auth tables.
> 
> I'm probably missing some steps in there, but hey, it's a rough
> outline. Please reach out to me now if you have any issues with
> this!
> 
> Brian Waldon
> 



[Openstack] SmokeStack: xenserver tests offline :(

2012-05-03 Thread Dan Prince
Several people asked me on IRC about SmokeStack being down.

I'm having some issues with the image snapshot I used to spin up server groups 
for XenServer testing... so until I get a couple hours to go have a look at 
either creating a new snapshot or fixing the existing snapshot the XenServer 
SmokeStack tests are broken.

In the meantime I made a slight adjustment to Bellows so that he will continue 
to post unit test and libvirt results to merge proposals.

Not giving up on the XenServer testing... it's just going to take a bit longer 
to fix this one.

Dan



Re: [Openstack] database migration cleanup

2012-05-03 Thread Dan Prince


- Original Message -
> From: "John Garbutt" 
> To: "Dan Prince" , "Vishvananda Ishaya" 
> 
> Cc: openstack@lists.launchpad.net
> Sent: Thursday, May 3, 2012 10:56:44 AM
> Subject: RE: [Openstack] database migration cleanup
> 
> I may have missed this in the discussions, but does this impact on
> upgrade?
> 
> I am guessing you have tested Essex -> Folsom upgrade, but does this
> affect people upgrading from any of the Essex milestones to Folsom?

What this does is compact the pre-Essex (final) migrations into a single 
migration. Users of any of the Essex milestones would need to first upgrade to 
the final Essex release and then upgrade to Folsom.

This seemed like a reasonable approach since most distributions release updates 
that contain the final releases anyway.


> I guess the deeper question is which upgrade paths do we want to
> maintain...
> 
> Thanks,
> John
> 
> > -Original Message-
> > From:
> > openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net
> > [mailto:openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net]
> > On Behalf Of Dan Prince
> > Sent: 02 May 2012 21:20
> > To: Vishvananda Ishaya
> > Cc: openstack@lists.launchpad.net
> > Subject: Re: [Openstack] database migration cleanup
> > 
> > 
> > 
> > - Original Message -
> > > From: "Vishvananda Ishaya" 
> > > To: "Dan Prince" 
> > > Cc: openstack@lists.launchpad.net
> > > Sent: Thursday, April 26, 2012 4:14:25 PM
> > > Subject: Re: [Openstack] database migration cleanup
> > >
> > > +1.  Might be nice to have some kind of test to verify that the
> > > new
> > > migration leaves the tables in exactly the same state as the old
> > > migrations.
> > 
> > Hey Vish,
> > 
> > This is an outline of what I did to test MySQL and PostgreSQL to
> > ensure the
> > compact migration script generates *exactly* the same schemas as
> > before:
> > 
> > http://wiki.openstack.org/database_migration_testing
> > 
> > As things stand both MySQL and PostgreSQL are exactly the same. I
> > have
> > some pending changes that I've found in the schemas that need to be
> > fixed
> > in Folsom... but the goal here was to replicate Essex with
> > migration 082 so
> > that is what I did.
> > 
> > Sqlite has a few differences (indexes for example). How important
> > is it that
> > the Sqlite schema be exactly the same? Unit tests are passing.
> > 
> > Dan
> > 
> > 
> > >
> > > Vish
> > >
> > > On Apr 26, 2012, at 12:24 PM, Dan Prince wrote:
> > >
> > > > The OpenStack Essex release had 82 database migrations. As
> > > > these
> > > > grow in number it seems reasonable to clean house from time to
> > > > time.
> > > > Now seems as good a time as any.
> > > >
> > > > I came up with a first go at it here:
> > > >
> > > > https://review.openstack.org/#/c/6847/
> > > >
> > > > The idea is that we would:
> > > >
> > > > * Do this early in the release cycle to minimize risk.
> > > >
> > > > * Compact all pre-Folsom migrations into a single migration.
> > > > This
> > > > migration would be used for new installations.
> > > >
> > > > * New migrations during the Folsom release cycle would proceed
> > > > as
> > > > normal.
> > > >
> > > > * Migrations added during Folsom release cycle could be
> > > > compacted
> > > > during "E" release cycle. TBD if/when we do the next
> > > > compaction.
> > > >
> > > > * Users upgrading from pre-Essex would need to upgrade to Essex
> > > > first. Then Folsom.
> > > >
> > > > --
> > > >
> > > > I think this scheme would support users who follow stable
> > > > releases
> > > > as well as users who follow trunk very closely.
> > > >
> > > > We talked about this at the conference but I thought this issue
> > > > might be near and dear to some of our end users so it was worth
> > > > discussing on the list.
> > > >
> > > > What are general thoughts on this approach?
> > > >
> > > > Dan (dprince)
> > > >
> > >
> > >
> > 
> 



Re: [Openstack] database migration cleanup

2012-05-02 Thread Dan Prince


- Original Message -
> From: "Vishvananda Ishaya" 
> To: "Dan Prince" 
> Cc: openstack@lists.launchpad.net
> Sent: Thursday, April 26, 2012 4:14:25 PM
> Subject: Re: [Openstack] database migration cleanup
> 
> +1.  Might be nice to have some kind of test to verify that the new
> migration leaves the tables in exactly the same state as the old
> migrations.

Hey Vish,

This is an outline of what I did to test MySQL and PostgreSQL to ensure the 
compact migration script generates *exactly* the same schemas as before:

http://wiki.openstack.org/database_migration_testing

As things stand both MySQL and PostgreSQL are exactly the same. I have some 
pending changes that I've found in the schemas that need to be fixed in 
Folsom... but the goal here was to replicate Essex with migration 082 so that 
is what I did.

Sqlite has a few differences (indexes for example). How important is it that 
the Sqlite schema be exactly the same? Unit tests are passing.
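The SQLite differences mentioned above can be inspected with a small reflection helper; the same idea (dump and diff normalized schema signatures from two databases) is what the MySQL/PostgreSQL comparison boils down to. A sketch using only the standard library:

```python
import sqlite3

def schema_signature(db_path):
    """Map object name -> normalized CREATE statement for a SQLite DB."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT name, sql FROM sqlite_master "
        "WHERE type IN ('table', 'index') ORDER BY name").fetchall()
    conn.close()
    # Collapse whitespace so formatting differences don't show up as diffs.
    return {name: " ".join(sql.split()) for name, sql in rows if sql}

# Usage: migrate one DB with the old migration chain and one with the
# compacted migration, then compare:
#   assert schema_signature("old.db") == schema_signature("new.db")
```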

Dan


> 
> Vish
> 
> On Apr 26, 2012, at 12:24 PM, Dan Prince wrote:
> 
> > The OpenStack Essex release had 82 database migrations. As these
> > grow in number it seems reasonable to clean house from time to
> > time. Now seems as good a time as any.
> > 
> > I came up with a first go at it here:
> > 
> > https://review.openstack.org/#/c/6847/
> > 
> > The idea is that we would:
> > 
> > * Do this early in the release cycle to minimize risk.
> > 
> > * Compact all pre-Folsom migrations into a single migration. This
> > migration would be used for new installations.
> > 
> > * New migrations during the Folsom release cycle would proceed as
> > normal.
> > 
> > * Migrations added during Folsom release cycle could be compacted
> > during "E" release cycle. TBD if/when we do the next compaction.
> > 
> > * Users upgrading from pre-Essex would need to upgrade to Essex
> > first. Then Folsom.
> > 
> > --
> > 
> > I think this scheme would support users who follow stable releases
> > as well as users who follow trunk very closely.
> > 
> > We talked about this at the conference but I thought this issue
> > might be near and dear to some of our end users so it was worth
> > discussing on the list.
> > 
> > What are general thoughts on this approach?
> > 
> > Dan (dprince)
> > 
> 
> 



Re: [Openstack] database migration cleanup

2012-04-30 Thread Dan Prince


- Original Message -
> From: "Johannes Erdfelt" 
> To: openstack@lists.launchpad.net
> Sent: Friday, April 27, 2012 12:13:47 PM
> Subject: Re: [Openstack] database migration cleanup
> 
> On Fri, Apr 27, 2012, Dan Prince  wrote:
> > > Mirations don't appear to be particularly slow right now, and it
> > > doesn't
> > > appear that merging migrations will make them significantly
> > > faster.
> > > 
> > > What exactly is the benefit of doing this?
> > 
> > Speed wasn't the primary motivation here I suppose. Do we really
> > want
> > to continue to maintain 100+ migrations in our codebase over the
> > lifetime of the project? As we add more and more to our
> > pep8/hacking
> > tools this could become an annoying burden right?
> 
> Has that been a problem?
> 
> I'm not sure I see where pep8/hacking changes would require changes
> to
> the unmerged migrations but the merged migrations wouldn't require
> them.
> Wouldn't that affect either?


The primary benefit here is it is simply less code to maintain:

The old migrations scripts for Essex are around 6200 lines of code.

The new compacted migration for Essex is around 950 lines of code.


> 
> Looking at the logs for the migrations most of the changes (outside
> of
> PEP8 changes) appear to be sqlalchemy related changes, which will
> exist
> regardless if they are merged or not as well.
> 
> > I mean why require end users to run migrations that add and drop
> > the
> > same table repeatedly?
> 
> It's hard to just grep for which migrations drop a table/column added
> in
> a previous migration. Did you see how many are like that when you
> merged
> all of the migrations together?
> 
> > I'm not sure I understand? Git still has the source code for the
> > old
> > migrations. All you'd need to do is checkout an old version of
> > master
> > or look at the stable/essex, stable/diablo branches right?
> 
> True, you would just need to go out of your way to look at the
> history
> instead.
> 
> I guess it's a matter of pros vs cons. Right now, I'd prefer to not
> merge them simply because I haven't seen what the benefit is.


The primary benefit here is just cleaning up our code base. If that isn't a 
good reason or if you'd rather keep the 82 Essex migrations in the code base 
for the long term please feel free to comment accordingly on the merge proposal.

Dan


> 
> JE
> 
> 
> 



Re: [Openstack] database migration cleanup

2012-04-30 Thread Dan Prince


- Original Message -
> From: "Monsyne Dragon" 
> To: "Dan Prince" 
> Cc: "Sean Dague" , 
> "" 
> Sent: Friday, April 27, 2012 1:46:03 PM
> Subject: Re: [Openstack] database migration cleanup
> 
> Even better, what would it take to try using Alembic?
> (http://alembic.readthedocs.org/en/latest/front.html#project-homepage)

From the Alembic docs site: "Note that Alembic is still in alpha status."

I would guess we'd want the project to be in at least a beta (stable API) state 
before we committed to using it, etc.

> 
> It's a big improvement over sqlalchemy.  Amongst other things,
> migrations are not numbered, they are linked by dependency, and run
> in topological-sort order. That alone eliminates a lot of "my
> migration number got taken... again..." problems.

To be clear, Alembic would be a potential replacement for python-migrate, not 
SQLAlchemy, right? Looks like an interesting project but I'm not convinced it 
is worth switching over to as our default at this time.
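The dependency-linked ordering described above can be sketched in a few lines. 
This is purely illustrative (made-up revision ids, a simple linear-chain walk), 
not Alembic's actual internals:

```python
# Toy ordering of migrations linked by down-revision, the way Alembic
# links them, instead of by sequential numbers. Revision ids are made up.

def order_migrations(revisions):
    """revisions: dict of revision id -> down-revision (None for the base)."""
    child_of = {parent: rev for rev, parent in revisions.items()}
    ordered = []
    current = child_of.get(None)  # start from the base revision
    while current is not None:
        ordered.append(current)
        current = child_of.get(current)
    return ordered

revs = {"a1f": None, "b2c": "a1f", "c3d": "b2c"}
print(order_migrations(revs))  # -> ['a1f', 'b2c', 'c3d']
```

Because ordering comes from the links rather than from a claimed number, two 
in-flight branches can no longer collide on the same migration number.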


> On Apr 27, 2012, at 10:47 AM, Dan Prince wrote:
> 
> > 
> > 
> > - Original Message -
> >> From: "Sean Dague" 
> >> To: openstack@lists.launchpad.net
> >> Sent: Friday, April 27, 2012 10:21:17 AM
> >> Subject: Re: [Openstack] database migration cleanup
> >> 
> >> On 04/26/2012 03:24 PM, Dan Prince wrote:
> >> 
> >>> I think this scheme would support users who follow stable
> >>> releases
> >>> as well as users who follow trunk very closely.
> >>> 
> >>> We talked about this at the conference but I thought this issue
> >>> might be near and dear to some of our end users so it was worth
> >>> discussing on the list.
> >>> 
> >>> What are general thoughts on this approach?
> >> 
> >> Is there any support in sqlalchemy, or related tools, to handle
> >> migrations the way rails does, where a schema file is created at
> >> the
> >> end
> >> of every migration? It would be ideal if we both had a full
> >> migration
> >> history, as well as a short cut at any snap shot to get to the
> >> end.
> > 
> > Ah. Yes, the Rails schema.rb. I looked around for just this sort of
> > thing and didn't find much. Python-migrate has some "experimental"
> > support for generating models and I did make use of that
> > initially. See 'create_model' below:
> > 
> > 
> > [root@nova1 migrate_repo]# python ./manage.py --repository=./
> > --url=mysql://nova:password@localhost/nova
> > Usage: manage.py COMMAND ...
> > 
> >Available commands:
> >compare_model_to_db  - compare MetaData against the
> >current database state
> > create   - create an empty repository at the
> > specified path
> > create_model - dump the current database as a
> > Python model to stdout
> > db_version   - show the current version of the
> > repository under version control
> > downgrade- downgrade a database to an earlier
> > version
> > drop_version_control - removes version control from a
> > database
> > help - displays help on a given command
> > make_update_script_for_model - create a script changing the old
> > MetaData to the new (current) MetaData
> > manage   - creates a Python script that runs
> > Migrate with a set of default values
> > script   - create an empty change Python
> > script
> > script_sql   - create empty change SQL scripts for
> > given database
> > source   - display the Python code for a
> > particular version in this repository
> > test - performs the upgrade and downgrade
> > command on the given database
> > update_db_from_model - modify the database to match the
> > structure of the current MetaData
> > upgrade  - upgrade a database to a later
> > version
> > version  - display the latest version
> > available in a repository
> > version_control  - mark a database as under this
> > repository's version control
> > 
> > 
> > 
> > python-migrate's 'create_model' does not however give you something
> > that exactly matches the schema you'd get by running all the migrations.

Re: [Openstack] database migration cleanup

2012-04-27 Thread Dan Prince


- Original Message -
> From: "Johannes Erdfelt" 
> To: openstack@lists.launchpad.net
> Sent: Friday, April 27, 2012 10:20:38 AM
> Subject: Re: [Openstack] database migration cleanup
> 
> On Thu, Apr 26, 2012, Dan Prince  wrote:
> > The OpenStack Essex release had 82 database migrations. As these
> > grow
> > in number it seems reasonable to clean house from time to time. Now
> > seems as good a time as any.
> 
> Migrations don't appear to be particularly slow right now, and it
> doesn't
> appear that merging migrations will make them significantly faster.
> 
> What exactly is the benefit of doing this?

Speed wasn't the primary motivation here, I suppose. Do we really want to 
continue maintaining 100+ migrations in our codebase over the lifetime of the 
project? As we add more and more to our pep8/hacking tools this could become an 
annoying burden, right? I mean, why require end users to run migrations that 
add and drop the same table repeatedly?


> 
> By merging migrations, we lose history in git for why the migrations
> were added in the first place.

I'm not sure I understand? Git still has the source code for the old 
migrations. All you'd need to do is check out an old version of master or look 
at the stable/essex or stable/diablo branches, right?

> 
> JE
> 
> 



Re: [Openstack] database migration cleanup

2012-04-27 Thread Dan Prince


- Original Message -
> From: "Sean Dague" 
> To: openstack@lists.launchpad.net
> Sent: Friday, April 27, 2012 10:21:17 AM
> Subject: Re: [Openstack] database migration cleanup
> 
> On 04/26/2012 03:24 PM, Dan Prince wrote:
> 
> > I think this scheme would support users who follow stable releases
> > as well as users who follow trunk very closely.
> >
> > We talked about this at the conference but I thought this issue
> > might be near and dear to some of our end users so it was worth
> > discussing on the list.
> >
> > What are general thoughts on this approach?
> 
> Is there any support in sqlalchemy, or related tools, to handle
> migrations the way rails does, where a schema file is created at the
> end
> of every migration? It would be ideal if we both had a full migration
> history, as well as a short cut at any snap shot to get to the end.

Ah. Yes, the Rails schema.rb. I looked around for just this sort of thing and 
didn't find much. Python-migrate has some "experimental" support for generating 
models and I did make use of that initially. See 'create_model' below:


[root@nova1 migrate_repo]# python ./manage.py --repository=./ 
--url=mysql://nova:password@localhost/nova
Usage: manage.py COMMAND ...

Available commands:
compare_model_to_db  - compare MetaData against the current 
database state
create   - create an empty repository at the 
specified path
create_model - dump the current database as a Python 
model to stdout
db_version   - show the current version of the 
repository under version control
downgrade- downgrade a database to an earlier 
version
drop_version_control - removes version control from a database
help - displays help on a given command
make_update_script_for_model - create a script changing the old 
MetaData to the new (current) MetaData
manage   - creates a Python script that runs 
Migrate with a set of default values
script   - create an empty change Python script
script_sql   - create empty change SQL scripts for 
given database
source   - display the Python code for a particular 
version in this repository
test - performs the upgrade and downgrade 
command on the given database
update_db_from_model - modify the database to match the 
structure of the current MetaData
upgrade  - upgrade a database to a later version
version  - display the latest version available in 
a repository
version_control  - mark a database as under this 
repository's version control



python-migrate's 'create_model' does not, however, give you something that 
exactly matches the schema you'd get by running all the migrations. So auto 
generation doesn't appear to be an option right now. It would be nice to 
contribute to python-migrate in this regard and get better support for model 
generation, etc. Maybe a good long term goal?

Dan


> 
>   -Sean
> 
> --
> Sean Dague
> IBM Linux Technology Center
> email: slda...@us.ibm.com
> alt-email: sda...@linux.vnet.ibm.com
> 
> 



[Openstack] proposal for Russell Bryant to be added to Nova Core

2012-04-27 Thread Dan Prince
Russell Bryant wrote the Nova Qpid rpc implementation and is a member of the 
Nova security team. He has been chipping away at reviews and contributing to 
discussions for some time now.

I'd like to see him in Nova core so he can help out w/ reviews... definitely 
the RPC ones.

Dan



Re: [Openstack] database migration cleanup

2012-04-27 Thread Dan Prince


- Original Message -
> From: "Eoghan Glynn" 
> To: "Dan Prince" 
> Cc: openstack@lists.launchpad.net
> Sent: Friday, April 27, 2012 5:45:27 AM
> Subject: Re: [Openstack] database migration cleanup
> 
> 
> 
> > https://review.openstack.org/#/c/6847/
> 
> Nice!
> 
> >  * Migrations added during Folsom release cycle could be compacted
> >  during "G" release cycle. TBD if/when we do the next compaction.
> 
> An alternative idea would be to do the compaction *prior* to the
> Folsom relase instead of after, so that the cleanest possible
> migration path is presented to non-trunk-chasing users. It could for
> example be a task that's part of spinning up the first Folsom-RC.
> 
> It's unlikely that new migrations are added after the release
> candidate goes out the door (as these are generally associated with
> non-trivial new features, which would have missed the boat at that
> late stage). But if there are any, these would have to be added to
> the squashed aggregate migration from the get-go.

I thought about this... but that still leaves only a couple weeks to catch any 
issues that might come up in the release candidate phase. Also, using the RC 
makes the compaction point a bit more fuzzy for end users who are following 
trunk more closely. I do like that it would keep the release tree cleaner 
however.

Performing the compaction after release is sort of a middle ground approach 
which should allow us to clean house from time to time but also keep things 
stable around release time.

> 
> Cheers,
> Eoghan
> 
> 



[Openstack] database migration cleanup

2012-04-26 Thread Dan Prince
The OpenStack Essex release had 82 database migrations. As these grow in number 
it seems reasonable to clean house from time to time. Now seems as good a time 
as any.

I came up with a first go at it here:

https://review.openstack.org/#/c/6847/

The idea is that we would:

 * Do this early in the release cycle to minimize risk.

 * Compact all pre-Folsom migrations into a single migration. This migration 
would be used for new installations.

 * New migrations during the Folsom release cycle would proceed as normal.

 * Migrations added during Folsom release cycle could be compacted during "G" 
release cycle. TBD if/when we do the next compaction.

 * Users upgrading from pre-Essex would need to upgrade to Essex first. Then 
Folsom.
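The compaction scheme above boils down to: one baseline script creates the 
whole pre-Folsom schema, and per-change migrations continue on top of it. A 
toy sketch with stdlib sqlite3 (the tables and version numbers are 
illustrative, not Nova's real schema):

```python
import sqlite3

# Toy compacted baseline: a single script stands in for the 82 Essex
# migrations, then a normal per-change Folsom migration applies on top.
# Table and column names here are invented for illustration.

def baseline(conn):
    conn.executescript("""
        CREATE TABLE migrate_version (version INTEGER);
        CREATE TABLE instances (id INTEGER PRIMARY KEY, hostname TEXT);
    """)
    conn.execute("INSERT INTO migrate_version VALUES (82)")  # end of Essex

def migration_83(conn):
    # An ordinary Folsom-cycle migration proceeding as normal.
    conn.execute("ALTER TABLE instances ADD COLUMN uuid TEXT")
    conn.execute("UPDATE migrate_version SET version = 83")

conn = sqlite3.connect(":memory:")
baseline(conn)        # new installs start here
migration_83(conn)    # then walk the normal migrations
version = conn.execute("SELECT version FROM migrate_version").fetchone()[0]
print(version)  # -> 83
```

Existing Essex installs would already be at the baseline version and simply 
run the new migrations; anyone on pre-Essex upgrades to Essex first.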

--

I think this scheme would support users who follow stable releases as well as 
users who follow trunk very closely.

We talked about this at the conference but I thought this issue might be near 
and dear to some of our end users so it was worth discussing on the list.

What are general thoughts on this approach?

Dan (dprince)



Re: [Openstack] Gerrit minimum review time frame

2012-03-13 Thread Dan Prince


- Original Message -
> From: "Joe Gordon" 
> To: openstack@lists.launchpad.net
> Sent: Monday, March 12, 2012 5:59:02 PM
> Subject: [Openstack]  Gerrit minimum review time frame
> 
> Hi All,
> 
> I have noticed that some Gerrit branches get approved very quickly,
> sometimes in a matter of minutes.   While most of the time these
> branches
> are vetted properly, the window for reviewing can be so small that a
> non-trivial branch lands but without enough vetting.  If someone is
> in a
> meeting for half on hour they may miss the entire review window.  To
> fix
> this problem I propose a minimum time frame (should be overridable in
> an
> emergency) for a branch to be approved, perhaps 2 hours.  This time
> frame
> would start on 'Upload time.'
> 
> best,
> Joe Gordon
> 



I'd be a fan of this. Giving everyone a fair shot at making comments, both 
positive and negative, sounds like a great idea.

Perhaps this is something to bring up at the conference: how would we enforce 
this, policy vs. procedure, etc.? I'm not sure tying core reviewers' hands is 
the best idea.
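The rule itself is trivial to state; a sketch of the proposed check, with the 
2-hour window measured from upload time and an emergency override, all names 
invented for illustration:

```python
from datetime import datetime, timedelta

# Sketch of the proposed minimum-review-window rule: approval only counts
# once 2 hours have passed since the patch set was uploaded, unless an
# emergency override is set. Function and flag names are illustrative.

REVIEW_WINDOW = timedelta(hours=2)

def may_approve(uploaded_at, now, emergency=False):
    return emergency or (now - uploaded_at) >= REVIEW_WINDOW

upload = datetime(2012, 3, 12, 17, 59)
print(may_approve(upload, datetime(2012, 3, 12, 18, 30)))       # -> False
print(may_approve(upload, datetime(2012, 3, 12, 20, 0)))        # -> True
print(may_approve(upload, datetime(2012, 3, 12, 18, 0), True))  # -> True
```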

From my perspective I'd love to have a shot at "smoking" more branches w/ 
SmokeStack. Once those "breaking" commits land it makes it that much harder to 
classify merge props.

Dan






[Openstack] SmokeStack update

2012-02-23 Thread Dan Prince
This week, we switched Smokestack over to use a Fedora/puppet configuration 
that Derek Higgins and I have been working on. You can see those results in 
gerrit now. It seems very stable and supports running Nova smoke tests and 
Torpedo.

We plan on focusing our trunk chasing on Fedora/puppet/libvirt.

I'd love to see someone else pick up the Ubuntu/chef/Xen support. Any takers?

Dan



Re: [Openstack] [CHEF] Aligning Cookbook Efforts

2012-02-08 Thread Dan Prince
Hi Jay,

Thanks for taking the initiative to send this out!

I added comments to your points are inline below:



>
> Proposal for Alignment
> ==
>
> I think the following steps would be good to get done by the time Essex
> rolls out the door in April:
>
> 1) Create a stable/diablo branch of the openstack/openstack-chef cookbook
> repo and maintain it in the same way that we maintain stable branches for
> core OpenStack projects. I propose we use the branch point that NTT PF Lab
> used to create their fork of the upstream repo.
>

I like the idea of maintaining a stable set of cookbooks for the official
releases. The NTT branch sounds fine to me as a starting point for diablo.
At the time of the diablo release I was maintaining cookbooks for
SmokeStack here: https://github.com/dprince/openstack_cookbooks. Using
these cookbooks around the date of the diablo release would be an option as
well.


>
> 2) Work with Matt Ray and other Chef experts to combine any and all best
> practices that may be contained in the non-official cookbook repos into the
> upstream official repository. From a cursory overview, there are some
> differences in how databags are handled, how certs are handled, how certain
> cookbooks are constructed, and of course differences in the actual
> cookbooks in the repos themselves.


> 3) Consolidate documentation on how to use the cookbooks, the best
> practices used in constructing the cookbooks, and possibly some
> videos/tutorials walking folks through this critical piece of the OpenStack
> puzzle.
>

This sounds great.


>
> 4) Create Jenkins builders for stable branch deployment testing. We
> currently test the official development cookbooks by way of SmokeStack
> gates on all core OpenStack projects. Would be great to get the same
> testing automated for non-development branches of the cookbooks.
>

SmokeStack would easily support testing stable releases. In fact it would be a 
lot easier to pull off stable release testing than it is to chase trunk like 
I'm currently doing :)

I actually have a 'Libvirt Mysql Milestone Proposed (Diablo)' configuration
in SmokeStack. I just haven't been running it mostly because I was focused
on upstream releases and commits. Limited resources and time...

Getting more people involved would be great.


>
> Thoughts and criticism most welcome, and apologies in advance if I got any
> of the above history wrong. Feel free to correct me!
>
> Best,
> -jay
>


One final note:

We are looking at adding dual support for Fedora/puppet and Ubuntu/chef to
SmokeStack in the near future. A guy named Derek Higgins from Red Hat has
made excellent progress on this front.

-- 
Dan Prince
princ...@alumni.jmu.edu


Re: [Openstack] Libvirt File Injection

2012-01-30 Thread Dan Prince
On Mon, Jan 30, 2012 at 1:05 PM, Brian Waldon wrote:

> After implementing a working version of file injection on Libvirt, a good
> question was brought up on the merge prop: how should we handle a file
> injection failure? Injection could fail for several reasons: missing
> necessary libraries, unsupported image formats and bad permissions are just
> a few. There seem to be two clear paths forward:
>
> 1) Log an error, set the instance to ERROR, add an asynchronous fault to
> the instance in the db
> 2) Log a warning, move on with the boot process
>


My preference would be to log a warning and move on with the boot process
(#2). Or perhaps we could address this with some sort of async error
messages concept?

Also, Armando just filed this ticket to change how XenServer currently 
handles admin password failures:

 https://bugs.launchpad.net/nova/+bug/923865

I understand file injection is slightly different than admin passwords but
it would seem the preference is to treat these types of failures as
warnings and not errors.
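Option 2, logging a warning and moving on with the boot, might look roughly 
like this. The inject_files() helper and its failure are stand-ins, not real 
Nova code:

```python
import logging

log = logging.getLogger("nova.virt.injection")

# Sketch of option 2: a file injection failure is logged as a warning and
# the boot proceeds. inject_files() below is a hypothetical stand-in that
# always fails, simulating e.g. an unsupported image format.

def inject_files(instance, files):
    raise OSError("unsupported image format")

def spawn(instance, files):
    try:
        inject_files(instance, files)
    except Exception as exc:
        log.warning("file injection failed for %s: %s; continuing boot",
                    instance, exc)
    return "ACTIVE"  # boot continues rather than going to ERROR

print(spawn("instance-0001", {"/etc/motd": b"hello"}))  # -> ACTIVE
```

Option 1 would instead re-raise (or record an asynchronous fault) and set the 
instance state to ERROR at the except block.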



> It's not obvious which of these is the best route to take from a user's
> point of view. I'm currently leaning towards option 1 as I wouldn't want to
> have an instance come up (and be billed for it) while it wasn't what I
> explicitly requested.
>
> I would love to get some help with this problem. You can either reply
> directly to this email, or head over to the merge prop:
> https://review.openstack.org/#change,3526
>
>
> Brian Waldon
>
>
>


-- 
Dan Prince
princ...@alumni.jmu.edu


Re: [Openstack] Integration test gating on trunk

2012-01-10 Thread Dan Prince

Jim,
 
Okay. I'm still a bit fuzzy on the order of operations we'd need to follow in 
order to get a branch in when a config file changes.
 
Take this review for example:
 
 https://review.openstack.org/#change,2919
 
Based on what I understand, devstack needs to support both versions of the 
nova config file in order for the branches to land smoothly? That seems like 
something we wouldn't want to do. Am I misunderstanding something?
 
Also, shouldn't all core members on gated projects have core privileges on 
devstack as well? That would make it easier to coordinate these types of 
changes.
 
Dan


-Original Message-
From: "James E. Blair" 
Sent: Wednesday, January 4, 2012 12:09pm
To: "OpenStack Mailing List" 
Subject: Re: [Openstack] Integration test gating on trunk



"Dan Prince"  writes:

> Hi Jim,
>  
> A couple of questions for you:
>  
> 1) You mentioned how to coordinate changes between glance and nova but
> what about devstack. Does that same process apply to devstack as well?
> For example if there were a configuration file change (api-paste
> changes often and can cause failures) would I push the required change
> to devstack first and then nova? Or would we need to make devstack
> handle multiple versions of the configuration files?

Yes, the same is true for coordinating changes across all of the
projects, including devstack (one change at a time in sequence).

> 2) Where are the devstack instances running (public cloud, private
> Openstack cloud, etc.). If the public cloud is down for maintenance
> does that mean code can't land? Are there any plans to run this on a
> private or OpenStack backed cloud? Regardless of what we are doing is
> there a backup plan in place so that code can land?

They are currently running in the Rackspace public cloud.  There is a
cache of N machines (N==5 currently) running and ready to receive
devstack installs for this test, with new ones being added by a frequent
Jenkins job[1] to replace those consumed.  That helps smooth over
operational errors and short outages.  If all of those are consumed, the
job will fail, and we won't be able to land patches.  Considering the
importance of RS public cloud availability, I think that waiting until
it's up again is probably going to be a viable option.  If there is some
sort of extended outage, we can discuss disabling the job.

In the medium to long term, we plan on mitigating this risk by consuming
VMs from multiple cloud providers.  HP has offered its public cloud for
this purpose.  I'd like the normal mode of operation to launch VMs on
both (all?) of the cloud providers participating to balance load and
resource usage, and of course that gets us higher availability, at least
for the kind of scenario you described.

> 3) Are there any plans on making this run on branches in merge prop
> (before we approve them)? I would love to know that devstack passes
> ahead of time before I approve a branch.

Yes, running tests when patchsets are uploaded is in the plan.  With the
new gerrit trigger plugin we installed last week, we have the technical
capability to run Jenkins jobs on proposal, approval, or merge.  I
believe we will start working on that soon, after we address some
security concerns.

-Jim
[1] https://jenkins.openstack.org/job/devstack-launch-vms/



Re: [Openstack] Integration test gating on trunk

2012-01-04 Thread Dan Prince


Hi Jim,
 
A couple of questions for you:
 
1) You mentioned how to coordinate changes between glance and nova but what 
about devstack. Does that same process apply to devstack as well? For example 
if there were a configuration file change (api-paste changes often and can 
cause failures) would I push the required change to devstack first and then 
nova? Or would we need to make devstack handle multiple versions of the 
configuration files?
 
2) Where are the devstack instances running (public cloud, private Openstack 
cloud, etc.). If the public cloud is down for maintenance does that mean code 
can't land? Are there any plans to run this on a private or OpenStack backed 
cloud? Regardless of what we are doing is there a backup plan in place so that 
code can land?
 
3) Are there any plans on making this run on branches in merge prop (before we 
approve them)? I would love to know that devstack passes ahead of time before I 
approve a branch.
 
Thanks,
 
Dan
 
-Original Message-
From: "James E. Blair" 
Sent: Thursday, December 29, 2011 5:51pm
To: "OpenStack Mailing List" 
Subject: [Openstack] Integration test gating on trunk



Hi,

A few weeks ago, I wrote about turning on an integration test gating job
for the stable/diablo branch.  That's been running for a while now, and
during that time with help from Anthony Young and Jesse Andrews, we've
been able to address the issues we saw and make the job fairly reliable.

At the last design summit, we agreed that we should gate trunk
development of at least nova and its immediate dependencies on some kind
of integration test.  The biggest change this introduces to developer
workflow is how to handle changes that affect more than one project.  At
the design summit, it was decided that such changes should be authored
so that the system continues to function as each is merged in order.  In
other words, if you need to modify nova and glance, you might make a
change to nova that accepts old and new behaviors from glance, then
change glance.
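The "accept old and new behaviors" pattern is just defensive parsing on the 
consuming side; a toy sketch, with the field names invented for illustration 
(not glance's actual response format):

```python
# Toy version of landing a cross-project change in sequence: nova first
# learns to accept both the old and the new glance response shape; once
# that has merged, glance can switch shapes without breaking the gate.
# Field names below are invented for illustration.

def image_size(image):
    if "size" in image:            # new-style response
        return image["size"]
    return image["image_size"]     # old-style response still accepted

old = {"image_size": 512}
new = {"size": 1024}
print(image_size(old), image_size(new))  # -> 512 1024
```

Once all deployments are past the transition, a follow-up change can drop the 
old-style fallback.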

The job we've been developing uses devstack to set up nova, glance, and
keystone, and then runs the relevant exercise.sh tests.  Obviously
that's not a lot of testing, but it does at least ensure that nova can
perform its basic functions, which, again, was an important milestone
identified at the summit.  Once tempest is ready for this, we'll start
using it.

At this point, I believe the testing infrastructure is stable enough for
us to turn on gating for all branches of nova, glance, and keystone
(also python-novaclient, devstack, and openstack-ci, which are involved
in the setup and running of the tests).

I would not be surprised if we run into some problems.  We might see
transient network errors in the test setup, in which case you can just
re-trigger the job (you can vote "Approved" again), and we can see if
there's some caching or local mirroring we can do to reduce that risk.
We might encounter non-deterministic behavior in the setup and running
of OpenStack, in which case it would be best to treat that as a bug in
devstack or the affected component and improve the software.  I think
that kind of problem is the sort of thing that our CI system should be
uncovering, so even though it's annoying if it affects landing a patch
you're working on, I think it's a net positive to the effort overall.
Also, we just might catch real bugs.

Having said that, the Jenkins job has been running in silent mode on
master for several days with few false errors.  My feeling from the
design summit was that it was generally understood there would be a
shakedown period, and people are willing to accept some risk and some
extra work for the benefits an integration test gating job will bring.
I think we're at that point, so I'd like to turn this job on Tuesday,
January 3rd.

-Jim



Re: [Openstack] unit and integration tests results for Gerrit

2011-12-13 Thread Dan Prince

Hi Jay,
 

The tests should not be running concurrently.
 
We currently have 4 Natty Cloud Servers configured as unit test workers. The 
machines are shared between nova/glance/keystone.
 
The unit test runner is pretty simple:
 
https://github.com/dprince/smokestack/blob/master/app/templates/unittest_runner.sh.erb
 
Would using a unique FAKE_FILESYSTEM_ROOTDIR for each test run or test help out 
here? Perhaps prefixing each one w/ /tmp/glance-tests so that we could also add 
an explicit tmp dir cleanup command in the test suite runners as well.
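A unique per-test directory via tempfile would sidestep the collision 
entirely; a stdlib-only sketch of the setUp/tearDown shape (not Glance's 
actual test stubs):

```python
import os
import shutil
import tempfile
import unittest

# Sketch: each test gets its own fake filesystem root under a unique
# glance-tests-XXXX temp directory, so leftover state from a previous
# run (or a slow unlink) cannot collide on a fixed /tmp/glance-tests path.

class FakeFilesystemTest(unittest.TestCase):
    def setUp(self):
        self.rootdir = tempfile.mkdtemp(prefix="glance-tests-")

    def tearDown(self):
        shutil.rmtree(self.rootdir, ignore_errors=True)

    def test_rootdir_is_private(self):
        self.assertTrue(os.path.isdir(self.rootdir))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(FakeFilesystemTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # -> True
```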
 
In the meantime I'm happy to add an explicit cleanup for /tmp/glance-tests to 
make sure we have a clean slate for each test run.
 
Dan
 
-Original Message-
From: "Jay Pipes" 
Sent: Tuesday, December 13, 2011 7:14am
To: "Dan Prince" 
Cc: "Openstack" 
Subject: Re: [Openstack] unit and integration tests results for Gerrit


Thanks Dan, looks like a great start on this. Not sure what's going on
with the unit test runs in Glance, though... As an example, see:

http://smokestack.openstack.org/?go=/jobs/5600

There are dozens of errors like this:

==
ERROR: test_add_member (glance.tests.unit.test_api.TestGlanceAPI)
--
Traceback (most recent call last):
 File "/tmp/tmp.ZaWdEoPfhP/glance_source/glance/tests/unit/test_api.py",
line 1937, in setUp
 stubs.stub_out_filesystem_backend()
 File "/tmp/tmp.ZaWdEoPfhP/glance_source/glance/tests/stubs.py", line
63, in stub_out_filesystem_backend
 os.mkdir(FAKE_FILESYSTEM_ROOTDIR)
OSError: [Errno 17] File exists: '/tmp/glance-tests'

That FAKE_FILESYSTEM_ROOTDIR is cleaned up in the tearDown() method of
API unit tests, and these tests run perfectly fine for me locally. I
was thinking that one of the following things may be occurring:

* The unlink() of the FAKE_FILESYSTEM_ROOTDIR is not fsync'ing fast
enough, so setUp()'s call to create the /tmp/glance-tests directory is
stumbling over itself
* The tests are being run in parallel somehow?

I've seen the fsync behaviour cause havoc in some of the image cache
tests before, and the solution ended up putting a small wait loop in
the test code to wait until disk buffers were flushed and a cache file
was fully removed from the filesystem. That may be happening here?
These are being run on Cloud Servers, right?

/me just trying to figure out why tests would run differently in
smokestack than Jenkins or locally...

Thanks in advance for any insight.

Cheers,
-jay


[Openstack] unit and integration tests results for Gerrit

2011-12-12 Thread Dan Prince

Hello all,
 
We just turned on a Bellows [https://github.com/dprince/bellows] feature that 
will automatically update Gerrit reviews with SmokeStack test results. Each 
Gerrit review should have a comment that looks something like this:
 
SmokeStack Results (patch set 2):
 Unit Success: http://smokestack.openstack.org/?go=/jobs/5570
 Libvirt Success: http://smokestack.openstack.org/?go=/jobs/5568
 XenServer Success: http://smokestack.openstack.org/?go=/jobs/5569
 
---
 
The results we obtain are generally good but aren't perfect. I have a high 
level of confidence in the unit test results. Failures in Libvirt and XenServer 
require a bit more investigation but are generally useful as well.
 

Hopefully this information is helpful! I'm not in a position to focus on this 
full time however I'll do the best I can to keep things running smoothly. Pull 
requests accepted.
 
Dan


Re: [Openstack] Proposal for Mark McLoughlin to join nova-core

2011-11-29 Thread Dan Prince

+1
 
-Original Message-
From: "Vishvananda Ishaya" 
Sent: Tuesday, November 29, 2011 1:05pm
To: "openstack (openstack@lists.launchpad.net)" 
Subject: [Openstack] Proposal for Mark McLoughlin to join nova-core



Mark is maintaining openstack for Fedora and has made some excellent 
contributions to nova.  He has also been very prolific with reviews lately. 
Let's add him to core and make his reviews count towards potential merges!

Vish


Re: [Openstack] A possible alternative to Gerrit ...

2011-09-01 Thread Dan Prince

++
 
The closer we can get to really using GitHub the better.
 
Dan
 
-Original Message-
From: "Sandy Walsh" 
Sent: Thursday, September 1, 2011 1:56pm
To: "openstack@lists.launchpad.net" 
Subject: [Openstack] A possible alternative to Gerrit ...



Hey!

Last night I did some hacking on HubCap. HubCap is a simple script that 
monitors Pull Requests in GitHub. It spits out a static HTML page of the 
requests workflow status. 

It infers workflow status by looking for keywords in the comments. It's so 
simple it's stupid. The last keyword from a commenter is considered their vote. 
When we have a quorum, it can kick off a jenkins build. If that succeeds, it 
can be merged. It uses the core members already assigned to the repo.
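
The keyword-vote inference described above can be sketched as follows. This is a hypothetical illustration of the idea, not the actual HubCap code, and it assumes a simple +1/-1 keyword vocabulary:

```python
# Sketch of HubCap-style vote inference: scan a pull request's comments
# for voting keywords, keep only each commenter's most recent keyword,
# and check for quorum. The keyword set is an assumption.
KEYWORDS = {"+1": 1, "-1": -1}

def infer_votes(comments):
    """comments: list of (author, text) tuples in chronological order."""
    votes = {}
    for author, text in comments:
        for word in text.split():
            if word in KEYWORDS:
                votes[author] = KEYWORDS[word]  # last keyword wins
    return votes

def has_quorum(votes, required=2):
    """Approve only with enough positive votes and no outstanding vetoes."""
    if any(v < 0 for v in votes.values()):
        return False
    return sum(v for v in votes.values() if v > 0) >= required
```

With this scheme a commenter can change their mind simply by leaving a newer comment, since only the latest keyword counts.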

It's just a hack right now. But I think it could effectively let us use GitHub 
as intended and still have the voting power of LP.

You can see a (static) sample of the output here from python-novaclient:
http://www.darksecretsoftware.com/hubcap.html

code is here: https://github.com/SandyWalsh/hubcap

As fate would have it, I bounced this off a few people and learned about 0x44's 
Roundabout project:
https://github.com/ChristopherMacGown/roundabout

Roundabout is certainly more mature than HubCap, and has CI hooks, daemon code, 
etc, but no real workflow. I'm going to merge the hubcap code into Roundabout 
and press on from there for novaclient.

Love to get your thoughts (and contributions) on this.

Cheers,
Sandy
This email may include confidential information. If you received it in error, 
please delete it.




Re: [Openstack] Automated Test Suite for OpenStack

2011-08-30 Thread Dan Prince


Hi Kiran,


If you are interested in functionally testing the various OpenStack APIs you 
might check out the following:


https://github.com/rackspace-titan/stacktester (v1.1 OSAPI tests written in 
Python)


https://github.com/dprince/openstack_vpc/tree/master/tests/ruby (v1.0 OSAPI 
tests using Ruby bindings)


Also, you can run the tests in the nova 'smoketests' directory, which help verify the 
EC2 API functionality.


We've also got a system called 'SmokeStack' which has the ability to run these 
three test suites on Nova configurations using both libvirt and XenServer. If 
you are interested in a system to run tests on arbitrary nova/glance branch 
builds you might check out:


http://wiki.openstack.org/smokestack
 
Hope this helps.
 
Dan
 
-Original Message-
From: kiran.mur...@csscorp.com
Sent: Tuesday, August 30, 2011 5:34am
To: openstack@lists.launchpad.net
Subject: [Openstack] Automated Test Suite for OpenStack



Hello List,

I have been testing OpenStack for a while and would be interested in any 
automated test suites that are available.

On searching what I could find was Soren's presentation at EuroPython-2011
http://lanyrd.com/2011/europython/sfwky/

Is the test battery that is run against the installed "cloud" available for 
download?

Would appreciate any pointers on getting an automated test suite.

Thanks,
Kiran 


[Openstack] jenkins: nova-tarmac hangs

2011-08-19 Thread Dan Prince
The nova-tarmac job gets hung quite often lately. I know a lot of fingers have 
been pointing at Launchpad issues but I'm not sure that is the case here. When 
it's hung, the console output of the job is almost always at:
 
 Running test command: /home/jenkins/openstack-ci/test_nova.sh
 
This essentially runs: 'bash run_tests.sh -N && python setup.py sdist'.
 
---
 
I've also noticed that the nova-coverage job is often running at the same 
time. That job appears to use 'run_tests.sh -N' as well, and like nova-tarmac 
it runs on the 'nova' jenkins slave.
 
Since both these jobs use the 'nova' Jenkins slave I think the problem here is 
we can't have two jobs using 'run_tests.sh -N' at the same time (due to 
potential port issues etc).
 
Should we create isolated build slaves for these jobs and/or prevent them from 
running at the same time?
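One way to prevent the two jobs from colliding, short of separate slaves, would be to serialize them with an exclusive lock on the shared host. This is an illustrative sketch, not existing Jenkins tooling; the lock-file path is an assumption:

```python
# Serialize jobs that can't share a slave: each job takes an exclusive
# flock on a well-known file before invoking run_tests.sh. A second job
# on the same host blocks (or, with LOCK_NB, fails fast) until the
# first releases the lock.
import fcntl

LOCK_PATH = "/tmp/run_tests.lock"  # assumed per-slave lock file

def acquire_slot(blocking=True):
    """Return a locked file handle; the caller holds the slot until close."""
    f = open(LOCK_PATH, "w")
    flags = fcntl.LOCK_EX if blocking else fcntl.LOCK_EX | fcntl.LOCK_NB
    fcntl.flock(f, flags)  # LOCK_NB raises BlockingIOError if the slot is taken
    return f
```

A job wrapper would call `acquire_slot()` before `bash run_tests.sh -N` and close the handle afterwards; the lock is also released automatically if the process dies.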
 
Dan




Re: [Openstack] Status of Git/Gerrit Code Hosting/Review

2011-08-09 Thread Dan Prince

To be honest I was a Git user first. I love Git. It's great. A big part of that 
for me though is GitHub.
 
Since we have figured out that GitHub isn't going to work for the project at 
this time I'm wondering how many others feel like just using Git (no GitHub) is 
worth ditching the all-in-one feature set we have on LP.
 
The fact is over the past year bzr has grown on me quite a bit. IMHO it is a 
much simpler distributed version control system that in most cases is actually 
easier to use than Git. Git has it licked on the speed and features front but 
in terms of usability bzr has made me quite happy.
 

Look. Jay's initial email on this thread was about whether we should require 
all projects to use one VCS over another. I say put it to a vote. We have all 
the polling setup for governance stuff. Why not send out a survey and ask the 
developers (those who have committed code) what they prefer?

Dan

-Original Message-
From: "Monty Taylor" 
Sent: Tuesday, August 9, 2011 2:29pm
To: "Jay Pipes" 
Cc: "Dan Prince" , openstack@lists.launchpad.net
Subject: Re: [Openstack] Status of Git/Gerrit Code Hosting/Review



>> Lastly I kind of feel like we've been duped. When we talked about Git code
>> hosting on GitHub at the conference some of the major points were improved
>> code review (on GitHub) and performance improvements for checkouts, etc.
>> While we may have taken a step forward on the performance front IMHO we've
>> taken a major step backwards as far as the review and tracking process goes.

I certainly don't want anyone to feel duped here. We did start down the
road quite earnestly of using github, and could not meet all of the
requirements that we have currently as a project. At that point, we
moved on to attempt to solve the underlying problem that people had
expressed in the best way possible - which I'm sure you can understand
given the large number of people with conflicting interests was ... fun.

In looking at things - it seemed that, to be quite honest, the number
one single change that would make the maximum number of people happy was
switching from bzr to git, and there was nothing about git that would
not meet the needs of the project. (it is, of course, an excellent tool)

All in all, I'm not saying that all of the design choices are perfect,
and there are certainly things to work on - but I _do_ think that we're
in an excellent position now to actually effect the changes that we
need. (which will make it more effective to sit down at design summits
and discuss needs - since we should be able to actually implement them)

Thanks!
Monty

>> -Original Message-
>> From: "Jay Pipes" 
>> Sent: Monday, August 8, 2011 2:50pm
>> To: openstack@lists.launchpad.net
>> Subject: [Openstack] Status of Git/Gerrit Code Hosting/Review
>>
>> Hello all,
>>
>> tl;dr
>> ===
>>
>> Contributors have been giving Monty Taylor and Jim Blair feedback on
>> the Gerrit code review system over the last few weeks. Both the
>> Keystone and Glance projects have now migrated to using Git as their
>> source control system and Gerrit for code review and integration into
>> the Jenkins continuous integration system.
>>
>> Tomorrow, the Project Policy Board (PPB) will be voting on two things:
>>
>> 1) Should OS projects
>> a) have a vetted set of options for hosting and review, or
>> b) be required to use a single toolset for review and hosting
>> 2) Shall Gerrit+Git be included in the set of vetted options or be the
>> single option (dependent on the vote result for 1) above)
>>
>> Feedback on #2 is most welcome. Please feel free to respond to this
>> email, catch us on IRC or email me directly.
>>
>> Links:
>>
>> Working with Gerrit: http://wiki.openstack.org/GerritWorkflow
>> Code Review in Gerrit: http://review.openstack.org
>>
>> Details
>> ===
>>
>> Over the last few weeks, Monty Taylor and Jim Blair have been working
>> with a number of OpenStack contributors to gather feedback on a
>> Git-based development workflow, toolset, and review process.
>>
>> First, Monty and Jim investigated whether GitHub's pull request system
>> would be sufficient to enforce existing code review and approval
>> policies. It was determined that GitHub's pull request system was not
>> sufficient. The main reason why the pull request system failed to meet
>> needs is that there is no overall way to track the current state of a
>> given pull request. While this is fine for the simple case (merge
>> request is accepted and merged) it starts to fall over with some of
>> the more complex back and forths that we w

Re: [Openstack] Status of Git/Gerrit Code Hosting/Review

2011-08-09 Thread Dan Prince

Hi Jay,
 
Definitely appreciate all the hard work Monty and Jim have put into the 
migration. This is a lot of work for sure.
 
Couple of comments:
 
1) While gerrit is integrated w/ Launchpad (and can close tickets) Launchpad is 
not integrated with Gerrit. Things like referencing a branch from within a 
ticket or blueprint aren't going to work as well as they used to, right?
 
2) I'd like to see a unified diff containing all the files on the Gerrit review 
pages. Is there a way to do this or am I missing something?
 
3) The branch/refspec names Gerrit uses are not very user friendly. In 
Launchpad we typically had people naming their branches w/ either a feature/fix 
name or the ticket number. So in Launchpad my branch would be called something 
like 'lp:~dan-prince/fix_ec2_metadata' or whatever. In Gerrit the branch names 
up for review are rather cryptic ('refs/changes/76/176/1'), which means that when 
trying to track and gate branches before they hit trunk we are going to 
have to do an extra bit of detective work manually to make sense of which 
tickets and features a particular refspec corresponds to. Some extra tooling 
might make this easier, but I really dislike that we have no control over the 
branch names that are up for review.
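For what it's worth, the cryptic refspec does encode something. As I understand Gerrit's scheme, the components are `refs/changes/<shard>/<change-number>/<patchset>`, where the shard is just the last two digits of the change number (used to keep the refs namespace manageable). A small sketch of decoding it:

```python
# Decode a Gerrit refs/changes/<shard>/<change>/<patchset> refspec into
# its change number and patchset. The shard is the change number modulo
# 100, so we can sanity-check it.
def parse_change_ref(ref):
    parts = ref.split("/")
    if len(parts) != 5 or parts[0] != "refs" or parts[1] != "changes":
        raise ValueError("not a Gerrit change ref: %s" % ref)
    change, patchset = int(parts[3]), int(parts[4])
    if int(parts[2]) != change % 100:
        raise ValueError("shard %s does not match change %d" % (parts[2], change))
    return change, patchset
```

This still doesn't tell you which ticket or feature the change is for, of course; for that you'd have to look the change number up in Gerrit itself.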
 
Lastly, I kind of feel like we've been duped. When we talked about Git code 
hosting on GitHub at the conference some of the major points were improved code 
review (on GitHub) and performance improvements for checkouts, etc. While we 
may have taken a step forward on the performance front, IMHO we've taken a major 
step backwards as far as the review and tracking process goes.
 
Dan
 
-Original Message-
From: "Jay Pipes" 
Sent: Monday, August 8, 2011 2:50pm
To: openstack@lists.launchpad.net
Subject: [Openstack] Status of Git/Gerrit Code Hosting/Review



Hello all,

tl;dr
===

Contributors have been giving Monty Taylor and Jim Blair feedback on
the Gerrit code review system over the last few weeks. Both the
Keystone and Glance projects have now migrated to using Git as their
source control system and Gerrit for code review and integration into
the Jenkins continuous integration system.

Tomorrow, the Project Policy Board (PPB) will be voting on two things:

1) Should OS projects
 a) have a vetted set of options for hosting and review, or
 b) be required to use a single toolset for review and hosting
2) Shall Gerrit+Git be included in the set of vetted options or be the
single option (dependent on the vote result for 1) above)

Feedback on #2 is most welcome. Please feel free to respond to this
email, catch us on IRC or email me directly.

Links:

Working with Gerrit: http://wiki.openstack.org/GerritWorkflow
Code Review in Gerrit: http://review.openstack.org

Details
===

Over the last few weeks, Monty Taylor and Jim Blair have been working
with a number of OpenStack contributors to gather feedback on a
Git-based development workflow, toolset, and review process.

First, Monty and Jim investigated whether GitHub's pull request system
would be sufficient to enforce existing code review and approval
policies. It was determined that GitHub's pull request system was not
sufficient. The main reason why the pull request system failed to meet
needs is that there is no overall way to track the current state of a
given pull request. While this is fine for the simple case (merge
request is accepted and merged) it starts to fall over with some of
the more complex back and forths that we wind up having in many
OpenStack projects. Additionally, this assessment was predicated on
the current design of a gated trunk with an automated patch queue
manager, and a system where a developer is not required to spend time
landing a patch (other than potential needs for rebases or changes due
to code review).

Monty and Jim then decided to set up a Gerrit server for code review
and CI integration at http://review.openstack.org. Gerrit is a tool
developed by Google to address some of the functionality the Android
Open Source team needed around automated patch queue management and
code reviews.

The first project that moved from Launchpad to Gerrit/Git was the
openstack-ci project. This is the glue code and scripts that support
the continuous integration environment running on
http://jenkins.openstack.org.

After gaining some experience with Gerrit through the migration of
this project from Launchpad, the next OpenStack subproject to move to
the Gerrit platform was the Keystone incubated project. Keystone was
already using git for source control and was on GitHub, using GitHub's
Issues for its pull requests and bug tracking. However, the Keystone
source code was not gated by a non-human patch queue management
system; a Keystone developer would manually merge proposed branches
into the master Keystone source tree, and code reviews were not passed
through any automated tests on http://jenkins.openstack.org. Monty and

Re: [Openstack] Overview of CI/Testing

2011-06-07 Thread Dan Prince
Hi Monty,

I spent a few moments to put together some more detailed notes on SmokeStack:

  http://wiki.openstack.org/smokestack

Screenshots and a link to the live server are on that wiki page.

At this point the ability to specify custom nova and glance branches is 
supported. Each job runs a set of smoke tests that cover both the OS and EC2 
APIs.

--

We've been torpedoing quite a few branches with SmokeStack lately. Its proving 
very useful in doing reviews and helping make sure we get things right before 
they land in trunk.

Dan

-Original Message-
From: "Monty Taylor" 
Sent: Tuesday, June 7, 2011 12:30pm
To: "openstack@lists.launchpad.net" 
Cc: "Dan Prince" , "Mihai Ibanescu" , 
"Peter J. Pouliot" 
Subject: Overview of CI/Testing

Hey everybody,

Here's a quick write up on what we have at the moment and what the plan
is moving forward, as it stands today. In case you weren't aware, the
Jenkins instance sits at:

http://jenkins.openstack.org

If you think you want to help administer/hack on our Jenkins, that is
managed through the openstack-ci-admins team on Launchpad.

As most of you are aware, the main trunk branches for nova, swift,
glance and burrow are all managed by Jenkins and Tarmac. What this means
in short is that jenkins/tarmac finds approved merge proposals, runs the
in-tree unit tests on them, and if successful, pushes back to trunk.
Additionally, we have jenkins jobs which run pep8, pylint and code
coverage jobs and produce reports. For instance:

http://jenkins.openstack.org/job/nova-coverage/815/cobertura/?

Once the code has been merged, we have jenkins jobs which produce
tarballs, and debian source packages which are pushed up to launchpad
PPAs so that effectively every trunk commit winds up having an
associated package.

There is also a feature that it seems no one knew about which allows
developers to submit branch URLs to jenkins to have it run its tests on
that branch. For each project, this is ${project_name}-param, so for
instance:

http://jenkins.openstack.org/job/nova-param/

Will allow anyone in the ~nova team to submit a branch and have jenkins
pull it and run on it what it would do via tarmac - helpful when fixing
something tarmac complained about.

We are planning to get rid of tarmac and get that functionality integrated as a 
Jenkins plugin, because otherwise supporting things like NTT's wish for 
stricter coverage metrics as branch gatekeepers, or having smoketests gate 
branches, becomes really baroque to support.

** If you like hacking on Java - this should be a fun little project.
Send me a note and I can outline what I was going to do and if you want
it it's all yours. **

Moving forward, the big ticket item is testing not only in-tree unit
tests, but installing things and testing that.

Currently, we have jenkins spinning up a cloud server, copying the most
recently built debs from the last successful build to that server,
installing them, starting things up and then running the smoketests
against the installed code. Currently the install/setup is being done by
hand rather than via chef purely so that I could walk through what's
actually needed to get a single-node minimal install working and collect
information on bugs/workarounds that have to happen.

http://jenkins.openstack.org/job/nova-smoketests

The test failures here are due to config issues which should go away
once I migrate that job to using chef for the node code - so I expect
that to go green soon.

The next step then is to replace the use of shell commands over ssh with
the chef recipes. (now that I have a good handle on what's going
on/needed there, following the existing chef automation work is much
more fruitful)

After that, we'll move from launching cloud servers using the one-off
libcloud-based python script I wrote to using the jenkins jclouds
plugin. (if for no other reason that to make sure that all of the moving
parts of this of any complexity are sensibly checked in and have
lifecycles that people can hack on.)

** If you like hacking on Java - this is a project that's both helpful
for us as consumers in OpenStack - but also is something that
effectively will allow other people to use OpenStack to manage Jenkins
build slaves... SO - come and hack on/improve the jenkins
jclouds-plugin. It's at https://github.com/jenkinsci/jclouds-plugin **

And then we need to apply this to swift/glance as well.


That's just testing API in a VM though, and doesn't get us to testing
actual bare-metal deployment or integration testing. At Rackspace, we
have some machines set aside at the moment, and have had others offer
chunks of machines to test various combinations of things. At its heart,
the abstract version of this looks fairly identical to the smoketests
job - pxe boot machines, shove version to be tested on them, run tests.
However, there are several moving bits on the

[Openstack] SQLAlchemy migration number conflicts

2011-06-01 Thread Dan Prince
We are getting lots of conflicts with migration numbers in merge props.

What are thoughts on using date time stamps (UTC format) instead of sequential 
numbering?

So instead of:

021_rename_image_ids.py

We'd use:

20110402122512_rename_image_ids.py (or something similar).

Rails projects now use this naming scheme as the default, primarily to avoid 
merge conflicts. I personally like a sequential numbering scheme better for 
many of my smaller projects, but the DTS format makes sense when there are many 
people on a project.

Do SQLAlchemy migrations even support this?

Dan
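
The proposed naming scheme is simple to generate. A hypothetical helper (not part of sqlalchemy-migrate) showing the Rails-style convention — a UTC timestamp prefix instead of a sequential number, so concurrent branches can't collide on the same migration number:

```python
# Build a timestamp-prefixed migration filename, e.g.
# 20110402122512_rename_image_ids.py, from a UTC time and a slug.
from datetime import datetime, timezone

def migration_filename(slug, when=None):
    when = when or datetime.now(timezone.utc)
    return "%s_%s.py" % (when.strftime("%Y%m%d%H%M%S"), slug)
```

Timestamps still sort lexicographically like sequence numbers do, so tools that order migrations by filename keep working; the only collisions left are two migrations created in the same second.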




Re: [Openstack] Proposal for Brian Lamar to join Nova-Core

2011-05-31 Thread Dan Prince
+1

Brian is a rock solid python developer.

Dan

-Original Message-
From: "Vishvananda Ishaya" 
Sent: Tuesday, May 31, 2011 3:16pm
To: "Openstack" 
Subject: [Openstack] Proposal for Brian Lamar to join Nova-Core

While I was checking branch merges, I noticed that Brian Lamar (blamar), is not 
listed as a nova-core developer.  This is most definitely a travesty, as he has 
been one of the most prolific coders/reviewers over the past few months.  So 
I'm proposing that he is added as a nova-core member.

Vish


Re: [Openstack] [NetStack] Quantum Service API extension proposal

2011-05-21 Thread Dan Prince
The v1.1 spec is still marked as beta, yes.

The extensions code however was included in Cactus and is fully working. From 
the looks of your PDF you need a top level resource with several sub-resources. 
I think most of this should be supported with the existing ResourceExtension 
code.

You might check out the OSAPI volumes extensions as an example:

  nova/api/openstack/contrib/volumes.py

One thing I would mention is the way we serialize and deserialize things within 
the OSAPI (and extensions) is in flux. So stay tuned for improvements there.

Also feel free to hit me (dprince) or any of the team titan guys up on IRC with 
questions.

Dan

-Original Message-
From: "Rick Clark" 
Sent: Saturday, May 21, 2011 5:21pm
To: "Dan Wendlandt" , "Ram Durairaj (radurair)" 

Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] [NetStack] Quantum Service API extension proposal

It is my understanding that we would use the standard Openstack extension 
mechanism,  though I don't believe jorge's proposal has been officially 
accepted yet.  
However our plugin model will require a complex way of expressing capability 
that might not be a part of jorge's proposal.  


Dan Wendlandt  wrote:

>On Sat, May 21, 2011 at 11:51 AM, Ram Durairaj (radurair) <
>radur...@cisco.com> wrote:
>
>> Hi Dan:
>>
>>
>>
>> As far as I remember, In Design summit, we’ve agreed to expose “extra”
>> attributes for Virtual networks and any other vendor specific features using
>> “API-Extensions” and possibly thru existing Openstack extension mechanisms.
>> Don’t recall that we’ve concluded on Jorge’s proposal.
>>
>
>
>>
>>
>> Also I think it’s better to follow a consistent model across the Openstack
>> , provided the current Jorge’s proposal is generic enough and flexible
>> enough for what we are trying to do in our NetStack side. I think we should
>> take a look at Jorge’s and Ying model and as a team we decide.
>>
>
>Hi Ram,
>
>Apologies if I have misinterpreted the consensus here, but I seem to
>remember widespread verbal agreement during the summit on basing the API and
>its extensions off of the standard OpenStack mechanisms.  Also, the main
>Quantum Diablo Etherpad: http://etherpad.openstack.org/6LJFVsQAL7  (specific
>text pasted below) seems to show you and Salvatore agreeing to Erik's
>comment that we should use the standard OpenStack API and it includes a link
>to Jorge's doc on OpenStack Extensions.
>
>Jorge's proposal for extensions includes things like extension
>querying/discovery, a mechanism for preventing conflicts between extension
>fields from different vendors, etc. that I think are pretty fundamental to
>the what we'll need to make Quantum successful.  As a result, I am
>personally still strongly in favor of using the standard OpenStack extension
>mechanism as the base of our API mechanism for Quantum.
>
>I think Jorge's work is still in progress (Jorge?) so there should be an
>opportunity to provide input on that front as well.  If there are types of
>extensions that you are thinking about that won't work in the standard
>OpenStack model or if you simply think there is a better way to do it, that
>is something we should try to flush out ASAP.
>
>Dan
>
>
>===  Start From Etherpad 
>
>*4. For EACH network service (could be one or more depending on question
>#3), should there be a single, canonical REST API or should there be
>multiple APIs?  By canonical, we mean the base API is the same regardless of
>the driver/plugin that is implementing it.* * How should API extensibility
>be handled? *
>
>POSSIBLE ANSWERS:
>
>EC - [8] We should strive for a single approach across all Open Stack
>services.  To that end, we should follow the nova model and have a single
>"core" REST API that is applicable across all drivers/service engines.
>Where particular operations, headers, attributes, etc. are niche or vendor
>specific, API extensions should be implemented that allow for those
>capabilities to be programatically exposed but not required to be supported
>by all drivers/service engines.  If you are not familiar with the concept of
>OpenStack API extensions, there is a presentation here -
>http://wiki.openstack.org/JorgeWilliams?action=AttachFile&do=get&target=Extensions.pdf.
>Jorge is also doing a talk about this on Tue, 2PM at the Diablo summit.
>
>RamD[8] Completely agree. APIs should be OpenStack API model
>
>SO[8]: Agree with Erik and Ram.
>
> ===  End From Etherpad 
>
>
>
>
>
>>
>>
>> As I informed our Netstack team during our Design Summit, absolutely. we
>> can take up the API extensions and Sure, Ying can lead and help develop the
>> workstream and the related code contributions as part of overall Quantum.
>> I’ll let Ying add more here .
>>
>>
>> Thanks
>>
>>
>> Ram
>>
>>
>>
>>
>>
>> *From:* openstack-bounces+radurair=cisco@lists.launchpad.net [mailto:
>> openstack-bounces+radurair=cisco@lists.launchpad.net] *On Behalf Of *Dan
>> Wen

Re: [Openstack] python-novaclient vs. python-openstack.compute

2011-05-18 Thread Dan Prince
Hey Sandy,

So the API version works in the sense that if you hit /v1.0 you'll get the 
older v1.0 style API. Likewise for /v1.1.

The big things still in flux as far as v1.1 support in nova:

-making the JSON and XML serialization match the spec. On the JSON side there 
are some "collections" changes that need making. On the XML side I'm not sure 
we always adhere to the SPEC.

-ID and HREFs (the ability to specify a custom glance image HREF when creating 
an instance). The implementation for this in cactus just chopped of the image 
ID from the URL and used that as the image_id for the image service.

---

When updating a v1.0 binding to v1.1 you'll have to look at changes in 
metadata, new image metadata support, and some things moved around (change 
password moved from an update to an action), extensions, etc.

---

I wouldn't be opposed to maintaining bindings outside of the project. As 
Soren points out, it keeps us honest. The downside is that the bindings 
probably won't have any niche features and may lag new features a bit. I see 
external bindings as being more end-user friendly, and as such you may have a 
hard time getting features we add for administrative APIs implemented (zones, 
users, etc.).

In Ruby land...

We have started bumping the Openstack Compute Ruby bindings to support v1.1 
features. There is a branch in the works here: 
lp:~ironcamel/ruby-openstack-compute/v11 which takes care of most of the 
metadata changes, ID's and HREFs, formats, etc. We weren't planning on pushing 
the new Ruby gem with v1.1 support until the serialization stuff finally 
settles down.

Hope this helps.

Dan

-Original Message-
From: "Sandy Walsh" 
Sent: Wednesday, May 18, 2011 8:20am
To: "Soren Hansen" , "openstack@lists.launchpad.net" 

Subject: Re: [Openstack] python-novaclient vs. python-openstack.compute

Whoops, I could be mistaken on the "new 1.1 features" part of that email. 
Versioning is in a pending pull request.

I'd like to hear from the Titan team on their plans ... especially around 1.1 
support.

And, dare I say it, making the client library data-driven so it will change 
whenever the server-side API changes would be ideal. Right now it's a pita to 
have to update the client library every time something on the server changes. 
This also brings us back to the discussion of whether novaclient (or something) 
should be in the nova source tree and not separate. 

-S

From: Sandy Walsh
Sent: Wednesday, May 18, 2011 9:07 AM
To: Soren Hansen; openstack@lists.launchpad.net
Subject: RE: [Openstack] python-novaclient vs. python-openstack.compute

I agree with all of your points. Having to maintain a client library wasn't on 
our list of "fun things to do".

The only thing I can see in Jacobian's python-openstack.compute branch that 
differs from his old Rackspace API library is the addition of the auth URL and 
a rebranding.

We added that functionality to his old project last year, issued a pull request 
and were ignored. Perhaps his stance on working with us has changed since?

Moreover, since that first pull request we've really moved on with the project 
and there is much more functionality in the library:
- the new zone capabilities
- api versioning
- new OS 1.1 features
- better error handling and reporting
- better debugging

That said, the more we deal with the library the more we realize we should 
re-evaluate its use. It's a very chatty implementation ... frequently 
round-tripping to the server to fetch more detailed information. This is fine 
for a CLI, but as an internal library too inefficient.

Rather than merging these two efforts perhaps we should consider a new tack?

https://github.com/jacobian/openstack.compute
https://github.com/rackspace/python-novaclient

-S


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Soren Hansen [so...@linux2go.dk]
Sent: Wednesday, May 18, 2011 3:17 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] python-novaclient vs. python-openstack.compute

python-novaclient[0] is the client for Nova that we maintain
ourselves. It is a fork of jacobian's python-cloudservers.

python-openstack.compute is jacobian's new branch of python-cloudservers.

I wonder if there's any point in having two distinct, but very similar
libraries to do same thing. If not, how do we move forward?

Yielding to jacobian (or someone else external to the project) helps
keep us honest, since someone outside the project would look at the
API docs to extend their client tools, and will hopefully point out if
there's divergence between the API docs and the actual exposed API.

However, we need client tools to exercise new features exposed in the
API, so I'm not sure we can reasonably live without a set of tools
that we maintain ourselves to expose all the new functionality.

Thoughts?

-

[Openstack] glance PPA fix

2011-05-14 Thread Dan Prince
Soren/Monty,

The most recent glance PPA packages fail due to the fact that glance trunk now 
uses separate files for glance-api.conf and glance-registry.conf. This merge 
prop should fix it:

https://code.launchpad.net/~dan-prince/glance/ubuntu-glance-api-version/+merge/61001

Cheers,

Dan


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Code Reviews

2011-05-11 Thread Dan Prince
+1

I just SmokeStack'ed a couple more suspiciously old branches and marked them 
needs fixing because they didn't merge w/ trunk. Most of which already had 
needs fixing for other reasons anyway.

---

Also, I've been sitting on lp:~dan-prince/nova/chown_vdb_device for a month 
now. I'm happy to mark that as WIP to keep things clean. I am however okay with 
the merge prop as is (for the reasons I listed in the merge prop).


Dan

-Original Message-
From: "Vishvananda Ishaya" 
Sent: Wednesday, May 11, 2011 3:38pm
To: openstack@lists.launchpad.net
Subject: [Openstack] Code Reviews

Hello Everyone,

We have quite a large backlog of merge proposals here:

https://code.launchpad.net/~rlane/nova/lp773690/+merge/59565

I've been attempting to go through them to find some high priority ones to 
review.  It seems like people are being pretty active in reviewing branches, 
but there are a lot old branches that haven't been touched in a while.  So 
first I have a general request that anyone who has old branches in for review:  
please update your branches or mark them Work In Progress to remove them from 
the review queue.

I'd also like to propose a change to our process that will make the 
ready-to-review branches easier to find. I'd like for nova-core to set branches 
to WIP if there are two significant 'needs fixing's or a 'needs information'.  
That way everyone doesn't have to sort through branches that have already been 
reviewed but are waiting on updates.  We may need to use our judgement here, so 
if a large branch has a 'needs fixing' for a minor typo or some such, you could 
leave it under 'needs review' so it gets viewed by more people.

Here is an example where I think this policy will be useful:

You see a branch that already has a "Needs Fixing: this needs a failing test".  
If you look at the branch and reach the same conclusion, you can mark it "Needs 
Fixing: I agree, needs a test like xxx" and then set the branch to Work In 
Progress.  When the author has added the test or needs to make more comments, 
he can set it back to Needs Review.

I think this will generally keep the review board a little cleaner, and each 
branch will end up with a couple of people queued to review once the changes 
have come in. Does this seem acceptable to everyone?  If I don't hear any major 
dissents, I will add this info to the wiki and we can put it into practice.

Vish


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] euca-describe-images help?

2011-05-09 Thread Dan Prince
Hi Fred,

The image format for the old S3 image service (local image store) changed a bit 
in Cactus. I believe there is a 'nova-manage image convert' command you can use 
to convert the old naming scheme to the Cactus image store format.

Or like Todd pointed out below you can also manually do the conversion yourself.
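A manual conversion along the lines Todd describes might look like this. This 
is a rough sketch only: the real 'nova-manage image convert' command also 
rewrites each image's info.json metadata, and the id assignment here 
(sequential, starting from start_id) is an illustrative assumption.

```python
import os
import shutil


def convert_image_dirs(images_path, start_id=1):
    """Rename ami/aki/ari-style image directories to base-16 ids.

    Sketch only: the real conversion also updates each info.json,
    and the sequential id assignment is an assumption.
    """
    image_id = start_id
    for name in sorted(os.listdir(images_path)):
        src = os.path.join(images_path, name)
        if not os.path.isdir(src):
            continue
        # nova/images/local.py expects directory names in base 16
        dst = os.path.join(images_path, '%x' % image_id)
        shutil.move(src, dst)
        image_id += 1
```

Running this over the images directory from Fred's listing would leave 
directories named '1', '2', '3' (or whatever base-16 ids you start from) in 
place of aki-lucid, ami-tiny, and ari-lucid.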

Hope this helps.

Dan

-Original Message-
From: "Todd Willey" 
Sent: Monday, May 9, 2011 7:32pm
To: "Yang, Fred" 
Cc: "openstack@lists.launchpad.net" 
Subject: Re: [Openstack] euca-describe-images help?

Looks like the directories need to be named base 16.  See
nova/images/local.py line 60 (in trunk).


On Mon, May 9, 2011 at 6:34 PM, Yang, Fred  wrote:
> Hi,
>
>
>
> I am installing nova from the latest source trunk following
> http://wiki.openstack.org/InstallFromSource as a newbie and am having a
> euca-describe-images issue. Can someone enlighten me?  The environment is a
> single-node installation with Ubuntu 10.10
>
>
>
> fred@stocky1:~/openstack/nova$ ls -R images
>
> images:
>
> aki-lucid  ami-tiny  ari-lucid
>
>
>
> images/aki-lucid:
>
> image  info.json
>
>
>
> images/ami-tiny:
>
> image  info.json
>
>
>
> images/ari-lucid:
>
> image  info.json
>
>
>
> fred@stocky1:~/openstack/nova$ euca-describe-images
>
>
>
> where the ./bin/nova-api window gets following messages
>
> 2011-05-09 15:04:04,437 DEBUG nova.api [-] action: DescribeImages from
> (pid=15607) __call__ /home/fred/openstack/nova/nova/api/ec2/__init__.py:214
>
> 2011-05-09 15:04:04,437 DEBUG nova.api [-] arg: ExecutableBy.1
> val: self from (pid=15607) __call__
> /home/fred/openstack/nova/nova/api/ec2/__init__.py:216
>
> 2011-05-09 15:04:04,440 ERROR nova.image.local [-] aki-lucid is not in
> correct directory naming format
>
> 2011-05-09 15:04:04,440 ERROR nova.image.local [-] ami-tiny is not in
> correct directory naming format
>
> 2011-05-09 15:04:04,441 ERROR nova.image.local [-] ari-lucid is not in
> correct directory naming format
>
> 2011-05-09 15:04:04,441 DEBUG nova.api.request [-]  ?> xmlns="http://ec2.amazonaws.com/doc/2009-11-30/";>4WE2DMZ2ZNE4D7BSSYD5
> from (pid=15607) _render_response
> /home/fred/openstack/nova/nova/api/ec2/apirequest.py:171
>
> 2011-05-09 15:04:04,441 INFO nova.api [4WE2DMZ2ZNE4D7BSSYD5 fredy Ras]
> 0.28957s 192.168.0.14 GET /services/Cloud/ CloudController:DescribeImages
> 200 [Boto/1.9b (linux2)] text/plain text/xml
>
>
>
> Is the images path incorrect?  I have also tried execute command by step
> into ~/images
>
>
>
> Thanks,
>
> -Fred
>
>
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Packaging branches

2011-05-06 Thread Dan Prince
Hi Soren,

Thanks for the updates. What Ubuntu distro release is actually used to build 
the PPA packages? Maverick, Natty? Does the PPA build actually use 
lp:~openstack-ubuntu-packagers/nova/ubuntu? I couldn't get it to work on 
Maverick.

I just pushed a couple of branches that allow me to build nova ubuntu packages 
on Maverick:

lp:~dan-prince/nova/ubuntu-hunking-fixes
lp:~dan-prince/nova/ubuntu-fix_installinit

Thanks,

Dan

-Original Message-
From: "Soren Hansen" 
Sent: Thursday, May 5, 2011 10:16am
To: openstack@lists.launchpad.net
Subject: [Openstack] Packaging branches

I decided to restore some sanity to our packaging branch locations.

Long story short, the packaging code trunks are at:

lp:~openstack-ubuntu-packagers/<project>/ubuntu

Where <project> is glance, nova, or swift. This is what the PPA builds will use.

Additionally, in Ubuntu we maintain separate branches per Ubuntu
version (not related to the OpenStack PPAs; they're only used for
uploads directly to Ubuntu):

lp:~openstack-ubuntu-packagers/ubuntu/<release>/<project>/ubuntu

..where <release> is e.g. "natty" or "oneiric" and <project> is still
glance, nova, or swift.

If you already have a checkout of some of this and you want to point
your checkout to the new location, you can use:

$ bzr pull --remember <new location>

Enjoy, and please let me know if you have any questions.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] SmokeStack accounts?

2011-04-30 Thread Dan Prince
Hello all,

After a bit of rest I got a public SmokeStack setup here:

 http://184.106.189.251/

Trunk tests with libvirt ipv4 currently fail due to:

 https://bugs.launchpad.net/nova/+bug/773412

--

Is there interest in making SmokeStack accounts available to any and all nova 
devs? It was suggested that I use Launchpad as an OpenID provider. If I do that 
I could allow anyone in Authors to create an account (or something like that).

Any suggestions? I hate to manually create accounts if we can figure something 
better out.

--

When trunk runs again I'll run all the merge props here:

 https://code.launchpad.net/~hudson-openstack/nova/trunk/+activereviews

Anybody have a Launchpad API script to automate this sort of thing? Something 
like: check all branches with merge props, check the revision of each branch, 
and smoke test it if we haven't run that code yet, etc. This is a good start 
(although code could still get stale due to further trunk changes).
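The polling loop described above could be sketched like this. Everything here 
is hypothetical: list_merge_proposals() and smoke_test() are stand-ins for the 
real launchpadlib queries and SmokeStack job submissions, not actual API calls.

```python
def run_new_proposals(list_merge_proposals, smoke_test, tested=None):
    """Smoke test each proposed branch revision exactly once.

    Sketch only: the two callables are hypothetical stand-ins for
    launchpadlib queries and SmokeStack job submission.
    """
    if tested is None:
        tested = set()
    for prop in list_merge_proposals():
        rev = (prop['branch'], prop['revision'])
        if rev in tested:
            continue  # this exact revision was already smoke tested
        smoke_test(prop['branch'], prop['revision'])
        tested.add(rev)
    return tested
```

Run periodically, this only tests a branch again when its tip revision changes, 
which is exactly the staleness problem noted above: a branch can still go stale 
against trunk without its own revision moving.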


Dan


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Openstack API - Volumes?

2011-03-22 Thread Dan Prince
Hey John,

Wasn't the plan that Rackspace would use API extensions to refine the volumes 
API and then move towards getting the API into nova core long term? Perhaps the 
extension could even live in the nova source code but just not be considered 
part of the version spec?

Just wondering if there was a change of plan on this front.

Dan

-Original Message-
From: "John Purrier" 
Sent: Tuesday, March 22, 2011 12:20pm
To: "'Adam Johnson'" , openstack@lists.launchpad.net
Cc: chuck.th...@rackspace.com
Subject: Re: [Openstack] Openstack API - Volumes?

I know that creiht is looking at this for Rackspace. Chuck, anything to add
to this discussion?

John

-Original Message-
From: openstack-bounces+john=openstack@lists.launchpad.net
[mailto:openstack-bounces+john=openstack@lists.launchpad.net] On Behalf
Of Adam Johnson
Sent: Monday, March 21, 2011 10:15 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] Openstack API - Volumes?

Hey everyone,

I wanted to bring up the topic of volumes in the OpenStack API.  I
know there was some discussion about this before, but it seems to have
faded on the ML.   I know Justinsb has done some work on this already,
and has a branch here:
https://code.launchpad.net/~justin-fathomdb/nova/justinsb-openstack-api-volumes

I'm wondering what the consensus is on what the API should look like,
and when we could get this merged into Nova?

Thanks,
Adam Johnson
Midokura




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OS API server password generation

2011-03-07 Thread Dan Prince
Hi Thierry,

Apologies for the late blueprint submission. I guess I like using blueprints 
for missing features so that I can set up the dependencies in Launchpad. On 
this specific issue I just wanted some community comments and communication on 
the notes I had in etherpad. Although my initial email was poorly worded I did 
get a couple of useful comments.

Bugs vs. blueprints is sort of a gray area. When a larger blueprint is 
accepted (like the Openstack 1.1 API) should we file component features as 
bugs or blueprints?

In my case I could have easily considered this a bug as well since we should 
already support the v1.0 API right?

Dan

-Original Message-
From: "Thierry Carrez" 
Sent: Saturday, March 5, 2011 4:14pm
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] OS API server password generation

Jay Pipes wrote:
> Does anyone else feel it's a bit late to be targeting new blueprints
> for Cactus since we're 2 weeks from branch merge proposal freeze?
> 
> http://wiki.openstack.org/CactusReleaseSchedule

Yes.

SpecSubmissionDeadline for Cactus was one month ago. It was tolerated so
far to add simple blueprints that are fairly obvious and do not require
discussion (even retrospectively). But if the added specs require design
discussion and consensus, it's clearly too late: at this stage in the
cycle we should be busy pumping code out and reviewing proposed code
branches, not really participating in long design threads...

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Remove the local image service (nova/image/local.py)?

2011-03-07 Thread Dan Prince
Is anyone using the nova/image/local.py?

This class appears to be a pre-glance image service that could be used with the 
Openstack API. As we now have glance I'm not sure we really need this anymore.

The class initializes its path to a temp directory:

    def __init__(self):
        self._path = tempfile.mkdtemp()

Maybe we can use this as a proper mock image service (within the tests) and 
remove it as a top level image service within the code base.
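A test-only stand-in along those lines might look like this. This is a 
hypothetical sketch: the class name and method signatures are illustrative, 
not nova's actual image service interface.

```python
class FakeImageService(object):
    """In-memory image service for unit tests.

    Illustrative only: method names here are assumptions, not the
    real nova image service API.
    """

    def __init__(self):
        self._images = {}
        self._next_id = 1

    def create(self, context, metadata):
        # assign an id and remember the image in memory
        image = dict(metadata, id=self._next_id)
        self._images[self._next_id] = image
        self._next_id += 1
        return image

    def show(self, context, image_id):
        return self._images[image_id]

    def index(self, context):
        return list(self._images.values())
```

Unlike the tempfile-backed local service, nothing here touches disk, so tests 
stay fast and leave no state behind between runs.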

--
I filed a bug on this issue here:

  https://bugs.launchpad.net/nova/+bug/723947

NOTE: This has nothing to do with 'nova-objectstore' or anything on the S3/EC2 
side of things as some of the comments mention in the ticket.
--

Why remove a class that isn't hurting anything? In another bug/ticket I was 
trying to standardize how the image services provide access to 'kernel_id' and 
'ramdisk_id'. This will help clean up some of the code in the API layers. While 
doing that I ran across nova/image/local.py, which seemed rather outdated.

Dan


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OS API server password generation

2011-03-04 Thread Dan Prince
Hi Jay,

Would it be better to have stuff like this go into a bug report? For that 
matter should anything that helps us reach parity with the Cloud Servers v1.0 
spec be a bug report or a blueprint?

I like the dependency graphs we get with blueprints, as they help track how 
close we are to reaching parity in some of the larger (meta) blueprints.

I bounced the idea around some of the teams within Rackspace and blueprint 
seemed right for this one. Perhaps I was wrong.

Dan

-Original Message-
From: "Jay Pipes" 
Sent: Friday, March 4, 2011 10:27am
To: "Dan Prince" 
Cc: "Openstack" 
Subject: Re: [Openstack] OS API server password generation

Does anyone else feel it's a bit late to be targeting new blueprints
for Cactus since we're 2 weeks from branch merge proposal freeze?

http://wiki.openstack.org/CactusReleaseSchedule

-jay

On Wed, Mar 2, 2011 at 4:11 PM, Dan Prince  wrote:
> We created a blueprint on adding support for password generation when 
> creating servers. This is needed for Openstack API/Cloud Servers API v1.0 
> parity.
>
> We are anxious to get this work started so if you are interested please 
> review the following:
>
>  https://blueprints.launchpad.net/nova/+spec/openstack-api-server-passwords
>
>  http://etherpad.openstack.org/openstack-api-server-passwords
>
> Dan Prince
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] OS API server password generation

2011-03-02 Thread Dan Prince
We created a blueprint on adding support for password generation when creating 
servers. This is needed for Openstack API/Cloud Servers API v1.0 parity.

We are anxious to get this work started so if you are interested please review 
the following:
 
 https://blueprints.launchpad.net/nova/+spec/openstack-api-server-passwords
 
 http://etherpad.openstack.org/openstack-api-server-passwords

Dan Prince


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How to deal with 'tangential' bugs?

2011-02-28 Thread Dan Prince

I'm not a big fan of 'known bugs' in unit tests. Unit tests should always pass. 
How practical is it that I'm going to invest the time to write a unit test for 
a bug which I'm then not able to fix in the same merge? In many cases writing 
the test cases is actually harder than writing the code to fix the actual bug.
 
If you really need to write a test case ahead of time (perhaps even to make 
your case that a bug exists) why not just create a launchpad bug and then 
attach the test case as a patch to the bug report?
 
Seems like 'known bugs' also provides a mechanism for potentially stale unit 
test code to hide out in the nova codebase. If code (including test code) isn't 
being actively used then I'd actually prefer not to have it in the codebase.
 

Lastly, QA testers would probably focus more on the functional and integration 
types of testing rather than unit tests, right?
 
Dan
 
-Original Message-
From: "Justin Santa Barbara" 
Sent: Monday, February 28, 2011 1:56pm
To: openstack@lists.launchpad.net
Subject: [Openstack] How to deal with 'tangential' bugs?


Jay and I have been having an interesting discussion about how to deal with 
bugs that mean that unit tests _should_ fail.  So, if I find a bug, I should 
write a failing unit test first, and fix it (in one merge).  However, if I 
can't fix it, I can't get a failing unit test merged into the trunk (because it 
fails).  It may be that I can't get what I'm actually working on merged with 
good unit tests until this 'tangential' bug is fixed.
(The discussion is here: 
https://code.launchpad.net/~justin-fathomdb/nova/bug724623/+merge/51227)
I suggested that we introduce a "known_bugs" collection.  It would have a set 
of values to indicate bugs that are known but not yet fixed.  Ideally these 
would be linked to bug reports (we could mandate this).  When a developer wants 
to write a test or behavior to work around a particular bug, they can control 
it based on testing this collection ("if 'bug12345' in known_bugs:")  When 
someone is ready to fix the bug, they remove the bug from the collection, the 
unit tests then fail, they fix the code and commit with the known_bugs item 
removed.
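As a sketch of how the proposed collection might work in practice (entirely 
hypothetical: no known_bugs registry exists in nova, and the bug id and 
decorator name are illustrative):

```python
import functools
import unittest

# Hypothetical registry of open bugs; each entry would ideally map
# to a Launchpad bug report.
known_bugs = set(['bug724623'])


def skip_if_known_bug(bug_id):
    """Skip a test while its bug is still listed in known_bugs.

    Removing the bug id from known_bugs makes the test run (and
    fail) again, handing the bug fixer a ready-made failing test.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if bug_id in known_bugs:
                raise unittest.SkipTest('known bug: %s' % bug_id)
            return func(*args, **kwargs)
        return wrapper
    return decorator
```

The same registry supports the inline form from the paragraph above 
("if 'bug12345' in known_bugs:") for tests that want to assert the buggy 
behaviour rather than skip entirely.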
This would let people that find bugs but can't or don't want to fix them still 
contribute unit tests.  This could be a QA person that can write tests but not 
necessarily code the fix.  This could be a developer who simply isn't familiar 
with the particular system.  Or it could be where the fix needs to go through 
the OpenStack discussion process.  Or it could simply be a train-of-thought / 
'flow' issue.
Take, for example, my favorite OpenStack API authentication issue.  To get a 
passing unit test with OpenStack authentication, my best bet is to set all 
three values (username, api_key, api_secret) to the same value.  This, however, 
is a truly terrible test case.  Having "known_bugs" marks my unit test as being 
suboptimal; it lets me provide better code in the same place (controlled by the 
known_bugs setting); and when the bug fixer comes to fix it they easily get a 
failing unit test that they can use for TDD.
Jay (correctly) points out that this is complicated; can cause problems down 
the line when the bug is fixed in an unexpected way; that known_bugs should 
always be empty; and that the right thing to do is to fix the bug or get it 
fixed.  I agree, but I don't think that getting the bug fixed before proceeding 
is realistic in a project with as many stakeholders as OpenStack has.
Can we resolve the dilemma?  How should we proceed when we find a bug but we're 
working on something different?
Justin
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Steps that can help stabilize Nova's trunk

2011-02-17 Thread Dan Prince
Hey Jay,

I like what you propose here. I have a couple of comments and questions.

I see the 'smoketests' directory in the nova code. Is anyone running these 
tests on a regular basis (every commit)? Is this the best place to further 
build out integration tests?

---

Regarding environment/setup tools: I've been working on an Openstack VPC 
toolkit project that we are using in Blacksburg to stage test some things in 
the cloud. I'm using Chef along with the Anso/Opscode Openstack cookbooks to 
set up Rackspace Cloud Servers with the latest trunk PPA packages.

This setup works well, however I can only test with Qemu (no XenServer) and 
using network managers that have DHCP (I use FlatDHCPManager since Cloud 
Servers kernels don't currently have the 'nbd' kernel module which would 
support the network injection stuff). Using this setup I'm able to create 
multi-node installations where instances on different machines can ping each 
other. While this isn't what I would call a true production setup, it is fully 
functional and can easily be run in parallel. The only limitations are the 
limits on your Cloud Servers account.

If you have bare metal then you can simply swap out the Cloud Servers API layer 
with something that interfaces with your PXE imaging system. I'm a big fan of 
slicking the machines between each test run to avoid the buildup of cruft in 
the system.

This would integrate as a Hudson job nicely as well. We've done some similar 
setups in Rackspace Email and Apps using a single Hudson server. The Hudson 
server runs a simple Bash script that invokes the toolkit to create the cloud 
servers, chef them up with the latest PPA packages (or your branch code), and 
then uses Torque (an HPC'ish resource manager) to schedule and run test jobs on 
some of the machines. I use Chef recipes to install Torque along with a REST 
interface to schedule and monitor the jobs on a 'head' node. The Hudson job 
then waits for the Torque jobs to finish. The last thing the Hudson script does 
is 'scp' the unit test results XML file back to the Hudson server, where you 
can use something like the xUnit plugin to display and graph the results over 
time.

To summarize:

- testing in the cloud provides a low barrier to entry that anyone can use
- testing on bare metal is more expensive but gets us extra coverage 
(XenServer, etc.)
- we should do both as often as possible
- the same set of tools and tests should work in both environments

Dan

-Original Message-
From: "Jay Pipes" 
Sent: Wednesday, February 16, 2011 5:27pm
To: openstack@lists.launchpad.net
Subject: [Openstack] Steps that can help stabilize Nova's trunk

Hey all,

It's come to my attention that a number of folks are not happy that
Nova's trunk branch (lp:nova) is, shall we say, "less than stable". :)

First, before going into some suggestions on keeping trunk more
stable, I'd like to point out that trunk is, by nature, an actively
developed source tree. Nobody should have an expectation that they can
simply bzr branch lp:nova and everything will magically work with a)
their existing installations of software packages, b) whatever code
commits they have made locally, or c) whatever specific
hypervisor/volume/network environment that they test their local code
with. The trunk branch is, after all, in active development.

That said, there's *no* reason we can't *improve* the relative
stability of the trunk branch to make life less stressful for
contributors.  Here are a few suggestions on how to keep trunk a bit
more stable for those developers who actively develop from trunk.

1) Participate fully in code reviews. If you suspect a proposed branch
merge will "mess everything up for you", then you should notify
reviewers and developers about your concerns. Be proactive.

2) If you pull trunk and something breaks, don't just complain about
it. Log a bug immediately and talk to the reviewers/approvers of the
patch that broke your environment. Be constructive in your criticism,
and be clear about why the patch should have been more thoroughly or
carefully reviewed. If you don't, we're bound to repeat mistakes.

3) Help us to write functional and integration tests. It's become
increasingly clear from the frequency of breakages in trunk (and other
branches) that our unit tests are nowhere near sufficient to catch a
large portion of bugs. This is to be expected. Our unit tests use
mocks and stubs for virtually everything, and they only really test
code interfaces, and they don't even test that very well. We're
working on adding functional tests to Hudson that will run, as the
unit test do, before any merge into trunk, with any failure resulting
in a failed merge. However, we need your help to create functional
tests and integration tests (tests that various *real* components work
together properly).  We also need help writing test cases that ensure
software library dependencies and other packaging issues are handled
properly and don't break with minor patches.

4) If y

Re: [Openstack] Metadata schema design

2011-02-15 Thread Dan Prince
Hi Justin,

My vote would be to call the table InstanceProperties.

Regarding the ability of other services to use the table: wouldn't it be 
cleaner if services had their own 'properties' tables (like the Glance registry 
service does)?

In other words services would have control over their own metadata tables. If 
the volume service needs metadata it should have its own table (or DB), etc.

Dan

-Original Message-
From: "Justin Santa Barbara" 
Sent: Monday, February 14, 2011 1:29pm
To: openstack@lists.launchpad.net
Subject: [Openstack] Metadata schema design

I've coded support for metadata on instances.  This is part of the
CloudServers API, and I needed it for my idea about metadata hints to the
scheduler.
https://code.launchpad.net/~justin-fathomdb/nova/justinsb-metadata/+merge/49490

However, Jay Pipes has raised some (very valid) design questions on the
schema.

I called the table/entity 'Metadata', and it has two main attributes: 'key'
and 'value'.  There's a foreign key to Instances, and long term I'd expect
we'd add more foreign keys to other parent entities.  I expect only one of
those parent foreign keys would be populated per row.

Two questions:


1. Are those words too overloaded?  Jay suggested (Instance)Properties.
   However, then the question arises about the 'core properties' (zone, image,
   instance_type for a machine) and why they are not stored in the 'properties'
   collection.  This is really metadata, and the CloudServers API calls it
   metadata.  What do people think these should be named?  "Metadata"?
   "Properties"? "Tags"?
2. I imagine that Volumes will also have metadata (long term, probably
   everything will - networks, images, instance types, network objects).  So
   should we have one entity/table or multiple entities (one per parent type)?
   I like the idea of one entity, because I think it will yield better code
   with less code duplication.  From a SQL viewpoint, one per parent entity is
   probably more normal though.


Justin



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp