Re: [openstack-dev] [Nova] Proposal to add Christopher Yeoh to nova-core

2013-07-03 Thread Pádraig Brady
+1



Re: [openstack-dev] [Nova] Proposal to add Christopher Yeoh to nova-core

2013-07-03 Thread Sean Dague
+1, good to have more voices in the eastern hemisphere as well.

On Wed, Jul 3, 2013 at 8:38 AM, Pádraig Brady p...@draigbrady.com wrote:
 +1




-- 
Sean Dague
http://dague.net



Re: [openstack-dev] Proposal for including fake implementations in python-client packages

2013-07-03 Thread Joe Gordon
On Mon, Jul 1, 2013 at 11:08 PM, Alex Gaynor alex.gay...@gmail.com wrote:

 Hi all,

 I suspect many of you don't know me as I've only started to get involved in
 OpenStack recently, I work at Rackspace and I'm pretty involved in other
 Python
 open source stuff, notably Django and PyPy, I also serve on the board of
 the
 PSF. So hi *wave*!

 I'd like to propose an addition to all of the python-client libraries going
 forwards (and perhaps a requirement for future ones).

 What I'd like is for each client library, in addition to the actual
 implementation, to ship a fake, in-memory version of the API. The fake
 implementations should take the same arguments, have the same return
 values, raise the same exceptions, and otherwise be identical, besides the
 fact that they are entirely in memory and never make network requests.

 Why not ``mock.Mock(spec=...)``:

 First, for those not familiar with the distinction between fakes and mocks
 (and
 doubles and stubs and ...): http://mumak.net/test-doubles/ is a great
 resource.
 https://www.youtube.com/watch?v=Xu5EhKVZdV8 is also a great resource which
 explains much of what I'm about to say, but better.

 Fakes are better than mocks for this because:

 * Mocks tend to be brittle because they're testing the implementation,
   not the interface.
 * Each mock tends to grow its own divergent behaviors, which tend not to
   be correct.
   http://stackoverflow.com/questions/8943022/reactor-stops-between-tests-when-using-twisted-trial-unittest/8947354#8947354
   explains how to avoid this with fakes.
 * Mocks tend to encourage monkey patching, instead of just passing objects
   as parameters.

 Again: https://www.youtube.com/watch?v=Xu5EhKVZdV8 is an amazing resource.
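
To make the fake-versus-mock distinction concrete, here is a minimal,
purely hypothetical sketch of the kind of in-memory fake being proposed
(FakeVolumeAPI and NotFound are illustrative names, not any real OpenStack
client's API):

    import uuid


    class NotFound(Exception):
        """Raised for a missing volume, mirroring the real client's error."""


    class FakeVolumeAPI(object):
        """In-memory stand-in: same arguments, return shapes and exceptions
        as the real client, but no network requests are ever made."""

        def __init__(self):
            self._volumes = {}

        def create(self, size, name=None):
            volume = {'id': str(uuid.uuid4()), 'size': size,
                      'name': name, 'status': 'available'}
            self._volumes[volume['id']] = volume
            return volume

        def get(self, volume_id):
            try:
                return self._volumes[volume_id]
            except KeyError:
                raise NotFound(volume_id)

A test would construct FakeVolumeAPI wherever it would otherwise construct
the real client, and exercise exactly the same interface.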


 This obviously adds a bit of a burden to development of client libraries,
 so
 there needs to be a good justification. Here are the advantages I see:



Just as an FYI, nova currently has a different way of addressing the two
points that a fake API server would address.




 * Helps flesh out the API: by having a simple implementation it helps in
   designing a good API.


Having a simple API implementation will not help us with having a good API,
as by the time it's in code it is almost too late to change (the window to
change is between coded and released). Instead, a fake API would help
validate that the API follows the specs. Nova has two ways of doing this
now: unit tests that exercise the APIs and compare the results against
recorded output (see doc/api_samples/ in nova), and tempest API tests.



 * Helps other projects' tests: right now any project which uses an openstack
   client library has to do something manual in their tests: either add their
   own abstraction layer where they hand-write an in-memory implementation,
   or they just monkey patch the socket, http, or client library to not make
   requests. Either direction requires a bunch of work from each and every
   project using an openstack client. Having these in the core client
   libraries would allow downstream authors to simply swap out Connection
   classes.


Instead of a fake API server with no scheduler, no db, etc., nova has a fake
backend. So the only code path that is different from a real deployment is
which virt driver is used (https://review.openstack.org/#/c/24938/).



 I think these benefits outweigh the disadvantages. I'm not sure what the
 procedure for this is going forward. I think to demonstrate this concept it
 should start with a few (or even just one) client libraries, particularly
 ones which completely own the resources they serve (e.g. swift, marconi,
 ceilometer, trove), as compared to ones that interact more (e.g. neutron,
 cinder, and nova). This is absolutely something I'm volunteering to work
 on, but I want to ensure this is an idea that has general buy-in from the
 community and existing maintainers, so it doesn't wither.

 Thanks,
 Alex

 --
 I disapprove of what you say, but I will defend to the death your right
 to say it. -- Evelyn Beatrice Hall (summarizing Voltaire)
 The people's good is the highest law. -- Cicero
 GPG Key fingerprint: 125F 5C67 DFE9 4084





Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-03 Thread Johannes Erdfelt
On Wed, Jul 03, 2013, Michael Still mi...@stillhq.com wrote:
 On Wed, Jul 3, 2013 at 3:50 AM, Boris Pavlovic bo...@pavlovic.me wrote:
 
  Question:
  Why should we put sqlalchemy-migrate monkey patches in oslo, when we are
  planning to switch to alembic?
 
  Answer:
  If we don't put the sqlalchemy-migrate monkey patches in oslo, we won't be
  able to work on point 7 at all until points 8 and 10 are implemented in
  every project. Also, work on point 8 is not finished, so we are not able
  to implement point 10 in any project. So this blocks almost all work in
  all projects. I think these 100-200 lines of code are not a big price to
  pay for saving a few cycles of time.
 
 We've talked in the past (Folsom summit?) about alembic, but I'm not
 aware of anyone who is actually working on it. Is someone working on
 moving us to alembic? If not, it seems unfair to block database work
 on something no one is actually working on.

I've started working on a non-alembic migration path that was discussed
at the Grizzly summit.

While alembic is better than sqlalchemy-migrate, it still requires long
downtimes when some migrations are run. We discussed moving to an
expand/contract cycle where migrations add new columns, allow migrations
to slowly (relatively speaking) migrate data over, then (possibly) remove
any old columns.
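
For readers unfamiliar with the pattern, a schematic example of the
expand/contract idea (plain sqlite3 here purely for illustration, not
nova's actual migrations):

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE instances (id INTEGER PRIMARY KEY, host TEXT)')

    # 1. Expand: a quick migration adds the new column alongside the old one.
    conn.execute('ALTER TABLE instances ADD COLUMN node TEXT')

    # 2. Migrate: data is copied over while the service keeps running and
    #    both columns stay readable (batched in practice; a single statement
    #    here only for brevity).
    conn.execute('UPDATE instances SET node = host WHERE node IS NULL')

    # 3. Contract: once nothing reads the old column, a later migration
    #    drops it. (SQLite cannot ALTER TABLE ... DROP COLUMN, which is the
    #    kind of limitation the monkey patches in this thread work around.)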

JE




Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-03 Thread Doug Hellmann
On Wed, Jul 3, 2013 at 6:50 AM, Michael Still mi...@stillhq.com wrote:

 On Wed, Jul 3, 2013 at 3:50 AM, Boris Pavlovic bo...@pavlovic.me wrote:

  Question:
Why we should put in oslo slqlalchemy-migrate monkey patches, when we
 are
  planing to switch to alembic?
 
  Answer:
 If we don’t put in oslo sqlalchemy-migrate monkey patches. We won't be
  able to work on 7 point at all until 8 and 10 points will be implemented
 in
  every project. Also work around 8 point is not finished, so we are not
 able
  to implement 10 points in any of project. So this blocks almost all work
 in
  all projects. I think that these 100-200 lines of code are not so big
 price
  for saving few cycles of time.

 We've talked in the past (Folsom summit?) about alembic, but I'm not
 aware of anyone who is actually working on it. Is someone working on
 moving us to alembic? If not, it seems unfair to block database work
 on something no one is actually working on.


That's not quite what happened. Unfortunately the conversation happened in
gerrit, IRC, and email, so it's a little hard to piece together from the
outside.

I had several concerns with the nature of this change, not the least of
which is it is monkey-patching a third-party library to add a feature
instead of just modifying that library upstream.

The patch I objected to (https://review.openstack.org/#/c/31016) modifies
the sqlite driver inside sqlalchemy-migrate to support some migration
patterns that it does not support natively. There's no blueprint linked
from the commit message on the patch I was reviewing, so I didn't have the
full background. The description of the patch, and the discussion in
gerrit, initially led me to believe this was for unit tests for the
migrations themselves. I pointed out that it didn't make any sense to test
the migrations on a database no one would use in production, especially if
we had to monkey patch the driver to make the migrations work in the first
place.

Boris clarified that the tests were the general nova tests, at which point
I asked why nova was relying on the migrations to set up a database for its
tests instead of just using the models. Sean cleared up the history on that
point, and although I'm still not happy with the idea of putting code in
oslo with the pre-declared plan to remove it (rather than consider it for
graduation), I agreed that the pragmatic thing to do for now is to live
with the monkey patched version of sqlalchemy-migrate.

At this point, I have removed my -2 to the patch, but I haven't had a
chance to fully review the code. I voted 0 to unblock it in case other
reviewers had time to look at it before I was able to come back. That
hasn't happened, but the patch is no longer blocked.

Somewhere during that conversation, I suggested looking at alembic as an
alternative, but alembic clearly states in its documentation that migrations
on sqlite are not supported because of the database's limited support for
alter statements, though patches would be welcome if someone wants to
contribute those features. If we do need this feature to support
good unit tests of SQLalchemy-based projects, we should eventually move it
out of oslo and into alembic, then move our migration scripts to use
alembic. It would make the most sense to do that on a release boundary,
when we normally collapse the migration scripts anyway. Even better would
be if we could make the models and migration scripts produce databases that
are compatible enough for testing the main project, and then run tests for
the migrations themselves against real databases as a separate step. Based
on the plan Boris has posted, it sounds like he is working toward both of
these goals.

Doug



 Michael



[openstack-dev] [Trove] weekly meeting today

2013-07-03 Thread Michael Basnight
Same bat time, same bat channel. 2000 UTC in #openstack-meeting-alt

https://wiki.openstack.org/wiki/Meetings/TroveMeeting



Re: [openstack-dev] [nova] volume affinity filter for nova scheduler

2013-07-03 Thread Russell Bryant
On 07/03/2013 10:24 AM, Alexey Ovchinnikov wrote:
 Hi everyone,
 
 for some time I have been working on an implementation of a filter that
 would allow forcing instances onto hosts which contain specific volumes.
 A blueprint can be found here:
 https://blueprints.launchpad.net/nova/+spec/volume-affinity-filter
 and an implementation here:
 https://review.openstack.org/#/c/29343/
 
 The filter works for the LVM driver, and it currently picks either a host
 containing the specified volume or nothing (thus effectively failing
 instance scheduling). Now it fails primarily when it can't find the
 volume. It has been pointed out to me that sometimes it may be desirable
 not to fail instance scheduling but to run the instance anyway. However,
 this softer behaviour fits better with a weighter function. Thus I have
 registered a blueprint for the weighter function:
 https://blueprints.launchpad.net/nova/+spec/volume-affinity-weighter-function
 
 I was thinking about the filter and the weighter working together. The
 former could be used in cases where we strongly need the storage space
 associated with an instance and need them placed on the same host. The
 latter could be used when having the storage space on the same host as the
 instance is nice, but not so crucial as to outweigh having the instance
 running at all.
 
 During review a question came up: do we need the filter at all, or would
 things be better if we removed it and had only the weighter function
 instead? I am not yet convinced that the filter is useless and needs to
 be replaced with the weighter, so I am asking for your opinion on this
 matter. Do you see use cases for the filter, or will the weighter answer
 all needs?

Thanks for starting this thread.

I was pushing for the weight function.  It seems much more appropriate
for a cloud environment than the filter.  It's an optimization that is
always a good idea, so the weight function that works automatically
would be good.  It's also transparent to users.

Some things I don't like about the filter:

 - It requires specifying a scheduler hint

 - It's exposing a concept of co-locating volumes and instances on the
same host to users.  This isn't applicable for many volume backends.  As
a result, it's a violation of the principle where users ideally do not
need to know or care about deployment details.
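
For readers less familiar with the scheduler internals, here is a rough
sketch of the difference, loosely modelled on nova's filter/weigher
interfaces; the class and attribute names are illustrative, not the code
under review:

    class VolumeAffinityFilter(object):
        """Hard constraint: a host that does not hold the requested volume
        is rejected outright, so scheduling can fail entirely."""

        def host_passes(self, host_state, filter_properties):
            hints = filter_properties.get('scheduler_hints') or {}
            wanted = hints.get('volume_id')
            if not wanted:
                return True
            return wanted in getattr(host_state, 'volume_ids', [])


    class VolumeAffinityWeigher(object):
        """Soft preference: hosts holding the volume score higher, but any
        host can still be chosen and no scheduler hint is required."""

        def weigh(self, host_state, weight_properties):
            wanted = weight_properties.get('volume_id')
            if wanted and wanted in getattr(host_state, 'volume_ids', []):
                return 1.0
            return 0.0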

-- 
Russell Bryant



Re: [openstack-dev] Issues with git review for a dependent commit

2013-07-03 Thread Vishvananda Ishaya

On Jul 2, 2013, at 4:23 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 
git review -d 33297
git review -x 35384
git review

Oh, I didn't see that you added -x/-X/-N. I can simplify my backport script[1] 
significantly now.

Vish

[1] https://gist.github.com/vishvananda/2206428


[openstack-dev] [Metrics][Nova] Using Bicho database to get stats about code review

2013-07-03 Thread Jesus M. Gonzalez-Barahona
Hi all,

Bicho [1] now has a Gerrit backend, which has been tested with
OpenStack's Gerrit. We have used it to produce the MySQL database dump
available at [2] ( gerrit.mysql.7z ). You can use it to compute the
metrics mentioned in the previous threads about code review, and some
others.

[1] https://github.com/MetricsGrimoire/Bicho
[2] http://activity.openstack.org/dash/browser/data/db/

The database dump will be updated daily, starting in a few days.

For some examples on how to run queries on it, or how to produce the
database using Bicho, fresh from OpenStack's gerrit, have a look at [3].

[3] https://github.com/MetricsGrimoire/Bicho/wiki/Gerrit-backend

At some point, we plan to visualize the data as a part of the
development dashboard [4], so any comment on interesting metrics, or
bugs, will be welcome. For now, we're taking not of all metrics
mentioned in the previous posts about code review stats.

[4] http://activity.openstack.org/dash/browser/

Saludos,

Jesus.

-- 
-- 
Bitergia: http://bitergia.com http://blog.bitergia.com




Re: [openstack-dev] Proposal for including fake implementations in python-client packages

2013-07-03 Thread Christopher Armstrong
On Tue, Jul 2, 2013 at 11:38 PM, Robert Collins
robe...@robertcollins.net wrote:
 Radix points out I missed the nuance that you're targeting the users
 of python-novaclient, for instance, rather than python-novaclient's
 own tests.


 On 3 July 2013 16:29, Robert Collins robe...@robertcollins.net wrote:

 What I'd like is for each client library, in addition to the actual
 implementation, to ship a fake, in-memory version of the API. The fake
 implementations should take the same arguments, have the same return
 values, raise the same exceptions, and otherwise be identical, besides the
 fact that they are entirely in memory and never make network requests.

 So, +1 on shipping a fake reference copy of the API.

 -1 on shipping it in the client.

 The server that defines the API should have two implementations - the
 production one, and a testing fake. The server tests should exercise
 *both* code paths [e.g. using testscenarios] to ensure there is no
 skew between them.

 Then the client tests can be fast and efficient but not subject to
 implementation skew between fake and prod implementations.
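
As a sketch of what exercising both code paths might look like with
testscenarios (FakeVolumeAPI and NotFound as in the earlier sketch;
RealVolumeAPI and the volume_api module are hypothetical stand-ins for a
server's production implementation):

    import testscenarios

    # Hypothetical module exposing both implementations of one interface.
    from volume_api import FakeVolumeAPI, NotFound, RealVolumeAPI


    class TestVolumeAPIContract(testscenarios.TestWithScenarios):
        """Each test runs once per implementation, so the fake cannot
        silently drift away from the production code path."""

        scenarios = [
            ('real', {'api_class': RealVolumeAPI}),
            ('fake', {'api_class': FakeVolumeAPI}),
        ]

        def test_get_missing_volume_raises(self):
            api = self.api_class()
            self.assertRaises(NotFound, api.get, 'no-such-id')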

 Back on Launchpad I designed a similar thing, but with language
 neutrality as a goal :
 https://dev.launchpad.net/ArchitectureGuide/ServicesRequirements#Test_fake

 And in fact, I think that that design would work well here, because we
 have multiple language bindings - Python, Ruby, PHP, Java, Go etc, and
 all of them will benefit from a low(ms or less)-latency test fake.

 So taking the aspect I missed into account I'm much happier with the
 idea of shipping a fake in the client, but... AFAICT many of our
 client behaviours are only well defined in the presence of a server
 anyhow.

 So it seems to me that a fast server fake can be used in tests of
 python-novaclient, *and* in tests of code using python-novaclient
 (including for instance, heat itself), and we get to write it just
 once per server, rather than once per server per language binding.

 -Rob


I want to make sure I understand you. Let's say I have a program named
cool-cloud-tool, and it uses python-novaclient, python-keystoneclient,
and three other clients for OpenStack services. You're suggesting that
its test suite should start up instances of all those OpenStack
services with in-memory or otherwise localized backends, and
communicate with them using standard python-*client functionality?

I can imagine that being a useful thing, if it's very easy to do, and
won't increase my test execution time too much.

-- 
IRC: radix
Christopher Armstrong
Rackspace



[openstack-dev] [vmware] VMwareAPI sub-team status update

2013-07-03 Thread Shawn Hartsock
Greetings Stackers!

I covered open reviews on Friday. It is a short week for us here in the US so 
I'll just be sending the one email this week just ahead of our VMwareAPI team 
meeting.

Blueprints targeted for Havana-2:
* https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage - 
started but depends on
** https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy - 
which I have up for review
* 
https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
 - good progress 

If you are working on a BP for the H2 deadline, try to get it up and finished
by Monday morning. Review cycles are long, and if you don't have it up for
review by then the chances are your blueprint will slip into Havana-3.

Thanks to the core-reviewers for all their attention! We've had several patches 
merge and I feel that our developers are starting to learn what's expected from 
them. I have a feeling that future reviews will be smoother thanks to your 
attentions.

Merged (Victory!):
* https://review.openstack.org/#/c/30036/
* https://review.openstack.org/#/c/30289/

Needs one more +2 / Approve button:
* https://review.openstack.org/#/c/27885/
* https://review.openstack.org/#/c/29453/ 

Ready for core-reviewer:
[none at this stage right now]

Needs VMware API expert review:
* https://review.openstack.org/#/c/30282/
* https://review.openstack.org/#/c/30628/ - pretty close to ready for core
* https://review.openstack.org/#/c/30822/ - almost ready for core
* https://review.openstack.org/#/c/33100/
* https://review.openstack.org/#/c/33238/ - BP
* https://review.openstack.org/#/c/33504/ - VC not available fix
* https://review.openstack.org/#/c/34033/ - new feature; how should VMwareAPI
support it?

Needs help/discussion (has topical -1 issues):
* https://review.openstack.org/#/c/33782/ - general nova discussion affects 
all hypervisors
* https://review.openstack.org/#/c/33088/ - IIRC author is considering 
dropping this?
* https://review.openstack.org/#/c/34189/ - has -1 for nit-pick reasons

Work in progress:
* https://review.openstack.org/#/c/35502/ - I can't click the Work in
progress button; I'm not sure how else to signal that I'm still working... help?

Meeting info:
* https://wiki.openstack.org/wiki/Meetings/VMwareAPI

# Shawn Hartsock - VMware's Nova Compute driver maintainer guy



[openstack-dev] Adding a clean shutdown for stop/delete breaks Jenkins

2013-07-03 Thread Day, Phil
Hi Folks,

I have a change submitted which adds the same clean shutdown logic to stop and
delete that exists for soft reboot - the rationale being that it's always better
to give a VM a chance to shut down cleanly if possible, even if you're about to
delete it, as sometimes other parts of the application expect this, and if it's
booted from a volume you want to leave the guest file system in a tidy state.

https://review.openstack.org/#/c/35303/

However setting the default value to 120 seconds (as per soft reboot) causes 
the Jenkins gate jobs to blow the 3 hour limit.   This seems to be just a 
gradual accumulation of extra time rather than any one test running much longer.

So options would seem to be:


i)    Make the default wait time much shorter so that Jenkins runs OK
(tried this with 10 seconds and it works fine), and assume that users will
configure it to a more realistic value.

ii)   Keep the default at 120 seconds, but make the Jenkins jobs use a
specific configuration setting (is this possible, and if so can someone point
me at where to make the change)?

iii) Increase the time allowed for Jenkins

iv) The ever popular something else ...

Thoughts please.

Cheers,
Phil
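
For context, the logic under discussion is roughly a wait loop of the
following shape, with the timeout pulled from configuration so a deployment
(or the gate) can tune it; the names are hypothetical, not the actual patch:

    import time


    def wait_for_clean_shutdown(instance_is_off, timeout=120, poll_interval=1):
        """Give the guest up to `timeout` seconds to power off cleanly.

        `instance_is_off` is a callable returning True once the guest has
        shut down. Returns True on a clean shutdown, False if we timed out
        and the caller should fall back to a hard power-off.
        """
        deadline = time.time() + timeout
        while time.time() < deadline:
            if instance_is_off():
                return True
            time.sleep(poll_interval)
        return False

With a structure like this, options (i) and (ii) are simply a question of
what the timeout defaults to versus what the gate's configuration overrides
it to.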




Re: [openstack-dev] Adding a clean shutdown for stop/delete breaks Jenkins

2013-07-03 Thread Clark Boylan
On Wed, Jul 3, 2013 at 9:30 AM, Day, Phil philip@hp.com wrote:
 Hi Folks,
I can't really speak to the stuff that was here so snip.

 i)Make the default wait time much shorter so that Jenkins runs OK
 (tries this with 10 seconds and it works fine), and assume that users will
 configure it to a more realistic value.

I know Robert Collins and others would like to see our defaults be
reasonable for a mid-sized deployment, so we shouldn't use a default to
accommodate Jenkins.
 ii)   Keep the default at 120 seconds, but make the Jenkins jobs use a
 specific configuration setting (is this possible, and iof so can someone
 point me at where to make the change) ?

It is possible. You can expose and set config options through the
devstack-gate project [1] which runs the devstack tests in Jenkins.
 iii) Increase the time allowed for Jenkins

I don't think we want to do this as it already takes quite a bit of
time to get through the gate (the three hour timeout seems long, but
sdague and others would have a better idea of what it should be).
 iv) The ever popular something else …

Nothing comes to mind immediately.

[1] https://github.com/openstack-infra/devstack-gate
You will probably find the devstack-vm-gate.sh and
devstack-vm-gate-wrap.sh scripts to be most useful.

Clark



Re: [openstack-dev] [nova] volume affinity filter for nova scheduler

2013-07-03 Thread Jérôme Gallard
Hi all,

Russell, I agree with all of your remarks, and especially with the
point that placement details should not be exposed to users.

However, I see a possible use case for the filter. For instance, if we
consider the BP Support for multiple active scheduler drivers (
https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers
): a cloud provider may want to offer a specific class of service (on a
dedicated aggregate) for users who want to ensure that both volumes and
instances are on the same host, and use the weight function for all the
other hosts.
Does that make sense?

Regards,
Jérôme

On Wed, Jul 3, 2013 at 5:54 PM, Russell Bryant rbry...@redhat.com wrote:
 On 07/03/2013 10:24 AM, Alexey Ovchinnikov wrote:
 Hi everyone,

 for some time I have been working on an implementation of a filter that
 would allow to force instances to hosts which contain specific volumes.
 A blueprint can be found here:
 https://blueprints.launchpad.net/nova/+spec/volume-affinity-filter
 and an implementation here:
 https://review.openstack.org/#/c/29343/

 The filter works for LVM driver and now it picks either a host
 containing specified volume
 or nothing (thus effectively failing instance scheduling). Now it fails
 primarily when it can't find the volume. It has been
 pointed to me that sometimes it may be desirable not to fail instance
 scheduling but to run it anyway. However this softer behaviour fits better
 for weighter function. Thus I have registered a blueprint for the
 weighter function:
 https://blueprints.launchpad.net/nova/+spec/volume-affinity-weighter-function

 I was thinking about both the filter and the weighter working together.
 The former
 could be used in cases when we strongly need storage space associated
 with an
 instance and need them placed on the same host. The latter could be used
 when
 storage space is nice to have and preferably on the same host
 with an instance, but not so crucial as to have the instance running.

 During reviewing a question appeared whether we need the filter and
 wouldn't things be better
 if we removed it and had only the weighter function instead. I am not
 yet convinced
 that the filter is useless and needs to be replaced with the weighter,
 so I am asking for your opinion on this matter. Do you see usecases for
 the filter,
 or the weighter will answer all needs?

 Thanks for starting this thread.

 I was pushing for the weight function.  It seems much more appropriate
 for a cloud environment than the filter.  It's an optimization that is
 always a good idea, so the weight function that works automatically
 would be good.  It's also transparent to users.

 Some things I don't like about the filter:

  - It requires specifying a scheduler hint

  - It's exposing a concept of co-locating volumes and instances on the
 same host to users.  This isn't applicable for many volume backends.  As
 a result, it's a violation of the principle where users ideally do not
 need to know or care about deployment details.

 --
 Russell Bryant



Re: [openstack-dev] Adding a clean shutdown for stop/delete breaks Jenkins

2013-07-03 Thread David Kranz

On 07/03/2013 12:30 PM, Day, Phil wrote:


Hi Folks,

I have a change submitted which adds the same clean shutdown logic to 
stop and delete that exists for soft reboot -- the rational being that 
its always better to give a VM a chance to shutdown cleanly if 
possible even if you're about to delete it as sometimes other parts of 
the application expect this, and if its booted from a volume you want 
to leave the guest file system in a tidy state.


https://review.openstack.org/#/c/35303/

However setting the default value to 120 seconds (as per soft reboot) 
causes the Jenkins gate jobs to blow the 3 hour limit.   This seems to 
be just a gradual accumulation of extra time rather than any one test 
running much longer.


So options would seem to be:

i)Make the default wait time much shorter so that Jenkins runs OK 
(tries this with 10 seconds and it works fine), and assume that users 
will configure it to a more realistic value.


ii)Keep the default at 120 seconds, but make the Jenkins jobs use a 
specific configuration setting (is this possible, and iof so can 
someone point me at where to make the change) ?


iii)Increase the time allowed for Jenkins

iv)The ever popular something else ...

Thought please.

Cheers,

Phil

The fact that changing the timeout changes gate time means the code is
actually hitting the timeout. Is that expected?
Shutdown is now relying on the guest responding to ACPI. Is that what we
want? Tempest uses a specialized image and I'm not sure how it is set up
in this regard. In any event I don't think we want to add any more time
to server delete when running in the gate.


I'm also a little concerned that this seems to be a significant behavior
change when using VMs that behave like the ones in the gate. In reboot
this is handled by having soft/hard options, of course.


 -David






Re: [openstack-dev] [Swift] failure node muting not working

2013-07-03 Thread John Dickinson
Take a look at the proxy config, starting here: 
https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L70

The error_suppression_interval and error_suppression_limit control the window 
you are looking for. With the default values, 10 errors in 60 seconds will 
prevent the proxy from using that particular storage node for another 60 
seconds.
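
For reference, the relevant knobs sit in the [app:proxy-server] section of
proxy-server.conf and look roughly like this (values shown are the defaults
described above):

    [app:proxy-server]
    use = egg:swift#proxy
    # with these defaults, 10 errors within 60 seconds keep the proxy away
    # from that storage node for another 60 seconds
    error_suppression_interval = 60
    error_suppression_limit = 10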

--John



On Jul 2, 2013, at 8:57 PM, Zhou, Yuan yuan.z...@intel.com wrote:

 Hi lists,
  
 We're trying to evaluate node failure performance in Swift.
 According to the docs, Swift should be able to mute failed nodes:
 'if a storage node does not respond in a reasonable amount of time, the proxy
 considers it to be unavailable and will not attempt to communicate with it
 for a while.'
  
 We did a simple test on a 5-node cluster:
 1. Use COSBench to keep downloading files from the cluster.
 2. Stop the networking on SN1; lots of 'connection timeout 0.5s' errors
    appear in the proxy's log.
 3. Keep the workload running and wait for about 1 hour.
 4. The same error still occurs in the proxy, which means the node is not
    muted; we expected SN1 to be muted on the proxy side, with no more
    'connection timeout' errors in the proxy.
  
 So is there any special work that needs to be done to use this feature?
  
 Regards, -yuanz
  





Re: [openstack-dev] [vmware] VMwareAPI sub-team status update

2013-07-03 Thread David Ripton

On 07/03/2013 11:51 AM, Shawn Hartsock wrote:


Work in progress:
* https://review.openstack.org/#/c/35502/ - I can't click the Work in progress 
button I'm not sure how else to signal that I'm still working... help?


It's your patch.  You should be able to.  (I just successfully put one 
of my reviews into WIP state and then back to Ready for Review, so it's 
not globally broken.)  Are you logged into Gerrit?


--
David Ripton   Red Hat   drip...@redhat.com



Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-03 Thread Monty Taylor


On 07/03/2013 07:26 AM, Johannes Erdfelt wrote:
 On Wed, Jul 03, 2013, Michael Still mi...@stillhq.com wrote:
 On Wed, Jul 3, 2013 at 3:50 AM, Boris Pavlovic bo...@pavlovic.me wrote:

 Question:
   Why we should put in oslo slqlalchemy-migrate monkey patches, when we are
 planing to switch to alembic?

 Answer:
If we don’t put in oslo sqlalchemy-migrate monkey patches. We won't be
 able to work on 7 point at all until 8 and 10 points will be implemented in
 every project. Also work around 8 point is not finished, so we are not able
 to implement 10 points in any of project. So this blocks almost all work in
 all projects. I think that these 100-200 lines of code are not so big price
 for saving few cycles of time.

 We've talked in the past (Folsom summit?) about alembic, but I'm not
 aware of anyone who is actually working on it. Is someone working on
 moving us to alembic? If not, it seems unfair to block database work
 on something no one is actually working on.
 
 I've started working on a non-alembic migration path that was discussed
 at the Grizzly summit.

 While alembic is better than sqlalchemy-migrate, it still requires long
 downtimes when some migrations are run. We discussed moving to an
 expand/contract cycle where migrations add new columns, allow migrations
 to slowly (relatively speaking) migrate data over, then (possibly) remove
 any old columns.

I think if you're working on a non-alembic plan and Boris is working on
an alembic plan, then something is going to be unhappy in the
not-too-distant future. Can we get alignment on this?



Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-03 Thread Monty Taylor


On 07/02/2013 10:50 AM, Boris Pavlovic wrote:
 ###
 Goal
 ###
 
 We should fix work with DB, unify it in all projects and use oslo code
 for all common things.

Just wanted to say a quick word that isn't about migrations...

Thank you. This is all great, and I'm thrilled someone is taking on the
task of fixing what is probably one of OpenStack's biggest nightmares.

 In more words:
 
 DB API
 
   *) Fully cover by tests.
 
  *) Run tests against all backends (now they are run only against
  sqlite).
 
   *) Unique constraints (instead of select + insert)
  a) Provide unique constraints.
  b) Add missing unique constraints.
 
   *) DB Archiving
  a) create shadow tables
   b) add tests that check that the shadow and main tables are synced.
   c) add code that works with shadow tables.
 
   *) DB API performance optimization
 a) Remove unused joins..
 b) 1 query instead of N (where it is possible).
 c) Add methods that could improve performance.
 d) Drop unused methods.
 
   *) DB reconnect
  a) Don't break a huge task if we lose the connection for a moment; just
  retry the DB query.
 
   *) DB Session cleanup
 a) do not use session parameter in public DB API methods.
 b) fix places where we are doing N queries in N transactions instead
 of 1.
  c) get only the data that is used (e.g. use query.count() instead of
  len(query.all()); see the small sketch below).
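
(A tiny illustration of the session-cleanup point above; session and
Instance are placeholders, not nova's actual code:)

    def instance_count(session, Instance):
        # Wasteful: len(session.query(Instance).all()) loads every row into
        # memory just to count it.
        # Better: let the database do the counting with a single SELECT COUNT.
        return session.query(Instance).count()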
 
 
 
 DB Migrations
 
   *) Test DB Migrations against all backends and real data.
 
   *) Fix: DB schemas after Migrations should be same in different backends
 
   *) Fix: hidden bugs, that are caused by wrong migrations:
   a) fix indexes, e.g. migration 152 in Nova drops all indexes that
  have a deleted column
  b) fix wrong types
  c) drop unused tables
 
  *) Switch from sqlalchemy-migrate to something that is not dead (e.g.
  alembic).
 
 
 
 DB Models
 
   *) Fix: Schema that is created by Models should be the same as after
 migrations.
  
  *) Fix: Unit tests should be run on a DB that was created by Models,
  not migrations.
 
   *) Add test that checks that Models are synced with migrations.
 
 
 
 Oslo Code
 
   *) Base Sqlalchemy Models.
 
   *) Work around engine and session.
 
   *) SqlAlchemy Utils - that helps us with migrations and tests.
 
   *) Test migrations Base.
 
   *) Use common test wrapper that allows us to run tests on different
 backends.
 
 
 ###
Implementation
 ###
 
   This is really really huge task. And we are almost done with Nova=).
 
  In OpenStack there is only one approach for such work (“baby steps”
  driven development). So we are making tons of patches that can be easily
  reviewed. But there are also minuses to such an approach: it is pretty
  hard to track the work at a high level, and sometimes there are
  misunderstandings.
  
  For example, with the oslo code: in a few words, at this moment we would
  like to add (for some time) monkey patching for sqlalchemy-migrate to oslo.
  And I got a reasonable question from Doug Hellmann: why? My answer: because
  of our “baby steps”. But if you don’t have the list of baby steps it is
  pretty hard to understand why our baby steps need this thing, and why we
  don’t switch to alembic first. So I would like to describe our road map
  and write out the list of baby steps.
 
 
 ---
 
 OSLO
 
   *) (Merged) Base code for Models and sqlalchemy engine (session)
 
   *) (On review) Sqlalchemy utils that are used to:
   1. Fix bugs in sqlalchemy-migrate
   2. Base code for migrations that provides Unique Constraints.
   3. Utils for db.archiving helps us to create and check shadow tables.
 
   *) (On review) Testtools wrapper
     We should have only one testtools wrapper in all projects. And this is
  one of the base steps in the task of running tests against all backends.
 
   *) (On review) Test migrations base
     Base classes that allow us to test our migrations against all
  backends on real data
 
   *) (On review, not finished yet) DB Reconnect.  
 
   *) (Not finished) Test that checks that schemas and models are synced
 
 ---
 
 ${PROJECT_NAME}
 
 
 In different projects we could work absolutely simultaneously, and first
 candidates are Glance and Cinder. But inside project we could also work
 simultaneously. Here is the workflow:
 
 
   1) (SYNC) Use base code for Models and sqlalchemy engines (from oslo)
 
   2) (SYNC) Use test migrations base (from oslo)
 
   3) (SYNC) Use SqlAlchemy utils (from oslo)
 
   4) (1 patch) Switch to OSLO DB code
 
   5) (1 patch) Remove ported test migrations
 
   6) (1 Migration) Provide unique constraints (change type 

Re: [openstack-dev] [Metrics][Nova] Using Bicho database to get stats about code review

2013-07-03 Thread Jesus M. Gonzalez-Barahona
On Wed, 2013-07-03 at 12:30 -0400, Russell Bryant wrote:
 On 07/03/2013 12:14 PM, Jesus M. Gonzalez-Barahona wrote:
  Hi all,
  
  Bicho [1] now has a Gerrit backend, which has been tested with
  OpenStack's Gerrit. We have used it to produce the MySQL database dump
  available at [2] ( gerrit.mysql.7z ). You can use it to compute the
  metrics mentioned in the previous threads about code review, and some
  others.
  
  [1] https://github.com/MetricsGrimoire/Bicho
  [2] http://activity.openstack.org/dash/browser/data/db/
  
  The database dump will be updated daily, starting in a few days.
  
  For some examples on how to run queries on it, or how to produce the
  database using Bicho, fresh from OpenStack's gerrit, have a look at [3].
  
  [3] https://github.com/MetricsGrimoire/Bicho/wiki/Gerrit-backend
  
  At some point, we plan to visualize the data as a part of the
  development dashboard [4], so any comment on interesting metrics, or
  bugs, will be welcome. For now, we're taking not of all metrics
  mentioned in the previous posts about code review stats.
  
  [4] http://activity.openstack.org/dash/browser/
 
 Thanks for sharing!  I don't think I understand the last sentence,
 though.  Can you clarify?
 

Oooops. It should read: "For now, we're taking notice of all metrics
mentioned in the previous posts about code review stats." That would
mean that, in a while, we expect to produce charts on the evolution of
some of those metrics over time.

Sorry for the typo.

Saludos,

Jesus.

-- 
-- 
Bitergia: http://bitergia.com http://blog.bitergia.com




Re: [openstack-dev] [vmware] VMwareAPI sub-team status update

2013-07-03 Thread Shawn Hartsock
Yes. I'm logged into gerrit and I can review my own patch. I can't abandon or 
press work in progress on it.

# Shawn Hartsock

- Original Message -
 From: David Ripton drip...@redhat.com
 To: openstack-dev@lists.openstack.org
 Sent: Wednesday, July 3, 2013 1:25:47 PM
 Subject: Re: [openstack-dev] [vmware] VMwareAPI sub-team status update
 
 On 07/03/2013 11:51 AM, Shawn Hartsock wrote:
 
  Work in progress:
  * https://review.openstack.org/#/c/35502/ - I can't click the Work in
  progress button I'm not sure how else to signal that I'm still working...
  help?
 
 It's your patch.  You should be able to.  (I just successfully put one
 of my reviews into WIP state and then back to Ready for Review, so it's
 not globally broken.)  Are you logged into Gerrit?
 
 --
 David Ripton   Red Hat   drip...@redhat.com
 
 



Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-03 Thread Boris Pavlovic
Hi Monty,

 I think if you're working on a non-alembic plan and boris is working on
 an alembic plan, then something is going to be unhappy in the
 not-too-distant future. Can we get alignment on this?


As I said before, we are preparing our DB code to move from
sqlalchemy-migrate to something else.
There will be tons of work before we are able to rewrite our migration
scripts for alembic or something else.

And we are not sure that we would like to use alembic =)


Best regards,
Boris Pavlovic



On Wed, Jul 3, 2013 at 9:30 PM, Monty Taylor mord...@inaugust.com wrote:



 On 07/02/2013 10:50 AM, Boris Pavlovic wrote:
 
 ###
  Goal
 
 ###
 
  We should fix work with DB, unify it in all projects and use oslo code
  for all common things.

 Just wanted to say a quick word that isn't about migrations...

 Thank you. This is all great, and I'm thrilled someone is taking on the
 task of fixing what is probably one of OpenStack's biggest nightmares.

  In more words:
 
  DB API
 
*) Fully cover by tests.
 
*) Run tests against all backends (now they are runed only against
  sqlite).
 
*) Unique constraints (instead of select + insert)
   a) Provide unique constraints.
   b) Add missing unique constraints.
 
*) DB Archiving
   a) create shadow tables
   b) add tests that checks that shadow and main table are synced.
   c) add code that work with shadow tables.
 
*) DB API performance optimization
  a) Remove unused joins..
  b) 1 query instead of N (where it is possible).
  c) Add methods that could improve performance.
  d) Drop unused methods.
 
*) DB reconnect
  a) Don’t break huge task if we lost connection for a moment.. just
  retry DB query.
 
*) DB Session cleanup
  a) do not use session parameter in public DB API methods.
  b) fix places where we are doing N queries in N transactions instead
  of 1.
  c) get only data that is used (e.g. len(query.all()) =
 query.count()).
 
  
 
  DB Migrations
 
*) Test DB Migrations against all backends and real data.
 
*) Fix: DB schemas after Migrations should be same in different
 backends
 
*) Fix: hidden bugs, that are caused by wrong migrations:
   a) fix indexes. e.g. 152 migration in Nova drop all Indexes that
  has deleted column
   b) fix wrong types
   c) drop unused tables
 
*) Switch from sqlalchemy-migrate to something that is not death (e.g.
  alembic).
 
  
 
  DB Models
 
*) Fix: Schema that is created by Models should be the same as after
  migrations.
 
*) Fix: Unit tests should be runed on DB that was created by Models
  not migrations.
 
*) Add test that checks that Models are synced with migrations.
 
  
 
  Oslo Code
 
*) Base Sqlalchemy Models.
 
*) Work around engine and session.
 
*) SqlAlchemy Utils - that helps us with migrations and tests.
 
*) Test migrations Base.
 
*) Use common test wrapper that allows us to run tests on different
  backends.
 
 
 
 ###
 Implementation
 
 ###
 
This is really really huge task. And we are almost done with Nova=).
 
In OpenStack for such work there is only one approach (“baby steps”
  development deriven). So we are making tons of patches that could be
  easy reviewed. But there is also minuses in such approach. It is pretty
  hard to track work on high level. And sometimes there are misunderstand.
 
For example with oslo code. In few words at this moment we would like
  to add (for some time) in oslo monkey patching for sqlalchemy-migrate.
  And I got reasonable question from Doug Hellmann. Why? I answer because
  of our “baby steps”. But if you don’t have a list of baby steps it is
  pretty hard to understand why our baby steps need this thing. And why we
  don’t switch to alembic firstly. So I would like to describe our Road
  Map and write list of baby steps.
 
 
  ---
 
  OSLO
 
*) (Merged) Base code for Models and sqlalchemy engine (session)
 
*) (On review) Sqlalchemy utils that are used to:
1. Fix bugs in sqlalchemy-migrate
2. Base code for migrations that provides Unique Constraints.
3. Utils for db.archiving helps us to create and check shadow
 tables.
 
*) (On review) Testtools wrapper
 We should have only one testtool wrapper in all projects. And
  this is the one of base steps in task of running tests against all
 backends.
 
*) (On review) Test migrations base
 Base classes that provides us to test our migrations against all
  backends on real data

Re: [openstack-dev] Adding a clean shutdown for stop/delete breaks Jenkins

2013-07-03 Thread Clark Boylan
On Wed, Jul 3, 2013 at 10:03 AM, Day, Phil philip@hp.com wrote:
 Thanks Clark,

 So the process would be to get a new version of devstack-vm-gate.sh merged in
 here first, and then submit my change in Nova, right?

 Is there any guidance on how I could check my change to
 devstack-vm-gate.sh ahead of submitting it?
https://github.com/openstack-infra/devstack-gate/blob/master/README.rst#simulating-devstack-gate-tests
should have the info you need to test changes locally.

 Thanks,
 Phil

 -Original Message-
 From: Clark Boylan [mailto:clark.boy...@gmail.com]
 Sent: 03 July 2013 17:48
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] Adding a clean shutdown for stop/delete breaks
 Jenkins

 On Wed, Jul 3, 2013 at 9:30 AM, Day, Phil philip@hp.com wrote:
  Hi Folks,
 I can't really speak to the stuff that was here so snip.
 
  i)Make the default wait time much shorter so that Jenkins runs OK
  (tries this with 10 seconds and it works fine), and assume that users
  will configure it to a more realistic value.
 
 I know Robert Collins and others would like to see our defaults be reasonable
 for a mid sized deployment so we shouldn't use a default to accomodate
 Jenkins.
  ii)   Keep the default at 120 seconds, but make the Jenkins jobs use a
  specific configuration setting (is this possible, and iof so can
  someone point me at where to make the change) ?
 
 It is possible. You can expose and set config options through the 
 devstack-gate
 project [1] which runs the devstack tests in Jenkins.
  iii) Increase the time allowed for Jenkins
 
 I don't think we want to do this as it already takes quite a bit of time to 
 get
 through the gate (the three hour timeout seems long, but sdague and others
 would have a better idea of what it should be).
  iv) The ever popular something else ...
 
 Nothing comes to mind immediately.

 [1] https://github.com/openstack-infra/devstack-gate
 You will probably find the devstack-vm-gate.sh and devstack-vm-gate-wrap.sh
 scripts to be most useful.

 Clark



[openstack-dev] [Ceilometer] Changing pollster/pipeline parameters

2013-07-03 Thread Neal, Phil
A couple of related questions on managing CM pollster behavior via config 
type files:

1. I'm trying to make some modifications to the timing of the Glance (image) 
polling in a default CM install. It looks like the pipeline interval fields are 
the way to do it, and that I should tweak it using a yaml config file, but I 
can't seem to verify based on a read-through of the code. Can anyone confirm?

2. Similarly, it looks like disabling pollsters is done via the oslo.cfg logic
in the agent manager. I'd like to populate that using a config file... is there
logic already to do that that I haven't come across yet?

- Phil



[openstack-dev] [Swift]After the X-Container-Read how to get the list of readable container

2013-07-03 Thread Hajime Takase
Hi guys,

I have successfully figured out how to grant read-only or write permission
to non-operator users (operator is defined in /etc/swift/proxy.conf), but I
found out that those users can't get the account info of the tenant using
get_account() or GET, so they won't know the list of readable or writable
containers.

I was thinking of storing this additional data in a local database, but
since I want to use a PC client as well, I would rather use a server-side
command to get the list of readable or writable containers. Is there any
way to do this using the API?

Hajime


Re: [openstack-dev] [Neutron] Help with database migration error

2013-07-03 Thread Matt Riedemann
What is the sql_connection value in your cisco_plugins.ini file?  Looks 
like sqlalchemy is having issues parsing the URL.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From: Henry Gessau ges...@cisco.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: 07/02/2013 09:05 PM
Subject: [openstack-dev] [Neutron] Help with database migration error



I have not worked with databases much and this is my first attempt
at a database migration. I am trying to follow this Howto:
https://wiki.openstack.org/wiki/Neutron/DatabaseMigration

I get the following error at step 3:

/opt/stack/quantum[master] $ quantum-db-manage --config-file 
/etc/quantum/quantum.conf --config-file 
/etc/quantum/plugins/cisco/cisco_plugins.ini stamp head
Traceback (most recent call last):
  File /usr/local/bin/quantum-db-manage, line 9, in module
load_entry_point('quantum==2013.2.a882.g0fc6605', 'console_scripts', 
'quantum-db-manage')()
  File /opt/stack/quantum/quantum/db/migration/cli.py, line 136, in main
CONF.command.func(config, CONF.command.name)
  File /opt/stack/quantum/quantum/db/migration/cli.py, line 81, in 
do_stamp
sql=CONF.command.sql)
  File /opt/stack/quantum/quantum/db/migration/cli.py, line 54, in 
do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File /usr/local/lib/python2.7/dist-packages/alembic/command.py, line 
221, in stamp
script.run_env()
  File /usr/local/lib/python2.7/dist-packages/alembic/script.py, line 
193, in run_env
util.load_python_file(self.dir, 'env.py')
  File /usr/local/lib/python2.7/dist-packages/alembic/util.py, line 177, 
in load_python_file
module = imp.load_source(module_id, path, open(path, 'rb'))
  File 
/opt/stack/quantum/quantum/db/migration/alembic_migrations/env.py, line 
100, in module
run_migrations_online()
  File 
/opt/stack/quantum/quantum/db/migration/alembic_migrations/env.py, line 
73, in run_migrations_online
poolclass=pool.NullPool)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/__init__.py, 
line 338, in create_engine
return strategy.create(*args, **kwargs)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py, 
line 48, in create
u = url.make_url(name_or_url)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/url.py, line 
178, in make_url
return _parse_rfc1738_args(name_or_url)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/url.py, line 
219, in _parse_rfc1738_args
Could not parse rfc1738 URL from string '%s' % name)
sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string ''






Re: [openstack-dev] Issues with git review for a dependent commit

2013-07-03 Thread Jeremy Stanley
On 2013-07-03 09:04:04 -0700 (-0700), Vishvananda Ishaya wrote:
 Oh, I didn't see that you added -x/-X/-N.

You in the collective sense at least. Credit for that goes to Miklos
Vajna, who had to do a good bit of convincing us it was safe/useful.
And now I use it frequently, if fairly carefully, for things like
building dependent series from previously unrelated changes. I'm
glad I listened!

 I can simplify my backport script[1] significantly now.
 
 [1] https://gist.github.com/vishvananda/2206428

Neat! I had no idea you were doing that via git-review.
-- 
Jeremy Stanley



Re: [openstack-dev] [Neutron] Help with database migration error

2013-07-03 Thread Henry Gessau
Matt, thanks for looking. I should have followed up with a note that I did
find what the problem was.

The Cisco plugin is a special case -- it is a wrapper that loads
sub-plugins. The database connection is specified in the config file of the
first sub-plugin. By specifying that file instead of cisco_plugins.ini,
everything works.
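
For anyone hitting the same traceback: the empty-string error above simply
means no connection URL was found. The sub-plugin config needs an
rfc1738-style URL along these lines (section name as in the quantum plugin
configs; host, credentials and database name below are illustrative only):

    [DATABASE]
    sql_connection = mysql://quantum:password@127.0.0.1:3306/ovs_quantum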

-- Henry

On Wed, Jul 03, at 10:10 pm, Matt Riedemann mrie...@us.ibm.com wrote:

 What is the sql_connection value in your cisco_plugins.ini file? Looks like
 sqlalchemy is having issues parsing the URL.
 
 
 
 Thanks,
 
 MATT RIEDEMANN
 Advisory Software Engineer
 Cloud Solutions and OpenStack Development

 Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
 E-mail: mrie...@us.ibm.com
 IBM
 
 3605 Hwy 52 N
 Rochester, MN 55901-1407
 United States
 
 
 
 
 
 
 From: Henry Gessau ges...@cisco.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Date: 07/02/2013 09:05 PM
 Subject: [openstack-dev] [Neutron] Help with database migration error
 
 
 
 
 I have not worked with databases much and this is my first attempt
 at a database migration. I am trying to follow this Howto:
 https://wiki.openstack.org/wiki/Neutron/DatabaseMigration
 
 I get the following error at step 3:
 
 /opt/stack/quantum[master] $ quantum-db-manage --config-file
 /etc/quantum/quantum.conf --config-file
 /etc/quantum/plugins/cisco/cisco_plugins.ini stamp head
 Traceback (most recent call last):
  File /usr/local/bin/quantum-db-manage, line 9, in module
load_entry_point('quantum==2013.2.a882.g0fc6605', 'console_scripts',
 'quantum-db-manage')()
  File /opt/stack/quantum/quantum/db/migration/cli.py, line 136, in main
CONF.command.func(config, CONF.command.name)
  File /opt/stack/quantum/quantum/db/migration/cli.py, line 81, in do_stamp
sql=CONF.command.sql)
  File /opt/stack/quantum/quantum/db/migration/cli.py, line 54, in
 do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File /usr/local/lib/python2.7/dist-packages/alembic/command.py, line 221,
 in stamp
script.run_env()
  File /usr/local/lib/python2.7/dist-packages/alembic/script.py, line 193,
 in run_env
util.load_python_file(self.dir, 'env.py')
  File /usr/local/lib/python2.7/dist-packages/alembic/util.py, line 177, in
 load_python_file
module = imp.load_source(module_id, path, open(path, 'rb'))
  File /opt/stack/quantum/quantum/db/migration/alembic_migrations/env.py,
 line 100, in module
run_migrations_online()
  File /opt/stack/quantum/quantum/db/migration/alembic_migrations/env.py,
 line 73, in run_migrations_online
poolclass=pool.NullPool)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/__init__.py, line
 338, in create_engine
return strategy.create(*args, **kwargs)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py,
 line 48, in create
u = url.make_url(name_or_url)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/url.py, line 178,
 in make_url
return _parse_rfc1738_args(name_or_url)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/url.py, line 219,
 in _parse_rfc1738_args
Could not parse rfc1738 URL from string '%s' % name)
 sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string ''
 
 


Re: [openstack-dev] [cinder] Propose to add copying the reference images when creating a volume

2013-07-03 Thread Sheng Bo Hou
Hi Vish,

I would like you to review this patch 
https://review.openstack.org/#/c/35460/.

I think this approach takes the least effort to fix this issue.
When we boot an instance from a volume, nova can get the volume
information via _get_volume; kernel_id and ramdisk_id are already in the
volume information. We just need to make nova retrieve them. In the
instance-creation code, kernel_id and ramdisk_id are accessed by checking
properties (in many places), but the volume information saves them in
volume_image_metadata. I just convert the data structure a bit and save
these two params in properties, and it works.
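
In spirit, the conversion described above is something like the following
(illustrative names only, not the exact code in the patch):

    def image_properties_from_volume(volume):
        """Copy boot-related ids from a volume's image metadata into the
        'properties' shape that the instance-creation code expects."""
        metadata = volume.get('volume_image_metadata') or {}
        properties = {}
        for key in ('kernel_id', 'ramdisk_id'):
            if key in metadata:
                properties[key] = metadata[key]
        return properties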

Well, if you do not find it favorable, we can work it out in another way.

Thank you.

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCNE-mail: sb...@cn.ibm.com 
Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C.100193
地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193



Vishvananda Ishaya vishvana...@gmail.com
2013/07/02 01:14
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
cc: jsbry...@us.ibm.com, Duncan Thomas duncan.tho...@gmail.com, John Griffith
Subject: Re: [openstack-dev] [cinder] Propose to add copying the reference
images when creating a volume

On Jul 1, 2013, at 3:35 AM, Sheng Bo Hou sb...@cn.ibm.com wrote:

Hi Mate, 

First, thanks for answering. 
I was trying to find the way to prepare the bootable volume. 
Take the default image downloaded by devstack, there are three images: 
cirros-0.3.0-x86_64-uec, cirros-0.3.0-x86_64-uec-kernel and 
cirros-0.3.0-x86_64-uec-ramdisk. 
cirros-0.3.0-x86_64-uec-kernel is referred as the kernel image and 
cirros-0.3.0-x86_64-uec-ramdisk is referred as the ramdisk image. 

Issue: If only the image (cirros-0.3.0-x86_64-uec) is copied to the volume
when creating a volume from an image, that volume is unable to boot an
instance without the references to the kernel and the ramdisk images. The
current cinder only copies the image cirros-0.3.0-x86_64-uec to one
targeted volume (Vol-1), which is marked as bootable but unable to do a
successful boot with the current nova code, even if image-id is removed in
the parameter.

Possible solutions: There are two ways in my mind to resolve it. One is we 
just need the code change in Nova to let it find the reference images for 
the bootable volume(Vol-1) and there is no need to change anything in 
cinder, since the kernel and ramdisk id are saved in the 
volume_glance_metadata, where the references point to the images(kernel 
and ramdisk) for the volume(Vol-1). 


You should be able to create an image in glance that references the volume 
in block device mapping but also has a kernel_id and ramdisk_id parameter 
so it can boot properly. I know this is kind of an odd way to do things, 
but this seems like an edge case and I think it is a valid workaround.

Vish

The other is that if we need multiple images to boot an instance, we need 
a new way to create the bootable volume. For example, we can create three 
separate volumes for three of the images and set the new references in 
volume_glance_metadata with the kernel_volume_id and ramdisk_volume_id. 
The benefit of this approach is that the volume can live independent of 
the existence of the original images. Even the images get lost 
accidentally, the volumes are still sufficient to boot an instance, 
because all the information have been copied to Cinder part. 

I am trying to look for the "another way to prepare your bootable volume"
that you mentioned, and am asking for suggestions.
I think the second approach could be one way. Do you think it is a
good approach?

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCNE-mail: sb...@cn.ibm.com 
Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C.100193
地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193 


Mate Lakat mate.la...@citrix.com
2013/07/01 04:18
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
cc: jsbry...@us.ibm.com, Duncan Thomas duncan.tho...@gmail.com, John Griffith
Subject: Re: [openstack-dev] [cinder] Propose to add copying the reference
images when creating a volume


Hi,

I just proposed a patch for the boot_from_volume_exercise.sh to get rid
of --image. To be honest, I did