Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-11 Thread Mark McLoughlin
On Wed, 2013-07-10 at 14:14 -0600, John Griffith wrote:

 
 Given that Cinder doesn't have anybody actively engaged in this other
 than what's being proposed and worked on by Boris and folks, we'd be a
 willing candidate for most of these changes, particularly if they're
 accepted in Nova to begin with.
 
 
 The question of having it in oslo-incubator or not, I think ultimately
 that's likely to be the best thing, but as is evident by this thread
 it seems there are a number of things that are going to have to be
 sorted before that happens, and I'm not convinced that move things to
 OSLO first then fix is the right answer.  In my opinion things should
 be pretty solid before they go into the OSLO repo, but that's just my
 2 cents.
 
 
 As is evident by the approval of the BPs in Cinder and the reviews on
 the patches that have been submitted thus far, Cinder is fine going with
 the direction/implementations that have been proposed by Boris.  I would
 like to see the debate around the archiving strategy and use of
 alembic settled, but regardless on the Cinder side I would like to
 move forward and make progress and as there's no other real effort to
 move forward with improving the DB code in Cinder (which I think is
 needed and very valuable) I'm fine with most of what's being proposed.

My conclusion from that (admittedly based on limited understanding)
would be that everything Boris is proposing makes sense to copy from
Nova to oslo-incubator so Cinder can re-use it, with the exception of
the DB archiving strategy.

i.e. we'd improve Nova's DB archiving strategy before having Cinder
adopt it.

Cheers,
Mark.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Review] Use of exception for non-exceptional cases

2013-07-11 Thread Mark McLoughlin
On Wed, 2013-07-10 at 21:14 +0200, Thomas Hervé wrote:
 
 
 On Wed, Jul 10, 2013 at 8:32 PM, Mark McLoughlin mar...@redhat.com
 wrote:
 On Wed, 2013-07-10 at 11:01 -0700, Nachi Ueno wrote:
 
  Personally, I prefer not to use exception for such cases.
 
 
 
 The key here is personally. I don't think we have to agree on all
 style issues.

When it results in a patch submitter getting a -1 from one person for
choosing EAFP and a -1 from another person for choosing LBYL, then
yes ... actually we do need to agree.

 My instinct is the same, but EAFP does seem to be the python
 way. There
 are times I can tolerate the EAFP approach but, even then, I
 generally
 think LBYL is cleaner.
 
 I can live with something like this:
 
   try:
   return obj.foo
   except AttributeError:
   pass
 
 but this is obviously broken:
 
   try:
   return self.do_something(obj.foo)
   except AttributeError:
   pass
 
 since AttributeError will mask a typo with the do_something()
 call or an
 AttributeError raised from inside do_something()
 
 But I fail to see what's wrong with this:
 
   if hasattr(obj, 'foo'):
   return obj.foo
 
 
 hasattr is a bit dangerous as it catches more exceptions than it needs
 to. See for example 
 http://stackoverflow.com/questions/903130/hasattr-vs-try-except-block-to-deal-with-non-existent-attributes/16186050#16186050
  for an explanation.

That answer does begin with this, though:

  I almost always use hasattr: it's the correct choice for most cases.

and, frankly, a __getattr__() method that raises ValueError is broken.

i.e. the conclusion would be that we should only avoid hasattr() in some
very limited cases where the underlying __getattr__() does weird things
or where using it can result in a race condition.
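
To illustrate the point, a contrived example (not from any project):

    class Flaky(object):
        def __getattr__(self, name):
            # broken: raises something other than AttributeError
            raise ValueError(name)

    obj = Flaky()

    # Python 2's hasattr() swallows the ValueError and returns False;
    # Python 3's hasattr() only catches AttributeError, so this raises.
    print(hasattr(obj, 'foo'))

    # whereas the EAFP form propagates the ValueError on both versions:
    try:
        print(obj.foo)
    except AttributeError:
        pass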

Cheers,
Mark.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Blueprint: Separate translation domain for log messages

2013-07-11 Thread Joe Gordon
On Thu, Jul 11, 2013 at 9:39 AM, Mark McLoughlin mar...@redhat.com wrote:

 Hi Daisy,

 On Wed, 2013-07-10 at 21:48 +0800, Ying Chun Guo wrote:
  Hi, Mark
 
  I think there is a blueprint we discussed in the Havana summit to
  separate translation domains.
 
 https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain
 
  I don't see any progress there.
  Do you have any plan to implement it?
  The translation team set the command line messages as high priority,
  but log messages as low priority.
  So we want the domains to be separated.

 Given that there's been no progress on this, I suggest we take a
 pragmatic approach to allow us to move forward

   - The fact that the high priority and low priority messages are
 mixed together means we can't get to high levels of translations
 for the high priority messages.

   - It's time we deal with this issue around high priority messages with
 some urgency, even if that means hurting the ability to have low
 priority messages translated.

   - In other words, we should submit patches to have the low priority
 messages no longer marked for translation.

   - Once someone comes up with a solution for a separate translation
 domain for low priority messages, we can go back and mark the low
 priority messages for translation again.

   - This sounds like a lot of churn, but every low priority message
 will need to be touched even if we come up with a solution for a
 separate translation domain e.g. changing _() to l_()

 The key thing here is to have some very concrete rules about which
 messages are high priority and low priority. The question is more subtle
 than it seems.


What about starting with treating all log messages as low priority (except
perhaps for the error level)?  A quick grep shows these are *roughly* half of
the translations.



 For example, if a Nova instance fails to boot, we include the instance
 fault in the detailed nova show output. Should those fault messages
 be translated? If so, tracking down all the possible error messages that
 might wind up there is actually quite difficult.

 That said, I'd be happy if we erred on the side of an "if we're not sure
 whether it's user-visible, let's assume it's not" approach.

 Cheers,
 Mark.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Blueprint: Separate translation domain for log messages

2013-07-11 Thread Mark McLoughlin
On Thu, 2013-07-11 at 09:56 +0100, Joe Gordon wrote:
 
 
 
 On Thu, Jul 11, 2013 at 9:39 AM, Mark McLoughlin mar...@redhat.com wrote:
 Hi Daisy,
 
 On Wed, 2013-07-10 at 21:48 +0800, Ying Chun Guo wrote:
  Hi, Mark
 
  I think there is a blueprint we discussed in the Havana summit to
  separate translation domains.
  
 https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain
 
  I don't see any progress there.
  Do you have any plan to implement it?
  The translation team set the command line messages as high priority,
  but log messages as low priority.
  So we want the domains to be separated.
 
 Given that there's been no progress on this, I suggest we take a
 pragmatic approach to allow us to move forward
 
   - The fact that the high priority and low priority messages are
 mixed together means we can't get to high levels of translations
 for the high priority messages.
 
   - It's time we deal with this issue around high priority messages 
 with
 some urgency, even if that means hurting the ability to have low
 priority messages translated.
 
   - In other words, we should submit patches to have the low priority
 messages no longer marked for translation.
 
   - Once someone comes up with a solution for a separate translation
 domain for low priority messages, we can go back and mark the low
 priority messages for translation again.
 
   - This sounds like a lot of churn, but every low priority message
 will need to be touched even if we come up with a solution for a
 separate translation domain e.g. changing _() to l_()
 
 The key thing here is to have some very concrete rules about which
 messages are high priority and low priority. The question is more 
 subtle
 than it seems.
 
 
 What about starting with treating all log messages as low priority (except
 perhaps for the error level)?  A quick grep shows these are *roughly*
 half of the translations.

I'm cool with that as a first baby step. Whatever it takes to get the
number of log messages down to something that is reasonable to expect
translators to translate.

Unless it's plausible for a translator to get to 100% of our messages,
some of the highly user-visible messages won't be translated.
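
For illustration, a minimal sketch of what a separate domain could look like
with plain gettext (the l_() marker and the 'nova-log' domain are placeholder
names here, not an agreed design):

    import gettext
    import logging

    LOG = logging.getLogger(__name__)

    # two catalogs, so translators can prioritise the user-facing one
    _ = gettext.translation('nova', fallback=True).gettext        # high priority
    l_ = gettext.translation('nova-log', fallback=True).gettext   # low priority (logs)

    def boot_failed(instance_id):
        LOG.warning(l_("Failed to boot instance %s"), instance_id)    # log message
        return _("Instance %s could not be started") % instance_id    # user-visible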

Cheers,
Mark.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Question -- why does object data always go to a single swift node?

2013-07-11 Thread Thierry Carrez
Snider, Tim wrote:
 Here’s a novice question.
 
 My stack has 2 swift nodes. Curl commands addressed to the controller node
 to get auth and URL information bounce between the 2 swift nodes as
 expected.

This list is about the future development of OpenStack.

You should ask your question to the general mailing-list:
https://wiki.openstack.org/wiki/Mailing_Lists

You can also use the http://ask.openstack.org website.
Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Swift deep dive code overview of DiskFile object refactoring - G+ Hangout, Wed. July 17th, 3 PM EDT

2013-07-11 Thread Thierry Carrez
Peter Portante wrote:
 We are hosting a G+ Hangout session for those interested in OpenStack
 Swift to do an overview, code walk-through, discussion and feedback on
 the proposed DiskFile refactoring changes to define it as a supported API.

Any particular reason (slides ?) why you're using a G+ hangout instead
of an IRC meeting for that ? The text nature of the latter makes it more
easily searched, indexed and archived, so it sounds like a better match
for the discussion/feedback part...

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Combination of ComputeCapabilitiesFilter and AggregateInstanceExtraSpecsFilter

2013-07-11 Thread Jérôme Gallard
Thanks a lot for your answers and for solving the issue.

Regards,
Jérôme

On Mon, Jul 8, 2013 at 3:05 PM, Russell Bryant rbry...@redhat.com wrote:
 On 07/05/2013 08:14 PM, Qiu Yu wrote:
 Russell,

 Should ComputeCapabilitiesFilter also be restricted to use the scoped
 format only? Currently it recognizes and compares BOTH scoped and
 non-scoped keys, which is causing the conflict.

 I've already submitted a bug and patch review before.

 https://bugs.launchpad.net/nova/+bug/1191185
 https://review.openstack.org/#/c/33143/

 But removing non-scoped support breaks backwards compatibility.  We
 should avoid that whenever possible.  In this case, there's a pretty
 easy solution to avoid conflicts while also not breaking backwards
 compatibility.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-11 Thread Boris Pavlovic
Mark, John, Nikola,

Currently in oslo we would like to put only 2 functions:
1) a generic method for creating a shadow table
2) a generic method that checks that the columns are the same in the shadow
and main tables

So the migration that adds shadow tables could be done after all other work,
when we finish improving the db-archiving utils (that move deleted rows to
shadow tables), to avoid the problems that Nikola noticed.

These 2 functions won't be affected and will be used in the future in cinder
and glance, and they are already used in Nova. So I don't see any problem
with pushing them into oslo at this moment.
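
For reference, a rough sketch of what these two helpers amount to with
SQLAlchemy (simplified, with assumed names; the real code handles more
edge cases):

    from sqlalchemy import MetaData, Table

    def create_shadow_table(migrate_engine, table_name):
        # copy the live table's columns into a new shadow_<table_name> table
        meta = MetaData(bind=migrate_engine)
        table = Table(table_name, meta, autoload=True)
        columns = [column.copy() for column in table.columns]
        shadow = Table('shadow_' + table_name, meta, *columns,
                       mysql_engine='InnoDB')
        shadow.create(checkfirst=True)
        return shadow

    def check_shadow_table(migrate_engine, table_name):
        # verify the shadow table has the same column names as the live one
        meta = MetaData(bind=migrate_engine)
        table = Table(table_name, meta, autoload=True)
        shadow = Table('shadow_' + table_name, meta, autoload=True)
        return (set(c.name for c in table.columns) ==
                set(c.name for c in shadow.columns))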


Best regards,
Boris Pavlovic




On Thu, Jul 11, 2013 at 11:25 AM, Mark McLoughlin mar...@redhat.com wrote:

 On Wed, 2013-07-10 at 14:14 -0600, John Griffith wrote:

 
  Given that Cinder doesn't have anybody actively engaged in this other
  than what's being proposed and worked on by Boris and folks, we'd be a
  willing candidate for most of these changes, particularly if they're
  accepted in Nova to begin with.
 
 
  The question of having it in oslo-incubator or not, I think ultimately
  that's likely to be the best thing, but as is evident by this thread
  it seems there are a number of things that are going to have to be
  sorted before that happens, and I'm not convinced that "move things to
  OSLO first then fix" is the right answer.  In my opinion things should
  be pretty solid before they go into the OSLO repo, but that's just my
  2 cents.
 
 
  As is evident by the approval of the BPs in Cinder and the reviews on
  the patches that have been submitted thus far, Cinder is fine going with
  the direction/implementations that have been proposed by Boris.  I would
  like to see the debate around the archiving strategy and use of
  alembic settled, but regardless on the Cinder side I would like to
  move forward and make progress and as there's no other real effort to
  move forward with improving the DB code in Cinder (which I think is
  needed and very valuable) I'm fine with most of what's being proposed.

 My conclusion from that (admittedly based on limited understanding)
 would be that everything Boris is proposing makes sense to copy from
 Nova to oslo-incubator so Cinder can re-use it, with the exception of
 the DB archiving strategy.

 i.e. we'd improve Nova's DB archiving strategy before having Cinder
 adopt it.

 Cheers,
 Mark.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change in openstack/neutron[master]: Add method to get iptables traffic counters

2013-07-11 Thread Sylvain Afchain
Hi Brian,

First thanks for the reviews and your detailed email.

Second, I will update the blueprint specs as soon as possible, but for example
it will look like this:

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target                 prot opt in  out source     destination
   55   245 metering-r-aef1456343  all  --  *   *   0.0.0.0/0  0.0.0.0/0     /* jump to rules the label aef1456343 */
   55   245 metering-r-badf566782  all  --  *   *   0.0.0.0/0  0.0.0.0/0

Chain metering-l-aef1456343 (1 references)  /* the chain for the label aef1456343 */
 pkts bytes target                 prot opt in  out source     destination

Chain metering-l-badf566782 (1 references)  /* the chain for the label badf566782 */
 pkts bytes target                 prot opt in  out source     destination

Chain metering-r-aef1456343 (1 references)
 pkts bytes target                 prot opt in  out source     destination
   20   100 RETURN                 all  --  *   *   0.0.0.0/0  !10.0.0.0/24  /* don't want to count this traffic */
    0     0 RETURN                 all  --  *   *   0.0.0.0/0  !20.0.0.0/24  /* don't want to count this traffic */
   25   145 metering-l-aef1456343  all  --  *   *   0.0.0.0/0  0.0.0.0/0     /* count the remaining traffic */

Chain metering-r-badf566782 (1 references)
 pkts bytes target                 prot opt in  out source     destination
    0     0 metering-l-badf56678   all  --  *   *   0.0.0.0/0  30.0.0.0/24   /* want to count only this */


Of course the in/out interfaces will be set in order to get the ingress or the 
egress traffic.

I agree with you that I could add a single rule to the chain of the label and
get the traffic from the first entry, though I find this approach less generic:
it forces a rule to be added at the top of a chain just to get its traffic. My
approach is that I don't want the counters of a specific rule; I want to count
the traffic going through the whole chain.
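
For illustration, a rough sketch (not the proposed Neutron code) of reading the
counters for one label: list just that chain and sum its rule counters, rather
than parsing the entire table:

    import subprocess

    def get_chain_counters(chain, table='filter'):
        # list only the given chain, numeric output, exact counters
        output = subprocess.check_output(
            ['iptables', '-t', table, '-L', chain, '-n', '-v', '-x']).decode()
        pkts = total_bytes = 0
        for line in output.splitlines()[2:]:   # skip "Chain ..." and header lines
            fields = line.split()
            if len(fields) >= 2 and fields[0].isdigit():
                pkts += int(fields[0])
                total_bytes += int(fields[1])
        return pkts, total_bytes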

Thoughts?

Regards,

Sylvain.

- Original Message -
From: Brian Haley brian.ha...@hp.com
To: sylvain afchain sylvain.afch...@enovance.com
Cc: openstack-dev@lists.openstack.org
Sent: Thursday, July 11, 2013 2:30:24 AM
Subject: Re: Change in openstack/neutron[master]: Add method to get iptables 
traffic counters

On 07/08/2013 01:10 PM, Sylvain Afchain (Code Review) wrote:
 Sylvain Afchain has posted comments on this change.
 
 Change subject: Add method to get iptables traffic counters
snip
 --
 To view, visit https://review.openstack.org/35624

Hi Sylvain,

Instead of trying to ask questions directly in the review itself (since it would 
mess up the formatting) I'll just send this to you and the list, since I had some 
questions on the traffic counter changes you've been doing.

First, thanks for working on this, it's definitely something I'm interested in, 
and I'm trying to review all your changes.

Second, do you have more than just the short description from the blueprint of 
how the iptables chains/rules will look when created?  I'm still a little 
confused with this change (above) and how it's matching chains to get 
packet/byte statistics.  I'm thinking it can be done within a single chain so 
that you can do an 'iptables -L $chain' call to get just what you need, instead 
of parsing the entire table.

For example, I did something similar in Nova (out of tree), and it used a 
single chain per VM, such that you could get its statistics with a single 
iptables call like:

(sorry if this wraps)
$ sudo iptables -t mangle -L nova-meter-output-91 -n -v -x [-Z]
Chain nova-meter-output-91 (1 references)
   pkts      bytes target prot opt in  out source     destination
 805210  247931149        all  --  *   *   0.0.0.0/0  0.0.0.0/0    /* inst-91 packets transmitted total */
  15510     964648        all  --  *   *   0.0.0.0/0  x.y.0.0/16
  21282    3075403        all  --  *   *   0.0.0.0/0  x.z.0.0/16
   [...]

None of the rules in the chain has a jump target, so they simply count 
packets/bytes as they pass through, and the chain is called from a single 
location based on IP address, so in iptables-save format it looks like this:

-A nova-meter-output -s $my_ip/32 -i bridge1 -j nova-meter-output-91
-A nova-meter-output-91 -m comment --comment "inst-91 packets transmitted total"
-A nova-meter-output-91 -d x.y.0.0/16
-A nova-meter-output-91 -d x.z.0.0/16
[...]

Obviously with Neutron, and doing this at the router egress, things change, but 
I think it could still be done in a single OUTPUT chain in the filter table.

Thoughts?

-Brian

___
OpenStack-dev mailing list

Re: [openstack-dev] [Review] Use of exception for non-exceptional cases

2013-07-11 Thread Mark McLoughlin
On Wed, 2013-07-10 at 19:49 -0400, Monty Taylor wrote:
 I'd like top-post and hijack this thread for another exception related
 thing:
 
 a) Anyone writing code such as:
 
 try:
   blah()
 except SomeException:
   raise SomeOtherExceptionLeavingOutStackContextFromSomeException
 
 should be mocked ruthlessly.

Ok, mock me ruthlessly then.

Part of designing any API is specifying what predictable exceptions it
will raise. For any predictable error condition, you don't want callers
to have to catch random exceptions from the underlying libraries you
might be calling into.

Say if I was designing an image downloading API, I'd do something like
this:

  https://gist.github.com/markmc/5973868

Assume there's a tonne more stuff that the API would do. You don't want
callers to have to catch socket.error exceptions and whatever other
exceptions might be thrown.
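
A trimmed sketch of the shape of that API (consistent with the traceback below;
the real version would do much more):

    import socket

    class ImageDownloadFailure(Exception):
        def __init__(self, host, port, path, reason):
            msg = ("Failed to download %s from %s:%s: %s" %
                   (path, host, port, reason))
            super(ImageDownloadFailure, self).__init__(msg)

    def download_image(host, port, path):
        try:
            s = socket.create_connection((host, port))
        except socket.error as e:
            # predictable failures surface as the API's own exception type
            raise ImageDownloadFailure(host, port, path, e.strerror)
        # ... fetch the image over the connection here ...
        s.close()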

That works out as:

  Traceback (most recent call last):
    File "t.py", line 20, in <module>
      download_image('localhost', 3366, 'foobar')
    File "t.py", line 18, in download_image
      raise ImageDownloadFailure(host, port, path, e.strerror)
  __main__.ImageDownloadFailure: Failed to download foobar from localhost:3366: Connection refused

Which is a pretty clear exception.

But I think what you're saying is missing is the stack trace from the
underlying exception.

As I understood it, Python doesn't have a way of chaining exceptions
like this but e.g. Java does. A little bit more poking right now shows
up this:

  http://www.python.org/dev/peps/pep-3134/

i.e. we can't do the right thing until Python 3, where we'd do:

 def download_image(host, port, path):
 try:
 s = socket.create_connection((host, port))
 except socket.error as e:
 raise ImageDownloadFailure(host, port, path, e.strerror) from e

I haven't read the PEP in detail yet, though.

Cheers,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] How to write unit tests for db methods?

2013-07-11 Thread Akshat Kakkar

The methods for read/write/update/delete of records in the tables are written 
using SQLAlchemy only; no direct SQL is used.

I have implemented things along the lines of trusts.  Similar to trusts, 
I also have RESTful APIs, and unit tests for them are successfully written. 
In test_backend_sql.py, I see that no unit tests are defined for trusts. 
So, it's confusing for me how to implement the unit tests for my backend SQL code.

I know I am confusing things and I apologise for that, but I am asking 
because of that confusion!




 From: Adam Young ayo...@redhat.com
To: openstack-dev@lists.openstack.org 
Sent: Thursday, 11 July 2013 4:28 AM
Subject: Re: [openstack-dev] [Keystone] How to write unit tests for db methods?
 


On 07/10/2013 06:56 AM, Akshat Kakkar wrote:

I have added 2 tables to keystone.
This should be done in a migration, and should be tested using the
test_db_update.py file.


I have methods which do the read/write/update/delete of records in these 
tables. 
Please explain.  We are not doing direct SQL, but rather using SQLAlchemy.



I want to write unit test for all this. These methods of mine inherit from 
keystone.common.sql and hence any call that these methods will make will go to 
the db returned by keystone.common.sql when creating a session. For writing a 
unit test this db should be a test db and not the production db. So, how can I 
have a session of test db? or is there altogether a different way of writing 
the unit test.

See test_backend_sql.py
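
A bare-bones sketch of the general idea (not Keystone's actual test fixtures):
bind the session to a throwaway in-memory SQLite database for each test:

    import sqlalchemy
    from sqlalchemy.orm import sessionmaker

    def make_test_session(metadata):
        # fresh in-memory database per test; nothing touches a real deployment
        engine = sqlalchemy.create_engine('sqlite://')
        metadata.create_all(engine)   # create the model's tables on the test engine
        return sessionmaker(bind=engine)()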







 From: Dolph Mathews dolph.math...@gmail.com
To: Akshat Kakkar the_aks...@yahoo.co.in; OpenStack Development Mailing List 
openstack-dev@lists.openstack.org 
Sent: Tuesday, 9 July 2013 7:39 PM
Subject: Re: [openstack-dev] [Keystone] How to write unit tests for db methods?
 


I'm assuming you're referring to testing backend drivers as opposed to 
database migrations (tests/test_sql_upgrade.py). 


Backend agnostic tests land in tests/test_backend.py. Backend-specific tests, 
overrides, etc belong in tests/test_backend_sql.py, tests/test_backend_kvs.py, 
etc.


Generally, you can't assume that keystone is backed by a database, however, as 
it's entirely possible to deploy without one.



On Tue, Jul 9, 2013 at 10:55 AM, Akshat Kakkar the_aks...@yahoo.co.in wrote:

How to write unit tests in keystone for the methods which are directly calling 
the backend db? I understand that for testing purpose it should be a *fake 
db*, but how to do that in keystone?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






-- 



-Dolph 




___
OpenStack-dev mailing list OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Need help writing gate tests

2013-07-11 Thread Sean Dague

On 07/10/2013 11:01 PM, Clark Boylan wrote:

On Wed, Jul 10, 2013 at 7:32 PM, Adam Young ayo...@redhat.com wrote:

I want to write 3 new Jenkins gate tests:   Run the Keystone unit tests
against

1. A live LDAP server
2. MySQL
3. Postgresql

Right now, we know that the unit tests will fail against the live DBs, so we
want those two to be non-voting.  The Live LDAP one should be the scheme as
set up by devstack, and should be voting (can be non-voting to start)

where do I start?  Do I need to do this in
https://github.com/openstack-infra/config or
http://ci.openstack.org/devstack-gate.html?


Adding a Jenkins job typically involves two pieces of config in
openstack-infra/config. First you need to add the job to the Jenkins
Job Builder config so that the job gets into Jenkins. This is done in
the files under
modules/openstack_project/files/jenkins_job_builder/config. There are
tons of examples in there and documentation can be found at
http://ci.openstack.org/jjb.html. The other config that is needed is
an update to the zuul layout.yaml file telling zuul when to run the
jobs. The layout file is at
modules/openstack_project/files/zuul/layout.yaml and documentation for
that can be found at http://ci.openstack.org/zuul.html.

Our CentOS 6 and Ubuntu Precise slaves (used to run python 2.6 and 2.7
unittests) have MySQL and PostgreSQL servers running on them and are
available to the unittests. You can see how Nova makes use of these
servers at 
https://github.com/openstack/nova/blob/master/nova/tests/db/test_migrations.py#L31.
I prefer having opportunistic tests like Nova because it keeps the
number of special tests in our system down. If this isn't possible
because the tests don't currently pass you will probably want to add a
new test that runs something like `tox -evenv -- #command to run tests
against real DBs`.
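
A rough sketch of that opportunistic pattern (simplified, not Nova's actual
test code; it assumes the openstack_citest credentials the slaves provide):

    import sqlalchemy
    import testtools

    MYSQL_URL = ("mysql://openstack_citest:openstack_citest"
                 "@localhost/openstack_citest")

    def _mysql_available():
        # opportunistic: only test against MySQL when a server is reachable
        try:
            engine = sqlalchemy.create_engine(MYSQL_URL)
            engine.connect().close()
            return True
        except Exception:
            return False

    class TestMigrationsMySQL(testtools.TestCase):
        def setUp(self):
            super(TestMigrationsMySQL, self).setUp()
            if not _mysql_available():
                self.skipTest("MySQL not available, skipping opportunistic test")
            self.engine = sqlalchemy.create_engine(MYSQL_URL)

        def test_connects_to_real_mysql(self):
            # a real test would walk the migrations up and down here
            self.assertEqual('mysql', self.engine.dialect.name)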


It's not just nova; cinder, glance, and ironic all do the same thing.

Chris Yeoh actually tried to get the same thing into keystone in both G3 
and H1, but it was blocked by the keystone team.


I'd really look at trying to do what nova/cinder/glance/ironic all 
already do here. If it has to land through oslo first, that's a thing to 
do, however nova's been gating on mysql in unit tests since early in 
Grizzly, so it's been proved out pretty well.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Swift deep dive code overview of DiskFile object refactoring - G+ Hangout, Wed. July 17th, 3 PM EDT

2013-07-11 Thread Peter Portante
face-to-face discussions


On Thu, Jul 11, 2013 at 5:12 AM, Thierry Carrez thie...@openstack.orgwrote:

 Peter Portante wrote:
  We are hosting a G+ Hangout session for those interested in OpenStack
  Swift to do an overview, code walk-through, discussion and feedback on
  the proposed DiskFile refactoring changes to define it as a supported API.

 Any particular reason (slides ?) why you're using a G+ hangout instead
 of an IRC meeting for that ? The text nature of the latter makes it more
 easily searched, indexed and archived, so it sounds like a better match
 for the discussion/feedback part...

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The danger of capping python-*clients in core projects, and forbidding it in the future

2013-07-11 Thread Sean Dague

On 07/11/2013 05:06 AM, Thierry Carrez wrote:

Sean Dague wrote:

I think we need to get strict on projects and prevent them from capping
their client requirements. That will also put burden on clients that
they don't break backwards compatibility (which I think was a goal
regardless).


Indeed. The whole idea behind a single release channel for python client
libraries was that you should always be running the latest, as they
should drastically enforce backward compatibility.

Any reason why those caps were introduced in the first place ?


Well global requirements specifies caps for most clients:

python-cinderclient>=1.0.4,<2
python-ceilometerclient>=1.0.1
python-heatclient>=0.2.2
python-glanceclient>=0.9.0,<2
python-keystoneclient>=0.2.1,<0.4
python-memcached
python-neutronclient>=2.2.3,<3.0.0
python-novaclient>=2.12.0,<3
python-quantumclient>=2.2.0,<3.0.0
python-swiftclient>=1.2,<2

I assume projects just copied those lines into their requirements. Then 
keystoneclient bumped release number, and got outside the boundary that 
was allowed by some project.


I know a flurry of python-keystoneclient patches went in after 
python-keystoneclient 0.3.0 was released, but it has a compatibility-breaking issue.


So step one is purge from global requirements.

Step two purge from projects.

Step three enforce they don't come back.

-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Review] Use of exception for non-exceptional cases

2013-07-11 Thread David Stanek
On Thu, Jul 11, 2013 at 5:20 AM, Mark McLoughlin mar...@redhat.com wrote:


 But I think what you're saying is missing is the stack trace from the
 underlying exception.

 As I understood it, Python doesn't have a way of chaining exceptions
 like this but e.g. Java does. A little bit more poking right now shows
 up this:

   http://www.python.org/dev/peps/pep-3134/

 i.e. we can't do the right thing until Python 3, where we'd do:

  def download_image(host, port, path):
  try:
  s = socket.create_connection((host, port))
  except socket.error as e:
  raise ImageDownloadFailure(host, port, path, e.strerror) from e

 I haven't read the PEP in detail yet, though.


You can actually do this in Python 2 and keep the original context:

  def download_image(host, port, path):
  try:
  s = socket.create_connection((host, port))
  except socket.error as e:
  raise ImageDownloadFailure, e, sys.exc_info()[-1]

This will keep the original message and stack trace, but change the type.
 You can also change the message if you want by mucking with e's message.
 I've done that to add a string like " (socket.error)" at the end of the
exception message so I could see the original type.

If you really, really wanted to use a bare except you could also do
something like:

  try:
  do_something_that_raises_an_exception()
  except:
  exc_value, exc_tb = sys.exc_info()[1:]
  raise MyException, exc_value, exc_tb


-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Gate is broken: all gate-tempest-devstack-* are failing

2013-07-11 Thread Sean Dague

On 07/11/2013 06:31 AM, Joe Gordon wrote:

It looks like gate is down:

https://jenkins.openstack.org/job/gate-tempest-devstack-vm-full/
https://bugs.launchpad.net/keystone/+bug/1200161


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Ok, so this is the thing that came up in IRC the other day: while pip 
doesn't enforce that we stay at the same python-keystoneclient version, entry 
points do. So we are getting a delayed failure on cinder-scheduler - 
http://logs.openstack.org/36492/4/check/gate-tempest-devstack-vm-postgres-full/30761/logs/screen-c-sch.txt.gz


Cinder uncapping python-keystoneclient will get us past this. Though I'm 
not quite sure how we got to this break point in the first place.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Gate is broken: all gate-tempest-devstack-* are failing

2013-07-11 Thread Dirk Müller
Hi Sean,

 Cinder uncapping python-keystoneclient will get us past this.

There is a review exactly proposing that:

https://review.openstack.org/#/c/36344/


 Though I'm not
 quite sure how we got to this break point in the first place.


I think this is due to the django_openstack_auth breakage that let
this one slip by (there was for a short amount of time a = 0.3
requirement on python-keystoneclient from somewhere).

Greetings,
Dirk

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Gate is broken: all gate-tempest-devstack-* are failing

2013-07-11 Thread Sean Dague

On 07/11/2013 08:48 AM, John Griffith wrote:




On Thu, Jul 11, 2013 at 6:29 AM, Dirk Müller d...@dmllr.de
mailto:d...@dmllr.de wrote:

Hi Sean,

  Cinder uncapping python-keystoneclient will get us past this.

There is a review exactly proposing that:

https://review.openstack.org/#/c/36344/


Actually for a number of reasons:
https://review.openstack.org/#/c/36559/ is what we needed,
which I gave up on last night a bit after midnight when James Blair moved it
to the front of the queue and it encountered a hiccup, at which point
some other
core Cinder folks took over baby-sitting it and it's finally through.




  Though I'm not
  quite sure how we got to this break point in the first place.


I think this is due to the django_openstack_auth breakage that let
this one slip by (there was for a short amount of time a = 0.3
requirement on python-keystoneclient from somewhere).

Yep, although it wasn't that short of a period of time.  I also raised
this concern
over the ML regarding common-requirements etc and had ZERO response.


I think the issue is that it came in during the fire drill when we were running 
around getting to the bottom of the last gate fail, sorry.


I guess I thought my full uncapping strategy might supersede just a sync 
issue, no?


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Gate is broken: all gate-tempest-devstack-* are failing

2013-07-11 Thread Joe Gordon
On Thu, Jul 11, 2013 at 1:48 PM, John Griffith
john.griff...@solidfire.comwrote:




 On Thu, Jul 11, 2013 at 6:29 AM, Dirk Müller d...@dmllr.de wrote:

 Hi Sean,

  Cinder uncapping python-keystoneclient will get us past this.

 There is a review exactly proposing that:

 https://review.openstack.org/#/c/36344/


 Actually for a number of reasons: https://review.openstack.org/#/c/36559/ is
 what we needed,
 which I gave up on last night a bit after midnight when James Blair moved
 it
 to the front of the queue and it encountered a hiccup, at which point some
 other
 core Cinder folks took over baby-sitting it and it's finally through.



That patch took 12 hours to get through, meaning the gate was down for 12
hours or so.  We should be able to do better than that in the future.
The only question is how?






  Though I'm not
  quite sure how we got to this break point in the first place.


 I think this is due to the django_openstack_auth breakage that let
 this one slip by (there was for a short amount of time a = 0.3
 requirement on python-keystoneclient from somewhere).


 Yep, although it wasn't that short of a period of time.  I also raised
 this concern
 over the ML regarding common-requirements etc and had ZERO response.


 Greetings,
 Dirk

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The danger of capping python-*clients in core projects, and forbidding it in the future

2013-07-11 Thread Dirk Müller
 Let's submit a multi-project bug on launchpad, and be serious for changing
 these global requirements in following days

https://bugs.launchpad.net/keystone/+bug/1200214

created.

Greetings,
Dirk

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The danger of capping python-*clients in core projects, and forbidding it in the future

2013-07-11 Thread Dirk Müller
Hi Thierry,

 Indeed. The whole idea behind a single release channel for python client
 libraries was that you should always be running the latest, as they
 should drastically enforce backward compatibility.

Well, backward compatibility can be tricky when it comes to tests.
For example, we've recently had an issue where the newer keystoneclient
broke mocking in tests. It is debatable whether tests are part of
backward compatibility or not.

See for example https://bugs.launchpad.net/horizon/+bug/1196823

This is currently also preventing me from being able to get a change
on stable/grizzly past gating checks (which stumble on exactly this
regression).

Greetings,
Dirk

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Meeting agenda for Thu Jul 11th at 1500 UTC

2013-07-11 Thread Julien Danjou
The Ceilometer project team holds a meeting in #openstack-meeting, see
https://wiki.openstack.org/wiki/Meetings/MeteringAgenda for more details.

Next meeting is on Thu Jul 11th at 1500 UTC

Please add your name with the agenda item, so we know who to call on during
the meeting.
* Action from previous meeting
  * jd__ Write a terminology page in the documentation
* Deprecate the counter term? 
* Review Havana-2 milestone
  * https://launchpad.net/ceilometer/+milestone/havana-2
* dhellmann - Tempest tests 
* Release python-ceilometerclient? 
* Open discussion

If you are not able to attend or have additional topic(s) you would like
to add, please update the agenda on the wiki.

Cheers,
-- 
Julien Danjou
# Free Software hacker # freelance consultant
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The danger of capping python-*clients in core projects, and forbidding it in the future

2013-07-11 Thread Sean Dague

On 07/11/2013 09:12 AM, Dirk Müller wrote:

Let's submit a multi-project bug on launchpad, and be serious for changing
these global requirements in following days


https://bugs.launchpad.net/keystone/+bug/1200214


Great!

This is the first review we need to land to make progress:

https://review.openstack.org/#/c/36631/

-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Need help writing gate tests

2013-07-11 Thread Adam Young

On 07/11/2013 06:30 AM, Sean Dague wrote:

On 07/10/2013 11:01 PM, Clark Boylan wrote:

On Wed, Jul 10, 2013 at 7:32 PM, Adam Young ayo...@redhat.com wrote:

I want to write 3 new Jenkins gate tests:   Run the Keystone unit tests
against

1. A live LDAP server
2. MySQL
3. Postgresql

Right now, we know that the unit tests will fail against the live 
DBs, so we
want those two to be non-voting.  The Live LDAP one should be the 
scheme as

set up by devstack, and should be voting (can be non-voting to start)

where do I start?  Do I need to do this in
https://github.com/openstack-infra/config or
http://ci.openstack.org/devstack-gate.html?


Adding a Jenkins job typically involves two pieces of config in
openstack-infra/config. First you need to add the job to the Jenkins
Job Builder config so that the job gets into Jenkins. This is done in
the files under
modules/openstack_project/files/jenkins_job_builder/config. There are
tons of examples in there and documentation can be found at
http://ci.openstack.org/jjb.html. The other config that is needed is
an update to the zuul layout.yaml file telling zuul when to run the
jobs. The layout file is at
modules/openstack_project/files/zuul/layout.yaml and documentation for
that can be found at http://ci.openstack.org/zuul.html.

Our CentOS 6 and Ubuntu Precise slaves (used to run python 2.6 and 2.7
unittests) have MySQL and PostgreSQL servers running on them and are
available to the unittests. You can see how Nova makes use of these
servers at 
https://github.com/openstack/nova/blob/master/nova/tests/db/test_migrations.py#L31.

I prefer having opportunistic tests like Nova because it keeps the
number of special tests in our system down. If this isn't possible
because the tests don't currently pass you will probably want to add a
new test that runs something like `tox -evenv -- #command to run tests
against real DBs`.


 It's not just nova; cinder, glance, and ironic all do the same thing.

Chris Yeoh actually tried to get the same thing into keystone in both 
G3 and H1, but it was blocked by the keystone team.


No, he submitted a review request, we responded that more work was 
needed, and then the effort got overtaken by other things.  We certainly 
didn't block him, as we are as interested in the result as he/you are.
  We are more than willing to work with him on that.  We already have 
our own migration tests, and we were working together to get his patch 
and ours working in sync.  Still planning on doing that.




I'd really look at trying to do what nova/cinder/glance/ironic all 
already do here. If it has to land through oslo first, that's a thing 
to do, however nova's been gating on mysql in unit tests since early 
in Grizzly, 
We have Mysql and Postgres based integration tests, just not the unit 
tests.  I recall we wanted to get Chris's stuff into Oslo, but I don't 
know what the state of that is.



so it's been proved out pretty well.

-Sean




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Savanna-all] [savanna-all] merging savanna-extra elements

2013-07-11 Thread Ivan Berezovskiy
Matt,

install.d is a good place to install packages (like java and hadoop) and
edit their configuration files. The first scripts for Fedora were in the
install.d subdirectory too, but during installation I got an error related to
proc-trigger (input/output errors in proc-trigger). So I decided to change
the subdirectory.
Regarding 70,80,90-... vs 11,12,13-...: this number is the position of the
script that runs within its subdirectory. All scripts of elements in the same
directory are sorted by number, so the values of these numbers are not
important, but all numbers should be unique within each subdirectory.

--
Thanks, Ivan


2013/7/10 Matthew Farrellee m...@redhat.com

 Ivan,

 $ tree elements/hadoop/install.d
 elements/hadoop/install.d
 |-- 70-setup-java
 |-- 80-setup-hadoop
 `-- 90-setup-ssh
 0 directories, 3 files

 $ tree elements/hadoop_fedora/post-install.d
 elements/hadoop_fedora/post-install.d
 |-- 11-setup-java
 |-- 12-setup-hadoop
 `-- 13-connection-setup
 0 directories, 3 files

 I want to align these two directory structures and filenames.

 install.d vs post-install.d, which is preferred?

 70,80,90-... vs 11,12,13-..., which is preferred?

 Best,


 matt

 --
 Mailing list: https://launchpad.net/~savanna-all
 Post to     : savanna-all@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~savanna-all
 More help   : https://help.launchpad.net/ListHelp

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Current biggest OpenStack gate fail culprit - neutron bug #1194026

2013-07-11 Thread Dan Smith
 In the corner to my left, our current largest gate reset culprit
 appears to be neutron bug #1194026 - weighing in with 62 rechecks
 since June 24th (http://status.openstack.org/rechecks/)

So, with some of the highest rates of patch traffic we've seen over the
last couple of weeks before the H2 deadline, I think this is really
becoming a problem. I think merge times are through the roof as a
result.

Since the neutron gate is not a full tempest run, I think we should
consider making a temporary change. I know that turning it into a
non-voting job is not a popular solution, and I hate to even suggest
it. However, it's just a subset of the tests anyway and I think the
impact is currently overshadowing the potential for regression
detection, given the relatively small amount of coverage. Is this
something people would consider?

Of course, the other option is to try to skip the offending test if
we're running with neutron support, which may help. Since we don't know
what the problem is and it *seems* to be an issue with resources not
becoming available before a timeout (AIUI), I worry that this will just
move the problem elsewhere.

Thoughts?

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Current biggest OpenStack gate fail culprit - neutron bug #1194026

2013-07-11 Thread Mark McLoughlin
On Thu, 2013-07-11 at 09:28 -0600, John Griffith wrote:
 On Thu, Jul 11, 2013 at 9:16 AM, Dan Smith d...@danplanet.com wrote:
 
   In the corner to my left, our current largest gate reset culprit
   appears to be neutron bug #1194026 - weighing in with 62 rechecks
   since June 24th (http://status.openstack.org/rechecks/)
 
  So, with some of the highest rates of patch traffic we've seen over the
  last couple of weeks before the H2 deadline, I think this is really
  becoming a problem. I think merge times are through the roof as a
  result.
 
  Since the neutron gate is not a full tempest run, I think we should
  consider making a temporary change. I know that turning it into a
  non-voting job is not a popular solution, and I hate to even suggest
  it. However, it's just a subset of the tests anyway and I think the
 
 
 Well to be blunt, if there's not even anybody assigned to the defect and
 it's significantly impacting the progress of every other project, I don't
 know that it's such a bad idea.  The process worked, it identified an
 issue, and now it's known/understood; however, it's causing significant
 turmoil everywhere else.
 Are we gaining anything by having it continue to fail and do rechecks for
 the next week?

I feel a similar way to this as I do about regressions for which we've
identified a root cause patch, even though it's a completely separate
thing. In those cases, we should take decisive action to revert quickly.

This is holding people up, this level of failure is unacceptable, and
continuing to frustrate people is not going to get it fixed faster. In
this case, we should take decisive action and make it non-voting
quickly.

Cheers,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Current biggest OpenStack gate fail culprit - neutron bug #1194026

2013-07-11 Thread Russell Bryant
On 07/11/2013 11:46 AM, Mark McLoughlin wrote:
 On Thu, 2013-07-11 at 09:28 -0600, John Griffith wrote:
 On Thu, Jul 11, 2013 at 9:16 AM, Dan Smith d...@danplanet.com wrote:

 In the corner to my left, our current largest gate reset culprit
 appears to be neutron bug #1194026 - weighing in with 62 rechecks
 since June 24th (http://status.openstack.org/rechecks/)

 So, with some of the highest rates of patch traffic we've seen over the
 last couple of weeks before the H2 deadline, I think this is really
 becoming a problem. I think merge times are through the roof as a
 result.

 Since the neutron gate is not a full tempest run, I think we should
 consider making a temporary change. I know that turning it into a
 non-voting job is not a popular solution, and I hate to even suggest
 it. However, it's just a subset of the tests anyway and I think the


 Well to be blunt, if there's not even anybody assigned to the defect and
 it's significantly impacting
 the progress of every other project.  I don't know that it's such a bad
 idea.  The process worked, it
 identified an issue, now it's known/understood however it's causing
 significant turmoil everywhere else.
 Are we gaining anything by having it continue to fail and do rechecks for
 the next week?
 
 I feel a similar way to this as I do about regressions for which we've
 identified a root cause patch, even though it's a completely separate
 thing. In those cases, we should take decisive action to revert quickly.
 
 This is holding people up, this level of failure is unacceptable, and
 continuing to frustrate people is not going to get it fixed faster. In
 this case, we should take decisive action and make it non-voting
 quickly.

Change to make it non-voting proposed here:

https://review.openstack.org/36685

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Current biggest OpenStack gate fail culprit - neutron bug #1194026

2013-07-11 Thread John Griffith
On Thu, Jul 11, 2013 at 9:38 AM, Russell Bryant rbry...@redhat.com wrote:

 On 07/11/2013 11:28 AM, John Griffith wrote:
 
 
 
  On Thu, Jul 11, 2013 at 9:16 AM, Dan Smith d...@danplanet.com
  mailto:d...@danplanet.com wrote:
 
   In the corner to my left, our current largest gate reset culprit
   appears to be neutron bug #1194026 - weighing in with 62 rechecks
   since June 24th (http://status.openstack.org/rechecks/)
 
  So, with some of the highest rates of patch traffic we've seen over
 the
  last couple of weeks before the H2 deadline, I think this is really
  becoming a problem. I think merge times are through the roof as a
  result.
 
  Since the neutron gate is not a full tempest run, I think we should
  consider making a temporary change. I know that turning it into a
  non-voting job is not a popular solution, and I hate to even suggest
  it. However, it's just a subset of the tests anyway and I think the
 
 
  Well to be blunt, if there's not even anybody assigned to the defect and
  it's significantly impacting
  the progress of every other project.  I don't know that it's such a bad
  idea.  The process worked, it
  identified an issue, now it's known/understood however it's causing
  significant turmoil everywhere else.
  Are we gaining anything by having it continue to fail and do rechecks
  for the next week?

 +1 to making it non-voting until this is resolved.  This is a sensitive
 week for gate and check times.

  impact is currently overshadowing the potential for regression
  detection, given the relatively small amount of coverage. Is this
  something people would consider?
 
  Of course, the other option is to try to skip the offending test if
  we're running with neutron support, which may help. Since we don't
 know
  what the problem is and it *seems* to be an issue with resources not
  becoming available before a timeout (AIUI), I worry that this will
 just
  move the problem elsewhere.

 Disabling a specific test would be preferred IMO, but if that's not
 sufficient, I'd +1 downgrading the whole thing to non-voting for now.


Excellent point, test_008_check_public_network_connectivity.  If it's
possible to log the results but not fail the gate for this I think that
would be ideal, otherwise skip that test for now.


 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Current biggest OpenStack gate fail culprit - neutron bug #1194026

2013-07-11 Thread Sean Dague

On 07/11/2013 11:54 AM, Sean Dague wrote:

On 07/11/2013 11:33 AM, Matthew Treinish wrote:

On Thu, Jul 11, 2013 at 08:16:26AM -0700, Dan Smith wrote:

In the corner to my left, our current largest gate reset culprit
appears to be neutron bug #1194026 - weighing in with 62 rechecks
since June 24th (http://status.openstack.org/rechecks/)


So, with some of the highest rates of patch traffic we've seen over the
last couple of weeks before the H2 deadline, I think this is really
becoming a problem. I think merge times are through the roof as a
result.

Since the neutron gate is not a full tempest run, I think we should
consider making a temporary change. I know that turning it into a
non-voting job is not a popular solution, and I hate to even suggest
it. However, it's just a subset of the tests anyway and I think the
impact is currently overshadowing the potential for regression
detection, given the relatively small amount of coverage. Is this
something people would consider?


I don't think this is the way to go. Even though it's limited coverage,
without it Neutron would have no gating integrated testing run on it
at all.
In my experience this will just cause more difficulty down the road when
we decide to switch it back to voting. Things tend to bit rot fairly
quickly.



Of course, the other option is to try to skip the offending test if
we're running with neutron support, which may help. Since we don't know
what the problem is and it *seems* to be an issue with resources not
becoming available before a timeout (AIUI), I worry that this will just
move the problem elsewhere.


So if it is a single test (or set of tests) failing then this is
doable. We
can do this in the short term, but if it just moves the problem
elsewhere then
we're just in the same situation right? So what's the harm in trying
this?


Let's start with the test skip.

I am however pretty frustrated that we're really not getting anyone from
neutron looking at this. We're at 121 rechecks (plus I'm sure there were
plenty of no bug rechecks, I've seen a couple). So 150+ gate resets
because of this bug. Which is 150hrs worth of delay put into the gate.


Actually, I'm revising my point of view. If we skip the test, people 
can't debug in the gate. If we make the job non-voting, the neutron team 
can submit patches up and run rechecks on them to try to reproduce the fail.


So let's go non-voting here.

-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Current biggest OpenStack gate fail culprit - neutron bug #1194026

2013-07-11 Thread Thierry Carrez
John Griffith wrote:
 Well to be blunt, if there's not even anybody assigned to the defect and
 it's significantly impacting
 the progress of every other project.  I don't know that it's such a bad
 idea.

There is someone assigned to it since it was raised at the release
meeting. He doesn't seem to be making a lot of progress though.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Current biggest OpenStack gate fail culprit - neutron bug #1194026

2013-07-11 Thread Nachi Ueno
Hi Sean

Sorry it is taking a long time to fix this problem.
At least 3 neutron core devs are working on this issue, but
it is a kind of timing issue so we are struggling to replicate it.

I'm also OK to move it for non-voting now.


2013/7/11 Thierry Carrez thie...@openstack.org:
 Sean Dague wrote:
 On 07/11/2013 11:54 AM, Sean Dague wrote:
 Let's start with the test skip.

 I am however pretty frustrated that we're really not getting anyone from
 neutron looking at this. We're at 121 rechecks (plus I'm sure there were
 plenty of no bug rechecks, I've seen a couple). So 150+ gate resets
 because of this bug. Which is 150hrs worth of delay put into the gate.

 Actually, I'm revising my point of view. If we skip the test, people
 can't debug in the gate. if we make the job non-voting, the neutron team
 can submit patches up and run rechecks on them to try to reproduce the
 fail.

 So let's go non-voting here.

 The problem with this approach is that you'll fail to notice OTHER
 (genuine) issues that your patch introduces, since you'll be trained to
 ignore that line (and not necessarily look into the details).

 Disabling only the flaky test sounds like a better way to go to me. Yes
 it makes fixing the flaky test slightly more difficult, but at least it
 doesn't increase the regression risk.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] SQLAlchemy-migrate needs a new maintainer

2013-07-11 Thread Thomas Goirand
Hi,

From discussing with Jan Dittberner, who is upstream for sqlalchemy-migrate,
it appears that he doesn't have time to maintain it.

Is the OpenStack project willing to take over? Jan is OK with handing over
everything: moving to GitHub, giving access to PyPI, etc. Below is his
reply to me when I asked him.

Or is the OpenStack project moving toward Alembic as well?

Thoughts anyone?

Thomas Goirand (zigo)

On 07/12/2013 01:27 AM, Jan Dittberner wrote:
 I would be very happy to hand over the maintenance of sqlalchemy-migrate to
 a team that actually uses it. At the moment I take care of the Google Code
 [1] project for sqlalchemy-migrate and maintain a Jenkins instance at
 http://jenkins.gnuviech-server.de/. I'm all in favour of moving to github,
 Google Code was just chosen because it was available at the time the
 project moved from the initial developer's (Evan Rosson) personal server. I
 can also give access to the PyPI project page [2] to a prospective new
 maintainer/team.
 
 I wrote some sphinx documentation and improved the tests a while ago but I
 have no time to maintain it properly. I switched to alembic for my small
 personal projects.
 
 [1] https://code.google.com/p/sqlalchemy-migrate/
 [2] https://pypi.python.org/pypi/sqlalchemy-migrate
 
 
 Best regards
 Jan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Oslo] About blueprint: Separate translation domain for log messages

2013-07-11 Thread Ying Chun Guo

Hi, Oslo dev team,

I remember there was a blueprint discussed in the Havana summit to separate
translation domains.
https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain

I don't see any progress there.
Do you have any plans to implement it?
The translation team set command line messages as high priority, but log
messages as low priority.
So we want the domains to be separated.
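For what it's worth, a minimal sketch of what separate domains could look
like, so command line messages and log messages can be translated
independently. This uses plain gettext rather than whatever the Oslo
blueprint ends up providing, and the module and domain names below are
hypothetical:

    import gettext

    # user-facing / command line messages: high priority for translators
    _ = gettext.translation('nova', fallback=True).gettext

    # log messages: a separate catalog, so translators can treat it as low
    # priority (or skip it entirely) without touching the main domain
    _LI = gettext.translation('nova-log-info', fallback=True).gettext

    print(_("Instance could not be found"))   # looked up in the 'nova' domain
    msg = _LI("Starting instance %s")         # looked up in 'nova-log-info'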

Regards
Daisy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SQLAlchemy-migrate needs a new maintainer

2013-07-11 Thread David Ripton
OpenStack is currently divided.  Older projects like Nova use 
sqlalchemy-migrate.  Some newer projects like Neutron use alembic.


I'd personally like to see everything in Alembic, but migrating all the 
Nova scripts in a way that doesn't break compatibility will be a big 
challenge.  It's easier for projects with less to port.


Another option is to take over maintaining sqlalchemy-migrate and bend 
it to our needs.  (It's mostly okay, but the big issue for me is its use 
of strictly incrementing integer sequence numbers.  That causes problems 
both when competing patches in review race for the same filename and when 
we try to backport some, but not all, migration scripts to a stable 
branch.)  We already apply some patches to upstream, so having a friendly 
maintainer who would apply the patches that OpenStack needs would be 
helpful.
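As a rough illustration of the filename issue (the filenames and revision
IDs below are made up): sqlalchemy-migrate orders scripts by an integer
prefix, so two patches in review can both claim the next number, while an
alembic script declares its own revision and its parent, so ordering is a
chain rather than a shared counter:

    # sqlalchemy-migrate style: ordering comes from the integer prefix, e.g.
    #   migrate_repo/versions/187_add_bootable_to_volumes.py
    # and a competing patch in review also wants 187, so one of them collides.

    # alembic style: the migration script carries its own identifiers instead.
    # revision identifiers, used by Alembic (values here are hypothetical).
    revision = '3c6f1a2b9d4e'
    down_revision = '1f0e5d7a8c2b'

    from alembic import op
    import sqlalchemy as sa


    def upgrade():
        # add a column without renumbering anything already merged
        op.add_column('volumes', sa.Column('bootable', sa.Boolean(),
                                           nullable=True))


    def downgrade():
        op.drop_column('volumes', 'bootable')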


This will be a topic at the DB meeting today at 1900 UTC (about 20 
minutes from when I send this email).  So please attend if it's 
important to you.



On 07/11/2013 02:18 PM, Thomas Goirand wrote:


Discussing with Jan Dittberner, who is upstream for sqlalchemy-migrate,
it appears that he doesn't have time to maintain it.

Is the OpenStack project willing to take over? Jan is ok to hand over
everything, moving to Github, give access to Pypi, etc. Below is his
reply to me when I asked him.

Or is the OpenStack project moving toward Alembic as well?

Thoughts anyone?

Thomas Goirand (zigo)

On 07/12/2013 01:27 AM, Jan Dittberner wrote:

I would be very happy to hand over the maintenance of sqlalchemy-migrate to
a team that actually uses it. At the moment I take care of the Google Code
[1] project for sqlalchemy-migrate and maintain a Jenkins instance at
http://jenkins.gnuviech-server.de/. I'm all in favour of moving to github,
Google Code was just choosen because it was available at the time the
project moved from the initial developer's (Evan Rosson) personal server. I
can also give access to the PyPI project page [2] to a prospective new
maintainer/team.

I wrote some sphinx documentation and improved the tests a while ago but I
have no time to maintain it properly. I switched to alembic for my small
personal projects.

[1] https://code.google.com/p/sqlalchemy-migrate/
[2] https://pypi.python.org/pypi/sqlalchemy-migrate


Best regards
Jan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Savanna-all] Blueprints for EDP components

2013-07-11 Thread Alexander Kuznetsov
Hi,

Blueprints for EDP components on launchpad are added

https://blueprints.launchpad.net/savanna/+spec/job-manager-components
https://blueprints.launchpad.net/savanna/+spec/data-discovery-component
https://blueprints.launchpad.net/savanna/+spec/job-source-component
https://blueprints.launchpad.net/savanna/+spec/methods-for-plugin-api-to-support-edp

Each blueprint contains short component descriptions, objects model and
methods, which will be implemented in this component.

Your comments and suggestions are welcome.

Thanks,
Alexander Kuznetsov.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SQLAlchemy-migrate needs a new maintainer

2013-07-11 Thread Monty Taylor


On 07/11/2013 02:40 PM, David Ripton wrote:
 OpenStack is currently divided.  Older projects like Nova use
 sqlalchemy-migrate.  Some newer projects like Neutron use alembic.
 
 I'd personally like to see everything in Alembic, but migrating all the
 Nova scripts in a way that didn't break compatibility will be a big
 challenge.  It's easier for projects with less to port.
 
 Another option is to take over maintaining sqlalchemy-migrate and bend
 it to our needs.  (It's mostly okay, but the big issue for me is its use
 of strictly incrementing integer sequence numbers.  That both causes
 problems when competing patches in review race for the same filename,
 and when we try to backport some but not all migration scripts to a
 stable branch.)  We already apply some patches to upstream, so having a
 friendly maintainer who would apply patches that OpenStack needs would
 be helpful.
 
 This will be a topic at the DB meeting today at 1900 UTC (about 20
 minutes from when I send this email).  So please attend if it's
 important to you.
 
 
 On 07/11/2013 02:18 PM, Thomas Goirand wrote:
 
 Discussing with Jan Dittberner, who is upstream for sqlalchemy-migrate,
 it appears that he doesn't have time to maintain it.

 Is the OpenStack project willing to take over? Jan is ok to hand over
 everything, moving to Github, give access to Pypi, etc. Below is his
 reply to me when I asked him.

Hi - We discussed this in the db meeting and decided that, as much as
we're not thrilled with sqlalchemy-migrate (I believe boris-42 summed it
up as "bad bad bad very bad things"), we've got a pretty strong
dependency on it right now and for the next while.

SO - let's work on getting it moved into our systems; then we at
least have the ability to patch/release if needed.

 Or is the OpenStack project moving toward Alembic as well?

 Thoughts anyone?

 Thomas Goirand (zigo)

 On 07/12/2013 01:27 AM, Jan Dittberner wrote:
 I would be very happy to hand over the maintenance of
 sqlalchemy-migrate to
 a team that actually uses it. At the moment I take care of the Google
 Code
 [1] project for sqlalchemy-migrate and maintain a Jenkins instance at
 http://jenkins.gnuviech-server.de/. I'm all in favour of moving to
 github,
 Google Code was just choosen because it was available at the time the
 project moved from the initial developer's (Evan Rosson) personal
 server. I
 can also give access to the PyPI project page [2] to a prospective new
 maintainer/team.

 I wrote some sphinx documentation and improved the tests a while ago
 but I
 have no time to maintain it properly. I switched to alembic for my small
 personal projects.

 [1] https://code.google.com/p/sqlalchemy-migrate/
 [2] https://pypi.python.org/pypi/sqlalchemy-migrate


 Best regards
 Jan


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SQLAlchemy-migrate needs a new maintainer

2013-07-11 Thread Monty Taylor


On 07/11/2013 03:12 PM, Monty Taylor wrote:
 
 
 On 07/11/2013 02:40 PM, David Ripton wrote:
 OpenStack is currently divided.  Older projects like Nova use
 sqlalchemy-migrate.  Some newer projects like Neutron use alembic.

 I'd personally like to see everything in Alembic, but migrating all the
 Nova scripts in a way that didn't break compatibility will be a big
 challenge.  It's easier for projects with less to port.

 Another option is to take over maintaining sqlalchemy-migrate and bend
 it to our needs.  (It's mostly okay, but the big issue for me is its use
 of strictly incrementing integer sequence numbers.  That both causes
 problems when competing patches in review race for the same filename,
 and when we try to backport some but not all migration scripts to a
 stable branch.)  We already apply some patches to upstream, so having a
 friendly maintainer who would apply patches that OpenStack needs would
 be helpful.

 This will be a topic at the DB meeting today at 1900 UTC (about 20
 minutes from when I send this email).  So please attend if it's
 important to you.


 On 07/11/2013 02:18 PM, Thomas Goirand wrote:

 Discussing with Jan Dittberner, who is upstream for sqlalchemy-migrate,
 it appears that he doesn't have time to maintain it.

 Is the OpenStack project willing to take over? Jan is ok to hand over
 everything, moving to Github, give access to Pypi, etc. Below is his
 reply to me when I asked him.
 
 Hi - We discussed this in the db meeting and decided that as much as
 we're not thrilled with sqlalchemy-migrate (I believe boris-42 summed it
 up as bad bad bad very bad things) we've got a pretty strong
 dependency on it right now and for the next while.
 
 SO - let's work on getting it moved into our systems and then we at
 least have the ability to patch/release if needed.

We've got the upstream PyPI and RTFD credentials now; the project should
be moved into openstack systems soon enough. I also went through and
cleaned up the build and test stuff to work like our stuff works (if we're
going to be maintaining it, we might as well, you know, do it how we do
things).

This brings us to the most important question:

Who wants to be on the core team?

 Or is the OpenStack project moving toward Alembic as well?

 Thoughts anyone?

 Thomas Goirand (zigo)

 On 07/12/2013 01:27 AM, Jan Dittberner wrote:
 I would be very happy to hand over the maintenance of
 sqlalchemy-migrate to
 a team that actually uses it. At the moment I take care of the Google
 Code
 [1] project for sqlalchemy-migrate and maintain a Jenkins instance at
 http://jenkins.gnuviech-server.de/. I'm all in favour of moving to
 github,
 Google Code was just choosen because it was available at the time the
 project moved from the initial developer's (Evan Rosson) personal
 server. I
 can also give access to the PyPI project page [2] to a prospective new
 maintainer/team.

 I wrote some sphinx documentation and improved the tests a while ago
 but I
 have no time to maintain it properly. I switched to alembic for my small
 personal projects.

 [1] https://code.google.com/p/sqlalchemy-migrate/
 [2] https://pypi.python.org/pypi/sqlalchemy-migrate


 Best regards
 Jan


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SQLAlchemy-migrate needs a new maintainer

2013-07-11 Thread Thomas Goirand
On 07/12/2013 07:29 AM, Monty Taylor wrote:
 We've got the upstream pypi and rtfd credentials now, the project should
 be moved in to openstack systems soon enough. I also went through and
 cleaned up build and test stuff work work like our stuff works (if we're
 going to be maintaining it, we might as well, you know, do it how we do
 things)

Cool! You definitely rock, my friend.

You might also want to apply Fedora's patch (thanks to Pádraig Brady)
for SQLAlchemy 0.8:
http://pkgs.fedoraproject.org/cgit/python-migrate.git/commit/?id=603ed1d1

which was the reason I started the discussion with Jan Dittberner. Jan
wrote to me that there's more to >= 0.8 compat than just this patch, but
I haven't had time to dig through it.

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack] Improve inject network configuration

2013-07-11 Thread Jae Sang Lee
Hi, stackers.

When creating a VM with multiple NICs, the user has to bring up the second
interface on the instance manually in order to use the second IP.
http://docs.openstack.org/trunk/openstack-compute/admin/content/using-multi-nics.html

I intend to fix the interfaces.template file so that the other interfaces
can be brought up automatically at boot time.
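For illustration, here is roughly what the injected /etc/network/interfaces
could look like once the template also declares the second interface (the
interface names and addresses below are made up):

    auto eth0
    iface eth0 inet static
        address 10.0.0.2
        netmask 255.255.255.0
        gateway 10.0.0.1

    # today the second NIC is written out but left down; an "auto" stanza
    # like this would bring it up at boot time
    auto eth1
    iface eth1 inet static
        address 192.168.1.2
        netmask 255.255.255.0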

I registered a blueprint for this a month ago
(https://blueprints.launchpad.net/nova/+spec/better-network-injection), but
it has not been approved yet.

If you have permission to approve blueprints and are reading this mail,
please approve mine.


Thanks.

Jay Lee
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] AMQP Version upgrade plans?

2013-07-11 Thread Russell Bryant
On 07/11/2013 12:06 PM, William Henry wrote:
 
 
 - Original Message -
 On 07/08/2013 10:51 AM, Ted Ross wrote:
 If someone from the Qpid community were to work on integrating the new
 AMQP 1.0 technology into OpenStack, where would be the right place to
 start?  Would it be to add a new transport to oslo.messaging?

 I think so, yes.  oslo.messaging is new, but it will deprecate the
 existing 'rpc' library in oslo-incubator.  All projects will need to
 move to oslo.messaging, so for something new I would focus efforts there.
 
 I think that one of the important points that Ted brought up is that AMQP 1.0 
 doesn't have the concepts of broker artifacts like exchanges etc.
 
 A recent change I proposed to the existing impl_qpid.py which focuses more on 
 addressing and not exchanges is a very important first step to solve several 
 issues: a recent qpidd leak issue, transitioning to AMQP 1.0 (addressing), 
 and possible HA solutions.
 
 This is an area I'd really like to continue to help out in. I'm back from 
 some vacation and would like to get stuck in soon.

Regarding the qpid exchange leak, that issue is largely mitigated by the
fact that the only time we declare a direct exchange is for replies to
an RPC.  In previous versions there was a new one of these for *every*
method call, which made this problem really bad.  In the current code,
we only create a single one.

However, we're still left with a leak.  The fact that RabbitMQ supports
auto-delete on exchanges and Qpid doesn't is what got us into this spot,
since this code works just like the kombu driver with respect to all of
this.
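For context, a rough sketch (exchange/queue names here are hypothetical) of
the auto-delete behaviour kombu/RabbitMQ gives us and the Qpid driver can't
rely on: the reply exchange and queue can be declared so the broker removes
them once nothing uses them any more:

    from kombu import Connection, Exchange, Queue

    # hypothetical reply exchange/queue for an RPC reply; auto_delete=True
    # asks the broker to drop them once they are unused, so nothing leaks
    # even if the caller never cleans up explicitly.
    reply_exchange = Exchange('reply_3f2a9c', type='direct', auto_delete=True)
    reply_queue = Queue('reply_3f2a9c', exchange=reply_exchange,
                        routing_key='reply_3f2a9c', auto_delete=True)

    with Connection('amqp://guest:guest@localhost//') as conn:
        # declaring binds the queue to the exchange; the broker garbage
        # collects both once the last consumer goes away.
        reply_queue(conn.default_channel).declare()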

As for migration to AMQP 1.0, how do these changes help?  Supporting
AMQP 1.0 requires an entirely new driver that uses Qpid Proton, right?
How does changing addressing in the current Qpid driver (that will never
do 1.0) help?

I'm curious what you mean by "possible HA solutions".  Can you elaborate?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SQLAlchemy-migrate needs a new maintainer

2013-07-11 Thread Jeremy Stanley
On 2013-07-12 08:01:58 +0800 (+0800), Thomas Goirand wrote:
[...]
 You might as well want to apply Fedora's patch (thanks to Pádraig Brady)
 for SQLAlchemy 0.8:
 http://pkgs.fedoraproject.org/cgit/python-migrate.git/commit/?id=603ed1d1
 
 which was the reason I started the discussion with Jan Dittberner.
[...]

At this point any of you can simply propose it as a code review
change to the stackforge/sqlalchemy-migrate project. Also, if anyone
wants to volunteer as initial members of sqlalchemy-migrate-core...
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SQLAlchemy-migrate needs a new maintainer

2013-07-11 Thread Monty Taylor


On 07/11/2013 08:01 PM, Thomas Goirand wrote:
 On 07/12/2013 07:29 AM, Monty Taylor wrote:
 We've got the upstream pypi and rtfd credentials now, the project should
 be moved in to openstack systems soon enough. I also went through and
 cleaned up build and test stuff work work like our stuff works (if we're
 going to be maintaining it, we might as well, you know, do it how we do
 things)
 
 Cool! You definitively rox my friend.
 
 You might as well want to apply Fedora's patch (thanks to Pádraig Brady)
 for SQLAlchemy 0.8:
 http://pkgs.fedoraproject.org/cgit/python-migrate.git/commit/?id=603ed1d1
 
 which was the reason I started the discussion with Jan Dittberner. Jan
 wrote to me that there's more to = 0.8 compat than just this patch, but
 I had not time to dig it through.

Done.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Revert Pass instance host-id to Quantum using port bindings extension.

2013-07-11 Thread Kyle Mestery (kmestery)
I agree with Andre's concerns around the implications of polling in what Aaron 
is proposing, and in fact, this is one reason the existing change is so nice. 
The ML2 sub-team talked about this at a recent meeting, and we liked the 
approach which Yong had taken with the patch. But as Andre says, we're all for 
working to simplify things, keeping the goals he mentioned in mind.

What are your thoughts Aaron?

Thanks,
Kyle

On Jul 11, 2013, at 8:44 PM, Andre Pech ap...@aristanetworks.com wrote:

 Hey Aaron,
 
 As an interested party in the change, figured I'd take a stab at responding. 
 I've talked with people at BigSwitch and Cisco about this change, so I know 
 others are interested in this as well, but I'll let them give their 
 perspective.
 
 At a high level, our goal at Arista is similar to what you mention. We want 
 to integrate the provisioning of the physical network within Neutron in 
 conjunction with the virtual network. We have no interest in controlling the 
 virtual switch layer, and so we'd like to do this in a way that does not tie 
 us to any particular virtual switching technology (should work just as well 
 with OVS, LinuxBridge, or whatever future virtual switch technology a 
 customer may choose to use). And it needs the chance to be inline - the 
 provisioning of the physical network has to happen alongside the virtual 
 network, so that failures to provision the physical network can be propogated 
 to the user in the same way as a failure to set up the virtual network.
 
 The thing I like most about the current solution is that it's event-driven. 
 There's no polling of the information out of band from nova (I'm not sure how 
 accepted it would be to poll this info directly from neutron, which would 
 then force you to do it from an outside system). It also doesn't require any 
 coordination with agents running on the compute side (in line with the goal 
 of working across multiple virtual switching technologies).
 
 I'd be really happy with another solution, but it'd be great to see those 
 properties preserved. I have reservations about the alternatives you're 
 proposing. Happy to hop on a call with other interested parties to come up 
 with a better middle ground that allows you to do the simplification you're 
 proposing while still giving Neutron an explicit hook to learn about the host 
 a VM was placed on.
 
 Andre
 
 
 On Thu, Jul 11, 2013 at 1:30 PM, Aaron Rosen aro...@nicira.com wrote:
 Hi, 
 
 I think we should revert this patch that was added here 
 (https://review.openstack.org/#/c/29767/). What this patch does is, when 
 nova-compute calls into quantum to create the port, it passes in the hostname 
 on which the instance was booted. The idea of the patch was that providing 
 this information would allow hardware device vendors' management stations to 
 segment the network in a more precise manner (for example, 
 automatically trunking the vlan on the physical switch port connected to the 
 compute node on which the vm instance was started).
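For reference, a rough sketch of what that amounts to on the API side; the
host name, UUIDs and credentials below are hypothetical, and binding:host_id
is the attribute defined by the port bindings extension:

    from quantumclient.v2_0 import client as quantum_client

    quantum = quantum_client.Client(username='admin', password='secret',
                                    tenant_name='admin',
                                    auth_url='http://127.0.0.1:5000/v2.0/')

    body = {
        'port': {
            'network_id': 'NETWORK-UUID',     # network the instance attaches to
            'device_id': 'INSTANCE-UUID',     # instance that will own the port
            'binding:host_id': 'compute-01',  # host nova-compute booted it on
        }
    }
    port = quantum.create_port(body)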
 
 In my opinion, this isn't the right approach. There are several 
 other ways to get the information about where a specific port lives. For 
 example, in the OVS plugin case the agent running on the nova-compute node 
 can update the port in quantum to provide this information. Alternatively, 
 quantum could query nova using the port.device_id to determine which server 
 the instance is on.
 
 My motivation for removing this code is that I now have the free cycles to work on 
 https://blueprints.launchpad.net/nova/+spec/nova-api-quantum-create-port  
 discussed here 
 (http://lists.openstack.org/pipermail/openstack-dev/2013-May/009088.html)  . 
 This was about moving the quantum port creation from the nova-compute host to 
 nova-api if a network-uuid is passed in. This will allow us to remove all the 
 quantum logic from the nova-compute nodes and simplify orchestration. 
 
 Thoughts? 
 
 Best, 
 
 Aaron
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The danger of capping python-*clients in core projects, and forbidding it in the future

2013-07-11 Thread Gareth
I heard there was a discussion about this issue in #openstack-infra last night
(China Standard Time); what was the conclusion of that?

BTW, how can I find the meeting logs of #openstack-infra? I didn't find them at
http://eavesdrop.openstack.org/


On Thu, Jul 11, 2013 at 11:35 PM, Dirk Müller d...@dmllr.de wrote:

  See for example https://bugs.launchpad.net/horizon/+bug/1196823
  This is arguably a deficiency of mox, which (apparently?) doesn't let us
 mock properties automatically.

 I agree, but it is just one example. Other test-only issues can happen as
 well.

 Similar problem: the *client packages are not self-contained; they
 have pretty strict dependencies on other packages. One case I already
 ran into was a dependency on python-requests: newer python-*client
 packages (rightfully) require requests >= 1.x. Running those on a
 system that has OpenStack services from Grizzly or Folsom installed
 causes a conflict: there are one or two that require requests to be
 < 1.0.

 When you run gating on this scenario, I think the same flipping would
 happen on e.g. requests as well, due to *client or the module being
 installed in varying order.

 Greetings,
 Dirk

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Gareth

*Cloud Computing, OpenStack, Fitness, Basketball*
*OpenStack contributor*
*Company: UnitedStack http://www.ustack.com*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The danger of capping python-*clients in core projects, and forbidding it in the future

2013-07-11 Thread Monty Taylor


On 07/11/2013 11:38 PM, Gareth wrote:
 I heard there's a talk about this issue in #openstack-infra last night
 (china standard time), what's the conclusion of that?
 
 BTW, how to find meeting log of #openstack-infra? I didn't find it
 in http://eavesdrop.openstack.org/

We don't log it currently. There is a wider conversation going on about
which things we should log and which things we should not log ... but
for the time being I've submitted this:

https://review.openstack.org/36773

to add -infra. I think we talk about enough things that have
ramifications on everyone in there that we should really capture it.
 On Thu, Jul 11, 2013 at 11:35 PM, Dirk Müller d...@dmllr.de
 mailto:d...@dmllr.de wrote:
 
  See for example https://bugs.launchpad.net/horizon/+bug/1196823
  This is arguably a deficiency of mox, which (apparently?) doesn't
 let us mock properties automatically.
 
 I agree, but it is just one example. other test-only issues can
 happen as well.
 
 Similar problem: the *client packages are not self-contained, they
 have pretty strict dependencies on other packages. One case I already
 run into was a dependency on python-requests: newer python-*client
 packages (rightfully) require requests = 1.x. running those on a
 system that has OpenStack services from Grizzly or Folsom installed
 cause a conflict: there are one or two that require requests to be 
 1.0.
 
 When you run gating on this scenario, I think the same flipping would
 happen on e.g. requests as well, due to *client or the module being
 installed in varying order.
 
 Greetings,
 Dirk
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Gareth
 
 /Cloud Computing, OpenStack, Fitness, Basketball/
 /OpenStack contributor/
 /Company: UnitedStack http://www.ustack.com/
 /My promise: if you find any spelling or grammar mistakes in my email
 from Mar 1 2013, notify me /
 /and I'll donate $1 or ¥1 to an open organization you specify./
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The danger of capping python-*clients in core projects, and forbidding it in the future

2013-07-11 Thread Gareth
Thanks, Monty

but in my review https://review.openstack.org/#/c/36684/ , Doug said we
will go without an upper bound on those python-*clients,
and in this one https://review.openstack.org/#/c/36753/ , keystoneclient
still keeps '0.4' and the requirements test doesn't fail on keystoneclient
(https://jenkins.openstack.org/job/gate-cinder-requirements/96/console shows it
failed on glanceclient)
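For anyone following along, the difference being argued about comes down to
whether the requirements line carries an upper bound; a made-up example:

    # capped: a new client release above the cap can make co-installed
    # projects' requirements mutually unsatisfiable
    python-keystoneclient>=0.2.0,<0.4

    # uncapped (the "no upper bound" direction Doug describes): only a floor
    python-keystoneclient>=0.2.0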




On Fri, Jul 12, 2013 at 11:54 AM, Monty Taylor mord...@inaugust.com wrote:



 On 07/11/2013 11:38 PM, Gareth wrote:
  I heard there's a talk about this issue in #openstack-infra last night
  (china standard time), what's the conclusion of that?
 
  BTW, how to find meeting log of #openstack-infra? I didn't find it
  in http://eavesdrop.openstack.org/

 We don't log it currently. There is a wider conversation going on about
 which things we should log and which things we should not log ... but
 for the time being I've submitted this:

 https://review.openstack.org/36773

 to add -infra. I think we talk about enough things that have
 ramifications on everyone in there that we should really capture it.
  On Thu, Jul 11, 2013 at 11:35 PM, Dirk Müller d...@dmllr.de
  mailto:d...@dmllr.de wrote:
 
   See for example https://bugs.launchpad.net/horizon/+bug/1196823
   This is arguably a deficiency of mox, which (apparently?) doesn't
  let us mock properties automatically.
 
  I agree, but it is just one example. other test-only issues can
  happen as well.
 
  Similar problem: the *client packages are not self-contained, they
  have pretty strict dependencies on other packages. One case I already
  run into was a dependency on python-requests: newer python-*client
  packages (rightfully) require requests = 1.x. running those on a
  system that has OpenStack services from Grizzly or Folsom installed
  cause a conflict: there are one or two that require requests to be 
  1.0.
 
  When you run gating on this scenario, I think the same flipping would
  happen on e.g. requests as well, due to *client or the module being
  installed in varying order.
 
  Greetings,
  Dirk
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  mailto:OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Gareth
 
  /Cloud Computing, OpenStack, Fitness, Basketball/
  /OpenStack contributor/
  /Company: UnitedStack http://www.ustack.com/
  /My promise: if you find any spelling or grammar mistakes in my email
  from Mar 1 2013, notify me /
  /and I'll donate $1 or ¥1 to an open organization you specify./
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Gareth

*Cloud Computing, OpenStack, Fitness, Basketball*
*OpenStack contributor*
*Company: UnitedStack http://www.ustack.com*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The danger of capping python-*clients in core projects, and forbidding it in the future

2013-07-11 Thread Gareth
So, what's the final conclusion on this issue?


On Fri, Jul 12, 2013 at 12:11 PM, Gareth academicgar...@gmail.com wrote:

 Thanks, Monty

 but in my review https://review.openstack.org/#/c/36684/ , Doug said we
 will go without upper bound with those python-*clients
 and in this one https://review.openstack.org/#/c/36753/ , keystoneclient
 still keep '0.4' and requirements test doesn't fail in keystoneclient (
 https://jenkins.openstack.org/job/gate-cinder-requirements/96/console it
 failed on glanceclient)




 On Fri, Jul 12, 2013 at 11:54 AM, Monty Taylor mord...@inaugust.comwrote:



 On 07/11/2013 11:38 PM, Gareth wrote:
  I heard there's a talk about this issue in #openstack-infra last night
  (china standard time), what's the conclusion of that?
 
  BTW, how to find meeting log of #openstack-infra? I didn't find it
  in http://eavesdrop.openstack.org/

 We don't log it currently. There is a wider conversation going on about
 which things we should log and which things we should not log ... but
 for the time being I've submitted this:

 https://review.openstack.org/36773

 to add -infra. I think we talk about enough things that have
 ramifications on everyone in there that we should really capture it.
  On Thu, Jul 11, 2013 at 11:35 PM, Dirk Müller d...@dmllr.de
  mailto:d...@dmllr.de wrote:
 
   See for example https://bugs.launchpad.net/horizon/+bug/1196823
   This is arguably a deficiency of mox, which (apparently?) doesn't
  let us mock properties automatically.
 
  I agree, but it is just one example. other test-only issues can
  happen as well.
 
  Similar problem: the *client packages are not self-contained, they
  have pretty strict dependencies on other packages. One case I
 already
  run into was a dependency on python-requests: newer python-*client
  packages (rightfully) require requests = 1.x. running those on a
  system that has OpenStack services from Grizzly or Folsom installed
  cause a conflict: there are one or two that require requests to be 
  1.0.
 
  When you run gating on this scenario, I think the same flipping
 would
  happen on e.g. requests as well, due to *client or the module being
  installed in varying order.
 
  Greetings,
  Dirk
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  mailto:OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Gareth
 
  /Cloud Computing, OpenStack, Fitness, Basketball/
  /OpenStack contributor/
  /Company: UnitedStack http://www.ustack.com/
  /My promise: if you find any spelling or grammar mistakes in my email
  from Mar 1 2013, notify me /
  /and I'll donate $1 or ¥1 to an open organization you specify./
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Gareth

 *Cloud Computing, OpenStack, Fitness, Basketball*
 *OpenStack contributor*
 *Company: UnitedStack http://www.ustack.com*
 *My promise: if you find any spelling or grammar mistakes in my email
 from Mar 1 2013, notify me *
 *and I'll donate $1 or ¥1 to an open organization you specify.*




-- 
Gareth

*Cloud Computing, OpenStack, Fitness, Basketball*
*OpenStack contributor*
*Company: UnitedStack http://www.ustack.com*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Improve inject network configuration

2013-07-11 Thread Brian Lamar



Russell Bryant wrote:

On 07/11/2013 08:53 PM, Jae Sang Lee wrote:

Hi, stackers.

When creating vm using multi nics, User should power up the second
interface on the instance manually for use second IP.
http://docs.openstack.org/trunk/openstack-compute/admin/content/using-multi-nics.html


I intend to fix interfaces.template file, so It can be possible power up
other interface during booting time automatically.

I registered blueprint for this a month
ago.(https://blueprints.launchpad.net/nova/+spec/better-network-injection)

But not yet approved.

If you have permission to approve who read this mail, please approve my
blueprint.


Honestly, I think network injection is evil and I'd rather remove it
completely. I'm certainly not too interested in trying to add more
features to it.



Can you elaborate on this a little more? Do you not like file injection 
or dynamic network allocation?


Can you provide alternative strategies that could be applied to solve 
the issue of dynamically bringing up interfaces, or do you think this is 
out of the project's scope (controlling the internals of VMs)?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Revert Pass instance host-id to Quantum using port bindings extension.

2013-07-11 Thread Sumit Naiksatam
I agree with Andre and Kyle here. I am not sure that the polling
option is even going to work for certain use cases where the host_id
information is required when creating the port (for instance, to
decide the VIF type).

Thanks,
~Sumit.

On Thu, Jul 11, 2013 at 7:27 PM, Kyle Mestery (kmestery)
kmest...@cisco.com wrote:
 I agree with Andre's concerns around the implications of polling in what 
 Aaron is proposing, and in fact, this is one reason the existing change is so 
 nice. The ML2 sub-team talked about this at a recent meeting, and we liked 
 the approach which Yong had taken with the patch. But as Andre says, we're 
 all for working to simplify things, keeping the goals he mentioned in mind.

 What are your thoughts Aaron?

 Thanks,
 Kyle

 On Jul 11, 2013, at 8:44 PM, Andre Pech ap...@aristanetworks.com wrote:

 Hey Aaron,

 As an interested party in the change, figured I'd take a stab at responding. 
 I've talked with people at BigSwitch and Cisco about this change, so I know 
 others are interested in this as well, but I'll let them give their 
 perspective.

 At a high level, our goal at Arista is similar to what you mention. We want 
 to integrate the provisioning of the physical network within Neutron in 
 conjunction with the virtual network. We have no interest in controlling the 
 virtual switch layer, and so we'd like to do this in a way that does not tie 
 us to any particular virtual switching technology (should work just as well 
 with OVS, LinuxBridge, or whatever future virtual switch technology a 
 customer may choose to use). And it needs the chance to be inline - the 
 provisioning of the physical network has to happen alongside the virtual 
 network, so that failures to provision the physical network can be 
 propogated to the user in the same way as a failure to set up the virtual 
 network.

 The thing I like most about the current solution is that it's event-driven. 
 There's no polling of the information out of band from nova (I'm not sure 
 how accepted it would be to poll this info directly from neutron, which 
 would then force you to do it from an outside system). It also doesn't 
 require any coordination with agents running on the compute side (in line 
 with the goal of working across multiple virtual switching technologies).

 I'd be really happy with another solution, but I'd be great to see those 
 properties preserved. I have reservations about the alternatives you're 
 proposing. Happy to hop on a call with other interested parties to come up 
 with a better middleground that allows you to do the simplification you're 
 proposing while still giving Neutron an explicit hook to learn about the 
 host a VM was placed on.

 Andre


 On Thu, Jul 11, 2013 at 1:30 PM, Aaron Rosen aro...@nicira.com wrote:
 Hi,

 I think we should revert this patch that was added here 
 (https://review.openstack.org/#/c/29767/). What this patch does is when 
 nova-compute calls into quantum to create the port it passes in the hostname 
 on which the instance was booted on. The idea of the patch was that 
 providing this information would allow hardware device vendors management 
 stations to allow them to segment the network in a more precise manager (for 
 example automatically trunk the vlan on the physical switch port connected 
 to the compute node on which the vm instance was started).

 In my opinion I don't think this is the right approach. There are several 
 other ways to get this information of where a specific port lives. For 
 example, in the OVS plugin case the agent running on the nova-compute node 
 can update the port in quantum to provide this information. Alternatively, 
 quantum could query nova using the port.device_id to determine which server 
 the instance is on.

 My motivation for removing this code is I now have the free cycles to work 
 on https://blueprints.launchpad.net/nova/+spec/nova-api-quantum-create-port  
 discussed here 
 (http://lists.openstack.org/pipermail/openstack-dev/2013-May/009088.html)  . 
 This was about moving the quantum port creation from the nova-compute host 
 to nova-api if a network-uuid is passed in. This will allow us to remove all 
 the quantum logic from the nova-compute nodes and simplify orchestration.

 Thoughts?

 Best,

 Aaron

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org